this post was submitted on 22 Jul 2023
178 points (97.3% liked)
Technology
I won’t comment on the ethical pros and cons of this being deployed in airports, but from a systems perspective the accuracy needs to be much higher than 97%. LAX processes about 240,000 travellers a day, so a 3% error rate translates to over 7,000 travellers a day being incorrectly processed. What you want is closer to 99.9%, so the errors number in the hundreds and can reasonably be corrected with human intervention. That may sound like an easy push, but anyone experienced in training AI/ML systems knows it is still a fair bit of work: every single percentage point of accuracy is significant.
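A quick back-of-the-envelope check of those numbers (using the ~240,000 travellers/day figure above; the exact volume will vary):

```python
# Rough estimate of daily misidentifications at a LAX-sized airport.
daily_travellers = 240_000  # approximate daily passenger volume (assumed)

for accuracy in (0.97, 0.999):
    errors = daily_travellers * (1 - accuracy)
    print(f"{accuracy:.1%} accuracy -> ~{errors:,.0f} travellers misprocessed/day")

# 97.0% accuracy -> ~7,200 travellers misprocessed/day
# 99.9% accuracy -> ~240 travellers misprocessed/day
```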
I imagine the vast majority of those 3% error cases get rerouted to a human border official for handling. This is basically a sanity check, and sounds reasonable. The use of AI in the first instance shouldn't be making things worse, since AI is already superior to humans at facial recognition. I wouldn't be surprised if human border officials have a significantly higher than 3% error rate in face matching.
No, the person who was misidentified will be routed to a human TSA agent for harassment. Every single time they fly.
Will this be like the "random" checks if your complexion is olive or darker or if your name seems kind of funny?
100% — this will already be better than humans, but, as with autonomous driving, the goal should be "better than human"; otherwise vendors do just enough to hit the minimum spec, save costs, or make the sale. I would hope they run this in parallel and have the system flag anything below a confidence threshold for human scrutiny and comparison. Analysing the human decisions alongside the AI decisions would help refine the models and also give some visibility into the current accuracy of human-only checks. This training and review aspect is a lot of work.
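A minimal sketch of the flag-for-review idea described above; the threshold value, the `match_confidence` model API, and the log format are all hypothetical:

```python
# Hypothetical sketch: route low-confidence face matches to a human officer,
# and log the AI score with the final outcome so AI and human decisions can
# be compared offline and used to refine the model.

REVIEW_THRESHOLD = 0.98  # assumed cutoff; tuning this is the hard part

def process_traveller(face_image, id_photo, model, audit_log):
    score = model.match_confidence(face_image, id_photo)  # hypothetical API

    if score >= REVIEW_THRESHOLD:
        outcome = "accept"
    else:
        # Below threshold: hand off to a human officer for the final call.
        outcome = "human_review"

    # Record every decision, not just the flagged ones, so the parallel
    # human checks can be analysed against the AI's confidence scores.
    audit_log.append({"score": score, "routed_to": outcome})
    return outcome
```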
I also want to know the statistics for people wearing makeup. With a bit of makeup, I bet you could get this system to think you're whoever's photo is on your ID.
Camera-based systems are usually quite easy to fool, so this could create a serious false sense of security.
A "false sense of security" has always been TSA's mission statement, so that's on-brand AF.