Facial recognition still makes really bad mistakes, and those are always bad optics for any business that uses it. I'm amazed anyone is still willing to use it in its current form.
It's been the norm that these systems can't tell the difference between people with darker skin tones, if they even acknowledge they're seeing a person at all.
Running a system with a decade-long history of racist-looking mistakes is bonkers in the current climate.
Facial recognition uses a few key elements of the face to home in on matches, and traditional makeup doesn't obscure any of those areas. To fool facial recognition, the goal is often to avoid face detection in the first place. Asymmetry, large contrasting colors, obscuring one (or both) eyes, hiding the oval head shape and jawline, and rhinestones (which sparkle and reflect light nearly randomly, making video footage more confusing) all seem to work well. But as neural nets improve, they get harder to fool, so what works for one system may not work for every system.
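The idea above can be illustrated with a toy model. This is not a real detector, just a sketch of how many classic face detectors behave: a candidate region is scored by how many expected facial landmarks can be located, and obscuring enough of them drops the score below the detection threshold. The region names and the threshold are purely illustrative.

```python
# Toy sketch (NOT a real detector): score a face candidate by how many
# expected key regions are still visible. CV-Dazzle-style makeup works by
# hiding enough of these regions to push the score below the threshold.

KEY_REGIONS = {"nose_bridge", "brow_line", "jawline", "left_eye",
               "right_eye", "cheekbones", "head_oval"}

def detection_score(visible_regions):
    """Fraction of key regions the detector can still locate."""
    return len(KEY_REGIONS & visible_regions) / len(KEY_REGIONS)

def is_face(visible_regions, threshold=0.6):
    return detection_score(visible_regions) >= threshold

# A plain face: everything visible -> detected.
print(is_face(KEY_REGIONS))  # True

# Dazzle makeup hides the nose bridge, brow line, one eye, and breaks up
# the head oval -> only 3 of 7 regions remain, below the 0.6 threshold.
dazzled = KEY_REGIONS - {"nose_bridge", "brow_line", "left_eye", "head_oval"}
print(is_face(dazzled))  # False
```

Real detectors use learned features rather than a hand-counted checklist, which is why, as noted above, a trick that defeats one system may not defeat another.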
CV Dazzle (originally inspired by dazzle camouflage used on some warships) is a makeup style that tries to fool the most popular facial recognition systems.
Note that those tend to obscure the bridge of the nose, the brow line, the jawline, etc., because those are key identification areas for facial recognition.
Yeah, if we can still recognize those as faces, it's possible for a neural net to do so as well.
But I'm talking more about differentiating faces than hiding entirely from such systems. Like makeup can be used to give the illusion that the shape of the face is different with false contour shading. You can't really change the jawline (I think... I'm not skilled in makeup myself but have an awareness of what it can do) but you can change where the cheekbones appear to be, as well as the overall shape of the cheeks, and you can adjust the nose, too (unless it's a profile angle).
I think the danger in trying to hide that you have a face entirely is that if it does get detected, there's a good chance it will be flagged for attempting to fool the system, because those examples you gave are pretty obvious once you know what's going on.
It would be like going into a bank wearing a ski mask to avoid being recognized vs. going in as Mrs. Doubtfire. Even if they're just trying to do some banking that one time, the ski mask will attract unwanted attention, while wearing a different face accomplishes the goal of avoiding attention.
The catch is that it's only really a problem for the people getting flagged. Then you're guilty until proven innocent, and the only one to blame is a soulless machine with a big button that reads "For customer support, go fuck yourself".
As security theater, it's cheap and easy to implement. As a passive income stream for tech bros, it's a blank check. As a buzzword for politicians who can pretend they're forward-thinking by rolling out some vaporware solution to a non-existent problem, it's on the tip of everyone's tongue.
For old tech, things like minority rights and all the other principles about making people comfortable apply, and reliability standards with a lot of nines have to be met.
For new stuff - "if it fails 1/100 times, then it's fine, so screw you".
See, everybody (or at least people whose voices are heard, not us dumb fucks, authentic Zuck quote btw) wants all this tech bro surveillance centralized obscure blackbox ambiguous crap so fucking badly that other things don't matter.
Boeing planes dropping outta sky? Wait till "AI" reaches nuclear energy. Or until autonomous armed police drones roam your area, as something easier to imagine. (I've just remembered that in Star Wars police drones on Coruscant are unarmed, both under Republic and under Empire. EU writers couldn't imagine our times' degree of stupidity EDIT: so I'm imagining it now.)
For new stuff - "if it fails 1/100 times, then it's fine, so screw you".
The FMR (false match rate) for these systems is 1e-6. Which translates to about 0.000001% error rate or 99.999999% accuracy. And that number was from about 3 or 4 years ago. They are much better today.
How do I know? I worked in that industry building that kind of tech and read through the 500+ page NIST report on testing results for all the various Face Recognition vendors.
LMAO. You have no idea what I built the system for, and I have no skin in the game anymore as I moved on to a completely different industry that doesn't even use AI at all.
The implication of your argument is the same as with flat earthers, who demand photographic proof of a spherical Earth, but when they're shown photos from space they simply say it's fake and NASA is in on the lie.
Sometimes you just can't get past people's preconceived biases regardless of the facts.
It doesn't really matter whether the FMR is one in a hundred or one in a million; for the uses it's being put to, it's still too high. If it were only being used as one factor for authenticating someone (i.e. the 'thing you are') but still required the other factor(s) (the 'thing you know' and the 'thing you have'), then it'd be adequate.
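The multi-factor idea above can be sketched in a few lines. This is illustrative only, not any vendor's API: the point is that a face match alone never authorizes anything, it merely counts as one factor, and at least one non-biometric factor is always required.

```python
# Minimal sketch of face recognition as ONE authentication factor.
# The function names and thresholds are illustrative assumptions.

def authenticate(face_score, pin_ok, token_ok, face_threshold=0.99):
    factors = [
        face_score >= face_threshold,  # something you are
        pin_ok,                        # something you know
        token_ok,                      # something you have
    ]
    # Require at least two factors, and never let the face stand alone.
    return sum(factors) >= 2 and (pin_ok or token_ok)

# A strong face match plus a correct PIN passes...
print(authenticate(face_score=0.999, pin_ok=True, token_ok=False))   # True
# ...but a strong face match by itself does not.
print(authenticate(face_score=0.999, pin_ok=False, token_ok=False))  # False
```

Under a policy like this, a false face match can inconvenience someone but can't, on its own, authorize a payment or trigger a ban.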
As it stands, when it's being used either to pick someone out without further scrutiny, or to make payments with no further checks, it's simply unacceptable. There are good arguments that it's not just that the error rate is unacceptable, but that the technology simply shouldn't be used in those scenarios at all because it's entirely inappropriate, but that's a separate discussion.
The truth is the numbers I cited are the 1:N stats. The 1:1 numbers are far higher, because you can immediately control for distance, lighting, angle, and gaze direction.
The system I worked with had a 1:1 match rate that was statistically perfect (though nothing is ever 100%).
the technology simply shouldn't be used in those scenarios as it's entirely inappropriate, but that's a separate discussion.
Agreed. Its use as a payment system is just ridiculous.
That was a really garbage system then. Like disgracefully bad Fisher Price quality.
The reality is that there are more crap systems than really good ones out there. And there are as many algorithms and different ways of doing it as there are companies.
The system I developed was so good, even when we tried all kinds of shenanigans to trip it up, we just couldn't do it.
Other than that, if this system of yours makes 1 error in a million scans, that's still not very good if that's treated as "virtually no errors", as in no talking to a manager, no showing ID as a fallback, and so on. Say, if it were employed in the Moscow subway, that'd mean a few unpleasant errors preventing people from getting where they need to go every day.
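The subway example is easy to put numbers on. The ridership figure below is a rough assumption for illustration, not an official statistic:

```python
# Back-of-envelope version of the subway example: even a tiny false match
# rate adds up at scale. Daily ridership is a rough assumed figure.

fmr = 1e-6              # one false match per million comparisons (as claimed)
daily_scans = 7_000_000 # assumed daily Moscow metro trips, for illustration

expected_false_matches_per_day = fmr * daily_scans
print(expected_false_matches_per_day)  # 7.0
```

So even at one-in-a-million, a system scanning every rider would wrongly flag a handful of people every single day, which is exactly why a fallback like showing ID matters.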
As you reply to someone who spent a decade in the AI industry.
This has nothing to do with overfitting. Particularly because our matching algorithm isn't trained on data.
The face detection portion is, but that's simply finding the face in an image.
The system I worked with used a threshold value that equates to an FMR of 1e-07. And it wasn't used in places like subways or city streets. The point I'm making is that in the few years of real world operation (before I left for another job) we didn't see a single false detection. In fact, one of the facility owners asked us to lower the threshold temporarily to verify the system was actually working properly.
Edit: the truth is that saying x% accuracy isn't entirely correct, because the numbers just don't work that way. It's just a way we convey the data to the average person. I can't count the number of times I've been asked, "OK, but what does that mean in terms of accuracy? What's the accuracy percentage?"
And I understand what you're saying now. Yes I did have the number written down incorrectly as a percentage. I'm on mobile this whole time doing a hundred other things. I added two extra digits.
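For the record, the conversion being corrected here is just a factor of 100:

```python
# Converting a false match rate (FMR) to a percentage: multiply by 100.
fmr = 1e-6                      # one false match per million comparisons

error_pct = fmr * 100           # 0.0001% error rate
accuracy_pct = 100 - error_pct  # 99.9999% "accuracy"
print(f"{error_pct:.4f}% error, {accuracy_pct:.4f}% accuracy")
# Writing 0.000001% instead of 0.0001% is the "two extra digits" mentioned above.
```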
That argument was only ever made by dumb fucks or evil fucks. The article reports about an actual occurrence of one of the problems of such technology that we (people who care about privacy) have warned about from the beginning.
I think IR lights on glasses can blind cameras, and there are also infrared-blocking glasses that also reflect IR light from cameras back at them. So yes, adding lights or reflectors can be effective.
But we shouldn't even have to consider these workarounds to maintain our privacy and security. And even if we start wearing these glasses or fooling the systems, governments will outlaw the use of such circumventing tech.
Our reaction to this needs to be "we will not allow this tech to be used against us, period." Ban it for law enforcement, ban it for commerce, ban it in airports and schools.
Very neat, I wonder how effective they are at confusing facial recognition and 3D facial scanning systems? Not that we often encounter the 3D scanning, but an interesting aspect to consider.
If the AI is flagging faces and immediately alerting employees, it is likely also going to throw up a flag for abnormal interference like that. Or if it doesn't do that now, it's a feature that could be added if such hats become common enough.
Last time I ordered big boxes online, they just shipped me empty boxes. I don’t know how they screwed that up, but then I’ve always gone to big box stores so I can actually see the big boxes I’m buying.
Allowing bad character to reign, generally speaking. Sociopaths are like seed crystals; it's the rest of us adopting their way of life who do most of the evil, even if they're the ones providing the purest examples.
We need to be good people, not just nice people, in order to improve things.
In 2016, three years after the events of Chicago, San Francisco becomes the first city to install the next generation of ctOS – a computing network connecting every device together into a single system, developed by technology company Blume. Hacker Marcus Holloway, punished for a crime he did not commit through ctOS 2.0 …
Idk, but as someone who has a fair skin tone and thus likely wouldn't trigger a false positive, I am prohibited from entering the store based purely on the fact that they use face recognition software. Screw that.
I mean wouldn't the prohibited person just have to wear some mask to avoid triggering? And if this is prohibited - wouldn't that be problematic when it comes to hijabs?
I'm guessing they use eye tracking, so they'd probably need something to block IR facial recognition. Another user mentioned Reflecticles, which should work but are a bit pricey.
This raises questions about how 'good' this technology is.
No it doesn't. For every 10 million good detections you only hear about the 1 or 2 false detections. The issue here is the policies around detections and how to verify them. Some places are still taking a blind-faith approach to the detections.
But that really says more about the user than the tech. The issue here isn't that the tech has too many errors; it's that stores use it, and it alone, to ban people despite it having a low but well-known error rate.
stores use it and it alone to ban people despite it having a low but well known error rate.
And it is absolutely predictable that some stores would do that, because humans. At very least, companies deploying this technology need to make certain that all the store staff are properly trained on what it does and doesn't mean, including new hires who arrive after the system is put in. Forcing that is going to require that a law be passed.
Considering the impact of these faults, it is obviously not good enough.
I was throwing a rough number out there, but the true error rate is lower than what I said. And even with those numbers, this tech is statistically safer than driving a car.
The other half of the equation is policy management. Every single one of these systems should operate with human-in-the-loop. Meaning after a detection is made, it goes over to a person to make a "real" determination for accuracy.
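The human-in-the-loop policy described above can be sketched as a review queue. All class and function names here are illustrative assumptions, not any deployed system's design:

```python
# Sketch of a human-in-the-loop detection policy: a detection is never
# acted on directly -- it is queued for a person to confirm or reject.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Detection:
    person_id: str
    score: float

@dataclass
class ReviewQueue:
    pending: deque = field(default_factory=deque)

    def flag(self, det: Detection, threshold: float = 0.999):
        # Below-threshold detections are discarded; above-threshold ones
        # are only *queued* -- no automatic ban, alert, or payment.
        if det.score >= threshold:
            self.pending.append(det)

    def review(self, human_confirms) -> list:
        # Only detections a human confirms ever take effect.
        confirmed = []
        while self.pending:
            det = self.pending.popleft()
            if human_confirms(det):  # the "real" determination
                confirmed.append(det)
        return confirmed

q = ReviewQueue()
q.flag(Detection("A", 0.9999))  # queued for human review
q.flag(Detection("B", 0.42))    # dropped: below threshold
print(q.review(lambda det: det.person_id == "A"))
```

The design point is that the machine only narrows the search; the accountable decision stays with a person.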
Good as in ethical, not in capability. Facial recognition (and similar mass surveillance tech) is simply a tool for authoritarianism and should be banned for general usage. There's literally no good reason why this should be widely used at all.
Still, I think the only way that would result in change is if the hack specifically went after someone powerful like the mayor or one of the richest business owners in town.