The developers should be looking at jail time, as they falsely accused someone of committing a crime. This should be treated exactly as if I had SWATed someone.
I get your point, but I totally disagree that this is the same as SWATing. People can die from that. While this is bad, she was excluded from stores, not murdered.
You lack imagination. What happens when the system mistakenly identifies someone as a violent offender and they get tackled by a bunch of cops, likely resulting in bodily injury?
That's not very reassuring, we're still only one computer bug away from that situation.
Presumably she wasn't identified as a violent criminal because the facial recognition system didn't associate her duplicate with that particular crime. But the system is capable of associating any set of crimes with a face; it's not like you get a whole new face for each possible crime. So we're still one computer bug away from seeing that outcome.
I mean, the article points out that the lady in the headline isn't the only one who has been affected: a dude was falsely detained by cops after they parked a facial recognition van on a street corner and grabbed anyone who was flagged.
No, it wouldn't be. The base circumstance is the same: the software misidentifying a subject. The severity and context will vary from incident to incident, but the root cause is the same: false positives.
There's no process in place to prevent something like this from going very, very badly. It's random chance that this time it was just a false positive for theft. Until there's some legislative obligation (such as legal liability) forcing the company to create procedures and processes for identifying and reviewing false positives, it's only a matter of time before someone gets hurt.
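To make the scale of the false-positive problem concrete, here's a back-of-the-envelope sketch. All numbers are hypothetical assumptions for illustration, not figures from the article:

```python
# Hypothetical base-rate arithmetic: even a seemingly accurate system
# flags many innocent people when it scans a large crowd of
# mostly-innocent faces every day.
daily_visitors = 10_000       # assumed store foot traffic per day
false_positive_rate = 0.01    # assumed 1% false-positive rate

# Expected number of innocent people wrongly flagged each day.
false_flags = daily_visitors * false_positive_rate
print(f"Innocent people flagged per day: {false_flags:.0f}")  # prints 100
```

Without a mandated review process, every one of those flags is handled however the store feels like handling it that day.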
You don't wait for someone to die before you start implementing safety nets. Or rather, you shouldn't.
In the UK, at least, a SWATing would be many, many times more deadly and violent than a normal police interaction. You can't make the same argument for the USA or Russia, though.
I have no idea where Facewatch got their software from. The developers could have been told their software would be used to find missing kids. It's not really fair to blame the developers; blame the people at the top.
It's impossible to have a 0% false positive rate; the system will never be "ready," and innocent people will always be affected. The only way to get a 0% false positive rate is with the following algorithm:
Interesting. This is certainly not the first time there have been markdown parsing inconsistencies between clients on Lemmy, the most obvious example being subscript and ^superscript^, especially when multiple words ^get used^ or you use ^reddit ^style ^(superscript text).
But yeah, checking just now on Jerboa, you're right: it does display correctly the way you did it. I first saw it on the web in lemmy-ui, which doesn't display it properly unless you use the triple backticks.
Last time I ordered big boxes online, they just shipped me empty boxes. I don’t know how they screwed that up, but then I’ve always gone to big box stores so I can actually see the big boxes I’m buying.
Allowing bad character to reign, generally speaking. Sociopaths are like seed crystals: it's the rest of us implementing their way of life who do most of the evil, even if they're the ones providing the purest examples.
We need to be good people, not just nice people, in order to improve things.
Facial recognition still makes really bad mistakes, and those are always bad optics for the business using it. I'm amazed anyone is still willing to use it in its current form.
It's been the norm that these systems can't tell the difference between people with darker skin tones, if they even acknowledge they're seeing a person at all.
Running a system with a decade-long history of racist-looking mistakes is bonkers in the current climate.
Facial recognition uses a few key elements of the face to home in on matches, and traditional makeup doesn't obscure any of those areas. To fool facial recognition, the goal is often to avoid face detection in the first place: asymmetry, large contrasting colors, obscuring one (or both) eyes, hiding the oval head shape and jawline, and rhinestones (which sparkle and reflect light nearly randomly, making videos more confusing) all seem to work well. But as neural nets improve, they also get harder to fool, so what works for one system may not work for every system.
CV Dazzle (originally inspired by dazzle camouflage used on some warships) is a makeup style that tries to fool the most popular facial recognition systems.
Note that those tend to obscure the bridge of the nose, the brow line, the jawline, etc., because those are key identification areas for facial recognition.
Yeah, if we can still recognize those as faces, it's possible for a neural net to do so as well.
But I'm talking more about differentiating faces than hiding from such systems entirely. For example, makeup can give the illusion that the shape of the face is different through false contour shading. You can't really change the jawline (I think; I'm not skilled in makeup myself, but I have an awareness of what it can do), but you can change where the cheekbones appear to be, as well as the overall shape of the cheeks, and you can adjust the nose too (unless it's a profile angle).
I think the danger in trying to hide that you have a face at all is that if it does get detected, there's a good chance it will be flagged for attempting to fool the system, because the examples you gave are pretty obvious once you know what's going on.
It would be like going into a bank in a ski mask to avoid being recognized versus going in as Mrs. Doubtfire. Even if you're just there to do some banking that one time, the ski mask will attract unwanted attention, while wearing a different face accomplishes the goal of avoiding attention.
The catch is that it's only really a problem for the people getting flagged. Then you're guilty until proven innocent, and the only party to blame is a soulless machine with a big button that reads "For customer support, go fuck yourself".
As security theater, it's cheap and easy to implement. As a passive income stream for tech bros, it's a blank check. As a buzzword for politicians who can pretend they're forward-thinking by rolling out some vaporware solution to a non-existent problem, it's on the tip of everyone's tongue.
IF you want to create accountability among such arrogant tech executives,
THEN you need to enforce accountability, and one excellent way of doing it would be to immediately and permanently ban the CEO of that company from any in-country right to purchase anything.
Force THEM to be subject to the abuse they enforce on "their inferiors", and .. oh, suddenly their motivations appear from "nowhere"??
Enforced-accountability against executives & oligarchs needs to be automatic, not "politically impossible, because they're the privileged ones, with real rights", the way our current dogshit for-profit manufactured "culture" insists.
I think IR lights on glasses can blind cameras, and there are also glasses that reflect a camera's infrared light back at it. So yes, adding lights or reflectors can be effective.
But we shouldn't even have to consider these workarounds to maintain our privacy and security. And even if we start wearing these glasses or fooling the systems, governments will outlaw the use of such circumventing tech.
Our reaction to this needs to be "we will not allow this tech to be used against us, period." Ban it for law enforcement, ban it for commerce, ban it in airports and schools.
Very neat. I wonder how effective they are at confusing facial recognition and 3D facial scanning systems. Not that we often encounter 3D scanning, but it's an interesting aspect to consider.
If the AI is flagging faces and immediately alerting employees, it's likely also going to throw up a flag for abnormal interference like that. Or if it doesn't do that now, it's a feature that could be added if such hats become common enough.
Why, because all shoplifters are black? I don't understand. She's being mistaken for another person, a real person on the system.
I used to know a smackhead that would steal things to order, I wonder if he's still alive and whether he's on this database. Never bought anything off him but I did buy him a drink occasionally. He'd had a bit of a difficult childhood.
I think it's more that all facial recognition systems have a harder time picking up faces with a darker complexion (the same reason phone cameras are bad at it, for example). This failure leads to a bunch of false flags. It's hilariously evident when you see police jurisdictions using it and arresting innocent people.
Not saying the woman is of a darker complexion, but it would explain why it might be triggering.
Idk, but as someone who has a fair skin tone and thus likely wouldn't trigger a false positive, I am prohibited from entering the store based purely on the fact that they use face recognition software. Screw that.
I mean wouldn't the prohibited person just have to wear some mask to avoid triggering? And if this is prohibited - wouldn't that be problematic when it comes to hijabs?
I'm guessing they use eye tracking, so they'd probably need something to block IR facial recognition. Another user mentioned Reflecticles, which should work but are a bit pricey.
That argument was only ever made by dumb fucks or evil fucks. The article reports about an actual occurrence of one of the problems of such technology that we (people who care about privacy) have warned about from the beginning.
In 2016, three years after the events of Chicago, San Francisco becomes the first city to install the next generation of ctOS – a computing network connecting every device together into a single system, developed by technology company Blume. Hacker Marcus Holloway, punished for a crime he did not commit through ctOS 2.0 …
Reminds me of when I went to the supermarket with some classmates. We got kicked out while waiting in line because they didn't want middle schoolers there, since we're all thieves anyway. So most of the group walked out without paying.