
How to stop AI from recognizing your face in selfies


Fawkes has already been downloaded nearly half a million times from the project website. One user has also built an online version, which is even easier for people to use (though Wenger can't vouch for third parties using the code, she warns: "You don't know what's happening to your data while that person is processing it"). There's no phone app yet, but there's nothing stopping somebody from building one, Wenger says.

Fawkes may keep a new facial recognition system from recognizing you – the next Clearview, say. But it won't sabotage existing systems that have already been trained on your unprotected images. The technology is improving all the time, though. Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams at ICLR this week, might address this issue.

Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, one that also fools pretrained commercial models. Like Fawkes, LowKey is also available online.
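The article gives no implementation details, so as a rough illustration only: adversarial perturbations of this family typically nudge each pixel a tiny, bounded step in the direction that increases a recognizer's error (a basic FGSM-style sign step). This sketch is a toy with a stand-in gradient, not LowKey's actual, stronger attack; the function name and parameters are invented for illustration.

```python
import numpy as np

def adversarial_perturb(image, gradient, epsilon=0.03):
    """FGSM-style sign step: shift each pixel by at most `epsilon`
    in the direction that most increases the recognizer's loss.
    `gradient` stands in for the model's loss gradient w.r.t. the image."""
    perturbed = image + epsilon * np.sign(gradient)
    return np.clip(perturbed, 0.0, 1.0)  # keep pixels in the valid range

# Toy usage with random data in place of a real face and a real model
rng = np.random.default_rng(0)
img = rng.random((32, 32))            # stand-in face image, values in [0, 1]
grad = rng.standard_normal((32, 32))  # stand-in loss gradient
cloaked = adversarial_perturb(img, grad)
```

The per-pixel change never exceeds `epsilon`, which is why such "cloaked" images look unchanged to a person while still misleading a model.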

Ma and her colleagues have added an even bigger twist. Their approach, which turns images into what they call unlearnable examples, effectively makes an AI ignore your selfies entirely. "I think it's great," Wenger says. "Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you."

Images scraped from the web (top) are turned into unlearnable examples (bottom) that a facial recognition system will ignore. (Credit: Daniel Ma, Sarah Monazam Erfani and colleagues)

Unlike Fawkes and its followers, unlearnable examples are not based on adversarial attacks. Instead of introducing changes to an image that force an AI to make a mistake, Ma's team adds tiny changes that trick the AI into ignoring the image during training. When presented with the image later, its evaluation of what's in it will be no better than a random guess.
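To make the contrast with adversarial attacks concrete, here is a heavily simplified sketch of the idea behind unlearnable examples. The real method optimizes error-minimizing noise against a model; this toy merely adds a tiny fixed pattern shared by all images with the same label, giving a trained model an easy shortcut to fit instead of the face itself. The function name, labels, and parameters are all invented for illustration.

```python
import numpy as np

def make_unlearnable(image, label, epsilon=0.03):
    """Toy stand-in for error-minimizing noise: add a tiny pattern
    that is identical for every image sharing `label`. A model trained
    on such data can latch onto the easy pattern and learn nothing
    useful about the person in the photo."""
    rng = np.random.default_rng(label)  # one fixed pattern per label
    pattern = np.sign(rng.standard_normal(image.shape))
    return np.clip(image + epsilon * pattern, 0.0, 1.0)

# Two different selfies of the same (hypothetical) person, label 7
rng = np.random.default_rng(1)
selfie_a = rng.random((32, 32))
selfie_b = rng.random((32, 32))
ua = make_unlearnable(selfie_a, label=7)
ub = make_unlearnable(selfie_b, label=7)
```

Where an adversarial attack fools a model at test time, this kind of noise sabotages training itself: the shortcut is learned, the face is not.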

Unlearnable examples could prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Ma and her colleagues stop an AI from training on the images in the first place, they claim this won't happen with unlearnable examples.

Wenger is resigned to an ongoing battle, however. Her team recently noticed that Microsoft Azure's facial recognition service was no longer spoofed by some of their images. "It suddenly somehow became robust to cloaked images that we had generated," she says. "We don't know what happened."

Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger's team released an update to their tool last week that works against Azure again. "It's another cat-and-mouse arms race," she says.

For Wenger, this is the story of the internet. "Companies like Clearview are capitalizing on what they perceive to be freely available data and using it to do whatever they want," she says.

Regulation might help in the long run, but that won't stop companies from exploiting loopholes. "There's always going to be a disconnect between what is legally acceptable and what people actually want," she says. "Tools like Fawkes fill that gap."

"Let's give people some power that they didn't have before," she says.
