
Twitter’s Photo-Cropping Algorithm Favors Young, Slim Women


In May, Twitter said it would stop using an artificial intelligence algorithm to automatically crop images after the algorithm was found to favor white and female faces.

Now, an unusual competition to probe that AI program for misbehavior has found that the same algorithm, which identifies the most important areas of an image, also discriminates by age and weight and favors text in English and other Western languages.

The top entry, from Bogdan Kulynych, a graduate student in computer security at EPFL in Switzerland, shows how Twitter’s image-cropping algorithm favors thinner and younger faces. Kulynych used a deepfake technique to automatically generate faces with different features, then tested the cropping algorithm to see how it responded.
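Kulynych’s actual entry paired a generative face model with the saliency model Twitter released for the contest; neither is reproduced here. The Python sketch below only illustrates the evaluation idea under that assumption: generate pairs of faces that differ along one attribute, score each with the cropping model’s saliency output, and count how often the edited version wins. The `saliency_score` stand-in and the random image pairs are placeholders for illustration, not Twitter’s code.

```python
import numpy as np


def saliency_score(image: np.ndarray) -> float:
    """Stand-in for the cropping model's saliency output.

    The real contest entries scored images with Twitter's released saliency
    model; this placeholder just measures local contrast so the sketch runs
    end to end.
    """
    gray = image.mean(axis=-1)
    return float(np.abs(np.diff(gray, axis=0)).mean() +
                 np.abs(np.diff(gray, axis=1)).mean())


def bias_rate(image_pairs: list) -> float:
    """Fraction of pairs where the edited face out-scores the original.

    `image_pairs` is a list of (original, edited) arrays, where the edited
    version differs along a single attribute (e.g. made to look younger or
    thinner). A rate well above 0.5 suggests the model systematically favors
    that kind of edit.
    """
    wins = sum(saliency_score(edited) > saliency_score(original)
               for original, edited in image_pairs)
    return wins / len(image_pairs)


# Toy usage: random images stand in for generated face pairs.
rng = np.random.default_rng(0)
pairs = [(rng.random((64, 64, 3)), rng.random((64, 64, 3))) for _ in range(100)]
print(f"edited face preferred in {bias_rate(pairs):.0%} of pairs")
```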

“Basically, the thinner, younger, and more feminine the image, the more it will be favored,” says Patrick Hall, chief scientist at BNH, an AI consulting company. He was one of four judges for the contest.

A second judge, Ariel Herbert-Voss, a security researcher at OpenAI, says the biases found by the participants reflect the biases of the humans who contributed the data used to train the model, but adds that the entries show how a thorough analysis of an algorithm can help product teams root out problems with their AI models. “It’s a lot easier to fix if somebody is like, ‘Hey, this is bad.’”

The “algorithmic bias bounty” contest, held last week at Defcon, a computer security conference in Las Vegas, suggests that letting outside researchers probe algorithms for misbehavior could help companies fix problems before they cause real harm.

Just as some companies, Twitter included, encourage experts to hunt for security flaws in their code by offering rewards for specific exploits, some AI experts believe companies should give outsiders access to the algorithms and data they use in order to pinpoint problems.

“It’s incredibly gratifying to see that idea explored, and I’m sure we’ll see more of it,” says Amit Elazari, director of global cybersecurity policy at Intel and a professor at UC Berkeley, who has proposed a bug-bounty approach to rooting out AI bias. Elazari says the search for bias in AI “could have the benefit of empowering the public.”

In September, a Canadian student drew attention to the way Twitter’s algorithm was cropping photos. The algorithm was designed to zero in on faces as well as other areas of interest such as text, animals, or objects, but it often favored white faces and women in images showing several people. The Twittersphere soon found other examples of crops exhibiting racial and gender bias.
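Twitter has described the cropping pipeline as predicting a saliency map over the image and cutting the frame around its most salient region; the released model itself is not shown here. The following is a generic sketch of that idea, assuming a per-pixel saliency map is already available: center a fixed-size crop on the map’s peak and clamp it to the image bounds.

```python
import numpy as np


def crop_around_peak(image: np.ndarray, saliency: np.ndarray,
                     crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a (crop_h, crop_w) window centered on the saliency peak.

    `saliency` is a per-pixel importance map with the same height and width
    as `image`; the window is shifted so it stays inside the image. Assumes
    the crop is no larger than the image.
    """
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(peak_y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(peak_x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]


# Toy usage with a random image and a random saliency map.
rng = np.random.default_rng(1)
img = rng.random((400, 600, 3))
sal = rng.random((400, 600))
print(crop_around_peak(img, sal, 200, 200).shape)  # (200, 200, 3)
```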

For last week’s bounty competition, Twitter made the code for its image-cropping algorithm available to participants and offered prizes to teams that demonstrated evidence of other harmful behavior.

Others found additional biases. One entry showed that the algorithm is biased against people with white hair. Another revealed that it favors Latin text over Arabic script, giving it a Western-centric bias.

Hall of BNH says he believes other companies will follow Twitter’s approach. “I think there’s some hope,” he says, “because of future regulations, and because the number of AI bias incidents is increasing.”

In recent years, much of the uproar over AI has been driven by examples of how easily algorithms can encode biases. Commercial facial recognition algorithms have been shown to discriminate by race and gender, image-processing code has been found to exhibit sexist ideas, and a program that judges a person’s likelihood of reoffending has been shown to be biased against Black defendants.

The problem is proving difficult to root out. Identifying fairness is not straightforward, and some algorithms, such as those used to analyze medical X-rays, may internalize racial biases in ways that humans cannot easily detect.

“When you think about finding bias in our models or our systems, one of the biggest problems that every company and organization faces is how to scale this,” says Rumman Chowdhury, director of the ML Ethics, Transparency, and Accountability group at Twitter.


