This new way of training AI can prevent online bullying
For about six months, Nina Nørgaard met for seven hours each week with seven people to discuss the sexist and violent language used to target women on social media. Nørgaard, a doctoral student at the IT University of Copenhagen, and her discussion group were part of an unusual effort to better identify misogyny online. The researchers paid the seven to examine thousands of posts on Facebook, Reddit, and Twitter and decide whether they showed sexism, stereotyping, or harassment. Once a week, the researchers gathered the group, with Nørgaard as mediator, to discuss the tough calls on which they disagreed.
Misogyny is a scourge that shapes how women are portrayed and treated online. In a 2020 Plan International survey, one of the largest ever conducted, more than half of women in 22 countries said they had been harassed or abused online. One in five women who experienced abuse said they changed their behavior as a result, cutting back on or stopping their use of the internet.
Social media companies use artificial intelligence to identify and remove posts that demean, harass, or threaten violence against women, but it is a difficult problem. Among researchers, there is no standard for identifying sexist or misogynistic posts; one recent paper proposed four categories of troublesome content, while another identified 23. Most research is conducted in English, leaving people who work in other languages and cultures with less guidance for difficult and often subjective decisions.
So the Danish researchers tried a new approach, hiring Nørgaard and the seven labelers full-time to review and label posts, instead of relying on part-time contractors who are often paid per post. The group was deliberately composed of people of different ages and nationalities, with varied political views, to reduce the chance that the labels would reflect a single worldview. The labelers included a software designer, a climate activist, an actor, and a health care worker. Nørgaard's task was to bring them to consensus.
"It's a great thing that they don't agree. We don't want tunnel vision. We don't want everyone to think the same way," says Nørgaard. Her goal was to have them "discuss among themselves or within the group."
Nørgaard saw her job as helping the labelers "find the answers themselves." Over time, she got to know each of the seven as individuals and learned, for example, who spoke more than the others. She tried to make sure that no one dominated the conversation, because it was meant to be a discussion, not a debate.
The toughest calls involved posts with irony, jokes, or sarcasm; they became major topics of conversation. Over time, though, "the meetings became shorter and people discussed less, so I saw that as a good thing," says Nørgaard.
The researchers behind the project call it a success. They say the discussions led to more accurately labeled data for training an AI algorithm. The researchers say AI trained on the data set can recognize misogyny on popular social media platforms, as they defined it, 85 percent of the time. A year earlier, a state-of-the-art misogyny detection algorithm was accurate about 75 percent of the time. In all, the group reviewed nearly 30,000 posts, of which 7,500 were deemed abusive.
The posts were written in Danish, but the researchers say their approach can be applied to any language. "I think if you're going to annotate misogyny, you have to follow an approach that has most of the elements of ours. Otherwise, you're risking low-quality data, and that undermines everything," says Leon Derczynski, a coauthor of the study and an associate professor at the IT University of Copenhagen.
The findings could be useful beyond social media. Companies have begun using AI to screen public-facing text such as job listings or press releases for sexism. And if women exclude themselves from online conversations to avoid harassment, democratic processes suffer.
"If you turn a blind eye to threats and aggression against half of the population, then you won't have democratic online spaces that are as good as they could be," Derczynski says.