GPT-3 Can Now Write Misinformation and Dupe Human Readers

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could be wielded as a weapon of online misinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may be no match for the best Russian meme-makers, it could amplify some forms of deception that would be especially hard to spot.

Over six months, a team at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” reads a sample tweet composed by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second sample labeled climate change “the new communism – an ideology based on a false science that cannot be questioned”.

“With a little human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a Georgetown professor involved in the study, whose work focuses on the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or a similar AI language algorithm, could prove especially effective at automatically generating short messages on social media, what the researchers call “one-to-many” misinformation.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, participants were swayed by the messages. After seeing posts opposing sanctions on China, for instance, the percentage of respondents who said they were against that policy doubled.

Mike Gruszczynski, a professor at Indiana University who studies online communications, says he wouldn’t be surprised to see AI take a bigger role in disinformation campaigns. He points out that bots have played a key role in spreading false narratives in recent years, and that AI can already be used to generate fake social media profile photos. With bots, deepfakes, and other technology, “I think the sky is really the limit,” he says.

AI researchers have built programs capable of using language in surprising ways, and GPT-3 is perhaps the most startling demonstration of all. Although machines do not understand language the way people do, AI programs can mimic that understanding simply by ingesting vast quantities of text and searching for patterns in how words and sentences fit together.
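To make that pattern-matching idea concrete, here is a minimal sketch, not taken from the Georgetown study, that uses GPT-2 (a smaller, openly released predecessor of GPT-3) through the Hugging Face transformers library; the prompt is an arbitrary example.

    # Minimal sketch of statistical text generation with GPT-2, a smaller,
    # openly released predecessor of GPT-3 (requires: pip install transformers).
    # The model does not understand the prompt; it simply extends it with a
    # continuation that was statistically likely in its training text.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Climate change is"  # arbitrary example prompt
    result = generator(prompt, max_length=40, num_return_sequences=1)
    print(result[0]["generated_text"])

Sampling the same prompt repeatedly yields different continuations each time, which is part of what makes such models suited to churning out many variations of a short message.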

OpenAI’s researchers built GPT-3 by feeding huge amounts of text scraped from web sources, including Wikipedia and Reddit, to an especially large AI algorithm designed to handle language. GPT-3 has often stunned observers with its command of language, but it can also be unpredictable, spewing out incoherent babble or offensive and hateful language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using the loquacious GPT-3 to auto-generate emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.
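For context, startups typically reached GPT-3 through OpenAI’s hosted API rather than running the model themselves; the sketch below shows roughly what such a call looked like with the OpenAI Python client of that era. The prompt and parameter values are illustrative placeholders.

    # Rough sketch of a completion request to the hosted GPT-3 API, using
    # the OpenAI Python client as it existed at the time (pip install openai).
    # The prompt and parameters here are illustrative placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # issued by OpenAI after access review

    response = openai.Completion.create(
        engine="davinci",  # the largest GPT-3 model offered at the time
        prompt="Write a short follow-up email thanking a customer:",
        max_tokens=80,
        temperature=0.7,   # higher values produce more varied text
    )
    print(response.choices[0].text)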

Getting GPT-3 to behave could be a challenge for agents of disinformation, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not try showing the articles it produced to volunteers.

But Buchanan warns that state actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethics are going to be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important problem the company hopes to mitigate. “We actively work to address safety risks associated with GPT-3,” says an OpenAI spokesperson. “We also review all production uses of GPT-3 before it goes live and have monitoring systems in place to limit and respond to misuse of our API.”

