
These former journalists are using AI to catch online defamation


The insight driving CaliberAI is that the universe of potentially defamatory statements is a bounded infinity. While AI moderation may never be capable of definitively ruling on truth and falsehood, it should be able to flag the bounded set of assertions that could be harmful.

Carl Vogel, a professor of computational linguistics at Trinity College Dublin, helped CaliberAI build its model. He has a working formula for statements that are likely to be defamatory: they must implicitly or explicitly name a person or group; present a claim as fact; and use some form of taboo language or idea — theft, drunkenness, or other kinds of disreputable behavior. Feed a machine-learning algorithm a large enough sample of text and it will pick up patterns and associations among negative words based on the company they keep. That lets it make intelligent guesses about which terms, when used in relation to a specific person or group, place a piece of content in the defamation risk zone.
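To make that three-part test concrete, here is a minimal sketch in Python. The entity list, hedge phrases, and taboo terms below are invented placeholders rather than anything CaliberAI uses, and a real system would learn these associations from data instead of relying on hand-written word lists.

```python
# Hypothetical illustration of the three conditions Vogel describes:
# the statement names a person or group, asserts a claim as fact,
# and uses taboo language. All word lists here are made-up examples.

TABOO_TERMS = {"liar", "thief", "fraud", "drunk"}                    # negative or taboo language
HEDGE_MARKERS = {"i think", "allegedly", "reportedly", "may have"}   # phrases that soften a claim
KNOWN_ENTITIES = {"john", "acme corp"}                               # stand-in for named-entity detection


def could_be_defamatory(sentence: str) -> bool:
    """Return True only if all three conditions of the working formula hold."""
    text = sentence.lower()
    names_someone = any(entity in text for entity in KNOWN_ENTITIES)
    asserted_as_fact = not any(hedge in text for hedge in HEDGE_MARKERS)
    uses_taboo_language = any(term in text for term in TABOO_TERMS)
    return names_someone and asserted_as_fact and uses_taboo_language


print(could_be_defamatory("I think John is a liar"))   # False: the claim is hedged
print(could_be_defamatory("John is a liar"))           # True: all three conditions hold
```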

Naturally, there was no existing dataset of defamatory material for CaliberAI to train on, since publishers work very hard to keep such statements out of the world. So the company built its own. Conor Brady started by drawing on his long career in journalism to draft a list of defamatory statements. “We thought about all the bad things that could be said about any person, and we chopped, mixed, and recombined them until we had covered the full range of human frailty,” he says. Then a team of annotators, supervised by Alan Reid and Abby Reynolds, the company’s computational linguist and data linguist, used that seed list to build a larger one. They use the finished dataset to train the AI, assigning sentences scores that range from 0 (definitely not defamatory) to 100 (call your lawyer).
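As a hedged sketch of what training on such a dataset might look like, the snippet below fits a simple regression model to sentences scored on that 0-to-100 scale. The handful of sentences, their scores, and the TF-IDF-plus-ridge pipeline are illustrative assumptions, not CaliberAI’s actual data or architecture.

```python
# Illustrative training setup on annotator-scored sentences (0 = definitely
# not defamatory, 100 = call your lawyer). Data and model choice are
# invented for demonstration purposes only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

sentences = [
    "The council meeting ran an hour late",
    "I think John is a liar",
    "Everyone knows John is a liar",
    "John was convicted of fraud last year",
]
scores = [0, 40, 80, 95]  # hypothetical annotator scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(sentences, scores)

# Score a new sentence; higher values mean higher estimated defamation risk.
print(model.predict(["Everyone knows the mayor is a thief"]))
```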

The result, so far, is something like a spell-checker for defamation. You can play with a demo version on the company’s website, which warns that “you may notice false positives/negatives as we refine our models.” “I think John is a liar,” I wrote, and the program returned a 40 percent probability of defamation, below its risk threshold. Then I tried “Everyone knows John is a liar,” and the program returned an 80 percent probability, flagging “Everyone knows” (claim presented as fact), “John” (a specific person), and “liar” (negative language). Of course, the tool can only go so far. In real life, my legal risk would depend on whether I could prove that John really is a liar.

“We’re classifying at the language level and giving that advice back to our customers,” says Paul Watson, the company’s chief technology officer. “Then our customers have to use their many years of experience to ask, ‘Do I agree with this advice?’ I think that’s a very important facet of what we’re trying to build and do. We’re not trying to build a ground-truth engine for the universe.”

It is fair to ask whether professional journalists really need an algorithm to warn them that they might be defaming someone. “A good editor or producer, an experienced journalist, should know it when they see it,” says Sam Terilli, a professor at the University of Miami’s School of Communication and former general counsel of the Miami Herald. “They should at least be able to identify the statements or passages that could be risky and deserve a deeper look.”

That ideal may not always be attainable, however, especially in an era of thin budgets and intense pressure to publish as quickly as possible.

“I think it’s a very interesting use case with news organizations,” says Amy Kristin Sanders, a media lawyer and professor of journalism at the University of Texas. She noted the particular risks involved in covering breaking news, when a story may not go through the full editing process. “For small and medium-sized publishers that don’t have general counsel on call every day, that may rely heavily on freelancers, and that have fewer staff, so content gets less editorial review than it did in the past, I think there could be value in these kinds of tools.”
