Can the wisdom of crowds help solve social media's trust problem?

The study found that with a group of just eight laypeople, there was no statistically significant difference between the crowd's performance and that of an individual professional fact-checker. Once the group grew to 22 people, the crowd actually began outperforming the fact-checkers. (These numbers describe the results when the laypeople were told the source of the article; the crowd did slightly worse when it didn't know the source.) Perhaps most important, the lay crowds outperformed the fact-checkers most on the stories classified as "political," because those are the stories on which the fact-checkers themselves were most likely to disagree with one another. Political fact-checking is where this approach appears strongest.

It might seem impossible that random groups of people could outdo the work of trained fact-checkers, especially given that they saw only the headline, the first sentence, and the publication. But that's the whole idea behind the wisdom of the crowd: get enough people together acting independently, and their collective judgment will beat the experts'.

"The intuition for what's going on is that people read this and ask themselves, 'How well does this fit with everything else I know?'" Rand said. "That's where the wisdom of the crowd comes in. You don't need everyone to know what's going on. By averaging the ratings, the noise cancels out and you get a much higher-resolution signal than you do from any one person."
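Rand's point about averaging can be seen in a toy simulation: model each lay rating as the story's true accuracy plus independent noise, and watch the crowd's error shrink as the group grows. All the numbers below are illustrative assumptions, not figures from the study:

```python
import random
import statistics

random.seed(42)

TRUE_ACCURACY = 0.7   # hypothetical "true" accuracy of one story, on a 0-1 scale
NOISE_SD = 0.25       # assumed spread of an individual rater's judgment

def crowd_estimate(n_raters: int) -> float:
    """Average n independent noisy ratings of the same story."""
    ratings = [random.gauss(TRUE_ACCURACY, NOISE_SD) for _ in range(n_raters)]
    return statistics.fmean(ratings)

def mean_abs_error(n_raters: int, trials: int = 2000) -> float:
    """Average distance between the crowd's estimate and the true value."""
    return statistics.fmean(
        abs(crowd_estimate(n_raters) - TRUE_ACCURACY) for _ in range(trials)
    )

# Group sizes echoing the study: a lone rater, eight laypeople, 22 laypeople.
for n in (1, 8, 22):
    print(f"{n:>2} raters: mean error {mean_abs_error(n):.3f}")
```

Because independent errors cancel, the crowd's mean error falls roughly as 1/√n, which is the statistical reason a group of 22 can rival a single trained fact-checker.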

This is not the same thing as a Reddit-style upvote/downvote system, nor is it the model behind Wikipedia's volunteer editors. In both of those cases, small, unrepresentative subsets of users self-select to curate material, and each one can see what the others are doing. The wisdom of the crowd materializes only when groups are diverse and individuals make their judgments independently. And relying on randomly assembled, politically balanced groups, rather than on corps of volunteers, makes the researchers' approach much harder to game. (It also explains how the approach differs from Twitter's Birdwatch, a pilot program that asks users to write notes explaining why a given tweet is misleading.)

The paper's headline conclusion is straightforward: social networking platforms like Facebook and Twitter could use a crowd-based system to dramatically and cheaply scale up their fact-checking operations without sacrificing accuracy. (The laypeople in the study were paid $9 an hour, which translated to a cost of about $0.90 per story.) The researchers say a crowdsourced approach would also help increase trust in the process, since it is easy to assemble politically balanced groups, which are harder to accuse of partisan bias. (According to a 2019 Pew survey, Republicans overwhelmingly believe that fact-checkers "tend to favor one side.") Facebook has already debuted something similar, paying groups of users to "work as researchers to find information that can contradict the most obvious online hoaxes or corroborate other claims." But that effort is designed to inform the work of the platform's official fact-checking partners, not to expand it.
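As a sanity check on the figures quoted above, the implied throughput follows from simple arithmetic, if we assume the $0.90 figure is the cost of one rater judging one item (the article itself does not spell this out):

```python
# Back-of-the-envelope check on the quoted figures.
# Assumption (not stated in the article): $0.90 is the cost per item per rater.
HOURLY_WAGE = 9.00      # dollars per hour paid to each lay rater
COST_PER_ITEM = 0.90    # dollars per item, as quoted

items_per_hour = HOURLY_WAGE / COST_PER_ITEM   # items one rater covers per hour
minutes_per_item = 60 / items_per_hour         # implied time spent per item

print(f"{items_per_hour:.0f} items/hour, {minutes_per_item:.0f} minutes per item")
```

Under that assumption, each rater covers ten items an hour, or six minutes per item, which helps explain why the approach is so much cheaper than professional fact-checking.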

Scaling up fact-checking is one thing. The more interesting question is how platforms should use it. Should stories labeled false be banned? What about stories that are not objectively false but are still misleading or manipulative?

The researchers believe platforms should move away from both the true/false binary and the leave-alone/flag binary. Instead, they propose that platforms incorporate "continuous crowdsourced accuracy ratings" into their ranking algorithms. Rather than setting a single true/false cutoff and treating everything above it one way and everything below it another, platforms should factor in the crowd-assigned score proportionally when determining how prominently a given link appears in users' feeds. In other words, the less accurate the crowd judges a story to be, the further down the algorithm pushes it.
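The proposal can be sketched in a few lines. The blending rule, field names, and weights below are illustrative assumptions, not any platform's actual ranking algorithm:

```python
# Minimal sketch: fold a continuous crowd accuracy score into feed ranking
# instead of applying a true/false cutoff. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement: float      # whatever base relevance score the feed already computes
    crowd_accuracy: float  # continuous 0-1 rating averaged from lay raters

def ranked(stories, weight: float = 1.0):
    """Sort stories so lower crowd accuracy proportionally demotes a story.

    score = engagement * crowd_accuracy ** weight
    weight > 1 punishes inaccuracy harder; weight = 0 ignores the crowd entirely.
    """
    return sorted(
        stories,
        key=lambda s: s.engagement * s.crowd_accuracy ** weight,
        reverse=True,
    )

feed = [
    Story("viral but dubious", engagement=0.9, crowd_accuracy=0.3),
    Story("solid reporting", engagement=0.6, crowd_accuracy=0.95),
]
for story in ranked(feed):
    print(story.title)
```

With the crowd score blended in, the accurate story outranks the more engaging but dubious one; with `weight=0` the ordering reverts to raw engagement, reproducing today's behavior.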

