
Facebook is quietly making a big admission


In February, Facebook announced a small experiment. It would reduce the amount of political content shown to a subset of users in a few countries, including the US, and then ask them about the experience. “Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting each person’s appetite for it at the top of their News Feed,” explained Aastha Gupta, a product management director, in a blog post.

On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users appreciate seeing political content less often in their feeds. Now Facebook plans to repeat the experiment in more countries and is teasing “further expansions in the coming months.” Depoliticizing people’s feeds makes sense for a company that is perpetually in hot water over its alleged impact on politics. The move was first announced just over a month after Donald Trump’s supporters attacked the Capitol, an event that some people, including elected officials, sought to blame on Facebook. The change could have major consequences for political groups and media organizations that have grown accustomed to relying on Facebook for distribution.

The most significant part of Facebook’s announcement, however, has nothing to do with politics.

The basic premise of AI-driven social media feeds (think Facebook, but also Instagram, Twitter, TikTok, YouTube) is that you don’t have to tell them what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more stuff like that.
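To make that loop concrete, here is a minimal sketch of engagement-first ranking. It is an illustration only: the signal names, weights, and scoring rule are invented for this example and do not describe Facebook’s (or any platform’s) actual system.

```python
# Toy illustration of engagement-first feed ranking. All signals,
# weights, and the scoring rule are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like: float        # model's estimate the user will like it
    predicted_comment: float     # estimate the user will comment
    predicted_share: float       # estimate the user will share
    predicted_dwell_secs: float  # estimated time the user will linger

# Hypothetical weights: interactions that keep people on the platform
# longer (comments, shares) count for more than a passive like.
WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 6.0, "dwell": 0.1}

def engagement_score(post: Post) -> float:
    """Combine predicted interactions into a single ranking score."""
    return (WEIGHTS["like"] * post.predicted_like
            + WEIGHTS["comment"] * post.predicted_comment
            + WEIGHTS["share"] * post.predicted_share
            + WEIGHTS["dwell"] * post.predicted_dwell_secs)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Show the user more of whatever the model predicts will engage them."""
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("cat-video", 0.9, 0.10, 0.05, 20.0),
        Post("outrage-bait", 0.4, 0.60, 0.50, 45.0),
        Post("local-news", 0.3, 0.05, 0.02, 10.0),
    ])
    for post in feed:
        print(post.post_id, round(engagement_score(post), 2))
```

Nothing in this sketch asks what the user wants to see; the ranking is driven entirely by what the model predicts the user will react to, which is exactly the property at issue in the criticisms below.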

In one sense, this design feature gives social media companies and their apologists a convenient defense against criticism: if certain content takes off on a platform, that’s because it’s what users like. If you have a problem with that, perhaps your problem is with the users.

And yet, at the same time, engagement optimization is at the heart of many of the criticisms of social platforms. An algorithm that is too focused on engagement might push users toward content that is highly engaging but of low social value. It might feed them a diet of posts that are ever more engaging because they are ever more extreme. And it might encourage the viral spread of material that is false or harmful, because the system selects first for what will drive engagement, rather than for what ought to be seen. The list of ills associated with engagement-first design helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit during a congressional hearing in March that the platforms under their control are built that way at all. Zuckerberg insisted that Facebook’s true goal is “meaningful social interactions.” “Engagement,” he said, “is only a sign that if we deliver that value, then it will be natural that people use our services more.”

In a different context, however, Zuckerberg has acknowledged that things might not be so simple. In a 2018 post explaining why Facebook demotes “borderline” posts that push right up to the edge of the platform’s rules without breaking them, he wrote, “no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average, even when they tell us afterwards they don’t like the content.” But that observation appears to have been confined to the question of how to implement Facebook’s policies around prohibited content, rather than to rethinking the design of its ranking algorithm more broadly.
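In ranking terms, the demotion Zuckerberg described amounts to a penalty that grows as a post gets closer to the policy line, offsetting the natural engagement boost such posts receive. The following is a hypothetical sketch of that idea; the threshold, the shape of the penalty curve, and the function names are assumptions made for illustration, not Facebook’s real mechanics.

```python
# Hypothetical sketch of "borderline content" demotion: the closer a
# post is judged to be to the policy line, the more its distribution
# is reduced. Threshold and penalty curve are invented for this example.

def demotion_multiplier(policy_violation_prob: float,
                        borderline_threshold: float = 0.6) -> float:
    """Return a factor in (0, 1] that shrinks a post's ranking score
    as its estimated closeness to the policy line increases."""
    if policy_violation_prob <= borderline_threshold:
        return 1.0  # clearly within the rules: no demotion
    # Ramp the penalty from no demotion at the threshold down to
    # near-zero distribution right at the line.
    closeness = (policy_violation_prob - borderline_threshold) / (1.0 - borderline_threshold)
    return max(0.05, 1.0 - closeness)

def adjusted_score(engagement_score: float, policy_violation_prob: float) -> float:
    """Counteract the engagement boost that borderline posts tend to get."""
    return engagement_score * demotion_multiplier(policy_violation_prob)

# Example: a borderline post with high predicted engagement ends up
# ranked below a tamer post once the demotion is applied.
print(adjusted_score(10.0, 0.95))  # heavily demoted
print(adjusted_score(6.0, 0.20))   # left untouched
```

The point of the sketch is the limitation the article describes: the demotion only kicks in near the policy line, while everywhere else the ranking remains engagement-first.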
