Documents show tech giant ‘was aware’ that system boosting engagement favoured controversial posts
Five years ago, Facebook gave its users five new ways to react to a post in their news feed beyond the iconic “like” thumbs-up: “love,” “haha,” “wow,” “sad” and “angry.”
Behind the scenes, Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content – including content likely to make them angry. Starting in 2017, Facebook’s ranking algorithm treated emoji reactions as five times more valuable than “likes,” internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business.
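In code terms, the change amounted to a lookup table of per-signal weights. Below is a minimal sketch of that idea; only the five-to-one ratio comes from the documents, and the raw-count arithmetic and function names are illustrative assumptions:

```python
# Hypothetical per-signal weights. Only the 5x ratio for reaction emoji
# versus "likes" is reported in the documents; the rest is illustrative.
REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,
}

def weighted_reactions(counts: dict[str, int]) -> int:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(name, 0) * n for name, n in counts.items())

# Under this weighting, 20 angry reactions count as much as 100 plain likes:
print(weighted_reactions({"like": 100}))   # 100
print(weighted_reactions({"angry": 20}))   # 100
```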
Facebook’s own researchers were quick to suspect a critical flaw. Favouring “controversial” posts – including those that make users angry – could open “the door to more spam/abuse/clickbait inadvertently,” a staffer, whose name was redacted, wrote in one of the documents. A colleague responded, “It’s possible.”
The warning proved prescient. Company data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news. That means Facebook for three years systematically amped up some of the worst of its platform, making it more prominent in users’ feeds and spreading it to wider audiences. The power of the algorithmic promotion undermined the efforts of Facebook’s content moderators, who were fighting an uphill battle against harmful content.
The internal debate over the “angry” emoji and the findings about its effects shed light on the highly subjective human judgments that underlie Facebook’s news feed algorithm – the byzantine machine-learning software that decides for billions of people what kinds of posts they will see when they open the app. The deliberations were revealed in disclosures made to the US Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of whistleblower Frances Haugen. The redacted versions were reviewed by a consortium of news organisations, including The Washington Post. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Monday.
In several cases, the documents show Facebook employees on its “integrity” teams raising flags about the human costs of specific elements of the ranking system – warnings that executives sometimes heeded and other times seemingly brushed aside.
An algorithm such as Facebook’s, which relies on opaque machine-learning techniques to generate its engagement predictions, “can sound mysterious and menacing,” said Noah Giansiracusa, a math professor at Bentley University in Massachusetts and author of the book, How Algorithms Create and Prevent Fake News. “But at the end of the day, there’s one number that gets predicted – one output. And a human is deciding what that number is.”
Facebook spokesperson Dani Lever said, “We continue to work to understand what content creates negative experiences, so we can reduce its distribution. This includes content that has a disproportionate amount of angry reactions, for example.”
The weight of the angry reaction is just one of the many levers that Facebook engineers manipulate to shape the flow of information on the world’s largest social network.
Facebook takes into account numerous factors – some of which are weighted to count a lot, some of which count a little and some of which count as negative – that add up to a single score that the news feed algorithm generates for each post in each user’s feed, each time they refresh it. That score is in turn used to sort the posts, deciding which ones appear at the top and which appear so far down that you’ll probably never see them. That all-encompassing scoring system is used to categorise and sort vast swaths of human interaction in nearly every country of the world and in more than 100 languages.
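Read as code, that description is a weighted sum collapsed into one number per post, followed by a descending sort on each refresh. The sketch below is one toy reading of it: the factor names, weight values and the use of model-predicted engagement probabilities are assumptions, not Facebook’s actual model; only the 5x reaction boost is drawn from the reporting.

```python
from typing import NamedTuple

class Post(NamedTuple):
    post_id: str
    predictions: dict[str, float]  # model-predicted engagement probabilities

# Invented factor names and weights: some count a lot, some a little,
# and some count as negative.
WEIGHTS = {
    "p_like": 1.0,
    "p_reaction": 5.0,   # reported 5x boost for emoji reactions
    "p_comment": 15.0,
    "p_share": 30.0,
    "p_hide": -50.0,     # negative factors push a post down the feed
}

def score(post: Post) -> float:
    """Collapse all weighted factors into a single ranking score."""
    return sum(WEIGHTS[name] * p
               for name, p in post.predictions.items()
               if name in WEIGHTS)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort posts by score, highest first, on each feed refresh."""
    return sorted(posts, key=score, reverse=True)
```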
Beyond the debate over the angry emoji, the documents show Facebook employees wrestling with tough questions about the company’s values, performing cleverly constructed analyses. When they found that the algorithm was exacerbating harms, they advocated for tweaks they thought might help. But those proposals were sometimes overruled. When boosts, like those for emojis, collided with “demotions” meant to limit potentially harmful content, all that complicated math added up to a problem in protecting users. The average post got a score of a few hundred, according to the documents. But in 2019, a Facebook data scientist discovered there was no limit to how high scores could go.
If Facebook’s algorithms thought a post was bad, Facebook could cut its score in half, pushing such posts way down in users’ feeds. But a few posts could get scores as high as a billion, according to the documents. Cutting an astronomical score in half to “demote” it would still leave it with a score high enough to appear at the top of the user’s feed. “Scary thought: civic demotions not working,” one Facebook employee noted.
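The arithmetic of that failure is easy to reproduce. A sketch using the figures in the article – a typical score of a few hundred against an outlier near a billion:

```python
def demote(score: float) -> float:
    """Halve a post's score, the 'demotion' described in the documents."""
    return score * 0.5

typical_score = 300            # average posts scored "a few hundred"
outlier_score = 1_000_000_000  # some posts reached scores near a billion

# Halving the outlier still leaves it hundreds of thousands of times
# above a typical post, so it stays at the top of the feed.
print(demote(outlier_score))                  # 500000000.0
print(demote(outlier_score) > typical_score)  # True
```

A relative penalty such as halving can never pull an unbounded score back into the typical range; only a hard cap or an absolute penalty could.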
A 2014 experiment, which raised ethical concerns, manipulated the emotional valence of posts shown in users’ feeds to be more positive or more negative, then watched to see whether the users’ own subsequent posts changed to match.
At one point, CEO Mark Zuckerberg even encouraged users, in a public reply to a comment, to use the angry reaction to signal they disliked something, although that would make Facebook show similar content more often.
Last September, Facebook finally stopped using the angry reaction as a signal of what its users wanted and cut its weight to zero, documents show. At the same time, it boosted “love” and “sad” to be worth two likes.
Time and again, Facebook adjusted weightings after they caused harm. Facebook wanted to encourage users to stream live video, which it favoured over photo and text posts, so a live video’s weight could go as high as 600 times that of other post types. That had helped cause “ultra-rapid virality for several low-quality viral videos,” a document said. Live videos on Facebook played a big role in political events, including both the racial justice protests last year after the killing of George Floyd and the riot at the US Capitol on January 6. Immediately after the riot, Facebook frantically enacted its “Break the Glass” measures, reinstating safety steps it had previously rolled back – including capping the weight on live videos at only 60. Facebook didn’t respond to requests for comment about the weighting on live videos.
When Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found. As it turned out, after years of advocacy and pushback, there wasn’t a trade-off after all. According to one of the documents, users’ level of activity on Facebook was unaffected.
© Washington Post