Twitter to start hiding comments from suspected 'trolls' in conversations
The company says it will deploy a screen saying “show more replies” in front of responses that its systems adjudicate as vexatious, cynical or calculated to offend
Twitter is to start hiding comments from suspected ‘trolls’ in communal conversations and debates, even though such comments may not violate its terms of service.
Twitter has not said whether particular words or phrases will flag a comment for hiding, but it has said it will use “signals” such as an account-holder operating multiple accounts or not having verified their account via email. In these cases, the company said, it may hide the user’s comments from the visible Twitter conversation.
Twitter says that initial trials have shown that this method resulted in an 8pc fall in the number of abuse cases reported to it.
The initiative is part of an attempt to promote “healthy” conversations and demote malicious or cynical responses within them, according to Twitter bosses.
“Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on ‘Show more replies’ or choose to see everything in your search setting,” said a joint statement by Del Harvey, Twitter’s vice-president of trust and safety, and David Gascam, Twitter’s director of product management with responsibility for health.
“The result is that people contributing to the healthy conversation will be more visible in conversations and search.”
A company spokesperson denied that the move was an attempt to influence political discussion on the platform.
"No. It’s important to remember this is about behaviour, not content,” she said. “We are identifying behaviours that disrupt and detract from the public conversation, we’re not assessing the value of content."
Twitter executives Harvey and Gascam said that the initiative is part of an ongoing attempt “to improve the health of the public conversation on Twitter”.
“One important issue we’ve been working to address is what some might refer to as trolls,” they said. “Some troll-like behaviour is fun, good and humorous. What we’re talking about today are troll-like behaviours that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.”
The Twitter executives said that fewer than 1pc of accounts make up the majority of accounts reported for abuse, but that much of what is reported does not violate the company’s rules.
“While still a small overall number, these accounts have a disproportionately large – and negative – impact on people’s experience on Twitter,” they said. “The challenge for us has been how can we proactively address these disruptive behaviours that do not violate our policies but negatively impact the health of the conversation?”
In deciding which comments to place behind a “show more replies” screen, Twitter will look at factors relating to the account posting the comment, the executives said.
“There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behaviour that might indicate a coordinated attack.”
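The signals described suggest a simple behaviour-based scoring approach. Twitter has not published how its system actually works, so the following is purely an illustrative sketch: the signal names, weights, and threshold below are all hypothetical, chosen only to show how such signals might combine into a hide/show decision.

```python
# Illustrative sketch only: Twitter has not published its ranking model.
# All signal names, weights, and the threshold here are hypothetical.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    email_confirmed: bool
    simultaneous_signups: int   # accounts created together from the same source
    unsolicited_mentions: int   # mentions of accounts that don't follow back
    coordinated_activity: bool  # e.g. flagged by a coordination detector


def troll_score(s: AccountSignals) -> float:
    """Combine behavioural signals into one score (higher = more suspect)."""
    score = 0.0
    if not s.email_confirmed:
        score += 1.0
    score += 0.5 * max(0, s.simultaneous_signups - 1)
    score += 0.1 * s.unsolicited_mentions
    if s.coordinated_activity:
        score += 2.0
    return score


HIDE_THRESHOLD = 1.5  # hypothetical cut-off


def hide_behind_show_more(s: AccountSignals) -> bool:
    """Replies from high-scoring accounts go behind 'Show more replies'."""
    return troll_score(s) >= HIDE_THRESHOLD
```

Note that, consistent with the company’s statement, a decision like this looks only at account behaviour, never at the text of the reply itself.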