Twitter aims to cut down on insulting and hateful comments
Company says it wants people to ‘enjoy healthy conversations’
Twitter will autoblock what it calls ‘mean tweets’ from people in a trial to see whether it can improve standards of civility on the platform.
The social media giant said it will temporarily block accounts for seven days for using “potentially harmful language such as insults or hateful remarks”, or sending “repetitive and uninvited mentions or replies”.
The new ‘safety mode’ feature can be turned on by Twitter users who either find that they’re being regularly harangued or abused, or who just don’t like acerbic exchanges on the platform.
“We want people on Twitter to enjoy healthy conversations, so we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations,” the company said of the new test feature, which will be trialled by a small number of users before a further rollout is planned.
When the feature is turned on in a user’s settings, Twitter’s systems “will assess the likelihood of a negative engagement by considering both the tweet’s content and the relationship between the tweet author and replier”.
The technology will take existing relationships into account, Twitter said, so that accounts that a user already follows, or frequently interacts with, will not be autoblocked.
“Authors of tweets found by our technology to be harmful or uninvited will be autoblocked, meaning they’ll temporarily be unable to follow your account, see your tweets, or send you direct messages,” the company added.
The feature is likely to divide opinion among Twitter users.
While Twitter has a long history of users being hounded or harassed, sometimes as part of an orchestrated campaign, the platform is also used to hold influential figures to account, sometimes through heated discourse.
There have been renewed calls for regulation of social media sites after England footballer Marcus Rashford was racially abused on Twitter following his penalty miss in the Euro 2020 final.
Twitter has not yet given examples of what it considers “insults or hateful remarks”.
However, the company has given an indication of where it intends to concentrate the new feature’s focus, with its Trust and Safety Council recommending an emphasis on battling the targeting of women and journalists.
“As members of the Trust and Safety Council, we provided feedback on Safety Mode to ensure it entails mitigations that protect counter-speech while also addressing online harassment towards women and journalists,” said a spokesperson for Article 19, a British human rights organisation that was consulted on the new safety feature.