Sunday 18 August 2019

Twitter and Facebook's account purge is as much about PR as fighting the fakes

Even Twitter CEO Jack Dorsey lost 200,000 followers in the purge

Leonid Bershidsky

I lost 120 Twitter followers overnight last week. US President Donald Trump lost 340,000, the New York Times 732,000, former US President Barack Obama three million or so. Even Twitter's CEO, Jack Dorsey, lost 200,000 in his company's much-hyped crackdown on dubious accounts.

But what looks like a major purge is more like a PR onslaught, as Twitter and Facebook try to outdo each other at showing they care about the health of the social network conversation.

Top Twitter lawyer Vijaya Gadde had alerted users before the purge in a blog post, explaining that most of the accounts being targeted weren't bots. They were mostly set up by real people, she wrote, "but we cannot confirm that the original person who opened the account still has control and access".

To confirm this, Twitter tells the supposed account owners to solve a captcha or change their password. The accounts for which this doesn't happen get "locked"; after a month, they stop counting toward Twitter's total user number. Now, they no longer pad follower counts, either.

The interesting part here is how Twitter determines that there's something wrong. According to Gadde, the trigger is usually a sudden change in an account's behaviour. It might start tweeting "a large volume of unsolicited replies or mentions" or "misleading links". The same behaviour in a new account also sets off alarm bells: algorithms identify the account as potentially "spammy or automated" and "challenge" its owner, for example by asking her to confirm a phone number. Twitter reports a large rise in the number of accounts challenged, from slightly over 2.5 million in September to 10 million in May. Given that Twitter had 336 million monthly active users in the first quarter of 2018, that looks like a large number - but only until one looks at Facebook's recent report on similar activity.

In May, Facebook said it had taken down 583 million fake accounts, down from 694 million in the fourth quarter of 2017. That's about 27pc of Facebook's monthly active users in the first quarter. But of course Facebook didn't decimate its user base - that would have sent the stock price tumbling. It explained that it killed the fake accounts just as malicious actors tried to register them. The idea is that Facebook's user base is not inflated - it contains only 3 to 4pc fake accounts - but it would have been bloated with fakes had it not been for algorithms that, the company says, detected 98.5pc of the fakes before users reported them.

Facebook's criteria for spotting fake accounts are similar to Twitter's: repeated posting of the same content, sudden increases in the number of messages sent and other activity patterns. Both Twitter and Facebook also have systems to stop automatic account registration.

But on the scale on which the social networks operate, even a very high detection rate still allows millions of fake accounts to be added every month. Of the 583 million fake accounts Facebook removed in the first three months of this year, algorithms spotted 98.5pc. That means users flagged the remaining 1.5pc, or 8.7 million accounts.

In a 2017 paper, a team of Canadian researchers showed that an internet-of-things botnet's requests to create accounts on Instagram, owned by Facebook, were successful in 18pc of the cases. Detection technology may work better now, but there's still no way for social networks to know exactly how well they're doing. At any rate, the market for fake followers and likes is still thriving.

The automatic detection of fake or hijacked accounts is a flourishing academic field because there's demand from the social networks, which are willing to devote significant resources to this work - and even to do it manually where algorithms fail. Facebook, for example, admits that its technology is better at detecting nudity than hate speech, which is flagged algorithmically in just 38pc of the cases before users report it.

No police force can prevent 100pc of crimes. The social networks are increasingly making their policing efforts public so that users, and society, might begin to think about them in these terms: they do what they can but some bad stuff just can't be prevented. But technically, nothing is stopping Twitter and Facebook from setting up an identification procedure that would make automated registration impossible.

As they try to navigate between the nuisances of spam and fake news on the one hand, and privacy concerns on the other, social networks can only step up the public relations activity around their fake-fighting efforts. In the process, they do their best not to hurt the user numbers their investors follow religiously. Does this approach improve the health of the social network conversation?

My answer, so far, is no. Your experience might be different. To find out, say something combative on Twitter and see what happens.


Sunday Indo Business