Adrian Weckler: Facebook-bashing won't work
"Why should we sit back and let Facebook get away with this? Who's in charge of this society anyway?"
Facebook-bashing is back. Politicians and media companies are up in arms, accusing the social media giant of having too much power and doing too little to curb harmful content.
"They have gotten away with it for far too long," press ombudsman Peter Feeney told Newstalk's Breakfast programme last week. "They're saying that they're not responsible, that they're not publishers, that they just host other people's opinions. That doesn't wash any more… an effort should be made to force the Facebooks of this world to face the same responsibilities [other media companies] face."
Feeney's point is a commonly held one in media and political circles. But are we really ready to step in and regulate?
Right now, Facebook itself judges what is or isn't harmful content. This can lead to controversial outcomes. A trove of leaked Facebook documents published by the Guardian revealed that the phrase "someone shoot Trump" will be taken down, while the phrase "to snap a bitch's neck, make sure to apply all your pressure to the middle of the throat" will not be taken down (Facebook says that the first phrase constitutes a "credible" threat of violence, while the second phrase doesn't).
It's not just Facebook's conscious decisions, either. Last month, it was far too slow in removing live-streamed videos of two murders and a suicide.
Nevertheless, calls for legislative or regulatory solutions to the problem have, so far, not been backed up with much substance as to how it would work.
The biggest problem is the scope and complexity of what we ourselves write and say. This ranges from black humour to cries for help.
"Someone posts a graphic video of a terrorist attack. Will it inspire people to emulate the violence, or speak out against it?" says Facebook head of public policy Monika Bickert. "Someone posts a joke about suicide. Are they just being themselves, or is it a cry for help? In the UK, being critical of the monarchy might be acceptable. In some parts of the world, it will get you a jail sentence."
These are fair points. Deciding on what should remain published is complex. Many people will remember the case of Paul Chambers, the Northern Ireland man who was arrested for tweeting: "Crap! Robin Hood airport is closed. You've got a week and a bit to get your s**t together, otherwise I'm blowing the airport sky high!"
Most people would have recognised this to be a joke. But a conservative, safety-first culture meant he was arrested and charged (his conviction was later quashed on appeal). Were the police wrong to arrest him? Many will say yes, some will say no.
The point is, there must be thousands, if not millions, of similarly ambiguous cases arising every week.
Other charges against Facebook focus on security.
For example, British prime minister Theresa May last week called for governments to force Facebook (and Google, Reddit and others) into more proactive behaviour on reporting suspicious content to authorities. Her predecessor, David Cameron, made similar demands repeatedly, unsuccessfully pressing Apple to abandon encryption on iMessage communications. Cameron's view, like May's today, is that companies like Facebook and Apple can be used by terrorists and terrorist sympathisers and, therefore, should work much more proactively with the security services.
Are they right? Whether or not you think they are, it is fair to say that there is no consensus on the degree to which Facebook should be monitoring our communications with a view to reporting potentially unlawful behaviour.
There are cynics who argue that Facebook profits from all of this.
It is true that the company makes money from more engagement on its site. But it may be a stretch to argue that offensive, tragic content such as the murder or suicide of someone on Facebook Live is all part of some cynical monetisation plan. Looking at it from a commercial perspective, the opposite motivation makes more sense. Sensitive content jeopardises Facebook's delicate position as a platform instead of a publisher. The more that offensive material (or "fake news") is associated with Facebook, the more calls there are for new regulatory oversight or sanctions on the service. Commercially, avoiding this outcome is a priority for Facebook: regulation is a slippery slope.
Regulation to slow the company down, though, is exactly the favoured outcome of another group of trenchant Facebook critics: media organisations.
Broadcasters and newspapers, in particular, see in Facebook an entity that has stripped their ability to make advertising money. They're not wrong: Facebook now takes in some €30bn annually, much of which comes from advertising budgets that would otherwise have found their way into the coffers of newspapers and broadcasters. Advertising agencies are almost as hostile for similar reasons - the automated ad systems that Facebook, Google, Twitter and others have introduced strip some of the value from their own consultancy services. So if you talk to any journalist or ad industry executive, there is a good chance that they'll fume about Facebook "getting away" with its industry model.
What these organisations would like to see is some event that stalls or reverses the inexorable trend of media drifting to specialist digital platforms such as Facebook, Snapchat and YouTube.
While there are well-argued points made ("fake news" and extremist content need to be tackled in a better way), this overall aspiration seems futile. There is no turning the clock back to a time when slow publication cycles allowed deliberative editorial oversight on a piecemeal basis. Whatever answers we come up with on these big issues, there is no return to an age of analogue media dominance.
Sunday Indo Business