Fake news? Or dopey readers?
Is fake news solely the fault of Russians and Facebook?
Or should you, the reader, bear some responsibility for showing a bit more cop-on when reacting to made-up articles you see on social media platforms?
Moreover, how far should Facebook go in seeking to fix its fake news problem? Is there any way of doing it without blocking made-up articles from 'legacy' media companies, which are now Facebook's biggest critics?
Right now, the social media giant is caught up in a storm of political hearings and media accusations concerning its platform being manipulated by Russian agents and scammers, up to and including a critical role in the last US presidential election. Twitter is facing similar accusations with regard to fake 'bot' accounts pushing disinformation ahead of last year's Brexit referendum.
Facebook has acknowledged the issue, finding that $100,000 (€84,789) worth of ads on the platform was bought by Russia's controversial Internet Research Agency, targeting race and immigration.
On the one hand, this is insidious. A semi-hostile country is suspected of deploying propaganda on social media platforms to strategically target democratic elections.
On the other hand, it's a tiny amount of content. The company's general counsel, Colin Stretch, says that it equals about four-thousandths of 1pc of content in the Facebook News Feed.
"Put another way, if each of these posts were a commercial on television, you'd have to watch more than 600 hours of television to see something from it," he said.
Setting aside the fact that it happened, and its volume, what about its effect on us as adult voters? How dangerous is fake news on Facebook, and what, if any, should a reader's response be?
I recently had the chance to ask this of Jimmy Wales, the founder of Wikipedia. He's so freaked out by the decline in news media standards - including via Facebook fake news - that he has started up a new online news service called WikiTribune.
"Fake news is almost uniquely a Facebook problem," he said. "You might see a fake news story posted by an entity called something like the Denver Guardian. To many, that might sound plausible."
But is that wholly Facebook's problem, I asked? Doesn't someone reading the fake news piece have any responsibility for considering the credibility of the source they're looking at?
"Yes and no," he said. "In an open society, you shouldn't have to become an expert on the news in order to deserve to receive quality information. So when it's a week before the election and you pull your head out of the sports pages, you should be able to get a concise, clear explanation and make up your mind based on that."
There is something to what Wales says here. But doesn't it give readers something of a free pass to be utterly ignorant?
In answer to the rise of fake news, Facebook, Google and Twitter are rolling out a new 'trust indicator' system that will see a small 'i'-shaped sticker placed on articles from 75 major news organisations that have agreed to sign up to the system.
The idea is simple: a reader should be able to click or tap on the article's symbol to get verified context about the publisher. Facebook has also been engaged for months in a US fact-checking system designed to sniff out false stories. The success of that initiative appears to be in doubt, with the Guardian last week quoting anonymous journalists attached to the scheme who said that there was no indication of whether it was having any impact.
But there's a deeper, more fundamental question behind all of this, one that appears to be absent from much of the analysis so far.
It's one of personal responsibility.
Can we argue, with a straight face, that Russian anti-EU fake articles outdid the anti-EU fake articles in the Daily Mail or the Sun during the Brexit referendum?
Or that Twitter bots out-scored misleading UK front pages in whipping up racial tension?
Come off it, folks. We know they didn't.
Just as we know that a Daily Telegraph front page last week, listing a number of Conservative MPs as traitors under the banner "Brexit mutineers", had a far more incendiary, divisive effect than a made-up Facebook post from a Macedonian scam artist.
In practice, we discount fake news narratives in British newspapers because we apply common sense and context.
When the Daily Mail runs a front page claiming that 100,000 African immigrants are hoping Brexit doesn't happen, we apply our own credibility filters. Why should those same filters fail us when we see a Facebook post claiming the Pope has endorsed Donald Trump?
The debate reminds me a little of arguments we used to see around spam and email scams.
To some, emails purporting to be from foreign princes or Irish banks, misspelled and using strange fonts, were beguiling. "It's email," they'd argue. "How am I supposed to know it's not real? I'm no tech expert."
Fortunately, we don't take that level of excuse seriously anymore.
Perhaps it's time we started applying the same logic to our consumption of news.
If you see an article shared on Facebook claiming that a man in a white van is cruising around a suburb snatching children, it's likely that it's fake. And yet one of your gullible friends shared it anyway. Whom should you judge? Facebook, for allowing your friend to share an obvious lie? Or your friend, for believing and sharing something so fishy?
There is certainly a serious problem with the gaming and manipulation of social media platforms for political and commercial ends. This is here to stay, because these platforms are the new news delivery systems.
But suggesting that it's purely a technical supply-side issue that has nothing to do with user discretion is a bit silly.
Surely we're better than gullible fools.
Sunday Indo Business