The Dublin-based director of content policy at Meta has revealed that the tech giant has removed 24 million pieces of Covid misinformation since the start of the pandemic, closing down more than 3,000 accounts globally.
Twenty Irish accounts linked to Covid untruths were taken offline as part of the crackdown.
More than 40,000 people work on safety and artificial-intelligence systems at Meta (Facebook’s parent company), but speaking to the Sunday Independent, Siobhan Cummiskey said misinformation and disinformation are “complex, ever-evolving, whole-society problems”.
The director of content policy maintains they are addressing the problem, but admits “there is always more we can do”.
“Having said all that, it is controversial that a private company would be taking so many decisions on matters of freedom of expression and harmful content — and that’s why we personally consult so heavily with experts,” she said.
But Facebook has known for years that it has a problem, having permitted the polarisation of its online community. And during the pandemic, it did not effectively block disinformation campaigns about vaccines.
Critics of the online platform say the issue is not about freedom of expression, but the freedom for some to be dangerously destructive with the click of a button.
Asked if she understood why Facebook critics do not believe enough is being done, Ms Cummiskey said: “I think these are important issues that we tackle and take seriously.”
She said Meta defines misinformation as “false or misleading content”, while disinformation is defined as “misleading behaviour”, referring to things like manipulation and information campaigns.
“When it comes to misinformation, we take a three-pronged approach: remove, reduce and inform. When it comes to removing, some misinformation qualifies as harmful content — and that is content that can contribute to imminent risk of physical harm.”
Critics say only piecemeal attempts to remove false information about the pandemic have been made in the past year.
A study by the nonprofit group First Draft found that at least 3,200 posts made unfounded claims about vaccines. Does this suggest that what the company is doing is not working?
“I think there’s always more we can do. I think it is a complex problem that we need to always get ahead of, and stay on top of.”
But is Meta really keeping on top of it? An example of Facebook’s slow response was the time it took to take down the page of one of Ireland’s most prolific anti-vaxxers, Dolores Cahill.
It was fact-checked more than 75 times by the Institute for Strategic Dialogue Digital, and was allowed to amass over 130,000 followers before being removed from the site.
Asked if the company, valued at €420bn, was too slow to act over Dolores Cahill, Ms Cummiskey declined to comment.
“I can’t comment on the specifics of any particular page we have removed; the reasons for that are various, including legal and security reasons,” she said.
She added that she could not provide statistics on how many pages containing untruths were shut down in Ireland, but explained that “since the start of the pandemic, we have removed 24 million pieces of Covid-related misinformation and 3,000 accounts”.
However, all too often the company only appears to act when these issues are publicised in newspapers, flagged by complaints, or raised by politicians. Why are they so slow to act?
“I think we are always trying to do better. The way we develop our policies is that we heavily consult with external organisations. We keep on top of that.”
Aoife Gallagher, an analyst at the Institute for Strategic Dialogue Digital, which focuses on the links between far-right extremism, disinformation and conspiracy theories, is not convinced.
“Effectively countering the impact of disinformation and conspiracy theories is admittedly not an easy task. However, it has become abundantly clear that the social media platforms — who are responsible for hosting this kind of content — either lack the will and/or the ability to tackle the problem,” she said.
“Companies such as Meta often point to their use of AI technology or fact-checking — but evidence shows these efforts are failing to have any kind of meaningful impact.”
Asked what Facebook is doing right now to monitor extremists and stop them from simply signing up under a different email address, Ms Cummiskey admitted it is a difficult task.
“But we have a policy that means a piece of content, a page or a profile cannot reappear in the same form. We use a multitude of signals to determine if it is the same person, and use a combination of tools — including AI and human review — to do so.
“They cannot appear in the same form again.”