At the moment, hosting providers like us do not actively police the content we host. It would be impractical for us to do so. We rely on reports from third parties.
But in recent years, the internet infrastructure industry has been drawn into more and more debates about content: what should be online, and who should have access to it.
In an ideal world, the website or service operators would self-police and handle content moderation properly. Unfortunately, as we’ve seen, there are some actors who either take an extreme view of freedom of speech or simply do not care.
This means that calls to get content removed or blocked end up being directed further down the line to the hosting providers, domain name registrars, domain name registries and other parts of the 'infrastructure stack'.
That trend will continue and there’s very little that anyone can do to stop it.
Providers are free to make their own decisions, but I think we will see more calls from civil society to protect speech on the one hand, and from government and law enforcement to act swiftly on extremist content on the other.
It’s not ideal, as we don’t have a scalpel. We only have sledgehammers. As the DNS provider for a site or service, I can’t remove an offending image or video; I can only pull the plug on the entire site or service. If we host a site we might be able to block access to certain images or other content, but more often than not our only option is, again, to take down the entire site.
This is a problem, as it means that instead of blocking or removing only the offending content it becomes ‘all or nothing’.
When Cloudflare pulled the plug on the Daily Stormer back in August 2017, many of us in the industry agreed that it was the right thing to do. However, some of us were decidedly uncomfortable with how Matthew Prince explained the rationale behind that decision.
The issue wasn’t whether the Daily Stormer had gone too far, but that Matthew described the decision as an emotional one. That is far from ideal. When someone signs up with us or one of our competitors, they need to feel confident that we aren’t going to knock them offline on a whim.
Providers are private companies, so they can choose who they do business with. We all have terms of service which set out what we will or won’t allow people to do with the services we provide. As businesses we are conscious of our reputations, but we need to balance protecting our clients’ speech against societal norms.
Exercising that discretion, however, opens up debates around censorship and restrictions on free speech.
Industry leaders have been discussing their respective roles in dealing with these issues over the last few years and, as you’d expect, there are a wide range of views. However, a group of us came together to formulate the ‘DNS Abuse Framework’ which lays out when we will act without court orders or other firm legal obligations.
We agree that certain types of content are universally unwelcome, so we will take action against things like child sexual abuse material (sometimes referred to as 'child porn'), malware, or cases where there is a clear threat of imminent harm.
When it comes to other types of content, however, it isn’t as simple. We have clauses in our terms of service that focus specifically on 'hate' sites, but many, even bodies such as Ireland's Law Society, debate how that should be defined. Somewhere there is a line separating content that I personally don’t like from content that genuinely crosses it. But where, exactly, is that line?
Michele Neylon is the CEO and owner of Blacknight Solutions, one of Ireland’s largest domain registration and web hosting companies.