Finally, big tech to block extremists
So big tech companies now get the message: society expects them to do more in helping to stop the spread of extremist content.
The latest move comes from Google-owned YouTube. It's beefing up its image-matching technology and artificial intelligence to sniff out and block terrorist videos.
It's also pledging to cut off comments and ad-related income to what it deems "extremely offensive" videos, even if they aren't illegal.
These will include "videos that contain inflammatory religious or supremacist content", according to Google's chief lawyer Kent Walker.
So content that Google deems offensive will be buried on YouTube, even if it's not extreme enough to actually be removed.
"We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints," said Mr Walker.
YouTube's move comes after a major announcement by Facebook last week pledging similar actions. The social media behemoth has also recently pledged to hire 3,000 extra people (some of whom will be based in Dublin) to manually review content and step up prevention efforts.
Twitter is also under pressure to make its platform a less accommodating place for terrorists and hate-mongers.
So why are these tech companies ramping up their activities all of a sudden?
They say it's because they care about their communities. This is probably true. But the fear of being regulated is an even greater motivator. Companies the size of Facebook and Google are now utilities in all but name. Yet unlike traditional utilities, they enjoy a relatively light degree of legislative oversight, given their incredible influence and power.
Mark Zuckerberg and Sergey Brin understand that if they are not seen to be taking action, lawmakers might decide to legislate them into it.