The world's biggest tech companies have been blasted by MPs for failing to do enough to tackle extremism which appears in videos and messages posted online.
Google, Facebook and Twitter have been accused of "consciously failing" to tackle the incitement of extremism, despite multi-million pound profits, by a group of MPs on the home affairs select committee.
"Huge corporations like Google, Facebook and Twitter, with their billion dollar incomes, are consciously failing to tackle this threat and passing the buck by hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror," said MP and chair of the committee Keith Vaz.
Calling the internet the "modern day front line", Vaz said the platforms would become the "wild west" without greater efforts to tackle the problem.
Tech companies employing only a couple of hundred people each to monitor extremist content is not good enough, the MPs said, while Twitter was singled out for failing to proactively report content to the authorities.
The social network said it has suspended nearly a quarter of a million accounts in the past six months and told MPs in evidence submitted to the committee's investigation that the information posted on Twitter is public and "often has already been seen".
However, the MPs found it "alarming that these companies have teams of only a few hundred employees to monitor networks of millions of accounts and that Twitter does not even proactively report extremist content to law enforcement agencies".
Facebook said it works quickly to remove terrorism-related content and works closely with relevant authorities to counter extremist speech.
“As I made clear in my evidence session, terrorists and the support of terrorist activity are not allowed on Facebook and we deal swiftly and robustly with reports of terrorism-related content," said Facebook's UK director of policy Simon Milner.
"In the rare instances that we identify accounts or material as terrorist, we'll also look for and remove relevant associated accounts and content."
“Online extremism can only be tackled with a strong partnership between policymakers, civil society, academia and companies. For years we have been working closely with experts to support counter speech initiatives, encouraging people to use Facebook and other online platforms to condemn terrorist activity and to offer moderate voices in response to extremist ones.”
A spokesperson for Google-owned YouTube said: "We take our role in combating the spread of extremist material very seriously.
"We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law. We’ll continue to work with government and law enforcement authorities to explore what more can be done to tackle radicalisation.”
The MPs are proposing that the Met Police's counter terrorism internet referral unit (CTIRU), which is charged with monitoring and removing content inciting or glorifying terrorism, be beefed up - including by locating tech company staff within the unit itself.
"We need to win the cyber-war with terrorist and extremist organisations. We recommend that CTIRU is upgraded into a high-tech, state-of-the-art, round-the-clock central Operational Hub which locates the perils early, moves quickly to block them and is able to instantly share the sensitive information with other security agencies," the report concluded.
"Representatives of all the relevant agencies, including the Home Office, MI5 and major technology companies, should be co-located within CTIRU. This will enable greater cooperation, better information-sharing and more effective monitoring of and action against online extremist propaganda."
The MPs called on the government to enforce measures requiring tech companies to cooperate with the unit, and to force them to publish quarterly data on how many accounts have been taken down and for what reason, in an effort to improve transparency.
However, an expert questioned the thrust of the report.
"I’m not convinced that the extremist content online is the key point that radicalises someone," said Jamie Bartlett, director of the Centre for the Analysis of Social Media at Demos and the University of Sussex.
"Even academics still don’t know what makes it happen. Who’s to say it’s not pushing people away from being radicalised?"
He also raised concerns that MPs were looking to technology to solve a problem that remains technically hard, given the millions of videos and messages posted online each day.
"Technologically it’s way more difficult than people realise. There are big limits on what they [tech companies] are able to do," said Bartlett.
"The problem is that we see Amazon suggesting what book to buy next and see Google suggesting what we’re searching for - we believe that an algorithm can then solve everything. Facebook has billions of pieces of content a day. As clever as they are, identifying material within that is very difficult to do.”
It comes as European leaders call for greater access to encrypted communications in their bid to tackle terrorism across the continent.
French minister Bernard Cazeneuve said on Tuesday there should be laws obliging operators to cooperate in investigations, singling out messaging app Telegram which he said was non-cooperative. EU laws on privacy are currently under review.
In the UK the government is pushing forward with the Investigatory Powers Bill which is seeking to hand greater access to communications data to security services. The new law was backed in a report published last week by the independent reviewer of terrorism legislation despite concerns among tech companies and campaigners that it intrudes on privacy.