London Bridge attacks: Facebook, Google and Twitter respond after Prime Minister Theresa May accuses them of providing “safe space” for terrorists
Silicon Valley tech giants have responded after Prime Minister Theresa May accused internet companies of providing a “safe space” for terrorism in the wake of the attacks at London Bridge.
In a strongly worded speech, May singled out the companies as one of four important areas where “things need to change” in the way the UK tackles terrorism.
May said:
“We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet, and the big companies that provide internet-based services, provide. We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning. And we need to do everything we can at home to reduce the risks of extremism online.”
Facebook (which also owns WhatsApp), Google and Twitter have said they are already working to tackle the issue, and TechUK deputy chief Antony Walker praised them for being “absolutely committed to working through an international forum to strengthen existing initiatives; improve identification and removal of terrorist propaganda; and promote counter-speech, empowering those with inclusive and positive messages”.
“Putting in place the right solutions to combating the misuse of online platforms is just one part of the jigsaw in tackling extremism. These are highly complex, challenging issues and tech companies are committed to playing their part, working within a clear legal framework and in full recognition of the seriousness of these issues,” he said.
Meanwhile, a digital rights group has called the government’s focus on the internet and encryption “disappointing”, saying it risked “distracting” from the “hard and vital questions” about the drivers of terrorism elsewhere.
“This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe,” said Open Rights Group in a blog post.
“But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the internet is not the simple solution that Theresa May is claiming.”
Home secretary Amber Rudd re-ignited the row over encryption, calling for an international agreement “to take down the material that is radicalising people” and “to help work with us to limit the amount of end-to-end encryption that otherwise terrorists can use to plot their devices.”
Experts have previously warned that weakening end-to-end encryption would put security at risk.
What Facebook said
Facebook director of policy Simon Milner said: “We want Facebook to be a hostile environment for terrorists”. He added that it works “aggressively” to remove terrorist content and notifies law enforcement of emergencies involving imminent harm.
“We condemn the attacks that took place in London on Saturday night and our thoughts are with the families of the victims and those who are injured. Facebook’s Safety Check was activated by the local community last night. We hope the people in the area found the tool a helpful way to let their friends and family know they are okay.
“We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement. Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.”
What Twitter said
“Terrorist content has no place on Twitter. We continue to expand the use of technology as part of a systematic approach to removing this type of content. We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia,” said Nick Pickles, head of public policy in the UK.
In its latest transparency report, published in March, Twitter said nearly 400,000 accounts were suspended in the last six months of 2016 for violations related to the promotion of terrorism. Around three quarters of those violations were surfaced by Twitter’s own internal systems, and fewer than two per cent came from government requests.
What Google said
A spokesperson for the web giant said:
“Our thoughts are with the victims of this shocking attack, and with the families of those caught up in it. We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online. We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”