Tech firms demand legal safeguards over content moderation
Tech giants including Google and Facebook have called on the EU to introduce a “legal safeguard” to limit their liability when proactively tackling harmful content.
Brussels is currently drafting rules under its upcoming Digital Services Act that would increase companies’ responsibility for removing illegal material posted to their platforms.
But in a report released today, a trade body representing major internet companies said a new framework was needed to ensure firms were not at risk of additional liability from their attempts to remove harmful content.
The European Digital Media Association (EDIMA) said the current lack of such a provision had a “chilling effect” on service providers.
“All of our members take their responsibility very seriously and want to do more to tackle illegal content and activity online,” said director general Siada El Ramly.
“A European legal safeguard for service providers would give them the leeway to use their resources and technology in creative ways in order to do so.”
The association, which represents 19 top tech firms, proposed the introduction of new safeguards including minimum information levels in notices of illegal content and the requirement for a human review of content removal appeals.
EDIMA highlighted similar regulation in place in the US, where the so-called Good Samaritan principle allows sites to take action against illegal material without incurring additional liability.
But the group said US rules were often viewed as a “carte blanche for service providers to overact on content that is not strictly illegal”.
The report stated that any new EU framework should be subject to oversight for “overaction” to ensure there was no impact on principles of free speech.
“The EU approach to the freedom of expression is different to that of the US so our approach to moderating content online must be different also,” El Ramly said.
“Our proposal is based on European values and laws and sets clear limits to the legal safeguard for service providers in order to protect the freedom of expression and to prevent overaction by service providers.”
Tech giants such as Facebook and Google are facing growing scrutiny from regulators in the US and the EU amid concerns they are failing to crack down on harmful material on their platforms.
The issue has reached fever pitch ahead of the US presidential election next week, with sites including Facebook and Twitter taking action to limit the spread of misinformation before voters head to the polls.