Online tech regulation needs to change business models, not censor content
Last week the inquest into schoolgirl Molly Russell’s death concluded that social media contributed “more than minimally” to her taking her own life. She was bombarded with posts about suicide and self-harm online, with Pinterest even emailing her to suggest “10 depression pins you might like”.
Molly’s tragic story highlights why regulation is needed. And as a mother subjected to constant pleading for access to social media, I feel the urgency acutely myself. But the Online Safety Bill – which the government claims will make the UK “the safest place in the world to be online” – is not the answer.
The Bill has been criticised from all corners. Conservative MP David Davis said it could lead to the “biggest curtailment of free speech in modern history”, while campaigners have argued that the Bill will actively help the spread of Russian disinformation. It is hard to find anyone who really likes it.
You don’t need to be a lawyer to see that the Bill is a mess. It is overly complicated and poorly drafted, with definitions so broad that it is impossible to know what it will mean in practice. It offers the worst of both worlds: it not only threatens free speech and privacy but also fails to do what it says on the tin – protect online safety.
At the heart of the problem of online harms is big tech’s business model. Social media companies need to keep people glued to their screens so they can sell advertising. Unfortunately, that often means serving up a constant stream of shocking, harmful and polarising content.
For Molly that was posts about self-harm and depression. For others it might be posts about misogyny, knives, racist abuse or conspiracy theories about climate change. It can, of course, be completely innocent content – endlessly addictive dog videos, for example.
But the Bill in its current form would not really change this. Platforms will still be able to use our intimate data to curate what we see online, exploiting our vulnerabilities with harmful and potentially tragic results. Instead of tackling the toxic business models, the Bill focuses on policing and censoring what individuals can say online. But that simply won’t work. It is the accumulation and reach of toxic messaging that causes the real harm.
And focusing on content moderation poses a major threat to free speech. It is harder than it sounds to identify an individual piece of content as harmful or even illegal, and the risk of over-policing content is real. The Bill tries to address these concerns with wide carve-outs for so-called content “of democratic importance” and content from “recognised news publishers” (effectively anyone calling themselves “media”). But this just means zero accountability for those with the greatest reach and the most potential to cause harm. Meanwhile, sweeping powers would allow the Culture Secretary to “direct” the work of the independent regulator, Ofcom, risking the politicisation of what we are allowed to say online.
New Culture Secretary Michelle Donelan has said the government will “tweak” the Online Safety Bill before it comes back to parliament.
But a few small tweaks won’t be enough. The only way to tackle online harm is to address the big tech business models that drive so much of it. To do that, the Bill needs a total rewrite.
Instead of this dangerous and unworkable approach, any new laws should address the real issues further upstream. This means provisions to protect vulnerable users, slow the spread of fake news through content curation, and tackle the problems with recommender algorithms. Barring tech companies from tracking our every move to bombard us with ads designed to exploit our vulnerabilities is one part of the solution – a step Norway and the US are already considering.
We do need urgent action. But Mark Zuckerberg moving fast and breaking things has got us to where we are today. Let’s not make the same mistake with laws that will shape our societies for years to come.