AI rules cannot mimic the chaos of trying to legislate away online harm
The behemoth that is the Online Safety Bill has finally been made law, but when we come to legislate artificial intelligence, we can’t follow the same path, writes Dom Hallas
IN THE early 1990s, the then Home Secretary Kenneth Baker promised to rid the country of a “menace”. The legislation he introduced, the Dangerous Dogs Act, has become a byword for well-meaning but poorly crafted policy. So bad, in fact, that we’re still feeling the repercussions today.
Nearly thirty years on, faced with a new threat, another Home Secretary, this time Sajid Javid, promised to tackle what he described as a “hunting ground for monsters”. The resulting legislation, which will shortly become the Online Safety Act, has finally gone for Royal Assent.
My fear, having followed the policy for the whole five years I’ve run the Startup Coalition, is that we are seeing history repeat itself – and that this bill risks becoming the Dangerous Dogs Act for the internet.
It’s important to be very clear: everyone, including every single startup founder I speak to, sincerely hopes this legislation has the impact its proponents wish. Only a fool could look online and always like what they see. But the problem is that, for the most part, what the internet reflects back are challenges we face as a society at large.
This is the difficulty of legislating in the internet age. And as technologies such as artificial intelligence ingrain themselves more deeply in society, we need to ensure future conversations about regulation do not follow online safety’s tortuous path. There are some clear lessons from this years-long saga about what not to do, and how we can do it better.
First: when something is called the “Online Safety Bill”, interest groups and politicians will want their many and varied concerns about anything “online” to be addressed. We’ve seen calls for it to ban everything from ticket-touting to Photoshop. As Benedict Evans says, we never expected a single piece of regulation to tackle all the safety questions arising from the age of the car. Just as there was no single technology in cars, there is no single technology called the internet, and there is no one artificial intelligence. When we regulate, we should be clear on our scope and intentions.
We also can’t mislead about what is realistically possible. Early policy conversations too often assumed we could end bullying online with the stroke of a legislative pen. Likewise, we cannot rid the world of something “harmful” unless we can agree on what “harmful” means.
More recently, fights over encryption have often simply denied reality. You had companies denying the risks private spaces create, campaigners denying that encryption matters, and the government denying that there was any trade-off at all. Regulating tech means trade-offs; policymakers should collectively decide which to make, not pretend they aren’t there.
Finally, we need policy delivery fit for the task. Ofcom has been staffing up and doing a commendable job considering what comes next. But you have to ask: what does success even look like?
We’ve already had a partial lesson in the form of the Age Appropriate Design Code, a similarly well-meaning but flawed regulatory drive in the tech sector. The result so far has been changes in the policies of big tech companies, confusion among smaller ones, and a lack of enforcement everywhere.
This reflects a deeper problem: almost half of crime is now online, but only a fraction of the policing budget is spent tackling it. Racist abuse online isn’t tracked properly because creaky police IT systems make it difficult. Parents and teachers are asked to do a better job of explaining online risks, but we barely fund the plans to help them. Put simply, we ask delivery bodies to play a tricky game of whack-a-mole without even giving them the money for the rubber mallet.
And of course, when this doesn’t work, what do the public, interest groups and politicians call for? More policy to fix the policy we just passed. So we do the same thing and expect different results.
In the coming years, we’ll be trying to address the massive societal implications of the AI revolution. So far, the UK government has done laudable and thoughtful work. But if we don’t learn the lessons from the online safety process, I fear we’ll end up with another dog’s dinner.