ChatGPT has thrown artificial intelligence into the mainstream, and now it’s Rishi Sunak’s job to design rules which don’t prevent its growth, but keep us safe, writes Robin Röhm.
Artificial intelligence is clearly on Rishi Sunak’s mind. Barely a week goes by without a government announcement on how Britain can become a world leader in AI, or on the need for sensible regulation to balance innovation with some form of authority over a technology that threatens to get out of control very quickly.
In some ways, the prime minister is merely reacting to events. AI is everywhere and has broken through into the mainstream after years as a promising but largely unfulfilled prospect. ChatGPT has become the fastest-growing consumer application ever, reaching 100 million monthly active users just two months after launch. Meanwhile, the transformative implications of generative AI are just beginning to dawn on industry leaders, politicians – and the wider public.
Yet there are signs that Sunak and his team are prepared to think a bit differently in their approach to AI, especially when compared with attitudes in the European Union. The EU, through its AI Act, is moving in the direction of direct intervention. Britain, on the other hand, is taking a more hands-off approach, which could be good news for AI businesses operating in the UK.
Last week the prime minister was asked at the launch of the Business Connect forum how regulation could prevent AI from becoming “uncontrollable and potentially malicious”. His reply that “policy is intended to take advantage of the opportunities that [AI presents] but also put safeguards in place” was reassuring to startup founders who are developing products to enable the AI industry to thrive.
This is about far more than clickbait headlines around ChatGPT. The technology now emerging has the potential to facilitate a definitive breakthrough on issues such as climate change or healthcare. Indeed, AI is predicted to raise global GDP by 7 per cent by 2030 and could improve billions of lives around the world. But to do that, we need to make sure regulation fosters innovation in a way that is responsible and protects the vulnerable.
Sunak’s announcement last week of a £100m AI task force is a positive step. Modelled on the Covid-19 Vaccines Taskforce, it will examine the adoption of safe and reliable foundation systems – which are the AI models, like ChatGPT, that are trained on enormous data sets.
This is where the real danger lies in failing to regulate. Tech giants such as Microsoft, Google and Amazon are racing to be first to market with new products in an industry that is predicted to contribute up to $15.7 trillion to the global economy by the end of the decade. However, this escalating arms race among the Big Tech firms means some products are reaching consumers without sufficient information about the data on which foundation systems have been trained, or about how reliable those systems' predictions are. The consequences of this are genuinely unknown.
As ever with government policy pronouncements, what is really needed is clarity and action. Responsibility for UK regulation is spread across multiple departments, including the Bank of England and Ofcom, and we do not yet know how that will work in practice. It is to be hoped that the new task force can sharpen the focus.
One solution for businesses looking to comply with regulations is federated learning, which allows AI models to be trained on distributed datasets while leaving the data where it resides. This enables models that are safe and robust, while the owners of the data retain full control.
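To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the simplest federated learning scheme: each client fits a model on its own private data, and only the model weights, never the raw data, are sent back to a central server, which averages them. The toy linear-regression setup, client count and hyperparameters are all illustrative assumptions, not details from the article.

```python
# Minimal federated-averaging (FedAvg) sketch.
# Each client trains locally; only weights leave the client, never the data.
# The whole setup below is a toy example for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth weights the clients will recover

# Three clients, each holding a private dataset that stays on the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=20):
    """Run a few gradient-descent steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

# Server loop: broadcast global weights, collect local updates, average them.
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # converges close to true_w without any data leaving a client
```

The key property for compliance is visible in the loop: the server only ever sees `local_ws`, the weight vectors, while each `(X, y)` pair remains with its owner.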
Rishi Sunak and the government have made a promising start on their AI regulation journey. The prize for getting it right is enormous.