Discussions around artificial intelligence (AI) provoke both excitement and fear in strong measure.
As happens so often with new advances, policymaking lags significantly behind. We must think now about the consequences AI will very soon have for our lives – changing work, society, economics and more.
It is encouraging that governments around the world are at last waking up to the scale of the societal questions AI innovation will pose. Today I am back in Westminster to give evidence to a cross-party group of MPs and peers, to discuss and debate the seismic shifts that are coming our way.
There are doomsayers and evangelists, hype merchants and luddites, and as usual the truth lies somewhere in the middle. There are incredible opportunities in AI, and there are also incredible uncertainties and risks.
How do we get the former, and avoid the latter?
One particular challenge is how society should oversee the companies and organisations developing AI. Too lax, and unwanted harms will multiply. Too tight, and we throw away the benefits of innovation. Some excellent organisations are already grappling with the potential of AI for this country. The Big Innovation Centre (a London-based think tank) is leading the debate on how AI may shape a more egalitarian and efficient society, in terms of business governance, effective policymaking and social interaction.
Another is DeepMind Health (DMH), owned by Google’s parent company Alphabet, which is taking an innovative approach. The application of AI to healthcare (and, indeed, a Google-linked company having access to medical records) is deeply controversial, but it also offers large potential gains.
Boldly, DMH has set up an Independent Review Panel, which I chair. We have been given genuine independence, a generous budget to commission independent investigations into anything DMH does, very broad access to everything the company is working on, and the freedom to review and critique it publicly.
I know of no other private organisation that is even trying to be so open and transparent. I hope others will try to follow this example.
I’m a liberal, and as a starting philosophy I am worried about any over-concentration of power. There is a problem whenever too much power rests with any one person, group or organisation.
We should therefore guard against companies that seek to lock users of their AI products into their systems and no others. We must encourage companies to provide open interfaces and make their systems replaceable, so that others can come in and innovate.
This will stimulate competition and ultimately drive better products and services. That is not a comfortable position for many companies, but that is how society will get the greatest benefit from AI.
Then there are worries about the impact of AI on work. The Rustat Conferences, which I direct, have looked at the consequences of this. A lot of attention has been paid to the jobs that will be lost or replaced by AI-based systems, and this is obviously important. However, it doesn’t have to be a bad thing – if we address it properly.
There is nothing to say that our economic system requires a constant amount of human labour; we should treat the reduction in need for labour as a huge positive, rather than seeking to generate jobs simply to keep people occupied.
The Romans seem to have worked nine days in ten. This later became six in seven; we now work five. There is no reason the status quo must remain fixed. Shorter working weeks would give more people a stake in employment, and give them more time to do other things with their lives.
AI has the potential not only to create new jobs and solve problems in new ways, but also to free us from drudgery, so we can focus our lives on the things we actually enjoy. As such, it can hugely boost our wellbeing.
If the industry, politicians, and society enter this new world together, we can ensure that it liberates us to lead better and more fulfilled lives.