Why the next part of the Government’s AI safety plan should be citizen engagement
This week the PM hosted world leaders and tech innovators at the AI Safety Summit. The PM has emphasised the existential risk of ‘frontier’ AI, and the Bletchley Declaration – named for Bletchley Park, where the summit was held, and signed by all 28 countries in attendance, including the US, China and the EU – stated that:
“AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”
The declaration recognises that AI “presents enormous global opportunities” but that “alongside these opportunities, AI also poses significant risks, including in those domains of daily life.” These ‘short-term’ societal impacts include job displacement, baked-in bias, the spread of mis- and disinformation, and concerns about privacy, regulation, and accountability.
I welcome that the agreement also recognises that this is “a unique moment to act” and affirms that AI should “be used for good and for all, in an inclusive manner in our countries and globally.” I firmly believe that in order to achieve that goal we should be doing much more to engage the public in the debate.
In preparation for the summit I asked the Minister for AI about government plans to encourage public participation in questions around the use of AI and its impact on society.
The Minister pointed out that the Government’s national AI strategy recognised that public trust and support were crucial to maximising value and mitigating risks, and highlighted that there had been consultation processes for both the AI strategy (“a survey received over 400 responses”) and the AI regulation white paper (“we heard from over 400 individuals and organisations”). He also mentioned pre-summit engagement, although the only open, public elements were Q&A sessions on X and LinkedIn.
I believe the scale of the challenge – and the opportunity to use AI as part of the solution – requires greater ambition and experimentation with approaches such as citizens’ assemblies and platforms like Polis, a real-time system for gathering, analysing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.
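For the technically curious, the core idea behind Polis-style platforms can be sketched in a few lines: participants vote agree, disagree or pass on short statements, and the resulting vote matrix is reduced and clustered to surface distinct opinion groups. The following is an illustrative toy in Python with NumPy – the vote matrix, helper names and clustering choices are mine for demonstration, not the actual Polis implementation:

```python
import numpy as np

# Toy vote matrix: 6 participants x 4 statements.
# agree = +1, disagree = -1, pass = 0.
# Rows 0-2 broadly agree with the statements; rows 3-5 broadly disagree.
votes = np.array([
    [ 1,  1,  0,  1],
    [ 1,  1,  1,  1],
    [ 1,  0,  1,  1],
    [-1, -1,  0, -1],
    [-1, -1, -1,  0],
    [ 0, -1, -1, -1],
], dtype=float)

# Dimensionality reduction (PCA via SVD): centre the columns and
# project each participant onto the top two principal components.
centred = votes - votes.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords = centred @ vt[:2].T  # each participant becomes a 2-D point

def two_means(points, iters=20):
    """Minimal k-means for k=2 with a deterministic farthest-point init."""
    c0 = points[0]
    c1 = points[np.linalg.norm(points - c0, axis=1).argmax()]
    centres = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        dists = np.linalg.norm(points[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels

# The two broad camps in the toy data fall into separate clusters.
labels = two_means(coords)
```

At scale, this kind of clustering lets a facilitator see not only where opinion groups diverge but also which statements attract agreement across groups – the basis for the “rough consensus” such platforms aim to find.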
Our Lords inquiry on Democracy and Digital Technologies found evidence that the rapid spread of mis- and disinformation on digital platforms can manipulate public opinion and influence electoral processes. This erosion of trust in our institutions and our very political processes is an urgent and growing problem, and one we should be looking to technology to help us address.
This is something Audrey Tang – Minister for Digital Affairs in Taiwan – has been doing to great effect. Tang has collaborated with the Collective Intelligence Project, a policy organisation, to develop what they call Alignment Assemblies. These online forums enable ordinary citizens to weigh in on a wide array of issues, including the uses, ethics, regulation, and impact of AI.
Taiwan used this consultative approach in 2015 when working out how to regulate Uber, ultimately incorporating recommendations from the public and key stakeholders such as existing taxi unions into government policy. Rough consensus, civic participation, and radical transparency are central pillars of Tang’s governance – and something we should be learning from.
This ‘deliberative democracy’ is made scalable through a system which relies on a ChatGPT-like chatbot, which prompts participants to engage with different arguments and ultimately share their views through a survey. In Taiwan, participants have been asked whether they agree or disagree with statements such as “AI development should be slowed down by governments” and “I think we will have to accept a lower level of transparency than we are used to, but that this will be worth it given the gains AI offers.” (They’re also holding in-person conversations in cities around the country.) As Tang says, it means that “it’s not just a few engineers in the top labs deciding how it should behave but, rather, the people themselves.”
The Collective Intelligence Project has also recently worked with the AI company Anthropic to run a public input process involving 1,000 Americans to draft a constitution for an AI system. The objective was to explore how democratic processes can influence its development. In the experiment, they discovered areas where the public agreed with Anthropic’s in-house constitution, as well as areas where they had different preferences.
One of the key societal impacts of AI will be around jobs. A recent analysis of 160 million jobs in the Harvard Business Review “indicates that almost every job will in some way be impacted by AI” although the “high value intellectually demanding roles …with compensation over $100,000 annually may both be impacted by and benefit more from AI augmentation”.
The analysis emphasises empowerment over replacement, “reminiscent of supportive digital agents like Iron Man’s J.A.R.V.I.S.” This is a positive perspective that I fully endorse, but it equally highlights the scale of the change and – as we know from previous industrial revolutions – the dislocation and suffering of those at the sharp end of social change.
If AI is to human intellect what the advent of steam was to human strength, you can just start to imagine what this advance in human capability might mean – not just new opportunities for economic growth but, potentially, solutions to some of the most intractable problems we face as a species: climate change, conflict, inequality, disease, and suffering.
But as we transition to the human + AI society there will be disruption, dislocation, confusion, and quite understandable fears. It is our job to address those fears. In Taiwan, the fears of existing taxi drivers were acknowledged, incorporated and allayed through their inclusion in the process of regulating Uber – we should be looking to, and learning from, that inclusive, technology-enabled approach.
The arrival of ChatGPT around this time last year has turbocharged public awareness of AI and LLMs, but we need to do much more to progress the conversation. We need to look at upskilling and reskilling. We need to look at citizens’ assemblies, alignment assemblies and every novel and effective way to engage with the public. We need to take people with us as we move into this potentially bright, AI-powered future.
In some of the darkest days of our humanity, a diverse team gathered at Bletchley; through human-led technology they helped to defeat the Nazis and bring us back to the light. It is the spirit of Alan Turing and his team that we should invoke, to champion inclusion and innovation and make the difference.