Even if Google’s chatbot isn’t sentient, we need to think seriously about AI
Can a chatbot be sentient? That was one of the questions facing Google recently, after it placed an engineer on leave when he claimed its AI-driven bot talked about “having a soul”.
Many of the solutions to the global crises we face will be driven by technological advances. Politicians from across the spectrum espouse the benefits of tech. Just the other day, the Shadow Health Secretary Wes Streeting claimed technology has the ability to rapidly improve the quality of healthcare. And Environment Secretary George Eustice stressed the role of new tech in changing the way we grow food. These are just some of the many areas in which society hopes to see leaps and bounds: the prize for all sectors is gigantic.
But as we saw with the Google bot, this technology also raises serious ethical questions. For policing in the West Midlands, that lesson has been apparent throughout our journey with AI.
One of West Midlands Police’s first AI efforts targeted modern slavery. The total cost of these crimes in the UK is estimated to be as high as £4.3bn a year, based on figures from 2017. To fight it, West Midlands Police developed a new algorithm to link disparate intelligence data logs into a data visualisation platform. Identifying networks of serious organised crime can take weeks or even months – this tool identifies dozens of potential modern slavery suspects in minutes. Following policing budget cuts of over £175m in the West Midlands since 2010, data visualisation proved a cost-effective way to help target investigatory resources.
But this was not without complication. Having undergone close scrutiny by our data ethics body, the tool needed to adapt to ensure individuals wrongly associated with organised slavery were reliably removed from its outputs. Scoring matrices assessing the risk posed by certain individuals needed to address the nuances between being a victim and being a perpetrator of crime. It was also deemed vital that slavery victim support services were properly engaged before the AI went live, ensuring they were prepared for what it could identify.
According to a conservative estimate from the Ministry of Justice, reoffending in England and Wales cost a staggering £18.1bn in 2017. In a large city like Birmingham, reoffending has a profound impact on communities, socially and economically. So West Midlands Police developed a new AI instrument seeking to identify individuals more likely to commit a high-harm crime, supporting offender management teams with preventing reoffending. The tool’s success will become clearer as it undergoes thorough independent evaluation.
But from an ethics standpoint, we need to ensure people are not penalised based on prediction. We need safeguards to prevent us blindly following the data at the expense of professional judgement, or sharing results with anyone unfamiliar with the tool’s limitations.
The commitment to genuine transparency, ongoing review and true diversity of perspectives has been paramount. There is a lot of potential, but without trust, it will get us nowhere. The Justice and Home Affairs Committee’s report “Technology Rules? The advent of new technologies in the justice system” concludes that there should be a national ethics body – based on our West Midlands ethics model – to scrutinise AI in policing.
The importance of making ethics central to AI design should be a lesson for all sectors. Businesses and tech entrepreneurs should embrace this transparency and the public attention paid to it – it will enable more long-term success, rather than sowing distrust and division. The lack of transparency over other tech products, such as social media algorithms, shows the backlash that follows if this isn’t in place from the beginning.
If we look, for instance, at the estimated £4.9bn lost through Covid-19 business support scheme fraud, commercial AI products may present ideal solutions for detecting and preventing financial impropriety. Within companies, machine learning may help predict and promote the patterns of working that best achieve employee satisfaction and wellbeing and the associated benefits for organisational productivity.
Those leading the AI landscape should make the ethics of their technology a top priority as a commercial imperative.
The ethical shaping of data science products today is the moral foundation of a technological world we can’t yet see. Getting it right now goes beyond immediate public palatability or legal compliance; it serves as a form of leadership more akin to effective green strategies – practices looking to the world we want our children to inhabit.