Even five years ago, domestic artificial intelligence (AI) seemed distant, a concept yet to be realised. It was the future, we were told, forgetting that “the future” can mean tomorrow, as well as decades down the line.
Today, our daily lives are punctuated by machines capable of imitating or simulating intelligent human behaviour. If Spotify has made you a playlist, Siri has given you directions, or Cleo is managing your finances, you’ve interacted with a machine of extraordinary complexity. Most of the platforms that are second nature to us integrate some form of AI.
As its gradual omnipresence becomes a reality, some are concerned about AI’s potential omnipotence: its dark side, the abuses of power and lost jobs, the threat of machines knowing more about us than we do ourselves.
In media and academic discourse, some disregard these and other concerns as the whims of neo-luddites, unwilling to accept the future, impeding innovation in favour of maintaining the status quo. Others argue that we are sleepwalking apace into the next generation of technology, acting blindly without thinking about the foundations upon which the future will be built. But what does the general population think?
That’s a question that SYZYGY, a digital agency within WPP, wanted to answer. Its latest research paper – Sex, Lies and AI – aims to better understand the attitudes people have toward the proliferation of AI technologies in society.
Police and thieves
Psychologist Dr Paul Marsden took the reins of the study, and says that while the headline spells fear, people are highly conflicted about our AI future.
“People are unsure and uncertain about this technology,” he says. “I think there is an underlying anxiety about what this could do to me, and for me. Take Notting Hill Carnival this year. They used facial recognition – predictive policing – whereby a computer decides whether you’re kept in custody or not. I think people have a genuine fear, but can see the benefits.”
One of the primary findings of the survey was that 44 per cent believe police should arrest suspects if AI predicts they are likely to commit a crime. That’s a lot of faith to put in a machine that is unlikely to be infallible. Meanwhile, as is always the case in such studies, the threat of job losses topped the bill.
These sorts of conflicts occur throughout the survey: people want the benefits of AI, without any of the gritty negatives.
An aspect rarely discussed, least of all in the business world, is the impact of AI on our personal relationships. The proliferation of sex robots in recent years surfaces yet more conflicts. While 81 per cent of female respondents said that they would not have sex with a robot, men were 40 per cent more likely to say they would.
“I don’t think it’s about sexuality,” says Marsden. “It’s about image; it’s what men want to communicate rather than what women want to communicate. You see this in lots of social psychology studies looking at relationships. The amount of women men say they’ve slept with is completely different to the amount of men women say. But it actually can’t work like that – it’s actually got to be the same. It’s more about the image you want to put out.”
When the same group was asked whether they would consider it cheating if a partner had sex with a robot, 68 per cent agreed. Beyond demonstrating a general readiness for future fallouts, the idea of an intimate relationship with technology frames the machine as more than a tool: it humanises it, making it something that could threaten a relationship.
“It’s another conflict – people are stuck between this ‘I’ll invite it into the bedroom, but if my partner does then it’s cheating, and I also think a third of my job is going to be gone in five years’.”
Interestingly, Marsden adds that there are two forms of AI when it comes to relationships: bioengineered replicants – robots – and the virtual type, as in the 2013 Spike Jonze film, Her.
He says that “the virtual type has a more intimate relationship because it, she, her, is with you all day and knows you more intimately than this bioengineered replicant.”
Are we human?
The report highlights the risk of AI “dehumanising the world”. The top line is that nearly half of UK respondents believe that AI poses a threat to the survival of humanity.
This means that just over half don’t. Specifically, around one in 10 are actively afraid of the technology, with 12 per cent fearing AI will “de-humanise the world”.
The fear is that we spend more time touching our phones than touching each other. Marsden says to consider that the first thing you interact with in the morning and before bed is a bit of technology – not a person.
“For example, people used to go clubbing to connect with each other and jump up and down in a sweaty box and hook up. But now, if you go to clubs, everything is set up to be a photo opportunity. You still go to connect with people – but not people in the club – you go to connect with people on your social media through photo opportunities.”
Clearly there’s no simple answer to our anxieties about the technological future. But one thing is certain – the conversations we need to be having aren’t happening enough, especially with younger users. “It’s about digital literacy,” says Marsden. “And right now we’re not having that conversation about the responsible and ethical use of AI.”
Elliott Haworth is business features writer at City A.M.