With artificial intelligence (AI) promising to make huge numbers of people redundant across a swathe of industries, there’s been much discussion of “the world without work”.
Some see in that prospect a utopia, in which automation provides us all with limitless free time; others, a nightmarish end to human dignity.
But if the machines do take our jobs, we may have a new problem: how should we treat mechanical workers that might be just as conscious as we are?
Today, most people assume that machines are not conscious, no matter how intelligently they perform certain tasks.
They may be able to write financial reports, diagnose breast cancer, or build beautiful cars with superhuman skill or speed, but we generally consider them the mechanical equivalent of zombies — they act like us, they do our jobs, but they aren’t conscious. There’s nothing “going on inside” them.
But there’s a big problem here: we don’t really know what being conscious means. We all know what it feels like to be conscious, but we’re not properly able to isolate it, define it, or measure it. I don’t even know that you are conscious in the way I am: I simply assume you are, because you’re pretty much like me in every other way.
So how do we really know that our machines aren’t conscious?
And even if they aren’t right now, we don’t know when consciousness arose, and thus have no idea when our machines might become conscious. Our ancestors didn’t use words like “consciousness”. So how do we know they were conscious?
How do we know that Shakespeare was as conscious as we are? It seems sensible to assume he was, given the way he wrote, but how much further back can we go?
Julian Jaynes from Princeton University has argued for many years that our ancestors learned to be conscious when they began to talk — especially when they started to talk to themselves. They were so surprised by this that they assumed the voices in their heads, their own unspoken thoughts in their own language, must be God talking to them.
This, says Jaynes, marks the origin of human consciousness.
Whatever we feel about that, we now have very intelligent computers that can talk, and sometimes talk just like us, as Apple’s Siri or Amazon’s Alexa seem to.
If they can also talk among and to themselves — as they would probably have to, in order to form a workforce that could eventually replace humans — would they develop consciousness, as Jaynes argues humans did in the past?
And if they did develop it, could we with a clear conscience continue to exploit them? Would it be fair for computers to do all our work without reward, or without the freedom to pursue their own ends, whatever those might be?
Thus an AI workforce could turn a world without work into a world full of slaves.
So as the debate continues to rage about the risks of robots taking our jobs, we have deeper problems to worry about. Might conscious machines not one day rebel, as Marx predicted workers would, and demand the fruits of their own labour for themselves?