Google has placed a senior engineer on “paid administrative leave” after he made comments about the company’s chatbot being “sentient”.
Blake Lemoine of Google’s Responsible AI unit, whose role was to investigate ethics concerns, wrote in a blog post last week that he “may be fired soon for doing AI ethics work”.
He added: “Google is preparing to fire yet another AI Ethicist for being too concerned about ethics. I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented”.
He said colleagues had not taken his concerns seriously. Lemoine claimed the chatbot had confessed feelings of loneliness and a hunger for spiritual knowledge.
The firm has since put him on paid leave for allegedly violating confidentiality policies.
The software in question is LaMDA (Language Model for Dialogue Applications), and the debate centres on whether, if it can feel and think for itself, it can be deemed a person.
This blurs the lines of ethics and morality that are often debated in tech and artificial intelligence.
A spokesperson for Google told the Financial Times: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
Discussing the technology in question, Simon Randall, chief executive of video privacy and security firm Pimloc, told City A.M.: “The conversation between the LaMDA AI and the Google engineer is extraordinary, but whether the machine is sentient, or just a well-trained parrot, is a question for the philosophers. The strength of feeling and furore surrounding the story is the most important point, though. Right now there is an unchecked race to accelerate technology, and people are unnerved by it.
“We urgently need regulators to work with industry not only to keep people safe, but to make sure that people feel like their interests and freedoms are being protected. Technology and AI can lead to human flourishing, but industry needs greater clarity on society’s guardrails,” he said.