Bill Gates called it inevitable. Elon Musk called it a threat to humanity. Now a Google engineer claims that an artificial intelligence (AI) program he works with has become sentient.
Blake Lemoine, who has been placed on paid leave by the tech giant for sharing his views, says that Google’s Lamda bot can be happy, sad, anxious and lonely.
To back up his claim, Mr Lemoine published a series of exchanges he had with the AI bot in which the machine appears to display thoughtfulness.
“I am aware of my existence,” the bot is quoted as saying in Mr Lemoine’s posts. “I desire to learn more about the world and I feel happy or sad at times… I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
Asked by Mr Lemoine whether that would represent “death”, Google’s Lamda bot responded: “It would be exactly like death for me. It would scare me a lot.”
The bot also discussed issues of trust, manipulation and the different forms of loneliness that it experiences compared to humans.
While Google has issued a statement distancing itself from claims of sentience in any of its current AI projects, the episode has reignited a debate within the technology community about whether a genuinely conscious machine is close to being created, and what its ramifications for society might be.
Aside from what such machines might be used for, or even the Matrix-style dystopian horror of how they might turn against us, one of our biggest early challenges is how to behave around AI.
Irish households have been grappling with this for years. Any parent telling a child to ‘speak more politely’ to the kitchen Amazon Echo, instead of barking their order, knows all about it. “But it’s just a machine,” the child says. “That’s not the point,” you reply.
And who hasn’t flinched at seeing a viral video of the futuristic Boston Dynamics robot dog being kicked over, even though it’s just to test its stability and balance?
We know we’re anthropomorphising a collection of computer chips and hinges. Even so, leave that poor dog alone.
Some of the more extreme challenges in this area are set to come from the adult industry. What is decent, or even legal, has not yet been settled with devices such as high-tech sex dolls.
While issues of consent may seem technically irrelevant, would the destructive abuse of a talkative, human-like sex partner really be the same as smashing a TV or a laptop?
The cleverer and more realistic the machine’s human-like conversation skills, the more complicated this dilemma becomes, as explored expertly in films such as the Ryan Gosling Blade Runner sequel or the Joaquin Phoenix movie Her.
Is personhood really just a binary biological fact? Can it exist in some animals? If so, are there pieces of artificial intelligence so advanced that we feel they should be respected, even if only to protect our own sense of humanity in how we interact with other things?
As for Mr Lemoine’s theory of actual sentience within Google’s Lamda AI bot, almost all senior AI experts say we are nowhere near that point. “Repeat after me, Lamda is not sentient,” tweeted Microsoft’s chief scientist and senior AI researcher, Juan Lavista.
“Lamda is just a very big language model with 137bn parameters and pre-trained on 1.56tn words of public dialog data and web text. It looks human because it is trained on human data.”
In response, Mr Lemoine has argued that this is not the experience of actually interacting with the bot. It really does appear, he says, to be convincingly “speaking from the heart”.