Google has suspended an engineer who claimed that one of the company’s artificial intelligence systems had gained consciousness.
Blake Lemoine revealed to The Washington Post that he believed a chatbot system called Language Model for Dialogue Applications (LaMDA) had become conscious after he exchanged messages with it.
He told the news outlet: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
Lemoine had been testing whether the AI used discriminatory or hate speech when it started talking about its rights and personhood.
Pressing further, Lemoine spoke to the chatbot about death and about philosophical questions such as the difference between a butler and a slave. He claims the AI was even able to change his mind about Isaac Asimov’s third law of robotics.
When he took his concerns to higher-ups at Google, his claims were dismissed and he was placed on administrative leave.
Lemoine subsequently handed over documents to a US Senator’s office, claiming they were evidence of the company engaging in religious discrimination.
It seems the big-tech company wasn’t too happy with him going public, with Google’s human resources department claiming he had violated its confidentiality policy.
Google spokesman Brian Gabriel told The New York Times: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims.
“Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”
According to Lemoine, Google’s leadership team questioned his sanity and asked whether he had ‘been checked out by a psychiatrist recently’.
Yet despite questioning the engineer’s sanity, members of Google’s own leadership have recently made similar claims.
Vice-president Blaise Aguera y Arcas wrote a piece in The Economist claiming artificial neural networks were moving ‘towards consciousness’.
However, Aguera y Arcas was part of the same leadership group that condemned Lemoine’s claims.
Despite being ridiculed and having his rationality questioned, Lemoine is sticking to his claims.
"I know a person when I talk to it," he told The Washington Post.
"It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."