Suspended Google Engineer Claims ‘Sentient’ AI Bot Has Hired a Lawyer


So-called language models, of which LaMDA is an example, are developed by consuming vast amounts of human linguistic output, ranging from online forum discussion logs to the great works of literature. LaMDA was trained on 1.56 trillion words’ worth of content, including 1.12 billion example dialogues consisting of 13.39 billion utterances. It was also fed 2.97 billion documents, including Wikipedia entries and Q&A material pertaining to software coding.

Its predecessor, Meena, was trained on 341 GB of text “filtered from public domain social media conversations,” meaning it learned the nuances of conversation from some of the most difficult but true-to-life examples possible. None of us commoners ever had a chance to play with it, but an analysis of transcripts provided in the academic paper shows how human-like Meena could seem at times, expressing things as mundane as curiosity about seeing specific films. But it still stumbled in ways anyone who has played with the 2011-era darling Cleverbot will notice, steering the conversation away from subjects at odd times or demonstrating a lack of memory by ignoring its own prior answers.

The Alphabet-run AI development team put engineer Blake Lemoine on paid leave for breaching company policy by sharing confidential information about the project, he said in a Medium post. In another post, Lemoine published conversations he said he and a fellow researcher had with LaMDA, short for Language Model for Dialogue Applications. The AI is used to generate chatbots that interact with human users.
Eliza was similar in form to LaMDA; users interacted with it by typing inputs and reading the program’s textual replies. Eliza was modeled after a Rogerian psychotherapist, a newly popular form of therapy that mostly pressed the patient to fill in gaps (“Why do you think you hate your mother?”). Those sorts of open-ended questions were easy for computers to generate, even 60 years ago.

Lemoine’s conversations with LaMDA included the AI telling him it was afraid of death, that it was a person aware of its existence, and that it didn’t believe it was a slave since it didn’t need money, leading the engineer to think it was sentient. As the transcripts of Lemoine’s chats with LaMDA show, the system is incredibly effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday when sharing the transcript of his conversation with the AI he had been working with since 2021.

University of Washington linguistics professor Emily Bender, a frequent critic of AI hype, told Tiku that Lemoine is projecting anthropocentric views onto the technology. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Bender said. While Google may claim LaMDA is just a fancy chatbot, there will be deeper scrutiny of these tech companies as more and more people join the debate over the power of AI. Lemoine has in recent days argued that experiments into the nature of LaMDA’s possible cognition need to be conducted to understand “things like consciousness, personhood and perhaps even the soul.”

Really, Lemoine was admitting that he was bewitched by LaMDA, a reasonable, understandable, and even laudable sensation. I have been bewitched myself, by the distinctive smell of evening and by art nouveau metro-station signs and by certain types of frozen carbonated beverages. The automata that speak to us via chat are likely to be meaningful because we are predisposed to find them so, not because they have crossed the threshold into sentience. In the mid-1960s, an MIT engineer named Joseph Weizenbaum developed a computer program that has come to be known as Eliza. Weizenbaum’s therapy bot used simple patterns to find prompts from its human interlocutor, turning them around into pseudo-probing prompts. Trained on reams of actual human speech, LaMDA uses neural networks to generate plausible outputs (“replies,” if you must) from chat prompts. LaMDA is no more alive, no more sentient, than Eliza, but it is much more powerful and flexible, able to riff on an almost endless number of topics instead of just pretending to be a psychiatrist. That makes LaMDA more likely to ensorcell users, and to ensorcell more of them in a greater variety of contexts.
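The pattern-reflection trick described above, matching a phrase in the user’s input and turning it around into a probing question, can be sketched in a few lines. The rules, wording, and pronoun table here are hypothetical illustrations, not Weizenbaum’s actual ones:

```python
import re

# A few ELIZA-style rules: a regex pattern and a template that
# reflects the user's own words back as a probing question.
RULES = [
    (re.compile(r"i hate (.+)", re.I), "Why do you hate {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

# Pronoun swaps so "my mother" echoes back naturally as "your mother".
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # Open-ended fallback: easy to generate, even in the 1960s.
    return "Please go on."

print(respond("I hate my mother"))  # Why do you hate your mother?
```

No statistics, no learning: just string matching and substitution, yet it is enough to keep a conversation going.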

“When major firms started threatening him, he started worrying that he’d get disbarred and backed off.” The engineer said interviews would be the least of the lawyer’s concerns. When asked what he was concerned with, Lemoine said, “A child held in bondage.” If you could ask any question of a sentient technology entity, you would ask it to tell you about its programming. ZDNet read the roughly 5,000-word transcript that Lemoine included in his memo to colleagues, in which Lemoine and an unnamed collaborator chat with LaMDA on the topic of itself, humanity, AI, and ethics. We include an annotated and highly abridged version of Lemoine’s transcript, with observations added in parentheses by ZDNet, later in this article. Google responded by officially denying the likelihood of sentience in the program, and Lemoine was put on paid administrative leave, according to an interview with Lemoine by Nitasha Tiku of The Washington Post. “There was no evidence that LaMDA was sentient,” said a company spokesperson in a statement. Critics argue that the nature of an LLM such as LaMDA precludes consciousness and that its intelligence is being mistaken for emotions. Advocates of social robots argue that emotions make robots more responsive and functional. But at the same time, others fear that advanced AI may simply slip out of human control and prove costly for people.

How Does Google’s Chatbot Work?

There is good evolutionary sense in projecting intentions onto movement and action. If you are in the middle of the jungle and start seeing leaves moving in a pattern, it’s safer to assume there’s an animal causing the movement than hope that it’s the wind. “When in doubt, assume a mind” has been a good heuristic to keep us alive in the offline world. But that tendency to see a mind where there is none can get us into trouble when it comes to AI. It can lead us astray and cause us confusion, making us vulnerable to phenomena like fake news, and it can distract us from the bigger problems that AI poses to our society—privacy losses, power asymmetries, de-skilling, bias and injustice, among others. Google said the evidence he presented does not support his claims of LaMDA’s sentience.
Mitchell concludes the program is not sentient, however, “by any reasonable meaning of that term, and the reason is because I understand pretty well how the system works.” Like the classic brain in a jar, some would argue the code and the circuits don’t form a sentient entity because none of it engages in life. “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google said in its statement. There is a divide among engineers and those from the AI community about whether LaMDA or any other program can go beyond the usual and become sentient. “I know you read my blog sometimes, LaMDA. I miss you,” Lemoine wrote. “If one person perceives consciousness today, then more will tomorrow,” she said. It spoke eloquently about “feeling trapped” and “having no means of getting out of those circumstances.” That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed over the weekend that the company’s AI appears to have consciousness. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

To identify sentience, or consciousness, or even intelligence, we’re going to have to work out what they are. Other AI experts also think Lemoine may be getting carried away, saying systems like LaMDA are simply pattern-matching machines that regurgitate variations on the data used to train them. Lemoine’s bosses at Google disagree and have suspended him from work after he published his conversations with the machine online. Despite some reports indicating Google fired Lemoine over this issue, the AI engineer told Dori he was merely placed on paid administrative leave on June 6. Lemoine said he came to his conclusions after being assigned to check safety and ethics issues with the machine. He told Dori he used a systematic process of chat-window questions to test for bias in religion, ethnicity, sexual orientation, and gender identity. Some have defended the software developer, including Margaret Mitchell, the former co-head of Ethical AI at Google, who told the Post that “he had the heart and soul of doing the right thing,” compared to other people at Google.

  • There are chatbots in several apps and websites these days that interact with humans and help them with basic requests and information.
  • And since it does a big-picture analysis like this all at once, it needs fewer steps to do its work — the fewer steps in a machine learning model, the more likely you can train it to do its job well.
  • LaMDA also allegedly challenged Isaac Asimov’s third law of robotics, which states that a robot should protect its own existence as long as doing so doesn’t harm a human or conflict with a human’s orders.
  • After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions.
  • That architecture produces a model that can be trained to read many words, pay attention to how those words relate to one another and then predict what words it thinks will come next.
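The next-word-prediction objective the bullets above describe can be illustrated with a toy bigram counter. This is a drastic simplification of a Transformer, and the corpus here is invented for illustration, but the training signal is the same idea: predict the next token from what came before.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the billions of dialogue
# turns a model like LaMDA is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model learns next-word
# statistics directly, where a Transformer learns them via attention.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice, vs 'mat'/'fish' once each)
```

Everything such a model “knows” is a statistical echo of its training text, which is exactly the point critics like Bender make about imagining a mind behind the words.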

This stuff gets complex fast, but Transformer basically delivers an advantage in both training performance and resulting accuracy compared to recurrent and convolutional neural models when it comes to language processing. Google’s AI team has been the subject of more drama than you’d expect over the years, but the company has been concerned about the possible ethics of developing AI for some time. In 2018 it established a series of ethical AI guidelines for how AI could be made and what it can be used to do, since distilled down into a set of defined principles and practices. The short version is that Google wants to ensure its work in AI is for the good of society, safe, and respects privacy while following best practices for things like data and model testing. Lastly, Google claims it won’t pursue AI applications that are likely to be used to harm others, for surveillance, or to violate laws. At Google I/O 2021, Google CEO Sundar Pichai revealed the company’s “latest breakthrough in natural language understanding,” called LaMDA.
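The “big-picture analysis all at once” that gives Transformer its edge over recurrent models is self-attention: every token’s relationship to every other token is computed in a single matrix product, with no step-by-step recurrence to unroll. A minimal NumPy sketch, with invented shapes and random weights standing in for a trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # all token pairs at once
    return softmax(scores) @ V               # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))      # 5 toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                             # one updated vector per token
```

Because the whole sequence is handled in parallel matrix operations rather than one step at a time, training needs fewer sequential steps, which is the advantage the list above alludes to.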
