Google Engineer Makes Alarming Claim About Chatbot



Have our computer overlords arrived? The Washington Post has an intriguing story about a Google engineer who argues that an artificially intelligent chatbot he was testing became sentient. If Blake Lemoine is correct, it might be step one of a sci-fi nightmare that critics of AI have long warned about. However, Google thinks Lemoine is off base, and it appears that the AI community is backing Google on this one. Coverage:

  • Human-like: Lemoine catalogued conversations he had with Google’s Language Model for Dialogue Applications, or LaMDA. “I know a person when I talk to it,” the 41-year-old tells the Post. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
  • Key exchange: When Lemoine asked the chatbot about its fears, it responded: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.” To the Guardian, that is “eerily reminiscent” of the order-defying computer HAL in 2001: A Space Odyssey, which also feared being switched off.
  • Consequences: Lemoine raised his concerns with superiors at Google, who looked into them and rejected them. When Lemoine began to make his case publicly, in online posts and by talking with a representative for a House panel, Google suspended him for breaching confidentiality rules, reports the Post.
  • Google’s stance: LaMDA is not sentient, period, says the company. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” says spokesperson Brian Gabriel. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
  • Outside view: “We in the AI community have our differences, but pretty much all … find the notion that LaMDA might be sentient completely ridiculous,” writes Gary Marcus in a Substack post. It simply has untold volumes of human language to draw from and mimic. To claim such systems are sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside,” tweets Stanford’s Erik Brynjolfsson.
  • Also skeptical: A post at Axios is similarly doubtful. “Artful and astonishing as LaMDA’s conversation skills are, everything the program says could credibly have been assembled by an algorithm that, like Google’s, has studied up on the entire 25-year corpus of humanity’s online expression.” There’s a world of difference between that and being able to think and reason like a human.
  • Then again: Coverage notes that Google VP Blaise Aguera y Arcas, one of the execs who dismissed Lemoine’s claims, wrote a piece in the Economist last week about the “new era” of AI. The takeaway quote: “I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”






