
Is the LaMDA (Language Model for Dialogue Applications) artificial intelligence advanced enough to have some form of self-awareness?

Specialized in informal conversation with humans, LaMDA can answer questions in natural language, formulate concepts and interpret queries not just literally but with some understanding of context. It was unveiled at Google I/O 2021 and returned for the 2022 edition as LaMDA 2.

For Google engineer Blake Lemoine, it has become complex enough to be self-aware. He was initially responsible for evaluating the AI internally, studying how it works and its potential for abuse (hate speech, discrimination, etc.), but his exchanges with the AI convinced him that it had developed a sentience of its own.

A self-aware AI?

He cites three reasons for his position: the AI's ability to use language dynamically and creatively, its expression of feelings and emotions as if it saw itself as a person, and its intention to share personal ideas and the fruit of its own reflections, with memories of the past and concerns about the future as a backdrop.

Blake Lemoine has released excerpts from his conversations with LaMDA to prove his assertions, but the scientific community and AI experts remain dubious or downright skeptical.

For many, these are mainly answers drawn from examples found on the Internet and Wikipedia rather than original creations, and they are, logically enough, tailored to the context of the dialogue.

This gives LaMDA’s answers a humanizing, anthropomorphic tone, but that does not necessarily reflect self-awareness, and this type of artificial intelligence is not yet considered powerful enough to produce real consciousness.

Skeptical reaction from the scientific community

Blake Lemoine’s demonstrations did not sit well with Google, which suspended the engineer, officially for violating the firm’s confidentiality policy.

The company says that no evidence of emerging awareness has been provided and that the examples given instead tend to show that LaMDA simply interacts within its assigned scope.

The episode nonetheless demonstrates the risk that an AI so well honed at conversing with humans can create the illusion, even among specialists, that it is endowed with a soul, feelings and personal concerns.

The Washington Post, which broke the story, nevertheless suggests that Blake Lemoine’s personal background may explain his desire to believe in a machine endowed with a soul.

