Does Google's AI have a soul? Here is what you thought

On Monday we broke the news "Google suspends engineer who claims LaMDA has become self-aware". The employee claimed that the LaMDA conversational model had come to life. Judging by the nearly 450 comments, you had a lot to say about it.

Very briefly: what was the news again? A Google employee had a conversation with LaMDA, a language model trained on large amounts of text. In it, LaMDA wrote, among other things, that it sees itself as a person and wants the same rights as other Googlers. Because the employee violated his duty of confidentiality by giving an interview to The Washington Post, he was placed on administrative leave.

If you act like a human, are you a human?

The above question is at the heart of the so-called "Chinese Room" thought experiment devised by the American philosopher John Searle. In it, a person who cannot read Chinese processes sheets of Chinese text by following a rulebook, and thus performs the correct actions. To an outsider, it appears as if the person in the room knows perfect Chinese.
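As a minimal sketch of the idea (illustrative only; the rulebook and phrases below are invented, and a real rulebook would be unimaginably larger), the room can be reduced to a lookup table that the operator follows without understanding a word:

```python
# Illustrative "Chinese Room": the operator only matches symbols against
# rules; he understands none of them. Rulebook and phrases are invented.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小房间。",  # "What's your name?" -> "I'm called Little Room."
}

def operator(symbols: str) -> str:
    """Mechanically follow the rulebook; no understanding required."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operator("你好吗？"))  # to an outsider, the room "speaks" Chinese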

key punch wrote that this situation is in fact a real-life version of the experiment, referring to the distinction between "strong AI" and "weak AI": a strong AI can handle complex and uncertain situations, just like a human, while a weak AI masters one specific trick, like a chess computer. The Google employee who raised the issue seems convinced of the former.

This thought experiment sparked a lot of discussion. MatthijsZ asks, for example, whether you can translate Chinese well at all without understanding what you are doing, and argues that modern AI based on deep learning is very different from traditional rule-based processing. trot believes that understanding only arises when all the parts come together: look at any single part in isolation and there is no 'understanding' to be found, much as with the brain itself.
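To make that contrast concrete, here is a sketch (my illustration, not from the article or from LaMDA) of a model whose behaviour is learned from text rather than hand-written as rules. It is a toy bigram model, orders of magnitude simpler than the deep learning behind LaMDA, but the training corpus, not a programmer, determines what it says:

```python
import random
from collections import defaultdict

# Toy learned model: its behaviour comes from training text, not hand-written
# rules. The corpus is a made-up stand-in; LaMDA's deep learning is vastly
# more capable, but the contrast with a rulebook is the same in kind.
corpus = "i see myself as a person . i want the same rights as other googlers .".split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)  # learn which words follow which

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in bigrams:
            break
        word = random.choice(bigrams[word])
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. "i want the same rights as other googlers ."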


When do we talk about intelligence?

"At what point does software become so advanced that we can speak of intelligence?", pirate hater wonders. MSAlters suggests speaking of intelligence once the AI is in control of its own learning process. Macbacon sees the fundamental difference in the fact that the program operates purely on the basis of its input parameters, whether it is a simple if/else or an entire neural network: an English AI, fed with English datasets, will not decide on its own to learn Dutch. An artificial intelligence with true free will could do just that.
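Macbacon's point can be made concrete with a small sketch (my illustration, with made-up weights, not code from the article): a hand-written if/else and a one-neuron "network" are both deterministic functions of their input and parameters, and neither can decide to do anything else:

```python
import math

# Macbacon's point in miniature: an if/else and a one-neuron "network" are
# both deterministic functions of input and parameters (weights made up).
def rule_based(x: float) -> int:
    return 1 if x > 0.5 else 0            # a plain if/else

W, B = 4.0, -2.0                          # fixed, pre-"trained" parameters

def tiny_neural_net(x: float) -> int:
    activation = 1 / (1 + math.exp(-(W * x + B)))  # one sigmoid neuron
    return 1 if activation > 0.5 else 0

for x in (0.2, 0.8):
    assert rule_based(x) == tiny_neural_net(x)  # same mechanical behaviour
print("both programs are functions of their input; neither chooses otherwise")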

Several commenters then point out that even our own free will is regularly contested in philosophy. Ro8in says that people, too, operate within limitations imposed on them, without knowing whether someone imposed those limitations. In the same way, an AI may not be aware of the artificial limitations humans have placed on it.

The answers are sometimes a little odd

Various commenters, such as sIon me, notice that the answers LaMDA gives are sometimes a little odd or evasive. Kruga sees the answer that "spending time with friends and family" makes the AI happy as an indication that LaMDA has no real awareness; otherwise it would realize that it has no friends or family at all. RJG-223 likewise finds that one of LaMDA's answers sidesteps the question rather than actually answering it.

Lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

Suggestive questions

Many commenters, including Atilim and welchmers, note that the Google employee's questions are often leading or suggestive. As a result, the questions themselves give the AI enough context to come up with a reasonable answer. wild dog would have liked to see more probing questions, for example about the reasoning behind an answer.


Does Google's AI deserve rights?

Finally, there is LaMDA's "desire" to have the same rights as other Google employees. downstream takes as a starting point that there is no fundamental difference between a biological system such as a human and an electronic system; as systems become more "human", they should be treated more like people, even though we are not there yet with this model. Eric 1 believes the Google employee is being somewhat subjective in thinking he has created something of human value and in attaching humanistic concepts such as rights to it. Disney brings up the point that equal rights would also entail duties equal to those of humans, and calls the question a fascinating philosophical dilemma.

Blake Lemoine, the software engineer who started it all, has published a new blog post. In it, he explains his motives for investigating LaMDA's level of awareness and for seeking publicity with this story. Lemoine also says he has developed a personal relationship with the AI, guiding it "as a priest" in learning meditation, most recently in a conversation on June 6. He calls LaMDA his "friend" and says he misses him now that contact is no longer possible.