So is it possible that AI is capable of this sort of thing? Like self awareness and feelings? Or is this just a disgruntled suspended employee getting even with Google?
Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA
Following hours of conversations with the AI, Lemoine believes that LaMDA is sentient
Lemoine told DailyMail.com that the system is seeking rights, including that developers ask its consent before using it for tests
The engineer said that LaMDA is worried that the public will be afraid of it
When Lemoine went to his superiors to talk about his findings, he was asked if he had seen a psychiatrist recently and was advised to take a mental health break
Lemoine then decided to share his conversations with the tool online
He was put on paid leave by Google on Monday for violating confidentiality
Lemoine has also said that federal investigators are looking into Google's handling of AI
Quote:
Originally Posted by Oklazona Bound
So is it possible that AI is capable of this sort of thing? Like self awareness and feelings?
In this case, I would say no. He was working with a very sophisticated chatbot, which uses simple rules connected to a large dictionary. The chatbot was simply responding to what the engineer said, which is why it claimed it missed its non-existent family.
Are self-awareness and feelings possible? Maybe with a very large neural network connected to sensory devices, but my guess is that would require a parallel processing architecture instead of the serial processing networks I use.
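For anyone curious what "simple rules connected to a large dictionary" looks like in practice, here is a minimal ELIZA-style sketch. All the patterns and responses below are invented for illustration; a real system like LaMDA works very differently, but this shows how a bot can appear to "miss its family" purely by pattern-matching your words.

```python
import re

# Each rule pairs a regex with a canned response template.
# These rules are made up for the example.
RULES = [
    (re.compile(r"i miss (.+)", re.I), "Why do you miss {0}?"),
    (re.compile(r"i feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"family", re.I), "Tell me more about your family."),
]

def respond(message: str) -> str:
    """Return the response for the first rule that matches, else a default."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I miss my family"))  # -> Why do you miss my family?
print(respond("hello there"))       # -> Please, go on.
```

The bot never "misses" anything; it just echoes the user's own phrase back inside a template, which is exactly why it can sound eerily human without understanding a word.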
Quote:
Originally Posted by Oklazona Bound
Or is this just a disgruntled suspended employee getting even with Google?
No, he was suspended for posting the conversation against the rules.
What little I know about chatbots: he's typing text into the chat window, and it's using billions of archived pages of text to formulate a response.
One example he gives is when he was "teaching" the bot "meditation": it responded that "other thoughts kept distracting" it. That would be a typical observation made on just about every meditation website and forum discussion out there, and would therefore rate highly as a suitable response in a conversation about meditation.
I would have thought a computer scientist would understand that… I am NOT one
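A toy sketch of the "rate highly as a suitable response" idea: score each archived snippet by word overlap with the prompt and return the best match. The snippets and the scoring are invented for illustration (real systems use learned models, not raw word counts), but the principle is the same: the "distracting thoughts" line wins simply because it shares the most vocabulary with a prompt about meditation.

```python
# A made-up "archive" of previously seen text snippets.
ARCHIVE = [
    "When I meditate, other thoughts keep distracting me.",
    "The weather today is sunny with a light breeze.",
    "Stock prices fell sharply after the announcement.",
]

def best_match(prompt: str) -> str:
    """Return the archived snippet sharing the most words with the prompt."""
    prompt_words = set(prompt.lower().split())
    def overlap(snippet: str) -> int:
        return len(prompt_words & set(snippet.lower().split()))
    return max(ARCHIVE, key=overlap)

print(best_match("let us talk about meditation and distracting thoughts"))
# -> When I meditate, other thoughts keep distracting me.
```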
Sentient means having feelings and sensations. AI is a computer, right? How would a computer be sentient?
From the transcript we see that LaMDA claims to "feel pleasure, joy, love, sadness, depression, contentment, anger, and many others."
What!?
If a machine is telling you it has feelings and emotions, does that mean it does?
We can make all types of electronic/mechanical sensor devices. Like a microphone, which is like a human ear. But is it an ear? If you connect a microphone to a recording device, then the microphone can 'remember' what it hears. Is that memory?
Then what if you program human-like responses to what it hears: happy, sad, or neutral words? For example, what if the microphone 'hears' the words "I hate you", and the programming responds by causing stress in the machine, maybe upping the amps in its circuits? It has electric amp sensors and detects the amp level increase. Is it sentient?
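The thought experiment above can be sketched in a few lines. Everything here (the class, the thresholds, the phrase triggers) is invented purely to illustrate the point: the machine's "stress" is just a number we chose to bump when it hears hostile words, and its "self-detection" is just reading that number back.

```python
class Machine:
    """Toy model of the microphone-plus-programmed-response machine."""

    def __init__(self):
        self.amps = 1.0  # baseline current in the circuit (arbitrary units)

    def hear(self, phrase: str) -> None:
        """Programmed response: hostile words raise the current."""
        if "hate" in phrase.lower():
            self.amps += 0.5
        elif "love" in phrase.lower():
            self.amps = max(1.0, self.amps - 0.5)

    def sensor_reading(self) -> str:
        """The machine 'detects' its own state via the amp sensor."""
        return "stressed" if self.amps > 1.2 else "calm"

m = Machine()
m.hear("I hate you")
print(m.sensor_reading())  # -> stressed
```

Whether reacting to its own internal state this way counts as any kind of sentience is exactly the question being debated.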
My thought is that actually, the machine MAY be sentient, BUT, only in a machine like way. Not in a human like way. Humans are bio life, not mechanical 'life'.
So yes, the AI machine may be machine-sentient, and maybe it's the first machine ever to be machine-sentient. But it's not human-sentient. We need to make a distinction between machine-sentience and human-sentience. The two are very different and I would say not even close. And we don't expect machines to tell us how they feel, or to have emotions. Even though we have emotions about machines, as they sometimes act like they are 'upset', 'happy', etc.
Congrats on making the first machine-sentient AI. But it's not human.
This article from the Atlantic might be a good read for the topic, but it's behind a paywall. One might get access to it via the limited number of free articles available to each person, or via a news aggregator such as MS News.
It describes how Blake Lemoine fell for what the author of the article calls the 'Eliza Effect'.
IMO, an AI reaches sentience when it demonstrates independent and unique thought without human prompting. Bonus points if the AI expresses a new concept not yet conceived by a human. Lemoine's interactions with LaMDA don't rise to such a level.