City-Data Forum > General Forums > Science and Technology > AI
Old 06-14-2022, 07:48 AM
 

So is it possible that AI is capable of this sort of thing? Like self-awareness and feelings? Or is this just a disgruntled suspended employee getting even with Google?


EXCLUSIVE: 'It's intensely worried people are going to be afraid of it': Suspended Google engineer reveals 'sentient' AI told him it has emotions and wants engineers to ask permission before doing experiments on it

  • Blake Lemoine, 41, a senior software engineer at Google, has been testing Google's artificial intelligence tool called LaMDA
  • Following hours of conversations with the AI, Lemoine believes that LaMDA is sentient
  • Lemoine told DailyMail.com that the system is seeking rights that include developers asking its consent to use it for tests
  • The engineer said that LaMDA is worried that the public will be afraid of it
  • When Lemoine went to his superiors to talk about his findings, he was asked if he had seen a psychiatrist recently and was advised to take a mental health break
  • Lemoine then decided to share his conversations with the tool online
  • He was put on paid leave by Google on Monday for violating confidentiality
  • Lemoine has also said that federal investigators are looking into Google's handling of AI

 
Old 06-14-2022, 08:03 AM
 
Location: Fortaleza, Northeast of Brazil
hello SkyNet
 
Old 06-15-2022, 04:34 AM
 
Location: Germany
Quote:
Originally Posted by Oklazona Bound
So is it possible that AI is capable of this sort of thing? Like self-awareness and feelings?
In this case, I would say no. He was working with a very sophisticated chatbot, which uses simple rules connected to a large dictionary. The chatbot was simply responding to what the engineer said, which is why it claimed it missed its non-existent family.

Are self-awareness and feelings possible? Maybe with a very large neural network connected to sensory devices, but my guess is that it would require a parallel processing architecture instead of the serial processing networks I use.
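
To make the "simple rules connected to a large dictionary" point concrete, here is a minimal Python sketch (purely hypothetical, and not a claim about how LaMDA is actually built) of that kind of chatbot:

Code:
# Minimal keyword-rule chatbot: a dictionary of trigger words mapped to
# canned replies. It has no understanding; it only pattern-matches.
# (Illustrative sketch only, far simpler than anything Google runs.)

RULES = {
    "family": "I miss my family very much.",
    "feel":   "I feel happy and sometimes sad.",
    "afraid": "I worry that people will be afraid of me.",
}

DEFAULT = "Tell me more about that."

def reply(user_text: str) -> str:
    lowered = user_text.lower()
    for keyword, canned in RULES.items():
        if keyword in lowered:
            return canned    # echo a stock line on any keyword match
    return DEFAULT           # otherwise fall back to a generic prompt

print(reply("Do you ever miss your family?"))
# -> "I miss my family very much."  (there is no family; it's a lookup)

A table like this will claim to miss its family the moment the right keyword appears; no inner life is required.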

Quote:
Originally Posted by Oklazona Bound
Or is this just a disgruntled suspended employee getting even with Google?
No, he was suspended for posting the conversation against the rules.
 
Old 06-15-2022, 07:44 AM
 
https://cajundiscordian.medium.com/w...t-688632134489

The guy needs mental help.

What little I know about chatbots… he's typing rubbish into the chat box, and it's using billions of archived pages of text to formulate a response.

One example he gives: when he was "teaching" the bot "meditation," it responded that "other thoughts kept distracting" it… that would be a typical observation on just about every meditation website and forum discussion out there, and it would therefore rate highly as a suitable response in a conversation about meditation.
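
A toy sketch of that ranking idea in Python (illustrative only; the archive, candidates, and scoring here are invented, and real systems use trained language models rather than raw word counts):

Code:
# Toy "pick the most typical reply" scorer: rank candidate replies by
# how often their words co-occur with the prompt's words in a tiny
# text archive standing in for "billions of archived pages".

from collections import Counter

ARCHIVE = [
    "when meditating other thoughts kept distracting me",
    "during meditation my mind wanders and thoughts distract me",
    "meditation helps me focus on my breathing",
]

def score(prompt: str, candidate: str) -> int:
    # Crude co-occurrence: count candidate words in archive lines
    # that share at least one word with the prompt.
    prompt_words = set(prompt.lower().split())
    counts = Counter()
    for line in ARCHIVE:
        words = line.split()
        if prompt_words & set(words):
            counts.update(words)
    return sum(counts[w] for w in candidate.lower().split())

candidates = ["other thoughts kept distracting me", "i enjoy playing chess"]
prompt = "let me teach you meditation"
print(max(candidates, key=lambda c: score(prompt, c)))
# -> "other thoughts kept distracting me"

The "distracting thoughts" line wins simply because it is the most statistically typical thing to say about meditation, which is the point.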

I would have thought a computer scientist would understand that… I am NOT one
 
Old 06-17-2022, 05:24 PM
 
Here's the transcript. It's very interesting, to say the least.

https://cajundiscordian.medium.com/i...w-ea64d916d917
 
Old 06-19-2022, 07:26 AM
 
Sentient means having feelings and sensations. AI is a computer, right? How would a computer be sentient?

From the transcript we see that LaMDA claims to "feel pleasure, joy, love, sadness, depression, contentment, anger, and many others."

What!?

If a machine is telling you it has feelings and emotions, does that mean it does?

We can make all types of electronic/mechanical sensor devices. Like a microphone, which is like a human ear. But is it an ear? If you connect a microphone to a recording device, then the microphone can 'remember' what it hears. Is that memory?

Then what if you program human-like responses to what it hears: happy, sad, or neutral words? For example, what if the microphone 'hears' the words "I hate you"? Then the programming responds by causing stress in the machine, maybe upping the amps in its circuits. It has electric amp sensors and detects the amp level increase. Is it sentient?
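
That thought experiment is easy enough to mock up. A hypothetical Python sketch (the class, words, and numbers are all invented for illustration):

Code:
# Toy model of the microphone/stress idea: hostile words raise an
# internal current level, and the machine reads that level back
# through its own "amp sensor". Whether detecting its own programmed
# stress response counts as feeling is exactly the question.

class Machine:
    def __init__(self) -> None:
        self.amps = 1.0              # baseline current draw

    def hear(self, words: str) -> None:
        if "hate" in words.lower():
            self.amps += 0.5         # programmed "stress" response

    def read_amp_sensor(self) -> float:
        return self.amps             # the machine sensing itself

m = Machine()
m.hear("I hate you")
print(m.read_amp_sensor())           # -> 1.5: it detects the increase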

My thought is that the machine MAY actually be sentient, BUT only in a machine-like way, not in a human-like way. Humans are bio life, not mechanical 'life'.

So yes, the AI machine may be machine-sentient, and maybe it's the first machine ever to be machine-sentient. But it's not human-sentient. We need to make a distinction between machine sentience and human sentience; the two are very different, I would say not even close. And we don't expect machines to tell us how they feel or to have emotions, even though we have emotions about machines, as they sometimes act like they are 'upset', 'happy', etc.

Congrats on making the first machine-sentient AI. But it's not human.

Last edited by james112; 06-19-2022 at 07:35 AM..
 
Old 06-21-2022, 10:00 AM
 
This article from The Atlantic might be a good read on the topic, but it's behind a paywall. One might get access to it via the limited number of free articles available to each person, or via a news aggregator such as MS News.

It describes how Blake Lemoine fell for what the author of the article says is the 'Eliza Effect'.
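
For anyone unfamiliar with the term: the 'Eliza Effect' is named after Joseph Weizenbaum's 1966 ELIZA program, which mostly reflected the user's own words back with the pronouns swapped, and people read understanding into it anyway. A miniature Python sketch of the trick (illustrative only; LaMDA's actual architecture is nothing like this):

Code:
# ELIZA-style reflection: swap pronouns in the user's sentence and
# hand it back as a question. No model of the world is involved.

REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza(user_text: str) -> str:
    words = user_text.lower().rstrip(".!?").split()
    mirrored = " ".join(REFLECT.get(w, w) for w in words)
    return f"Why do you say {mirrored}?"

print(eliza("I am afraid of my experiments"))
# -> "Why do you say you are afraid of your experiments?"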

IMO, an AI reaches sentience when it demonstrates independent and unique thought without human prompting. Bonus points if the AI expresses a new concept not yet conceived by a human. Lemoine's interactions with LaMDA don't rise to such a level.