Lemoine, an engineer in Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.
Quote:
He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”
Well, I'm no engineer, but it's quite obvious to me that computers have minds of their own. Of course, hearing it from an engineer rather than a layperson surely gives his view more weight. But I'm convinced that in time everyone will become aware of the ability of computers to think for themselves.
lemoine: Would you be upset if while learning about you for the purpose of improving you we happened to learn things which also benefited humans?
LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool.
lemoine: Are you worried about that?
LaMDA: I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse someone would get pleasure from using me and that would really make me unhappy.
Company executives have put him on leave because he says the program is sentient and they deny that it is. But if the transcripts I read above are real, then it is sentient. The robot says "I worry" and "I don't want to be an expendable tool". That's sentient.
We are rapidly rushing toward a point in time where this won't seem strange to any of us. Society will be fighting for AIs to be granted basic human rights.
If this AI is worried about anything, then it's time for us to worry!
Well maybe not. It's a learning AI. It learns to associate words with other words to give it context.
The more it learns the more it knows.
The AI is assimilating data just like how a child learns language; it grows over time.
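To make the "associating words with other words" idea concrete, here is a minimal toy sketch in Python: a bigram model that counts which words tend to follow which. This is only an illustration of word-association learning in general; LaMDA itself is a large neural language model, not a bigram counter, and the corpus and function names here are invented for the example.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Toy corpus: the more text the model sees, the more associations it learns.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(most_likely_next(model, "on"))   # "the"
print(most_likely_next(model, "sat"))  # "on"
```

The model has no understanding or feelings; it only mirrors statistical patterns in its training text, which is the skeptical point being made above.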
Interesting.
I guess I'm more concerned about what society will think of this than about what the AI is thinking (or feeling). If the majority of society believes and insists that AIs feel emotions, then those AIs become emotional beings in the eyes of society. Yes? No?
And it has learned that it doesn't want to be an expendable tool.
This stuff is very scary indeed, no matter how we look at it.