When an AI is programmed with biases and emotion, to whatever degree those can be instilled, don't assume a nightmare scenario is impossible.
Greater minds than yours and mine combined have said so.
Nightmare scenarios can indeed happen. Biases and emotion can be programmed into AI systems, but those systems can only mimic human thought patterns. That doesn't make them sentient, even though they can outthink and outperform humans. In other words, they are not alive; they merely follow the complex instructions and reasoning their programmers have equipped them with.
So they can appear to be sentient but are just a jumble of circuits, wires, and software.
There are quite a few people who seem less sentient than the Google AI, so I can understand why some would start to believe it is so. It's probably just an intrinsic bias on the part of the workers.
The whole idea of the Turing test is to have a human evaluator asking blind questions of a computer and a human in a text format. If the tester can’t figure out which one is human, say hello to HAL.
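The blind setup described above can be sketched as a simulation. This is a toy illustration, not a real chatbot harness: the respondents and the evaluator are hypothetical stand-in functions, and the point is only that when the machine's answers are indistinguishable from the human's, a blind evaluator identifies the human no better than chance.

```python
import random

def turing_test(evaluator, human_reply, machine_reply, questions):
    """One blind, text-only trial: the evaluator questions two hidden
    respondents labeled A and B and must guess which one is the human.
    Returns True if the evaluator correctly identifies the human."""
    pair = [human_reply, machine_reply]
    random.shuffle(pair)  # blind assignment of respondents to labels
    respondents = {"A": pair[0], "B": pair[1]}
    # Build a transcript of (question, answer) pairs for each label.
    transcript = {label: [(q, fn(q)) for q in questions]
                  for label, fn in respondents.items()}
    guess = evaluator(transcript)  # evaluator returns "A" or "B"
    return respondents[guess] is human_reply

# Toy respondents: the machine mimics the human's answers perfectly.
human = lambda q: "Hmm, let me think about that."
machine = lambda q: "Hmm, let me think about that."

# With identical answers, even a careful evaluator can only guess.
random.seed(0)
trials = 1000
wins = sum(turing_test(lambda t: random.choice(["A", "B"]),
                       human, machine, ["What is love?"])
           for _ in range(trials))
print(wins / trials)  # hovers around 0.5 -- the machine "passes"
```

A success rate near 50% over many trials is exactly the failure mode the test is built around: the evaluator cannot do better than a coin flip, so the machine is judged indistinguishable from the human.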
No matter how complex and highly functioning an AI becomes, it is still not sentient.
while that may be true....don't put it past one of these lib woke nutjobs to convince enough people...and the next thing you know it's accepted...and normal