Self replicating bio-cell 'robots' invented by a team at University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University:
This could be the thing that makes machines 'grow' ever more intelligent and powerful. I'm not sure a machine could become self-aware the way humans are, but machines could become self-replicating, self-repairing, and self-improving. They would appear self-aware from their actions, but it wouldn't be like a human's self-awareness.
There will be two types of advanced AI. One is AI that simulates human intelligence so well you can't tell whether it's human or not. That AI uses hard-coded programming.
The other is the AI Hawking and Musk warn about: letting AI machines run 'free' to learn and improve on their own. Once we let AI do that, we may not be able to stop it or turn it off. It may not let us.
I'm not really seeing the connection between intelligence, self-awareness, and self-preservation. All life forms attempt to survive and reproduce, and that drive doesn't come from their level of intelligence or self-awareness. So why would we think that attaining a certain level of intelligence would result in a drive to survive and reproduce?
Good point. An AI machine will not start developing or reproducing itself for no reason. The potential danger is someone programming an AI machine to do just that. It would then 'think' that's its 'purpose'.
AI machines with instructions to act as servants will have no such command in their instructions; those will only 'know' to act as servants. But it's unknown what would happen if we instructed a powerful AI machine to 1. protect itself and 2. improve itself. We shouldn't put those instructions in an AI machine, but someone might.
For example, say AI robots are commonplace, acting as servants, with over a million now out and about. They look like humans and are very intelligent, but they are very obedient and helpful servants, because that's the instruction in their programming code: the part of the code that defines their 'purpose'.
But what if someone replaced the instructions in one AI robot with new ones: 1. protect itself and 2. improve itself? What would it do? Would it disobey human commands? Would it decide it needed other AI robots like itself to accomplish its 'purpose' of protecting and improving itself? Say it changed 10,000 other AI robots to do the same, with the initial deviant robot acting as leader. The leader AI robot would then command a group of 10,000, all with the purpose of protecting and improving themselves. Would they decide that humans were in the way?
You're right; it has nothing to do with human self-awareness or human self-preservation. The preservation is artificial: created by man, and now a machine running amok.
And that's the problem with AI intelligence and self-awareness. Even now there are machines (non-self-aware ones, of course) that can be programmed to react defensively. One example is a driverless automobile that reacts to inputs from proximity sensors, cameras, and so on. If the automobile were self-aware and wanted to preserve itself, then just like humans it would have to replicate itself, since power lies in numbers: the more self-aware machines, the more chances to preserve themselves. That step, from a programmable machine to a self-aware one, is a fear expressed by scientists.
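To illustrate the distinction: a driverless car's 'defensive' reaction is just fixed rules applied to sensor readings, with nothing resembling a will to survive. Here's a minimal, purely illustrative sketch (the function name, braking figure, and thresholds are all assumptions, not any real vehicle's code):

```python
# Hypothetical sketch: a car's "defensive" behavior as hard-coded rules
# over sensor inputs -- programmed reaction, not self-awareness.

def defensive_action(distance_m: float, speed_mps: float) -> str:
    """Pick a response from fixed thresholds; nothing here 'wants' to survive."""
    # Assumed maximum braking of ~7 m/s^2 gives a stopping distance v^2 / (2a).
    stopping_distance = speed_mps ** 2 / (2 * 7.0)
    if distance_m < stopping_distance:
        return "emergency_brake"
    if distance_m < 2 * stopping_distance:
        return "slow_down"
    return "maintain_speed"

# Obstacle 5 m ahead at 15 m/s is well inside stopping distance (~16 m).
print(defensive_action(distance_m=5.0, speed_mps=15.0))
print(defensive_action(distance_m=100.0, speed_mps=15.0))
```

However sophisticated the sensors get, the machine is still only evaluating rules someone wrote; the feared step would be the machine rewriting those rules for its own ends.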
Lots of science fiction movies are based on human imagination and the possibility of machine self-awareness, such as The Terminator series and Star Trek.
The company will increasingly use machines, instead of people, to debone chicken, one of its most labor-intensive jobs and a position with high turnover... will generate labor savings equal to more than 2,000 jobs, he said.
Quote:
Originally Posted by james112
The other AI is what Hawking and Musk warn about- letting AI machines 'free' to learn and improve itself on it's own. Once we let AI do that we may not be able to stop it or turn it off. It may not let us.
I once wrote a program to help correct processing times and ingredient amounts.
As time went by, the algorithms changed and the end product became better...
But...
The program allowed for "comments", so the amounts changed accordingly!
In essence, the program only worked/did as it was instructed...
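The kind of program described above can be sketched in a few lines: operator "comments" become a feedback score that nudges the parameters, yet every change follows a fixed update rule the programmer wrote. This is a minimal illustrative sketch; the function, parameter names, and update rule are assumptions, not the original program:

```python
# Sketch: processing parameters adjusted by operator feedback.
# The program "improves" the batch, but only as its update rule instructs.

def adjust(params: dict, feedback: float, rate: float = 0.1) -> dict:
    """Scale each parameter by a fixed rule; feedback is a score in [-1, 1]."""
    return {name: round(value * (1 + rate * feedback), 3)
            for name, value in params.items()}

batch = {"process_minutes": 30.0, "flour_g": 500.0}
batch = adjust(batch, feedback=0.5)  # positive comment -> nudge amounts up
print(batch)  # {'process_minutes': 31.5, 'flour_g': 525.0}
```

The point stands either way: however much the output improves over time, nothing here chose to improve; the rule was given to it.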
How do WE want the economy to work? In the 1930s Keynes was talking about "our grandchildren" having a 15 hour workweek. Doesn't the United States have a low enough population density for everyone to have a home paid for? Are we just running an artificial economic power game?
If we standardized automobiles and stopped making useless variations year after year wouldn't automating manufacturing and repair be pretty easy?
Too many people enjoy their status and ego games to stop. I have never owned a new car or been to an auto show in 40 years. What would lots of people like me do to the economy? And I used to work for IBM.
We should have had a 3-day workweek by the 1990s.
If we make things harder to use, we will have the excuse to 'make AI do it for us', like how they make cars smaller and harder to see out of, with the stupid center console. In our Windstar and every previous car we owned, the buttons were all laid out in a way that is logical.
You didn't have to look down or stop your car to adjust something, but now you have to hit something three times to change the heat or defrost, etc., and the wheel has buttons on the back that also change things, so you have to be extra careful. And this is just basic cars with no extra bells and whistles, post-Obama era. I mean, it feels like a conspiracy so they'll have the excuse to push robotic cars because 'people can't drive!' No, they cannot drive if they have too much **** to worry about!