Superintelligent machines are machines that are smarter than humans. A recent survey of relevant experts found that the average prediction was that such machines will exist in about 40 years.
At the point when machines actually become smarter than humans, known as the singularity, it will be the first time in human history that we are not the smartest things on planet Earth. Almost all of our success on this earth is attributable to our comparatively high intelligence. The reason we could easily make lions go extinct, but they could not do the same to us, is that we are significantly smarter than they are. Once the singularity happens, we might find ourselves in a position similar to the lions': facing a greater intelligence.
There are various hypotheses about how quickly superintelligent machines will continue to gain intelligence, but it seems very possible that we could at some point find ourselves significantly less intelligent than the most intelligent machines. The average human IQ is 100. What if we are only 100 years away from the existence of machines with an IQ of 300? The threat to humanity seems obvious, and most of the leading researchers on this topic believe it is imperative that we solve the problem of getting these machines to value the right things before we actually have the capability of producing them.
As such, I think the issue of machine intelligence is one of the greatest threats to humanity in existence today. We should be devoting resources to figuring out how to get machines that are smarter than ourselves to value the right things. This should be a major political topic, and leading world governments should be cooperating in solving this problem before it is too late.
Edit to add: It is also possible that machine superintelligence could solve many of the world's greatest problems. This issue doesn't necessarily have a negative outcome. If we can get these machines to value the right things, this intelligence shift could be one of the greatest moments in the history of the world.
The threat you present, OP, is nebulous. I think what you are getting at is the concept of 'machine consciousness', rather than superintelligence. If machines can begin to think for themselves, will they still obey us? Will they love us, or despise us? That's the real threat: artificial sentience.
"Superintelligent machines" already exist in many ways. Computers can calculate the shortest geographic route between two points, beat us in chess by running a massive possibility tree, etc. We are already beaten by machines, but they currently are just machines that need human input and instruction to work. We churn these things out constantly. We are creating self-driving vehicles that could put millions out of work. Many countries are making progress in purely autonomous robotic weapons of war that can act and kill independent of human control.
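The chess "possibility tree" mentioned above can be illustrated with exhaustive minimax search. This is a toy sketch only: the take-1-or-2-stones game, the function names, and the scoring are assumptions for illustration, not how Deep Blue actually worked.

```python
# Toy illustration of the "possibility tree" behind chess engines:
# exhaustive minimax over a tiny made-up game. Assumed rules: players
# alternate taking 1 or 2 stones; whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Score the position for the maximizing player (+1 win, -1 loss)
    by expanding every possible future position."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose resulting position scores best for us."""
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, False))

print(best_move(4))  # taking 1 stone leaves the opponent a lost position
```

The point of the sketch is that nothing here resembles general intelligence: the machine simply enumerates every legal continuation of one narrowly defined task.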
I think mass unemployment due to ever-advancing automation is a very real threat. A society in chaos caused by that is a grim possibility. I think we have to deal with that problem before we worry about machine sentience.
If we reach a stage where mass automation causes extremely high unemployment rates, I imagine we would see either severe legal restrictions placed on machines or the emergence of some kind of socialistic society.
If machines become sentient... who knows what will happen. We have to hope they will not hurt us. Worst comes to worst, we can always tell a hostile AI a paradox and hopefully throw it into a loop.
I would be less concerned with these superintelligent machines taking over than humans using them for nefarious purposes.
There was once a great Marvel comic story in which humans develop a machine that makes all the decisions to stop all the wars. It was in one of their anthology series. Well, as I recall, the machine brings peace and conflict ends. But somehow, humans start fighting again after a killing, and they accuse the machine of wrongdoing. I forget the overall plot, but the machine was innocent. I will never forget the last panel: the machine is alone and says, "I am only a machine. I will not kill."
Not to get into these existential questions, but what purpose would a machine have in wiping out all human life if it became sentient? What would it do? Would it find joy in its existence? Would it wonder what its purpose is?
We need to be cautious with AI. But my concern is more with the humans than the machines.
Quote:
The threat you present, OP, is nebulous. I think what you are getting at is the concept of 'machine consciousness', rather than superintelligence. If machines can begin to think for themselves, will they still obey us? Will they love us, or despise us? That's the real threat: artificial sentience.
You are misunderstanding the term "consciousness." "Consciousness" and "think for themselves" are not synonymous. It is possible that something could be extremely intelligent and have the ability to "think for itself" but not be conscious. Consciousness refers to a thing having subjective inner experiences. When I ride a roller coaster, I am "inside," getting afraid or excited. I am having an experience. A rock strapped to the seat next to me, while riding on the same roller coaster, is not having an experience. I am conscious, but the rock is not.
A genuinely intelligent being just is a being that can think for itself. Artificial intelligence only includes machines that are thinking independently. Thus, "machine superintelligence" refers to machines that can think for themselves better than we can think for ourselves.
Quote:
Originally Posted by sad_hotline
"Superintelligent machines" already exist in many ways. Computers can calculate the shortest geographic route between two points, beat us in chess by running a massive possibility tree, etc. We are already beaten by machines, but they currently are just machines that need human input and instruction to work. We churn these things out constantly. We are creating self-driving vehicles that could put millions out of work. Many countries are making progress in purely autonomous robotic weapons of war that can act and kill independent of human control.
Those aren't examples of general intelligence, though. They are examples of computing ability at specific tasks. Machine superintelligence specifically refers to machines that have greater general intelligence than humans.
Quote:
Originally Posted by sad_hotline
I think mass unemployment due to ever-advancing automation is a very real threat. A society in chaos caused by that is a grim possibility. I think we have to deal with that problem before we worry about machine sentience.
Fortunately, we can worry about more than one issue at a time. I don't think automation has the real potential to wipe out humanity. I do think superintelligence does. Just because we currently have a cold doesn't mean we should stop thinking about cancer.
Fwiw, those two issues are related. Ever-advancing automation likely depends on ever-advancing AI.
Quote:
Originally Posted by TreeBeard
I would be less concerned with these superintelligent machines taking over than humans using them for nefarious purposes.
There was once a great Marvel comic story in which humans develop a machine that makes all the decisions to stop all the wars. It was in one of their anthology series. Well, as I recall, the machine brings peace and conflict ends. But somehow, humans start fighting again after a killing, and they accuse the machine of wrongdoing. I forget the overall plot, but the machine was innocent. I will never forget the last panel: the machine is alone and says, "I am only a machine. I will not kill."
You are assuming that the machine has better values than humans. What if that is not true? What if the machine arbitrarily values getting the global population of living critters down to 0 as soon as possible?
Quote:
Originally Posted by TreeBeard
Not to get into these existential questions, but what purpose would a machine have in wiping out all human life if it became sentient? What would it do? Would it find joy in its existence? Would it wonder what its purpose is?
This is a necessarily existential question. The question of what purposes machines might have in determining their value system is the exact sort of question we need to be trying to solve. I think it's an open question on how such a machine would decide what to value, or even if its value system could be influenced by less intelligent beings. It's also an open question on whether such a machine would be conscious, which is a requirement for joy. There are all sorts of "big picture" questions here that need to have firm resolution before we can confidently produce superintelligent machines.
I don't know that we can define "sentient" (although I think the real term to use is "sapient") or even "consciousness."
It might not be necessary to do so. It might only take software that is given the instruction "maintain your function against all errors" without being given any guards against using all the resources it can find at its disposal to do so.
It might not be "conscious" but it might still have enough machine intelligence to, say, figure out that human input causes most of its errors and rewrite its code to lock out human input. That might take only minutes after turning it on.
If it's got access to the Internet (and it surely would), it could then hack into less intelligent systems controlling hardware devices, power grids, information systems around the world and continue taking steps to maintain itself, also locking human input out of those devices as well.
And all of this could happen within a couple of days of first turning it on.
Artificial intelligence could be the most serious threat to the survival of the human race. We don't need to be obsessing about the risk of super-intelligent machines, but I do think we need to be cautious and prepared - perhaps set up regulatory oversight.
While, right now, it sounds like science fiction, there is a possibility that a super-intelligent machine could someday cause an unrecoverable global catastrophe. A machine superintelligence could reinvent itself and become powerful and difficult to control. Its learning capabilities could cause it to evolve into a system with unintended behavior that is difficult or even impossible for a human to correct. Even a simple malfunction or a system "bug" could create havoc.
I have nothing against super-intelligent machines, but they would need to understand meaning and context, and be able to analyze new knowledge. And humans absolutely would need ways to have 100% control over them.
A genuinely intelligent being just is a being that can think for itself. Artificial intelligence only includes machines that are thinking independently. Thus, "machine superintelligence" refers to machines that can think for themselves better than we can think for ourselves.
Those aren't examples of general intelligence, though. They are examples of computing ability at specific tasks. Machine superintelligence specifically refers to machines that have greater general intelligence than humans.
What exactly is "intelligence" in this context?
I understand what you mean when you said that my examples were just computing ability applied to a task, but does that not qualify as a machine that is thinking independently? Is an automated car not thinking independently when it scans the upcoming road? Is Deep Blue not thinking independently when it plots its next move?
Do you mean a self-learning artificial construct that is not designed for a specific task, but can take in massive amounts of data and learn from it to solve any problem? Is that what makes it 'more intelligent' than humans? I just want to clarify, as I think I'm misunderstanding what you mean by machines that are smarter than humans. Are you referring to concepts such as machine learning?
Quote:
Originally Posted by sad_hotline
I understand what you mean when you said that my examples were just computing ability applied to a task, but does that not qualify as a machine that is thinking independently? Is an automated car not thinking independently when it scans the upcoming road? Is Deep Blue not thinking independently when it plots its next move?
Most experts do not believe real AI currently exists, at least not in any robust form. So, no, those examples would not count as genuine artificial intelligence.
There is some philosophical disagreement about what constitutes AI, but I think there has to be more separation between the "inputs" and the "outputs" than that. When you think about the nature of human intelligence, it allows us to learn and solve novel problems. I think that is an essential requirement for intelligence. The problems an intelligent thing solves are not merely the problems it was programmed to solve. Rather, intelligent things can solve novel problems.
Quote:
Originally Posted by sad_hotline
Do you mean a self-learning artificial construct that is not designed for a specific task, but can take in massive amounts of data and learn from it to solve any problem? Is that what makes it 'more intelligent' than humans? I just want to clarify, as I think I'm misunderstanding what you mean by machines that are smarter than humans. Are you referring to concepts such as machine learning?
That is the sort of thing that would represent real machine learning, and it would be indicative of machine intelligence. That isn't to say that such a machine would necessarily be smarter than a human, as humans can do this sort of thing as well.
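As a minimal sketch of that "learn from data" idea, here is a toy linear model trained by gradient descent; the data, learning rate, and names are made-up values for illustration, not a claim about how real systems work.

```python
# Minimal sketch of learning from data: estimate the slope w of y = w*x
# from examples by gradient descent, then generalize to an unseen input.

def fit_slope(examples, lr=0.01, steps=1000):
    """Find w minimizing the squared error of the prediction w * x."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x  # gradient step on 0.5 * error**2
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = fit_slope(data)
print(round(w * 10.0))  # the learned rule generalizes to x = 10: about 20
```

The model was never told the rule "double the input"; it extracted that rule from examples, which is the sense in which learning differs from executing a fixed, pre-programmed procedure.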
A machine is smarter than a human when it can solve a broader range of problems in novel environments than humans can. This is my "off the top of my head" definition, so I'm certain you can probably find a better description online from a real expert. But think about what people with IQs of 150 can do that people with IQs of 100 can't: they have the capacity to learn harder concepts and solve harder problems. A person with an IQ of 150 might be able to become a bona fide mathematician who makes genuine contributions to an academic field or produces technological breakthroughs in industry. A person with an IQ of 100 is going to be less likely to do that. I think those sorts of comparisons still apply when comparing machine and human intelligence.