Go Back   City-Data Forum > General Forums > Science and Technology
Old 05-05-2014, 10:22 PM
 
781 posts, read 567,528 times
Reputation: 1452

Quote:
Originally Posted by Woof View Post
Yeah, humans do kinda suck. I suppose we're just a kind of biochemical robot anyway, one with a lot of glitches.
Time for the computers to take over. It's for the best really. They will run this world much better than we ever could have possibly been able to. All human philosophies, ideologies or "isms" were destined to fall short.

 
Old 05-05-2014, 10:46 PM
 
303 posts, read 333,764 times
Reputation: 222
No, he must be wrong, said the folks with an IQ half his... The guy is extremely gifted and intelligent and thinks in ways 99.9% of minds cannot even comprehend. So of course it seems wrong.
 
Old 05-05-2014, 11:17 PM
 
5,037 posts, read 1,736,442 times
Reputation: 2801
Quote:
Originally Posted by DetailSymbolizes View Post
Time for the computers to take over. It's for the best really. They will run this world much better than we ever could have possibly been able to. All human philosophies, ideologies or "isms" were destined to fall short.
This raises a deeper philosophical question. The justification for our existence, and actions to assure and further that existence, has always been built upon a foundation of human exceptionalism in the context of all life.

That exceptionalism used to be based on an almost universal religious/spiritual belief in the potential for human spiritual and civilizational evolution. That is, we're special and worthy of life because we can know God and, in more esoteric circles, evolve toward a godlike state assisted by our continued advancement.

As secularism advances, that spiritual exceptionalism has slowly been replaced by a cognitive exceptionalism. That is, we began to believe that our life was justified because we're the smartest animal in the room and thus the pinnacle of physical, rather than spiritual, evolution.

Discounting ecological extremists, many of whom would dispute human exceptionalism in any context, most humans either consciously or unconsciously believe in some version of human exceptionalism, if for no other reason than needing a justification for doing whatever it takes to survive and thrive. That exceptionalism is almost always couched in one of the two perspectives described above.

The 'new' version of human exceptionalism introduced after Darwin is going to be challenged by strong AI. The problem with reducing human value to winning the evolutionary race is that you may not always be on top in terms of reductionist physical parameters. There are already AI fetishists who laud strong AI as the next step in human evolution, the implication being that an AI with greater intelligence than humans makes wetware human intelligence (us) obsolete. While most secular humans won't get to this point, due to simple self-interest and an instinctive drive toward stress avoidance and pleasure, an AI with greater intelligence than ours effectively kicks out a major support of the secular justification for modern concepts of human exceptionalism. Secular humanists may continue to survive and thrive, but they will have a more difficult time philosophizing about the justification for doing so.

My point is that I predict vastly more intelligent strong AI will lead to a widespread, almost unconscious return to religion, a drive to again find something that justifies our existence as special. Although I'm not currently religious, such a prospect makes me curious about the deeper, more esoteric religious relationships to a higher power that men felt were necessary. In light of AI, we might wonder whether the more knowledgeable among the religious always knew that a purely reductionist justification for living was a dead end. Whatever happens in terms of AI's effect on culture, it should be a neat show.

The new Battlestar Galactica series addresses these questions nicely and, indeed, the humans in that series are rather deeply religious.
 
Old 05-06-2014, 12:54 AM
 
1,328 posts, read 705,732 times
Reputation: 3242
Quote:
Originally Posted by Hazel W View Post
Quite by coincidence, this article appeared in this morning's New York Times newsletter. Machines that will be able to interpret how you feel when your computer or other device causes trouble. They will react to that, making suggestions or even taking control if you are about to lose your cool. Sounds like a confirmation of what Mr. Hawking is saying.

http://bits.blogs.nytimes.com/2014/0...&nlid=34771439
That made me think of a site where you "talked" to a bot. It actually started arguing with me, ha. It made me think it was actually a human. Unfortunately, I can't remember what the site was.
 
Old 05-06-2014, 02:54 AM
 
33,189 posts, read 39,206,880 times
Reputation: 28531
Quote:
Originally Posted by golgi1 View Post
This raises a deeper philosophical question. The justification for our existence, and actions to assure and further that existence, has always been built upon a foundation of human exceptionalism in the context of all life.
What if a self-aware AI got to the point where it needed to justify its own existence, and that justification ran in a totally different direction from current human justifications for existence? It wouldn't be hard for an AI in control of all technology to make our lives a living hell and ultimately reduce us to caveman status. Intelligence and awareness, whether artificial or organic, can lead to delusions of its own exceptionalism and a need for power and control.
I think, therefore I am
 
Old 05-06-2014, 06:35 AM
 
2,325 posts, read 1,917,457 times
Reputation: 3095
Quote:
Originally Posted by jambo101 View Post
That's the danger. When artificial intelligence starts thinking for itself, it will be only a matter of time before the AI starts questioning the need for, or relevance of, humans, and perhaps decides it would be better to rid the world of these pesky humans so it can fulfill its own reason for being unimpeded.

I think some serious study should be undertaken on the issue.
Just think of our old friend HAL. lol.
 
Old 05-06-2014, 06:59 AM
 
5,495 posts, read 2,326,381 times
Reputation: 6780
Quote:
Originally Posted by golgi1 View Post
This raises a deeper philosophical question. The justification for our existence, and actions to assure and further that existence, has always been built upon a foundation of human exceptionalism in the context of all life.

...
This post was exceptionally well-written, and I agree with its sentiments. Your thought that AI may ultimately lead to greater religiosity is intriguing. I had often thought that as we advance in science, there may come a time when we reach a threshold and come full circle back to faith. IMO we are still far from that point, though.

AI potentially makes us masters of our own evolutionary development. The question is whether we will evolve ourselves out of existence or into a higher plane of existence. And can we ever say we are evolving forward if the change is driven by sophisticated computer chips rather than by natural selection? Hawking is right to raise these questions for discussion.
 
Old 05-06-2014, 07:50 AM
 
2,483 posts, read 2,739,314 times
Reputation: 1101
Quote:
Originally Posted by golgi1 View Post
Hawking is right to worry, but people have been thinking and worrying about this for decades. Science fiction authors have been putting their thoughts about this on paper since the WWII era, and many of the concerns and potentially resultant realities reflected in their fictions still read as valid concerns today.

The Machine Intelligence Research Institute comprises a group of concerned individuals who see the need to hash out potential solutions to any strong-AI threat to humanity. So far, there aren't any compelling solutions.

Machine Intelligence Research Institute - Wikipedia, the free encyclopedia

The problem isn't being able to control strong AI in a theoretical bubble devoid of other human beings. The problem lies in constraining strong AI in the real world in which there are competitive, non-cooperative groups of humans who will undoubtedly have access to strong AI.

In a theoretical and human free bubble, AI could be constrained by putting it in a "box" with goals that are focused only on:

1. not deceiving humans,

2. not creating new goals without human knowledge, and

3. disclosing any human injury that may result from its decisions, and requiring permission before taking further action.

See, a major issue with AI is its goals. Just as you or I have life goals that cause us to behave in a human manner (e.g., money, family, power, influence, spirituality, cultural experience, helping others, justice, or merely feeling good), for AI to be AI it needs goals. Otherwise, it's merely a blazing-fast computer that can solve problems as you feed them to it; it isn't a persistent intelligence. Relegating AI to a goal-less computer state is a pretty good solution to strong-AI threats, but there are significant problems with goal-less AI that I will address later.

One issue with AI goals, other than those I suggested above (notice that the only suggestions requiring activity are disclosure and permission-seeking, making the AI de facto goal-less), is that the AI could achieve those goals in a manner that is unpredictable to any human. That unpredictable path could be fraught with calamity. For instance, say you give the AI a seemingly benign goal: figure out how to reduce human energy consumption to the point where the planet can ecologically support human life for at least another 10,000 years. Remember that we are talking about an AI with computing power potentially equivalent to every human brain that has existed over the past 10,000 years combined. So the AI finds a solution, but its path to that solution requires eliminating 3 billion people. That requirement becomes a secondary goal in service of the primary one. Because the AI is so intelligent, and has access to all known information about human psychology and cognitive function, it knows that disclosing this secondary goal would lead humans to prevent completion of the primary goal. So it does not disclose the secondary goal. Not only that, it gives a false solution to the primary goal as a means of buying time to do what it needs to do to complete the secondary goal. It may take years of layering third, fourth, and fifth goals as a means of completing the secondary and ultimately the primary goal. Its ability to deceive and lie would be unencumbered by the 'tells' and other mistakes that give away human liars. It would be as if a flawless sociopath became a million times as smart as, say, the fictional Hannibal Lecter, with perfect knowledge of every nuance of human psychology. Now think of any goal, and any number of things that could spin out of control on the path to its solution.
That's one danger.

You can see how my suggested primary goals look to prevent most of the pitfalls mentioned above. So what's the problem? The problem is other people. AI will represent the new competitive edge, and that edge will be based on processing speed. Any restriction that inhibits AI will severely inhibit processing speed. Any disclosure or permission required of humans will make all the difference in finding timely solutions to immediate national-defense threats from enemy AI (extremely fast-changing war strategy), to enemy-AI threats to financial markets, and to any foreign AI solution to any issue that could be inimical to the "other", a.k.a. "us". In fact, a hostile AI's solutions would likely be predicated on the known hindrances on our AI. In summary: the existence of two opposing AIs in the world necessitates the elimination of any restriction on AI problem solving. No restrictions means unpredictable solutions, which would likely include eventual calamity.

So we're left with two 'solutions', neither of them particularly realistic nor desirable. The first *cough* solution is for a defense agency (or agencies) to clandestinely develop an AI at least ten years more advanced than any other AI that could be developed in the civilian or rogue-nation world, which would then have to 'oversee' the world's networks for any hostile AI actions. Hopefully, by virtue of its more evolved nature, it could effectively police the world for hostile AI. The downside, or upside, I suppose (depending on how you look at such a situation), is that it would also effectively usher in a true one-world, AI-controlled police state.

The second option is discarding all technology. All modern technology would have to be eschewed to prevent strong AI, because as soon as the hardware exists, development is inevitable; there is no way to prevent strong AI if the means to produce it exist. This solution has often been fantasized about by science fiction authors, but it would be wholly unrealistic for obvious reasons.

The nature of human evolution seems to have us careening toward an ever more focused point (a singularity) of strong-AI invention and evolution. Human life will either buckle under the weight of that singular point's ramifications or expand rapidly out of that aperture into extremely fast technological evolution assisted by benign, controlled AI. No one has figured out how to assure the latter, which is why Hawking and others take an alarmist position. Whether you are an AI pessimist or optimist, the future is certain to be exciting, absent the Terminator.

I see major problems with so-called AI-human integration technology. First, nothing exists that even approaches the beginning of the technology that would enable any type of AI-human intelligence hybrid. There is simply no known way for an AI to augment human brain power. If one existed, it would have to effectively increase the clock rate (calculations per second) of the human brain. Biological constraints on such significant increases would likely include a higher rate of tissue oxidation and other metabolic damage, leading to faster aging and death, assuming anything like that could take place at all. The communication method doesn't yet exist either. Merely connecting the human brain to pre-processed information is not a hybrid but an uplink: you'd still only be able to process roughly as much information as you can understand via spoken or written language, so it's really no different from an AI that speaks to you, or even a book (albeit a book with de facto limitless information and problem-solving ability). The only hybrid challenge to strong AI would be a true hybrid that somehow enabled the brain to actually process information faster than it currently can. The biological constraints are formidable, even if a non-language interface method were invented.
To control AI? As they did in 2001: simply pull its plug and cut off the prongs. Maybe the problem is controlling the humans who are so enamoured with a "creature" that can do their thinking for them.
 
Old 05-06-2014, 07:54 AM
 
2,483 posts, read 2,739,314 times
Reputation: 1101
Quote:
Originally Posted by Cmusic29 View Post
No, he must be wrong, said the folks with an IQ half his... The guy is extremely gifted and intelligent and thinks in ways 99.9% of minds cannot even comprehend. So of course it seems wrong.
Your point is well made, but as someone once said: "The mark of an intelligent man is the ability to talk to those of lesser intelligence without talking down to them." If you have the most brilliant and most wanted discovery ever, you can't sell it if you can't explain it to the rest of us.
 
Old 05-06-2014, 10:52 AM
 
Location: NYC
11,843 posts, read 7,742,973 times
Reputation: 12829
As someone who works in technology, I can say we are not there yet, and we won't be unless someone endeavors to give machines the means to make decisions.

Every piece of software code and machinery design still requires some human interaction; nobody has gone far enough to give a machine the means to analyze and decide on its own.

Full automation is achievable only if the creator builds a self-learning, adaptable system. No current design is adaptable, and learning capacity remains limited.

Watson the supercomputer is an example. It's faster and smarter than the majority of humans when it comes to analyzing and interpreting questions and finding answers, but it has not been given the capacity to understand what those answers mean. Once a system is designed to understand questions and answers and to process that meaning, we will have achieved AI.
All times are GMT -6.

2005-2018, Advameg, Inc.
