Old 12-11-2014, 08:12 AM
 
Ronaldon
144 posts, read 406,668 times
Reputation: 143


Article (there's a lot more on the same topic from other sources):
Stephen Hawking warns AI threatens mankind

Two of the smartest people in the world are getting loud with their claims that Artificial Intelligence is going to take over humanity.

I'll say this: no, it won't. Hawking's and Musk's claims are no more valid than the one I just made. It comes down to whether you can produce a rational explanation for your claims, grounded in evidence from the present world, which neither of them did, because there isn't any.

Can anybody disprove my claim of "no, it won't" and prove that what Hawking and Musk are saying is right? Are there even any grounds for a reasonable prediction that this is what will happen? As far as my research shows, no - there's absolutely no reason at this point to believe that AI might take over the world. NOTHING. And the paperclip maximizer doesn't prove the case at all.
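
For anyone who hasn't run into it: the paperclip maximizer is Nick Bostrom's thought experiment about an AI given the single goal of making paperclips, which ends up converting everything, including us, into paperclips, not out of malice but because nothing in its objective says to stop. A toy sketch of the idea, with the "world", the resources, and the conversion rate all invented purely for illustration:

[code]
# Toy sketch of the paperclip-maximizer thought experiment.
# All names and quantities here are hypothetical.
world = {"iron": 50, "forests": 30, "cities": 20}   # made-up resource units

def paperclips_from(units):
    return units * 10   # invented conversion rate: anything -> paperclips

total = 0
for resource in world:
    total += paperclips_from(world[resource])
    world[resource] = 0   # the objective has no term for leaving anything intact

print(total)   # 1000 paperclips, and nothing else left
[/code]

The experiment's point is that harm needs only a single-minded goal, not evil thoughts; my answer to it is in point 1 below: we're nowhere near building anything with that kind of open-ended agency in the first place.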

However, I'd be really interested to hear opinions on this. After all, how can two of the smartest people in the world be making BS claims, right?

Here's my reasoning:

1. Why would Artificial Intelligence ever care about "taking control of the humans"? Or is everybody implying that advanced AI will have emotions, motivation, and evil thoughts? As far as current research shows, this would be really ****ing hard to do; we're nowhere NEAR being able to create an artificial mind with motivation, emotions, etc. similar to biological intelligence.

2. We don't even completely understand our own brains yet. We don't know what may be uncovered in the future about what we're capable of as human beings. We don't even know the limits of biological intelligence yet. How in the world will someone who doesn't even know *exactly* where the motivation for survival in our brains comes from be able to implement it in an AI?

3. Even if creating an "evil" AI becomes possible at some point in the future, why would anyone create such a thing? Right now there are millions of things scientists could develop and produce that would ultimately kill us (now or later), or simply nuke the whole ****ing world, but they don't do it, because they understand the repercussions. How is "motivated AI with potentially evil thoughts" any different? If it were possible, wouldn't they go: "hmmm, maybe we shouldn't create something that, once it's live, will likely start taking away people's freedom"?

Let me know your thoughts, guys. I hope I created the thread properly, but if not, let me know and I'll try to fix it. Thank you!

Last edited by Oldhag1; 12-11-2014 at 09:25 AM. Reason: Clarity

 
Old 12-11-2014, 02:51 PM
 
1,356 posts, read 1,277,801 times
Reputation: 877
Quote:
Originally Posted by Ronaldon
(full opening post quoted; snipped here, see above)

I don't believe that AI will destroy humanity. AI is an evolutionary process; it is an extension of the intrinsic nature of mass and energy in our universe. We are an information process, and we are this because of the nature of mass and energy. That nature is to become a bearer of information even at the most basic levels, down to our atoms and molecules. The human genome is passed from one generation to another.

Look at the fact that carbon, with its 4 valence electrons in the second energy level, is the basis for organic compounds and ultimately all living things.

Look at the fact that silicon, with its 4 valence electrons in the third energy level, is the basis for AI.

Both are in the same group (group 14) of the periodic table. Hence, they have four electrons in the outer energy level. They share the +2 and +4 oxidation states. And both form giant covalent lattices. But silicon does not occur naturally in its pure form; it exists tied up in oxides and silicates. Silicon needed the hand of man to become what will eventually be an independent information process. That process should not destroy its creator, because destroying us would lose information, and information is the most valuable thing in any universe.
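
A quick check of the four-valence-electron claim, using the naive Bohr shell model (shells fill 2, 8, 18, 32 in order). This simplification is adequate for light elements like carbon and silicon, though it breaks down further along the periodic table:

[code]
# Naive Bohr shell filling: fine for C (Z=6) and Si (Z=14),
# unreliable for heavier elements where subshells reorder.
def valence_electrons(atomic_number):
    for capacity in (2, 8, 18, 32):
        if atomic_number <= capacity:
            return atomic_number      # what's left sits in the outer shell
        atomic_number -= capacity     # fill this shell, move outward

print(valence_electrons(6))    # carbon:  2, 4     -> 4 valence electrons
print(valence_electrons(14))   # silicon: 2, 8, 4  -> 4 valence electrons
[/code]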

I disagree with you that we don't understand our own brains yet; we understand far more than you would believe, and we are getting at the basics of how the brain works. Take a look at IBM's Watson: it has algorithms that learn, develop, and write code on their own, in a way that mimics human thought. AI has already defeated the best human chess champions (Deep Blue beating Kasparov) and the best human Jeopardy! champions (Watson itself). The workings of the human brain are being understood, replicated, and successfully applied.
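
To make "algorithms that learn" concrete, here is about the simplest learning program there is: a toy perceptron picking up the logical AND function from labelled examples. This is only an illustration of the idea; Watson's actual question-answering pipeline (DeepQA) is enormously more complex, and the data and learning rate below are invented:

[code]
# Toy perceptron learning logical AND from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1           # weights, bias, learning rate

for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out                 # how wrong was the guess?
        w[0] += lr * err * x1              # nudge the weights toward
        w[1] += lr * err * x2              # the right answer
        b += lr * err

for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0)
[/code]

Nobody would call this thinking, but it shows mechanically what "learning" means: the program improves its own behavior from examples instead of following fixed rules.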

Take some time to look at this TED talk.

Ray Kurzweil: Get ready for hybrid thinking | Talk Video | TED.com


Because AI was kickstarted by man, it will have man's flaws and limitations, at least at the very beginning. We will merge with it; that process has already begun.
 
Old 12-11-2014, 03:05 PM
 
Location: Connecticut shore but on a hill
2,619 posts, read 7,031,071 times
Reputation: 3344
Quote:
Originally Posted by Ronaldon
(opening post quoted again, snipped; the numbers below were inserted by this poster for reference)
1. Can anybody disprove my claim of "no, it won't" and 2. prove that what Hawking and Musk are saying is right?
3. Why would Artificial Intelligence ever care about "taking control of the humans"?
4. In case creating an "evil" AI becomes possible, why would someone create such a thing?
An interesting question, but it is unknowable:
1 and 2 are both unprovable. In a case like this, the negative can't be proven, and neither can the positive.
3. This too is unknowable. At present there is no AI with sufficient agency to even attempt such control. And if there were, its motivations couldn't possibly be predicted based on what we know now.
4. The "why" doesn't matter. Similarly, an AI with potential agency over itself and humans could motivate itself to harm humans in what it perceives as its self-interest. I'm reminded of this old Star Trek episode.

Last but not least, just because Hawking is a genius in some things doesn't make him one in all things. When such people hold forth outside their field of expertise, they're speculating the same as everybody else. Maybe he's a Johnny Depp fan and watched Transcendence a few too many times.