Stephen Hawking thinks too far ahead sometimes. He worries about aliens potentially being dangerous nomads who, having used up all the resources on their home world, now go from solar system to solar system, draining each new planet's resources and moving on.
He worries about humanity not surviving for thousands of years into the future unless we can live on other planets.
Now he's worried about AI.
He needs to focus on more immediate concerns. How about a strategy to keep a hydrogen bomb war from happening sometime in the next century? We'll quell the android rebellion as soon as we're reasonably sure we won't blow up Earth sometime in the next six months.
Understand, it's not simply looking the answer up in a database; the questions require reasoning and logic. And it's Ken Jennings it's competing against. If you are unfamiliar with Jeopardy, the winner gets to come back the next day; he was on the show for months.
Watson's an expert system, not an AI.
I'd guess AI is 20 years away. But some really really good expert systems that look a lot like AI are being seen now.
It's not as outlandish and Hollywood-esque as it may seem at first. Think of how rapidly technology has developed since just 1990. Quarter of a century, and we have a vast internet / big data industry, miniaturized computer systems at our fingertips, drones, self-driving cars on the horizon, prototype laser weapons on US ships, the beginning of truly wearable technology... And that's just 25 years.
Many of us may well be dead when we reach this point of AI, but it's quite possible that many of us won't be - and that your kids will live to see the day when a machine is able to self-guide its programming and augment its behavior at a very high level of sophistication and independence to reach goals. Google, Apple, IBM, DARPA are all working very hard at making this happen... Look at the past progress in 25 years and think out another 50-75 years. That's not that far away, folks.
Aside from the fact that most of our jobs will long be toast by then, we could face a much more direct existential threat when we reach the point where machine intelligence can rival our own. Once a machine is capable of forming and executing an independent agenda in furtherance of its goals, there is no reason to think that such an agenda has to be friendly to human values. The thought of how we put safeguards on something of this sort should start occupying some real estate in our national dialogue. This could be a lot riskier for humanity than nuclear weapons, and much harder to manage.
This could become the most important issue humanity has ever faced.
We aren't really that close to true AI though. And even so, the idea of a 'robot takeover' is silly to me. Technology, no matter how smart, is always built to serve humans. Notice that I said humans, not humanity. The wielder of the technology is the threat, not the technology itself. The discovery of nuclear fission, and the technology to harness and control it, benefits some: it helped end a war for the US and is used to generate power in various places across the globe, but it didn't serve Japan well in the 1940s or Chernobyl in 1986. The issue is not with the technology itself, but who it is intended to serve.
You mention drones and self-driving cars. How they work is misleading. They don't literally fly themselves any more than a conventional jet flies itself. Sure, it's the engine - a jet, not a human - that powers the aircraft, but where it goes is determined by a person. This is true of drones and self-driving cars. Drones are controlled from far away, which is good for saving the pilot's life in cases where the drone could be shot down (I have no problem with drones or the use of drones; I do have a problem with the current policy governing their use). Self-driving cars are programmed. They don't make a decision for you and likely never will, as inventing that would serve absolutely no purpose whatsoever.
The real threat is already here: constant surveillance. In order for a self-driving car to work, it has to be trackable via satellite, and whoever has access to that satellite could find you no matter where you are, and that's horrifying.
True AI? No, it's not, but it mimics human intelligence, and that's all you need.
There are actually very significant differences. Expert systems won't design and rewrite themselves, whereas an AI system could make further advances in itself. The difference is subtle but important.
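To make that difference concrete, here's a minimal sketch of how a classic expert system works (the rules and facts are hypothetical examples, not anything from Watson): all of its "knowledge" is a fixed, hand-written list of if-then rules. The program can only apply the rules it was given; nothing in it can author or rewrite a rule, which is exactly what an AI capable of self-improvement would need to do.

```python
# Toy rule-based expert system (hypothetical medical-triage rules).
# The rule list below IS the system's entire knowledge; a human wrote it,
# and the code can only apply these rules, never invent new ones.

RULES = [
    # (facts required, conclusion to add)
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward-chain: keep applying rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Given two observed symptoms, the system derives both conclusions,
# but it could never add a new rule to RULES on its own.
print(infer({"has_fever", "has_cough", "short_of_breath"}))
```

However sophisticated the rule base gets, the system's ceiling is whatever its human authors encoded - which is the sense in which Watson-style systems "look a lot like AI" without being one.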
AIs are a lot more dangerous, but with a corresponding reward. Think of it as nuclear power.