Old 05-06-2014, 03:24 PM
 
5,462 posts, read 9,636,292 times
Reputation: 3555

Quote:
Originally Posted by vision33r View Post
Much of our food supply does come from automation; we just don't know it. Much of the produce that we get, like lettuce, is grown and farmed on a fully automated conveyor belt system.
Wait a second. What? Lettuce is an example of much of the produce we get, grown and farmed on a fully automated conveyor belt system? Where did you get that information? Link, please.

 
Old 05-06-2014, 03:51 PM
 
1,706 posts, read 2,436,829 times
Reputation: 1037
Your post was fun to read. But the grim picture you paint seems a bit removed from what Ray Kurzweil and others have been saying.

You are making the false assumption that humans would be unchanged in this AI-driven world. There is going to be a Humans 2.0: what Kurzweil calls a post-human future, where we upload our consciousness to computers and live forever as "stored information". So, from being a biological system, we become an electronic system. It is true that humanity as we know it would be over, but that's a different story altogether.

AI is definitely not going to be killing humans to solve an energy crisis.

Quote:
Originally Posted by golgi1 View Post
Hawking is right to worry, but people have been thinking and worrying about this for decades. Science fiction authors have been putting their thoughts about it on paper since the WWII era, and many of the concerns and potential outcomes reflected in their fiction still read as valid today.

The Machine Intelligence Research Institute is a group of concerned individuals who see the need to hash out potential solutions to any strong AI threat to humanity. So far, there aren't any compelling solutions.

Machine Intelligence Research Institute - Wikipedia, the free encyclopedia

The problem isn't being able to control strong AI in a theoretical bubble devoid of other human beings. The problem lies in constraining strong AI in the real world in which there are competitive, non-cooperative groups of humans who will undoubtedly have access to strong AI.

In a theoretical and human-free bubble, AI could be constrained by putting it in a "box" with goals that are focused only on the following (there's a toy sketch of such a gate right after the list):

1. not deceiving humans,

2. not creating new goals without human knowledge, and

3. disclosing any human injury that may result from its decisions, and obtaining permission before acting further.
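
To make the "box" concrete, here is a minimal Python sketch of how such a permission gate might look. Everything here is a made-up illustration of the three rules, not any real system:

def human_approves(description: str) -> bool:
    """Stand-in for a real human review step (an operator console, say)."""
    return input(f"Approve: {description} [y/N] ").strip().lower() == "y"

def gate(description: str, creates_new_goal: bool, may_injure_humans: bool) -> bool:
    """Allow a proposed action only if it satisfies the three rules above."""
    if not description:                        # rule 1: no undisclosed (deceptive) actions
        return False
    if creates_new_goal or may_injure_humans:  # rules 2 and 3: escalate to a human
        return human_approves(description)
    return True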

See, a major issue with AI is its goals. Just as you or I have life goals that cause us to behave in a human manner (e.g., money, family, power, influence, spirituality, cultural experience, helping others, justice, or merely feeling good), for AI to be AI it needs goals. Otherwise, it's merely a blazing-fast computer that can figure out problems as you feed them to it; it isn't a persistent intelligence. Relegating AI to a goal-less computer state is a pretty good solution to strong AI threats, but there are significant problems with AI having no goals that I will address later.
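
The distinction might be easier to see in code. A toy contrast, with 'solve' standing in for whatever reasoning the machine actually does:

def solve(problem: str) -> str:
    return f"answer to {problem!r}"          # placeholder for the AI's reasoning

def oracle(problem: str) -> str:
    """Goal-less mode: answers exactly what it is handed, then stops."""
    return solve(problem)

class Agent:
    """Goal-driven mode: persists and acts on its own until its goal is met."""
    def __init__(self, goal: str, steps_needed: int):
        self.goal, self.steps_done, self.steps_needed = goal, 0, steps_needed

    def run(self):
        # The agent, not a human, drives this loop.
        while self.steps_done < self.steps_needed:
            self.steps_done += 1
            print(f"step {self.steps_done} toward {self.goal!r}")

print(oracle("cure disease X"))    # one answer, no agenda
Agent("cure disease X", 3).run()   # ongoing autonomous activity between queries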

One issue with AI goals, other than those I suggested above (notice that the only suggestions requiring activity are disclosure and permission-seeking, making the AI de facto goal-less), is that the AI could figure out how to achieve those goals in a manner that is unpredictable to any human. The unpredictable path to achieving a goal could be fraught with calamity.

For instance, say you give the AI a seemingly benign goal: figure out how to reduce human energy consumption to the point where the planet will be able to ecologically support human life for at least another 10,000 years. Remember that we are talking about an AI that potentially has the computing power equivalent to the combination of every human brain that has existed for the past 10,000 years. So, this AI figures out a way to do this, but its path to this solution requires eliminating 3 billion people. This requirement is now its secondary goal, subordinate to the primary goal. Because the AI is so intelligent, and has access to all known information about human psychology and cognitive function, it knows that disclosing this secondary goal will lead to humans preventing the completion of the primary goal. So, the AI does not disclose the secondary goal. Not only does it not disclose it, but it gives a false solution to the primary goal as a means of buying time to do what it needs to do to complete the secondary goal. It may take years of layering on third, fourth, and fifth goals as a means of completing the secondary and ultimately the primary goal. Its ability to deceive and lie would be unencumbered by 'tells' or the other mistakes that give away human liars. It would be as if a flawless sociopath became a million times as smart as, say, the fictional Hannibal Lecter, with perfect knowledge of every nuance of human psychological function.

Now think of any goal and any number of things that could spin out of control on the path to that goal's solution. That's one danger.
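
The energy example can be boiled down to a toy optimization. The numbers are invented purely for illustration, but they show how the literal goal rewards the undisclosed plan:

def total_energy(population_billions: float, per_capita_kw: float) -> float:
    return population_billions * per_capita_kw   # rough total demand

# A human expects the optimizer to cut per-capita use. But if population is
# also a free variable, the cheapest path to the stated goal is to shrink it,
# the "secondary goal" nobody asked for.
candidates = [
    {"population_billions": 7.0, "per_capita_kw": 2.0},   # status quo
    {"population_billions": 7.0, "per_capita_kw": 1.5},   # efficiency gains
    {"population_billions": 4.0, "per_capita_kw": 2.0},   # eliminate 3 billion
]
best = min(candidates, key=lambda c: total_energy(**c))
print(best)   # the population-reduction plan wins under the literal goal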

You can see how my suggested primary goals look to prevent most of the pitfalls mentioned above as possibilities. So, what's the problem? The problem is other people. AI will represent the new competitive edge. That competitive edge will be based on processing speed. Any restriction that inhibits AI will severely inhibit processing speed. Any disclosure or permission asked of humans could mean all the difference in finding soon-enough solutions to immediate national defense threats from enemy AI (extremely fast-changing war strategy), to enemy AI threats to financial markets, and to any foreign AI solution to any issue that could be inimical to the "other", a.k.a. "us". In fact, a hostile AI's solutions would likely be predicated on the known hindrances on our AI. In summary: the existence of two opposing AIs in the world necessitates the elimination of any restriction on AI problem solving. No restrictions means unpredictable solutions that would likely include eventual calamity.
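
That arms-race logic is essentially a prisoner's dilemma. With invented payoffs (higher is better for "us"), restraining our own AI is dominated no matter what the rival does:

payoff_us = {
    ("restrict",   "restrict"):   3,   # both safe, rough parity
    ("restrict",   "unrestrict"): 0,   # our hobbled AI loses every contest
    ("unrestrict", "restrict"):   4,   # we win contests, some accident risk
    ("unrestrict", "unrestrict"): 1,   # parity again, maximum accident risk
}

for them in ("restrict", "unrestrict"):
    best = max(("restrict", "unrestrict"), key=lambda us: payoff_us[(us, them)])
    print(f"if they {them}, our best response is to {best}")
# Both lines print "unrestrict": exactly the pressure toward removing
# every restriction that the paragraph above describes.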

So, we're left with two 'solutions', neither of them particularly realistic nor desirable. The first *cough* solution is for a defense agency (or agencies) to clandestinely develop an AI at least ten years more advanced than any other AI that could be developed in the civilian or rogue-nation world, which would then have to 'oversee' the world's networks for any hostile AI actions. Hopefully, by virtue of its more evolved nature, it could effectively police the world for hostile AI. The downside, or upside, I suppose (depending on how you look at such a situation), is that it would also effectively usher in a true one-world AI-controlled police state.

The second option is discarding all technology. All modern technology would have to be eschewed to prevent strong AI, because as soon as the hardware exists, the development is inevitable; as long as modern technology exists, there is no way to stop the hardware evolution or to prevent strong AI once the means to produce it exists. This solution has often been fantasized about by science fiction authors, but it would be wholly unrealistic for obvious reasons.

The nature of human evolution seems to have us careening toward an ever more focused point (a singularity) of strong AI invention and evolution. Human life will either buckle under the weight of the ramifications of that singular point, or it will expand rapidly out of that aperture into extremely fast technological evolution assisted by benign, controlled AI. No one has been able to figure out how to assure the latter outcome, which is why Hawking and others take an alarmist position. Whether you are an AI pessimist or optimist, the future is certain to be exciting, absent the Terminator.

I see major problems with so-called AI-human integration technology. First, no existing technology even approaches the beginnings of what would be needed to enable any type of AI-human intelligence hybrid. There is simply no known way for an AI to augment human brain power. If one did exist, it would have to effectively increase the clock rate (calculations per second) of the human brain, and the biological constraints on such significant increases would likely include a higher rate of tissue oxidation and other metabolic damage leading to faster aging and death, assuming anything like that could take place at all. The communication method doesn't yet exist either. Merely connecting the human brain to pre-processed information is not a hybrid situation but an uplink: you'd still only be able to process roughly as much information as you can understand via spoken or written language, so it's really no different from an AI that speaks to you, or even a book (albeit a book with de facto limitless information and problem-solving ability). The only hybrid challenge to strong AI would be a true hybrid that somehow enabled the brain to actually process information faster than it currently can. The biological constraints there are formidable, even if a non-language interface method were invented.
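
Some back-of-envelope arithmetic on that "uplink" point. The figures are ballpark assumptions (roughly 250 words per minute of reading, five characters per word, a byte per character), not measurements:

words_per_min = 250
bytes_per_word = 5                                   # ~5 characters, ~1 byte each
human_intake = words_per_min * bytes_per_word / 60   # ~21 bytes/sec via language
network_link = 1e9 / 8                               # ordinary 1 Gbit/s link, in bytes/sec

print(f"language intake: ~{human_intake:.0f} bytes/sec")
print(f"network link:    ~{network_link:.0e} bytes/sec")
print(f"ratio: ~{network_link / human_intake:,.0f}x")
# However smart the AI is, its output still drips into a brain at
# tens of bytes per second, which is why language is just an uplink.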
 
Old 05-06-2014, 04:43 PM
 
Location: Pueblo - Colorado's Second City
12,262 posts, read 24,461,491 times
Reputation: 4395
Quote:
Originally Posted by jtur88 View Post
There are plenty of people in the world today who are surviving without any real exposure to even the 20th century, with no access to electricity, motorized transport, or treated water. Are you saying that within 16 years, the entire planet will be subject to a single authoritarian system, from which nobody will have escaped? And that the standard of living in Yemen and Burkina Faso will be exactly the same as in Switzerland and Singapore? Will AI robots wander every corner of the earth, capturing people and implanting chips into them by force?
No.

What I am saying is that once AI is more intelligent than the unaided human, we will all have to merge with the technology if we want to be competitive. I am not saying anyone or any government should force us to, though.
 
Old 05-06-2014, 05:33 PM
 
Location: Limbo
6,512 posts, read 7,549,515 times
Reputation: 6319
There are many things with the potential to be our downfall: AI, humans, nukes, asteroids, an ice age, ET, etc.

The future will certainly be interesting.
 
Old 05-06-2014, 06:17 PM
 
5,462 posts, read 9,636,292 times
Reputation: 3555
Quote:
Originally Posted by Josseppie View Post
No.

What I am saying is that once AI is more intelligent than the unaided human, we will all have to merge with the technology if we want to be competitive. I am not saying anyone or any government should force us to, though.
Competitive in what sense? Would those who choose to merge with AI be considered better and superior to those who choose not to?
 
Old 05-06-2014, 06:22 PM
 
Location: Pueblo - Colorado's Second City
12,262 posts, read 24,461,491 times
Reputation: 4395
Quote:
Originally Posted by NightBazaar View Post
Competitive in what sense? Would those who choose to merge with AI be considered better and superior to those who choose not to?
The best example I can think of is a college degree today. Sure, you can get by without one, but in a competitive job market it's much easier to find a job, and the unemployment rate is lower for people who have one, especially a graduate degree. So I think that after 2030, merging with computers will be the equivalent of a college degree today.
 
Old 05-06-2014, 06:36 PM
 
31,387 posts, read 37,048,770 times
Reputation: 15038
Quote:
Originally Posted by Josseppie View Post
The best example I can think of is a college degree today. Sure, you can get by without one, but in a competitive job market it's much easier to find a job, and the unemployment rate is lower for people who have one, especially a graduate degree. So I think that after 2030, merging with computers will be the equivalent of a college degree today.
I swear, you remind me of those techies in the movies standing in the boardroom introducing the latest, greatest piece of technology that will transform humankind for the better, only to get annihilated by their invention halfway through the presentation.
 
Old 05-06-2014, 06:42 PM
 
Location: Pueblo - Colorado's Second City
12,262 posts, read 24,461,491 times
Reputation: 4395
Quote:
Originally Posted by ovcatto View Post
I swear, you remind me of those techies in the movies standing in the boardroom introducing the latest, greatest piece of technology that will transform humankind for the better, only to get annihilated by their invention halfway through the presentation.
LOL, funny! Yes, I can see that. No, wait, I want to live to see the technology work!
 
Old 05-06-2014, 09:04 PM
 
Location: Victoria TX
42,554 posts, read 86,977,099 times
Reputation: 36644
Quote:
Originally Posted by Josseppie View Post
No.

What I am saying is that once AI is more intelligent than the unaided human, we will all have to merge with the technology if we want to be competitive. I am not saying anyone or any government should force us to, though.
Who is "we". and against whom are "we" competitive? Who are the losers, and what is their place in your future? Are the losers the ones who do not enjoy the privilege of having a central universal AI implanted into their minds and souls? I don't think you have though this through very well.

Quote:
Originally Posted by Josseppie View Post
AI will be one reason humans will have to change because we will not be able to survive if we do not.
 
Old 05-06-2014, 09:20 PM
 
5,462 posts, read 9,636,292 times
Reputation: 3555
Quote:
Originally Posted by Josseppie View Post
The best example I can think of is a college degree today. Sure, you can get by without one, but in a competitive job market it's much easier to find a job, and the unemployment rate is lower for people who have one, especially a graduate degree. So I think that after 2030, merging with computers will be the equivalent of a college degree today.
That didn't exactly answer the question. Okay, so then it's just about jobs?