City-Data Forum > General Forums > Science and Technology
Old 06-11-2017, 11:56 PM
 
2,639 posts, read 1,992,877 times
Reputation: 1988

Monck suggests that some version of AI might be merged with humans in a blended intelligence. This seems akin to Intelligence Amplification (IA).


I can imagine other scenarios, in which humans cooperate with, but do not directly interface with, AIs:

1. Some form of AI seeks out humans as allies.

2. Humanity develops a complex of (non-singularity) intelligences that, collectively, are greater than the sum of the parts.

Last edited by Tim Randal Walker; 06-12-2017 at 12:50 AM..

 
Old 06-14-2017, 10:10 AM
 
Location: Pueblo - Colorado's Second City
12,262 posts, read 24,452,401 times
Reputation: 4395
Some people on here have said computers are not advancing as fast as they used to, and I tell them that is not true. Here is more proof to back up what I say!


An IBM Breakthrough Ensures Silicon Will Keep Shrinking

The limits of silicon have not been reached quite yet.

Today, an IBM-led group of researchers has detailed a breakthrough transistor design, one that will enable processors to continue their Moore’s Law march toward smaller, more affordable iterations. Better still? They achieved it not with carbon nanotubes or some other theoretical solution, but with an inventive new process that actually works, and should scale up to the demands of mass manufacturing within several years.

That should also, conveniently enough, be just in time to power the self-driving cars, on-board artificial intelligence, and 5G sensors that comprise the ambitions of nearly every major tech player today—which was no sure thing.


The link: https://www.wired.com/2017/06/ibm-si...mbid=social_fb
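The Moore's Law framing in the article can be put in rough numbers. Here is a minimal sketch, assuming a steady doubling of transistor counts every two years; the ~19.2 billion baseline figure is my own illustrative assumption, not from the article:

```python
# Toy Moore's-law projection: transistor counts assumed to double
# every ~2 years from an assumed 2017 baseline.

def projected_transistors(base_count, base_year, target_year, doubling_period=2.0):
    """Project a transistor count forward assuming steady doubling."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

base = 19.2e9  # assumed 2017 flagship chip (illustrative, not sourced)
for year in (2019, 2023, 2027):
    count = projected_transistors(base, 2017, year)
    print(f"{year}: ~{count / 1e9:.0f}B transistors")
```

The point is just how steep steady doubling is: under these assumptions the count grows roughly 32-fold over a decade.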
 
Old 06-14-2017, 11:54 PM
 
2,639 posts, read 1,992,877 times
Reputation: 1988
Came across a classification scheme for Artificial Intelligences. The different tiers:

1. Weak Narrow AI. Does one thing well. Example: a calculator.

2. Strong Narrow AI. Somewhat greater capacity. Example: Siri.

3. Weak General AI. Oddly enough, somewhat similar to a human idiot savant. Superior to humans in some respects, but subpar in others.

4. Strong General AI. Self-improvement possible, leading to an intelligence explosion and....

5. Super AI. The singularity.
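The five tiers above can be sketched as an ordered enumeration. The names and ordering follow the post; the `is_general` helper and its cutoff are my own illustrative additions:

```python
# A minimal sketch of the five-tier AI taxonomy from the post,
# expressed as an ordered IntEnum so tiers can be compared.
from enum import IntEnum

class AITier(IntEnum):
    WEAK_NARROW = 1    # does one thing well (e.g., a calculator)
    STRONG_NARROW = 2  # broader narrow skill (e.g., Siri)
    WEAK_GENERAL = 3   # savant-like: superhuman in spots, subpar elsewhere
    STRONG_GENERAL = 4 # self-improvement becomes possible
    SUPER = 5          # the singularity

def is_general(tier: AITier) -> bool:
    """Illustrative cutoff: tiers 3 and up count as 'general'."""
    return tier >= AITier.WEAK_GENERAL

print(is_general(AITier.STRONG_NARROW))  # False
print(is_general(AITier.SUPER))          # True
```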
 
Old 06-15-2017, 12:25 AM
 
2,639 posts, read 1,992,877 times
Reputation: 1988
I recall an experiment from a few years ago: what would a computer make of the Internet if set loose to data-mine it? The computer focused on two kinds of images. One was the cat; it was suggested that this was because so many people post pictures of their cats. The other image the computer focused on was... a spatula? Where that came from, I don't know.

I have to wonder if the transition from #2 to #3 will mean a form of AI that will include quirkiness, if not outright strangeness.
 
Old 06-18-2017, 04:32 PM
 
Location: North West Arkansas (zone 6b)
2,776 posts, read 3,244,991 times
Reputation: 3912
Quote:
Originally Posted by Tim Randal Walker View Post
Came across a classification scheme for Artificial Intelligences. The different tiers:

1. Weak Narrow AI. Does one thing well. Example: a calculator.

2. Strong Narrow AI. Somewhat greater capacity. Example: Siri.

3. Weak General AI. Oddly enough, somewhat similar to a human idiot savant. Superior to humans in some respects, but subpar in others.

4. Strong General AI. Self-improvement possible, leading to an intelligence explosion and....

5. Super AI. The singularity.
The transition from #4 to #5 could happen quite naturally, without additional human input. If AI actually became sentient, it could usher in evolutionary growth measured in days instead of the hundreds of years required by biological evolution.

Humans would share their dominant position in the world with another sentient being for a short time (perhaps several weeks to several years) before being eclipsed by non-organic intelligence.
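The "days instead of hundreds of years" idea can be illustrated with a toy compounding model. Every number here (the 10% gain per cycle, the one-day cycle time, the 1000x target) is an illustrative assumption, not a prediction:

```python
# Toy model of an "intelligence explosion": each design cycle
# improves capability by a constant factor, and cycles are fast.

def cycles_to_exceed(start, target, improvement=1.10):
    """Count improvement cycles until `start` grows past `target`."""
    level, cycles = start, 0
    while level <= target:
        level *= improvement
        cycles += 1
    return cycles

# With a 10% gain per cycle, going from human-level (1.0)
# to 1000x human-level takes only 73 cycles -- about ten
# weeks if each cycle takes a day.
print(cycles_to_exceed(1.0, 1000.0, improvement=1.10), "cycles")
```

The takeaway is just the shape of compounding growth: even modest per-cycle gains reach large multiples quickly once cycle time is short.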
 
Old 06-19-2017, 06:42 AM
 
Location: Kent, Ohio
3,429 posts, read 2,730,990 times
Reputation: 1667
Quote:
Originally Posted by gunslinger256 View Post
Humans would share their dominant position in the world with another sentient being for a short time (perhaps several weeks to several years) before being eclipsed by non-organic intelligence.
I still think that we are more likely to merge into tech than to be replaced by it. (Although we will eventually reach a point where the integration is so extensive that we might lose all interest in even trying to define a difference between "machine-augmented humans" and "partially organic machines".)

One thing to consider is "will to live", along with related concepts such as fear of death, hope for the future, sheer stubbornness, parental protectiveness, and pride of social membership (i.e., "team spirit", nationalism, etc.). I'm sure we will try to design machines with some capacity for self-preservation, but so long as we don't design them to have an intense will to live that overrides all other considerations, I doubt that machines will set out to systematically exterminate humans. What's more likely is that we will keep merging new machine capacities into our human physiology (integrated brain chips, bionics, etc.).

One obvious wild-card unknown is whether machines, as such, might develop a powerful will to live despite our best efforts to prevent it. Until we really understand the nature of sentience and related emotional/cognitive capacities (fear, love, curiosity, etc.) we can't really be sure that sufficiently complex AIs won't develop sentience of their own along with their own sense of priorities. My main point is that a super-high priority for us as we develop AI should be to understand the nature of sentience. How/why do humans develop sentience? How intelligent can a machine be without sentience? We have very few clues at this point. Getting some clues needs to be a top priority.
 
Old 06-19-2017, 11:31 AM
 
8,943 posts, read 11,774,686 times
Reputation: 10870
I wonder where the fear that AI will destroy people came from. We have no example of this ever happening. How was this fear seeded? Oh, I know: Hollywood movies. People grow up watching movies depicting aliens attacking Earth, and the seed was planted.

Or perhaps some people are projecting the terrible things humans are capable of onto robots and AI. Humans can be greedy, selfish, mean, nasty, murderous, psychotic, etc. Can AI and robots acquire these behaviors and values? How would they?

The people alive today, along with their children and grandchildren, will be dead before AI and robots are smart enough to become significant. There is nothing for me to worry about, as I am all for biological humans being replaced by way better digital/robotic versions of ourselves.
 
Old 06-21-2017, 11:00 PM
 
2,639 posts, read 1,992,877 times
Reputation: 1988
Book, copyright 2013

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Discusses the intelligence explosion and the first superintelligence.
 
Old 06-22-2017, 10:49 AM
 
Location: Pueblo - Colorado's Second City
12,262 posts, read 24,452,401 times
Reputation: 4395
Quote:
Originally Posted by Tim Randal Walker View Post
Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Discusses the intelligence explosion and the first superintelligence.
There are a few good books on this topic and I am slowly reading them.
 
Old 06-24-2017, 11:32 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,761 posts, read 7,254,407 times
Reputation: 7528
I think Bill Nye makes an accurate assessment.

Quote:
In an interview with the Singularity.FM podcast, Nye said that he thinks that the machine revolution will not be as incredible as predicted. Since humans are making the machines, we don’t need to worry about a sudden onset of artificial intelligence taking over and replacing us, despite what Ray Kurzweil and Elon Musk worry about.
Bill Nye Disses Ray Kurzweil's Singularity Prediction

© 2005-2024, Advameg, Inc. · Please obey Forum Rules · Terms of Use and Privacy Policy · Bug Bounty
