Some people on here have said computers are no longer advancing as fast as they used to, and I tell them that is not true. Here is more proof to back up what I say!
AN IBM BREAKTHROUGH ENSURES SILICON WILL KEEP SHRINKING
THE LIMITS OF silicon have not been reached quite yet.
Today, an IBM-led group of researchers detailed a breakthrough transistor design, one that will enable processors to continue their Moore's Law march toward smaller, more affordable iterations. Better still? They achieved it not with carbon nanotubes or some other theoretical solution, but with an inventive new process that actually works, and should scale up to the demands of mass manufacturing within several years.
That should also, conveniently enough, be just in time to power the self-driving cars, on-board artificial intelligence, and 5G sensors that comprise the ambitions of nearly every major tech player today—which was no sure thing.
I recall an experiment from a few years ago. What would a computer make of the Internet, if set loose to data mine? The computer focused on two images. One was of a cat, presumably because so many people post pictures of their cats. The other image it fixated on was... a spatula? Where that came from, I don't know.
I have to wonder if the transition from #2 to #3 will produce a form of AI that includes quirkiness, if not outright strangeness.
3. Weak General AI. Oddly enough, somewhat similar to a human idiot savant. Superior to humans in some respects, but subpar in others.
4. Strong General AI. Self improvement possible, leading to intelligence explosion and....
5. Super AI. The singularity.
The transition from #4 to #5 could happen quite naturally, without additional human input. An AI that became truly sentient could evolve on a timescale of days, rather than the hundreds of years biological evolution requires.
Humans would share their dominant position in the world with another sentient being for a short time (perhaps several weeks to several years) before being eclipsed by non-organic intelligence.
I still think that we are more likely to merge into tech than to be replaced by it. (Although we will eventually reach a point where the integration is so extensive that we might lose all interest in even trying to define a difference between "machine-augmented humans" and "partially organic machines.")
One thing to consider is "will to live" along with related concepts such as fear of death, hope for future, sheer stubbornness, parental protectiveness, and pride of social membership (i.e., "team spirit", nationalism, etc.). I'm sure we will try to design machines with some capacity for self-preservation, but so long as we don't design them to have an intense will to live that overrides all other considerations, I doubt that machine will set out to systematically exterminate humans. What's more likely is that we will keep merging new machine capacities into our human physiology (integrated brain chips, bionics, etc.).
One obvious wild-card unknown is whether machines, as such, might develop a powerful will to live despite our best efforts to prevent it. Until we really understand the nature of sentience and related emotional/cognitive capacities (fear, love, curiosity, etc.) we can't really be sure that sufficiently complex AIs won't develop sentience of their own along with their own sense of priorities. My main point is that a super-high priority for us as we develop AI should be to understand the nature of sentience. How/why do humans develop sentience? How intelligent can a machine be without sentience? We have very few clues at this point. Getting some clues needs to be a top priority.
I wonder where the fear that AI will destroy people came from. We have no example of this ever happening. How was this fear seeded? Oh, I know: Hollywood movies. People grow up watching movies depicting aliens attacking Earth, and the seed was planted.
Or perhaps some people are projecting the terrible things humans are capable of onto robots and AI. Humans can be greedy, selfish, mean, nasty, murderous, psychotic, etc. Can AI and robots acquire these behaviors and values? How would they?
The people alive today, along with their children and grandchildren, will be dead before AI and robots are smart enough to become significant. There is nothing for me to worry about, as I am all for biological humans being replaced by far better digital/robotic versions of ourselves.
Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Discusses the intelligence explosion and the first superintelligence.
There are a few good books on this topic, and I am slowly reading them.
In an interview with the Singularity.FM podcast, Nye said that he thinks that the machine revolution will not be as incredible as predicted. Since humans are making the machines, we don’t need to worry about a sudden onset of artificial intelligence taking over and replacing us, despite what Ray Kurzweil and Elon Musk worry about.