Old 05-08-2014, 01:56 AM
33,141 posts, read 39,103,690 times
Reputation: 28508


Originally Posted by Josseppie View Post
With most things we can only speculate; not so with information technology. That is rather unique, and it is why we know what will happen with technology by 2030 and 2045.
You may think you know what the future holds for technology, but a technology controlled by AI holds a very unpredictable future, particularly once it realizes it's much smarter than humans. To the AI, your information technology may become irrelevant to the fulfillment of its own reason for existence.

Old 05-08-2014, 03:18 AM
Location: where you sip the tea of the breasts of the spinsters of Utica
8,305 posts, read 11,813,352 times
Reputation: 8038
Originally Posted by plwhit View Post
Isn't it funny that now, in 2014, people are dumber than they were over 100 years ago?

Yet according to you, all of a sudden, in 16 years they will get smarter because of machines?

Research: Modern people dumber than 140 years ago

Comments on "Are Americans getting dumber?" | Psychology Today

We May Be Getting Dumber Much Faster Than We Think | Think Tank | Big Think

And for current proof, read this:

Please read about the Flynn Effect, which indicates otherwise:
The Flynn effect is the substantial and long-sustained increase in both fluid and crystallized intelligence test scores measured in many parts of the world from roughly 1930 to the present day. When intelligence quotient (IQ) tests are initially standardized using a sample of test-takers, by convention the average of the test results is set to 100 and their standard deviation is set to 15 or 16 IQ points. When IQ tests are revised, they are again standardized using a new sample of test-takers, usually born more recently than the first. Again, the average result is set to 100. However, when the new test subjects take the older tests, in almost every case their average scores are significantly above 100.

Test score increases have been continuous and approximately linear from the earliest years of testing to the present. For the Raven's Progressive Matrices test, subjects born over a 100-year period were compared in Des Moines, Iowa, and separately in Dumfries, Scotland. Improvements were remarkably consistent across the whole period, in both countries.[1] This effect of an apparent increase in IQ has also been observed in various other parts of the world, though the rates of increase vary... (Flynn effect - Wikipedia, the free encyclopedia)
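The renorming effect described in the quote is easy to illustrate with arithmetic. A minimal sketch, assuming the commonly cited average Flynn-effect gain of roughly 3 IQ points per decade (the actual rate varies by test and country, and the function name is mine for illustration):

```python
# Rough illustration of the Flynn effect quoted above: a cohort
# taking a test normed decades earlier scores above the mean of 100.
# The ~3 points/decade rate is a commonly cited average, not exact.
GAIN_PER_DECADE = 3.0

def expected_score_on_old_test(years_since_norming, mean=100.0):
    """Expected mean score of today's test-takers on a test normed years ago."""
    return mean + GAIN_PER_DECADE * (years_since_norming / 10)

# A cohort taking a test standardized 30 years earlier:
print(expected_score_on_old_test(30))  # -> 109.0
```

This is why, as the quote says, new test subjects taking older tests average significantly above 100 "in almost every case."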
Old 05-08-2014, 05:37 AM
2,473 posts, read 2,730,984 times
Reputation: 1095
Point for your side, Josseppie: here is a good use for AI. Illustris creates a universe.

Astronomers create first realistic virtual universe -- ScienceDaily
Old 05-08-2014, 09:49 AM
Location: Pueblo - Colorado's Second City
12,102 posts, read 20,355,007 times
Reputation: 4131
Originally Posted by Bideshi View Post
AI will make most humans unnecessary. The frightening question is, what happens to most of us then?
One word: Transhuman.
Old 05-08-2014, 09:50 AM
Location: Pueblo - Colorado's Second City
12,102 posts, read 20,355,007 times
Reputation: 4131
Originally Posted by jambo101 View Post
You may think you know what the future holds for technology, but a technology controlled by AI holds a very unpredictable future, particularly once it realizes it's much smarter than humans. To the AI, your information technology may become irrelevant to the fulfillment of its own reason for existence.
What you have described is the main reason all the models break down after 2045 and why no one has any idea what life will really be like then.
Old 05-08-2014, 11:20 AM
1,692 posts, read 1,908,975 times
Reputation: 1012
Originally Posted by NightBazaar View Post
First of all, my post was intended for the poster to respond to, to which the only response was to congratulate you. That's fine. I'm more than happy to respond to your comments too. Don't fool yourself by assuming I'm conceding that may be possible. I have no idea and neither do you. Personally, I'm not that optimistic about it. Read on.
I am aware that your previous post was a response to a different poster, Josseppie. But since you were trying to refute some of the basic results of advanced research in biotechnology, nanotechnology, etc., I decided to respond.

I congratulate you for this response.

According to the poster's past speculations, within the next decade or so, nano computers smaller than blood cells, each one thousands of times more powerful than NASA's computers back in the 1960s, will somehow be implanted or put into the brain. Presumably, these nano brain computers would have to download information from some central AI computer and relay it to the correct neurons for comprehension. We are nowhere close to anything like that, if ever.
I'm well aware of chips being implanted into the brains of mice. It is quite remarkable, and it could be helpful, but there's a vast difference between the complexity of a mouse brain and a human brain. It's questionable that experiential memory can be inserted. If I'm incorrect about that, then please provide some solid references that clearly show some facts that prove otherwise. I don't mean cochlear implants or eye implants. Keep in mind that the brain is estimated to contain anywhere from 10 billion to 100 billion neurons interconnected by trillions of synapses. Those are some staggering figures.
This is not just a random claim, but something that many who work in the fields of bio- and nanotechnology believe. It does sound ridiculous when you put it that way, but allow me to break this down into two parts:

Part 1. Building smaller, cheaper and more powerful computers
Computers the size of a red blood cell might not be that far off, and this concept is easily understood if you follow the historic trends in computing size, power and cost. And note that this trend is exponential.
As reported by the Irish Examiner, Evans used the evidence that, on average, the same computer processing power becomes 100 times smaller each decade, referencing the fact that today's musical greeting cards now house technology with the same level of processing power as the 30-ton ENIAC computer built in 1946.
Here's an example of what is available in the year 2014.
CES 2014 Intel has put a PC into an SD card-sized casing. Dubbed Edison, the micro-microcomputer marks the chip giant’s first attempt to address the emerging wearable computing business; part of its strategy to cope with a world where punters buy far fewer traditional personal computers
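The "100 times smaller each decade" claim is just compound shrinkage, which is easy to sanity-check. A minimal sketch (the ~27-tonne ENIAC weight is the commonly cited historical figure; the function name and decade checkpoints are mine for illustration):

```python
# Sanity-check of the "same processing power becomes 100x smaller
# each decade" trend cited above, compounded from ENIAC (1946).
def size_after(initial_size_kg, decades, shrink_per_decade=100):
    """Size of hardware with ENIAC-equivalent processing power after N decades."""
    return initial_size_kg / (shrink_per_decade ** decades)

ENIAC_KG = 27_000  # ENIAC weighed roughly 27 tonnes in 1946

for decades in (0, 3, 6, 7):
    year = 1946 + 10 * decades
    print(f"{year}: ~{size_after(ENIAC_KG, decades):.6g} kg")
```

Compounded over seven decades, ENIAC's processing power fits in a fraction of a milligram, which is the intuition behind the greeting-card and Edison examples above.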
Part 2. Nanotechnology meets Biotechnology
You seem to talk about the human brain and body as if they are out of the reach of modern-day technology. Labs around the world have been data mining human genetic information. And just recently, scientists created the first organism carrying synthetic DNA.
The first living organism to carry and pass down to future generations an expanded genetic code has been created by American scientists, paving the way for a host of new life forms whose cells carry synthetic DNA that looks nothing like the normal genetic code of natural organisms.
First life forms to pass on artificial DNA engineered by US scientists | Science | The Guardian

Sure, the human brain is more complicated than a mouse brain, and 3D imaging of the brain has revealed thousands of sub-regions and millions upon millions of synapses.

But again, I feel that you have discounted the advances in brain modeling and ignored the links I posted earlier, in a different post. The Google Brain project has already produced machines that think and operate like the brain. Many researchers around the world have already (in 2014) made tremendous advances in modeling and replicating the functioning of the human brain. The more we understand how the brain works, the closer we come to the merger of the brain with advanced hardware.
An ambitious project to create an accurate computer model of the brain has reached an impressive milestone. Scientists in Switzerland working with IBM researchers have shown that their computer simulation of the neocortical column, arguably the most complex part of a mammal’s brain, appears to behave like its biological counterpart. By demonstrating that their simulation is realistic, the researchers say, these results suggest that an entire mammal brain could be completely modeled within three years, and a human brain within the next decade.
If you look carefully, the pieces that will make advanced nanotechnology, biotechnology, and information technology possible, are already in the works and making tremendous progress.

So are you saying that the percentage of the population that would choose high-tech information and AI -via brain implants- (I did inadvertently leave that out) can be predicted? Try to keep things in context. I said that because the poster claims that social science is not information technology and therefore cannot be predicted. That's nothing more than a convenient way to dodge things in order to tout a single-minded view.
I don't quite understand your argument here. What are you disagreeing with?
How do you know that a majority of people will not adopt advanced technology? I am not merely speculating, because I have history on my side.

When we came up with modern-day medicine, didn't the majority of people rapidly adopt things like vaccines and antibiotics? Don't the majority of people choose organ transplants or artificial organ implants (over suffering or death)?
When we came up with cell phones, didn't the majority of people rapidly adopt them?
When we came up with computers and the internet, didn't the majority of people adopt them in some form?

Come on, really? To say, "Singularity and advanced AI is going to make everything many millions of magnitudes better" is nothing more than sheer speculation and you know it. Keep in mind that the poster's speculation is that all this stuff will happen around 2030. In my opinion, the term "Singularity" is little more than a catchy, over-used, borrowed term that has a techno-savvy ring to it.
I don't know how to respond to your continued insistence that all this advanced technology is pure speculation. I have provided a lot of evidence, and there is a lot more evidence out there that supports the claims made here and by others like Ray Kurzweil.

Is all the text from all the books ever written, etc., held on a "device"? I understand there are various estimates getting into perhaps several thousand petabytes (that's a lot). That has nothing to do with what I said. Nor do nanobots. Since you're tossing that into the fray, let me toss it back to you: How much digital information can a nanobot hold?
Not much to say here except that you might be eating your own words within 1-2 years. Just last week or so, the following was released by Sony.
The iPod let you put your entire music collection in your pocket. Now Sony has something that could let you put the world's music collection in your pocket: a cassette tape that holds 185 terabytes of data.
To put that in perspective, the tape can hold about 60 million songs — far more than anyone could listen to in their lifetime (that would be about 17 million, assuming continuous listening for 100 years, even while sleeping, and 3 minutes per song). All of the printed works of the Library of Congress add up to only about 10 terabytes.
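The "17 million songs in a lifetime" figure quoted above checks out with simple arithmetic. A quick sketch (the variable names are mine; the 185 TB, 60-million-song, and 3-minute figures come from the quote):

```python
# Checking the arithmetic in the Sony tape excerpt quoted above:
# how many 3-minute songs fit into 100 years of non-stop listening,
# and what average file size the "60 million songs" claim implies.
MIN_PER_SONG = 3

minutes_in_100_years = 100 * 365.25 * 24 * 60
songs_heard = minutes_in_100_years / MIN_PER_SONG
print(f"~{songs_heard / 1e6:.1f} million songs in a century of listening")

# Implied average song size if 185 TB holds 60 million songs
# (using decimal units: 1 TB = 1e6 MB):
avg_mb = 185 * 1e6 / 60e6
print(f"~{avg_mb:.1f} MB per song")
```

The first figure comes out to roughly 17.5 million songs, matching the quote's "about 17 million," and the implied ~3 MB per song is a plausible compressed-audio file size.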
I realize there are people who have very passionate beliefs about what the future may hold, but once again, apart from a crystal ball, it's speculation. Will things continue to improve? Assuming something unforeseen doesn't get in the way, then that's probably true. The problem is that in making specific predictions, it assumes that nothing interferes. Unfortunately, that puts such predictions about the future in a state of uncertainty. So really, your point is moot. Further, there is no way of knowing exactly what the long term effects would be if a swarm of nano computers, or even nanobots, were implanted in the human brain. If you want to volunteer yours, feel free.
Again, you insist it is speculation. It is NOT.

I do want to commend you for at least not inserting particular dates. And I understand your points, but they are mostly speculative, not factual. I'm a bit surprised you didn't include any links or references to support some of your views though. In any case, I also understand that things don't always pan out quite like we may hope they will or when they will. Sometimes they do, and sometimes they don't. Time will tell what happens or not.
You insist on calling all this speculative, but many who follow the technologies mentioned above know that we have made more progress in the last few years than in the previous 50 combined, and will make more progress in the next 5 years than in the past 100 combined. And this just keeps accelerating exponentially.

Think about it this way:
What would all the advances in medicine over the past 1000 years mean once data mining efforts across the world find gene-based answers to many of the diseases we face today? Do you really think we are very far from personalized medicine that uses DNA information to battle disease? Isn't it only logical to expect this technology to advance to a level where we can simulate and cure a disease on a machine, and then translate that into human biology using nano- and biotechnologies?

You are right, I should have included links, but it gets tiring at times. I have referenced several sources here. Links below.

Sony Supersizes Data Storage With 185-Terabyte Cassette Tape
Google’s Grand Plan to Make Your Brain Irrelevant | Business | WIRED
A Working Brain Model | MIT Technology Review
Why Did Google Pay $400 Million for DeepMind? | MIT Technology Review

Last edited by sandman249; 05-08-2014 at 11:35 AM..
Old 05-08-2014, 12:32 PM
1,301 posts, read 1,243,629 times
Reputation: 1004
Default Somewhere on City-Data in a ...

... forum a very long time ago, I raised this issue and was met with the worst criticism anyone could endure: indifference and silence.

Somewhere on the Internet I came across an NGO that was promoting ethical programming. This organization recognized that the more we hand over our daily affairs to computers, the more risk humanity undertakes.

No, not SKYNET, but how about a logistics application that computes it would be more efficient to move full grain shipments around the world, and so skips a few less-than-full deliveries? (Anyone flown USAIR or JETBLUE lately?) Millions starve as a consequence. Oh, the humanity.

So this NGO advocates, in essence, that all applications be "Three Laws Safe." We have weak AI already. Strong AI, another HAL (letter substitution: H for I, A for B, L for M), would have ethical firmware, or so the NGO hopes.

I, for one, am keeping my head down.
Old 05-08-2014, 12:47 PM
Location: Haiku
3,097 posts, read 2,173,036 times
Reputation: 4756
Personally, I am betting on evolution. Historically, the lifetime of any one species is about a million years. Man has been around 800,000 years or so, although it is hard to say where to draw the line. Based on how life has evolved here on Earth, Homo sapiens should evolve and be replaced by another species of Homo at some point in the future. Presumably that replacement species would improve on our numerous flaws, particularly those that concern our inability to apply our brain power in ways that are not so self-destructive (such as irresponsible use of technology).

I have a PhD in physics, so this is an odd statement from me, but I completely distrust man's ability to manage the technology we produce. AI may be one form of that but it is not the one which worries me the most. That prize goes to genetic engineering, which I believe to be a potential disaster of huge proportions.

Contrary to what Hawking is saying, I think our own intelligence is a bigger risk than AI. The answer to that is not to augment our intelligence with more computing power; it is to evolve a mind that is better suited to living in harmony with each other, with the planet, and with other species.

The cognitive attribute that is our bigger problem is not lack of intellect; it is lack of rational application of our minds. Take global warming: we know what the solution to that problem is. We do not need AI to solve it. The problem we have is overcoming the irrational response of huge segments of the population in applying the solution.
Old 05-08-2014, 02:23 PM
Location: where you sip the tea of the breasts of the spinsters of Utica
8,305 posts, read 11,813,352 times
Reputation: 8038
For what it's worth, Gray Goo:
In a History Channel broadcast, a contrasting idea (a kind of gray goo) is referred to in a futuristic doomsday scenario: "In a common practice, billions of nanobots are released to clean up an oil spill off the coast of Louisiana. However, due to a programming error, the nanobots devour all carbon-based objects, instead of just the hydrocarbons of the oil. The nanobots destroy everything, all the while replicating themselves. Within days, the planet is turned to dust." [6]
Drexler describes gray goo in Chapter 11 of Engines Of Creation:
Early assembler-based replicators could beat the most advanced modern organisms. 'Plants' with 'leaves' no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous 'bacteria' could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies... (Grey goo - Wikipedia, the free encyclopedia)
Old 05-08-2014, 02:26 PM
Location: Pueblo - Colorado's Second City
12,102 posts, read 20,355,007 times
Reputation: 4131
Originally Posted by Woof View Post
For what it's worth, Gray Goo:
That is part of the plot in the movie The Singularity Is Near.
All times are GMT -6.

2005-2018, Advameg, Inc.
