
02-11-2012, 01:00 AM
What are some of the big reasons that are preventing us from reaching general AI?
I also heard that it's impossible to rely on first-order logic to achieve general AI, so I'm assuming that the next most likely option is to try to imitate nature.
At the same time, I read an article saying that chips are being developed that mimic neurons/synapses, and I was wondering whether this would have some good computational benefits as well. For instance, will neuronally-configured hardware one day (in the far-off future) replace the von Neumann model of computer architecture we have today? Would an advanced, brain-like configuration for a computer really be better in *every way* compared to our current model of computer architecture?
Hmm... anyone study this as their research? If not, still feel free to chime in.
http://www.stanford.edu/group/brains...neurogrid.html
http://www.stanford.edu/group/brains...html#Softwires

02-11-2012, 02:14 PM
Location: Wasilla Alaska
Interestingly enough, I was watching something on the Discovery Channel a while back where they were discussing the difficulties of making AI. One of the examples given was that if you look out a window and see someone walking past holding an umbrella, no matter what angle that umbrella is at, your brain can fill in the rest and knows that it's an umbrella.
Because of how computers work, you'd need to have a stored image of EVERY single angle that an umbrella could be at, and the computer would have to process everything in its image database to figure out what it was looking at.
Part of the problem is they don't really know how the brain processes the information, so they can't replicate that method, or even know whether it would be beneficial to do so.
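For what it's worth, here's a rough Python sketch of the brute-force approach being described (all names, sizes, and images here are made up for illustration): match a query image against a database of stored templates. Even at a coarse one-degree step, a single object already needs hundreds of templates, before you account for scale, lighting, or occlusion.
Code:
import numpy as np

def match_by_templates(image, template_db):
    """Compare the input against every stored template; return the
    label of the closest one (brute-force nearest neighbor)."""
    best_label, best_score = None, float("inf")
    for label, template in template_db:
        score = np.sum((image - template) ** 2)  # pixel-wise difference
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# 360 templates just to cover one rotation axis of one object, at a
# coarse 1-degree resolution -- random pixels stand in for real images.
rng = np.random.default_rng(0)
template_db = [("umbrella_%03d" % angle, rng.random((32, 32)))
               for angle in range(360)]
query = rng.random((32, 32))
print(match_by_templates(query, template_db))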

02-17-2012, 12:18 AM
Location: Conejo Valley, CA
Quote:
Originally Posted by avant-garde
I also heard that it's impossible to rely on first-order logic to achieve general AI, so I'm assuming that the next most likely option is to try to imitate nature.
I don't think "impossible" is the right word here. Firstly, "first-order logic" refers to a very general class of logical systems, though you probably have in mind classical first-order logic. In that case it has proven to be of limited use in AI, but at least in hindsight that seems pretty obvious. Classical logic models human deductive reasoning, but much of human decision making isn't deductive in nature. In the real world you need to make decisions with imperfect information, so deduction alone ends up being entirely inadequate for "general AI".
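To make that concrete, here's a toy Python sketch (all probabilities invented for illustration): a classical rule needs its premise to be flatly true or false, while a one-line Bayes update can revise a belief from a partial glimpse.
Code:
# Hypothetical numbers throughout -- just to contrast the two styles.

def deduce(saw_umbrella):
    # Classical rule: "if you saw an umbrella, they have an umbrella."
    # The premise must be plainly True or False; "70% sure" doesn't fit.
    return saw_umbrella

def bayes_update(prior, p_glimpse_if_umbrella, p_glimpse_if_not):
    # Revise belief P(umbrella) after a partial glimpse of something.
    evidence = (prior * p_glimpse_if_umbrella
                + (1 - prior) * p_glimpse_if_not)
    return prior * p_glimpse_if_umbrella / evidence

# Prior belief 30%; the glimpse is 8x likelier if it really is an umbrella.
print(bayes_update(0.30, 0.80, 0.10))  # ~0.77: belief revised, not deduced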
In terms of "neuronally-configured hardware", that is pretty immaterial to AI. The brain is a parallel computing machine, but so are today's computers, and you can mimic parallel computing on serial hardware anyway. Once the computation is understood, the underlying hardware shouldn't matter: you could run AI on a serial computing machine, a parallel processor, etc.
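As a rough illustration of that point (toy network, made-up weights), a synchronous "all neurons at once" update can be reproduced exactly by a plain serial loop, as long as the new states are all computed from the old ones before being committed:
Code:
import numpy as np

def parallel_step(weights, state):
    """One synchronous update of every 'neuron', computed serially.
    New states are derived from the old ones and committed together,
    so the result is identical to updating all neurons at once."""
    new_state = np.empty_like(state)
    for i in range(len(state)):                  # plain serial loop
        new_state[i] = np.tanh(weights[i] @ state)
    return new_state

rng = np.random.default_rng(1)
weights = rng.standard_normal((100, 100)) * 0.1  # made-up connectivity
state = rng.standard_normal(100)
for _ in range(5):                               # run a few time steps
    state = parallel_step(weights, state)
print(state[:3])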

02-17-2012, 12:22 AM
Location: Conejo Valley, CA
Quote:
Originally Posted by Viking Tech Solutions
Because of how computers work, you'd need to have a stored image of EVERY single angle that an umbrella could be at, and the computer would have to process everything in its image database to figure out what it was looking at.
Beyond the basic hardware, computers "work" however you program them to work. You could solve the problem of detecting "person with umbrella" using neural nets, and there wouldn't be a single image stored.
Take a look at Microsoft's Kinect, which was developed using machine learning. It's able to detect human motion regardless of the background.
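As a toy illustration (made-up two-feature data, and nothing like how Kinect actually works), here's a minimal perceptron in Python; after training, all it stores is a few weights, none of the examples it saw:
Code:
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))             # toy 2-feature "images"
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy umbrella / not-umbrella

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):                          # classic perceptron rule
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

acc = np.mean([float(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
print("everything the model stores:", w, b)   # three numbers, zero images
print("training accuracy:", acc)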

02-17-2012, 12:28 AM
Location: Victoria TX
I would think that humans, having evolved as social animals, have a certain hard-wired altruism, which would be impossible to program into an AI. This altruism is the check against self-destructing through selfishness and the unlimited acquisition of power. Humans instinctively recognize the boundary between competitive and cooperative, without which our species would not have successfully evolved.
So it's not about perfecting logical thought; it is about constraining logical thought within socially survivable parameters and being able to recognize those parameters. We can't program a computer to remain within those parameters, because we don't fully understand them ourselves, or even agree among ourselves on their existence.

02-17-2012, 12:55 AM
Location: Conejo Valley, CA
Quote:
Originally Posted by jtur88
I would think that humans, having evolved as social animals, have a certain hard-wired altruism, which would be impossible to program into an AI.
The human brain is just a computing device; there is no reason why altruism can't be executed by another computing device.
Though AI doesn't have to be social, or could have a much different social aspect, it will at first closely resemble human socialization because it is being developed by humans for use in human societies. Much of what we talk about only makes sense in a social context. But once AI becomes self-sustaining, that is, AI producing AI, it will likely deviate from human forms of socialization.
The idea of a "perfectly logical" robot is a matter of fiction; you can't function in the real world unless you can act under imperfect information.