Old 11-20-2019, 06:30 PM
 
Location: PRC
3,262 posts, read 3,372,537 times
Reputation: 2955


Making robots fear "death" is a really bad idea. Imagine if this got out of hand and survival became a goal in its own right. Any human who posed a threat (in any shape or form) would then be subject to attack, or at least "defensive behaviour". The robot's reasoning might conclude that a human in the area could turn it off (kill) or disable it (injure = die = death) in some way. I really wonder about the common sense and morality of some of these scientists sometimes.

Link 1 Popular Mechanics
Link 2 Futurism

It seems to me that the blame for a poorly performing robot should lie firmly at the door of the programmer, NOT with the robot.

 
Old 11-21-2019, 01:17 AM
 
Location: Pacific 🌉 °N, 🌄°W
11,207 posts, read 5,062,869 times
Reputation: 7171
I disagree with the idea that machines possess intelligence.

What would make any machine (a calculator, lawnmower, computer, and so on) capable of housing intellectual capability? Machines simply run a program or set of operations designed into the system. Where is the evidence that "they" (the machines) do anything more than this?

Believing machines are intelligent or have any sentience at all is a mistake of anthropomorphism.

This false belief is caused in large part by cultural conditioning, as expressed in movies like The Terminator. It is part of our cultural narrative that machines "think" and "feel".

Machines are not intelligent, folks. Animals have intelligence and sentience, which is housed in their brains and expressed in their perceptual brain/body/environment "field of experience."

The brain is an electrochemical organ, very unlike any machine in existence and far, far more complex than one.

Logical and independent thought can reach past cultural conditioning to reveal the truth. The simple truth is that no machine has any intelligence... it's just a machine running as it was designed to.
 
Old 11-21-2019, 06:09 AM
 
40,340 posts, read 41,872,623 times
Reputation: 16846
Quote:
Originally Posted by Matadora View Post
...it's just a machine running as it was designed to.

But those machines can now mimic human intelligence, or whatever you want to call it, and they are getting better at it every day. When you combine that with extraordinary computational power no human could possibly compete with, you are on a path to creating something superior to yourself. Can they become self-aware? Interesting idea....

A typical ten-year-old kid can identify the letter A in thousands of different fonts despite never having seen the letter A displayed in those fonts before. That requires intelligence. With a conventional computer, you give it an example of the letter A and tell it that's an A, and now it can identify it. Assuming accurate input, it will always be right. Trouble is, you need to do that for each font the letter A appears in. That's a pretty big task, especially if you wanted to give it the entire knowledge of the world. Even if you could, the resources required to operate such a computer are going to eat your lunch.

With AI, you give it 100 examples of the letter A and then program it so it can identify the rest of them. Not any different from what a human can do. It can make mistakes, and when it does, it doesn't just file away that this is the letter A; it works out how it arrived at the wrong answer and corrects the mistake in its own programming. This may sound trivial to some, but it's not; this is the brick wall that computer programmers have been butting their heads against since intelligent computer design was first envisioned. It's an enormous hurdle, and the technology to clear it has only become available in the past decade.
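To make that letter-A contrast concrete, here's a rough Python sketch. It uses scikit-learn's built-in digits dataset as a stand-in for "the letter A in many fonts"; the dataset, model, and split are my own illustrative assumptions, not anything from this thread.

```python
# Contrast sketched above: a hard-coded lookup table only recognizes exactly
# what it was shown, while a model trained on examples generalizes to new ones.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

# "Conventional" approach: memorize the exact pixel patterns it has been shown.
lookup = {tuple(x): y for x, y in zip(X_train, y_train)}
hits = sum(lookup.get(tuple(x)) == y for x, y in zip(X_test, y_test))
print("lookup table recognizes", hits, "of", len(y_test), "unseen images")

# Learned approach: a small neural network trained on the same examples
# generalizes to images it has never seen before.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("learned model accuracy on unseen images:", model.score(X_test, y_test))
```

The lookup table scores near zero on images it hasn't literally memorized, while the trained model gets most of them right, which is the hurdle being described.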

If you want another great example: chess. Deep Blue beat Kasparov in the late nineties; if you are unfamiliar, he was considered the best chess player of the time. It's certainly a landmark moment in computer technology, but at the end of the day he was beaten by brute-force computational power. Fast forward to today, and Google has AI that first taught itself how to play chess and then went on to bury the best conventional chess engines available. The big difference? The conventional engines' calculations were measured in tens of millions of positions per second; the AI's in tens of thousands.
 
Old 11-21-2019, 12:13 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,207 posts, read 5,062,869 times
Reputation: 7171
Quote:
Originally Posted by thecoalman View Post
But those machines can now mimic human intelligence, or whatever you want to call it, and they are getting better at it every day. When you combine that with extraordinary computational power no human could possibly compete with, you are on a path to creating something superior to yourself. Can they become self-aware? Interesting idea....
Mimicking is not the same as being a sentient creature that has thoughts, feelings, and motives.

Machines are just physical objects that obey physics and run the way they are designed to run (like a calculator, phone, or any other machine/computer program).

Animals on the other hand have sentience, consciousness, a sense of self, awareness, free will (of a sort), and a (potentially) powerful, willful mind. There is no evidence machines have any of those things.

Because a machine does not have any sentience or experience, it therefore does not have any memories of such experience. The Terminator may run a lot of perceptual calculations, but "it" has no experience of a visual field (or the contents within) and therefore knows not what it sees or experiences. Except that it doesn't even see or experience anything.

People who view animal brains as computers have no understanding of the complexity of biology.
 
Old 11-21-2019, 06:50 PM
 
40,340 posts, read 41,872,623 times
Reputation: 16846
Quote:
Originally Posted by Matadora View Post
Mimicking is not the same as being a sentient creature that has thoughts, feelings, and motives.

That's why I used the word mimic.


Quote:
Machines are just physical objects that obey physics and run the way they are designed to run (like a calculator, phone, or any other machine/computer program).
There is a very big difference between a calculator, which has a narrowly defined set of structured data and instructions, and AI, which does not. These computers are being programmed to operate like a human brain, with reasoning, logic, etc.

A calculator or normal computer may have an instruction that states "Dorothy's shoes are ruby red." It only knows the answer because you specifically told it, and it will never be wrong as long as the input data is accurate. If you don't specifically give it this instruction, it can't answer the question "What is the color of Dorothy's shoes?" but most kids could.

An AI computer like Watson, for example, has been fed vast amounts of data, but the important thing to understand is that it's unstructured data. There is no line in it that says "Dorothy's shoes are ruby red." It has to examine its vast stores of data and work out that you are asking about the shoes worn by a character in a famous movie. Once again, when it answers this question it could be entirely wrong, though Watson should easily get this one right.
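As a toy illustration of that structured vs. unstructured difference, here's a Python sketch. The tiny "corpus" and the naive keyword scoring are stand-ins I made up; a real system like Watson is vastly more sophisticated.

```python
import re

# Structured: the answer exists only because someone typed in this exact fact.
facts = {("Dorothy", "shoe color"): "ruby red"}
print(facts.get(("Dorothy", "shoe color")))     # -> ruby red
print(facts.get(("Dorothy", "dog's name")))     # -> None, nobody entered it

# Unstructured: no line says "Dorothy's shoes are ruby red"; the system has to
# dig the most relevant passage out of free text it has ingested.
corpus = [
    "In The Wizard of Oz, Dorothy wears ruby slippers on the yellow brick road.",
    "Toto is the name of Dorothy's small black dog.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question, corpus):
    # Naive keyword overlap: return the sentence sharing the most words with
    # the question. It can easily be wrong, which is exactly the point above.
    q = tokens(question)
    return max(corpus, key=lambda s: len(q & tokens(s)))

print(answer("What color are the slippers Dorothy wears?", corpus))
```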


Going back to the chess example: the way it taught itself to play was by repeatedly playing itself. The very first game it played against itself would have been like watching two people with an IQ of 0 playing chess; a billion games later there is nothing that can beat it, and it gets better every time it plays.
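Here's a heavily scaled-down sketch of that self-play idea: a simple take-away game and a table of values instead of chess and a neural network. The game, learning rule, and numbers are my own illustrative stand-ins, not how the real systems work.

```python
import random

Q = {}                                   # (coins_left, move) -> learned value

def best_move(coins, epsilon):
    moves = [m for m in (1, 2, 3) if m <= coins]
    if random.random() < epsilon:        # explore early on, exploit later
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((coins, m), 0.0))

def self_play_game(epsilon):
    coins, history = 10, []              # two "players" sharing one value table
    while coins > 0:                     # take 1-3 coins; last coin taken wins
        move = best_move(coins, epsilon)
        history.append((coins, move))
        coins -= move
    # Whoever took the last coin won; nudge their moves up, the loser's down.
    for i, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + 0.1 * (reward - old)

for game in range(20000):                # the "billion games", in miniature
    self_play_game(epsilon=max(0.05, 1.0 - game / 10000))

# With enough self-play the table tends to recover the known optimal strategy
# for this game: leave the opponent a multiple of 4 coins whenever possible.
print({coins: best_move(coins, epsilon=0.0) for coins in range(1, 11)})
```

The very first games are random moves against random moves; after thousands of games against itself the greedy policy typically plays this tiny game optimally.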
 
Old 11-21-2019, 07:39 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,207 posts, read 5,062,869 times
Reputation: 7171
Quote:
Originally Posted by thecoalman View Post
That's why I used the word mimic.

There is a very big difference between a calculator, which has a narrowly defined set of structured data and instructions, and AI, which does not. These computers are being programmed to operate like a human brain, with reasoning, logic, etc.

A calculator or normal computer may have an instruction that states "Dorothy's shoes are ruby red." It only knows the answer because you specifically told it, and it will never be wrong as long as the input data is accurate. If you don't specifically give it this instruction, it can't answer the question "What is the color of Dorothy's shoes?" but most kids could.

An AI computer like Watson, for example, has been fed vast amounts of data, but the important thing to understand is that it's unstructured data. There is no line in it that says "Dorothy's shoes are ruby red." It has to examine its vast stores of data and work out that you are asking about the shoes worn by a character in a famous movie. Once again, when it answers this question it could be entirely wrong, though Watson should easily get this one right.


Going back to the chess example: the way it taught itself to play was by repeatedly playing itself. The very first game it played against itself would have been like watching two people with an IQ of 0 playing chess; a billion games later there is nothing that can beat it, and it gets better every time it plays.
With respect to a calculator: in short, it works by representing the decimal numbers we use in a different format called binary and processing them with electrical circuits known as logic gates.

Logic gates are the basic building blocks of any digital system. A logic gate is an electronic circuit with one or more inputs and a single output, and the relationship between the inputs and the output is based on a certain logic.

The use of logic gates in computers predates any modern work on artificial intelligence or neural networks. However, the logic gates provide the building blocks for machine learning, artificial intelligence and everything that comes along with it.
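For the curious, here's a small sketch of that point: decimal numbers are converted to binary and added using nothing but AND, OR, and XOR gates. This is standard digital-logic material, not anything specific to this thread.

```python
# Addition built entirely from logic gates (modeled here as tiny functions).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # One column of binary addition, made only of gates.
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x, y, bits=8):
    # Ripple-carry adder: chain one full adder per bit position.
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(23, 42))   # -> 65, computed one bit at a time by logic gates
```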

Animals possess sentience, consciousness, a sense of self, awareness, free will (of a sort), and a (potentially) powerful, willful mind. There is no evidence machines have any of those things.

Logical and independent thought can reach past cultural conditioning to reveal the truth. The simple truth is that no machine has any intelligence -- it's just a machine running as it was designed to.
 
Old 11-22-2019, 08:21 AM
 
40,340 posts, read 41,872,623 times
Reputation: 16846
Quote:
Originally Posted by Matadora View Post
Animals possess sentience, consciousness, a sense of self, awareness, free will (of a sort), and a (potentially) powerful, willful mind. There is no evidence machines have any of those things.

I'm not saying a computer will ever have those things, but at some point their intelligence will surpass ours. When you combine the ability to learn and understand with all knowledge and superior computational power, it's not something trivial. It won't be humans solving the riddles of the universe but computers, and not just because they crunch numbers.



I don't expect the Terminators to be showing up any time soon, but it's not something to be ignored either. I believe it was Facebook that had two AI computers talking to each other, and on their own they invented a more efficient way of communicating. We need to be careful that they don't decide on their own that it's more efficient to eliminate these pesky carbon-based life forms.
 
Old 11-22-2019, 11:09 AM
 
Location: Pacific 🌉 °N, 🌄°W
11,207 posts, read 5,062,869 times
Reputation: 7171
Quote:
Originally Posted by thecoalman View Post
I'm not saying a computer will ever have those things, but at some point their intelligence will surpass ours.
There's a distinction between all living intelligence and that of artificial origin.

An animal's intelligence, and the functions we describe as "intelligent," work through the brain.

99% of its intelligence during daily, routine experience and action is generated via general memory.

General memories I define as groups of similar past experience/action episodes (of the visual field, including body and brain).

Because a machine does not have any sentience or experience, it therefore does not have any memories of such experience. As I stated before, The Terminator may run a lot of perceptual calculations, but "it" has no experience of a visual field (or the contents within) and therefore knows not what it sees or experiences. Except that it doesn't even see or experience anything.
Quote:
Originally Posted by thecoalman View Post
When you combine the ability to learn and understand with all knowledge and superior computational power, it's not something trivial. It won't be humans solving the riddles of the universe but computers, and not just because they crunch numbers.

I don't expect the Terminators to be showing up any time soon, but it's not something to be ignored either. I believe it was Facebook that had two AI computers talking to each other, and on their own they invented a more efficient way of communicating. We need to be careful that they don't decide on their own that it's more efficient to eliminate these pesky carbon-based life forms.
You, like many, have a lot of misconceptions about AI.

I will list a few quotes from leading figures in artificial intelligence research.

People talk about the singularity (the point at which an AI suddenly becomes sentient) and use that possibility to stoke fears already fueled by dozens of sci-fi movies.

The reality is less dramatic. There’s no questioning that AI has the potential to be destructive, and it also has the potential to be transformative, although in neither case does it reach the extremes sometimes portrayed by the mass media and the entertainment industry.

Prof. Gary Marcus, PhD states:
“The biggest misconception around AI is that people think we’re close to it.”

“There’s also a lot of misunderstanding around the singularity. As a concept, this reduces a complicated problem to a single dimension. There are so many dimensions in artificial intelligence and natural intelligence, questions around what perception is, how language develops and how memory works. Talking about the singularity is like trying to boil intelligence down to a single IQ number, which itself will change in an individual from day to day. What does an artificial superintelligence mean? Machines are already way smarter than us when it comes to playing games with very tight boundaries, but nowhere near as smart as us when it comes to playing a computer game a 12-year-old could play.”
Prof. Max Welling, University of Amsterdam and UC Irvine states:
“At this point, many people think that AI is a silver bullet that will solve everything. In reality, it’s more that we can do really good signal processing. In other words, AI can extract relevant features, analyse images, and understand speech, but there is a lot of high level reasoning it can’t do. It can’t look at a picture and project into time about what will happen next or extrapolate as to what were the things that happened before and what the causal relationships were that led to the current image. That is a much more complicated understanding of a situation and is something we can't do yet. It might take a while before we can.

“It’s important not to overestimate the current standards. It’s a glorified signal processing tool, but it can be super beneficial – almost any other scientist would benefit from collaborating with machine learning specialists, for example.”
Prof. Joanna Bryson, University of Bath states:
“When people think about AI, it’s usually around two concepts. One is human-like intelligence – referred to as general AI – and the other is a single algorithm that suddenly knows everything. The first of those must be possible because there already is human intelligence, but it’s unlikely we will build it. If you clone a human, you have a biological human, but if you build something like a human, everything changes – you’ve built something that can do anything a human can do. I don’t think we will ever recreate that.

There’s no one piece we can’t build, and we’ve gone super-human in many ways already, if you consider that ‘super-human’ means that machines can outperform humans in one specific task. A book is super-human in its ability to remember things, a plane is super-human in its ability to fly. But if we had something exactly like a person, which could transfer skills from one task to another, and which started competing with us that would create a problem. As for the algorithm that suddenly knows everything, which people sometimes refer to as the singularity – that’s impossible.”
Elizabeth Ling, Elsevier states:
“One common misconception is that AI has suddenly happened. In reality, it’s a longstanding domain of science that’s been evolving.

The flip side of that is that people think of it as something in the far future, but there are already a lot of applications of various forms of AI. It’s already used in many systems in society. The thing is they just don’t look like people expect. If you mention AI in warfare for example, people think of smart drones, but in reality, it’s more likely to appear in a logistics management system. AI applications will be on websites where you may not even notice them. It’s been with us longer than people think.”
Prof. Stuart Russell, UC Berkeley states:
“There’s a common misunderstanding that AI presents a risk because it will magically become conscious and spontaneously hate human beings. There’s no reason to be concerned about spontaneous malevolent consciousness.”

“It also sometimes gets reported as though five years ago we didn’t have AI, and now we do, but the research has been going for 60 years and we’ve made fairly continuous progress. Every so often research reaches a point where you can create a product that people will pay for and from the outside of the field it looks like some kind of breakthrough. But it’s not – we’re just showing how far we’ve got with a certain problem with 10 or 20 years of research.”

“When it comes to the singularity, it’s based on this misperception that machines will get faster and faster and faster than the brain and at some point they’ll just take off. Making machines faster doesn’t make them more intelligent. You’ll just get the wrong answer more quickly. The benefit of having faster machines is to speed up the cycle of experimentation. If it takes you 3 weeks to try something you can’t move forwards. If it takes 3 minutes you can go on to the next thing and you’re better able to quickly develop something that works well.”
 
Old 11-22-2019, 12:00 PM
 
40,340 posts, read 41,872,623 times
Reputation: 16846
Quote:
Originally Posted by Matadora View Post
You, like many, have a lot of misconceptions about AI.

I have no misconceptions about AI; it's not something I'm talking out of my ass about. I wrote my first line of code in 1983. I can even tell you what that program did: it shifted the color of the TV from black to white rapidly... a strobe light. I'm sure this did wonders for the longevity of my parents' new color TV, which cost some stupid amount of money at the time.



Quote:
I will list a few quotes from leading figures in artificial intelligence research.

Prof. Gary Marcus, PhD states: “The biggest misconception around AI is that people think we’re close to it.”
I never said we were close. AI is in its infant stages but will rapidly expand going forward. If you want an analogy from the automotive industry, it's the late 1800s, when automobiles were very expensive toys for the rich: they didn't work very well, were prone to almost constant failure, and had no real practical uses. Going back to the chess game: while the AI computer can easily beat a conventional engine using a fraction of the calculations, it also used a stupendous amount of processing power beforehand to learn how to do that by playing itself. It's an interesting achievement that has advanced the technology, but at the end of the day it's a very expensive toy for a very rich technology company and at most a proof of concept.



Given time the things I'm discussing will occur. Technology marches on and these things are inevitable.
 
Old 11-22-2019, 12:08 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,207 posts, read 5,062,869 times
Reputation: 7171
Quote:
Originally Posted by thecoalman View Post
I have no misconceptions about AI; it's not something I'm talking out of my ass about. I wrote my first line of code in 1983. I can even tell you what that program did: it shifted the color of the TV from black to white rapidly... a strobe light. I'm sure this did wonders for the longevity of my parents' new color TV, which cost some stupid amount of money at the time.
Writing code does not mean you are an expert in AI research like the ones I quoted above.
Quote:
Originally Posted by thecoalman View Post
I never said we were close. AI is in its infant stages but will rapidly expand going forward.
AI has been around for 60 years and is hardly in its infancy. That's a misconception.
Quote:
Originally Posted by thecoalman View Post
Given time the things I'm discussing will occur. Technology marches on and these things are inevitable.
I agree with Prof. Stuart Russell, UC Berkeley.

“There’s a common misunderstanding that AI presents a risk because it will magically become conscious and spontaneously hate human beings. There’s no reason to be concerned about spontaneous malevolent consciousness.”

Logical and independent thought can reach past cultural conditioning to reveal the truth. The simple truth is that no machine has any intelligence -- it's just a machine running as it was designed to.