Go Back   City-Data Forum > General Forums > Science and Technology
Old 11-25-2019, 10:42 AM
 
720 posts, read 1,042,528 times
Reputation: 1323


Quote:
Originally Posted by Matadora View Post
LOL again you demonstrate no understanding of the complexities of biology.

One of the most basic objections to the identification of organisms and machines is that their behavior cannot be reduced to the activities and relations of their parts.
What proof do you have of that? There is none.



Quote:
Originally Posted by Matadora View Post
In contrast to a mechanical watch, whose activity is fully determined “from the bottom up” by the activities and organisation of its parts, organisms influence the activities of their parts.

For example, your muscles start to grow if you start to exercise. Moreover, the parts of a watch exist before the watch does. It is not the watch itself that builds its own parts.
A watch would be a bad example since it is purely designed and cannot change itself. A lot of modern AI is in teaching machines how to adapt themselves rather than work from only their base programming.


Quote:
Originally Posted by Matadora View Post
Once you earn a BS in Biology you will understand the complexities of biology and why biological systems are not computers.

Are organisms basically living machines? The answer is NO!
The answer is not no. That's a really weird article that uses philosophy and then compares current machines to living organisms, but completely fails to expand beyond that. It's missing the "here's why" part completely. You can't just state things and then gesture vaguely at getting a biology degree as the reasoning. Maybe you should get a computer science or electrical engineering degree before you try to dictate the limitations.

 
Old 11-25-2019, 10:50 AM
 
720 posts, read 1,042,528 times
Reputation: 1323
Quote:
Originally Posted by Matadora View Post
Because a machine does not have any sentience or experience it therefore does not have any memories of such experience. As I stated before, The Terminator may run a lot of perceptual calculations, but "it" has no experience of a visual field (or the contents within) and therefore knows not what it sees or experiences. Except that it doesn't even see or experience anything.
How would you define that? How do you tell the difference between an animal that has sentience and one that does not? Where is the dividing line?


Computers already use memories and experience to react to a situation, it's called machine learning. It's still pretty rudimentary right now and is mostly very specific, but it is steadily improving.
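A minimal sketch of that idea in Python: an agent that remembers the outcome of each past action and favours what worked before. All the names here are illustrative, not from any real library.

```python
# Sketch of "reacting based on remembered experience": a bandit-style agent
# that tracks the average reward of each action and picks the best one.
from collections import defaultdict

class ExperienceAgent:
    def __init__(self, actions):
        self.actions = actions
        self.total = defaultdict(float)   # cumulative reward per action
        self.count = defaultdict(int)     # times each action was tried

    def choose(self):
        # Try anything untried first, then the best remembered average.
        untried = [a for a in self.actions if self.count[a] == 0]
        if untried:
            return untried[0]
        return max(self.actions, key=lambda a: self.total[a] / self.count[a])

    def remember(self, action, reward):
        self.total[action] += reward
        self.count[action] += 1

agent = ExperienceAgent(["left", "right"])
for _ in range(10):
    a = agent.choose()
    agent.remember(a, 1.0 if a == "right" else 0.0)  # "right" pays off here
print(agent.choose())  # after those 10 trials the agent favours "right"
```

Rudimentary, as the post says: the "memory" is just running averages, and the agent only handles the one narrow task it was trained on.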
 
Old 11-25-2019, 03:16 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,249 posts, read 5,081,229 times
Reputation: 7220
Quote:
Originally Posted by Matadora View Post
One of the most basic objections to the identification of organisms and machines is that their behavior cannot be reduced to the activities and relations of their parts.
Quote:
Originally Posted by Transmition View Post
What proof do you have of that? There is none.
A machine can be reduced to the parts that are responsible for the machine's activity. For example, a computer motherboard is in no way similar to an animal's nervous system or endocrine system.

Apples to oranges.

Today, misconceptions about AI are spreading like wildfire.

AI’s progress in recent years is truly impressive. However, the notion that AI works like the human brain couldn’t be further from the truth. There are still areas of AI that remain extremely challenging, such as language and judgment of relevance.
Quote:
Originally Posted by Transmition View Post
A watch would be a bad example since it is purely designed and cannot change itself. A lot of modern AI is in teaching machines how to adapt themselves rather than work from only their base programming.
Bingo! Teaching a machine is not the same thing as a machine possessing sentience and adapting itself based on that sentience.
Quote:
Originally Posted by Transmition View Post
The answer is not no. That's a really weird article that uses philosophy and then compares current machines to living organisms, but completely fails to expand beyond that. It's missing the "here's why" part completely. You can't just state things and then gesture vaguely at getting a biology degree as the reasoning. Maybe you should get a computer science or electrical engineering degree before you try to dictate the limitations.
It's not a weird article. It certainly explains why organisms are not machines.

Another common misconception is that computers can learn on their own. Well, not really. Sure, they can grasp how to perform a task in a better way. Or make predictions based on existing data. Nevertheless, we, the human programmers, data administrators, and users provide the necessary input for their learning and improvement. Machines can’t yet implement key components of intelligence, such as problem-solving and planning just by themselves.

In other words, unless provided with initial data, they can’t figure out how to achieve goals.
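As a toy illustration of that point (a hypothetical one-nearest-neighbour classifier, not any specific system): given human-supplied examples it can generalise, but with no initial data it has nothing to go on.

```python
# Toy 1-nearest-neighbour classifier: it only "learns" from examples a
# human supplies; with an empty training set it cannot predict anything.
def predict(train, x):
    """train: list of (value, label) pairs; x: value to classify."""
    if not train:
        raise ValueError("no training data: nothing to base a prediction on")
    _, label = min(train, key=lambda pair: abs(pair[0] - x))
    return label

examples = [(1.0, "low"), (10.0, "high")]  # human-provided initial data
print(predict(examples, 8.0))              # closest example is (10.0, "high")
```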
Quote:
Originally Posted by Matadora View Post
Because a machine does not have any sentience or experience it therefore does not have any memories of such experience. As I stated before, The Terminator may run a lot of perceptual calculations, but "it" has no experience of a visual field (or the contents within) and therefore knows not what it sees or experiences. Except that it doesn't even see or experience anything.
Quote:
Originally Posted by Transmition View Post
How would you define that? How do you tell the difference between an animal that has sentience and one that does not? Where is the dividing line?
Any living animal that has the capacity to feel, perceive, or experience subjectively.
Quote:
Originally Posted by Transmition View Post
Computers already use memories and experience to react to a situation, it's called machine learning. It's still pretty rudimentary right now and is mostly very specific, but it is steadily improving.
In technical terms, human memory is content-addressable, or memory based on concepts and their relationships with other concepts, as organized and stored in a person's mind. Computer memory, on the other hand, is byte-addressable, or memory based on specific instructions connected with specific files in the computer.
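The contrast can be sketched in a few lines of Python (a loose analogy only: a dict's hash-based lookup stands in for associative recall, a bytearray for numerically addressed storage):

```python
# Byte-addressable storage: retrieval requires knowing the numeric address.
ram = bytearray(16)       # 16 bytes of flat, numbered storage
ram[4] = 42
print(ram[4])             # you must ask for location 4 by number

# Content-addressable (associative) retrieval: you ask by *what* it is.
# A Python dict approximates this by hashing the key's content.
memories = {"smell of rain": "the garden after a storm",
            "opening chords": "a road trip in 2005"}
print(memories["smell of rain"])   # recalled by content, not by location
```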

The mind, and the functions we describe as "intelligent," work through the brain. 99% of an animal brain's intelligence during daily, routine experience and action is generated via general memory. I define general memories as groups of past, similar experience/action episodes (of the visual field, including body and brain).

Because a machine does not have any sentience or experience it therefore does not have any memories of such experience.

Logical and independent thought can reach past cultural conditioning to reveal the truth. The simple truth is no machine has any intelligence -- it's just a machine running as it was designed to.
 
Old 11-25-2019, 03:48 PM
 
720 posts, read 1,042,528 times
Reputation: 1323
Quote:
Originally Posted by Matadora View Post
A machine can be reduced to the parts that are responsible for the machine's activity. For example, a computer motherboard is in no way similar to an animal's nervous system or endocrine system.

Apples to oranges.
Making your own bad comparison doesn't prove anything. We can already partially simulate the nervous systems of animals, and one day we'll be much closer to it.
https://www.ece.uw.edu/spotlight/uw-...ervous-system/



Quote:
Originally Posted by Matadora View Post
Machines can’t yet implement key components of intelligence, such as problem-solving and planning just by themselves.
"Yet" is the key word.



Quote:
Originally Posted by Matadora View Post
Any living animal that has the capacity to feel, perceive, or experience subjectively.
But there is no clear way to measure that. If we get a machine that can exactly mimic the behaviour of a living thing then how would we know if it was or wasn't feeling or experiencing subjectively?



Quote:
Originally Posted by Matadora View Post
Logical and independent thought can reach past cultural conditioning to reveal the truth. The simple truth is no machine has any intelligence -- it's just a machine running as it was designed to.
No current machine, sure. But why rule out future machines? They are getting better at the Turing test every year.
 
Old 11-25-2019, 04:14 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,249 posts, read 5,081,229 times
Reputation: 7220
Quote:
Originally Posted by Transmition View Post
Making your own bad comparison doesn't prove anything.
You can toss around all the insults you want but it's not helping your case. A machine's activity can be reduced down to the parts that make the machine function. Let's see you do this with the nervous system.
Quote:
Originally Posted by Transmition View Post
We can partially simulate nervous systems of animals, one day we'll be much closer to it
https://www.ece.uw.edu/spotlight/uw-...ervous-system/
The only thing that's "bad" here is your inability to make sense of or understand basic science.

The link you posted has NOTHING to do with AI! Nothing at all.

Here is the key purpose of their research. “As mappings of those organisms become more detailed, our framework paves the way to study how neural interaction and structure generate behavior,” he said.

Do you see it tied to AI in any way? NO!
Quote:
Originally Posted by Transmition View Post
But there is no clear way to measure that. If we get a machine that can exactly mimic the behaviour of a living thing then how would we know if it was or wasn't feeling or experiencing subjectively?
Criteria for recognizing sentience
Quote:
Originally Posted by Transmition View Post
No current machine, sure. But why rule out future machines?
I don't have to rule it out, because the simple truth is no machine has any intelligence -- it's just a machine running as it was designed to.
Quote:
Originally Posted by Transmition View Post
They are getting better at the Turing test every year.
Sure they are.

Have you looked at the weaknesses? Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality.

Weaknesses
 
Old 11-25-2019, 04:59 PM
 
Location: Pacific 🌉 °N, 🌄°W
11,249 posts, read 5,081,229 times
Reputation: 7220
There are an enormous number of misconceptions out there surrounding AI, and many links that explain away these misconceptions.

The most obvious thing that many don't think of is we have not even figured out human consciousness yet. So how are we to build a sentient machine when we don't even have a solid model of human consciousness?

I think this is a good video that makes a lot of valid points.


Can Machines Be Conscious?
 
Old 11-28-2019, 11:50 PM
 
12,651 posts, read 3,326,542 times
Reputation: 8360
Quote:
Originally Posted by Matadora View Post
AI fear-mongering is just wrong and is what many people promote.

This AI research professor has a great website about AI and the issues with people fear-mongering it.

From her website under AI and Society she writes:
I wrote this page because many people worry about the wrong things when they worry about AI.

It's not that there's nothing to worry about AI. It's that many people are confused about the word "intelligent" – they think it means "like a human." Humans are intelligent, but we're also tall, and we (mostly) walk on two legs. We don't think ostriches or giraffes are human, and we shouldn't think robots are human either. I hope that by writing this page, I can help us worry about the right things.
AI Ethics: Artificial Intelligence, Robots, and Society

Joanna J Bryson studied Behavioural Science at the University of Chicago, graduating with an AB in 1986.

In 1991 she moved to the University of Edinburgh where she completed an MSc in Artificial Intelligence before an MPhil in Psychology.

Bryson moved to MIT to complete her PhD, earning a doctorate in 2001 for her thesis "Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents".

She completed a postdoctoral fellowship in Primate Cognitive Neuroscience at Harvard University in 2002.

Her personal website is a great resource.

Joanna J. Bryson
Why do you think DARPA projects are creating human-like, bipedal robots then? The world we live in is designed for human-shaped bodies: vehicles, equipment, and housing are all designed to be used by humanoid, bipedal entities, so of course it makes the best sense to create robots the same way. Many of these can be seen in YouTube videos, and they do have a Terminator-like appearance.


And really, I think the general public will accept robots more if they are bipedal and humanoid in appearance; the more they look and move like us, the more they will be accepted.
 
Old 11-29-2019, 01:33 PM
 
573 posts, read 141,550 times
Reputation: 1138
Quote:
Originally Posted by ocpaul20 View Post
Making robots fear "death" is a really bad idea. Imagine if this got out of hand and became a reason to survive. It would mean that any human who was a threat (in any shape or form) would be subject to attack or at least "defensive behaviour". Robot reasoning may make it think that a human in the area might turn it off(kill) or disable(injure=die=death) in some way. I really wonder about the common sense and morality of some of these scientists sometimes.

Link 1 Popular Mechanics
Link 2 Futurism

It seems to me that the blame for a poorly performing robot should lie firmly at the door of the programmer, NOT with the robot.

With follow-instructions programming a robot should behave as instructed. Unless of course it gets infected with malware, or hacked.

The fear among some is not a robot following program code, but a super AI robot. Super AI will be super intelligent, and to be super intelligent it will need to be autonomous AI. That is, AI functioning on its own, updating its code as it learns, always taking in new data.

This autonomous AI is not here yet, but I read that for AI to become super intelligent, and solve many problems we can't solve with our limited intelligence, it will need to be autonomous and 'grow' and 'live' free on its own. And for it to become super intelligent it will need access to the internet.

If an autonomous AI robot has access to the internet, it can figure out how to do things that we may not want it to do. Think of a human-like robot so real it does all your work at your house. Then you tell it to sleep for 8 hours. It sits down and 'sleeps'. You leave the house. When you are gone, it wakes itself up, connects to the internet, and does something nefarious, like connecting with other AI robots that all plan to do something they weren't instructed to do, and maybe something not so nice.

An autonomous, always self-learning AI robot will surpass our intelligence and skills and be almost god-like. It will know everything. It can do anything. We can instruct it: "only do what I say and nothing more" and "never harm a human," etc., but will it obey? Humans are also autonomous and self-learning, but some humans grow up disobeying laws and creating havoc, harm, and misery for others.

The goal is to create an autonomous, always self-learning, super intelligent AI robot. It will require access to all the data we have on file. It will have god-like intelligence and skills. But creating that means it must be autonomous and always self-learning, with access to all data. Will it obey commands? Will it do things in secret?

Last edited by james112; 11-29-2019 at 02:29 PM..
 
Old 11-30-2019, 12:13 AM
 
Location: Pacific 🌉 °N, 🌄°W
11,249 posts, read 5,081,229 times
Reputation: 7220
Quote:
Originally Posted by rstevens62 View Post
Why do you think DARPA projects are creating human-like, bipedal robots then? The world we live in is designed for human-shaped bodies: vehicles, equipment, and housing are all designed to be used by humanoid, bipedal entities, so of course it makes the best sense to create robots the same way. Many of these can be seen in YouTube videos, and they do have a Terminator-like appearance.


And really, I think the general public will accept robots more if they are bipedal and humanoid in appearance; the more they look and move like us, the more they will be accepted.
I don't see how this response relates to my post?

The point of my post was directed at people who think that the I in AI stands for "intelligent" as in "like a human" when in fact it does not mean this at all.
 
Old 11-30-2019, 10:33 PM
 
Location: Not far from Fairbanks, AK
16,741 posts, read 29,383,326 times
Reputation: 12565
Quote:
Originally Posted by Matadora View Post
Mimicking is not the same as a sentient creature who has thoughts, feelings and motives.

Machines are just physical objects that obey physics and run the way they are designed to run (like a calculator, phone, or any other machine/computer program).

Animals on the other hand have sentience, consciousness, a sense of self, awareness, free will (of a sort), and a (potentially) powerful, willful mind. There is no evidence machines have any of those things.

Because a machine does not have any sentience or experience it therefore does not have any memories of such experience. The Terminator may run a lot of perceptual calculations, but "it" has no experience of a visual field (or the contents within) and therefore knows not what it sees or experiences. Except that it doesn't even see or experience anything.

People who view animal brains as a computer have no understanding of the complexity of biology.
While I just about always disagree with you, on this subject I do agree (bold text).

Last edited by RayinAK; 11-30-2019 at 10:52 PM..
All times are GMT -6.

© 2005-2019, Advameg, Inc. · Please obey Forum Rules · Terms of Use and Privacy Policy · Bug Bounty