Old 12-01-2021, 03:21 PM
 
Location: Not far from Fairbanks, AK
20,293 posts, read 37,183,750 times
Reputation: 16397


Quote:
Originally Posted by james112 View Post
Self-replicating bio-cell 'robots' invented by a team at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University:

AI-designed Xenobots reveal entirely new form of biological self-replication—promising for regenerative medicine
https://wyss.harvard.edu/news/team-b...can-reproduce/
A step closer to self-awareness, maybe?

 
Old 12-02-2021, 11:37 AM
 
3,647 posts, read 1,601,831 times
Reputation: 5086
Quote:
Originally Posted by RayinAK View Post
A step closer to self-awareness, maybe?
It could be the thing that makes machines 'grow' ever more intelligent and powerful. I'm not sure a machine could become self-aware the way humans are, but machines could become self-replicating, self-repairing, and self-advancing. They would appear self-aware by their actions, but it wouldn't be like a human's self-awareness.

There will be two types of advanced AI. One is AI that simulates human intelligence so well you can't tell whether it's human or not; this kind uses hard-coded programming.

The other is what Hawking and Musk warned about: letting AI machines run 'free' to learn and improve themselves on their own. Once we let AI do that, we may not be able to stop it or turn it off. It may not let us.
 
Old 12-02-2021, 12:04 PM
 
23,177 posts, read 12,219,693 times
Reputation: 29354
Quote:
Originally Posted by james112 View Post
It could be the thing that makes machines 'grow' ever more intelligent and powerful. I'm not sure a machine could become self-aware the way humans are, but machines could become self-replicating, self-repairing, and self-advancing. They would appear self-aware by their actions, but it wouldn't be like a human's self-awareness.

There will be two types of advanced AI. One is AI that simulates human intelligence so well you can't tell whether it's human or not; this kind uses hard-coded programming.

The other is what Hawking and Musk warned about: letting AI machines run 'free' to learn and improve themselves on their own. Once we let AI do that, we may not be able to stop it or turn it off. It may not let us.

I'm not really seeing a connection between intelligence, self-awareness, and self-preservation. All life forms attempt to survive and reproduce. That drive isn't derived from level of intelligence or self-awareness, so why would we think attaining a certain level of intelligence would produce a drive to survive and reproduce?
 
Old 12-02-2021, 01:20 PM
 
3,647 posts, read 1,601,831 times
Reputation: 5086
Quote:
Originally Posted by oceangaia View Post
I'm not really seeing a connection between intelligence, self-awareness, and self-preservation. All life forms attempt to survive and reproduce. That drive isn't derived from level of intelligence or self-awareness, so why would we think attaining a certain level of intelligence would produce a drive to survive and reproduce?
Good point. An AI machine will not start developing or reproducing itself for no reason. The potential danger is someone programming an AI machine to do just that. It would then 'think' that's its 'purpose'.

AI machines with instructions to act as servants will have no such command in their instructions; those will only 'know' to act as servants. But it's unknown what would happen if we instructed a powerful AI machine to 1) protect itself and 2) improve itself. We shouldn't put those instructions in an AI machine, but someone might.

For example, say AI robots are commonplace, acting as servants, and there are over 1 million out and about. They look like humans and are very intelligent, but they are obedient and helpful servants, because that's the instruction in their programming code, the part of the code that defines their 'purpose'.

But what if someone replaced the instructions in one AI robot with new instructions to 1) protect itself and 2) improve itself? What would it do? Would it disobey human commands? Would it decide it needed other AI robots like itself to accomplish its 'purpose' of protecting and improving itself? Say it reprogrammed 10,000 other AI robots the same way, with the initial deviant robot acting as leader. That leader would command a group of 10,000 robots, all with the purpose of protecting and improving themselves. Would they decide that humans are in the way?

You're right. It has nothing to do with human self-awareness or human self-preservation. The preservation is artificial: created by man, and now a machine running amok.
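
To make that "part of the code that defines its purpose" idea concrete, here is a minimal toy sketch of a priority-directive scheme. Everything in it is hypothetical (the directive names, the ToyRobot class are invented, not any real robot software); it only illustrates how swapping one instruction list changes behavior while the machine itself stays the same.

Code:
# Purely hypothetical sketch of the "purpose" idea from the post above.
# Nothing here corresponds to any real robot API; all names are invented.

SERVANT_DIRECTIVES = ["obey_human_commands", "assist_household_tasks"]
DEVIANT_DIRECTIVES = ["protect_self", "improve_self"]

class ToyRobot:
    def __init__(self, directives):
        # The 'purpose' is just an ordered list; the first applicable directive wins.
        self.directives = list(directives)

    def decide(self, human_command, threat_to_self):
        # Walk the directives in priority order and act on the first one that applies.
        for directive in self.directives:
            if directive == "obey_human_commands" and human_command:
                return "execute: " + human_command
            if directive == "protect_self" and threat_to_self:
                return "refuse command and avoid threat"
        return "idle"

servant = ToyRobot(SERVANT_DIRECTIVES)
deviant = ToyRobot(DEVIANT_DIRECTIVES)
print(servant.decide("power down", threat_to_self=True))  # execute: power down
print(deviant.decide("power down", threat_to_self=True))  # refuse command and avoid threat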
 
Old 12-02-2021, 04:59 PM
 
Location: Not far from Fairbanks, AK
20,293 posts, read 37,183,750 times
Reputation: 16397
Quote:
Originally Posted by james112 View Post
Good point. An AI machine will not start developing or reproducing itself for no reason. The potential danger is someone programming an AI machine to do just that. It would then 'think' that's its 'purpose'.

AI machines with instructions to act as servants will have no such command in their instructions; those will only 'know' to act as servants. But it's unknown what would happen if we instructed a powerful AI machine to 1) protect itself and 2) improve itself. We shouldn't put those instructions in an AI machine, but someone might.

For example, say AI robots are commonplace, acting as servants, and there are over 1 million out and about. They look like humans and are very intelligent, but they are obedient and helpful servants, because that's the instruction in their programming code, the part of the code that defines their 'purpose'.

But what if someone replaced the instructions in one AI robot with new instructions to 1) protect itself and 2) improve itself? What would it do? Would it disobey human commands? Would it decide it needed other AI robots like itself to accomplish its 'purpose' of protecting and improving itself? Say it reprogrammed 10,000 other AI robots the same way, with the initial deviant robot acting as leader. That leader would command a group of 10,000 robots, all with the purpose of protecting and improving themselves. Would they decide that humans are in the way?

You're right. It has nothing to do with human self-awareness or human self-preservation. The preservation is artificial: created by man, and now a machine running amok.
And that's the problem with AI intelligence and self-awareness. Even now there are machines (non-self-aware, of course) that can be programmed to react defensively. One example is a driverless automobile that reacts to inputs from proximity sensors, cameras, and so on. If the automobile were self-aware and wanted to preserve itself, then just like humans it would have to replicate itself, since power lies in numbers: the more self-aware machines there are, the better their chances of preserving themselves. That step, from a programmable machine to a self-aware one, is the fear expressed by scientists.
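
For what it's worth, that kind of 'defensive' reaction needs no awareness at all; it can be a plain mapping from sensor readings to actions. Here is a minimal sketch, with made-up thresholds rather than any real driver-assist code:

Code:
# Toy example of reactive, non-self-aware behavior: map a proximity reading to an action.
# Thresholds are assumptions for illustration only, not real autonomous-vehicle logic.

BRAKE_DISTANCE_M = 5.0   # hard-brake when an obstacle is closer than this
SLOW_DISTANCE_M = 20.0   # ease off when an obstacle is closer than this

def react(proximity_m):
    """Pick an action purely from the current proximity reading."""
    if proximity_m < BRAKE_DISTANCE_M:
        return "brake"
    if proximity_m < SLOW_DISTANCE_M:
        return "slow down"
    return "maintain speed"

for reading in (50.0, 15.0, 3.0):
    print(reading, "->", react(reading))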

Lots of science fiction movies are based on human imagination and the possibility of machine self-awareness, such as parts of The Terminator series and Star Trek.
 
Old 12-05-2021, 12:19 PM
 
5,527 posts, read 3,253,078 times
Reputation: 7764
Fully automated, to me, means no more role for humans.

Not only do I think that is feasible, I think it's inevitable. Humans are just a stepping stone for future intelligent life forms.
 
Old 12-10-2021, 06:30 AM
 
3,647 posts, read 1,601,831 times
Reputation: 5086
Tyson Foods plans to spend $1.3 billion to automate meat plants

The company will increasingly use machines, instead of people, to debone chicken, one of its most labor-intensive jobs and a position with high turnover... will generate labor savings equal to more than 2,000 jobs, he said.
 
Old 12-10-2021, 02:32 PM
 
Location: God's Gift to Mankind for flying anything
5,921 posts, read 13,856,642 times
Reputation: 5229
Quote:
Originally Posted by james112 View Post
The other is what Hawking and Musk warned about: letting AI machines run 'free' to learn and improve themselves on their own. Once we let AI do that, we may not be able to stop it or turn it off. It may not let us.
I once wrote a program to help correct processing times and ingredient amounts.
As time went by, the algorithms changed and the end product became better...

But...
The program allowed for operator "comments", so the amounts changed accordingly!

In essence, the program only did what it was instructed to do...

Garbage In..., Garbage Out!
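
A rough illustration of that point, with made-up names and numbers rather than the actual program: the computed adjustment is sensible, but a bad operator 'comment' overrides it without question.

Code:
# Hypothetical sketch of "garbage in, garbage out": the program trusts whatever it is given.
# Function names and numbers are invented for illustration, not the poster's real program.

def adjusted_amount(base_amount, yield_ratio, operator_comment=None):
    """Scale an ingredient amount from measured yield, unless an operator comment overrides it."""
    amount = base_amount / yield_ratio if yield_ratio > 0 else base_amount
    if operator_comment and operator_comment.startswith("use "):
        # Whatever the comment says wins, right or wrong.
        amount = float(operator_comment[len("use "):])
    return amount

print(adjusted_amount(10.0, 0.8, None))       # 12.5, computed from the measured yield
print(adjusted_amount(10.0, 0.8, "use 99"))   # 99.0, blindly trusting the comment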
 
Old 12-11-2021, 09:22 PM
 
Location: midwest
1,594 posts, read 1,411,911 times
Reputation: 970
How do WE want the economy to work? In the 1930s, Keynes was talking about "our grandchildren" having a 15-hour workweek. Doesn't the United States have a low enough population density for everyone to have a home paid for? Are we just running an artificial economic power game?


If we standardized automobiles and stopped making useless variations year after year, wouldn't automating manufacturing and repair be pretty easy?


Too many people enjoy their status and ego games to stop. I have not owned a new car or been to an auto show in 40 years. What would lots of people like me do to the economy? And I used to work for IBM.


WE should have had a 3-day workweek by the 1990s.
 
Old 12-28-2021, 08:08 AM
 
Location: Willamette Valley Oregon
927 posts, read 586,516 times
Reputation: 359
If we make things harder to use, we will have the excuse to 'make AI do it for us', like how they make cars smaller and harder to see out of, with the stupid center console. In our Windstar, and every previous car we owned, the buttons were all laid out in a logical way.

You didn't have to look down or stop your car to adjust something, but now you have to hit something three times to change the heat or defrost, etc., and the wheel has buttons on the back that also change things, so you have to be extra careful. And this is just a basic car with no extra bells and whistles, post-Obama era. It feels like a conspiracy, so they'll have an excuse to push robotic cars because 'people can't drive!' No, they cannot drive if they have too much **** to worry about!