Old 05-26-2018, 10:42 PM
Status: "I'm an Unmherkun puppy-kicking Socialist" (set 21 days ago)
 
Location: Dallas, TX
4,046 posts, read 2,120,926 times
Reputation: 3781

Summary

1) If a person's main source of worth is their survival skills and productivity (including prerequisites like intelligence, strength, fearlessness, competence, and courage), then we concede that a human has less right to exist than Trans-Turing AIs with robust physical bodies similar to those in The Terminator and The Matrix.
2) Refusing to accept the conclusion in (1) implies that some traits are more important than mere survival and productive abilities.
3) We can escape this trap only by conceding that those personal traits are not the most important source of a person's worth. Basing it on the ability to feel pain and suffering is very likely a good starting point, if nothing else.

MAIN BODY

Everybody knows the "Technocalypse" movie genre, with high-level AI out to conquer or destroy humanity (e.g., The Terminator, The Matrix). The AI are superior to humans in some or all ways: strength, agility, physical robustness, intelligence, fearlessness, and, in general, the ability to survive. Even if the AI perform only peaceful, productive tasks, they will still outproduce and generally outperform us economically. Yes, in the real world all of this is debatable, but for the sake of argument let's simply assume it is true. If this ever becomes real, then, movie plots notwithstanding, humans become pets at best and extinct at worst. In any case, the AI replace humans as Earth's most powerful conscious, self-aware entities.

For those of us who measure a person's worth by their productivity, self-defense abilities, strength, intelligence, fearlessness, and all-around ability to survive in chaos, it seems difficult to justify humans having more value than these AI. After all, they'd be superior to humans in practically every relevant way - although we might be useful as test subjects for medical and high-stress psychological experiments, or, if we're lucky, as provided-for (though probably not cared-for) pets. This conundrum is even more problematic for those of us who outright disdain people who lack one, and especially all, of these traits.

So if some of us devalue people with inferior strength, intelligence, bravery, competence, etc. primarily on that basis, how can we sensibly claim that humans' right to exist (individually or as a species) is equal to an AI's, let alone superior to it? No way that I can see, IF one accepts the bases just described. Should I simply concede, no matter how anguishing, that these AI would indeed have more right to exist than Homo sapiens? Not necessarily.


I hold that a stronger basis for humans' value is that we have emotions, particularly the capacity to feel pain and anguish, not to mention to suffer in one way or another. Certainly the by-now old, but still solid, arguments about whether we should give AIs emotions shed light on the matter. No matter how "smart" a computer is, if it doesn't have emotions it's still just a glorified calculator crunching 1s and 0s (or qubits, if quantum computing ever comes to be) – with all that implies about emotions, the capacity to feel badness, happiness*, and such. Certainly most humans would feel strong empathy for an AI that did have the capacity to suffer, feel pain, feel sadness, or feel anger over some injustice, and thus would object to anyone (human or other AI) causing that AI anguish outside of defense or a reasonable level of punishment for a wrongful act it committed.

Thus, if a human could feel sympathy for an AI with emotional capacities, especially the capacity to suffer, yet not feel empathy for mere "glorified calculators" without emotions or the capacity to suffer and feel bad, then we can transfer that same principle back to humans. A human's ultimate worth is not in their productivity, strength, courage, and the other things mentioned – it is, at least in large part, in their capacity to feel pain and suffering, something which nobody wants, masochists aside.

Obviously the ability to experience pleasure or even to provide goodness (actively or passively) is one basis for value, but claiming that it's the most important basis doesn't work. Plenty of people do horrendous things to some people even as they do great good for others - Pablo Escobar and Harvey Weinstein being the most famous examples within most posters' living memory. So unless we discount the "negative value" of badness even in the most severe cases, we can't say the ability to provide goodness alone is a sound basis.

It is for these reasons that I put primary emphasis on “not doing bad” rather than “providing good”.

*I don't consider happiness, especially "surplus happiness," to be a primary value. Happiness (and goodness in general) seems important only to the extent that a consciousness would suffer or feel bad were that happiness (or the thing that generates it, if you prefer) not present.

Last edited by Phil75230; 05-27-2018 at 12:07 AM..

 
Old 06-03-2018, 09:40 AM
 
Location: Cincinnati
42 posts, read 169,572 times
Reputation: 106
So if I am reading this correctly, you were concerned that if we base people's worth on objective, measurable things like strength, intelligence, and productivity, then if/when AI is created we will be inferior to them.

Thus to escape this possible inferiority, you arbitrarily declare that a being's worth stems from its empathy, a trait that AI won't/can't (in your opinion) have.

This is akin to saying that no race can be superior to white people, no matter how advanced they get, because the true measure of superiority is whiteness of skin. Or to abstract it more, Group A can never be superior to Group B because Group B possesses a single trait that Group A lacks.

I think to make this argument hold up, you need to effectively demonstrate why empathy and emotions are superior to intelligence, productivity, speed, etc.

For my part, I would not be willing to assign any trait or traits as being what determines a being's worth or superiority, as the concept is inherently subjective. An AI without emotions would not care that it did not have emotions, I would think. As a human being, I value human beings, no matter what traits robots possess. Or if you like, I would side with simple self-preservation.
 
Old 06-03-2018, 05:41 PM
 
13,493 posts, read 4,996,362 times
Reputation: 1365
I say we make the next life form in 200 years or less. I think it's going to be done through bioengineering, and it probably won't be made exactly how we were – like making the brain 40% fat and 60% brain cells.

I think it's pretty cool that we can almost see it.
 
Old 06-04-2018, 02:14 AM
Status: "I'm an Unmherkun puppy-kicking Socialist" (set 21 days ago)
 
Location: Dallas, TX
4,046 posts, read 2,120,926 times
Reputation: 3781
Quote:
Originally Posted by aca1 View Post
So if I am reading this correctly, you were concerned that if we base people's worth on objective, measurable things like strength, intelligence, and productivity, then if/when AI is created we will be inferior to them.

Thus to escape this possible inferiority, you arbitrarily declare that a being's worth stems from its empathy, a trait that AI won't/can't (in your opinion) have.
If the "objective, measurable things" you stated in your paragraph (along with mine in the OP) are the measure, then I see no way around conceding that humans are inferior in worth. I only used an AI without empathy or the ability to feel in order to address that precise point. AIs that are superior in both intelligence and overall survival ability raise doubts about the conventional ways we humans tend to value others (namely by the traits I mentioned in the OP). Giving the AI empathy in this example would kill the whole point of the OP.

Quote:
Originally Posted by aca1 View Post
This is akin to saying that no race can be superior to white people, no matter how advanced they get, because the true measure of superiority is whiteness of skin. Or to abstract it more, Group A can never be superior to Group B because Group B possesses a single trait that Group A lacks.

I think to make this argument hold up, you need to effectively demonstrate why empathy and emotions are superior to intelligence, productivity, speed, etc.
For my part, I would not be willing to assign any trait or traits as being what determines a being's worth or superiority, as the concept is inherently subjective. An AI without emotions would not care that it did not have emotions, I would think. As a human being, I value human beings, no matter what traits robots possess. Or if you like, I would side with simple self-preservation.

Some trait has to determine the worth of a person (or AI). Otherwise what's the difference between an intelligent entity and a common rock? You offer self-preservation as a reason (I assume you mean the desire or drive to preserve one's own life and the lives of certain favored others), which is a fair enough move. However, I have to bring up two matters, Subjectivity and Self-Preservation:

1) Subjectivity of something doesn't make it any less real. The pain of, for example, a torture victim is certainly real despite being subjective to him or her. I have no right to invalidate his or her pain and suffering, or to dictate how they ought to feel. A similar, if much less severe, case goes for personal preferences in sex partners (assuming no asexuality). I have no right to tell you which of two or more partners you should prefer without an utterly excellent reason to do so (the one such reason: if your preferred partner is likely to hurt you badly in some way, then warning you about that person is warranted. I'd want you to do the same for me in a similar situation, so I'm going to warn you).

2) Life preservation. Building on the torture-victim example above, the victim's torturer also wants to preserve his or her life. If you saw that sadist doing his agonizing deeds in a locked room with a strong door, and only killing the torturer would stop further agony and suffering for the victim, then killing the torturer would seem a reasonable act - precisely because it prevents further great agony to the victim, even if the victim's life was not in actual danger. I wouldn't want to feel that level of suffering and agony, especially if inflicted for another's amusement. So yes, I'd want you to kill my torturer if that was the only way to stop him or her. That makes possessing a self-preservation drive an incomplete reason, at best, to value a person (or an AI).

I also see elements of the https://en.wikipedia.org/wiki/Trolley_problem in this issue. One variant would be: "Do you let the trolley hit one kind-hearted, civilized, humane person or five members of a very violent and brutal gang?" I'd choose to save the one kind-hearted person and let the trolley hit the violent criminals. The reason is that the civilized person does not set out to hurt or degrade others while the criminals do - even assuming all six people have an equal desire and determination to live.

At any rate, very often the drive is not to survive - it's to keep the sensory system from transmitting signals of discomfort to our brain. That seems to explain why suicidal people often cannot act on their thoughts. We don't want to feel pain and suffering even when there's no danger (no imminent and direct danger, anyway) to our lives. The sex drive is similar. The drive usually is not to procreate; the drive is to engage in acts that lead to procreation. Thus we humans tend to engage in sex even when it doesn't lead to procreation, and even when we don't intend to procreate, the drive is so strong that it sometimes happens during a particular occurrence of sex anyway.

Hope this helps.
 
Old 06-05-2018, 05:08 AM
 
Location: Cincinnati
42 posts, read 169,572 times
Reputation: 106
Quote:
Giving the AI empathy in this example would kill the whole point of the OP.
Ok, fair enough. It seems I misunderstood your original post somewhat. You (and I may be wrong again) are not really concerned with actual AI development, but rather you are concerned with the way in which people are valued and assigned worth. You use the AI of Skynet et al in order to demonstrate the error of using 'survival skills and productivity' as a guide to worth.

I feel that a lot of my earlier objection still holds. I realize that showing that a premise leads to an illogical conclusion is a way to win an argument; however, I do not feel that your scenario leads to an illogical conclusion. If we examine it as such:

Productivity is the measure of a being's worth
Robots are more productive than humans
Thus, Robots are worth more than humans

I believe this is a valid argument. The objection would come from the conclusion being undesirable to a human, which means it rolls back to a self-preservation argument, i.e., "I don't want to admit inferiority to an AI, so I create a measurement that allows me to remain superior."
(An aside here: yes, I mean self-preservation as you think I meant it, but I also mean it in a broader sense for which I simply lack a word - the desire to preserve oneself, along with the desire to favor oneself and similar beings over others, and the mental attempt to justify it.)
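For what it's worth, the bare form of that syllogism can be written out and checked step by step. A minimal sketch, using P and W as my own shorthand for "productivity of" and "worth of" (nothing from the thread itself):

Premise 1: for all a, b: P(a) > P(b) implies W(a) > W(b)
Premise 2: P(robot) > P(human)
Instantiate Premise 1 with a = robot, b = human: P(robot) > P(human) implies W(robot) > W(human)
Apply modus ponens with Premise 2: W(robot) > W(human)

So the argument is formally valid; any objection has to attack Premise 1 (the choice of measure), not the inference.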

Quote:
Some trait has to determine the worth of a person (or AI). Otherwise what's the difference between an intelligent entity and a common rock?
Here we will really diverge. I will answer that there is no ultimate difference (in worth) between an intelligent entity and a common rock.

The problem that often arises in philosophical arguments is that they must lead backwards into more and more basic premises until a shared one is reached. If we were both Muslims, for example, we could fall back to the Koran and argue interpretations of it. Here, however, I don't know if we will find one.

Quote:
Subjectivity of something doesn't make it any less real.
Agreed. And I agree that pain has reality. Temperature also is subjective, but real. However, pain and temperature both have a physical, material basis. Pain is the response of the nervous system to some stimulus, and heat and cold ultimately depend on the movement of molecules and atoms - or, if you like, on the body's response to that stimulus. Morals, including rights and value, exist only in the minds of humans, and have no actual reality.

Back to the rock: we value humans more than rocks, so we make morals that reinforce this. But I see no logical reason for a rock to be less valuable. If you argue from the consequences (insane psychopaths can torture people), as unfortunate as that is, it provides no actual, reality-based reason for assigning more worth to empathy. Just because it offended humans that the universe does not revolve around the Earth doesn't mean that the universe revolved around the Earth at any time.

I agree with your actions in your two scenarios, but for me it is not because they were the right thing to do, but rather because they are the things that agree with my personal emotions about what is right. It would be similar to our both agreeing that we like pizza.

Quote:
Hope this helps.
Yes, seeing a response longer than two sentences with one being an insult is always nice, whether I agree with it or not.
 
Old 06-10-2018, 02:10 AM
 
Location: 'greater' Buffalo, NY
3,067 posts, read 2,108,277 times
Reputation: 3965
I object to Harvey Weinstein being mentioned in the same breath as Pablo Escobar, but otherwise, good post, sir.
 
Old 06-10-2018, 02:17 AM
 
Location: 'greater' Buffalo, NY
3,067 posts, read 2,108,277 times
Reputation: 3965
Quote:
Originally Posted by aca1 View Post
Morals, including rights and value, exist only in the minds of humans, and have no actual reality.
Neurochemical reality. Try to disabuse someone of their morals, or even of a single ethical belief, and watch them recoil as they strive to protect their psychological homeostasis. This is not a mere concept; it does in fact align with something neuroscientifically concrete (if elusive).
 
Old 06-10-2018, 02:39 AM
 
Location: 'greater' Buffalo, NY
3,067 posts, read 2,108,277 times
Reputation: 3965
Quote:
Originally Posted by aca1 View Post
Thus to escape this possible inferiority, you arbitrarily declare that a being's worth stems from its empathy, a trait that AI won't/can't (in your opinion) have.

As a human being, I value human beings, no matter what traits robots possess. Or if you like, I would side with simple self-preservation.
Your values too are arbitrary. Anyone's values are arbitrary. Self-preservation is 'natural' but logically arbitrary. All we can do as a species is come to a consensus, or as close to that as possible to maintain harmony/civility. I think the OP was quite reasonable, given the arbitrary axiom from which he began--one I think many could 'agree to agree' on.
 
Old 06-10-2018, 08:50 AM
 
Location: Cincinnati
42 posts, read 169,572 times
Reputation: 106
Quote:
Neurochemical reality. Try to disabuse someone of their morals, or even of a single ethical belief, and watch them recoil as they strive to protect their psychological homeostasis. This is not a mere concept; it does in fact align with something neuroscientifically concrete (if elusive).
I agree that it is a neurochemical reality, but neurons and biochemical reactions are what constitute the human mind, so morality still exists only in the human mind/brain. Morals are the result of millions of years of survival-based (and, I suppose, sexual-selection-based) evolution. While we can go further down the rabbit hole about whether these neurochemical realities are the same for everyone or different, and, if different, whether the differences constitute some sort of mental disease, for the purposes of my original thought it remains the same: human morality is a self-preservation mechanism, which I maintain is the key to the OP argument.

Quote:
Your values too are arbitrary.
True. It would be absurd for me to accuse everyone else of having arbitrary values, while maintaining that mine alone are reality. I could be completely wrong. I merely enjoy pointing out what I believe, and seeing where it leads, and what possible arguments can be offered to the contrary.

Quote:
All we can do as a species is come to a consensus, or as close to that as possible to maintain harmony/civility. I think the OP was quite reasonable, given the arbitrary axiom from which he began--one I think many could 'agree to agree' on.
Perhaps many can agree on valuing beings based on their empathy. Perhaps this will lead to increased harmony and civility (which need not necessarily be the goal of humanity). Perhaps, after thousands of years of various religious, philosophical, and social thinkers trying to get humans to value each other, it will be the rise (or threat of the rise) of A.I. that finally unites all humans in valuing each other.

I do not necessarily object to the goal of the original post of valuing people based merely on the fact that they can feel emotions. I just hold that this would be another arbitrary act of self-preservation and justification in the long evolutionary chain of survival of the fittest, and that ultimately our (i.e., humanity's) thoughts on what constitutes a being's value have no basis in reality. Nor (and I realize now that this was not the OP's main point) would it help stop a technocalypse. After slaughtering all humans, the A.I. would simply assume our position of rationalizing why their traits were the most valuable.
 
Old 06-10-2018, 11:35 PM
Status: "I'm an Unmherkun puppy-kicking Socialist" (set 21 days ago)
 
Location: Dallas, TX
4,046 posts, read 2,120,926 times
Reputation: 3781
First, to aca1's last comment in the previous post:

Quote:
Originally Posted by aca1
Yes, seeing a response longer than two sentences with one being an insult is always nice, whether I agree with it or not.
Thinking about it for a while, I can now see how this sentence could come off as patronizing, even though that was not my intent. I thought it a friendly way to close my last post, but obviously it didn't come off that way. I apologize for this.

Quote:
Originally Posted by aca1
Ok, fair enough. It seems I misunderstood your original post somewhat. You (and I may be wrong again) are not really concerned with actual AI development, but rather you are concerned with the way in which people are valued and assigned worth. You use the AI of Skynet et al in order to demonstrate the error of using 'survival skills and productivity' as a guide to worth.
That's certainly my ultimate point, but it is subsumed into the deeper issue of how intelligent, self-aware entities in general should value each other (human or AI, whether as individuals or as a society – including AI societies, such as they would be). For every entity of human-like intelligence, productivity, "doing what needs doing" despite the difficulties (fearlessness), and physical strength and/or robustness – whether human or AI – the same person-valuing system (or at least an extremely similar valuation system) applies. This should include "Skynet," etc. as well, if we are to be consistent. For the sake of sticking close to the original post, I'll assume an AI without emotions (in the human sense, at least).

Quote:
Originally Posted by aca1
I feel that a lot of my earlier objection still holds. I realize that showing that a premise leads to an illogical conclusion is a way to win an argument; however, I do not feel that your scenario leads to an illogical conclusion. If we examine it as such:

Productivity is the measure of a being's worth
Robots are more productive than humans
Thus, Robots are worth more than humans

I believe this is a valid argument. The objection would come from the conclusion being undesirable to a human, which means it rolls back to a self-preservation argument, i.e., "I don't want to admit inferiority to an AI, so I create a measurement that allows me to remain superior." (An aside here: yes, I mean self-preservation as you think I meant it, but I also mean it in a broader sense for which I simply lack a word - the desire to preserve oneself, along with the desire to favor oneself and similar beings over others, and the mental attempt to justify it.)
Self-preservation is, in most cases, part of it, but not all of it. While most people want to live as long as possible, a second aspect is pain/suffering avoidance (physical or mental). People also want to avoid pain and suffering independent of the self-preservation drive (incidentally, that's my take on why it's so often difficult to do painful things that are actually long-term survival aids).

Both these things are directly impacted by how others value the entity. The more other people value the entity, the more likely it is to continue existing and/or have a high quality of life (or functionality, in AI’s case).

So the valuation of entities is about two matters: (a) how we actually size up others' value and (b) how we should value each other (again, human or AI). This is because the same assessment process so many of us frequently use on other people can also be used by AI to size up individual humans.

Quote:
Originally Posted by aca1
Here we will really diverge. I will answer that there is no ultimate difference (in worth) between an intelligent entity and a common rock.

The problem that often arises in philosophical arguments is that they must lead backwards into more and more basic premises until a shared one is reached. If we were both Muslims, for example, we could fall back to the Koran and argue interpretations of it. Here, however, I don't know if we will find one.
AND

Quote:
Originally Posted by aca1
Agreed. And I agree that pain has reality. Temperature also is subjective, but real. However, pain and temperature both have a physical, material basis. Pain is the response of the nervous system to some stimulus, and heat and cold ultimately depend on the movement of molecules and atoms - or, if you like, on the body's response to that stimulus. Morals, including rights and value, exist only in the minds of humans, and have no actual reality.
One commonality is that, masochists aside, people do not want to be hurt, harmed, or demeaned in dignity, especially to have it initiated against them. As you said, pain has an objective basis and is probably measurable in principle (if not already), and this includes mental/emotional pain. This seems to be a commonality all of us share, including mental patients and even the robot boy in Spielberg's movie A.I.

Quote:
Originally Posted by aca1
Back to the rock: we value humans more than rocks, so we make morals that reinforce this. But I see no logical reason for a rock to be less valuable. If you argue from the consequences (insane psychopaths can torture people), as unfortunate as that is, it provides no actual, reality-based reason for assigning more worth to empathy. Just because it offended humans that the universe does not revolve around the Earth doesn't mean that the universe revolved around the Earth at any time.

I agree with your actions in your two scenarios, but for me it is not because they were the right thing to do, but rather because they are the things that agree with my personal emotions about what is right. It would be similar to our both agreeing that we like pizza.
Empathy, however it comes about, is what allows us to feel what other neurological organisms feel, and it also helps others recognize us as entities that have the capacity to feel pain and joy, and good or bad in general. Furthermore, there are parts of the brain (the limbic system, mirror neurons) involved with empathy, so it would seem that empathy has a material basis as well.

I see the first paragraph's last sentence more in terms of bruised egos or strong disillusionment than in terms of people's core essential value actually being degraded *relative to other people*. That means that while it was upsetting to learn that heliocentrism (which overturned a major plank in some theologies) downgraded the Earth to just another ball of rock in the solar system, it still said nothing (in non-theological terms) about how humans ought to behave toward or value one another.