Gosh, a new thought.
When following links from carteseandaemon's post on Permutation City, I ended up reading Greg Egan's page [danger, contains spoilers] where, as a pretty much spoiler-free tangent, he says:

What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn't give us the right to inflict the same kind of suffering on anyone else.
This is potentially an important issue in the real world. It might not be long before people are seriously trying to “evolve” artificial intelligence in their computers. Now, it's one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination. If the first AI was created that way, it would have every right to despise its creators.
Given people's ongoing need to have TGGD, and the tendency of TGGD to home in on creation vs evolution, I am surprised never to have stumbled across this thought. I know lots of fluffy liberal Christians who believe in evolution and wave their arms around and say "this is how God creates his creation"* and in all the arguing I have never heard any of my friends say "but evolution is an evil way to do things" - so this must prove we have no god, or an evil god, or a god who was constrained not to be able to do things in a better way.
I'm not sure if it holds water**, but it interested me, so I thought I'd bore my friends page with it :-)
*Indeed, some days I am one
** So I guess the argument rests on the fact that it's wrong to kill. But maybe not. Maybe it's that people shouldn't be designed to do other people's bidding. The more I think about it the more I don't even understand Greg Egan's argument, but he is very clever and there are lots of things of his I don't understand, probably because I haven't worked them out yet rather than because they are wrong. Maybe the argument is actually equivalent to thinking that it's morally wrong to make a universe in which things have to die. *ponders*
doesn't give us the right to inflict the same kind of suffering on anyone else
Now there's an argument for not having kids if I ever heard one. "Life is suffering, so let's not make any more."
To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way.
I don't understand this at all. It seems to be predicated on the idea that existence is bad - that because we suffer and die it would have been better not to have existed at all.
Or are you just restating the problem of evil: that if God can do anything he could have created versions of us that didn't suffer, so why didn't he?
Now, it's one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination.
Why would anyone have to kill sentient programs? Do they take up a lot of space or something?
Date: January 15th, 2008 02:22 pm (UTC)
Yes, as I said, "The more I think about it the more I don't even understand Greg Egan's argument."
On the specific sentient program point, they might take up a lot of processor time and electricity and computing resource, or something. But I guess the key point is that things don't evolve to be better if everything survives - if you're improving your computer programs by survival of the fittest then that implies getting rid of the unfit ones...*
*I know there is a huge amount of fallacy and fuzzy biology here, starting at the use of the word better and then continuing.
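The "getting rid of the unfit ones" loop being discussed - rank a population by fitness, discard the worst, breed the survivors with mutation - can be sketched as a toy genetic algorithm. This is a minimal illustration under made-up assumptions (the fitness function, population size and parameters are all invented for the example; it evolves bitstrings toward all-ones, nothing to do with sentience):

```python
import random

def evolve_onemax(pop_size=20, genome_len=16, generations=40, seed=0):
    """Minimal genetic algorithm sketch.

    Fitness is just the count of 1s in a bitstring. Each generation the
    less fit half of the population is discarded (the culling step the
    post worries about) and the survivors breed, with crossover and a
    point mutation, to refill the population.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # rank by fitness (number of 1s)
        survivors = pop[:pop_size // 2]       # discard the unfit half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)   # pick two distinct parents
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]         # one-point crossover
            i = rng.randrange(genome_len)
            child[i] ^= 1                     # point mutation: flip one bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

best = evolve_onemax()
print("best fitness:", sum(best))
```

The point of the sketch is that the selection step is load-bearing: without the `survivors = pop[:pop_size // 2]` line, nothing pushes fitness upward, which is exactly the "things don't evolve to be better if everything survives" observation.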
Date: January 15th, 2008 02:18 pm (UTC)
Hooray! It's good to see you opening huge controversial cans of worms on your LJ again. I've missed these posts; we haven't had one since November, and that one made no sense anyway.
So the key question appears to be: is it morally wrong to create a sentient life knowing that it has a good chance of suffering and dying? Well, if that were morally wrong in general, then everyone who conceived a child (at least everyone who did so voluntarily, and possibly also those who did so through negligence) would be guilty of it. And there probably are people who take that view, but they're not numerous, and I don't think Egan is one of them.

But if it is sometimes not wrong to create a sentience in the knowledge that it will probably suffer and certainly die, why does Egan think it would be wrong in this case? It isn't very clear from his comment there, but my guess would be that the difference is that in the given situation - in the Permutation City universe, given the available technology - it's possible to do better. In the book, people are able to create VR environments mouldable to the wishes of the people inhabiting them, and also able to emplace sentiences into those environments and give them eternal life in a malleable world where they need never suffer.

(They can still experience emotional suffering, of course, but I'm presuming that that's exempt from Egan's moral condemnation on the grounds that he doesn't mention the making of Copies in the first place as a moral wrong. This also supports my conjecture that what he objects to is allowing suffering when we can do better: physical suffering in the presence of VR which eliminates it is inexcusable, but emotional suffering is just one of those things we have to live with.)

So presumably the sin is not to create the Lambertians per se, but to deny them the same luxurious suffering-free VR lives enjoyed by the people observing them.
Would it, perhaps, have been OK if each sentient Lambertian had been removed from the simulation at the point where it first started to experience an unconscionable level of suffering and transferred into a VR universe in which it was able to be happy? (And, let us assume, some other piece of software is given the job of credibly imitating what it might have done back on the planet in its absence.)
Date: January 15th, 2008 02:36 pm (UTC)
Oh, no, it won't turn into one of those posts now you've written that ;-)
Yes, I think it's something to do with being able to do better. Some bit of my brain is thinking about the way we think it's morally OK to treat animals here, although I haven't quite pinned down why. It's OK to breed animals to keep them as pets; it's less OK to breed them to do experiments on them (in as much as this is much more regulated, and we have to show that there are benefits, and you can't just do it if you want to). Suffering as a price you pay for being alive is one thing; suffering that you inflict on others (even passively, by putting them in a world where that suffering must happen if you could have put them in a better world) is the immoral thing.
But from that point of view it depends on whether we have a better way to make intelligence: if we got rid of the suffering, would the blade be honed?
And there is another interesting thing hidden in "suffer and die" - death is usually preceded by suffering, but is there an idea here that unnecessary death is bad, as well as unnecessary suffering? Many of Greg Egan's ideas are trying to think of ways to live forever, which in a very pop psychology way makes me think that he (as I kind of do) thinks of death as an awful shame.
Date: January 15th, 2008 02:36 pm (UTC)
This is based on what I think is a fallacy: that it can be better for someone never to have existed than to have had a life with X amount of suffering. (I know Jesus said "better for that man if he had not been born", but he was speaking colourfully, as he often did, and anyway not being born is not necessarily the same as never existing.) Anyway, it's a fallacy, because someone who never existed doesn't have any best interest - in fact there isn't a 'someone' in the first place, by definition. But the point is that you can't look at an entity and compare for them two possible states, existence and never-having-existed, and judge which is preferable for that entity, because never-having-existed is not a state of that entity.
Put another way, life simply is, one may prefer one form of it to another, one may even prefer it to end at a given point rather than continue, but it doesn't make sense to talk about non-being as a preferable alternative.
Date: January 15th, 2008 02:39 pm (UTC)
As Simon and I are discussing, I think the core of this is not between being-and-suffering and not-being, it's between being-and-suffering and being-and-suffering-less. Which I suppose makes Catriona right, it's just the problem of evil, and there are much less esoteric examples of the problem of evil - ones easier to point to and say "this is nasty" about - without going "evolution is so obviously evil, therefore problem of evil!".
1: Evolution is nothing like as evil as a classical hell*. (If that exists and is occupied then God is a sadistic piece of scum).
2: Evolution is only a bad way to do things if there is no better way. Same as the death penalty. (A problem with some of the other interpretations of hell - if being thrown into the lake of fire simply kills someone, then it implies that the omniscient and omnipotent creator has made an extremely bad creation which (s)he** is incapable of fixing. And I'm supposed to worship this bad craftsman who blames both his tools and his creation because...?)
3: Egan makes what I see as a fundamental fallacy. He seems to assume that suffering and death implies that it was better that someone had not been born. I'd say that you need to do an entire sum of the good with the bad.
* I'm talking eternal torment and lakes of fire here rather than e.g. C S Lewis' "separated from God".
** How should I know which if any gender was relevant to God?
Date: January 15th, 2008 03:27 pm (UTC)
Hmm. Is a classical Hell of eternal torment, lakes of fire and so on particularly more horrifying than permanent separation from everything that is good?
Date: January 15th, 2008 02:57 pm (UTC)
Genesis, the Evolution edition
"And God made things which would eventually turn into the beast of the earth, cattle, and every thing that creepeth upon the earth: and God saw that it wasn't catastrophically bad enough for them all to die."
Re: Genesis, the Evolution edition
Surely we can make a slightly stronger statement about the existence of life on Earth than "not catastrophically bad enough for us all to die"?
Date: January 15th, 2008 03:10 pm (UTC)
I think part of his point is that a system where natural selection can occur is more cruel than one without. With evolution there will be organisms with unsuccessful characteristics, and eventually they and all their descendants will die out with little or no contribution to the gene pool. Nowadays in humans we work to disallow this, enabling people with genetic conditions to live and have their own children, because doing otherwise would be immoral.
Without natural selection you are constrained to a system where the organisms must be perfect first time, which is a bit similar to Intelligent Design (albeit extended to the entire ecosystem). The organisms still live and die, but there will be no case of natural selection causing the eventual extinction of a family line (which an AI would obviously be unhappy about).
So essentially creating a system without natural selection results in life being equal for all organisms. With natural selection you create an unfair system with undue suffering and struggling.
His other point about breeding programs is seen if you extend it to humans - breeding everyone, killing all the parents, testing the fitness of the children, killing off the unfit children, repeat at adulthood - which is just as abominable with sentient programs.
And I expect AIs would have a chip on their shoulder anyway (no pun intended) if we prefer to limit their species' growth rate because of limited processing power, or pause them when it suits us.
Date: January 15th, 2008 04:25 pm (UTC)
there will be no case of natural selection causing the eventual extinction of a family line (which an AI would obviously be unhappy about)
Because of the suffering involved for the ones who die young, or because AIs are assumed to want to propagate their gene-equivalents? Natural Intelligences sometimes have no wish to continue their family line (e.g. me and most of my surviving relations) - would AIs necessarily never be like this?
Oh yes, I read that. I thought it was odd, but immediately felt as though he had a point: they *have* created a lot of suffering - can that be OK (when it's avoidable)?
Amongst other cans of worms, it suggests the people inside are less real than the people outside. Possibly the people outside's goodness is not related to large scale events inside? It's about this point I realised what a can of worms you'd opened :)
Date: January 15th, 2008 03:38 pm (UTC)
The converse is interesting. If it is judged that it is better to have lived and lost than never to have lived at all, are they morally obliged to create a lot of suffering, as the world will be a better place for more people having existed? It seems that it must either be very bad to create a whole universe or very good to create a whole universe; I can't see it panning out as a "you can do that if you want to" answer. But then all this is there in microcosm with whether or not to create children, and that's the answer we usually give.
Date: January 15th, 2008 06:05 pm (UTC)
Is it important that no one organism *has* to suffer and die? Because I'd feel wretched breeding an animal/ having a child that would *certainly* have a miserable life, for example by being a broiler chicken or a creature with a terrible, terminal illness.
Whereas when we breed, we accept the chances that we'll produce something sick and suffering, because we might also produce something thriving and happy. Since creating something thriving and happy is Good (I don't know why), all else being equal, we generally accept our participation in evolution.
If we as a species got an interview with a hypothetical Creator, would we request that s/he end evolution now? Possibly. Would we tell him/ her that they were wrong to set events in motion for our existence? Probably not, because I like being alive, and I also like being well-adapted due to my ancestors having been brutally selected. Maybe sentient machines would say the same.
So I think it's morally neutral to create life, with the moral shading situation-dependent. But this is based on [suffering=bad], not [dying=bad], and also on [living but not suffering=good]. Which is a subjective moral compass.
Isn't the problem with this argument that it presupposes design even though it references evolution - blind evolution isn't good or evil, even though it has aspects of both, it is chaotic.
Farming and slaughtering thousands of sentient lifeforms to eugenically create the one you wanted is not actually *evolution* in the 'natural' random way it is meant in the creation vs evolution debate: it is artificial selection, which although using the mechanism of evolution is not the same thing at all, because it is conscious.
To me, the key metaphysical mystery is my own consciousness. I know I'm conscious: despite the best efforts of Zen I sense my own mind as a thing apart from the universe of my senses.
At a pragmatic level, it's reasonable to assume everyone else has similar consciousness and act accordingly, thereby deriving some precepts many would label morality.
If, however, one is going to create a computer in which intelligent life evolves, one would have to answer the key question: are the creatures conscious, or do they merely simulate consciousness?
If they're only simulating consciousness, do whatever you like. Sell it as the next version of The Sims, for all I care. If they're conscious, however, my main moral principle, treating others as I'd have them treat me were the situation reversed, applies.
Am I glad I exist? Yes. Will I regret having existed as my death draws near? I doubt it. Do I care if I'm in The Matrix? No. Would it be immoral to create me in The Matrix? No.
I'm not sure how Greg Egan gets from there to a presumption that life is in some sense unbearably miserable for "lower" life forms, such that it's unethical to create, say, a lizard or an amoeba even when it's OK to create a human.
Date: January 16th, 2008 09:12 am (UTC)
I'm not sure where you see that presumption in what he wrote. He specifically complained about "billions of sentient creatures living, struggling and dying along the way" (my emphasis). There's no implication that he thinks anyone's life is automatically miserable, merely that in an evolutionary setting a huge number of those lives will contain miserable bits and will end, often messily. And he seems primarily concerned about this happening to sentient creatures, i.e. those near the end of the process.
Date: January 16th, 2008 01:53 pm (UTC)
The Guardian today has a short piece on robot rights. Presumably that's coincidence, but who knows, someone out there might be reading your LJ :-)
Apparently "international robot day" is coming up on Feb 5; I just got linked to a sprawling essay on the subject. [NB I haven't read it, and make no claim that it's worth reading]
Whereas this is the entire basis of:
-My religious views
-The reason I don't want children (one among many)
-The reason I won't have rodents for pets except maybe rats
We all agree, I hope, that it's okay to turn off computers. We mostly agree that it's okay to inflict suffering on animals for the benefit of humans, even animals which display a comparatively high degree of intelligence. At the moment I'm struggling to see where the moral difficulty could arise. Perhaps my problem is the word "sentient". I'm not at all sure what that means.
If there were a computer programme which attained a level of consciousness comparable to human beings, then that might create a problem, or it might not. Is it wrong to hurt people because they are conscious, or is it wrong because they are people? To be honest, I struggle to see any arguments in favour of either position, but I think the former would run into big problems with people who aren't conscious.