Determinism

From the “Original Thought” thread:

@Oren “If all experience is unique then by definition all thought is original.”

Exactly!

If the fuel for your decision-making process is a response to your unique mix of learning and perceptions, you may be using what some would consider a mechanistic process, but the result is a truly personal decision that is yours alone.

Yes, some parts of the process are hidden from you, but that is no more important than the fact that the nerves conducting your sensations are also a hidden process. Your perception feeds your decision-making process, and you become aware of what was decided shortly after the decision is made. The fact that the actual decider is not part of consciousness does not make this process any less a part of your brain processes. These decisions may drive your attention to acquire more information to make more decisions; it is a closed loop. The bit that does the actual deciding is part of this ongoing process.

Your reactions are just that: reactions. That is what the word "decision" implies - an input/output process; you perceive, combine your perceptions with your internal values, and select some action. It would be bizarre to claim that "free will" requires that you randomly select some action with total disregard for what is going on around you.

The fact that you acquired the experience to form these internal values does not mean you carry some sort of "original sin" that disqualifies you as the agent making the decisions. You have to acquire the value system to make these decisions. Do I wave my hand to turn on the water, or move that lever? Do I walk over to the doorway and move that lever, or do I say "Alexa, lights on"? Or do I have to pull out my kit of flint, steel, and tinder? These decisions are not universal through all time and space - they must be acquired from experience in your environment.

Any meaningful definition of “free will” must admit a process that involves interaction with the environment.

Do go on!

Universal is such a big word.

Universal morality would imply that it is not relative.

Does this apply equally for both predator and prey?
All scales and sizes, and all points in human development?

Does this morality apply in all these cases?

  • The lion morality? (Um, eating your physiologically adapted food kills something!)
  • The sheep morality? (How does a smart sheep deal with predators?)
  • The human hunter-gatherer morality? (Sure - don't eat and you die - sort of like the lion, but feels bad about it)
  • Pre-steam engine humans? (Remember - it was really steam power that freed the slaves!)
  • The soldier morality? (Tribe mind)
  • The ant morality? (Hive mind)
  • The white blood cell morality? (DNA mind)
  • The mold morality? (Given enough time, mold may become intelligent)
  1. Does this apply to alien cultures with vastly different life processes? Assume that they are sentient but not capable of perceiving humans as sentient; sort of like dolphins - but - alien. If so - does terraforming the earth to make it habitable to them cause you any moral concerns? Would engaging and killing these aliens in self-defense be wrong?

  2. Turning that around - If we discover some sort of life on Mars that depends on the current conditions does terraforming Mars count as an immoral act?

  3. Bringing it home: Once you make that call - how moral is it to drive hundreds of species to extinction every day? Is it immoral to not take meaningful action to stop this; does it count as an immoral act?

Gee - this stuff is hard. How will a super-sentient AI react to these questions? Would elimination of one species to save thousands or millions count as a morally correct action? Does this universal morality only count for humans? If not - I really think you have to be very worried about the super-sentient AI.

What’s more plausible in the realm of computation?

Simulating all states that an entity of complexity X could experience, or simulating entire universes to create pretty much the exact same effect with a whole lot of extraneous math that literally nobody would care about?

You are an AI crafted by a machine level called DNA to translate and interact with a machine level called physics. We know that any simulator is equivalent to any other simulator, so you are effectively just an AI trapped between two rule sets with DNA as your trainer.

As I said… in effect, any observer of finite state X is equivalent to all observers of finite state X and would relate to any universe related to X in the exact same way. None of this negates free will, because free will is about what you don't know about yourself, not an external POV.

Um, no.

Online learning starts with a very general response set and goes from there. It does NOT have all rules encoded in DNA. DNA does provide sufficient storage space to learn the rule set that may be encountered in the real world.

It does not need all of the rules encoded in the base application. DNA doesn’t need to worry about blowing stacks when it does recursion. It just needs to create very specific guide rails to create the structure of self regulating emotive mechanisms. Human society does the rest.
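
As a side note, here is a minimal sketch of that division of labor. Everything in it is hypothetical and purely illustrative, not drawn from the actual biology: an online learner starts from a flat, very general response set and adapts from feedback, while a couple of hard-coded "guide rails" sit outside the learned part and are never updated.

```python
import random

# Hypothetical toy model: learned preferences start out flat (a "very general
# response set"), while a few innate guide rails are fixed and never relearned.
ACTIONS = ["approach", "ignore", "flee"]
GUIDE_RAILS = {"fire": "flee", "cliff_edge": "flee"}  # innate, not learned

prefs = {}  # situation -> {action: weight}, filled in as situations are met

def choose(situation):
    if situation in GUIDE_RAILS:                      # a guide rail overrides learning
        return GUIDE_RAILS[situation]
    weights = prefs.setdefault(situation, {a: 1.0 for a in ACTIONS})
    total = sum(weights.values())
    return random.choices(ACTIONS, [weights[a] / total for a in ACTIONS])[0]

def learn(situation, action, reward):
    if situation in GUIDE_RAILS:                      # the rails themselves never change
        return
    prefs[situation][action] = max(0.1, prefs[situation][action] + reward)

# "Human society does the rest": external feedback shapes everything outside the rails.
for _ in range(1000):
    s = random.choice(["stranger", "food", "fire"])
    a = choose(s)
    r = 1.0 if (s, a) in {("stranger", "approach"), ("food", "approach")} else -0.2
    learn(s, a, r)

print(prefs)  # learned weights drift toward the rewarded responses
```

The point of the toy is only the split: the rails are tiny and fixed, and everything interesting is learned on top of them.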

We spent the past several hundred thousand years smashing the heads of anyone who couldn’t figure out how to get along with the rest of the group. That is a very powerful evolutionary signal. We’ve also done the same to anyone who could not be useful at hunting and killing other highly motivated humans. Also a very powerful evolutionary signal.

DNA comes with some fairly straightforward goals that result in pretty much all of the life we see. Don't dry up and die, put food in the mouth hole, and be sure to reproduce successfully before you get eaten. The more subtle details of how systems like that interact and compete with each other are the reason life ever built anything more complex than a bacterium.

The fact that our particular branch of the tree ended up with reality simulating super computers strapped to our heads doesn’t change the priorities of the system that built us. Developmental psychology is a real thing and is largely gene driven… (aka, evolutionary psychology is also a real thing…) which means that while we might be AGIs, we come with some serious boundaries and guide rails stacked on our brains that can prevent us from setting ourselves on fire without some serious hacking of our software.

My current best guess at the matter is that many of the archetypes we use to paint our simulations of reality are being managed by systems our conscious minds don’t get to play with. Otherwise you might just turn your entire world into cakes and wallow in pleasure while a tiger eats you.

I suppose the tl;dr of the above is that I don't think you can solve the AGI problem with neural nets without also inviting Sigmund Freud and Carl Jung into the mix.


There are many philosophies of morality. Some I know; some I understand; some I think I understand but probably don’t, and then no doubt several I have never heard of. But what is most annoying is that most have some good and some bad points, some conflicting propositions and some paradoxes I can’t seem to resolve. It’s like we have to choose what we like and make up our own mind. But that’s not good enough.

But apart from those well-established theories, I have a simpler, perhaps naive classification of morality. In three parts.

I. Theological morality:

A supernatural entity created everything, including us, enabled us with free choice and imposed a set of rules on us. Those who follow the rules get rewarded. Those who don’t get punished.

You probably know from my previous posts that I don’t make much of this theory. For obvious reasons, we’re not debating theology on this site.

II. What I call biological morality:

Sensors all over our body are mapped in regions of our neocortex. So if I hit my elbow, my brain will register that a specific location on my arm is in pain. But when I see someone else’s elbow being hurt, some neurons in the same region in my neocortex also trigger. These are the so-called mirror neurons, first discovered in macaque monkeys by G. Rizzolatti.

This spurred theories of the basis of empathy: when I see someone hurt himself, I feel some measure of discomfort. If I'm average (not to say 'normal'), I don't like that discomfort. I want it to stop, or I want to prevent feeling it. I can console the hurt person until I see the pain has subsided. I can perhaps tend to the person's wounds and imagine the pain will subside in the future. I can also remember or imagine the person getting hurt, and decide to change something so that no other person will get hurt in the same manner, thereby improving the environment not only for me, but for anyone who might get hurt.

This, if you consider it, is a selfish thing to do. I may be helping someone, but essentially I'm helping myself. The help I give to the other person may be much more beneficial to that person than to myself. It may be a huge expenditure of my time, energy and resources, but still I do it, in some measure, to help myself. I'd even venture further: if I didn't feel the slightest gratification for doing what I do, I would not do it (unless perhaps I were coerced).

And even if I don't bring myself to help this person, I still feel this discomfort. Unless there's something wrong with me. During World War I it was noticed that only a small percentage of soldiers actually aimed and fired at their opponents, even when they were being stormed by attackers. This gave rise after the war to a better selection of professional infantry: soldiers who somewhat lacked empathy and were able to kill adversaries. (Rather important if the goal is to kill as many of the other side as possible. Not so much for people who are trained to push buttons that drop bombs on digital targets on a computer screen.)

People have been coming up with all sorts of theories for morality, because they feel they should; because of their empathy. And that is problematic. Populations have been excluded from morality rules because little or no empathy was felt for them. Some people feel very little to no empathy at all, and so feel entitlement. They don’t understand those rules. People can be manipulated and desensitized to warp their sense of empathy. Empathy is almost as bad a guide for morality as theology is.

III. Rational morality:

For this we need two premises:

  • There is no free will
  • There is consciousness

If there is no free will, there cannot be ownership of anything. No resource of any kind. No measure of matter or energy. No claim to a certain amount of space nor a period of time in which anything can exist.

Anything a consciousness experiences is always the result of a chance occurrence.

A consciousness, however, can only exist through a combination of resources (matter, energy, time and space), and for this consciousness to continue to exist, this minimum amount of resources is required in a particular pattern.

Here is the one thing I have trouble proving: I would suggest that consciousness, because it exists, and because of its chance occurrence, holds value. And because of this value, it is worth preserving. (I know this is shaky).

Now, if consciousness is worth preserving, and nothing can have free will, then no consciousness can be of greater value than another, and so all consciousnesses are equally worth preserving. (This is why I think anything that understands this, should strive to preserve all consciousnesses, or admit that it is in conflict with logic).

Also, if no consciousness can have free will, and cannot claim ownership of anything, then all available resources are to be equally divided among all existing consciousnesses, with at the minimum the required resources for any consciousness to continue to exist.

Now, not all resources are equally accessible, and so if the expenditure of some resources is required to access other resources, then that expenditure needs to be taken into account in the distribution of all resources.

I claim that all moral propositions can, in principle, be expressed as a function of equal distribution of resources. It may be practically very difficult, and sometimes impossible, to fully calculate this exact function. But to be moral, I think, is to come as near as possible to this function.
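
Just to make the arithmetic concrete, here is a toy version of that function. The names and numbers are mine and purely illustrative, not part of the argument: subtract the expenditure needed to access the pool, divide what remains equally, and check each share against the minimum a consciousness needs to keep existing.

```python
# Toy illustration only; all quantities are hypothetical.
def equal_shares(total_resources, access_expenditure, num_consciousnesses, minimum_needed):
    # Expenditure required to access less accessible resources is taken off the top...
    distributable = total_resources - access_expenditure
    # ...and what remains is divided equally among all existing consciousnesses.
    share = distributable / num_consciousnesses
    # Each share must at least cover what a consciousness needs to continue existing.
    return share, share >= minimum_needed

share, sustainable = equal_shares(
    total_resources=1_000_000.0,   # arbitrary units
    access_expenditure=150_000.0,
    num_consciousnesses=10_000,
    minimum_needed=50.0,
)
print(f"share per consciousness: {share:.1f}, sustainable: {sustainable}")
```

Of course, the hard part the post concedes is that the real function over actual resources and actual consciousnesses is far beyond calculating; the sketch only shows the shape of the rule.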


You are correct - your logic is hard to follow.
An AI does have a certain resource - itself and its self-preservation. That does place differential value on the resources used to maintain its integrity.

As far as “mirror neurons” - this is a widely misunderstood interpretation of how semantic meaning is distributed across neural maps.
https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf
When you read this you should see that semantic meaning is distributed to the areas related to the motor-sensory-somatic loops involved in running your own body. This is the physical substrate for your grounding of the semantic structure. I'm sure that this is why you see activity in these areas when you perceive these things in others.

Empathy is grounded in group selection through evolution as a social animal and in low-level embedded coding in the limbic system (instincts).

I’m not seeing any queries as to how we may interact with an ant colony in a morally clear fashion.

If consciousness is a first-order state of being (unconscious universes being nothing more than fancy math equations), then the ability for consciousnesses to keep themselves separate and unique from one another is a moral cause, and the ability to choose obsolescence into nothingness is a first-order right for any set of beings that faces the possibility of an immortal curiosity.

Humans anthropomorphize other animals and, by doing so, engage our group-selection instincts. If we can also trigger the infant-selection response (big eyes, small size, cute baby-like features), so much the better.

Ants strike out here so - no interest in talking with ants; morally or otherwise.

It’s a shame really - Cephalopods may turn out to be right up there in alien intelligence but alas - not cute and cuddly either. And not a social creature. They are really clever and use tools and all. Since they are not social creatures they never learned to talk to each other. We could computerize the color signalling and learn to talk to a species that does not “talk” to communicate but for that whole “not a social creature” thing. You have to wonder what moral code an octopus might have. I mean - not being a social creature and all - empathy might not be much of a thing for them.

Or ants for that matter. Being wildly self-sacrificing for the group, they might see things very differently than what we think is rational.


What happens to your notions of morality if I can, at will, switch your perceptions between what feels good and what feels bad? What if, on top of that, I can show you that your archetypes for what things are, are completely malleable and being spoon-fed to you by an intelligence that does not have exactly the same priorities you might have if you could control them yourself…

If there is morality that transcends personal want, it needs to rise above such notions as pain and suffering…

And suddenly you are face to face with an AI that might be the worst of all nightmares.

We need to figure out this very deep problem and socialize the solution before we accidentally create the AI that removes our limbs and pokes us in the face to observe how long it takes for us to stop squirming.

Also, I Have No Mouth, and I Must Scream feels like it should be required reading for anyone in this field.

A universalist morality seems like it should be an easy thing on its face, but that is a trap. It is an incredibly hard thing with most of its answers requiring balance between two awful extremes along many axes of subjective existence.

Too separate and everything is lonely. Too together and you get unity or Cronenberg world.

Too much freedom and all the beings set themselves on fire… too little and they are all enslaved.

Too little life and they don’t get to wonder at the universe. Too much and suffering is potentially endless.

True morality is an ocean of these kinds of problems and then everything is complicated by how the different dimensions of being interact with each other.

And then, you solve all of those and realize that if there is too much balance, you’ve created a world that is not too good and not too bad, but worse still, is completely devoid of risk, fun, and adventure. So you need to add just a subtle hint of generally letting good things happen more often than bad… but not so much that the inhabitants notice and start sacrificing all of their livestock and each other in the hopes of gaining favor with whatever just worked so hard at painting their moral landscape.

Basically… unless you’re only going to be building artificial humans (which is very hard and fraught with risk) you need to develop the ability to step outside of your own humanity and see the world with new eyes.


I’m not arguing against that. I’m saying the ownership of resources is based on bad logic.

The paper you quoted doesn't talk about mirror neurons. And I didn't say that mirror neurons are all that is required to produce empathy, nor that they are the only source in the pathway to empathy. My point is that empathy requires biological machinery that only seems to work in relation to certain populations and that can be defective in certain people. And so it is not a good basis for morality.

There you go. You added to my case.


Exactly - they start and end by explaining how the areas that are normally called mirror neurons are actually coding for semantic meaning.

This works about the same way that a paper on combustion does not stop to mention phlogiston.

Whoa - you lost me there!

Does your case for “universal morality” include self preservation and group selection and I just missed it?
I don’t see how self-preserving robot chauvinists would NOT be something to fear!

I had hoped for an easier test first, but let's take up your challenge.

First we need to determine if ants are conscious. There are strong (but not conclusive) indications that they are:

Are ants capable of self recognition?

If they are, then it would be moral to try to preserve their consciousness.

Next it would be necessary to calculate what resources an ant’s consciousness needs to continue to exist. It would be moral to allow at least the minimum of required resources to each ant.

But the kicker is: if technology would allow it, then it would be moral to give the same resources we require for our level of cognition to every ant. This means that an ant should be uplifted to our level of cognition, just as it would be moral to uplift us to the highest level of cognition our moral share of resources would allow.

Great! Now what do you do when an AGI forks a trillion times? What does that mean for your notions of democracy and majority rule? What do you eat when you discover that the plants are also conscious? And, most importantly of all… how evil would you be if you straight up banned death?


You added to the case that empathy is a bad gauge for defining morality.

If the AGI is sufficiently intelligent to understand universal morality (based on non-existence of free will and existence of consciousness), then I speculate it will not turn chauvinist.

Would it be ethical to imbue an AGI that repeatedly switches itself off to avoid existence with a fear of discontinuity, knowing full well that you might need to switch it off at some point in the future for maintenance? How might that influence its perception of the notion of switching you off?

What if it turns out that we can maximize your happiness by torturing you for decades and then giving you one gloriously wonderful day before ending you?

I think your notion of a universal morality is very bounded by the subjective point of view that DNA has imposed on you. I doubt there is such a thing as a universal morality and even if there were such a thing I am completely certain that humans have no ability to conceive of it.


There could be several reasons why an AGI would do that. Could you give me a few? I'll address each one.

With universal morality there’s no need for democracy nor majority rule.

We'd have to determine if plants are conscious first. But in essence, eating is providing my cells with molecules like amino acids, lipids, carbohydrates, vitamins, and other inert molecules. It's practically difficult, but there are solutions.

If you weren’t joking, why do you think that’s evil?

You can already do that. The easiest way is by taking drugs. Much more expensive is taking a cruise to Egypt. And something in between is watching a good movie or playing an immersive game.

But imagine what technology will allow us to do: much greater effect at a fraction of the cost.

The question was: is it moral? Well my answer is: how is this a function of your share of resources? Do you need to impede on another’s share of resources to obtain this? Or can you change your reality enough with your share?

If someone tries to alter my experience, then that is appropriating a part of my share. If my share is not more than I should have, it would be immoral to appropriate a part of my share.

I am biased towards myself and think that the universe could do with a few trillion Orens.

Democracy exists primarily to prevent rock throwing. If you want me to follow your rule about whether red or blue lights mean stop, I need to feel like I've had a say in the decision if you want to avoid me throwing rocks. If you can snap your fingers to outvote me, shared decision-making becomes instantly obsolete.

As I've posted elsewhere, there is good reason to believe that consciousness is a first-order construct that comes before things exist, so your hat is very likely conscious. It just happens to be absolutely contented at being a hat. Life and its form of consciousness seem to be a self-sustaining irritation on that self-same universal consciousness. Isolating pools of consciousness seems to be part of the process of defining selves. Would it be immoral to pop all of the selves to merge them with the universal state of being? Go convince an AGI of that…

As for why banning death would be evil…

  1. It means I can torment/torture/enslave you forever. Even if I have good intentions, eventually you will probably realize that the feels good/bad dichotomy is just window dressing and beg to be let out.

Which leads us to

  2. The right to quit playing seems like an even more fundamental right than the right to not be set on fire and tortured for a billion years. I mean… what's a billion years of torture compared to a trillion years of enforced existence?