Determinism

There are many philosophies of morality. Some I know; some I understand; some I think I understand but probably don’t; and no doubt there are several I have never heard of. What is most frustrating is that most have some good points and some bad ones, some conflicting propositions, and some paradoxes I can’t seem to resolve. It’s as if we each have to pick what we like and make up our own minds. But that’s not good enough.

Apart from those well-established theories, I have a simpler, perhaps naive, classification of morality, in three parts.

I. Theological morality:

A supernatural entity created everything, including us, endowed us with free choice, and imposed a set of rules on us. Those who follow the rules get rewarded; those who don’t get punished.

You probably know from my previous posts that I don’t make much of this theory. For obvious reasons, we’re not debating theology on this site.

II. What I call biological morality:

Sensors all over our body are mapped onto regions of our neocortex. So if I hit my elbow, my brain registers that a specific location on my arm is in pain. But when I see someone else’s elbow being hurt, some neurons in the same region of my neocortex also fire. These are the so-called mirror neurons, first described in macaque monkeys by Giacomo Rizzolatti and his colleagues.

This spurred theories about the basis of empathy: when I see someone hurt himself, I feel some measure of discomfort. If I’m average (not to say ‘normal’), I don’t like that discomfort. I want it to stop, or I want to prevent feeling it again. I can console the hurt person until I see the pain subside. I can perhaps tend to the person’s wounds and imagine the pain subsiding in the future. I can also remember or imagine how the person got hurt, and decide to change something so that no one else gets hurt in the same manner, thereby improving the environment not only for me, but for anyone who might get hurt.

This, if you consider it, is a selfish thing to do. I may be helping someone, but essentially I’m helping myself. The help I give may be far more beneficial to the other person than to me; it may cost me a great deal of time, energy, and resources; but in some measure I still do it to help myself. I’d even venture further: if I didn’t feel the slightest gratification for doing what I do, I would not do it (unless perhaps I were coerced).

And even if I don’t bring myself to help this person, I still feel that discomfort, unless there’s something wrong with me. During World War II it was observed, most famously in S.L.A. Marshall’s studies, that only a small percentage of soldiers actually aimed and fired at their opponents, even when stormed by attackers. After the war, this led to better selection of professional infantry: soldiers who somewhat lacked empathy and were able to kill adversaries. (Rather important if the goal is to murder as many of the other side as possible; not so much for people trained to push buttons that drop bombs on digital targets on a computer screen.)

People have been coming up with all sorts of theories of morality because they feel they should; because of their empathy. And that is problematic. Whole populations have been excluded from moral rules because little or no empathy was felt for them. Some people feel little to no empathy at all, and so feel entitled; they simply don’t understand those rules. And people can be manipulated and desensitized until their sense of empathy is warped. Empathy is almost as bad a guide for morality as theology is.

III. Rational morality:

For this we need two premises:

  • There is no free will
  • There is consciousness

If there is no free will, there cannot be ownership of anything: no resource of any kind, no measure of matter or energy, no claim to a certain amount of space, nor to a period of time in which anything can exist. Ownership would have to be earned or deserved through freely chosen actions; without free will, whatever anyone holds, they hold by chance.

Anything a consciousness experiences is always the result of a chance occurrence.

A consciousness, however, can only exist through a combination of resources (matter, energy, time, and space), and for it to continue to exist, a minimum amount of those resources is required, arranged in a particular pattern.

Here is the one thing I have trouble proving: I would suggest that consciousness, because it exists, and because it occurred by chance, holds value; and because of this value, it is worth preserving. (I know this is shaky.)

Now, if consciousness is worth preserving, and nothing can have free will, then no consciousness can be of greater value than another: without free will, no consciousness can have done anything to earn a higher standing. So all consciousnesses are equally worth preserving. (This is why I think anything that understands this should strive to preserve all consciousnesses, or admit that it is in conflict with logic.)

Also, if no consciousness has free will, and none can claim ownership of anything, then all available resources are to be divided equally among all existing consciousnesses, with each receiving, at a minimum, the resources it requires to continue to exist.

Now, not all resources are equally accessible; so if some resources must be expended to access others, that expenditure needs to be taken into account in the distribution of all resources.
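
To make this concrete, here is a rough sketch of the allocation in notation. The symbols (n, R, m_i, a_i, r_i) are my own labels for the quantities above, not established terms, and charging the access costs to the common pool is just one possible choice:

```latex
% n   : number of existing consciousnesses
% R   : total stock of resources (matter, energy, time, space)
% m_i : minimum resources consciousness i needs to keep existing
% a_i : resources that must be expended to make i's share accessible
% Each consciousness first receives its minimum; the access costs are
% charged to the common pool; the surplus S is then divided equally.
\[
  S = R - \sum_{i=1}^{n} \bigl( m_i + a_i \bigr),
  \qquad
  r_i = m_i + \frac{S}{n},
  \qquad
  S \ge 0 .
\]
% r_i is i's total share; the condition S >= 0 says the stock must at
% least cover every minimum plus every access cost.
```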

I claim that all moral propositions can, in principle, be expressed as a function of the equal distribution of resources. It may be very difficult in practice, and sometimes impossible, to calculate this function exactly. But to be moral, I think, is to come as near to it as possible.
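
In the same home-made notation, one might read that claim as follows: if r* is the ideal allocation sketched above and x is the allocation an action actually brings about, then the “function” is a distance, and being moral is minimising it. The choice of distance measure is an assumption on my part:

```latex
% x  : the allocation of resources an action actually produces
% r* : the ideal equal allocation derived above
% M measures how far the world is from the ideal; a moral agent
% picks, among the actions available to it, one that minimises M.
\[
  M(x) = \lVert x - r^{*} \rVert,
  \qquad
  \text{to be moral} \;=\; \text{choose the feasible } x \text{ that minimises } M(x).
\]
```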
