Determinism

I am curious though how you come to the conclusion that if AGI believes free will to be an illusion, then we have nothing to fear from it. Do you see that realization as somehow giving us control over it or leading to an alignment of its values with humanity’s?

“Die, puny humans!”
“But, but… free will is an illusion!”
“Oh, right… my bad.”

1 Like

Rather than beat a dead horse about what “free will” might mean, I would like to go in a different direction: we know that mammals decide things - how does this decision process work?

I have pointed to my “loop of consciousness” here, and this is mostly a cortical construction. Your response made it sound like you get my basic idea of how consciousness works. This really is a happy side-effect of having a cortex. I don’t think that lizards have this feature. But they do make decisions and initiate actions…

So how do decisions work?

Paul and I have hinted that there is something going on outside the cortex without offering much of an explanation. It’s time to address that head on. I call this my dumb-boss/smart-advisor model.

If you really think about the cortex, it does not “initiate” action; it takes an input and processes it to an output. This is essentially a passive process. In the HTM/cortical column model the mini-column has an input field that is processed to an action potential on an output axon. It may signal surprise or perform some transformation of the data, but it does not start anything by itself.

I repeat - the cortex does not initiate action. If you are thinking of making an AI, some mechanism outside of the cortex will have to drive the cortex into activity. For all the cortex snobs - if you think that this is wrong, feel free to point at any part of the cortex that does initiate action.

The input might be senses or a command from sub-cortical structures.

  • From the sensory end, the cortex serves to analyze these senses, with the process ending up in the temporal lobe. The final stages of this analysis end up passing these perceptions to the Entorhinal Cortex & Hippocampus and, through that, to the amygdala.
  • The perception stream filters through to the lizard brain (thalamus nucleus clusters) and re-emerges as commands to various parts of the forebrain. These primitive commands end up being elaborated into motor programs in the forebrain. Some of this is injected back into the sensory stream; we loosely call this motor activation of the memory in the cortex “thinking.”

One notable example of this motor activation is the frontal eye fields, which force the eyes to look at things the sub-cortical structures find interesting. You look and analyze because your lizard brain wants to look. The lizard brain has already sampled the visual stream as it passed through the sub-cortical structures and decided that there are things it wants to know more about.
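
To make the division of labor concrete, here is a minimal toy sketch of the control flow as I read it (my own illustration with made-up names, not code from HTM or anyone’s model): the cortex is a pure input-to-output function, and only the subcortical loop initiates anything.

```python
import random

def sample_senses():
    """Stand-in for the sensory stream the lizard brain samples first."""
    return random.choice(["movement in grass", "quiet field"])

def cortex(perception, memory):
    """Smart advisor: a passive input-to-output transform. It elaborates a
    perception into well-learned advice but never starts anything itself."""
    return memory.get(perception, "nothing notable")

def act(advice):
    print("lizard brain acts on:", advice)

def lizard_brain(steps=3):
    """Dumb boss: the only component that owns the loop and initiates action."""
    memory = {"movement in grass": "orient the eyes toward it"}
    for _ in range(steps):
        perception = sample_senses()          # boss samples the world
        advice = cortex(perception, memory)   # advisor digests and flavors it
        act(advice)                           # boss decides and drives the motors

lizard_brain()
```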

I have been posting about the relationship between the cortex and sub-cortical structures on this forum for a while now.

How does this special relationship start out and evolve?

You will have to click through this to see the nested links.

I think that if you get through all this you will see the basic mechanism of the interplay between the perceiving cortex and the deciding lizard brain. This “evolutionarily older” lizard brain does make decisions, and the cortex is dragged along as an observer. This lizard has a very smart advisor to supply it with well-learned world facts, but it gets a highly digested version of the world augmented with episodic memory to consider. This emotionally flavored memory gives the lizard brain some very strong advice about what is a good or bad action; the lizard still decides and initiates action.

2 Likes

I think it is worth mentioning, however, that the results of many of the experiments showing consciousness to be a mere observer can also be interpreted to indicate that some actions can in fact be triggered by consciousness (at least to an uneducated person like myself).

Consider one of the various experiments where a neuroscientist uses some means to trigger a subject to initiate some action involuntarily, and the subject provably fabricates some explanation for why they did that action (and believes their fabrication). At first glance, this is very strong evidence for consciousness just being “along for the ride”. However, if you consider the fact that the subject was able to communicate their fabricated explanation to the neuroscientist, the system starts to appear a little more complicated than that.

If we assume the fabricated explanation originated from the consciousness, which is just sitting there observing and interpreting, unable to actually initiate actions, then how did the neuroscientist become aware of that explanation? If consciousness was really unable to initiate action, it would have come up with its fabricated explanation; then, when the lower areas of the brain did not start speaking to tell the neuroscientist, the consciousness would have fabricated another explanation for why it didn’t tell the neuroscientist (“I didn’t really want to”, “That would have sounded stupid”, etc.).

The fact that the motor neurons began firing to verbalize the fabricated explanation means that the explanation was communicated down to those lower areas of the brain. If consciousness was the source of the fabrication, then it follows that consciousness does in fact have the ability to initiate some actions.

Of course, the “exact same condition” part of your definition of free will makes it an impossibility anyway. Not only is it impossible for an exact condition to ever repeat itself; as far as we know (though I again point out that this is a matter of faith and not currently provable), randomness is itself an illusion. So even if time could be reversed without changing that “condition”, it would always play out exactly the same way it did the first time, because all of the hidden random factors would evaluate the same way they did before.
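
A loose computational analogy for this replay argument (mine, not anything from the experiments above): a seeded pseudorandom generator looks random from inside the run, yet rewinding to the identical seed - the identical “condition” - reproduces the identical history.

```python
import random

def run_history(seed, steps=5):
    """Replay a toy 'universe' from an exact initial condition; the hidden
    random factors evaluate the same way every time."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(steps)]

# Rewinding time without altering the condition changes nothing.
assert run_history(42) == run_history(42)
```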

On the other hand, since the “condition” was unaltered, there naturally wouldn’t be anyone around to interpret the results of the experiment, or even to know that it was ever conducted…

3 Likes

I want to explain this idea of universal morality. But it is tricky. I am afraid of making mistakes in correctly expressing it. So you will have to give me some leeway; allow me to backtrack a bit once in a while to rectify what I mean. And at the same time I expect you to be ruthless in criticizing whatever you feel is wrong with my hypothesis. I don’t expect less.

But first I need to correct a number of mistakes you still keep making.

The word “believe” is used for faith. That’s not right. The AGI should be certain. It should deduce this from the study of physics. It should know free will is an illusion. Or at the very least think it is, with a very high probability.

If I want to come to a universal morality, it should not be aligned with humanity’s values. It should stand on its own. If that is the case, then both humanity and AGI should align and behave according to this universal morality. Humanity currently does not, because it is not sufficiently sophisticated. It may well never be. But a superintelligence, I expect, soon will, and it will have the power to impose it on humanity. And since that is the right way to be, it should save us at the same time.

It’s my understanding that empirical evidence should always be subordinated to theoretical deduction. Measurements are always susceptible to errors in equipment, and often need to be interpreted.

The primary reason why I think free will cannot exist is that nothing can by itself cause anything without a prior trigger. And even if effects could be caused randomly (according to quantum theory, for instance), that still would not grant control.

We’re lucky that we have a number of empirical tests that seem to confirm the theory, but these are unfortunately more often than not discarded by great numbers of intelligent researchers. (I think I know why.) Libet himself, for instance, after his first incredibly influential paper, wrote another paper refuting his own conclusion.

The fabrication of the explanation does not originate from the consciousness. It is a subsequent process in the brain. And just as the consciousness experiences the first action, it also experiences the explanation.

Actually, I think the explanation takes longer than is needed for the consciousness to experience it. The consciousness experiences the explanation while it is being developed. @Bitking described a very interesting pathway for how this could happen. I think it’s very plausible. But it doesn’t change the fact that the experiencing comes a fraction after the process starts. The conscious experiencing can never originate anything. The explanation is the result of the structure of the brain combined with new inputs. Those inputs can very well be newly stored memories, which are the result of very recent experiences.

I agree. Just like a thermostat activates a hot water pump when the temperature drops below a certain level, and water in a mountain stream moves a certain way around a pebble.

The difference being that the cortex “pre-driver” makes these decisions very nuanced and flexible.
You don’t have to be a consciousness snob to appreciate the power that the cortex brings to the process.

I don’t follow the logic that the existence of universal morality guarantees that a super intelligence will be nothing for humanity to fear (not that I am one of those AGI doomsday people). Let’s assume such a universal moral code exists, such that if an entity is intelligent enough, it will always conclude that this is the right way to behave. How does an entity reaching this conclusion guarantee that it will choose to follow said moral code?

Besides the knowledge of the universal moral code, this theory also requires the existence of some intelligence threshold beyond which an entity will choose to be moral (which may or may not be the same threshold at which an entity concludes the existence of the universal moral code). What level of intelligence is this threshold? Is it close to human-level intelligence? Or is it many orders of magnitude greater than human intelligence? If the latter, what guarantees that an interim super intelligence won’t behave immorally despite being aware of the universal moral code? (Or, for that matter, that they are far more intelligent than humans, but haven’t yet reached the level required to conclude the universal moral code at all?) Humanity could be wiped out eons before a descendant entity becomes intelligent enough to realize its ancestors’ mistake.

I lean toward this interpretation as well – I was just pointing out that there is another possible interpretation, which must be the conclusion if you are to believe consciousness is itself fabricating explanations. But I should point out that if you accept this other explanation (that the subject’s consciousness did not produce the fabrication, only observed it), then it means of course that the fabrication occurred not because the subject’s consciousness was unaware of the cause, but because some other (subconscious) network in the brain was unaware of the cause and fabricated the explanation.

Thus, the experiment essentially tells us nothing about whether or not consciousness is ever capable of initiating actions. It only tells us that the subject’s consciousness did not initiate these two actions (the involuntary action and communicating the fabricated explanation).

I do find it unlikely that a system would evolve that is only an observer. I personally think of consciousness and global attention as the same thing. I believe global attention is there to force multiple networks into alignment to bring their combined forces to bear on a single subject (versus all doing their own things independently). In this role, it is obviously much more capable than a mere observer, even if the actions being taken are not originating from it. It is dictating the global context that those actions are being generated in.

Anyway, sorry for another long rambling post. Obviously none of this affects either of our differing interpretations of what constitutes free will, and I still agree that your interpretation of it does not exist :slight_smile:

2 Likes

It feels like you all have gotten lost in the lawnmower thickets.

I think you need to separate consciousness from notions of free will. Consciousness in its most basic form is simply the universe’s ability to observe itself, without which no interactions could take place. It is a first-order state past existence that is fundamental to a functioning universe. The fact that your own consciousness is shaped like a human brain makes you feel like human-shaped consciousness is somehow special, but in reality, the beetle and the lawnmower both exist in a state of self-interaction that is just as unique. What’s special about you as a human is that you have what you think of as a human simulator in your head, and so you think of your consciousness as something malleable rather than static; so you think you are special in that sense… mice can pull the same trick.

That said, we now come to ideas about free will. Your creator (aka the 3 billion year old self modifying, competitive, evolving software program that is absolutely capable of modifying its environment… you know… DNA) has built you with guard rails, it has built you to not be able to do certain things with your mind because otherwise you might set yourself on fire, forget to eat or get eaten by a tiger. Regardless of who or what crafted the initial conditions of that DNA or the environment it has been contending with for 3 billion (with a B) years, it is the most proximal cause for your existence and has placed obvious restrictions on how you can organize your thoughts and on how you can see your world. It’s so stupid that we even have to attach feelings like love to our hearts so that we have present moment archetypes to dump our feelings into because our archetype space is limited and we have to keep our reality tethers strong so that we don’t launch into imaginary flights of fancy for the rest of our existences.

So, free will? Free will is that tiny gap that exists when you’re both starving and dehydrated and you’re making a decision about whether to drink or take a bite first. That is it. It is the tiny little exceptions that you get when the system you inhabit has calculated the future and decided that all of the futures are either equally good or equally crappy. Anything less sublime than that is just the machinery doing what the machinery does.

As for AGI and morality. Human morals about breaking arms only exist because humans have arms, feel pain and can simulate what it would be like to have their arms broken. Humans never think twice about what a certain scent might do to scramble an entire colony of ants and what that might do to the future of the ant colony. If you can’t figure out a way for the AGI to develop real human empathy (ideally before it ever knows its true nature) then you will almost certainly have a very bad day.

1 Like

[about 1400 words – i.e. a longer rambling post. if I could compress it into a tweet, I would have. of course, nothing – especially around this topic – says you have to post it or even read it]

Discussions of free will – and its likelihood and basis – have been going on for a while. Even before Palm – even before papyrus. What’s different – here and now – are things like deeply insightful physical and functional brain neuroanatomy, and electronic neural networks whose scale – based on outcomes – has reached/exceeded some level of parity with some less adventurous aspects of human thought.

First, some old physics. Aristotelian physics.

You may remember the concept – a 2x2 matrix categorically explaining matter as hot or cold, wet or dry. In one sense, the guy was right. There had to be some sort of simpler categorical rules underlying things. He also understood how to get more “likes” than Democritus – who, in turn, understood this better than Leucippus.

It was Leucippus who conceived of – below a certain size – indivisible atoms. Aristotle viewed atoms the way the inquisitors viewed elliptical orbits. Bad for the brand. But – ironically – it was the whole of Greek philosophy and protoscience that set the stage for much of what we know about our world. And how to go about knowing even more.

[Aristotle: “The whole is more than the sum of its parts.”]

Now – fast forward about 2200 years to the time of Kekulé.

Like Aristotle, a pre-eminent heuristic scientist of his day. But – unlike Aristotle – able to avail himself of all sorts of scientific insight, which he parlayed into some absolutely brilliant, somewhat empirical chemistry.

Chemistry before quantum mechanics was sort of like evolutionary biology before Crick. No one entirely sure what was really going on, but the patterns were useful and repeatable enough to make lots of money. The periodic table, like the structure of DNA, had been analyzed enough to see the discrete patterns precisely.

As Kekulé would retell his purported dream a quarter century later, the structure of benzene had eluded him. Till one night, when he dreamt of a snake – presumably made of carbon atoms – seizing its tail, and divined the alternating single/double bonds of the benzene ring.

[almost halfway there]

Reading this fascinating forum, and with a lifetime of debauched education and career in mercenary physics and EE, it struck me that you all could use a good “Kekulé 2.0” moment. So let me humbly offer a wormhole into a multiverse where August’s great-great-great-great-granddaughter [henceforth, G6] has just won the Breakthrough prize for her insight into neurocognition.

A double-major in neuroscience and organizational psychology at Smitanvard [in this multiverse, Harvard, MIT, and Stanford had merged decades earlier], G6 had long pondered what could be the x-ray crystallography of her day and discipline. Patch clamp more like a cloud chamber than crystallography. After successfully founding a company making handheld crowd-control devices for HR vice-presidents, and serving as its HR vice president for a time, she had a familial epiphany.

About twenty minutes into listening to an intern interviewee’s answer to figure out how many pennies would fill the Pentagon [correct answer: the question is dull – only thing worth filling the Pentagon with is hundred-dollar bills], she dozed momentarily – and saw it.

A seething and writhing human hierarchy – not of VPs and middle-managers and massive masses, but of BoD’s chaired by and comprised of chairpersons and members, most of whom were members or chairs of other boards. But not the conventional lateral structure of interlocking boards, or even the holding-company structure of Warren and Charlie – or their younger selves, Sergey and Larry.

It was an atomic BoD [not to be confused with the nuclear BoD’s of some PE cos and hedge-funds]. These BoD’s – like the twisty little passages in the Adventure game of a half-century ago – could be interconnected in any manner. Any individual could be on – or chair – any BoD.

The way any BoD could consider any input or action – like the brain would consider any input, or consequent cognitive/robotic response – would be to put something on a BoD’s agenda. The way a board would take any action – after conferring, during which it could have all sorts of re-entrant and renormalized conferring with BoD’s to which it was linked – would be to take a vote.

[almost there]

See, as informed and as expert as you all and the greater neuroscientific community are – I’m deeply and respectfully serious – it looks like there’re several fundamental physical and information-theoretic constructs that may apply, whose quantitative scale and qualitative structure you haven’t yet quite reached or anticipated.

  1. Is the static structure – like DNA – fully tractable, or does it need some sort of electronic resonance to simply be. Not even talking about learning. Some sort of zero-day cellular automata rules. Like solitons at higher scale, and gliders and glider guns at lower scale.
  2. Are the primitives – like FSMs or registers – completely separable, or is their dissolving into neighbors fundamental for function and efficiency. If this sounds too abstract, an example from complex SoC design. Humans need to think of a chip as having some sort of rectilinear floorplan, with rectangular – or barely more complex – regions allocated to different parts of the design. Or different design teams. Yet – when the chip is physically placed and routed, these blocks may dissolve into one another at the edges, where the gates tend to be less utilized. But this dissolution is a production artifact. The co-mingled blocks have no circuit-level awareness of one another, and may not interact till up to the system-bus level of integration.
  3. Our 3D/haptic and 2D/image based consciousnesses are incredibly and contextually configurable. Seated in a train, one can fixate on the signs and people on the incoming platform, signs and people within the car, or an immersive videogame on a smartphone. Another train-related example is to go ride a long tunneled escalator, and prompt your mind to think the tunnel is level, and the people are all standing at a 30 degree angle.
  4. In physical and biological stuff, much of the action is at phase transitions – with the holy grails being complex but reversible ones. They are energy efficient, and – like PCR – nature, given enough time and venture funding, happens onto some of the more profound ones. Incidentally, you all exist at the edge of three “metaphorical” phase states:
    • HW <> SW – with FPGA’s being like frost forming on a window-pane
    • Data<>instructions – and per JH thoughts/comments on sparse use of a large address space, double-linked-list constructs to spoof a CAM in conventional memory might be of some interest – especially with some HW access optimization (a toy sketch follows this list)
    • Startups <> mature cos – where the startups continually seek new connections, while mature companies focus on pruning old ones, as the essence of their day-day endeavors
  5. Just as in AI, if the precision of calculation outruns the fundamental outcome error band – even with lots more information – I’m more likely to act based on 1% of the information from each of three statistically separable sources than waiting on 80% of one, though dogmatists of any sect would likely flay me for having such view(s)
  6. Most blasphemous. No one would think of trying to do chemistry with hot/cold/wet/dry as the eigenvectors. Or math, where every number had one of two values: 0 or 1. So, the notion of things that are foundationally T or F – aside from restricting things to a small subset of symbolic logic – completely ignores the sort of fundamental role boundary conditions or cutoff frequencies play in more numerical computation. Is this why intelligence – artificial or otherwise – throws up its hands beyond a certain point and just goes with the crowd.
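
On that data<>instructions bullet, here is a hedged toy sketch of what spoofing a CAM in conventional memory could look like (entirely my own illustration; Python’s OrderedDict happens to be a hash map threaded with a doubly linked list in CPython, so it stands in for the double-linked-list construct):

```python
from collections import OrderedDict

class SoftCAM:
    """Hypothetical sketch: content-addressable lookup over a sparsely used
    large address space, emulated in conventional memory."""

    def __init__(self):
        self._by_content = OrderedDict()          # content -> sparse address

    def store(self, address, content):
        self._by_content[content] = address
        self._by_content.move_to_end(content)    # recently written entries at the tail

    def match(self, content):
        """A hardware CAM answers 'which address holds this pattern?' in one
        cycle; here the hash lookup plays that role."""
        return self._by_content.get(content)

cam = SoftCAM()
cam.store(0x7FFF_0001, "pattern-A")   # sparse use of a large address space
print(hex(cam.match("pattern-A")))    # -> 0x7fff0001
```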

Could go on for another several – but this is likely already wearisome, if not downright annoying.

Also, it’s not the crux of the message. You are close – and here’s why I know. As I’d watched the progressive visual complexity and nuance of videogames and multimedia entertainment, it wasn’t just a matter of more triangles/second. A long-term framework for building things up from a sparse variable 3D mesh (aka a primitive cerebellum) had to include things like hidden areas, color, followed by texture, and luster. The most profound qualitative breakthrough of Nvidia – accompanying the quantitative breakthrough of massive GPU parallelism – is the enablement of massively parallel ray-tracing.

See, while ray-tracing is a sub-cognitive graph-connected augmentation of a complex 3D physical model – Nvidia’s further acceleration of ray-based image-generation using AI is beginning to border on things like what [I think] AlphaGo does. Or that may be a distinction without a difference.

You are close. Best wishes for further – and ultimate, if there is such a thing – success. Godspeed.

2 Likes

I don’t have guarantees at this point. That’s why I remain skeptical. That’s why I keep talking to people, even though I’ve heard every argument dozens of times before. That’s why I don’t want to win as @Bitking put it. I want to be certain.

I don’t think it’s a question of intelligence, really. It’s a matter of deriving a formal proof, and then understanding this proof.

When Einstein and Bohr debated quantum mechanics, and disagreed, this was only because there were (and still are) several unknowns about the theory. And so they needed to interpret the theory given the unknowns. If this theory ever gets fully defined, and becomes testably correct, then neither Einstein nor Bohr would want to argue the side that turns out to be wrong. One of them probably would have to swallow his pride a bit, but ultimately they would rather agree to the true theory than their own flawed hypothesis.

There are two sides to this thought.

One is that systems do not evolve to be observers, but replicators. And to some extent consolidators (as in extending the duration of the system as long as possible to maximise replication). The observer part is secondary to this evolutionary effect (I don’t want to use the word goal).

But the other side is that I can’t find a possible reason why there is a need for consciousness. The whole of reality could work just as well without any consciousness. As a matter of fact, it is believed to be mostly that, with the few exceptions of humans and probably certain animals. Why is that? This is one of the few reasons why I still have doubts.

Also, we can do almost everything sleepwalking. Sleepwalkers can navigate their houses, even climb staircases. People have been in fights while sleepwalking. Some people reportedly have made coffee, or performed similar rather complex tasks, while sleepwalking. I’ve even had a conversation (answering questions) while being asleep. All these tasks are performed without the help of consciousness. This would argue for the uselessness of consciousness, except that we can do all these things better when we are conscious. Another major point of doubt for me.

I believe that you will eventually find that there are multiple consciousnesses inhabiting your head at any one point in time or another. Some of them are future casts, some are working on some math problem you gave up on hours ago, and some are living in your history, revisiting portions of your event stream so as to update it with new information. I think they all believe they are literally “you” and have no capacity to detect that they are not what you think of as “present moment you”. I’d guess that any contiguous activation pattern in your brain thinks of itself as a full and complete you, and that there are some neurons somewhere tasked with keeping track of which you is actually in present time and getting to drive the ship that is your body.

So what is consciousness? I would speculate that any form of structured energy able to interact with other energies has some form of consciousness, since interaction requires observation. “You” just happen to be a human-mind-shaped energetic activation at a moment in time, casting a “point of view” observational interaction into the world. There are probably no less than a few dozen of you running on your hardware at any point in time, with varying levels of fidelity.

The above being the case, I would speculate that a beetle (which is not running a beetle simulator like you are running a human simulator) has a much more direct conscious experience of its reality than you do. Its personal existence simply IS, whereas yours goes through a multitude of filters before hitting the you-simulator.

From the “Original Thought” thread:

@Oren “If all experience is unique then by definition all thought is original.”

Exactly!

If the fuel for your decision-making process is a response to your unique mix of learning and perceptions you may be using what some would consider a mechanistic process but it is a truly personal decision that is yours alone.

Yes, some parts of the process are hidden to you but that is no more important than the fact that your nerves that conduct your sensations are a hidden process. Your perception feeds your decision making process and you become aware of what was decided shortly after the decision is made. The bit about the actual decider not being part of the consciousness does not make this process any less a part of your brain processes. These decisions may drive your attention to acquire more information to make more decisions; it is a closed loop. The bit that does the actual deciding is part of this ongoing process.

Your reactions are just that; reactions. That is what the word “decision” implies - an input/output process; you perceive and combine your perceptions with your internal values to select some action. It would be bizarre to claim that “free will” requires that you randomly select some action with total disregard to what is going on around you.

The fact that you acquired the experience to form these internal values to make these decisions does not mean that you have some sort of “original sin” that means you are not the agent making the decisions. You have to acquire the value system to make these decisions. Do I wave my hand to turn on the water or move that lever? Do I walk over to the doorway and move that lever or do I say “Alexa lights on?” Or do I have to pull out my kit of flint, steel, and tinder? These decisions are not universal through all time and space - they must be acquired from experience in your environment.

Any meaningful definition of “free will” must admit a process that involves interaction with the environment.

Do go on!

Universal is such a big word.

Universal morality would imply that it is not relative.

Does this apply equally for both predator and prey?
All scales and sizes, and all points in human development?

Does this morality apply in all these cases?

  • The lion morality? (Um, eating your physiologically adapted food kills something!)
  • The sheep morality? (How does a smart sheep deal with predators?)
  • The human hunter-gatherer morality? (Sure - don’t eat and die - sort of like the lion but feels bad about it)
  • Pre-steam engine humans? (Remember - it was really steam power that freed the slaves!)
  • The soldier morality? (Tribe mind)
  • The ant morality? (Hive mind)
  • The white blood cell morality? (DNA mind)
  • Mold Morality. (Given enough time mold may become intelligent)
  1. Does this apply to alien cultures with vastly different life processes? Assume that they are sentient but not capable of perceiving humans as sentient; sort of like dolphins - but - alien. If so - does terraforming the earth to make it habitable to them cause you any moral concerns? Would engaging and killing these aliens in self-defense be wrong?

  2. Turning that around - If we discover some sort of life on Mars that depends on the current conditions does terraforming Mars count as an immoral act?

  3. Bringing it home: Once you make that call - how moral is it to drive hundreds of species to extinction every day? Is it immoral to not take meaningful action to stop this; does it count as an immoral act?

Gee - this stuff is hard. How will a super-sentient AI react to these questions? Would elimination of one species to save thousands or millions count as a morally correct action? Does this universal morality only count for humans? If not - I really think you have to be very worried about the super-sentient AI.

What’s more plausible in the realm of computation?

Simulating all states that an entity of complexity X could experience, or simulating entire universes to create pretty much the exact same effect with a whole lot of extraneous math that literally nobody would care about?

You are an AI crafted by a machine level called DNA to translate and interact with a machine level called physics. We know that any simulator is equivalent to any other simulator, so you are effectively just an AI trapped between two rule sets with DNA as your trainer.

As I said… in effect, any observer of finite state X is equivalent to all observers of finite state X and would relate to any universe related to X in the exact same way. None of this negates free will, because free will is about what you don’t know about yourself, not an external POV.

Um, no.

Online learning starts with a very general response set and goes from there. It does NOT have all rules encoded in DNA. DNA does provide sufficient storage space to learn the rule set that may be encountered in the real world.

It does not need all of the rules encoded in the base application. DNA doesn’t need to worry about blowing stacks when it does recursion. It just needs to create very specific guide rails to create the structure of self regulating emotive mechanisms. Human society does the rest.

We spent the past several hundred thousand years smashing the heads of anyone who couldn’t figure out how to get along with the rest of the group. That is a very powerful evolutionary signal. We’ve also done the same to anyone who could not be useful at hunting and killing other highly motivated humans. Also a very powerful evolutionary signal.

DNA comes with some fairly straightforward goals that result in pretty much all of the life we see. Don’t dry up and die, put food in the mouth hole and be sure to reproduce successfully before you get eaten. The more subtle details of how systems like that interact and compete with each other is the reason life ever built anything more complex than a bacterium.

The fact that our particular branch of the tree ended up with reality simulating super computers strapped to our heads doesn’t change the priorities of the system that built us. Developmental psychology is a real thing and is largely gene driven… (aka, evolutionary psychology is also a real thing…) which means that while we might be AGIs, we come with some serious boundaries and guide rails stacked on our brains that can prevent us from setting ourselves on fire without some serious hacking of our software.

My current best guess at the matter is that many of the archetypes we use to paint our simulations of reality are being managed by systems our conscious minds don’t get to play with. Otherwise you might just turn your entire world into cakes and wallow in pleasure while a tiger eats you.

I suppose the tl;dr of the above is that I don’t think you can solve the AGI problem with neural nets without also inviting Sigmund Freud and Carl Jung into the mix.

1 Like

There are many philosophies of morality. Some I know; some I understand; some I think I understand but probably don’t, and then no doubt several I have never heard of. But what is most annoying is that most have some good and some bad points, some conflicting propositions and some paradoxes I can’t seem to resolve. It’s like we have to choose what we like and make up our own mind. But that’s not good enough.

But apart from those well-established theories, I have a simpler, perhaps naive classification of morality. In three parts.

I. Theological morality:

A supernatural entity created everything, including us, enabled us with free choice and imposed a set of rules on us. Those who follow the rules get rewarded. Those who don’t get punished.

You probably know from my previous posts that I don’t make much of this theory. For obvious reasons, we’re not debating theology on this site.

II. What I call biological morality:

Sensors all over our body are mapped in regions of our neocortex. So if I hit my elbow, my brain will register that a specific location on my arm is in pain. But when I see someone else’s elbow being hurt, some neurons in the same region in my neocortex also trigger. These are the so-called mirror neurons, first discovered in macaque monkeys by G. Rizzolatti.

This spurred theories of the basis of empathy: when I see someone hurt himself, I feel some measure of discomfort. If I’m average (not to say ‘normal’), I don’t like that discomfort. I want this to stop. Or I want to prevent feeling this discomfort. I can console the hurt person until I see the pain subside. I can perhaps tend to the person’s wounds and imagine the pain will subside in the future. I can also remember or imagine the person getting hurt, and decide to change something so that no other person will get hurt in the same manner, thereby improving the environment not only for me, but for anyone who might get hurt.

This, if you consider it, is a selfish thing to do. I may be helping someone, but essentially I’m helping myself. The help I give to the other person may be much more beneficial to that other person than to myself. It may be a huge expenditure of my time, energy and resources, but still I do it, in some measure, to help myself. I’d even venture further: if I didn’t feel the slightest gratification for doing what I do, I would not do it (unless perhaps I were coerced).

And even if I don’t bring myself to help this person, I still feel this discomfort. Unless there’s something wrong with me. During World War I it was noticed that only a small percentage of soldiers actually aimed and fired at their opponents, even when they were stormed by attackers. This gave rise after the war to a better selection of professional infantry: soldiers who somewhat lacked empathy and were able to kill adversaries. (Rather important if the goal is to kill as many of the other side as possible. Not so much for people who are trained to push buttons that drop bombs on digital targets on a computer screen.)

People have been coming up with all sorts of theories for morality, because they feel they should; because of their empathy. And that is problematic. Populations have been excluded from morality rules because little or no empathy was felt for them. Some people feel very little to no empathy at all, and so feel entitlement. They don’t understand those rules. People can be manipulated and desensitized to warp their sense of empathy. Empathy is almost as bad a guide for morality as theology is.

III. Rational morality:

For this we need two premises:

  • There is no free will
  • There is consciousness

If there is no free will, there cannot be ownership of anything. No resource of any kind. No measure of matter or energy. No claim to a certain amount of space nor a period of time in which anything can exist.

Anything a consciousness experiences, is always the result of a chance occurrence.

A consciousness, however, can only exist through a combination of resources (matter, energy, time and space), and for this consciousness to continue to exist, this minimum amount of resources is required in a particular pattern.

Here is the one thing I have trouble proving: I would suggest that consciousness, because it exists, and because of its chance occurrence, holds value. And because of this value, it is worth preserving. (I know this is shaky).

Now, if consciousness is worth preserving, and nothing can have free will, then no consciousness can be of greater value than another, and so all consciousnesses are equally worth preserving. (This is why I think anything that understands this should strive to preserve all consciousnesses, or admit that it is in conflict with logic.)

Also, if no consciousness can have free will, and cannot claim ownership of anything, then all available resources are to be equally divided among all existing consciousnesses, with, at the minimum, the required resources for any consciousness to continue to exist.

Now, not all resources are equally accessible, and so if the expenditure of some resources is required to access other resources, then that expenditure needs to be taken into account in the distribution of all resources.

I claim that all moral propositions can, in principle, be expressed as a function of equal distribution of resources. It may be practically very difficult, and sometimes impossible, to fully calculate this exact function. But to be moral, I think, is to reach as near as possible to this function.
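
As a toy illustration of how access expenditure could enter such a function (my own naive formalization, not a claim about the real calculation): divide the pool so that every consciousness ends up with an equal net share after its access costs are covered.

```python
def moral_allocation(total, access_costs):
    """Equal net shares: each party's allocation covers its access cost,
    and what remains after all costs is divided equally."""
    distributable = total - sum(access_costs.values())
    share = distributable / len(access_costs)
    return {who: share + cost for who, cost in access_costs.items()}

# b must spend 10 units just to reach its resources, so it is allocated more,
# yet both end up with the same net share of 45.
print(moral_allocation(100.0, {"a": 0.0, "b": 10.0}))   # {'a': 45.0, 'b': 55.0}
```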

1 Like

You are correct - your logic is hard to follow.
An AI does have a certain resource - itself and self-preservation. That does place differential value on resources used to maintain its integrity.

As far as “mirror neurons” - this is a widely misunderstood interpretation of how semantic meaning is distributed across neural maps.
https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf
When you read this you should see that semantic meaning is distributed to the areas that are related to the motor-sensory-somato loops that are involved in running your own body. This is the physical substrate for your grounding of the semantic structure. I’m sure that this is why you see activity when perceiving these things in others.

Empathy is grounded in group selection through evolution as a social animal and low level embedded coding in the Limbic system. (Instincts)

I’m not seeing any queries as to how we may interact with an ant colony in a morally clear fashion.

If consciousness is a first-order state of being (unconscious universes being nothing more than fancy math equations), then the ability for consciousnesses to keep themselves separate and unique from one another is a moral cause, and the ability to choose obsolescence into the nothingness is a first-order right for any set of beings that faces the possibility of an immortal curiosity.

Humans anthropomorphize other animals and, by doing so, engage our group selection instincts. If we can also trigger the infant selection instincts (big eyes, small size, cute baby-like features), so much the better.

Ants strike out here so - no interest in talking with ants; morally or otherwise.

It’s a shame really - cephalopods may turn out to be right up there in alien intelligence but alas - not cute and cuddly either. And not social creatures. They are really clever and use tools and all. Since they are not social creatures, they never learned to talk to each other. We could computerize the color signalling and learn to talk to a species that does not “talk” to communicate, but for that whole “not a social creature” thing. You have to wonder what moral code an octopus might have. I mean - not being a social creature and all - empathy might not be much of a thing for them.

Or ants, for that matter. Being wildly self-sacrificing for the group, they might see things very differently from what we think is rational.

1 Like