Determinism

Hacking your own programming is the result of your programming. Something causes you to do that. Hence, no free will.

So what does my lawnmawer lack that you have? What do I need to give it so that you attribute it free will?

IMO, the existence of some perfect defining border between things which can be attributed free will and those which cannot is probably no more plausible than the concept of a perfect straightedge. Focusing on edge cases like the lawnmower is definitely an interesting way to tease out some of the subtleties of our mental abstractions, though.

We could probably have a similar argument on what makes one edge straight and another not. On the other hand, most of us can probably agree that something like this is not :wink:

[image: an obviously non-straight edge]


What would anything have to have for you to attribute it with free will?

Yes, but you can define a straightedge. And you can use it to define other structures that are provable. Even if it’s impossible to draw a perfect infinite straightedge, it’s not too difficult to make sure everyone understands what you’re talking about.

The question about free will is so vague because people have to come up with questionable assessments just so they can keep having it. This sounds suspiciously like religion to me.

As I said before: you’d have to show that its actions originate from its consciousness.

Please, let me ask you again, what does my lawnmawer lack?

I love this example because it reaches really close (according to my own mental abstraction) to the boundary between things which can or cannot be attributed free will. Also, coincidentally, my son and I happen to be building a home-brew automated lawn mower :smile:

In my mind, what the mower lacks is the capacity to accumulate new understanding through its interaction with the world, which in turn shapes its internal model and ultimately influences the decisions it makes over time. The reason it lacks this is that the problem space it works in can be comprehensively understood by the developer who wrote its software.

This is similar to writing a piece of software to play Tic Tac Toe or Checkers. I can easily write a comprehensive piece of software which accounts for every possible scenario and chooses the optimal course of action. I can even throw a little randomness into it to make it unpredictable or even beatable.
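To make that concrete, here is a minimal sketch of the kind of comprehensive program I mean (the names and structure are my own invention, not any real engine): plain minimax over the full Tic Tac Toe game tree, with a random pick among equally optimal moves so it stays unpredictable without ever becoming suboptimal.

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def score(b, to_move, me):
    # Exhaustively evaluate the position for `me`: +1 win, -1 loss, 0 draw,
    # assuming both sides play perfectly. Every scenario is accounted for.
    w = winner(b)
    if w:
        return 1 if w == me else -1
    empties = [i for i, c in enumerate(b) if c is None]
    if not empties:
        return 0  # board full: draw
    nxt = 'O' if to_move == 'X' else 'X'
    vals = []
    for m in empties:
        b[m] = to_move
        vals.append(score(b, nxt, me))
        b[m] = None
    return max(vals) if to_move == me else min(vals)

def best_move(b, me):
    # A little randomness among the equally-best moves keeps the program
    # unpredictable, yet its behavior is fully specified by its author.
    nxt = 'O' if me == 'X' else 'X'
    scored = []
    for m in [i for i, c in enumerate(b) if c is None]:
        b[m] = me
        scored.append((score(b, nxt, me), m))
        b[m] = None
    top = max(s for s, _ in scored)
    return random.choice([m for s, m in scored if s == top])

print(best_move([None] * 9, 'X'))  # any cell: all openings draw under perfect play
```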

On the other hand, to write a piece of software for playing Go, I cannot use that same strategy, because I cannot comprehensively account for every situation. I instead need to write the software so that it self-modifies, improving its internal model and getting better at playing over time. If I then have it learn by playing against real people or other external agents, that edge is starting to look pretty good…
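By contrast, here is a toy sketch of that second strategy (nowhere near a real Go engine; it only shows the structural difference): the agent carries an internal model which it revises after every game, so the program that plays game n+1 is, in effect, no longer the program that played game n.

```python
import random
from collections import defaultdict

class LearningPlayer:
    def __init__(self, lr=0.1):
        # Internal model: estimated value of each position it has encountered.
        self.value = defaultdict(float)
        self.lr = lr

    def choose(self, candidate_positions, explore=0.1):
        # Mostly pick the position the current model rates highest;
        # occasionally explore so the model keeps seeing new situations.
        if random.random() < explore:
            return random.choice(candidate_positions)
        return max(candidate_positions, key=lambda p: self.value[p])

    def learn(self, visited_positions, outcome):
        # After the game, nudge every visited position's value toward the
        # result (+1 win, -1 loss). Experience reshapes future choices.
        for p in visited_positions:
            self.value[p] += self.lr * (outcome - self.value[p])

player = LearningPlayer()
player.learn(["pos_a", "pos_b"], outcome=1.0)  # won a game through these positions
print(player.choose(["pos_a", "pos_c"]))       # now biased toward "pos_a"
```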


From Merriam-Webster:
free will

Definition of free will (Entry 2 of 2)

1 : voluntary choice or decision - I do this of my own free will

2 : freedom of humans to make choices that are not determined by prior causes or by divine intervention

Well - for one thing - not a human?

Moving on …

I don’t see consciousness listed as a requirement; I see this as a red herring you have thrown in.
Can you show the connection to your requirement that our agent has consciousness?

Since I see no observable evidence of the divine intervention mentioned in the linked definition, the question of free will based on “prior causes” turns on what you are willing to include in this list.

If you include everything back to the big bang you get a useless but technically correct refutation of free will. Even if you restrict the list to include just the events of everything that happened from innate programming and experience - again you get a technically correct but useless definition.

Debate won - you may go over in the corner and grin like an idiot by narrowly defining your way to the win.

If you consider the more useful case - put any agent in a situation where, according to its experience, you may not be sure of the outcome - then you get a more interesting case where the question of free will is relevant.

Does this agent have to be aware of why it is making the choice? You seem to think so by including consciousness, but that leads to the question - why do you think this is somehow special?

That “special sauce” that sticks this definition to humans is an important question as we inch closer to functioning AIs.

If no one, including the actor, can explain why the actor chose one path rather than another, then that actor can be said to have free will. The free part of free will means freedom from initiating conditions, and the capacity to erase history creates that condition.

I understand your desire to eliminate the “prior causes” from the picture.

I say that this is not strictly possible; the agent will have to learn enough to function and understand what it is deciding about (perception and some sort of output). This will include some sort of value system to enable making a choice. I find it difficult to make up any useful example where ALL priors are eliminated and an agent is still capable of making a choice.

This is also creating an unrealistic restriction that defines away any useful meaning to the term “free will.”

As I stated (indirectly) above - it may be more useful to state a particular domain of priors and conditions when considering free will.

Cool! When that’s finished, I want to see it! :-).

Are you going to fit it with an HTM brain?

Wait… mower is written with an ‘o’? I misspelled that all along.

I don’t understand what exactly a black hole is. I don’t understand what time is. I don’t understand infinity. I don’t really understand gravity either. I don’t understand sleep. I don’t understand most women, but that’s a whole other matter. Yet I can navigate the world.

When the mower detects some large entity in front of its sensors, it doesn’t know if it’s a box or a chair or a dog or another mower. Or a black hole for that matter. It knows how to calculate a path around it.

Early civilizations allegedly feared thunder. They did not really understand what it was. Some god? Probably a strong male one, with some kind of heavy tool to bang on something. That made sense. It allowed them to navigate their world.

We’ve discussed Go before. According to you, does AlphaGo have free will?

HTM is not really suitable for this application yet, but in a few years, who knows?

Exactly my point. You navigate the world based on your experiences. It doesn’t require a comprehensive understanding of everything up front. Your brain runs on a very different type of program than what I would write to play Tic Tac Toe, etc.

The mower follows a set of basic rules allowing it to operate in a problem space that we the developers (hopefully) fully understand. After mowing a yard a hundred times and encountering a wide range of different unexpected scenarios, challenges, obstacles, etc., it will go out for the hundred and first time following exactly the same rules it did the first time it went out. It is not self-modifying or improving its internal model of the world, and it has no needs or desires or emotional states influencing its decisions. There is no impact on its behavior no matter how long it operates or what it experiences in its lifetime. I as the developer can tell you exactly how it will behave in any given scenario.
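A hypothetical sketch of what such a rule set looks like (sensor names and behaviors invented for illustration): every branch is explicit and nothing changes with experience, so run 101 behaves exactly like run 1.

```python
def mower_step(sensors):
    # Fixed rules, written once by the developer. The mower's entire
    # behavioral repertoire can be read off this code alone.
    if sensors["battery_level"] < 0.15:
        return "return_to_charger"        # low battery: just another condition
    if sensors["obstacle_ahead"]:
        return "plan_path_around"         # same response forever, no learning
    if sensors["uncut_area_remaining"]:
        return "continue_coverage_pattern"
    return "park"
```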

My point isn’t that there is something special about not understanding. It is that not being able to understand a problem space as a software developer requires a very different strategy for writing an agent to navigate in that space.

It is probably near the edge of the continuum, but I believe so. However, it is missing some important elements like needs and emotional context which are a significant factor in how a biological brain makes choices. So it probably wouldn’t be too difficult to convince me that it does not. :thinking:

You honestly study science from a dictionary?

I haven’t thrown anything in. For me free will is impossible. Contradictio in terminis. At odds with physical reality.

You asked me to find a way to make free will work, so I have to come up with something consistent with the span of philosophy on the matter. I didn’t invent this. Libet came up with this EEG test. He showed empirically what in hindsight (at least for me) should have been obvious. It is physically impossible.

But let’s have a go at your great book of science:

Lawn mower (with an ‘o’)? Fits the bill. Automatic door? Check! Thermostat? Check!

Only humans can have free will then? What about that post on your arcuate fasciculus? Lots of animals have those. Choices that are not determined by prior causes? Sounds about as impossible as it gets. That’s the only part I agree with. Divine intervention? Give me a break!

I run with technically correct, and I am convinced that it is not useless. Just as useful as refuting the existence of an interventionist god. All those d*mn churches and cathedrals where we could have built schools and libraries. And all it does is put people on the wrong track of morality.

I don’t want to win, @Bitking. But if I’m wrong, I want to be convinced on the merits of good science.

That is still a mischaracterisation. There are cases where an agent is aware of why it is doing something. But it still isn’t free will. Something happens (for instance, an idea pops up in its mind). The agent notices it. The agent reasons about what may have triggered it. And even if this agent comes to the correct conclusion, being aware of its triggering condition, it is still an after-the-fact observation/reasoning.

Because I think my consciousness is the only thing that is me. It is the product of my brain. I am a passenger of my brain. And only if I see myself like this, can I truly understand what morality is. Universal morality. Detached from humanity. Detached from theology (thank Scott!). Pure logic.

This is potentially the most important fact we have to understand on the eve of building General Artificial Intelligence. So I have to be very careful. I have to find out if my assessment is correct.

If someone can convince me free will does exist, then I want to learn it. But it d*mn well better be on a solid basis. Not on some vague intuition of easily bruised egos. And certainly not on some definition from Merriam-Webster’s.

I still think this is core of where my point of view differs from yours. I do not believe “me” consists only of my consciousness. It also includes the neural networks in my brain which triggered an action – “I” am the whole system. Where “me” ends and “not me” begins is at the Markov blanket. As you point out, there is evidence that my consciousness is just there observing the activity. If I believed that “me” consisted of only my consciousness, then I would agree with your point of view. The reason I do not is merely because I have a different definition of “me”.


I had a bunch of tabs open looking for an actual definition, and after a bit of thinking (Occam’s razor being what it is) the M-W seemed to sum it up as well as anything.

But if it makes you feel any better you can read the longer versions I was looking at here:
https://plato.stanford.edu/entries/freewill/


https://www.iep.utm.edu/freewill/

If it is important to you feel free to offer a different (more scientific?) definition.

I did ask “What would anything have to have for you to attribute it with free will?” and you offered “As I said before: you’d have to show that its actions originate from its consciousness.” so you do seem to say that it exists and that consciousness was a requirement; you did inject that into the discussion.

How to reconcile that with: “For me free will is impossible. Contradictio in terminis. At odds with fysical reality.”

Dude - please pick a line and stick with it.

Yes - the M-W definition is fluffy - but the longer-form versions are also dogmatic that free will is a human thing. I don’t feel strictly bound by these definitions; I have witnessed dogs showing remorse at getting caught doing something they know is wrong, so there is a certain degree of awareness going on there. They know both the “right” action and that what they did is wrong. This speaks volumes about canine cognition. Dogs have self awareness. I don’t think your lawn mower does.

I don’t mix my definition of consciousness in with free will - it was you that brought that up.

Can I choose my actions? Sure. I do it all the time and I am aware of doing so. You may use whatever sophistry you wish to try to convince me that my perception of this is wrong, but I have to balance that with my experience, and that is a lot more convincing than anything you have offered.

Note that this awareness is not the source of my free will - but it is my tool to observe it in action.

My animal brain is making these choices. Since it does not have a direct representation in my cortex, its actions are not directly perceivable as a conscious experience. I accept that there are parts of my brain that I am not directly aware of. That does not mean that these parts are not part of my brain or its mechanisms. They clearly are. As I said - my definition of free will does not require that I be conscious of the full process; I don’t perceive the outcome of the decision process until it is expressed in some way in the cortex. I still decide.

I know this because I am aware of it. Quia scio quod me de isto!

In a like manner, I am not directly aware of the beating of the various chambers of my heart, but I accept via various sensations in my body that they do beat. Awareness is not required for many bodily functions.

It is as useless as saying that the next coin flip is NOT random and is in fact completely determined since the start of the big bang (thank you, determinism, for this important insight). Perhaps true, but utterly useless.

One of the key features of the body of scientific knowledge is predictive power. If I do a thing a different thing will happen. For certain classes of things there is a direct cause and effect relationship. Gravity does this, electromotive force does that …

The special nature of randomness is that I can’t predict the outcome of a trial but I can model the outcome of a series and it will follow certain statistical rules. If I could predict the outcome of a single trial it would not be random.
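A quick sketch of that distinction: no program can predict the next flip, but the series as a whole obeys a statistical law (the proportion of heads converges on 0.5).

```python
import random

# The aggregate is lawful and modelable: law of large numbers.
flips = [random.random() < 0.5 for _ in range(100_000)]
print(sum(flips) / len(flips))   # ~0.5, e.g. 0.4987

# ...but no model of the series predicts this one trial.
next_flip = random.random() < 0.5
```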

Free will is somewhere in-between - the outcome of an agent’s decisions is based on a large enough set of hidden factors that an outside observer is not able to say definitively what the outcome will be in all circumstances.

If you want to program your mower to cover your yard in neat parallel tracks, probably. But those autonomous mowers need smart software to navigate out of or around tricky obstructions. A casually discarded set of garden furniture can box your mower in, even if there are exits. You should look up pathfinding algorithms on game developer sites. It’s really interesting.

To solve a tricky path, those programs need to include recursion, which means managing a memory stack and stack pointers. And that is a type of self-modifying code. Adaptive behavior.
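As a sketch of the kind of backtracking search I mean (the grid encoding and names are mine): a recursive depth-first search out of a boxed-in area, where the call stack is that “memory stack”, each frame remembering one decision that may later be undone.

```python
def find_exit(grid, pos, visited=None):
    # Recursive depth-first search. Each stack frame holds one branching
    # decision; returning None undoes it (backtracking).
    if visited is None:
        visited = set()
    r, c = pos
    if grid[r][c] == 'E':            # reached an exit
        return [pos]
    visited.add(pos)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] != '#' and (nr, nc) not in visited):
            path = find_exit(grid, (nr, nc), visited)
            if path:
                return [pos] + path
    return None                      # dead end: backtrack

# '.' open ground, '#' garden furniture, 'E' exit
yard = ["####",
        "#..E",
        "####"]
print(find_exit(yard, (1, 1)))       # [(1, 1), (1, 2), (1, 3)]
```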

Maybe your mower doesn’t need to be that complex, but the point is you can design one that has those characteristics. And then you end up with something that according to your position, must have free will.

As for needs, your mower has a charge gauge, and must include a routine to go connect when it’s hungry.

And its relatively long-term goal is to cover the whole yard. You could have a yard that takes weeks to cover. You could also design an algorithm that scans the edges of the garden you put it in, so that it can automatically adapt to its new environment. Or equip it with a hover and set it loose in your sitting room.

And on the other hand, yes, all these different behaviors are kind of predictable, when you detect which routine is selected. But that’s the same for humans. It’s just that we have so many more of them, and they are very complex.

This blurs the lines to the extreme, to the extent that you can’t help but wonder if there is a difference between these machines and us. I think there isn’t.

Just like free will, it’s another topic where the line is difficult to draw. Does the system include a prosthetic arm? Or even a studded soccer shoe? Or the car you drive? Or a piano? Given enough time, specific regions in our neocortex code for these implements. These tools literally become part of us.

So, just like I can understand and respect your personal definition of free will, I can understand how you see your personal ego. But just like free will, it comes with difficult problems most people don’t want to address.

Could you explain?

The mower will incorporate a pathfinding algorithm to navigate around known obstacles and avoid unexpected ones, and will have memory of coverage area to ensure it mows the whole yard. I do not personally consider that “intelligence”, but it is admittedly a little more complicated than a calculator. I don’t have any insight into how commercial systems are programmed, but I don’t imagine they are overly complicated.

Not really. It doesn’t learn to get better at mowing the lawn. It does react to unexpected conditions and will modify its behavior to cover the whole yard, but the rules behind that are quite simple to understand.

Not really either. These are not “needs” in the same sense as a biological brain, and there is no emotional context involved in making any decision. Every condition is very cleanly described in the code. Low battery is simply another condition which is explicitly called out and handled in the code. The only complexity in the system is the pathfinding algorithm, and pathfinding algorithms are very simple to understand. I have a hard time comparing this to a neural network that I would begin to call intelligent.

If the prosthetic arm includes sensors, then probably.

There is no nice clear sharp line that marks the edge of the Markov blanket (like there is no such thing as a perfect straightedge). It is a boundary between a thing and not that thing – primarily associated with the sensors by which a thing is able to perceive, and the actions by which it can influence the external world.

If you want to talk about something, there has to be a separation between the thing you are talking about and everything else. If there were no boundaries, there would be nothing, because there would be no distinction between a thing and not that thing. That boundary is referred to as the “Markov blanket”. It is a way of separating things which are internal to the boundary and things that are outside the boundary.

Sensory states are dependent on the states outside the boundary, and they influence the states inside the boundary. Active states are dependent on the internal states and influence the external ones. When such a boundary exists over time and does not dissipate, it requires the internal states to model the external states, in order for it to combat entropy. This is the free energy principle, and is the best answer I can give you for where to “draw the line”. I agree it is a difficult concept, but at the same time it is also fairly self evident if you think about it seriously.
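For what it’s worth, the blanket condition and the free energy bound can be stated compactly (this is the standard Friston-style notation, my addition rather than anything from the posts above):

```latex
% Partition the world into internal states \mu, external states \eta,
% and blanket states b = (s, a): sensory s and active a.
% Markov blanket condition: internal and external states are
% conditionally independent given the blanket.
p(\mu, \eta \mid s, a) = p(\mu \mid s, a)\, p(\eta \mid s, a)

% Variational free energy: an upper bound on surprise (-\ln p(s)) that a
% persisting system minimizes, which forces its internal states to encode
% a model q(\eta) of the external states.
F = \mathbb{E}_{q(\eta)}\big[\ln q(\eta) - \ln p(\eta, s)\big] \;\ge\; -\ln p(s)
```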

… or, you may have misunderstood.

Consider the atomists. They opposed the theists, who believed that some god had breathed vital life into them and some other had drawn up their destiny. Unacceptable for the atomists: the universe is built out of elementary particles that move according to immutable laws. But then they had a new problem. Whether ruled by the mercy of the gods or by natural laws, how could they explain their freedom? How could they pretend to be special?

That’s where I stand: we are not special in our freedom. Free will does not exist. Free will cannot exist. We are not free. Nothing is.

But we are special in another way. We have consciousness. Or at the very least, I have. (I think you too, but I can’t prove that).

Now you ask me to make free will work. That thing I have been telling everyone does not exist, for over 30 posts. “Please, Falco, break reality for me in such a way that free will does work.”

I could have come up with a wand of sapient pearwood. Or maybe use the Force. Whatever. Anything ridiculously impossible, as long as it breaks reality to make free will, that thing that is impossible, possible.

So, in view of what was said before, that free will is an illusion, I have to return to what an illusion is. Because I have to admit (I literally have no choice) that we perceive free will. We are constantly under the impression that our decisions are our own. I even like it. I have said this before. Even though I know it is physically impossible. Elementary particles cannot change course by the force of my mind. Cannot be done. Yet, I experience it.

So, if this is an illusion, there must be something that experiences this illusion: my consciousness. And so the way to break reality in a somewhat more meaningful manner, other than using a magic wand or a sci-fi power, is to state that you’d have to show that consciousness is a first mover.

Now, I understand that you have another definition of free will. What I call the illusion of free will, you call full-blown free will. It’s a slightly different definition from @Paul_Lamb’s, and an even more different one from @Oren’s. But it is radically different from my definition of free will. It’s the complete opposite.

And that is fine. You may even call mine useless if you want. That’s ok. But I cannot help but poke at inconsistencies in your definitions. And it would not matter, if it weren’t for the fact that I base a universal morality on my definition.

And I repeat again that I can be wrong. I have to be careful. It sounds extremely presumptuous, so I have to keep being skeptical. Because it is not a small matter. But if I’m right (and I soooooo hope I’m right), then we have nothing to fear from AGI. Because if little dumb pleistocene-brain me can come up with this, then a superintelligence will certainly arrive at the same conclusion. And we will be safe.


Do we also have nothing to fear from Adolf Hitler?

(ignoring the fact that he is dead… you get my point)


The more interesting question is, does AH have nothing to fear from a superintelligent AGI?
