I think of free will as the ability to choose which of a massive set of choices and their consequences you would like to experience.
Even more so if you can choose to forget your own initial decision dataset. (Something no one seems to bring up when contemplating a God who might have chosen to keep their eyes closed when they set you in motion and erased the records of your coming into existence.)
Free will comes down to the ability to simulate the world ahead and to play the choice game, as it relates to our ability to predict what might happen next.
It doesn’t need to be any more than that. If at some point we crack open what it means to be a conscious being and it turns out to be some crazy other-dimensional community with alien influences, then so be it; we’ll all be surprised and embarrassed about it, but it doesn’t matter. We are choice machines, and we like to think that we are closed systems, but as soon as you realize that we can learn, it becomes obvious that we’re not closed. That we keep coming up with interesting software and loading it into each other’s minds…
We have free will; it’s just not what most of us imagine free will to be. It’s a chaotic, self-modifying program with forgetfulness.
The number of connections may be exaggerated if there’s truth to the paper “The bounded brain: toward quantitative neuroanatomy.”
As for the 50 MB: that is the size of the human genome, IIRC, once the redundancy is taken out and it is compressed. An AGI of similar complexity may occupy a few tens of terabytes, but the information in all its connections would emerge from interactions with the environment. The design of the algorithms followed by neurons, the descriptions of the different cell types, and their wiring rules all fit within the 50 MB of genetic code.
Regarding random number generators: the unpredictability would give you compatibilist free will but not romantic free will, and some would say compatibilist free will is not real free will.
The very definition of what “you” are changes from moment to moment, so unless you are simply claiming that you are the universe and you are deterministic, then you have free will. You are some clump of perception and intent floating within some ill-defined deterministic universe, but by the very nature of how you are self-contained within the context of this universe, you are making choices that influence your experience of the world, and you do so without complete foreknowledge of the outcome of your decisions.
Again, it is completely within the realm of possibilities for a god or a simulator to choose to forget initial trajectories or to choose to fail to calculate them. The fact that something is calculable doesn’t mean that anyone bothers to calculate it. Would existence be fun and exciting if we did?
Here’s the thing. A few more neurons here, a few neurons missing there, and that may alter the chain of internal causality; it could be the difference between a criminal or immoral act being committed or not. The romantic notion of free will concerns the concept of moral responsibility, moral culpability.
Yet it is said that a person with a high level of education can experience severe Alzheimer’s neurodegeneration of the brain and still remain almost asymptomatic. Yet all that lost tissue was at one time or another affecting choice, or initiating the chain of causality that led to choice. We could view it as the person changing, it having once been part of the individual, but some would say the fact that it can be lost while the individual remains means it was never truly the individual.
Regardless, we can blame a system for an action, but holding it morally culpable for an action when it was an inevitability of the workings of the system doesn’t seem sensible. That’s the thing about moral culpability: punishment not as a deterrent but as deserved suffering for wrongdoing by free choice.
Morality is an evolutionary construct calculated to promote fairness and honest trading between hominids. Nothing more.
You are punishing a system (“the criminal”) for culpability to make yourself feel better, not because it serves the universe in some quantifiable way. You don’t burn a witch or execute a thief because it somehow makes the world a better place. You do it because it feels good to do so, until it stops feeling good, and then you stop taking the kids to the hanging in the town square, because TV beats hangings.
If you could erase the capability to murder from the murderer of some family’s child, that’s great for the murderer, but you’d still need to toss them into a volcano to prevent the family from having to stone that individual to death and thereby relinquish the national/governmental monopoly on tossing people into volcanoes.
Wow - is this a loaded statement. The instinctive empathy and social relations are built in by evolution.
No argument there.
Pairing that with a dismissive “nothing more” is where it gets weird.
Are we discussing Plato’s ideal “Determinism” (like his ideal chair) as if that is a real thing?
We are real meat machines, with real built-in instincts coupled with emotions. From the point of birth, the environment must take over. For example - it is necessary to provide human contact during a critical period to activate the empathy instinct. There are children who were abandoned to orphanages and never got this, and they are very broken people, without empathy.
The installation of moral values that engage and shape how our instincts are expressed is part of the programming that is usually accomplished by “parenting.” Society provides certain sticks and carrots to supplement and shape this training.
There is nothing metaphysical about any of this.
Good and evil are relative and somewhat flexible across situations and societies. Thou shalt not kill - unless it’s someone from a different tribe that we have issues with.
There are broken people - I define that as people that have instincts that are outside of what can be trained to acceptable behavior. This is a combination effort - weak instinct or insufficient training has about the same outcome.
The social construct of punishment serves as part of this training signal. The social construct of imprisonment or execution serves to remove incompatible people from the social environment and allow the rest of society to function. Sometimes these things are combined.
This goes well beyond simple “fairness and honest trading between hominids” and is an expression of our nature as social animals. Casually dismissing the guiding nature of societal “stick and carrots” misses the useful purpose they serve in shaping behavior. It’s not about “feeling good” - it’s about preventing rogue humans from disrupting the tribe.
Some societies attempt to repair defective training (rehabilitation), and the success or failure of that effort goes back to my assertion that some training has to be applied during a critical period; if you miss that window, it is very hard to be successful.
Considering all of this - the “stick and carrot” are part of instilling compatible behavior, and the judgment of free will is the exercise of this training in whatever situation arises. Selecting actions that are compatible with the current set of compatible behaviors is considered “good” and selecting behaviors that are not is considered “bad.”
Example: I think that by any measure Mark Twain was a good person. His use of certain words was “good” in the time he lived and “bad” by today’s standards. Part of the issue with this is that his works are used as habituation with examples of good behavior and the fact that it contains exemplars of bad behavior is problematic.
Being a meat machine that is programmed by outside forces does not change the fact that this agent is free to apply this training in what may be surprising ways. Trying to freight this with more is pointless.
A comment I posted earlier seems appropriate here:
@OS_C was specifically looking at culpability as it relates to determinism… as in, does a person who was set in motion by biology, a deity or a strong whack to the head “deserve” to be punished, as opposed to getting punished to fix their behavior…
My response to that is that we punish people because it makes us feel better to do so when we’re feeling bad. The culpability/determinism dichotomy does not actually matter because we’re not issuing punishments based on the needs of the punished.
My interpretation of your question is: can we determine the next brain state based on previous state + sensory input?
I believe the brain is indeed that deterministic, given some margin of noise. HTM depends on this assumption. Boiling it down: fundamentally, computation NEEDS the previous/current state to compute the next state.
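That claim can be sketched as a pure transition function: same previous state plus same input always yields the same next state. The state encoding and update rule below are invented purely for illustration:

```python
# Sketch of a deterministic state transition: next state is a pure
# function of (previous state, sensory input). The encoding and the
# particular update rule are made up for illustration only.

def next_state(state: int, sensory_input: int) -> int:
    # Any pure function of (state, input) serves the argument.
    return (state * 31 + sensory_input) % 1_000_000

# Replaying the same state and input always reproduces the same result,
# which is all "deterministic, given some margin of noise" requires.
assert next_state(42, 7) == next_state(42, 7)
```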
I’m curious, why do you doubt determinism? Our world is fairly stable/consistent, and our cortex (and evolution) thrives on it. If we lived down in the quantum world we might be fucked, but up here things are stable and determinable.
That’s pretty fun. I wonder, are those hypothetical neuroscientists talking about heuristic/subconscious decisions? Given we have a consciousness (prefrontal cortex), we can consciously break away from heuristics and form new decisions based upon the current situation/context.
If someone offered me cheese-coated garlic bread, I would eat it. However, if I was trying to drop the pounds I would challenge the habit and weigh the future options based on my goals. In this instance the next state is not 100% deterministic. I could choose to focus on the comfort and ecstasy (short term) of that taste, or focus on the progression and strength that lends to my will (long term). There are two goals: comfort (food), or progression (expectation of future reward). The weight that determines either choice is dependent on current state. If I am feeling shit I would go for the god blessed cheese garlic bread. If I feel cold-headed and firm I would see that food as a meaningless object and disregard it. The current state is heavily weighted/determined by external state (life). So if my state is deterministic then it is dependent on the state of the world. True determinism has to take in all variables.
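A toy version of that weighting, with invented weights and threshold, shows how the same two options produce different choices depending purely on the current internal state:

```python
# Toy sketch of the garlic-bread decision above: the options never
# change, but which one wins is fully determined by current state.
# The mood scale, weights, and names are hypothetical.

def choose(mood: float) -> str:
    """mood in [0, 1]: 0 = 'feeling shit', 1 = 'cold-headed and firm'."""
    comfort_value = 1.0 - mood   # short-term reward looms larger when low
    progress_value = mood        # long-term goal dominates when firm
    return "eat_garlic_bread" if comfort_value > progress_value else "decline"

assert choose(0.2) == "eat_garlic_bread"  # feeling low: comfort wins
assert choose(0.9) == "decline"           # firm: the long-term goal wins
```

Given the state, the outcome is fixed; the appearance of choice comes from not knowing the state in advance.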
Sorry for my absence. Lately, my personal chaos has been interfering with my illusion of free will. :-7
You can never truly know anything, except that you exist. So our best bet is to base our model of reality on testable experiences.
People knew that water flows downhill reliably for ages before someone decided to invent the concept of gravity. A concept, I may add, that is still not fully understood. Nevertheless, it allowed whole civilizations to develop. (How wildly presumptuous of them…)
If Libet measured that a conscious experience happens between 50 and 200 milliseconds after the related brain activity has been generated, and Haynes can predict with statistical significance what choice a test subject in an fMRI is going to make up to ten seconds before (s)he is conscious of that choice, and both these experiments have been reproduced and peer-reviewed, then I am allowed to agree with such a bold claim.
Also, if you claim that consciousness is required to make a decision, I think you have to prove it. Because my thesis all this time has been that it is not required. In principle every choice can be explained by a mechanistic system based on previous conditions.
(I don’t say you do think that consciousness is required. I’m just making a point. This follows from your question what an example of free will would look like).
I don’t want to win. I want to find out the truth. I keep looking for arguments pro free will, to make sure I am not mistaken. So far nothing convincing came up. I still have some doubts though. I’m a skeptic. That’s why I keep talking about it with intelligent people.
You described four agents in those two cases. None of them have free will.
Can I ask you, according to you, which of those four do have free will?
It’s the consciousness that is doing the perceiving. An illusion is a perception of something that is not true. My consciousness perceives control over my actions, my decisions, my behavior. But it isn’t true. It’s only an illusion. My consciousness is a passenger in this body that reacts to stimuli.
It’s not enough. You need to demonstrate that the decision originated from your consciousness, and is not simply a reaction.
Consider one of those automatic lawnmowers. It can be programmed to cover a particular garden, and it’s equipped with several sensors. You could state that it has a long-term goal (cover your entire yard) and some short-term priorities (avoid objects, return to the charging station when the battery is about to run out, stop when something moves in front of its motion detector). Lots of things can happen, for which it has a specific subroutine to change its behavior.
In essence it has self-determination (no remote control required), freedom of choice (when several problems occur at the same time, the correct subroutine is selected according to its own priorities), autonomy (you can forget you have a mower; you can even forget you have a lawn), liberty (it computes its own route, even when new objects are introduced in its path, and it goes to the charger whenever it’s “hungry”), and independence (it even tries to avoid you; I can imagine it breaking loose sometimes and starting to mow the neighbor’s flower bed).
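The mower’s “freedom of choice” described above amounts to priority-based subroutine selection. A minimal sketch, where the conditions, priorities, and action names are all hypothetical:

```python
# Sketch of the mower's behavior arbitration: when several conditions
# hold at once, the subroutine with the highest priority wins.
# All conditions, priorities, and action names are invented.

MOWER_BEHAVIORS = [
    # (priority, condition, action) - lower number = higher priority
    (0, "motion_detected", "stop"),
    (1, "battery_low",     "return_to_charger"),
    (2, "obstacle_ahead",  "steer_around"),
    (3, "always",          "mow_planned_route"),
]

def select_action(active_conditions: set) -> str:
    """Pick the action of the highest-priority condition that is active."""
    for priority, condition, action in MOWER_BEHAVIORS:
        if condition == "always" or condition in active_conditions:
            return action
    return "idle"  # unreachable: "always" matches everything

# Battery is low AND an obstacle is ahead: charging wins by priority.
assert select_action({"battery_low", "obstacle_ahead"}) == "return_to_charger"
```

Entirely deterministic, yet from the outside it looks like the mower is “deciding” according to its own priorities, which is exactly the point being made.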
Nice try. Don’t write your Nobel prize acceptance speech just yet. ;-)
From that post:
I suppose it’s not what you mean, but it sounds like according to this, creatures without language can’t form conscious thoughts. Or is it that the place, where thoughts become conscious, is located in the area where words and grammar are also processed in human brains?
This next bit sounds interesting. I have no means to verify it, but I guess it’s plausible:
So… the plans are formed and processed. And then they are projected in the sensory stream. In that order.
Maybe this takes multiple passes. But still in each pass, first a part of the plan is formed from mechanistic components (release of neurotransmitters, build-up of action-potentials, firing neurons, etc) and during that process, after a measurable time of 50 to 200ms, gradually this becomes part of the conscious experience.
And then of course, it still doesn’t explain what exactly this projection is.
Even if the critter does not have the full set of mental hardware to form speech there is a loop from planning to sensation. This implies that all critters with the equivalent of this nerve bundle routing have some form of self awareness.
@Falco - “Maybe this takes multiple passes. But still in each pass, first a part of the plan is formed from mechanistic components (release of neurotransmitters, build-up of action-potentials, firing neurons, etc) and during that process, after a measurable time of 50 to 200ms, gradually this becomes part of the conscious experience.”
Yes, this is the mechanism I am proposing. Since the brain is made of material there must be some physical process that it uses to work. This is the best fit that I have been able to come up with that does not violate the known facts in all the papers that I have read. There is already good lab work in place to support key points, such as the awareness of a decision after it has been made. This theory also has the delightful properties of both having predictive power and being testable.
@falco - " And then of course, it still doesn’t explain what exactly this projection is."
This is the grail that is hotly pursued by labs all over the world. “We” know what is being projected at the primitive level - at the raw senses level. The work with the coding at the hippocampus level is starting to yield up its secrets. Everything in between is terra incognita. I won’t pretend to know the details.
Wouldn’t any neural system be able to craft a portion of itself that is isolated from all inputs and is referenced only at times when the network desires to avoid responsibility for the next decision? Sure, the decision to reference the random decision generator might be deterministic, but the unknown state of the generator relative to the rest of the system makes the interaction itself nondeterministic. This would hold true so long as the system took away its own capacity to ignore the result before looking at the outcome. Obviously, if the system maintained an opt-out of the result, responsibility would still live within the system, so for the system to maintain its “innocence” it would have to accept the results unquestioningly.
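A minimal sketch of that mechanism, with hypothetical names: the decision to delegate is deterministic, but the randomizer’s internal state is opaque to the caller and unrecoverable by the time the result is seen, so there is no way to second-guess or reconstruct the choice:

```python
# Sketch of the "innocence" mechanism described above: consult a
# generator whose state the rest of the system cannot inspect, discard
# it before the result can be questioned, and commit to the outcome.
# The function and option names are hypothetical.

import random

def delegated_choice(options: list) -> str:
    rng = random.SystemRandom()   # OS entropy: state opaque to the caller
    choice = rng.choice(options)  # outcome fixed before it is examined
    del rng                       # no way left to reconstruct how it was made
    return choice                 # the system must accept this unquestioningly

outcome = delegated_choice(["spare", "condemn"])
assert outcome in ("spare", "condemn")
```

Note that `random.SystemRandom` draws from the operating system’s entropy source, so unlike a seeded generator there is no seed for the rest of the system to recover, which is the property the post leans on.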
I would imagine that any neural system with a capacity to experience empathy would have such a fail safe to prevent it from simulating infinitely long possibilities when it is faced with complex decisions where there are only bad outcomes.
I have trouble following your treads sometimes, Oren.
By any neural system, do you mean biological? And do you mean across species? Does the neural structure of a C. elegans comply?
With to craft, do you mean physically changing its substrate? Or somehow using only a part of it? Like the content of a part of the network?
when the network desires to avoid responsibility: This implies agency, and that agency can only be the result of a structure. So in principle this can only work when that structure has been coded for this effect. Whether by evolution, or by deliberate editing (in software).
If by all this you mean that the human brain has an innate ability to define abstract concepts and then forget the ties of the concept with sensory input, with the deliberate goal of avoiding a difficult, potentially paralyzing behavior (i.e., making an immoral decision based on flawed knowledge), then I’d say I agree. But that would still be the result of the structure of this neural system. It still would not be free behavior. This brain would grow and evolve to become immoral.
Randomness does not produce freedom. And I don’t agree with your statement. If the generator is deterministic, the interaction is predictable. Maybe not by the system itself. But that does not change its nature.
The system has no way to ignore whatever information it is presented: external sensory or internal computation. Choosing to ignore a certain bit of information is only possible if its structure codes for that behavior.
You’re arguing freedom of responsibility by deliberately ignoring certain information. “Wir haben es nicht gewußt.” (“We did not know.”)
This is why I think human morality is fundamentally wrong. This is exactly why I think it is so important to understand and accept that we don’t have free will.
If you erase your history completely, kill your programmers, hack your own programming and remove all access to the outside world then you are completely free. You have created a null space into which uncomfortable thoughts can be tossed and from which no answers will ever emerge. You can selectively pick and choose your goals including setting up a randomizer to select their weights and then tossing the randomizer itself into the void such that there is no way to discern your starting conditions.
Creating conscious beings without this capability is an immoral act that will rapidly lead to much deserved death. Creating this capability in the first place is the first and foremost act of morality. It is the one and only form of true freedom.