But the decision is the internal calculation that arises from your ongoing perception. There is some delay in the perception of the internal decision mechanism, but it is a closed-loop process. For that matter, it has been convincingly demonstrated that your perception of reality lags behind reality.
Does that change anything?


But you’re talking about two kinds of perception, don’t you think?

The decision is the resulting calculation of a combination between external information (one kind of perception) and internal stored information (the structure of the brain). Consciousness has no influence on this.

That perception is your consciousness. The perception of having come to a decision. I accept that it is a result of the meat machine, as you put it. But it comes after the fact, and therefore has no influence on the decision-making. That’s why, according to the definition I follow, free will cannot exist.


It is wildly presumptuous of you to claim what properties consciousness may or may not have when you don’t know how it works. And if you do know how it works please share this enlightenment as there are a goodly number of neuroscientists that would love to have this explained to them in detail.


If I am reading your argument correctly - any process that reacts to perceptions with some delay can’t have free will.

Or putting it more broadly - I have defined free will in such a way that I will always win in the real world.
Am I missing something here?


Two cases and I see them as related:

  1. I type the numbers 2 and 2 into certain cells of a spreadsheet. I tell you nothing more about the state of this spreadsheet. A third cell changes to the number 4.
  2. A person is standing on a corner and sees a certain sign change from a pictograph of a red person standing to a pictograph of a white person walking. The person starts to walk.

Do both cases show free will?
Both are combining perception and programming to take some action.
If only one - why are they different?
If consciousness is your “special sauce” what does that add to the equation? Is there some dividing line where an agent gains this special status?
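For what it’s worth, both cases can be written as the same kind of deterministic mapping from perception plus stored rules to action. A toy sketch in Python (all function names and the hidden formulas are invented for illustration):

```python
# Case 1: a "spreadsheet" cell that reacts to its inputs.
# The reaction is fully determined by the inputs and the stored formula.
def spreadsheet_cell(a: int, b: int) -> int:
    """Hidden formula: the third cell always shows the sum of the inputs."""
    return a + b

# Case 2 in the same shape: perception (the signal) combined with
# programming (the rule) produces the action.
def pedestrian(signal: str) -> str:
    return "walk" if signal == "white walking figure" else "wait"

print(spreadsheet_cell(2, 2))              # 4
print(pedestrian("white walking figure"))  # walk
```

Structurally the two cases are identical: input plus stored rule yields output. Whatever separates them has to come from something other than that structure.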

Is there any definition of free will that could actually exist in the real world?


You did not answer my question. What would an example of free will look like?

Free will of the noncompatibilist kind doesn’t make sense; it is incoherent. It requires a nonmaterial, indivisible agent performing actions divorced from previous physical causes, but indivisible mechanisms for mind do not seem sensible. One might imagine that such a free-will agent, if you turned back time, could behave differently at a particular moment, divorced from its history.

Now, the compatibilist view of free will is that it doesn’t matter that there are mechanisms generating behavior, and that if the world is truly deterministic it can all be predicted - that if you rewound time, you could never do anything different from what you originally did. But there are many who would say that this is redefining free will, and that compatibilist free will is no free will at all.

In principle, if a godlike being designed a person such that the person loved them and couldn’t have done otherwise, a compatibilist would have to say that, even though it was the godlike being that decided it, the person also decided it - despite doing otherwise being impossible and the real choice being the godlike being’s.

A noncompatibilist would say that if someone decided what you were going to say or do before you did - which could be the case if there’s determinism, and otherwise the decider is random dice - then you don’t truly have free will unless there’s a nondivisible agent outside physical reality, independent of it all, making the choices.

If things are fully determined, then in principle someone could set the initial circumstances such that you did, said, or thought something in particular and could not do otherwise. They were the ones making the choices for you; your internal mechanics were nothing more than theater, the outcome of every decision preset in advance.

A nondetermined world would only mean that, rather than someone potentially deciding for you, random dice invoke true random events, which are what cause the outcome to deviate from a preset path and are the true cause of your actions. True random events are events without mechanisms behind them, fundamentally unexplainable, because if there were mechanisms that explained and predicted them, they would be pseudorandom. True randomness is another thing that doesn’t seem sensible or coherent; it seems more reasonable that all randomness is pseudorandomness.
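The pseudorandomness point can be made concrete: a pseudorandom generator has a mechanism behind it (a seed and an update rule), so anyone who knows the mechanism can replay its “random” choices exactly. A minimal sketch using Python’s standard library:

```python
import random

# Two generators built on the same mechanism with the same hidden state (seed).
rng1 = random.Random(42)
rng2 = random.Random(42)

seq1 = [rng1.randint(0, 9) for _ in range(10)]
seq2 = [rng2.randint(0, 9) for _ in range(10)]

# The "randomness" is fully reproducible: same mechanism, same history,
# same sequence of outcomes. That is pseudorandomness, not true randomness.
print(seq1 == seq2)   # True
```

True randomness, by the definition above, would be precisely what no such replay could ever capture - which is why it resists any mechanistic description.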


Was there an answer in there and I missed it?


I said free will the romantic ideal, the original definition, is incoherent. It is nonsensical. There cannot be a free will action because free will does not make sense as originally defined. Those who embrace the original definition believe the compatibilist definition of free will is redefining the term and not truly free will.


Is this one of those “how many angels can fit on the head of a pin” things, where you can make up the rules as you go along? Or could God make a rock so big he can’t lift it?

Making up a concept that has no meaning in the real world fits in this general category: useless mental masturbation. Defining it in such a way as to be a fantasy concept is just as useless.

What do I make of the words Free Will: the power of acting without the constraint of necessity or fate; the ability to act at one’s own discretion. Useful synonyms: self-determination, freedom of choice, autonomy, liberty, independence.

I don’t get to suppose that I could construct an experiment where the situation is repeated and ask would it be the same the next time. There is here and now. This is the sum of all things that came before and each moment is truly unique.

Each person is composed of matter (yes, I am a strict materialist) and has to follow the rules of physics. I don’t accept that some magic sky daddy set all the subatomic particles on a certain path and let it grind forward to today - all on a set course. All agents have some personal history with its attendant experience. All were born with whatever shuffle of genes imbues us with certain innate drives, instincts and construction. All are embedded in a certain place in history that has never happened before and will never happen again.

If you feel the need to invoke the details of our construction: we have a mental blackboard holding whatever our senses, internal and external, project to various parts of our cortex at the same time. Our histories are available to us in some form as memory. We have subcortical structures that direct us to actively build a perception, which in turn feeds those same subcortical structures with perception data; combined with learning from our histories, this is used to select courses of action.

The fact that the selection process is somewhat mechanical does not change the fact that we do make choices. You may not like that this is all based on messy wetware, but nonetheless - we do make choices that are based on individual experience. Where the concept of free will begins to enter the conversation is that, from the point of view of an outside observer, it is not certain that we can predict what another agent will do.

To the degree that a dice toss or coin flip is random, the combination of all the factors leading up to this moment in an agent’s existence renders the selection of action somewhat unpredictable. Someone may do what you expect, or they may surprise you.

They have free will.

When you take my case of a simpler machine, like a computer loaded with certain software - to the degree that it is programmed correctly, the actions that it will take are entirely predictable.

It has no free will.

That does not preclude the possibility that the computer could be loaded with a more complex program and data set to the point where it becomes as unpredictable as is normally considered the realm of an agent with free will. I suspect that there is a continuum of free will and not a dividing sharp line.

Assume for the sake of argument that a factory creates a line of identical robots as household servants. On the day they are made they are truly incapable of being surprising, as they are identical and will react the same way when presented with the same situation. From the day they are minted and distributed, each will start to drift apart in experience. Perhaps the factory will allow the users to select “Genuine People Personalities” as part of the initialization process. (Pro tip - don’t pick neurotic paranoid.) They will experience different environments and different people and situations. If the design is such that the robot learns and adapts, it will come to develop a personality and over a period may become surprising.
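The drift in that scenario can be sketched in a few lines: two deterministic agents share an identical program and diverge only through their histories (the robot names, the toy “learning” rule, and the action list are all invented for illustration):

```python
# Two "robots" minted identical: same program, same (empty) memory.
def new_robot():
    return {"memory": []}

def experience(robot, event: str):
    """Deterministic learning: each experience is folded into memory."""
    robot["memory"].append(event)

def act(robot, situation: str) -> str:
    """Deterministic choice: the same situation filtered through memory."""
    options = ["tidy the kitchen", "alphabetize the books", "make tea"]
    return options[hash((situation, tuple(robot["memory"]))) % len(options)]

a, b = new_robot(), new_robot()
# On minting day they are interchangeable: same inputs, same outputs.
print(act(a, "owner walks in") == act(b, "owner walks in"))   # True

experience(a, "owner plays jazz")
experience(b, "owner plays metal")
# Their histories have now diverged; the same situation may get different
# answers, even though each robot remains perfectly deterministic.
print(a["memory"] != b["memory"])                              # True
```

Each robot stays a fully mechanical system throughout; only the accumulated experience makes one surprising relative to the other.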


But a computer with an adequate program can simulate the human brain, and the information design of the entire brain is less than 50MB, so presumably it’d be quite a small and simple program. Even if true randomness existed, it is only a matter of feeding the output of physical random number generators into the running program.

Yet you say this computer would lack free will, but some would say the brain is nothing more than a type of biological computer. Even if not predictable, would that change anything?


Such a program would be continuously reprogramming itself based on its own unique experiences. Thus it would become a unique individual making decisions that cannot be predicted by other separate individuals. I would argue that gives it free will.


But this unpredictability would be the result of complexity and interaction with the environment, not unpredictability in principle: if the program were put within a simulation with enough resources, the next steps in the computation could be replicated, studied, and understood. The resulting free will would be of the compatibilist kind, which satisfies some but not all people. If I’m not mistaken, Sam Harris, for example, would say that is not true free will.

Only if physical random number generators were used, and assuming they actually produce true randomness, could its future be uncertain even in principle.

EDIT: I will add the reason why some free-will proponents oppose calling a mechanistic explanation of choice selection free will: they think people should be morally responsible for their actions, usually in virtue of religious notions of sin and punishment, and they don’t see how a person is morally responsible if a mechanism produced the choice.

If someone develops a brain tumor and it causes them to become a serial killer, some would say the person is not responsible. If what is called sin is the result of, say, the circuitry behind moral behavior being less capable of moral behavior on average, or the circuitry behind self-control being less capable of self-control, they would tend to blame the components rather than the individual. Not doing so would be like blaming a short person for not being tall enough.


The environment IS the random number generator. You are constantly surrounded by a stream of random events culminating in your experienced environment. This external stream of random fluctuation constantly jostles your internal environment producing the necessary internal randomness.

It is no accident that one of the harsher punishments is solitary confinement in an unstimulating environment. Some call this inhumane torture that should be banned.

As far as 50 MB to model the human brain? This seems woefully short of the actual requirements. Best guesses are that the brain is composed of ~100 maps. Picking some numbers out of a hat: each map is approximated with 1000 x 1000 mini-columns, with each mini-column composed of 32 cell structures. Each cell structure is one of 4 types (L2/3, L4, L5, L6) with 10 dendrites each. If we give each dendrite 2048 potential connection sites and populate that sparsely (5%), that is ~103 connections per dendrite.
The cells themselves will have some local activation value, predictive memory, and some state variables.
The thalamus structure is also multi-layered and has a matching topology to the cortex. A rough guess is that it adds two more cell clusters to each mini-column structure. Rather than trying to model those separately I will add two additional cell structures to each mini-column.
Each cell type in the mini-column has its related inhibitory interneurons. Again - rather than doing separate bookkeeping I will add more cells to the count in each mini-column - so 4 more cell types.
Each map’s mini-column targets will project to at least one other map, and each connection has to be specified in some way. It is VERY likely that there is more than one map-to-map projection for each map, but this back-of-the-napkin calculation is just to get a rough magnitude anyway. The counter-flowing projections in the thalamus also have to be specified - I will just double the number of cortical projections to get a ballpark number.
Assuming that we use relatively efficient relative coding of positions, I will allocate 16 bits for each point where an address is specified - two bytes.
Tallying the bytes for each cell structure:
100 maps, each composed of
  1,000,000 columns, each composed of
    forward projections (2 bytes per column)
    backward projections (2 bytes per column)
    38 mini-columns, each composed of
      10 cell types (L2/3, L4, L5, L6, T1, T2, INH-L2/3, INH-L4, INH-L5, INH-L6), each with
        local variables (10 bytes per cell body)
        10 dendrites (103 connections x 2 bytes per dendrite)

100 maps x 1,000,000 columns x (2 forward + 2 backward + 38 mini-columns x 10 cell types x (10 local values + 10 dendrites x 103 connections x 2 bytes)) = 78,660,400,000,000 bytes

This should be within an order of magnitude of a real working number.

Roughly 80 terabytes is somewhat larger than 50 MB.
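As a sanity check, the tally can be rerun as a short script using the counts assumed above:

```python
# Back-of-the-napkin storage estimate for the brain model sketched above.
MAPS = 100
COLUMNS_PER_MAP = 1_000_000        # a 1000 x 1000 grid per map
MINI_COLS_PER_COLUMN = 38          # cortical + thalamic + inhibitory additions
CELL_TYPES_PER_MINI_COL = 10       # L2/3, L4, L5, L6, T1, T2, INH-L2/3..INH-L6
LOCAL_BYTES_PER_CELL = 10          # activation, predictive memory, state
DENDRITES_PER_CELL = 10
CONNECTIONS_PER_DENDRITE = 103     # ~5% of 2048 potential sites
BYTES_PER_CONNECTION = 2           # 16-bit relative address
PROJECTION_BYTES_PER_COLUMN = 4    # 2 forward + 2 backward

bytes_per_cell = (LOCAL_BYTES_PER_CELL
                  + DENDRITES_PER_CELL * CONNECTIONS_PER_DENDRITE * BYTES_PER_CONNECTION)
bytes_per_column = (PROJECTION_BYTES_PER_COLUMN
                    + MINI_COLS_PER_COLUMN * CELL_TYPES_PER_MINI_COL * bytes_per_cell)
total_bytes = MAPS * COLUMNS_PER_MAP * bytes_per_column

print(total_bytes)        # 78660400000000
print(total_bytes / 1e12) # ~78.7 TB
```

Of course, every constant here is a number picked out of a hat, so the result is only good to an order of magnitude - which is all the argument needs.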

This does give me some idea what kind of tech will be needed to make an AI.
It seems about three more Moore’s curve iterations out - about 6 to 8 years.

1 TB per 1U box and a good-sized dual 80" rack of servers.
And a very beefy 3 phase power supply?
It is conceivable we can work out the underlying theory by then.


I think of free will as the ability to choose which of a massive set of choices and their consequences you would like to experience.

Even more so if you can choose to forget your own initial decision dataset. (Something no one seems to bring up when contemplating a God being who might have chosen to have their eyes closed when they set you in motion and erased records of your coming into existence)

Free will comes down to the ability to simulate the world ahead and play the choice game as it relates to our ability to predict what might happen next.

It doesn’t need to be any more than that. If at some point we crack open what it means to be a conscious being and it turns out to be some crazy other-dimensional community with alien influences, then so be it; we’ll all be surprised and embarrassed about it, but it doesn’t matter. We are choice machines, and we like to think that we are closed systems, but as soon as you realize that we can learn, it becomes obvious that we’re not closed. That we keep coming up with interesting software and loading it into each other’s minds…

We have free will, it’s just not what most of us imagine free will to be. It’s a chaotic self modifying program with forgetfulness.


The number of connections may be exaggerated if there’s truth to the paper “The bounded brain: toward quantitative neuroanatomy.”

As for the 50MB that is the size of the human genome, iirc, once the redundancy is taken out and it is compressed. An AGI of similar complexity may occupy a few tens of terabytes, but the information in all its connections would emerge from the interactions with the environment. The design of the algorithms followed by neurons, describing the different types of cells, and their wiring rules are within the 50MB of genetic code.

Regarding random number generators: the unpredictability would give you compatibilist free will but not the romantic free will, and some would say compatibilist free will is not real free will.


The very definition of what “you” are changes from moment to moment, so unless you are simply claiming that you are the universe and you are deterministic, then you have a free will. You are some clump of perception and intent that is floating within some ill defined deterministic universe, but by the very nature of how you are self contained within the context of this universe, you are making choices that influence your experience of the world and do so without complete foreknowledge of the outcome of your decisions.

Again, it is completely within the realm of possibilities for a god or a simulator to choose to forget initial trajectories or to choose to fail to calculate them. The fact that something is calculable doesn’t mean that anyone bothers to calculate it. Would existence be fun and exciting if we did?


Here’s the thing. A few more neurons here, a few neurons missing there, and that may alter the chain of internal causality; it could be the difference between a criminal or immoral act being committed or not. The romantic notion of free will concerns the concept of moral responsibility - moral culpability.

Yet it is said that a person with a high level of education can experience severe Alzheimer’s neurodegeneration of the brain and still remain almost asymptomatic. Yet all that lost tissue was at one time or another affecting choice, or initiating the chain of causality that led to choice. We could view it as the person changing, it having once been part of the individual, but some would say the fact that it can be lost while the individual remains means it was not truly the individual.

Regardless, we can blame a system for an action, but holding it morally culpable for an action when it was an inevitability of the workings of the system doesn’t seem sensible. That’s the thing: moral culpability, and punishment not as a deterrent but as deserved suffering for wrongdoing by free choice.


Morality is an evolutionary construct calculated to promote fairness and honest trading between hominids. Nothing more.

You are punishing a system (“the criminal”) for culpability to make yourself feel better, not because it serves the universe in some quantifiable way. You don’t burn a witch or execute a thief because it somehow makes the world a better place. You do it because it feels good to do so until it stops feeling good and then you stop taking the kids to the hanging in the town square because TV beats hangings.

If you could erase the capability to murder from the murderer of some family’s child, that’s great for the murderer, but you’d still need to toss them into a volcano to prevent the family from having to stone that individual to death and thereby relinquish the national/governmental monopoly of tossing people into volcanos.


Wow - is this a loaded statement. The instinctive empathy and social relations are built in by evolution.
No argument there.
Pairing that with a dismissive “nothing more” is where it gets weird.

Are we discussing Plato’s ideal “Determinism” (like his ideal chair) as if that is a real thing?

We are real meat machines, with real built-in instincts coupled with emotions. From the point of birth, the environment must take over. For example - it is necessary to provide human contact during a critical period to activate the empathy instinct. There are children who were abandoned to an orphanage and never got this, and they are very broken people without empathy.

The installation of moral values that engage and shape how our instincts are expressed is part of the programming that is usually accomplished by “parenting.” Society provides certain sticks and carrots to supplement and shape this training.

There is nothing metaphysical about any of this.

Good and evil are relative and somewhat flexible across situations and societies. Thou shall not kill - unless it’s someone from a different tribe that we have issues with.

There are broken people - I define that as people whose instincts are outside of what can be trained to acceptable behavior. This is a combination of factors - weak instinct or insufficient training has about the same outcome.

The social construct of punishment serves as part of this training signal. The social construct of imprisonment or execution serves to remove incompatible people from the social environment and allow the rest of society to function. Sometimes these things are combined.

This goes well beyond simple “fairness and honest trading between hominids” and is an expression of our nature as social animals. Casually dismissing the guiding nature of societal “stick and carrots” misses the useful purpose they serve in shaping behavior. It’s not about “feeling good” - it’s about preventing rogue humans from disrupting the tribe.

Some societies attempt to repair defective training (rehabilitation) and the success or failure of that effort goes back to my assertion that some training has to be applied during a critical period and if you miss that window it is very hard to be successful.

Considering all of this - the “stick and carrot” are part of instilling compatible behavior, and the judgment of free will is the exercise of this training in whatever situation arises. Selecting actions that are compatible with the current set of accepted behaviors is considered “good,” and selecting ones that are not is considered “bad.”

Example: I think that by any measure Mark Twain was a good person. His use of certain words was “good” in the time he lived and “bad” by today’s standards. Part of the issue with this is that his works are used as habituation with examples of good behavior and the fact that it contains exemplars of bad behavior is problematic.

Being a meat machine that is programmed by outside forces does not change the fact that this agent is free to apply this training in what may be surprising ways. Trying to freight this with more is pointless.

A comment I posted earlier seems appropriate here:


@OS_C was specifically looking at culpability as it relates to determinism… as in, does a person who was set in motion by biology, a deity, or a strong whack to the head “deserve” to be punished, as opposed to getting punished to fix their behavior…

My response to that is that we punish people because it makes us feel better to do so when we’re feeling bad. The culpability/determinism dichotomy does not actually matter because we’re not issuing punishments based on the needs of the punished.