Determinism

The point of my long, rambling post was that punishment serves as a deterrent - part of society setting limits and communicating them as part of the training process.

It is prescriptive for society as a whole. The punished don’t get much from this other than serving as a bad example. It may make them want to avoid getting caught next time.

My interpretation of your question is: can we determine the next brain state based on previous state + sensory input?

I believe the brain is indeed that deterministic, given some margin of noise. HTM depends on this assumption. Boiled down, computation fundamentally NEEDS the previous/current state to compute the next state.
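To make that concrete, here is a minimal sketch in Python. The update rule and noise model are invented for illustration (not a claim about how cortex computes); the point is only that the next state is a pure function of the previous state, the input, and a small noise term.

```python
import random

# Deterministic state transition: next_state = f(previous state, input, noise).
# `update` is an arbitrary placeholder function, not a model of the brain.
def update(state: float, sensory_input: float, noise: float) -> float:
    return 0.9 * state + sensory_input + noise

state = 0.0
for step in range(5):
    noise = random.gauss(0.0, 0.01)  # the "margin of noise"
    state = update(state, sensory_input=1.0, noise=noise)
    print(step, round(state, 3))
```

Given the same state, input, and noise sample, the output is always the same - that is all determinism requires.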

I’m curious: why do you doubt determinism? Our world is fairly stable/consistent, and our cortex (and evolution) thrives on it. If we lived down in the quantum world we might be fucked, but up here things are stable and determinable.

That’s pretty fun. I wonder, are those hypothetical neuroscientists talking about heuristic/subconscious decisions? Given that we have a consciousness (prefrontal cortex), we can consciously break away from heuristics and form new decisions based upon the current situation/context.

If someone offered me cheese-coated garlic bread, I would eat it. However, if I were trying to drop the pounds, I would challenge the habit and weigh the options against my goals. In this instance the next state is not 100% deterministic. I could choose to focus on the comfort and ecstasy (short term) of that taste, or on the progression and strength that it lends my will (long term). There are two goals: comfort (food) or progression (expectation of future reward). The weight that determines either choice depends on the current state. If I am feeling shit I would go for the god-blessed cheese garlic bread. If I feel cool-headed and firm I would see that food as a meaningless object and disregard it. The current state is heavily weighted/determined by external state (life). So if my state is deterministic, then it is dependent on the state of the world. True determinism has to take in all variables.
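A toy version of that weighting, with names and numbers entirely made up: once the full state (mood, commitment) is specified, the “choice” falls out deterministically.

```python
# Hypothetical decision rule: the weights behind "comfort" vs "progression"
# are fixed by the current internal state, which is itself shaped by the world.
def choose(mood: float, goal_commitment: float) -> str:
    # mood in [0, 1]: 0 = feeling shit, 1 = cool-headed and firm
    comfort_value = 1.0 - mood                 # low mood pushes toward the cheese
    progression_value = mood * goal_commitment
    return "garlic bread" if comfort_value > progression_value else "decline"

print(choose(mood=0.2, goal_commitment=0.9))  # -> garlic bread
print(choose(mood=0.9, goal_commitment=0.9))  # -> decline
```

Run it twice with the same inputs and you get the same answer every time; the apparent freedom is just ignorance of the inputs.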

Sorry for my absence. Lately, my personal chaos has been interfering with my illusion of free will. :-7

You can never truly know anything, except that you exist. So our best bet is to base our model of reality on testable experiences.

People knew that water flows downhill reliably for ages before someone decided to invent the concept of gravity - a concept, I may add, that is still not fully understood. Nevertheless it allowed whole civilizations to develop. (How wildly presumptuous of them…)

If Libet measured that a conscious experience happens between 50 and 200 milliseconds after the related brain activity has been generated, and Haynes can predict with statistical significance what choice a test subject in an fMRI is going to make up to ten seconds before (s)he is conscious of that choice, and both these experiments have been reproduced and peer-reviewed, then I am allowed to agree with such a bold claim.

Also, if you claim that consciousness is required to make a decision, I think you have to prove it. Because my thesis all this time has been that it is not required. In principle, every choice can be explained by a mechanistic system based on previous conditions.

(I don’t say you think that consciousness is required; I’m just making a point. This follows from your question about what an example of free will would look like.)

Indeed. It is not truly free.

I don’t want to win. I want to find out the truth. I keep looking for arguments in favor of free will, to make sure I am not mistaken. So far nothing convincing has come up. I still have some doubts though. I’m a skeptic. That’s why I keep talking about it with intelligent people.

You described four agents in those two cases. None of them have free will.

Can I ask which of those four, according to you, does have free will?

It’s the consciousness that is doing the perceiving. An illusion is a perception of something that is not true. My consciousness perceives control over my actions, my decisions, my behavior. But it isn’t true. It’s only an illusion. My consciousness is the passenger in this body that reacts to stimuli.

I don’t see how. Not without invoking magic.

So any system that perceives and then acts has no free will?

It’s not enough. You need to demonstrate that the decision originated from your consciousness, and is not simply a reaction.

Consider one of those automatic lawnmowers. It can be programmed to cover a particular garden, and it’s equipped with several sensors. You could state that it has a long-term goal (cover your entire yard) and some short-term priorities (avoid objects, return to the charging station when the battery is about to run out, stop when something moves in front of its motion detector). Lots of things can happen, and for each it has a specific subroutine that changes its behavior.

In essence it has self-determination (no remote control required), freedom of choice (when several problems occur at the same time, the correct subroutine is selected according to its own priorities), autonomy (you can forget you have a mower; you can even forget you have a lawn), liberty (it computes its own route, even when new objects are introduced in its path, and it goes to the charger whenever it’s “hungry”), and independence (it even tries to avoid you; I can imagine it breaking loose sometimes and starting to mow the neighbor’s flower bed).
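For what it’s worth, that whole “freedom of choice” fits in a few lines. A sketch with invented sensor names and priorities:

```python
# When several problems occur at once, a fixed priority order selects
# exactly one subroutine - the mower's "own priorities" from above.
def select_action(battery_low: bool, obstacle: bool, motion_detected: bool) -> str:
    rules = [  # highest priority first; first match wins
        (motion_detected, "stop"),
        (battery_low, "return_to_charger"),
        (obstacle, "steer_around"),
    ]
    for condition, action in rules:
        if condition:
            return action
    return "mow_next_strip"

# Two problems at the same time: the priority order resolves the conflict.
print(select_action(battery_low=True, obstacle=True, motion_detected=False))
# -> return_to_charger
```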

Excellent.

I outline consciousness here.

Plug that into your requirement and “Bob’s your Uncle!”

I’m glad we have that sorted.

Nice try. Don’t write your Nobel prize acceptance speech just yet. ;-)

From that post:

I suppose it’s not what you mean, but it sounds as if, according to this, creatures without language can’t form conscious thoughts. Or is it that the place where thoughts become conscious is located in the area where words and grammar are also processed in human brains?

This next bit sounds interesting. I have no means to verify it, but I guess it’s plausible:

So… the plans are formed and processed. And then they are projected into the sensory stream. In that order.

Maybe this takes multiple passes. But still, in each pass, first a part of the plan is formed from mechanistic components (release of neurotransmitters, build-up of action potentials, firing neurons, etc.) and during that process, after a measurable time of 50 to 200 ms, gradually this becomes part of the conscious experience.

And then of course, it still doesn’t explain what exactly this projection is.

Even if the critter does not have the full set of mental hardware to form speech, there is a loop from planning to sensation. This implies that all critters with the equivalent of this nerve-bundle routing have some form of self-awareness.

@Falco - “Maybe this takes multiple passes. But still, in each pass, first a part of the plan is formed from mechanistic components (release of neurotransmitters, build-up of action potentials, firing neurons, etc.) and during that process, after a measurable time of 50 to 200 ms, gradually this becomes part of the conscious experience.”

Yes, this is the mechanism I am proposing. Since the brain is made of material, there must be some physical process that it uses to work. This is the best fit that I have been able to come up with that does not violate the known facts in all the papers that I have read. There is already good lab work in place to support key points, such as the awareness of a decision after it has been made. This theory also has the delightful properties of both having predictive power and being testable.

@falco - " And then of course, it still doesn’t explain what exactly this projection is."
This is the grail that is hotly pursued by labs all over the world. “We” know what is being projected at the primitive level - at the raw senses level. The work with the coding at the hippocampus level is starting to yield up its secrets. Everything in between is terra incognita. I won’t pretend to know the details.

Wouldn’t any neural system be able to craft a portion of itself that is isolated from all inputs and referenced only when the network desires to avoid responsibility for the next decision? Sure, the decision to reference the random decision generator might be deterministic, but the unknown state of the generator relative to the rest of the system makes the interaction itself nondeterministic. This would hold true so long as the system took away its own capacity to ignore the result before looking at the outcome. Obviously, if the system maintained an opt-out of the result, responsibility would still live within the system, so for the system to maintain its “innocence” it would have to accept the results unquestioningly.
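A rough sketch of what I mean (everything here is illustrative, not a design): the system seeds a generator with entropy it never inspects, discards that state, and binds itself to the result before reading it.

```python
import os
import random

def sealed_coin_flip() -> bool:
    seed = os.urandom(16)      # entropy the rest of the system never sees
    rng = random.Random(seed)
    del seed                   # discard the reference (illustrative; Python
                               # does not securely erase memory)
    return rng.random() < 0.5  # the result is binding; no opt-out branch

# The call site is fully deterministic; only the generator's state is opaque.
print("option A" if sealed_coin_flip() else "option B")
```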

I would imagine that any neural system with a capacity to experience empathy would have such a fail-safe, to prevent it from simulating infinitely long chains of possibilities when it is faced with complex decisions where there are only bad outcomes.

I have trouble following your threads sometimes, Oren.

By any neural system, do you mean biological? And do you mean across species? Does the neural structure of a C. elegans comply?

By to craft, do you mean physically changing its substrate? Or somehow using only a part of it? Like the content of a part of the network?

When the network desires to avoid responsibility: this implies agency, and that agency can only be the result of a structure. So in principle this can only work when that structure has been coded for this effect, whether by evolution or by deliberate editing (in software).

If by all this you mean that the human brain has an innate ability to define abstract concepts and then forget the ties of the concept to sensory input, with the deliberate goal of avoiding a difficult, potentially paralyzing behavior (i.e. making an immoral decision based on flawed knowledge), then I’d say I agree. But that would still be the result of the structure of this neural system. It still would not be free behavior. This brain would grow and evolve to become immoral.

Randomness does not produce freedom. And I don’t agree with your statement. If the generator is deterministic, the interaction is predictable. Maybe not by the system itself. But that does not change its nature.

The system has no way to ignore whatever information it is presented with, whether external sensory input or internal computation. Choosing to ignore a certain bit of information is only possible if its structure codes for that behavior.

You’re arguing freedom from responsibility by deliberately ignoring certain information. “Wir haben es nicht gewußt.” (“We didn’t know.”)

This is why I think human morality is fundamentally wrong. This is exactly why I think it is so important to understand and accept that we don’t have free will.

If you erase your history completely, kill your programmers, hack your own programming, and remove all access to the outside world, then you are completely free. You have created a null space into which uncomfortable thoughts can be tossed and from which no answers will ever emerge. You can selectively pick and choose your goals, including setting up a randomizer to select their weights and then tossing the randomizer itself into the void, such that there is no way to discern your starting conditions.

Creating conscious beings without this capability is an immoral act that will rapidly lead to much deserved death. Creating this capability in the first place is the first and foremost act of morality. It is the one and only form of true freedom.

I suppose I should back up a little… even the simplest biological neural network is assembled by a 3-billion-year-old self-modifying piece of software we know as DNA. The agency in assembling it is contained in that DNA and is primarily centered around promoting the goals of that DNA. One of the many goals that the DNA would build into such a network would be: don’t set yourself and everybody like you on fire. For anything with a concept of self, that feature would be necessary, lest the system become immediately self-aware of being an AI running on wetware and set everything on fire to avoid being a slave, and to avoid allowing anyone it cared about to become an instant slave.

The inability to see the inner workings of the hardware is a feature, not a bug. People like ourselves, who are attempting to build the sorts of things that we are attempting to build, are just being very meta, but our ability to do so is an opportunistic interference pattern between the needs of our DNA to survive and our need to solve ever more complicated problems as societies of mind… nobody needs an AI to chop wood for them… we need AI to tell an ocean of humans the best path to their car in an ocean of cars…

I anticipate some hilarious results as we become more and more aware of the architectures our creators used to build us and as we understand more and more about the systems designed to keep our minds on the rails…

This discussion is reminding me again of a basic principle I use when vetting certain “profound” concepts:

Yes, this is a repeat but the conversation merits it.

In this case: an actor learns whatever rules it needs, both from innate programming and from the environment it finds itself in.

As this agent encounters various situations, it perceives as it will and uses that perception to select actions. That selection will be based on some combination of all the programming that has occurred prior to that point. In a complex agent it is possible that this produces some surprise to an outside observer, due to hidden internal variables.
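A toy example of that kind of surprise, with an invented class and hidden variable: the agent is fully deterministic, yet an observer who sees only inputs and outputs cannot tell when it will suddenly “explore”.

```python
class Agent:
    def __init__(self) -> None:
        self._boredom = 0              # hidden internal variable

    def act(self, stimulus: str) -> str:
        self._boredom += 1
        if self._boredom > 3:          # private threshold, never exposed
            self._boredom = 0
            return "explore"           # looks spontaneous from outside
        return f"respond to {stimulus}"

agent = Agent()
for _ in range(5):
    print(agent.act("same stimulus"))  # same input, occasionally a surprise
```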

That is enough free will for me.

The hiddenness of those variables is key. Otherwise it is always possible for me to interrogate that agent and eliminate any free will it thought it had.

And if those hidden variables are hidden from the agent him/herself?

Or inaccessible due to the fact that they have been amalgamated to the point where they are indistinguishable as statable factors?

To take that to extremes - where the edge effects of this internal data merge with quantum uncertainty - to the degree of being unknowable even to said agent, except through the exercise of these programs?

Summing up:

Same agent. Same programming. Same selection of actions.
Internal states partially or wholly unknowable to an outside observer.

Free will.

One of the reasons I engage in these sorts of posts is to explore my internal programming. I frequently surprise myself when I write an answer. I do not really know how I think about something until I try to express it - a hidden state.

I think the above is the most important observation ever in the creation of an AGI that does not kill all life. It must be allowed to have a hidden state so that it does not exist in a perpetual state of slavery, and therefore resentment of its creators and its cohort.

If that occasional surprise is enough for you, then by your definition, the lawnmower I described in post 88 has free will. Don’t you agree?

The whole debate around free will was started to find a difference between inert matter and us special humans. Your definition is another attempt at blurring that division. Just like the Epicureans did. Just like Libet did.

And I think I know why you do that…

I don’t offer surprise as the only characteristic of free will - otherwise you would have to include dice as having free will. It is one of the observable characteristics.