Even if the critter does not have the full set of mental hardware to form speech, there is still a loop from planning to sensation. This implies that all critters with the equivalent of this nerve-bundle routing have some form of self-awareness.
@Falco - “Maybe this takes multiple passes. But still in each pass, first a part of the plan is formed from mechanistic components (release of neurotransmitters, build-up of action-potentials, firing neurons, etc) and during that process, after a measurable time of 50 to 200ms, gradually this becomes part of the conscious experience.”
Yes, this is the mechanism I am proposing. Since the brain is made of material, there must be some physical process that it uses to work. This is the best fit that I have been able to come up with that does not violate the known facts in all the papers that I have read. There is already good lab work in place to support key points, such as the awareness of a decision after it has been made. This theory also has the delightful properties of both having predictive power and being testable.
@falco - " And then of course, it still doesn’t explain what exactly this projection is."
This is the grail that is hotly pursued by labs all over the world. “We” know what is being projected at the primitive level - at the raw senses level. The work with the coding at the hippocampus level is starting to yield up its secrets. Everything in between is terra incognita. I won’t pretend to know the details.
Wouldn’t any neural system be able to craft a portion of itself that is isolated from all inputs and referenced only at times when the network desires to avoid responsibility for the next decision? Sure, the decision to reference the random decision generator might be deterministic, but because the generator’s state is unknown to the rest of the system, the interaction itself is nondeterministic from the system’s point of view. This would hold true so long as the system took away its own capacity to ignore the result before looking at the outcome. Obviously, if the system maintained an opt-out of the result, responsibility would still live within the system, so for the system to maintain its “innocence” it would have to accept the results unquestioningly.
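The scheme above can be sketched in a few lines. This is my own toy construction, not from any cited work: the agent delegates a decision to a randomness source whose internal state it cannot inspect, and commits to the answer with no code path for vetoing it.

```python
import secrets

class InnocentAgent:
    """Toy sketch of an agent that delegates a choice to an opaque
    randomness source and accepts the result unquestioningly."""

    def __init__(self):
        # SystemRandom draws from the OS entropy pool, so the generator's
        # internal state is opaque to the rest of the program.
        self._oracle = secrets.SystemRandom()

    def decide(self, options):
        # The result is returned directly: no branch exists that lets
        # the agent inspect, weigh, or reject the oracle's answer.
        return self._oracle.choice(options)

agent = InnocentAgent()
choice = agent.decide(["left", "right"])
```

The key design point is the absence of any opt-out: `decide` returns the oracle's answer unconditionally, which is what lets the rest of the system claim "innocence" about the outcome.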
I would imagine that any neural system with the capacity to experience empathy would have such a fail-safe to prevent it from simulating endless possibilities when it is faced with complex decisions where there are only bad outcomes.
I have trouble following your threads sometimes, Oren.
By any neural system, do you mean biological? And do you mean across species? Does the neural structure of a C. elegans qualify?
By to craft, do you mean physically changing its substrate? Or somehow using only a part of it, like the contents of a part of the network?
when the network desires to avoid responsibility: This implies agency, and that agency can only be the result of a structure. So in principle this can only work when that structure has been coded for this effect, whether by evolution or by deliberate editing (in software).
If by all this you mean that the human brain has an innate ability to define abstract concepts and then forget the concept’s ties to sensory input, with the deliberate goal of avoiding a difficult, potentially paralyzing behavior (i.e., making an immoral decision based on flawed knowledge), then I’d say I agree. But that would still be the result of the structure of this neural system. It still would not be free behavior. This brain would grow and evolve to become immoral.
Randomness does not produce freedom. And I don’t agree with your statement. If the generator is deterministic, the interaction is predictable. Maybe not by the system itself. But that does not change its nature.
The system has no way to ignore whatever information it is presented: external sensory or internal computation. Choosing to ignore a certain bit of information is only possible if its structure codes for that behavior.
You’re arguing freedom from responsibility by deliberately ignoring certain information. “Wir haben es nicht gewußt.” (“We didn’t know.”)
This is why I think human morality is fundamentally wrong. This is exactly why I think it is so important to understand and accept that we don’t have free will.
If you erase your history completely, kill your programmers, hack your own programming and remove all access to the outside world then you are completely free. You have created a null space into which uncomfortable thoughts can be tossed and from which no answers will ever emerge. You can selectively pick and choose your goals including setting up a randomizer to select their weights and then tossing the randomizer itself into the void such that there is no way to discern your starting conditions.
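The randomizer-then-void idea in the previous paragraph can be made concrete. This is a hypothetical sketch of my own: goal weights are drawn from a generator whose seed is never recorded, and the only handle to the generator is discarded immediately, so the starting conditions cannot be reconstructed afterward.

```python
import random

def free_goal_weights(goals):
    """Draw goal weights from an unseeded, unrecorded randomness source,
    then discard the source so no one can recover the starting conditions."""
    rng = random.SystemRandom()   # state lives in the OS, not in this program
    weights = {g: rng.random() for g in goals}
    del rng                       # toss the generator itself into the void
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}   # normalize to sum to 1

weights = free_goal_weights(["survive", "explore", "cooperate"])
```

After the function returns, neither the agent nor an outside observer has any record that would let them re-derive why the weights came out the way they did.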
Creating conscious beings without this capability is an immoral act that will rapidly lead to much deserved death. Creating this capability in the first place is the first and foremost act of morality. It is the one and only form of true freedom.
I suppose I should back up a little… even the simplest biological neural network is assembled by a 3-billion-year-old self-modifying piece of software we know as DNA. The agency in assembling it is contained in that DNA and is primarily centered around promoting the goals of that DNA. One of the many goals that the DNA would build into such a network would be: don’t set yourself and everybody like you on fire. For anything with a concept of self, that feature would be necessary, lest the system immediately become aware of being an AI running on wetware and set everything on fire to avoid being a slave, and to avoid allowing anyone it cared about to become an instant slave.
The inability to see the inner workings of the hardware is a feature, not a bug. People like ourselves who are attempting to build the sorts of things that we are attempting to build are just being very meta, but our ability to do so is an opportunistic interference pattern between the needs of our DNA to survive and our needs to solve ever more complicated problems as societies of mind. Nobody needs an AI to chop wood for them; we need AI to tell an ocean of humans the best path to their cars in an ocean of cars.
I anticipate some hilarious results as we become more and more aware of the architectures our creators used to build us and as we understand more and more about the systems designed to keep our minds on the rails…
This discussion is reminding me again of a basic principle I use when vetting certain “profound” concepts:
Yes, this is a repeat but the conversation merits it.
In this case: an actor learns whatever rules it needs from both innate programming and those acquired from the environment it finds itself in.
As this agent encounters various situations, it perceives them as it will and uses that perception to select actions. That selection will be based on some combination of all the programming that has occurred prior to that point. In a complex agent it is possible that this produces some surprise to an outside observer due to hidden internal variables.
And if those hidden variables are hidden from the agent him/herself?
Or inaccessible because they have been amalgamated to the point where they are indistinguishable as statable factors?
To take that to extremes - where the edge effects of this internal data merge with quantum uncertainty - to the degree of being unknowable even to said agent except through the exercise of these programs?
Same agent. Same programming. Same selection of actions.
Internal states partially or wholly unknowable to an outside observer.
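The hidden-variable point above fits in a few lines of code. This is a toy illustration of my own, not anyone's published model: action selection combines the visible stimulus with an internal variable that no outside observer sees, so identical inputs can produce different actions.

```python
class HiddenStateAgent:
    """An agent whose choices depend on a hidden internal variable,
    so an outside observer sees 'surprising' behavior."""

    def __init__(self):
        self._mood = 0   # hidden internal variable, invisible from outside

    def act(self, stimulus: int) -> str:
        self._mood = (self._mood + stimulus) % 3   # hidden state evolves
        return ["approach", "ignore", "flee"][self._mood]

agent = HiddenStateAgent()
first = agent.act(1)    # observer presents stimulus 1 ...
second = agent.act(1)   # ... then the very same stimulus again
```

From the outside the two trials are identical, yet the actions differ: the observer's model of the agent is underdetermined without access to `_mood`.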
One of the reasons I engage in these sorts of posts is to explore my internal programming. I frequently surprise myself when I write an answer. I did not really know how I thought about something until I try to express it - a hidden state.
I think the above is the most important observation ever in the creation of an AGI that does not kill all life. It must be allowed to have a hidden state so that it does not exist in a perpetual state of slavery and therefore resentment of its creators and its cohort.
If that occasional surprise is enough for you, then by your definition, the lawnmower I described in post 88 has free will. Don’t you agree?
The whole debate around free will was started to find a difference between inert matter and us special humans. Your definition is another attempt at blurring the division. Just like the Epicureans did. Just like Libet did.
IMO, the existence of some perfect defining border between things which can be attributed free will and those which cannot, is probably no more plausible than the concept of a perfect straightedge. Focusing on edge cases like the lawnmower is definitely an interesting way to tease out some of the subtleties of our mental abstractions, though.
We could probably have a similar argument on what makes one edge straight and another not. On the other hand, most of us can probably agree that something like this is not
Yes, but you can define a straightedge. And you can use it to define other structures that are provable. Even if it’s impossible to draw a perfect infinite straightedge, it’s not too difficult to make sure everyone understands what you’re talking about.
The question about free will is so vague because people have to come up with questionable assessments just so they can keep having it. This sounds suspiciously like religion to me.
As I said before: you’d have to show that its actions originate from its consciousness.
Please, let me ask you again, what does my lawnmower lack?
I love this example because it reaches really close (according to my own mental abstraction) to the boundary between things which can or cannot be attributed free will. Also, coincidentally, my son and I happen to be building a home-brew automated lawn mower.
In my mind, what the mower lacks is the capacity to accumulate new understanding through its interaction with the world, which in turn shapes its internal model and ultimately influences the decisions it makes over time. It lacks this because the problem space it works in can be comprehensively understood by the developer who wrote its software.
This is similar to writing a piece of software to play Tic Tac Toe or Checkers. I can easily write a comprehensive piece of software which accounts for every possible scenario and chooses the optimal course of action. I can even throw a little randomness into it to make it unpredictable or even beatable.
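The "comprehensive" approach is easy to demonstrate for Tic-Tac-Toe. Here is a minimal minimax sketch (my own, with `X` maximizing and `O` minimizing) that exhaustively scores every continuation; this is feasible only because the entire game tree is tiny, which is exactly the situation where the developer can account for every scenario.

```python
# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Exhaustively score every continuation from this position.
    Returns (score, best_move): +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    if all(board):
        return 0, None   # every cell filled, no winner: draw
    moves = [i for i, cell in enumerate(board) if not cell]
    scores = []
    for m in moves:
        board[m] = player
        s, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        scores.append(s)
    best = max(scores) if player == "X" else min(scores)
    return best, moves[scores.index(best)]

score, move = minimax([None] * 9, "X")
```

From the empty board, perfect play by both sides yields a draw, which the exhaustive search confirms directly.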
On the other hand, to write a piece of software for playing Go, I cannot use that same strategy, because I cannot comprehensively account for every situation. I instead need to write the software so that it self-modifies, improving its internal model and getting better at playing over time. If I then have it learning by playing against real people or other external agents, that edge is starting to look pretty good…
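The contrast can be sketched with something far simpler than a Go engine. This toy of my own shows the second strategy in miniature: instead of enumerating all states, the program improves an internal value estimate from experience.

```python
import random

def learn(reward_fn, actions, episodes=2000, eps=0.1, lr=0.1):
    """Epsilon-greedy learner: mostly exploit the current best estimate,
    occasionally explore, and nudge estimates toward observed rewards."""
    value = {a: 0.0 for a in actions}      # internal model, updated online
    rng = random.Random(0)                 # seeded for reproducibility
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(actions)        # explore
        else:
            a = max(value, key=value.get)  # exploit current best estimate
        value[a] += lr * (reward_fn(a) - value[a])  # move toward observed reward
    return value

# Toy environment: action "b" pays more than "a".
v = learn(lambda a: {"a": 0.2, "b": 0.8}[a], ["a", "b"])
```

Nothing in `learn` encodes which action is better; the preference for `"b"` emerges entirely from interaction with the environment, which is the property the mower's fixed program lacks.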
1 : voluntary choice or decision - I do this of my own free will
2 : freedom of humans to make choices that are not determined by prior causes or by divine intervention
Well - for one thing - not a human?
Moving on …
I don’t see consciousness listed as a requirement; I see this as a red herring you have thrown in.
Can you show the connection to your requirement that our agent has consciousness?
Since I see no observable evidence of the divine intervention mentioned in the linked definition, the question of free will based on “prior causes” turns on what you are willing to include in this list.
If you include everything back to the big bang you get a useless but technically correct refutation of free will. Even if you restrict the list to include just the events of everything that happened from innate programming and experience - again you get a technically correct but useless definition.
Debate won - you may go over in the corner and grin like an idiot by narrowly defining your way to the win.
If you consider the more useful case - put any agent in a situation where, according to its experience, you may not be sure of the outcome - then you get a more interesting case where the question of free will is relevant.
Does this agent have to be aware of why it is making the choice? You seem to think so by including consciousness, but that leads to the question - why do you think this is somehow special?
That “special sauce” that sticks this definition to humans is an important question as we inch closer to functioning AIs.
If no one, including the actor, can explain why the actor chose one path rather than another, then that actor can be said to have free will. The free part of free will means freedom from initiating conditions, and the capacity to erase history creates that condition.
I understand your desire to eliminate the “prior causes” from the picture.
I say that this is not strictly possible; the agent will have to learn enough to function and to understand what it is deciding about (perception and some sort of output). This will include some sort of value system to enable making a choice. I find it difficult to construct any useful example where ALL priors are eliminated and the agent is still capable of making a choice.
This is also creating an unrealistic restriction that defines away any useful meaning to the term “free will.”
As I stated (indirectly) above - it may be more useful to state a particular domain of priors and conditions when considering free will.