Agency, Identity and Knowledge Transfer

I came across this stream highlight:

I’m sorry I missed it live. There are two points I’d like to get feedback on:

Isn’t this in conflict with the Thousand Brains Theory, where different cortical columns vote on features (i.e. the room)?

If for instance I lose a hand, and get a clunky prosthetic limb instead, I understand I have to learn to use it and will have some re-exploring to do. But lots of features (I suspect) are going to feel familiar fairly quickly.

Saying that I can’t use the previous model of the room is (in my opinion) saying that there is no crosstalk between senses. Whether these are currently attached senses or future senses does not matter, as long as the senses are (or get) attached to the neocortex.

Another way to look at it is learning to play Dust in the Wind on an acoustic guitar, and then relearning it on an electric guitar. This is a bit like changing your sensor set. The feel is quite different, but much of your knowledge of playing the song (your model) will help you in the relearning. The same goes when you learn it on a ukulele (different scale), or on a piano or kazoo (different senses).

The other point I wonder about is the brain transfer conundrum:

I think the upload scenario is long overdue. Let me counter with another thought experiment:

Imagine we can build a single neuron in an enclosed silicon chip. It performs exactly the same functions at exactly the same speeds as its biological counterpart. It receives input voltages from thousands of inputs and produces an output signal just like the real neuron would. It creates/changes/destroys simulated synapses according to the same rules.

When we implant this electronic neuron instead of the real one, would that still be you? Or even more basic, would your brain operate differently?
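For concreteness, the "same function, different substrate" idea can be sketched with a textbook leaky integrate-and-fire update (a gross simplification of a real neuron; the parameters and numbers here are mine, purely for illustration). Any implementation, biological or electronic, that reproduces this input/output mapping is interchangeable from the network's point of view:

```python
# Leaky integrate-and-fire sketch: a substrate-neutral description of what
# a replacement unit must reproduce (a textbook toy, not a real neuron).

def lif_step(v, input_current, leak=0.1, threshold=1.0):
    """One update: integrate input, leak toward rest, spike at threshold."""
    v = v * (1 - leak) + input_current
    if v >= threshold:
        return 0.0, True   # reset membrane potential after a spike
    return v, False

v, spikes = 0.0, 0
for _ in range(30):
    v, fired = lif_step(v, input_current=0.2)
    spikes += fired
print(spikes)  # number of spikes emitted over 30 steps
```

Whether `lif_step` runs in wetware or in a chip is invisible to everything downstream, as long as the mapping from inputs to spikes is the same.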

If that one neuron represents one bit in a sparse memory, would the electronic version invalidate the SDR? Would the spatial pooler no longer work?

Now do that with 1000 neurons. With 10% of your neurons. And with 100% of your neocortex. At which point would your brain stop working?
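To make the SDR point concrete, here is a toy sketch (my own illustration, not Numenta code): an SDR's semantics live entirely in which bits are active, so swapping the unit that computes any given bit changes nothing about overlaps, and hence nothing the spatial pooler cares about.

```python
# Toy illustration: SDR semantics depend only on which bits are active,
# not on the substrate that computed them.

def overlap(sdr_a, sdr_b):
    """Number of active bits shared by two SDRs (sets of active indices)."""
    return len(set(sdr_a) & set(sdr_b))

# Hypothetical encodings of similar inputs (active-bit indices only).
room_from_hand       = {3, 17, 99, 256, 257, 900, 1024, 1500}
room_from_prosthetic = {3, 17, 99, 256, 300, 901, 1024, 1500}

print(overlap(room_from_hand, room_from_prosthetic))  # shared active bits

# Replacing the unit that produces, say, bit 256 with an electronic one
# changes nothing here: the representation is the pattern, not the neuron.
```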

Now let’s backtrack a bit. Instead of replacing your neurons, let’s build a few million cortical columns’ worth of virtual neurons. And let’s connect them, through whatever layers, with some of your (real) columns and with your thalamus.

Wouldn’t your brain start using this expanded virtual neocortex to store new models?


I don’t think it will work very well, as your semantics are grounded in your body’s senses/controls.

They are trained at the same time; this is true up to and through the association regions.

All the connections that carry meaning would be disrupted; it would all be trash.

See this paper for details:


What do you mean by it? Which part of which post are you referring to, Mark?

See this paper for details:

I skimmed through it, but I’m not sure I understand. Actually I’m sure I don’t understand everything. It seems that Dr. Pulvermüller argues against centralised information (to explain the binding problem?). But as far as I understand, Numenta’s HTM works with a very distributed and probably heavily redundant information model.

Observing memory loss or skill loss in patients with particular lesions could also be explained by disruption of the grid cell mechanics. If a column in a region is no longer able to generate grid cell signals, then the locally encoded sensory information can’t be accessed anymore, or can’t be related by location to other sensory information. The same goes for abstract information.
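To illustrate that grid-cell point, here is a deliberately oversimplified sketch (my own toy model, not how cortical grid modules actually work): suppose each module encodes position modulo its own spatial period. The combined code pins down a unique location, and lesioning modules collapses many locations onto the same code, so locally stored information can no longer be addressed unambiguously.

```python
# Toy grid-cell sketch: several modules with different (coprime) periods
# jointly encode a position; losing modules makes positions ambiguous.

PERIODS = [3, 5, 7]  # hypothetical module periods, chosen for illustration

def encode(position, periods=PERIODS):
    """Code for a position: its phase within each module."""
    return tuple(position % p for p in periods)

# With all three modules, positions 0..104 get distinct codes (3*5*7 = 105).
codes = {encode(x) for x in range(105)}
print(len(codes))  # 105 distinct codes

# With one module "lesioned", many positions collapse onto the same code.
codes_lesioned = {encode(x, PERIODS[:2]) for x in range(105)}
print(len(codes_lesioned))  # only 3*5 = 15 distinct codes remain
```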

I’m out of my depth with this article. Could you explain which part specifically invalidates my point(s)?

Hello GoodReason. Welcome to the forum.

I’m puzzled by your post. What is this video? Has this anything to do with anything? Or is this a troll bot?

I talked with @Falco today on Twitch about this.


I’m not entirely convinced by your arguments @rhyolight. By conceding that you could replace your entire body with another sensor set through gradual substitution, you agree that your current model can actually be useful for the new sensor set. (It has to adapt slightly to the new sensors, but it’s faster than learning from scratch, and you retain your old knowledge about the world.)

So your claim is really equivalent to saying that “without ever being able to directly relate old and new sensory input, no part of the model can be reused”.

I’m sure that is true for things like learning to use a new limb. You could learn faster if you could relate interactions with this new limb to interactions with your arm (or something else that was part of your old model). Without being able to cross-reference like that, you would have to learn how to control this limb from scratch.

However, even without your old sensors, you are able to think like before. You would retain high-level concepts like language, math, knowledge about science and history, etc. Even if you are robbed of all old senses, I don’t see any reason why your internal monologue or ability to think about concepts with which you are already familiar must stop.

In some sense, you could say that your ability to remember, imagine, reason and perform these highly abstract tasks is a separate sense on its own. If you are able to think 1+1=2 using your old model, and you are taught 1+1=2 in the new model, that allows for a type of cross-referencing even without the involvement of any old external sensors. This could also hold for language and many other abstract concepts. Through these means, you could eventually learn to understand how the new and old models are related.

So my stance is that low-level sensory-motor control would absolutely have to be relearned from scratch, but abstract concepts could eventually be reused.


I think you’re possibly right. I alluded to this in my original tweets:

The question is how much of the old model would still exist after you’ve been forced to re-learn reality through a new sensor set?


First of all, such a drawing is a brilliant way to discuss a problem!
But you hardly talk about intelligence in this video at all, only about the agent and its environment.
What kind of intelligence is inside the agent? How is its model of the world built? It can be done in many different ways, and for some of them what you say there will be true (look at DeepMind’s Atari results), but for others, including cortex-based intelligence, it’s not.
You can learn an environment, for example by looking at it, and then successfully navigate it using any other modality: touch, smell, hearing, etc. Perhaps it’s not so obvious because our vision is a much better sensor than the others for exploring the environment, but if you do it with a rat, it’s absolutely clear.


I’m not sure if this adds anything but at the risk of muddying the waters, in two related threads we have been discussing the biological underpinnings of intelligence.
The evolutionary history of integrated perception, cognition, and action
Affordance Competition Hypothesis
In both, the basic premise is that there are control loops running from senses to motors, with some degree of processing between the two. In all cases, the goal is to select the best motor plan.
The cortex part that adds “intelligence” has morphed considerably over time but the basic function really has stayed much the same. See:

Rather than the energy-expenditure-minimizing model as described by Friston, Cisek replaces this with control loops: some error signal that is reduced by some response.

This suggests that stimulus-response processing exists on a continuum, from rather simple stimulus-response actions to control loops extending very far outside the body and over very long time scales. At some point, these control loops start to look pretty smart.
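The "error signal reduced by some response" idea can be sketched as a minimal proportional control loop (a generic textbook illustration of mine, not taken from Cisek's papers): sense the discrepancy between the desired and current state, and respond in proportion to it.

```python
# Minimal feedback control loop: act to reduce the error between a
# desired state (setpoint) and the sensed state.

def control_loop(state, setpoint, gain=0.5, steps=20):
    """Proportional controller: each response is a fraction of the error."""
    for _ in range(steps):
        error = setpoint - state   # sensed discrepancy
        response = gain * error    # response that reduces the error
        state += response
    return state

final = control_loop(state=0.0, setpoint=10.0)
print(round(final, 3))  # converges toward the setpoint
```

Whether the "state" is body temperature, limb position, or something far more abstract, the loop structure is the same; the continuum above is about how far the sensed state and the response reach beyond the body.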


Right, and this is the point when these control loops are not that relevant anymore.
What is the action part of reading a fiction book, for example?

As I have stated elsewhere, part of what the control loops drive can consist of loading the local memory store with environmental data to make other control loops smarter.

Sure, it makes sense.
However, I think that it would be more productive to accept the fact that the human mind has emergent properties which are not useful from the control-theory perspective. Otherwise, we would have to agree that binge-watching instead of, for example, preparing for an exam is a beneficial adaptive behaviour :slight_smile:

Or we could agree that the basic drives do not always converge on the optimal behavior.
A moth flying around a light bulb is trapped in the outcome of its control loops.

As I just intimated, loading data to shape behavior results in a certain degree of plasticity. The genetic innovations of social groups and external memory bring high flexibility and a much more rapid response to changing environmental situations, but also more variation in the ways that behavior can result in sub-optimal expression of basic drives.


@rhyolight, are you sure? How exactly do you know? I don’t think this is true.

Let me be very specific about what I think is incorrect here: it’s impossible that the structures you created with the original sensor set have absolutely no use in learning the structure of the environment with the new sensor set. (Maybe that’s not what you’re claiming; maybe you’re claiming they have no direct use, which is probably true: they can’t just be ported over, but they are indirectly useful.)

Those structures are indirectly useful because they’re analogous. They guide your predictions. That’s what intelligence does - reasoning by analogy. So if you find a structure in your brain that seems to have predictive usefulness when creating a new model, you will naturally use it to guide your predictions and therefore your attention (how you go about discovering the new structure).

Just as you show (later in the video) that the agent can use its developed knowledge of how its sensors work to explore a new environment, so too can it use its developed knowledge of its abstract conception of the environment to learn how to explore the same environment with new sensors. They’re two sides of the same coin; they have to be.

That’s not to say it doesn’t have to build a whole new conception; it does. But what I’m saying is that the existing model is necessarily analogous to what its new conception must be, and is therefore useful in helping it learn its new conception of the same environment with new sensors. A generally intelligent agent will capitalize on that analogy, because that’s essentially what it means to be generally intelligent: to recognize the analogy and capitalize on it.


Yes, this is what I was claiming.


I whole-heartedly agree then :slight_smile:


I think both are present in people’s minds (sub-optimal control loops and emergent properties that are not useful from the control-theory perspective), but only the first applies to a moth.

If I may suggest an even simpler example: every six months my ex-wife would completely rearrange the furniture. She is quite good at it and imaginative. I suck, btw.
So we see many functions involved, ranging from visualization to long-term memory.
And yet there are many mental constructions we promptly discard, retaining a description rather than any details. We remember that they are not wanted, though.
These are highly integrated, and yet some things, such as agency or intent, are foundational. So that is a behaviorist view, perhaps. I guess it’s my view prior to reading the documents.


“Rather than the energy-expenditure-minimizing model as described by Friston, Cisek replaces this with control loops: some error signal that is reduced by some response.”
That’s not only my view but also my most frequently used definition of consciousness. It means different things in different contexts. Define your terms (for purposes of discussion), eh?
So at the most fundamental level we have a state, intent (rules, etc.) and a loop in an anatomical and therefore purposeful hierarchy.
From there we can look at influence of energy economy, the medium, memory, prediction vs correction and more complex properties specific to mammals.
The repetition of functionality in different contexts is central to biology and the universe more broadly.
Your comment shows how this certainly applies to the brain. For example, dialog: that’s very high level. Dialog involves multiple error signals.