Agency, Identity and Knowledge Transfer

I don’t think it will work very well, as your semantics are grounded in your body’s senses/controls.

They are trained at the same time; this is true up to and through the association regions.

All the connections that carry meaning would be disrupted; it would all be trash.

See this paper for details:

https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf

1 Like

What do you mean by it? Which part of which post are you referring to, Mark?

See this paper for details:

I skimmed through it, but I’m not sure I understand. Actually I’m sure I don’t understand everything. It seems that Dr. Pulvermüller argues against centralised information (to explain the binding problem?). But as far as I understand, Numenta’s HTM works with a very distributed and probably heavily redundant information model.

Observing memory loss or skill loss in patients with particular lesions could also be explained by disruption of the grid cell mechanics. If a column in a region is no longer able to generate grid cell signals, then the locally encoded sensory information can’t be accessed anymore, or can’t be put in relation to the locations of other sensory information. The same goes for abstract information.
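To make "can’t be accessed anymore" concrete, here is a toy sketch in plain Python (not HTM code; the class and names are made up purely for illustration). Sensory features are stored keyed by a location code, so if the location signal can no longer be generated, the stored features become unreachable even though they still exist:

```python
# Toy illustration only: sensory features stored against location codes.
# If the location (grid-cell-like) signal can no longer be generated,
# the stored features become unreachable even though they still exist.

class LocationIndexedMemory:
    def __init__(self):
        self._store = {}          # location code -> sensory feature
        self.location_ok = True   # stands in for intact grid cell mechanics

    def encode_location(self, place):
        if not self.location_ok:
            return None           # "lesioned": no location code available
        return hash(place)        # crude stand-in for a grid cell code

    def store(self, place, feature):
        code = self.encode_location(place)
        if code is not None:
            self._store[code] = feature

    def recall(self, place):
        code = self.encode_location(place)
        return self._store.get(code) if code is not None else None


mem = LocationIndexedMemory()
mem.store("kitchen corner", "smell of coffee")
print(mem.recall("kitchen corner"))   # 'smell of coffee'

mem.location_ok = False               # simulate the lesion
print(mem.recall("kitchen corner"))   # None: the memory is intact but unreachable
```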

I’m out of my depth with this article. Could you explain which part specifically invalidates my point(s)?

Hello GoodReason. Welcome to the forum.

I’m puzzled by your post. What is this video? Has this anything to do with anything? Or is this a troll bot?

I talked with @Falco today on Twitch about this.


2 Likes

I’m not entirely convinced by your arguments, @rhyolight. By conceding that you could replace your entire body with another sensor set through gradual substitution, you agree that your current model can actually be useful for the new sensor set. (It has to adapt slightly to the new sensors, but it’s faster than learning from scratch, and you retain your old knowledge about the world.)

So your claim is really equivalent to saying that “without ever being able to directly relate old and new sensory input, no part of the model can be reused”.

I’m sure that is true for things like learning to use a new limb. You could learn faster if you could relate interactions with this new limb to interactions with your arm (or something else that was part of your old model). Without being able to cross-reference like that, you would have to learn how to control this limb from scratch.

However, even without your old sensors, you are able to think like before. You would retain high-level concepts like language, math, knowledge about science and history, etc. Even if you are robbed of all old senses, I don’t see any reason why your internal monologue or ability to think about concepts with which you are already familiar must stop.

In some sense, you could say that your ability to remember, imagine, reason and perform these highly abstract tasks is a separate sense on its own. If you are able to think 1+1=2 using your old model, and you are taught 1+1=2 in the new model, that allows for a type of cross-referencing even without the involvement of any old external sensors. This could also hold for language and many other abstract concepts. Through these means, you could eventually learn to understand how the new and old models are related.

So my stance is that low-level sensory-motor control would absolutely have to be relearned from scratch, but abstract concepts could eventually be reused.
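To illustrate the kind of cross-referencing I have in mind, here is a deliberately oversimplified sketch (my own toy example, not HTM; it assumes the relation between the two models is linear, which a real brain certainly is not, and all names and numbers are made up):

```python
# Toy sketch: abstract concepts that exist in both the old and the new model
# act as anchors. A mapping fitted on the anchors alone lets other old-model
# knowledge be carried over without being re-taught.
import numpy as np

rng = np.random.default_rng(0)
concepts = ["one", "plus", "equals", "two", "history"]

# The old and new models encode the same concepts differently
# (here the relation is linear; that is the big simplifying assumption).
true_map = rng.normal(size=(4, 4))
old = {c: rng.normal(size=4) for c in concepts}
new = {c: true_map @ old[c] for c in concepts}

# Cross-reference using only the shared abstract anchors ("1 + 1 = 2").
anchors = ["one", "plus", "equals", "two"]
A = np.stack([old[c] for c in anchors])
B = np.stack([new[c] for c in anchors])
fitted, *_ = np.linalg.lstsq(A, B, rcond=None)   # old -> new, fitted on anchors only

# Knowledge that was never re-taught in the new model still transfers.
err = np.linalg.norm(old["history"] @ fitted - new["history"])
print(f"transfer error for a concept never re-taught: {err:.2e}")
```

The point is only that a handful of shared abstract facts can pin down the relation between two otherwise unrelated representations; how the brain would actually do this is exactly what is in question.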

1 Like

I think you’re possibly right. I alluded to this in my original tweets:

The question is how much of the old model would still exist after you’ve been forced to re-learn reality through a new sensor set?

1 Like

First of all, such a drawing is a brilliant way to discuss a problem!
But you hardly talk about intelligence in this video at all, only about the agent/environment relationship.
What kind of intelligence is inside the agent, and how is the model of the world built? It can be done in many different ways; for some of them what you describe will be true (look at DeepMind’s Atari results), but for others, including cortex-based intelligence, it’s not.
You can learn an environment, for example by looking at it, and then successfully navigate it using any other modality: touch, smell, hearing, etc. Perhaps it’s not so obvious because our vision is a much better sensor than the others for exploring the environment, but if you do it with a rat, it’s absolutely clear.
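As a toy sketch of what I mean (made up purely for illustration, nothing to do with actual rat experiments or with HTM): the environment model is stored allocentrically, keyed only by location, so a map built during "visual" exploration can be navigated later without ever touching the visual data again:

```python
# Toy sketch: the learned map is keyed by location only, so the modality
# used to build it is irrelevant to the modality used to navigate it.
from collections import deque

GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def build_map_by_vision(grid):
    """Exploration phase: record which cells are open, keyed by location only."""
    return {(r, c): ch == "."
            for r, row in enumerate(grid)
            for c, ch in enumerate(row)}

def navigate_without_vision(world_map, start, goal):
    """Navigation phase: breadth-first search over the stored map.
    Nothing here depends on the sensor that built the map."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if world_map.get(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

world_map = build_map_by_vision(GRID)
print(navigate_without_vision(world_map, (1, 1), (3, 3)))
```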

2 Likes

I’m not sure if this adds anything, but at the risk of muddying the waters, in two related threads we have been discussing the biological underpinnings of intelligence.
The evolutionary history of integrated perception, cognition, and action
and
Affordance Competition Hypothesis
In both the basic premise is that there are control loops that go from senses to motors, with some degree of processing between the two. In all cases, the goal is to select the best motor plan.
The cortex part that adds “intelligence” has morphed considerably over time, but the basic function really has stayed much the same. See:

Cisek replaces the minimizing-energy-expenditure model described by Friston with control loops: some error signal that is reduced by some response.

This suggests that stimulus-response processing exists on a continuum, from rather simple stimulus-response actions to the extension of these control loops to very far outside the body and over very long time scales. At some point, these control loops start to look pretty smart.
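As a minimal sketch of what I mean by "some error signal that is reduced by some response" (purely illustrative; the state, set point, and gain are arbitrary numbers):

```python
# Minimal sketch of the "error signal reduced by a response" idea:
# a proportional control loop driving a state toward a set point.

def control_loop(state, set_point, gain=0.5, steps=10):
    for step in range(steps):
        error = set_point - state      # the error signal
        response = gain * error        # the response that reduces it
        state += response              # acting on the world
        print(f"step {step}: state={state:.3f} error={error:.3f}")
    return state

# The same pattern, extended far outside the body and over long time scales,
# is what starts to look "smart".
control_loop(state=0.0, set_point=1.0)
```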

2 Likes

Right, and this is the point when these control loops are not that relevant anymore.
What is the action part of reading a fiction book, for example?

As I have stated elsewhere, some of the control loops’ drives can consist of loading the local memory store with environmental data to make other control loops smarter.

Sure, it makes sense.
However, I think it would be more productive to accept that the human mind has emergent properties which are not useful from the control-theory perspective. Otherwise, we would need to agree that binge-watching instead of, for example, preparing for an exam is a beneficial adaptive behaviour :slight_smile:

Or we could agree that the basic drives do not always converge on the optimal behavior.
A moth flying around a light bulb is trapped in the outcome of its control loops.

As I just intimated, the loading of data to shape behavior results in a certain degree of plasticity. The genetic innovations of social groups and external memory bring high flexibility and a much more rapid response to changing environmental situations, but also more variation in the ways that behavior can result in sub-optimal expression of basic drives.

1 Like

@rhyolight, are you sure? How exactly do you know? I don’t think this is true.

Let me be very specific about what I think is incorrect here: it’s impossible that the structures you created on the original sensor set have absolutely no use in learning the structure of the environment using the new sensor set. (Maybe that’s not what you’re claiming; maybe you’re claiming they have no direct use, which is probably true: they can’t just be ported over, but they are indirectly useful.)

Those structures are indirectly useful because they’re analogous. They guide your predictions. That’s what intelligence does: reasoning by analogy. So if you find a structure in your brain that seems to have predictive usefulness when creating a new model, you will naturally use it to guide your predictions and therefore your attention (how you go about discovering the new structure).

Just as you show (later in the video) that the agent can use its developed knowledge of how its sensors work to explore a new environment, so too can it use its developed knowledge of the abstract conception of the environment to learn how to explore the same environment with new sensors. They’re two sides of the same coin; they have to be.

That’s not to say it doesn’t have to build a whole new conception; it does. But what I’m saying is that the existing model is necessarily analogous to what its new conception must be, and is therefore useful in helping it learn its new conception of the same environment with new sensors, and that a generally intelligent agent will capitalize on that analogy, because that’s essentially what it means to be generally intelligent: to recognize the analogy and capitalize on it.

2 Likes

Yes, this is what I was claiming.

2 Likes

I whole-heartedly agree then :slight_smile:

1 Like

I think both are present in people’s minds (sub-optimal control loops and emergent properties that are not useful from the control-theory perspective), but only the first relates to a moth.

If I may suggest an even simpler example: every six months my ex-wife would completely rearrange the furniture. She is quite good at it and imaginative. I suck, btw.
So we see many functions involved, ranging from visualization to long-term memory.
And yet there are many mental constructions we promptly discard, retaining a description rather than any details. We remember that they are not wanted, though.
These are highly integrated, and yet some things, such as agency or intent, are foundational. So that is a behaviorist view, perhaps. I guess it’s my view prior to reading the documents.

G12 is the next generation of miracles

“Cisek replaces the minimizing-energy-expenditure model described by Friston with control loops: some error signal that is reduced by some response.”
.
That’s not only my view but my most frequently used definition of consciousness. It means different things in different contexts. Define your terms (for purposes of discussion), eh?
So, at the most fundamental level, we have a state, an intent (rules, etc.), and a loop in an anatomical and therefore purposeful hierarchy.
.
From there we can look at the influence of energy economy, the medium, memory, prediction vs. correction, and more complex properties specific to mammals.
The repetition of functionality in different contexts is central to biology and to the universe more broadly.
Your comment shows how this certainly applies to the brain. For example, dialogue: that’s very high level. Dialogue has multiple error signals.

Thought experiments can be problematic in science and philosophy. I made up a new term for this: the “canonical case fallacy”.
.
With the former, you can constrain the experiment to rationalize your theory. With the latter, the failure to extend it to more complex examples leads to believing it’s universal. Philosophy is composed of countless partial truths converted to ideology.
.
Would you mind starting a thread on your views around free will sometime?

1 Like

I agree. I am skeptical about everything, including my current points of view.

There is a long thread that grew out of a post by @morning_star called Determinism in which a few of us discuss our views on free will. But it’s about 200 posts long.