Agency, Identity and Knowledge Transfer

The other point I wonder about is the brain transfer conundrum:

I think the upload scenario is long overdue. Let me counter with another thought experiment:

Imagine we can build a single neuron in an enclosed silicon chip. It performs exactly the same functions at exactly the same speeds as its biological counterpart. It receives input voltages from thousands of inputs and produces an output signal just like the real neuron would. It creates/changes/destroys simulated synapses according to the same rules.

When we implant this electronic neuron instead of the real one, would that still be you? Or even more basic, would your brain operate differently?

If that one neuron represents one bit in a sparse memory, would the electronic version invalidate the SDR? Would the spatial pooler no longer work?
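One reason to expect it wouldn't: SDR matching is robust to single-bit errors. Here is a minimal sketch of that robustness; the SDR size, sparsity, and the match threshold `theta` are illustrative values I've chosen, not anything from a specific Numenta implementation:

```python
import random

random.seed(42)
N, W = 2048, 40                      # SDR width and number of active bits (typical HTM-style values)
active = set(random.sample(range(N), W))

# Simulate one "neuron" (one bit) behaving differently after replacement:
corrupted = set(active)
corrupted.remove(next(iter(corrupted)))                      # one active bit goes silent
corrupted.add(next(i for i in range(N) if i not in active))  # one spurious bit turns on

overlap = len(active & corrupted)
theta = 20                           # an illustrative match threshold
print(overlap, overlap >= theta)     # 39 True: the representation still matches
```

With 40 active bits out of 2048, corrupting a single bit leaves an overlap of 39, far above any reasonable match threshold, so recognition of the pattern survives the substitution of one unit.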

Now do that with 1000 neurons. With 10% of your neurons. And with 100% of your neocortex. At which point would your brain stop working?

Now let's backtrack a bit. Instead of replacing your neurons, let's build a few million cortical columns' worth of virtual neurons. And let's connect them through whatever layers with some of your (real) columns and with your thalamus.

Wouldn’t your brain start using this expanded virtual neocortex to store new models?

I don’t think it will work very well as your semantics are grounded to your body senses/controls.

They are trained at the same time; this is true up to and through the association regions.

All the connections that carry meaning would be disrupted; it would all be trash.

See this paper for details:

https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf

Thanks for posting the article.

Not sure if we saw an explanation for four (semantic) mechanisms as opposed to some other number, but we released a brief video synopsis of our work since 12/12/12 and hope one slide in particular may offer some insight with respect to the four in question.

Thanks.


What do you mean by “it”? Which part of which post are you referring to, Mark?

See this paper for details:

I skimmed through it, but I’m not sure I understand. Actually I’m sure I don’t understand everything. It seems that Dr. Pulvermüller argues against centralised information (to explain the binding problem?). But as far as I understand, Numenta’s HTM works with a very distributed and probably heavily redundant information model.

Observing memory loss or skill loss in patients with particular lesions could also be explained by disruption of the grid cell mechanics. If a column in a region is no longer able to generate grid cell signals, then the locally encoded sensory information can’t be accessed anymore, or can’t be put in relation to its location relative to other sensory information. The same goes for abstract information.
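A toy model of that point: if a 1-D location is encoded by its phase in several grid-like modules with different periods, the full set of modules identifies a location uniquely, but losing one module makes many locations collide. The module periods here are purely illustrative, not biologically derived:

```python
# Toy grid-cell code: a location is represented by its phase in each module.
periods = [3, 5, 7]                  # hypothetical coprime module scales

def encode(x, mods):
    """Return the tuple of phases of location x across the given modules."""
    return tuple(x % p for p in mods)

# With all modules intact, codes are unique over 3*5*7 = 105 locations:
codes = {encode(x, periods) for x in range(105)}
print(len(codes))                    # 105 -> every location is distinguishable

# "Lesion" one module: locations now collide, so anything anchored to a
# location can no longer be recovered unambiguously.
lesioned = periods[:2]               # the period-7 module stops signaling
codes_lesioned = {encode(x, lesioned) for x in range(105)}
print(len(codes_lesioned))           # 15 -> 7 locations share each code on average
```

The collapse from 105 distinguishable locations to 15 ambiguous codes is one way to picture how a lesioned region could leave sensory memories intact but inaccessible.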

I’m out of my depth with this article. Could you explain which part specifically invalidates my point(s)?

Hello GoodReason. Welcome to the forum.

I’m puzzled by your post. What is this video? Has this anything to do with anything? Or is this a troll bot?

The “it” is moving a “person” from an old brain to a new body.

What the paper offers is that the core semantics are deeply intermixed with your somatosensory cortex and the rest of your cortex.

You learn everything through exploring your body and how it senses the universe. You suckle and learn the pleasures associated with that. This grounding to your body is the frame that you build your experience on. The millions and millions of learned micro-connections between the senses and the semantic structures that make up your declarative memory are built on this foundation.

If you read the paper with this interpretation in mind you can see how distributed your memory and the supporting semantics really are. You would have to extract the totality of the map to the body and the connections between maps and the local structure of the maps and the connectivity of the synapses to duplicate the you of you.

At this point you are not moving, you are duplicating.


Ok, thanks. But that’s in reference then to the point Matt is making. What about the cross-voting principle? Am I to understand that voting does not occur between columns of different senses?

Ok, I understand that. But what I describe is about merging. Where do you stand on that?

I know this is far from practically feasible. I just think this is a way to understand how the underlying principles work.

This is one of the key differences between me and Numenta. I see the streams running relatively pure until they converge in the association regions. I don’t see object recognition happening until then. The object learning that happens at that point is what is usually called sensor fusion. My take on the maps up to that point is that they are extracting everything possible out of the streams to present to the association region.

This difference in opinion could be sorted out experimentally: stimulate one sense, say optics, and look for a response in a different sensory area. To the best of my knowledge this has never been reported in any of the papers that I have read.

To support my view you would have to see cells that respond to multi-mode stimulation in the association areas. Like this:
https://www.cell.com/neuron/pdf/S0896-6273(01)00174-X.pdf

Matt’s motif of changing limbs? Strap-on prosthetics do this all the time, and it works. You learn to re-purpose the existing columns to run the new limb.

At a more intrusive level? Actually inject signals from the new limb into the cortex somehow? Let me approach this from a slightly different angle. If you were to add a new limb you would have to add both motor (frontal lobe) and sensory (somatosensory) connections to control it. For best results there would have to be an existing loop joining the new connection areas.

Exploring some closely related material …

There have been experiments where they add a new sense - say a compass sense. The one I liked the best was where you wear a sock that has a ring of vibrators around the top that are keyed to a compass sensor. It does not take long for that to become a “built in” sense and you always have a sense of which way you are facing.

I could not find the original article, but there were a bunch of interesting projects under the banner of “augmented humans.”
Here is a more intrusive and less capable compass sense:

An embedded magnet that allows you to sense magnetic fields:

If you google augmented humans with various added keywords you will find a bunch of projects. The human-hacking movement has hit the equivalent of the AI winter, but I suspect it will come back.

The current rage is adding more information to the existing sensory stream.
Retraining hearing to add a compass sense - I could see this going to your merging point:
https://www.nature.com/articles/srep42197

I talked with @Falco today on Twitch about this.



I’m not entirely convinced by your arguments @rhyolight. By conceding that you could replace your entire body with another sensor set through gradual substitution, you agree that your current model can actually be useful for the new sensor set. (It has to adapt slightly to the new sensors, but it’s faster than learning from scratch, and you retain your old knowledge about the world.)

So your claim really amounts to: “without ever being able to directly relate old and new sensory input, no part of the model can be reused”.

I’m sure that that is true for things like learning to use a new limb. You could learn faster if you can relate interactions with this new limb by comparing them to interactions with your arm (or something else that was part of your old model). Without being able to cross-reference like that, you would have to learn how to control this limb from scratch.

However, even without your old sensors, you are able to think like before. You would retain high-level concepts like language, math, knowledge about science and history, etc. Even if you are robbed of all old senses, I don’t see any reason why your internal monologue or ability to think about concepts with which you are already familiar must stop.

In some sense, you could say that your ability to remember, imagine, reason and perform these highly abstract tasks is a separate sense on its own. If you are able to think 1+1=2 using your old model, and you are taught 1+1=2 in the new model, that allows for a type of cross-referencing even without the involvement of any old external sensors. This could also hold for language and many other abstract concepts. Through these means, you could eventually learn to understand how the new and old models are related.

So my stance is that low-level sensory-motor control would absolutely have to be relearned from scratch, but abstract concepts could eventually be reused.


I think you’re possibly right. I alluded to this in my original tweets:

The question is how much of the old model would still exist after you’ve been forced to re-learn reality through a new sensor set?

First of all, such a drawing is a brilliant way to discuss a problem!
But you hardly talk about intelligence in this video at all, only about the agent/environment relationship.
What kind of intelligence is inside the agent? How is the model of the world built? It can be done in many different ways, and for some of them what you say there will be true (look at DeepMind’s Atari results), but for others, including cortex-based intelligence, it’s not.
You can learn an environment, for example by looking at it, and then successfully navigate it using any other modality: touch, smell, hearing, etc. Perhaps it’s not so obvious because our vision is a much better sensor than the others for exploring the environment, but if you do it with a rat, it’s absolutely clear.


I’m not sure if this adds anything but at the risk of muddying the waters, in two related threads we have been discussing the biological underpinnings of intelligence.
The evolutionary history of integrated perception, cognition, and action
and
Affordance Competition Hypothesis
In both the basic premise is that there are control loops that go from senses to motors, with some degree of processing between the two. In all cases, the goal is to select the best motor plan.
The cortex part that adds “intelligence” has morphed considerably over time but the basic function really has stayed much the same. See:

Rather than the energy-expenditure-minimization model described by Friston, Cisek replaces this with control loops: some error signal that is reduced by some response.

This suggests that stimulus-response processing exists on a continuum, from rather simple stimulus-response actions to the extension of these control loops far outside the body and over very long time scales. At some point, these control loops start to look pretty smart.


Right, and this is the point when these control loops are not that relevant anymore.
What is the action part of reading a fiction book, for example?

As I have stated elsewhere, part of what the control-loop drives do can consist of loading the local memory store with environmental data to make other control loops smarter.

Sure, it makes sense.
However, I think it would be more productive to accept that the human mind has emergent properties which are not useful from the control-theory perspective. Otherwise, we would need to agree that binge-watching instead of, for example, preparing for an exam is a beneficial adaptive behaviour :slight_smile:

Or we could agree that the basic drives do not always converge on the optimal behavior.
A moth flying around a light bulb is trapped in the outcome of its control loops.

As I just intimated, the loading of data to shape behavior results in a certain degree of plasticity. The genetic innovations of social groups and external memory bring high flexibility and much more rapid response to changing environmental situations - but also more variation in the ways that that behavior can result in sub-optimal expression of basic drives.

@rhyolight, are you sure? How exactly do you know? I don’t think this is true.

Let me be very specific about what I think is incorrect here: it’s impossible that the structures you created on the original sensor set have absolutely no use in learning the structure of the environment using the new sensor set. (Maybe that’s not what you’re claiming, maybe you’re claiming they have no direct use, which is probably true; they can’t just be ported over, but they are indirectly useful).

Those structures are indirectly useful because they’re analogous. They guide your predictions. That’s what intelligence does - reasoning by analogy. So if you find a structure in your brain that seems to have predictive usefulness when creating a new model, you will naturally use it to guide your predictions and therefore your attention (how you go about discovering the new structure).

Just as you show (later in the video) that the agent can use its developed knowledge of how its sensors work to explore a new environment, so too can it use its developed knowledge of the abstract conception of the environment to learn how to explore the same environment with new sensors. They’re two sides of the same coin; they have to be.

That’s not to say it doesn’t have to build a whole new conception - it does. But what I’m saying is that the existing model is necessarily analogous to what its new conception must be, and is therefore useful in helping it learn a new conception of the same environment with new sensors; and that a generally intelligent agent will capitalize on that analogy - because that’s essentially what it means to be generally intelligent: to recognize the analogy and capitalize on it.


Yes, this is what I was claiming.


I whole-heartedly agree then :slight_smile:


I think both are present in people’s minds (sub-optimal control loops, and emergent properties which are not useful from the control-theory perspective), but only the first applies to a moth.