Agency, Identity and Knowledge Transfer

I came across this stream highlight:

I’m sorry I missed it live. There are two points I’d like to get feedback on:

Isn’t this in conflict with the Thousand Brains Theory, where different columns vote about features (i.e. the room)?
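For concreteness, here is a toy sketch of the voting I have in mind (entirely my own simplification in Python, not Numenta code): each column maintains a set of candidate objects consistent with its own input, and voting keeps whatever every column still considers possible.

```python
# Toy sketch of cross-column voting (my own simplification, not Numenta code).

def vote(candidates_per_column):
    """Consensus = the objects that every column still considers possible."""
    consensus = set(candidates_per_column[0])
    for candidates in candidates_per_column[1:]:
        consensus &= set(candidates)
    return consensus

# A visual column and a tactile column each narrow down the object from
# their own stream; voting across senses resolves the ambiguity.
visual_column = {"coffee cup", "soda can", "vase"}   # sees a cylinder
tactile_column = {"coffee cup", "teapot"}            # feels a handle
print(vote([visual_column, tactile_column]))         # {'coffee cup'}
```

If voting works across senses like this, a model built with one sensor set should at least constrain what a new sensor set reports.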

If, for instance, I lose a hand and get a clunky prosthetic limb instead, I understand I have to learn to use it and will have some re-exploring to do. But lots of features (I suspect) are going to feel familiar fairly quickly.

Saying that I can’t use the previous model of the room is (in my opinion) saying that there is no crosstalk between senses. Whether these are currently attached senses or future senses does not matter, as long as the senses are (or get) attached to the neocortex.

Another way to look at it is learning to play Dust in the Wind on an acoustic guitar, and then relearning it on an electric guitar. This is a bit like changing your sensor set. The feel is quite different, but lots of knowledge of playing the song (your model) will help you in the relearning. The same applies when you learn it on a ukulele (different scale), or on a piano or kazoo (different senses).

The other point I wonder about is the brain transfer conundrum:

I think the upload scenario is long overdue. Let me counter with another thought experiment:

Imagine we can build a single neuron in an enclosed silicon chip. It performs exactly the same functions at exactly the same speeds as its biological counterpart. It receives input voltages from thousands of inputs and produces an output signal just like the real neuron would. It creates/changes/destroys simulated synapses according to the same rules.

When we implant this electronic neuron instead of the real one, would that still be you? Or even more basic, would your brain operate differently?

If that one neuron represents one bit in a sparse memory, would the electronic version invalidate the SDR? Would the spatial pooler no longer work?
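As a sanity check, here is a toy calculation (my own code, not the HTM library) of how little one substituted neuron should perturb an SDR:

```python
import random

# Toy sketch: swap a single "neuron" in a sparse representation and see
# how much the overlap score changes. (My own code, not the HTM library.)
n_cells, n_active = 2048, 40                       # ~2% sparsity, HTM-style

original = set(random.sample(range(n_cells), n_active))

# Replace one neuron: one active cell goes silent, an unused cell takes over.
swapped = set(original)
swapped.remove(next(iter(swapped)))
spare = next(c for c in range(n_cells) if c not in original)
swapped.add(spare)

print(f"overlap: {len(original & swapped)}/{n_active}")   # 39/40
```

An overlap of 39 out of 40 is well within the noise tolerance claimed for SDRs, so my intuition is that one electronic neuron would not invalidate anything.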

Now do that with 1,000 neurons. With 10% of your neurons. And with 100% of your neocortex. At which point would your brain stop working?

Now let’s backtrack a bit. Instead of replacing your neurons, let’s build a few million cortical columns’ worth of virtual neurons. And let’s connect them, through whatever layers, with some of your (real) columns and with your thalamus.

Wouldn’t your brain start using this expanded virtual neocortex to store new models?

I don’t think it will work very well, as your semantics are grounded in your body’s senses and controls.

They are trained at the same time; this is true up to and through the association regions.

All the connections that carry meaning would be disrupted; it would all be trash.

See this paper for details:

https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf

Thanks for posting the article.

Not sure if we saw an explanation for four (semantic) mechanisms as opposed to some other number, but we released a brief video synopsis of our work since 12/12/12 and hope one slide in particular may offer some insight with respect to the four in question.

Thanks.

What do you mean by “it”? Which part of which post are you referring to, Mark?

See this paper for details:

I skimmed through it, but I’m not sure I understand. Actually I’m sure I don’t understand everything. It seems that Dr. Pulvermüller argues against centralised information (to explain the binding problem?). But as far as I understand, Numenta’s HTM works with a very distributed and probably heavily redundant information model.

Observing memory loss or skill loss in patients with particular lesions could also be explained by disruption of the grid cell mechanics. If the columns in a region can no longer generate grid cell signals, then the locally encoded sensory information can’t be accessed anymore, or can’t be related to other sensory information through its relative location. The same goes for abstract information.
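To make that concrete, here is a toy illustration (entirely my own construction, not a grid cell model): if sensory features are stored keyed by a location code, losing the generator of that code makes intact memories unreachable.

```python
# Toy illustration (my own construction): features stored under a location
# code become unreachable when the region can no longer generate the code,
# even though the stored data itself is untouched.
memory = {
    (0, 0): "rough edge",
    (0, 1): "smooth curve",
    (1, 0): "cold metal",
}

def recall(location_signal):
    if location_signal is None:      # lesion: no grid cell output
        return None                  # memory intact, but unreachable
    return memory.get(location_signal)

print(recall((0, 1)))   # 'smooth curve'
print(recall(None))     # None - the key generator is broken, not the data
```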

I’m out of my depth with this article. Could you explain which part specifically invalidates my point(s)?

Hello GoodReason. Welcome to the forum.

I’m puzzled by your post. What is this video? Has this anything to do with anything? Or is this a troll bot?

The “it” is moving a “person” from an old brain to a new body.

What the paper offers is that the core semantics are deeply intermixed with your somatosensory cortex and the rest of your cortex.

You learn everything through exploring your body and how it senses the universe. You suckle and learn the pleasures associated with that. This grounding to your body is the frame that you build your experience on. The millions and millions of learned micro-connections between the senses and the semantic structures that make up your declarative memory are built on this foundation.

If you read the paper with this interpretation in mind, you can see how distributed your memory and the supporting semantics really are. You would have to extract the totality of the map to the body, the connections between maps, the local structure of the maps, and the connectivity of the synapses to duplicate the you of you.

At this point you are not moving, you are duplicating.

Ok, thanks. But that then refers to the point Matt is making. What about the cross-voting principle? Am I to understand that voting does not occur between columns of different senses?

Ok, I understand that. But what I describe is about merging. Where do you stand on that?

I know this is far from practically feasible. I just think this is a way to understand how the underlying principles work.

This is one of the key differences between me and Numenta. I see the streams running relatively pure until they converge in the association regions. I don’t see object recognition happening until then. What is learned about objects at that point is what is usually called sensor fusion. My take on the maps up to that point is that they are extracting everything possible out of the streams to present to the association region.

This difference in opinion could be sorted out experimentally: stimulate one sense, say vision, and look for a response in a different sensory area. To the best of my knowledge this has never been reported in any of the papers that I have read.

To support my view, you would have to see cells that respond to multi-modal stimulation in the association areas, like this:
https://www.cell.com/neuron/pdf/S0896-6273(01)00174-X.pdf

Matt’s motif of changing limbs? Strap-on prosthetics do this all the time, and it works. You learn to repurpose the existing columns to run the new limb.

At a more intrusive level? Actually injecting signals from the new limb into the cortex somehow? Let me approach this from a slightly different angle. If you were to add a new limb, you would have to add both motor (frontal lobe) and sensory (somatosensory) connections to control it. For best results there would have to be an existing loop joining the new connection areas.

Exploring some closely related material …

There have been experiments where they add a new sense - say a compass sense. The one I liked the best was where you wear a sock that has a ring of vibrators around the top, keyed to a compass sensor. It does not take long for that to become a “built-in” sense, and you always have a sense of which way you are facing.
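The mapping itself is trivial, which may be why it becomes second nature so quickly. Something like this, in spirit (my own sketch; the motor count and the heading source are placeholders, not a real device API):

```python
# Toy sketch of the sock's heading-to-vibrator mapping (my own code; a real
# device would read the heading from a compass sensor).
N_MOTORS = 8          # vibrators spaced evenly around the ankle

def motor_for_heading(heading_degrees):
    """Pick which vibrator should buzz for a given compass heading."""
    return round(heading_degrees / 360 * N_MOTORS) % N_MOTORS

assert motor_for_heading(0) == 0      # north buzzes the front motor
assert motor_for_heading(90) == 2     # east buzzes the right-side motor
assert motor_for_heading(350) == 0    # wraps around cleanly near north
```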

I could not find the original article, but there were a bunch of interesting projects under the banner of “augmented humans.”
Here is a more intrusive and less capable compass sense:

An embedded magnet that allows you to sense magnetic fields:

If you google “augmented humans” with various added keywords you will find a bunch of projects. The human-hacking movement has hit the equivalent of the AI winter, but I suspect it will come back.

The current rage is adding more information to the existing sensory stream.
Retraining hearing to add a compass sense - I could see this speaking to your merging point:
https://www.nature.com/articles/srep42197

I talked with @Falco today on Twitch about this.


I’m not entirely convinced by your arguments @rhyolight. By conceding that you could replace your entire body with another sensor set through gradual substitution, you agree that your current model can actually be useful for the new sensor set. (It has to adapt slightly to the new sensors, but it’s faster than learning from scratch, and you retain your old knowledge about the world.)

So your claim is really equivalent to: “without ever being able to directly relate old and new sensory input, no part of the model can be reused.”

I’m sure that is true for things like learning to use a new limb. You could learn faster by comparing interactions with this new limb to interactions with your arm (or something else that was part of your old model). Without being able to cross-reference like that, you would have to learn how to control this limb from scratch.

However, even without your old sensors, you are able to think like before. You would retain high-level concepts like language, math, knowledge about science and history, etc. Even if you are robbed of all old senses, I don’t see any reason why your internal monologue or ability to think about concepts with which you are already familiar must stop.

In some sense, you could say that your ability to remember, imagine, reason and perform these highly abstract tasks is a separate sense on its own. If you are able to think 1+1=2 using your old model, and you are taught 1+1=2 in the new model, that allows for a type of cross-referencing even without the involvement of any old external sensors. This could also hold for language and many other abstract concepts. Through these means, you could eventually learn to understand how the new and old models are related.
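As a toy illustration of that cross-referencing (entirely my own construction): if both models assert the same abstract fact, the unfamiliar symbols of the new model can be grounded by searching for the mapping that makes the fact hold in the old one.

```python
from itertools import permutations

# Toy sketch (my own construction): ground unfamiliar new-model symbols by
# finding the mapping under which a shared abstract fact holds in both models.
old_fact = ("one", "plus", "one", "two")   # 1 + 1 = 2 in the old encoding
new_fact = ("A", "P", "A", "B")            # the same fact in a new encoding

for perm in permutations(["one", "plus", "two"]):
    mapping = dict(zip(["A", "P", "B"], perm))
    if tuple(mapping[s] for s in new_fact) == old_fact:
        print("grounded:", mapping)        # {'A': 'one', 'P': 'plus', 'B': 'two'}
        break
```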

So my stance is that low-level sensory-motor control would absolutely have to be relearned from scratch, but abstract concepts could eventually be reused.

I think you’re possibly right. I alluded to this in my original tweets:

The question is how much of the old model would still exist after you’ve been forced to re-learn reality through a new sensor set?