Thousand Brains Theory (TBT) and symbol representation

I was reading (and watching videos) about TBT sensorimotor inference for object recognition, and it all makes great sense… especially the role of long-range connections between columns, which make inter-column voting and broadcasting possible, further enabling the integration and cooperation of multi-modal sensory inputs. For example, we can see something (involving visual cortex columns) and then describe it in words (involving language-area cortex columns).

Then a thought occurred to me: what if the recognized object is not that famous Numenta coffee cup, but a simple hand-written symbol, for example, “3”?

It could be just me hallucinating, or it could be the case that Hawkins and colleagues have actually already solved the “symbol representation problem in a connectionist neural network” without yet being aware of it. He might deserve a Nobel prize if that’s the case, because it could be a “one small step for a man, one giant leap for mankind” kind of breakthrough.

Hallucination or realization, here goes the reasoning:

A column in the auditory cortex (Column A) recognizes the sound of “three” (after training/learning, of course).

Another column in the visual cortex (Column V) recognizes the image of a hand-written character “3”.

Long-range cortical connections from either or both of the above columns activate another column somewhere else, in the PFC (Column C). Let’s ASSUME this column represents the abstract concept of “3”, a counting number in preschool math. It is invariant with regard to the following various kinds of visual input (analogous to the output of a temporal pooler being invariant with regard to the various location-sensation SDR sequences produced when touching THAT coffee cup):

A column in visual cortex (Column V_a) that recognizes three apples;

A column in visual cortex (Column V_b) that recognizes three lego pieces;

A column in visual cortex (Column V_c) that recognizes three coffee cups;

Column C in the PFC is connected to Columns V_[a,b,c] through long-range axons. C represents the abstraction/generalization of the semantic meaning of “3”; Columns V_[a,b,c] represent the concrete examples/instantiations/intuitions of the abstract concept “3”.

So altogether, Columns A, V, C, V_a, V_b, and V_c make the symbol “3” work … as a recognized “object”, as a symbol, that represents a meaning.

I highly suspect these multiple cortical-column interactions/integrations are how the wetware in our brains handles symbols (and language, and reasoning …). I feel the mechanisms in TBT so far are almost capable enough. Replace the concept of “3” with “grandmother”, and we might be able to view the “grandmother cell” debate from a new perspective, Numenta TBT style … or am I imagining too much?


That’s the intuition I get for what TBT is about.

Each column generates stable SDRs that get broadcast throughout the brain and associatively connected by coincidence of patterns; a previous SDR can then be reinstated by a combination of other SDRs in other regions of the brain, thanks to pattern completion.
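A minimal sketch of that pattern-completion step, in toy Python of my own (the cell count, sparsity, stored patterns, and the `complete()` helper are all hypothetical, not anything from Numenta’s code): store a few SDRs, then reinstate the full best-matching one from a partial cue by overlap.

```python
import numpy as np

# Toy associative memory: store sparse SDRs, reinstate the best match
# from a partial cue by overlap. All sizes here are made up.
N = 2048          # cells per representation (hypothetical)
SPARSITY = 0.02   # ~2% active bits, a typical SDR sparsity
rng = np.random.default_rng(42)

def random_sdr():
    """A random SDR as a set of active cell indices."""
    return set(rng.choice(N, size=int(N * SPARSITY), replace=False).tolist())

stored = {"dog": random_sdr(), "ball": random_sdr(), "cup": random_sdr()}

def complete(cue):
    """Pattern completion: return the stored SDR that overlaps the cue most."""
    best = max(stored, key=lambda name: len(stored[name] & cue))
    return best, stored[best]

# Degrade "dog" to a one-third partial cue, then recover the full pattern.
dog = stored["dog"]
cue = set(rng.choice(sorted(dog), size=len(dog) // 3, replace=False).tolist())
name, full = complete(cue)
print(name, full == dog)   # -> dog True
```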

If those object representations could be recalled through long-range connections too, rather than only through sensory input, the object layer might act just like a temporal memory but at a different timescale; and as unions of patterns get recalled, they become associated and generate new patterns that resemble that union but differ slightly due to noise.

This is pure speculation, but I think this is what dreams are for: dreams may be invoking unions of objects and adding noise and randomness to those object representations to prevent overfitting.


For a brain with no language or symbol use, is there a possible mechanism to purposely recall a memory without repeating similar previous experiences (i.e. without going through sensory input)? In other words, is it possible to specifically activate one or a few cortical columns?

Maybe that’s part of the effect of language/symbol use on the brain: enabling much finer-grained neuron firing patterns, making elaborate and sophisticated world modeling/thinking/reasoning (beyond feelings and emotions) possible.


I’m not a neuroscientist or anything, so I can’t answer that with any level of certainty, but…

My intuition tells me that as long as those long-range connections exist, there’s no way not to continuously recall patterns, with or without sensory input. No language would be needed; fuzzy and faint association chains would just “flow”, unioned on top of the experience created by the input, and when they get strong enough, you simply get a recall of a memory.

Also, I don’t think we can purposely recall anything (I certainly can’t); we always need a cue, be it internal or external.


“Purposely” was a poor choice of word on my part; I meant the “cue” in your sentence.

Like hearing the word “dog” when no dog is in sight: the brain activates certain cortical columns representing the concept of a dog without any specifics (no visual input of the dog’s size, color, breed, etc.).

Yeah, I think any sufficiently complex brain can do that.

If you hear the sound of a barking dog, you can get the visual of a dog via learned association too. I suspect pretty much any animal that needs to run away from predators or seek food would have that ability, even some invertebrates such as bees and octopuses.

This is getting into the weeds a bit, but on the idea of “getting a visual”: I’m not certain we clearly understand how our brains process symbols. When I “visualize” anything in my mind, where is the projector and where is the screen? What is it that I’m “seeing”?

For me, many of the symbols in my mind are multi-dimensional, something that’s touched on here: something isn’t just a picture or a noise, but a multi-textured representation, where (I feel safe to assume) different parts of my brain contribute their impressions to my “symbol” based on some trigger event such as hearing a melody or smelling a particular scent.

If you have a thousand different columns all firing in different regions of the brain based on a distributed sensory input, especially as those distal connections communicate with each other, then consistently sampling the same regions and OR-ing the bits (where each bit represents a time step) yields a bitstring/bitset that IS the symbol of those combined sensations representing an object or an idea.
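A toy sketch of that OR-ing idea, assuming nothing beyond the description above (the population size, frame count, and the `activity_at()` helper are all hypothetical):

```python
import numpy as np

# Toy sketch: OR per-timestep activity of the same cell population into
# one bitset that stands for the whole multi-sensory episode.
N = 1024      # sampled cell population (hypothetical)
STEPS = 5     # time steps in one "sample frame"
rng = np.random.default_rng(0)

def activity_at(t):
    """Fake per-timestep activity: a sparse boolean frame (~2% on)."""
    frame = np.zeros(N, dtype=bool)
    frame[rng.choice(N, size=N // 50, replace=False)] = True
    return frame

# OR the frames together: any one frame carries only part of the
# multi-modal impression, but the union bitset is the combined "symbol".
symbol = np.zeros(N, dtype=bool)
for t in range(STEPS):
    symbol |= activity_at(t)

print(symbol.sum(), "bits set in the combined symbol bitset")
```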

The trickiest part of it, and the part that HTM doesn’t necessarily try to accommodate (or glosses over at the moment), is the time component of neuron firings. Activity is a continuous train of neuron firings, so knowing when one sample frame of time starts and ends, and is thus considered an input, hasn’t really been resolved within the context of a biologically plausible HTM-based brain. All current encodings are contrived and set up for human interpretability rather than biological accuracy (a fair trade-off for working systems, IMHO :) ), but recreations of the brain have to get over this hump.


Ok, I think it was a bad example.

The visual was just an example; it doesn’t have to be purely visual. I also forgot that people with aphantasia can’t reconstruct visual images from memory at all, and yet they function perfectly well, so it must not be fundamental. I was only pointing out that remembering associations seems to be one of the most basic functions a brain must perform, and I get the impression that language is just a very advanced form of multimodal association mechanism.

Ha. Maybe you are the one who deserves a Nobel prize! :)
We all know our brain/mind, as a network of neurons, processes symbols. And no neuroscientist has ever been able to come up with a theory of how it is done.

And you brush it off as something so intuitive, so simplistic :) that it’s not even worth thinking about: language is JUST a form of association!

Actually, I agree that language is indeed an advanced form of association. Probably the hard part is the “how”, not the “what” … referring to the eternal Symbolic vs. Connectionist AI divide, or debate.

That’s probably the result of my ignorance about the subject, and not the other way around, though.

The major problem of symbol/concept representation is SIMILARITY; without it, representations are almost useless.

Simple vector similarity measures (overlap, Hamming distance, dot product, cosine) do not work, or work only in very simple cases.

For example, with 2%-sparsity SDRs, overlap allows only 2-5 levels of similarity gradation, which is almost unusable. I can attest to this after several years of experiments.

Plus, SIMILARITY is asymmetric.
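To make both points concrete, here is a small illustrative sketch (my own toy code, not the poster’s experimental setup): at 2% sparsity, raw overlap between SDRs only takes a handful of distinct values, and normalizing overlap by one operand’s size gives an asymmetric similarity.

```python
import numpy as np

# At 2% sparsity an SDR over 1000 bits has ~20 active bits, so raw
# overlap between random SDRs can only take a handful of distinct
# small values -- a coarse similarity gradation.
N, ACTIVE = 1000, 20
rng = np.random.default_rng(7)

def sdr():
    return set(rng.choice(N, size=ACTIVE, replace=False).tolist())

a = sdr()
overlaps = {len(a & sdr()) for _ in range(10_000)}
print(sorted(overlaps))        # typically something like [0, 1, 2, 3, 4]

# Asymmetry: normalize overlap by ONE operand's size and the measure
# stops being symmetric, like subset containment.
def sim(x, y):
    """Fraction of x's active bits also present in y."""
    return len(x & y) / len(x)

b = set(sorted(a)[:10])        # b keeps half of a's active bits
print(sim(b, a), sim(a, b))    # -> 1.0 0.5
```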

As I will mention in a question today: a representation is an SDR, yes! But similarity is not measured by a direct vector operation; it is a process involving vectors and an SP-like module.


I have a few ideas about how similarity could be encoded in synapses by coincidence of patterns in time, but that would require neurons to be floating-point-like, and would also require them to be able to share their depolarization state via a synapse; I’m not sure that is possible in reality.

Also, since pattern separation is a thing, I’m not sure if the brain really relies on SDR overlap for its representations, since it’s doing work to keep them as dissimilar as possible in some places.

You really need to read Julian Jaynes. Oh, and think metaphor.

Similarity is a very complex topic.
E.g., one thing we could consider is that similarity implies some noticeable difference, which is normally the result of a function applied to an SDR in a certain direction.

to_sdr = change(from_sdr, magnitude)

For small magnitude values, to_sdr stays close to from_sdr; for large values, the similarity ceases. Yet the function is the same, and in theory it could be used with varying magnitudes to “look” further in the direction of change.

I don’t know if this makes much sense in the absence of a practical example; probably not.
But one can make an analogy with walking: as long as you have taken only a couple of steps, the “view” does not change much; after a minute of walking, the sight will have changed completely.
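Here is a minimal runnable sketch of change() under one possible reading (the permutation-based implementation and the direction parameter are my assumptions; the post only gives the signature):

```python
import numpy as np

N, ACTIVE = 1000, 20
rng = np.random.default_rng(1)

def change(from_sdr, magnitude, direction=0):
    """Move an SDR `magnitude` steps along a fixed 'direction'.
    A direction here is a deterministic ordering of which active bits
    get dropped and which inactive bits get switched on, so larger
    magnitudes simply look further along the same path. (This
    permutation scheme is an assumption; the post only names
    change(from_sdr, magnitude).)"""
    gen = np.random.default_rng(direction)   # same direction, same path
    active = gen.permutation(sorted(from_sdr))
    inactive = gen.permutation(sorted(set(range(N)) - set(from_sdr)))
    return set(active[magnitude:].tolist()) | set(inactive[:magnitude].tolist())

start = set(rng.choice(N, size=ACTIVE, replace=False).tolist())
for magnitude in (2, 5, 10, 20):
    to_sdr = change(start, magnitude)
    print(magnitude, len(start & to_sdr))    # 18, 15, 10, 0: the "view" fades
```

With a fixed direction, increasing magnitudes walk further along the same path, matching the walking analogy: a couple of steps barely change the view, a minute of walking changes it completely.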

How transformations can be encoded, performed, and recognized within SDRs is an interesting question to me.

You’re sort of right in your thinking, but don’t fall into the simplification trap.

With your existing rationale we would quickly run out of columns; replace some of your columns with connections and you get closer.

An apple’s “characteristics” are recognised, and a separate counting process occurs that then associates the concepts with the temporal event(s), not initially/necessarily the “three” with the apple-characteristic fragments. These multiple concepts are recognised and “voted” on/in. The (many) concepts in parallel are then relevant to the PFC, not just singular columns. These concepts may also be differing temporal fragments that may not align with anything we can verbally associate with a single utterance. “Apple”, yes; “tree being struck by lightning”, no. One is more temporal than the other, but both are singular concepts in our minds.

What really makes an apple an apple in our minds? It’s not learnt as a single concept, but as an additive set of separate experiences. This is where I think the columns make us believe in an apple as a group of column activations rather than resolving to a singular “apple” column; the singular “apple” only exists in the verbal areas. We know an apple is an apple whether it is red or green, large or small, bruised, eaten by worms, cut in half, or squashed on the floor.

I think the nice singular column representing an apple does not exist at all for PFC processing, only for auditory communication, and even then it’s a mapping to many different temporal sequences representing the different sounds that make up the word: not a nice one-to-one, but a many-to-selective-many (sparse) mapping. The PFC deals with information in parallel, while our conscious mind/communication tends to end up serial, i.e. the resulting sequence of columns that represents the concept.

We can know what an object is or represents well before we are given a word to associate with it. Just because we have a word does not mean it really ties the parts together; it just means we can more easily activate a mirror set of many columns in someone else.

Think of the word “bus” and see if you have a singular representation in your mind, rather than a flood of memories and representations of buses you have known/seen. Some will have thought of a bus to travel on, others of a data bus… Compare that to the multitude of words needed to explain what a bus is to someone who does not know what an engine is, or a byte (the sub-concepts we need to activate/mirror in another human to exchange what the “singular” concept is).

The parts make the concept; we think in those parts, and we have to think with all the (temporally relevant) parts, not with the whole as a single activated concept. We can teach a GPT-type process a correlation sequence with higher-concept inputs, but that’s why it has no idea what a bus actually represents and never will, no matter how big the network; it’s more akin to one massive “single” cortical column.

I like the coffee cup concept in the book but beware the simplicity trap.


I have spent the last day hugely confused after reading up on this, because a slight variation, “think of a blue horse”, creates even more of a blank in my mind: there is no past memory of such an example to recall, so I just end up with the concepts of horse memories and blue being separately active. I can’t mix them; it’s a really strange thought process to me. Personally I don’t see anything; rather, I just know of active relevant concepts, and I’m confused in a way as to why I should be seeing anything in the way some describe.

To me this sort of fits with the PFC processing the parts in parallel (multiple activated/relevant columns) and not a singular concept of the whole that we align with an utterance. Where images are realised as something closer to a “seen” visual representation, does that realisation step then create a tighter bound around the sub-concepts, which limits the variability of thought and therefore limits creative weaker-thread exploration?

The wider the active set of parts, the lower the ability to create any coherent visual representation?

Well, I wouldn’t draw immediate conclusions from everyone’s conscious feedback, especially when it comes to what seems to be some kind of medical condition. After all, there are also people with blindsight who only think they cannot see, although “their machine” somehow sees.

> I have a few ideas about how similarity could be encoded in synapses by coincidence of patterns in time, but that would require neurons to be floating-point-like, and would also require them to be able to share their depolarization state via a synapse; I’m not sure that is possible in reality.

Yeah, but they might be useful anyway. Some folks have discovered biological similarities/equivalents of backpropagation, and after all, who will care about biological righteousness if in the end the darn machine thinks it thinks?

> Also, since pattern separation is a thing, I’m not sure if the brain really relies on SDR overlap for its representations, since it’s doing work to keep them as dissimilar as possible in some places.

These two things, keeping SDRs as dissimilar as possible AND using SDR overlaps for processing, interpreting, or recalling, might not be that mutually exclusive.

I would consider a simple persistence-of-vision analogy: when two different images, e.g. “dog” and “ball”, alternate sufficiently fast, a single resulting image, “dog with ball”, emerges. Aphantasists probably either have too slow a “flicker” or too short a persistence.
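As a toy illustration of that flicker idea (entirely my own construction, including the persistence-window mechanism): alternate two sparse images and keep each bit visible for a short window; a window longer than the flicker period fuses the two patterns into their union.

```python
import numpy as np

N = 1024
rng = np.random.default_rng(3)
dog = np.zeros(N, dtype=bool)
dog[rng.choice(N, size=20, replace=False)] = True
ball = np.zeros(N, dtype=bool)
ball[rng.choice(N, size=20, replace=False)] = True

def perceived(persistence, steps=20):
    """Alternate the two images every step; a bit stays 'visible' for
    `persistence` steps after it last fired. The window length stands
    in for how long the flicker persists."""
    last_fired = np.full(N, -10**9)
    for t in range(steps):
        frame = dog if t % 2 == 0 else ball
        last_fired[frame] = t
    return (steps - 1 - last_fired) < persistence

print(perceived(persistence=1).sum())   # ~20 bits: only the most recent image
print(perceived(persistence=2).sum())   # ~40 bits: "dog" and "ball" fuse
```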


Nope, I see the two joined.

I’m pretty sure the way people memorize and recall varies from person to person, possibly dramatically. Our brains have learned to learn, and we have learned to learn differently. That is why we operate as a society: we all have different abilities, which also includes how we learn.