TBT and symbol representation

Thanks for the thoughtful comments. I agree with almost all the points mentioned, but still…

Do I get any closer to scratching that itch? … that curiosity about how a network made of interconnected neurons mysteriously achieves this (some call it “emergent”) capability of using symbols and language to remember and recollect, to think and reason, to communicate and share … probably not.

I’d willingly fall into a trap along the way if I knew I was heading in the right direction toward a true understanding … but I don’t know which direction is right. :frowning:

Still, I enjoyed reading all the great discussions nonetheless!

Absolutely right. Each of us is a unique individual, just like everyone else. We learned to understand our differences through the use of a common language.

How do different brains manage to use the same symbolic systems (e.g. natural language, mathematical language, or logic systems, to name a few)?

I am not searching for ultimate enlightenment. On the contrary, I’m merely searching (mostly through the literature) for a toy connectionist system that can demonstrate one or two rudimentary symbolic capabilities. No luck yet.

After a bit more reading and thinking (and reading, and thinking), and then experimenting with the Ganzflicker test [1][2], more bits seemed to fall into place in my understanding of where TBT fits within the whole brain system. For me the Ganzflicker just showed an interesting harmonic of the flicker that varied between 1 and 0.3 Hz, and I tried to correlate it with my state of mind as it varied over the 10 minutes. This seemed to match one theory that a scan frequency may be some part of the process for the occipital components, but not for PFC processing. Still a lot missing though.

So I ended up having an interesting discussion with my other half, which was very surprising to me. I asked her to imagine a horse, then asked quite a few questions, and was a bit surprised at the difference (very vivid visual imagery and detail). Then I went on to “think of a blue horse”, which ended up with “a horse with blue patches on it, not completely blue”. Stunned, surprised and (missing an applicable utterance here).

Then I was even more curious about plasticity in early development allowing different parts of the brain to take over larger areas if a particular pattern of activity is needed or used more. Sort of like the London taxi driver brain scans, except in my case I ended up with a programmer’s pattern mind that took over my visual ability area starting at age 10.

I don’t see anything; I just know all the parts, not as a whole, more a knowledge of a pattern that has no shape. This sort of makes sense to me because programming, to me, is impossible to visualise, even in a 3D space; it is more like a pattern (not visual) in an unconstrained, infinite-dimensional space. This also (sort of) makes sense to me as to memory structure, where everything is (sort of) a dimension. This is where I find that matching utterances (applicable words) to held concepts creates very difficult challenges in communication. In my mind, programming is a clear and unambiguous means of communication.

This fits, for me, with the PFC processing a cluster of active columns as “part” of the input, covering a wide temporal span (persisted column activations in the PFC, fed from other brain areas, which make up part of short-term working memory). If enough relevant parts exist, the occipital elements can resolve an image or even blend multiple parts into a new image, which makes me recall the GPT-3 image generation examples.

This is part of how I think of TBT and HTM fitting together.

  1. Pseudo-Hallucinations (aphantasia.com)
  2. Ganzflicker Experience (google.com)

Did the same with my dad, “think of a blue horse”
“I imagined an image of a blue horse and then thought hang on a minute that’s bl**dy stupid”

The interesting part (aside from the hilarious reaction at the time) was the quick recursive self-reflection and response to the imagined image, which was almost immediate when he was asked about the timing.

It creates a whole heap of questions for me, and another jigsaw piece for the puzzle.

I think I need to disappear again back to coding.

We have problems understanding how/why the fully mapped 302-neuron brain of C. elegans responds the way it does.

So what each of us can do is speculate and, if lucky, get a chance to put her speculation to the test.

My speculation is that the whole thing transitions from the pattern-matching level to … let’s call it a symbolic machine, everywhere, not only when/where it comes to language. Language is just the visible tip of an iceberg made of the same symbolic structure.

Somehow the pattern matching part is still used/useful at the symbol level.


You say you want one or two symbolic capabilities to be shown emerging from a connectionist system. Can you be more precise and give a few examples of the capabilities you consider necessary for a toy symbolic machine?


One direction of speculation, if the SDR hypothesis is correct, is that in order for SDRs to be useful as symbolic encodings, they need to become orthogonal, to avoid mutual resemblance. An SDR for “zebra” is no longer a superposition of the SDR for “horse-looking” and the SDR for “stripes”, although it will be activated by that superposition, while the same superposition will inhibit the symbolic-representation SDR for “horse”.

For that to work, the SDR for “horse-like” and the one for the conceptual “horse” probably have to be different.

Then the superposition of two, three, or four symbolic SDRs is used to encode specific relationships.

So if SDRs

  1. can be used to encode symbols (yes, they can), and
  2. the connectomic circuitry is able to combine/transform arbitrary symbolic SDRs to encode relationships, and
  3. it implements a very large storage/retrieval database (aka associative memory) that responds to these encodings, in order to retrieve not only the symbol-SDRs that were combined to form a relationship-SDR, then …

why wouldn’t it work?
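To make the orthogonality point concrete, here is a minimal numpy sketch (my own toy, not anything from an HTM codebase; all names are invented): random 2%-sparse SDRs are nearly orthogonal by construction, the union of two of them still matches each part strongly, and a freshly drawn “symbol” SDR resembles neither.

```python
import numpy as np

N, BITS = 2048, 40                  # 2048-bit SDRs at ~2% sparsity
rng = np.random.default_rng(0)

def random_sdr():
    """A random sparse binary vector; two of these share almost no bits."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, BITS, replace=False)] = True
    return v

def overlap(a, b):
    return int((a & b).sum())

horse_looking = random_sdr()
stripes       = random_sdr()
zebra_symbol  = random_sdr()        # drawn fresh, so orthogonal to the parts

percept = horse_looking | stripes   # superposition of features (~4% sparsity)

print(overlap(percept, horse_looking))   # 40: the union fully contains its part
print(overlap(percept, zebra_symbol))    # ~0-2: the symbol resembles neither part
```

A downstream detector that fires on high overlap with the superposition could then be wired to activate `zebra_symbol`, giving exactly the “activated by the superposition without resembling it” behavior described above.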


At the risk of oversimplifying to the point of uselessness … I think that some version of TBT at the local-map level makes a great deal of sense. The brain is a patchwork of 100 or so of these local maps. These maps are connected in a general scheme: they connect in a complicated haystack, but there is an overall trend to form a hierarchy that terminates in the hub of each cortical lobe.

In this scheme, there are local recognitions based on the sensory modality that feeds each map, and on the connections to neighboring maps that get either different parts of the same modality or a feed from processing a different modality. Temporal and spatial processing pools features at different levels and forwards them to the hub.

In the hub, you have a rich mix of a basket of features, which in the parietal lobe registers as objects that might be composed of multiple modalities.

These map-to-map connections are bidirectional: the hub can prime the lower levels. This is what I think drives the filtering aspect of processing that is demonstrated by the cocktail party effect.

I personally have pushed the concept that the coding in the hub regions is the hex-grid sparse symbols described in some of my earlier posts. From an outside point of view this hex-grid does not seem very helpful, but for the other hub regions it is the lingua franca.

This IS the symbol representation.

For practitioners of HTM, you have to make a decoder to convert from the hex-grid representation to something more familiar to a human viewer.


One example: a rule-based expert system, say composed of six rules (rule style: if Socrates is a man, then Socrates is mortal), implemented in a neural network.
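For what it’s worth, here is about the simplest connectionist rendering of such a rule set I can imagine: a localist toy (one unit per proposition, one excitatory connection per rule) doing forward chaining as repeated thresholded updates. It’s a sketch of the flavor of the question, not a proposed brain mechanism, and every name in it is made up.

```python
import numpy as np

# Localist toy: one unit per proposition, one excitatory connection per rule.
props = ["socrates_is_man", "socrates_is_mortal", "socrates_is_greek",
         "socrates_is_human", "socrates_thinks", "socrates_exists"]
idx = {p: i for i, p in enumerate(props)}

rules = [  # six "if A then B" rules, one connection each
    ("socrates_is_man",    "socrates_is_mortal"),
    ("socrates_is_man",    "socrates_is_human"),
    ("socrates_is_greek",  "socrates_is_human"),
    ("socrates_is_human",  "socrates_thinks"),
    ("socrates_thinks",    "socrates_exists"),
    ("socrates_is_mortal", "socrates_exists"),
]

W = np.zeros((len(props), len(props)))
for a, b in rules:
    W[idx[b], idx[a]] = 1.0          # weight from antecedent to consequent

x = np.zeros(len(props))
x[idx["socrates_is_man"]] = 1.0      # assert the premise

for _ in range(len(props)):          # forward chaining: settle to a fixed point
    x = np.maximum(x, (W @ x >= 1.0).astype(float))

print([p for p in props if x[idx[p]] > 0])
# ['socrates_is_man', 'socrates_is_mortal', 'socrates_is_human',
#  'socrates_thinks', 'socrates_exists']
```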

Another example: a toddler learns to count to 10. Then he learns that if you put one apple together with another apple, you get two apples. The next time he puts one orange together with another orange, he knows the result is two oranges. Then he’s taught 1 + 1 = 2, regardless of apples or oranges. Is it possible to develop a toy (but somewhat realistic) neural network that does not do much but represent the abstract rule 1 + 1 = 2, without any need to involve apples or oranges?
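Not an answer, but here is the cheapest toy I can picture for that question, just to make it concrete (everything here is invented for illustration): represent “1 apple” as the union of a count SDR and an object SDR, and let the learned “rule” key on the count component alone, so it transfers to oranges for free.

```python
import numpy as np

N, BITS = 2048, 40
rng = np.random.default_rng(1)

def random_sdr():
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, BITS, replace=False)] = True
    return v

one, two, apple, orange = (random_sdr() for _ in range(4))

one_apple  = one | apple        # "1 apple" = count component + object component
one_orange = one | orange

def count_part(x):
    """Ignore the object bits: return whichever count symbol x matches best."""
    return max([one, two], key=lambda c: int((x & c).sum()))

def add(x, y):
    """The abstract rule 1 + 1 = 2, keyed on count components only."""
    if count_part(x) is one and count_part(y) is one:
        return two
    raise NotImplementedError("toy: only 1 + 1 has been 'learned'")

assert add(one_apple, one_apple) is two
assert add(one_orange, one_orange) is two   # never saw oranges, still works
```

Whether anything like this factorization actually happens in cortex is exactly the open question, of course.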

Basically, I’m thinking of the neocortex in the frontal lobe that does the “common sense reasoning” stuff. SDRs seem to be an ingenious solution for emulating sensory-motor input integration in object recognition along the visual (and other modal) pathways, but once an object is recognized, and an abstract concept such as “1 as a count number” is consciously represented by some cortical columns somewhere in the frontal lobe, I am curious how that “abstract concept” is represented, apart from the SDRs for apples, oranges, etc.

I highly doubt that it (the symbol 1, or the rule 1 + 1 = 2) is converted back to some SDRs in the frontal lobe for further, more abstract rule-based common sense reasoning, but I could be totally wrong. Maybe the brain does exactly that: SDR is everything. Everything is SDR. Nothing is impossible.

It all makes great sense, indeed. Thanks a great deal for taking the time to share your thoughts!

Yet, to continue from what you just described: further up from the hypothetical “hub” there is obviously the frontal lobe, which does planning/reasoning and goal-oriented activities, so it seems logical and inevitable that the “hub” has to sit between the “local maps” and the frontal lobe.

Even though many (not all) of the activities in the “local maps” are automatic reflexes (hence beyond conscious introspection) that are typically categorized as “subconscious”, the common sense thinking/reasoning activities in the frontal lobe are typically accessible to introspection and belong to the realm of consciousness.

These activities occur in the neural columns, no doubt about that. Through SDRs? Or some other form?

Curiosity killed the cat. I know, right?

In other posts I have made on this forum, I postulate that after the sensorium is processed in the WHAT/WHERE streams, it is made available to sub-cortical structures. The purpose of all this processing is to simplify and explain the world in a way that makes it possible for the subcortex to make executive decisions, which are projected to the frontal cortex to be elaborated into action plans.
I call this my dumb boss/smart advisor model. You can search for it here to see more about it.

These same posts explain what I think is the mechanism for the “stream of consciousness.”

I guess you are referring to this post of yours:

I skimmed through it and found myself liking the overall framework very much. It may or may not be totally accurate (I cannot tell, and I wonder who could anyway), but it is coherent and self-consistent to me.

I will read it a few more times, but my inquiry is much more modest, concerning only a very small fraction of the overall framework:

For the man named Ildefonso, who grew up without language and was overwhelmed to tears when he finally learned that everything has a name, what happened to his brain/neural circuits after he started learning and using a sign language?

How are those signs/symbols and abstract rules he learned through using those signs/symbols expressed at the cortical column / neuron level?

I am aware that there are no answers, short or long. It’s just an approximate direction in which I have been searching and exploring.

I’ll read your “mechanism of consciousness” again, as it resonates pretty well with some structures in my frontal lobe.


My guess regarding Ildefonso’s situation is this: his mind was already operating symbolically. Consider the fact that we correctly recognize, e.g., a place we traveled to a few months ago without having a name attached to it, or a neighbour’s face without knowing her name. That’s the essence of a symbol: a recognizable and uniquely identifiable thing. We all have this identification-assigning and relationship buildup between known things. We “see” these natively. The joy of discovering language comes from your living symbols finding out they don’t live on an island anymore.


Your examples are great, thanks. One set of problems is that of analogies. For me, analogies are a peculiar form of similarity, and SDRs are pretty good at emphasizing resemblances and, consequently, differences between themselves.

One step forward would be some means to encode arbitrary relationships between symbol-SDRs (besides pattern recognition or sequence prediction). Once a complex relationship between multiple entities was encoded in a resulting SDR, swapping one of the initial members of the relationship would change the encoded SDR only partially, and thus “the machine” would be able to notice there’s a similarity.

Maybe a poor example:
John (travels with) plane (to) Tokyo -encode-> SDR1
John (travels with) boat (to) Tokyo -encode-> SDR2
John (travels with) plane (to) London -encode-> SDR3
Alex (travels with) plane (to) Tokyo -encode-> SDR4

All the SDRs there would have to show both similarities for the parts that are common and differences for those that are not.
The learning part is to later know, by looking at two similar-but-different SDRs, which parts of the corresponding relationships are the same and which have changed.

The “-encode->” here would be one specific to the "X travels with Y to Z " kind of relationship and needs to have internally assigned its own symbol in order to be accessible to the whole world modeler engine wherever it is needed - to either build new x-travelswith-y-to-z or to be used in itself in higher order relationships.

I know there are unanswered questions, like: where is the connectome/biology here? How do these relationships pop into existence from sensory-stream SDRs?

Honestly I really care only for the second one.

Thanks for sharing some great thoughts.

I’m with you here. Biology faithfulness is not the objective.

I also.

So: we have four high-bandwidth sensory inputs (eyes, ears, smell, skin) and credible mechanisms for turning them into SDRs suitable for processing by the millions of cortical columns. Do we have grounds to believe neural activity elsewhere uses any similar data structure?

Hex-grid / hub == Map synchronicity, not attention emergence. Fractional part of active memory. Attention emerges from map/memory recursion.

Can fixed SDRs scale without hubs if the bit width exceeds biology, and can RAM stay separate from the synchronicity between maps for differing senses?

(SDR1 + SDR2) → (HTM + associative memory) → SDR3

Are the bit width and “compatibility” of SDR1/2 different from those of SDR3?

Internal feedback loops / a mirror bus like the arcuate fasciculus should have less translation going on, and maybe a more simplified SDR symbol representation as a result.

The thalamus needs to be included along with vision, hearing, and touch; olfactory input bypasses it.

I have a few questions regarding the basic functioning of SDRs. I have read about and understand spatial pooling, temporal coding, and temporal pooling (in sensory-motor integration for object recognition) …

In an SDR representation (or call it encoding), how do you differentiate a particular pattern from an instance of that pattern? How do you bind a value to a pattern (a variable) temporarily, or are all representations literal? Take an SDR representing “X travels with Y to Z” as an example.

Hi, the only function, or rather ability, of SDRs I had in mind is that when you overlap two SDRs with e.g. 2% sparsity each, one representing “A” and the other “B”, you end up with a 4%-sparsity SDR which represents A and B at the same time.

What would that be useful for, assuming you have an associative memory at hand?

One toy example could be a word-association machine: a primitive semantic-proximity NLP model which:

  • encodes each dictionary word into a unique SDR;
  • reads in text and, for every pair of words within a window of, say, 20 consecutive words, generates a paired SDR and saves it in the associative memory (aka AM, or database).

The AM has the property that, when queried with a single-word SDR, it retrieves either all pairs in which that word was included or, if its capacity has overflowed, the most significant ones.

So it would be useful as a means of finding out, for any random word, which other words are its most frequent neighbors.
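Here is roughly how I would picture that toy on a laptop, as a minimal sketch: the “associative memory” is just a Python counter keyed by the paired SDR (stored as a frozen set of active bits), which deliberately dodges all the interesting capacity/overflow questions.

```python
import numpy as np
from collections import Counter

N, BITS, WINDOW = 2048, 40, 20
rng = np.random.default_rng(3)
lexicon = {}                        # word -> its unique SDR (set of active bits)

def word_sdr(w):
    if w not in lexicon:
        lexicon[w] = frozenset(rng.choice(N, BITS, replace=False).tolist())
    return lexicon[w]

am = Counter()                      # toy AM: paired SDR -> occurrence count

def store(text):
    words = text.lower().split()
    for i, a in enumerate(words):
        for b in words[i + 1 : i + WINDOW]:
            am[word_sdr(a) | word_sdr(b)] += 1   # paired SDR = union of bits

def query(word, top=5):
    """Most frequent neighbors: pairs whose SDR contains all of word's bits."""
    s = word_sdr(word)
    hits = Counter()
    for pair, n in am.items():
        if s <= pair:               # word's SDR is a subset of the pair
            for other, o in lexicon.items():
                if other != word and o <= pair:
                    hits[other] += n
    return hits.most_common(top)

store("the quick brown fox jumps over the lazy dog "
      "the quick red fox runs past the sleepy dog")
print(query("fox"))                 # most frequent co-occurring words first
```

The cost vs. capacity vs. performance questions are precisely what this brute-force dictionary hides; a real AM would have to answer with degraded-but-useful results as it saturates.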

Of course, such an example could be implemented with simpler means, and I was wondering whether there are ways to encode more complex relationships between … concepts than just contextual vicinity.

And of course I was wondering how such a hypothetical AM would behave in terms of cost vs. capacity vs. performance.

Searching for pages related to this problem, I found an interesting decades-old concept: the Copycat cognitive architecture.
Here’s a chapter from a book:

It uses a combination of associative memory and dynamically assembled “program pieces” to solve analogies like the ones you mentioned.

It could be something. Who knows how many buried ideas are waiting for machines magnitudes faster and larger than those of the ’80s and ’90s.
