HTM Mini-Columns into Hexagonal Grids!

One thing I would also like to comment on is what appears to be a “collapsing” of scope, or a dismissal of the scale or level at which we are working?

What I mean is this…

The path from a single input bit to the internal verbalization of a concept or idea probably runs through hundreds or thousands of “layers” of SDRs provoking pathways that result in more SDRs, until distinctions are refined and combined across many senses; later resulting in guttural sounds, then words, then concepts and ideas?

We always talk about cortical processes as if we’re at the very “top” of the assembly, and propose analogies to very complex concepts and abstractions, when those things are probably the result of thousands of combinatorial processes occurring before we arrive at the complex concepts we ascribe to the SDRs we’re speaking of…?

For example, we talk about the conceptualization of a “Cup” and its constituent parts, and the feature SDRs which combine with other SDRs to contribute to its formulation as a formal “Cup” concept…

…when really we are at a very, very preliminary “scope” at first, and probably have to deal with things such as what it means to “touch” something, detecting its smoothness and temperature, etc. We probably won’t come to anything that is distinguishable as a “thing” conceptually until HTM Theory is able to formulate thousands of layers of combinations to even arrive at a “word” for something? Maybe?

Edit: My personal opinion is that what we’re actually dealing with is a repeatable cognitive heuristic which will eventually result in an emergent concept after many thousands of prior combinations even wayyyyy before distinctions get characterized in language?

We use analogy to describe complex concepts so that we can “envision” the flow and assembly of units of cognition - but it seems that we forget that we’re not yet at that “scope” yet when we’re actually at a very basic level at the moment?

I just thought it might be useful to point that out?


You are making this too hard. The “hundred step” principle puts a very hard upper limit on how much is actually being done from perception to response. Everything from the formation of a “global workspace” to the initiation of action has to be a fairly short process.

Perhaps very wide - but very short.
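The arithmetic behind the “hundred step” principle (Feldman & Ballard’s classic argument) fits in a couple of lines. The timing numbers below are rough, commonly cited order-of-magnitude assumptions, not measurements:

```python
# Rough numbers behind the "hundred step" limit (assumed, order-of-magnitude):
# a neuron needs roughly 5 ms to integrate its inputs and fire, while a
# complex perceptual act (recognize a face, parse a phrase) completes in
# roughly half a second.
neuron_cycle_ms = 5.0
reaction_time_ms = 500.0

# Any strictly serial chain from perception to response can therefore be
# at most about 100 neurons deep.
max_serial_steps = reaction_time_ms / neuron_cycle_ms
print(max_serial_steps)  # -> 100.0
```

Whatever the brain does in that window has to be done in massive parallel: very wide, but very short.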

Also …


Sorry… :stuck_out_tongue:

But actually I’m not making any statement as to how complex the process is - just that there may be some benefit in being more “attentive” to the rigor with which we talk about the cognitive processes we’re currently working on? Basically, there is a looooong way to go before we even deal with the distinction of “things,” because that requires the integration of language - and to me we’re at a much more “preliminary” stage in our development, maybe?

But anyway, I don’t mean to obfuscate the conversation? :slight_smile:


From my reading, I find that the neuroscience community knows some things in a continuum from excruciating detail to a “vague fuzzy feeling.”

From fMRI studies, we can tell things like a given word or activity activates a particular region or regions.

From tract studies, we can tell to a high degree what regions are connected to what regions and the relative density of those connections.

Developmental studies have shown how the connections form in the developing animal.

Lesion studies have done much to catalog and localize functions. Man (via war injuries) has been conducting highly detailed focal-injury experiments that extend this lesion-damage knowledge.

From microscopic studies, we have some very good ideas of where the connections are made in the cortical sheet. For perhaps 20% of the neurons in this menagerie we have some working theory of what function they provide - many with strong support via “in vivo” studies. These in vivo studies have gained an amazingly powerful tool in light-activation and/or light-emission genetic manipulation.

Psychological studies have done much to give some good “black box” descriptions of the tasks being performed by the brain. Other studies have done much to elaborate the order in which the brain learns and exhibits these behaviors.

Interspecies comparisons add detail on what functions go with what structure and configuration of the structures.

So - how much is “known” depends to a great degree on how much you are willing to dig and integrate. Whether something is a “preliminary” stage or some more advanced stage may depend on your personal journey. Until someone puts it all together “we” may not know how long that journey is.

It never fails to amaze me that once I get some question in my mind regarding neurobiology I look and - lo and behold - someone has been researching it. There it is - laid out in research papers replete with tables, graphs, measurements, and references.

I think that we are in the same place as chemistry was in 1869, when the Russian chemist Dmitri Mendeleev started the development of the periodic table. Once we have the “periodic table of the brain,” everything may fall into a framework that makes sense. I predict that we will discover that we really did know the answers - we just did not know how to fit the pieces together.


First off, I just want to say how nice it is to have people with a high level of expertise summarize the current “state of the union” - so for both your contributions and those of other experts in this forum - thank you!

I hope my comments didn’t imply that there aren’t advancements being made, or discoveries and accomplishments of which we can be proud? But in reference to this:

…when it comes to being able to take very elemental sensory input and describe thoroughly how particles of cognitive knowledge combine to form concepts; how language is integrated into that; how behavior is generated, with insights into how meta-cognitive processes are invoked (such as our internal conversation); and where in the neocortex this occurs and how elemental knowledge is combined to form complex knowledge - I feel (and I always say it is “my” estimation) that we are just beginning our journey.

But yes, I acknowledge that to actually have a handle on what is “known” requires a level of dedicated investigation I am not currently engaged in, nor do I currently have the “acumen” to recognize it even if I came across it! :slight_smile:

Nevertheless, I kind of doubt we are at the level where we can use high-level human behavioral and conceptual analogies to analyze the coherence and accuracy of HTM Theory? (i.e., sequential vs. spatial and allocentric vs. egocentric processing are more fundamental than that?)

– which is all I am saying…


Here is a little nugget that may push you along in your journey:


My apologies, I am having a hard time really grappling with the plumbing here but I very much appreciate the dialogue. I am looking at this from a practical psychology point of view and I find your research both fascinating and helpful in forming my thoughts about how to turn this into a theory about how to optimize human learning. I am stuck on this emotion thing because I can see that it is intimately linked to human learning and I have begun to see some similarities between the general approach to patterning and prediction in the neocortex and patterning and prediction in emotional state.
The following TED talks are well worth a watch in terms of understanding my thinking. I think that perhaps it is difficult to think in terms of emotions because we develop language fairly early on in human development and from that point emotions seem somehow to interfere with logical and analytical thinking, although I think they play a much underrated role in making intuitive leaps and such.

Without getting lost in the plumbing, I guess what I am wondering is whether emotions, or emotional state, could be patterned in the same way as other data streams to return a predictive emotional state. I ask because I am becoming intensely aware that this seems to be what happens in many of my students and also family members. I believe that emotions are the semantic representations of our interactions with the world before there are words to make these semantic links - in other words, emotions are the way we make meaning out of the world before we develop language as a more precise means to do so.

Consider a zebra escaping death at the hands of a lion. During the event, the zebra is operating on current state compared to last state, and decisions to turn left or right (or whatever else might be an option) are being made lightning fast, with no opportunity to make any kind of meaning out of what’s happening. However, at some point when the zebra reaches relative safety, shouldn’t there be some kind of patterning of the event to help make the correct predictive response (from a behaviour perspective) the next time a lion is encountered? Since the zebra has no word to describe a lion or its surroundings or anything else to make meaning of the environment, couldn’t a primitive kind of semantic representation of these things be patterned through emotional state at various points during the encounter? Does the zebra feel a kind of exhilaration at having escaped, and would this not attach some kind of semantic meaning to the encounter? Would the zebra feel a kind of fear when next confronted with something that most resembles a lion, which would initiate evasive behaviour?

Could the brain create an SDR which is used to compare the current emotional state to the last and to make a predictive emotional response? This would explain a great deal about how and why people fight with one another, especially when the fight seems to make no sense.
I think that emotional response to the environment is patterned and used to make predictions about what our emotional response should be, and the fact that we have not defined emotions in language very effectively means that we are terrible at accurately predicting the emotional state of another - hence misunderstanding, etc. From my own observations, I have seen that a predictive emotional state can actually impair an individual’s ability to decode language, to the point that the words being spoken are in fact irrelevant, even when there can be no ambiguity about their meaning. I see exactly the same thing happen in written communication when the tone of an email is predictively assumed rather than the words being taken at face value.

As Lisa Barrett points out, emotions are constructed in each individual and are entirely context-based, so the idea that I could accurately predict someone else’s emotional state is a bit of a red herring. Language is also context-based: while we can agree on the meaning of common words because most of us are exposed to them on a regular basis, defining a word from context, or making an inference, is dependent on the level of exposure to language and the patterns of language (something I was intensely aware of as I tried to wade through some very academic papers on a variety of topics, none of which I am an expert in).

I believe it was Bitking who made the analogy of a really dumb boss with a really smart advisor; I would like to extend the analogy a little further and suggest this is much like the Captain Kirk and Mr. Spock command structure which is so entertaining in Star Trek. We need Captain Kirk to make intuitive and rash decisions when time does not permit a thorough analysis, and yet, if we make it out of this predicament, we will need Mr. Spock to figure out how that happened after the fact and make a prediction about what to do the next time we suspect the Romulans might be lurking.
I think emotions play a pivotal role in human learning because they are the mechanism by which we often decide whether to continue to play with novel information (and thus pattern it) or to avoid it because it might be dangerous or, more often, unpleasant. Any thoughts would be appreciated and I hope my avenue of exploration is not derailing anyone’s thought process.


I don’t see why not. I think emotions contain information, and they are part of the data flow.


I think that you might find this interesting:

I don’t think of emotions as a separate sensory stream like vision or hearing. I think of them more as a “smell” that you learn along with whatever experience you are having, and that flavors that experience. The more profound the experience, the more intense the flavoring and the greater the learning rate of that encounter - for good or bad.
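One toy way to picture this “flavoring” idea - purely illustrative, with made-up function names and numbers - is emotional intensity acting as a multiplier on how strongly an encounter is written into memory:

```python
# Hypothetical sketch: emotional salience scales the learning rate of an
# experience. The function name and all numbers are illustrative only.

def encoding_strength(base_learning_rate, emotional_intensity):
    """More profound experience -> more intense 'flavoring' -> faster
    learning of that encounter, for good or bad."""
    return base_learning_rate * (1.0 + emotional_intensity)

print(encoding_strength(0.1, 0.0))  # a bland moment: learned weakly
print(encoding_strength(0.1, 9.0))  # a terrifying escape: stamped in hard
```

The same multiplier works “for good or bad”: a strongly negative encounter gets stamped in just as hard as a strongly positive one.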

There are good reasons to expect that part of the purpose of the hippocampus is as a buffer for experience to be colored by the outcome of that experience as it is consolidated and transferred to the cortex.

Your learning as you explore your environment builds a vast landscape of emotionally flavored micro-experiences. This is the essence of exploration and play - building this catalog of emotionally tagged experience. This is why we seek novelties - to add to our useful store of knowledge of the world. It’s instinctual and every bit as powerful as other drives such as grooming or seeking shelter.

I am certain that this exploration extends to the social sphere. We are social animals and must learn our place in the leader/follower relationships. I suspect that there is a lot to know about how dating and dancing work in this context. Children can be unspeakably cruel to each other as they sort these things out. All of these experiences are colored with emotion by our innate lizard brain as we learn them.

The “finished” results of your exploration are what you draw upon when you encounter the lion; you use the crystallized sum of all your prior explorations. Assuming you do survive your stressful encounter whatever you did to survive is stamped hard in your memory - it worked! Such is the fuel of PTSD. If you did not survive whatever you did was not working and does not need to be remembered.

In relation to your example of certain words or ideas freighted with emotion - the tribal experience works to shade these tokens by context. I suspect that at some point this emotional weight overwhelms the meaning of the word to the point where some book definition is meaningless.


I see this a lot on this site. This is the same thing as the “grandma cell.” What happens if that one cell dies? You would forget whatever that cell knew.

I don’t think that this is how the brain does it. I think that the experience is spread over many SDRs.

I think that the way these things work is to spread a little bit of meaning over a large topographic space. This is sometimes called a “distributed representation.” This takes some getting used to but there are many good reasons to think that this is how the brain does things.
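A minimal numerical sketch of why a distributed representation survives cell death where a “grandma cell” cannot (the SDR sizes are typical HTM defaults; everything else is illustrative):

```python
import random

random.seed(42)
n_cells, n_active = 2048, 40  # typical HTM SDR dimensions: 2048 bits, ~2% active
memory = set(random.sample(range(n_cells), n_active))  # one stored pattern

# Kill 10% of all cells at random.
dead = set(random.sample(range(n_cells), n_cells // 10))
surviving_bits = memory - dead

# A distributed code degrades gracefully: losing 10% of the cells costs
# roughly 10% of the pattern, which still clears a typical ~50% match
# threshold. A single "grandma cell" code would be all-or-nothing.
overlap_fraction = len(surviving_bits) / n_active
print(overlap_fraction)
```

Run it and the surviving overlap stays around 0.9 - recognition is dented, not destroyed, which is the practical payoff of spreading a little bit of meaning over many cells.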

Quick definition:

A “classic” exploration of the concept:

Another - perhaps easier to follow:

And yet another:


In my opinion, emotions are labelled onto certain cognitive processes and patterns, and the domain of an emotion expands as the system keeps on observing. The encoding might just be electrochemical processing, but without certain cognitive patterns that define each emotion, the electrochemical staining isn’t of any use. Think about love. There are certain patterns that are associated with love: certain actions that everyone does for their loved ones. Without the patterns the emotion won’t evolve, and would probably just be as annoying as when you cannot express negative emotions such as anger and have to suppress them quickly. Just a speculation after what I read about action-perception circuits and how the motor cortex is used in creating abstract emotions, cited by @Bitking


This one?



Of course we are driven by our emotions - what else do we have to go by? This is the core of psychoanalysis. The growth experience of psychoanalysis is to make emotional responses that are purely subconscious into emotions that are well connected to, and controllable by, the conscious mind. The saying is “where id is, let ego become.” By having a person verbalize and examine their emotions over and over, they move from a reflex to a considered and chosen response.



So now I’ve read some Calvin.
I really like the way he writes, and he provides lots of interesting details and stories in each chapter. At the end of the day, however, I’m not convinced by the central thesis about grids.

I’m well aware that my reluctance to accept it may be biased by the fact that such wave-based phenomena, if intelligence turns out to be so inherently based on them, would be inconceivably more difficult to model than simple, almost-amenable-to-sequential layers and areas. Also, let me reiterate that I’m not an expert in any of this, so correct me if I’m misinterpreting something.

To begin with - but that may be something of a personal feeling - after reading “The Cerebral Code” and “How Brains Think” (another nice read, btw), I could not shake the sensation that the Darwinian process underpinning his model was somehow shoehorned in. Although I was quite intrigued at first by that proposition, by the end of the book the insistence on finding all six Darwinian-model ingredients in the brain’s inner workings looked like a hammer in search of a nail. A feeling even conveyed by the choice of the book’s very layout.

Inner-Darwin aside, his reflections on the evolutionary side of the equation were insightful. And I see now why the idea of a neocortex evolving from a need for throwing accuracy matters to you, @bitking. Indeed he seems to have a point here, and the fact that his solution seems to involve his cortical grid concept tickles me.

For this self-reinforcement mechanism to produce wave-interference patterns, which he then sparses out as “high points,” we would require, in my view:

  • First, [edit] a symmetry between connections to the proximal, “feedforward” part of the neighbouring neurons. This does not seem to be the case, as far as I can tell from my (limited) exposure to cortical-column flow diagrams. Most same-layer, lateral input from sensibly close columns seems to involve [edit, sorry] basal dendrites, which according to HTM are best described as producing a modulatory signal, effectively allowing cells that do have it to prevent cells that don’t from firing, all feedforward being equal.
  • Second, we’d need a super-fine regulatory system for it to make any sense (maybe that is what he means by “automatic gain control”? I’m not sure). I mean, cells should fire “whenever” a faint input asks them to, but also should fire “only” as part of the grid, when surrounded by most of their 6 neighbors also firing?
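The second bullet’s tension can be made concrete with a toy rule - every name and threshold here is hypothetical, purely to show the constraint being described:

```python
# Toy version of the "fire only with the grid" rule: a cell with faint
# feedforward input goes fully active only when a quorum of its six
# hex-spaced neighbors is active too. Names and thresholds are made up.

def cell_fires(feedforward, active_neighbors, ff_threshold=0.3, quorum=4):
    primed = feedforward >= ff_threshold   # faint input is enough to prime
    return primed and active_neighbors >= quorum

print(cell_fires(0.4, 5))  # weak input + grid support -> fires
print(cell_fires(0.4, 2))  # weak input, isolated -> stays silent
```

Tuning `ff_threshold` and `quorum` so this behaves sensibly across all input intensities is exactly where the “super-fine regulatory system” worry bites.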

Also, the 100-step ceiling would seem to be hit quite soon if a pattern carrying semantic value has to:

  • settle from unsorted towards capture by an attractor
  • propagate along the grid
  • be detected as part of a spatiotemporal pattern

I’m also quite uncertain how the math would work out on the capacity side of the matter. I think I understand SDR statistical properties by now, but how lots of different attractors could share a common lattice I do not intuit well at all.

Besides, the need for ridges and small passages and such for the diverging copies required for “speciation” left me a little puzzled, and the long-range messaging stuff operating on similar copies quite a bit as well. In other words, my curiosity about “H” was not really satisfied here.

All in all, thanks for those links, @bitking. As I said, even with such reserves there were still lots and lots of interesting details and things worth thinking about in those books.

Ah. Also. Something fun occurred to me here while reading the first chapter again:

One technological analogy is the hologram, but the brain seems unlikely to utilize phase information in the same way

or does it? Grid cell modules and their “overlap,” anyone?



@gmirey, I see four main points in your assessment:

  1. Darwinism. Granted, when I first read Calvin I blew this off for much the same reasons you mentioned. I used to think that this was just a personal jihad that got mixed in with his otherwise good ideas.

Time has since whispered to me that my lack of understanding does not make the ideas wrong.

I think that the timescale of when the concept applies is an important consideration. I don’t think this applies to every presentation of a sensed stream. I do think it applies when competing grids are forming and the “boundary cells” are trying to learn what pattern they really are part of. So the correct time scale is during the exploration and learning phases. So - not all at once and not during every recognition event.

  2. Proximal vs. distal? I think that this is bigger than that. There are a large number of cell types in the cortex, each with a different “reach” and a different mix of inputs and outputs, some excitatory and some inhibitory. The grid-forming cells and related inhibitory cells are just two types in this mix. The Numenta temporal cells and their related intercolumn inhibitory cells are other cells in the mix.

I hope that you take away that I am only describing a part of the mix of cells in the cortical sheet. I have tried to be careful to distinguish that the layer 2 grid-forming cells are distinct from the standard temporal sensing cells central to the Numenta model. I see them working together as a system.

There are other cells that do not fit into either model such as the thalamocortical cells[1] in layer 4. I am sure that there are more stories to be told here.

Taking your shoehorn niggle to the Numenta model - they are trying to cram every bit of the cortical sheet into the temporal sensing model, seemingly without trying to use these other layers & connections to form larger patterns and connections to distant structures.

All that aside: The grid-forming cells solve a problem that does not seem to get much attention in the Numenta model: binding. Even if I accept the Numenta model as doing everything that is claimed (and I don’t), we still have the problem that this finger says “cup,” that finger says “guard rail,” and the palm says “gearshift knob.” Nothing ties them together and integrates the various sensed states into a whole. The grid-forming layer in effect allows the various sensations to vote on a learned thing over a spatial region of a map. Quickly, automatically, and in a biologically plausible way.

  3. The 100-step thing - excellent observation. First - see #1 above. Second - all the local cells are trying to recognize some pattern at the same time. If there is some overarching pattern that a local cell is part of, it gets an extra “kick” from its grid-spaced neighbors, helping it to become fully active. All of the grid-forming cells are trying to see the pattern at the same time, so it is a relatively fast local process.

  4. Regulation. Good point - seldom discussed. Note that at the level of the Numenta model you almost never hear anything about brain waves or tonic maintenance. I don’t have a ready answer, but I think this deserves more attention. I did mention some things about this in an earlier post[1] but did not follow up on it.

Discussion: As you may have noticed - I read a fair number of papers; I try to understand the ideas being presented and move on to the next one. Some central tendencies emerge and often I see that the work done in the paper is not all that helpful by itself but that unintentionally it does offer support to other work in other papers. The Calvin books fit in that space. When I first read them I was entertained and I did check out some of the related references. It all checked out but I could not see much use for what he was saying and I filed it away with the vast number of proposed models of “how the brain works.”
Then the Moser grid-cell findings hit the scene, and I went back and looked at anything I had seen related to grids, and was struck by how nicely Calvin’s work anticipated this. Then I (finally) made the connection with the binding problem and got serious about looking at his work. Even if Calvin’s work turns out to be wrong, it does such a good job of explaining the meta-behavior of grids that I think it is useful as a starting point for evaluating what grids are doing.

Note: I would be delighted to post some of the papers supporting Calvin’s work but these were done in a different time. All of them seem to be either behind a paywall or in a book. The ones that I did obtain through interlibrary loans did seem like good solid technical support for his main points.

[1] Cortical Oscillations: A topic seldom discussed in HTM circles:



Oh, I have :smiley:

About shoehorning: Yeah, well. As I said, that’s more of a feeling than an argument that he’s wrong. I guess any top-down approach that also cares about the bottom-up side has to shoehorn things in somewhere at the edges before the whole picture is crystal clear. However, since we’re both discussing things here on this forum, I assume we share a gut feeling that Numenta’s broad direction and/or methodology feels right. For me, it is trying to work from the initial insight that we’re predictive machines who’ll try to discover any structure in whatever input flow we get. I cannot prove it right, but it sure does not sound dissonant to my experience either, and it seems to point towards an explanation while trying to stay consistent with biology findings. An inner-Darwin mechanism, on the other hand - I have no clue whether it is a necessary requirement. That does not make it wrong either, but I wouldn’t bet my life on it.
So were he to see/feel/intuit/have proof of five of the six “ingredients,” then see a similarity with a Darwinian model, then try to get a grasp on the missing one, okay. Even if the sixth had required a little edge-rounding. But the book certainly reads like he saw the shiny hammer well before that 5/6 mark.

Granted, he has a lifetime of exposure to these considerations and a bright, well-working brain, and this work should not be dismissed on the basis that a novice like myself does not understand it. Very true. I’m not dismissing it per se. In fact, the best I can do to try and understand, however rough this may sound, is to assault everything I do not understand until convinced that it holds.

From your linked post above, I’ve barely browsed ref. [8], and it seems well over my head already. However, what comes after the burst does seem like a wave indeed.

pondering pondering

What I still do not get is the relationship you see between “the Moser grid findings” and Calvin’s grid patterns - other than the fact that they both involve the hex lattice (which is in itself an optimal form for a lot of things, not necessarily intrinsic to brains or navigation). I mean, in the very HTM School video you’re referring to, Matt makes it clear that those grid cells spike at fixed locations in the environment’s space, but this has nothing to do with their own layout in the cortex topology.

Sorry to go heavy on that same question twice, when you already took the time to try and answer me, but I’m really confused here.



Me again, sorry.
I’m reading “Network Mechanisms of Grid Cells.” I haven’t finished, but I wanted to let you know. I should have read it before, it seems. Here too they’re proposing that same correlation between environmental grid responses and internal-layout “gridness.” I’m well open to the possibility that there is such a correlation, although I still cannot infer why.