Clarification about interlayer interactions

I'm looking for a bit of clarification about the six-layer model, so I'm throwing this question out there in the hope that one of the more experienced users can help.

In my work I’ve been considering mini-columns as entities that span only one layer. A multilayer system, to me, meant a sequence of single-layer spatial poolers/temporal memories feeding into one another. However, I know that in the six-layer cortical model, mini-columns span multiple layers, with component cells in each layer.

My questions are:
How do the groups of cells in different layers within the same mini-column relate to each other?
Does the mini-column activate a cell at each layer, or only in one layer? Does the SDR defining the set of active mini-columns change in each layer, or only the active cells?

I assume the cells in a higher layer share the same proximal input as the cells in a lower layer in the same mini-column, but also read the activity in the lower layer itself.


Actually, I think a better question would be: could someone please explain the six-layer model to me using HTM terms (macro/mini-column, cell, pooler, temporal memory, etc.)? I have seen references and figures and I think I get the general idea, but I must have some incorrect assumptions. I also tend to get bogged down in the typical biological terminology used in explanations (dendrites, axons, and so on) that I can't conceptualize as well as algorithmic terms.


I would like to hear more about this too. The cheat sheet has this diagram:


I think this is still an open question in HTM research, though I'm not sure. They might only span between certain layers, or only one layer each. They might not exist in some layers.

I don’t think it’s really nailed down yet, especially when it comes to implementation as algorithms. There are still problems to be solved, and ideas in previous papers/videos could easily mutate.

Until the model approaches completion, I think some of it can’t really be explained without biological terminology. It’s confusing trying to keep up with it, but also really cool seeing research developing. If you’re just interested in the algorithm-ish ideas, I wouldn’t get too caught up in the super confusing stuff.

I can’t explain this very well or in detail. What I can do is rattle off random stuff from memory.

These are mostly tentative assignments of functions to layers. They have possible circuits to do stuff, but it’s not written in stone.

L1: Hierarchy (higher cortical regions) isn’t really part of the equation yet. The plan is to figure out what individual regions do first, kinda.

L1 isn’t really a layer, I suppose. It has very few neurons, though there are some sparse inhibitory interneurons. It’s mostly just a place for axons and dendrites to spread out and form synapses. Apical dendrites reach into this, but they convey signals to neurons whose cell bodies are in any layer.

L2: This layer decides what the object is by voting. Each object is treated as a bunch of features (except those features are also objects nowadays, but ignore that). When it senses the first feature on the object, it basically generates a list of all the possible objects. Each subsequent feature narrows down that list until there’s just one left.

It doesn’t have to just sense one feature at a time, though.

That’d be like just one fingertip, which provides input to one cortical column. A cortical column has several hundred minicolumns, and each gets input from a patch of the sensory organ, like a fingertip or part of a retina. Those patches can overlap.

If it has multiple cortical columns, like multiple fingertips, it can sense multiple features at once. Each feature just narrows down the list of possible objects, leaving just the ones which have the feature.

Every cortical column decides the object on its own. As in, its cells need to produce an SDR for the possible objects. This is based on the features sensed by the column’s sensory patch, but also the possibilities remaining in other columns. It’s a redundant representation of the same list of possible objects in every cortical column.
That’s voting. Each cortical column needs to communicate with a bunch of others, narrowing down each other’s lists. Voting also happens between cortical regions, although it takes a whole bunch of connections all over the brain. You might see a feature which could be on a cup or a keyboard, and touch a feature which could be on a cup or a book. Together, visual cortex and somatosensory cortex can vote that it’s a cup.
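If it helps to see that narrowing-down-by-voting idea as code, here's a toy sketch. Everything in it (the object catalog, the function names) is made up for illustration; it's not from any Numenta codebase, and real columns work with SDRs, not Python sets.

```python
# Toy sketch of object recognition by elimination and voting.
# Each column keeps a set of candidate objects consistent with what
# its own sensory patch has felt so far; columns then intersect
# their candidate sets to "vote" on a single answer.

# Which objects contain which features (hypothetical catalog).
OBJECTS = {
    "cup":      {"handle", "rim", "curved_side"},
    "keyboard": {"keys", "flat_side", "rim"},
    "book":     {"flat_side", "spine", "corner"},
}

def candidates_for(feature):
    """All objects that contain the sensed feature."""
    return {name for name, feats in OBJECTS.items() if feature in feats}

def sense(column_candidates, feature):
    """Narrow one column's candidate list with a newly sensed feature."""
    return column_candidates & candidates_for(feature)

def vote(columns):
    """Keep only the objects every column still considers possible."""
    result = set(OBJECTS)
    for cands in columns:
        result &= cands
    return result

# One fingertip feels a rim (could be cup or keyboard),
# another feels a curved side (only the cup has that).
col_a = sense(set(OBJECTS), "rim")
col_b = sense(set(OBJECTS), "curved_side")
print(vote([col_a, col_b]))  # {'cup'}
```

Each column could keep sensing more features on its own, too; every `sense` call only ever shrinks its list, and `vote` just takes the intersection across columns.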

L3a/b: Temporal memory was developed before the focus on objects and positioning. There are youtube videos explaining the algorithm. The difference between L3a and L3b is L3b receives a lot more input from thalamus. I dunno what that does. L3a might not be sequence memory, I guess.

L4: I believe this is like sequence memory, except it also gets the motor command (e.g. “move fingertip left 1 unit.”)
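A toy way to picture "sequence memory that also gets the motor command": predictions are keyed on (current input, movement), so the same input can lead to different predictions depending on how the sensor is about to move. This is purely illustrative (the names and features are invented), not the actual L4 circuitry or HTM code.

```python
# Toy sketch of sensorimotor prediction: learn transitions keyed on
# (current feature, motor command), then predict the next feature.

from collections import defaultdict

transitions = defaultdict(set)

def learn(feature, motor_command, next_feature):
    """Record that moving this way from this feature reaches next_feature."""
    transitions[(feature, motor_command)].add(next_feature)

def predict(feature, motor_command):
    """Predicted next features for this feature + movement pair."""
    return transitions[(feature, motor_command)]

# Moving left from the rim of a cup reaches the handle;
# moving down from the same rim reaches the curved side.
learn("rim", "left", "handle")
learn("rim", "down", "curved_side")

print(predict("rim", "left"))   # {'handle'}
print(predict("rim", "down"))   # {'curved_side'}
```

The point of the sketch is just that the motor command disambiguates: plain sequence memory keyed only on "rim" couldn't tell those two predictions apart.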

L5a/b: Motor output. There’s a lot more complexity here, as with every layer.

L6a/b: The “where” and “what” pathways correspond to the dorsal and ventral streams of visual cortex. Corresponding pathways have been found for other senses. The distinction is based on what happens when those parts of cortex are damaged.


Thank you so much! This actually helps a lot: knowing that there isn’t yet a nailed-down algorithmic model of the six layers makes me feel better about being confused about what the model was! :wink: The explanation of the general purpose of each layer also helps give a basic idea of what the HTM equivalent might look like, at least in some layers.

Do the neurons in L1 have connections to the subcortical regions, or is it purely sensory input?


I don’t know much about the neurons in L1; there are very few cell bodies there. Subcortical structures do target L1, though, at least some cells in the thalamus. That could relay things besides sensory input from subcortical structures. Maybe also neuromodulators like dopamine, I’d guess.


This is a much older Numenta paper, and much of the HTM thinking has since moved on to newer incarnations.
That said, Appendix B is still useful:


Did you see this thread?
