The latest theories were so excellent that I had to outline Jeff’s most recent surprises. Please feel free to add additional information or suggest better ways to word things. This newer explanation made a great deal of sense to me!
Layer 1
- Currently regarded as nonessential to cover in detail.
- Not found in similar mammals.

Layer 2/3
- Lateral output layer; also connects to its layer 5. The "pooling layer".
- Stable representation of the Place (object = room, cup) being recognized.
- Independent of orientation: it does not matter where the head is pointed.

Layer 4
- Primary input layer for skin, eye, and ear sensory data; where everything starts.
- Its minicolumn network represents sequences, melodies, how things behave.
- Output connects to 2/3; non-driver connections go from 2/3 back to 4.
- Sensory input is gated by the thalamus.

Layer 5a/5b
- Lateral output layer; also connects (through the thalamus) laterally to 4, making Places part of an Object.
- Also connects to its layer 6.
- Represents motor behaviors; motor outputs connect subcortically.
- Two cell types, one for output, the other not.
- In humans, 5a thin tufted cells connect laterally to other 5a cells.
- In humans, 5b thick tufted cells are assigned to movement.

Layer 6a
- Represents rotational Orientation (3 degrees of freedom), head direction.
- Major connections to 4, and from 4 back to 6a.
- 6a and 4 model the Place in 2/3, providing sensory-motor inference over (angular, radial) orientation.
- Outputs to the thalamus to restrict the sensory input sent to 4: an attentional signal that specifies the scale.

Layer 6b
- Grid-cell-like linear Location.
- Major connections to 5, and from 5 back to 6b.
- 6b and 5 in turn use the Place in 2/3 for sensory-motor inference over linear X, Y, Z location.

The scale of things at a given orientation and location is established through a 10 Hz oscillation between the thalamus and cortex.
Do I understand Jeff correctly that he talked about two communication channels?
One is described by excitatory (mostly pyramidal) neurons. This is the classical view of how communication works.
The second way is via inhibitory interneurons. These interneurons can only communicate WITHIN a minicolumn. Thus they are able to communicate the state of a minicolumn to ALL layers of that minicolumn.
I don’t know that this is strictly true. The inhibition may affect the cluster of cell bodies at that layer, but the cell bodies in other layers may be free to respond. In particular, the L2/3 & L4 grouping versus L5 & L6 might be able to respond independently.
I don’t know that this is true but I don’t recall reading anything on this either way.
Perhaps @Casey has seen something on this in his research?
After our discussion about minicolumn activations through multiple layers, Jeff sent me an email with some further considerations. Below is his summary, used with permission:
There are issues with this idea. The biggest problem is we don’t want to force cells in some layers to adhere to the same minicolumns as other layers. For example, a TM layer and its TP layer don’t share minicolumns in our current model of TP. Still, I think we should discuss this idea some more. I like the idea that minicolumns are like an underlying bus that shares information between layers separately from the cell population to cell population linking between layers.
I was very surprised to hear for the first time about a functional difference between layers 2 and 3. A new piece of the puzzle.
What I’m most surprised about is that the minicolumns span many (all?) layers, and that these somehow synchronize across the layers. (Maybe "synchronize" is not a good term.) How does this work if neurobiologists don’t find projections between certain layers? Maybe I’m confused about what exactly a minicolumn is.
He also mentioned bipolar neurons of some kind. He didn’t remember the exact name.
My initial feeling is that this may not actually be a problem, considering a couple of things. Firstly, what is proposed here is an inhibition signal, not an activation signal. This means we are not talking about forcing the same minicolumns through all layers to activate; rather, we are talking about anti-biasing the other minicolumns through all layers (the strength of this anti-biasing signal could presumably be tuned). Secondly, if you consider the possibility of mutually reinforcing connections that form between pyramidal cells in 2/3, you could view this inhibitory signal as simply bumping against the stable representation (“hey, I think I’m seeing something here”). Only if multiple columns agreed on that something else would it become strong enough to overcome the stable activation.
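To make the "bumping against the stable representation" idea concrete, here is a toy numeric sketch. This is entirely my own illustration, not part of HTM theory; the activation value and per-column bump strength are made-up tunable numbers.

```python
# Toy model of the anti-biasing idea: a stable L2/3 representation holds
# some activation, and inhibitory "bumps" from other minicolumns only win
# when enough columns agree. All numbers here are hypothetical.

STABLE_ACTIVATION = 1.0   # strength of the mutually reinforcing L2/3 state
BUMP_STRENGTH = 0.3       # tunable anti-biasing strength per agreeing column

def stable_state_survives(num_agreeing_columns: int) -> bool:
    """The stable representation persists unless the combined inhibitory
    bumps from agreeing columns exceed its own activation."""
    total_bump = num_agreeing_columns * BUMP_STRENGTH
    return total_bump <= STABLE_ACTIVATION

# One column saying "I think I'm seeing something" is not enough to
# overturn the stable state, but four agreeing columns are.
print(stable_state_survives(1))   # True
print(stable_state_survives(4))   # False
```

The point of the sketch is only that a single dissenting column bumps the representation without flipping it, while agreement across columns accumulates until it wins.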
Anyway, this has definitely given me some food for thought. It will be interesting to see where this line of thinking leads.
Minicolumns spanning layers 2/3 and 4 right away brought me back to long-trusted machine intelligence basics, where ironically the same is true: each unique address location in a RAM has a data column that can store more than one data type along its (data-bus-width) length.
Signal-wise, sensory input of any kind, including motor/muscle error bits, connects to any of the address input pins of the RAM. The RAM then changes address location in response to any change in sensor readings. Since the result of data actions is not known until one timestep later, the system learns predictions.
In this model, the data includes a confidence level that increases to a maximum of 3 if all is well after an action is tried; otherwise it is decremented, and when it reaches zero (including for a not-yet-used RAM location) a new guess is stored in memory. To me this part of the process very much resembles spatial pooling!
Required spatial functions are all conveniently found in the lower layers. I think that there are too many similarities for this to be a coincidence.
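The RAM-with-confidence scheme described above can be sketched in a few lines. The confidence cap of 3 and the "new guess at zero" rule come from the description; the action codes and the dictionary standing in for the RAM are my own hypothetical filler.

```python
import random

MAX_CONFIDENCE = 3
ACTIONS = [0, 1, 2, 3]          # hypothetical action codes stored as "Data"

# RAM: address (a tuple of sensor readings) -> [stored_action, confidence]
ram = {}

def step(sensor_address, action_worked):
    """Look up the entry for the current sensor state; reinforce the stored
    guess if the last action worked, otherwise decrement confidence and
    store a new random guess once it reaches zero."""
    if sensor_address not in ram:               # not-yet-used RAM location
        ram[sensor_address] = [random.choice(ACTIONS), 0]
    entry = ram[sensor_address]
    if action_worked:
        entry[1] = min(entry[1] + 1, MAX_CONFIDENCE)
    else:
        entry[1] -= 1
        if entry[1] <= 0:                       # confidence exhausted
            entry[0] = random.choice(ACTIONS)   # store a new guess
            entry[1] = 0
    return entry[0]                             # action to try next timestep
```

Since the outcome of an action is only known one timestep later, the caller feeds back `action_worked` on the next call, which is how the system ends up learning predictions.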
I’ve been toying with the idea of finite-state machines being used to replicate some of the functions of the thalamus, similar to your simulated analog (in software) setup. I feel it definitely has a place.
Depending on the output coming from the neurons above, the FSM could guide the attention mechanisms for I/O. Random thought.
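As a minimal sketch of that random thought: a two-state gate that either relays or blocks sensory input based on an attention signal from above. The states, threshold, and names are entirely my invention, not anything from the theory.

```python
# Toy finite-state machine standing in for thalamic gating. "PASS" relays
# sensory input onward; "BLOCK" suppresses it. Transitions are driven by a
# scalar attention signal from the layers above (hypothetical).

class ThalamusFSM:
    def __init__(self, threshold=0.5):
        self.state = "PASS"
        self.threshold = threshold

    def update(self, attention_signal):
        # High attention opens the gate; low attention closes it.
        self.state = "PASS" if attention_signal >= self.threshold else "BLOCK"

    def relay(self, sensory_input):
        return sensory_input if self.state == "PASS" else None

fsm = ThalamusFSM()
fsm.update(0.9)
print(fsm.relay("touch"))   # touch
fsm.update(0.1)
print(fsm.relay("touch"))   # None
```

A real model would need more states (e.g. burst vs. tonic relay modes), but even this two-state version shows how an FSM could sit between sensors and layer 4.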
Just constantly trying to clear my plate so that I can finally get around to finishing my HTM implementation.
Does the idea of heterosynaptic plasticity make sense in proximal dendrites (i.e. the Spatial Pooler)? It looks like the biological principles are a bit off there (the active dendrites don’t appear to be close enough to observe that phenomenon, or at least regular LTD should have greater influence).
W. C. Oh, L. K. Parajuli, and K. Zito, “Heterosynaptic structural plasticity on local dendritic segments of hippocampal CA1 neurons,” Cell Rep., vol. 10, no. 2, pp. 162–169, 2015.
Oops… probably this isn’t the place for this. It was in the previous Twitch video.
Jeff’s description of possible thalamus-related signal conditioning reminded me of how, in computer graphics, the steps are translation, rotation, and scaling of points. In that case common math functions could be used to demonstrate fundamental principles. A virtual machine like you proposed might also work, although of course for neuroscientific purposes virtual neurons are always preferable.
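For anyone who hasn’t seen the graphics side of the analogy, the three steps are just these functions. This only illustrates the translate/rotate/scale pipeline; it is not a model of the thalamus.

```python
import math

# Classic 2D point transforms: translation, rotation, scaling.

def translate(p, dx, dy):
    x, y = p
    return (x + dx, y + dy)

def rotate(p, angle_rad):
    # Rotate about the origin.
    x, y = p
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (x * c - y * s, x * s + y * c)

def scale(p, factor):
    x, y = p
    return (x * factor, y * factor)

# Apply the three steps in sequence to a point.
p = translate((1.0, 0.0), 2.0, 0.0)   # -> (3.0, 0.0)
p = rotate(p, math.pi / 2)            # -> (~0.0, 3.0)
p = scale(p, 2.0)                     # -> (~0.0, 6.0)
```

The parallel being drawn is that a signal-conditioning stage could apply analogous shift/reorient/rescale operations to sensory input before it reaches layer 4.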
I have been most interested in how the robotics-based implementation I already have relates to HTM theory and how our brain works. Increasing or decreasing a confidence level is very similar to strengthening or weakening a connection. In both cases an action (such as an object-recognition response) must at least occasionally work, or it is discarded and replaced.
I’m used to connecting outputs similar to those described for cortical column motor outputs straight to drive motors. Actions that help the recognition process would be favored, in which case a simple entity may already have some ability to physically move around or turn things to get a better look at whatever needs recognizing, a novelty.
The ultimate goal (an implementation of how a human brain works) is certainly not an over-the-weekend project, anyway. For a full understanding we have to somehow make sense of all the otherwise intimidating neuron-by-neuron signals. Plenty of work to do in that area, too.
Totally agree here. I’ve been working on-and-off-again since January on my HTM implementation in Elixir, which runs on the Erlang VM, with built-in distribution, message passing between processes, and fault tolerance. Elixir itself has an offshoot project, Nerves, which runs on embedded hardware.
I have my clients and stakeholders on the same page about my time constraints now, so that I can focus more on this implementation to share with the crowd.
I could be wrong but I think that this should be the HYPOthalamus.
I think of the thalamus as being the 7th (or more’th?) layer of the cortex.
The clusters of the hypothalamus direct the forebrain to initiate actions based on the perceptions passed to it by the HC/EC and amygdala. And directly from the brainstem.
Do you mean there might be separate minicolumns for L2/3 and L4 vs. L5 and L6? I think it might be sublayer-specific, because L6a seems linked to L4 and L6b to L5st. For example, L4 excitatory cells mainly project to L6 in a minicolumnar fashion (http://www.jneurosci.org/content/jneuro/31/50/18223.full.pdf). I recall L4 doesn’t target L6b much, so L4 and L6a might share minicolumns. L6a mainly targets L4, whereas L6b mainly targets L5a, so L5a and L6b might share minicolumns. That’s just a guess, though.
L5a and L5b don’t share minicolumns at least based on firing synchrony (https://science.sciencemag.org/content/358/6363/610).
I think you would need to read about interneurons to figure this out. I think the ones with minicolumnar projections are called bipolar cells (= tufted cells). They project from L2/3 to L5 in a minicolumnar fashion (“Synaptic biology of barrel cortex circuit assembly”, paywalled).
Hi @jhawkins, could it be that L5a cells vote on the motor commands, as was mentioned in the chat of the video? I am really interested in this because if it is not these cells, there must be other cells that do this. I’m currently reading papers on rewards and action/decision making based on them, and the involvement of dACC and PFC in this process, and since HTM theory states that columns should run the same algorithms, I wonder where the action voting occurs.
Hi guys, additionally I have also been wondering about the following. For touch sensory data, in order to change the orientation and location of the input, we need to move the related body part (finger, leg, arm, etc.). But for visual and auditory sensory inputs, what do we move in order to change orientation and location? Since these inputs are correlated with and dependent on head movements, does it mean that the head direction cells could serve as the orientation input? And similarly, are grid cells from the entorhinal cortex the location input for visual and auditory (and olfactory?) data? So to rephrase what I am asking: we don’t know where touch orientation and location are calculated (or do we?). But for visual and auditory inputs, are orientation and location calculated in the head direction cells + entorhinal cortex? And if this is the case, does it mean that the neocortex is dependent on external location/orientation calculation for visual and auditory inputs?
Microsaccades in the eye move the incoming “image” across the retina.
Reportedly, if the eye muscles are paralyzed (say, by curare) the entire visual field disappears slowly - microsaccades are THAT important.
As to hearing, it FEELS like muscles are involved in redirecting auditory attention. I have not heard that this really happens. But how far “down” does the shift-of-auditory-focus “reach”? In the visual system, part of the perceptual field can be disattended so thoroughly that nothing in that area reaches consciousness… how comparable is this to the mechanism of auditory disattention?
Thanks for the reply. I have just read why this happens. It looks like cones and rods get overstimulated without the microsaccades, and the image disappears. So that’s why we need to change the visual input 3–4 times a second. And it makes me wonder: during REM (rapid eye movement) sleep, when we see dreams, we need to move our eyes, including making saccades, in order to change the image (eye movement and the experienced images are related; that is a fact). But it is obvious that we don’t receive any external visual input. We don’t need to experience real touch to feel it in a dream, and the same goes for auditory input; we generate those from inside (somehow). So my question is: why do we need saccades to change the visual input (like looking around with our eyes) in our dreams?