Numenta turns attention to The Thalamus!


I am certainly into computers and have been programming for many years but on this forum everything should be based on the biology of the brain - that is the mission here. We try to restrict our hardware to the kind of processing that can be found in the brain.

The brain does not have the usual tools used in classic AI work. There have been many other methods tried (LISP/Symbolic AI, expert systems, heuristic systems, reasoning systems, block worlds, knowledge based systems [frames and scripts], decision trees, propositional and first order logic, inference engines, the list goes on and on) to make an AI but that is not what we do here.

We have “encoders” to simulate the early sensory system’s transformation of raw input into formats compatible with the processing the brain is suspected to perform.

I do my non-biological processing on other forums - as should you.

You are correct that I have posted most of my main ideas over the last year: they are the distilled result of studying the biology of the brain since the late ’70s. My main ideas are moving forward slowly as I refine some of the “iffy” areas; there are some sub-cortical areas that are still a complete mystery to me. It is unlikely that I will be adding any bold new concepts to the central big picture any time soon; at this point I am mostly tweaking around the edges.

My main focus now is to reduce these concepts to working programs.


I think that the thalamus is as closely related to the basal ganglia as it is to the cortex. The basal ganglia uses reinforcement learning to predict when the animal will receive rewards or penalties. Animals use the basal ganglia’s predictions to attempt to maximize their cumulative rewards, which drives behaviour. The basal ganglia, however, does not connect directly to the muscles that control behaviour; instead, the cortex connects to the muscles. The thalamus is the major pathway from the basal ganglia to the cortex, and therefore sits at the interface between unsupervised and reinforcement learning.
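The reward-prediction role described above is commonly modelled as temporal-difference learning. A minimal TD(0) sketch of that idea (a standard textbook illustration, not Numenta code; the toy chain and all names are my own):

```python
# Minimal TD(0) value-prediction sketch (illustrative only).
# The "critic" role often attributed to the basal ganglia: learn to
# predict future reward for each state from a scalar reward signal.

def td0_values(transitions, n_states, alpha=0.1, gamma=0.9):
    """transitions: list of (state, reward, next_state) tuples."""
    v = [0.0] * n_states
    for state, reward, next_state in transitions:
        # TD error: mismatch between predicted and observed return
        delta = reward + gamma * v[next_state] - v[state]
        v[state] += alpha * delta
    return v

# Toy chain: state 0 -> 1 -> 2, with reward only on reaching state 2.
transitions = [(0, 0.0, 1), (1, 1.0, 2)] * 200
values = td0_values(transitions, n_states=3)
# The reward prediction propagates backward along the chain,
# so state 1 ends up valued higher than state 0.
```

The point of the sketch is only that a scalar reward signal is enough to learn *predictions* of future reward; acting on those predictions still requires some pathway to motor output, which is the role the post assigns to the thalamo-cortical route.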

I hypothesize that the function of the thalamus is to control the cortex, with the goal of maximizing the animal’s cumulative rewards.


4 posts were split to a new topic: An alternate view of memory functions


I think that the TRN is just a coincidence detector (i.e. a comparator) between the L5 and L6 projections. Note that at a very low level (such as auditory cortex A1) there is no notion of objects, just frequency changes.
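The coincidence-detector idea can be sketched as a simple comparator over binary activity vectors (my illustration only, not from any Numenta implementation; the L5/L6 inputs are placeholder bit arrays):

```python
# Toy comparator: a "coincidence detector" fires only where two input
# projections are active at the same time (illustrative sketch only).

def coincidence(l5_bits, l6_bits):
    """Bitwise AND of two equal-length binary activity vectors."""
    return [a & b for a, b in zip(l5_bits, l6_bits)]

l5 = [1, 0, 1, 1, 0]  # placeholder L5 projection activity
l6 = [1, 1, 0, 1, 0]  # placeholder L6 projection activity
print(coincidence(l5, l6))  # -> [1, 0, 0, 1, 0]
```

In this reading the output carries no object identity at all; it only marks where the two projections agree, which is consistent with the "just frequency changes" view at low levels.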

BTW (in my humble opinion), I think pursuing vision (or any other motor-involved sense) as the main goal introduces a lot of unnecessary difficulty in understanding this. Any “cognitive-level” consideration is way above the bottom of the hierarchy (perhaps tens of levels). I think the principles at the bottom are the same as those at the top, but “discovering” those principles from “top” observations seems pretty hard to me.

Just my 2¢


By “object” above I was referring to the activity in the “output layer”. Numenta’s view is that object representations exist in all hierarchical levels.

Of course the highly abstract concepts are still going to be higher in the hierarchy as you would expect. But HTM proposes that even the very lowest levels are capable of doing a lot more than traditionally thought.

BTW, for reference (I just realized my previous post lacked context), I was referring to the process of “resetting the output layer when switching to a new object” from the Columns Paper. The circuit described in that video (or something similar to it) seems like a good candidate for triggering a “reset”.


Ah, ok.

Nevertheless, I have some doubts about that view. It seems quite cost-inefficient: why use multiple synapses across the hierarchy to store information about the same high-level object? It seems more synapse-efficient to collapse as much common information from different objects as possible in the lower levels, produce the disambiguation as you move up, and use local redundancy in synapses to increase resilience.


I agree with your view, and this is one area of HTM theory that I struggled with after I first encountered it in Matt’s “HTM Chat with Jeff”. I think it started to click for me after a couple of realizations. I described these ideas here, but a quick summary:

  1. The logical boundary between hierarchical levels is actually between layers within the same region (not the connections between regions)
  2. When a new unfamiliar object is encountered, it may require a deeper hierarchy to initially represent it, but as it is encountered more and more frequently, the abstraction can be pushed further down the hierarchy

I believe it is this basic architecture that enables some of the less traditional forms of hierarchy seen in the brain (horizontal connections between apparently separate hierarchies, feed-forward connections that skip levels, and so on).