The hidden structure of Temporal Memory?

The last couple of days I’ve been pondering how on earth you can squeeze all the required capability into a CC so that you can do higher-order functions w/o resorting to additional neural circuitry.

TM is normally thought of as a sequence memory and that’s it!! That is how it was tested, I think… the taxicab and temperature datasets come to mind… but if that is all it is, it’s a pretty dumb memory, and we’d have to implement a lot of additional logic to make a workable system…

Simply adding structure to the items solves the problem. How? What structure?

In ML language, we need to memorize State and Action pairs.
In TBT language, we need to build a sequence of at least: Location, Feature and Motor Command.
The L4<=>L6 loop thus passes to the TM not only the Location but also the Cmd it used.
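
To make that concrete, here is a minimal sketch (all names hypothetical, just my way of picturing it) of what a structured TM item could look like: each element of the stored sequence bundles the Location, the sensed Feature and the Motor Command, instead of being a bare feature SDR:

```python
from dataclasses import dataclass
from typing import Tuple

SDR = Tuple[int, ...]  # stand-in for a sparse set of active cell indices

@dataclass(frozen=True)
class TMItem:
    """One element of a TM sequence: a State:Action bundle, not inert data."""
    location: SDR  # allocentric location (the L6 grid-cell representation)
    feature: SDR   # sensed feature at that location (L4 input)
    command: SDR   # motor command the L4<=>L6 loop used to get there

# the TM then learns variable-order sequences over these triples:
trace = [
    TMItem(location=(3, 17), feature=(41, 90), command=(7,)),
    TMItem(location=(5, 22), feature=(12, 88), command=(2,)),
]
```

The point is only the shape of the item: once the command is part of it, the stored sequence is a State:Action trace rather than a plain feature sequence.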

So we still store variable-order sequences, but now the sequence has a structure that allows:

  1. Planning & RL
  2. Decision making… AFAIK L4 passes down Cmds that are then filtered by the BG!
  3. A dynamic MODEL: it lives in a RefFrame with a metric based on the Grid, storing the whole interaction as a State:Action sequence

Prove me wrong?


Also, if we accept what ML taught us, it has to be: Cmd1, Loc1:Cmd2, Loc2:Cmd3, …

instead of: Loc1:Cmd1, Loc2:Cmd2, …
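
One way to read that offset (my interpretation, toy values): Cmd1 is issued before any location is sensed and it is what produces Loc1, so each sensed location pairs with the command issued *from* it:

```python
# Toy trace illustrating the offset; step() is a stand-in for the
# real sensorimotor dynamics (all values hypothetical).
def step(loc, cmd):
    return loc + cmd  # command moves you to the next location

commands = [2, -1, 3]          # Cmd1, Cmd2, Cmd3
loc = 0                        # starting pose, not yet sensed
trace = [("Cmd", commands[0])] # Cmd1 stands alone: no location known yet
loc = step(loc, commands[0])   # ...executing it produces Loc1
for cmd in commands[1:]:
    trace.append((loc, cmd))   # Loc_t paired with the Cmd issued from it
    loc = step(loc, cmd)

print(trace)  # [('Cmd', 2), (2, -1), (1, 3)] -> Cmd1, Loc1:Cmd2, Loc2:Cmd3
```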

I find it difficult to match up what a small local chunk of the cortex does with what the brain as a system does.

I see that going from perception to a motor command in a single stage, without including the complicated systems of hierarchy and the various looping paths, will end up with about the same thing as text autocomplete. You start typing and it fills in a guess that sometimes matches the few letters typed. There is no real understanding or planning.

The reduction of the WHAT/WHERE streams to something that the subcortex can process, and the resulting generation of basic subcortical commands to be elaborated in the frontal cortex as a motor plan, involve many stages and systems.

Yes, you should make things as simple as possible to try and understand them, but it is also possible to overdo the reductionist thing. Examining a pile of parts will not lead to enlightenment on how the assembled drum makes noise.


I’m not saying this is where everything happens… the CC’s TM just holds the State:Action lookup table (BTW it is not exactly a table, but a seq-of-seqs of SA++ pairs; at least this is how I imagine it), instead of just “inert” data.
How this SA table is used depends on outside CCs and other parts of the brain.

This allows you to predict not only what the sensor sees next, but also the most probable next motor command and the probable next Location.
If you just use this info you get RL; if you play it forward you get Planning.
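
A minimal sketch of what “play it forward” could mean over such an SA table (the dict and function names are my assumptions, not any actual Numenta API; a plain dict stands in for the TM):

```python
from collections import defaultdict

# SA "table": for each (location, command) pair, the observed successors,
# learned from stored traces.
transitions = defaultdict(list)  # (loc, cmd) -> [(next_loc, next_cmd), ...]

def learn(trace):
    """Record consecutive State:Action pairs from one stored sequence."""
    for sa, nxt in zip(trace, trace[1:]):
        transitions[sa].append(nxt)

def plan(loc, cmd, depth):
    """Play the sequence forward: follow the most frequent successor
    at each step, yielding a predicted (location, command) rollout."""
    path = [(loc, cmd)]
    for _ in range(depth):
        succ = transitions.get((loc, cmd))
        if not succ:
            break
        loc, cmd = max(set(succ), key=succ.count)  # most probable next SA
        path.append((loc, cmd))
    return path

learn([("L1", "C1"), ("L2", "C2"), ("L3", "C3")])
print(plan("L1", "C1", depth=2))  # [('L1','C1'), ('L2','C2'), ('L3','C3')]
```

Using the table for one step gives you the RL-style prediction; looping it, as `plan` does, is the Planning case.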

The object/concept models are thus seqs of SA pairs relative to a RefFrame whose metric is provided by the grid cells, instead of the model being something like a figure in N-D space, e.g. a circle, sphere or cube.

I probably mistakenly implied that planning and RL happen in the CC. /Anyway, you can’t do RL or planning w/o the choreography of 100s of CCs. I can foresee a myriad of CCs sending those cmds to the BG via the Th to be sorted, filtered and selected./

What I’m trying to say is that the information “tables” used to do those things live in the TM of the CCs, i.e. it is not just “dumb” data.

awaiting your critique :wink:

BTW, you can imagine the seqs-of-seqs as a sort of virtual graph, even though it is not stored as a graph; you know how SDR variable-markov-order sequences are stored in TM.
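
A sketch of that reading (my construction, not how TM actually lays out cells): the stored sequences, viewed as edges, form an adjacency structure without any explicit graph being stored. Caveat: in real variable-order TM the “same” element gets different cells in different contexts, so this first-order projection is only the simplest view of the virtual graph:

```python
from collections import defaultdict

def as_virtual_graph(sequences):
    """Read stored sequences as adjacency: edges are just the learned
    step-to-step transitions, so the seq-of-seqs behaves like a graph
    even though no graph structure is stored explicitly."""
    graph = defaultdict(set)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            graph[a].add(b)
    return graph

# two traces over the same object share the node "B":
g = as_virtual_graph([["A", "B", "C"], ["D", "B", "E"]])
print(dict(g))  # {'A': {'B'}, 'B': {'C', 'E'}, 'D': {'B'}}
```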
