I am reading about temporal memory. There are some things I don't understand and am not sure about, so can you help me check them and explain?
I read the temporal pooler pseudocode from "Hierarchical Temporal Memory, version 0.2.1, 2011" and figures 2 and 3 of "Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex", Jeff Hawkins, 2016.
In figure 2 and phase 1 of the pseudocode, as I see it: after learning, all cells of the word "A" are active because "A" was not predicted from a previous word (pseudocode lines 32-34). For the word "B", 3 cells are active after learning because of the best matching segment from word "A" (pseudocode lines 36-41). For the word "C", 3 cells are active because of the 3 active cells of "B" (pseudocode lines 22-30). Is that the right way to think about it?
In figure 3, to predict "B" from "A": when all cells of "A" are active, the connections from "A" to the predicted cells of "B" are above threshold, so when the "B" input arrives, 3 cells of "B" become active. When those 3 "B" cells are active, the lists of segments and predicted cells are updated as in phase 2 of the pseudocode, ready for the next prediction of "C". Is that right?
Yes I think so. I would put it like this. The 3 cells that activate for “B” after learning are the cells that represent the spatial pattern for B after seeing A. The 3 cells active for “C” represent the spatial pattern for “C” after seeing “B” after seeing “A”.
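If it helps, here is a minimal sketch of that phase-1 activation rule in plain Python (the names, the cell numbering, and the example inputs are mine, not the paper's pseudocode):

```python
def activate_columns(active_columns, predictive_cells, cells_per_column):
    """For each active column, keep only the cells that were predicted on
    the previous step; if none were predicted, burst the whole column."""
    active_cells = set()
    for col in active_columns:
        column_cells = {col * cells_per_column + i for i in range(cells_per_column)}
        predicted_here = column_cells & predictive_cells
        if predicted_here:
            # Expected input: the surviving cells encode the context,
            # e.g. "B after A".
            active_cells |= predicted_here
        else:
            # Unexpected input (like "A" at the start of a sequence):
            # every cell in the column becomes active.
            active_cells |= column_cells
    return active_cells

# "A" arrives with nothing predicted -> both of its columns burst.
print(activate_columns({0, 3}, predictive_cells=set(), cells_per_column=4))
# "B" arrives while one cell per column was predicted by "A" -> only those cells.
print(activate_columns({1, 5}, predictive_cells={4, 21}, cells_per_column=4))
```

With 4 cells per column, the first call returns all 8 cells of "A"'s columns (bursting), while the second keeps just one cell per column, which is the single-cell-per-column picture for "B" in figure 2.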
For more details that will help with this and your 2nd question, watch the 2nd part of this video:
That is an old version, so can you give me newer pseudocode to read about the TM?
I have an example of using the TM on sequences; can you check it? I wonder whether my thinking is right or wrong.
I have 4 sequences:
CDEFG
FCDEG
DCGFE
EFCDG
The temporal pooler has 8 columns and 4 cells per column, and just 2 columns are active after learning. This is what I take from figure 2 and from your video about TM, part 2.
After learning, every word has 2 active columns, with just one active cell in each column.
In sequence #1, C, D, E, F, G will each have different active cells.
In sequence #2, C, D, E, F, G will have different active cells, in different positions from the active cells in sequence #1 (like B and B' in figure 2).
The same goes for sequences #3 and #4.
Next is testing. When "C" comes in, D and G will be predicted. When D comes in, the predicted cells of D become active, and next E, C, G will be predicted. If E comes in, the predicted cells of E become active, and so on for F and then G. After G comes in and its cells become active, we know that sequence #1 was learned and is correct. If some word arrives unexpectedly, the sequence was not learned and is wrong.
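Roughly, I picture the test working like the sketch below. This assumes NuPIC's TemporalMemory class; the letter-to-column encoding and the parameter values are made up for illustration, I used 10 columns instead of 8 so each of the 5 letters can own 2 distinct columns, and the thresholds and synapse counts are sized way down because only 2 columns are active per input:

```python
from nupic.algorithms.temporal_memory import TemporalMemory

# Made-up encoding: each letter owns 2 of the 10 columns.
ENCODE = {c: [2 * i, 2 * i + 1] for i, c in enumerate("CDEFG")}

tm = TemporalMemory(columnDimensions=(10,),
                    cellsPerColumn=4,
                    activationThreshold=2,   # only 2 columns active per input,
                    minThreshold=1,          # so thresholds must be tiny
                    maxNewSynapseCount=2,
                    initialPermanence=0.5,
                    connectedPermanence=0.4)

SEQUENCES = ["CDEFG", "FCDEG", "DCGFE", "EFCDG"]

# Training: replay every sequence several times, resetting between sequences.
for _ in range(30):
    for seq in SEQUENCES:
        for letter in seq:
            tm.compute(ENCODE[letter], learn=True)
        tm.reset()

def test(seq):
    """Replay `seq` without learning; fail at the first letter whose columns
    were not predicted by the previous step."""
    tm.reset()
    for i, letter in enumerate(seq):
        predicted_cols = {tm.columnForCell(c) for c in tm.getPredictiveCells()}
        if i > 0 and not set(ENCODE[letter]) <= predicted_cols:
            return False   # an unexpected letter: sequence not learned
        tm.compute(ENCODE[letter], learn=False)
    return True            # every letter after the first was predicted

print(test("CDEFG"))   # expected: True  (sequence #1 was trained)
print(test("CGEFD"))   # expected: False (this order was never trained)
```

If every letter after the first arrives in already-predicted columns, I would call the sequence learned; the first unpredicted letter marks it as not learned.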
That is my thinking about the TM; can you tell me whether it is right or wrong?
Thank you.
Hi, can you explain to me clearly what these mean:
Active and has one or more predicted cells
Active and has no predicted cells
Inactive and has at least one predicted cell
and what is matchingSegments on the BAMI pseudocode page? Thank you.
These statements are referring to minicolumns, which can be in those states. You should read more about the spatial pooling algorithm; you really need to understand it before you move on to the TM.
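To make those three cases concrete, here is how I read them in the BAMI Temporal Memory loop, as a small sketch (the function and the wording of the return values are mine, not BAMI's). As for matchingSegments, my reading is that it is the list of segments whose number of active potential synapses reaches the learning threshold even though the segment itself is not active; those segments get reused for learning when their column bursts:

```python
def classify_column(is_active, predicted_cells_in_column):
    """The three minicolumn cases the Temporal Memory loop handles."""
    if is_active and predicted_cells_in_column:
        # Correctly predicted input: activate only the predicted cells.
        return "active & predicted -> activate the predicted cells"
    if is_active and not predicted_cells_in_column:
        # Unpredicted input: burst the column (all cells active) and pick a
        # winner cell, reusing the best matching segment if there is one.
        return "active & unpredicted -> burst the column"
    if not is_active and predicted_cells_in_column:
        # A prediction that did not come true: weaken the segments that
        # caused it (when the predicted-segment decrement is non-zero).
        return "inactive & predicted -> punish the predicting segments"
    return "inactive & unpredicted -> nothing to do"

print(classify_column(True,  {12, 14}))
print(classify_column(True,  set()))
print(classify_column(False, {7}))
```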
If an agent learned a sequence "ABE",
"E" has its context of the previous elements A, B.
Then the agent got the sequence "FBE", where the context is F, B.
In both cases we have a different "E".
But how do we choose the depth of context?
If we now choose a depth of, say, four observations, the "E" in, for example, "FABE"
will be different from the "E" in, for example, "AABE".
So how do we choose the depth of context? Or do we generate a unique "E" each time?
The temporal memory will learn as much context as it can. The depth of context is not a parameter and at no time does the TM choose the depth of context. The depth of context is dependent on what the TM is trained on.
If the TM is trained on FABE and AABE, it will form unique states for the ABE section of both sequences. You and I might say the context is four deep, but nowhere is that encoded.
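As an illustration, a sketch like the one below (assuming NuPIC's TemporalMemory, a made-up encoding of 2 columns per letter, and thresholds sized down for such tiny inputs) should end up with different cells representing the final "E" of FABE and AABE, even though no depth parameter is ever set:

```python
from nupic.algorithms.temporal_memory import TemporalMemory

# Made-up encoding: each letter owns 2 of the 8 columns.
ENCODE = {c: [2 * i, 2 * i + 1] for i, c in enumerate("ABEF")}

tm = TemporalMemory(columnDimensions=(8,), cellsPerColumn=4,
                    activationThreshold=2, minThreshold=1,
                    maxNewSynapseCount=2,
                    initialPermanence=0.5, connectedPermanence=0.4)

def run(seq, learn):
    """Feed one sequence and return the cells active for its last element."""
    tm.reset()
    for letter in seq:
        tm.compute(ENCODE[letter], learn=learn)
    return set(tm.getActiveCells())

# Train on both sequences; nowhere is a "depth of context" configured.
for _ in range(30):
    run("FABE", learn=True)
    run("AABE", learn=True)

e_after_fabe = run("FABE", learn=False)
e_after_aabe = run("AABE", learn=False)

# Same minicolumns (it is still "E"), but different cells inside them.
print(e_after_fabe == e_after_aabe)                       # expected: False
print({tm.columnForCell(c) for c in e_after_fabe} ==
      {tm.columnForCell(c) for c in e_after_aabe})        # expected: True
```

The splitting happens on its own: whenever a shared element stops being predicted in one of the contexts, its column bursts and a fresh cell is recruited for that context, which is how the two "E"s end up different after a handful of repetitions.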