I understand the context synapses in TM, but my question is about the FEEDBACK synapses.
How do they work in TM? Where does the signal come from? How does it impact learning and/or prediction, and in what way? What is the algorithm?
Apical synapses were implemented in the ApicalTiebreakTemporalMemory algorithm in htmresearch, which was used in the project described in the Columns paper (2017). The “output layer” in the paper is implemented by the ColumnPooler algorithm and its activity provides the apical “feedback” signal.
I don’t know whether apical feedback is used in more recent Numenta work.
As I understand it, apical segments act like distal segments in a standard TM, in that they depolarize cells (make them predictive) but don’t activate them. The main difference is that the presynaptic cells on apical segments come from other TM regions. I’d recommend checking out the ApicalTiebreak algorithm in htmresearch, as @rogert suggested.
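To make that concrete, here’s a minimal sketch of the idea (this is not the htmresearch code; `segment_active`, `cell_is_predictive`, and the threshold value are just assumptions for illustration):

```python
# Illustrative sketch only, not the ApicalTiebreak implementation.
# Segments are modeled as collections of presynaptic cell ids.

ACTIVATION_THRESHOLD = 13  # assumed segment activation threshold


def segment_active(segment_synapses, active_presynaptic_cells):
    """A segment is active when enough of its synapses come from
    currently active presynaptic cells."""
    overlap = sum(1 for presyn_cell in segment_synapses
                  if presyn_cell in active_presynaptic_cells)
    return overlap >= ACTIVATION_THRESHOLD


def cell_is_predictive(distal_segments, apical_segments,
                       active_tm_cells, active_feedback_cells):
    """Distal (context) segments listen to cells in the same TM layer;
    apical segments listen to cells in another region (the feedback).
    Either kind of active segment depolarizes the cell, i.e. puts it
    in the predictive state, but neither fires it by itself."""
    has_distal = any(segment_active(s, active_tm_cells)
                     for s in distal_segments)
    has_apical = any(segment_active(s, active_feedback_cells)
                     for s in apical_segments)
    return has_distal or has_apical
```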
How do you decide the winner when you have both context and feedback active?
When you only have the context, in a sense the neuron with the “highest” overlap wins.
But if you have both context and feedback active, is the sum of the overlaps, the average, or some other measure/algorithm used to figure out which one wins?
If there are two neurons in the same column, both in the predictive state, and that column then activates, both usually become winners and activate, inhibiting all other cells in that column. But if one of them has both distal and apical segment(s) active, that one becomes the lone winner. That’s the basic idea of the “Tiebreak”, as I recall. Of course, nothing beats checking the code though! I think it’s commented pretty well.
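For illustration, here’s a rough sketch of that per-column winner selection (again not the actual ApicalTiebreak code, just my reading of it; cells and segment activity are represented as plain sets):

```python
def winners_in_column(column_cells, cells_with_active_distal,
                      cells_with_active_apical):
    """Pick which cells fire when this minicolumn becomes active.

    column_cells:             cells belonging to this minicolumn
    cells_with_active_distal: cells with at least one active distal segment
    cells_with_active_apical: cells with at least one active apical segment
    """
    predicted = [c for c in column_cells
                 if c in cells_with_active_distal
                 or c in cells_with_active_apical]
    if not predicted:
        # No cell was predictive: the whole column bursts.
        return list(column_cells)

    # Tiebreak: cells supported by BOTH distal and apical segments
    # inhibit the other predicted cells and become the only winners.
    doubly_supported = [c for c in predicted
                        if c in cells_with_active_distal
                        and c in cells_with_active_apical]
    return doubly_supported if doubly_supported else predicted
```

So with two predictive cells in the column and apical feedback arriving at only one of them, only that cell wins, which matches the “lone winner” behaviour described above.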