Good points there @Paul_Lamb.
It’s true I don’t think I’ve seen docs describing that, I must’ve gotten it from listening to Numenta people like Jeff & Subutai talk.
I know there is a talk where they do contrast the mechanics of HTM and conventional ANN, but it’s true that terms like “feedforward” and “layers” have totally different meanings in HTM vs ANN.
When I hear “feedforward” in HTM I think of Spatial Pooler activation. The “feedforward” input is ingested by the SP and determines which columns will activate. This is as opposed to the TM deciding which cell(s) in those columns will become predictive, and capable of inhibiting all other cells in their column if that column activates at the next time step. I’m not sure of the term for this kind of input though; I don’t think it’s “feedback”.
So rather than thinking in terms of “feed forward/back/whatever”, I think of it as activating inputs (used by the SP’s proximal dendrite segments to choose winning columns) and then depolarizing inputs (used by the TM’s distal dendrite segments to make certain cells predictive). The SP’s proximal segments are connected to the encoding, thus saying “here’s what I’m seeing now”, while the TM’s distal segments are connected to other cells within the “region/layer”, thus saying “here’s the sequential context in which I’m seeing it”. Come to think of it, I’m not sure of the difference between “layer” and “region”, or if they’re interchangeable.
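To make that distinction concrete, here’s a toy sketch (plain Python, not the NuPIC API; all function and variable names here are illustrative) of the two kinds of input: activating input scored by proximal overlap with the encoding, and depolarizing input scored by distal segments matching previously active cells.

```python
def activate_columns(encoding, proximal_perms, threshold=0.5, num_active=2):
    """Toy SP step: each column's proximal permanences are compared to the
    encoding; the columns with the highest overlap activate (k-winners)."""
    overlaps = []
    for col, perms in enumerate(proximal_perms):
        # Count encoder bits this column is connected to (perm >= threshold)
        # that are currently on.
        overlap = sum(1 for i, p in enumerate(perms)
                      if p >= threshold and encoding[i] == 1)
        overlaps.append((overlap, col))
    return {col for _, col in sorted(overlaps, reverse=True)[:num_active]}

def depolarize_cells(prev_active_cells, distal_segments, seg_threshold=2):
    """Toy TM step: a cell becomes predictive if any of its distal segments
    has enough synapses to cells that were active at the last time step."""
    predictive = set()
    for cell, segments in distal_segments.items():
        for segment in segments:
            if len(segment & prev_active_cells) >= seg_threshold:
                predictive.add(cell)
                break
    return predictive
```

So the SP step says “here’s what I’m seeing now” and the TM step says “here’s the context I’m seeing it in”, just with sets of bits.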
When I hear “layer” in HTM I think of one of these SP+TM modules, which has some activating input and some depolarizing input. The activating (SP) input doesn’t necessarily have to come from a raw data encoder; it could be an output from another layer. It’s just a population for the layer’s SP’s proximal segments to connect to. Likewise the depolarizing input doesn’t necessarily have to come from other cells in that layer. It’s just a population for the region/layer’s TM’s distal segments to connect to.
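Here’s a toy illustration of that “the input is just a population” idea (again not the real API; `spatial_pool` and the connection tables are made up): the second layer’s activating input is the first layer’s output rather than a raw encoding.

```python
def spatial_pool(input_bits, connections, num_active=2):
    """Toy SP: a column's overlap is how many of its connected input bits
    are on; the top-overlap columns activate (k-winners-take-all)."""
    overlaps = sorted(
        ((len(bits & input_bits), col) for col, bits in connections.items()),
        reverse=True,
    )
    return {col for _, col in overlaps[:num_active]}

# Layer 1 pools a raw encoding...
encoding = {0, 1, 3}
l1_connections = {"A": {0, 1}, "B": {1, 2}, "C": {3, 4}}
l1_active = spatial_pool(encoding, l1_connections)

# ...while layer 2's activating input is just layer 1's active columns.
l2_connections = {"X": {"A", "C"}, "Y": {"B"}, "Z": {"A"}}
l2_active = spatial_pool(l1_active, l2_connections)
```

Nothing in the pooling step cares whether the input population came from an encoder or from another layer.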
The largest data structure in HTM (afaik) is the macro column, which is composed of multiple layers. Here’s a repo where different kinds of macro columns are built using the Network API:
Here’s an example of one such macro column structure:
from this script:
htmresearch/l2456_network_creation.py at master · numenta/htmresearch · GitHub
We can see that L4 is activated by raw data from “Sensor” and depolarized by outputs from L6 and L2. These links between layers are set in the script by network.link(…).
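As a rough picture of what those link calls establish (this is a hypothetical sketch of the wiring idea, not the real Network API signatures), each link just declares which population feeds which input of which layer:

```python
# Each tuple mirrors one conceptual link the script sets up:
# (source population, destination layer, which input of the destination).
links = [
    ("Sensor", "L4", "activating"),
    ("L6", "L4", "depolarizing"),
    ("L2", "L4", "depolarizing"),
]

def inputs_of(layer, kind):
    """Collect the sources wired into one kind of input on a layer."""
    return {src for src, dst, k in links if dst == layer and k == kind}
```

With that view, L4’s activating input is {Sensor} and its depolarizing input is {L6, L2}, matching the diagram.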
I hope this helps shed more light, or at least leads to useful follow-ups.