Good points there @Paul_Lamb.
It’s true, I don’t think I’ve seen docs describing that; I must’ve gotten it from listening to Numenta people like Jeff & Subutai talk.
I know there is a talk where they do contrast the mechanics of HTM and conventional ANN, but it’s true that terms like “feedforward” and “layers” have totally different meaning in HTM vs ANN.
When I hear “feedforward” in HTM I think of Spatial Pooler activation. The “feedforward” input is ingested by the SP and determines which columns will activate. This is as opposed to the TM deciding which cell(s) in those columns will become predictive, and thus capable of inhibiting all other cells in their column if that column activates at the next time step. I’m not sure of the term for this kind of input, though; I don’t think it’s “feedback”.
So rather than thinking in terms of “feed forward/back/whatever”, I think of it as activating inputs (used by the SP’s proximal dendrite segments to choose winning columns) and depolarizing inputs (used by the TM’s distal dendrite segments to make certain cells predictive). The SP’s proximal segments are connected to the encoding, thus saying “here’s what I’m seeing now”, while the TM’s distal segments are connected to other cells within the “region/layer”, thus saying “here’s the sequential context in which I’m seeing it”. Come to think of it, I’m not sure of the difference between “layer” and “region”, or whether they’re interchangeable.
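To make the activating vs. depolarizing distinction concrete, here’s a toy sketch. This is my own simplification for this post, not Numenta’s implementation; the function names, data shapes, and thresholds are all invented.

```python
# Toy illustration of the two input roles -- NOT Numenta's code.

def activate_columns(encoding, proximal_synapses, num_winners=2):
    """SP-style step: each column's overlap between its proximal synapses
    and the current encoding decides the winning (active) columns."""
    overlaps = {col: len(encoding & synapses)
                for col, synapses in proximal_synapses.items()}
    return set(sorted(overlaps, key=overlaps.get, reverse=True)[:num_winners])

def depolarize_cells(prev_active_cells, distal_segments, threshold=2):
    """TM-style step: a cell whose distal segment matches enough of the
    previously active cells becomes predictive (depolarized)."""
    predictive = set()
    for cell, segments in distal_segments.items():
        if any(len(prev_active_cells & seg) >= threshold for seg in segments):
            predictive.add(cell)
    return predictive

# Activating input: "here's what I'm seeing now" (bits from an encoder).
active = activate_columns({0, 1, 5}, {"c0": {0, 1}, "c1": {5}, "c2": {7}})
# Depolarizing input: "here's the context" (other cells' prior activity).
predictive = depolarize_cells(
    {("c0", 0), ("c1", 1)},
    {("c2", 0): [{("c0", 0), ("c1", 1)}], ("c2", 1): [{("c9", 0)}]})
```

The point is just the asymmetry: the activating input competes to pick whole columns, while the depolarizing input only biases individual cells toward prediction.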
When I hear “layer” in HTM I think of one of these SP+TM modules, which has some activating input and some depolarizing input. The activating (SP) input doesn’t necessarily have to come from a raw data encoder; it could be an output from another layer. It’s just a population for the layer’s SP’s proximal segments to connect to. Likewise the depolarizing (TM) input doesn’t necessarily have to come from other cells in that layer. It’s just a population for the region/layer’s TM’s distal segments to connect to.
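The “a layer is just an SP+TM module with two input populations” idea could be sketched like this. Again this is a toy of my own, with invented names and the real SP/TM computation stubbed out; the only point is that either input can be wired to anything.

```python
# Toy sketch of a layer as an SP+TM module -- NOT Numenta's code.

class Layer:
    def __init__(self, name):
        self.name = name

    def compute(self, activating_bits, depolarizing_bits):
        """activating_bits feed the SP's proximal segments (an encoder's
        output OR another layer's output); depolarizing_bits feed the TM's
        distal segments (this layer's own cells OR another layer's)."""
        # Stub: real code would run SP column selection, then TM prediction.
        active = {(self.name, b) for b in activating_bits}
        predictive = {(self.name, b) for b in depolarizing_bits}
        return active, predictive

# Stacking: layer B's activating input is layer A's active output,
# i.e. one layer's output is just a population for another layer's SP.
A, B = Layer("A"), Layer("B")
a_active, _ = A.compute({1, 2}, set())
b_active, _ = B.compute({bit for (_, bit) in a_active}, a_active)
```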
The largest data structure in HTM (afaik) is the macro column, which is composed of multiple layers. Here’s a repo where different kinds of macro columns are built using the Network API:
Here’s an example of one such macro column structure:
from this script: htmresearch/l2456_network_creation.py at master · numenta/htmresearch · GitHub
We can see that L4 is activated by raw data from “Sensor” and depolarized by outputs from L6 and L2. These links between layers are set in the script by network.link(…).
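That wiring can be restated compactly as plain data. This is just my own toy table for reasoning about who feeds whom, not the Network API itself; the endpoint names come from the description above, so double-check the details against the script.

```python
# Paraphrase of the L2456 wiring -- NOT the Network API, just a table.
links = [
    ("Sensor", "L4", "activating"),    # raw data activates L4's columns
    ("L6",     "L4", "depolarizing"),  # L6 output depolarizes L4 cells
    ("L2",     "L4", "depolarizing"),  # L2 output depolarizes L4 cells
]

def inputs_to(layer, role):
    """List the sources feeding `layer` in the given role."""
    return [src for src, dst, r in links if dst == layer and r == role]
```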
I hope this helps shed more light, or at least leads to useful follow-ups.