Top-down feedback examples in NuPIC

Are there any examples of networks with top-down feedback implemented in NuPIC? Something similar to the third example in the NuPIC Network API video, around minute 7. Perhaps htmresearch is the right place to look for such examples, including this feedback project?

It seems a few examples are located in /htmresearch/frameworks/layers

Yes. The code for creating those networks is here:

I see there is a delay implemented in the apical feedback. Why is it needed? Wouldn’t the compute function output it on the next step anyway?

  # Link L2 feedback to L4
  if networkConfig.get("enableFeedback", True):
    network.link(L2ColumnName, L4ColumnName, "UniformLink", "",
                 srcOutput="feedForwardOutput", destInput="apicalInput",
                 propagationDelay=1)

I must defer this question to @scott or @mrcslws.

You’re correct, this just makes the delay more explicit. In practice, the only difference is: after a timestep, you can call L4Region.getInputData("apicalInput") and get its current input, rather than getting its next input.

Example timestep without this propagation delay:

  • L4 Compute
    • L2’s feedforwardInput is changed
  • L2 Compute
    • L4’s apicalInput is changed
    • …but technically it won’t be consumed until next timestep, because of the order we compute the regions.

Example timestep with this propagation delay:

  • L4 Compute
    • L2’s feedforwardInput is changed
  • L2 Compute
    • L4’s apicalInput for the next timestep is changed

The Network API today calls regions in a fixed order. You could imagine a different version that has the “propagationDelay” assigned correctly for every link. It could generate a dependency graph and call regions in the appropriate order automatically, perhaps calling some of them in parallel. (But today we rely on the region order, e.g. above with L2’s feedforwardInput.)
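The behavior described above can be sketched with a toy model in plain Python. This is not NuPIC code; the buffer/delivery semantics are an assumption based on this explanation, and "L2"/"L4" just mirror the thread:

```python
from collections import deque

def run(delay, steps):
    """Simulate one feedback link (L2 -> L4) under a fixed compute order
    (L4 first, then L2). For simplicity, L2's "output" at timestep t is
    just the number t. Returns (consumed, observed): the apical value L4
    read at each timestep, and the value a getInputData-style call would
    see in the input buffer after each timestep."""
    in_transit = deque([None] * delay)  # outputs staged by the delay
    apical_input = None                 # L4's current input buffer
    consumed, observed = [], []
    for t in range(steps):
        # A delayed link delivers a staged output at the start of the step.
        if delay:
            apical_input = in_transit.popleft()
        consumed.append(apical_input)    # L4 compute consumes its input
        l2_output = t                    # L2 compute produces its output
        if delay:
            in_transit.append(l2_output)  # staged for the next timestep
        else:
            apical_input = l2_output      # written straight into L4's input
        observed.append(apical_input)    # buffer contents after the timestep
    return consumed, observed
```

With either setting, L4 consumes the same sequence ([None, 0, 1] over three steps); only what you observe in the buffer after a timestep differs, which matches the "current input vs. next input" distinction above.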

One case where propagation delay is needed is when you have symmetric connections between regions. In our multi-column experiments, the L2 regions have symmetric connections: you want each L2 region at time t to receive lateral inputs from the other L2 regions' outputs from t-1. You can't do this symmetrically without propagation delay, because the regions are processed serially.
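A toy model (plain Python, not NuPIC; the delivery semantics and names are assumptions based on this thread) makes the asymmetry visible — two laterally connected regions computed serially, with and without a one-step delay:

```python
from collections import deque

def run_columns(delay, steps):
    """Two laterally connected L2 regions, computed serially (A, then B).
    Each region's output at timestep t is the pair (name, t). Returns the
    lateral input each region saw at every timestep."""
    a_to_b = b_to_a = None             # delivered lateral inputs
    staged_ab = deque([None] * delay)  # outputs in transit (the delay)
    staged_ba = deque([None] * delay)
    seen_a, seen_b = [], []
    for t in range(steps):
        if delay:  # delayed links deliver at the start of the timestep
            a_to_b = staged_ab.popleft()
            b_to_a = staged_ba.popleft()
        # Region A computes first.
        seen_a.append(b_to_a)
        if delay:
            staged_ab.append(("A", t))
        else:
            a_to_b = ("A", t)          # visible to B within the same step
        # Region B computes second.
        seen_b.append(a_to_b)
        if delay:
            staged_ba.append(("B", t))
        else:
            b_to_a = ("B", t)
    return seen_a, seen_b
```

Without the delay, A sees B's output from t-1 but B sees A's output from the same timestep t; with delay=1, both regions symmetrically see the other's t-1 output.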

Thank you for your answers. Please correct me if I'm wrong: in a simple network with two regions (a single column with a single L2 and a single L4), the delays are not needed, because the current output from one region can be consumed immediately by the other region; there are no other inputs to wait for. On the other hand, when we have a slightly more complex network consisting of two such columns that also have lateral connections between their L2 regions, delay=1 becomes necessary. With the delay we can make sure that the inputs to a cell are aggregated and processed together on the next step: each region is computed with its input from t-1, uncontaminated by the outputs other regions produce at t.

Should I then, as a rule, set delay=1 on all of the connections of any network that contains lateral connections in addition to feedforward and feedback connections? What about a network with a classifier region that receives input from an encoder and a temporal memory region? If the classifier region is always computed last, delays aren't needed, right?
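A toy rule of thumb consistent with the answers in this thread (this is a hypothetical helper, not a NuPIC API): with a fixed serial compute order, a link can carry same-timestep data only when its source is computed before its destination; links whose source runs at or after the destination (feedback, lateral, or self links) are the ones where an explicit propagationDelay matters.

```python
def links_needing_delay(compute_order, links):
    """For a fixed serial compute order, return the links whose source
    region is computed at or after its destination (feedback, lateral, or
    self links) -- these cannot deliver same-timestep data and are the
    candidates for an explicit propagationDelay. `links` is a list of
    (source, destination) pairs. A toy rule of thumb, not NuPIC code."""
    position = {name: i for i, name in enumerate(compute_order)}
    return [(src, dst) for src, dst in links
            if position[src] >= position[dst]]
```

By this rule, a classifier fed by an encoder and a temporal memory region needs no delays if it is computed last, while an L2-to-L4 feedback link is exactly the kind of "backward" link where the one-step lag should be made explicit.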

You need a delay between L2 and L4 to avoid an endless closed loop.
With the Network API you can extend the network without any limit; you can insert a classifier anywhere it makes sense!

A further question: is there a way to access predicted cells in the column pooler? columnPooler.getInputData("predictedInput") always gives me an empty array for some reason. Thanks

There could be many reasons for this problem. If you give us detailed information about how you are using it, I can help better…

  • There’s not really a notion of “predicted cells” in the ColumnPooler. Each timestep, each cell has some amount of “lateral support”, and this is used to determine which cells become active. So this lateral support is like a “prediction”, but there’s no clear threshold between cells that are / aren’t predicted.
  • The “predictedInput” is something different. It’s the predicted cells from the input (e.g. L4). It’s an experimental part of the ColumnPooler, used for online learning, which is off by default. It’s not currently hooked up in our main experiments. (Someone could hook it up locally by adding a new link here from L4 to L2.)
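To make the first bullet concrete, here is a toy sketch (hypothetical names, not the actual ColumnPooler implementation) of activity selection driven by a scalar support score, where the active set is chosen competitively rather than by a predicted/not-predicted cutoff:

```python
import heapq

def choose_active_cells(lateral_support, num_active):
    """Each cell carries a scalar "lateral support" score; the active set
    is the result of a competition (here, simply the top-k supports), so
    there is no binary set of "predicted cells" to read out afterwards."""
    top = heapq.nlargest(num_active, range(len(lateral_support)),
                         key=lateral_support.__getitem__)
    return sorted(top)
```

Because activity comes from a competition over graded scores, asking which cells were "predicted" has no clear answer, which is why no such output exists.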