Thank you so much for your response. I’m already aware of this API. Unfortunately, I do not see a way to build layers exactly as shown in my Python example above. The Java API provides a network/layer API, as you pointed out. However, inside HTM we have different kinds of inputs and outputs, such as activeCells, predictiveCells, winner cells, feedforward output, and maybe more.
Darn near the whole of the Network/Layer API has the same syntax, where “named” outputs are directed to “named” inputs. Let me take a look and work up an explanation for you…
Also (@rhyolight), am I missing something that the “link()” method might be doing in NuPIC’s Network API that I’m not doing in the Java version? It could be me that’s missing something! I don’t understand what about @ddobric’s example above he’s not seeing in the Java version, because it looks plain as day to me!
I think within a ‘network’ object, this line connects the outputs of an encoder to a certain region in the network (L4, where the usual SP+TM algorithms are run). If I understand correctly, NuPIC OPF models essentially call this function from the Network API when they are instantiated.
Comparable lines (from the L2456 network) are:
# Link up the sensors
network.link(locationInputName, L6ColumnName, "UniformLink", "",
network.link(coarseSensorInputName, L6ColumnName, "UniformLink", "",
network.link(sensorInputName, L4ColumnName, "UniformLink", "",
Using the raw Network API you can also connect Layers to each other (like modulating Layer 4 with Layer 6 grid cell activity as done in the L4L6 network model).
# Link L6 to L4
network.link(L6ColumnName, L4ColumnName, "UniformLink", "",
So essentially the same holds for the Java version. You connect the Sensor responsible for “reading” the input source type (there are a few types which can act as sources) to an encoder of your choosing, then to a layer for the SpatialPooler, then the TemporalMemory, then to a classifier, etc.
As such (there are many different examples):
Parameters p = NetworkDemoHarness.getParameters();                 // "Default" test parameters (you will need to tweak)
p = p.union(NetworkDemoHarness.getNetworkDemoTestEncoderParams()); // Combine "default" encoder parameters.

Network network = Network.create("Network API Demo", p)            // Name the Network whatever you wish...
    .add(Network.createRegion("Region 1")                          // Name the Region whatever you wish...
        .add(Network.createLayer("Layer 2/3", p)                   // Name the Layer whatever you wish...
            .alterParameter(KEY.AUTO_CLASSIFY, Boolean.TRUE)       // (Optional) Add a CLAClassifier
            .add(Anomaly.create())                                 // (Optional) Add an Anomaly detector
            .add(new TemporalMemory())                             // Core Component, but also "optional"
            .add(new SpatialPooler())                              // Core Component, but also "optional"
            .add(Sensor.create(FileSensor::create, SensorParams.create(
                Keys::path, "", ResourceLocator.path("rec-center-hourly.csv")))))); // Sensors automatically connect to your source data, but you may omit this and pump data directly in!
@cogmission All the Java examples I know use the same programming model as the code you have shown above. This is all fine: it makes it easy for a developer to build the network, and it keeps the scientific details hidden. However, the Python examples above provide a way to explicitly specify which output to connect to which input. For example, connect the resulting ‘activeCells’ of a TM inside one layer to the ‘basalInput’ of the next layer. One could also try to connect the ‘predictiveCells’ of a TM to ‘basalInput’, etc.
The Java example shown above uses (I guess) the TM compute cycle output (which holds both active and predictive cells) as the input to the CLAClassifier. This is fine, but it does not demonstrate how to make connections (links) between layers as shown in my very first example at the beginning of this post.
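For what it’s worth, the distinction can be shown in a few lines of plain Python. This is a toy sketch under assumed names, not either library’s real API: a TM step produces several named result sets, and an explicit link lets you choose exactly one of them as the next layer’s input, whereas passing the whole compute result along forwards everything at once.

```python
# Toy, not the real APIs: one TM compute step bundles several named outputs.
tm_result = {
    "activeCells":     {1, 5, 9},
    "predictiveCells": {2, 6},
    "winnerCells":     {1, 9},
}

def pick(result, output_name):
    """Explicit link: choose one named output for a downstream named input."""
    return result[output_name]

# Explicit linking (the NuPIC-style pattern): only the chosen set
# reaches the next layer's 'basalInput'.
basal_input = pick(tm_result, "activeCells")

# Pass-the-whole-result (the pattern in the Java example above): the
# downstream component (e.g. a classifier) receives the entire bundle.
classifier_input = tm_result
```

Swapping `"activeCells"` for `"predictiveCells"` in the `pick` call is exactly the kind of experiment the explicit-link style makes easy.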
Possibly I just didn’t find the right unit test (or sample)? It might also be that this is not supported by the Java API, or that it happens transparently.