Predicting a 3-tuple

I have a system where I am trying to predict a 3-tuple. The input is previous values of the 3-tuple, plus some other scalar inputs, plus a timestamp.

With particular reference to the encoders, what’s the best way to go about doing this?

Specifically, within model_params,

  • is there any difference between specifying the sensorParams as individual encoders (e.g. using ScalarEncoder), or using the MultiEncoder (with the individual encoders passed to it as a dict)? In other words, if specified as individual encoders, are the encoders just put into a MultiEncoder under the hood anyway? Do both approaches achieve the same result?

  • How should the ‘_classifierInput’ field be specified? I can only find examples where the predicted field is a scalar, not a tuple. (I haven’t dug into the CoordinateEncoder, which seems to be the most likely candidate. Is that right?) It’s not even clear to me why it is required, since many examples don’t seem to have it; however, a “TypeError: list indices must be integers, not NoneType” error occurs if it isn’t specified. (A sketch of the kind of config I mean, including my guess at ‘_classifierInput’, follows this list.)

  • Are there any examples where the predicted field is a tuple?
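
For concreteness, here is roughly the kind of sensorParams config I mean. The field names are made up, and the ‘_classifierInput’ entry is just my guess modeled on the hotgym model_params; it is exactly the part I’m unsure about when the thing being predicted is a whole 3-tuple.

'sensorParams': {
  'encoders': {
    # One ScalarEncoder per element of the tuple (hypothetical fields/ranges).
    'x': {'fieldname': 'x', 'name': 'x', 'type': 'ScalarEncoder',
          'n': 120, 'w': 21, 'minval': 0.0, 'maxval': 100.0, 'clipInput': True},
    'y': {'fieldname': 'y', 'name': 'y', 'type': 'ScalarEncoder',
          'n': 120, 'w': 21, 'minval': 0.0, 'maxval': 100.0, 'clipInput': True},
    'z': {'fieldname': 'z', 'name': 'z', 'type': 'ScalarEncoder',
          'n': 120, 'w': 21, 'minval': 0.0, 'maxval': 100.0, 'clipInput': True},
    'timestamp_timeOfDay': {'fieldname': 'timestamp', 'name': 'timestamp_timeOfDay',
                            'type': 'DateEncoder', 'timeOfDay': (21, 1)},
    # Which field goes here when the predicted value is the (x, y, z) tuple?
    '_classifierInput': {'fieldname': 'x', 'name': '_classifierInput',
                         'type': 'ScalarEncoder', 'classifierOnly': True,
                         'n': 120, 'w': 21, 'minval': 0.0, 'maxval': 100.0},
  },
},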

Thanks.

I asked @scott and @subutai about this. Here is a paraphrase of what they told me:

You can’t currently predict multiple fields in the OPF or network API. You could pull out the relevant data and manually pass it to three separate classifiers (SDRClassifier), one for each value in the tuple.

The classifiers take the current active cells in the TM, the bucket/category of the current input (for the field you want to predict), and the actual value for the field you want to predict.
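
As I understand it, that means something like the following per-field sketch (rough and untested; it assumes the SDRClassifier from nupic.algorithms.sdr_classifier and that you already have the TM’s active cell indices and each field’s bucket index for the current record):

from nupic.algorithms.sdr_classifier import SDRClassifier

# One classifier per element of the tuple; all three see the same TM state.
xClassifier = SDRClassifier(steps=[1], alpha=0.005)
yClassifier = SDRClassifier(steps=[1], alpha=0.005)
zClassifier = SDRClassifier(steps=[1], alpha=0.005)

def feedClassifier(classifier, recordNum, activeCells, bucketIdx, actValue):
  # Each call passes the shared TM active cell indices plus one field's
  # bucket index and raw value for the current record.
  result = classifier.compute(recordNum=recordNum,
                              patternNZ=activeCells,
                              classification={"bucketIdx": bucketIdx,
                                              "actValue": actValue},
                              learn=True,
                              infer=True)
  # result["actualValues"][i] is the value associated with bucket i;
  # result[1] holds the 1-step-ahead likelihood of each bucket.
  return result["actualValues"][result[1].argmax()]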

You can do it with the Network API, just not with the current OPF model we use.

Can you clarify whether tuples can be predicted using the Network API? The first sentence says they can’t, but the last sentence says they can.

Thanks

The way the RecordSensor region is currently written, you can only get the bucket and actual value for the one field you specify during setup. So if you wanted to use the Network API, you would have to either use a custom region with outputs for each of the three fields, use a separate RecordSensor region for each field (which I believe you can concatenate with links to the SP, although I’m not sure we have examples of this), or simply keep the classifiers outside the network, since you can always pull out whatever data you need from the encoders and the TM.
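
For that last option, “pull out whatever data you need” would look roughly like this (an untested sketch; it assumes the TM region is named "tm", that its "bottomUpOut" output carries the cells you want to classify against, and that you keep a plain encoder per field to compute bucket indices):

def getClassifierInputs(network, fieldEncoders, fieldValues):
  # Gather the TM active cell indices plus each field's bucket index.
  # fieldEncoders maps field name -> a plain encoder (e.g. ScalarEncoder)
  # mirroring that field's sensor; fieldValues maps field name -> raw value.
  activeCells = network.regions["tm"].getOutputData("bottomUpOut").nonzero()[0]
  bucketIndices = {name: enc.getBucketIndices(fieldValues[name])[0]
                   for name, enc in fieldEncoders.items()}
  return activeCells, bucketIndices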

Scott,
I’d like to try the solution you recommend of keeping the classifiers outside the network, but I’m not sure how to do it.
I want to have an SP-TM network that takes in a set of 3 scalar values and then get a prediction of those 3 values.
How would you create 3 separate classifiers outside the network and pull the data into them? (The placeholder lines in runNetwork() that just set predFoo/predBar/predBaz are what need to be replaced; my rough guess at how is after the listing.)
Here is my example code:

#!/usr/bin/env python
import json
from nupic.engine import Network
from nupic.encoders import ScalarEncoder

def createNetwork():
  network = Network()

  #
  # Sensors
  #

  fooSensor = network.addRegion('fooSensor', 'ScalarSensor',
                                        json.dumps({'n': 120,
                                                    'w': 21,
                                                    'minValue': -4.8,
                                                    'maxValue': 4.8,
                                                    'clipInput': True}))
  barSensor = network.addRegion('barSensor', 'ScalarSensor',
                                        json.dumps({'n': 120,
                                                    'w': 21,
                                                    'minValue': -48,
                                                    'maxValue': 48,
                                                    'clipInput': True}))
  bazSensor = network.addRegion('bazSensor', 'ScalarSensor',
                                        json.dumps({'n': 120,
                                                    'w': 21,
                                                    'minValue': -0.48,
                                                    'maxValue': 0.48,
                                                    'clipInput': True}))

  #
  # Add a SPRegion, a region containing a spatial pooler
  #
  inputWidth = 0
  inputWidth += fooSensor.getParameter('n')
  inputWidth += barSensor.getParameter('n')
  inputWidth += bazSensor.getParameter('n')

  network.addRegion("sp", "py.SPRegion",
                    json.dumps({
                      "spatialImp": "cpp",
                      "globalInhibition": 1,
                      "columnCount": 2048,
                      "inputWidth": inputWidth,
                      "numActiveColumnsPerInhArea": 40,
                      "seed": 1956,
                      "potentialPct": 0.8,
                      "synPermConnected": 0.1,
                      "synPermActiveInc": 0.0001,
                      "synPermInactiveDec": 0.0005,
                      "maxBoost": 1.0,
                    }))

  #
  # Input to the Spatial Pooler
  #
  network.link("fooSensor", "sp", "UniformLink", "")
  network.link("barSensor", "sp", "UniformLink", "")
  network.link("bazSensor", "sp", "UniformLink", "")

  #
  # Add a TPRegion, a region containing a Temporal Memory
  #
  network.addRegion("tm", "py.TPRegion",
                    json.dumps({
                      "columnCount": 2048,
                      "cellsPerColumn": 32,
                      "inputWidth": 2048,
                      "seed": 1960,
                      "temporalImp": "cpp",
                      "newSynapseCount": 20,
                      "maxSynapsesPerSegment": 32,
                      "maxSegmentsPerCell": 128,
                      "initialPerm": 0.21,
                      "permanenceInc": 0.1,
                      "permanenceDec": 0.1,
                      "globalDecay": 0.0,
                      "maxAge": 0,
                      "minThreshold": 9,
                      "activationThreshold": 12,
                      "outputType": "normal",
                      "pamLength": 3,
                    }))
  #
  # Add a ClassifierRegion, a region for output predictions
  #
  #network.addRegion("cl", "py.SDRClassifierRegion",
  #                 json.dumps({
  #                   "verbosity": 1,
  #                   "alpha": 0.005,
  #                   "steps": '1'
  #                  }))
 
  network.link("sp", "tm", "UniformLink", "")
  network.link("tm", "sp", "UniformLink", "", srcOutput="topDownOut",
               destInput="topDownIn")

  # Enable inference mode to be able to get predictions
  network.regions['tm'].setParameter("inferenceMode", True)

  return network

def runNetwork(network):
  fooSensor = network.regions['fooSensor']
  barSensor = network.regions['barSensor']
  bazSensor = network.regions['bazSensor']

  foo = 1
  bar = 2
  baz = 0.3
  for i in range(10):
    # For core encoders, use the network API.
    fooSensor.setParameter('sensedValue', foo)
    barSensor.setParameter('sensedValue', bar)
    bazSensor.setParameter('sensedValue', baz)

    network.run(1)
    # TODO: get a predicted foo from the network (placeholder for now)
    predFoo = foo
    # TODO: get a predicted bar from the network (placeholder for now)
    predBar = bar
    # TODO: get a predicted baz from the network (placeholder for now)
    predBaz = baz

    foo = foo + 1
    if foo > 4.8: foo = -4
    bar = bar + 2
    if bar > 48: bar = -48
    baz = baz + 0.03
    if baz >= 0.48: baz = -0.47

    print "foo: predicted={:.1f} actual={:.1f}".format(predFoo,foo)
    print "bar: predicted={} actual={}".format(predBar,bar)
    print "baz: predicted={:.2f} actual={:.2f}".format(predBaz,baz)

if __name__ == "__main__":
  network = createNetwork()
  runNetwork(network)
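
Here is my rough guess at the missing piece, adapted from the sketches above. It mirrors each ScalarSensor with a plain ScalarEncoder so I can compute bucket indices, and keeps one SDRClassifier per field outside the network. It also assumes the TM region’s "bottomUpOut" output carries the active cells; I’m not at all sure that last part is right.

import numpy
from nupic.algorithms.sdr_classifier import SDRClassifier
from nupic.encoders import ScalarEncoder

# Plain encoders mirroring the three ScalarSensor regions, used only to
# compute bucket indices for the external classifiers.
encoders = {
  'foo': ScalarEncoder(n=120, w=21, minval=-4.8, maxval=4.8, clipInput=True),
  'bar': ScalarEncoder(n=120, w=21, minval=-48, maxval=48, clipInput=True),
  'baz': ScalarEncoder(n=120, w=21, minval=-0.48, maxval=0.48, clipInput=True),
}
classifiers = {name: SDRClassifier(steps=[1], alpha=0.005)
               for name in encoders}

def predictTuple(network, recordNum, values):
  # Call after network.run(1); `values` maps 'foo'/'bar'/'baz' to the values
  # that were just fed in. Returns the predicted next value for each field.
  # Is "bottomUpOut" the right TM output to use for the active cells?
  activeCells = network.regions['tm'].getOutputData('bottomUpOut').nonzero()[0]
  predictions = {}
  for name, value in values.items():
    result = classifiers[name].compute(
        recordNum=recordNum,
        patternNZ=activeCells,
        classification={'bucketIdx': encoders[name].getBucketIndices(value)[0],
                        'actValue': value},
        learn=True,
        infer=True)
    predictions[name] = result['actualValues'][numpy.argmax(result[1])]
  return predictions

# Intended use inside runNetwork(), right after network.run(1):
#   preds = predictTuple(network, i, {'foo': foo, 'bar': bar, 'baz': baz})
#   predFoo, predBar, predBaz = preds['foo'], preds['bar'], preds['baz']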

How could I pull out the relevant data from OPF’s TM to do it?