Temporal Memory region does not seem to learn

Hello,

I’m trying to understand the example in the Network API Guide (found here: http://nupic.docs.numenta.org/stable/quick-start/network.html). I tried to visualize the outputs of the Temporal Memory region to observe how the TemporalMemory learns to predict sequences. As I understand it, either no cells in a column were predicted and the whole column becomes active (bursting), or only the predicted cells become active.

When I look at the bottomUpOut output of this region, entire columns always seem to be active, which implies that cells are never in a predicted state. I also looked at the predictedActiveCells output which, as I understand it, indicates the active cells that were in a predicted state, and I never see any such cells. The inferenceMode and learningMode parameters are both set to 1 in the example I used. (A minimal sketch of the check I mean is shown after the parameter list below.) The parameters of the Temporal Memory region are the following:

verbosity: 0
columnCount: 2048
cellsPerColumn: 32
inputWidth: 2048
seed: 1960
temporalImp: cpp
newSynapseCount: 20
initialPerm: 0.21
permanenceInc: 0.1
permanenceDec: 0.1
maxAge: 0
globalDecay: 0.0
maxSynapsesPerSegment: 32
maxSegmentsPerCell: 128
minThreshold: 12
activationThreshold: 16
outputType: normal
pamLength: 1 
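
To make the check concrete, here is roughly what I am looking at (a sketch rather than my exact code; it assumes the per-cell outputs of the TM region are flat arrays of length columnCount * cellsPerColumn = 2048 * 32):

import numpy

# Sketch: count bursting columns vs. predicted-then-active cells.
# Assumption: activeCells / predictedActiveCells are flat per-cell arrays
# of length columnCount * cellsPerColumn (2048 * 32).
activeCells = network.regions["TM"].getOutputData("activeCells")
predictedActiveCells = network.regions["TM"].getOutputData("predictedActiveCells")

perColumn = activeCells.reshape(2048, 32)
activeColumns = numpy.flatnonzero(perColumn.any(axis=1))
burstingColumns = numpy.flatnonzero(perColumn.all(axis=1))  # all cells active -> nothing was predicted
print("active columns: %d, bursting columns: %d, predicted-then-active cells: %d"
      % (len(activeColumns), len(burstingColumns), int(predictedActiveCells.sum())))

In my runs, every active column shows up as bursting and predictedActiveCells sums to zero.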

The code I used is exactly the same as in the documentation, except that in runHotGym I added a few lines inside the for loop to get the indexes of the active cells (I know this is bad code… I just quickly inserted those lines to observe what happens inside the network):

bestPrediction = 0
for iteration in range(0, numRecords, N):
    network.run(N)
    currentValue = network.regions["sensor"].getOutputData("actValueOut")
    print("currentValue:" + str(currentValue) +
          " previousPrediction:" + str(bestPrediction))
    predictionResults = getPredictionResults(network, "classifier")
    oneStep = predictionResults[1]["predictedValue"]
    oneStepConfidence = predictionResults[1]["predictionConfidence"]
    fiveStep = predictionResults[5]["predictedValue"]
    fiveStepConfidence = predictionResults[5]["predictionConfidence"]
    bestPrediction = oneStep

    # Indices of the active columns (bottomUpOut output of the TM region)
    actColumns = network.regions["TM"].getOutputData("bottomUpOut")
    activeIndexColumns = []
    for i in range(0, len(actColumns)):
        if actColumns[i] == 1:
            activeIndexColumns.append(i)
    print(activeIndexColumns)

    # Indices of the active cells
    actCells = network.regions["TM"].getOutputData("activeCells")
    activeIndexCells = []
    for i in range(0, len(actCells)):
        if actCells[i] == 1:
            activeIndexCells.append(i)
    print(activeIndexCells)

    # Indices of the cells that were predicted and then became active
    predCells = network.regions["TM"].getOutputData("predictedActiveCells")
    predictedIndexCells = []
    for i in range(0, len(predCells)):
        if predCells[i] == 1:
            predictedIndexCells.append(i)
    print(predictedIndexCells)

    result = (oneStep, oneStepConfidence * 100,
              fiveStep, fiveStepConfidence * 100)
    print("1-step: {:16} ({:4.4}%)\t 5-step: {:16} ({:4.4}%)".format(*result))

Do you know if this is normal, or should I be able to observe cells in a predicted state in this example? (PS: sorry for my English.)

If you are looking to find the actual cell values of the SP and TM regions, the only way I have done this is by getting the algorithm instances, like this:
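
(A minimal sketch, assuming the region names "SP" and "TM" from the quick-start network and that the SPRegion/TMRegion implementations expose getAlgorithmInstance():)

# Assumption: regions named "SP" and "TM" as in the quick-start example;
# getSelf() returns the Python region instance, and getAlgorithmInstance()
# returns the underlying SpatialPooler / temporal memory algorithm object.
sp = network.regions["SP"].getSelf().getAlgorithmInstance()
tm = network.regions["TM"].getSelf().getAlgorithmInstance()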

Once you have those, you can inspect them as shown in the conjureXXX functions in TmFacade and SpFacade in the NuPIC Usage FAQ.
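
For example, something along these lines (a rough, untested sketch; it assumes the TM instance is a BacktrackingTM-style object, which I believe temporalImp "cpp" gives you, and that it exposes getPredictedState(); see the FAQ for the exact accessors):

import numpy

# Assumption: tm exposes getPredictedState(), returning a
# (numberOfCols, cellsPerColumn) array marking the predicted cells.
predictedState = tm.getPredictedState()
predictedColumns = numpy.flatnonzero(predictedState.max(axis=1))
print("columns with predicted cells: %s" % predictedColumns)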
