Implementing the Hot Gym Example

Hello,

I’m trying to implement the example given in https://nupic.docs.numenta.org/0.8.0.dev0/quick-start/algorithms.html#encoding-data, but with generated sinusoidal data.
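
(For reference, since the generator script isn’t shown: a minimal sketch of how a file like shortwave2.csv could be produced. The 15-minute timestamp spacing and ~5 samples per period are my assumptions — the period comes up in the replies below — not necessarily the exact values used:)

import csv
import datetime
import math

start = datetime.datetime(2017, 1, 1)
with open("shortwave2.csv", "w") as fout:
  writer = csv.writer(fout)
  writer.writerow(["datetime", "signal"])
  for i in range(3000):
    # One sample every 15 minutes; ~5 samples per sine period (assumed).
    timestamp = start + datetime.timedelta(minutes=15 * i)
    signal = math.sin(2 * math.pi * i / 5.0)
    writer.writerow([timestamp.strftime("%Y-%m-%d %H:%M:%S"), signal])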

Code:

import csv
import datetime
import numpy
import os
import yaml
import matplotlib.pyplot as plot

from nupic.algorithms.sdr_classifier_factory import SDRClassifierFactory
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory
from nupic.encoders.date import DateEncoder
from nupic.encoders.random_distributed_scalar import \
  RandomDistributedScalarEncoder

_NUM_RECORDS = 3000
_EXAMPLE_DIR = os.path.dirname(os.path.abspath(__file__))
_INPUT_FILE_PATH = os.path.join(_EXAMPLE_DIR, "data", "shortwave2.csv")
_PARAMS_PATH = os.path.join(_EXAMPLE_DIR, "params", "model.yaml")

def runTest(numRecords):
  print(_PARAMS_PATH)
  
  with open(_PARAMS_PATH, "r") as f:
    modelParams = yaml.safe_load(f)["modelParams"]
    enParams = modelParams["sensorParams"]["encoders"]
    spParams = modelParams["spParams"]
    tmParams = modelParams["tmParams"]

  timeOfDayEncoder = DateEncoder(forced=True)
  scalarEncoder = RandomDistributedScalarEncoder(enParams["signal"]["resolution"])

  encodingWidth = (timeOfDayEncoder.getWidth()
                   + scalarEncoder.getWidth())

  sp = SpatialPooler(
    inputDimensions=(encodingWidth,),
    columnDimensions=(spParams["columnCount"],),
    potentialPct=spParams["potentialPct"],
    potentialRadius=encodingWidth,
    globalInhibition=spParams["globalInhibition"],
    localAreaDensity=spParams["localAreaDensity"],
    numActiveColumnsPerInhArea=spParams["numActiveColumnsPerInhArea"],
    synPermInactiveDec=spParams["synPermInactiveDec"],
    synPermActiveInc=spParams["synPermActiveInc"],
    synPermConnected=spParams["synPermConnected"],
    boostStrength=spParams["boostStrength"],
    seed=spParams["seed"],
    wrapAround=True
  )

  tm = TemporalMemory(
    columnDimensions=(tmParams["columnCount"],),
    cellsPerColumn=tmParams["cellsPerColumn"],
    activationThreshold=tmParams["activationThreshold"],
    initialPermanence=tmParams["initialPerm"],
    connectedPermanence=spParams["synPermConnected"],
    minThreshold=tmParams["minThreshold"],
    maxNewSynapseCount=tmParams["newSynapseCount"],
    permanenceIncrement=tmParams["permanenceInc"],
    permanenceDecrement=tmParams["permanenceDec"],
    predictedSegmentDecrement=0.0,
    maxSegmentsPerCell=tmParams["maxSegmentsPerCell"],
    maxSynapsesPerSegment=tmParams["maxSynapsesPerSegment"],
    seed=tmParams["seed"]
  )

  classifier = SDRClassifierFactory.create()
  results = []
  with open(_INPUT_FILE_PATH, "r") as fin:
    reader = csv.reader(fin)
    headers = next(reader)  # skip the CSV header row

    for count, record in enumerate(reader):

      if count >= numRecords: break

      # Parse the timestamp string into a Python datetime object.
      dateValue = datetime.datetime.strptime(record[0], "%Y-%m-%d %H:%M:%S")

      # Convert data value string into float.
      sig = float(record[1])

      # To encode, we need to provide zero-filled numpy arrays for the encoders
      # to populate.
      timeOfDayBits = numpy.zeros(timeOfDayEncoder.getWidth())
      sigBits = numpy.zeros(scalarEncoder.getWidth())

      # Now we call the encoders to create bit representations for each value.
      timeOfDayEncoder.encodeIntoArray(dateValue, timeOfDayBits)
      scalarEncoder.encodeIntoArray(sig, sigBits)

      # Concatenate all these encodings into one large encoding for Spatial
      # Pooling.
      encoding = numpy.concatenate(
        [timeOfDayBits, sigBits]
      )

      encoding = sigBits

      # Create an array to represent active columns, all initially zero. This
      # will be populated by the compute method below. It must have the same
      # dimensions as the Spatial Pooler.
      activeColumns = numpy.zeros(spParams["columnCount"])

      # Execute Spatial Pooling algorithm over input space.
      sp.compute(encoding, True, activeColumns)

      activeColumnIndices = numpy.nonzero(activeColumns)[0]

      # Execute Temporal Memory algorithm over active mini-columns.
      tm.compute(activeColumnIndices, learn=True)

      activeCells = tm.getActiveCells()

      # Get the bucket info for this input value for classification.
      bucketIdx = scalarEncoder.getBucketIndices(sig)[0]

      # Run classifier to translate active cells back to scalar value.
      classifierResult = classifier.compute(
        recordNum=count,
        patternNZ=activeCells,
        classification={
          "bucketIdx": bucketIdx,
          "actValue": sig
        },
        learn=True,
        infer=True
      )

      # Print the best prediction for 1 step out.
      oneStepConfidence, oneStep = sorted(
        zip(classifierResult[1], classifierResult["actualValues"]),
        reverse=True
      )[0]
      print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))
      results.append(oneStepConfidence * 100)

    # Plot the prediction confidences for the last 500 of the 3000 records.
    sub = results[2500:3000]
    plot.plot(sub)
    plot.grid(True)
    plot.show()

if __name__ == "__main__":
  runTest(_NUM_RECORDS)

Config file:

model: HTMPrediction
version: 1

predictAheadTime: null


modelParams:
  inferenceType: TemporalNextStep

  sensorParams:
    verbosity: 0

    encoders:
      signal:
        fieldname: signal
        name: signal
        resolution: 0.88
        seed: 1
        type: RandomDistributedScalarEncoder
      datetime:
        fieldname: datetime
        name: datetime
        type: DateEncoder

    sensorAutoReset: null

  spEnable: true

  spParams:
    inputWidth: 946
    columnCount: 2048
    spVerbosity: 0
    spatialImp: cpp
    globalInhibition: 1
    localAreaDensity: -1.0
    numActiveColumnsPerInhArea: 40
    seed: 1956
    potentialPct: 0.85
    synPermConnected: 0.1
    synPermActiveInc: 0.04
    synPermInactiveDec: 0.005
    boostStrength: 3.0

  tmEnable: true

  tmParams:
    verbosity: 0
    columnCount: 2048
    cellsPerColumn: 32
    inputWidth: 2048
    seed: 1960
    temporalImp: cpp
    newSynapseCount: 20
    initialPerm: 0.21
    permanenceInc: 0.1
    permanenceDec: 0.1
    maxAge: 0
    globalDecay: 0.0
    maxSynapsesPerSegment: 32
    maxSegmentsPerCell: 128
    minThreshold: 12
    activationThreshold: 16
    outputType: normal
    pamLength: 1

  clParams:
    verbosity: 0
    regionName: SDRClassifierRegion
    alpha: 0.1
    steps: '1,5'
    maxCategoryCount: 1000
    implementation: cpp

  trainSPNetOnlyIfRequested: false

Example output:

1-step: -0.804489083283 (51.25%)
1-step: 0.93655602968 (47.5%)
1-step: 0.187003990938 (76.98%)
1-step: 0.929403642176 (5.918%)
1-step: 0.187003990938 (36.48%)
1-step: -0.788550323498 (0.8647%)
1-step: 0.128416748256 (27.99%)
1-step: 0.901671961023 (92.11%)
1-step: 0.901671961023 (0.6367%)
1-step: 0.859828313216 (21.62%)
1-step: -0.835778050704 (66.3%)
1-step: -0.835778050704 (22.05%)
1-step: 0.116447583296 (56.18%)
1-step: 0.116447583296 (6.337%)
1-step: 0.888842026851 (59.55%)
1-step: 0.888842026851 (52.27%)
1-step: 0.922165177596 (43.04%)
1-step: -0.713035043798 (74.61%)
1-step: -0.627426267459 (42.81%)
1-step: 0.331628494255 (79.58%)
1-step: 0.922165177596 (19.21%)
1-step: 0.922165177596 (56.07%)

Does anyone know why the resulting probabilities are so random? What am I doing wrong?

First of all, is it right that each period of the sine wave is composed of ~5 points? So that the plot of 500 has ~100 repeats of the wave?

Second, is this right for sure?

encoding = sigBits

Seems like this cuts the timeOfDayBits out of the encoding, since it overwrites the concatenated array built just above it.

Third, I’d plot the raw anomaly scores first, before any predictions, since they don’t rely on the classifier. The classifier adds complexity since it has its own learning system, apart from the core SP+TM. I’d make sure the anomaly scores settle down after ~5 repeats of the sequence (assuming there are ~5 points per repeat).
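
(A minimal sketch of how the raw anomaly score could be computed inside the loop above, using computeRawAnomalyScore from nupic.algorithms.anomaly; the tm object and activeColumnIndices are the ones from the posted code:)

from nupic.algorithms.anomaly import computeRawAnomalyScore

prevPredictedColumns = numpy.array([])  # initialize before the loop

# Inside the loop, right after tm.compute(...):
anomalyScore = computeRawAnomalyScore(activeColumnIndices,
                                      prevPredictedColumns)
print("Anomaly score: {}".format(anomalyScore))

# Remember which columns are predicted now, to score the next step.
prevPredictedColumns = numpy.unique(
  [tm.columnForCell(cell) for cell in tm.getPredictiveCells()]
)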

Lastly, for predictions I’d plot ‘oneStep’ along with the confidence values. I don’t know exactly how the confidence values are calculated by the classifier, but the oneSteps will show how long it’s taking the SP+TM+classifier to accurately forecast the raw values themselves. I’d be especially curious to see how much longer this takes than the settling of the anomaly scores.
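
(E.g., collecting both values inside the loop and plotting them together afterward; a sketch assuming the variable names from the posted code:)

# Inside the loop, store the prediction alongside its confidence:
results.append((oneStep, oneStepConfidence))

# After the loop:
oneSteps = [r[0] for r in results]
confidences = [r[1] for r in results]
fig, (ax1, ax2) = plot.subplots(2, sharex=True)
ax1.plot(oneSteps)
ax1.set_ylabel("1-step prediction")
ax2.plot(confidences)
ax2.set_ylabel("confidence")
plot.show()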


Hi @sheiser1,

Thank you so much for the reply and sorry for the late response!

encoding = numpy.concatenate( [timeOfDayBits, sigBits] )
encoding = sigBits

This was a stupid mistake, although changing it made no difference to the output.

The raw anomaly scores:
Anomaly score: 1.0
Anomaly score: 1.0
Anomaly score: 1.0
Anomaly score: 0.575
Anomaly score: 0.425
Anomaly score: 0.425
Anomaly score: 0.2
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
Anomaly score: 0.0
[... the remaining ~180 scores stay at 0.0, apart from occasional blips between 0.025 and 0.1 ...]

The onestep and confidence values:
1-step: -0.821, Confidence: 1.0
1-step: -0.821, Confidence: 0.00199600798403
1-step: -0.177, Confidence: 0.00199600798403
1-step: 0.733, Confidence: 0.00199203187251
1-step: 0.969, Confidence: 0.00198807157058
1-step: 0.8038, Confidence: 0.00198807157058
1-step: 0.8038, Confidence: 0.00198807157058
1-step: -0.0297, Confidence: 0.00205245148892
1-step: -0.0297, Confidence: 0.00467944714938
1-step: 0.8038, Confidence: 0.0024076705003
1-step: 0.71656, Confidence: 0.00291374132738
1-step: 0.801592, Confidence: 0.00517012366905
1-step: -0.0297, Confidence: 0.00641272720765
1-step: 0.7312144, Confidence: 0.00483515398319
1-step: -0.0297, Confidence: 0.00601545236259
1-step: 0.7312144, Confidence: 0.00426454536324
1-step: 0.7312144, Confidence: 0.00687699268936
1-step: 0.79715008, Confidence: 0.006039417523
1-step: 0.790205056, Confidence: 0.0228389789993
1-step: 0.004377, Confidence: 0.0182554982486
1-step: 0.004377, Confidence: 0.0218509346702
1-step: 0.790205056, Confidence: 0.00995013271517
1-step: 0.790205056, Confidence: 0.0165340977617
1-step: 0.8009435392, Confidence: 0.0159035674379
1-step: 0.83666047744, Confidence: 0.0458158194165
1-step: 0.04687473, Confidence: 0.0338717315276
1-step: 0.04687473, Confidence: 0.0451443044843
1-step: 0.83666047744, Confidence: 0.0141445948452
1-step: 0.83666047744, Confidence: 0.0344285671352
1-step: 0.776462334208, Confidence: 0.00400431832239
1-step: 0.841423633946, Confidence: 0.0236162150911
1-step: 0.0900186177, Confidence: 0.047499039946
1-step: 0.0900186177, Confidence: 0.106603174419
1-step: 0.0900186177, Confidence: 0.0403181784663
1-step: 0.841423633946, Confidence: 0.0137759247224
1-step: 0.841423633946, Confidence: 0.0359551341084
1-step: 0.885096543762, Confidence: 0.0327564891238
1-step: 0.820867580633, Confidence: 0.0144597070401
1-step: 0.048459122673, Confidence: 0.230901539289
1-step: 0.048459122673, Confidence: 0.0709398965241
1-step: 0.820867580633, Confidence: 0.10739632596
1-step: 0.820867580633, Confidence: 0.0347072325894
1-step: 0.845207306443, Confidence: 0.0404673087313
1-step: 0.84694511451, Confidence: 0.224022254689
1-step: 0.0549749701098, Confidence: 0.229271060258
1-step: 0.0549749701098, Confidence: 0.389443598305
1-step: 0.84694511451, Confidence: 0.087448511595
1-step: 0.84694511451, Confidence: 0.0395858431405
1-step: 0.816361580157, Confidence: 0.0159465774272
1-step: 0.86065310611, Confidence: 0.0694145499567
1-step: 0.0826477353538, Confidence: 0.124014062269
1-step: 0.0826477353538, Confidence: 0.212195406873
1-step: 0.0826477353538, Confidence: 0.39188192732
[... over the remaining ~150 steps the confidences keep rising and falling erratically, anywhere from below 0.01 to above 0.99, without converging ...]

Confidence values plotted:

I implemented this alternative hot gym example and I’m getting much more accurate results :thinking:

You mean more accurate 1-step predictions than with the sine data you were using before? I think the patterns in the hot gym data are more elaborate than a simple sine, and also have some noise. Does this mean you’re looking at 1-step predictions vs actual values for the hot gym example?

Also, how much more data is being used to train the hot gym model than the sine model? The classifier adds some complexity since it has its own training process, separate from the core SP+TM structures of HTM, so more data plus more intricate patterns may help the classifier predict better on the hot gym data. Not totally sure.

For the simplest sanity check on any model, I’d first ensure the encoder settings fit the spatial distribution of your data. So for the sine wave a scalar encoder should have a min/max of -1/1, but that should change for the hot gym data.
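
(For example, a ScalarEncoder sized for sine data in [-1, 1], or a rescaled RDSE resolution; the bucket count here is an arbitrary choice of mine:)

from nupic.encoders.scalar import ScalarEncoder
from nupic.encoders.random_distributed_scalar import \
  RandomDistributedScalarEncoder

# Classic scalar encoder with explicit min/max for a sine in [-1, 1].
sineEncoder = ScalarEncoder(w=21, minval=-1.0, maxval=1.0, n=400,
                            forced=True)

# Or keep the RDSE but scale its resolution to the data range: with a
# range of 2.0 and roughly 130 buckets, resolution is ~0.015 -- much
# finer than the 0.88 in the config, which suits hot gym kW values.
rdse = RandomDistributedScalarEncoder(resolution=2.0 / 130)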

Then I’d plot the raw anomaly scores right under the raw data. Anomaly scores will always start out at 1.0 and settle down at some rate, based on how periodic/predictable the sequences are. When there’s a change in the sequential behavior, the anomaly scores should spike, then settle again once another predictable pattern emerges.
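
(Something like this, assuming signals and anomalyScores are lists you’ve appended to during the run:)

import matplotlib.pyplot as plt

fig, (rawAx, anomAx) = plt.subplots(2, sharex=True)
rawAx.plot(signals)            # raw input values, collected per record
rawAx.set_ylabel("raw signal")
anomAx.plot(anomalyScores)     # raw anomaly scores, collected per record
anomAx.set_ylabel("anomaly score")
anomAx.set_ylim(0, 1.05)
anomAx.set_xlabel("record")
plt.show()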