Help debugging anomaly detection


Would you be able to print out exactly what these parameters are (at the algorithm level)? I know them well and can inspect them for differences.


I just pushed an update to the “Connections” object to include the new TM’s params in the printout. The Algorithms retrieve their parameters from the Connections object because the Connections object is passed in to all methods where configs are needed.
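For reference, the pattern looks roughly like this (a minimal sketch - the package paths, key names, and getters are from memory, so treat them as illustrative rather than exact):

    import org.numenta.nupic.Connections;
    import org.numenta.nupic.Parameters;
    import org.numenta.nupic.Parameters.KEY;
    import org.numenta.nupic.algorithms.SpatialPooler;
    import org.numenta.nupic.algorithms.TemporalMemory;

    public class ParamsViaConnections {
        public static void main(String[] args) {
            // Build the Parameters once, apply them to a single Connections
            // instance, and hand that same instance to every algorithm.
            Parameters p = Parameters.getAllDefaultParameters();
            p.set(KEY.COLUMN_DIMENSIONS, new int[] { 2048 });
            p.set(KEY.CELLS_PER_COLUMN, 32);

            Connections connections = new Connections();
            p.apply(connections);

            SpatialPooler sp = new SpatialPooler();
            sp.init(connections);      // SP pulls its config out of Connections

            TemporalMemory tm = new TemporalMemory();
            tm.init(connections);      // TM does the same

            // So printing the Connections object shows the effective parameters.
            System.out.println(connections.getPrintString());
        }
    }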

I can then run @lscheinkman’s NAB branch, print out the parameters, and let you review them?





Here are the effective parameters as applied:

For the RDSE:

n = 400
w = 21
resolution = 0.9584615384615384

From the Connections object:

---------------------- General -------------------------
columnDimensions           = [2048]
inputDimensions            = [454]
cellsPerColumn             = 32
random                     = org.numenta.nupic.util.MersenneTwister@3f49dace
seed                       = 1960

------------ SpatialPooler Parameters ------------------
numInputs                  = 454
numColumns                 = 2048
numActiveColumnsPerInhArea = 40.0
potentialPct               = 0.8
potentialRadius            = 16
globalInhibition           = true
localAreaDensity           = -1.0
inhibitionRadius           = 2048
stimulusThreshold          = 0.0
synPermActiveInc           = 0.003
synPermInactiveDec         = 5.0E-4
synPermConnected           = 0.2
synPermBelowStimulusInc    = 0.01
synPermTrimThreshold       = 0.05
minPctOverlapDutyCycles    = 0.001
minPctActiveDutyCycles     = 0.001
dutyCyclePeriod            = 1000
maxBoost                   = 1.0
version                    = 1.0

------------ TemporalMemory Parameters ------------------
activationThreshold        = 20
learningRadius             = 2048
minThreshold               = 13
maxNewSynapseCount         = 31
maxSynapsesPerSegment      = 128
maxSegmentsPerCell         = 128
initialPermanence          = 0.24
connectedPermanence        = 0.5
permanenceIncrement        = 0.04
permanenceDecrement        = 0.008
predictedSegmentDecrement  = 0.001


Thank you. And what data file is this? The resolution is specific to the data min/max.
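For reference, the derivation I have in mind is roughly the one below - the bucket count and floor are from memory, so double-check them against the NAB source, but a data range of 124.6 divided by 130 buckets does work out to the 0.9584615384615384 printed above:

    // Sketch of how the RDSE resolution is derived from the data file's
    // min/max in the NAB numenta detector, as I recall it: the data range
    // divided by a fixed bucket count, with a small floor.
    static double deriveResolution(double minVal, double maxVal) {
        double numBuckets = 130.0;
        return Math.max(0.001, (maxVal - minVal) / numBuckets);
    }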




Ok, I added another test to the vetting of the algorithms. This time I ran the Network API within the raw test harness (which chains the raw algorithms together). The network included only the TemporalMemory and Anomaly classes, and I fed the stored SP output into it.
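In case it helps, the setup looked roughly like this (a sketch from memory of the Network API calls, so the exact method names may differ slightly; getTestParameters() and storedSpOutput are stand-ins for the test parameters and the recorded Python SP output):

    // A Network containing only the TemporalMemory and Anomaly algorithms;
    // the stored SP output is pushed in directly instead of using a Sensor/SP.
    Parameters p = getTestParameters();
    Network network = Network.create("TM + Anomaly only", p)
        .add(Network.createRegion("Region 1")
            .add(Network.createLayer("Layer 2/3", p)
                .add(Anomaly.create())
                .add(new TemporalMemory())));

    network.observe().subscribe(inference ->
        System.out.println(inference.getAnomalyScore()));

    for (int[] spActiveColumns : storedSpOutput) {
        network.compute(spActiveColumns);   // recorded Python SP output
    }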

Conclusion: the output was exactly the same as the Python TM → Anomaly code: RNG output, active columns, and predicted columns. This test, combined with the NetworkConsistencyTest that already exists in HTM.Java’s repo, the prior test I ran using the RunLayer (QuickTest) raw testing framework, and all of the SP compatibility testing, establishes beyond a shadow of a doubt that HTM.Java’s algorithm output (SP, TM, and Anomaly) is 100% compatible with NuPIC.

So now I plan to run the Java NAB, replay NuPIC’s scores from a file through the Java detector, and see whether merely passing them through causes some kind of perturbation.

@rhyolight I would like to organize a meeting like the one we had before, to discuss the current state of things.


@alavin @lscheinkman @rhyolight

I found two significant SP configuration mishaps that apparently aren’t caught by the unit tests: synPermTrimThreshold and synPermBelowStimulusInc. Those two values are derived in the constructor.
HTM.Java’s SP doesn’t have state variables, so those values were initialized in the Connections object before the Parameters were applied to it. Correcting this had no effect on the unit tests, but it did improve the NAB scores a healthy amount.
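To spell that out, this is my reading of it - the formulas below are how I recall NuPIC’s SP constructor deriving those two values, which would also explain the 0.05 / 0.01 in the printout earlier (values derived from the defaults rather than from the applied synPermActiveInc = 0.003 and synPermConnected = 0.2). Setter/getter names are from memory:

    // As I recall, NuPIC's SpatialPooler constructor derives:
    //   synPermTrimThreshold    = synPermActiveInc / 2.0
    //   synPermBelowStimulusInc = synPermConnected / 10.0
    // Since HTM.Java's SP keeps its state in Connections, the derivation has
    // to run *after* Parameters.apply(), otherwise the values computed from
    // the defaults stick around. Roughly:
    Connections c = new Connections();
    p.apply(c);
    c.setSynPermTrimThreshold(c.getSynPermActiveInc() / 2.0);
    c.setSynPermBelowStimulusInc(c.getSynPermConnected() / 10.0);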

So now I want to play with the configs and see if I can improve the scores… (since there is no Java form of swarming)

@rhyolight If you could run the HotGym stuff again and tell me whether you see an improvement (you should), that would be great. You can use the master branch of HTM.Java… Also, I think I’m going to switch to the HotGym stuff as a testbed for improving the results (it should be easier?), so if you could tell me how you’re running and plotting that, what params NuPIC uses, etc., that would be great!


So does NuPIC use a Classifier within NAB?


I assume it doesn’t, because a classifier is only necessary to decode predicted cells into predicted values, and NAB doesn’t need predictions, just anomaly scores.
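For anyone following along, the raw anomaly score itself needs no classifier; as I understand it, it’s just the fraction of currently active columns that the previous timestep didn’t predict. A sketch of that computation:

    import java.util.Set;

    // Sketch of the raw anomaly score: 1.0 means none of the active columns
    // were predicted, 0.0 means all of them were.
    static double rawAnomalyScore(Set<Integer> activeColumns,
                                  Set<Integer> prevPredictedColumns) {
        if (activeColumns.isEmpty()) {
            return 0.0;
        }
        long predictedAndActive = activeColumns.stream()
            .filter(prevPredictedColumns::contains)
            .count();
        return 1.0 - ((double) predictedAndActive / activeColumns.size());
    }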


The OPF needs to be fixed up so it doesn’t run through a faux classifier configuration (i.e., so it doesn’t call those methods only to do and return nothing when the configuration doesn’t call for a classifier). That’s what confused me: it always tries to configure a classifier regardless of the intended operation; it just doesn’t return anything if none is specified.

As a matter of fact, the OPF fails with an exception if you don’t specify an inference field - that has to be fixed. I think I’ll submit an issue…

self.model.enableInference({"predictedField": "value"})

… if this line is excluded, it results in an exception and a crash. Try commenting it out to see.

I’m going to leave this up to you, @rhyolight, because while the cause of the problem is within NAB (the line to exclude), the error arises out of the OPF code in NuPIC, so I’m not sure where to file the bug. I’m guessing NuPIC, because that is what forces the unneeded declaration and is where the error is thrown.


@alavin @lscheinkman @rhyolight

Added a universal shuffle() method to UniversalRandom so that I can test the difference in output between the Java and Python RandomDistributedScalarEncoder. I want to see if this will get the same results…
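The idea is a shuffle that produces the same ordering as Python’s random.shuffle when both sides draw from the same RNG. Roughly this (a sketch - it assumes nextDouble() on the Java side yields the identical double sequence that Python’s random() does, and the index formula is how I recall Python 2’s shuffle working):

    // Reverse Fisher-Yates, choosing the swap index the way Python 2's
    // random.shuffle does: j = int(random() * (i + 1)).
    public int[] shuffle(int[] array) {
        for (int i = array.length - 1; i >= 1; i--) {
            int j = (int) (nextDouble() * (i + 1));
            int temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }
        return array;
    }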


@alavin @lscheinkman @rhyolight

Ok, now the RDSE has been thoroughly vetted with a compatibility test. It produces identical output…


@rhyolight @lscheinkman @alavin @mrcslws


I’d like to have a meeting next week to discuss the development of a swarming algorithm for HTM.Java. I have to admit I’m unfamiliar with the statistical methods involved, and I’d also like some opinions on how to go about this (in case there have been any “wish-we-had-done-it-this-way” realizations since the swarmer was developed for NuPIC).

I have vetted the algorithms used within NAB as well as the Network API, resolved the issues that stood in the way, and noted the places where there were no issues. So I am left with the conclusion that what remains is to find the optimum configuration. Unless anyone has suggestions for other approaches that could uncover problem areas? We could also discuss this in the meeting.



I’ll try to set something up.


@alavin @mrcslws


Apparently I owe whoever was implicated in this communication an apology!

In that statement I objected to not having assistance in the form of someone spelling out the NAB scoring calculations so that I could quickly mock something up.

Apparently I was VASTLY mistaken in my assumption that this could be a trivial effort, and Marcus’ suggestion that I take some time out to get familiar with the NAB module was totally appropriate! So I’m sorry for my mistaken reaction.

I had no idea how elaborate the scoring mechanism within NAB was! I did finally take some time to review it, as I’m now mocking up the infrastructure to run NAB entirely within Java (not with all the bells and whistles, but enough to do the scoring), so that I can run iterative jobs from a very rudimentary swarming mechanism and see if I can optimize the configurations.

So again my apologies…

However… <-- (Ok watch out :wink:)

Why didn’t anybody respond by telling me how far off base my assumption of simplicity was, instead of silently getting offended? I meant nothing personal by my complaint; I just wanted to spur on whatever action would get me to the finish line fastest - that’s all. Somebody could have said, “Hey David, it is soooooooo not as simple as that. The scoring in NAB is vastly complex!”

I wish we could have communicated more completely and directly… …and I hope we can do this in the future.

Thanks Guys!


But you said the parameters for HTM.Java and NuPIC have a 1-to-1 relationship, so why do you have to swarm? You should use the exact same parameters for each.

The scoring mechanism is irrelevant! This anomaly score output mismatch can be replicated outside of NAB. NAB has nothing to do with this problem from what I can see at this point.

You make comments like this all the time, so I’m kind of numb to them. :wink:


Because I have no idea what to do next. My hands are tied. I have proven the algorithms are exactly the same and that the Network code causes no change in the output, so the parameter settings are the only thing left. This is why I want to talk with someone like Subutai about this - I need to know where to look next.

The scoring mechanism is relevant: if I can reproduce it, I can swarm around it. The purpose of my asking was so that I could create a rudimentary swarm to tweak the parameters. I viewed the parameters as the next logical step now that I’ve ruled out everything else. Can you think of anything else I should check?

Unless anyone has other ideas, I’m working on a mini-NAB in Java so that I can run concurrent jobs and swarm over the parameters to see if I can improve the scoring.
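To give a concrete picture of what I mean by that, here’s a rough sketch of the kind of concurrent sweep I have in mind - runDetector() and scoreWithMiniNab() are hypothetical stand-ins for the Java detector run and the simplified scorer, not existing API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParamSweep {
        public static void main(String[] args) throws Exception {
            ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            List<Future<Double>> scores = new ArrayList<>();

            // Sweep a couple of TM parameters concurrently and score each run.
            for (double permanenceIncrement : new double[] { 0.02, 0.04, 0.08 }) {
                for (int activationThreshold : new int[] { 13, 16, 20 }) {
                    final double inc = permanenceIncrement;
                    final int thresh = activationThreshold;
                    Callable<Double> job = () -> scoreWithMiniNab(runDetector(inc, thresh));
                    scores.add(pool.submit(job));
                }
            }

            for (Future<Double> f : scores) {
                System.out.println("mini-NAB score: " + f.get());
            }
            pool.shutdown();
        }

        // Hypothetical: run the HTM.Java detector over a data file with these overrides.
        static List<Double> runDetector(double permanenceIncrement, int activationThreshold) {
            throw new UnsupportedOperationException("wire up to the Java detector");
        }

        // Hypothetical: apply the simplified NAB scoring to the anomaly score stream.
        static double scoreWithMiniNab(List<Double> anomalyScores) {
            throw new UnsupportedOperationException("wire up to the mini-NAB scorer");
        }
    }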


I am very confused by your approach.

Please confirm these assumptions of mine are true.

  1. HTM.Java and NuPIC have the same set of available algorithm parameters
  2. You can set up an HTM.Java instance and NuPIC instance with the exact same parameter values
  3. With this setup, when you push the same data into each system (each having the same setup and model params), you get back very different raw anomaly scores.

Would you agree?


Algorithm parameters, yes. Network parameters or OPF parameters, no.

Algorithm parameter values, yes.

Very different raw anomaly scores? I’m not sure how to evaluate this. They aren’t exactly the same (nor would they be between Python and C++, I would imagine - otherwise the NuPIC compatibility tests wouldn’t be written to copy values over). By eye, the scoring fluctuations don’t always happen in the same places.

I have been busy with this elementary vetting, so I haven’t updated the examples repo with the latest code, which means you haven’t been able to run a visualizer over it - so the most recent visual comparison isn’t relevant. I found the bug in two important parameter values while vetting the SP, but we haven’t yet done a visual comparison with that code. I was busy, and you were getting ready for the convention.