Squeezing more from HTM

Provocative heading, right? :smiley:

So my limited understanding to date is: we encode the incoming data into a binary array. We then use the spatial pooler to create an SDR based on that encoded array, which we then use for learning, etc.

What if we also used the encoding itself? The way I see it, the encoding binds the incoming data to an exact set of on-bits within an array. As @rhyolight has said previously, that array can be dense or sparse. However, if you make it (reasonably) sparse, then (my thinking is that) all the lovely maths of sparse arrays means you can estimate the statistical "likeness" of one input to the next from the overlap between them, and the encoding guarantees that the likeness calculation won't change from one input to the next (as long as the input is from the same source). Sure, you won't yet have the whole data pattern to use, but you may not care. Say I was looking at oxygen saturation in a patient: if there was a sudden change, e.g. from 98% to 70%, would I care what the pattern was? Probably not. If it changed from 98% to 97%, I might want to see the underlying pattern before sounding an alarm.
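To make that concrete, here's a rough sketch of the kind of overlap check I mean (a toy block encoder with made-up parameters, not NuPIC's actual ScalarEncoder):

```python
import numpy as np

def encode_scalar(value, min_val=0.0, max_val=100.0, n=400, w=21):
    """Toy scalar encoder: a contiguous block of w on-bits whose
    position within an n-bit array tracks the value."""
    frac = (value - min_val) / (max_val - min_val)
    start = int(round(frac * (n - w)))      # left edge of the on-bit block
    bits = np.zeros(n, dtype=np.int8)
    bits[start:start + w] = 1
    return bits

def overlap(a, b):
    """Number of on-bits two encodings share."""
    return int(np.dot(a, b))

e98, e97, e70 = (encode_scalar(v) for v in (98, 97, 70))
print(overlap(e98, e97))   # high overlap: 98% and 97% look "alike"
print(overlap(e98, e70))   # zero overlap: 98% -> 70% could trigger an alert
```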

So questions:

  • Does the sparsity of the input binary array affect the dynamics (e.g. rate of learning) of the spatial pooler?
  • Would it be valid to set an alert based on the amount of overlap from one input to the next - at the encoder level if this is what was required?
  • Would it be possible to use the learning / predictions from further along the HTM system in a feedback loop to modify / control the alerts at the encoder level?
  • Does any of the above make sense? :slight_smile:
2 Likes

Hi @REager, a provocative heading indeed, and well presented. Since HTM is all about learning sequences, a change from 98 to 70 isn't inherently more surprising than 98 to 97; it depends entirely on the history. The smaller change could surprise the system much more if it's not used to seeing small changes, or not in the given context.

So while HTM is highly capable of detecting subtle temporal anomalies, it can easily miss these spatial spikes, depending on how predictable the sequences were up until then. I think your idea of monitoring overlaps between successive encoding vectors could help catch these spatial anomalies, but I suspect it would also add false positives in a lot of cases. It also seems to me basically equivalent to adding a threshold on the raw numeric data, as is often done.

Thinking about the learning in the SP: each column connects to a subset of the encoding vector and gets an overlap score with the input. The columns whose overlap scores fall in the top 2% win and activate.

If the encoding space is more saturated with active bits, the receptive fields of the SP columns will overlap more and their overlap scores will be more similar.
Basically, it'll be less clear from a denser encoding which SP columns should activate, since many of them will have strong overlap scores. The SP works best when similar recurring inputs activate many of the same columns, so the TM can learn faster. With a sparser encoding, each input would create strong overlap scores in only a few columns, in effect giving those columns watch over that part of the input space.
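Here's a bare-bones sketch of that winner-take-all step (random toy connections; no permanences, boosting or learning like the real SP):

```python
import numpy as np

rng = np.random.RandomState(42)
n_input, n_cols, sparsity = 400, 2048, 0.02

# Each column is connected to a random subset of the input bits.
connected = rng.rand(n_cols, n_input) < 0.5

def sp_activate(encoding):
    """Score every column by overlap with the input, activate the top 2%."""
    scores = connected.dot(encoding)        # on-bits each column sees
    k = int(n_cols * sparsity)              # ~41 winning columns
    winners = np.argsort(scores)[-k:]
    active = np.zeros(n_cols, dtype=np.int8)
    active[winners] = 1
    return active
```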

5 Likes

Thanks Sam

Clinically it would be surprising if it's oxygen saturation levels. Not many people can survive long at that level of O2 saturation (SaO2). A drop from 98% to 70% would definitely be something that you'd want investigated (I have a degree in Nursing).

That's the crux of my question, I think. If the encoding ensures that each input value maps to a specific set of bit locations within the array, then the amount of overlap determines the likeness of one number compared with the one before it. The lower the overlap, the higher the probability that the two numbers are different, and the encoding (plus a sparse array) essentially reduces the likelihood of a false positive.
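To put some maths behind that (assuming the "different" case behaves like two independent random sparse vectors, with hypothetical array dimensions): with n bits and k on-bits, chance overlap averages about k²/n, so anything well above that is almost certainly not coincidence.

```python
from scipy.stats import hypergeom

n, k = 400, 21                    # hypothetical array size and on-bit count
print(k * k / float(n))           # ~1.1 bits of overlap expected by chance
# Probability two random k-sparse vectors share 10 or more bits:
print(hypergeom.sf(9, n, k, k))   # vanishingly small
```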

1 Like

I think that this is part of what the thalamus does.

4 Likes

I don't doubt that for a second, of course! I just meant that in terms of pure HTM learning, a bigger gap between two successive inputs isn't necessarily more surprising than a smaller gap; it depends on the history leading up to it.

Yes, in that respect you're correct of course. But in some cases domain knowledge would allow the use of an alert setting based on the properties of sparse distributions at the encoder level.

If the input source is known to have low variation, regardless of its base value, my proposition is that you can use that to trigger an alert based on overlap. Knowing the dimensions of the array you create for each input, you should be able to work out what an abnormal overlap value is. The risk of false positives in that case will only be high if the input does not behave within the norms of domain knowledge. A classic example was a patient I looked after with tetanus: her blood pressure would go from 230+ systolic during muscle contractions to 90 or less during the relaxation stage, which is obviously way outside normal. In extreme cases like that you would want feedback from the learning and/or prediction stages to modify the alerts set at the encoding stage.
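For a contiguous-block encoder like the toy one above, you can even derive the alert threshold directly from the encoder geometry (w on-bits, a fixed resolution per bit; the parameters here are again hypothetical):

```python
def expected_overlap(v1, v2, w=21, resolution=0.5):
    """Overlap falls off linearly as the block of on-bits slides:
    one bit lost per `resolution` units of value difference."""
    shift = int(round(abs(v1 - v2) / resolution))
    return max(0, w - shift)

# Alert when overlap < w/2, i.e. when |v1 - v2| > w * resolution / 2 = 5.25
print(expected_overlap(98, 97))   # 19 -> no alert
print(expected_overlap(98, 70))   # 0  -> alert
```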

I'm just looking at a specific use case and wondering if making the encoded array sparse will break the spatial pooler.

1 Like

Good point. It has a bi-directional connection to the cortex (one of the few structures that does). It's one of the reasons why people who have an accident that affects it can have problems mimicking cortical injury, such as impaired planning, fine motor adjustment and loss of visual field.

1 Like

Definitely, and personally it's awesome for me to hear perspectives like this from domain practitioners on how HTM could be augmented to catch those native anomaly types :smile:.

Thinking more about your encoder overlap idea, I think this would be best implemented at the SP column level, since two encoding vectors can have little or no overlap even without much spatial difference between them - which is what drives my false-positive concern. Two SP output vectors, though, would have to be pretty far apart in input space to have no overlap. It really comes down to the encoder settings (the min and max values fed to the RDSE).
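A minimal version of that SP-level check might look like this (the threshold is arbitrary; `prev_active` and `active` are successive SP output vectors):

```python
import numpy as np

def sp_overlap_alert(prev_active, active, min_shared_frac=0.5):
    """Flag a spatial anomaly when two successive SP activations
    share fewer than min_shared_frac of their on-bits (toy rule)."""
    n_active = int(active.sum())
    shared = int(np.dot(prev_active, active))
    return n_active > 0 and shared < min_shared_frac * n_active
```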

1 Like

Thank you Sam, that makes sense. I'm not currently using an RDSE - will that make a difference?

It can make some difference compared with the Simple Scalar Encoder, mainly in handling inputs outside the min/max more flexibly (the SSE just clips them). The RDSE is the standard for encoding numeric data for anomaly detection, though any valid encoding should perform well. I'd recommend having a look at the NuPIC function:

getScalarMetricWithTimeOfDayAnomalyParams().
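If I remember the API correctly, it's used roughly like this (the min/max bounds below are placeholders for your SaO2 range):

```python
import datetime
from nupic.frameworks.opf.common_models.cluster_params import (
    getScalarMetricWithTimeOfDayAnomalyParams)
from nupic.frameworks.opf.model_factory import ModelFactory

params = getScalarMetricWithTimeOfDayAnomalyParams(
    metricData=[0],        # ignored when minVal/maxVal are supplied
    minVal=50.0,           # placeholder lower bound
    maxVal=100.0,          # placeholder upper bound
    tmImplementation="cpp")

model = ModelFactory.create(modelConfig=params["modelConfig"])
model.enableInference(params["inferenceArgs"])

result = model.run({"c0": datetime.datetime.now(), "c1": 98.0})
print(result.inferences["anomalyScore"])
```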

1 Like

Thanks Sam, will look into it. Will using the RDSE change my spatial pooler? That is, will the SP be different depending on the encoder used? It would be nice to have a single SP and then compare the results from different encoders on the same data set.

Hi @REager, a single Spatial Pooler will be fine; it should generally work the same from encoder to encoder.
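The SP only cares about the shape of its input, so as long as both encoders emit the same number of bits you can reuse one instance. A sketch with NuPIC's SpatialPooler (parameters are illustrative):

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

sp = SpatialPooler(inputDimensions=(400,),
                   columnDimensions=(2048,),
                   potentialRadius=400,
                   globalInhibition=True,
                   numActiveColumnsPerInhArea=40)

# `encoding` could come from the SSE or the RDSE, as long as it's 400 bits
encoding = np.zeros(400, dtype=np.uint32)
encoding[100:121] = 1
active = np.zeros(2048, dtype=np.uint32)
sp.compute(encoding, True, active)
```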

3 Likes

Thanks Brev. Comparing results from different encoders on the same data will be an interesting additional experiment.

1 Like