I’m in the midst of learning about Spatial Poolers (SPs) and Temporal Memory (TM). I’ve implemented an SP with a really basic encoded input of <letters of the alphabet, indices of those letters> to try to establish what kind of anomaly detection can be done with that alone. I should mention I’m incredibly new to all of this and am trying to tackle it from a brute-force engineering perspective.
I observe that, given a new input, the amount of permanence encountered across active columns will be higher if that input has been seen before (forgive this being obvious, just trying to set things out in my head). Purely using the SP, I can then establish whether an input is anomalous. Is this correct? And is this a correct usage? Would it be correct to say that this is analogous to a single layer of “dumb” independent neurons (with no lateral connections)?
I want to understand SP sufficiently such that I can then graduate onto TM but I feel I still haven’t fully internalised SPs. Any help appreciated (especially if you’ve seen via my explanation that I have some kind of fundamental misapprehension – please correct me : )
Here is my attempt to understand the value of TM over SP: right now I’m encoding the index of an input sequence into the input space itself, making ‘a’ correlate to ‘0’ (b:1, c:2, d:3, etc.). If I were to employ TM, am I correct in thinking that I would no longer need to encode the index of the sequence, because TM’s predictive nature would effectively encode its own “index”? Or rather, is the magic of TM literally that it would relate ‘a’ to ‘c’ via their shared neighbour ‘b’, so that the ‘order’ of a set of events becomes an intrinsic (instead of extrinsic) property? Is this even close to what’s occurring?
I’m not sure you are understanding. Minicolumns become activated in response to activity in the input space. If the active bits in the input overlap with a minicolumn’s connections (synapses with permanences above a connection threshold), that minicolumn has a higher chance of becoming activated. Permanences between a minicolumn and the input certainly come into play, because they determine which synapses count as connected, but the activation itself is decided entirely by the overlap the minicolumn has with the current input.
Once the overlap value for each minicolumn is computed, we have a means for them to compete against each other, deciding which ones will become active.
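To make the overlap-then-compete step concrete, here is a minimal NumPy sketch. All the sizes, names, and the global top-k inhibition are simplifications of my own for illustration, not the actual NuPIC implementation (which adds boosting, local inhibition options, learning, etc.):

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_SIZE = 64        # bits in the encoded input
NUM_COLUMNS = 32       # minicolumns in the SP
CONN_THRESHOLD = 0.5   # permanence above which a synapse counts as "connected"
NUM_ACTIVE = 4         # columns allowed to win the competition

# Each minicolumn holds a permanence value toward every input bit.
permanences = rng.random((NUM_COLUMNS, INPUT_SIZE))

def sp_activate(input_bits):
    """Compute each column's overlap, then let columns compete (global inhibition)."""
    # Permanences only decide WHICH synapses are connected...
    connected = (permanences >= CONN_THRESHOLD).astype(int)
    # ...activation is decided by overlap with the CURRENT input:
    overlaps = connected @ input_bits          # active connected synapses per column
    # Competition: the NUM_ACTIVE columns with the highest overlap become active.
    winners = np.argsort(overlaps)[-NUM_ACTIVE:]
    return overlaps, winners

input_bits = np.zeros(INPUT_SIZE, dtype=int)
input_bits[[3, 7, 19, 42]] = 1                 # a sparse encoded input

overlaps, active_columns = sp_activate(input_bits)
```

Note that nothing in this loop looks at previous inputs: the winners depend only on the current input and the (spatial) connection structure.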
This has nothing to do with temporal structure in the input space, only spatial structure. Although it is still incomplete, the Building HTM Systems SP page (still a staging site) might give you a better intuition for how Spatial Pooling works.
The type of anomaly detection we always advertise with HTM is temporal anomaly detection. The SP does not store temporal structure at all, so it has no ability to do temporal anomaly detection.
If you were simply looking for spatial anomalies, the SP is not going to bring anything useful, since if you encode your semantic data properly, you should be able to perform binary comparisons on the encoder output directly, without even running an SP. You can do these comparisons with simple binary AND operations, which indicate how similar binary representations are. If you have not already (or maybe if you need a refresher), I would encourage you to watch this HTM School video on SDR sets and unions for an explanation of how these comparisons can be made without Spatial Pooling.
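For example, here is a minimal sketch of that kind of direct comparison (the `overlap` helper and the particular bit choices are just illustrative):

```python
import numpy as np

def overlap(a, b):
    """Overlap score: count of shared ON bits (binary AND, then a sum)."""
    return int(np.sum(a & b))

N = 100  # total bits in each SDR

def sdr(on_bits):
    """Build a dense 0/1 vector from a set of ON bit indices."""
    v = np.zeros(N, dtype=int)
    v[list(on_bits)] = 1
    return v

known = sdr({1, 5, 9, 20, 33})         # a representation seen before
similar = sdr({1, 5, 9, 20, 40})       # shares 4 of its 5 ON bits
unrelated = sdr({60, 61, 62, 63, 64})  # shares no ON bits

# High overlap with a stored SDR: the input resembles something seen before.
# Low overlap: the input is spatially anomalous.
print(overlap(known, similar), overlap(known, unrelated))  # → 4 0
```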
I’m a bit confused by this. Does the index contain some useful information? Or is it just a repeating count? If the latter, you should not be encoding it. Take musical notes, for example. If you indexed every note in a song and encoded the index into each note’s encoding, that wouldn’t provide any additional information to the algorithm. It already treats each datum as naturally following the preceding one. There is no need to encode indices.
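One way to see that a repeating index adds nothing about the notes themselves: appending it only makes repeats of the same note look different. A tiny sketch (the one-hot `encode_note` helper is hypothetical, not a real encoder):

```python
import numpy as np

def encode_note(note, index=None, num_notes=8, max_index=16):
    """One-hot encode a note; optionally append a one-hot sequence index."""
    enc = np.zeros(num_notes, dtype=int)
    enc[note] = 1
    if index is not None:
        idx = np.zeros(max_index, dtype=int)
        idx[index] = 1
        enc = np.concatenate([enc, idx])
    return enc

# The same note (say, note 0) occurring at two positions in a song:
plain_a = encode_note(0)
plain_b = encode_note(0)
indexed_a = encode_note(0, index=2)
indexed_b = encode_note(0, index=9)
```

Without the index, the two occurrences get identical encodings; with it, their encodings differ even though the note part is the same, so the index bits carry no note information.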
Yes, I think you are getting it. Watch the Temporal Memory episode of HTM School for some concrete examples.