I’m in the midst of learning about Spatial Poolers (SPs) and Temporal Memory (TM). I’ve implemented an SP with a really basic encoded input of <letters of the alphabet, indices of those letters> to try to establish what kind of anomaly detection can be done with that alone. I should mention I’m incredibly new to all of this and am trying to tackle it from a brute-force engineering perspective.
I observe that, given a new input, the overlap (the amount of connected permanence encountered across the active columns) will be higher if that input has been seen before (forgive this being obvious – I’m just trying to set things out in my head). Using the SP alone I can then establish whether an input is anomalous. Is this correct? And a correct usage? Would it be fair to say that this is analogous to a single layer of “dumb” independent neurons (with no lateral connections)?
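In case it helps to pin down what I mean, here’s a toy sketch of that familiarity test – not NuPIC’s actual SP, just my brute-force reading of it, with made-up sizes and a plain winner-take-all in place of real inhibition and boosting:

```python
import random

random.seed(0)

INPUT_BITS = 64     # size of the binary input space (made-up)
NUM_COLUMNS = 32    # columns in the toy pooler (made-up)
POTENTIAL = 24      # potential synapses per column
CONNECTED = 0.5     # permanence threshold for a "connected" synapse
K = 8               # active columns per input (crude winner-take-all)

# Each column samples a random subset of input bits,
# each synapse starting at a random permanence.
columns = [
    {i: random.random() for i in random.sample(range(INPUT_BITS), POTENTIAL)}
    for _ in range(NUM_COLUMNS)
]

def overlap(col, active_bits):
    """Count connected synapses that land on active input bits."""
    return sum(1 for i, p in col.items() if p >= CONNECTED and i in active_bits)

def familiarity(active_bits):
    """Mean overlap of the K winning columns; higher = more familiar."""
    scores = sorted((overlap(c, active_bits) for c in columns), reverse=True)
    return sum(scores[:K]) / K

def learn(active_bits, inc=0.1, dec=0.02):
    """Hebbian-style update on the K winning columns only:
    strengthen synapses on active bits, weaken the rest."""
    winners = sorted(range(NUM_COLUMNS),
                     key=lambda ci: overlap(columns[ci], active_bits),
                     reverse=True)[:K]
    for ci in winners:
        for i in columns[ci]:
            delta = inc if i in active_bits else -dec
            columns[ci][i] = min(1.0, max(0.0, columns[ci][i] + delta))

pattern = set(random.sample(range(INPUT_BITS), 12))  # an input we train on

before = familiarity(pattern)
for _ in range(30):
    learn(pattern)
after = familiarity(pattern)

# Repeated exposure raises (or at least keeps) the familiarity score,
# so a low score on a new input flags it as anomalous.
print(before, after)
```

So by “anomaly detection with SP alone” I mean thresholding that familiarity score: inputs the pooler has been trained on score high, unseen ones score low.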
I want to understand the SP well enough that I can then graduate onto TM, but I feel I still haven’t fully internalised it. Any help is appreciated (especially if my explanation reveals some kind of fundamental misapprehension – please correct me : )
Here is my attempt to understand the value of TM over SP. Right now I’m encoding the index of each element of the input sequence into the input space itself, making ‘a’ correlate to ‘0’ (b:1, c:2, d:3, etc.). If I were to employ TM, am I correct in thinking that I would no longer need to encode the index, because TM’s predictive nature would effectively encode its own “index”? Or rather, is the magic of TM literally that it would relate ‘a’ to ‘c’ via their shared neighbour ‘b’, so that the ‘order’ of a set of events becomes an intrinsic (instead of extrinsic) property? Is this even close to what’s occurring?
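To make the two encoding styles concrete, here’s a toy sketch of what I mean (the one-hot layout and sizes are placeholders, not my real encoder): with SP alone I bake the position into the input; with TM, as I understand it, I’d feed only the symbol and let the temporal context live in which cells within the columns are active.

```python
import string

LETTER_BITS = 26   # one-hot block for the letter (placeholder layout)
INDEX_BITS = 10    # one-hot block for the sequence position (max length 10)

def encode_with_index(letter, index):
    """SP-only style: the sequence position is baked into the input itself."""
    bits = [0] * (LETTER_BITS + INDEX_BITS)
    bits[string.ascii_lowercase.index(letter)] = 1
    bits[LETTER_BITS + index] = 1
    return bits

def encode_letter_only(letter):
    """TM style: encode only the symbol; order would come from the TM's
    own temporal context (which cells fire, not which columns)."""
    bits = [0] * LETTER_BITS
    bits[string.ascii_lowercase.index(letter)] = 1
    return bits

seq = "abc"
sp_inputs = [encode_with_index(ch, i) for i, ch in enumerate(seq)]
tm_inputs = [encode_letter_only(ch) for ch in seq]
print(len(sp_inputs[0]), len(tm_inputs[0]))  # 36 26
```

In the first style, ‘a’ at position 0 and ‘a’ at position 5 are different input patterns by construction; in the second, they’re the same pattern, and distinguishing them would be the TM’s job.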