Recognition? How?

This is a two-part question; the second part will come later: When is RECOGNITION NOT enough?

Is the idea of a Temporal Pooler still valid?
How exactly does RECOGNITION happen, i.e. Sequence =to=> UID and L2/3 voting?

Recognition of objects and paths is, in a sense, recognition of Sequences with richer data!
So we will talk about Sequences…

After this there is a question …


excerpt from: ihtm/sequence_learning.ipynb at main · vsraptor/ihtm · GitHub

Sequence recognition

The goal is to generate the correct next element. This is akin to Classification.

$s_i, s_{i+1}, \ldots, s_j \rightarrow \text{yes or no}$, where $1 \le i \le j < \infty$; that is, given $s_i, s_{i+1}, \ldots, s_j$, we want to determine whether this subsequence is legitimate or not. (There are alternative ways of formulating the sequence recognition problem, for example as a one-shot recognition process, as opposed to the incremental step-by-step recognition process formulated here.)

With this formulation, sequence recognition can be turned into sequence generation/prediction, by basing recognition on prediction; that is, $s_i, s_{i+1}, \ldots, s_j \rightarrow \text{yes}$ (a recognition problem) if and only if $s_i, s_{i+1}, \ldots, s_{j-1} \rightarrow s^p_j$ (a prediction problem) and $s^p_j = s^a_j$, where $s^p_j$ is the prediction and $s^a_j$ is the actual element.

So recognition can be interpreted either as comparing the sequence elements one by one until the end, OR by the condition that the element before last predicts the last element (this assumes uniqueness of the sequence; said differently, the last element is a Goal that has to be reached, and when that happens we assume the sequence was recognized).
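
A minimal sketch of the prediction-based variant in Python (my own naming; `predict` is a hypothetical stand-in for whatever produces the prediction, e.g. a TM):

```python
def recognize_by_prediction(seq, predict):
    """Recognize `seq` by checking that the predictor, fed
    s_i .. s_{j-1}, predicts the actual last element s_j."""
    *history, actual_last = seq           # split off s^a_j
    predicted_last = predict(history)     # s^p_j
    return predicted_last == actual_last  # recognized iff s^p_j == s^a_j

# toy stand-in predictor: a lookup table keyed by the known prefix
known = {("A", "B", "C"): "D"}
predict = lambda history: known.get(tuple(history))

print(recognize_by_prediction(["A", "B", "C", "D"], predict))  # True
print(recognize_by_prediction(["A", "B", "C", "X"], predict))  # False
```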


What is my “problem”, sort of?

Which is the best WAY to implement RECOGNITION?

There are a couple of ways to implement recognition of a number|word|data|Loc+Feature sequence:
/by CPU I also mean brain power/

  • full match: requires a lot of memory and CPU; you have to store the full sequences
  • treat the sequences as sets and do a full match: less memory and less CPU, but order and repeated elements do not matter, so if those are important the recognition would be invalid
  • store and compare only the last element (against the actual one): very fast and the least CPU; this assumes the last element is unique and the sequence item order is also unique (see the sketch after this list)
  • pick elements at random positions: less memory & CPU, but probabilistic false positives
  • some mix of those: partial-full | partial-set
  • use a hash function: it has to preserve similarity
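
To make the trade-offs concrete, here is a toy Python sketch of the first three options (my own naming; a real system would operate on SDRs rather than Python lists):

```python
def full_match(stored, seq):
    # exact element-by-element comparison: O(n) memory and CPU
    return stored == seq

def set_match(stored_set, seq):
    # order and repeated elements are ignored: cheap but can be invalid
    return stored_set == set(seq)

def last_element_match(stored_last, seq):
    # assumes the last element (and the item order) is unique
    return seq[-1] == stored_last

seq = ["A", "B", "B", "C"]
print(full_match(["A", "B", "B", "C"], seq))        # True
print(set_match({"A", "B", "C"}, ["C", "A", "B"]))  # True: order/repeats lost!
print(last_element_match("C", seq))                 # True
```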

Now, if the sequence is converted to SDRs:

  • we can use the UNION property to accumulate short sequences (see the sketch after this list). Not enough: a TM is not reliable for longer sequences, and a union also has the drawback of the set scenario
  • randomly pick a smaller number of bits to UNION-ize: handles longer sequences, but more false positives
  • in addition, decay bits in the UNION: but how do you decide what to decay?
  • use Sparse hashing of the concatenated SDRs: https://openreview.net/pdf?id=BJNXJgVKg
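
A rough numpy sketch of the UNION idea and its limits: accumulate the element SDRs with bitwise OR, then test membership by overlap. All sizes and thresholds here are made up for illustration:

```python
import numpy as np

N, BITS = 2048, 40               # SDR width and active-bit count (made up)
rng = np.random.default_rng(0)

def random_sdr():
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, BITS, replace=False)] = True
    return sdr

def union(sdrs):
    # bitwise OR of the element SDRs: order and repeats are lost (the set
    # scenario), and the union saturates as the sequence grows longer
    return np.logical_or.reduce(sdrs)

def member(u, sdr, threshold=0.9):
    # an element "matches" if most of its active bits are inside the union
    return (u & sdr).sum() >= threshold * BITS

seq = [random_sdr() for _ in range(5)]
u = union(seq)
print(all(member(u, s) for s in seq))  # True: all elements are recognized
print(member(u, random_sdr()))         # False (while the union stays sparse)
```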

Apparently those are algorithms which should/would mimic the neural circuitry!
According to TBT, there is a “circuit” in the brain, L2/3 <=> L6-TMem, which does exactly that.

How would you do RECOGNITION?

The REPRESENTATION/snapshot/ID has to be:

  • unique
  • much smaller than the sequence
  • similar sequences should create similar IDs (see the sketch below)
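
One way to approximate those three properties is a similarity-preserving hash. A rough sketch, using a fixed random projection of the concatenated sequence-SDR followed by top-k sparsification, loosely in the spirit of the sparse-hashing link above (all names and sizes are made up, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
SEQ_BITS = 2048 * 5        # width of the concatenated sequence SDR (made up)
ID_N, ID_BITS = 256, 10    # ID width and sparsity: much smaller than the input

# fixed random projection: nearby inputs get nearby scores, hence similar IDs
W = rng.standard_normal((ID_N, SEQ_BITS))

def sequence_id(concat_sdr):
    scores = W @ concat_sdr                       # project the whole sequence
    sdr_id = np.zeros(ID_N, dtype=bool)
    sdr_id[np.argsort(scores)[-ID_BITS:]] = True  # keep only the top-k bits
    return sdr_id

a = rng.random(SEQ_BITS) < 0.02   # a fake concatenated-sequence SDR
b = a.copy()
b[:10] = ~b[:10]                  # a slightly perturbed copy of it
shared = (sequence_id(a) & sequence_id(b)).sum()
print(shared, "of", ID_BITS, "ID bits shared")  # well above chance (~0.4 bits)
```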

Here I’m skipping the problem of detecting boundaries, i.e. how do you know when the old sequence ends and a new one starts, but if you have an idea, share it!