I have a general question, “What specifies an HTM?”
I think it’d help if you elaborated a bit on the question. Do you mean what traits make it different from other online learning or anomaly detection algorithms? Or from other neuro-inspired methods or sequence learning methods? I think it’s safe to say that HTM can’t be HTM without sparse distributed representations, and I’m not aware of any other methods that use sparse representations as a form of feature detection.
From my basic understanding, the encoding process is somewhat comparable to convolution in ANNs, since it fuzzifies the encoding to create overlap between semantically similar inputs. The basic TM mechanism of learning transitions is also the basis of Markov models, though the TM does it using SDRs and learns the sequences themselves rather than just storing transition probabilities at a given n-order.
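That overlap property is easy to see with a toy scalar encoder. The sketch below uses a hypothetical, simplified interface (not the actual NuPIC/htm.core encoder API, and the parameter values are just illustrative): nearby scalar values share active bits, so semantically similar inputs produce overlapping SDRs.

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=400, n_active=21):
    """Encode a scalar as a binary SDR: a contiguous run of active bits
    whose position tracks the value's position in [min_val, max_val]."""
    span = max_val - min_val
    start = int((value - min_val) / span * (n_bits - n_active))
    sdr = [0] * n_bits
    for i in range(start, start + n_active):
        sdr[i] = 1
    return sdr

def overlap(a, b):
    """Count the active bits shared by two SDRs."""
    return sum(x & y for x, y in zip(a, b))

a = encode_scalar(50.0)
b = encode_scalar(52.0)   # nearby value -> large overlap with a
c = encode_scalar(90.0)   # distant value -> little or no overlap with a
print(overlap(a, b), overlap(a, c))
```

The "fuzziness" comes from encoding each value as a block of bits rather than a single bit: the closer two values are, the more their blocks intersect.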
If you haven’t already, I’d highly recommend reading through this:
It explains exactly how the TM works and the concepts behind it, so you’ll see why it works so well and be able to compare it to any other sequence learning algorithm.
Thank you for your response @sheiser1. Let me give an example: an LTI system is specified by its impulse response; equivalently, a system is specified by the set of all possible (x(t), y(t)) pairs, where x(t) is an input and y(t) is its corresponding output signal. How about HTM: “What specifies an HTM?”
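For reference, the LTI case can be written out: the impulse response h(t) fully specifies the system, because every output is the convolution of the input with h.

```latex
y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau
```

The question is whether HTM admits an equally compact characterization.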
Here is my take on this question. Other opinions welcome.
An HTM System:
- uses an HTM Neuron. The HTM Neuron treats distal input differently from proximal input. It implements the effect of a dendritic spike: localized apical/distal input can affect how the cell fires in response to proximal input.
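A toy sketch of that distinction, with made-up thresholds and state names (not Numenta’s reference implementation): proximal input alone can make the cell fire, while distal input alone cannot fire the cell but depolarizes (“predicts”) it, biasing how it responds when proximal input arrives.

```python
def htm_neuron_state(proximal_overlap, distal_segment_overlaps,
                     proximal_threshold=10, distal_threshold=8):
    """Return the state of a simplified HTM neuron.

    proximal_overlap: active feed-forward input on the proximal dendrite.
    distal_segment_overlaps: active input per distal/apical segment;
    any segment over threshold depolarizes the cell (a "dendritic spike").
    """
    predicted = any(o >= distal_threshold for o in distal_segment_overlaps)
    active = proximal_overlap >= proximal_threshold
    if active and predicted:
        return "active-predicted"   # fires, and was depolarized in advance
    if active:
        return "active"             # fires from feed-forward input alone
    if predicted:
        return "predictive"         # depolarized, but does not fire
    return "inactive"

print(htm_neuron_state(12, [9]))   # proximal + distal input
print(htm_neuron_state(4, [9]))    # distal input only: predicts, never fires
```

The key asymmetry is in the last two branches: no amount of distal input alone produces an "active" state.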
- requires high-dimensional spaces to represent synapses. In most of the HTM implementations I’ve seen, binary SDRs fill this need. Sensory input must be encoded into a high-dimensional space with semantic meaning.
These two things are implemented and well understood; many community members have written HTM systems using them. So I would say these are the two primary elements of an HTM system.
However, there’s more. An HTM system also:
- contains Layers of HTM Neurons used as compute modules, which can use different data sources for proximal, distal, and apical input. Each layer’s output can be input for another layer. Non-trivial computational structures can be built with these components.
This is currently less understood. There are only a few people I know building cortical columns out of layers of HTM neurons (most of them work at Numenta).
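A rough structural sketch of that composition, with invented class and method names (this is not a real HTM library API, and the compute step is a placeholder): each layer accepts proximal, distal, and apical input SDRs, and one layer’s output can serve as another layer’s input.

```python
class Layer:
    """Hypothetical HTM layer: maps input SDRs to an output SDR."""

    def __init__(self, name):
        self.name = name

    def compute(self, proximal, distal=None, apical=None):
        # Placeholder: a real layer would run spatial pooling and
        # temporal memory here. This sketch passes the proximal SDR
        # through unchanged, just to show the wiring.
        return proximal

sensory = [1, 0, 1, 0]                  # encoded sensor input
l4 = Layer("L4")
l23 = Layer("L2/3")

l4_out = l4.compute(proximal=sensory)   # feed-forward input into L4
l23_out = l23.compute(proximal=l4_out)  # L4's output drives L2/3
```

The point is only the topology: non-trivial structures arise from routing different data sources into each layer’s proximal, distal, and apical inputs.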