Can HTM extract features?

Hi guys,
How can HTM extract features (specific features)?
Thanks.


Hi @way-sal,

Do you mean features in the traditional Machine Learning sense? E.g. creating an “age” feature derived from a date of birth?


Yes, I mean dynamic features rather than static ones. HTM uses windowing techniques to extract features from time series, so how could HTM extract, for example, duration, peak, or spectral features? Maybe by creating a specific encoder for these local features?
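For instance, a hand-rolled encoder for one derived feature might look something like this minimal sketch (the names and parameters here are made up for illustration, not from any HTM library):

```python
# Minimal sketch of a bucket-style scalar encoder in the spirit of HTM
# scalar encoders: a derived feature (e.g. a window's peak amplitude)
# maps to a contiguous run of active bits in a sparse binary vector.
# All names and parameters are illustrative, not from any HTM library.

def encode_scalar(value, min_val, max_val, size=400, active_bits=21):
    """Encode a scalar as a set of active bit indices (an SDR)."""
    value = max(min_val, min(max_val, value))        # clamp into range
    positions = size - active_bits                   # possible start offsets
    start = int(round((value - min_val) / (max_val - min_val) * positions))
    return set(range(start, start + active_bits))    # contiguous active run

# e.g. encode the duration of a detected event within a window
duration_sdr = encode_scalar(3.2, min_val=0.0, max_val=10.0)
```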


I think I follow, but let me know if I misunderstand.

Nothing like this can be retrieved from the HTM network after learning; it’s just weights that predict the next input.


I’m not sure about @jimmyw’s perspective on this. At the scale of a macrocolumn (MC), minicolumns (mCs) vote to generate an SDR, and that SDR could be considered to represent features. For example, an mC that predicts a local pattern and wins at two distinct times is noting some aspect of similarity across time.

My guess: feed the labels for the categories you want the system to learn into an encoder. Then alternately feed the output SDR from above, followed by the correct label, into another MC, and it will learn to predict the correct label. Once it has been trained, you could stop it from learning and leave it connected to the outputs of the lower MC, and it will generate a label based on the features identified by the lower MC.

You would still need a decoder to get from the prediction of the second MC back to a representation you could understand, e.g. a word. That would be the inverse of the encoder.

From what I have seen, people do not bother training the ‘higher’ MC and instead use a classifier such as an SVM. But the SVM is still using the features identified by the MC for its classification.
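For example, a rough sketch of that approach using scikit-learn; the SDRs and labels below are synthetic placeholders for whatever the lower MC actually outputs:

```python
# Rough sketch of the "SVM on top of the MC" approach with scikit-learn.
# The SDRs and labels below are synthetic placeholders for whatever the
# lower MC actually outputs (one active-cell SDR per timestep).
import numpy as np
from sklearn.svm import LinearSVC

SDR_SIZE = 2048
rng = np.random.default_rng(0)

def to_dense(active_indices):
    """Turn a set of active cell indices into a dense 0/1 vector."""
    vec = np.zeros(SDR_SIZE, dtype=np.float32)
    vec[list(active_indices)] = 1.0
    return vec

# Placeholder training data: 100 SDRs with 40 active bits each.
sdrs = [set(rng.choice(SDR_SIZE, 40, replace=False)) for _ in range(100)]
labels = rng.integers(0, 2, size=100)

X = np.stack([to_dense(s) for s in sdrs])
clf = LinearSVC().fit(X, labels)

# At inference time, classify the SDR produced by a new input.
new_sdr = sdrs[0]  # placeholder for an SDR from unseen data
print(clf.predict(to_dense(new_sdr).reshape(1, -1))[0])
```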

I am far from an expert on this - so I am answering as much to test my understanding as to answer your question 🙂


Hi @markNZed,

I’m not sure I fully understand what you’re proposing. The output of the second MC wouldn’t be readily decoded back to an understandable label.

Can you give an example scenario perhaps?


Here is my guess, with a very simple example. HTM learns sequences, so suppose we train it on a sequence of labelled data where ‘I’ is an input and ‘L’ is a label, feeding it:

I1,L1,I2,L1,I3,L1,I4,L2,I5,L2,I6,L2

Assume it has learned the associations I1 or I2 or I3 → L1 and I4 or I5 or I6 → L2. Now stop training the HTM and feed it an input close to I1 (call it I1’); the HTM will predict L1. The prediction of the second layer upon seeing the input I1’,… would be L1, which can then be decoded into a string such as “featureX”.

Maybe it helps to see that the encoder will always produce the same SDR for a given label, and the second layer will always generate the same SDR for that label. That is why it could be decoded easily.
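As a sketch, the decode step could be a simple maximum-overlap lookup against the stored label encodings (encode_label here is just a deterministic stand-in for a real category encoder):

```python
# Sketch of the decoder as the inverse of the encoder: remember the SDR
# each label was encoded to, then decode a predicted SDR by maximum
# overlap. encode_label is a deterministic stand-in for a real category
# encoder, not part of any HTM library.
import random

SDR_SIZE, ACTIVE_BITS = 2048, 40

def encode_label(label):
    """Produce the same random SDR every time for a given label."""
    rng = random.Random(label)  # seed the RNG on the label string
    return set(rng.sample(range(SDR_SIZE), ACTIVE_BITS))

label_sdrs = {lbl: encode_label(lbl) for lbl in ("L1", "L2")}

def decode(predicted_sdr):
    """Return the label whose stored SDR overlaps the prediction most."""
    return max(label_sdrs, key=lambda lbl: len(label_sdrs[lbl] & predicted_sdr))

print(decode(encode_label("L1")))  # -> "L1"
```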

I’m not sure about the overall architecture (I need to think about it some more), but to address this specific question, one could implement apical feedback from the output of the second cortical column to the first one (i.e. cells in the first grow and learn apical connections with cells in the second, using the TM algorithm). Cells predicted apically could then be decoded using a classifier, as one would do for distal predictions in vanilla HTM.
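A rough sketch of that read-out step, assuming the TM exposes which cells are apically predicted at each timestep (the names below are placeholders, not a real API):

```python
# Rough sketch of reading out apical predictions, assuming the TM
# exposes the set of apically predicted cells per timestep and that
# CELLS_PER_COLUMN cells share each minicolumn. All names here are
# placeholders, not a real API.
CELLS_PER_COLUMN = 32

def predicted_columns(apically_predicted_cells):
    """Collapse predicted cell indices to their minicolumn indices."""
    return {cell // CELLS_PER_COLUMN for cell in apically_predicted_cells}

# Toy input: cells 0, 33 and 70 are apically predicted.
cols = predicted_columns({0, 33, 70})
print(cols)  # -> {0, 1, 2}; this column SDR can then go to a classifier
```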
