Hi All,
So I’m trying to apply HTM for basic regression – to predict Y1,Y2 (t+1) from X1,X2 (t).
I have an approach in mind, and I’m very curious to hear anyone’s opinion.
This setup differs from most HTM applications, which (AFAIK) are auto-regressive – predicting X1,X2,Y1,Y2 (t+1) from X1,X2,Y1,Y2 (t).
I don’t want the system to predict X1,X2 at all because they represent random motion.
This is why I think the standard auto-regressive approach is doomed to fail here – the model would waste capacity trying to learn sequences that are pure noise.
Y1,Y2, however, represent a human controller’s response to X1,X2 – so it seems better to learn the X --> Y sequences, since those aren’t inherently noise-laden.
The data would thus be structured as one alternating sequence, like so: X1,X2 (t=1) --> Y1,Y2 (t=2) --> X1,X2 (t=3) --> Y1,Y2 (t=4) --> ...
To implement this, I think 2 separate HTM regions are required: one for X1,X2 and one for Y1,Y2. Here’s my idea:
The blue arrow represents cell-depolarizing connections (TM’s distal links), and the red arrows represent column-activating connections (SP’s proximal links).
So TM cells in Region_2 grow distal connections only to Active Columns in Region_1, never to Region_2’s own prior activity. This is how auto-regression is replaced with plain X --> Y regression.
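Here’s a rough sketch of that wiring using htm.core’s Python bindings. All sizes and parameters here are my own guesses, and I’m assuming the TemporalMemory’s externalPredictiveInputs option is the right tool for the blue arrow:

```python
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler, TemporalMemory
from htm.encoders.rdse import RDSE, RDSE_Parameters

N_COLS = 2048  # columns per region (my choice, not from the post)

# One RDSE per variable pair; X1 and X2 get encoded separately and concatenated.
enc_params = RDSE_Parameters()
enc_params.size       = 1000
enc_params.sparsity   = 0.02
enc_params.resolution = 0.1
enc_x = RDSE(enc_params)
enc_y = RDSE(enc_params)

def make_sp(input_size):
    # Proximal (red-arrow) wiring: input bits --> column activations.
    return SpatialPooler(inputDimensions=(input_size,),
                         columnDimensions=(N_COLS,),
                         potentialRadius=input_size,
                         globalInhibition=True,
                         localAreaDensity=0.02)

sp_1 = make_sp(2 * enc_params.size)  # Region_1: activated by X1,X2
sp_2 = make_sp(2 * enc_params.size)  # Region_2: activated by Y1,Y2

# Distal (blue-arrow) wiring: Region_2's TM treats Region_1's active columns
# as an external predictive input, so its distal segments grow onto them.
tm_2 = TemporalMemory(columnDimensions=(N_COLS,),
                      cellsPerColumn=16,
                      externalPredictiveInputs=N_COLS)
```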
The procedure would be as follows:
T=1
- Region_1 columns activated by input --> X1,X2
- Region_2 cells depolarized by input --> Region_1’s Active Columns
T=2
- Region_2 columns activated by input --> Y1,Y2
- Region_2 anomaly score calculated
- Region_2 cells’ links to Region_1 Active Columns updated (TM learn)
This 2-step process would be repeated, where every odd-numbered timestep acts like T=1 and every even timestep like T=2 (a code sketch of the loop is below).
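Here’s how I picture the loop, under the same assumptions as above (paired_records and encode_pair are hypothetical; I’m splitting compute() into activateDendrites() / activateCells() so the prediction can be read out between the two phases, if I’m reading htm.core’s API right):

```python
def encode_pair(encoder, v1, v2):
    # Hypothetical helper: encode two scalars and concatenate their SDRs.
    out = SDR(2 * encoder.size)
    out.concatenate([encoder.encode(v1), encoder.encode(v2)])
    return out

active_1 = SDR(N_COLS)  # Region_1 active columns
active_2 = SDR(N_COLS)  # Region_2 active columns

for (x1, x2), (y1, y2) in paired_records:  # hypothetical stream of (X, Y) pairs
    # T=1 (odd): Region_1 columns activated by X1,X2; Region_2 cells
    # depolarized by Region_1's active columns via the external distal input.
    # Args: (learn, externalPredictiveInputsActive, externalPredictiveInputsWinners)
    sp_1.compute(encode_pair(enc_x, x1, x2), True, active_1)
    tm_2.activateDendrites(True, active_1, active_1)
    predicted = tm_2.getPredictiveCells()  # Region_2's prediction of Y1,Y2

    # T=2 (even): Region_2 columns activated by Y1,Y2; anomaly scored against
    # the prediction, then the distal links to Region_1 are updated (TM learn).
    sp_2.compute(encode_pair(enc_y, y1, y2), True, active_2)
    tm_2.activateCells(active_2, True)
    print("anomaly:", tm_2.anomaly)

    # Reset so Region_2 never uses its own prior activity as distal context --
    # otherwise the TM also grows segments to internal cells, which would
    # sneak auto-regression back in.
    tm_2.reset()
```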
So ultimately you feed the system an X1,X2 input at (t) and get a Y1,Y2 predicted output for (t+1).
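One gap I see: the TM’s prediction lives in cell-space, so turning it back into numeric Y1,Y2 values needs a decoding step. Here’s a sketch with htm.core’s Classifier, continuing from the loop above; the bucketing resolution and the assumption that Y is non-negative are both mine:

```python
from htm.bindings.algorithms import Classifier
import numpy as np

RESOLUTION = 0.1        # hypothetical bucket width for Y1
clsr_y1 = Classifier()  # one classifier per output variable

# During training (at T=2): associate the T=1 prediction with the observed Y1.
# Bucket index must be a non-negative int, so offset Y1 first if it can go negative.
clsr_y1.learn(predicted, int(y1 / RESOLUTION))

# At inference (after T=1): decode the depolarized cells into a Y1 estimate.
y1_predicted = np.argmax(clsr_y1.infer(predicted)) * RESOLUTION
```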
Does this seem like a valid approach to you!?
I’m very curious to hear anyone’s thoughts!
Thanks