Hi all!
First of all I’d like to say how excited I am to have found Numenta and HTMs! After digging into the theory and learning resources I think it is very likely to be exactly what I am looking for!
I have been working on a personal research project for some years now. I am investigating computational structures that act as exact, deterministic mapping functions between stimulus-product pairs. A mechanically motivated example is a robot with one optical sensor and a speaker: when the robot sees the color red it might make a soft beep, and when it sees the color green it might make a loud beep. One might immediately think this is a very simple algorithm to write, which is true. However, the data I am interested in processing is high-dimensional. In this case the robot might see a 2D grid of colors and would need to output a high-dimensional point cloud by association. The key, though, is that the mapping between the lower-fidelity data and the higher-fidelity data needs to be exact, just like a hashmap would be. But instead of a hashmap, I am interested in a flexible, trainable computational unit that is capable of producing an exact mapping.
Here is a very simplistic example:
Let’s say we have a rank 2 tensor of arbitrarily-valued scalars S where S = [[1],[2],[3]].
Let us then say that we have a rank 2 tensor of binary values P where P = [[1,1,0],[0,0,1],[0,1,0]].
The goal is to generate a mapping function M such that the element at some index I in S is mapped exactly to the binary vector at the same index I in P (a small sketch of this behaviour follows the example below):
S[0] M P[0] would be [1] -> [1,1,0]
S[1] M P[1] would be [2] -> [0,0,1]
S[2] M P[2] would be [3] -> [0,1,0]
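Just to make the target behaviour concrete, here is a tiny Python sketch of the "exact mapping" I mean, using an ordinary dict as the lookup table. The names S, P, and M mirror the example above; the dict is only an illustration of the behaviour I want, not a proposal for the trainable unit itself.

```python
# Sketch of the exact, deterministic mapping behaviour from the example above.
S = [[1], [2], [3]]                       # low-fidelity inputs
P = [[1, 1, 0], [0, 0, 1], [0, 1, 0]]     # high-fidelity binary outputs

# Build the mapping M: each input row (as a tuple) keys its output row exactly.
M = {tuple(s): p for s, p in zip(S, P)}

assert M[(1,)] == [1, 1, 0]
assert M[(2,)] == [0, 0, 1]
assert M[(3,)] == [0, 1, 0]
```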
Up until this point I have been trying to use traditional feed-forward neural networks to act as this mapping function M, training them on floating-point scalars. I’m able to obtain good results when the tensors are very small. However, once I scale the data to a decent size the accuracy drops off. I believe this is simply because perfectly curve-fitting a high-dimensional function is inherently intractable. Unfortunately, my use case requires that I map large, high-dimensional datasets together, so I am looking for a new model of computation to fit it. That’s when I stumbled upon HTMs!
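For reference, here is roughly the kind of feed-forward setup I mean, as a minimal PyTorch sketch. The layer sizes, loss, and training loop are illustrative placeholders rather than my exact code, but they show where the "exactness" requirement gets fragile as the data grows.

```python
# Rough sketch of the feed-forward approach; sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

S = torch.tensor([[1.0], [2.0], [3.0]])           # scalar inputs
P = torch.tensor([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])                # binary target vectors

model = nn.Sequential(
    nn.Linear(1, 16),
    nn.ReLU(),
    nn.Linear(16, 3),
    nn.Sigmoid(),        # outputs in (0, 1), thresholded to recover binary vectors
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(S), P)
    loss.backward()
    optimizer.step()

# The mapping is only "exact" if every thresholded output matches its target,
# which becomes hard to guarantee as the tensors get large and high-dimensional.
print((model(S) > 0.5).float())
```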
I have been slowly getting through the material, specifically watching all the HTM School videos. The volume of information is a bit daunting, and I am struggling to determine whether HTMs will suit my use case without spending days and days going through the material. So I thought I would post to present my potential use case and have a conversation about it with the experts!
Based on my example use case, do you think HTMs would be a good model for me to pursue?