New to HTM and asking some basic questions

Hi, I have gone through the HTM School videos (16 episodes) on YouTube and I still can't quite grasp HTM. I would appreciate your advice on my questions below:

Questions:

  1. In typical deep learning, if one wants to approximate a very complicated function, one increases the depth (number of hidden layers) and the number of neurons per layer. What are some recommended ways to do this in HTM (i.e., increase the network's complexity or capacity to approximate more complicated functions)?

  2. If I have an input in the form of a floating-point vector and an output that is also a floating-point vector, how would HTM take in those inputs and produce those outputs (i.e., serially, in parallel, or both somehow)?


Hey @roboto, welcome!

I think you'd basically make the system bigger, which can be done through parameters in the config. For 'spParams', increase the columnCount; for 'tmParams', increase cellsPerColumn, maxSynapsesPerSegment, newSynapseCount, minThreshold, and activationThreshold.
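A minimal sketch of what that might look like, assuming NuPIC's OPF-style model-params dict. The values below are illustrative guesses for a "bigger" network, not tuned settings, and a complete dict would also need encoder, classifier, and inference-type sections omitted here:

```python
# Fragment of a NuPIC OPF model-params dict, scaled up from the common
# defaults (2048 columns, 32 cells per column). Values are illustrative.
MODEL_PARAMS = {
    "modelParams": {
        "spParams": {
            "columnCount": 4096,          # more minicolumns -> larger SDR space
        },
        "tmParams": {
            "columnCount": 4096,          # keep in sync with spParams
            "cellsPerColumn": 64,         # more distinct sequence contexts
            "maxSynapsesPerSegment": 64,  # each segment samples more cells
            "newSynapseCount": 32,        # synapses grown per learning step
            "minThreshold": 12,           # matches to pick a best-matching segment
            "activationThreshold": 16,    # matches to activate a segment
        },
    },
}
```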

One good thing about HTM systems is that they tend to be much less sensitive to these hyperparameters than deep learning systems, since only a few of the learned parameters are actually involved at any given time, unlike MLP networks, which generally use all of their synapses and weights on every input.

In HTM's Temporal Memory, new synapses are grown to represent new sequences as they appear. The sparse (SDR) nature of HTM makes the capacity extremely high, so most of the potential synapses that could exist are never actually created. If you have a very big network but simple patterns, the network won't overfit.
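To put a rough number on that capacity: with the common Spatial Pooler defaults of 2048 columns at about 2% sparsity (roughly 40 active columns), the number of distinct SDRs is on the order of 10^84. A quick back-of-the-envelope check (the 2048/40 figures are the usual defaults, not something specific to this thread):

```python
import math

# Distinct SDRs with n columns and w active bits: C(n, w)
n, w = 2048, 40
print(f"{math.comb(n, w):.3e}")  # ~2.4e84 unique patterns
```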

There is a generic set of hyperparameters that has been applied successfully to many different data sets.

HTM is usually applied to auto-regressive problems. So instead of using variable X to predict variable Y, you use the history of X and Y to predict their future values. Each timestep's X and Y values are expected to arrive together in the same data structure (one record per timestep).

Learning in HTM is incremental, so when we use today's X/Y values to predict tomorrow's values, we can't learn from that prediction until tomorrow's values actually arrive (see the sketch below).
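Concretely, the streaming loop looks something like this sketch using NuPIC's OPF API. `MODEL_PARAMS` is the kind of dict sketched above (a complete one also defines encoders for each field), and the field names "X" and "Y" and the `data_stream` iterable are stand-ins for your actual data:

```python
from nupic.frameworks.opf.model_factory import ModelFactory

model = ModelFactory.create(MODEL_PARAMS)       # params as sketched above
model.enableInference({"predictedField": "Y"})  # the field to predict

for x, y in data_stream:                        # one record per timestep
    # Each record carries both X and Y; the model learns on it immediately
    # and emits a prediction that can only be checked against the NEXT record.
    result = model.run({"X": x, "Y": y})
    predicted_y = result.inferences["multiStepBestPredictions"][1]
```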
