HTM and Curriculum Learning


I’m very new to HTM, and I’ve also been reading up on Curriculum Learning for neural networks. See here.

With Curriculum Learning, you gradually feed your model increasingly difficult training data: for example, increasingly noisy data, or, as the authors of the paper above have done, data with increasing variance in an object’s position, size, orientation, etc.
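To make the idea concrete, here is a minimal sketch of a noise-based curriculum over a binary pattern. The pattern, the noise levels, and `make_curriculum` are all illustrative assumptions, not from the paper:

```python
import numpy as np

def make_curriculum(pattern, noise_levels, seed=0):
    """Yield copies of `pattern` with an increasing fraction of bits flipped."""
    rng = np.random.default_rng(seed)
    for p in noise_levels:
        flips = rng.random(pattern.shape) < p
        yield np.where(flips, 1 - pattern, pattern)

# Easy (clean) samples first, progressively noisier ones later.
base = np.zeros(64, dtype=int)
base[::4] = 1  # a toy sparse binary pattern
stages = list(make_curriculum(base, [0.0, 0.05, 0.2]))
```

A training loop would then present `stages[0]` until performance plateaus before moving on to the noisier stages.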

My question is, do you think this method of training can be applied to HTM? Could HTM handle an increasingly ‘difficult’ pattern? I’m thinking along the lines of object recognition.



It looks like Curriculum Learning is a Deep Learning tactic for overcoming the catastrophic forgetting problem. That problem doesn’t exist in HTM systems, so I don’t think the tactic applies. Theoretically, as well, the idea of training one layer of the hierarchy first, then moving to more complex objects as you ascend the hierarchy and train the next layer, is not biologically plausible. It is evident that all parts of the brain are learning at once.

HTM doesn’t have catastrophic forgetting because it is already a continuous-learning system. There is no backprop step in which all weights are altered; learning happens as synapse permanences change at every time step, just as a brain learns with every second. Representations are sparse and semantic.
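The permanence update described above is local and incremental. A rough sketch of the idea follows; the constants and the `learn_step` function are illustrative only (real implementations such as NuPIC’s Spatial Pooler have more machinery and tuned parameters):

```python
import numpy as np

# Illustrative constants; real HTM implementations tune these.
PERM_INC, PERM_DEC = 0.05, 0.02

def learn_step(permanences, input_bits, active_columns):
    """Nudge synapse permanences for the winning columns only.

    permanences: (n_columns, n_inputs) float array in [0, 1]
    input_bits:  (n_inputs,) binary array, the current input
    active_columns: indices of columns that won the inhibition step
    """
    for c in active_columns:
        # Reinforce synapses aligned with active input bits, weaken the rest.
        delta = np.where(input_bits == 1, PERM_INC, -PERM_DEC)
        permanences[c] = np.clip(permanences[c] + delta, 0.0, 1.0)
    return permanences
```

Note there is no global error signal: each step only touches the synapses of the currently active columns, which is why new patterns don’t overwrite old ones wholesale.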

HTM will handle increasingly difficult patterns, in that it will update its representation of the input data continuously as patterns change. Its model is always changing (at least as long as learning is turned on).


Hi @dee welcome to the HTM forum! :smiley:

As @rhyolight was arguing, curriculum learning is closely related to continual learning. However, the original idea of curriculum learning is more about designing a curriculum that helps AI agents learn a very complex task faster and better. For example, if we want an agent to move around a maze and pick up objects on the ground, one may argue that it would be easier to learn that task by:

  1. Learning how to recognize those objects;
  2. Learning to navigate a maze; and finally
  3. Putting these two skills together to maximize the reward.
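The staged curriculum above amounts to training on easier sub-tasks before the combined one; a toy sketch, where `ToyAgent` and its `train` method are placeholders rather than any real RL API:

```python
class ToyAgent:
    """Stand-in agent; a real RL agent would learn a policy per task."""
    def __init__(self):
        self.skills = []

    def train(self, task):
        self.skills.append(task)

def train_with_curriculum(agent, stages):
    # Present the easier sub-tasks first, then the combined task.
    for task in stages:
        agent.train(task)
    return agent

agent = train_with_curriculum(
    ToyAgent(),
    ["recognize objects", "navigate maze", "navigate + pick up objects"],
)
```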

I think HTMs may benefit from a curriculum as well, but keep in mind that with HTMs we don’t need to learn complex hierarchical features through gradient descent, so I think most of the advantage would be lost. However, I’m not sure about that: it may be worth investigating! :smiley:


Thanks @rhyolight and @vlomonaco for explaining your thoughts in detail. I really appreciate it as a newb :slightly_smiling_face: