Releasing BrainBlocks 0.7.1: Building ML Applications with HTM-Like Algorithms

The ContextLearner and SequenceLearner are very similar architecturally: in both, the context provides the depolarized predictions for each input column. In the SequenceLearner’s case, the context is its own neuron activations at t-1. In the ContextLearner’s case, the context is whatever you provide it. However, the SequenceLearner is optimized for sequence learning, whereas the ContextLearner is not and instead defaults to the more general HTM-style algorithm.
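To make the one architectural difference concrete, here is a toy pure-Python sketch (not the BrainBlocks API; all names here are illustrative): both learners score an input against a context and learn the pairing, but the SequenceLearner-style loop feeds its own previous activation back in as context, while the ContextLearner-style loop takes the context from the caller.

```python
# Toy illustration -- NOT the real BrainBlocks API.
# Both learners pair a context with an input; they differ only in
# where the context comes from.

def step(memory, context, inp):
    """Score the pairing (1.0 = novel, 0.0 = predicted), then learn it."""
    score = 0.0 if (context, inp) in memory else 1.0
    memory.add((context, inp))
    return score

# SequenceLearner-style loop: context is fed back from the step at t-1.
seq_memory, prev = set(), None
for token in ["a", "b", "a", "b"]:
    step(seq_memory, prev, token)
    prev = token                    # becomes the context at t+1

# ContextLearner-style loop: context arrives from outside (here, a label).
ctx_memory = set()
for context, inp in [("loc_1", "a"), ("loc_2", "b")]:
    step(ctx_memory, context, inp)
```

The second pass through "a", "b" in the sequence loop is fully predicted, because the (previous-activation, input) pairings were stored on the first pass.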

The ContextLearner would be used for things like sensorimotor learning, object learning, etc., where you are working with feature/location pairs. The location can be the context and the feature can be the input. You would also have to provide the location at t-1 to include transitions in the context.

Here’s the basic setup example:
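Since the original code did not survive here, the following is a minimal self-contained sketch of the setup pattern, using a hypothetical stand-in class rather than the real BrainBlocks blocks (the class and method names are mine, not the library's): each step takes an input pattern plus a caller-supplied context pattern, checks whether the pairing was predicted, then learns it.

```python
# Minimal stand-in sketch -- NOT the real BrainBlocks API.
# A ContextLearner-style block pairs a caller-supplied context
# with an input and reports novelty as an anomaly score.

class TinyContextLearner:
    def __init__(self):
        self.pairs = set()          # learned (context, input) pairings

    def compute(self, context, inp, learn=True):
        """Return an anomaly score: 0.0 if predicted, 1.0 if novel."""
        score = 0.0 if (context, inp) in self.pairs else 1.0
        if learn:
            self.pairs.add((context, inp))
        return score

learner = TinyContextLearner()
score_first = learner.compute("ctx_A", "pattern_1")   # novel pairing
score_again = learner.compute("ctx_A", "pattern_1")   # now predicted
```

The anomaly score drops to zero on the second presentation because the context/input pairing has been learned; a new context with the same input would again score as novel.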

Here is how it might be used in a sensorimotor inference problem:
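The original example did not survive here either, so below is a hedged sketch of the sensorimotor pattern described above, again with a hypothetical stand-in class rather than the real BrainBlocks API: the context combines the location at t-1 with the current location, so transitions between locations become part of what is learned alongside each feature.

```python
# Sensorimotor sketch -- NOT the real BrainBlocks API.
# An "object" is a set of (location, feature) pairs traversed by movement.

class TinyContextLearner:
    def __init__(self):
        self.pairs = set()

    def compute(self, context, feature, learn=True):
        """Return an anomaly score: 0.0 if predicted, 1.0 if novel."""
        score = 0.0 if (context, feature) in self.pairs else 1.0
        if learn:
            self.pairs.add((context, feature))
        return score

object_cup = [("loc_0", "rim"), ("loc_1", "handle"), ("loc_2", "base")]

# First traversal: learn each feature in its (prev-location, location) context.
learner = TinyContextLearner()
prev_loc = None
for loc, feat in object_cup:
    context = (prev_loc, loc)       # include the location at t-1 for transitions
    learner.compute(context, feat)
    prev_loc = loc

# Second identical traversal: every step should now be predicted.
prev_loc, scores = None, []
for loc, feat in object_cup:
    scores.append(learner.compute((prev_loc, loc), feat, learn=False))
    prev_loc = loc
```

Traversing the same object along the same path a second time produces anomaly scores of 0.0 at every step, while visiting the locations in a different order would break the learned transitions and score as novel.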
