Releasing BrainBlocks 0.7.1: Building ML Applications with HTM-Like Algorithms

BrainBlocks 0.7.1 is now released. We have previously posted about BrainBlocks on this discussion board.

This release includes a number of changes based on user feedback from people here. We have also added more comprehensive example scripts showing how each of the provided blocks can be used. BrainBlocks is now available as a pip package on all major platforms.

The simplest way to get started is to install with pip:

pip install brainblocks

The current computational blocks are listed below, with a minimal usage sketch after the list:

  • Transformers: Encode symbols, scalars, or vectors into binary patterns for processing by BrainBlocks.
  • PatternClassifier: Supervised learning classifier for binary patterns.
  • PatternPooler: Learns a mapping from one representation to a pooled representation.
  • ContextLearner: Learns inputs within provided contexts. Flags an anomaly if an input is out of context.
  • SequenceLearner: Learns input sequences. Flags an anomaly when a previously unseen sequence is detected.
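To give a rough sense of how these blocks compose, here is a minimal anomaly-detection sketch. It follows the pattern of the bundled example scripts, but the class and parameter names shown are assumptions and may differ from the shipped API:

from brainblocks.blocks import ScalarTransformer, SequenceLearner

# Encode a scalar stream into binary patterns (parameter names assumed)
st = ScalarTransformer(min_val=0.0, max_val=1.0, num_s=1024, num_as=128)

# Learn the sequence of encoded patterns and score anomalies
sl = SequenceLearner(num_spc=10, num_dps=10, num_rpd=12, d_thresh=6)

# Connect the transformer output to the learner input
sl.input.add_child(st.output)

for value in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    st.set_value(value)
    st.feedforward()
    sl.feedforward(learn=True)
    print(value, sl.get_anomaly_score())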

Visit our GitHub page here:

I want to thank the users who provided feedback, but I can only mention 4 users per post, so the mentions are in the replies below.


Thanks for the feedback from @klokare @MaxLee @thanh-binh.to @marty1885

Thanks for the feedback from @Vaibhav_Chris @ddigiorg @dmac @Mark_Springer

Thanks for the feedback from @dee @cezar_t @mthiboust @lucasosouza

Thanks for the feedback from @CollinsEM @hsgo @bela.berde

FYI, it installs and runs on macOS Monterey (12.4).

Thanks!

Can you share the key ideas behind ContextLearner?
I think it is quite new.
Where is it useful, and how?

The ContextLearner and SequenceLearner are very similar architecturally. The context provides the depolarized predictions for each input column. In the SequenceLearner’s case, the context is the neuron activations at t-1. In the ContextLearner’s case, the context is whatever you provide. However, the SequenceLearner is optimized for sequence learning, whereas the ContextLearner is not optimized this way and defaults to the more general HTM-type algorithm.

The ContextLearner would be used for things like sensorimotor learning, object learning, etc., where you are working with feature/location pairs. The location can be the context and the feature can be the input. You would also have to provide the location at t-1 to include transitions in the context.

Here’s the basic setup example:

Here is how it might be used in a sensorimotor inference problem:
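Continuing the sketch above, each step pairs a feature with its location, and a second transformer carries the location at t-1 so transitions become part of the context (again, names are illustrative):

# Add the previous location to the context so transitions are represented
prev_location_enc = DiscreteTransformer(num_v=8, num_s=1024)
cl.context.add_child(prev_location_enc.output)

observations = [(0, 0), (1, 1), (0, 2)]  # (feature, location) pairs from sensing an object
prev_location = 0
for feature, location in observations:
    feature_enc.set_value(feature)
    location_enc.set_value(location)
    prev_location_enc.set_value(prev_location)
    for enc in (feature_enc, location_enc, prev_location_enc):
        enc.feedforward()
    cl.feedforward(learn=True)
    # a high anomaly score flags an out-of-context feature/location pair
    print(feature, location, cl.get_anomaly_score())
    prev_location = location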


I looked at the C++ code, and it appears there are no provisions for sparse bit vectors; BrainBlocks currently operates entirely with dense vectors. Are there plans to expand functionality to include sparse vectors?

I think there is no restriction on sparsity, so you are free to choose dense or sparse, whichever better suits your problem.

I understand your concern. You will notice that when representing neuron activations, we use compact binary vectors. However, dendrite connections are O(n*m), so we represent them as lists of pointers. This is where we chose to use the “sparse representation” approach.

We found this balance of compact representations for neuron activations and sparse representations for dendrite connections to be optimal in our experience.

@Deftware I should also mention that there are functions to return either the full binary vector or the list of activation indexes. So both the compact and sparse representations are available if needed.

In general, the 1/8th compression of a uint8_t vector to a binary vector gives enough of a performance gain that keeping the neuron activations in a compact representation makes sense. Performing as many Boolean operations as possible on these binary vectors also helps performance.
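Roughly, reading both views from a block output in Python looks something like the sketch below; the attribute names (bits, acts) are assumptions and may differ slightly from the bindings:

from brainblocks.blocks import ScalarTransformer

st = ScalarTransformer(min_val=0.0, max_val=1.0, num_s=1024, num_as=128)
st.set_value(0.5)
st.feedforward()

dense = st.output.bits   # full binary vector (compact representation)
sparse = st.output.acts  # indexes of the active bits (sparse view)
print(len(dense), sparse)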