Have you had a chance to review the code? Do you have any questions? I’m still working on the documentation, so I will follow up when it’s ready.
I’d like to leverage the expertise you gained from Etaler development for things like TBB, and maybe templates if they fit somehow. A backend/frontend separation like you did in Etaler is also a possibility, although we only have one backend at the moment. We would have to resurrect our OpenCL code to make a GPU backend, with some re-engineering to get it at least as fast as the current single-core code.
The backend code is called bbcore. The C++ wrapper is applied first to create the C++ interface. Second, the Python wrapper is applied to the generated C++ interface with pybind11. This produces a number of Python modules that together comprise the generated Python package, which can then be imported like any other Python package.
Below is a glossary of our naming conventions that will help you interpret the code and how it relates to HTM terminology. Where relevant, we provide links to source or example code. “Headers” refer to the backend ‘.h’ files, each of which has an easy-to-find companion ‘.c’ file. “Block Examples” refer to building a network of blocks manually. “Template Examples” show the use of templates that auto-assemble a network of blocks into a common architecture. The “Sklearn-Style Examples” demonstrate our Python classes that emulate the interface of scikit-learn estimators or transformers, which makes them easy to use alongside, and compare with, scikit-learn’s library of classifiers and tools. Some code is implemented in Python, and we refer to that as “Python Source”.
Block - similar to a region or layer in HTM. A standard interface for all the components of BrainBlocks.
Pattern Pooler - (PP), like HTM Spatial Pooler but with differences. Header
Pattern Sequence Learner - (PSL), like HTM temporal memory but with differences. Block Example, Template Example, Multivariate Abnormalities Example, Hierarchical Abnormalities Example, Header
Scalar Encoder - same as HTM Scalar Encoder. Example, Header
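To make the encoder entries above concrete, here is a minimal toy scalar encoder in the spirit of the HTM/BrainBlocks Scalar Encoder: a value is mapped to a contiguous window of active bits, so nearby values share many active bits. The function and parameter names here are illustrative assumptions, not the actual bbcore API.

```python
# Toy scalar encoder sketch (illustrative only, not the bbcore implementation).

def scalar_encode(value, min_val, max_val, num_bits, num_active):
    """Encode a scalar as a contiguous window of num_active set bits
    inside a num_bits-wide binary array."""
    # Clamp the value into the encodable range
    value = max(min_val, min(max_val, value))
    # Map the value onto the range of possible window start positions
    num_positions = num_bits - num_active
    fraction = (value - min_val) / (max_val - min_val)
    start = int(round(fraction * num_positions))
    bits = [0] * num_bits
    for i in range(start, start + num_active):
        bits[i] = 1
    return bits

# Nearby values share many active bits; distant values share few
a = scalar_encode(0.50, 0.0, 1.0, num_bits=32, num_active=8)
b = scalar_encode(0.55, 0.0, 1.0, num_bits=32, num_active=8)
```

The overlapping windows are what give downstream blocks a notion of scalar similarity.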
Symbols Encoder - like a label encoder, although it takes integers like sklearn-style encoders. Example, Header
Persistence Encoder - a BrainBlocks-specific encoder that represents the passage of time while the same input is repeatedly received. This creates input changes when you have long sequences like AAAAAAABBBBBBB, which helps with learning and also works well when you have missing data. Header
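The persistence idea can be sketched in a few lines: while the input stays the same, a counter grows and the active window slides, so a long run like AAAAAAA still produces changing bits. This is a toy illustration under assumed semantics, not the bbcore implementation.

```python
# Toy persistence encoder (illustrative sketch, not the bbcore code).

class ToyPersistenceEncoder:
    def __init__(self, num_bits=32, num_active=8, max_steps=16):
        self.num_bits = num_bits
        self.num_active = num_active
        self.max_steps = max_steps
        self.last_value = None
        self.count = 0

    def encode(self, value):
        # Reset the persistence counter whenever the input changes
        if value != self.last_value:
            self.count = 0
            self.last_value = value
        else:
            self.count = min(self.count + 1, self.max_steps)
        # Slide the active window proportionally to the persistence count
        start = int((self.num_bits - self.num_active) * self.count / self.max_steps)
        bits = [0] * self.num_bits
        for i in range(start, start + self.num_active):
            bits[i] = 1
        return bits

enc = ToyPersistenceEncoder()
first = enc.encode("A")                          # window at the start
later = [enc.encode("A") for _ in range(16)][-1] # window has slid to the end
```

Even though the input "A" never changes, the encoding keeps moving, which is what gives a sequence learner something to chew on.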
Pattern Classifier - (PC), our natively distributed classifier. It provides a supervised learning capability to HTM-like architectures. You can assign labels to sets of neurons, and it will train those neurons to activate when the labeled inputs are received. It works quite well in comparison to classic classifier algorithms. Blocks Example, Template Example, Sklearn-Style Example, Header, Python source
BlankBlock - a no-op block that is useful if you want to control the bit encoding directly from your scripts instead of using the backend tools. This is used in conjunction with the Hypergrid Transform. Example, Header
Hypergrid Transform - (HGT), a Python sklearn-style transformer that converts M-dimensional scalar vectors into numpy binary arrays. Can be input into BrainBlocks with the BlankBlock. Example, Python source
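To show the *interface* shape of the Hypergrid Transform, here is a toy sklearn-style transformer: M-dimensional scalar vectors in, flat binary arrays out. Be warned that the real HGT uses hypergrid modules; this stand-in just concatenates a per-dimension scalar encoding, so treat it purely as an interface sketch with made-up names, not the actual algorithm.

```python
# Toy sklearn-style binary transformer (interface sketch only; the real
# HyperGridTransform algorithm is different).

class ToyBinaryTransform:
    def __init__(self, num_bits=16, num_active=4):
        self.num_bits = num_bits
        self.num_active = num_active
        self.mins = None
        self.maxs = None

    def fit(self, X):
        # Learn per-dimension ranges, as an sklearn transformer would
        dims = len(X[0])
        self.mins = [min(row[d] for row in X) for d in range(dims)]
        self.maxs = [max(row[d] for row in X) for d in range(dims)]
        return self

    def transform(self, X):
        out = []
        for row in X:
            bits = []
            for d, v in enumerate(row):
                span = (self.maxs[d] - self.mins[d]) or 1.0
                v = min(max(v, self.mins[d]), self.maxs[d])
                frac = (v - self.mins[d]) / span
                start = int(round(frac * (self.num_bits - self.num_active)))
                col = [0] * self.num_bits
                for i in range(start, start + self.num_active):
                    col[i] = 1
                bits.extend(col)  # concatenate per-dimension encodings
            out.append(bits)
        return out

X = [[0.0, 1.0], [0.5, 0.0], [1.0, 0.5]]
B = ToyBinaryTransform().fit(X).transform(X)  # 3 rows of 32 bits each
```

The resulting binary rows are the kind of thing you would feed into a BlankBlock.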
Page - these are the inputs/outputs of the blocks. A page can have parent-child relationships with other pages. The content of the child pages is concatenated to create the content of the parent page. So to connect the output of an encoder to the input of a pooler, you add the encoder’s output page as a child of the pooler’s input page. Pages have both the BitArray and ActArray representations available, which are created as needed. Header
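The parent-child concatenation can be sketched in pure Python (this is an illustration of the idea, not the bbcore data structure):

```python
# Minimal sketch of the Page parent/child concatenation idea.

class ToyPage:
    def __init__(self, bits=None):
        self.bits = bits or []
        self.children = []

    def add_child(self, page):
        self.children.append(page)

    def content(self):
        # A parent concatenates its children; a leaf returns its own bits
        if self.children:
            out = []
            for child in self.children:
                out.extend(child.content())
            return out
        return self.bits

encoder_out = ToyPage([1, 0, 1])
other_out = ToyPage([0, 1])
pooler_in = ToyPage()
pooler_in.add_child(encoder_out)  # "connect" encoder output to pooler input
pooler_in.add_child(other_out)
```

Reading `pooler_in.content()` then yields the two child outputs back to back, which is exactly how multiple encoders can feed one pooler.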
BitArray - The full bit representation of neuron activity. This is compact and can represent 8 neurons per byte. Header
ActArray - The sparse active-neuron representation: an array of addresses that identify the active neurons. Sometimes this is the preferred representation, but oftentimes the BitArray outperforms it. Header
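The relationship between the two representations is easy to sketch: pack 0/1 neuron states eight per byte for the BitArray-style view, or list the active indices for the ActArray-style view. The function names and bit ordering here are assumptions for illustration, not the bbcore types.

```python
# Sketch of the two activity representations (assumed semantics).

def bits_to_bytes(bits):
    """Pack a list of 0/1 neuron states into a bytearray, 8 per byte."""
    packed = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        if b:
            packed[i // 8] |= 1 << (i % 8)  # low bit = lowest-indexed neuron
    return packed

def bits_to_acts(bits):
    """Sparse ActArray-style view: indices of the active neurons."""
    return [i for i, b in enumerate(bits) if b]

bits = [0, 1, 0, 0, 0, 0, 0, 0, 1, 1]
packed = bits_to_bytes(bits)  # 10 neurons fit in 2 bytes
acts = bits_to_acts(bits)     # indices of the three active neurons
```

With typical sparsity (a few percent active), the dense packed form stays small and is very fast to AND/OR, which is one reason the BitArray often wins.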
Permanence - same as HTM permanence.
Statelets - these are analogous to neurons but without any implication of biological function. A statelet is either active or not. And as its name suggests, it represents a fragment of some greater state representation.
Column - equivalent to HTM minicolumn. This is just a convenience referring to the geometry without needing to explain the difference between minicolumns and cortical columns. Again, we’re trying to avoid biological discussion and focus on algorithms.
CoincidenceSet - This is analogous to a dendrite with synapses. A CoincidenceSet is owned by a statelet or shared by a column of statelets (in the sequence learner block). We renamed it to describe its functional role, which is to find statelets whose activations are coincident with the statelet that owns the coincidence set. Header
Receptors - The set of statelets that a CoincidenceSet is using for input (i.e. the potential pool of inputs in HTM parlance). Again, this reflects their functional role of creating a “receptive field” for a particular statelet that owns the CoincidenceSet.
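Putting the last two entries together, a coincidence set can be sketched as a set of receptor indices with permanences, where a receptor counts toward the overlap score only if its permanence crosses a connection threshold and its statelet is active. The class name, threshold value, and method names below are assumptions for illustration, not the bbcore API.

```python
# Toy CoincidenceSet with receptors and permanences (illustrative sketch).

PERM_THRESHOLD = 0.5  # assumed connection threshold

class ToyCoincidenceSet:
    def __init__(self, receptors, permanences):
        # receptors: indices of the input statelets this set listens to
        # permanences: one permanence value per receptor
        self.receptors = receptors
        self.permanences = permanences

    def overlap(self, active_bits):
        # Count connected receptors whose statelet is currently active
        return sum(
            1
            for r, p in zip(self.receptors, self.permanences)
            if p >= PERM_THRESHOLD and active_bits[r]
        )

cs = ToyCoincidenceSet(receptors=[0, 2, 5], permanences=[0.8, 0.3, 0.6])
score = cs.overlap([1, 1, 1, 0, 0, 1])  # receptor 2 is below threshold
```

The receptors are what carve out the statelet’s “receptive field”: only inputs it listens to, through sufficiently strong permanences, can drive it.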
That’s all for now. Let me know if you have more questions and I’ll try to answer them and flesh this out into a sort of guide.