I’d like some direction on how best to integrate the HTM algorithm into a project I’m working on.
I’m a member of the Neureal project. We’re using distributed ledger technology to coordinate predictive computation at scale. The Neureal network itself is agnostic to the ML algorithms each node uses to produce predictions or otherwise analyze data. However, it must ship with a built-in default prediction strategy, and that is the portion of the project I’m most involved with.
This Default Prediction Strategy boils down to two basic functions:
- an internal algorithm for generalized pattern recognition and, from that, generalized prediction
- an external node-to-node communication algorithm (the default datatype of the information nodes share with one another, and the way in which they choose whom to transact with).
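To make that division concrete, here’s one way the two functions might be expressed as separate interfaces. This is only a sketch for discussion; the class and method names are my own placeholders, not anything defined by Neureal or NuPIC:

```python
from abc import ABC, abstractmethod

class Predictor(ABC):
    """Function #1: internal pattern recognition -> prediction."""

    @abstractmethod
    def observe(self, value):
        """Consume one item from this node's input stream."""

    @abstractmethod
    def predict(self):
        """Return this node's current prediction for the next input."""

class Peer(ABC):
    """Function #2: external node-to-node exchange of predictions."""

    @abstractmethod
    def share(self, prediction):
        """Publish this node's prediction to selected peers."""

    @abstractmethod
    def receive(self, peer_id, prediction):
        """Incorporate a prediction received from another node."""
```

Keeping the two behind separate interfaces would let us swap the internal algorithm (HTM or otherwise) without touching the communication layer, and vice versa.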
Each node in the network can be a receiver of any kind of streaming data, so the algorithm that interprets, analyzes, and makes predictions from that data must be as general as possible (function #1 above).
So, as a long-time follower of Jeff Hawkins and Numenta, I think HTM would be the perfect fit!
Though I understand the HTM algorithm very well on a conceptual level (along with the sub-concepts that make it so powerful, such as Sparse Distributed Representations and hierarchy’s ability to create increasingly invariant views), I’ve never actually used NuPIC. I’ve never even set it up on my machine.
Furthermore, a straight implementation of nupic.core may not be optimal for our particular system, since the way nodes decide to communicate with one another will undoubtedly affect how the internal algorithm should be structured, and vice versa.
We’d like to start simple, implementing lightweight, lean, straightforward versions of the algorithm so we can prototype and iterate through designs.
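As an illustration of the kind of “lightweight” starting point I have in mind, here is a toy, pure-Python stand-in for an encoder plus sequence memory: a crude scalar-to-sparse encoder and a first-order transition memory. To be clear, this is my own simplification for prototyping, not the real HTM spatial pooler or temporal memory:

```python
def encode(value, min_val=0.0, max_val=100.0, size=64, active_bits=8):
    """Encode a scalar as a sparse set of active bit indices (a crude SDR).

    Nearby values produce overlapping bit sets, loosely mimicking an
    HTM scalar encoder.
    """
    span = size - active_bits
    clamped = min(max(value, min_val), max_val)
    frac = (clamped - min_val) / (max_val - min_val)
    start = int(round(frac * span))
    return frozenset(range(start, start + active_bits))

class FirstOrderMemory:
    """Learns encoding -> next-encoding transitions from a stream.

    A toy substitute for HTM temporal memory: it only captures
    first-order sequences, but it is enough to prototype the
    observe/predict loop of a node.
    """

    def __init__(self):
        self.transitions = {}  # prev encoding -> {next encoding: count}
        self.prev = None

    def step(self, sdr):
        if self.prev is not None:
            counts = self.transitions.setdefault(self.prev, {})
            counts[sdr] = counts.get(sdr, 0) + 1
        self.prev = sdr

    def predict(self):
        counts = self.transitions.get(self.prev)
        if not counts:
            return None
        return max(counts, key=counts.get)

# Feed a repeating sequence; after a few cycles the memory predicts
# the encoding of the value that usually follows the latest input.
mem = FirstOrderMemory()
for value in [10, 20, 30, 10, 20, 30, 10, 20]:
    mem.step(encode(value))
```

Something this small would let us wire a node’s internal predictor to the communication layer and iterate on both halves before committing to a full NuPIC integration.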
What would be your suggestion for our next steps in bringing HTM into our development? What comes to mind as ‘something we should know’?