What do you think the pros and cons would be of a service-based HTM implementation? I'd rather not use the term "microservice", as I'm afraid it may spark a lot of side discussion, but the idea is similar to a microservice approach.
Presently (disclaimer: my knowledge of NuPIC is limited), NuPIC is used much the way most data scientists use ML tools to solve a particular problem, which is essentially sequential: gather data, refine data, build a model, feed the data to the model, and repeat until the results are acceptable.
What would the pros and cons be if its usage were extended to align with how systems are realistically deployed nowadays? That is, mostly as services (e.g. tiny Docker apps) that are independent of each other and scale in a distributed fashion; after all, the cloud is distributed, and it is where "living" apps are deployed.
Every time I think about building an HTM system, I find myself considering the deployment and operational aspects of the system, IOW evolving the HTM implementation from a library into a modular system (e.g. apps). By intuition (sorry, I have nothing to show yet), I think there will be many applications that could benefit from the modular parts of an HTM system. As an example, the minicolumn is basically a mini storage of permanence values and works well for sequential pattern recognition; at first look, caching systems might benefit from its strengths while staying independent of other HTM modules. Another example is probably the encoder module: how about an encoder module that learns? There are many possibilities and extensions I can imagine if HTM modules were services themselves. Maybe not applicable yet, but I think these possibilities, once applied, would provide more practical solutions for existing computing systems.
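To make the encoder-as-a-module idea a bit more concrete, here is a minimal sketch of what I mean by an independent module with a clean boundary. Note this is a toy scalar encoder I wrote for illustration, not the NuPIC API; the class and parameter names are hypothetical. The point is only that something with this shape could sit behind its own service endpoint, versioned and scaled separately from the rest of the system.

```python
# Hypothetical sketch: an HTM-style scalar encoder as a self-contained
# module. A service wrapper would expose encode() over a network
# boundary (HTTP, gRPC, etc.); here we just call it directly.
# This is a toy re-implementation for illustration, NOT the NuPIC API.

class ScalarEncoder:
    """Maps a scalar to a fixed-width SDR (sparse distributed representation)."""

    def __init__(self, min_val, max_val, size=64, active_bits=8):
        self.min_val = min_val
        self.max_val = max_val
        self.size = size                 # total bits in the output SDR
        self.active_bits = active_bits   # contiguous bits that are on

    def encode(self, value):
        # Clamp the input into the configured range.
        value = max(self.min_val, min(self.max_val, value))
        # The active block's position scales linearly with the value,
        # so nearby scalars produce overlapping SDRs.
        span = self.size - self.active_bits
        start = int(round((value - self.min_val) /
                          (self.max_val - self.min_val) * span))
        bits = [0] * self.size
        for i in range(start, start + self.active_bits):
            bits[i] = 1
        return bits


enc = ScalarEncoder(0.0, 100.0)
sdr = enc.encode(50.0)   # fixed-sparsity SDR with the block mid-range
```

A "learning" encoder in this framing would just be the same boundary with internal state that adapts, which is exactly the kind of change a service boundary lets you ship without touching the downstream spatial pooler or temporal memory modules.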
Interested in your thoughts.