Jumping in a bit late. Disclaimer: I haven’t dug into the NuPIC code yet, but there’s one thought I’d like to share. If the brain works in parallel (down to each neuron, I suppose), modelling it in a sequential language may lead to suboptimal use of CPU resources. Objects (columns?) in an HTM could all be seen as running in parallel, if I’m not mistaken, and they don’t share any data with each other directly. This maps superbly onto the Actor Model of computation:
http://worrydream.com/refs/Hewitt-ActorModel.pdf
There’s an emerging implementation of the Actor Model that compiles to native code:
https://www.ponylang.org/
There’s a nice blog post on why they chose this language for data processing:
https://blog.wallaroolabs.com/2017/10/why-we-used-pony-to-write-wallaroo/
So, modelling each SDR as one actor could, in my view, be the road to optimizing the computation, at least on a single physical machine. The Actor Model also helps with distributed computing, since there’s no conceptual difference between concurrency and distribution: messages are sent, received, and acted upon transparently, whether they come from the same machine or not.
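To make the idea concrete, here’s a rough sketch in Pony of what this might look like. All names here are hypothetical, nothing from NuPIC’s API: each column is an actor, the input is an immutable bit array passed by message, and a region actor collects the results. The Pony runtime schedules every actor independently across cores, with no locks and no shared mutable state.

```pony
// Hypothetical sketch: one actor per column, message-passing only.
actor Column
  let _id: USize

  new create(id: USize) =>
    _id = id

  // `be` declares a behaviour: an asynchronous message handler.
  // The runtime schedules each actor independently across cores.
  be compute(input: Array[Bool] val, region: Region) =>
    // A real column would compute its overlap with `input` here;
    // this stub just reports back to the region.
    region.report(_id)

actor Region
  var _reports: USize = 0

  be report(id: USize) =>
    _reports = _reports + 1

actor Main
  new create(env: Env) =>
    let region = Region
    // An immutable (val) input can be shared by all columns
    // without copying, because nobody can mutate it.
    let input = recover val Array[Bool].init(false, 2048) end
    var i: USize = 0
    while i < 2048 do
      Column(i).compute(input, region)
      i = i + 1
    end
```

One caveat: Pony itself currently runs on a single node, so the concurrency/distribution transparency I mentioned is a property of the Actor Model in general (think Erlang/OTP), not something Pony ships today.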
The Disruptor is a good step towards the Actor Model, but it models only a few concurrent processes, whereas in the Actor Model each neuron/column could be modelled as its own actor, scheduled independently.
Thoughts?