While working on my HTM framework, I have been using the same API design that NuPIC uses; most other implementations seem to use it too. It works, but thinking about it, I wonder whether there could be a better API design.
For example, the current TM/SP compute() method basically works as follows:

```cpp
tm.compute(input_sdr, true); // true means learning is enabled
auto predictive_cells = tm.getPredictiveCells();
```
This presents a few problems:

- The compute() function can never be marked as const, because constness is determined at compile time but the decision to learn is made at runtime.
- Even setting the first point aside, compute() still can't be marked as const, because the method itself modifies the internal state of the TM (the currently active and predictive cells).
- Batch processing becomes impossible, as the state is modified on every call. This may or may not be a problem: for online HTM services, batch processing may bring extra performance to the system; for building AGIs, it doesn't matter.
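For concreteness, here is a minimal sketch of the stateful shape that causes these problems. The class name, members, and the placeholder activation/prediction rules are hypothetical stand-ins, not NuPIC's actual implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// A toy SDR: just a set of active cell indices.
using SDR = std::set<uint32_t>;

class StatefulTM {
public:
    // Cannot be const: it overwrites activeCells_ and predictiveCells_
    // on every call, and when learn == true it also updates synapses.
    // Two calls on the same object therefore cannot run concurrently
    // or be batched.
    void compute(const SDR& input, bool learn) {
        activeCells_ = input;              // placeholder activation rule
        predictiveCells_.clear();
        for (uint32_t c : activeCells_)
            predictiveCells_.insert(c + 1); // placeholder prediction rule
        if (learn)
            ++updates_;                     // stand-in for synapse updates
    }

    const SDR& getPredictiveCells() const { return predictiveCells_; }
    int updates() const { return updates_; }

private:
    SDR activeCells_;
    SDR predictiveCells_;
    int updates_ = 0;
};
```

Even with `learn` set to false, the overwritten cell state forces every call site to treat the TM as mutable.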
So I propose a more PyTorch-y API (requiring C++17):
```cpp
auto [predictive_cells, active_cells] = tm.compute(sdr, last_active_cells);
tm.learn(last_active_cells, active_cells);
last_active_cells = std::move(active_cells);
```
Under this design, the compute() function (maybe it should be renamed forward?) can be marked as const; it doesn't modify any internal state, but rather returns the new state. So the compiler can do more optimizations on it, batch processing becomes possible, and it simply feels more intuitive to me.
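A minimal sketch of what such an interface could look like. The class name, the `SDR` alias, and the placeholder activation/prediction/learning rules are all hypothetical; only the shape of the API (const compute, explicit learn) reflects the proposal:

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <utility>

// A toy SDR: just a set of active cell indices.
using SDR = std::set<uint32_t>;

class TemporalMemory {
public:
    // const: reads the learned synapses but returns the new state instead
    // of storing it, so it can be called on many inputs (or from many
    // threads) without mutating the TM.
    std::pair<SDR, SDR> compute(const SDR& input, const SDR& last_active) const {
        SDR active = input;               // placeholder activation rule
        SDR predictive;
        for (uint32_t c : active)
            predictive.insert(c + 1);     // placeholder prediction rule
        return {predictive, active};
    }

    // The only mutating entry point; the caller decides when to invoke it,
    // so no runtime `bool learn` flag is needed.
    void learn(const SDR& last_active, const SDR& active) {
        ++updates_;                        // stand-in for synapse updates
    }

    int updates() const { return updates_; }

private:
    int updates_ = 0;
};
```

The caller threads the state through explicitly, exactly as in the loop above, which is also what makes batching straightforward: compute() can be applied to a whole batch of (input, last_active) pairs before a single learn() pass.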
This API design should hopefully be portable to other HTM implementations, as they fundamentally work the same way, so everyone benefits from this discussion. I need your feedback. Is this a good API? What do you think? Can anything be improved?