I’m trying to understand nupic.torch. At first glance I’m not seeing any NuPIC code at all. It appears to use existing ML/PyTorch code to emulate various aspects of HTM (Spatial Pooler, etc.).
E.g., by using a CNN you’re emulating individual neurons only sampling a subset of the input? And by using k-winners between layers, you’re emulating the maintenance of an ideal percentage of activations (inhibition/boosting/etc.)?
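To check my own understanding, here’s roughly what I mean by the k-winners part — a bare-bones sketch in plain PyTorch (just the top-k masking idea; the real nupic.torch KWinners module also deals with things like boosting and duty cycles, so treat this as an illustration, not its implementation):

```python
import torch

def k_winners(x, percent_on=0.1):
    """Keep only the top-k activations per sample, zero out the rest.

    A bare-bones illustration of the k-winners idea (enforcing a fixed
    fraction of active units, like inhibition in the Spatial Pooler),
    not the actual nupic.torch module.
    """
    k = max(1, int(round(percent_on * x.shape[-1])))
    # Indices of the k largest activations in each row.
    _, idx = x.topk(k, dim=-1)
    mask = torch.zeros_like(x)
    mask.scatter_(-1, idx, 1.0)
    return x * mask

x = torch.randn(2, 10)
print(k_winners(x, percent_on=0.2))  # roughly 2 of 10 units stay non-zero per row
```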
Right, there is no NuPIC dependency in nupic.torch. It aims to replicate how sparsity is enforced via Spatial Pooling, as described in our paper “How Could We Be So Dense?”
HTM as implemented in NuPIC is incompatible with Deep Learning systems, mainly because of the core difference in the neuron models. So this project is an attempt to show how ideas from the brain can be applied to current DL architectures to improve them.
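Concretely, the core ingredients are sparse weights (each unit only connects to a subset of the inputs) plus k-winner activations. Here’s a rough sketch of the sparse-weights half in plain PyTorch (a hypothetical SparseLinear class for illustration, not nupic.torch’s actual SparseWeights module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    """Linear layer where each output unit connects to a random subset of
    the inputs. A rough sketch of the 'sparse weights' idea, not the
    nupic.torch SparseWeights implementation.
    """
    def __init__(self, in_features, out_features, weight_sparsity=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Fixed binary mask: each output row keeps ~weight_sparsity of its inputs.
        mask = (torch.rand(out_features, in_features) < weight_sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Zero out the masked connections before applying the layer.
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

layer = SparseLinear(128, 64, weight_sparsity=0.3)
out = layer(torch.randn(8, 128))  # shape (8, 64)
```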
@rhyolight thanks, that makes sense. It’s pretty awesome to see just the raw math of sparsity improving noise tolerance by such a degree.
Does anyone think that encoding inputs using the various NuPIC sparse encoders would improve traditional NNs?
In the deep learning realm where we are trying to apply nupic.torch, the inputs are almost always dense and scalar, not binary. So we have had to adjust our techniques to produce sparsity within that environment. I don’t know whether thinking about encoders will carry over.
Well, ultimately every input in a computer is binary, so…
Jk, jk, just being a pedant for a laugh… I hear ya. I really can’t begin to think how to encode something like ImageNet into sparsely encoded inputs to feed into a CNN.
It just feels like we’re losing a big piece of the puzzle when closely related inputs do not share large swaths of the representational input space the way they do when encoded with NuPIC. Maybe training would even take 10x less time.
However, it’s purely my intuition (and we all know how reliable intuition is) that says this would help; perhaps it would not change anything in the NNs, since, after all, those neurons are not at all similar (as you mentioned already).
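To illustrate what I mean by shared representational space, here’s a toy scalar encoder sketch (in the spirit of NuPIC’s ScalarEncoder, not its actual implementation): nearby values produce SDRs that overlap heavily.

```python
import numpy as np

def scalar_to_sdr(value, min_val=0.0, max_val=100.0, n=400, w=21):
    """Toy scalar encoder: a contiguous block of w active bits whose
    position slides with the value, so nearby values share most bits.
    """
    sdr = np.zeros(n, dtype=np.int8)
    frac = (value - min_val) / (max_val - min_val)
    start = int(round(frac * (n - w)))
    sdr[start:start + w] = 1
    return sdr

a, b = scalar_to_sdr(40.0), scalar_to_sdr(42.0)
overlap = int((a & b).sum())
print(overlap)  # nearby values share a large fraction of their 21 active bits
```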