I’ll modify the picture tonight when I get a chance. Drawing circles on the biological neuron images made it too cluttered as well.
No problem, hopefully I can explain it well enough. At each time step the network learns the current input image and the previous neuron states. To make a prediction of the next time step, the current neuron states are fed back into the network, which gives the next time step’s predicted neuron states. This can loop for as many time steps as needed.
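To make the loop concrete, here is a minimal toy sketch of that feedback scheme. This is not the actual SC implementation: `ToyRecurrentNet`, its XOR-style update rule, and `predict_ahead` are all illustrative stand-ins, just showing the shape of "feed the states back in when there is no new image".

```python
# Toy sketch (NOT real SC code) of the closed-loop prediction described above:
# each step combines the current input with the previous neuron states;
# predictions come from feeding the states back in with no new input.

class ToyRecurrentNet:
    def __init__(self, size):
        self.size = size

    def step(self, image, prev_states):
        # Stand-in update rule: mix input bits with the previous binary
        # states via XOR (a real SC update would use learned dendrites).
        if image is None:
            image = [0] * self.size  # prediction mode: no new input
        return [i ^ s for i, s in zip(image, prev_states)]


def predict_ahead(net, image, states, n_steps):
    states = net.step(image, states)       # encode the current input
    preds = []
    for _ in range(n_steps):
        states = net.step(None, states)    # feed states back into the network
        preds.append(states)
    return preds
```

The point is only the control flow: one call per time step, with the previous output becoming the next input for however many steps you want to roll forward.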
That’s a good idea because MNIST is very well known and I want to make a demo with it anyway. As long as I use the same demo between SC, NUPIC, Ogmaneo, and an LSTM I’ll be happy with it.
True enough, I haven’t demonstrated that SC works in a noisy environment, but my first guess is it would work OK if I set the right dendrite threshold. Something like: as long as 75% of the previously observed stimuli are present, the pattern is recognized even if there is noise. Admittedly, I am not an expert so I may be missing something essential.
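Here’s what I mean by that threshold, as a hedged sketch. The function name and the set-of-active-indices representation are just illustrative, not actual SC internals:

```python
# Sketch of the 75% idea above: a dendrite stores the set of input bits
# it previously observed, and it still fires if at least `threshold` of
# that learned pattern is active, so a few dropped or noisy bits are
# tolerated.

def dendrite_matches(learned_bits, active_bits, threshold=0.75):
    """learned_bits, active_bits: sets of active input indices."""
    if not learned_bits:
        return False
    overlap = len(learned_bits & active_bits)
    return overlap >= threshold * len(learned_bits)
```

So with a learned pattern of 4 bits, the dendrite still recognizes an input where 3 of those 4 are present plus some extra noise bits, but not one where only 2 remain.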
Also, Ogma algorithms can observe and learn from scalar buffers. Would I have to modify SC to have a fair comparison with Ogmaneo on the MNIST dataset? It wouldn’t be that hard; I’d just replace the overlap and threshold functionality with a Euclidean distance formula. However, NUPIC only operates on binary arrays. Is this why choosing the right encoder is so important?
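The swap I have in mind looks roughly like this. These are illustrative helpers, not the real SC API: binary overlap picks the dendrite with the *highest* count of shared active bits, while the scalar version picks the one with the *lowest* Euclidean distance to its stored weights.

```python
import math

# Binary case: higher overlap = better match against a threshold.
def binary_overlap(stored_bits, active_bits):
    return sum(1 for s, a in zip(stored_bits, active_bits) if s and a)

# Scalar case: lower Euclidean distance = better match.
def euclidean_distance(weights, scalars):
    return math.sqrt(sum((w - x) ** 2 for w, x in zip(weights, scalars)))

def best_match_scalar(dendrites, scalars):
    # Each dendrite is a list of stored scalar weights; pick the nearest.
    return min(range(len(dendrites)),
               key=lambda i: euclidean_distance(dendrites[i], scalars))
```

The rest of the algorithm could stay the same; only the match score changes (and its sense flips from "bigger is better" to "smaller is better").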