EMNIST is a data set for supervised learning.
The SDR matrix is meant for unsupervised learning.
The closest comparison with an artificial neural network would be that the SDR matrix
corresponds to the output side of a deep neural network. But SDRs do not use backpropagation
for setting output bits high.
Deep learning programmers randomly select an output pixel for a given letter and chase it all the way to the front: knocking weights up where they under-voted, and knocking weights down where they over-voted.
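The weight-nudging idea above can be sketched with a single output unit and the delta rule (which is what backpropagation reduces to for one layer). The numbers and the toy "letter" here are illustrative assumptions, not anything from this text:

```python
# Minimal delta-rule sketch: nudge weights up when the output unit
# under-votes its target, down when it over-votes.

def train_step(weights, pixels, target, lr=0.1):
    """One training step for a single linear output unit."""
    vote = sum(w * p for w, p in zip(weights, pixels))
    error = target - vote  # positive -> under-voting, negative -> over-voting
    return [w + lr * error * p for w, p in zip(weights, pixels)]

weights = [0.0, 0.0, 0.0]
pixels = [1, 0, 1]  # toy input "letter"
for _ in range(50):
    weights = train_step(weights, pixels, target=1.0)
# After training, the unit's vote sits very close to the 1.0 target.
```

A full deep network just chains this error signal backward through every layer instead of stopping at one.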
Most of the work here is for temporal audio detection and related data structures.
The closest machine learning NN algorithm is the audio-detecting LSTM neural network.
The machine learning community is still in the stone age when it comes to unsupervised learning.
K-means is where they are at.
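For reference, k-means really is about this simple. A bare-bones one-dimensional sketch with two clusters (the data points and starting centers are made up for illustration):

```python
# Bare-bones 1-D k-means, k=2: assign each point to its nearest
# center, then move each center to the mean of its points, repeat.

def kmeans_1d(points, c0, c1, iters=10):
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return c0, c1

centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], c0=0.0, c1=10.0)
# The two centers settle on the two obvious clumps in the data.
```

It finds flat clusters, but nothing like the layered sub-feature hierarchy described below.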
A real SDR has sub-feature activation, such as edges, fingers, and hands, all the way up
to bigger patterns and temporal patterns that form complete loops.
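One way to picture that sub-feature composition: model each SDR as a set of active bit indices, build bigger features as unions of sub-feature bits, and recognize by bit overlap. This is my own simplified model of the idea, not the author's implementation; all the bit indices are made up:

```python
# SDRs as sets of active bit indices (illustrative model).
edge = {3, 17, 42, 80}
curve = {5, 17, 63, 99}

# A bigger feature is composed from its sub-features' active bits.
finger = edge | curve

def overlap(a, b):
    """Overlap score: count of shared active bits."""
    return len(a & b)

# A noisy input that shares most bits with `finger` still scores high,
# which is why SDR matching degrades gracefully.
noisy = {3, 5, 17, 42, 63, 7}
score = overlap(noisy, finger)  # 5 shared bits
```

The key property is that sub-features stay identifiable inside the bigger pattern, instead of being smeared across a whole network.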
Deep networks skip all the sub-features during training. The whole NN is trained for a given
letter, from front to back and then back to front.
They are not trained for sub-features first, such as lines, corners, and curves of various types.
And they do not train sub-features that can then be used to train for bigger features.
I like both ways; each catches what the other missed, in an unsupervised manner.
Hinton said it was too much of a fuss.
But then look at capsule networks.
Since deep neural networks are trained all at once, there is no guarantee that the sub-features
will localize in the first layers. A self-learned eye detector could be spread out across
all layers. A troubleshooting nightmare.