I’ve been trying to improve the accuracy of my MNIST classification. My ~65% accuracy using the SP alone was a bit disappointing, especially since MNIST isn’t a hard problem in ML.
I found the problem was mostly in how I implemented the classifier. The classifier computes the overlap score of an input SDR against a stored reference SDR. However, my old implementation did a crude job of maintaining the sparsity of the stored SDRs. Fixing that immediately improved the classification accuracy to 87.15%, on par with the earliest neural networks. And surprisingly, the optimal hyperparameters changed dramatically: instead of a boosting strength of 0.1, the optimal boost strength is now a very high value like 9.
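To illustrate the idea, here is a minimal sketch of an overlap-based SDR classifier that keeps the stored reference SDRs at a fixed sparsity. The class and method names are my own, and the top-k counting scheme is an assumption for illustration, not the exact implementation from my code:

```python
import numpy as np

class OverlapClassifier:
    """Toy overlap-based SDR classifier (hypothetical sketch, not the
    exact implementation). Stores per-bit activation counts per label
    and keeps each reference SDR at a fixed sparsity."""

    def __init__(self, n_bits, sparsity=0.05):
        self.n_bits = n_bits
        # Enforce a fixed number of active bits in each reference SDR.
        self.n_active = max(1, int(n_bits * sparsity))
        self.counts = {}  # label -> per-bit activation counts

    def learn(self, active_bits, label):
        # Accumulate how often each bit is active for this label.
        c = self.counts.setdefault(label, np.zeros(self.n_bits, dtype=np.int64))
        c[active_bits] += 1

    def reference(self, label):
        # Keep only the top-k most frequently active bits, so the stored
        # reference SDR preserves the target sparsity.
        c = self.counts[label]
        return set(np.argsort(c)[-self.n_active:].tolist())

    def classify(self, active_bits):
        # Predict the label whose reference SDR overlaps the input most.
        s = set(active_bits)
        return max(self.counts, key=lambda lbl: len(s & self.reference(lbl)))
```

Without the top-k cut in `reference`, the stored SDR slowly accumulates active bits and loses its sparsity, which washes out the overlap scores between classes.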
I think the next step to further improve performance would be building a vision encoder, but that is beyond my current ability. Hopefully my results can inspire someone to look further into it.
The code and results are available on GitHub.