Hi, I did some experiments involving the SDRClassifier and a few encoders:
the SpatialPooler, a FlyHash encoder, and a newer FH 2D encoder which combines the random-projection idea from fly hash with spatial proximity.
This means two neighboring pixels in an image contribute similar projections in the encoder, while two distant pixels have relatively dissimilar (random) projections.
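To make the idea concrete, here is a minimal NumPy sketch of such an encoder. This is my own illustration, not the actual implementation: the function name, parameters (`out_size`, `n_active`, `sigma`) and the smoothing scheme are all assumptions. The trick is to blur each output bit's random projection map over the 2D pixel grid, so weights for neighboring pixels become correlated while distant pixels stay independent; a winner-take-all step then keeps the top-k projections as ON bits.

```python
import numpy as np

def fh_2d_encoder(image, out_size=1024, n_active=77, sigma=1.5, seed=42):
    """Hypothetical sketch of a fly-hash-style encoder with spatial smoothing.

    Each output bit owns a random (h, w) projection map; blurring the maps
    over the pixel grid makes neighboring pixels contribute similarly.
    """
    h, w = image.shape
    rng = np.random.default_rng(seed)
    # One random projection map per output bit.
    proj = rng.standard_normal((out_size, h, w))
    # Separable Gaussian-ish blur so nearby pixels get correlated weights.
    kernel = np.exp(-0.5 * (np.arange(-2, 3) / sigma) ** 2)
    kernel /= kernel.sum()
    for axis in (1, 2):
        proj = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, proj)
    # Project the image and keep the n_active strongest responses as ON bits.
    scores = proj.reshape(out_size, -1) @ image.ravel()
    sdr = np.zeros(out_size, dtype=np.uint8)
    sdr[np.argsort(scores)[-n_active:]] = 1
    return sdr
```

With `sigma = 0` (no blur) this would reduce to a plain fly-hash random projection; the smoothing is what adds the spatial-proximity property.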
In order to get results comparable with other ML classification tasks, I modified the HTM MNIST example from an online style to a "batched" operation:
- First stage: encode the whole MNIST dataset (60k train + 10k test) into SDRs with the specified encoder and parameters.
- Second stage: train the SDRClassifier on the pre-computed encodings for a variable number of epochs until the results converge.
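The second stage can be sketched as follows. Since the SDRClassifier is essentially a single-layer softmax model over the SDR bits, I use a minimal NumPy stand-in here rather than the real htm.core class; the function names, learning rate, and epoch count are hypothetical.

```python
import numpy as np

def train_sdr_classifier(sdrs, labels, n_classes=10, epochs=30, lr=0.1, seed=0):
    """Minimal softmax stand-in for SDRClassifier, trained for several
    epochs on pre-computed SDR encodings (hyperparameters are illustrative)."""
    n, n_bits = sdrs.shape
    rng = np.random.default_rng(seed)
    W = np.zeros((n_bits, n_classes))
    for _ in range(epochs):
        for i in rng.permutation(n):        # one SGD pass per epoch
            x = sdrs[i]
            logits = x @ W
            p = np.exp(logits - logits.max())
            p /= p.sum()
            p[labels[i]] -= 1.0             # cross-entropy gradient
            W[x.astype(bool)] -= lr * p     # update only the active-bit rows
    return W

def classify(W, sdr):
    """Predict the class with the highest score for one SDR."""
    return int(np.argmax(sdr @ W))
```

Because only the rows of `W` for active bits are touched per sample, training on sparse SDRs stays cheap even over many epochs.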
One of the most important points was that classification results depend heavily on the amount of information available, mostly the number of 1 bits in the input SDRs.
In order to compare the three encoders (SP, FH, FH_2D), they were tuned to produce similar SDRs in terms of size and solidity. Since I could not do much to control the SpatialPooler's solidity, I simply parametrized the other two encoders to output SDRs at the average solidity reported by the SP.
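The matching step amounts to measuring the SP's average ON-bit count and using it as the fixed top-k for the fly-hash encoders. A tiny sketch (the `sp_sdrs` variable and the exact rounding are assumptions on my part):

```python
import numpy as np

def mean_on_bits(sdrs):
    """Average number of ON bits per SDR across a batch (the 'solidity')."""
    return float(np.mean([s.sum() for s in sdrs]))

# Hypothetical usage: sp_sdrs holds the SpatialPooler outputs for the
# training set; the rounded average becomes the fly-hash encoders' top-k.
#   n_active = round(mean_on_bits(sp_sdrs))   # e.g. ~77 for 1024-bit SDRs
```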
The results: while the SpatialPooler slightly outperforms the FlyHash encoder, the 2D encoder is well ahead.
77/1024-bit SDRs (77 ON bits), corresponding to (32,32) columnDimensions in SP with potentialRadius: 7
Classifier epochs on pre-computed encodings: 30
FH Encoder: 94.03%
2 x SpatialPooler: 94.34%
FH_2D_Encoder : 96.13%
2x SpatialPooler means I fed it the x_train dataset twice in learning mode; this produces slightly better results than a single pass.
611/6241-bit SDRs, corresponding to (79,79) columnDimensions and a potentialRadius of 11:
FH Encoder: 97.13%
FH_2D Encoder: 97.86%
I will also post the script later; currently it has dependencies on modules that are still changing/unstable.