Right now I'm trying to finish a scalar encoder. For whatever reason, I wanted to minimize the number of bits needed for the representation while still keeping it sparse.
Anyway, here it is:
It starts with zero off bits mixed in with the on bits and slides that window of mixed bits back and forth across the total bits. Each time the window hits a side, one of two things happens: the arrangement (permutation) of on/off bits inside the window advances by one, or, if the window has already cycled through its last permutation, the number of off bits mixed into the window increases by one.
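To make sure I'm describing it clearly, here's a rough sketch of the enumeration I mean (the function name and the choice to keep the window's endpoint bits always on are my own assumptions, not tested code from my encoder):

```python
from itertools import combinations

def enumerate_codes(total_bits, on_bits, max_mixed_off):
    """Yield every code in the scheme: a window whose endpoint bits are on
    slides across total_bits; inside the window, the mixed-in off bits take
    every possible interior arrangement ("permutation")."""
    assert on_bits >= 2
    for m in range(max_mixed_off + 1):          # off bits mixed into the window
        window = on_bits + m
        if window > total_bits:
            break
        # every arrangement of the interior on bits; endpoints stay on,
        # so the m off bits are genuinely "mixed in"
        for ons in combinations(range(1, window - 1), on_bits - 2):
            pattern = (0,) + ons + (window - 1,)
            for start in range(total_bits - window + 1):  # slide the window
                code = [0] * total_bits
                for p in pattern:
                    code[start + p] = 1
                yield code
```

With `on_bits=2`, for example, the window can only widen (`11`, then `101`, `1001`, ...), while `on_bits=3` or more also gets real permutations within each window size, which is where the extra capacity comes from.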
I still need to debug it a lot and make it NuPIC-compatible. It should be able to represent much larger numbers with three or four on bits while still keeping sparsity, but there's a math error somewhere right now that rounds up instead of down.

There's another problem, though: because the on bits move back and forth, completely different numbers can turn on the same columns. One improvement I was considering is to append small number encoders to the main one, representing the number of off bits mixed into the window as well as the permutation number. That way, those neurons (or nearby ones) would stay active across larger groups of numbers. I could repeat that cycle until some neurons stay active for nearly half the numbers.
So, would that ensure the scalar encoder I'm making has good properties? Am I missing anything here? This is my first time writing an encoder, so I welcome any help.