As I noted in another thread, I’ve been working on a JavaScript implementation of the Random Distributed Scalar Encoder (RDSE). I’m having trouble understanding why it wastes buckets when the resolution of the encoder is below 1.0. Have a look at this quick video demo. Sorry about the crappy audio; I recorded it with my laptop mic while sitting on my recliner.
I’m not sure whether my implementation is incorrect, or whether I don’t understand something about the RDSE. It seems really wasteful to end up with more buckets than distinct input values.
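For anyone following along, here is a minimal sketch of what I mean. It assumes the usual RDSE bucket rule (`bucketIdx = round((value - offset) / resolution)`, as in NuPIC) — my own implementation may differ, so treat the numbers as illustrative only. With resolution 0.5 and integer-only inputs, consecutive integers land two bucket indices apart, so half the buckets in the spanned range are never touched:

```javascript
// Assumed bucket-index rule (NuPIC-style RDSE); hypothetical helper, not my actual code.
function bucketIndex(value, offset, resolution) {
  return Math.round((value - offset) / resolution);
}

// Feed the integers 0..10 through an encoder with resolution 0.5.
const used = new Set();
for (let v = 0; v <= 10; v++) {
  used.add(bucketIndex(v, 0, 0.5));
}

// The indices come out as 0, 2, 4, ..., 20: 11 buckets used,
// but they span 21 bucket slots, so every odd-numbered bucket is skipped.
console.log([...used].sort((a, b) => a - b).join(", "));
```

This is exactly the "more buckets than values" behavior I'm seeing: the finer resolution allocates bucket positions between my integer inputs that nothing ever maps to.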