Decoding Grid Cells using Machine Learning

Hi all. I want to share what I’ve been doing in the past few days.

So, Numenta released their grid cell paper quite a while ago. From the paper we know that grid cells can represent a very large space using relatively few bits. However, we don't have a good way to decode grid cells besides the SDR Classifier, which doesn't make much sense (and is slow), since the representation of each location is unique and a one-to-one mapping should therefore be achievable. I had been thinking about an algorithm to generate a probability field out of grid cell SDRs, so that decoding could be done directly and quickly.

Then I realized I could just use machine learning to map the SDR back into real values. Bits in a grid cell encoding turn on and off with a fixed cycle, so it makes intuitive sense that some linear model could learn the reverse mapping.
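To make that intuition concrete, here is a toy grid-cell-style encoder (a hypothetical stand-in, not Numenta's actual encoder; the periods and module sizes are made up for illustration). Each module marks the position modulo its own period, so every bit cycles on and off with a fixed period:

```python
import numpy as np

def toy_grid_encode(x, periods=(7, 11, 13), cells_per_module=20):
    """Encode a scalar position as a binary SDR.

    Each module discretizes x modulo its own period into one active
    cell, so each bit turns on and off with a fixed cycle length.
    Periods and module sizes here are arbitrary, for illustration only.
    """
    sdr = np.zeros(len(periods) * cells_per_module, dtype=np.float32)
    for i, period in enumerate(periods):
        phase = (x % period) / period            # position within this module's cycle, in [0, 1)
        active = int(phase * cells_per_module)   # which cell in this module fires
        sdr[i * cells_per_module + active] = 1.0
    return sdr
```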

It turns out that's mostly the case. Basic machine learning models can learn to map the SDR back into real values, and with the right model they can resist noise to a certain degree. Basically (see the sketch after this list):

  • Generate enough samples from the grid cells over a large enough range (ex: -50~50)
    • These are pairs: an SDR and its associated value
    • There must be enough of them, otherwise the model will overfit
  • Train a model (ex: Ridge) on the samples
  • Use the model to convert SDRs back into real values
  • The SDRs can be noisy; noise will affect the accuracy but won't break the model
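Here's a minimal sketch of those steps, assuming the toy encoder above and scikit-learn's Ridge. The range, sample count, noise level, and model parameters are illustrative assumptions, not the exact setup from my code:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Step 1: generate (SDR, value) pairs over a large range, e.g. -50 to 50.
values = np.random.uniform(-50, 50, size=5000)
X = np.stack([toy_grid_encode(v) for v in values])

# Step 2: train a linear model to map SDRs back to real values.
model = Ridge(alpha=1.0)
model.fit(X, values)

# Step 3: decode an SDR, and a noisy copy with a few bits flipped.
sdr = toy_grid_encode(13.7)
noisy = sdr.copy()
flip = np.random.choice(len(noisy), size=3, replace=False)
noisy[flip] = 1.0 - noisy[flip]
print(model.predict([sdr]), model.predict([noisy]))
```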

Please have a look at my code for more detail.

Good job! That is awesome!