A 1D grid cell encoder works like this (in an algorithmic sense):
- Let’s say we want to encode the value x using N grid modules, where each module has M bits.
- For each of the N modules:
  - Generate a scaling factor s and a bias b (drawn from a reasonable range)
  - Compute x’ = x*s + b (scale the value, then add the bias)
  - Compute I = round(fmod(x’, M) + sign(x’)*M) mod M (the sign term shifts negative values into grid space, and the final mod wraps the result back into [0, M))
  - Create an empty SDR of length M, then set sdr[I] = 1
- Concatenate the SDRs generated by all modules
- You should end up with an SDR of length M*N
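The steps above can be sketched roughly as follows. This is a minimal sketch, not a reference implementation: the ranges used for s and b, the module count, and the module size are all assumptions you would tune for your value range.

```python
import random

def grid_cell_encode(x, num_modules=16, module_size=32, seed=42):
    """Encode a scalar x into a binary SDR of length num_modules * module_size.

    Exactly one bit is set per module. The ranges for s and b below are
    assumptions; pick ones that suit the scale of your input values.
    """
    rng = random.Random(seed)  # fixed seed so the same x always encodes the same way
    sdr = [0] * (num_modules * module_size)
    for n in range(num_modules):
        s = rng.uniform(0.5, 2.0)          # scaling factor (assumed range)
        b = rng.uniform(0.0, module_size)  # bias (assumed range)
        xp = x * s + b                     # x' = x*s + b
        # Python's % operator always returns a non-negative result, so it
        # handles the negative-value wrap-around from the formula in one step.
        i = round(xp) % module_size
        sdr[n * module_size + i] = 1
    return sdr

sdr = grid_cell_encode(3.7)
print(len(sdr), sum(sdr))  # prints "512 16": one active bit per module
```

Note that the fmod/sign juggling in the formula exists because C-style fmod keeps the sign of its argument; in Python a plain `%` already yields a value in [0, M), so the wrap-around collapses to one operation.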
Here’s my Python implementation from a project a few days ago.
Edit: Fix formula