Proportional Scalar Encoder?

I searched for such a beast: an encoder in which the bit overlap between any two encoded numbers grows as their ratio approaches 1.

e.g. the overlap between 900 and 1000 would be the same as the overlap between 0.9 and 1.

PS: the “biological justification” is that we perceive sizes relatively. In terms of size, the horse-ox distance feels “smaller” than the sparrow-duck one, while the absolute measurements say the opposite.
Simply encoding the logarithm of the value doesn’t seem to solve it.

Is anyone aware of such an encoder, and if so, which one is it?

This is one I made for this purpose some time ago:

def hash_scalar_encoder(value, resolution=0.001, curving=2, n=1000, p=10):
    """Encode a scalar as a set of p active bits out of n."""
    sdr = set()
    value **= 1 / curving  # compress the scale (curving=2 -> square root)
    value /= resolution    # quantize into buckets of size `resolution`
    for i in range(p):
        # hash p consecutive buckets so nearby values share bits
        sdr.add(hash((round(value + i),)) % n)
    return sdr
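In case anyone wants to see the overlap behaviour directly, here is a quick sketch. The `overlap` helper is my own naming, and the encoder is copied from above so the snippet runs standalone; nearby values share most of their p bits, distant ones share almost none:

```python
def hash_scalar_encoder(value, resolution=0.001, curving=2, n=1000, p=10):
    sdr = set()
    value **= 1 / curving
    value /= resolution
    for i in range(p):
        sdr.add(hash((round(value + i),)) % n)
    return sdr

def overlap(a, b):
    # number of bits shared by the two encodings
    return len(hash_scalar_encoder(a) & hash_scalar_encoder(b))

# with the default resolution/curving, 100.0 and 100.04 land in
# mostly-overlapping bucket windows, while 100.0 and 200.0 do not
print(overlap(100.0, 100.04))  # most of the 10 bits shared
print(overlap(100.0, 200.0))   # near-zero overlap
```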

Thanks, I’ll look into it.

Thanks for sharing, looks useful. Just wondering if you’ve considered using a log function instead, since additions in log space correspond to multiplications of the value? Below is my attempt, but I haven’t tested it yet.

import math

def hash_scalar_encoder(value, resolution=2, n=1000, p=10):
    """Encode a positive scalar; values whose ratio is `resolution` share ~50% of bits."""
    sdr = set()
    # multiplying the value by `resolution` shifts the buckets by exactly p/2
    scale = p / (2 * math.log(resolution))
    value = math.log(value) * scale  # value must be > 0
    for i in range(p):
        sdr.add(hash((round(value + i),)) % n)
    return sdr

Some examples of what the ‘resolution’ parameter does:
E.g. a resolution of 2 means that a value and its double (e.g. 3 & 6, or 7 & 14) will share 50% of the bits in their SDRs.
E.g. a resolution of 5 means that a value and its quintuple (e.g. 3 & 15, or 7 & 35) will share 50% of the bits in their SDRs.
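The 50% claim is easy to check by hand, since doubling the value shifts the bucket window by scale * log(2) = p/2 buckets. A small standalone sketch (encoder copied from the post above):

```python
import math

def hash_scalar_encoder(value, resolution=2, n=1000, p=10):
    sdr = set()
    scale = p / (2 * math.log(resolution))
    value = math.log(value) * scale
    for i in range(p):
        sdr.add(hash((round(value + i),)) % n)
    return sdr

a = hash_scalar_encoder(3)
b = hash_scalar_encoder(6)
# doubling shifts the buckets by p/2 = 5, so about half the bits match,
# and the same holds for any other pair with ratio 2, e.g. 7 & 14
print(len(a & b))  # roughly p/2 shared bits
```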


Thanks, I assumed encoding logarithms wouldn’t work, which turns out to be incorrect.

I don’t have a good intuition for those math concepts, so I haven’t thought about doing it that way.

Oh OK, I thought the log version was a natural extension of your code, so I assumed it was perhaps more expensive computationally and that’s why it wasn’t used. Anyway, it’s a clever piece of code.