RDS encoder efficiency

I’ve finished prototyping some random distributed scalar encoder (RDSE) versions for my PyHTM repo, and I’m wondering about speed vs. memory efficiency.

I have two versions. In the first, the RDSE maintains a list of randomly generated indices corresponding to the locations of 1s in the output SDRs it will produce. To encode a value, it uses the resolution and starting point to work out where to slide a window over that index list, and the indices under the window become the active bits of the SDR. I was worried about the memory cost of storing all those indices in a situation where many, many unique encodings are required, so I also made a second version:
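For anyone who wants something concrete to react to, here's a minimal sketch of how I understand version 1 working. All names and parameters (`size`, `w`, `n_buckets`, etc.) are my own stand-ins, not necessarily what's in the repo, and this toy version doesn't deduplicate indices within a window the way a production RDSE would:

```python
import numpy as np

class RDSEv1:
    """Sketch of the list-based version: all bit indices are generated
    up front, and encoding slides a window over the stored list."""

    def __init__(self, size=400, w=21, resolution=1.0, start=0.0,
                 n_buckets=1000, seed=42):
        self.size = size              # SDR length in bits
        self.w = w                    # active bits per encoding
        self.resolution = resolution  # value range covered by one bucket
        self.start = start            # value that maps to bucket 0
        rng = np.random.default_rng(seed)
        # One random index per window position; adjacent windows share
        # w-1 indices, which is what makes nearby encodings overlap.
        self.indices = rng.integers(0, size, n_buckets + w)

    def encode(self, value):
        bucket = int((value - self.start) / self.resolution)
        window = self.indices[bucket:bucket + self.w]
        sdr = np.zeros(self.size, dtype=np.int8)
        sdr[window] = 1  # duplicates in the window collapse to one bit
        return sdr
```

The memory worry is the `self.indices` array: it grows linearly with the number of buckets you want to support.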

In this version, no index list is maintained. Instead, a specific seed is chosen at object instantiation and used to regenerate repeatable, overlapping index lists on demand, based on the resolution and starting point, similar to version 1. Pros: almost no stored data. Cons: lots of computation to generate all those random numbers every time it needs to encode.
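And here's my rough sketch of version 2, under the same caveats (my names, my assumed parameters, no within-window deduplication). The trick is seeding a throwaway generator per window position, so the same position always regenerates the same index, and this toy assumes values at or above `start` so positions stay non-negative:

```python
import numpy as np

class RDSEv2:
    """Sketch of the stateless version: nothing stored beyond a base
    seed; each bit index is re-derived from scratch on every encode."""

    def __init__(self, size=400, w=21, resolution=1.0, start=0.0, seed=42):
        self.size = size
        self.w = w
        self.resolution = resolution
        self.start = start
        self.seed = seed  # the only "state" the encoder keeps

    def _index_for(self, pos):
        # Seed a fresh generator from (base seed, window position), so the
        # same position always yields the same index -- repeatable by
        # construction, but a new Generator is built on every call.
        rng = np.random.default_rng([self.seed, pos])
        return int(rng.integers(0, self.size))

    def encode(self, value):
        bucket = int((value - self.start) / self.resolution)
        sdr = np.zeros(self.size, dtype=np.int8)
        for pos in range(bucket, bucket + self.w):
            sdr[self._index_for(pos)] = 1
        return sdr
```

Adjacent values still overlap, because their windows share `w - 1` positions and each position deterministically maps to the same bit.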

I’m almost totally ignorant in the realm of speed vs. memory tradeoffs; I just know they exist. So, does anyone have thoughts on which version would be better to use in general? The code is up on my GitHub repo, accessible via the link in my last post.
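To put rough numbers on the tradeoff, here's a self-contained micro-benchmark of just the core operation each design repeats per encode: slicing a precomputed index array (version 1's cost) vs. constructing seeded generators to regenerate the indices (version 2's cost). The parameters (400-bit SDR, 21 active bits, 10,000 buckets) are assumptions for illustration:

```python
import timeit
import numpy as np

size, w, n_buckets = 400, 21, 10_000
# Version 1 stand-in: the stored index list (this is the memory cost).
stored = np.random.default_rng(0).integers(0, size, n_buckets + w)

def slice_window():
    # Version 1 per-encode work: a view into the stored array.
    return stored[123:123 + w]

def regenerate():
    # Version 2 per-encode work: w fresh seeded generators per call.
    return [int(np.random.default_rng([0, p]).integers(0, size))
            for p in range(123, 123 + w)]

t1 = timeit.timeit(slice_window, number=2_000)
t2 = timeit.timeit(regenerate, number=2_000)
print(f"stored list: {stored.nbytes} bytes")
print(f"slice: {t1:.4f}s  regenerate: {t2:.4f}s  ratio: {t2 / t1:.0f}x")
```

On my understanding, this is the whole tradeoff in miniature: version 1 pays a fixed, linear-in-buckets memory cost (here 8 bytes per index) for near-free encoding, while version 2 pays per encode. Unless the bucket count is enormous or encodes are rare, the stored list is usually the better default.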