Ah, it’s interesting that both the logical author-written breaks and eligibility traces work well. I’ll have to look more into eligibility traces, though I probably need a better grounding in RL first to do so.
“Similar meanings closer to one another” - close like GloVe vectors, or close as in “appearing closer in sentences / over time”? I think I’m misunderstanding exactly how you converted words to SDRs here, and how you fed your model - one word at a time or one sentence at a time.
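For what it’s worth, here’s roughly how I’d picture the first interpretation - “closeness” as bit overlap between sparse binary SDRs. This is purely my own toy sketch (the sizes, the `nudge_toward` stand-in for training, everything here is made up by me, not taken from your encoder):

```python
import random

random.seed(42)
SDR_SIZE = 2048   # total bits (typical HTM-ish dimensions, my assumption)
ACTIVE_BITS = 40  # ~2% sparsity

def random_sdr():
    """A random sparse set of active bit indices."""
    return set(random.sample(range(SDR_SIZE), ACTIVE_BITS))

def nudge_toward(sdr, target, shared=20):
    """Force `sdr` to share `shared` bits with `target` (a crude
    stand-in for whatever learning actually aligns the encodings)."""
    keep = set(random.sample(sorted(sdr - target), ACTIVE_BITS - shared))
    borrow = set(random.sample(sorted(target), shared))
    return keep | borrow

def overlap(a, b):
    """Semantic similarity as the count of shared active bits."""
    return len(a & b)

cat = random_sdr()
dog = nudge_toward(random_sdr(), cat)  # semantically "close" to cat
car = random_sdr()                     # unrelated word

print(overlap(cat, dog))  # large shared-bit count
print(overlap(cat, car))  # near zero for unrelated encodings
```

If that’s the kind of “closer” you meant, then the encoder is doing the semantic work up front; if you meant temporal proximity instead, I’d guess the model itself learns the association from sequence order.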
Hex grids deriving topology from semantics - that is fascinating stuff. I’ll read through Jordan’s thread you linked. Is that much connected with Bitking’s thread? You discuss several interesting potential uses with him there; I quite like the idea of less sparse borders arising from lateral input.