A while ago, I was trying to add spatial indexing to HTM neurons so that they could mimic neurogenesis/neurodegeneration and be stored in a corruption-resistant file. I could have used hash maps for that, but I ran into a few problems with the idea. How could synapses take input from neurons located far outside the bounds of the original array, without storing the index of each neuron and reconfiguring every synapse-neuron link whenever a neuron was added or deleted? And how could things stay ordered so that spatial algorithms could find nearby neurons?
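To make the problem concrete, here's a minimal sketch (all names hypothetical) of what keying neurons by spatial coordinate rather than array index buys you: deleting a neuron doesn't invalidate any other synapse-neuron link, whereas with a dense array every index past the deleted slot would need renumbering.

```python
# Hypothetical sketch: neurons keyed by spatial coordinate instead of
# array index. A synapse stores the coordinate of its source neuron,
# so deleting an unrelated neuron leaves the link intact -- no
# reindexing pass is needed after neurogenesis/degeneration steps.
neurons = {
    (0.0, 1.0): "neuron_a",
    (2.5, 3.0): "neuron_b",
    (4.0, 0.5): "neuron_c",
}
synapse = {"source": (2.5, 3.0), "permanence": 0.4}

del neurons[(0.0, 1.0)]  # simulate neuron death
# The coordinate-based link is still valid after the deletion.
assert synapse["source"] in neurons
```

What a plain dict can't do, and what motivates the R-tree below, is answer "which neurons are near this point?" without scanning every key.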
R-trees could solve those problems while offering a few other important benefits: the index locations could be used to read only a portion of a much larger array into an HTM spatial pooler, and efficient algorithms could be developed to change those input locations over time. That would allow algorithms to take a small subset of the input from the cells in the temporal memory's spatial arrays and feed it into another spatial pooler, while still being able to read every cell in the temporal memory array.
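The "read only a portion of a larger array" idea is essentially an axis-aligned range query. Below is a brute-force stand-in for that query (names are my own, not from NuPIC); an R-tree answers the same question in roughly O(log n) by descending nested bounding boxes instead of scanning every point.

```python
from itertools import product

def range_query(points, lo, hi):
    """Return the points inside the axis-aligned box [lo, hi] (inclusive).

    Brute-force O(n) scan -- a placeholder for the query an R-tree
    accelerates by pruning whole subtrees whose bounding boxes miss
    the query window."""
    return [p for p in points
            if all(l <= c <= h for c, l, h in zip(p, lo, hi))]

# A 10x10 grid of cell coordinates; select only the small window
# a downstream spatial pooler would read as its input.
cells = list(product(range(10), range(10)))
window = range_query(cells, (2, 2), (4, 4))
assert len(window) == 9  # the 3x3 window of the 10x10 grid
```

In practice a wrapper like the `rtree` Python package (around libspatialindex) could replace the linear scan, but the query interface the rest of the code sees stays the same.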
So far I've managed to create an algorithm that generates an n-dimensional set of points, and I'm just getting into R-trees. However, a set of n-d points should be enough to take input from n-d arrays, so I might be at the point where I can start combining my code with some of the Python NuPIC research code. But if I'm replacing a core part of the NuPIC research libraries or NuPIC core libraries with R-trees, where should I start? Would I just replace the vector or array classes and work up from there, making sure everything still worked with the change, or would I have to change much of how some of the algorithms work? If it's the former, where would I want to replace arrays, and where wouldn't I?
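For reference, here's one possible shape for the n-d point set described above (a hedged sketch of my own, not the actual algorithm): uniform random points in a bounding box, which is the form an R-tree's leaf entries would index.

```python
import random

def nd_points(n_points, dims, bounds=(0.0, 1.0)):
    """Generate n_points uniform random points in `dims` dimensions,
    each a coordinate tuple suitable as an R-tree leaf entry."""
    lo, hi = bounds
    return [tuple(random.uniform(lo, hi) for _ in range(dims))
            for _ in range(n_points)]

pts = nd_points(100, 3)
assert len(pts) == 100 and all(len(p) == 3 for p in pts)
```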