Looking for help with coding

Hello,

I’m looking for some guidance from more experienced people, and I’m hoping this is a good place for it. I’ve been following Numenta for quite a while, as I think the HTM method for learning robust feature representations is extremely promising. I’ve been using Python to help me play with and understand the algorithms, and I think I have a good understanding now. My issue is that my Python, compsci, and math knowledge (or lack thereof) is beginning to hinder me now that I want to create a more robust implementation. Here’s what I’ve done so far (keep in mind, I’ve only worked on the SDR so far, and I’m hoping that solutions found here will help me implement the TM layer more easily as well):

  1. I started with creating classes for neurons and synapses individually to understand how everything worked while following the documentation. I followed the BAMI outline here (https://numenta.com/assets/pdf/biological-and-machine-intelligence/BAMI-Complete.pdf) as best I could, but it was so slow I couldn’t test anything.

  2. I moved on to looking for a ready-made solution in one of the many HTM libraries that Google found for me. The problem here was two-fold. First, the majority of the functions were written in C++ (which I don’t know), so they’re really fast but not modular enough for me to use the way I want. I want to be able to understand how the algorithms are actually implemented so I can tweak the parameters any way I want, or to have something like PyTorch, where eager execution is fairly universal. It takes me some time to understand how things work, so being able to just play with matrix objects really speeds up that learning time. Second, it took me ages to sort through all the different libraries and repositories to figure out what does what. There is so much jargon thrown around that it’s difficult to navigate if you don’t have experience in academic coding. So I abandoned this method because it didn’t seem like I would make much progress toward what I actually wanted to do.

  3. I went back to my own implementation, and eventually moved to consolidating everything into numpy arrays, and using numpy functions in a creative way to get the output, perform the permanence updates, etc. This worked great at first, but I ran into a brick wall when I wanted to implement boosting and inhibition. Either there isn’t a way to perform the operations necessary in a fast manner, or I’m not knowledgeable enough to figure it out.

  4. Finally, I started trying to work with sparse arrays. I have such a lack of confidence in my understanding of how this implementation should work that it’s difficult to make any progress, because I have no way of knowing if I’m moving in the right direction. And when I’m done, there is no way of knowing whether the Python sparse array implementation will actually be any faster than the other methods I’ve used.

So as I mentioned at the start, I’m looking for some guidance on how I should move forward here. Ideally, I would love it if there was a discord server or something similar I could join to talk this sort of thing out. Is there an HTM library out there that has the sort of flexibility I’m looking for? Or perhaps someone knows of a good way to implement the algorithms with Python sparse arrays? I feel lost to the point that I feel like my best option is to try to go back to college for compsci, learn C++, then just come back and tackle the problem in 6 years.

Numpy is the best!
I too implemented an HTM using numpy, just so that I could understand how it works inside and out. My advice for writing numpy code is to first write the code using plain old python (lists and for loops), and then once you know what the code should be doing, convert it into numpy operations. I would also recommend using the python profiler (there is one built into the standard library) so that you can focus on just the areas which are slowest.
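For reference, the built-in profiler can be driven from a few lines of code. This is a minimal sketch; `spatial_pooler_step` is a hypothetical stand-in for whatever function you want to measure:

```python
import cProfile
import io
import pstats

def spatial_pooler_step(inputs):
    # Hypothetical stand-in for one iteration of your algorithm.
    return sum(x * x for x in inputs)

# Profile a representative workload.
profiler = cProfile.Profile()
profiler.enable()
for _ in range(1000):
    spatial_pooler_step(range(100))
profiler.disable()

# Report the five most expensive calls, slowest first.
s = io.StringIO()
pstats.Stats(profiler, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```

The report shows which functions dominate the run time, so you can vectorize only where it matters.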

Assuming you want global inhibition, the function you’re looking for is numpy.argpartition. numpy.argpartition — NumPy v1.26 Manual

Here is some pseudocode:

k = num_cells - num_activate
active_cells = np.argpartition(excitement, k)[k:]
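As a quick sanity check, here is that idea in runnable form (the array sizes and values are made up for illustration):

```python
import numpy as np

excitement = np.array([0.1, 0.9, 0.3, 0.7, 0.5, 0.2])
num_cells = excitement.size
num_activate = 2  # keep the 2 most excited cells

k = num_cells - num_activate
# argpartition puts the k-th smallest value in its sorted position;
# everything from index k onward is >= it, i.e. the top num_activate cells.
active_cells = np.argpartition(excitement, k)[k:]

print(np.sort(active_cells))  # indices of the two largest values: [1 3]
```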

My recommendation for Python sparse arrays is to use SciPy, which is made by the same community as NumPy. Sparse matrices (scipy.sparse) — SciPy v1.11.4 Manual
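For example, the connected-synapse matrix of a spatial pooler is a natural fit for a sparse matrix, since most permanences are below threshold. The shapes, names, and threshold below are illustrative, not from any particular library:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical connectivity: 4 columns x 6 inputs, mostly sub-threshold.
permanences = np.array([
    [0.0, 0.6, 0.0, 0.0, 0.2, 0.0],
    [0.0, 0.0, 0.0, 0.7, 0.0, 0.0],
    [0.3, 0.0, 0.0, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.5, 0.0, 0.0, 0.0],
])
# Keep only connected synapses (permanence >= 0.5) as a sparse 0/1 matrix.
connections = csr_matrix((permanences >= 0.5).astype(np.int8))

active_inputs = np.array([0, 1, 0, 1, 0, 1])  # dense binary input SDR
overlap = connections.dot(active_inputs)       # overlap score per column
print(overlap)  # [1 1 1 0]
```

The overlap computation is then a single sparse matrix-vector product, which stays fast as the input space grows.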

HTH

My advice for writing numpy code is to first write the code using plain old python (lists and for loops), and then once you know what the code should be doing, convert it into numpy operations.

I did this originally. That was the first step I was referring to. I just used classes and functions instead of lists and for loops. I used the profiler to check the speed, and everything was just slow as hell.

Assuming you want global inhibition, the function you’re looking for is numpy.argpartition. numpy.argpartition — NumPy v1.26 Manual

The issue I’ve run into is that global inhibition just learns too slowly, and isn’t very good at learning in high-dimensional spaces. I’m pretty sure local inhibition is necessary for what I’m looking to do, but calculating different neighborhoods for each neuron is slow. The fastest solution I’ve found is to incrementally shift the matrix up and down by the inhibition radius, each time adding the result as a new row in the matrix, then slicing column-wise to get the neighbors for each neuron (column). But even this is really slow once you start scaling things up a bit.
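For what it’s worth, here is a minimal 1-D sketch of that shift-and-stack idea (the function name and the local-max winner rule are my own illustration, not a standard API):

```python
import numpy as np

def neighborhood_matrix(excitement, radius):
    # Stack shifted copies of a 1-D excitement vector so that column j
    # holds the neighborhood of cell j (edges zero-padded).
    n = excitement.size
    rows = []
    for shift in range(-radius, radius + 1):
        shifted = np.zeros(n)
        if shift >= 0:
            shifted[:n - shift] = excitement[shift:]
        else:
            shifted[-shift:] = excitement[:n + shift]
        rows.append(shifted)
    return np.stack(rows)  # shape: (2*radius + 1, n)

excitement = np.array([0.1, 0.9, 0.3, 0.7, 0.5])
neigh = neighborhood_matrix(excitement, radius=1)
# A cell wins if it is the maximum of its own neighborhood (its column).
winners = np.flatnonzero(excitement >= neigh.max(axis=0))
print(winners)  # [1 3]
```

This keeps everything vectorized, but as noted above it still builds a `(2*radius + 1, n)` matrix per step, which is what gets expensive at scale.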

OK, I think the next thing to do is to re-organize your code from an “array-of-structures” (AoS) to a “structure-of-arrays” (SoA). These are two different ways of organizing your data, and they require different tools and techniques to work with.

Example of an “array-of-structures”

class Synapse:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

my_synapses = [Synapse(0.4, 0.5, 7), Synapse(0.6, 0.2, 3)]  # one object per synapse

Example of a “structure-of-arrays”

import numpy

class Synapses:
    # One instance holds *all* synapses: each field is an
    # array with one entry per synapse.
    def __init__(self, x, y, z):
        self.x = numpy.array(x)
        self.y = numpy.array(y)
        self.z = numpy.array(z)

Using the “structure of arrays” style of coding should, in general, work very well with numpy (once you get past the learning curve).
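To illustrate why the SoA layout pays off, here is a hedged sketch of a vectorized permanence update on such a class. The field names and the increment/decrement rule are illustrative stand-ins, not the exact BAMI learning rule:

```python
import numpy as np

class Synapses:
    # Structure-of-arrays: one array per field, covering all synapses.
    def __init__(self, permanence, presynaptic_cell):
        self.permanence = np.asarray(permanence, dtype=np.float64)
        self.presynaptic_cell = np.asarray(presynaptic_cell, dtype=np.int64)

    def learn(self, active_cells, increment=0.1, decrement=0.05):
        # One vectorized update over every synapse at once: no Python loop.
        active = np.isin(self.presynaptic_cell, active_cells)
        self.permanence += np.where(active, increment, -decrement)
        np.clip(self.permanence, 0.0, 1.0, out=self.permanence)

syn = Synapses([0.4, 0.5, 0.95], [7, 2, 7])
syn.learn(active_cells=[7])
print(syn.permanence)  # approximately [0.5, 0.45, 1.0]
```

The whole update is a handful of numpy calls regardless of how many synapses exist, which is exactly the speedup the AoS version can’t get.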




Another way to make fast numpy code is to use numpy’s “structured arrays” feature. Structured arrays are a way to put your entire class into a numpy array. First you tell numpy what one instance of your class looks like, and then you can make an array of them. The resulting array is a numpy array, and can interoperate with all of the other numpy features. Structured arrays — NumPy v1.26 Manual

Example:

import numpy

Synapse_dtype = numpy.dtype([('x', numpy.float64), ('y', numpy.float64), ('z', numpy.int64)])

my_synapses = numpy.zeros(100, dtype=Synapse_dtype)  # an array of 100 synapses

my_synapses['x'] # All of the 'x' components of all of the synapses.
my_synapses[3] # One synapse
my_synapses[3]['x'] # The 'x' component of one synapse

Structured arrays look nice on paper, but in practice they can be a bit finicky at times. If you use them, I recommend reading the docs…

There are a number of different ideas on how local inhibition should work and how to make it run fast…

The best idea I’ve encountered for local inhibition is to make many small spatial poolers, which each use global inhibition and cover a small portion of the input space.

  • So you still divide up the large input space into many topological areas, which are processed independently of each other.
  • The input domains of each spatial pooler can overlap, to get good coverage of the whole input.
  • The inhibition is local. Neurons which are very close together inhibit each other, and neurons which are far apart from each other do not inhibit each other at all. There is a hard border between neuron-areas which inhibition can not cross.
  • It’s not elegant, but it does make local inhibition run as fast as global inhibition.
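A minimal 1-D sketch of that idea (the function name, block size, and winner count are made-up parameters): split the cells into contiguous blocks and run global inhibition, i.e. top-k via argpartition, inside each block independently.

```python
import numpy as np

def blockwise_inhibition(excitement, block_size, winners_per_block):
    # Local inhibition approximated by global inhibition within
    # non-overlapping blocks of cells.
    active = []
    for start in range(0, excitement.size, block_size):
        block = excitement[start:start + block_size]
        k = block.size - winners_per_block
        # Top winners_per_block cells inside this block only.
        active.extend(start + np.argpartition(block, k)[k:])
    return np.sort(np.array(active))

excitement = np.array([0.1, 0.9, 0.3, 0.7, 0.5, 0.2, 0.8, 0.4])
print(blockwise_inhibition(excitement, block_size=4, winners_per_block=1))  # [1 6]
```

Each block is a small, independent argpartition call, so the cost scales linearly with the number of cells, the same as global inhibition.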

Here is an old implementation using bitarray.
It has docs; read them first.

/docs/bbHTM.ipynb

After this I did a better implementation with indexed SDRs, but it is not ready for publishing. Indexed SDRs are better because they take less memory, and numpy is faster than bitarray when searching.

PS> there is a bug in this implementation: PAST and ACTIVE have to be switched.

Also, you can look at Kanerva’s hypervector “algebra” here (most of the properties translate to SDRs):