Compiling all the information into something cohesive

Hello!

I have been perusing the HTM literature (YouTube, GitHub, HTM School, the BAMI papers, etc.) for a while, and I am finally starting to get my hands dirty. I am working with sequences of integers from the OEIS as a way to learn how to use the code from htm.core. I know there is a pre-built scalar encoder, and I understand that this is pretty standard. However, some sequences in the encyclopedia have elements that are very large (on the order of 10^16 or more), and the element to be predicted may be larger by another order of magnitude still.

Given that the ScalarEncoder requires a maximum value, it feels limiting to me. I know of other options: a hash-based encoder (which Scott Purdy has recommended, though I can't find an example of how to use one), the adaptive scalar or logarithmic encoders from the nupic repository, or a delta encoder (which will probably end up being the one I want, since absolute predictions will be poor for these sequence values?). Porting the nupic code to htm.core seems difficult to me, so I attempted to create my own encoder, and I'm here to double-check my thought process with people who have more experience than I do. I believe I am following the encoder properties described in the literature.
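
For reference, this is roughly what I mean: in htm.core's Python bindings the range has to be fixed up front (the parameter values here are just illustrative):

```python
from htm.bindings.encoders import ScalarEncoder, ScalarEncoderParameters

p = ScalarEncoderParameters()
p.minimum = 0
p.maximum = 10_000_000_000  # has to be chosen in advance
p.size = 1000               # total bits in the output SDR
p.activeBits = 21           # bits that are on for any given input
enc = ScalarEncoder(p)

sdr = enc.encode(8158194850)
# enc.encode(20_000_000_000) would throw, since inputs outside
# [minimum, maximum] are rejected unless clipInput is set
```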

Let the sample sequence be s = [1, 2, 5, 8158194850]. To encode these integers, I create a matrix of zeroes with as many rows as the longest number has digits (for s, that is 10 rows; the last row represents the ones place, the 9th row the tens place, and so on) and 10 columns (one each for the digits 0 through 9; though if you want, say, 5 ones to represent each digit, there would be 50 columns total). Then, for each element of s, a 10x10 matrix of 0's is created. For s[0], the matrix has a single 1 in the [10,1] position; s[1] has a 1 in the [10,2] position; but s[3] has 1's in the [10,0], [9,5], [8,8], …, [3,5], [2,1], and [1,8] positions. Each matrix is embedded into a slightly taller matrix (extra high-order rows) to account for predictions that may be higher in magnitude. The flattened version of this matrix is used to create the SDR data structure.
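A minimal sketch of what I mean, assuming htm.core's SDR class (the function name, padding width, and bits_per_digit knob are mine):

```python
import numpy as np
from htm.bindings.sdr import SDR

def encode_digits(value, num_rows=12, bits_per_digit=1):
    """One row per decimal place (last row = ones place) and
    10 * bits_per_digit columns; num_rows is padded beyond the
    longest input to leave room for larger predictions."""
    digits = str(value)
    matrix = np.zeros((num_rows, 10 * bits_per_digit), dtype=np.int8)
    start = num_rows - len(digits)  # leading rows stay all-zero
    for i, ch in enumerate(digits):
        d = int(ch)
        matrix[start + i, d * bits_per_digit:(d + 1) * bits_per_digit] = 1
    sdr = SDR(int(matrix.size))
    sdr.dense = matrix.flatten()    # flattened matrix -> SDR
    return sdr

s = [1, 2, 5, 8158194850]
sdrs = [encode_digits(v) for v in s]
```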

Any comments would be helpful! Please note that I have only superficial experience in ML, DL, DS, and neuroscience, so I may not fully understand certain jargon. Thank you!

…a hash-based encoder (which Scott Purdy has recommended, though I can't find an example of how to use one)…

The hash-based version of the scalar encoder is the RandomDistributedScalarEncoder (RDSE) class. If you are using the NetworkAPI, it is the RDSEEncoderRegion class. An example of using it from Python is hotgym.py.
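
A minimal sketch of its use from htm.core's Python bindings (the parameter values here are only illustrative; hotgym.py shows a fuller setup):

```python
from htm.bindings.encoders import RDSE, RDSE_Parameters

params = RDSE_Parameters()
params.size = 1000       # total bits in the output SDR
params.sparsity = 0.02   # ~20 active bits
params.resolution = 1.0  # values closer than this overlap strongly
enc = RDSE(params)

# No minimum/maximum required: values hash into the bit space.
sdr = enc.encode(8158194850)
```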

If you would like to build your own delta encoder, perhaps we can fold that into htm.core as well.
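
As a starting point, a hypothetical sketch of such an encoder, assuming it wraps the RDSE and that the first value encodes a delta of 0 (the class name and that choice are assumptions, not existing htm.core API):

```python
from htm.bindings.encoders import RDSE, RDSE_Parameters

class DeltaEncoder:
    """Encodes the change between consecutive values
    with an RDSE, rather than the values themselves."""
    def __init__(self, params: RDSE_Parameters):
        self._rdse = RDSE(params)
        self._prev = None

    def encode(self, value):
        delta = 0 if self._prev is None else value - self._prev
        self._prev = value
        return self._rdse.encode(delta)
```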


I wrote an integer encoder a while back that might meet your needs. The encoding was inspired by a series of discussions about grid cell modules. Given a set of partition widths (I chose prime values for reasons explained in the post), the representation stores the encoded number as one active bit per partition, indicating the number modulo that partition's width. The representation has excellent capacity and maintains sparsity as well. It does not provide much semantic overlap for nearby values, instead encoding that information as bit shifts within the representation.

I included some JavaScript code as a reference implementation, but it's fairly straightforward to implement in any language. Please have a look and let me know if you find it useful.
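
Since this thread is otherwise working in Python, here is a rough Python rendering of the same idea (the original reference implementation is JavaScript; these particular primes are just for illustration):

```python
import numpy as np

PARTITION_WIDTHS = [7, 11, 13, 17, 19, 23]  # pairwise-coprime widths

def encode_modulo(value, widths=PARTITION_WIDTHS):
    """One active bit per partition, at index (value mod width).
    Distinct codes repeat only after the product of the widths
    (here 7*11*13*17*19*23 = 7,436,429 consecutive integers);
    sparsity is fixed at len(widths)/sum(widths) (here 6/90)."""
    segments = []
    for w in widths:
        seg = np.zeros(w, dtype=np.int8)
        seg[value % w] = 1
        segments.append(seg)
    return np.concatenate(segments)
```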


Hello,
Thank you. I implemented this with the RDSE and got similar results (the predictive cells all come out as 0's). The following are my thoughts:

  • Some of the sequences for which I am trying to predict the (n+1)-th element (like my earlier example) have no recurring patterns, so the cells burst with every new element and nothing can be properly predicted.
  • I should test this theory with an oscillating sequence (see the sketch after this list).
  • I could concentrate on something like a delta encoder, which may fix the issue for some sequences, such as those with linear growth (where the deltas are constant). However, for sequences with exponential growth the deltas themselves grow without bound, so HTM is not the appropriate learning method there.
  • Or I could simply be programming things incorrectly, and my results are a reflection of that instead.
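
For the second bullet, a minimal sketch of the test I have in mind, assuming htm.core and feeding the encoder output straight to the temporal memory as active columns (no spatial pooler), which is a simplification: anomaly near 0 means the TM is predicting, while a sustained 1.0 means the columns are bursting:

```python
from htm.bindings.encoders import RDSE, RDSE_Parameters
from htm.bindings.algorithms import TemporalMemory

p = RDSE_Parameters()
p.size = 1024
p.sparsity = 0.02
p.resolution = 1.0
enc = RDSE(p)

tm = TemporalMemory(columnDimensions=(p.size,), cellsPerColumn=8)

for value in [1, 5] * 20:        # oscillating, fully repeatable
    tm.compute(enc.encode(value), learn=True)
    print(value, tm.anomaly)     # should fall toward 0 once learned
```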

I am open to thoughts. Thank you!