Encoding Data for HTM Systems


#1

By @scott

Please discuss this paper below.


#2

Quick question about page 5, bullet point 6: the equation that describes how to find the bucket index leads to a small problem, according to my calculations.

Take the example with temperatures ranging from 0F to 100F and 100 buckets. According to the bucket index equation, the temperature 0F falls into bucket index 0, 1F into 1, and so on until 99F falls into bucket index 99, which fills 100 buckets so far. However, the temperature 100F would land in bucket index 100, which would be the 101st bucket. The equation, however, was based on there being 100 buckets. Am I missing something, or is this an error?


#3

@Setus - your calculations are correct. But keep in mind that buckets represent real value ranges. So bucket 100 represents values greater than or equal to 99.0 but strictly less than 100.0. So the value 99.999999 falls into bucket 100. You can still say that we are representing the range 0.0 (inclusive) to 100.0 (exclusive).
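A minimal sketch of that half-open bucket interpretation, using 0-based bucket indices (the parameter names and exact bounds handling here are illustrative, not quoted from the paper):

```python
def bucket_index(value, min_val=0.0, max_val=100.0, num_buckets=100):
    # Buckets cover half-open ranges, so the encoder represents
    # [min_val, max_val) and max_val itself is out of range.
    if not (min_val <= value < max_val):
        raise ValueError("value must lie in [min_val, max_val)")
    return int(num_buckets * (value - min_val) / (max_val - min_val))

# With 100 buckets over [0.0, 100.0), bucket i covers [i, i+1),
# so 99.999999 still lands inside the last bucket.
```

With this reading, every value in the range maps to exactly one of the 100 buckets, and 100.0 itself is simply out of range rather than needing a 101st bucket.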


#4

I understand and agree, thank you @scott


#5

Hi @scott

I am really impressed by your work. That is awesome. I read your paper recently and have some basic questions; I would really appreciate it if you could guide me on them.

It is not clear to me what buckets are.
In the “A Simple Encoder for Numbers” part, you showed an example of encoding the outside temperature, and the output is consecutive active bits starting at the 72nd bit. Then you explained using a hashing function. My question is: if I select the bucket for the value 71, the output will be consecutive active bits starting at the 71st bit. By using the hashing function, will the condition “The encoder should create representations that overlap for inputs that are similar in one or more of the characteristics of the data that were chosen” still be preserved?


#6

Hi Niki, thanks for the questions!

Buckets are an intermediate representation used to convert from a number to a binary array. Each bucket represents a range of values from the input but has a single output representation. In this way, the numeric range of a bucket determines the granularity that can be encoded. The number of buckets is n-w+1 since that is the number of ways that we can select w consecutive bits out of n.

In the simple numeric encoding, we set the w consecutive bits starting with the bucket index to 1. So bucket 3 results in the output bits at index 3, 4, 5, … being set to 1 and the rest of the bits 0.
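The simple encoder described above can be sketched in Python (the values of n, w, and the input range are illustrative parameters, not the paper's exact settings):

```python
def simple_scalar_encode(value, min_val, max_val, n=100, w=21):
    # The number of buckets is n - w + 1: the number of ways to
    # place w consecutive bits in an n-bit array.
    num_buckets = n - w + 1
    # Half-open value range; clamp the top edge into the last bucket.
    bucket = int(num_buckets * (value - min_val) / (max_val - min_val))
    bucket = min(bucket, num_buckets - 1)
    # Set the w consecutive bits starting at the bucket index to 1.
    bits = [0] * n
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits
```

Two nearby values map to nearby buckets, so their runs of w active bits largely overlap, which is what gives similar inputs similar encodings.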

The hashing example takes the indices of the 1 bits from the simple encoder and hashes each one separately. So while the inputs to the hash function are consecutive integers, the outputs are not. See the figure in section 3.2 to see how the encoding from the simple encoder is converted to the encoding for the hash-based encoder.
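A sketch of that hashing step, building on the simple encoder's output (the choice of MD5 and the output width are my assumptions; the paper does not prescribe a particular hash function):

```python
import hashlib

def hash_encode(simple_bits, n_out=1024):
    # Hash the index of each active bit from the simple encoder into a
    # wider output array. Consecutive input indices land at scattered
    # output positions, but the mapping is deterministic, so the same
    # bucket always yields the same output bits. (Hash collisions can
    # leave the output with slightly fewer 1s than the input.)
    out = [0] * n_out
    for i, bit in enumerate(simple_bits):
        if bit:
            digest = hashlib.md5(str(i).encode()).hexdigest()
            out[int(digest, 16) % n_out] = 1
    return out
```

Because the hash is applied per index, two nearby scalar values still share most of their active input indices, so their hashed outputs overlap too, which preserves the similarity property asked about above.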

Does that make sense?


#7

Thank you very much @scott for your response. Yes, it makes sense.

About the “Encoding Geospatial Data” part, I saw this figure in one of the videos related to encoding geospatial data. My question is: how do we count 35 zeros and ones in total here?


#8

I’m not sure I understand the question, but the size of the squares and the number of squares around the center are parameters that you choose, and the output is one bit per square. You simply take the top W squares to be 1s and the rest to be 0s.
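One way to sketch this top-W selection in Python, under the assumption that each square gets a deterministic pseudo-random score hashed from its grid coordinates (the scoring scheme here is illustrative, not the paper's exact method):

```python
import hashlib

def geospatial_encode(center_x, center_y, radius=2, w=9):
    # Enumerate the (2*radius+1)^2 grid squares around the center.
    squares = [(center_x + dx, center_y + dy)
               for dx in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)]

    # Deterministic pseudo-random score per square, derived from its
    # coordinates, so the same square always scores the same.
    def score(sq):
        return int(hashlib.md5(f"{sq[0]},{sq[1]}".encode()).hexdigest(), 16)

    # The top-W squares by score become 1s, the rest 0s, so the
    # output length equals the number of squares.
    top = set(sorted(squares, key=score, reverse=True)[:w])
    return [1 if sq in top else 0 for sq in squares]
```

Note that with radius 2 the output has exactly 5 x 5 = 25 bits, one per square, which bears on the 25-vs-35 question below.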


#9

Thank you for your reply @scott. So in this figure, the number of squares is 25 but the number of zeros and ones is 35. My question was: why is the number of zeros and ones not equal to the number of squares? Shouldn't they be equal?


#10

I agree that it looks like they should be equal!


#11

Hi @Scott

I am confused about these two:

*The first step to apply HTM to real world sequence learning problems is to convert original data to SDRs by using an encoder.

*The HTM network works based on the principles of sparse distributed representation (SDR). The spatial pooling is responsible for converting the binary input data into SDR.

My question is: which one is responsible for producing the SDR, the encoder or the spatial pooler?


#12

The output of the encoder (a relatively small and dense binary vector) goes into the spatial pooler, which outputs the SDR.


#13

thanks for the explanation! I was wondering the same thing while reading about encoders and the spatial pooler: if they both produce SDRs why not combine them into a single thing…


#14

The encoding output should have SDR properties but it isn’t as strict. It is ok to have a higher density, for instance, because the SP will enforce sparsity. I think it is fine to say that the encoder output is an SDR. The SP isn’t so much responsible for turning something else into an SDR as it is responsible for creating efficient representations that capture the general and specific aspects of spatial data with relatively fixed sparsity.


#15

Thank you for your reply @sheiser1


#16

Thank you for your reply @Scott

In the paper, you list a few important aspects of encoding data:
1. The encoder should create representations that overlap for inputs that are similar in one or more of the characteristics of the data.
2. The same input should always produce the same SDR as output.
3. The output should have the same dimensionality for all inputs.
4. The output should have similar sparsity for all inputs and have enough one-bits to handle noise and subsampling.

Does this mean the encoding output has to have SDR properties?


#17

Does this mean the encoding output has to have SDR properties?

Essentially, yes. The encoder output needs similar sparsity across different inputs, but 20% active is totally fine. The SP, on the other hand, produces SDRs in the stricter sense, where you need something like <5% active. It's a subtle difference, but relevant in practice.

if they both produce SDRs why not combine them into a single thing…

They have very different purposes. The encoding step takes some very specific type of real value (non-binary-vector) and has to convert it into a binary vector to enable the rest of the algorithm components to understand it. It must produce SDR-like representations, albeit with less of a sparsity constraint.

The SP is a very general and powerful tool for creating representations that capture generalization and specificity and adjust through learning to better represent commonly seen elements. It also has to enforce a relative level of sparsity that may not occur in the inputs if they come from a more dense encoder or from a union representation.

In short, encoders are an algorithmic bootstrapping step that is very specific to the input data types and the particular application, while the SP is a very general algorithm that is potentially used many times in a network.
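To make the density contrast above concrete, here is a trivial sketch (all vector sizes are illustrative, not taken from the paper):

```python
def density(bits):
    # Fraction of bits that are active (1).
    return sum(bits) / len(bits)

# A relatively dense encoder output vs. a sparse SP output.
encoder_out = [1] * 80 + [0] * 320    # 400 bits, 20% active
sp_out      = [1] * 40 + [0] * 2008   # 2048 bits, ~2% active
```

The encoder output here would be acceptable input for the SP, but only the SP output meets the stricter <5% sparsity expected of an SDR downstream.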


#18

Thank you for your reply, @scott