Hi @sheiser1

I don't understand the connection threshold 0.2. I think it is 0.5,

so with an initial permanence of 0.21, I have to learn 3 times with the permanence increment p+ of 0.1, and then the active-predicted state is formed.

Please correct me if I am wrong.

You’re right, it is 0.5 (not sure where I got 0.2). So yes, the system would have to see the pattern three times to raise the permanence values from 0.21 to 0.51. That’s assuming there are no deviations from the pattern between the repetitions, which would cause it to be wrongly predicted and push the permanences back below the threshold.
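To make the arithmetic above concrete, here is a tiny sketch of the repetition count (constant names are mine for illustration, not NuPIC's):

```python
# Hypothetical sketch: how many correct repetitions until a synapse
# starting at permanence 0.21 crosses the 0.5 connected threshold,
# given an increment p+ of 0.1 per repetition.
CONNECTED_THRESHOLD = 0.5   # permanence above this => synapse is "connected"
PERM_INC = 0.1              # p+ : increment on each correct repetition

perm = 0.21                 # initial permanence
repetitions = 0
while perm <= CONNECTED_THRESHOLD:
    perm += PERM_INC        # pattern seen again, segment reinforced
    repetitions += 1

print(repetitions, round(perm, 2))  # -> 3 0.51
```

A single wrong prediction in between would apply the decrement (p-) instead, which is why unbroken repetitions are assumed above.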

Hi @sheiser1

In the following figure, if we consider these to be connections to cell 1 in column 1, then the first line is distal dendrite segment 1, so it would be D(1,1,1)? And the second one would be D(2,1,1), is that correct?

Also, this figure shows that more than one cell from A, B, or C is connected to cell 1 in column 1, is that correct?

Correct on both counts, yes. The three lines (segments with synapses to A, B and C cells) are D(1,1,1), D(2,1,1) and D(3,1,1). Since it has grown three segments, it has become active in three different contexts (A, B and C). Each of these inputs triggers the activation of numerous cells (one from each active SP column), and each segment’s synapses link to a subset of these cells. I hope that clarifies things, though it seems you’ve got it.
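A toy way to picture the D(d, i, j) indexing, where d is the segment, i the cell, and j the column (the dict layout and cell labels are illustrative only, not the actual data structures):

```python
# Each distal segment is keyed by (d, i, j): segment d on cell i of column j.
distal_segments = {}

def grow_segment(d, i, j, presynaptic_cells):
    """Record the subset of cells a new distal segment synapses onto."""
    distal_segments[(d, i, j)] = set(presynaptic_cells)

# Cell 1 in column 1 has learned three contexts (A, B, C), one segment each.
# The labels stand in for individual cells active during inputs A, B, C.
grow_segment(1, 1, 1, {"A3", "A7", "A12"})   # subset of cells active for input A
grow_segment(2, 1, 1, {"B1", "B9", "B14"})   # subset of cells active for input B
grow_segment(3, 1, 1, {"C2", "C5", "C11"})   # subset of cells active for input C

print(len(distal_segments))  # -> 3 (one segment per learned context)
```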

Hi @sheiser1

If, for the spatial pooling process, I consider the input a binary vector of length L, and a binary matrix with dimensions L × N to represent the set of connected synapses in the spatial pooler, such that each column of it represents the proximal dendrite segments of a cell (according to the paper “Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory”).

Does this matrix show the permanence values of synapses? How should we select its dimensions? Is there any relation between L and M? Should L be less than M, or L = M (M is the number of cells and N the number of columns)?

I know that each column of the SP (each ‘N’ as you refer to them) connects to a random subset of bits from the encoding vector (‘L’ as you put it). When a given N is activated (meaning it has a top 2% overlap score with the encoding vector), it strengthens the synapses connecting it to L’s active bits and weakens those connecting it to its inactive bits. This connecting ‘strength’ is represented by the permanence values, and the adjusting of those values is how the SP learns.

I’m not exactly sure about this matrix you refer to, but from your description I believe it holds the permanence values of the proximal synapses (those connecting the SP columns to the bits of the encoding vector). The SP only operates on a columnar level, choosing which 2% of columns will become active. Once a column is activated, the decision of which cell(s) within it will activate is not part of the SP but is done exclusively by the TM. By default there are 2048 columns in the SP space, of which ~40 will activate and form an SDR of columns. Encoding vectors tend to be much smaller, maybe a couple hundred bits per input. Since each SP column connects to many encoding bits, the columns overlap in the encoding areas they cover. When selecting the parameters of the encoding vector, ‘n’ is the total number of bits and ‘w’ is the number of active bits. There should be enough active bits to handle noise in the input while remaining a minority of all bits; something around n=200 and w=21 may make sense to start with.
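The overlap-and-reinforce loop described above can be sketched with a tiny permanence matrix. This is a minimal illustration assuming the L × N framing from the question (real SPs also use boosting, inhibition radii, etc., and default to ~2048 columns rather than 10):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 20, 10                 # encoding bits, SP columns (toy sizes)
CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.02

# Permanence per (input bit, column) pair: the L x N matrix in question.
perms = rng.uniform(0.4, 0.6, size=(L, N))
x = (rng.random(L) < 0.2).astype(int)        # sparse binary encoding vector

connected = perms >= CONNECTED               # binary connected-synapse matrix
overlap = x @ connected                      # overlap score per column
active = np.argsort(overlap)[-2:]            # top 2 columns win (the ~2% in a real SP)

# Hebbian update: each active column strengthens synapses to active bits
# and weakens synapses to inactive bits. This is how the SP learns.
for c in active:
    perms[:, c] += np.where(x == 1, P_INC, -P_DEC)
perms = np.clip(perms, 0.0, 1.0)             # permanences stay in [0, 1]
```

Note that the matrix only ever deals in columns (N), never in cells within columns; that is the TM's job.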

Hi @sheiser1

Are these correct:

About the index d in matrix D: if B was just predicted by A, then for cell B, is d equal to 1?

Can the permanence values change between -1 and 1?

Is there any example that shows all the steps together, I mean encoding, the learning process, and the classifier?

Permanence values can’t go any lower than 0 or higher than 1. In the diagram above, if the HTM neuron represents B and it became predictive from A, it means that one of its distal segments became active. This only happens when enough synapses on a distal segment become active (1 instead of 0 in the matrix).
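The two rules above (permanences bounded to [0, 1], and a segment activating when enough of its synapses are on active cells) can be written down directly. The threshold name and value here are illustrative, not the library defaults:

```python
# Hypothetical sketch of the rules above; constants are mine for illustration.
ACTIVATION_THRESHOLD = 3   # active synapses needed to make a distal segment active

def clip_permanence(p):
    """Permanences are bounded to [0, 1], never negative."""
    return max(0.0, min(1.0, p))

def segment_active(synapse_activity):
    """A segment activates (putting its cell into the predictive state)
    when enough of its synapses connect to currently active cells."""
    return sum(synapse_activity) >= ACTIVATION_THRESHOLD

print(clip_permanence(1.3))            # -> 1.0 (clipped at the upper bound)
print(segment_active([1, 1, 1, 0, 0])) # -> True (3 active synapses >= threshold)
```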

I think the way to really understand this is to:

- read through Numenta’s BAMI book, which has lots of description and pseudocode; and
- watch HTM School on YouTube. I think the detailed descriptions and visualizations will help you get the intuition for how it works better than any description here.