Spatial pooler learning


#1

Hi
I have read about spatial pooling and how learning should improve performance, but I have got confused. I implemented the SP and TM algorithms and sent [1,2,3,1,2,3,1,2,3,1,2,3] to the SP after encoding. With learn=True, the SP generates a different activeArray each time, so different activeColumns feed the TM for the same individual input, e.g. β€˜2’. As a result I see bursting every time and no prediction ever happens. It's not what I saw in the HTM School episodes.
What is my mistake? Any help would be appreciated.
Thank you in advance


#2

You have to use the same seed value for the random generation.


#3

Thank you for your response. Unfortunately I cannot understand what you mean.


#4

In the SP you will find the parameter "seed"; if its value is the same, you will get the same SDR for the same input value.


#5

I did not change its default value, which is β€˜-1’. Do I have to change it? :thinking::disappointed_relieved:


#6

My guess would be that β€˜-1’ tells the SP to use a random seed every time, but I could be wrong. Just in case, set it to a fixed value such as 7777.

Also, you need to train the SP for a number of iterations before feeding its output into the TM, i.e. it needs to learn and settle on a stable spatial representation / SDR first; otherwise, as you have found out, its output keeps changing over time while it learns.
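To make the seed advice concrete, here is a minimal sketch (plain NumPy, not NuPIC itself; all names such as `build_columns` are hypothetical) showing why an unfixed seed gives different columns, and therefore different SDRs, on every run:

```python
# Toy illustration (NOT NuPIC): the only randomness in this mini
# "spatial pooler" is the initial synapse permanences per column.
import numpy as np

def build_columns(num_columns, input_size, seed):
    # seed=None plays the role of NuPIC's seed=-1: a fresh random
    # state every run, so every run gets different columns.
    rng = np.random.RandomState(seed)
    return rng.rand(num_columns, input_size)

def active_columns(perms, input_vector, num_active=20):
    # Columns whose connected synapses (perm > 0.5) overlap the
    # input most become active; ties broken by column index.
    overlaps = ((perms > 0.5) * input_vector).sum(axis=1)
    return set(np.argsort(-overlaps, kind="stable")[:num_active])

x = np.zeros(100)
x[10:40] = 1  # a fake encoding of the value "10"

# Same fixed seed -> identical columns -> identical SDR for "10".
a = active_columns(build_columns(2048, 100, seed=7777), x)
b = active_columns(build_columns(2048, 100, seed=7777), x)
print(a == b)  # True

# seed=None -> (almost certainly) a different SDR on each run.
c = active_columns(build_columns(2048, 100, seed=None), x)
```

Within one SP instance the remaining variation comes from learning itself, which is why the training-first advice above still applies even with a fixed seed.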


#7

After studying the SP and watching the HTM School episodes, it seems to me that with the SP the same input has to give the same output from the beginning, or maybe I am wrong. I also tried input=[1,1,1,…,1,1] and the result was different outputs with some overlap.
I set a fixed number for the seed, but nothing changed.


#8

Something is wrong here. Can you show us the encodings you use for 1, 2, 3? Are you sure they are always the same? Did you write the SP code yourself, or are you using an existing HTM implementation?


#9

Thank you so much Matt
First I have a question, to be sure whether I understand the SP correctly. If learn=True, will we receive exactly the same output for the same input from the beginning? E.g. for the following inputs, will the same output be received for 10 every time, or does the SP need to see more sequences first?
I used existing HTM code. The SP code is here.
I sent [10,20,30,10,20,30] as input to the scalar encoder (instead of [1,2,3,1,2,3], because those encodings were too similar). Yes, the encoder gives the same result for a given input. Here is the encoder output:

10 = 000001111111111111111111111111111111111111111111111111110000000000…0
20 = 000000000111111111111111111111111111111111111111111111111111000000…0
30 = 000000000000001111111111111111111111111111111111111111111111111110…0
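Encodings of this shape can be reproduced with a minimal sliding-window scalar encoder sketch (the parameters `n`, `w`, `min_val`, `max_val` below are assumptions for illustration, not the ones actually used above): a run of `w` active bits whose start position is proportional to the value, so nearby values share bits by design.

```python
# Minimal sliding-window scalar encoder sketch (assumed parameters,
# not the exact encoder used in the thread).
def encode(value, n=100, w=30, min_val=0, max_val=100):
    # Start of the run of 1s moves linearly with the value.
    start = int((value - min_val) / (max_val - min_val) * (n - w))
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

e10, e20, e30 = encode(10), encode(20), encode(30)
overlap = sum(a & b for a, b in zip(e10, e20))
print(overlap)  # 23 shared bits: neighbouring values overlap heavily
```

This overlap between encodings of similar values is expected and is exactly what the SP is supposed to preserve in its column SDRs.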

Here are the results of the SP algorithm (active indices of activeArray):

10 --> [505, 544, 1157, 1716, 1746, 2331, 2356, 2677, 2893, 2905, 2929, 3408, 3521, 3593, 3789, 3805, 3938, 3975, 3998, 4062]
20 --> [318, 575, 672, 811, 944, 1097, 1155, 1272, 2033, 2268, 2909, 3243, 3661, 3780, 3917, 3918, 3943, 3984, 4067, 4069]
30 --> [86, 438, 709, 1025, 1091, 1438, 1838, 2072, 2184, 2541, 2738, 2828, 2876, 3297, 3566, 3687, 3859, 3923, 3964, 4076]
10 --> [657, 965, 1135, 1154, 1226, 1297, 1459, 1594, 1709, 1938, 1992, 2264, 2543, 2632, 2658, 3512, 3667, 3683, 3802, 3807]
20 --> [113, 183, 396, 541, 735, 879, 978, 1961, 2139, 2188, 2475, 3556, 3611, 3634, 3684, 3710, 4012, 4065, 4080, 4092]
30 --> [86, 582, 1654, 1722, 2190, 2232, 2298, 2478, 2634, 2670, 2823, 2929, 3171, 3214, 3330, 3422, 3464, 3504, 3566, 3574]
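The problem is visible directly in these numbers: the two SDRs produced for the same input β€˜10’ can be compared with a couple of lines of Python (using only the lists posted above):

```python
# Overlap between the two SP outputs for the same input "10".
first_10 = [505, 544, 1157, 1716, 1746, 2331, 2356, 2677, 2893, 2905,
            2929, 3408, 3521, 3593, 3789, 3805, 3938, 3975, 3998, 4062]
second_10 = [657, 965, 1135, 1154, 1226, 1297, 1459, 1594, 1709, 1938,
             1992, 2264, 2543, 2632, 2658, 3512, 3667, 3683, 3802, 3807]

shared = set(first_10) & set(second_10)
print(len(shared))  # 0 - not a single column in common
```

A settled SP should produce nearly identical column sets for identical input; zero overlap means the representation has not stabilized yet (or the runs are not using the same seed), which is why the TM keeps bursting.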

If you want any more information, let me know.