Hi, sorry for my question. I have some questions about the Spatial Pooler in HTM. I watched Matt's videos on HTM School, but I still have some questions because my English is not perfect:
First, we encode our input into a binary code, and then we represent this binary input as an SDR. But I don't understand why we use the Spatial Pooler (Matt said it "maintains a fixed sparsity" and "maintains the semantic meaning"). Why, and how does it do this?
Why must we determine which of the columns of cells "win"?
What is the difference between the blue and gray balls in those images, and why does the input change?
If anyone can answer me or explain to me what the SP is and why we use it, I would appreciate it.
The inputs do not all need to have the same percentage of "on" bits to "off" bits. In fact, the inputs do not need to be very sparse either; they only need to encode semantic meaning. The Spatial Pooler takes each input and generates an output SDR. Every output SDR always has the same percentage of "on" bits to "off" bits. This is what is meant by "fixed sparsity". Additionally, the output SDRs preserve the semantic meaning of the inputs.
By "semantic meaning", I mean that various subsets of the "on" bits in the SDR represent different concepts. With a date encoder, for example, semantic meaning refers to things like day of the week, time of day, and weekends. If 25% of the "on" bits in the input represent "Monday", for example, then about 25% of the minicolumns chosen as winners by the Spatial Pooler should also represent "Monday".
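To make "shared semantics" concrete, here is a toy sketch (the 8-bit vectors and the bit assignments are invented for illustration; real SDRs are thousands of bits): the overlap count between two SDRs, i.e. how many "on" bits they share, is a simple proxy for how much meaning they have in common.

```python
import numpy as np

# Toy 8-bit "SDRs" (invented for illustration, far smaller than real ones).
# Adjacent days of the week share bits; a weekend day shares none with Monday.
monday   = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=np.int8)
tuesday  = np.array([0, 1, 1, 1, 0, 0, 0, 0], dtype=np.int8)
saturday = np.array([0, 0, 0, 0, 0, 1, 1, 1], dtype=np.int8)

def overlap(a, b):
    """Number of 'on' bits two SDRs share -- a proxy for shared meaning."""
    return int(np.sum(a & b))

print(overlap(monday, tuesday))   # 2 -> semantically similar
print(overlap(monday, saturday))  # 0 -> semantically unrelated
```

An SP that "preserves semantic meaning" keeps this property: inputs that overlap a lot produce output SDRs that overlap a lot.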
This is the part that ensures a fixed sparsity. For example, if we want to generate SDRs with 2% "on" bits and 98% "off" bits, then all we have to do is select the top 2% of minicolumns as "winners" to be activated; all the other minicolumns are inhibited.
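As a rough sketch (not NuPIC's actual implementation; the random overlap scores are stand-ins for real per-minicolumn match scores), winner selection is a k-winners-take-all over the minicolumns:

```python
import numpy as np

def k_winners(overlaps, sparsity=0.02):
    """Pick the top `sparsity` fraction of minicolumns as winners;
    everything else is inhibited. `overlaps` holds one overlap score
    per minicolumn (how well it matches the current input)."""
    num_columns = len(overlaps)
    k = max(1, round(num_columns * sparsity))  # 2% of 2048 -> 41 columns
    winner_idx = np.argsort(overlaps)[-k:]     # indices of the top-k scores
    sdr = np.zeros(num_columns, dtype=np.int8)
    sdr[winner_idx] = 1
    return sdr

rng = np.random.default_rng(0)
sdr = k_winners(rng.random(2048))
print(sdr.sum())  # 41 active bits, no matter what the input looked like
```

Notice the fixed sparsity falls out for free: the output always has exactly k "on" bits, regardless of how dense or sparse the input encoding was.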
I didn’t understand this question. Could you rephrase it? I scanned through the two HTM School videos for Spatial Pooling, but I didn’t spot what you were talking about with respect to the colors blue and gray. I do see green and grey circles over the input space in some of the visualizations, but I’m not sure whether that is what you are referring to. Could you mention which of the two videos you are talking about, and at what timestamp in the video?
It's true that the encoding bit string also has a fixed sparsity and maintains the semantic meaning of the input, but the SP distributes those semantics across a broader space. It is possible to feed the active encoding bits directly into the TM, but this places a heavy load on each cell. Also, the SP has its own learning mechanism, in which the SP columns connect more strongly to the encoding bits that helped them activate successfully.
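A minimal sketch of that learning rule (the function name, parameter names, and values here are my own, not NuPIC's defaults): each winning column nudges its synapse permanences up toward the input bits that were active and down away from the ones that were not, which is why columns come to "specialize" on the patterns that make them win.

```python
import numpy as np

def sp_learn(permanences, input_bits, winners, inc=0.05, dec=0.01):
    """Hebbian-style SP update: each winning column strengthens synapses
    to active input bits and weakens synapses to inactive ones.
    `permanences` is a (num_columns x num_input_bits) array in [0, 1]."""
    active = input_bits.astype(bool)
    for col in winners:
        permanences[col, active] = np.minimum(permanences[col, active] + inc, 1.0)
        permanences[col, ~active] = np.maximum(permanences[col, ~active] - dec, 0.0)
    return permanences

perms = np.full((4, 8), 0.30)            # 4 columns x 8 input bits
x = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # current encoding
sp_learn(perms, x, winners=[0])
print(perms[0])  # synapses to bits 0-1 strengthened, the rest weakened
```

Losing columns are left untouched, so only the columns that actually won adapt toward the current input.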
Each of the 1's in this SP output represents a column of cells in that image. Most columns are grey (with no active or predicted cells), while some have red (active) cells and others have yellow (predictive) cells. The image has been scaled down for visual clarity, but if it were full-sized there would be 2048 columns.
It's the job of the TM (Temporal Memory) to choose which cell(s) within the activated columns will activate. Roughly 40 columns out of the 2048 will activate (represented by the 1 bits in the SDR output from the SP).
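As a simplified sketch of that rule (the cells-per-column count and the flat cell numbering are assumptions for illustration): within each active column, if some cells were in the predictive state, only those fire; if none were predicted, the whole column "bursts" and every cell fires.

```python
CELLS_PER_COLUMN = 32  # assumed value; NuPIC uses a configurable count

def activate_cells(active_columns, predictive_cells):
    """For each active column: if any of its cells were predicted, only
    those cells fire; otherwise the whole column 'bursts'.
    Cells are numbered flatly, so cell c lives in column c // CELLS_PER_COLUMN."""
    active_cells = set()
    for col in active_columns:
        predicted = {c for c in predictive_cells
                     if c // CELLS_PER_COLUMN == col}
        if predicted:
            active_cells |= predicted            # sequence was anticipated
        else:
            active_cells |= set(range(col * CELLS_PER_COLUMN,
                                      (col + 1) * CELLS_PER_COLUMN))  # burst
    return active_cells

# Column 0 had cell 5 predicted -> only cell 5 fires.
# Column 1 had no prediction    -> all 32 of its cells burst.
fired = activate_cells(active_columns=[0, 1], predictive_cells={5})
print(len(fired))  # 33
```

This is why the same SP column can participate in many different sequences: which cell fires inside it encodes the temporal context.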
I think a good place to start would be the HTM School videos on YouTube. Matt Taylor does a really great job of explaining and visualizing this whole process.