In the HTM School videos on YouTube, made by the great Matt Taylor, I learned that encoders should produce a sparse binary vector, that the vector for each variable (coming from its own encoder) is placed in its appropriate position in the input space, and that the Spatial Pooler then checks its connections with the input space and produces the SDR.
That really makes sense to me, but in some papers I read that encoders produce SDRs, and I didn't understand that.
So my question is: how would the encoders produce SDRs? And how would the Spatial Pooler deal with, and connect to, multiple SDRs instead of a single input space? I just can't get it.
You absolutely can! It's just a learning curve getting your mind around it. Firstly, the encoders themselves don't actually produce SDRs; they produce bit strings that get concatenated together. The Spatial Pooler then takes this total encoding bit string and produces an SDR. I'd check out the first HTM School video on the SP if you haven't already.
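To make the concatenation concrete, here is a minimal sketch in Python. The `scalar_encode` function below is a hypothetical stand-in for a real encoder (NuPIC's actual `ScalarEncoder` is far more careful), just enough to show two encoder outputs being joined into one input-space bit string:

```python
import numpy as np

def scalar_encode(value, min_val, max_val, n=40, w=5):
    """Hypothetical scalar encoder: a contiguous run of w active
    bits whose position tracks the value within [min_val, max_val]."""
    out = np.zeros(n, dtype=np.uint8)
    frac = (value - min_val) / (max_val - min_val)
    start = int(frac * (n - w))
    out[start:start + w] = 1
    return out

# Two variables, each with its own encoder...
temp_bits = scalar_encode(21.5, 0, 40)   # e.g. a temperature
hour_bits = scalar_encode(14, 0, 23)     # e.g. an hour of day

# ...concatenated into one input-space bit string for the Spatial Pooler.
input_space = np.concatenate([temp_bits, hour_bits])
print(input_space.size)  # 80
```

The Spatial Pooler then grows its proximal connections over this whole 80-bit input space, not over the two encodings separately.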
It is important that encoders produce representations with consistent density, but they do not have to be sparse. They must have semantic meaning, however.
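A toy illustration of those two properties, consistent density plus semantic meaning, using the same kind of hypothetical scalar encoder (a simplified sketch in the spirit of NuPIC's `ScalarEncoder`): every value lights up the same number of bits, and nearby values share more active bits than distant ones.

```python
import numpy as np

def scalar_encode(value, min_val, max_val, n=40, w=5):
    # Hypothetical encoder: a run of w active bits whose
    # position tracks the value within [min_val, max_val].
    out = np.zeros(n, dtype=np.uint8)
    frac = (value - min_val) / (max_val - min_val)
    start = int(frac * (n - w))
    out[start:start + w] = 1
    return out

a = scalar_encode(20.0, 0, 40)
b = scalar_encode(22.0, 0, 40)   # semantically close to a
c = scalar_encode(35.0, 0, 40)   # semantically distant from a

# Consistent density: always exactly w = 5 active bits.
assert a.sum() == b.sum() == c.sum() == 5
# Semantic meaning: close values overlap, distant ones do not.
print(int((a & b).sum()), int((a & c).sum()))  # 3 0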
Yes, I've already watched that video; it's where he explains how the encoders' outputs are put into the input space and how the Spatial Pooler then formulates the single SDR that enters the Temporal Pooler.
But the most important thing for me now is that you agree encoders don't produce SDRs directly.
Remember, encoders represent sensory data much as it might be represented by sensory neuron firings. An encoder can use a lot of tricks to identify and encode semantics. If you are creating an encoder, I suggest you try to keep your encodings sparse; all of our encoders produce fairly sparse encodings.
Yes, one encoder could split up its space and encode it in different ways. This is like several smaller encodings concatenated together into one larger encoding. We call this a MultiEncoder in NuPIC.
Your question "do encoders produce SDRs?" is also like asking whether biological encoders produce SDRs. That's why I was asking on Twitter whether the nerve input our eyes, ears, etc. send to the brain is sparse. This chart is just some evidence that the representations in these signals are sparse.
But eventually, if we have only one HTM model, all of these SDRs coming out of the multiple encoders should be combined somehow to produce the single SDR that will be fed to the HTM model, right?
Sensory arrays like the somatic nerves encode data that flows into many cortical columns. Each cortical column learns the patterns it observes. So a sensory array might split up its encoding topologically, and different "HTM models" would process different parts. Theoretically, they work together to represent objects.
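A rough sketch of that idea. The 128-bit "sensory array" here is an assumed placeholder (random sparse bits, not a real encoder), split into four topological patches, each of which would feed its own model:

```python
import numpy as np

# Hypothetical flat "sensory array" encoding of 128 bits, ~10% active.
rng = np.random.default_rng(0)
sensor_bits = (rng.random(128) < 0.1).astype(np.uint8)

# Split it topologically: each slice would feed a different HTM model,
# each with its own Spatial Pooler learning over its own patch.
patches = np.split(sensor_bits, 4)  # four 32-bit receptive fields
for i, patch in enumerate(patches):
    print(f"model {i} sees {patch.size} bits, {int(patch.sum())} active")
```

Nothing is lost in the split; together the four patches still carry the whole encoding, which is how the columns could cooperate to represent one object.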
But today in NuPIC we pretty much just create encoders meant to feed into one model.