The spatial pooler converts binary inputs to spatially pooled SDRs. These SDRs are the basic inputs on which the cortical computations take place. My question pertains to the sensory inputs which are given to the SP.
I think it is fair to say that the sparseness of those sensory inputs determines the amount and quality of sensory semantics, or features, that the SP is able to pool, owing to inhibition and the fixed sparseness of the SP's output.
So I am assuming that even if our sensor produces very dense outputs, we can divide them into parts of the input representation and then sparsify each part, e.g., by turning a fraction of the on-bits off (introducing off-bit noise).
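To make that concrete, here is a toy NumPy sketch of the idea (the function name, sizes, and the random-subsampling strategy are my own assumptions for illustration, not any HTM library's API; note that randomly dropping on-bits discards information, so a proper encoder is usually preferable):

```python
import numpy as np

rng = np.random.default_rng(42)

def sparsify_parts(dense, n_parts, target_active):
    """Split a dense binary vector into equal parts and keep only
    `target_active` randomly chosen on-bits per part (off-bit noise)."""
    parts = np.array_split(dense, n_parts)
    sparse_parts = []
    for part in parts:
        on = np.flatnonzero(part)
        keep = rng.choice(on, size=min(target_active, on.size), replace=False)
        out = np.zeros_like(part)
        out[keep] = 1
        sparse_parts.append(out)
    return sparse_parts

# A hypothetical sensor output that is ~50% on: far too dense for an SP.
dense = (rng.random(1024) < 0.5).astype(np.uint8)
parts = sparsify_parts(dense, n_parts=4, target_active=5)
for p in parts:
    print(p.sum(), "/", p.size)  # 5 active bits per 256-bit part, ~2% sparsity
```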
Please correct me if anything is wrong; a confirmation would also be helpful if it is right.
Also,
How sparse are the sensory inputs in the brain? Is that sparseness achieved using similar principles? Does the actual sensory encoding result in fairly sparse representations?
Instead of using multiple SPs to process the divided input, could we feed the divided, sparsified parts to the same SP one by one, all encoded with the same timestep (the same moment in time)? If the system works on this principle, it still wouldn't have flaws in its predictions, right? We would just have to interpret the predictions differently.
Also, how useful are the predictions that the SP produces? Are they just there to bias the minicolumns in a localized area, so that the predicted minicolumns become active instead of random ones and similar patterns are produced?
I see. So is that the only reason, or are those predictions utilized in the actual inferences? I don't mean the contextual versions of the SP outputs in the TM, but at the SP stage itself. Or are those predictions only for biasing? This only refers to my last post; please also address the earlier ones.
The SP is not making predictions. It is simply pooling and learning correlations of spatial patterns in the input. The TM is the part that makes predictions (distal stimulation, not proximal). The SP is all about processing proximal input.
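To make the distinction concrete, here is a toy NumPy sketch (not the actual NuPIC/htm.core implementation; the sizes and fixed random connections are my own assumptions) of the SP's proximal computation: overlap scoring plus inhibition, with no state carried between timesteps, which is why it cannot predict anything:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_COLS, ACTIVE = 256, 128, 5  # hypothetical toy sizes

# Each column has a fixed random proximal "potential pool" of connections.
connections = (rng.random((N_COLS, N_IN)) < 0.3).astype(np.uint8)

def spatial_pool(input_sdr):
    """Score each column by its overlap with the proximal input, then let
    global inhibition keep only the top-ACTIVE columns (k-winners-take-all)."""
    overlaps = connections @ input_sdr
    winners = np.argsort(overlaps)[-ACTIVE:]
    active = np.zeros(N_COLS, dtype=np.uint8)
    active[winners] = 1
    return active

x = (rng.random(N_IN) < 0.05).astype(np.uint8)  # a sparse input SDR
out1 = spatial_pool(x)
out2 = spatial_pool(x)  # the same input at a later "time"
assert np.array_equal(out1, out2)  # identical output: no temporal state
```

Because the output depends only on the current proximal input (and the learned connections), any prediction of *future* inputs has to come from the TM's distal connections, not from the SP.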