Useful reference: HTM White Paper.
- What is spatial pooling? How can we tell if the output of spatial pooling is good?
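One hedged way to make “good output” concrete (toy SDR sizes invented for illustration): good SP output typically means near-constant sparsity across inputs, plus low overlap between the codes for distinct inputs.

```python
# Toy quality checks on a batch of SP outputs, each an SDR given as a set of
# active column indices. All sizes here are illustrative, not realistic.
NUM_COLUMNS = 10

def sparsity(sdr):
    return len(sdr) / NUM_COLUMNS

def overlap(a, b):
    return len(a & b)

outputs = [{0, 3, 7}, {1, 3, 9}, {2, 5, 8}]   # codes for three distinct inputs

# Desideratum 1: near-constant sparsity regardless of the input.
assert all(sparsity(s) == 0.3 for s in outputs)

# Desideratum 2: distinct inputs map to codes with little overlap.
pairs = [(outputs[i], outputs[j]) for i in range(len(outputs))
         for j in range(i + 1, len(outputs))]
mean_overlap = sum(overlap(a, b) for a, b in pairs) / len(pairs)
print(f"mean pairwise overlap: {mean_overlap:.2f} active bits")
```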
- Is an untrained spatial pooler just a “random hash” (random mapping from input to output vector)? Why or why not? What happens to the output of the spatial pooler if you change one bit in the input? Can untrained spatial poolers with randomly initialized input bit weights be useful or even better than a trained SP?
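A sketch for experimenting with the one-bit question (a made-up k-winners model with random pools standing in for an untrained SP; all sizes are toy values): unlike a cryptographic hash, a one-bit input change can move each column’s overlap score by at most one, so the output SDR tends to change gradually.

```python
import random

random.seed(0)
N_IN, N_COLS, POOL, K = 200, 100, 30, 5   # toy sizes, far smaller than a real SP

# Untrained "SP": each column gets a fixed random pool of input bits.
pools = [random.sample(range(N_IN), POOL) for _ in range(N_COLS)]

def sp_output(x):
    # Overlap score per column, then k-winners-take-all (ties break by index,
    # since Python's sort is stable).
    scores = [sum(x[i] for i in pool) for pool in pools]
    return set(sorted(range(N_COLS), key=lambda c: scores[c], reverse=True)[:K])

x = [0] * N_IN
for i in random.sample(range(N_IN), 20):  # 10% sparse input
    x[i] = 1

y1 = sp_output(x)
x2 = list(x)
x2[x.index(1)] = 0                        # change exactly one input bit
y2 = sp_output(x2)
print(f"{len(y1 & y2)} of {K} active columns survive a one-bit change")
```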
- Can you do spatial pooling with small numbers? For example, is it reasonable to have an SP with 20 columns? If not, why are large numbers important in SDRs?
a. What’s the difference between picking “5 columns out of 50” vs “50 out of 500”? Both have 10% sparsity.
b. What’s the difference between picking “50 out of 100” vs “50 out of 1000”? Both will output 50 1s.
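The raw counting behind (a) and (b) can be checked directly (the magnitudes quoted in the comments are approximate):

```python
from math import comb

# (a) Same 10% sparsity, very different capacity (number of distinct codes):
print(comb(50, 5))     # 5 of 50: 2,118,760 codes
print(comb(500, 50))   # 50 of 500: on the order of 10^69 codes

# (b) Same 50 active bits, but only the second is sparse:
print(comb(100, 50))   # 50 of 100: ~10^29 codes, but 50% density
print(comb(1000, 50))  # 50 of 1000: ~10^85 codes at 5% sparsity
```

Capacity is only part of the story, though: low density is also what makes large accidental overlaps between unrelated codes unlikely.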
- Suppose the input vector (input to the SP) is 10,000 bits long, with 5% sparsity. What is the right value of potentialPct? How do you figure this out?
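A back-of-envelope way to approach this (the candidate potentialPct values and the reasoning are illustrative, not a prescribed answer): each column sees potentialPct of the input, so on average it can connect to potentialPct × 500 of the ~500 active bits, and that number should sit comfortably above the SP’s stimulusThreshold and be large enough to discriminate patterns.

```python
# Expected number of active input bits visible to one column, per potentialPct.
n_input, input_sparsity = 10_000, 0.05
active_bits = n_input * input_sparsity             # ~500 active bits per input

for potential_pct in (0.05, 0.2, 0.5, 0.85):       # candidate values to compare
    pool_size = int(potential_pct * n_input)       # potential synapses per column
    expected_active = potential_pct * active_bits  # active bits the column can see
    print(f"potentialPct={potential_pct}: pool={pool_size}, "
          f"~{expected_active:.0f} active bits visible")
```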
- How does the SDR representation of input A in isolation, and input B in isolation, compare with the SDR representation of input A unioned with B? Alternatively, how does the representation of a horizontal line and the representation of a vertical line compare with the representation of a cross?
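A deterministic toy model for building intuition (hand-picked column pools and a k-winners rule; none of this is the real SP algorithm): the code for A∪B shares columns with both sp(A) and sp(B), but it is not their union, because the SP still picks only K winners.

```python
# Toy "SP": each column has a fixed pool of input bits; the output is the set
# of K columns with the highest overlap score (ties break by column index,
# since Python's sort is stable). Pools are hand-picked for illustration.
POOLS = [{0, 1, 2}, {1, 2, 3}, {0, 1, 7}, {5, 6, 7}, {4, 5, 6}, {2, 6, 7}]
K = 3

def sp(active_inputs):
    scores = [len(active_inputs & pool) for pool in POOLS]
    order = sorted(range(len(POOLS)), key=lambda c: scores[c], reverse=True)
    return set(order[:K])

A = {0, 1, 2}      # stand-in for the horizontal line
B = {5, 6, 7}      # stand-in for the vertical line
cross = A | B

print(sp(A))       # {0, 1, 2}
print(sp(B))       # {3, 4, 5}
print(sp(cross))   # {0, 2, 3}: overlaps sp(A) and sp(B), yet equals neither
```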
- Suppose the input vector is 10,000 bits long and the spatial pooler has 500 columns, of which 50 are active at any time.
a. Can we distinguish many patterns, or a small number? Which patterns are likely to be confused?
b. What happens to the SDR representation if we add noise to the patterns?
c. What happens if we add occlusions?
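For (b) and (c), a deterministic toy sketch (hand-picked pools and a k-winners rule, not the real SP) shows the kind of robustness sparse codes give: small input changes often leave the output SDR intact.

```python
# Toy "SP" with hand-picked pools; output = K columns with highest overlap
# (ties break by column index, since Python's sort is stable).
POOLS = [{0, 1, 2}, {1, 2, 3}, {0, 1, 7}, {5, 6, 7}, {4, 5, 6}, {2, 6, 7}]
K = 3

def sp(active_inputs):
    scores = [len(active_inputs & pool) for pool in POOLS]
    return set(sorted(range(len(POOLS)), key=lambda c: scores[c],
                      reverse=True)[:K])

A = {0, 1, 2}
noisy_A = {0, 1, 3}      # one bit moved (noise)
occluded_A = {0, 1}      # one bit missing (occlusion)

print(sp(A), sp(noisy_A), sp(occluded_A))
# In this toy example all three inputs yield the same SDR; with heavier noise
# or occlusion the overlap with sp(A) degrades gracefully rather than abruptly.
```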
- How does online learning happen in the SP?
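In outline (a hedged sketch of the Hebbian rule; the constants are illustrative, not NuPIC defaults): after each input, only the winning columns learn; their synapses to active input bits have their permanences incremented and those to inactive bits decremented, and a synapse counts as connected once its permanence crosses a threshold.

```python
# Toy online-learning step for the SP. Constants are illustrative only.
PERM_INC, PERM_DEC, CONNECTED = 0.05, 0.02, 0.5

def learn(permanences, winning_columns, active_inputs):
    """Hebbian update applied only to the columns that won this time step."""
    for c in winning_columns:
        for i, p in permanences[c].items():
            if i in active_inputs:
                permanences[c][i] = min(1.0, p + PERM_INC)   # reinforce
            else:
                permanences[c][i] = max(0.0, p - PERM_DEC)   # punish

# Column 0's potential synapses: input bit -> permanence.
perms = {0: {0: 0.48, 1: 0.52, 2: 0.30}}
learn(perms, winning_columns={0}, active_inputs={0, 2})
# Bit 0 rises past the connected threshold (0.48 -> 0.53); bit 1, which was
# connected but inactive this step, decays toward disconnection (0.52 -> 0.50).
```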
- What is boosting and how does it work? Is it necessary?
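A sketch of one common form of boosting (patterned after NuPIC’s global boosting; the exact formula and constants may differ by implementation): each column tracks its active duty cycle, and its overlap score is multiplied by a factor that is greater than 1 when the column fires less often than the target density and less than 1 when it fires more often, pushing all columns toward equal use.

```python
from math import exp

BOOST_STRENGTH = 2.0
TARGET_DENSITY = 0.10   # fraction of time we want each column active

def boost_factor(active_duty_cycle):
    # Exponential homeostasis: under-used columns get amplified overlap scores.
    return exp(BOOST_STRENGTH * (TARGET_DENSITY - active_duty_cycle))

print(boost_factor(0.00))   # starved column: factor > 1
print(boost_factor(0.10))   # at target: factor == 1
print(boost_factor(0.50))   # over-active column: factor < 1
```

Note that BOOST_STRENGTH = 0 makes every factor 1, i.e. boosting off, which is one way to probe the “is it necessary?” question empirically.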