Multi-encoder and density of SDRs

Straightforward question here. What happens to an HTM model if a multi-encoder is used and, for the sake of argument, thousands and thousands of encoders were used to combine many metrics? Do the SDRs still maintain sparsity?

Yes, the SDRs would maintain the same sparsity, but the encoding space would be so gigantic that it would take the Spatial Pooler forever to choose the 2% of columns to activate. Each new field has its own encoding, and they all get concatenated together, so you'd wind up with a huge total encoding.
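Here's a minimal NumPy sketch of what that looks like. None of this is a real HTM library call; the field count, encoder width, column count, and potential-pool size are all made-up numbers chosen so it runs quickly, and the "Spatial Pooler" here is just a toy overlap-and-top-k stand-in:

```python
import numpy as np

rng = np.random.default_rng(42)

NUM_FIELDS = 2000          # "thousands" of metrics (assumption)
BITS_PER_FIELD = 400       # a plausible scalar-encoder width (assumption)
ACTIVE_BITS = 21           # active bits per field encoding (assumption)

def encode_field(value):
    """Toy scalar encoder: one contiguous block of active bits
    whose position depends on the value (0.0 .. 1.0)."""
    out = np.zeros(BITS_PER_FIELD, dtype=np.int8)
    start = int(value * (BITS_PER_FIELD - ACTIVE_BITS))
    out[start:start + ACTIVE_BITS] = 1
    return out

# The multi-encoder concatenates every field's encoding into one
# input vector: 2000 fields x 400 bits = 800,000 input bits.
values = rng.random(NUM_FIELDS)
encoding = np.concatenate([encode_field(v) for v in values])
print(encoding.size)       # 800000

# Toy Spatial Pooler: each column samples a random subset of the
# input (its potential pool), scores its overlap with the active
# input bits, and the top 2% of columns win. Output sparsity stays
# fixed at 2% no matter how wide the input grows; the cost of
# scoring columns over an 800,000-bit space is what grows.
NUM_COLUMNS = 2048
POOL_SIZE = 1000
pools = rng.integers(0, encoding.size, size=(NUM_COLUMNS, POOL_SIZE))
overlaps = encoding[pools].sum(axis=1)

k = int(0.02 * NUM_COLUMNS)                 # 2% sparsity target
active_columns = np.argsort(overlaps)[-k:]
print(len(active_columns) / NUM_COLUMNS)    # 0.02
```

The output sparsity is the same whether the input is 400 bits or 800,000; what blows up is the size of the space the columns have to cover.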

Sparsity would be maintained through the spatial pooling process. However, I really doubt you'd get any useful output by combining thousands of fields into a single input representation. You'd need to split them into smaller units, with at most three or four fields each, though even that would be difficult to scale into the range of thousands of units. Additionally, HTM doesn't currently have an effective way of linking parallel units together to form high-dimensional predictions. Just a theory, but I think the missing piece is a pooling layer like the one used in the SMI, where parallel units can vote on a higher-level "object" that biases the predictions in the inference layers.
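A rough sketch of that split-and-vote idea, again in plain NumPy. The per-unit pooling is the same toy stand-in as above, and the voting step is my own assumption loosely modeled on the SMI output layer, not Numenta's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

FIELDS = 12                # small stand-in for "thousands" of fields
FIELDS_PER_UNIT = 3        # "three or four fields each"
BITS_PER_FIELD = 100
NUM_COLUMNS = 1024         # per-unit column count (assumption)
SPARSITY = 0.02

def encode(values):
    """Toy multi-encoder for one unit's few fields."""
    parts = []
    for v in values:
        bits = np.zeros(BITS_PER_FIELD, dtype=np.int8)
        start = int(v * (BITS_PER_FIELD - 10))
        bits[start:start + 10] = 1
        parts.append(bits)
    return np.concatenate(parts)

def pool(encoding, pools):
    """Toy per-unit Spatial Pooler: keep the top-2% columns."""
    overlaps = encoding[pools].sum(axis=1)
    k = int(SPARSITY * NUM_COLUMNS)
    return frozenset(np.argsort(overlaps)[-k:])

def random_sdr():
    k = int(SPARSITY * NUM_COLUMNS)
    return frozenset(rng.choice(NUM_COLUMNS, size=k, replace=False))

# Split the fields into small parallel units; each unit runs its
# own encoder + Spatial Pooler and produces its own SDR.
values = rng.random(FIELDS)
units = [values[i:i + FIELDS_PER_UNIT]
         for i in range(0, FIELDS, FIELDS_PER_UNIT)]
sdrs = []
for unit_values in units:
    enc = encode(unit_values)
    unit_pools = rng.integers(0, enc.size, size=(NUM_COLUMNS, 50))
    sdrs.append(pool(enc, unit_pools))

# Hypothetical learned object memory: one stored SDR per unit per
# object. "obj_A" matches the current input; "obj_B" is a distractor.
memory = {
    "obj_A": sdrs,
    "obj_B": [random_sdr() for _ in units],
}

# Voting: each unit proposes every object consistent with its own
# SDR, and the proposals are intersected across units. The surviving
# "object" would then bias each unit's predictions.
votes = [{obj for obj, stored in memory.items()
          if len(stored[i] & sdr) >= 10}
         for i, sdr in enumerate(sdrs)]
consensus = set.intersection(*votes)
print(consensus)           # {'obj_A'}
```

The point of the intersection is that no single unit has to disambiguate the "object" on its own; a unit whose three fields are ambiguous still narrows the candidate set, and the consensus across units does the rest.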
