No influence of learning based on the permanence of proximal connections

I agree that this is what we should do if dense representations were the objective. However, sparsity is a crucial factor in how HTM functions. @jhawkins says in some of his presentations something along the lines of “If you can only remember a single thing from this presentation, it should be sparse distributed representations.” SDRs are a core idea of HTM because of their merits. Especially when the theory extends to hierarchies, sparsity really shines: you can work with unions of activations, or check whether an SDR belongs to a union of SDRs (temporal/union pooling), as sketched below. At that point, though, the discussion becomes about the merits of sparse distributed representations, on which there are publications.
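
For concreteness, here is a minimal sketch of such a union membership check. This is plain NumPy, not Numenta’s implementation; the vector width `N = 2048`, the active-bit count `W = 40` (~2% sparsity), and the `contains` helper are values and names I’ve picked for illustration.

```python
import numpy as np

N = 2048   # SDR width (assumed typical value)
W = 40     # active bits per SDR, ~2% sparsity (assumed)

rng = np.random.default_rng(42)

def random_sdr():
    """Return a random SDR: a boolean vector with W active bits."""
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, size=W, replace=False)] = True
    return sdr

# Union pooling: OR together the SDRs the pool should remember.
stored = [random_sdr() for _ in range(20)]
union = np.logical_or.reduce(stored)

def contains(union, sdr, threshold=W):
    """An SDR 'belongs' to the union if (almost) all of its
    active bits are covered by the union's active bits."""
    return np.count_nonzero(union & sdr) >= threshold

print(contains(union, stored[0]))     # True: a stored member matches
print(contains(union, random_sdr()))  # False with overwhelming probability
```

With 2% sparsity, the union of 20 SDRs activates only around 30% of the bits, so the chance that all 40 bits of an unrelated SDR land inside it is astronomically small. With dense representations the union saturates after a handful of members and every vector “matches”, which is exactly why this trick depends on sparsity.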

For now, if the question is “Why should the inputs be sparse for the current SP algorithm?”, my short answer would be: “Because the algorithm is designed around sparsity and suffers on dense inputs” (see the sketch below). If the question is “Why sparsity?”, there are publications by Numenta focused on answering exactly that question, such as [1][2].
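
A quick way to see the “suffers on dense inputs” point numerically (again a toy sketch with assumed parameters, not the SP itself) is to compare the chance overlap of two unrelated inputs at ~2% versus 50% density:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048  # input width (assumed)

def random_input(active):
    """Random binary input with the given number of active bits."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, size=active, replace=False)] = True
    return v

for active in (40, 1024):  # ~2% sparse vs. 50% dense
    a, b = random_input(active), random_input(active)
    print(f"{active:4d} active bits: chance overlap = "
          f"{np.count_nonzero(a & b)}/{active}")
```

Two unrelated sparse inputs share roughly one bit, so any sizable overlap signals a genuine match; two unrelated dense inputs already share about half their bits, leaving overlap-based mechanisms like the SP’s little room to discriminate.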