I’d like to disagree with this conclusion, because a well-trained SP results in a good classifier. A good classifier can distinguish patterns among sets of inputs, so it will also have a stable output (e.g. active columns). The output of a well-trained SP (e.g. SP1) retains only the features that matter most, so it restricts SP2’s input domain; consequently SPn restricts SPn+k’s input domain. It is similar to function composition, where each function’s output is a smaller set than its input: in f(g(h(i(j(k(ℝ)))))), k reduces ℝ to the smaller set k(ℝ), and so on, leaving f with a smaller input domain. So the experiment turning out opposite to the expectation makes sense, at least to me.
Edit: Restrict here means reducing the set size.
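To illustrate the composition analogy, here is a minimal sketch (the stage functions are hypothetical stand-ins, not actual SPs): each stage collapses its input to a coarser value, so the composed pipeline’s range is no larger than its smallest stage’s range.

```python
# Sketch: composing maps whose ranges shrink, mimicking how each
# SP's output restricts the next SP's input domain.

def compose(*fns):
    """Return f1(f2(...fn(x)...)) as a single callable."""
    def composed(x):
        for fn in reversed(fns):
            x = fn(x)
        return x
    return composed

# Hypothetical stages: each one collapses its input to a smaller set,
# standing in for an SP keeping only the features that matter most.
k = lambda x: x % 100   # range has at most 100 elements
j = lambda x: x % 50    # ...then at most 50
i = lambda x: x % 10    # ...then at most 10

pipeline = compose(i, j, k)

# The whole pipeline's range is bounded by the last (smallest) stage:
outputs = {pipeline(x) for x in range(10_000)}
print(len(outputs))  # 10 distinct outputs
```

The point is only the set-size argument: no later stage can enlarge the domain it receives, so stacking stages can only keep it the same size or shrink it.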