Instead of a default initial threshold for segment activation, would it be more sensitive to use a dynamic threshold, for instance a statistical function of all current permanences?
I’ve considered this as well because it would help a single HTM instance learn a broader range of input features. It may even have some biological plausibility because V1 neurons respond to a lot of different receptive field sizes, from very small to very large.
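To make the idea concrete, here is a minimal sketch of what such a dynamic threshold could look like: instead of a fixed default, pick the activation threshold as a low percentile of how many connected synapses segments currently have. All names and parameters here are illustrative assumptions, not NuPIC API.

```python
import numpy as np

def dynamic_activation_threshold(segment_connected_counts, percentile=20, floor=5):
    """Hypothetical sketch: set the segment activation threshold to a low
    percentile of the connected-synapse counts across all segments,
    clamped to a minimum floor so segments cannot activate trivially."""
    counts = np.asarray(segment_connected_counts)
    threshold = int(np.percentile(counts, percentile))
    return max(threshold, floor)

# Illustrative counts of connected synapses per segment
counts = [12, 25, 30, 18, 22, 40, 15, 28]
print(dynamic_activation_threshold(counts))
```

A percentile-based rule like this would track the overall permanence distribution as learning proceeds, which is one way to phrase "a statistical function of all current permanences."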
For what it’s worth, many detailed neural models use forms of homeostasis including threshold adjustment. Check out SORN for three characteristic types of homeostasis.
In the case of HTM, a similar thing is accomplished by the boosting and bumping heuristics, which make sure columns are roughly evenly used. For distal connections, the properties of sparse pattern matching may make this unnecessary. All patterns are sparse, and the segments that match them are sparser still, which leaves little room for lowering the threshold: matching (for example) 8 bits of an SDR already gives you 99.9999% match confidence.
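The match-confidence claim can be checked directly. Under the usual SDR assumptions (here n = 2048 bits with w = 40 active, a common HTM parameterization, chosen for illustration), the probability that a random SDR overlaps a given one in at least θ bits follows a hypergeometric tail:

```python
from math import comb

def false_match_probability(n, w, theta):
    """Probability that a random SDR with w active bits out of n
    overlaps a fixed SDR (also with w active bits) in at least theta bits."""
    total = comb(n, w)
    tail = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return tail / total

# Illustrative parameters: 2048-bit SDR, 40 active bits, 8-bit match threshold
p = false_match_probability(2048, 40, 8)
print(f"False match probability: {p:.2e}")
print(f"Match confidence: {(1 - p) * 100:.4f}%")
```

With these parameters the false-match probability lands below one in a million, consistent with the "99.9999% confidence" figure for an 8-bit match.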
In NuPIC, there is nothing stopping you from changing permanence thresholds during runtime.