Chaos/reservoir computing and sequential cognitive models like HTM

I’ve sort of (maybe not) touched on these points here before, albeit in a simpler context.

TL;DR: If “the existence of a variable” is a metric, then one variable’s existence causes other variables’ existence metrics to decrease. At some point a diminished variable is forgotten, and some architectures compensate for this by minimizing the decrement values and spreading parameter values across more (or an ever-increasing number of) variables. The smaller these decrements are, the longer it takes for a variable’s existence to disappear, at the cost of more variables (e.g. billions of parameters).
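
Here is a minimal sketch of the trade-off I mean. This is my own toy model, not any particular architecture: each competing activation knocks a fixed decrement off a variable’s “existence” metric, so smaller decrements mean slower forgetting but imply that many more variables are needed to cover the same content.

```python
def steps_until_forgotten(existence: float, decrement: float) -> int:
    """Count how many competing activations it takes to drive one
    variable's existence metric down to zero (i.e. to forget it)."""
    steps = 0
    while existence > 0:
        existence -= decrement
        steps += 1
    return steps

# Larger decrement: the variable is forgotten after a few competitors.
print(steps_until_forgotten(1.0, 0.25))    # -> 4
# Smaller decrement: it survives far longer, but a model tuned this way
# needs many more variables/parameters to make the same distinctions.
print(steps_until_forgotten(1.0, 0.001))   # -> ~1000 (float rounding may shift it slightly)
```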

I do think that contradictory parameters increase in number as the context becomes more “high-level”, i.e. further up the hierarchy. Hence at low levels, pattern recognition is relatively easy to solve (e.g. sequences), but as you go up the hierarchy the same patterns become contradictory depending on context.
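
To make that concrete, here is a toy illustration (my own example, not anything HTM-specific): a low-level sequence has a single unambiguous continuation, while the same pattern at a higher level continues differently per context, so the model has to carry extra context variables, and the parameter count grows with the level of the hierarchy.

```python
# Low level: one pattern, one answer.
low_level = {("A", "B"): "C"}

# Higher level: the same surface pattern contradicts itself across contexts,
# so a context variable must be stored alongside every pattern.
high_level = {
    ("story", ("A", "B")): "C",
    ("code",  ("A", "B")): "D",
    ("music", ("A", "B")): "E",
}

print(low_level[("A", "B")])             # always "C"
print(high_level[("code", ("A", "B"))])  # "D" in this context, "C" or "E" in others
```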