The human brain doesn’t need to build a separate interpretation of every number we encounter in daily life. We can apply our understanding of numbers across domains (1 million dollars, 5 hours, 500 m, and so on) while also learning a common-sense understanding of each individual quantity in context.

To the best of my understanding, the HTM model learns a mapping between encoded representations of numbers, with some semantic overlap between bits (as in the hot gym example), and their predictive patterns. When I run an HTM on a cartpole environment, it fails hopelessly to interpolate between values and make predictions for values it hasn’t seen. The environment also has four separate quantities that are each encoded individually, even though the HTM could presumably exploit what the quantities share to learn better mappings. Why do we need to pass quantities in through separate streams? This makes it difficult to extrapolate, or even interpolate, to previously unseen values, not only within individual quantities but between them.

Is there some form of representation that can make predictions about new quantities based on relationships learned from existing ones? For example, I can tell that my room can’t fit 100 people, just by imagining that quantity, even though I’ve never seen that specific example. I can apply that logic to any number of objects, regardless of whether I’ve seen that many before.
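To make the semantic-overlap point concrete, here is a minimal sketch of a scalar encoder in the spirit of the hot gym example. The parameters (`n`, `w`, the value range) and the function name are illustrative, not taken from any real HTM configuration:

```python
# Minimal sketch of a scalar encoder with semantic overlap between bits.
# All parameters here are illustrative, not from any specific HTM config.

def encode_scalar(value, min_val=0.0, max_val=100.0, n=100, w=11):
    """Map a scalar to a set of w active bits out of n.

    Nearby values share bits (semantic overlap); distant values share none.
    Values outside [min_val, max_val] are clipped, so the encoder cannot
    represent anything beyond its configured range.
    """
    value = max(min_val, min(max_val, value))
    buckets = n - w + 1  # number of distinct start positions (quantisation)
    i = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    return set(range(i, i + w))

a, b, c = encode_scalar(20.0), encode_scalar(21.0), encode_scalar(80.0)
print(len(a & b))  # large overlap: 20 and 21 look similar to the HTM
print(len(a & c))  # zero overlap: 20 and 80 share no semantics at all
```

Nearby values share most of their active bits, so the HTM treats them as similar; distant values share none, and anything outside the configured range is simply clipped, which is one reason extrapolation fails so badly.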
Furthermore, my representation of quantities is (roughly) infinitely divisible and (roughly) infinitely scalable, but the HTM needs a quantised representation. It requires semantic overlap between bits, yet you can’t have 10 million bits representing all the numbers between 0.000001 and 0.000002.
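The divisibility problem above is easy to demonstrate with the same kind of illustrative scalar-encoder sketch: with a fixed bit budget, values closer together than one bucket become literally indistinguishable. A log-scale encoding (my own workaround here, not a standard HTM encoder) trades absolute precision for range, so equal *ratios* get equal overlap:

```python
import math

# Same illustrative scalar-encoder sketch as before; parameters are made up.
def encode_scalar(value, min_val, max_val, n=400, w=21):
    value = max(min_val, min(max_val, value))
    buckets = n - w + 1
    i = int((value - min_val) / (max_val - min_val) * (buckets - 1))
    return frozenset(range(i, i + w))

# 400 bits spread over [0, 1]: both values fall in bucket 0, so the HTM
# cannot tell them apart at all.
lo = encode_scalar(0.000001, 0.0, 1.0)
hi = encode_scalar(0.000002, 0.0, 1.0)
print(lo == hi)  # True -- identical SDRs

# Encoding the log of the value covers 18 orders of magnitude with the same
# 400 bits, at the cost of constant *relative* rather than absolute precision.
def encode_log(value, min_val=1e-9, max_val=1e9, n=400, w=21):
    return encode_scalar(math.log10(value),
                         math.log10(min_val), math.log10(max_val), n, w)

print(encode_log(0.000001) == encode_log(0.000002))  # False -- distinguishable
```

This doesn’t solve the problem, but it shows the trade-off is about what kind of precision you give up, not just about bit count.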
So my question is: how can we reuse learned patterns for quantities across domains, and how do we allow the HTM to interpolate and extrapolate to quantities it hasn’t seen?
Notes: This paper appears to indicate that the brain uses the frequency of spikes to encode numeric information (Single Neurons in the Human Brain Encode Numbers - ScienceDirect). Maybe the brain reuses some kind of columnar structure to pass object information along together, so that objects and quantities can be fed into the same models, learning one robust representation of quantity for all types of input instead of a new mapping for each input, as the HTM appears to do. I’m only vaguely familiar with how this columnar view applies to and is used by HTM theory, but it seems similar to a capsule network, which encodes object property information as a vector instead of as individual distributed values.
Another key thought I had is that the brain never actually receives pre-encoded representations of numbers. It always encodes them itself, for example as the visual lines of text that are transformed into a higher-level representation, or as individual objects that can be passed in as SDRs. Maybe this is a non-problem that only arises because we can pass in quantities directly, instead of symbols of quantities.