How to learn to add numbers

Hi. I am new to HTM (I haven’t coded anything yet).

Let’s say some network has learned the alphabet and the digits. And has learned that a set of contiguous digits forms a number. And has learned to detect that “number + number” is an operation to be carried out.

Where could the two numbers be kept so the “brain” can add them?



This is a very good question.

Gallistel CR. 2017 Finding numbers in the brain comes to mind. Abstract:

After listing functional constraints on what numbers in the brain must do, I sketch the two’s complement fixed-point representation of numbers because it has stood the test of time and because it illustrates the non-obvious ways in which an effective coding scheme may operate. I briefly consider its neurobiological implementation. It is easier to imagine its implementation at the cell-intrinsic molecular level, with thermodynamically stable, volumetrically minimal polynucleotides encoding the remembered numbers, than at the circuit level, with plastic synapses encoding them.
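For anyone who hasn’t met it, the two’s complement fixed-point scheme Gallistel references can be sketched in a few lines of Python. The 8-bit width here is an arbitrary choice for illustration:

```python
# Two's complement encoding at a fixed 8-bit width (an illustrative choice).
BITS = 8

def encode(n):
    """Encode a signed integer into an 8-bit two's complement pattern."""
    assert -(1 << (BITS - 1)) <= n < (1 << (BITS - 1)), "out of range"
    return n & ((1 << BITS) - 1)        # wraps negatives into the upper half

def decode(bits):
    """Recover the signed integer from its bit pattern."""
    if bits >= (1 << (BITS - 1)):       # high bit set -> negative
        return bits - (1 << BITS)
    return bits

# Addition "just works" modulo 2**BITS, which is part of why the scheme
# has stood the test of time: one adder handles both signs.
a, b = encode(-3), encode(5)
assert decode((a + b) % (1 << BITS)) == 2
```

The point of the sketch is the property Gallistel highlights: an effective coding scheme can be non-obvious (negatives live in the “upper half” of the code space) yet make the operations cheap.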


Hi. I read over the article.

But I don’t see how to apply it to HTM.

There has to be a mechanism that keeps information around to be processed later on.


Let’s start by separating out numerical processing tricks learned along with language as they involve vast tracts of the brain and serial operations.

Critters and humans recognize mass groups of objects like the shapes on dice. There are many papers on the capacity of humans and some animals on what quantities of objects can be discerned; the recent finding that bees can understand zero caused a big stir in that field.

This ability to discern quantities takes training and experience; it has been widely documented that young humans have trouble with tasks like judging that a short wide glass and a tall slender glass can hold the same amount.

IMHO the native cognitive numerical processing is part of our higher-level shape processing, located in the posterior temporal region and in the collective action of several processing maps. This is at a considerably higher level than what is considered part of current HTM models.

If you do wish to pursue this line of inquiry (and I hope you do!) you will have to learn how HTM and the TBT work, and move on to combining HTM models into the H of HTM.

Methinks the bees can just hack it using something like the following pseudocode: choose the blank pattern if available, else choose the one with the minimal value. It’d be interesting to replace the blank white pattern with a completely black pattern (or any other color) to see whether the bees can still hack it. Another experiment might be to replace the blank pattern with a pattern filled with many more black dots than usual. If the bees can still perform, then they’re hacking the counting challenge.
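That pseudocode, written out as a minimal Python sketch (representing each card as a dot count is my own assumption):

```python
# Hypothesized "hack": the bee never counts, it just prefers a blank card,
# else takes the card with the fewest marks. Cards are modeled here as a
# list of dot counts, one per card shown (an illustrative representation).

def pick_card(cards):
    """Return the index of the card the 'hacking' bee would choose."""
    if 0 in cards:                  # a blank card is available
        return cards.index(0)
    return cards.index(min(cards))  # otherwise take the minimal count

assert pick_card([3, 0, 5]) == 1   # picks the blank
assert pick_card([3, 2, 5]) == 1   # picks the fewest dots
```

A strategy like this would pass the “zero” test without any concept of zero, which is exactly what the proposed control experiments would probe.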

The basement of my home has a bunch of house spiders. It’s extremely fascinating to watch their behaviour. They show an incredibly large repertoire of behaviours, including how to manage more than one catch in their web. All of that behaviour comes from just a couple of hundred thousand neurons, fine-tuned by evolutionary strategies.

I consider the above comments as profound insights:

  1. numerical processing is part of shape processing;
  2. it’s at a considerably higher level than … current HTM models.

I can’t help adding: It’s totally compatible with current HTM models. It is just higher up along the “H” which is lacking in current HTM models.


The bees grasping the concept of zero is quite a stretch. Even the idea that they actually count or “understand numbers” is a stretch; a certain sense of numerosity and/or subitization is a necessary step, but not a sufficient one.

This video is quite an interesting presentation on number perception.
It seems that an unsupervised game of picking/putting/shaking objects in a box or bowl would explain it, in both toddlers and ANNs.

We are very dynamic creatures; a lot of what we learn, we learn via play and interaction.


“it is not easy to imagine in the abstract a scheme that satisfies these constraints, much less to find a neurobiological realization of that scheme.”

Quote from the Gallistel article. Gallistel is a well-accomplished scientist, and even though his scheme is not helpful to your question, his assessment of the “hardness” of the question you raised probably is helpful: nobody, as far as he knows (and he knows a lot of scientists and researchers), has ever gotten even close to finding an answer to your question.

Specifically, how are two numbers kept in the brain to be added later on? HTM has no answer to that question, just to be clear.

Gallistel’s article tried to (elaborately) point to a direction of searching for answers to that question, while acknowledging science today is not even close to finding an answer. Obviously you did not resonate with his direction.

Neither do I – he “logically deduced” that there must be some mysterious sub-cellular/sub-neuron, molecular level mechanism that encodes & processes “2+2=4” kind of mental tasks, in the style of von Neumann architecture of computing - CPU/Memory separation (and DNA style of information “encoding”). He published many papers in this area.

What he has been opposing is the more prevailing belief in “brain information processing through synaptic plasticity” (the so-called neuron/cellular level), which a great number of hand-waving neuroscientists subscribe to – for example, a short-term-enhanced synapse (through STDP) linking some neurons together (e.g. a synfire chain) to represent a number in short-term memory (working memory). It’s hand-waving because these scientists subscribe to the general idea, and have done a tremendous amount of research at the neuron, synapse, protein, and behavior levels, but not at the network-circuitry level, and still cannot come up with a specific circuit or example implementation of even a toy system of the kind described in your question.

I am as convinced as you are that “there has to be a mechanism to keep information to be processed later on”; it is just that no respected neuroscientist or cognitive scientist has yet offered a coherent, self-consistent, and functioning theory and/or implementation of such a mechanism.

I have been searching for theories/ideas/implementations of this “mechanism” with serious efforts. Curiosity is a strong motivator.


Direct quote from Gallistel’s paper, on how numbers can be stored in the brain:

2+2 is easy; we don’t need to store anything, we already know the answer. Most of us already know the answer for 6x7 – I mention it specifically since it was such a PITA to remember in early school.
All we have are already-known answers (aka “landmarks” or “positions” on the cognitive map) and step-by-step known “valid paths” we have to take to explore uncharted territory beyond the already-known space. Paths are made of a series of learned “positions” and “action options” at each step.
I won’t go into detail, but (at least for me) there is no fundamental difference between “thinking” and navigating/exploring a town or any other territory.
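The landmarks-plus-paths idea can be made concrete with a toy Python sketch. All the names and the choice of “+1” as the only learned move are my own illustrative assumptions:

```python
# Toy illustration of "landmarks plus valid paths": a few answers are
# memorized outright, and anything else is reached step by step along the
# number line using only the learned successor ("+1") move.

landmarks = {(2, 2): 4, (6, 7): 42}   # rote-memorized facts

def add(a, b):
    if (a, b) in landmarks:           # a known position: pure recall
        return landmarks[(a, b)]
    pos = a
    for _ in range(b):                # explore: b successor steps from a
        pos = pos + 1
    return pos

assert add(2, 2) == 4                 # recalled, no work done
assert add(5, 3) == 8                 # walked step by step
```

The design choice mirrors the post: recall when a landmark exists, otherwise navigate along a known valid path.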

The questions you raise here are more related to how short-term “workspace” memory works – you doubt this could be stored in synapses. Maybe. I would explore some feedback circuitry that keeps outputting the most recent items added to “workspace memory”, so they remain available several timesteps later.
That, before speculating about sub-molecular quantum memory which, as with synapses, no one can describe exactly how it is supposed to work. If they could, it would be mainstream.
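The feedback-circuitry idea might be sketched roughly like this in Python, with SDRs modeled as sets of active bit indices and the decay horizon picked arbitrarily (both are my assumptions, not an HTM mechanism):

```python
# Minimal sketch of a feedback loop that re-circulates recent items:
# anything presented to the workspace stays available for a bounded
# number of timesteps afterwards.
from collections import deque

class Workspace:
    def __init__(self, horizon=5):
        self.loop = deque(maxlen=horizon)   # feedback buffer

    def step(self, sdr=None):
        """Advance one timestep, optionally presenting a new SDR."""
        if sdr is not None:
            self.loop.append(frozenset(sdr))
        return list(self.loop)              # what is currently circulating

ws = Workspace(horizon=3)
ws.step({1, 5, 9})        # present the "first number"
ws.step({2, 6, 8})        # present the "second number"
held = ws.step()          # a later step: both items still available
assert held == [frozenset({1, 5, 9}), frozenset({2, 6, 8})]
```

The bounded `deque` stands in for decay: old items drop out once the loop is full, which is the “available several timesteps later” behavior described above.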


@cezar_t What an amazing insight! :clap:
I can’t get enough of “TBT-perspective” metaphors of reasoning. It just feels right.
BTW, you probably already know this, but Jeff would say working memory is definitely in synapses – specifically the ones that turn their weights on and off rather rapidly, almost like a switch.

That’s my bad – I was trying to refer to OP’s “keep two numbers in the brain then later add them”, i.e. the “storing” part. Obviously “2+2” failed to convey that. Sorry for the confusion.

About the “adding” part, I kind of feel this is the right idea: “numerical processing is part of shape processing”.

I myself am a firm believer that it’s done at the synapse level. I was quoting the academic paper (by Gallistel) referenced in the discussion, which pointedly focused on the same problem the OP raised. The author has steadfastly doubted that this could be stored in synapses.

And beware: adults make 700 synapses per second. If synapses really can’t be formed in a second, then there has to be some repeating mechanism that keeps pounding the darn neurons until the synapse is built. How do we know that? Because we already know that’s how synapses get formed – by repeatedly signalling the target neuron.

So some mechanism of “keep shouting this new SDR at the cortex until some minicolumn gets it” has to be there.
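The “keep shouting until it sticks” mechanism can be sketched as repeated permanence increments, in the spirit of HTM permanence values; the specific threshold and increment here are illustrative assumptions:

```python
# Sketch of repetition-driven synapse formation: each presentation nudges
# a permanence value upward, and the synapse only counts as formed once
# the permanence crosses a connection threshold.

THRESHOLD = 0.5   # illustrative connection threshold
INCREMENT = 0.1   # illustrative per-repetition strengthening

def present_until_learned(permanence=0.0):
    """Repeat the signal until the synapse connects; return repeat count."""
    repeats = 0
    while permanence < THRESHOLD:
        permanence += INCREMENT   # each repetition strengthens the contact
        repeats += 1
    return repeats

assert present_until_learned() == 5   # five repetitions at these settings
```

Nothing forms on a single presentation; it is the pounding loop, not any one signal, that builds the connection.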


Even if there is some strange molecular mechanism that stores information short-term until it is permanently carved into synapses or erased, that’s completely irrelevant to the AI field, because storing info instantly in computers is trivial. At least here we don’t need to replicate how the brain does it, only what it does.

I think that the “holding a number” part is pattern completion in a given map. The pattern distributed across this map is the key to the sequence (hTm) that forms the symbol that stands for the value. A number (spoken or read) is stored and processed much like a song or a word.

As I stated above, number manipulation emerges from the symbol manipulation parts located in the middle and posterior portions of the temporal lobe.

I think that these high-level symbol manipulation tricks are learned in acquiring language. A little reflection should show that when you do mathy things you are mostly doing rote memorization like a multiplication table or a sequence of pattern mappings like grouping numbers in addition and subtraction. These things have to be taught as part of acquiring language.

Addressing the OP question about “where are the numbers?” …

At a very high level, you sequence the actions by repeated pattern updates in the global workspace, with certain maps holding the parts of the patterns to be processed. For the programmers: the lvalue and the rvalue, held in the brain as the first and second in presentation – more sequential (hTm) processing. This is where temporal pooling shows its utility. These can be chained, like Perl’s $_ variable.
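A toy rendering of that sequencing in Python – maps hold the first and second presented patterns (lvalue and rvalue), and the learned mapping resolves them. Everything here is a hypothetical illustration, not an HTM implementation:

```python
# Sequential presentation fills the lvalue map, then the operator, then the
# rvalue map; resolution is a rote-memorized pattern mapping, as described.

rote_table = {("3", "+", "4"): "7"}   # learned mapping (multiplication-table style)

class GlobalWorkspace:
    def __init__(self):
        self.lvalue = None   # map holding the first presentation
        self.op = None       # map holding the detected operation
        self.rvalue = None   # map holding the second presentation

    def present(self, symbol):
        """Presentation order determines which map holds each symbol."""
        if self.lvalue is None:
            self.lvalue = symbol
        elif self.op is None:
            self.op = symbol
        else:
            self.rvalue = symbol

    def resolve(self):
        return rote_table[(self.lvalue, self.op, self.rvalue)]

ws = GlobalWorkspace()
for s in ["3", "+", "4"]:
    ws.present(s)
assert ws.resolve() == "7"
```

The “where are the numbers?” answer this models: they sit in separate workspace slots, distinguished purely by presentation order, until the learned mapping consumes them.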


This was explained very well. It’s either a recall from memory or just following a learned path (step-by-step mathematics), like almost anything else we call intelligence :mechanical_arm:
