# Performing basic math with grid and displacement cells

I will keep on adding to this list.

Goal

Recreate this simple calculator app in a neural structure using all the tricks we know so far.

1. Numbers

John von Neumann’s definition of a number is that it is the set of its predecessors.

``````0 = {}
1 = {0}
2 = {0, 1}
3 = {0, 1, 2}
...
``````
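The definition above can be sketched directly in Python using `frozenset` (a toy illustration, not anything neural):

```python
def von_neumann(n):
    """Build the von Neumann ordinal for n: the set of all its predecessors."""
    num = frozenset()                    # 0 = {}
    for _ in range(n):
        num = frozenset(num | {num})     # successor: n+1 = n ∪ {n}
    return num

# 2 = {0, 1}: it literally contains the rooms 0 and 1
assert von_neumann(0) == frozenset()
assert von_neumann(1) in von_neumann(2)
assert len(von_neumann(4)) == 4          # n contains exactly n predecessors
```

Note that `len(von_neumann(n)) == n`, which is exactly the “unique rooms containing other unique rooms” picture described next.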

Figure A

This definition transforms natural numbers into unique rooms, with each unique room containing other unique rooms. An empty room is 0, a room that contains the room 0 is 1, a room that contains the rooms 0 and 1 is 2, and so on.

Figure B

An experiment with a rat navigating inside three unique rooms. The concept of an empty room in Figure B and the concept of 0 in Figure A are remarkably similar.

2. Counting

@rhyolight

The lack of a representation of sensory input in a space is essentially information.

As you’ve said, the brain represents only what it senses, not what it doesn’t sense, but an empty room, or 0, is something you can sense. It’s not something you imagine when you look around an empty room; it feels like you are sensing it. You wouldn’t need to associate an infinite number of empty location spaces with an empty room; a finite subset of all the locations your eyes saccade to while you were looking will do. It’s very useful to know that the room or the cup I just looked at is empty. Why not store this information rather than store only the features and infer the emptiness somehow? @Falco Later, I can contemplate what to add to the room or the cup.

@Bitking

I suspect that a better way to think of this is to focus on the concept of object representation as a collection of features. One common collection of features is the default set when no particular object is present to bind to - nothing.

This means an “empty room” is a room (a set of locations) with no features associated with it. If there’s such an object that gets recognized by the absence of features, it allows for a neat trick.

If all it takes is the absence of features at a location (or a set of locations) to recognize an “empty room”, then every object will be recognized as an “empty room” and as a unique object simultaneously through sensory input. The location space of an object is bigger than the object itself, which means it contains empty location spaces. Empty location spaces are “empty rooms” by definition. Since all it requires is the absence of features, only a single representation of an “empty room” will be associated with each unique object: you can only infer once, per object, whether features are absent. This is different from counting how many empty location spaces are inside a location space.

``````Cup = Cup AND empty room
Ball = Ball AND empty room

0 = {} AND empty room
1 = {0} AND empty room = {empty room} AND empty room
2 = {0, 1} AND empty room = {empty room, empty room} AND empty room
``````

Figure C

The “empty room” is the commonality shared between all objects and allows for a counting mechanism by association. The number 2 will be associatively recognized by any set of 2 empty rooms, meaning any 2 objects, not just the rooms 0 and 1 that are its predecessors in von Neumann’s definition of numbers. This mechanism allows for counting and grouping of both abstract and physical objects. It also requires literal neural tissue, which is why it’s so difficult to count to very large numbers without devising new ways of moving in the mathematical universe.
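The counting-by-association idea in Figure C can be sketched as a toy Python model (the `empty_room` marker and the object names are hypothetical, just illustrating the claim that any n objects trigger the same representation as the number n):

```python
def recognize(obj_name):
    """Every recognized object carries exactly one 'empty room' marker,
    triggered by the absence of features somewhere in its location space."""
    return {"object": obj_name, "empty_room": True}

def count_by_association(objects):
    """The number n is associatively recognized by any set of n empty rooms,
    regardless of which objects (abstract or physical) carry them."""
    return sum(1 for o in objects if o["empty_room"])

# A cup and a ball: two empty rooms, hence the number 2,
# without ever needing the specific rooms 0 and 1.
scene = [recognize("Cup"), recognize("Ball")]
assert count_by_association(scene) == 2
```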

We can use the concept of the “empty room” as the real “unit” that Georg Cantor (the mathematician) proposed. You can’t make a unit-based mathematical region in the brain without a window to model it from physical space. How would you represent a “unit”? Cantor suggested depriving objects of all of their individuating features beyond their being distinct from one another. The brain doesn’t have time to do this, and it doesn’t feel like it’s doing this. When I look at a cup and a ball, I know it’s two things. I don’t need to strip them down to two “units” first and then count them. The information needed to make this calculation must have been sensed while I’m looking at them. This is what a simultaneous sense of two empty rooms from visual sensory input can do, something not possible with units as non-sensory (abstract) objects. Also, how would you associate a unit with all abstract and physical objects? What does it mean to deprive them of their individuating features? The pursuit of a “unit” is a high-level, philosophical endeavor. Practically, only the concept of an empty room caters to all these needs, has an already established representation with grid cells, and feels intuitive.

If you think about it, an “empty room” is the “unit”, because it’s what remains when you try to deprive two things of their individuating features.

The video starts from Cantor’s proposition (9:37):

3. Addition

When we are taught numbers, we are also taught addition. Counting is fundamentally addition. The function of addition can be described visually as an operator randomly selecting a location space as the space of the result and operating by “displacing” objects into it.

Figure D

For example, moving a “Cup” from the location space of a “Table” and a “Ball” from the location space of a “Chair” to the location space of a “Desk” leaves the result space of the “Desk” with two “empty rooms”, which associatively recognize the number 2. The result spaces of the “Table” and the “Chair” both associatively recognize the number 0.

``````"Cup" (in table) + "Ball" (in chair) = "Cup", "Ball" (in desk)

{empty room} (in table) + {empty room} (in chair) = {empty room, empty room} (in desk)

1 (in table) + 1 (in chair) = 2 (in desk)
``````

There’s nothing permanent in this displacement, meaning you’ll have to learn all possible displacements of all possible objects, unless you incorporate something stable like the concept of an “empty room” and how it behaves. One way to tackle this is by starting with the obvious assumption that all numbers are re-entrant structures of the “empty room”, or 0. Now you have displacements that link all of them together, meaningfully.
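Figure D’s addition-as-displacement can be sketched as a toy model (the space names and objects are the hypothetical ones from the example above, not anything neural):

```python
# Toy model of addition as displacement between location spaces.
spaces = {"table": {"Cup"}, "chair": {"Ball"}, "desk": set()}

def displace(obj, src, dst):
    """Move an object from one location space into the result space."""
    spaces[src].remove(obj)
    spaces[dst].add(obj)

displace("Cup", "table", "desk")    # 1 (in table) ...
displace("Ball", "chair", "desk")   # ... + 1 (in chair)

# Each object in the desk's space contributes one "empty room",
# so the result space associatively recognizes the number 2;
# the emptied source spaces recognize 0.
assert len(spaces["desk"]) == 2
assert len(spaces["table"]) == len(spaces["chair"]) == 0
```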

4. Complex Equations

Very large numbers might as well be just more complex equations. We all have agreed to use base-10 and a series of additions.

``````146,525 = 1x10^5 + 4x10^4 + 6x10^3 + 5x10^2 + 2x10^1 + 5x10^0

146,525 = 100,000 + 40,000 + 6,000 + 500 + 20 + 5
``````

We could have used 256 symbols throughout history and made a base-256 representation of the same number. It would have worked fine if we had all agreed to use the same 256 symbols.

``````146,525 = 2x256^2 + 60x256^1 + 93x256^0

146,525 = 131,072 + 15,360 + 93
``````

I will randomly assign these symbols to these numbers:

``````2 = "!", 60 = "*", 93 = "&"
``````

The number 146,525 in base-256, using the symbols above, can be written like this:

``````146,525 = !*&
``````
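The base conversion and the arbitrary symbol assignment above can be checked with a short Python sketch:

```python
def to_base(n, base):
    """Return the digits of n in the given base, most significant first."""
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1] or [0]

assert to_base(146525, 10)  == [1, 4, 6, 5, 2, 5]
assert to_base(146525, 256) == [2, 60, 93]

# The arbitrary symbol table chosen above: 2 = "!", 60 = "*", 93 = "&"
symbols = {2: "!", 60: "*", 93: "&"}
assert "".join(symbols[d] for d in to_base(146525, 256)) == "!*&"
```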

This is just a way of moving in the mathematical universe and adds a layer of abstraction.

It’s obvious that algebra is done using locations and location spaces, but what does it mean to have a negative number? Is orientation included? What about a complex function f(x) = sqrt(x) + g(x^2) as a series of other functions? Is it a sequence of displacement cells or something a lot more complicated?

If we had lived our childhood in 5000 B.C., we wouldn’t have been able to get past associative counting mechanisms, like what animals probably do. All later math is displacements and movements discovered by neocortices that left their findings behind before they died. It’s not like you need to recreate all this, but rather implement a very clever starting mechanism that is capable of understanding and learning all of it through exposure.

@rhyolight I’d like to know exactly how math is executed in neural structures, and I feel like the representation of 0, or that of an “empty room”, is the obstacle I have to overcome. I’m uninterested in philosophical inquiries.

Though people have always understood the concept of nothing or having nothing, the concept of zero is relatively new; it fully developed in India around the fifth century A.D., perhaps a couple of centuries earlier. Before then, mathematicians struggled to perform the simplest arithmetic calculations.

Zero found its way to Europe through the Moorish conquest of Spain and was further developed by Italian mathematician Fibonacci, who used it to do equations without an abacus, then the most prevalent tool for doing arithmetic.

Guys, I don’t think an abacus suits The Thousand Brains Theory of Intelligence.

6 Likes

Interesting project!

What will we learn from accomplishing your goal?

Remember that mathematics is a cultural creation, a system of memes evolving through generations of human minds. It cannot be learned purely by studying physical objects. You must have the ability to create abstract ideas that are not rooted in physical objects to learn math. How this works in the brain is certainly unresolved.

2 Likes

A non-philosophical answer - amplifying what was said above about the construction of “mathematics.”

Much of what we think of as “thinking” is really learned procedures, many associated with learning a language, including the vastly important trick of linking a sensation (sound or sight) to a cluster of facts - symbolic representation.

Note the bit about learning that things could have a name; this comes with language.
This is one of the basic supports that “math” depends on. There are other important bits, and they all come with learning a language.

To amplify on this slightly - we expect that part of the action of the forebrain is to select and implement a chain of activation that results in some activation pattern along the central sulcus (motor cortex) that makes the body move.
A less obvious part of this is that at an early level of this process, some of the projections from the forebrain never leave the brain; they project to parts of the brain that are associated with memories of perceived objects, where the WHAT and WHERE streams come together. We normally consider this as thinking. The act of learning a language trains us to use this process in certain ways. These are learned actions, much like learning to walk. In this case, we are learning to communicate internal states with things like naming and elaborating goal seeking with vocal sound production instead of walking and reaching.

As you are trying to work out this basic math thing, remember that there are limits to what simple local processing may be doing. It could take the coordinated actions of a large number of maps to do these tasks.
Thinking of the progression of:

• Chemical messenger
• Synapse
• Dendrite
• Cell
• Mini-column
• Macro-column (or hex-grid node in my way of thinking)
• Collection of columns (or hex-grid …)
• Map
• Collection of maps
• Cortex/subcortical structure
• Sequential operations local to the brain
• Brain-body interaction

As you are trying to work out some aspect of brain operations, it is critical to establish at what level(s) these are manifested. Picking the wrong level(s) is sure to lead to dead-ends, wasted effort, and frustration.

1 Like

A ball, as a physical object, comes from visual input and is recognized as:

`Ball AND empty room`

The concept of democracy as an abstract object in higher parts of the brain is recognized as:

`Democracy AND empty room`

We can count these 2 objects even though one of them isn’t physical.

IF the “empty room” hypothesis can work.

What will we learn from accomplishing your goal?

`2 + 3 = ?`

Maybe create a new math region, like we have a visual region and a language region (done by cortical.io).

We can add new math encoders alongside existing encoders for GPS, date/time, language, etc.

Figure out new ways of using them.

For example, the system will be able to disambiguate between cats and dogs, and also count how many of them there are in a picture, or how many edges a cube has.

I’m uninterested in philosophical inquiries.

I wrote this because I didn’t want to get ignored.

1 Like

At what level in the brain are these objects recognized?
Where are the facts about the WHAT and WHERE of these named concepts stored?

These are most likely distributed over large areas of the brain and take large coordinated actions to recall and process them. Consider the distribution of semantic information as described in this paper:
How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics
Friedemann Pulvermüller @ Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany

Then get back to me with your thoughts on how these are processed in “thinking.”

I am NOT ignoring you - I am trying to point you to what I think is the right area to find the answers to your questions.

1 Like

I can’t think like this. Most of these structures are completely unknown to me.

I might be viewing this very egocentrically, but I can only use the simple theoretical framework Numenta discovered and let the hardcore neuroscience figure itself out. For example, when I was making an HTM (back when it was called CLA) for iPhones, to use in a web browser and predict the user’s bookmarks, I noticed that even though there is a limit to how far a connection can reach in real tissue, in a simulation you can have cells connect to each other with each one being on the opposite side of a virtual planet.

1 Like

Time to step up your game?

The risk of taking the one trick you know and trying to make it do everything is the same one that the point-neuron people are running. It may work really well, but as you get away from the biology you start to lose access to all of the wonderful tricks that nature has developed.

Numenta claims that learning from the biology is the key, and I agree that this is the best path forward.

1 Like

I think what Mark is getting at is also my point… that the answer to this question is much more complicated than you are making it. The brain is not solving this problem like you describe it. To solve this problem, you must introduce abstract concepts, and how to create them.

3 Likes

Whatever it takes.

3 Likes

I invite you to read the linked Friedemann Pulvermuller paper above and think about how it applies to your question. You may find it worth the effort.

I wrote the “empty room” hypothesis to work with grid cells and displacement cells, based on the latest full-view model Jeff described. All I’m saying is that when grid cells that represent the location of a ball are activated in L6b, there are also some grid cells active in the same location that represent empty location spaces.

I’m not putting any weight on it; it might just be gibberish I’m trying to make look believable. But there’s no significant progress in math A.G.I., and grid/displacement cells are very inspiring.

There’s this fantasy that keeps playing in my head - philosophy time. Imagine a highly advanced alien race in a parallel universe with different laws of physics. They might be membranes fluctuating at non-perceivable speeds, without bodies. I have no idea what exists out there, but I’m confident that even if everything is different, there are two concepts we will inevitably share.

1. Super-set of all infinities.
2. The concept of true nothingness.

These are such powerful concepts in themselves, the two extremes in which everything exists/doesn’t exist at the same time. There might be some mechanism you can derive from them.

1 Like

Ok, but I don’t think that last part is correct. Our mechanism does not account for it, at least. I don’t think the brain is doing it. Empty space is simply not represented. Zero is an abstract mathematical concept.

But if you are saying that information exists within the representation about empty space, simply that the lack of sensory feature existence at a location implies empty space, then yes I agree with you there. But it is implicitly represented, not explicitly represented.

4 Likes

I agree that HTM theory is not currently advanced enough to be used in writing a system to do math in the same way it is done in a human brain. That said, I think it could be fun (even if not very useful) to try and implement a much simpler calculator using HTM neurons arranged to simulate logic gates. A while back in another thread we talked about how the XOR operation could be learned in HTM. One could theoretically train up a whole lot of simulated logic gates to implement a calculator. For example, addition can be implemented with an ensemble of circuits like this:
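For concreteness, here is a plain-Python sketch of the kind of adder circuit being described: a ripple-carry adder built only from XOR, AND, and OR, the gates one would train HTM neurons to emulate. This is ordinary code standing in for the circuit diagram, not an HTM implementation:

```python
def full_adder(a, b, carry_in):
    """One full-adder cell: two XORs, two ANDs, one OR on single bits."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add(x, y, width=8):
    """Chain full adders into a ripple-carry adder over `width` bits."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add(2, 3) == 5
assert add(100, 55) == 155
```

Each `full_adder` call is the unit one would have to train as an ensemble of learned gates; the calculator is just many of these wired in sequence.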

5 Likes

A group of buddies I hung with in the late 70’s talked about doing the same thing with an optical neuromorphic computer. The problem we ran into is how do you train it up? Putting in a fixed pattern is just ordinary VLSI design and kind of misses the point.

2 Likes

Yes, this would definitely be exactly equivalent to (and far less efficient than) VLSI. So definitely not a “math AI” at all. The only reason I can think of for doing this would be “because I can” (like proving the Turing completeness of PowerPoint).

2 Likes

You mean something like this?

No?
This?

No?
OK then - this!

# Game of life: programmable computer

This is my programmable computer implemented in Conway’s Game of Life, computing the Fibonacci sequence. github: https://github.com/nicolasloizeau/gol… thread: http://www.conwaylife.com/forums/view…

2 Likes

You could even implement HTM on a programmable computer powered by HTM neurons…

4 Likes

Or on a conway computer simulating HTM!

1 Like

Matt, I will need your help. What does it mean exactly that the location space of an object is bigger than the object? Does the brain store locations that may expand the location space of the object without features associated with them?

(video starts at 32:04)

A definition of a room is a set of locations that are connected together by movement (via path integration). Some of these locations have associated features, and that defines how you know which room you are in, but not all of them do. You don’t have to have features at every location; you just have to have some.

Could this mean that the brain moves to locations where it senses no features and stores them as no-features@location pairs for path integration to work? If you think of the world as being on your retina, then every location has a feature, but in a 3D grid-cell projection this makes sense. If it does store no-features@location pairs, that’s all the system above needs for a counting mechanism by association.
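As a toy illustration of the no-features@location idea (the coordinates and feature names are made up):

```python
# A room as a set of locations connected by movement; only some locations
# carry features. Featureless locations are stored explicitly as None,
# i.e. the hypothesized no-features@location pairs.
room = {
    (0, 0): "door",
    (1, 0): None,      # visited via path integration, nothing sensed there
    (2, 0): None,
    (2, 1): "window",
}

# The featured locations tell you which room you are in; the None entries
# are the "empty rooms" a counting-by-association mechanism could use.
empty_locations = [loc for loc, feat in room.items() if feat is None]
assert len(empty_locations) == 2
```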

Remarkable results for ‘emergent numerosity’ in DNNs.

“This could be an explanation that the wiring of our brain, of our visual system at the very least, can give rise to representing the number of objects in a scene spontaneously.”

An AI System Spontaneously Develops Baby-Like Ability to Gauge Big and Small