Performing basic math with grid and displacement cells

I think what Mark is getting at is also my point… that the answer to this question is much more complicated than you are making it. The brain is not solving this problem the way you describe it. To solve this problem, you must introduce abstract concepts, and explain how to create them.


Whatever it takes. :face_with_monocle:


I invite you to read the linked Friedemann Pulvermuller paper above and think about how it applies to your question. You may find it worth the effort.

I wrote the “empty room” hypothesis to work with grid cells and displacement cells, based on the latest full-view model Jeff described. All I’m saying is that when grid cells that represent the location of a ball are activated in L6b, there are also some grid cells active in the same location that represent empty location spaces.

I’m not putting much weight on it; it might just be gibberish that I’m trying to make look believable. But there’s been no significant progress toward a math A.G.I., and grid/displacement cells are very inspiring.

There’s this fantasy that keeps playing in my head (philosophy time). Imagine a highly advanced alien race in a parallel universe with different laws of physics. They might be membranes fluctuating at non-perceivable speeds, without bodies. I have no idea what exists out there, but I’m confident that even if everything is different, there are two concepts we will inevitably share.

  1. Super-set of all infinities.
  2. The concept of true nothingness.

These are powerful concepts in themselves: the two extremes in which everything exists and doesn’t exist at the same time. There might be some mechanism you can derive from them.


Ok, but I don’t think that last part is correct. Our mechanism does not account for it, at least. I don’t think the brain is doing it. Empty space is simply not represented. Zero is an abstract mathematical concept.

But if you are saying that information exists within the representation about empty space, simply that the lack of sensory feature existence at a location implies empty space, then yes I agree with you there. But it is implicitly represented, not explicitly represented.


I agree that HTM theory is not currently advanced enough to be used in writing a system to do math in the same way it is done in a human brain. That said, I think it could be fun (even if not very useful) to try and implement a much simpler calculator using HTM neurons arranged to simulate logic gates. A while back in another thread we talked about how the XOR operation could be learned in HTM. One could theoretically train up a whole lot of simulated logic gates to implement a calculator. For example, addition can be implemented with an ensemble of circuits like this:

[image: adder logic-gate circuit diagram]
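To make the idea concrete, here is a plain-Python sketch of what such an ensemble would compute once trained. This is my own illustration, not an HTM implementation: each “gate” is an ordinary Boolean function standing in for a trained HTM circuit, and the training step itself is omitted.

```python
# Each "gate" below stands in for an HTM circuit trained on that
# Boolean task (e.g. the XOR learning discussed in the other thread).

def xor(a, b):
    return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry; returns (sum_bit, carry_out).
    Built from two XOR, two AND, and one OR gate."""
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = (a & b) | (s1 & carry_in)
    return sum_bit, carry_out

def add(x, y, width=8):
    """Ripple-carry addition: an ensemble of full adders, one per bit."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(19, 23))  # → 42
```

Chaining the carry from one full adder into the next is exactly the ripple-carry design from VLSI, which is why this would be “equivalent but less efficient” as noted below.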


A group of buddies I hung with in the late 70’s talked about doing the same thing with an optical neuromorphic computer. The problem we ran into is how do you train it up? Putting in a fixed pattern is just ordinary VLSI design and kind of misses the point.


Yes, this would definitely be exactly equivalent to (and far less efficient than) VLSI. So definitely not a “math AI” at all. The only reason I can think of for doing this would be “because I can” (like proving the Turing completeness of PowerPoint).


You mean something like this?

No?
This?

No?
OK then - this!

Game of life: programmable computer

This is my programmable computer, implemented in Conway’s Game of Life, computing the Fibonacci sequence. GitHub: https://github.com/nicolasloizeau/gol… Thread: http://www.conwaylife.com/forums/view…


You could even implement HTM on a programmable computer powered by HTM neurons…


Or on a conway computer simulating HTM!


Matt, I will need your help. What does it mean, exactly, that the location space of an object is bigger than the object? Does the brain store locations that may expand the location space of the object without features associated with them?

(video starts at 32:04)

A definition of a room is a set of locations that are connected together by movement (via path integration). Some of these locations have associated features, and that defines how you know which room you are in, but not all of them. You don’t have to have features at every location; you just have to have some.

Could this mean that the brain moves to locations where it senses no features and stores them as no-features@location pairs for path integration to work? If you think of the world as what’s on your retina, then every location has a feature, but in a 3D grid-cell projection this makes sense. If it does store no-features@location pairs, that’s all the system above needs for a counting mechanism by association.
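As a toy sketch of the no-features@location idea (entirely my own illustration, not an HTM mechanism): a room stored as a map from locations to features, where feature-less locations are kept explicitly rather than simply absent.

```python
# Hypothetical room representation: every visited location is stored,
# including those where nothing was sensed (marked with an explicit
# EMPTY sentinel). The explicit empty entries are what a counting-by-
# association mechanism could latch onto.

EMPTY = None  # "no feature sensed here", stored explicitly

room = {
    (0, 0): "door",
    (0, 1): EMPTY,
    (1, 0): EMPTY,
    (1, 1): "ball",
    (2, 1): EMPTY,
}

featured = [loc for loc, f in room.items() if f is not EMPTY]
empty = [loc for loc, f in room.items() if f is EMPTY]

print(len(featured))  # 2 feature@location pairs
print(len(empty))     # 3 no-feature@location pairs
```

The contrast with the usual view is that here the empty entries survive storage, so the room is defined by all five locations, not just the two with features.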

Remarkable results for ‘emergent numerosity’ in DNNs.

“This could be an explanation that the wiring of our brain, of our visual system at the very least, can give rise to representing the number of objects in a scene spontaneously.”

An AI System Spontaneously Develops Baby-Like Ability to Gauge Big and Small

This is just AI hype. I read the paper. It’s just Deep Networks doing what we’ve always known they do. Saying they have “learned to tell the difference between big and small” is an overstatement.


From neurobotany, a very simple example of counting:

Entire mechanism unfortunately seems to be mostly unknown. But at least some counting ability is possible without needing a complex brain.

Right you are; at the end of the piece, the doubts about the validity of the statement are articulated by the sceptical associate professor Peter Gordon. There is no notion of bigger/smaller, previous/next, or any embodied knowledge for that matter, just a symbol that represents the scene. Nevertheless, I do find some inspiration in these kinds of romantic AI articles.

The question in the OP remains open: what does it take to make an HTM “understand” numbers and do operations with them?


I think we need to think about mathematics more like language, in the same way music is referred to as a language. Languages are abstract conceptual objects, each one with a unique frame of reference that intersects our sensory representations in ways unique to our individual experiences with reality.


Remember that mathematics is a cultural creation

I can’t wrap my head around this. Bees can count using four brain cells. It seems that math is at the root of all HTMs. Even HTMs without a neocortex are able to count. Algebra is a lot more complex and abstract than simple arithmetic. Perhaps algebra can be seen as a cultural creation, but not in the literal sense: culture accelerated its growth.

Ok, but I don’t think that last part is correct. Our mechanism does not account for it, at least. I don’t think the brain is doing it. Empty space is simply not represented. Zero is an abstract mathematical concept.

But if you are saying that information exists within the representation about empty space, simply that the lack of sensory feature existence at a location implies empty space, then yes I agree with you there. But it is implicitly represented, not explicitly represented.

This means that you need to “contemplate” empty space in order to understand it, unless the brain can sense it and represent it explicitly.

Bees can even choose the value of zero, when trained to select the lesser of two quantities.

I can’t believe that bees naturally grew an understanding of 0 and empty space by contemplating implicit fundamental questions while developing their cognitive maps. It didn’t come at birth but it certainly isn’t a hard concept to use.

What I’m proposing is to associate empty space explicitly with objects that have been recognized. For example, let’s say we’ve trained an HTM on visual inference and showed it the Numenta cup floating inside an empty room. If the cup wasn’t there, nothing would be stored, because there would be no sensory input to activate a representation that would later be stored. When the cup is visible, this HTM should be able to recognize every object as both an object AND an “empty room”.

Cup AND “empty room”
Cylinder AND “empty room”
Handle AND “empty room”
NumentaLogo AND “empty room”

You can only store the “empty room” associated with something else, not by itself. The higher concept of 0, or the empty room, is represented like any other concept. The odd thing is that it’s associated with every recognizable object, but not as extra information. Every recognizable object is a unique room filled with sub-objects. The fact that it’s empty while it’s filled is the extra information.

If an “empty room” isn’t the commonality then something else must be. Even the fact that all objects are place cells in L2/3 should be considered a commonality.

When there are 2 recognizable objects in a room that share a commonality:

Object1 AND Commonality
Object2 AND Commonality

Allows for asking this question:

What are 2 simultaneously active “Commonality” representations associated with? How will it move?

The associated object (found by looking at grid cells) is the abstract room of the numerical object “2”, and it can be moved like this: addition, subtraction, multiplication (by looking at displacement cells).
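As a toy sketch of this counting-by-association question (the names and data structures are my own invention, not an HTM mechanism): every recognized object is paired with the same shared commonality, and the number of simultaneously active pairs is what gets associated with an abstract numerical object.

```python
# Hypothetical sketch: recognition of each object also activates the
# shared "empty room" commonality; the count of simultaneously active
# (object AND commonality) pairs maps onto a number object.

scene = ["Cup", "Ball"]  # objects currently recognized

# each recognition activates an (object AND commonality) pair
active_pairs = [(obj, "empty room") for obj in scene]

# the abstract rooms of numerical objects, learned previously
number_objects = {1: "one", 2: "two", 3: "three"}

print(number_objects[len(active_pairs)])  # → "two"
```

Here `len()` is of course doing the counting for us; in the proposal, that role would fall to whatever associates simultaneously active representations with a numerical room.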

Change this to “nothing”, because an empty room is something. It took me a while to respond, sorry!


What I’m proposing is to associate empty space explicitly to objects that have been recognized.

Not a unique “empty room” representation for every object, but a single one that applies globally and is connected to all of them. Any activation of an object would activate the representation of the “empty room”, hence it’s always active.

When you were learning the number 7, you would construct it as a room with these numbers placed inside it:

0 = {} AND “empty room”
1 = {0} AND “empty room”
2 = {0, 1} AND “empty room”
3 = {0, 1, 2} AND “empty room”
4 = {0, 1, 2, 3} AND “empty room”
5 = {0, 1, 2, 3, 4} AND “empty room”
6 = {0, 1, 2, 3, 4, 5} AND “empty room”
7 = {0, 1, 2, 3, 4, 5, 6} AND “empty room”
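Incidentally, this construction parallels the von Neumann ordinals from set theory, where each number is the set of all smaller numbers and 0 is the empty set (the “empty room” by itself). A minimal sketch of that construction, as my own illustration:

```python
# Build numbers von Neumann-style: 0 = {}, n = {0, 1, ..., n-1}.
# frozenset is used because sets of sets must contain hashable members.

def ordinal(n):
    """Return the set representing n, i.e. the set of all smaller numbers."""
    numbers = []
    for _ in range(n + 1):
        numbers.append(frozenset(numbers))
    return numbers[n]

zero = ordinal(0)
seven = ordinal(7)
print(len(zero))   # 0: the empty set, nothing inside the room
print(len(seven))  # 7: the rooms for 0 through 6 are inside
```

A nice property mirrored from the list above: the room for any smaller number sits inside the room for a bigger one, e.g. `ordinal(3) in ordinal(7)` holds.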

Any recognition of an object would recognize an “empty room”, so this would apply to numerical objects as well. I would assume that, since there’s the rule of 100 cells for each task, there can’t be more than a handful of objects recognized in each cycle.

*Note: You would learn the transformations performed by the operations of addition and subtraction simultaneously while learning what numbers are. You would model how the global/general “empty room” representation behaves instead of relying on unique incidents of everyday objects moving in and out of spaces.

Now, for an example: when 7 cups are shown to you, they are recognized through a sum of sensory patches:

Cup1 AND "empty room"
Cup2 AND "empty room"
Cup3 AND "empty room"
Cup4 AND "empty room"
Cup5 AND "empty room"
Cup6 AND "empty room"
Cup7 AND "empty room"

This way, it would be stored as 7 cups AND their associations with the “empty room”. This is similar to 7 numbers and their associations with the “empty room”. Since both cups and numbers are associated with the “empty room”, you would be able to associate the group of objects you are currently observing with the number 7, as previously performed when you were learning what the number 7 is.


The “empty room” is the default representation of a room before it has any features associated with it:

Empty Room = True Nothing @ Empty Room Locations

I suspect that a better way to think of this is to focus on the concept of object representation as a collection of features. One common collection of features is the default set when no particular object is present to bind to - nothing.

Language cannot communicate the true essence of “nothing” without assigning something to it. So, we are stuck on whether “empty room” is something or “nothing” is truly nothing.

  1. Nothing has structure because it’s a word but true nothing has no structure.
  2. There’s always space masking the appearance of true nothing, but you can also argue that true nothing has no appearance, hence it’s always apparent. An empty cup is filled with air, the vacuum of space is filled with sub-particles, and even truly empty space is filled with spacetime. True nothingness can’t be observed through sensory input while we are bound to live inside a space-time continuum.

Black Dot = Commonality + Black Dot Features @ Black Dot Locations
Democracy = Commonality + Democracy Features @ Democracy Locations

We all know this intuitively. This is all based on true nothing, not the abstract nothing we infer as “void”, “emptiness”, “air”, “spacetime”, or “invisible aether”.