Because this architecture exists throughout the cortex, it suggests
that we learn, infer and manipulate abstract concepts in the same
way that we manipulate objects in the world. So the theory is that
evolution discovered a way of navigating, knowing and mapping out
the environment. It had to do that a long time ago, because all
animals move and have to figure out where they are and how to get
home.
And then, there’s another theory that’s been published, that the
entorhinal cortex – it’s this three-layer structure that’s in
two parts – […] they proposed that the neocortex was formed
by folding those two halves on top of one another into a six-layer
structure.
So we think what’s basically happening is, evolution preserved much
of what’s going on in the entorhinal cortex – not exactly, there are
differences – but it preserved that, and now it’s learning how to
model objects in the world. And in the human brain, what happened is,
it’s now continued that, and it’s using that same mechanism to model
abstract concepts.
And so we suggested that – just suggested that – when we think
about things, whether it’s mathematics or physics or brains or
neuroscience or politics, whatever, we’re going to be using a
similar type of thing.
And what’s interesting about this is that these spaces, these ideas of
location and orientation, are dimensionless. They’re defined by
behavior, and they’re not metric – it’s not like X, Y and Z. There’s
sort of this very unusual way of representing these things. And if
behaviors weren’t physical behaviors, but were mental behaviors,
like mathematical transforms or something like that, you could apply
behaviors to abstract spaces, and this suggests that it might be
the core of high-level thought.
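One way to make the “mental behaviors applied to abstract spaces” idea concrete (my own toy sketch, not anything from the talk): treat a behavior as an operator that updates a location, path-integration style. The update rule itself does not care whether the axes are physical.

```python
import numpy as np

def apply_behavior(location, displacement):
    """Update a location by a movement vector (path integration).

    The same update rule works whether `location` lives in physical
    space or in an abstract concept space; only the interpretation
    of the axes changes.
    """
    return location + displacement

# Physical navigation: start at the origin, move 1 m east, 2 m north.
pos = apply_behavior(np.array([0.0, 0.0]), np.array([1.0, 2.0]))

# "Mental" navigation: the same operator moves a point in a
# hypothetical 2-D concept space (the axes here are made up).
concept = apply_behavior(np.array([0.3, -0.7]), np.array([0.1, 0.2]))
```

The point is only that one mechanism serves both readings; which spaces and which operators the cortex actually uses is exactly the open question.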
This worries me. The implementation of high-level thought evolving out
of re-purposing a spatial navigation mechanism sounds like a hack. It
means there’s significant work to do conceptually separating the essence
of intelligence from its biological implementation before it can be
efficiently applied to extra-biological systems.
I would like to join the debate you are opening here.
You write: “The implementation of high-level thought evolving out of re-purposing a spatial navigation mechanism sounds like a hack.”
You must define/exemplify “high-level” thought compared to “low-level” thought before your doubt can be constructive. Otherwise you argue from a position where you claim to know how the brain works… Jeff’s arguments are general across the system, which must be the best starting point. Too often people use their own sophisticated human experience as the starting point, but here we are trying to re-engineer evolution, and that means re-using existing structure to make new functions (if they are new and not only variations?).
One thing I have missed in HTM, which is now emerging, is the understanding that the brain and the body are ONE SYSTEM deciding about a behavior (= a sequence and tuple of movements, whether of arms, legs, head, or speech muscles… it is JUST muscle movements). From this point of view, the only thing that gives meaning is that movement and navigation are generalized and not necessarily dependent on coordinates other than the data created in the vestibular system’s three directions (which is a pattern more than coordinates).
If we take Jeff’s explanation, then we must work with “sense objects” and “motor objects” as the basic input/output objects of the brain’s decision process about “what to do now” (= based on a prediction built from assumptions and anticipations, differences and repetitions)… Furthermore, the dimensions of where and what must be supplemented with which, why, when and how, where the movement (= the behavior) is the result of a decision to start achieving a goal (orientation) and engaging a target (location) before starting the movement. When the location and the orientation deliver high accuracy and high precision, the uncertainty is at its minimum and the risk of failure/loss/lost opportunity/mistake/doubt is minimized (= entropy is minimized)… Below, this is explained in The Human Decision System, where the two channels of location/target and orientation/goal are described.
I think this development in HTM was fantastic and also necessary, because the only understanding of the brain that works is that it is a decision machine based on decision biology. When one-celled organisms could only eat one thing, move in one way, and sense in one way, they did not need a decision machine to sort out the risk of alternative moves.
When the brain doubled the three layers Jeff explains, it added the combination of doubt/past/future to the present/the moment/the now. One can imagine a simple organism with only three layers that moves, but has no idea about the past, the future or doubt. It has a 1:1 relationship with its environment, and uses only location cells to know whether it is moving now or standing still now. It can either repeat or change direction, but not much else. From this starting point evolution changed it into a human…
To clarify: What I am speculating here is that an efficient
extra-biological implementation of intelligence may require a
qualitative, algorithmic departure from the biological example. As in,
the model/algorithm itself and not just the model parameters will need
to be somewhat different to make that happen.
This is in contrast with Numenta’s stated goal of intelligent machines
that are bigger, faster, have further-reaching sensors etc., essentially
a solely quantitative, more-is-more departure from the biological model
(“On Intelligence”, chapter 8).
Elsewhere I already argued in favor of tweaking model parameters so that
the model behaves, in a way, qualitatively differently e.g. by
preserving superpositions of input patterns across more levels of the
cognitive machinery before having them succumb to winner-takes-all
winnowing. But it could go further. The understanding of the
“entorhinal” spatial cognition mechanism discussed here may turn out to
be necessary only for the purpose of successfully removing it from the
model, leaving the rest intact (“entorhinectomy”).
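To illustrate what I mean by preserving superpositions (a hypothetical sketch; `k_winners` is my own name, not a Numenta API): a k-winners-take-all stage with a larger k keeps several candidate patterns alive, while k = 1 is the classic winner-takes-all winnowing.

```python
import numpy as np

def k_winners(activations, k):
    """Keep the k strongest activations, zero the rest (k-WTA).

    k = 1 is classic winner-takes-all; a larger k preserves a
    superposition of candidate patterns for higher levels to use.
    """
    if k >= len(activations):
        return activations.copy()
    out = np.zeros_like(activations)
    idx = np.argsort(activations)[-k:]   # indices of the k largest
    out[idx] = activations[idx]
    return out

a = np.array([0.1, 0.9, 0.4, 0.7])
print(k_winners(a, 1))  # only the single strongest pattern survives
print(k_winners(a, 3))  # three candidates survive; only 0.1 is winnowed
```

The “qualitative” tweak would be letting k stay large for more levels of the hierarchy before collapsing to one winner.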
I am surprised that you feel that what the brain is doing is not “efficient.”
My first thought is, “how can you decide how efficient it is if you don’t even know what it is doing?” If you do know what the biological system is doing, please share, because I have been reading about this for 20 years and I still have not read anything that seems to have a handle on how this all works.
Bits & pieces – yes; all of it – no.
Applying a general mechanism for mapping to a mental landscape makes a great deal of sense to me.
There is no reason to limit this mapping to physical dimensions; I can see that navigation through higher order manifolds can be extended through most of the human mental manipulations that I am aware of.
I would think that before you willy-nilly decide to chuck out the messy wetware as being inefficient, you would at least try to understand what it is doing. It is entirely possible that this solution is sublimely efficient.
Looking forward to seeing you make a robot that works as well as a common lizard!
I am very excited to see how you will be doing the visual recognition and spatial mapping.
Your new and cool algorithms are sure to best literally millions of years of evolution.
For bonus points, do it with less power consumption than nature uses.
Really? This is how I’ve been programming for a long time. I think this applies more to using things for tasks they weren’t meant for.
To me, it seems like the structures of the entorhinal cortex and the neocortex are the way they are because they can encode location information, in some hexagonal coordinate system, by taking cues from linear (or at least irregular) features of the environment. (Rats raised in a spherical environment do not form grid cells as well as ones raised in a normal or cube-shaped environment, and V1 cells fire based on orientations of lines.) Now, what you can do with coordinate information is compute a transform so that some feature is back in the position/orientation you usually recognize it in. Whether this transform is on position, size, speed, frequency, etc. doesn’t matter, as long as it is a coordinate system. I believe the entorhinal cortex constantly uses speed cells and orientation cells to update this transform.
That’s my take on it. It doesn’t seem like a bad hack. Rather, it seems like it’s still being used to do exactly what it was meant for: encode positional information in certain coordinate systems so that patterns can be recognized despite position, orientation, or other changes.
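A minimal sketch of the “compute a transform so the feature is back in its canonical position/orientation” step (my own toy example, assuming a simple 2-D rigid transform; the brain’s actual coordinate machinery is of course unknown):

```python
import numpy as np

def normalize_pose(points, position, angle):
    """Map observed feature points back into a canonical frame.

    `position` and `angle` play the role of the location/orientation
    signal: subtract the location, then undo the rotation, so the
    pattern can be recognized regardless of where or how it was seen.
    """
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return (points - position) @ rot.T

# A two-point feature observed at position (3, 4), rotated 90 degrees:
canonical = np.array([[0.0, 0.0], [1.0, 0.0]])
angle = np.pi / 2
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
observed = canonical @ rot90.T + np.array([3.0, 4.0])

recovered = normalize_pose(observed, np.array([3.0, 4.0]), angle)
print(np.allclose(recovered, canonical))  # True
```

The same normalization idea carries over if the “coordinates” are size, speed, or frequency rather than x/y position.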
Agreed as far as that goes. You will have to add that the structures are absolutely necessary for encoding ALL episodic memories. The removal of these structures utterly extinguishes the formation of new memories.
This strongly suggests that the things being encoded go far beyond the features you listed.
While you are sussing out the function of the hippocampus, let’s add some more known constraints.
We know that the hippocampus somehow encodes everything that you can say is entering into your episodic memory, or at least it is a necessary co-factor.
We also know that the activation patterns are able to trigger pattern recognition in the amygdala. Happiness, surprise, and fear expressions are known to be parseable by very young infants. We also know of a few patterns, like snakes and spiders, that seem to be built-in.
For some animals this encoding is more than just encoding – as you work your way down the brain-complexity path, you find the structure is preserved from levels of the brain where these structures do all the heavy lifting. These animals are capable of very sophisticated behavior. http://www.cell.com/current-biology/pdf/S0960-9822(15)00218-3.pdf
Thanks for all the links. I’ll be sure to read through them completely once I have more time. For now, I have to skim.
I’m not sure if you’re thinking about this quite the right way though:
Actually, that in itself doesn’t suggest what you say. It could easily be that formation of new memories occurs because all other regions pass their data into those regions of the brain. I could see a layer of HTM cells storing and recognizing data long term if they had a slow learning/unlearning rate and information was passed to them in a usable way. (Meaning almost any localized pattern at all.)
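A toy illustration of that point (entirely hypothetical, not NuPIC/HTM code): an associative layer with a small Hebbian learning rate changes its stored weights only slightly per exposure, so it learns slowly and retains patterns long-term.

```python
import numpy as np

class SlowLayer:
    """Toy associative layer with a slow Hebbian learning rate.

    With a small `rate`, each presentation nudges the weights only
    slightly, so stored patterns build up and decay slowly.
    """
    def __init__(self, size, rate=0.01):
        self.w = np.zeros((size, size))
        self.rate = rate

    def learn(self, pattern):
        # Hebbian outer-product update, scaled by the slow rate.
        self.w += self.rate * np.outer(pattern, pattern)

    def recall(self, cue):
        # Recalled pattern: threshold the weighted sum of the cue.
        return (self.w @ cue > 0).astype(float)

layer = SlowLayer(4, rate=0.01)
p = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(100):          # many slow exposures
    layer.learn(p)
print(layer.recall(p))        # the stored pattern is recovered
```

Nothing here argues the hippocampus works this way; it only shows that long-term storage falls out of a slow learning rate plus usable input, as suggested above.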
A lot of people seem to underestimate the power of simple algorithms. I’ve written code that was ten lines long that solved one of the harder problems at my job, or that generated entire landscapes in one of my personal projects. A lot of thought tends to go into those lines, but the algorithms themselves are usually simple. It seems like it’s the same thing with nature: a bunch of tiny solutions. Like how the RGCs in the retina enhance edges with something similar to top-hat/black-hat transforms, how the simple cells in the V1 fire based on orientation, etc. And both of those transforms would be useful to some animal, like knowing where the edge of a cliff was, or which way that edge was going.
Very sophisticated behavior can come out of very simple solutions.
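For what it’s worth, the top-hat idea from the retina example fits in a few lines (a 1-D toy version in pure NumPy; the real RGC computation is of course more involved):

```python
import numpy as np

def erode(signal, k):
    """Grey-scale erosion: minimum over a sliding window of width k."""
    s = np.pad(signal, k // 2, mode='edge')
    return np.array([s[i:i + k].min() for i in range(len(signal))])

def dilate(signal, k):
    """Grey-scale dilation: maximum over a sliding window of width k."""
    s = np.pad(signal, k // 2, mode='edge')
    return np.array([s[i:i + k].max() for i in range(len(signal))])

def white_top_hat(signal, k=3):
    """Signal minus its morphological opening: keeps narrow bright peaks."""
    return signal - dilate(erode(signal, k), k)

# A narrow bright spike next to a broad plateau:
sig = np.array([0, 0, 0, 5, 0, 0, 1, 1, 1, 1], dtype=float)
print(white_top_hat(sig))  # the narrow spike stands out; the plateau is suppressed
```

Ten-ish lines, and it already does useful feature enhancement – which is exactly the “simple algorithms” point.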
I studied a lot of biology before switching to computer science.
Nature has been re-using and re-purposing things since the beginning of life. This pattern can be seen even down to the cellular level. For example, both humans and plants have mitochondria, which function the same way. Another example is DNA structure.
This pattern is also more consistent from an evolutionary standpoint.
Agreed. Evolution has been perfecting the what/where processing for a while now.
It seems wasteful to duplicate a grid pattern across a field of neurons – unless a large-scale distributed representation is part of what is coded. A fundamental challenge in spatial problems is forming a good distributed representation across multi-dimensional manifolds. Thinking about how that might be done with this grid structure may offer insight into the problem(s) being solved in the early navigation structures.
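One way such duplication could still be economical (a toy sketch with made-up periods, loosely in the spirit of grid-cell modules): several small modules, each reporting only position modulo its own spacing, jointly form a distributed code that is unique over a range far larger than any single module covers.

```python
def grid_code(x, periods=(3, 4, 5), bins=10):
    """Encode a position as quantized phases within several grid modules.

    Each module reports x modulo its period. No single module pins
    down x, but the combination is unique (for integer x) over the
    least common multiple of the periods - here 60 - so a handful of
    small, repeated patterns covers a large space.
    """
    return tuple(int((x % p) / p * bins) for p in periods)

print(grid_code(7))                   # each entry is ambiguous on its own...
print(grid_code(7) == grid_code(67))  # ...and the joint code repeats only after 60
```

This is only meant to show why a field of repeated grid patterns need not be wasteful; whether cortex exploits the code this way is the open question.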