Numenta Research Meeting - Dec 4 2019

Tomorrow 10:15AM Pacific.

1 Like

Starting soon.

Time code ~35 minutes. Discussion of the location of features.
It sounds like they are trying to cast this in terms of a computer-graphics model where everything has a spatial location in some larger frame. I see this as the wrong way to think about it.

I think that an object is a cluster of features; we parse out the features quickly during perception. One common set of features comes from perceiving the object from different angles. We also eventually learn to see parts as features - with practice we can learn to compose and decompose those features.

With a novel object, we find it hard to make out the form and features and often have to view it from different angles or manipulate it to become familiar with it. This parsing of part and whole is what goes into building the feature set that makes up the object. The relative arrangement of the perceived parts is normally learned relative to the object’s frame of reference.

One of our human tricks of mental navigation is zooming through levels of representation. This was well described by William James in “The Principles of Psychology, Volume 1” [1], page 581, figures 41-43.
[Scans of figures 41-43 from James]

An object is a collection of maps whose contents form the geometry of a high-dimensional manifold. Mental navigation of this space is done by changing parts of the manifold (the contents of one or more maps), which changes the object representation. The transition from one high-dimensional location to another within this manifold of object representations is the evolution of the contents of consciousness. Examples might be: whole/part, whole/whole (different view), or whole/related object.

I see this mental movement as implemented by an “anchor” in part of the mental manifold, with each of the object’s features as branches of the manifold. Moving through the levels of representation is a substitution of some element of a map that is stabilized by the connected maps. Focusing on a feature changes the “anchor” of this high-dimensional manifold.

I think that the selection of this “anchor” is part of the function of the feedback direction of connected maps. The forebrain asserts the anchor over this feedback path, and the rest of the cortex forms the manifold that settles into the lowest-energy configuration around this fixed point: your focus of attention.

For this to make any sense at all, you have to realize that this is the large-scale state space of a global activation pattern covering multiple map locations in the brain. This joining of feature space and relative locations is likely to be found in the temporal region. The mental navigation of these high-dimensional manifolds is some combination of location and components. Considering the primacy of spatial components in mental maps, it is likely that some aspect of location is part of every mental manifold.
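Purely as a toy illustration (my own sketch, not anything from the meeting): the “clamp an anchor and let everything else settle” idea can be mimicked with a Hopfield-style attractor network, where a few units are held fixed by feedback and the remaining units relax into the lowest-energy global pattern consistent with them.

```python
# Minimal sketch, assuming a Hopfield-style attractor network stands in for the
# "manifold that settles around a fixed point" described above.
import numpy as np

rng = np.random.default_rng(0)

# Store a few "object" patterns, each a global activation pattern over N units.
N = 64
patterns = rng.choice([-1, 1], size=(3, N))       # three stored objects
W = (patterns.T @ patterns) / N                   # Hebbian weight matrix
np.fill_diagonal(W, 0)                            # no self-connections

def settle(state, clamp_idx, clamp_vals, steps=2000):
    """Asynchronously update unclamped units; each flip moves downhill in energy."""
    state = state.copy()
    state[clamp_idx] = clamp_vals                 # the "anchor" asserted by feedback
    for _ in range(steps):
        i = rng.integers(N)
        if i in clamp_idx:
            continue                              # clamped units never change
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Start from noise, clamp a small anchor taken from object 0, let the rest settle.
anchor = np.arange(8)
noisy = rng.choice([-1, 1], size=N)
result = settle(noisy, anchor, patterns[0, anchor])
print("overlap with object 0:", result @ patterns[0] / N)  # near 1.0 when it lands in that basin
```

Swapping which units are clamped (a different “anchor”) pulls the same network into a different basin, which is roughly the substitution-and-restabilization move described above.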

I have to stress that we manipulate the contents of these maps at the speed of our eyes scanning an object with our constant saccades, at the speed at which we parse sounds like speech. It’s so fluid that it is hard to even see it as a single state - we are flying through these mental spaces at the speed of thought.

Indeed - it is thought.

At time index 1:10 there is some rabbit chasing around Kohonen maps. I am pretty sure that this is what Cortical.io is using to organize the contents of its retina space. While I don’t think the way they build it is biologically plausible, there may be a way to build this map that is.
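For reference, here is a minimal sketch of a generic Kohonen self-organizing map (my assumption about the kind of map being discussed; I don’t know the details of Cortical.io’s actual pipeline). The idea is to arrange high-dimensional feature vectors on a 2D “retina”-like grid so that similar items land near each other.

```python
# Minimal Kohonen SOM sketch: one prototype vector per grid cell, trained so that
# nearby grid cells end up with similar prototypes.
import numpy as np

rng = np.random.default_rng(42)

grid_w, grid_h, dim = 16, 16, 32                  # 16x16 map, 32-D input features
weights = rng.random((grid_w, grid_h, dim))       # one prototype vector per grid cell
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"), axis=-1)

def train(data, epochs=20, lr0=0.5, sigma0=6.0):
    global weights
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1  # shrinking neighborhood radius
        for x in data:
            # Best-matching unit: the grid cell whose prototype is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbors toward x, weighted by grid distance.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))[..., None]
            weights += lr * h * (x - weights)

# Train on random 32-D "feature" vectors; real inputs would be word or image features.
train(rng.random((500, dim)))
```

The standard formulation trains the map offline over many passes, which is part of why it is hard to see as biologically plausible; an online, local-learning variant is the open question here.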

[1] William James, The Principles of Psychology, Volume 1 (1890)

2 Likes

Probably you already know this:

In contrast to 2D, there seems to be no global lattice in 3D. Could this contradict the current notion of grid cells?

I wonder if focusing on physical objects, because it is easier, may be obscuring some necessary underlying mechanisms. Part of the “priming” for the cortical circuitry to work well in the physical dimensions of space could simply be higher concentrations of specific Gabor filter configurations in the right regions.
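For what it’s worth, here is a minimal sketch of what a “Gabor filter configuration” could mean computationally - a bank of oriented kernels - with no claim that this is how the cortex actually implements it:

```python
# Toy Gabor filter bank: a sinusoidal carrier under a Gaussian envelope, at a set
# of orientations. The parameter choices here are illustrative assumptions only.
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0, gamma=0.5, phase=0.0):
    """2D Gabor kernel: a sinusoid at orientation theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)     # coordinates rotated by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

# A "configuration" in the sense above might just be a bank biased toward particular
# orientations and spatial frequencies in a given region.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(len(bank), bank[0].shape)   # 8 oriented kernels, each 15x15
```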

I think it is also important to remember that you do not need to go somewhere as distantly abstract as “democracy” to begin needing more dimensions than those of physical space. There are dimensions like temperature, color, brightness, pitch, and so on at even the very lowest levels of abstraction.

2 Likes

Since it came up in this discussion (6:05), here is some good research into the encoding of time:

Integrating time from experience in the lateral entorhinal cortex

Albert Tsao, Jørgen Sugar, Li Lu, Cheng Wang, James J. Knierim, May-Britt Moser & Edvard I. Moser
Integrating time from experience in the lateral entorhinal cortex | Nature

The encoding of time and its binding to events are crucial for episodic memory, but how these processes are carried out in hippocampal–entorhinal circuits is unclear. Here we show in freely foraging rats that temporal information is robustly encoded across time scales from seconds to hours within the overall population state of the lateral entorhinal cortex. Similarly pronounced encoding of time was not present in the medial entorhinal cortex or in hippocampal areas CA3–CA1. When animals’ experiences were constrained by behavioural tasks to become similar across repeated trials, the encoding of temporal flow across trials was reduced, whereas the encoding of time relative to the start of trials was improved. The findings suggest that populations of lateral entorhinal cortex neurons represent time inherently through the encoding of experience. This representation of episodic time may be integrated with spatial inputs from the medial entorhinal cortex in the hippocampus, allowing the hippocampus to store a unified representation of what, where and when.

2 Likes