An Apical Depolarization for Numenta: How to Generate the Allocentric Location Signal

Hi Jake,

Thank you for your interest in my theory! Keeping in mind that this is a rough approximation of what I will post later on, if I had to summarize the theory in one long sentence, it would be the following:

This theory hypothesizes a way in which objects, their locations in space, their orientations in space, and the body’s state are all modeled and inferred, which yields a method for modeling objects abstractly, for predicting and producing motor movements based on the locations and orientations of objects in space, and finally for a mechanism of goal-oriented behavior, and the generation of those goals, based on the modeling of objects and their orientations, the surroundings, and the body’s current state.

Moving on to your comments about RatSLAM: I had not heard of SLAM prior to this, but from what I understand about the grid cell techniques it uses, I hypothesize that it is logically identical to the pooling layer located in an egocentric layer 2/3a, and possibly 3b-beta as well. This is because these layers within an egocentric region are hypothesized to achieve exactly the same functionality as grid cells. Given what I know about my theory, I have concluded that the process of modeling egocentric locations in space into a cohesive map of the environment (or the surroundings, if you will) is actually nowhere near a solution to the problem of generating the allocentric location signal.

I realize that this may be a controversial view to take on this forum, but I have come to that conclusion mainly because I hypothesize that modeling the locations of objects in space is of no direct use for inferring features on an object at that modeled location.

In other words, egocentric location modeling does not have any direct effect on inferring allocentric features on objects; only the inference of allocentric locations does. It is hypothesized that this inference, and the subsequent production, of allocentric locations is done in layer 6a of an allocentric region (not an egocentric region), and does not involve grid cell functionality at all (which is equivalent to egocentric CC module pooling layer functionality). Instead, it involves inference layer functionality, specifically allocentric CT module inference layer functionality.

(It should be noted that in most egocentric regions of the cortex, the phenomenon of grid cells is not found; it is really only found in the medial entorhinal cortex. I hypothesize this is because this type of egocentric location modeling (see {1} below) was a necessity for the functionality of the hippocampus, but not for all other cortical tissue, which uses a different type (see {2} below) of egocentric location modeling, which can pretty much be thought of as the typical or normal type.)

To explain the difference between the two hypothesized types of egocentric location modeling in the brain (which are functionally equivalent), I like to draw an analogy with the difference between a basic scalar encoder and a random distributed scalar encoder (RDSE).

If you are familiar with these two ways of representing scalar values, you know they are functionally equivalent (for most usage). However, one of them (the basic encoder) takes a very simple but intuitive approach: the value is represented as a contiguous run of on bits in an array. This gives a very obvious way to produce values semantically similar to a given scalar value: just shift the run of on bits to a nearby position, and you have a semantically similar value, because a decent number of bits will still overlap between the original encoding and the new one.

There is another, less obvious way to construct a scalar encoder: simply preserve the rule that semantically similar scalar values should have encodings with a high overlap score, and then represent any given scalar value however you find adequate, as long as it adheres to that rule.
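For concreteness, here is a minimal Python sketch of the two ideas. This is not NuPIC's actual ScalarEncoder or RandomDistributedScalarEncoder implementation; the parameter values (n, w, resolution) and the seeded-hash trick used for the RDSE-style encoder are just illustrative choices that happen to satisfy the overlap rule.

```python
import numpy as np

def basic_scalar_encode(value, min_val=0.0, max_val=100.0, n=100, w=11):
    """Basic scalar encoder: a contiguous run of w on-bits whose position
    slides with the value, so nearby values overlap by construction."""
    value = min(max(value, min_val), max_val)
    start = int(round((value - min_val) / (max_val - min_val) * (n - w)))
    sdr = np.zeros(n, dtype=np.int8)
    sdr[start:start + w] = 1
    return sdr

def rdse_encode(value, resolution=1.0, n=400, w=11):
    """RDSE-style encoder: the value is mapped to a bucket, and each on-bit
    position is drawn from an RNG seeded on (bucket + i).  Adjacent buckets
    therefore reuse w-1 of the same seeds, so they share roughly w-1 on-bits
    even though the bit positions look random."""
    bucket = int(np.floor(value / resolution))
    sdr = np.zeros(n, dtype=np.int8)
    for i in range(w):
        rng = np.random.default_rng((bucket + i) % (2 ** 32))
        sdr[rng.integers(0, n)] = 1   # rare position collisions are ignored here
    return sdr

def overlap(a, b):
    return int(np.sum(a & b))

# Nearby values overlap heavily under both schemes; distant values barely at all.
print(overlap(basic_scalar_encode(10.0), basic_scalar_encode(12.0)),
      overlap(basic_scalar_encode(10.0), basic_scalar_encode(60.0)))
print(overlap(rdse_encode(10.0), rdse_encode(11.0)),
      overlap(rdse_encode(10.0), rdse_encode(60.0)))
```

The point of the comparison is that both encoders give nearby values a high overlap score; only the basic one makes that similarity visible in the geometry of the bit positions.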

Given this analogy for the two different ways a scalar may be encoded, we can apply the same idea to the modeling of the locations of objects.

{1}: This is the “basic scalar encoder” model, where nearby locations in space are represented by physically nearby cells in the representation. This is what grid cells do.
{2}: This is the RDSE technique: represent a location in whichever way is most convenient, as long as the representation satisfies the property that similar locations have high overlap scores (sketched below). This is achieved by the general pooling layer functionality in the brain; in the case of modeling locations, it is done by the pooling layer located in the CC module of an egocentric region. It should be noted that this particular pooling layer is identical in functionality to all other isocortex pooling layers in the brain, whether in an A or an E region.
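To connect this back to {1} and {2}, here is one sketch of what an RDSE-style location code could look like. Encoding each coordinate separately with the same seeded-hash trick as above and concatenating the results is just one simple way to satisfy the overlap property; it is an illustration of the idea, not a claim about how the cortical pooling layer actually does it.

```python
import numpy as np

def _scalar_sdr(value, salt, resolution=1.0, n=400, w=11):
    """Same RDSE-style trick as in the previous sketch: scattered on-bits,
    with adjacent buckets sharing w-1 of the RNG seeds."""
    bucket = int(np.floor(value / resolution))
    sdr = np.zeros(n, dtype=np.int8)
    for i in range(w):
        rng = np.random.default_rng((salt * 1_000_003 + bucket + i) % (2 ** 32))
        sdr[rng.integers(0, n)] = 1
    return sdr

def location_sdr(x, y):
    """Encode a 2D location by concatenating per-dimension encodings.
    Nearby locations share many bits, but the positions of the on-bits
    carry no spatial layout at all, unlike a grid-cell-style code."""
    return np.concatenate([_scalar_sdr(x, salt=1), _scalar_sdr(y, salt=2)])

def overlap(a, b):
    return int(np.sum(a & b))

p = location_sdr(3.0, 7.0)
q = location_sdr(3.0, 8.0)     # one step away in y: high overlap
r = location_sdr(40.0, -12.0)  # far away: near-zero overlap
print(overlap(p, q), overlap(p, r))
```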

I hope this clears up some confusion about the theory and how it relates to grid cells!
