Potential Functional Role for Minicolumns in Neocortex

Live stream at 10:15AM PDT tomorrow.

3 Likes

Starting in about one hour.

3 Likes

The David Tank picture @jhawkins is showing here comes from this paper.

And here is the Charles Gilbert paper he quotes at 29m27s.

4 Likes

I haven’t found the “Cassanova, Tillquist 2015” referenced on this slide, but Casanova & Tillquist 2008, “Encephalization, Emergent Properties, and Psychiatry: A Minicolumnar Perspective”, contains the quoted sentences.

3 Likes

The idea of having a functional unit of as few as 120 neurons is particularly intriguing. Even with an average of 7,000 synapses per neuron (a rough estimate; please correct me if I’m wrong), simulating this on an affordable computer seems conceivable.
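A quick back-of-envelope check of that estimate (the 4 bytes per synapse, for a single float32 permanence value, is purely my assumption, as is the rest):

```python
# Back-of-envelope memory estimate for one minicolumn
# (assumptions: 120 neurons, ~7,000 synapses per neuron, 4 bytes per synapse).
neurons_per_minicolumn = 120
synapses_per_neuron = 7_000
bytes_per_synapse = 4  # assumed: one float32 permanence per synapse

synapses = neurons_per_minicolumn * synapses_per_neuron   # 840,000 synapses
memory_mb = synapses * bytes_per_synapse / 1e6            # ~3.4 MB

print(f"{synapses:,} synapses, ~{memory_mb:.1f} MB")
```

So even with generous per-synapse bookkeeping, a single minicolumn at that scale fits comfortably in a few megabytes.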

Could we break this down a bit more? How many of those are pyramidal cells? How many inhibitory and which kinds? And how are these topologically arranged?

1 Like

@Falco

How many of those are pyramidal cells? How many inhibitory and which kinds? And how are these topologically arranged?

I used Figures 8 and 9 of Izhikevich & Edelman 2008, “Large-scale model of mammalian thalamocortical systems” (SI Appendix), to configure the HTM-scheme untangling_sequences experiment (an extension of Numenta’s combined_sequences project). Their Figure 9 table of neuron types and synaptic connectivity, assembled from multiple earlier studies, appears comprehensive, but it seems to me that a few of the counts shown rest on arbitrary assumptions. I would be interested in any similar numerical analysis of cortical microcircuitry.

(untangling_sequences models L2/3 with the column pooler algorithm and L4 with apical tiebreak temporal memories; based on the above, it is typically configured with 30 p2/3 pyramidal cells and 30 L4 excitatory cells (10 each of ss4(L4), ss4(L2/3), and p4) per minicolumn. So a run with 25 “cortical columns” of 100 minicolumns each simulates 150k neurons, though typical training creates only about 80 synapses per cell.)
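For reference, a minimal sketch of the arithmetic behind those figures (all counts are the ones quoted above; nothing here comes from the actual HTM-scheme code):

```python
# Rough tally of the untangling_sequences configuration described above
# (layer names follow Izhikevich & Edelman 2008, Figure 9).
cortical_columns = 25
minicolumns_per_column = 100
p23_per_minicolumn = 30            # L2/3 pyramidal cells (column pooler)
l4_per_minicolumn = 10 + 10 + 10   # ss4(L4) + ss4(L2/3) + p4 excitatory cells

neurons = cortical_columns * minicolumns_per_column * (
    p23_per_minicolumn + l4_per_minicolumn)
print(f"{neurons:,} neurons")            # 150,000 neurons

# With ~80 synapses created per cell during typical training:
print(f"~{neurons * 80:,} synapses")     # ~12,000,000 synapses
```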

2 Likes

Following up on this presentation, I ran across this intriguing hypothesis regarding generalization of grid cells to abstract reasoning spaces:

Extracting and Utilizing Abstract, Structured Representations for Analogy

Abstract
Human analogical ability involves the re-use of abstract, structured representations within and across domains. Here, we present a generative neural network that completes analogies in a 1D metric space, without explicit training on analogy. Our model integrates two key ideas. First, it operates over representations inspired by properties of the mammalian Entorhinal Cortex (EC), believed to extract low-dimensional representations of the environment from the transition probabilities between states. Second, we show that a neural network equipped with a simple predictive objective and highly general inductive bias can learn to utilize these EC-like codes to compute explicit, abstract relations between pairs of objects. The proposed inductive bias favors a latent code that consists of anti-correlated representations. The relational representations learned by the model can then be used to complete analogies involving the signed distance between novel input pairs (1:3 :: 5:? (7)), and extrapolate outside of the network’s training domain. As a proof of principle, we extend the same architecture to more richly structured tree representations. We suggest that this combination of predictive, error-driven learning and simple inductive biases offers promise for deriving and utilizing the representations necessary for high-level cognitive functions, such as analogy.

I found this paper by way of its citation of another paper:

Grid cells, place cells, and geodesic generalization for spatial reinforcement learning.

Abstract
…although it is often assumed that neurons track location in Euclidean coordinates (that a place cell’s activity declines “as the crow flies” away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes.
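To make the Euclidean-versus-geodesic distinction concrete, here is a minimal sketch; the 5×5 grid, wall layout, and start/goal points are invented purely for illustration:

```python
from collections import deque
import math

# Hypothetical 5x5 environment: '.' is open, '#' is an impassable wall.
grid = [
    ".....",
    ".###.",
    ".....",
    ".....",
    ".....",
]

def geodesic(start, goal):
    """Shortest path length along open cells (4-neighbour BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 5 and 0 <= nc < 5 and grid[nr][nc] == "." \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return math.inf

start, goal = (0, 2), (2, 2)        # two points on opposite sides of the wall
print(math.dist(start, goal))       # 2.0 "as the crow flies"
print(geodesic(start, goal))        # 6 steps along the path around the barrier
```

A place field generalizing by the first number would bleed value across the wall; generalizing by the second respects it, which is the paper’s point.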

So grid cells learn a kind of salience-warp to better represent the decision-space. Taken to n dimensions, with one dimension per minicolumn, might this mean that minicolumn displacement cells are learning a local salience-warp to better represent the hyperdimensional simplex decision-space?

2 Likes