Live stream at 10:15AM PDT tomorrow.

Starting in about one hour.

Haven't found "Cassanova, Tillquist 2015" referenced on this slide, but Casanova & Tillquist 2008 "Encephalization, Emergent Properties, and Psychiatry: A Minicolumnar Perspective" contains the quoted sentences.

The idea of a functional unit of as few as 120 neurons is particularly intriguing. Even with an average of 7,000 synapses per neuron (a rough estimate; please correct me if I'm wrong), simulating this on an affordable computer seems conceivable.
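To make the feasibility claim concrete, here is the back-of-envelope arithmetic. The bytes-per-synapse figure is my own assumption (a 4-byte presynaptic index plus a 4-byte weight), not something from the slides:

```python
# Rough cost of simulating one ~120-neuron minicolumn,
# using the ~7,000 synapses/neuron estimate from the post.
NEURONS = 120
SYNAPSES_PER_NEURON = 7_000   # rough estimate; correct me if wrong
BYTES_PER_SYNAPSE = 8         # assumed: 4-byte index + 4-byte weight

total_synapses = NEURONS * SYNAPSES_PER_NEURON
memory_mb = total_synapses * BYTES_PER_SYNAPSE / 1e6

print(total_synapses)  # 840000
print(memory_mb)       # 6.72 -- trivially within laptop memory
```

Even at 840k synapses, the storage is a few megabytes, so the bottleneck would be update rate, not memory.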

Could we break this down a bit more? How many of those are pyramidal cells? How many inhibitory and which kinds? And how are these topologically arranged?

I used figures 8 and 9 of "Izhikevich & Edelman 2008 Large-scale model of mammalian thalamocortical systems (SI Appendix)" to configure the HTM-scheme untangling_sequences experiment (an extension of Numenta's combined_sequences project). Their figure 9 table of neuron types and synaptic connectivity, assembled from multiple earlier studies, appears comprehensive, but it seems to me that there could be a few arbitrary assumptions in the counts shown. I would be interested in any similar numerical analysis of cortical microcircuitry.

(untangling_sequences models L2/3 with the column pooler algorithm and L4 with apical tiebreak temporal memories; based on the above, it is typically configured to model 30 p2/3 pyramids and 30 L4 excitatory cells (10 each of ss4(L4), ss4(L2/3), p4) per minicolumn. So a run with 25 "cortical columns" of 100 minicolumns each is simulating 150k neurons (but typical training creates only about 80 synapses per cell).)
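For anyone checking the configuration, the 150k figure falls out of the per-minicolumn counts above (this just reproduces the arithmetic, not the actual HTM-scheme config format):

```python
# Cell count for the untangling_sequences run described above.
cortical_columns = 25
minicolumns_per_cc = 100
l23_pyramids = 30                # p2/3, column pooler layer
l4_excitatory = 10 + 10 + 10     # ss4(L4), ss4(L2/3), p4

cells_per_minicolumn = l23_pyramids + l4_excitatory   # 60
total_cells = cortical_columns * minicolumns_per_cc * cells_per_minicolumn
print(total_cells)   # 150000

# At ~80 synapses/cell after training, the whole run is only ~12M synapses:
print(total_cells * 80)   # 12000000
```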

Following up on this presentation, I ran across this intriguing hypothesis regarding generalization of grid cells to abstract reasoning spaces:

Extracting and Utilizing Abstract, Structured Representations for Analogy

Abstract

Human analogical ability involves the re-use of abstract, structured representations within and across domains. Here, we present a generative neural network that completes analogies in a 1D metric space, without explicit training on analogy. Our model integrates two key ideas. First, it operates over representations inspired by properties of the mammalian Entorhinal Cortex (EC), believed to extract low-dimensional representations of the environment from the transition probabilities between states. Second, we show that a neural network equipped with a simple predictive objective and highly general inductive bias can learn to utilize these EC-like codes to compute explicit, abstract relations between pairs of objects. The proposed inductive bias favors a latent code that consists of anti-correlated representations. The relational representations learned by the model can then be used to complete analogies involving the signed distance between novel input pairs (1:3 :: 5:? (7)), and extrapolate outside of the network's training domain. As a proof of principle, we extend the same architecture to more richly structured tree representations. We suggest that this combination of predictive, error-driven learning and simple inductive biases offers promise for deriving and utilizing the representations necessary for high-level cognitive functions, such as analogy.
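To unpack the "1:3 :: 5:? (7)" example: the task the network has to solve reduces to extracting a signed distance from one pair and applying it to another. This sketch shows the task's arithmetic core only, not the paper's EC-like network:

```python
# Analogy completion over a 1D metric space, a:b :: c:?
# The abstract relation is the signed distance b - a.
def complete_analogy(a: int, b: int, c: int) -> int:
    relation = b - a          # e.g. 3 - 1 = +2
    return c + relation       # apply the same relation to c

print(complete_analogy(1, 3, 5))    # 7, the abstract's example
print(complete_analogy(8, 6, 10))   # 8 -- signed, so it works backwards too
```

The paper's contribution is that a network with a generic predictive objective learns this relation implicitly from EC-like codes; the lookup-free arithmetic above is just the ground truth it must match.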

I found this paper by way of its citation of another paper:

Grid cells, place cells, and geodesic generalization for spatial reinforcement learning.

Abstract

…although it is often assumed that neurons track location in Euclidean coordinates (that a place cell's activity declines "as the crow flies" away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes.
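The Euclidean-vs-geodesic distinction is easy to see in a toy gridworld: two locations on opposite sides of a wall are close "as the crow flies" but far along any walkable path, so a Euclidean place field would generalize value straight through the barrier. The arena and wall below are my own made-up example:

```python
# Euclidean vs geodesic distance in a gridworld with a barrier.
from collections import deque
import math

def geodesic(grid, start, goal):
    """Shortest 4-connected path length around blocked cells (BFS)."""
    rows, cols = len(grid), len(grid[0])
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return math.inf  # unreachable

# 5x5 arena; a vertical wall (1s) with a gap only at the bottom row.
grid = [[0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
a, b = (0, 1), (0, 3)               # adjacent across the wall
print(math.dist(a, b))              # Euclidean: 2.0
print(geodesic(grid, a, b))         # geodesic: 10 (down, through the gap, back up)
```

A value function generalized by Euclidean distance would treat `a` and `b` as near-equivalent; the geodesic metric correctly makes `b` five times farther, which is the interference the paper's simulations demonstrate.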

So grid cells learn a kind of salience-warp to better represent the decision-space. Taken to n-dimensions with a dimension per minicolumn, might this mean that minicolumn displacement cells are learning a local salience-warp to better represent the hyperdimensional simplex decision-space?