Project : Full-layer V1 using HTM insights

#30

Here are some notes from what I learned just by reading about the problem:

  • The main difference is that NEST works with spiking neural networks (SNNs), which makes HTM’s simplification of everything down to binary computations much more computationally complex.
  • Nevertheless, HTM theory comes from neuroscience and the algorithms are designed in a way that should work for SNNs, but the adaptation is non-trivial.
  • However, in the past SNNs have lacked scalability, and essential features like sufficient plasticity customization were missing. (This is a general problem in the scientific community and a reason why people often switched away from SNNs in e.g. robotics, if they aren’t experimentalists.)
  • On the other hand, an implementation in NEST can also be compatible with neuromorphic hardware projects like SpiNNaker or BrainScaleS. This makes it really interesting: even though they add complexity, they are ultimately designed to run in parallel, which is hard to achieve for HTM on traditional computer/network architectures.

Kind regards

#31

About SNN adaptation : what I got, very succinctly, from the NEST framework presentation is that they would not typically operate on weight-based models, but rely on much more topology-oriented connectivity lists. Topology is not what the canonical HTM library does in its default “global” mode, but it is still, in my view, one of the primary strong points of the HTM state of mind : caring first about the topology of the dendritic tree and not bothering too much about synaptic weight.

As for the conversion of a spike “frequency” to a one-bit signal… After seeing one of their visuals, which looks a great deal like an SDR, could this be for HTM simply a matter of tuning the simulation clock so that each spike at a precise time t is captured as an “on” bit ?

Another idea I had a few days ago for integrating per-cell scalar information (such as spike frequency) as input to an HTM model was that it could avoid impacting the implementation of the excitatory pathways (i.e., not driving higher excitation to postsynaptic pyramidal cells), and instead control the level, or extent, of surrounding inhibition.

[Edit] Oh sorry, @kaikun, I think I finally understand what is at stake here. Is it that SNN neurons progressively increase their depolarization level until they fire at a threshold crossing ? Yes, this seems harder to reconcile with HTM indeed.

#32

That’s how I do it. A bit staying on is the same thing as a neuron spiking as fast as it can. If it’s one spike per hundred time cycles, there is a 1% duty cycle. This is useful for giving things priority. Whatever signals most often (such as a hunger bit or other need) gets acted upon the most.
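
Just to illustrate that duty-cycle reading with a toy sketch (my own, in Python; the function names are made up and no particular HTM library is assumed):

```python
# Hypothetical sketch: turn a spike train into HTM-style on/off bits,
# one bit per simulation time step. A neuron firing on every step is a
# bit that stays on (100% duty cycle); one spike per 100 steps is ~1%.

def spikes_to_bits(spike_times, n_steps, dt=1.0):
    """spike_times are given in the same units as dt."""
    bits = [0] * n_steps
    for t in spike_times:
        step = int(t / dt)
        if 0 <= step < n_steps:
            bits[step] = 1
    return bits

def duty_cycle(bits):
    return sum(bits) / len(bits)

bits = spikes_to_bits([5.0, 105.0, 205.0], n_steps=300)
print(duty_cycle(bits))  # 0.01, i.e. one spike per hundred cycles
```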

#33

I found a PDF version, and scanned through it a little:

That led me to this one that I now maybe half understand:

We really need a visual of these waves, or something.

#34

I’ve started to read the decades-old classic Hubel & Wiesel book, thanks to another link provided by @bitking. I’m almost done with the MIT course, although I’ll probably watch some of the lectures again. In parallel, I’ve scanned through more recent publications.

From all this, a general scheme for the cortical layout of the computer model I intend to write is starting to emerge. Some of it tries to match the HTM model of the excitatory neuron, and some will try additional specifications, mostly related to the handling of topology.

  • I propose that the axonal arbors of excitatory cells are everywhere a fixed and pre-wired setup, which has to be precisely accounted for in the model. They will be allowed to target one or a set of given laminae, either in the same cortical sheet as their soma - in which case their source cell position will constrain the extent to which they can transmit information - or in a distant cortical sheet (e.g. axons from LGN to V1, or from V1 to V2) - in which case a topological transform function, however complex, shall be specifiable (V1 to V2 could map a fovea-to-periphery reversion, LGN to V1 should map alternating zebra patterns of ocular dominance, things like that).

  • One very important aspect of this axonal mapping is the amount of overlap, at any particular point of the receiving sheet, from axons originating at other locations on the same source topology. This will have a dramatic impact on both the computational capability of the receiving layer and the model’s synaptic address footprint for each dendritic segment; see below.

  • In contrast to the fixed axonal arbors of distinct cell populations, each excitatory cell’s dendritic tree seems highly plastic and will be so in the model : it will be allowed any kind of growth or shrinkage, from a (specifiable) default-at-birth which I imagine randomly pre-initialized on the order of 500µm in diameter. In my view, this plasticity is such that I’ve grown the belief that what we distinguish as cell types and layers, based on the overall shape of the dendritic arbours, is in fact mostly input driven (1).

  • For both biological accuracy and to keep the layout size of the model manageable, I propose to push the subdivision of the HTM model one step further : what HTM currently defines as segments, I will decompose into subsegments, each with a definite position (2) and tapping into laminar-specific inputs (the proximal part operates on a similar layout to distal segments, albeit from a single position fixed by the position of the soma itself). The position of the subsegment will determine which precise input cells it is allowed to retrieve information from, that is, which cells of the source sheet have an axonal arbour overlapping the subsegment’s position.

  • This organization has two effects : First, it more precisely captures the fact, cited in Jeff’s paper, that as few as 8 coincident inputs can trigger an NMDA spike, provided they are within ~40µm of each other, which is comparable to the extent of my proposed subsegments. Second, by using the (precomputable) reverse of the axon-source-sheet to axon-arbour-center-in-target-sheet mapping, the task of sampling from source cells whose axonal arbors ‘overlap’ at the segment’s position reduces to simply sampling an area of definite dimension (corresponding to the size of the axonal arbors) around the corresponding center in the source sheet.

  • This also lets us fix a probable upper limit to the address size of each synapse from a given source. With my current synopsis, I believe a total footprint of 16 bits per synapse (up to a 12b or 13b address + a 4b or 3b stochastic permanence value) is manageable; see the sketch after this list. The remaining offset information needed to retrieve the actual source candidate is spread out over much coarser subsegment divisions, even coarser per-cell information, or static per-cell-population templates as well as axonal mapping templates.

  • As hinted by the paragraph on dendritic plasticity, a given cell may however decide to tap from several distinct sources - possibly on distinct laminae - and each lamina (viewed as a set of axonal arbor targets) may also hold several axonal mappings (from different sources) together, as long as the sum of distinct synaptic possibilities for a subsegment does not shoot over the 4096 or 8192 candidates allowed by the 12b or 13b address schemes for synapses on each subsegment. According to my first-shot computations, a cell whose dendrites tap from a single lamina containing the axonal arbors of a single source could still have subsegments able to sense inputs from a lower area spanning a circle of about 2mm (12b) or 3mm (13b) in diameter over the cortical sheet, even if bijectively mapped to it, which sounds promising for the “integration over a wider area” functionality of, say, a V2 relative to a V1.

  • I’m still working on how to convert (or carry on and work with) the fundamentally per-cell-scalar output of LGN to the proposed binary scheme of HTM. I believe I’ll try a few different options at this point.

  • In the case of L4 “simple cells” in V1, NMDA spikes will, in my view, probably be a requirement for overshooting the increased threshold from vast correlated inhibition, rather than a mechanism for prediction as in TM. Such strong input-intensity-correlated inhibition is indeed supposed to play a major role in ensuring that cells respond preferentially to orientation, no matter how faint, and not to intensity itself when less well oriented. Some “complex cells” achieving motion direction selectivity, however, could rely on a more TM-like scheme.
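
A quick numerical sanity check of that 16-bit synapse footprint, and of the ~2mm / ~3mm figures above (my own back-of-the-envelope sketch, assuming hexagonally packed microcolumns at ~30µm spacing; every name here is hypothetical):

```python
import math

MICROCOLUMN_SPACING_UM = 30.0
# Point density of a hex lattice with that spacing, in points per µm².
HEX_DENSITY = 2 / (math.sqrt(3) * MICROCOLUMN_SPACING_UM ** 2)

def addressable_diameter_mm(address_bits):
    """Diameter of the source-sheet disc whose microcolumns fit in the address space."""
    candidates = 2 ** address_bits
    area_um2 = candidates / HEX_DENSITY
    return 2 * math.sqrt(area_um2 / math.pi) / 1000.0

for bits in (12, 13):
    print(bits, "bit address ->", round(addressable_diameter_mm(bits), 2), "mm across")
# -> roughly 2.0 mm and 2.9 mm, in line with the ~2mm / ~3mm figures above

# A synapse record then fits in one 16-bit word (12-bit address + 4-bit permanence):
def pack_synapse(address, permanence):
    return (address << 4) | permanence

def unpack_synapse(word):
    return word >> 4, word & 0xF
```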

Now, on a more general note, I’ll start with V1 cells around the identified layer 4 and see how they manage to organize orientation selectivity by themselves (that is, by input-driven progressive reorganization, aka learning), hopefully in a disposition resembling the biological one. If I do manage this, I’ll start to add layers one by one, and possibly try to quickly get to L5 with an almost direct mapping to the superior colliculus, thus starting to mess with oculomotor stuff.

The simulation of V1 will be that of any primate of the Hominoidea branch (3), either extant, extinct or even imaginary. This is to ensure a V1 layout which will look closely like the human one (4), without necessarily being constrained to be all-alike on the matter of, e.g., cortical size. Thus here, V1 could possibly be set as ~3cm across, which for a full-scale simulation would bring the number of V1 microcolumns (30µm spaced) to a manageable number of around 1 million on one hemisphere (5).

(1) See the V1 oversizing and the multi-lamination of L4 in the most commonly accepted lamination scheme, and, e.g., the competing Hässler views of what should be called L3 or L4. Also, there is evidence that spiny stellate cells in V1 L4 themselves start out as regular pyramidal cells and progressively take their distinctive form, most probably from the drive of their specific sensory input. I take all those as clues that these laminar concerns are probably developmental and simply dependent on their ambient signal contexts. What seems fixed to specific positions and laminations, though, are the incoming axonal arbors, which I’ll try to reflect (and take advantage of) in the model.
(2) The position of a subsegment is readily identifiable as a microcolumn index, and/or an offset from the microcolumn-indexed position of the soma.
(3) tail-less monkeys and apes, of which we are a species
(4) Although the retina and V1 of the quite close-to-us macaque are already very similar, they seem to have an additional lamination in their V1 “inbox”. This is believed to be specific to the sister Cercopithecoidea branch to which they belong.
(5) “Manageable” here is to be taken with some reserve : it is likely that for a current PC this is still vastly out of reach - and thus I’ll start with a parameter to specify a much smaller truly simulated area around the fovea - but not so much out of reach as to be unimaginable for the near future, or with many computation clusters.

#35

I have been conversing with @gmirey for some time on this general topic. We endeavored to get a sense of the scale of the relative dimensional quantities of the temporal layer, inputs and cell/column spacing, number and range of the connections.

You know - the basic facts of making a realistic model. Some of what was discussed was what is known about reach and density of the dendritic arbor in the “input” layer.

We established that the key factors would be: the spacing of the ascending corticocortical axonal bundles, the spacing of the columns/cell bodies, the dendritic density, and the synapse density.

I did a little digging and came up with this:
We are trying to describe a complex 3-dimensional structure composed of messy biological bits.
You indicated that you will be back for layer 2/3 at some future time so let’s focus on lower layers.


Looking at the input corticocortical connections - how far apart are they? As indicated earlier - we will skip the massive thalamocortical arbors, as I don’t think that you will have to model them as an information path.
These massive thalamocortical arbors are a shotgun to ensure that layer IV is induced into resonance with the thalamus.

We have to account for cell body spacing, ascending axon bundle spacing, dendrite reach & dendrite arbor spatial density. Fortunately, I am finding papers that offer some figures on all of these items.

Note the massive inter-layer local projections from deep pyramidal axons. Keep in mind that they primarily project on inhibitory inter-neurons, suppressing the losers in local recognition contests.

Numerous papers seem to agree that the spread (radius) of the layer 2/3 and lower-layer dendrites is about 300 micrometers (0.3 mm), which gives a diameter of about 0.6 mm at the extreme tips of the dendrites. These are the long tails (in the truest sense of the word) and the average length is somewhat shorter. Also - the dendrites don’t shoot out in a straight path, so the 0.5 mm shown in this diagram is a better maximum figure.
See this for more details:


The branching dendrites tend to fill space at a roughly constant density.

so how many synapses are there for a unit of space?


It varies but let’s say 1 per cubic micrometer.

While we are at it - what is the density of the microcolumns and cell bodies?

Tissue Property or Measurement | Value
Y, average interneuron distance | 20.0 μm (1) (estimated)
P, average intercolumn distance | 26.1 μm (1)
ρ, slide neuronal density | 0.0013 neurons/μm² (1)
l | 341 μm (1)
s, thickness of the thin slice | 30 μm (1)
radius of neurons (average) | 5 μm
% interneurons | 20% (2)

Model Parameter | Value
dn, interneuron distance | 23.1 μm
dc, intercolumn distance | 29 μm
θ | Uniform random, [0, 2π]
φ | Uniform random, [0, π/3]
% omitted neurons | 40%
δdn | Gaussian distribution, σ = 4.7 μm
δxn, δzn | Uniform random, [−6 μm, 6 μm]
δxc, δzc | Uniform random, [−6 μm, 6 μm]
N, number of images for average | 500

How handy is that?
The microcolumns are on ~ 26-30 micrometer spacing.

Now we have to match that up with the spacing of the ascending axonal bundles that poke through the dendrite arbors. You are looking at area 17, and the Hubel & Wiesel paper has a lot to say about what goes on there.
so does this paper:
Organization of pyramidal neurons in area 17 of monkey visual cortex - Alan Peters, Claire Sethares
Sadly - behind a paywall.
The abstract is very helpful:
Abstract

In sections of area 17 of monkey visual cortex treated with an antibody to MAP2 the disposition of the cell bodies and dendrites of the neurons is readily visible. In such preparations, it is evident that the apical dendrites of the pyramidal cells of layer VI form fascicles that pass into layer IV, where most of them gradually taper and form their terminal tufts. In contrast, the apical dendrites of the smaller layer V pyramidal cells come together in a more regular fashion. They form clusters that pass through layer IV and into layer II/III where the apical dendrites of many of the pyramidal cells in that layer add to the clusters. In horizontal sections taken through the middle of layer IV, these clusters of apical dendrites are found to have an average center‐to‐center spacing of about 30 μm, and it is proposed that each cluster of apical dendrites represents the axis of a module of pyramidal cells that has a diameter of about 30 μm and contains about 142 neurons.

Is this typical for all areas of the cortex? Unfortunately, this useful paper is also behind a paywall.
The Organization of Pyramidal Cells in Area 18 of the Rhesus Monkey - Alan Peters, J. Manuel Cifuentes and Claire Sethares
Maybe you can hunt this down on your own. It has this handy histogram:
[Nearest axonal neighbor histogram]

So - 20 to 30 micrometer spacing on the axonal bundles too. I suppose that makes sense that it matches up with the microcolumns. That 30-micrometer spacing seems to turn up a lot in these papers.

Like you I have to see pictures to fix this in my mind. 30 um out of maybe 500 um is a little less than 10 percent of the dendrite arbor field. I think it looks something like this:


Yes - before you jump on me - the dendrite arbors should be denser, but then the picture would be too complicated to make out.

@gmirey responded with this excellent paper:
https://www.ncbi.nlm.nih.gov/pubmed/15260960
and this one:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4370903/

Working with what we have so far:

Since we are visually oriented - a Graphic solution:
30 µm spacing of cell bodies
500 µm / 30 µm = 16.6
Rounding up = 17 = diameter of dendrite reach expressed in terms of cell bodies.
Set up a repeating field of cell bodies … inscribe a 500 µm circle
inspect and remove the corner cells (4 corners × 13 cells each) outside the circle…
Any blue cell reaches the center of the inscribed circle.
Since this is a repeating pattern any starting point has the same solution.
The overlap of dendritic reach is:
(17 x 17) – (4 x 13)
289 - 52 = 237 overlapping cells at any given point.
An inscribed circle of 500 µm diameter gives an area of πr²
3.141 * (250 * 250) = 196350 µm² = input area for any cell body
Assuming that the layer is 1 µm thick for a first approximation (prolly much bigger!)
Going with the 1 synapse per µm³ and dividing by the 237 overlapping fields
196350 µm² / 237 = 828 synapses per cell per 1 µm of thickness.

It is likely that the dendrites are spaced out through the thickness of the layer. It is very likely that this density is shared with all the layers. (2/3, 4, 6)
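
For convenience, here is the same arithmetic as a tiny script (a sketch only; the 4 × 13 corner-trimming count is read off the figure above, and the variable names are mine):

```python
import math

cell_spacing_um = 30.0
dendrite_reach_um = 500.0

cells_across = round(dendrite_reach_um / cell_spacing_um)   # 16.6 -> 17
overlapping_cells = cells_across ** 2 - 4 * 13              # corner trimming: 289 - 52 = 237
input_area_um2 = math.pi * (dendrite_reach_um / 2) ** 2     # ~196,350 µm²

synapses_per_um3 = 1.0                                      # the "1 per cubic micrometer" guess
synapses_per_cell_per_um = synapses_per_um3 * input_area_um2 / overlapping_cells
print(overlapping_cells, round(input_area_um2), round(synapses_per_cell_per_um))
# 237 196350 828  -> synapses per cell per 1 µm of layer thickness
```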

#36

Yes, this exchange was very fruitful and @bitking insights are invaluable.

My model values differ slightly from those of the last blue square, since I’m still attached to the hex lattice to lay out even the microcolumnar structure, both from personal preference and from the belief that it will help quick-compute indexes on roughly circular areas, to account for both the dendritic and the axonal arbor concerns.

I’ll try to come up with details on some of my latest figures with the hex lattice, and specifics of most of the layout, hopefully with a few drawings as well… sorry for the dense walls of text up to this point. So thank you @bitking for those figures, discussions, info, and now for bringing color to this topic :wink:

#37

Hi everyone. This time I also tried some visualizations.

First, meet the hex lattice

HexLattice
This here could very well be a top-down view of any cortical sheet model. Each microcolumn center would be identified with one of these points.

Each dot here has equal spacing with all of its neighbors, and there are no ‘cornering’ neighbours distinct from ‘side’ neighbours, as would be the case with a square tiling scheme. I think it makes sense for an area as heavily concerned with topology as V1 is. And in fact for any cortical area, if we take the view that they all operate on the same algorithm.

Now, one could object that using this weird layout seems like a lot of pain for little gain; however, as shown below, there are very straightforward schemes to bind our common indexing for two-dimensional arrays to it:

HexLatticeIndexing
You see that the grid above is still a common, regular, 2D-addressable thing. It is only skewed a little when representing it on a display, or when taking topology itself into account, so this layout has almost no impact on iteration times or global indexing complexity.

Now, an additional benefit is that any regular neighborhood is already a pretty good approximation of a circular area. Below is an illustration of an area which spans 4 neighbors from its center point, so this is a “9-across” area:

HexLatticeNeighbourhood
I have a quick and dirty Excel sheet at my side which tells me that there are 61 distinct points inside this shape. And it will give me that number for any of the “neighbours from center” values. You can count them here if you wish.
With a few tweaks, you can fast-compute any offset from the center (in the global addressing scheme described as a regular (x;y) index above) from any of these 0…60 possible address values. And this holds true at any scale, thus increasing the speed at which we can compute the extent of, retrieve a point address in, or iterate upon any of these roughly circular areas, of any diameter.
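
For reference, that counting and the offset enumeration are straightforward in axial (“skewed 2D”) coordinates. Here is a minimal sketch (the helper names are mine, not from any library):

```python
def hex_area(n_rings):
    """Number of lattice points within n rings of a centre point."""
    return 3 * n_rings * (n_rings + 1) + 1

print(hex_area(4))                  # 61 -> fits in 6 bits, as noted above
print(hex_area(8), hex_area(12))    # 217 and 469 -> the dendritic-extent figures below

def ring_offsets(n_rings):
    """(dq, dr) axial offsets of every point within n_rings of the centre."""
    offsets = []
    for dq in range(-n_rings, n_rings + 1):
        for dr in range(max(-n_rings, -dq - n_rings), min(n_rings, -dq + n_rings) + 1):
            offsets.append((dq, dr))
    return offsets

assert len(ring_offsets(4)) == hex_area(4)
```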

So, an index taken as an “offset-from-center” in the figure above would fit in only 6 bits. But obviously, from the reflections and estimations we carried out together with @bitking, we’d need to describe much larger areas for most of our concerns (which are both sides of the following matter : the extent of the axonal tree on one hand, and the dendritic extent on the other hand).

So let’s try to describe the “matter” in more detail.
You all know about the main excitatory cell of the cortical column : The pyramidal cell, and its HTM interpretation:

PCandHTM

Let’s add some rulers around this one guy, or one of its close cousins…

PCwithLegends

Now, that ~30µm microcolumn-to-microcolumn center spacing we settled on with @bitking is maybe not a figure everybody would agree on. It is on the lower end of the proposed scales from Wikipedia (up to ~50 or even ~80µm). I believe however that, if one took a figure greater than ~30µm, the philosophy of the proposed organization would not be much altered; it would only reduce the count of stuff-per-area that we need to consider. So ~30µm seems like the “worst case” scenario.

Each soma of a pyramidal cell is set on one microcolumnar index on the hex layout. The typical horizontal dendritic extent (whether basal or apical) is at its ease in an 8-neighbourhood (17-across), but dendritic plasticity would allow them to grow up to a 12-neighborhood (25-across) if really information-starved. My Excel sheet tells me that there are 217 distinct possibilities for the ‘typical’ case, and up to 469 distinct possibilities when stretched. Those require at worst 9 bits to localize a subsegment relative to its cell’s soma position.

DendriticExtent

We can thus fully encode a 3D position for the subsegment in 16 bits, if we take 7 more bits for the vertical, giving us 128 possible “input templates” per cortical area. Now, what’s an input template ? It is a description of the axonal side of the problem. These subsegments, as exemplified by the little part magnified in the blue box two pics above, are highly localized. Besides allowing the finer 8-coincidence threshold detection I propose, this high specificity of localization is the whole point of them.

An “input template” is localized in the vertical axis of the receiving sheet, and may allow several “sources”. A source is : a “sheet” of origin (same or other cortical area, or even a deeper part of the brain such as LGN), and a specific “population” of cells within that sheet, sending axons towards the receiving area. Now, for each given source, there is in the model a precise topological relationship to the receiving area, which allows us to :

  • define topologies as finely as the biological wiring we wish to simulate would require
  • put an upper limit to the (otherwise huge) address of input cell, as viewed from a subsegment.

We’ll take a slice below to keep things simple in an example of topological mapping :

TopoMapping

Maybe this looks unnecessarily complicated at this point; however, now the wiring is completely specified. For a dendrite subsegment sampling the received input, the inverse scaling of the original mapping is applied to the extent. In the example above, this would be divided by two, bringing the sampled neighbourhood (around the associated center) to two, i.e. a 5-across area. You can check this on the above image, imagining how far apart on the source sheet two cells having overlapping axonal arbors on the receiving sheet could be situated. Any position in the original sheet layout thus gets a restricted area of incoming axonal overlap. For the (very lightweight) example above, it would look like this:

AxonalSampling
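
In code, that reverse mapping could be as lightweight as the sketch below (it only covers the simple ‘divide by two’ example above, with a plain linear scaling and hypothetical names; a real topological transform would replace the integer division):

```python
def source_sampling_area(target_pos, scale, arbor_reach_rings):
    """target_pos: (q, r) axial index of a subsegment on the receiving sheet.
    scale: magnification of the source->target mapping (2 in the example above).
    arbor_reach_rings: axonal arbor reach on the receiving sheet, in hex rings."""
    # Reverse-map the subsegment position to its corresponding source centre.
    centre = (target_pos[0] // scale, target_pos[1] // scale)
    # Apply the inverse scaling to the arbor reach: any source cell within
    # this many rings of `centre` has an arbor overlapping target_pos.
    reach = arbor_reach_rings // scale
    return centre, reach

centre, reach = source_sampling_area((10, 6), scale=2, arbor_reach_rings=4)
print(centre, reach)   # (5, 3) 2 -> a "5-across" sampling area, as in the example
```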

Now, the situation gets a little more complicated when the mapping is non-trivial, such as for the ocular dominance stripes we have in V1; however, I believe this is manageable.

Now, if we allow a 12b or 13b per-synapse address, given a single subsegment as proposed in my previous post, there may be cases where the address space gets somewhat tight for multi-source setups with large axonal overlaps. Since I’m not confident we know of all the weird-wiring cases which could potentially come up in the brain, let’s say we increase that value to a very conservative 16b per synapse… I’ll take whatever estimate you guys deem accurate for the max number of synapses in a subsegment with a length on the order of 40µm, but I believe this could be quite reasonable at this point.

At any rate, given a full-fledged ~6000-synapse cell, even with a low filling efficiency of subsegments at around 75% (this would bring the total synaptic “slot” count to ~8000), with a 16b per-synapse address and 4b for the permanence value (permanences stored in another contiguous array, still very low profile as they’ll be stochastically updated), this makes the footprint of a cell around 20KB.
The full connectivity state of a 100-thousand-cell simulation can thus be stored in around 2GB.
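
In numbers, the estimate works out like this (a throwaway arithmetic check, not an actual data structure):

```python
# Checking the footprint estimate above.
synapses_per_cell = 6000
fill_efficiency = 0.75
slots_per_cell = synapses_per_cell / fill_efficiency    # ~8000 synaptic "slots"

bits_per_slot = 16 + 4                                  # 16b address + 4b permanence
bytes_per_cell = slots_per_cell * bits_per_slot / 8
print(round(bytes_per_cell / 1e3), "KB per cell")       # ~20 KB

cells = 100_000
print(cells * bytes_per_cell / 1e9, "GB total")         # ~2.0 GB
```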

Seems high still ? Maybe. However, the beauty of the topological structure is that, no matter how many areas or minicolumns-per-area you’d wish to reasonably simulate, those 16b address-per-synapse-on-a-given-subsegment values would never change. Were it computationally anywhere in reach, a full-scale brain simulation could thus, in my view, still operate with those same values.

Which seems reasonable enough to start opening Visual Studio and work from there for the V1 sim. Now I have some work on my hands :wink:

#38

Excellent breakdown and visualizations. One thing that pops out to me is a potential memory optimization from applying a strategy like this to the Spatial Pooling algorithm. Each input bit could have random potential synapses that are addressed in “axonal arbour” coordinate space. Then a given target position in a destination sheet can become a reference point used to iterate over the minicolumns which are referenced by those potential synapses. Think I’ll see if I can work this into HTM.go.

#39

Sure, HTM at the moment does not lean heavily towards topology concerns; that’s why we could maybe see this stuff mostly as an “extensional” specification, which I believe could be integrated within the current HTM Pooling or even TM algorithms, maybe with some add-ons for ‘t-1’ concerns (I think I will have to address t-1 also in the quite near future, maybe as what I’d call a distinct ‘population’ in the input template).
So, if you can get something running quickly along these lines and smoothly integrated with HTM that would be neat :slight_smile:

#40

Yes, I’m also thinking about how to apply it to the TM algorithm. One complexity (besides T-1) is the distinction of multiple segments per cell. From a certain perspective, that could just be an abstraction born from a lack of topology in the algorithm (as you pointed out, Jeff always describes the distal synapses as coincidence detectors, activated when several of them that are close to each other in space and time are active). It may not be necessary to model distinct segments in an implementation of TM that includes topology.

#41

Perhaps topology is not necessary for TM.
For my part, my first “bet” in the simulation would be to consider that, besides position, dynamic input context, and output axonal wiring, nothing really distinguishes one pyramidal cell from another. So what must be true of a V1 L4 stellate would be true of the cells which achieve TM. And I’ll try to fit all of it into the scheme above.
I’m quite impatient to hear about your results, though :slight_smile:

#42

Thanks for sharing this. Very interesting and relevant learning on this topic.

#43

Thanks for the kind feedback everyone. Currently crunching numbers to come up with biologically plausible mappings from retina to LGN, and from LGN to V1. Biology is messy… and no two accounts agree on size, shape, or even function.

What’s the problem, you ask ?

Here is the V1 retinotopic map (i.e. the local associations from visual field to cortex) of a macaque monkey
MacaqueV1retino

Here similar stuff for a human
ModernHumanV1Retino

Scaling aside, I’m settling on an LGN “sheet” which will have a similar topology to V1, so that the distortion seen above, giving huge coverage to the fovea compared to the periphery, will already occur at the LGN.
I’m settling on a count of ~2M “relay cells” (we will have to await further neuro theories to see if they are more than “relay”) per LGN on one hemisphere.

There are 6 main “layers” to the LGN. The four dorsal layers are for Parvocellular (P) cells, which are assumed to relay signals associated with so-called “midget” RGCs (tiny sensory area, sustained responses). The two ventral layers are for Magnocellular (M) cells, which are assumed to relay signals associated with “parasol” RGCs (large sensory area, transient response). There are also 6 intercalated layers between them, comprised of Koniocellular (K) cells… which were found relatively recently and are poorly understood. But I’ll try to model those too, anyway.

Of these 2 million, some sources claim ~80% P, ~10% M and ~10% K. So I’ll settle on this.

Since there are two layers of M, one for each eye projecting to the LGN (each LGN operates on the contralateral visual hemifield), I need to work with that as a base, setting each “point” in visual space to two M cells. Thus, according to the P/M ratio, each point is also 16 P cells. Given those values, I believe the P sheet will be distinct in scale from the M sheet. Let’s say by a scale factor of 2. So per 1 point on the M topology, we’ll have 2x2 points on the P topology. So there will be 4 cells per point on the P topology.

Thus, how many points total in the P sheet ? One fourth of 80% of 2 million, i.e. 400 thousand.

I think I have managed by now to find some mapping from retina to such a sheet which makes sense, still using the hex lattice as described above. I’m able to fit a similar shape of about ~400,000 points, with similar retinotopic properties, in a “square” (in terms of indices) hex layout of 655*655 = 429,025 positions, thus achieving 93% coverage of the regular grid with meaningful data.
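
Spelling that bookkeeping out (my variable names; same assumptions as above):

```python
# The LGN point counts above, as plain arithmetic.
relay_cells = 2_000_000
p_fraction = 0.80                           # ~80% Parvocellular

p_cells = relay_cells * p_fraction          # 1,600,000 P cells
p_cells_per_point = 4                       # 4 cells per point on the P topology, as derived above
p_points = p_cells / p_cells_per_point      # 400,000 points on the P sheet

grid_side = 655
grid_positions = grid_side ** 2             # 429,025 hex-lattice positions
print(int(p_points), grid_positions, round(p_points / grid_positions, 2))
# 400000 429025 0.93 -> ~93% of the regular grid carries meaningful data
```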

I find this 655*655 value almost surprising. This is not a staggering resolution compared to most “images” we routinely work with on a computer. This is a clue that high-res center of fovea and integration of saccades are paramount in our visual perception.

Next hairy topic to tackle is LGN to V1, with axonal “setup” of ocular dominance, resulting in a zebra pattern. Here’s what it should look like:
OcularDominanceColumnsMacaqueUpHumanDown
Top: ocular dominance stripes (one eye black, the other white) as seen on a macaque V1; bottom: the same thing on a human one.
The black area at the periphery is due to the fact that a single eye manages to cover the full field of view on each hemifield (this is called the monocular area). The white area in the middle is the location of the “blind spot” associated with the optic nerve (but I don’t think I’ll model this last one).

I have ideas to iteratively simulate this organization (once, at startup) from the CO “blobs”, which in my opinion seem to somehow form a hex grid (again !), right in the middle of those stripes.

Will try to come up with figures and more home-made diagrams.

#44

Gary,
This book is really great.

Many thanks.

#45

Some of the visualizations I promised…

There are some accounts of different P-to-M ratios depending on eccentricity, which would contradict the scheme I presented above… although I can’t get reliable figures; everyone had a different opinion about that some years ago, and now nobody seems to care. At least in the papers which I can find on the good side of paywalls.

Nevermind, let’s stick to it.

First, here’s my intended mapping, looking like the retinal cone layout around the foveola (the very tiny part of the fovea with the highest cone density, ~50 per 100µm)

FoveolaTilingHiddenHex

This is in fact a regular hex lattice which has been reprojected to concentric circles and salted with some random variation:

FoveolaTiling

A hemifield is mapped here (grey tiles are part of the other hemifield; the vertical yellow line splits the two). You can see on this picture that, despite their irregular looks, the number of tiles per concentric circle grows at a steady rate: a first “circle” (pinkish purple) of two tiles, a second one (bluish purple) of five, then 8, then 11, then 14, then 17… increasing by three each time. The dashed yellow lines split the “pie” into 60° parts, on which the hidden regular “hex grid” is most perceivable.
In the foveola, there is believed to be roughly one OFF-midget and one ON-midget RGC per cone, thus both will get associated with one point in the P-sheet topology (and the other eye provides the remaining two). So those tiles will get mapped as such :

FoveolaMapping

Foveola spans to roughly 0°35’ (35 arcminutes) eccentricity, that is about 0.175mm in distance. So with 88 of these circles at around 2µm per cone, we’ve roughly reached that value, which is the limit of the foveola on each side, after which the tiling will start to get sparser.

So I set that value of 87 steps (spanning 263 points) as one ‘chord’ to my regular hex lattice, and I get to this :

FullRetinotropicMapping

Each line of vertical indices after the foveola (I don’t want to use the word ‘column’ here, to avoid confusing terminology) is spaced from its neighbors in the eye following a geometric progression (in eccentricity), after a few smooth-matching steps. That’s why those values grow so rapidly to 90° after some point, which is consistent with what our brains seem to be doing, and the overall potatoish shape is not much weirder than some of the messy pictures of V1 pieces we can find. We’ll get a steady increase in the number of RGCs per such “line” up to about 3 degrees, then we’ll reach a plateau, and eventually fall off from 16° onwards.

In total, there will be 11,660 points in the foveola region alone, out of an overall total of still about 400 thousand points (each associated with 4 midget RGCs, give or take weird things and monocularity happening at the periphery).
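
A quick check of those tiling numbers (a sketch with my own names; rings counted per hemifield, growing by three as described above):

```python
# Foveola tiling: rings of 2, 5, 8, ... tiles, 88 rings at ~2 µm per cone.
rings = 88
tiles_per_ring = [3 * n - 1 for n in range(1, rings + 1)]   # 2, 5, 8, ..., 263

print(tiles_per_ring[-1])        # 263 points on the outermost ring (the 'chord')
print(sum(tiles_per_ring))       # 11660 points in the foveola region
print(rings * 2e-3, "mm")        # ~0.176 mm radius, i.e. roughly 35 arcminutes
```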

#46

Interesting stuff, folks. The next HTM Hackers’ Hangout is today at 3PM Pacific time. Anyone participating in this thread is welcome to join and discuss these ideas there. You can even share your screen if you want to show graphics. Is anyone interested? @gmirey @Paul_Lamb @Bitking @Gary_Gaulin @SimLeek

#47

5 posts were merged into an existing topic: HTM Hackers’ Hangout - May 4, 2018

#48

Thanks for bringing that topic up on the hangout.
Mark’s expectations of working code seem high ^^ I hope we can deliver.

At any rate, the heavily localized disposition of V1 cells in sensory space has forced us to ponder about topological concerns at length. We’ve been discussing those concerns with @bitking in PM for some time, right from that failed joke which did not quite land where I wished.

So even before writing any code it was clear that there were topological problems to tackle. And now aside from that V1 project, I’m glad that some of what we came up with for topology seems usable enough that people like @Paul_Lamb are starting to toy around with it and see how it fits on the overall scheme.

#49

Some musing - I have been thinking about how your model will learn edge orientations.

Thinking biologically - It occurs to me that at several stages in the formation of each map one organizing principle is the chemical signaling creating “addresses” and gradients in a given map. Sort of a chemical GPS.

At one point in early development the sheets double up and split apart, and when the maps end up wherever they go, some cells in the source map form axons that grow back to the sense of “home” in distant target maps, thus retaining a very strong topological organization.

This is amazing to me - if a cell body were the size of a person, the axon would be the size of a pencil and perhaps a kilometer long; it still finds its way to the right place in the distant map.

The gradient part is a chemical marker that varies from one side of the map to the other. There could be far more than just x & y. Obviously, an x & y signal will form a 2D space. Smaller repeating patterns would be like zebra stripes. Nature is a relentless re-user of mechanisms. Thinking this way, look at the ocular dominance patterns. I can see the same mechanism as being a seed for local dendrite & synapse growth.

What I am getting at is that some heuristic seeding may take the place of genetic seeding, outside of pure learning in the model. I would not view this as “cheating” but just as the genetic contribution to learning.

What does that mean?
In a compiler we think of the scope of time: the reading in of the header files, the macro pass, the symbol pass, code generation, linking, loading and initialization, and runtime. Even that can have early and late binding.
All this is different with interpreters. You might think of sleep as the garbage collection of a managed language.

In creating a model we may think of different phases to apply different kinds of learning.

1 Like