Hi,
my question is short and is the one in the title; the same question also applies to cortical columns. Thank you very much for your answers.
Are you asking how many neurons are in a minicolumn?
I think HTM usually uses 32 per minicolumn, but not always. It depends on the data which the program processes. That’s the number for temporal memory, but it might be different for the input layer for object recognition. I don’t know whether the output layer for object recognition has minicolumns. It might not be decided yet.
In a cortical column, I don’t know if there is a standard number, but I think the number is a few hundred to maybe a couple thousand minicolumns in the input layer. I’m not sure how many neurons are in the output layer.
In the brain, that’s a hard question. I assume you are asking only about HTM, and not asking about the brain. If you want, I can try to answer how many neurons are in a minicolumn in the brain.
Are you asking how many minicolumns a single neuron is part of? Each neuron is in one minicolumn and one cortical column.
Are you asking how many minicolumns provide input to a single neuron? I’m not sure. Since a minicolumn contains multiple neurons, probably a large fraction. It depends on a lot of factors, though. It also depends on the algorithm or implementation.
In the object recognition output layer, each neuron can receive input from any cortical column.
If you are asking something else, you can send me a personal message with your question in your language and I will try to understand it.
In this post:
I point to literature that states that apical dendrite arbors (inputs) span upwards of 0.5 mm.
The literature also states that columns are spaced about 30 µm apart.
This works out to the dendrites from any given cell passing by upwards of 8 columns in any direction.
Stated another way - roughly 240 columns are within the average reach of the cell.
These axons can also span all the way to other maps and there connect to the same 240 or so column targets in the distant map.
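For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope version (my own sketch, assuming square packing of columns at the stated 30 µm pitch; hexagonal packing lands a bit higher, which is where "roughly 240" comes from):

```python
# Back-of-the-envelope check of the numbers above (my own arithmetic under the
# assumptions stated in the post, not taken from the cited literature).
import math

arbor_span_um   = 500.0   # apical dendrite arbor span, ~0.5 mm
column_pitch_um = 30.0    # center-to-center spacing of columns

radius_um      = arbor_span_um / 2.0          # ~250 um reach from the soma
columns_radius = radius_um / column_pitch_um  # ~8 columns in any direction

# Assume one column per 30 x 30 um patch (square packing); a disc of that
# radius then covers roughly:
columns_in_reach = math.pi * radius_um**2 / column_pitch_um**2

print(f"~{columns_radius:.1f} columns in any direction")
print(f"~{columns_in_reach:.0f} columns within reach")  # ~218 square-packed, ~250 hex-packed
```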
In a different post I show that the cells in layer II/III have reciprocal axon connections that are likely to be the mechanism for forming hexagonal activation patterns.
The axons (outputs) from a cell body can reach a much longer distance in the same (local) map, but they tend to connect to inhibitory interneurons and act to suppress competing activity and enforce sparse activation.
Axons can extend between cortical columns, so a neuron can receive input from thousands of cortical columns. If the question is how many minicolumns are within direct reach, then your answer makes sense. It might need to be multiplied by some factor, because multiple minicolumns extend through the depth of the cortex. If I recall correctly, a study based on synchrony found minicolumns which don't extend through the entire depth of L5, maybe even only L5a/b.
A single HTM neuron in any minicolumn can be connected to any number of neighbouring neurons (from nearby minicolumns), depending on the connection properties of the neuron. The neuron can connect either to nearby cells in nearby minicolumns or to any neuron from any minicolumn in the entire region. As for how many minicolumns a neuron ends up connected to: it is usually connected to a subset of the total active minicolumns, and this subset is decided manually. In principle, a neuron can connect to any other neuron within its proximity (axonal or dendritic reach) that is frequently active just before it is.
As far as the total scope of connectivity is concerned, there is no hard upper bound (unless the number of connections reduces the network's efficiency or interferes with the algorithm's properties), since axons can travel long distances.
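To make the "subset is decided manually" point concrete, here is a minimal Python sketch of the idea (my own illustration, not any particular HTM codebase; the parameter names, the 1-D notion of "reach", and the numeric values are all assumptions):

```python
# Sketch: a cell grows distal synapses to a bounded random sample of the cells
# that were active on the previous timestep and lie within its reach.
import random

MAX_NEW_SYNAPSES = 20    # assumed cap on new synapses per learning step (hypothetical)
REACH            = 128   # assumed reach, measured in minicolumn indices (hypothetical, 1-D)

def candidate_cells(prev_active_cells, own_minicolumn, cells_per_column=32):
    """Previously active cells whose minicolumn lies within this cell's reach."""
    return [c for c in prev_active_cells
            if abs(c // cells_per_column - own_minicolumn) <= REACH]

def grow_distal_synapses(segment, prev_active_cells, own_minicolumn):
    """segment maps presynaptic cell index -> permanence.
    Connect to a manually bounded random subset of the eligible cells."""
    pool = candidate_cells(prev_active_cells, own_minicolumn)
    for cell in random.sample(pool, min(MAX_NEW_SYNAPSES, len(pool))):
        segment.setdefault(cell, 0.21)   # initial permanence (hypothetical value)
    return segment
```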
Thank you all for your answers. More precisely, I am asking how many minicolumns are connected to a single pyramidal neuron in a single minicolumn, and the same question for cortical columns. By "connected" I mean by one single axon or dendrite (I don't really know which one plays the bigger role in long-range connections), so I mean without any relay. For example, is one minicolumn connected to all the others in the entire primary auditory cortex? And the same for cortical columns. Or do they have a receptive field?
If you mean how many neurons share a proximal receptive field within a mini-column, the answer is variable. It depends on what layer and what animal and sometimes what part of the cortex. Our examples range from 4-32 cells per mini-column.
This question doesn’t make sense to me. A cortical column is not like a mini-column.
I think you mean there is a potential synapse between two cells. Yes, it is where the input cell's axon stretches toward the receiving cell's dendrite. There is a potential synapse there, marked by a scalar value called permanence. As these neurons learn, the permanence increases. If it crosses a connection threshold, we call it a connected synapse.
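Here is a tiny sketch of that permanence mechanism (my own illustration; the threshold and increment values are assumptions, not taken from any specific implementation):

```python
# Permanence on a potential synapse is a scalar in [0, 1]; learning nudges it
# up or down, and the synapse counts as "connected" above a threshold.
CONNECTED_THRESHOLD = 0.5    # assumed connection threshold
PERMANENCE_INC      = 0.1    # assumed reinforcement amount
PERMANENCE_DEC      = 0.05   # assumed decay amount

def update_permanence(permanence, reinforced):
    """Increase or decrease a potential synapse's permanence and clamp to [0, 1]."""
    permanence += PERMANENCE_INC if reinforced else -PERMANENCE_DEC
    return min(1.0, max(0.0, permanence))

def is_connected(permanence):
    return permanence >= CONNECTED_THRESHOLD
```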
I think you’re talking about distal connections, and I’m not sure about the answer. I don’t think about distal relationships between mini-columns; I think about distal relationships between neurons within the mini-columns. Those are what grow the synapses.
But to try to answer your question, we try to keep all these settings variable. If you want to set up an HTM with so many distal connections that one mini-column’s neurons are likely to be connected to neurons in all the other mini-columns, you could probably do it. I don’t think it is like that in the brain because of topology, though.
Yes, I am talking about distal synapses, after re-reading the paper here. But I gather you are saying the minicolumns are not fully connected to each other, whereas my reading of the mentioned paper is that they are. Or are you just saying HTM is not how the brain works? And if they are not really fully connected together: if I have a sequence where column A is activated and then column B is activated, only one after the other, and they are very distant, the first could not pre-activate the second. So maybe, to fix the problem, I need noise in the network, is that correct? Or what else?
I recommend also reading BAMI which comes with pseudocode that helps with understanding HTM.
Minicolumns are not directly connected to each other. They are directly connected to the input cells. The Spatial Pooling algorithm converts activations within the input cells into minicolumn activations.
The cells within the minicolumns are what are directly connected to each other.
When minicolumns are activated during Spatial Pooling, this begins the Temporal Memory phase. Cells within each minicolumn that are in a predictive state become active and inhibit the other cells in that same minicolumn which are not in a predictive state. If none of the cells are in a predictive state, then all the cells in an activated minicolumn become active.
One cell in each active minicolumn is chosen as a winner. This is either the cell best connected to the active cells from the previous timestep, or a randomly chosen cell if none are connected well enough. This winner forms new distal connections to active cells from the previous timestep.
That is just a quick summary. The BAMI document gives a lot better detail.
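For readers who prefer code, here is a condensed sketch of that activation and winner-selection step (my own simplification in the spirit of the BAMI pseudocode, not a complete implementation; the cell-indexing scheme and the match_score helper are assumptions):

```python
# Temporal Memory step: predictive cells win in their minicolumn, otherwise the
# whole minicolumn bursts and one winner is picked to learn.
import random

def temporal_memory_step(active_minicolumns, predictive_cells, match_score,
                         cells_per_column=32):
    """
    active_minicolumns: minicolumn indices chosen by Spatial Pooling
    predictive_cells:   cell indices currently in the predictive state
    match_score(cell):  how well the cell's distal segments match the cells
                        that were active on the previous timestep
    """
    active_cells, winner_cells = set(), set()
    for col in active_minicolumns:
        cells = [col * cells_per_column + i for i in range(cells_per_column)]
        predicted = [c for c in cells if c in predictive_cells]
        if predicted:
            # Correctly predicted minicolumn: only the predictive cells fire,
            # inhibiting the rest of the minicolumn.
            active_cells.update(predicted)
            winner_cells.update(predicted)
        else:
            # No prediction: the minicolumn bursts, and one winner is chosen --
            # the best-matching cell, or a random one if none match well enough.
            active_cells.update(cells)
            best = max(cells, key=match_score)
            winner_cells.add(best if match_score(best) > 0 else random.choice(cells))
    # Winner cells would then grow new distal synapses to the previously
    # active cells (omitted here).
    return active_cells, winner_cells
```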
I’ll let others comment on other areas which deviate from biology, but the main one that I am aware of is that HTM has an optimization in the Spatial Pooling algorithm. The idea is that if we assume all the cells in a given minicolumn share the same receptive field, then we can model it as all of the cells sharing one single proximal dendrite segment (rather than each of them having their own individual proximal dendrites).
Several folks here on the forum (including myself) have written variations of the Spatial Pooling algorithm which model each cell having their own proximal dendrites. However, my own experience is that this comes at a hefty cost (32X memory and CPU requirements for Spatial Pooling) with no tangible improvement to overall capability. Modeling per-cell proximal dendrites can be useful for tangential capabilities (such as temporal pooling), but there are no obvious benefits to the traditional algorithms.
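To see where the 32X figure comes from, here is a rough illustration (my own sketch; the region size and synapse count are example assumptions, not measurements):

```python
# Comparing one shared proximal segment per minicolumn against one proximal
# segment per cell, under assumed example sizes.
num_minicolumns   = 2048
cells_per_column  = 32
synapses_per_prox = 256   # potential synapses on one proximal segment (assumed)

shared_proximal   = num_minicolumns * synapses_per_prox                      # one segment per minicolumn
per_cell_proximal = num_minicolumns * cells_per_column * synapses_per_prox   # one segment per cell

print(f"shared:   {shared_proximal:,} proximal synapses")
print(f"per-cell: {per_cell_proximal:,} proximal synapses "
      f"({per_cell_proximal // shared_proximal}x)")   # 32x more to store and evaluate
```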
Thank you for the explanation; the shared paper is very instructive.
Thank you all, my question is answered!