A minicolumn is about 50 µm across.
The standard SDR is 2000 bits, so the next organizational/grouping level of column should then be 2000 minicolumns.
But by physical size the next-level column should be about 1 mm², i.e. 400 minicolumns with a 50 µm × 50 µm footprint each.
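As a sanity check on the arithmetic above, here is a quick back-of-the-envelope sketch. The square 50 µm × 50 µm minicolumn footprint is my assumption for illustration, not an anatomical claim:

```python
# Back-of-the-envelope column counts, assuming (for illustration only)
# a square 50 um x 50 um minicolumn footprint.
mini_um = 50            # minicolumn width in micrometers
macro_um = 1000         # hypothetical 1 mm-wide "next level" column

per_side = macro_um // mini_um        # 20 minicolumns per side
minicols_per_macro = per_side ** 2    # minicolumns in a 1 mm^2 footprint
print(minicols_per_macro)             # -> 400
```

So a 1 mm² patch holds 400 of these footprints, which is where the 400 figure above comes from.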
There was also talk of an organizational unit of 100 minicolumns?
So which of these is the macrocolumn and/or cortical column? Or are they the same?
What are their specifications: size and number of minicolumns, respectively?
(extract from p. 48 of this ebook: Ebook: Insights from the brain, the road towards Machine Intelligence)
Even if the different cortical areas support very diverse functions, their anatomical organization is strikingly similar.
Indeed, the whole cortical sheet is made of a collection of anatomical fundamental columnar units called minicolumns (around 50 µm of diameter). Each cortical area is basically a collection of millions of minicolumns (each one being composed of around 100 neurons). Minicolumns are organized in layers (generally 6 layers), with specific neuron types and connection patterns in each layer. This organization is said to be laminated.
Neighboring minicolumns share a same Receptive Field (RF), meaning that they are innervated by the same axonal inputs. Those minicolumns form structures called macrocolumns/hypercolumns (around 500 µm of diameter) that are thought to be functional fundamental units (the hypothetical functional role of macrocolumns remains controversial).
Then how do you account for HTM's chosen SDR size of 2000 bits?
By the logic of what you describe there should be an X-column made of 20 macrocolumns (2000 minicolumns).
Unless you accept every neuron as output, in which case the output would be 10,000.
Or, thirdly, the 2000-bit SDR is a purely mathematical construct and a macrocolumn communicates bit by bit, densely.
BTW, in my day-to-day testing, 500-1000-bit SDRs work without problems.
Working with fewer bits is faster.
My answer was only about some numbers in the human brain at different scales, without any direct link to the number of bits. The number of bits reflects the modeler's choice: the idea is to keep it as low as possible to ease computation, but it is always a very simplified representation of reality.
The link between bits & neurons is stated in BAMI:
The bits in an SDR correspond to neurons in the brain, a 1 being a relatively active neuron and a 0 being a relatively inactive neuron.
When Numenta people model one cortical layer, I think that their SDR has as many bits as the number of minicolumns. The bits in those SDRs correspond to minicolumns in the brain, a 1 being an active minicolumn (please correct me if necessary).
Thus, an SDR of 2000 bits would mean a model of a macrocolumn of 2000 minicolumns. This number is one order of magnitude greater than the figure I gave (1 macrocolumn ~ 100 minicolumns). I guess it is simply because the HTM model is performing well with this size, not because it corresponds to the number of minicolumns in a macrocolumn.
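If that reading is right, a layer's SDR could be pictured as a binary vector with one bit per minicolumn. A minimal illustration (this is my sketch, not Numenta's code; the 2% sparsity and variable names are my assumptions from typical HTM examples):

```python
# A toy SDR: one bit per minicolumn, ~2% of them active (an assumed
# sparsity typical of HTM examples, not a claim about the brain).
import random

n_minicols = 2000
n_active = 40                        # 40 / 2000 = 2% sparsity

on_bits = set(random.sample(range(n_minicols), n_active))
sdr = [1 if i in on_bits else 0 for i in range(n_minicols)]

print(sum(sdr))                      # -> 40 on-bits
print(sum(sdr) / n_minicols)         # -> 0.02 sparsity
```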
On a side note, the term "cortical column" can refer to "minicolumn" or "macrocolumn" depending on the context.
thanks … makes sense
I'm trying to square the circle.
I'm confused because a 100-bit SDR won't allow much sparsity.
Could there be something to it, if the brain really has 100 minicolumns per macrocolumn?
Interpretations:
1. Use several macrocolumns together.
2. Use the majority of the 10,000 neurons of a macrocolumn, sort of disregarding the minicolumn organization.
3. We still have the minicolumn/macrocolumn structure, but the output is scattered across ... different macrocolumns join hands at different times.
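The worry about sparsity at small sizes can be made concrete by asking how often two random SDRs of the same size and sparsity would look like a match. Below is a rough sketch: the exact-overlap formula is the standard hypergeometric count, but the 2%-sparsity parameters and the half-the-on-bits matching threshold are my arbitrary choices for illustration:

```python
# Chance that two random w-of-n SDRs share at least `theta` on-bits.
# Counts overlaps with the hypergeometric distribution.
from math import comb

def p_overlap_at_least(n, w, theta):
    """P(two independent random w-of-n SDRs overlap in >= theta bits)."""
    total = comb(n, w)
    return sum(comb(w, b) * comb(n - w, w - b)
               for b in range(theta, w + 1)) / total

# Both cases are 2% sparse; threshold = half the on-bits (my choice).
big = p_overlap_at_least(2000, 40, 20)    # 40-in-2000 SDR
small = p_overlap_at_least(100, 4, 2)     # 4-in-100 SDR

print(f"2000 bits, 40 on: {big:.3e}")
print(f"100 bits,  4 on:  {small:.3e}")
```

With these (assumed) numbers the large SDR's false-match odds are astronomically small, while the 4-in-100 SDR collides at roughly the 1-in-150 level, which is one way to read the "100 bits won't allow sparsity" concern.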
At least one of the key biologically-inspired experiments using HTM algorithms seems to produce reasonable results when run with 100 minicolumns per cortical column. (See further down that thread, and also this earlier thread, for reference to the Numenta paper, and description of the HTM-scheme implementation of the experiment: it was originally run with the Numenta SDR configuration of 20 "on" bits in 1024, but can produce comparable output with 4/100.)