I disagree on this point. The cells are the lowest processing structures, not the minicolumns. Without the cells, you are limited to spatial pooling, and cannot produce high-order memory or anomaly detection. Additionally, experiments which deal with apical feedback operate on the level of individual cells, not minicolumns.
For reference, this is the structure of a cell in HTM:
The (HTM) diagram shows what I take to be SDR bit patterns as inputs. How many? The example would appear to show 9, and if each is 2000 bits, that's 18,000 bits of input. How many cells are needed to generate that?
The OR gate is shown as a separate operator. Is that not part of the (HTM) cell?
The triangle is presumably the HTM algorithm, and generates a single bit of output (the cell fires or it does not). Is that algorithm spelled out? Is it fixed, or is there variability between cells?
Given an SDR of 2000 bits, it would take 2000 cells like this one to generate a single SDR, yes?
So, does the minimum processing unit capable of generating an SDR correspond to a column?
Is that a serious question, or are you making a point that I should provide more detail? If the former… well, a typical configuration sets an upper bound of 255 synapses per segment, and 255 segments per cell. But that might be a little busy to draw…
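To put those numbers in perspective, here's a trivial back-of-the-envelope sketch (the constant names are mine, purely illustrative):

```python
# Back-of-the-envelope capacity of one HTM cell under the configuration
# mentioned above. Constant names are illustrative, not from any codebase.
MAX_SEGMENTS_PER_CELL = 255
MAX_SYNAPSES_PER_SEGMENT = 255

max_synapses_per_cell = MAX_SEGMENTS_PER_CELL * MAX_SYNAPSES_PER_SEGMENT
print(max_synapses_per_cell)  # 65025 potential synapses on a single cell
```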
The whole diagram depicts the concept of an HTM cell.
Again, I assume you are trying to make a point that I should post more detail. I don't think that is necessary here; folks can refer to Numenta's papers and BAMI among many other resources here on the forum.
Assuming by "column" you mean minicolumn, then no, a collection of cells in different minicolumns produces an SDR. A single minicolumn (when not bursting) is only capable of generating one bit of an SDR. If you are referring to a cortical column, then sure, it takes a network of cells to construct an SDR. Of course you also changed the subject here: I was referring to the "lowest processing structure" in the HTM algorithm, not the "minimal processing unit capable of generating an SDR".
Relating this back to the original subject:
Clearly, by "column" Cairo was talking about HTM minicolumns here, specifically the SP algorithm, which has an optimization whereby multiple cells sharing a receptive field are modeled as all sharing a single proximal segment (as if the minicolumn itself were a neuron in the case of feedforward processing). He was not talking about cortical columns. I don't think anyone would assert that a cortical column in HTM is equivalent to a neuron in other algorithms.
BTW, for Cairo's benefit, that particular optimization applies only to feedforward processing in the SP algorithm. There is more to HTM than just the SP algorithm, though, so from a broader perspective I do not feel it is accurate to think of single HTM minicolumns in general as being equivalent to single neurons in other algorithms.
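For illustration, here is a rough sketch of that optimization (my own toy code, not Numenta's implementation): feedforward overlap is computed once per minicolumn, against a single shared proximal segment.

```python
# Toy sketch of the SP optimization described above: feedforward overlap is
# computed once per minicolumn, as if the minicolumn itself were one neuron
# with a single proximal segment. All names and values are hypothetical.

def proximal_overlap(proximal_segment, active_input_bits, connected=0.5):
    """Count synapses on the shared proximal segment that are both
    connected (permanence >= threshold) and attached to an active input bit."""
    return sum(
        1
        for input_bit, permanence in proximal_segment.items()
        if permanence >= connected and input_bit in active_input_bits
    )

# One shared segment for the whole minicolumn: input-bit index -> permanence.
minicolumn_segment = {3: 0.7, 17: 0.4, 42: 0.9}
print(proximal_overlap(minicolumn_segment, {3, 17, 42}))  # 2 (bit 17 not connected)
```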
Actually, all I'm trying to do here is to get a clear mental picture of basic HTM, in consistent and widely accepted terminology. This one thread and its parent have about six different kinds of column and talk about a "cell". To me a cell is a single neurone (of any kind), and equating it to a "column" or a "processing structure" makes no sense.
I took "processing structure" to mean column, as per BAMI:
Column: An HTM region is organized in columns of cells. The SP operates at the column-level, where a column of cells functions as a single computational unit.
Mini-column: See "Column".
I read that to say (a) there is only one kind of column in HTM and (b) the column is the (only) computational unit (structure).
My questions were trying to fix the scale: were we talking about columns of a few neurones or thousands? If the key algorithms in HTM use the SDR then surely the key processing unit is one that deals in SDRs? How many neurones/columns does it take to produce one SDR?
The diagram you quoted is in BAMI and makes perfect sense, but I can't find any such diagram for columns or any higher-level structures. They would really help.
I definitely feel your pain. I have seen many conversations using the term "column" when talking about both minicolumns and cortical columns, despite them being two very different concepts. IMO, the HTM community should abandon use of the word "column" by itself, and always say either minicolumn or cortical column, depending on which concept they are discussing. Of course the difficulty with that is a lot of historic documentation exists, so anyone doing a deep dive into HTM is going to run into this.
Going back to BAMI, the column they are referring to there is the minicolumn. In most applications, this is a collection of 32 neurons (configurable) which all share a receptive field and are able to inhibit each other if they fire before the others.
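That figure corresponds to the cellsPerColumn parameter in NuPIC's TM implementation, e.g. (a sketch; defaults vary by version and application):

```python
# A sketch of the common configuration: 2048 minicolumns of 32 cells each.
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(2048,), cellsPerColumn=32)
```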
In the case of minicolumns, a few, not thousands.
I think that this is slightly the wrong question. A single cell can deal with SDRs, but it obviously takes thousands of cells to represent an SDR. If the question were re-phrased to something like "what is the smallest, repeatable unit in the neocortex", then it would be easy to say the cortical column, not the minicolumn.
That said, I took "lowest processing structure of HTM" to mean the level where learning occurs in the system. From that perspective, I think the lowest processing structure is the neuron (especially since Cairo was making a comparison with neurons in other algorithms). One could even argue that the lowest structure is the segment (since that is where the learning and SDR recognition occur). Ultimately it is probably trying to compare apples to oranges, though: the algorithms are not the same, and such discussions are really only useful insofar as they help to address our misunderstandings of them.
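To make the "learning happens at the segment" view concrete, here is a minimal Hebbian-style sketch of a permanence update (the increment/decrement values are arbitrary placeholders, not Numenta's defaults):

```python
# Minimal sketch of segment-level learning: reinforce synapses to presynaptic
# cells that were active, decay the rest. Values are illustrative placeholders.
PERMANENCE_INC = 0.05
PERMANENCE_DEC = 0.02

def adapt_segment(segment, active_presynaptic_cells):
    """segment maps presynaptic cell id -> permanence (clamped to [0, 1])."""
    for cell, permanence in segment.items():
        if cell in active_presynaptic_cells:
            segment[cell] = min(1.0, permanence + PERMANENCE_INC)
        else:
            segment[cell] = max(0.0, permanence - PERMANENCE_DEC)
```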
Yes, as long as it is not confused with minicolumn. I've mainly seen folks on the forum use "cortical column", "macrocolumn", "hypercolumn", and "region" interchangeably for this concept, as well as attempting to coin some new terms like SUI (Single Unit of Intelligence) and CPU (Cortical Processing Unit). I love the creativity of the folks in this community.
Hey @david.pfx, have you seen the HTM School videos on YouTube? If not, I'd highly recommend this series because of its great dynamic visualizations, showing how the system works overall.
Here's my understanding of the data structures, from largest to smallest (sketched in code after this list):
Macro columns (each w/ some number of layers like L4/L2/L6)
Mini-columns (each w/ one proximal dendrite segment - for SP functioning)
Cells (each w/ up to some max # of distal dendrite segments - for TM functioning)
Distal dendrite segments (each w/ up to some max # of synapses)
Synapses (each w/ a permanence value from 0 to 1 - those whose permanence exceeds the connection threshold are considered connected)
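Here is that hierarchy as bare-bones Python dataclasses (purely illustrative; real implementations like NuPIC use flat arrays for performance):

```python
# Illustrative nesting of HTM data structures, largest to smallest.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Synapse:
    presynaptic_cell: int
    permanence: float        # 0.0 to 1.0

@dataclass
class DistalSegment:         # for TM functioning
    synapses: List[Synapse] = field(default_factory=list)

@dataclass
class Cell:
    distal_segments: List[DistalSegment] = field(default_factory=list)

@dataclass
class MiniColumn:            # one shared proximal segment, for SP functioning
    proximal_segment: Dict[int, float] = field(default_factory=dict)
    cells: List[Cell] = field(default_factory=list)

@dataclass
class Layer:                 # e.g. L4, L2, L6
    minicolumns: List[MiniColumn] = field(default_factory=list)

@dataclass
class MacroColumn:
    layers: Dict[str, Layer] = field(default_factory=dict)
```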
Back in 2013 Jeff Hawkins enunciated 6 principles. Two of those were SM and SDR. That's what got me in.
So if I understand correctly, you identify 3 levels in HTM:
Cell, which recognises (a part of) an input SDR in an SDR context and outputs a single bit (of an SDR)
Mini-column, which recognises a sequence (of inputs in a time-varying context) and outputs a sequence of bits (of a sequence of SDRs)
Maxi-column, which recognises a sequence of input SDRs (in a time-varying context) and outputs a recognition SDR (of that sequence).
Input SDRs are the output of an encoder step. Context SDRs are the result of recognising successive parts of a sequence. Sequence memory lies in the set of context SDRs accumulated by the maxi-column in response to past inputs.
Whether I've got that right or not, I do think BAMI could benefit from a description of that kind, along with a couple of diagrams.
I've watched every video, read every paper (that is about HTM and not neuroscience) and tried all the software. I'm not interested in synapses, dendrites or regions unless those are part of the HTM theory and corresponding software implementation.
As per my other post, it looks like I should be interested in cells, mini-columns and maxi-columns, and any specific reading on that subject would be welcome. A diagram would be even better.
I would tweak this a bit. A minicolumn doesn't recognize a sequence; it only receives feedforward input. So a minicolumn can recognize spatial information from the input space. It is the individual cells within the minicolumn which (in the case of TM) recognize their place in a sequence. Of course I may be splitting hairs by making a distinction between a minicolumn and the cells within that minicolumn.
This is true only in the case of the TM algorithm. In terms of the broader theory, a cortical column does more than just recognize sequences. It is able to recognize objects (the focus of current research) and likely produce output which enables action selection (future research). It most likely also performs feature extraction and association functions. Understanding exactly what a cortical column does is the goal of HTM research.
If you follow Numenta's approach and call out separate "layers" of a cortical column which perform different functions, then you could say a TM layer "recognises a sequence of input SDRs (in a time-varying context) and outputs a recognition SDR (of that sequence)", as you stated.
BTW, diagrams of a cortical column are in constant flux (and parts are missing) as the theory evolves. There are some diagrams in the Framework paper, as well as on SlideShare (for example, see the Thousand Brains Theory slides). However, I think the best way to keep up to date is to follow Numenta's research meetings and talks.
The point of my post was for someone to point out my errors, and provide a corrected version. We're not there yet.
The BAMI summary of terminology does not provide definitions for feedforward or layers. There is nothing in that diagram about cells being able to recognise spatial information, or what that information might be or how it might be encoded, or how a cell might recognise its position in a sequence.
There is always future research, but I just want to know where we are now. What is the current state of HTM theory, what model does it support, what are the algorithms, what can it do?
If the answer is: here it is in 10 pages, go read and understand, that's great, point me to it. If the answer is: we don't know yet but we've got some interesting ideas, then fine, at least I'll know where we're at.
It sounds like you want a simple (10-page?) snapshot that captures the current state of HTM theory and encompasses all terminology with no assumptions of domain knowledge. I don't think you'll have any luck finding such a document. You'll have to do the legwork like the rest of us.
Thanks, @sheiser1, this is not a bad summary of the sequence memory aspect of HTM. I don't think it is what @david.pfx is looking for, though, for a couple of reasons. Firstly, I think BAMI covers all of this already, and the complaint about BAMI was the lack of explanation of all domain terminology ("feedforward" and "layers" as examples). It also doesn't cover any of the aspects of TBT which are more recent additions to the theory.
Hopefully this doesn't come off as a criticism of your post. I'm just pointing out that the sort of documentation that I think is being requested probably doesn't exist, in part because contributing to HTM theory requires a level of domain knowledge that an outsider has to work to develop, and in part because while some elements of the theory are stable, others are evolving quickly.
It's true, I don't think I've seen docs describing that; I must've gotten it from listening to Numenta people like Jeff & Subutai talk.
I know there is a talk where they do contrast the mechanics of HTM and conventional ANNs, but it's true that terms like "feedforward" and "layers" have totally different meanings in HTM vs ANN.
When I hear "feedforward" in HTM, I think of Spatial Pooler activation. The "feedforward" input is ingested by the SP and determines which columns will activate. This is as opposed to the TM deciding which cell(s) in these columns will become predictive, and capable of inhibiting all other cells in their column if that column activates at the next time step. I'm not sure of the term for this kind of input, though I don't think it's "feedback".
So rather than thinking in terms of "feed forward/back/whatever", I think of it as activating inputs (used by the SP's proximal dendrite segments to choose winning columns) and then depolarizing inputs (used by the TM's distal dendrite segments to make certain cells predictive). The SP's proximal segments are connected to the encoding, thus saying "here's what I'm seeing now", while the TM's distal segments are connected to other cells within the "region/layer", thus saying "here's the sequential context in which I'm seeing it". Come to think of it, I'm not sure of the difference between "layer" and "region", or if they're interchangeable.
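In NuPIC terms, that activate-then-depolarize flow looks roughly like this (a sketch only; sizes and parameters are illustrative, and the API may differ between versions):

```python
# Sketch of the two-step flow described above: the SP turns the encoding
# into active columns, then the TM makes certain cells predictive.
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory

sp = SpatialPooler(inputDimensions=(1024,), columnDimensions=(2048,))
tm = TemporalMemory(columnDimensions=(2048,), cellsPerColumn=32)

encoding = np.zeros(1024, dtype="uint32")   # output of an encoder step
encoding[[3, 99, 512]] = 1

# Activating input: proximal segments choose the winning columns.
active_columns = np.zeros(2048, dtype="uint32")
sp.compute(encoding, True, active_columns)

# Depolarizing input: distal segments make certain cells predictive.
tm.compute(sorted(active_columns.nonzero()[0]), learn=True)
predictive_cells = tm.getPredictiveCells()
```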
When I hear "layer" in HTM, I think of one of these SP+TM modules, which has some activating input and some depolarizing input. The activating (SP) input doesn't necessarily have to come from a raw data encoder; it could be an output from another layer. It's just a population for the layer's SP's proximal segments to connect to. Likewise, the depolarizing input doesn't necessarily have to come from other cells in that layer. It's just a population for the region/layer's TM's distal segments to connect to.
The largest data structure in HTM (afaik) is the macro column, which is composed of multiple layers. Here's a repo where different kinds of macro columns are built using the Network API:
Here's an example of one such macro column structure:
We can see that L4 is activated by raw data from "Sensor" and depolarized by outputs from L6 and L2. These links between layers are set in the script by network.link(…).
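For reference, the wiring looks something like this (a hedged sketch; region setup is elided and the link parameters are illustrative):

```python
# Sketch of linking the layers of the macro column described above,
# using the NuPIC Network API. Region creation via addRegion() is elided.
from nupic.engine import Network

network = Network()
# ... regions "Sensor", "L4", "L2", "L6" would be added with network.addRegion(...)

# Activating (feedforward): raw data from the sensor drives L4's SP.
network.link("Sensor", "L4", "UniformLink", "")

# Depolarizing: outputs from L6 and L2 feed into L4's distal segments.
network.link("L6", "L4", "UniformLink", "")
network.link("L2", "L4", "UniformLink", "")
```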
I hope this helps shed more light, or at least leads to useful follow-ups.