Sorry Paul,
It’s likely a matter of academic debate or semantics, but I should clarify my claim. I took another quick look over the HTM School material and the BAMI document, and I am reminded why I was thinking there are no minicolumns in HTM.
In HTM, as I recall, the individual cell that is the lowest processing structure is called a “column”. A column connects itself to the input field and has its own weights. In most algorithms, from optimization-based techniques to Sparsey and other brain-inspired algorithms, this unit is called a neuron.
In Sparsey, an individual unit with weights directly connected to the input is a neuron/cell. In my code it’s a neuron; in Rinkus’ literature I believe he refers to it as a cell, depending on the paper. Neurons that exist in a column together inhibit one another, so only one neuron/cell can be active at a time.
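Just to make the inhibition point concrete, here’s a minimal sketch of that winner-take-all competition within a single column. This is my own toy illustration, not Rinkus’ actual code: I’m assuming a simple dot-product match score and hard winner-take-all, and all the names and sizes are made up.

```python
import numpy as np

def minicolumn_wta(weights, input_vec):
    """Hard winner-take-all within one minicolumn (toy sketch).

    weights: (n_neurons, input_dim) array, one weight vector per neuron.
    input_vec: (input_dim,) array, the shared input to the column.
    Returns a binary activation vector with exactly one active neuron.
    """
    scores = weights @ input_vec            # each neuron's match to the input
    active = np.zeros(len(weights), dtype=int)
    active[np.argmax(scores)] = 1           # mutual inhibition: only the best match fires
    return active

# Toy sizes: 5 neurons in the column, 8-dimensional input.
rng = np.random.default_rng(0)
w = rng.random((5, 8))
x = rng.random(8)
out = minicolumn_wta(w, x)
print(out)  # exactly one 1, the rest 0
```

The point is just that the competition is local to the column: every neuron sees the same input, but inhibition guarantees a single winner, which is what gives Sparsey its one-active-cell-per-column codes.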
Unlike HTM, neurons in a Sparsey column are defined and structured explicitly. In HTM the “columns” have an inhibition radius, which, in an abstract and less structured way, is I suppose the closest thing to a Sparsey minicolumn.
As for the macrocolumn, I’m not aware of any HTM structure that corresponds to Sparsey macrocolumns. Essentially, Sparsey went the route of maintaining a more structured hierarchical model. I had linked Mark to a paper describing some of the reasons for that.
Thanks.