While there are many varieties of plants to identify, my attention was drawn to plants with disease or damage.
I was looking at one of our gardens today and saw paint splatters on some of the leaves of a rhododendron: the new leaves were clean and undamaged, while the older leaves were spotted with paint.
I started to think about the benefits of HTM and its noise tolerance, where damaged leaves could be flagged as damaged and still be identified correctly. The thought of identifying where each leaf was in relation to the other leaves gave me some inspiration about how you would detect spatial positioning.
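That noise tolerance can be illustrated with a minimal sketch. This is not HTM itself; the helper names (`sdr`, `corrupt`) and the damage model are my own assumptions, using the classic 2048-bit / 40-active (~2% sparsity) sizing from Numenta's SDR work. The point is that a "damaged" pattern still overlaps heavily with the original, while an unrelated pattern barely overlaps at all.

```python
import random

def sdr(n_bits=2048, n_active=40, seed=0):
    """Build a random SDR as a frozenset of active bit indices (~2% sparsity)."""
    rng = random.Random(seed)
    return frozenset(rng.sample(range(n_bits), n_active))

def corrupt(active, n_flips, n_bits=2048, seed=1):
    """Damage an SDR: drop n_flips active bits and add n_flips noise bits."""
    rng = random.Random(seed)
    dropped = set(rng.sample(sorted(active), n_flips))
    noise = set()
    while len(noise) < n_flips:
        b = rng.randrange(n_bits)
        if b not in active:
            noise.add(b)
    return (active - dropped) | noise

leaf = sdr(seed=42)
damaged_leaf = corrupt(leaf, n_flips=10)  # 25% of the active bits corrupted
other = sdr(seed=7)                       # an unrelated random pattern

print(len(leaf & damaged_leaf))  # → 30 of 40 bits still match: clearly "leaf"
print(len(leaf & other))         # near-zero overlap with an unrelated SDR
```

Because matching is done by overlap rather than exact equality, the damaged leaf is still recognized as a leaf, which is exactly the property the paint splatters brought to mind.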
The idea of a forward-feeding network with columns of functions that can propagate information to parallel columns in a sparse distribution allows an identification network to expand into a localization, multiple-detection, and segmentation network, without compromising the hierarchy of “This is still a leaf, it is still a rhododendron leaf, it is a damaged leaf, and its position is x,y from the center.”
I can imagine that a backward-propagating network would need a massive data set to determine the same information. I am thinking of HTM functions where I could ask the network questions such as “What do you see?”, “Is it damaged?”, and “What is its spatial position?”
Now, my intuition is aiming at a very large tensor space. To keep the output sparse and still be able to connect many sparse outputs without creating a dense output layer, the number of functions the network can perform depends heavily on the number of layers in a column, and on the density of columns.
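The risk of a dense output layer can be made concrete. The sketch below is illustrative only, again assuming 2048 bits with 40 active (~2%): if one layer simply ORs together the sparse outputs of many columns, the union's density climbs toward 1 − (1 − 0.02)^k for k patterns, and sparsity is lost.

```python
import random

N_BITS, N_ACTIVE = 2048, 40  # classic HTM sizing: 40/2048 ≈ 2% sparsity
rng = random.Random(0)

def random_sdr():
    """One random sparse pattern: a set of N_ACTIVE active bits out of N_BITS."""
    return set(rng.sample(range(N_BITS), N_ACTIVE))

# OR many sparse patterns into one output and watch the density climb
# toward 1 - (1 - 0.02)**k -- the "dense output layer" failure mode.
union, density = set(), {}
for k in range(1, 51):
    union |= random_sdr()
    density[k] = len(union) / N_BITS

print(density[1], density[10], density[50])  # ~0.02, then steadily denser
```

This is why connecting many sparse outputs takes care: either the receiving layer must subsample or re-sparsify (as a spatial pooler does), or the tensor space must be large enough that the combined activity stays a small fraction of it.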
Is there a metric for the functionality of a given layer of a column? As information travels forward and disseminates to the neighbors, is there an overload condition that breaks down the efficacy of the network? I considered an idea like Spatial Pyramid Pooling, but that doesn’t really cover the case of a single layer performing more than one function, or give a metric for how many functions a column can perform, given a single function per layer.
I am curious whether a layer can perform more than one function, or whether the column is the limiting factor on the number of functions. Given parallel feeding, perhaps rows distant from the source column perform the function and the result is overwritten higher up the chain. Picture an hourglass: a tetrahedron under an inverted tetrahedron, joined at the apex. Many of these constructs would exist in the SDR outputs of each layer, and these representations simply move when the instruction is to execute a function that produces a result contextually.
It was described how the output of the motor functions and the input of sensory data are fed back into the cortex. It obviously doesn’t happen at the sensory input layer, or the senses would be diluted. At what depth does the structure feed back the copy of the actionable information? You implied this: it allows back propagation, but not down to the input level of the cortex. When the old brain sends the copies, how does it know which layers are receiving the information? Are those instructions also encoded in the SDR, specifying which layer(s) receive the modified SDR?
And I had to think for a minute: what is orchestrating the execution pathway? It still has to be the neocortex. It gets an SDR from an input, and the SDR is propagated directly to the assumed correct processing location in the membrane. Along the pathway, the executing instruction is probably encoded within the SDR. The range of an SDR should be fairly massive; I can see now why 2% sparsity is a target, and also how it could grow to larger percentages as activity moves parallel and forward through the hierarchy.
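The range really is massive. As a back-of-the-envelope check (the 2048/40 sizing is my assumption, matching the examples above), the number of distinct patterns with exactly 40 of 2048 bits active is a binomial coefficient far beyond anything the network could ever exhaust:

```python
import math

n_bits, n_active = 2048, 40
# Distinct SDRs with exactly 40 of 2048 bits on: C(2048, 40),
# roughly 10^84 unique patterns -- plenty of room to encode
# identity, damage state, position, and instructions at once.
capacity = math.comb(n_bits, n_active)
print(f"{capacity:.2e}")
```

With that much headroom, encoding executing instructions inside the SDR alongside the sensory content does not seem to run into a capacity limit; the limits would come from the matching machinery, not the code space.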
I wonder about the limitations of the parallel movement of instructions if they are actually encoded in the SDR. It must relate to the density of the columns and of each column’s neighbors.
Not all columns will develop to the same density; that is part of the learning experience.
All of these ideas come from looking at paint splatters on a leaf.