Reading out patterns in neural cortex simulations

This is a vague question, more on implementation than theory, but I think it has bearing on HTM theory too.
Suppose I want to use a neural block for training on regular neural-net problems. For instance, back in the days when deep learning wasn't around but basic backprop was, my company would test our nets on problems like 'exclusive or', or more complicated versions of it (for instance, output a 1 if the number of inputs that are on is an even number).

Let's say I was using HTM theory for a simple XOR problem. I could present the inputs plus the known output as a sequence to a temporal pooler. For instance, I could present first a zero, then a one, and then the output, a one again. Then I would reset, and I could present a one, another one, and then the output, a zero.

So once the block knows the sequences, suppose I want to read out the result of putting in a pattern such as two consecutive ones. How would I do that? Could the readout be of the input layer itself? What I mean is, I would know which cells were predicted, and I could look at which input bits project to them and try to guess what input value that corresponds to. I know the above is vague, but the basic question is: how do you read out patterns in a meaningful way?
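To make the question concrete, here is a toy stand-in for the setup I mean: memorize (history-since-reset → next symbol) transitions, then query with a new prefix. A real temporal pooler would do this with SDRs and per-cell context rather than an explicit history table, so this is only an illustration of the training/readout shape, not HTM itself.

```python
from collections import defaultdict

# For each context (the symbols seen since the last reset), remember which
# symbols have followed it.
transitions = defaultdict(set)

def train(sequence):
    history = ()
    for symbol in sequence:
        transitions[history].add(symbol)   # learn: context -> next symbol
        history += (symbol,)

def predict(prefix):
    """The 'readout': what the memory predicts after seeing this prefix."""
    return transitions.get(tuple(prefix), set())

# The four XOR sequences: input a, input b, then the answer a XOR b.
for seq in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    train(seq)

print(predict([1, 1]))   # -> {0}, i.e. 1 XOR 1 = 0
```

Here the "readout" is just the predicted next element at the final step; my question is how to get something equivalent out of a temporal pooler's predicted cells.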
There are also the autoencoders that neural nets sometimes use: you feed a pattern (for instance a 2D array of pixels from an image) into a backprop-type net, push it through a bottleneck layer that has only a few hidden nodes, and use those nodes' outputs to recreate the input image or pattern, hopefully keeping its more important aspects.
Could you do that with a spatial pooler? In other words, would presenting patterns to the spatial pooler allow it to recreate the essentials of those patterns? How would that ‘readout’ work?
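For what it's worth, here is a minimal numpy sketch of the readout I have in mind, under big assumptions: a toy pooler with fixed random connected synapses and k-winners-take-all (no learning, no boosting), where "decoding" just projects the active columns back through the same synapse matrix and keeps the most-voted input bits. All names and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a spatial pooler: W[c, i] == 1 means column c has a
# connected synapse to input bit i.
n_inputs, n_columns, k = 256, 512, 20
W = (rng.random((n_columns, n_inputs)) < 0.15).astype(np.int32)

def sp_encode(x, k=k):
    overlaps = W @ x                              # feedforward overlap per column
    sdr = np.zeros(n_columns, dtype=np.int32)
    sdr[np.argsort(overlaps)[-k:]] = 1            # k most-overlapping columns win
    return sdr

def sp_decode(sdr, n_active):
    votes = W.T @ sdr                             # active columns vote on input bits
    guess = np.zeros(n_inputs, dtype=np.int32)
    guess[np.argsort(votes)[-n_active:]] = 1      # keep the most-supported bits
    return guess

x = np.zeros(n_inputs, dtype=np.int32)
x[rng.choice(n_inputs, 12, replace=False)] = 1    # a random 12-bit input SDR
x_hat = sp_decode(sp_encode(x), n_active=12)
print("bits recovered:", int(np.sum(x * x_hat)), "of", int(x.sum()))
```

Even with random, untrained synapses this recovers most of the input bits, because the winning columns were chosen for their overlap with the input; a trained pooler should presumably do better.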
Finally, one of the problems that neural nets have trouble with is 'maximum' or 'minimum' problems. It is hard to make a net that finds the maximum or minimum of a series of numbers. If we put this into a temporal pooler, would it be able to solve the problem?
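Just to make the framing concrete, the training data for the temporal-pooler version would presumably look like the sketch below, with a reset after each sequence. Whether the pooler would learn the rule rather than memorize each specific sequence is exactly the question.

```python
import random

random.seed(0)

def make_training_sequence(length=3, lo=0, hi=9):
    """One training sequence: the numbers, then the 'answer' element."""
    xs = [random.randint(lo, hi) for _ in range(length)]
    return xs + [max(xs)]

for _ in range(3):
    print(make_training_sequence())   # last element is the max of the rest
```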
Thanks.

This is a bit dated because we no longer have the CLA Classifier (we use the SDR Classifier now), but still relevant theoretically.
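In case it helps: at its core, the SDR Classifier is a single-layer feed-forward network with a softmax output, trained online to map the active cells to a distribution over output buckets. Here is a from-scratch numpy sketch of just that mechanism (not the NuPIC implementation; the cell SDRs and sizes below are made up):

```python
import numpy as np

n_cells, n_buckets, lr = 1024, 2, 0.1
weights = np.zeros((n_buckets, n_cells))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(active_cells):
    """Distribution over output buckets given the active cells."""
    return softmax(weights[:, active_cells].sum(axis=1))

def learn(active_cells, bucket):
    """One online update: move probability mass toward the true bucket."""
    error = -infer(active_cells)
    error[bucket] += 1.0                       # one-hot target minus prediction
    weights[:, active_cells] += lr * error[:, None]

# Made-up cell SDRs standing in for the temporal memory's active cells
# at the "answer" step of the two XOR outcomes.
rng = np.random.default_rng(2)
sdr_for = {0: rng.choice(n_cells, 40, replace=False),
           1: rng.choice(n_cells, 40, replace=False)}
for _ in range(50):
    for label, cells in sdr_for.items():
        learn(cells, label)

print(infer(sdr_for[1]))   # mass should concentrate on bucket 1
```

So the readout question in the original post is answered by training a small classifier alongside the temporal memory, rather than by inspecting the columns directly.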


Thanks for the video and link. I did have an idea, which I may try out if I have time, based on one of Geoffrey Hinton's: since in the brain lower levels project to higher levels, which project back to the lower levels, you can let the two equilibrate.

In the case of SDRs, layer 1 would be the encoder and layer 2 the spatial pooler (or temporal pooler). Once the temporal pooler makes a prediction, you freeze any further movement through the sequence and project the predicted columns back to the input nodes (the bits in the encoder). Some bits may receive support from more than one active column, and some bits, if on, would project upward to columns that are inactive. Alternating these passes could eventually prune away noise, and you might end up with an encoded SDR that tells you what number (if a scalar is being encoded) has just been predicted. I'm vague on this and have to think about it more; maybe I'm getting the basics wrong.
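To pin the idea down for myself, here is a rough numpy sketch of the settling step, with everything made up (the synapse matrix, the quorum, the toy "learned" columns): alternate a top-down pass, where predicted columns vote on encoder bits, with a bottom-up pass, where the surviving bits support columns, then match the settled bits against the encoder's known scalar encodings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_columns = 128, 256

# Two made-up scalar encodings, and a pooler whose columns have "learned"
# one encoding each (8 synapses onto that encoding's bits, plus 2% noise).
encodings = {v: np.zeros(n_inputs, dtype=bool) for v in (0, 1)}
encodings[0][:20] = True
encodings[1][60:80] = True
W = rng.random((n_columns, n_inputs)) < 0.02          # noise synapses
owners = rng.integers(0, 2, n_columns)                # which value each column learned
for c in range(n_columns):
    learned_bits = np.flatnonzero(encodings[owners[c]])
    W[c, rng.choice(learned_bits, 8, replace=False)] = True

def settle(predicted_columns, steps=3, bit_quorum=0.3):
    """Alternate top-down and bottom-up passes: predicted columns vote on
    encoder bits; bits that reach a quorum stay on; columns that lose all
    support drop out. Repeating this prunes away noise."""
    cols = np.zeros(n_columns, dtype=bool)
    cols[predicted_columns] = True
    for _ in range(steps):
        votes = W.T @ cols.astype(np.int32)           # columns -> encoder bits
        bits = votes >= bit_quorum * cols.sum()       # keep well-supported bits
        support = W @ bits.astype(np.int32)           # bits -> columns
        cols = cols & (support > 0)                   # drop unsupported columns
    return bits

# If the temporal pooler predicted the columns that learned the value 1,
# settling should recover (roughly) the encoding of 1.
predicted = np.flatnonzero(owners == 1)
bits = settle(predicted)
overlaps = {v: int(np.sum(bits & enc)) for v, enc in encodings.items()}
print("overlap with each encoding:", overlaps)        # value 1 should win
```

In a real system the encodings and the synapse matrix would come from the actual encoder and pooler, and the quorum would need tuning; the toy structure above is just there to make the settling behavior visible.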