I am not sure if this question is welcome in this forum, but I will ask anyway:
I have been working through semantic folding and struggling to come up with an HTM / biologically plausible method to form the retina map. HTM's processing space is sequential processing of topologically arranged data, whereas much of the semantic folding algorithm seems to be based on manipulating the spatial distribution of the intermediate representation of the data being processed.
The API is awesome and all that, but I don't see how to do that with free-learning HTMs. The cortical.io paper clearly says that this is a complementary theory, but I would like to see if it is possible to unify these two awesomely useful tools.
I think that solving this would go a long way towards creating & processing location data for free-moving robots.
As I have been considering this problem, I keep coming back to contemplating the interactions between the various cortical layers. The basic HTM function is a combination of coincidence detection, the formation of predictions, and the detection of novelty. This is basically a logical OR of the products of perceptrons. We already expect the inhibitory cells around a column to facilitate voting and prediction in individual columns. Could the layers work together with the surrounding inhibitory cells to compute other logical functions, such as NAND of patterns? Negation is one of the basic requirements for Turing-complete computation. I can see that arising as an outcome of learning patterns; when I was teaching digital logic we formed all logical functions with memory, but that seems strained as a basic computation method.
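To make the "logic functions with memory" point concrete, here is a minimal toy sketch of how a digital-logic course builds gates from lookup tables (a ROM addressed by the inputs). This is my own illustration, not anything from the HTM or cortical.io material; the function names are invented for the example. It also shows why NAND matters for the negation question: once you have NAND (here, stored in memory), every other gate can be composed from it.

```python
def make_gate(truth_table):
    """Return a two-input gate backed by a memory (dict) keyed on the inputs."""
    return lambda a, b: truth_table[(a, b)]

# NAND stored as memory; NAND is functionally complete.
nand = make_gate({(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0})

# Every other logical function can then be composed from NAND alone:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

print(nand(1, 1))  # 0
print(or_(0, 1))   # 1
print(and_(1, 1))  # 1
```

The strain I mention above is visible even here: the "computation" is really just stored patterns plus composition, which is exactly what makes it feel unsatisfying as a model of basic cortical computation.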
More on the M of HTM.