I have a different but related question.
With auto-associative memory, I feed in part of a pattern and the network fills in the rest. I think of bidirectional associative memory as an extension of auto-associative memory that fills in the output pattern as it completes the input.
This is important to the image recognition task being discussed in a different thread.
I understand that an important part of HTM is recognizing and filling in the next part(s) of a learned sequence at the individual-neuron level. For this question, though, I am more concerned with a static pattern.
Can a map of SDRs recognize a learned spatial pattern? If so, how does it signal that the pattern is a known one?
How about a partial match?
Can HTMs reconstruct a pattern from a partial match?
Can I get some sort of signal to indicate a match on part of the input?
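To make the kind of signal I'm asking about concrete, here is roughly what I picture, sketched in plain Python (not any real HTM library — the function names and thresholds are just my assumptions): treat overlap between a stored SDR and the current input as the match signal, and threshold it into full / partial / no match.

```python
# Illustrative sketch only -- not a real HTM/Numenta API.
# SDRs are represented as sets of active-bit indices.

def overlap(sdr_a, sdr_b):
    """Number of active bits shared by two SDRs."""
    return len(set(sdr_a) & set(sdr_b))

def match_signal(stored, observed, full_threshold=0.9, partial_threshold=0.3):
    """Classify an observed SDR against a stored one by overlap fraction."""
    frac = overlap(stored, observed) / len(set(stored))
    if frac >= full_threshold:
        return "full match", frac
    if frac >= partial_threshold:
        return "partial match", frac
    return "no match", frac

stored = {3, 17, 42, 77, 100, 150, 201, 333, 404, 512}
partial = {3, 17, 42, 77}            # only part of the pattern is seen
label, frac = match_signal(stored, partial)
print(label, frac)                    # partial match 0.4
```

Is an overlap fraction like this anywhere close to how a region of SDRs would actually report a partial match?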
To form a test case for discussion:
Assume that I have a camera pointing out the front of my robot. It gives me a 2D array of data points that I can feed (somehow) to an array of SDR/HTM units. How do I learn this static pattern? If it helps, I can do edge detection and thresholding first.
If I drive around and come back to this point, how do I know whether the current view matches some or all of the pattern I learned?
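To pin down the test case, here is the simplest version of what I have in mind, in plain Python/NumPy (again, hypothetical names, not a real HTM implementation): binarize each frame into an SDR, store it the first time through, and on return score the new frame against everything stored.

```python
import numpy as np

def frame_to_sdr(frame, threshold=128):
    """Binarize a 2D grayscale frame into a set of active-bit indices."""
    return set(np.flatnonzero(frame.ravel() >= threshold).tolist())

class PatternStore:
    """Toy associative store: remembers SDRs, reports best overlap on recall."""
    def __init__(self):
        self.patterns = []

    def learn(self, sdr):
        self.patterns.append(sdr)

    def recall(self, sdr):
        """Return (best_index, fraction of that stored pattern matched)."""
        if not self.patterns:
            return None, 0.0
        scores = [len(p & sdr) / max(len(p), 1) for p in self.patterns]
        best = int(np.argmax(scores))
        return best, scores[best]

store = PatternStore()
frame = np.zeros((4, 4), dtype=np.uint8)
frame[0, :] = 200                     # a bright top edge in the camera view
store.learn(frame_to_sdr(frame))

revisit = frame.copy()
revisit[0, 3] = 0                     # came back; one pixel now occluded
idx, frac = store.recall(frame_to_sdr(revisit))
print(idx, frac)                      # 0 0.75
```

That `frac` value is the "how much of this have I learned?" signal I'm after. What I don't know is whether a spatial pooler over the camera array gives me anything equivalent, and whether HTM could then reconstruct the missing part of the pattern rather than just score it.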