In an auto-associative memory, I feed in part of a pattern and the network fills in the rest. I consider a bidirectional associative memory an extension of auto-associative memory that fills in an associated output pattern as it completes the input.
This is important to the image recognition task being discussed in a different thread.
I understand that an important part of HTM is to recognize and fill in the next part(s) of a learned sequence on an individual neuron level. I am more concerned with a static pattern for this question.
Can a map of SDRs recognize a learned spatial pattern? If so - how does it signal that it is a known pattern?
How about a partial match?
Can HTMs reconstruct a pattern from a partial match?
Can I get some sort of signal to indicate a match on part of the input?
To form a test case for discussion:
Assume that I have a camera pointing to the front of my robot. It gives me a 2d array of data points that I can feed (somehow) to an array of SDR/HTM units. How do I learn this static pattern? If it helps I can do edge detection and thresholding first.
If I drive around and come back to this point how do I know if I have learned some or all of this new pattern?
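To make the questions above concrete, here is a minimal sketch (my own construction, not NuPIC or any official HTM API) of how an overlap score on a stored SDR could double as the "known pattern" signal, with a threshold band distinguishing full from partial matches. The sizes and thresholds are arbitrary assumptions.

```python
# Sketch: signaling full vs. partial matches on a stored static pattern.
# Assumes the camera frame has already been encoded into a binary SDR;
# names, sizes, and thresholds here are hypothetical, not any HTM library's API.
import numpy as np

rng = np.random.default_rng(42)

def random_sdr(size=2048, active=40):
    sdr = np.zeros(size, dtype=bool)
    sdr[rng.choice(size, active, replace=False)] = True
    return sdr

stored = random_sdr()   # the learned "scene in front of the robot"

def match_signal(probe, stored, full_thresh=0.8, partial_thresh=0.3):
    """Fraction of the stored pattern's bits present in the probe."""
    overlap = np.count_nonzero(probe & stored) / np.count_nonzero(stored)
    if overlap >= full_thresh:
        return "full match", overlap
    if overlap >= partial_thresh:
        return "partial match", overlap
    return "no match", overlap

# Same scene again -> full match
print(match_signal(stored.copy(), stored))

# Half the bits knocked out (occlusion, viewpoint change) -> partial match
probe = stored.copy()
on = np.flatnonzero(probe)
probe[on[: len(on) // 2]] = False
print(match_signal(probe, stored))
```

Reconstruction from a partial match would then be a separate step: the bits of `stored` that the probe is missing are exactly `stored & ~probe`.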
I think one layer of neurons can learn a spatial pattern. If each segment is an SDR, and a bit in an SDR represents a synapse, then an HTM neuron can be defined as:
a list of proximal segments
a list of distal segments
So I guess the right nomenclature would be “a map of lists of SDRs”? (and don’t forget the permanences, too)
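In code, the "map of lists of SDRs" plus permanences might look like the sketch below. The class and field names are mine, not official HTM nomenclature, and the connected-permanence threshold of 0.5 is just a conventional placeholder.

```python
# Minimal sketch of the structure described above: a neuron holding lists of
# proximal and distal segments, each segment a set of synapses with permanences.
# Names and thresholds are illustrative, not official HTM nomenclature.
from dataclasses import dataclass, field

@dataclass
class Synapse:
    presynaptic_cell: int   # index of the input bit / cell this synapse sees
    permanence: float       # 0.0..1.0; "connected" when >= threshold

@dataclass
class Segment:
    synapses: list = field(default_factory=list)

    def active_synapses(self, input_bits, connected=0.5):
        return [s for s in self.synapses
                if s.permanence >= connected and s.presynaptic_cell in input_bits]

@dataclass
class HTMNeuron:
    proximal_segments: list = field(default_factory=list)  # feedforward input
    distal_segments: list = field(default_factory=list)    # context / prediction

seg = Segment([Synapse(3, 0.6), Synapse(7, 0.4), Synapse(9, 0.7)])
neuron = HTMNeuron(proximal_segments=[seg])
# The 0.4-permanence synapse is below the connected threshold, so only two fire.
print(len(seg.active_synapses({3, 7, 9})))
```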
But they must all work together, exhibiting the same properties and processes; the learning is a population effect across many neurons.
When we process sensory input associated with an object in the world, we store these sensations as part of the object's representation in these "maps of lists of SDRs". Given the union properties of SDRs, if we could produce an SDR representing a similar sensation at a similar location (sort of like a search parameter), it should match against the union of our known sensations and uncover objects with similar features. This could be done with or without the location, though the search will be better with it.
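The union property mentioned above can be sketched numerically: OR the stored sensation SDRs into one union SDR, then check what fraction of a probe's bits fall inside it. All sizes and the 0.9 threshold are assumptions of mine, chosen so false positives stay rare at these densities.

```python
# Sketch of the union idea: a probe "matches against the union of known
# sensations" when nearly all of its active bits are present in the union.
# Sizes, counts, and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
SIZE, ACTIVE = 2048, 40

def random_sdr():
    sdr = np.zeros(SIZE, dtype=bool)
    sdr[rng.choice(SIZE, ACTIVE, replace=False)] = True
    return sdr

sensations = [random_sdr() for _ in range(20)]   # features stored for an object
union = np.logical_or.reduce(sensations)

def probe_matches(probe, union, thresh=0.9):
    return np.count_nonzero(probe & union) / np.count_nonzero(probe) >= thresh

print(probe_matches(sensations[5], union))   # a stored sensation -> True
print(probe_matches(random_sdr(), union))    # a novel sensation -> almost surely False
```

This also shows why the union trick works without exhaustive search: the union stays fairly sparse, so a random 40-bit probe is extremely unlikely to land 90% of its bits inside it by chance.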
Perhaps there is a very different way to address this. Assume a projection of the sensed pattern onto a field of topographically arranged neuron proximal connections. I am open to a variety of activation rules, sparsity operations with inhibitory cells, columns - both major & minor, and some Hebbian local learning. The cells sense matches and predict sensed patterns.
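One toy way to realize that description: a field of cells with proximal weights onto the input, a k-winners-take-all step standing in for the inhibitory cells enforcing sparsity, and simple Hebbian updates for the local learning. Every parameter below is an illustrative assumption, not a claim about how HTM implements this.

```python
# Toy sketch: projection onto a field of neuron proximal connections,
# k-winners-take-all as the sparsity/inhibition operation, Hebbian learning.
# All sizes and learning rates are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_CELLS, K = 256, 128, 8              # input bits, cells, winners per step

weights = rng.random((N_CELLS, N_IN)) * 0.1  # proximal "permanences"

def step(input_bits, learn=True, inc=0.05, dec=0.01):
    x = np.zeros(N_IN)
    x[list(input_bits)] = 1.0
    overlap = weights @ x                    # proximal activation
    winners = np.argsort(overlap)[-K:]       # k-WTA: only K cells stay active
    if learn:                                # Hebbian: winners move toward input
        weights[winners] += np.where(x > 0, inc, -dec)
        np.clip(weights, 0.0, 1.0, out=weights)
    return set(winners.tolist())

pattern = set(rng.choice(N_IN, 20, replace=False).tolist())
first = step(pattern)
for _ in range(10):
    step(pattern)                            # repeated exposure to the same pattern
again = step(pattern, learn=False)
print(len(first & again) / K)                # stability of the learned sparse code
```

After repeated exposure, the same cells keep winning for this input, which is the "cells sense matches" part; prediction of sensed patterns would need the distal/temporal machinery on top of this.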
Assume that there are projections to a "thalamus" layer with rather diffuse connections, just as there are in the human brain.
If the thalamus receives projections from a given topographical area, its input amounts to an integration of the overall level of match within that area. The thalamic feedback to the cells could be tonic, and the spatial pattern of thalamic activation would be a map of sensed matching activity. I could see this being used as part of a training schedule.
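A toy version of that integration: sum per-cell match activity over each topographic patch to get the thalamic activity map, then broadcast a tonic gain back to every cell in the patch. The grid size, patch size, and gain function are all arbitrary assumptions of mine.

```python
# Toy sketch: a "thalamus" integrating the overall level of match per
# topographic patch, then feeding back a tonic gain map to the cells.
# Grid/patch sizes and the gain rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
GRID, PATCH = 32, 8                       # 32x32 cells, 8x8-cell thalamic patches

match = rng.random((GRID, GRID)) < 0.05   # per-cell "sensed a match" bits (noise)
match[8:16, 8:16] |= rng.random((8, 8)) < 0.6   # one well-matched region

# Thalamus: integrate match level per patch -> 4x4 map of matching activity
patches = match.reshape(GRID // PATCH, PATCH, GRID // PATCH, PATCH)
activity = patches.mean(axis=(1, 3))

# Tonic feedback: one gain value broadcast back to every cell in its patch
gain = 1.0 + activity                     # hypothetical tonic modulation rule
feedback = np.kron(gain, np.ones((PATCH, PATCH)))

print(np.round(activity, 2))              # the well-matched patch stands out
print(feedback.shape)                     # one gain value per cell
```

The `activity` map here is exactly the thing that could be relayed onward to gate adjacent maps, since a strongly matching patch is visible as a single hot entry.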
This activation map could be relayed to gate activation in adjacent maps, attempting a link to a related dimension's mapping. If a pattern in the two gated areas resonated in the activated related map, there should be some level of mapping between the two stored patterns. Learning in both maps would strengthen the links between these two sparse patterns.
The sweetness here is that more than one pattern could be active at a time. I suspect you could prove that with spatial coding (topographically organized maps), unrelated activation areas are orthogonal and spatially separated by definition.
I saw no provisions for using the reciprocal loops to structures that are known to be connected to the cortex, but perhaps this should be part of the HTM canon. This is, after all, supposed to be based on modeling the human brain.
It’s not hard to extend this to any of the huge numbers of connections that are currently described with a cortico-XYZ (you pick the area) word.
I have a viewpoint that you may find interesting: "making a probe SDR" is useful from a tinkerer's point of view, but from a system point of view the streams of tokens in a functioning system must be compatible. I think this implies a requirement to consider encode/decode streams in any system design (debugging and all that). If I am going to use it to process images, then I must have some sort of coding and access methods worked out as a viable answer. The desired answer drives the framing of the question.
@rhyolight Yes, there are both direct area connections and spreading /hierarchy connections.
The “Three visual streams” paper does outline the pulvinar/cortex connections but this is buried pretty deeply in the paper.
This paper goes more directly to these connections, but it starts out with a dense presentation on the three-dimensional layout of the thalamus before getting to the connections:
If you go directly to figure 10 you get details of direct and indirect connections.