I have been pointing out that all early image recognition uses local features, as there are no map-wide connections in the cortex, only local ones. Also, convolution is not biologically plausible.
It seems that the deep learning crowd is discovering these facts.
Maybe there is something to be learned by looking at how the brain works?
I’ve been thinking about the idea you’ve presented before about focusing on features, using something like tent-poles under a circus tent. With that in mind, I happened to run into the tutorial/documentation below:
Feels like a combination of this approach might be useful for deciding where a system should pay attention in the first place, then deriving features from that.
Maybe it’s something, or maybe it’s nothing at all. But using the local ‘peaks’ generated with the above method, then conducting local feature recognition on those, might be an interesting approach to object segmentation + recognition.
If you use the subcortical structures as inspiration, a small non-HTM network is allowable to do a global search for the peaks and feed that back as input to direct attention.
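To make the idea concrete, here is a minimal NumPy sketch of what that small non-HTM "global search" could look like: a brute-force scan for local maxima in an activity map, returning the strongest few as attention targets. The function name, the neighborhood-window test, and the threshold parameter are all my own choices for illustration, not anything from HTM itself.

```python
import numpy as np

def top_k_peaks(activity, k=3, window=1, threshold=0.0):
    """Toy global peak search over a 2-D activity map.

    A point counts as a peak if it exceeds `threshold` and is the
    maximum of its (2*window+1) x (2*window+1) neighborhood.
    Returns up to k (row, col) coordinates, strongest first, which
    could be fed back as input to direct attention.
    """
    h, w = activity.shape
    peaks = []
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - window), min(h, i + window + 1)
            j0, j1 = max(0, j - window), min(w, j + window + 1)
            if activity[i, j] > threshold and \
               activity[i, j] == activity[i0:i1, j0:j1].max():
                peaks.append((activity[i, j], i, j))
    peaks.sort(reverse=True)  # strongest activity first
    return [(i, j) for _, i, j in peaks[:k]]
```

The O(h*w) scan is deliberately dumb; the point is only that a cheap, separate network or routine can do the global search and hand coordinates back to the local-feature machinery.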
Are you willing to move the selected areas of activity to a central window and do temporal recognition of the hot-spots as you cycle through them? The brain moves them by moving the eye.
For a different, completely unrelated project, I’m already doing something similar, cycling through hotspots: given the statistical max of a matrix (image), find and extract all regions within one standard deviation of it (dropping the rest). I suppose it could be extended to step down to the next deviation.
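For what it’s worth, a minimal NumPy sketch of that extraction step (the function name and the choice to zero-fill the dropped pixels are mine, not details from the actual project):

```python
import numpy as np

def extract_hotspots(img, n_devs=1.0):
    """Keep only pixels within `n_devs` standard deviations of the
    image maximum; zero out everything else.

    Returns the masked image and the boolean hotspot mask.
    """
    img = np.asarray(img, dtype=float)
    threshold = img.max() - n_devs * img.std()
    mask = img >= threshold
    return np.where(mask, img, 0.0), mask
```

Stepping "the next deviation down" would just mean calling it again with a larger `n_devs` and looking at the newly admitted pixels.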
For my project, what I do with the high points is more statistical, for the sake of CPU (it’s a tiny, embedded computing platform). I’m hoping to get to a stopping point on that project in the next couple of hours as well. Ideas from it could definitely flow into an attention mechanism for HTM.