@rhyolight: I was thinking about generalization, and came up with the following tweak to the temporal pooling algorithm.
Currently, for every new object, the algorithm requires that you select an arbitrary subset of cells in the output layer to be activated. This subset stays active while all of that object's feature/location pairs are learned.
In my proposed change, you would still do that, but you would also activate a few cells in the output layer based on existing connections between previously learned objects and the features in the input layer.
For example, suppose object #1 and object #2 share a feature “f1”. Suppose the system has already learned object #1, so a few cells in the output layer have learned patterns on their proximal dendrites that overlap with f1.
Now you present object #2.
Instead of being completely arbitrary in the cells you select, you would be partially arbitrary. You would select N cells arbitrarily, but you would also activate (some of) the cells that were activated by feature f1 in object #1.
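To make the selection step concrete, here is a rough Python sketch of what I have in mind. Everything in it (names like `proximal_permanences`, `num_arbitrary`, `num_from_overlap`, the thresholds) is a placeholder of my own invention, not taken from the actual implementation:

```python
import numpy as np

def select_output_cells(feature_sdr, proximal_permanences, num_cells,
                        num_arbitrary=30, num_from_overlap=10,
                        connected_threshold=0.5, rng=None):
    """Pick the output-layer representation for a new object.

    Mostly arbitrary, but biased toward cells whose proximal segments
    already connect to the currently active input feature (e.g. f1).
    All names and parameters here are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()

    # Cells that already respond to this feature through learned proximal synapses
    connected = proximal_permanences >= connected_threshold      # (num_cells, input_size) bool
    overlaps = connected[:, feature_sdr].sum(axis=1)             # overlap with the active feature bits
    feature_driven = np.argsort(overlaps)[::-1][:num_from_overlap]
    feature_driven = feature_driven[overlaps[feature_driven] > 0]

    # The rest of the representation is chosen arbitrarily, as in the current algorithm
    remaining = np.setdiff1d(np.arange(num_cells), feature_driven)
    arbitrary = rng.choice(remaining, size=num_arbitrary, replace=False)

    return np.union1d(arbitrary, feature_driven)
```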
The learning process you already have, if I understand it correctly, strengthens the interconnections among the arbitrarily selected cells in the output layer for a particular object. It could also strengthen connections between those cells and a few of the cells that respond to f1.
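In other words, the existing learning step could be applied unchanged to the whole union of cells, so the feature-driven cells get partly tied into the new object's representation as well. A minimal Hebbian-style sketch of that idea, again with made-up names (`lateral_permanences` would be a `(num_cells, num_cells)` NumPy matrix):

```python
def strengthen_lateral_connections(active_cells, lateral_permanences,
                                   increment=0.02, max_perm=1.0):
    """Hebbian-style update: cells that are active together for this object
    strengthen their pairwise lateral connections, including the
    feature-driven cells shared with previously learned objects."""
    for pre in active_cells:
        for post in active_cells:
            if pre != post:
                lateral_permanences[pre, post] = min(
                    max_perm, lateral_permanences[pre, post] + increment)
    return lateral_permanences
```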
I’m not sure this would work, but I can give a reason to try it out:
There are different types of generalization you could aim for:
1. generalization over rotation
2. generalization over translation
3. generalization over size differences
4. generalization over similarity between objects.
The above idea would not help with types 1 through 3, but it might help with #4.
This assumes that generalization has something to do with representations of similar concepts sharing more active cells than representations of dissimilar concepts do.
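If that assumption holds, the notion of “similarity” could be measured simply as overlap between the two output-layer representations. A toy illustration (not part of the algorithm itself):

```python
def sdr_overlap(cells_a, cells_b):
    """Number of output cells two object representations share;
    higher overlap = more similar, under the assumption above."""
    return len(set(cells_a) & set(cells_b))

# e.g. the representations of object #1 and object #2 would share the
# feature-driven cells selected via f1, so their overlap would be > 0.
```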
What do you (or other thread viewers) think?