Dense Associative Memories and Deep Learning: a generalization of Hopfield networks

I see Dmitry Krotov is still investigating variants of the Hopfield network:

https://youtu.be/lvuAU_3t134

The main reason I watched the video is that I am interested in the attractor-state information.


You can build systems around a completely random error-correcting autoassociative memory, that is, one whose attractor states are just random patterns. A poor-quality (non error-correcting) associative memory then learns the association between an input and one of those random attractor states, and a further association takes the cleaned-up attractor state to a classification.
That gives three association layers: input to the random autoassociative memory, error-correcting autoassociation from the noisy code back onto a random attractor, and then random attractor to classification.
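Here is a minimal NumPy sketch of that three-layer arrangement, just to make the data flow concrete. It is my own toy construction, not anything from the video: the dimensions, the outer-product learning rules, and the sign-update cleanup dynamics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: large input, small random code layer, a handful of memories.
D_IN, D_CODE, N_CLASSES, N_MEM = 4096, 256, 10, 20

# Completely random bipolar attractor states, one per stored memory, plus a class each.
attractors = rng.choice([-1.0, 1.0], size=(N_MEM, D_CODE))
labels = rng.integers(0, N_CLASSES, size=N_MEM)
inputs = rng.choice([-1.0, 1.0], size=(N_MEM, D_IN))   # stand-in training inputs

def sgn(h):
    return np.where(h >= 0.0, 1.0, -1.0)   # sign with ties broken toward +1

# Layer 1: poor-quality (non error-correcting) association, input -> random attractor.
W_in = attractors.T @ inputs / D_IN          # one-shot outer-product map

# Layer 2: error-correcting autoassociation over the random attractors
# (classic Hopfield outer-product weights with zeroed diagonal).
W_auto = attractors.T @ attractors / D_CODE
np.fill_diagonal(W_auto, 0.0)

# Layer 3: association from the cleaned-up attractor state to a class label.
W_out = np.zeros((N_CLASSES, D_CODE))
for a, y in zip(attractors, labels):
    W_out[y] += a / D_CODE

def classify(x, steps=5):
    s = sgn(W_in @ x)            # rough, noisy projection onto the random code layer
    for _ in range(steps):       # error correction: settle toward a random attractor
        s = sgn(W_auto @ s)
    return int(np.argmax(W_out @ s))

# A corrupted version of a stored input should still reach the right class.
x = inputs[0].copy()
flip = rng.choice(D_IN, size=D_IN // 10, replace=False)
x[flip] *= -1
print(classify(x), labels[0])
```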
There are a couple of advantages to that.
1/ The poor-quality initial association has a broad decision region, which might help with generalization and definitely helps reduce cross-talk between memories.
2/ The random associative memory in the middle can have a much smaller dimension than the input. If the input is 65536 pixels, the random associator could have dimension 256, allowing much more storage for a given amount of computer memory.
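To put rough numbers on point 2: if the error-correcting stage is a fully connected autoassociative network (an assumption on my part), its weight count grows with the square of its dimension, so running it at the code size rather than the input size is a huge saving.

```python
# Weight counts for a fully connected autoassociative stage (assumed architecture).
d_input, d_code = 65536, 256
print(d_input ** 2)  # 4,294,967,296 weights if error correction ran at input resolution
print(d_code ** 2)   #        65,536 weights at the 256-dimensional random code
```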
Random error-correcting autoassociation in the biological brain can’t be ruled out yet, I would say.
Maybe I explained that quite badly.