[Paper] - Hopfield Networks Is All You Need

This paper came up today on our favourite, non-judgmental Reddit forum, and I thought that, given its Hopfield and Hebbian / energy-based background, it would be interesting to share.

Reddit link - Here

Honestly, I’ve not had a chance to read through it yet, as it is both mathematically dense and long :) Hopefully others find it interesting.


The paper author’s colleague wrote up a blog post tutorial (posted in /r/MachineLearning):

An unrelated YouTube video (1 hr) if you’re tired of reading: https://www.youtube.com/watch?v=nv6oFDp6rNQ


You can just use a simple associative memory with a random intermediate vector assigned to each training pair (vector a recalls vector b).
Then you have (vector a recalls vector r) and (vector r recalls vector b). That gives the “pulling apart” spoken of in the blog, since random vectors in higher-dimensional space are almost orthogonal. Really, that introduces an error-correcting code into the system, which you might do explicitly if you knew polar codes, for example.
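Here is a minimal sketch of that two-hop idea (my own illustration, not from the paper or the blog): a linear Hebbian / outer-product associative memory where each training pair (a, b) is bridged by a random intermediate vector r, so recall runs a → r → b. The dimensions, bipolar patterns, and sign thresholding are assumptions for the demo.

```python
# Two-hop associative memory sketch (illustrative assumptions throughout):
# store a -> r and r -> b with Hebbian outer products, where r is a random
# intermediate vector; high dimension keeps the random vectors nearly orthogonal.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 1024, 50

# random bipolar patterns: keys a, values b, and a random intermediate r per pair
A = rng.choice([-1.0, 1.0], size=(n_pairs, d))
B = rng.choice([-1.0, 1.0], size=(n_pairs, d))
R = rng.choice([-1.0, 1.0], size=(n_pairs, d))

# outer-product storage for the two stages
W_ar = R.T @ A / d   # maps a -> r
W_rb = B.T @ R / d   # maps r -> b

def recall(a):
    """Two-hop recall a -> r -> b; sign thresholding cleans up crosstalk noise."""
    r = np.sign(W_ar @ a)
    return np.sign(W_rb @ r)

# check exact recall on the stored pairs
correct = sum(np.array_equal(recall(A[i]), B[i]) for i in range(n_pairs))
print(f"{correct}/{n_pairs} pairs recalled exactly")
```

With d much larger than the number of stored pairs, the crosstalk from the other patterns stays small relative to the signal, which is the near-orthogonality point above; the random intermediate layer is what keeps similar keys from colliding on recall.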