Associative Memory via Predictive Coding

Kind of simplifying associative memory right down to summing memory blocks selected by locality sensitive hashing:
https://editor.p5js.org/seanhaddps/sketches/ojCPhNDtU
In the code the blocks are chosen on a 1 out of 2 basis, but there is no reason not to choose on a 1 out of 4, 1 out of 8,… 1 out of N basis for really large associative memory. Lol. :icecream: :icecream: :icecream:
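For concreteness, here is a hedged sketch of the 1-out-of-N idea (my own illustration, not the code in the linked sketch; all names and sizes are made up): each stage holds N memory blocks, a couple of sign bits from a random projection pick 1 out of 4 blocks per stage, recall is the sum of the selected blocks, and a write spreads the recall error evenly over those same blocks.

```javascript
// Hedged sketch of 1-out-of-N block selection (N = 4 here), assuming a
// simple random-projection hash; dimensions are illustrative only.
const D = 8;       // stored vector dimension
const STAGES = 64; // number of selection stages summed into the output
const N = 4;       // blocks per stage: 2 hash bits pick 1 out of 4

// Small deterministic PRNG so the sketch is reproducible.
function mulberry32(a) {
  return function () {
    a |= 0; a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
const rnd = mulberry32(12345);

// Per stage: 2 random projection vectors (the hash bits) and N blocks.
const proj = Array.from({ length: STAGES }, () =>
  Array.from({ length: 2 }, () =>
    Array.from({ length: D }, () => rnd() * 2 - 1)));
const blocks = Array.from({ length: STAGES }, () =>
  Array.from({ length: N }, () => new Float64Array(D)));

// Two sign bits of the projections pick 1 block out of 4.
function selectIdx(x, stage) {
  let idx = 0;
  for (let b = 0; b < 2; b++) {
    let s = 0;
    for (let i = 0; i < D; i++) s += proj[stage][b][i] * x[i];
    idx = (idx << 1) | (s >= 0 ? 1 : 0);
  }
  return idx;
}

// Recall: sum the selected block from every stage.
function recall(x) {
  const out = new Float64Array(D);
  for (let st = 0; st < STAGES; st++) {
    const blk = blocks[st][selectIdx(x, st)];
    for (let i = 0; i < D; i++) out[i] += blk[i];
  }
  return out;
}

// Write: spread the recall error evenly over the selected blocks,
// so a single write makes recall(x) hit the target.
function write(x, y) {
  const out = recall(x);
  for (let st = 0; st < STAGES; st++) {
    const blk = blocks[st][selectIdx(x, st)];
    for (let i = 0; i < D; i++) blk[i] += (y[i] - out[i]) / STAGES;
  }
}
```

Going from 1 out of 4 to 1 out of 8 or 1 out of N is just a matter of using more hash bits per stage, and nearby inputs select mostly the same blocks, so recall degrades gracefully instead of dropping straight to noise.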

The math is basically what you would get by attaching a linear readout layer to a hash function with -1,+1 binarization. At that point statistics gets involved: the central limit theorem, the variance equation for linear combinations of random variables, and linear independence issues.
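A minimal sketch of that math (my own illustration, with made-up dimensions): project the input through a fixed random matrix, binarize to -1/+1, and train a linear readout with outer-product (Hebbian) writes. Recalling a stored item then gives the stored target plus crosstalk from the other items, and each crosstalk term is a sum of roughly independent ±1 products, so by the central limit theorem its relative size shrinks like 1/√H as the hash width H grows.

```javascript
// Minimal sketch: random-projection hash with -1/+1 binarization feeding
// a linear readout trained by outer-product writes. Illustrative sizes.
const D = 32;   // input/output dimension
const H = 1024; // number of +-1 hash bits

// Small deterministic PRNG so the sketch is reproducible.
function mulberry32(a) {
  return function () {
    a |= 0; a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
const rnd = mulberry32(42);

// Fixed random projection: the "hash function".
const proj = Array.from({ length: H }, () =>
  Array.from({ length: D }, () => rnd() * 2 - 1));

// hash(x): project, then binarize each coordinate to -1 or +1.
function hash(x) {
  return proj.map(row => {
    let s = 0;
    for (let i = 0; i < D; i++) s += row[i] * x[i];
    return s >= 0 ? 1 : -1;
  });
}

// Linear readout weights, one row per output coordinate.
const readout = Array.from({ length: D }, () => new Float64Array(H));

// Store the pair (x, y): add the outer product of y and hash(x).
function write(x, y) {
  const h = hash(x);
  for (let i = 0; i < D; i++)
    for (let j = 0; j < H; j++) readout[i][j] += (y[i] * h[j]) / H;
}

// Recall: returns the stored target plus crosstalk from other items;
// the crosstalk is a sum of H roughly independent +-1 products, so its
// size scales like 1/sqrt(H) by the central limit theorem.
function recall(x) {
  const h = hash(x);
  return readout.map(row => {
    let s = 0;
    for (let j = 0; j < H; j++) s += row[j] * h[j];
    return s;
  });
}
```

Storing a handful of random pairs and recalling one of them returns approximately the stored target; pushing the number of stored items toward H is where the variance equation for linear combinations starts to bite, because the crosstalk variance grows with every extra item.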
Later you address locality sensitive behavior, where a small change in the input causes only a few hash bits to flip.
With a pure hash function you need an exactly matching input to get the wanted associated output; anything else gives random noise out. There is no advantage to that, because it is basically RAM.
You use locality sensitive hashing to allow less stringent addressing.
You can have different types of locality sensitive hash function that respond with more or less sensitivity to small changes in the input.
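As a concrete check (my own sketch, with illustrative sizes): for a random-projection sign hash, the expected fraction of flipped bits between inputs x and x′ is angle(x, x′)/π, so a small perturbation flips only a few of the H bits while a large one flips many.

```javascript
// Sketch: count how many bits of a random-projection sign hash flip
// under a small versus a large input perturbation. Sizes are illustrative.
const D = 16, H = 1024;

// Small deterministic PRNG so the sketch is reproducible.
function mulberry32(a) {
  return function () {
    a |= 0; a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}
const rnd = mulberry32(7);

const proj = Array.from({ length: H }, () =>
  Array.from({ length: D }, () => rnd() * 2 - 1));

// Random-projection sign hash: H bits of +-1.
function hash(x) {
  return proj.map(row => {
    let s = 0;
    for (let i = 0; i < D; i++) s += row[i] * x[i];
    return s >= 0 ? 1 : -1;
  });
}

// Hamming distance between two hashes.
function flippedBits(a, b) {
  let n = 0;
  for (let j = 0; j < H; j++) if (a[j] !== b[j]) n++;
  return n;
}

const x = Array.from({ length: D }, () => rnd() * 2 - 1);
const nearby = x.map(v => v + 0.02 * (rnd() * 2 - 1)); // small change
const far = x.map(v => v + 1.0 * (rnd() * 2 - 1));     // large change
const hx = hash(x);
console.log(flippedBits(hx, hash(nearby)), flippedBits(hx, hash(far)));
```

How you build the projections is the sensitivity knob: for example, hashing sub-windows of the input separately makes the hash respond more locally than projecting the whole input at once.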

Thanks, what I didn’t understand was the general flow/idea of the program.
There’s a picture; I select one or more small “windows” by clicking them, and they line up on the right.
Then I press 1 to begin training.
All nice and well: the training cycles are counting up and the laptop is getting hot, which is a good sign.
I press 1 again and the training cycle stops.
But there is no other feedback; nothing shows me what the training was for.

I expected some kind of … I don’t know, “99% accuracy” sounds silly because I don’t even know what the training was doing. Does it “write” the selected tiny pictures into the Associative Memory? Then I don’t know how to at least pop them back out of there, to see it really did that.

After training you should see the program auto-associate on the data in the currently outlined square: that image section is sent through the associative memory and the result is displayed lower down.
Maybe you didn’t scroll down, or maybe you are using a MacBook, I don’t know.
There are many problems in the world.
In the File menu there is a share option, and you can choose full-screen or present to get a better view: https://preview.p5js.org/seanhaddps/present/ojCPhNDtU

Well, now it makes sense.
And I do use high zoom; I never got both the square on the right and the one at the bottom on screen at once. When I saw the bottom square previously I didn’t get what it was; it was showing a changing random overlap of the previously clicked windows.

Well, obviously I wasn’t there, so I don’t know what you saw. lol.
Anyway, it is not an attempt to provide deliberate error correcting associative memory, just memory with soft addressing that a controller neural network can possibly learn to use. Although, used under capacity, you will get repetition coding type error correction.
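A toy illustration of that repetition-coding effect (my own sketch, not code from the demo): a ±1 value stored redundantly across N slots survives corruption of a minority of them, because the summed vote keeps the original sign.

```javascript
// Toy sketch: a +1 bit stored redundantly in N slots still reads back
// correctly after a minority of the slots are corrupted.
const N = 99;
const slots = new Array(N).fill(1);          // write +1 into every slot
for (let i = 0; i < 30; i++) slots[i] = -1;  // corrupt about 30% of them
const vote = slots.reduce((a, b) => a + b, 0); // 69 - 30 = 39
const recovered = vote >= 0 ? 1 : -1;        // majority keeps the sign: +1
```

Used under capacity, the associative memory spreads each association over many blocks in the same way, which is where the incidental error correction comes from.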
I’ll leave matters there, and just try it with a controller neural network at some stage.
One question: while it takes a massive amount of compute to train a neural network, how much compute the trained network itself can do during inference is a different matter, and presumably much less. I kind of wonder if that is enough to make use of an external memory.