Sparse Associative Memory Java code

I wrote this sparse associative memory code in Java:

https://archive.org/details/sparse-amjava
Blog:
https://ai462qqq.blogspot.com/2023/04/associative-memory-using-locality.html

There are other, even faster ways to achieve associative memory; however, I was looking for something that might have good generalization ability when used at scale.

One thing I still have to try properly is using 2 associative memories with different random initializations to create an unsupervised high-dimensional clustering algorithm, by training them to agree on a common response to each particular input, where initially they would each output a different random response.
Maybe if the brain can’t do backpropagation it can at least do things like higher-dimensional clustering, and possibly other similar things, using its associative memory capability.
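The twin-memory idea can be sketched in a few lines of Java. This is a toy stand-in, not the code from the archive.org link: it assumes a minimal linear associative memory (recall(x) = normalize(W·x), delta-rule updates), and all class and method names here are illustrative.

```java
import java.util.Random;

// Two differently seeded toy memories are trained toward a shared
// consensus target until they agree on a response for the same input.
public class TwinAgreementDemo {

    static class ToyMemory {
        final int dim;
        final float[][] w;
        float rate = 0.1f;

        ToyMemory(int dim, long seed) {
            this.dim = dim;
            w = new float[dim][dim];
            Random rnd = new Random(seed); // different seed = different random init
            for (int i = 0; i < dim; i++)
                for (int j = 0; j < dim; j++)
                    w[i][j] = (float) rnd.nextGaussian();
        }

        // Recall is the normalized linear response to x.
        float[] recall(float[] x) {
            float[] y = new float[dim];
            for (int i = 0; i < dim; i++)
                for (int j = 0; j < dim; j++)
                    y[i] += w[i][j] * x[j];
            return normalize(y);
        }

        // Delta-rule update: nudge recall(x) toward target (x unit length).
        void train(float[] x, float[] target) {
            float[] y = recall(x);
            for (int i = 0; i < dim; i++) {
                float step = rate * (target[i] - y[i]);
                for (int j = 0; j < dim; j++) w[i][j] += step * x[j];
            }
        }
    }

    static float[] normalize(float[] v) {
        float n = 1e-9f;
        for (float c : v) n += c * c;
        n = (float) Math.sqrt(n);
        float[] out = new float[v.length];
        for (int i = 0; i < v.length; i++) out[i] = v[i] / n;
        return out;
    }

    static float dot(float[] a, float[] b) {
        float s = 0f;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Train both memories to agree on one input; return their final
    // agreement (cosine similarity of the two responses).
    public static float agreementAfterTraining() {
        int dim = 32;
        ToyMemory a = new ToyMemory(dim, 1);
        ToyMemory b = new ToyMemory(dim, 2);
        Random rnd = new Random(7);
        float[] x = new float[dim];
        for (int i = 0; i < dim; i++) x[i] = (float) rnd.nextGaussian();
        x = normalize(x);
        for (int step = 0; step < 300; step++) {
            float[] ya = a.recall(x), yb = b.recall(x);
            float[] t = new float[dim];
            for (int i = 0; i < dim; i++) t[i] = ya[i] + yb[i];
            t = normalize(t); // consensus: average direction, unit length
            a.train(x, t);
            b.train(x, t);
        }
        return dot(a.recall(x), b.recall(x));
    }

    public static void main(String[] args) {
        System.out.println("agreement after training: " + agreementAfterTraining());
    }
}
```

Initially the two responses are essentially unrelated random directions; after repeated nudging toward the shared consensus target the agreement approaches 1.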

3 Likes

It is a good question what systems you can build using associative memory.
You could have a vast associative memory that you feed all current sensory information into. It would hold on to information for quite a while before old memories were fully washed out. Maybe you could get better than human-level working memory.
Like I said, you can do higher-dimensional unsupervised clustering with 2 or more differently randomly initialized associative memories.
For a particular input you get them to agree on a common response, such as the average of their responses normalized to a constant vector magnitude. Once you have that clustering you could then train another associative memory to link the cluster response to a particular output. Such a system could probably be implemented in the biological brain, since no backpropagation is required, yet it is almost like a 2-layer artificial neural network.
You can also tell if an input has never been seen before, even in a general sense, since the different associative memories will have highly divergent outputs in that case.
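The novelty check described above can be demonstrated with the same kind of toy linear memory (again a sketch under assumed names, not the linked code): after two differently seeded memories are trained to agree on a seen input, their responses to an unseen input remain close to independent random directions, so low agreement flags novelty.

```java
import java.util.Random;

// Novelty detection via divergence between two twin memories.
public class NoveltyDemo {
    static final int DIM = 32;

    static float[][] randomMatrix(long seed) {
        Random r = new Random(seed);
        float[][] w = new float[DIM][DIM];
        for (int i = 0; i < DIM; i++)
            for (int j = 0; j < DIM; j++)
                w[i][j] = (float) r.nextGaussian();
        return w;
    }

    static float[] normalize(float[] v) {
        float n = 1e-9f;
        for (float c : v) n += c * c;
        n = (float) Math.sqrt(n);
        float[] out = new float[v.length];
        for (int i = 0; i < v.length; i++) out[i] = v[i] / n;
        return out;
    }

    static float[] recall(float[][] w, float[] x) {
        float[] y = new float[DIM];
        for (int i = 0; i < DIM; i++)
            for (int j = 0; j < DIM; j++)
                y[i] += w[i][j] * x[j];
        return normalize(y);
    }

    // Delta-rule update toward target t (x unit length).
    static void train(float[][] w, float[] x, float[] t, float rate) {
        float[] y = recall(w, x);
        for (int i = 0; i < DIM; i++)
            for (int j = 0; j < DIM; j++)
                w[i][j] += rate * (t[i] - y[i]) * x[j];
    }

    static float dot(float[] a, float[] b) {
        float s = 0f;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static float[] randomUnit(Random r) {
        float[] v = new float[DIM];
        for (int i = 0; i < DIM; i++) v[i] = (float) r.nextGaussian();
        return normalize(v);
    }

    // Returns {agreement on the trained input, agreement on a novel input}.
    public static float[] agreements() {
        float[][] a = randomMatrix(1), b = randomMatrix(2);
        Random r = new Random(7);
        float[] seen = randomUnit(r), novel = randomUnit(r);
        for (int step = 0; step < 300; step++) {
            float[] ya = recall(a, seen), yb = recall(b, seen);
            float[] t = new float[DIM];
            for (int i = 0; i < DIM; i++) t[i] = ya[i] + yb[i];
            t = normalize(t); // shared consensus target
            train(a, seen, t, 0.1f);
            train(b, seen, t, 0.1f);
        }
        return new float[] {
            dot(recall(a, seen), recall(b, seen)),
            dot(recall(a, novel), recall(b, novel))
        };
    }

    public static void main(String[] args) {
        float[] ag = agreements();
        System.out.println("seen input agreement:  " + ag[0]);
        System.out.println("novel input agreement: " + ag[1]);
    }
}
```

The gap between the two agreement scores is the novelty signal: thresholding it gives a simple "have I seen this before?" test.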

1 Like

Yes, associative memories have interesting properties, yet I don’t understand how the clustering is supposed to work.

It would also be helpful to have some code to compare against the usual ML techniques, like k-means clustering or approximate nearest-neighbour search libraries.

1 Like

I thought of the twin/multiple associative memory clustering idea a few years ago, but I’m only one person, not a research team. I tried it a tiny bit back then; I’m trying it a bit more now.
I had to introduce an adjustable learning rate into the code above to get gradual convergence to agreed responses to particular inputs.
Broadly, you are getting two differently initialized associative memories to call this input “this” and that input “that”: a common language.
The magical thinking is that this results in a meaningful consensus after some time; that is to say, it is not scientifically proven, yet I would think so.
I’ve spent the past 3 months writing AI code, it’s time for a break.
However, I will continue to experiment with clustering/common-response behavior somewhat. You could include a randomInit() method to fill the weights with random numbers, and a setLearningRate(float rt){ rate = rt*scale/density; } method, if you wanted to experiment. Then, as a suitable consensus target, you can add the 2 associative memory outputs together and adjust the vector magnitude to a suitable constant length.
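Those additions might look like the following. The `weights`, `scale`, and `density` fields are assumptions standing in for whatever the actual class in the linked code uses; only the method bodies follow the post.

```java
import java.util.Random;

// Sketch of the suggested randomInit()/setLearningRate() additions,
// plus a consensus-target helper. Field names are assumptions.
public class SparseAMSketch {
    float[] weights = new float[1024];
    float scale = 1.0f;    // assumed output-scaling field
    float density = 0.1f;  // assumed sparsity field (fraction of active weights)
    float rate;

    // Fill the weight table with random numbers.
    void randomInit(long seed) {
        Random rnd = new Random(seed);
        for (int i = 0; i < weights.length; i++)
            weights[i] = (float) rnd.nextGaussian();
    }

    // Adjustable learning rate, compensating for scale and sparsity.
    void setLearningRate(float rt) {
        rate = rt * scale / density;
    }

    // Consensus target: sum the two memories' outputs, then rescale
    // the result to a constant vector length.
    static float[] consensusTarget(float[] ya, float[] yb, float len) {
        float[] t = new float[ya.length];
        float n = 1e-9f;
        for (int i = 0; i < t.length; i++) {
            t[i] = ya[i] + yb[i];
            n += t[i] * t[i];
        }
        n = (float) Math.sqrt(n);
        for (int i = 0; i < t.length; i++) t[i] *= len / n;
        return t;
    }

    public static void main(String[] args) {
        SparseAMSketch m = new SparseAMSketch();
        m.randomInit(42);
        m.setLearningRate(0.05f);
        System.out.println("effective rate: " + m.rate);
    }
}
```

Dividing by `density` keeps the effective step size comparable as fewer weights participate in each update; that scaling is one plausible reading of the one-liner in the post.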
I’m experimenting with the FreeBasic code.
Anyway the thing is all experimental at this stage.

2 Likes