Kohonen maps seem interesting. The only thing is that if the map consists of 1000 vectors you have to check the distance of an input vector against all of them. Well, that's going to be slow, especially with high-dimensional vectors.
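For concreteness, this is roughly what that lookup step looks like as a brute-force search; the map size of 1000 and the dimensionality of 256 here are just placeholder numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1000, 256))   # 1000 map vectors
x = rng.normal(size=256)                  # one input vector

# Distance of the input against every map vector: O(map_size * dim) work
# per lookup, which is the cost being complained about above.
dists = np.linalg.norm(codebook - x, axis=1)
best = np.argmin(dists)
print(best, dists[best])
```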
Viewing a single-layer neural network as an analog vector-to-vector hash table, can you do better? I think the idea is to randomly initialize the network; the response of the network to an input vector is then "the nearest vector." The further away that nearest vector is from the input vector, the more strongly (or less strongly, or neutrally?) it should be perturbed toward the input vector, and the net trained on the (input vector, perturbed vector) pair.
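A rough sketch of that rule as I read it; the pull strength, learning rate, and tanh nonlinearity are placeholder choices rather than anything settled:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64
W = rng.normal(scale=0.1, size=(dim, dim))  # randomly initialized single layer

def respond(x):
    return np.tanh(W @ x)

def train_step(x, lr=0.01, pull=0.5):
    global W
    y = respond(x)
    dist = np.linalg.norm(y - x)
    # Perturb the response toward the input, more strongly the further away it is.
    target = y + pull * dist / (dist + 1.0) * (x - y)
    # One gradient step of squared error on the (input, perturbed response) pair.
    pre = W @ x
    err = np.tanh(pre) - target
    W -= lr * np.outer(err * (1.0 - np.tanh(pre) ** 2), x)

for _ in range(1000):
    train_step(rng.normal(size=dim))
```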
Would that, or something like that, create a quantized Kohonen-type response from the network? I'll try it.
The idea is that a bunch of centroid responses would be learned from very large data sets.
It would be interesting if quantization just spontaneously happened in under-capacity single-layer neural networks when trained on excessive data sets at low learning rates.
That network would show correct quantization on the training set, but would have trouble outside of it because there wouldn't be a denoising effect.
A second, over-capacity (with respect to the quantization) single-layer network could learn the responses of the first network; that would have a denoising effect and show more correct quantization outside of the training set.
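As a toy sketch of that second stage (ignoring the capacity difference, and with a random single layer standing in for the trained first network), the second net would simply be trained to reproduce the first net's responses on fresh inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 64
W1 = rng.normal(scale=0.1, size=(dim, dim))  # stands in for the first (trained) net
W2 = rng.normal(scale=0.1, size=(dim, dim))  # second net, trained to copy its responses

def step(x, lr=0.01):
    global W2
    target = np.tanh(W1 @ x)              # response of the first network
    pre = W2 @ x
    err = np.tanh(pre) - target
    W2 -= lr * np.outer(err * (1.0 - np.tanh(pre) ** 2), x)

for _ in range(1000):
    step(rng.normal(size=dim))
```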
Hubble, bubble, toil and trouble: there is already too much speculation going on here. However, there is some evidence (as far as I am aware) of two-stage learning in the brain.
So far I haven't been able to get the behavior I've been looking for. I'll try a little more with different parameters. If not, at least I have been able to evolve deep nets with 4096 input/output dimensions on a laptop, which is kind of surprising. If I step down to 1024 dimensions it should even be quite fast. One idea is to use memory-based mapping from a large input-dimension space to a small, smart, deep controller neural net. I'm not sure, though, how to arrange sending a learn request to the memory.
Anyway, I'll continue reading up on self-organizing maps and see if there is any applicable information.
Okay, I was able to do something like what I wanted with self-organizing maps.
I created two randomly initialized neural nets. For each example input, I got them to gradually learn a common representation. The interesting thing was how extremely sparse the common representation was: zero almost everywhere, with only occasional high-magnitude points. I wasn't expecting that; I was expecting them to agree on maybe N random positions in the state space that would look like a noise pattern when viewed. Go figure.
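A minimal sketch of that kind of setup, assuming each net is simply nudged toward the mean of the two outputs on every example; the details here are placeholders rather than exactly what I ran, and whether the agreed representation comes out sparse will depend on details a toy like this doesn't capture:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 64
A = rng.normal(scale=0.1, size=(dim, dim))  # two randomly initialized single layers
B = rng.normal(scale=0.1, size=(dim, dim))

def step(x, lr=0.01):
    ya, yb = np.tanh(A @ x), np.tanh(B @ x)
    target = 0.5 * (ya + yb)              # the shared "common" representation
    for W, y in ((A, ya), (B, yb)):
        err = y - target
        W -= lr * np.outer(err * (1.0 - y ** 2), x)   # in-place gradient step toward agreement

for _ in range(5000):
    step(rng.normal(size=dim))
```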
https://drive.google.com/open?id=0BwsgMLjV0BnhdmlxVlQtWExOSmc