Reservoirs as information concentrators

Here’s one to think about:
https://groups.google.com/forum/#!topic/comp.ai.neural-nets/OJSoxOuudCY

I think there are very interesting questions you can ask about decoupled learning. The human inclination is to create fully coupled subsystems that don’t “waste” anything. However, decoupling has its merits.
I’m considering at the moment whether to use binary (Bloom filter like) cue recognition or analog cues with reservoirs. It seems rather important to spread cue learning out over a large time frame rather than trying to eat your lunch by putting all the food in your mouth at once: roughly uniform temporal sampling. It might be helpful, though, to learn a batch of cues when something significant has happened, giving greedy learning at times. I intend to feed detected cues back into the reservoir, so you could even have cues learnt by combining other cues.
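For the binary option, a minimal sketch of what “Bloom filter like cue recognition” could mean: hash a cue pattern into a fixed bit array and test membership later. Everything here (class name, sizes, the choice of `blake2b` with per-hash salts) is my own illustrative assumption, not anything from the original post.

```python
import hashlib

class BloomCueFilter:
    """Toy Bloom filter for binary cue recognition (illustrative sketch).

    m: number of bits, k: number of hash functions. Membership tests can
    give false positives but never false negatives, which may be an
    acceptable trade-off for cheap cue detection.
    """
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, cue: bytes):
        # Derive k independent hash values by salting blake2b differently.
        for i in range(self.k):
            h = hashlib.blake2b(cue, salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, cue: bytes):
        for h in self._hashes(cue):
            self.bits[h] = 1

    def __contains__(self, cue: bytes):
        return all(self.bits[h] for h in self._hashes(cue))

# Usage: register a detected cue, then recognize it later.
cues = BloomCueFilter()
cues.add(b"door-slam")
print(b"door-slam" in cues)
```

The appeal over analog cues is that storage is fixed-size and recognition is a handful of hashes, at the cost of occasional false positives and no graded match strength.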
It all seems very messy!

I tried one layer of analog cue learning, followed by a second layer learning cues from cues, then a readout layer. Nice generalization. I will experiment more.
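A rough sketch of what that two-layer setup might look like: snapshot some reservoir states as first-layer cues, measure analog match to them, snapshot cues-of-cues from those responses, and fit a linear readout. All sizes, the toy input, and the similarity measure are my own assumptions; the original post doesn’t specify an architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def snapshot_cues(states, n_cues):
    """Pick cue vectors by uniform sampling over time (snapshot learning)."""
    idx = rng.choice(len(states), size=n_cues, replace=False)
    cues = states[idx]
    return cues / np.linalg.norm(cues, axis=1, keepdims=True)

def cue_responses(states, cues):
    """Analog cue match: squashed similarity of each state to each cue."""
    return np.tanh(states @ cues.T)

# Toy reservoir run driven by a noisy sine input (assumed setup).
N, T = 64, 400
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N)) * 0.9   # spectral radius ~0.9
W_in = rng.normal(0, 1, N)
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    u = np.sin(0.07 * t) + 0.05 * rng.normal()
    x = np.tanh(W @ x + W_in * u)
    states[t] = x

# Layer 1: cues from reservoir states; layer 2: cues learnt from cues.
layer1 = cue_responses(states, snapshot_cues(states, 16))
layer2 = cue_responses(layer1, snapshot_cues(layer1, 8))

# Linear readout trained by least squares to reproduce the clean input.
targets = np.sin(0.07 * np.arange(T))
w_out, *_ = np.linalg.lstsq(layer2, targets, rcond=None)
pred = layer2 @ w_out
print("readout RMSE:", np.sqrt(np.mean((pred - targets) ** 2)))
```

The point of the stacking is that the second layer sees combinations of first-layer cue matches, so it can pick up conjunctions of cues rather than raw states.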
It would kind of suggest that snapshot learning of synapses would be okay; you don’t need some complicated accumulator mechanism. At random and very infrequently, a neuron could snapshot-learn a cue (i.e. uniform sampling over time). Of course you would expect evolution to find optimizations of that, and there could be some strengthening of synapses with use to help stabilize the system.
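The snapshot idea plus use-dependent strengthening could be sketched as a single toy neuron rule. This is purely illustrative (the class, probabilities, and the weak Hebbian term are my assumptions), not a biological model or anything specified in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

class SnapshotNeuron:
    """A neuron that 'snapshot learns': at rare random moments it copies the
    current presynaptic activity pattern into its weights in one step (no
    accumulator), then weakly strengthens those weights with use so the
    learnt cue stabilizes. Illustrative sketch only."""
    def __init__(self, n_inputs, p_snapshot=0.01, strengthen=0.001):
        self.w = np.zeros(n_inputs)
        self.p = p_snapshot       # snapshot probability per step: uniform over time
        self.eta = strengthen     # use-dependent strengthening rate

    def step(self, pre):
        if rng.random() < self.p:
            # One-shot snapshot: weights become the current input pattern.
            self.w = pre / np.linalg.norm(pre)
        out = np.tanh(self.w @ pre)
        # Mild Hebbian-style strengthening with use, to stabilize the cue.
        self.w += self.eta * out * pre
        return out

neuron = SnapshotNeuron(32)
for t in range(1000):
    neuron.step(rng.normal(size=32))
```

Because the snapshot fires with small fixed probability each step, cue acquisition is spread uniformly over the neuron’s lifetime, which is exactly the “uniform sampling over time” being suggested.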