There is a mini-explanation at this link:
http://www.freebasic.net/forum/viewtopic.php?f=7&t=24758
With regard to this paper:
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004967
If you keep summing new data, as it arrives, into a random projection that feeds back on itself, you get a reservoir. Over time older information is lost, at a rate that depends on the size of the reservoir (how exactly is a very interesting question). If you take nonlinear projections of the reservoir at times t-1 and t, you can learn an association between the two. You can use that to predict sequences into the future.
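A minimal sketch of that idea in Python/NumPy, not the code from the forum post: the reservoir size, the tanh nonlinearity, the feedback scaling, and the least-squares readout are all my own assumptions about one way to realize it.

import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # reservoir size (assumption)
W = rng.normal(0, 1, (N, N))              # random projection that feeds back on itself
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale feedback below 1 so old data fades
w_in = rng.normal(0, 1, N)

def step(state, u):
    # sum the new datum into the self-feeding random projection,
    # with a nonlinearity applied to the result
    return np.tanh(W @ state + w_in * u)

seq = np.sin(np.arange(300) * 0.2)        # toy input sequence
states, s = [], np.zeros(N)
for u in seq:
    s = step(s, u)
    states.append(s.copy())
states = np.array(states)

# learn an association between the state at t-1 and the input at t,
# here by linear least squares, and use it to predict one step ahead
X, y = states[:-1], seq[1:]
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print("one-step prediction from the last state:", states[-1] @ w_out)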
There are a ton of weird things you can do with random projections. I can’t do everything. It’s up to other people to explore too.
Rectifier activation function, positive weights only, deep random projection neural network (a sketch follows the links below).
Linux AMD64 version
https://drive.google.com/file/d/0BwsgMLjV0BnheW1zWURpR09aQ0U/view?usp=sharing
General version (not compiled):
https://drive.google.com/file/d/0BwsgMLjV0BnhMm9IRC1NQ2p1Y1k/view?usp=sharing
Sean O’Connor 14 June 2016
Triggered Delta neural net:
https://drive.google.com/file/d/0BwsgMLjV0BnhS052eVRLSkhlM1U/view?usp=sharing
The German physics crew explained to us back in 2000 why you shouldn't immediately update the weights in a neural net: instead you accumulate the updates until they exceed a trigger magnitude, and only then adjust the corresponding weight by a delta value. Did anyone listen? For sure not!!!
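One plausible reading of that scheme as a sketch; whether the applied step is a fixed delta or the accumulated value itself, and the trigger and delta magnitudes, are assumptions on my part.

import numpy as np

def triggered_update(weights, grad, accum, trigger=0.1, delta=0.01):
    # accumulate the would-be updates instead of applying them immediately
    accum += grad
    fired = np.abs(accum) > trigger                   # which weights hit the trigger
    weights[fired] -= delta * np.sign(accum[fired])   # nudge those by a fixed delta
    accum[fired] = 0.0                                # reset the fired accumulators
    return weights, accum

# usage: keep one accumulator per weight across training steps
w, acc = np.zeros(10), np.zeros(10)
for _ in range(100):
    g = np.random.normal(0, 0.05, 10)   # stand-in gradient
    w, acc = triggered_update(w, g, acc)

Most weights sit still on any given step, so far fewer weight writes happen than with plain gradient descent.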
Did I not delete this thread? How careless of me.
Anyway, here we are: http://www.freebasic.net/forum/viewtopic.php?f=7&t=24758&p=221457#p221455