Single-layer neural net with 2-way nonlinearity

There is a mini-explanation in the link:

With regard to this:
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

If you keep summing new data as it arrives into a random projection that feeds back on itself, you get a reservoir. Over time older information gets lost, at a rate depending on the size of the reservoir (exactly how is a very interesting question). If you take nonlinear projections of the reservoir at times t-1 and t, you can learn an association between the two. You can use that to predict sequences into the future.
There are a ton of weird things you can do with random projections. I can’t do everything. It’s up to other people to explore too.

Rectifier activation function, positive weights only, deep random projection neural network.
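A sketch of what that description could look like in code. The depth, width, weight scaling, and the median threshold inside the rectifier are all my own assumptions; the post only specifies rectifier activations, positive-only weights, and a deep stack of random projections.

```python
import numpy as np

def deep_random_projection(x, depth=4, width=64, rng=None):
    """Hypothetical sketch: a deep stack of fixed random projections
    with positive weights only and a rectifier (ReLU) activation."""
    rng = np.random.default_rng(1) if rng is None else rng
    h = x
    for _ in range(depth):
        # Positive weights only, via the absolute value of Gaussian draws.
        W = np.abs(rng.normal(0.0, 1.0, (width, h.shape[0])))
        z = W @ h / np.sqrt(h.shape[0])      # scale to keep magnitudes stable
        # Rectifier; thresholding at the layer median (my assumption, the
        # post gives no biases) keeps roughly half the units active.
        h = np.maximum(0.0, z - np.median(z))
    return h

out = deep_random_projection(np.random.default_rng(0).normal(size=32))
print(out.shape)
```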

Linux AMD64 version

General version (not compiled):

Sean O’Connor 14 June 2016

Triggered Delta neural net:
The German physics crew explained back in 2000 why you shouldn't immediately update the weights in a neural net: instead you accumulate the updates until they exceed a trigger magnitude, and then move the corresponding weight by a delta value. Did anyone listen? For sure not!!!
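The accumulate-then-trigger rule above can be sketched as follows. The trigger and delta magnitudes, and the sign convention (stepping against the accumulated gradient), are illustrative choices of mine, not values from the post.

```python
import numpy as np

def triggered_update(weights, accum, grad, trigger=0.1, delta=0.05):
    """Accumulate incoming gradients instead of applying them directly.
    Only when an accumulator exceeds the trigger magnitude is the
    corresponding weight moved by a fixed delta, and that accumulator
    is reset.  trigger/delta values are illustrative assumptions."""
    accum += grad
    fired = np.abs(accum) > trigger
    weights[fired] -= delta * np.sign(accum[fired])  # step by a fixed delta
    accum[fired] = 0.0                               # reset fired accumulators
    return weights, accum

# Usage: only the first and third weights cross the trigger and update.
w = np.zeros(3)
a = np.zeros(3)
w, a = triggered_update(w, a, np.array([0.2, 0.05, -0.2]))
print(w, a)
```

One practical upside of this scheme is that most steps touch no weights at all, so updates can be batched or communicated sparsely.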

Did I not delete this thread? How careless of me.
Anyway here we are: