“We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world. Although ANNs are indeed highly simplified models of BNNs, they belong to the same family of over-parameterized, direct-fit models, producing solutions that are mistakenly interpreted in terms of elegant design principles but in fact reflect the interdigitation of “mindless” optimization processes and the structure of the world.”
The most information an evolutionary algorithm can extract from the environment in a single pass/fail instance (one life) is one bit. But over enough generations, enough information accumulates that a stick insect ends up looking like a stick, for example. Which is kind of an interesting transfer of information from environment to organism.
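The one-bit ceiling is just the Shannon entropy of a binary outcome — a quick sanity check (my illustration, not from the thread):

```python
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a binary outcome with P(pass) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A single pass/fail outcome carries at most 1 bit, achieved at p = 0.5;
# a lopsided outcome (e.g. 90% survive) carries less.
print(entropy_bits(0.5))  # 1.0
print(entropy_bits(0.9))  # ~0.469
```

So each "life" delivers at most one bit, and usually less when survival is the common case — which is why the accumulation has to run over very many generations.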
Anyway I did an updated summary of some things:
It seems one overlooked thing in artificial neural network research is the variance equation for linear combinations of random variables, and how it explains what you need to do to turn a weighted sum into a general associative memory.
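Here's a minimal sketch of the idea as I read it — store patterns by superposition in one weight vector, recall by a weighted sum, and let the variance-of-a-sum rule bound the crosstalk noise. The construction (random ±1 patterns, correlation-style storage) is my own illustration, not necessarily the original poster's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000       # dimensionality of each pattern
n_stored = 50    # number of patterns superposed into the weights

# Random +/-1 patterns; "store" them by summing into a single weight vector.
patterns = rng.choice([-1.0, 1.0], size=(n_stored, d))
w = patterns.sum(axis=0)

# Recall is just the weighted sum (dot product), normalized by d.
probe = patterns[0]
score = w @ probe / d
# Signal term is exactly 1; the other n_stored-1 patterns contribute
# independent +/-1 crosstalk terms, so by the variance equation for a
# linear combination, Var(noise) ~ (n_stored - 1) / d.
noise_std = np.sqrt((n_stored - 1) / d)

# A pattern that was never stored scores near 0 (pure crosstalk).
unstored = rng.choice([-1.0, 1.0], size=d)
off_score = w @ unstored / d

print(score, off_score, noise_std)
```

With d = 10,000 and 50 stored patterns, noise_std is about 0.07, so stored patterns score near 1 and unstored ones near 0 — the weighted sum discriminates reliably as long as the variance equation keeps the crosstalk small relative to the signal.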
Thanks for sharing but is this a free resource?
I wonder what the authors mean by “mindless” there.
Yes, open access, did the link work for you? I think by mindless they mean initially random connections, and the statistical nature of adjusting them. That’s their interpolation vs. directed-extrapolation dichotomy.
Ok. I think I need to login/buy it.
“To read this article in full you will need to make a payment”
Damn, that’s new, it used to be free. Hold on, I think I have a pdf somewhere, or they most likely have a copy on arXiv.
accessed thanks! will have a read