“We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world. Although ANNs are indeed highly simplified models of BNNs, they belong to the same family of over-parameterized, direct-fit models, producing solutions that are mistakenly interpreted in terms of elegant design principles but in fact reflect the interdigitation of “mindless” optimization processes and the structure of the world.”
The most information an evolutionary algorithm can extract from the environment in a single pass/fail event (one life) is one bit. Over enough generations, though, enough information accumulates that a stick insect ends up looking like a stick, for example, which is an interesting kind of transference.
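As a rough sketch of the arithmetic (my own illustration, not from the paper): a single pass/fail outcome is a binary event, so its Shannon information is at most 1 bit, and N independent selection events can supply at most N bits toward specifying the genome.

```python
import math

def shannon_info_bits(p_survive: float) -> float:
    """Entropy (in bits) of one pass/fail selection event with survival probability p."""
    if p_survive in (0.0, 1.0):
        return 0.0  # outcome is certain, nothing is learned
    q = 1.0 - p_survive
    return -(p_survive * math.log2(p_survive) + q * math.log2(q))

# One life contributes at most 1 bit; the bound is hit when survival is a coin flip.
print(shannon_info_bits(0.5))   # 1.0
print(shannon_info_bits(0.9))   # ~0.469

# Over many generations the bits add up (upper bound, assuming independent events),
# which is how a slow selection process can eventually encode something as specific
# as a stick insect's shape.
generations = 1_000_000
print(generations * shannon_info_bits(0.5))
```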
Anyway I did an updated summary of some things: https://ai462qqq.blogspot.com/2019/11/artificial-neural-networks.html
One thing that seems overlooked in artificial neural network research is the variance equation for linear combinations of random variables, and how it explains what you need to do to turn a weighted sum into a general associative memory.
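I don't know exactly which construction the blog post has in mind, but here is a minimal numpy sketch of the general idea as I read it (all names below are my own): store pattern/target pairs in a single weight vector, Hebbian-style, over roughly orthogonal random keys, and use the variance formula for a linear combination of independent zero-mean terms, Var(a1*X1 + ... + an*Xn) = a1^2*Var(X1) + ... + an^2*Var(Xn), to predict the crosstalk noise when the weighted sum recalls a stored value.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_patterns = 1000, 50

# Random, roughly orthogonal key vectors with unit norm (components ~ N(0, 1/d)).
keys = rng.standard_normal((n_patterns, d)) / np.sqrt(d)
targets = rng.standard_normal(n_patterns)

# Hebbian-style storage: the weight vector is a sum of target-scaled keys.
w = (targets[:, None] * keys).sum(axis=0)

# Recall: the weighted sum returns each stored target plus crosstalk from the
# other stored patterns.
recalled = keys @ w
crosstalk = recalled - targets

# The variance equation for a linear combination of independent, zero-mean terms
# predicts the crosstalk variance: each of the other (n_patterns - 1) stored keys
# contributes its dot product with the probe key, which has variance ~ 1/d.
predicted_var = (n_patterns - 1) * targets.var() / d
print("empirical crosstalk variance:", crosstalk.var())
print("predicted crosstalk variance:", predicted_var)
```

The point of the variance equation here is that the crosstalk grows with the number of stored patterns and shrinks with the dimension, which tells you roughly how many associations a single weighted sum can hold before recall degrades.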
Yes, it's open access; did the link work for you? I think by "mindless" they mean initially random connections and the statistical nature of adjusting them. That's their interpolation vs. directed extrapolation dichotomy.