@jhawkins the authors throw some flowers your way in this paper: “The development of MBs owes a great deal to the insights of Jeff Hawkins (discussed, for example, in his book “On Intelligence” (Hawkins and Blakeslee, 2004)), who correctly views human brains as prediction machines that are deeply and fundamentally conditioned by their past experience.” It is a great book!
Thanks for the paper.
Seems like a neat idea, until I hit the nearly random (sorry, genetic) search part.
There does not seem to be much published.
Have any useful brains been developed? Any problems solved?
I posted because of the kind references to Jeff’s work. The post was not an endorsement of that approach.
However, it might be worth pointing out that the only example we have of “general” intelligence is the result of evolution, so studying evolution seems like a good idea.
Completely agree, but I would consider the chances of accidental breakthrough improvements on a human timescale very unlikely.
Evolution has been working with highly functional units for ~4 billion years with astronomical parallelism and hugely varied environments.
I think it is easy to overestimate the power of our computing - compared to, say, even the molecule flow in a single cubic metre of water over one hour.
Yes, evolution does two things: adaptation and mutation. Genetic algorithms typically use adaptation-like mechanisms to tune the values of pre-chosen parameters, and can be highly successful in a reasonable time frame.
But mutation is where the billions come in. From 10s or 100s of generations you go to millions or billions. People have tried, but life is too short.
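To make the adaptation-vs-mutation distinction concrete, here is a minimal toy sketch of a genetic algorithm tuning pre-chosen parameters. The target vector, fitness function, and all rates are my own illustrative assumptions, not anything from the paper:

```python
import random

# Hypothetical target: three "pre-chosen parameters" the GA should recover.
TARGET = [0.3, -1.2, 2.5]

def fitness(params):
    # Higher is better: negative squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, rate=0.5, scale=0.5):
    # Random Gaussian nudges to existing values - the "mutation" half.
    return [p + random.gauss(0, scale) if random.random() < rate else p
            for p in params]

def evolve(pop_size=50, generations=200):
    pop = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection: keep the fittest half
        # Refill the population with mutated copies of survivors.
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

Note that this only *tunes* values within a fixed structure; originating genuinely new structure via mutation is the part that, as you say, eats the billions of generations.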
Consider brain-inspired AI - you could make the same argument. Clearly we are incredibly far from being able to emulate a brain - so why bother? Obviously, because we can learn about principles and then engineer with those in mind. Airplanes are not birds, but wings were obviously inspired by observing birds.
The key point is learning about the principles and abstractions.
There is research showing EC can optimize large DNNs more compute-efficiently than gradient descent. Combining EC and SGD seems reasonable. There are reinforcement learning problems where EC outperforms SGD.
Natural evolution obviously has some advantages over computation (as do brains) but the principles can be abstracted and computers obviously have some advantages for some problems (e.g. convergent search).
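One way EC competes with SGD is evolution strategies, which estimate a descent direction purely from fitness evaluations of perturbed parameters - no backprop needed. Here is a toy sketch in that spirit (the quadratic objective and all hyperparameters are placeholder assumptions of mine):

```python
import random

def loss(theta):
    # Placeholder objective with an assumed optimum at (1.0, -2.0).
    return (theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2

def es_step(theta, pop=50, sigma=0.1, lr=0.05):
    # Antithetic sampling: evaluate +eps and -eps perturbations and use
    # the fitness difference to weight the noise - a finite-difference
    # style gradient estimate from evaluations alone.
    grad = [0.0] * len(theta)
    for _ in range(pop):
        eps = [random.gauss(0, 1) for _ in theta]
        f_plus = loss([t + sigma * e for t, e in zip(theta, eps)])
        f_minus = loss([t - sigma * e for t, e in zip(theta, eps)])
        for i in range(len(theta)):
            grad[i] += (f_plus - f_minus) * eps[i] / (2 * sigma * pop)
    # Minimize: step against the estimated gradient.
    return [t - lr * g for t, g in zip(theta, grad)]

random.seed(0)
theta = [0.0, 0.0]
for _ in range(300):
    theta = es_step(theta)
```

Because each perturbation is evaluated independently, this kind of search parallelizes trivially, which is part of why it can be compute-competitive on some RL problems.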
There is plenty of active research going on in evolutionary computing (EC). Is your opinion based on the contemporary research?
There is so much to learn about evolution that I don’t see why we can’t expect breakthroughs in EC.
Brain-inspired computation is a subset of biology-inspired computation and I don’t see any reason to stop looking to biology. Biologists clearly know a lot more about autonomous adaptive systems than computer scientists.
Are you assuming that gradient descent will solve everything, or do you have some other optimization technique in mind?
That’s good to know - I’ll take a look.
I am not discounting EC as a topic, but, in the paper, it is presented as an originator of novel processes (via random mutation), and not as an optimisation technique for something that already exists.
I am certain that combining techniques, e.g. EC + SGD, as you suggest, and other broader cross-domain work, is more likely to make breakthroughs. I think that’s why we are all here.
At least some of us
Interesting directions in EC include divergent search, non-objective optimization, and open-endedness (a la Kenneth Stanley).
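The core move in that line of work is novelty search: rank individuals by how *different* their behavior is from what has been seen, rather than by an objective. A toy sketch in that spirit - the 1-D behavior characterization, archive policy, and all parameters are my own illustrative assumptions:

```python
import random

def behavior(genome):
    # Toy "behavior characterization": where the genome lands in 1-D space.
    return sum(genome)

def novelty(b, archive, k=5):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    dists = sorted(abs(b - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists)) if dists else float("inf")

def novelty_search(pop_size=20, generations=50):
    random.seed(1)
    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        # Score by novelty, not by any objective - reward being different.
        scored = [(novelty(behavior(g), archive), g) for g in pop]
        scored.sort(key=lambda s: s[0], reverse=True)
        archive.extend(behavior(g) for _, g in scored[:2])  # remember the most novel
        parents = [g for _, g in scored[: pop_size // 2]]
        pop = parents + [[x + random.gauss(0, 0.3) for x in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return archive

archive = novelty_search()
```

The archive steadily spreads over behavior space, which is the point: exploration without ever defining what "better" means.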
Good tip - something completely new to me.
Damn, I definitely need one of those. Fresh if possible. Mine feels so clogged.