While playing with evolutionary algorithms, I had the startling realization that all genetic mutations are bad. It’s common knowledge that biology abhors genetic mutation, and I assumed that was only because mutations cause cancer. But my computer programs are immune to cancer, and they also abhor mutations. This is counterintuitive, given that evolution requires mutations to proceed.
As evidence that mutations are bad, consider that evolution is an optimization algorithm: once it reaches a local optimum, further mutations are strictly detrimental. Evolutionary pressure is the ability of an evolutionary algorithm to remove deleterious mutations from a population. If mutations accumulate faster than they can be removed, the population will suffer a genetic collapse. This is a common failure mode of evolutionary algorithms.
The ideal evolutionary algorithm would have at most one mutation in each individual, so that each individual’s life is an experiment evaluating that single mutation. Then, through many generations of chromosomal crossover, the best mutations would combine into a single genome.
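That scheme can be sketched on a toy problem. The code below is my own illustration, not anything from this thread: a OneMax bitstring GA where every child carries exactly one mutation, crossover recombines parents, and a child replaces its parent only if the experiment of its life (its fitness) pans out. Population size, genome length, and seed are arbitrary choices.

```java
import java.util.Random;

// Toy sketch: one mutation per individual, evaluated against a parent,
// with uniform crossover combining the surviving mutations.
public class OneMutationGA {
    static final Random rng = new Random(7);
    static final int LEN = 64, POP = 32;

    // OneMax fitness: count of true bits.
    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    public static boolean[] evolve(int generations) {
        boolean[][] pop = new boolean[POP][LEN]; // start all-false
        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int i = 0; i < POP; i++) {
                // uniform crossover of two random parents
                boolean[] a = pop[rng.nextInt(POP)], b = pop[rng.nextInt(POP)];
                boolean[] child = new boolean[LEN];
                for (int j = 0; j < LEN; j++) child[j] = rng.nextBoolean() ? a[j] : b[j];
                child[rng.nextInt(LEN)] ^= true; // exactly one mutation per child
                // the child's "life" evaluates that single mutation against parent a
                next[i] = fitness(child) >= fitness(a) ? child : a;
            }
            pop = next;
        }
        boolean[] best = pop[0];
        for (boolean[] g : pop) if (fitness(g) > fitness(best)) best = g;
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best fitness = " + fitness(evolve(300)));
    }
}
```

With one mutation per child, a bad flip costs only that one experiment, so deleterious mutations are purged immediately rather than accumulating.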
Evolution is the dumbest process after the growth of entropy.
Fitting, via backprop or Hebbian learning, is right next to it, also swimming in random shit.
We need to grow out of that.
Evolution is an attempt to exploit the law of large numbers to overcome the barrier to achieving a very low probability event.
In order for such a system to work, the stability of the organism must already be robust enough to be able to nominally survive whatever the ambient mutation rate is. Otherwise a mutation is more likely to cause premature death than to stumble upon a beneficial trait accidentally.
Thus, to overcome a local minimum, evolution must find a survivable path away from the local attractor that eventually leads to a saddle point in the global fitness function. Keep in mind, though, that the fitness function may itself be changing over time, potentially leading to the transient appearance of navigable paths through configuration space.
For computer science, engineering etc. evolution is another tool in the toolkit.
For neural networks, I found that training by evolution after training by back-propagation fails utterly.
That was counterintuitive to me, and it suggests deep neural networks are memory, not algorithms emerging in a computational soup of weights and activation functions.
I find this algorithm to be most useful: https://archive.org/details/ctsgray
In Java one may use this algorithm to produce mutations between -2 and 2 to mutate parameters between -1 and 1.
private final java.util.Random rng = new java.util.Random();

public float nextMutation() {
    int r = rng.nextInt();  // random 32-bit pattern; sign bit (31) stays random
    r &= 0xbfffffff;        // clear exponent bit 30, so the magnitude is < 2
    r |= 0x30000000;        // set exponent bits 29-28, so the magnitude is >= 2^-31
    return Float.intBitsToFloat(r);
}
For a single mutation to completely alter a variable between -1 and 1, it obviously needs a magnitude of up to 2. If the mutated value overshoots, simply clip it to -1 or 1.
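Putting the pieces together, here is a self-contained sketch of that mutation-plus-clip step. The class and method names are my own; the bit trick is the one above, which yields mutations with random sign and magnitude log-uniformly spread over [2^-31, 2):

```java
import java.util.Random;

// Minimal sketch: bit-trick mutation in (-2, 2), then clip the mutated
// parameter back into [-1, 1]. Names are illustrative, not from the thread.
public class MutationDemo {
    static final Random rng = new Random(42);

    static float nextMutation() {
        int r = rng.nextInt();  // random bits; sign bit stays random
        r &= 0xbfffffff;        // exponent <= 127 -> magnitude < 2
        r |= 0x30000000;        // exponent >= 96  -> magnitude >= 2^-31
        return Float.intBitsToFloat(r);
    }

    // Apply one mutation to a parameter and clip to [-1, 1].
    static float mutate(float param) {
        float v = param + nextMutation();
        return Math.max(-1f, Math.min(1f, v));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100000; i++) {
            float m = nextMutation();
            // every mutation is nonzero and strictly inside (-2, 2)
            if (Math.abs(m) <= 0f || Math.abs(m) >= 2f) throw new AssertionError();
        }
        System.out.println("mutated 0.5 -> " + mutate(0.5f));
    }
}
```

The log-uniform spread is the useful property: most mutations are tiny fine-tuning steps, while the occasional large one can still carry a parameter all the way across [-1, 1].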
Well, simple inductive CPU circuits have been designed using evolution and self-assembly. Once evolved, simply make the self-assembly “board” bigger and it will be tiled with a circuit for a higher number of bits.
I’m not a proponent for evolutionary algorithms.
I just pragmatically want to know what they are effective for and when to use them in preference to other techniques. And I do have some sense of that.
Some distinct things about genetic evolution vs. its resulting hominid species’ behaviors: genetic evolution happens at the species level and always involves mutation. But in some cases it’s fractal, showing self-similarity. Culture is driven by genetics conducive to language and memory. Culture also evolves, but outside the genome. Cultural evolution happens quickly, since cultural mutations are numerous thanks to spontaneous random influences on group memory from many sources. Cultural evolution may be a more interesting case study for this topic.
I have just been using it to fit Chebyshev polynomials in fractions of a second. Like, I wouldn’t use it to evolve neural networks (which I did for years, lol.)
Right tool for the right job and all that.
See: Mini Java Collection 10 - Archive dot org.
With the NEAT algorithm, saddle points can be avoided via recombination and mutation, while the beneficial changes are preserved.
Still working on my system to blend NEAT and HTM. Looking not only to dodge the local minima that gradient descent approaches are prone to, but also to avoid the need for massively large training sets.
Most mutations tend to be bad, but they are necessary to create novelty to a certain degree. We merely need to dispose of the bad mutations quickly. Balancing act du jour!
Well, yeh. Numerical Recipes gives two ways to fit Chebyshev polynomials.
As an alternative, simple evolution with mutations allows irregular spacings and requires only a few lines of code.
By randomly picking fitting points, I think it should be possible to create a very efficient (otherwise deterministic) fitting algorithm. However, that is months of work.
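The “few lines of code” evolutionary fit could look something like this. This is my own sketch, not the commenter’s code: a (1+1)-style hill climb that mutates one Chebyshev coefficient at a time with a log-uniform step (echoing the mutation trick earlier in the thread), keeping only improvements. The target function exp(x), degree, and iteration count are arbitrary choices, and the sample points are deliberately irregularly spaced.

```java
import java.util.Random;

// Sketch: fit Chebyshev coefficients to irregularly spaced samples by
// single-coefficient mutation, keeping a mutation only if error drops.
public class EvoChebFit {
    static final Random rng = new Random(1);

    // Evaluate sum c[k]*T_k(x) via the recurrence T_k = 2x*T_{k-1} - T_{k-2}.
    static double eval(double[] c, double x) {
        double tPrev = 1, t = x, sum = c[0] + (c.length > 1 ? c[1] * x : 0);
        for (int k = 2; k < c.length; k++) {
            double tNext = 2 * x * t - tPrev;
            sum += c[k] * tNext;
            tPrev = t;
            t = tNext;
        }
        return sum;
    }

    // Sum of squared errors over the sample points.
    static double error(double[] c, double[] xs, double[] ys) {
        double e = 0;
        for (int i = 0; i < xs.length; i++) {
            double d = eval(c, xs[i]) - ys[i];
            e += d * d;
        }
        return e;
    }

    public static double[] fit(double[] xs, double[] ys, int degree, int iters) {
        double[] c = new double[degree + 1];
        double best = error(c, xs, ys);
        for (int i = 0; i < iters; i++) {
            int k = rng.nextInt(c.length);       // one mutation per trial
            double old = c[k];
            // log-uniform step magnitude in (2^-30, 2], random sign
            double step = Math.pow(2.0, -31.0 * rng.nextDouble()) * 2.0;
            c[k] += rng.nextBoolean() ? step : -step;
            double e = error(c, xs, ys);
            if (e < best) best = e; else c[k] = old; // keep only improvements
        }
        return c;
    }

    public static void main(String[] args) {
        // Irregularly spaced samples of exp(x) on [-1, 1].
        double[] xs = {-1, -0.7, -0.33, 0.1, 0.42, 0.9, 1};
        double[] ys = new double[xs.length];
        for (int i = 0; i < xs.length; i++) ys[i] = Math.exp(xs[i]);
        double[] c = fit(xs, ys, 5, 200000);
        System.out.println("squared error = " + error(c, xs, ys));
    }
}
```

Because the least-squares error is convex in the coefficients, this coordinate-wise hill climb cannot get trapped, which is why such a crude method converges in fractions of a second here.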