One problem with evolutionary algorithms is that they get trapped, perhaps in a local minimum or at a difficult saddle point. One way out that avoids restarts is to alternate between different loss (error measurement) functions during evolution.
For a neural network or similar system, the L2 loss is the sum of the squared differences between the outputs and the targets; the L1 loss is the sum of the absolute differences.
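A minimal sketch of the two loss functions in plain Python (the function names are my own choice, not from any particular library):

```python
def l2_loss(outputs, targets):
    # Sum of squared differences between outputs and targets.
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

def l1_loss(outputs, targets):
    # Sum of absolute differences between outputs and targets.
    return sum(abs(o - t) for o, t in zip(outputs, targets))

outputs = [0.9, 0.2, 0.4]
targets = [1.0, 0.0, 0.5]
print(l2_loss(outputs, targets))  # about 0.06
print(l1_loss(outputs, targets))  # about 0.4
```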
If the system is not trapped, a reduction in one loss function usually produces a reduction in the other. As the system becomes trapped, however, a decrease in one starts to produce an increase in the other: the other loss has effectively stepped uphill. When you alternate, that uphill step may be large enough to allow an escape from the local minimum, or at least improve the chances of one. At the very least, the system is being pulled from one configuration to another via random walks, and that offers far more chances of finding a way down than staying permanently stuck in one place.
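A minimal sketch of the idea, assuming a simple (1+1) hill-climbing evolutionary loop that swaps the loss function every so many generations. The parameter names (`switch_every`, `step`) and the toy vector-fitting problem are illustrative, not from the original:

```python
import random

def l2_loss(v, target):
    # Sum of squared differences.
    return sum((a - b) ** 2 for a, b in zip(v, target))

def l1_loss(v, target):
    # Sum of absolute differences.
    return sum(abs(a - b) for a, b in zip(v, target))

def evolve(target, generations=2000, switch_every=50, step=0.1, seed=0):
    """(1+1) hill climber that alternates between L2 and L1 loss
    every `switch_every` generations."""
    rng = random.Random(seed)
    losses = [l2_loss, l1_loss]
    best = [rng.uniform(-1, 1) for _ in target]
    for gen in range(generations):
        # Pick the loss function for this phase of the alternation.
        loss = losses[(gen // switch_every) % len(losses)]
        # Mutate every coordinate with small Gaussian noise.
        child = [x + rng.gauss(0, step) for x in best]
        # Accept the child if it is no worse under the *current* loss.
        if loss(child, target) <= loss(best, target):
            best = child
    return best

target = [0.5, -0.3, 0.8]
result = evolve(target)
print(l2_loss(result, target))  # small residual error
```

The escape effect comes from the acceptance test: a parent that is a local optimum under one loss need not be one under the other, so switching losses lets previously rejected moves through.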
I tried it and it works quite well. I don't want to put up a web page showing it right now because of some ongoing hacking-type activity.
Of course, you can use more than two loss functions.
For biological evolution the loss function is never that consistent, which may help speed it along despite having less than ideal material to work with.
Anyway, it is something that is obvious once you have been told it. If you ever want to cite it in a paper, you can say "it is obvious that…"