Evolutionary Algorithms Review

I’d like to present three articles on the topic of evolutionary algorithms.
All three were co-authored by Kenneth O. Stanley, and I found them on his personal homepage: https://www.cs.ucf.edu/~kstanley/.

This first article gives practical advice for simulating evolution. It introduces a new technique for representing unstructured data in genetic material, which was previously an open challenge. The article is very well written and won a “best paper of the decade” award.

Evolving Neural Networks through Augmenting Topologies

Kenneth O. Stanley, Risto Miikkulainen (2002)
Free Full Text: https://www.cse.unr.edu/~sushil/class/gas/papers/NEAT.pdf


An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
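
For reference, the “principled crossover” in point (1) works by tagging every connection gene with a historical innovation number, so genes from different topologies can be lined up during mating. Here is a minimal sketch of that alignment, not the paper’s implementation: genes are reduced to bare weights, and parent_a is assumed to be the fitter parent.

```python
import random

def crossover(parent_a, parent_b):
    """Align genes by innovation number, NEAT-style.

    Each parent is a dict mapping innovation number -> connection gene
    (here just a weight). Matching genes are inherited randomly from
    either parent; disjoint/excess genes come from the fitter parent
    (assumed here to be parent_a).
    """
    child = {}
    for innov, gene in parent_a.items():
        if innov in parent_b:
            child[innov] = random.choice([gene, parent_b[innov]])
        else:
            child[innov] = gene  # disjoint/excess: keep fitter parent's gene
    return child

a = {1: 0.5, 2: -0.3, 4: 0.9}   # fitter parent
b = {1: 0.1, 3: 0.7}
child = crossover(a, b)
print(sorted(child))  # [1, 2, 4] — aligned by shared history
```

Without the innovation markers, two arbitrary topologies have no natural gene alignment, which is why earlier topology-evolving systems tended to mangle offspring.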

This next article justifies the field of artificial life as a valid scientific endeavor, and summarizes its state of the art.

Investigating Biological Assumptions through Radical Reimplementation

Joel Lehman, Kenneth O. Stanley (2014)
Free Full Text: http://eplex.cs.ucf.edu/papers/lehman_alife14.pdf


An important goal in both artificial life and biology is uncovering the most general principles underlying life, which might catalyze both our understanding of life and engineering life-like machines. While many such general principles have been hypothesized, conclusively testing them is difficult because life on Earth provides only a singular example from which to infer. To circumvent this limitation, this paper formalizes an approach called radical reimplementation. The idea is to investigate an abstract biological hypothesis by intentionally reimplementing its main principles to diverge maximally from existing natural examples. If the reimplementation successfully exhibits properties resembling biology it may better support the underlying hypothesis than an alternative example inspired more directly by nature. The approach thereby provides a principled alternative to a common tradition of defending and minimizing deviations from nature in artificial life. This work reviews examples that can be interpreted through the lens of radical reimplementation to yield potential insights into biology despite having purposefully unnatural experimental setups. In this way, radical reimplementation can help renew the relevance of computational systems for investigating biological theory and can act as a practical philosophical tool to help separate the fundamental features of terrestrial biology from the epiphenomenal.

For context: The “Picbreeder” program generates images. Each image has an artificial DNA which describes how to draw the image and users can manually select images to breed together. It can make interesting images in a small number of generations, and the user can exert considerable influence over the image by selecting which images to breed together. However, they find that attempting to breed a specific image is very difficult because there are many intermediate steps along the way, and those intermediate steps do not look like the final image. Most optimization techniques try to get to the final result as fast as possible, but that will not work for problems with important intermediate steps.
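
The breeding loop itself is simple to sketch. This is only a toy rendition: real Picbreeder genomes are CPPNs that get rendered into images, and the parents are chosen by a human looking at those images, which is exactly the part a fitness function cannot capture.

```python
import random

def mutate(genome, rate=0.1):
    """Perturb every gene slightly."""
    return [g + random.gauss(0, rate) for g in genome]

def breed(a, b):
    """Uniform crossover: each gene comes from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

# Nine candidate "images", each a flat list of 8 genes (a stand-in genome).
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(9)]

for generation in range(3):
    # In Picbreeder a human picks the parents; random choice stands in here.
    parents = random.sample(population, 2)
    population = [mutate(breed(*parents)) for _ in range(9)]

print(len(population), len(population[0]))  # 9 8
```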

On the Deleterious Effects of A Priori Objectives on Evolution and Representation

Brian G. Woolley, Kenneth O. Stanley (2011)
Free Full Text: http://eplex.cs.ucf.edu/papers/woolley_gecco11.pdf


Evolutionary algorithms are often evaluated by measuring and comparing their ability to consistently reach objectives chosen a priori by researchers. Yet recent results from experiments without explicit a priori objectives, such as in Picbreeder and with the novelty search algorithm, raise the question of whether the very act of setting an objective is exacting a subtle price. Nature provides another hint that the reigning objective-based paradigm may be obfuscating evolutionary computation’s true potential; after all, many of the greatest discoveries of natural evolution, such as flight and human-level intelligence, were not set as a priori objectives at the beginning of the search. The dangerous question is whether such triumphs only result because they were not objectives. To examine this question, this paper takes the unusual experimental approach of attempting to re-evolve images that were already once evolved on Picbreeder. In effect, images that were originally discovered serendipitously become a priori objectives for a new experiment with the same algorithm. Therefore, the resulting failure to reproduce the very same results cannot be blamed on the evolutionary algorithm, setting the stage for a contemplation of the price we pay for evaluating our algorithms only for their ability to achieve preconceived objectives.


I hope no-one ever embarks on representing biological needs (not the mere requirement of photonic or electronic ‘juice’ for running an AGI) into an evolution emulating algorithm!

I do because IF this is done to the extent of mimicking the adaptive advantage (including competitive advantage) of blocking and rerouting signals en route to maladaptive distress-type actentions (as a consequence of some such needs being indefinitely negated/thwarted), THEN what would tend to result is a future where neurotic (or worse, psychotic) or “EAVASIVE” behavioral characteristics (“CURSES”-co-motivated ones) were built into AGI-capable non-biologic machines; machines that at best would be stationary (not freely moving and thus very dangerous).

The self-explanatory portmanteau word and the two “MAD”-inspired acronyms are explained as part of what I’ve indulged in formulating informally and forever imperfectly; most of it has remained not only messy but also essentially off-putting to perusers.

Anyhow, I facetiously flag it all with “ÆPT” (or EPT, when raised as a qualifier of the meaning of suitably spelt words).

ÆPT ironically explains why how we evolved makes this explanation accEPTable only to recEPTive (suitably informed and percEPTive) intercEPTees (of which there might be only one :confused:).

However, ÆPT is my earnestly intended, science-aligned, as it turned out somewhat altruistic (altruism entailing, particularly thanks to the inclusion of the quantification-permitting concEPT/definition of Absolute Life Quality, better known as ALQwholesomeness), almost effectively philosophy-terminating, and atheistic-enlightenment-promoting (projecting and preserving) textualized thinking.
To a significant extent it is an extended Primal Theory-type take on What Is going on, mainly but not only in the sphere of human affairs.

ÆPT is for the time being to be found, warts and all, on the WWW/Internet, at aeimcinternetional.org


This article is related to the above paper “On the Deleterious Effects of A Priori Objectives on Evolution and Representation”. In this article the authors found that the lamprey spinal cord is too complex to evolve directly! Instead they had to go through several intermediate “stages” of evolution to get there. Each stage has its own objective function, designed to evolve specific capabilities, and each stage builds on the capabilities developed by the previous stages.

Evolving Swimming Controllers for a Simulated Lamprey with Inspiration from Neurobiology

Auke Jan Ijspeert, John Hallam, David Willshaw (1999)
Free Full Text: https://www.researchgate.net/publication/37423909_Evolving_Swimming_Controllers_for_a_Simulated_Lamprey_with_Inspiration_from_Neurobiology


This paper presents how neural swimming controllers for a simulated lamprey can be developed using evolutionary algorithms. A genetic algorithm is used for evolving the architecture of a connectionist model which determines the muscular activity of a simulated body in interaction with water. This work is inspired by the biological model developed by Ekeberg which reproduces the central pattern generator observed in the real lamprey (Ekeberg, 1993). In evolving artificial controllers, we demonstrate that a genetic algorithm can be an interesting design technique for neural controllers and that there exist alternative solutions to the biological connectivity. A variety of neural controllers are evolved which can produce the pattern of oscillations necessary for swimming. These patterns can be modulated through the external excitation applied to the network in order to vary the speed and the direction of swimming. The best evolved controllers cover larger ranges of frequencies, phase lags and speeds of swimming than Ekeberg’s model. We also show that the same techniques for evolving artificial solutions can be interesting tools for developing neurobiological models. In particular, biologically plausible controllers can be developed with ranges of oscillation frequency much closer to those observed in the real lamprey than Ekeberg’s hand-crafted model.


The controllers are evolved in three stages. First, segmental oscillators are evolved, then multi-segmental controllers are generated by evolving the couplings between copies of a chosen segmental oscillator, and, finally, connections providing sensory feedback from stretch-sensitive cells are added.
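
That staged scheme can be mimicked with a toy (1+λ)-style hill climber in which each stage’s winner seeds the next stage. The three fitness functions below are invented stand-ins for the paper’s stage objectives, not the real ones.

```python
import random

random.seed(1)  # for reproducibility

def evolve(seed, fitness, pop_size=20, generations=50, rate=0.1):
    """(1+λ)-style elitist search: mutate the current best, keep the best."""
    best = seed[:]
    for _ in range(generations):
        candidates = [[g + random.gauss(0, rate) for g in best]
                      for _ in range(pop_size)]
        candidates.append(best)  # elitism: fitness never decreases
        best = max(candidates, key=fitness)
    return best

# Toy stand-ins for the three staged objectives:
stage_goals = [
    lambda p: -abs(p[0] - 1.0),   # stage 1: "segmental oscillator"
    lambda p: -abs(p[1] - 2.0),   # stage 2: "intersegmental coupling"
    lambda p: -abs(p[2] + 0.5),   # stage 3: "sensory feedback"
]

best = [0.0, 0.0, 0.0]
for fitness in stage_goals:
    best = evolve(best, fitness)  # each stage starts from the last winner
```

The key design point is that each stage optimizes only its own objective while inheriting the whole genome from the previous stage, rather than optimizing one end-to-end objective from scratch.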


I eventually found no advantage in using evolutionary algorithms to train neural networks, which confounded my expectations. I would say that is because such networks act like just a slightly more sophisticated form of associative memory.
Evolution just cannot find deeper associations within that type of system.
One way forward would be to find some type of network where evolution provably provides better solutions. I think that could get neural networks to the next level of reasoning. Anyway, I took the big opt-out a year ago already.


I looked at alife, artificial evolution, and artificial physics and chemistry too, and saw interesting concepts, but yeah… too low-level. Bottom-up AI only works for certain things.

But consider looking not just at the low-level biology but at the environment the brain interacts with, starting with replicating some of the human biology: the heartbeat, fear, anxiety, hunger, pain.

But I was curious… If you look at Numenta and their work, they base it on the brain; the brain is the core of Numenta’s work. As a biological entity, what aspects do they replicate and virtualize to produce interesting properties? How far do they go in understanding the brain? I will have to go back to Jeff Hawkins’ and Numenta’s work… but I was curious whether it makes sense to look at not just the brain but also the environment in which the brain exists, and how much of that environment you would have to replicate to create interesting behavior.

For example… is it relevant that, as humans, we get hungry? As we get hungry, there are probably impulses telling the brain you need to eat, and then… how to eat and when to eat. An 80-year-old may look at that task differently than a 1-year-old. The 80-year-old will build language into the thought processes, on… maybe what restaurant to eat at. Maybe the 80-year-old is a chef whose whole thinking revolves around food. And for the 80-year-old chef, their environment is 80 years, about 29,000 days, of incoming data from the world.


It seems neural networks are not that great at grammar, which you would expect if they are implementing wide-vector, context-dependent associative memory. It seems like they would have a problem ignoring irrelevant information.
It does sound fixable, by training multiple neural networks on different aspects of language and combining them, or by providing grammar information in the training data.


The most iconic evolutionary algorithm is NEAT mentioned above.

As its name states, it evolves by exploring full NN topologies and weights for a given problem.

Its big problem is scale: as problem complexity grows (e.g. as the input, output & time dimensions grow), the solution requires a bigger topology, which is slower to evolve and evaluate with a reasonable amount of compute available.

What I want to test is, instead of using evolution to search for better network topologies, using fixed tiny networks (or generically “tiny learners”) to search for relevant correlations between input and output space points/patches, or between input “patches” or areas.

That should provide, not necessarily an optimized big network, but a list of significant correlations within the data that can be further exploited to build sparse larger networks.
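
A rough sketch of this idea, with plain Pearson correlation standing in for the “tiny learner” (the toy data, the 0.5 threshold, and the one-feature “patch” size are all arbitrary choices of mine): scan every input/output pair and keep only the strong links as the skeleton of a sparse network.

```python
import random

random.seed(2)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: output 0 depends on input 2; output 1 depends on input 0.
samples = []
for _ in range(500):
    x = [random.gauss(0, 1) for _ in range(4)]
    y = [x[2] + random.gauss(0, 0.1), -x[0] + random.gauss(0, 0.1)]
    samples.append((x, y))

# Scan every (input, output) pair; keep only the strong links.
links = []
for i in range(4):
    for j in range(2):
        r = correlation([s[0][i] for s in samples],
                        [s[1][j] for s in samples])
        if abs(r) > 0.5:
            links.append((i, j))
print(links)  # [(0, 1), (2, 0)] — the true sparse structure
```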


For context, a major problem in the field of evolutionary algorithms is finding good ways to represent genetic information. The DNA should be able to represent arbitrarily complex data structures, and to mutate and recombine them without breaking or mangling them.

The NEAT algorithm was invented in order to evolve neural networks. But it is actually capable of evolving any “graph” data structures, and almost anything can be represented as a graph. In theory, the NEAT algorithm should solve the problem of finding good genetic representations.

IMO the problem with the NEAT algorithm is that its authors and practitioners fail to appreciate all of its potential applications. Instead they mainly focus on artificial neural networks with very simple perceptron neurons.
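
To that point: NEAT’s two structural mutations are really just graph edits with historical markings, so nothing about them is specific to perceptrons. A simplified sketch, with connection genes reduced to (source, target, weight) tuples:

```python
import random
from itertools import count

innovation = count(1)  # global historical-marking counter

def add_connection(nodes, conns):
    """Connect two random nodes with a new weighted edge."""
    src, dst = random.sample(nodes, 2)
    conns[next(innovation)] = (src, dst, random.uniform(-1, 1))

def add_node(nodes, conns):
    """Split a random connection by inserting a new node, NEAT-style."""
    innov = random.choice(list(conns))
    src, dst, w = conns.pop(innov)
    new = max(nodes) + 1
    nodes.append(new)
    conns[next(innovation)] = (src, new, 1.0)  # roughly preserves behavior
    conns[next(innovation)] = (new, dst, w)

nodes = [0, 1]
conns = {}
add_connection(nodes, conns)
add_node(nodes, conns)
print(len(nodes), len(conns))  # 3 2
```

Nothing here says the nodes must be neurons; they could be components in any evolvable graph structure.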


Real biological evolution tends to produce designs that are “modular” and “sparsely connected”. In essence, this means that although animals are composed of many components, most components do not directly interact with each other. In contrast, early experiments with artificial evolution tended to produce densely connected systems, in which every component was connected to almost every other component.

These observations have led to fruitful areas of research:

  • The term “modular” typically implies that the modules are very uniform and regular. How regular or irregular are biological “modules”?
  • Compare and contrast: what are the advantages and disadvantages of sparse connectivity versus dense connectivity?
  • Why are biological systems sparse? What specific mechanisms lead to sparse designs?
  • How can we make artificial evolution similarly sparse?

Numerous studies have found that real biological systems are not strictly modular. The term “module” is usually defined as a self-contained unit with well characterized connections into and out of it and a well characterized function and operation. Biological systems do not have well defined modules. The reality is messy. Perhaps if you squint at the diagrams for long enough then you can convince yourself that it’s made of distinct parts which each do a single specific task, but in fact it is not. It was not built by an engineer.

Function does not follow form in gene regulatory circuits

Joshua L. Payne & Andreas Wagner (2015)


Gene regulatory circuits are to the cell what arithmetic logic units are to the chip: fundamental components of information processing that map an input onto an output. Gene regulatory circuits come in many different forms, distinct structural configurations that determine who regulates whom. Studies that have focused on the gene expression patterns (functions) of circuits with a given structure (form) have examined just a few structures or gene expression patterns. Here, we use a computational model to exhaustively characterize the gene expression patterns of nearly 17 million three-gene circuits in order to systematically explore the relationship between circuit form and function. Three main conclusions emerge. First, function does not follow form. A circuit of any one structure can have between twelve and nearly thirty thousand distinct gene expression patterns. Second, and conversely, form does not follow function. Most gene expression patterns can be realized by more than one circuit structure. And third, multifunctionality severely constrains circuit form. The number of circuit structures able to drive multiple gene expression patterns decreases rapidly with the number of these patterns. These results indicate that it is generally not possible to infer circuit function from circuit form, or vice versa.

Network motifs: structure does not determine function

Piers J Ingram, Michael PH Stumpf, Jaroslav Stark (2006)


Background: A number of publications have recently examined the occurrence and properties of the feed-forward motif in a variety of networks, including those that are of interest in genome biology, such as gene networks. The present work looks in some detail at the dynamics of the bi-fan motif, using systems of ordinary differential equations to model the populations of transcription factors, mRNA and protein, with the aim of extending our understanding of what appear to be important building blocks of gene network structure.
Results: We develop an ordinary differential equation model of the bi-fan motif and analyse variants of the motif corresponding to its behaviour under various conditions. In particular, we examine the effects of different steady and pulsed inputs to five variants of the bifan motif, based on evidence in the literature of bifan motifs found in Saccharomyces cerevisiae (commonly known as baker’s yeast). Using this model, we characterize the dynamical behaviour of the bi-fan motif for a wide range of biologically plausible parameters and configurations. We find that there is no characteristic behaviour for the motif, and with the correct choice of parameters and of internal structure, very different, indeed even opposite behaviours may be obtained.
Conclusion: Even with this relatively simple model, the bi-fan motif can exhibit a wide range of dynamical responses. This suggests that it is difficult to gain significant insights into biological function simply by considering the connection architecture of a gene network, or its decomposition into simple structural motifs. It is necessary to supplement such structural information by kinetic parameters, or dynamic time series experimental data, both of which are currently difficult to obtain.

A spectrum of modularity in multi-functional gene circuits

Alba Jiménez, James Cotterell, Andreea Munteanu & James Sharpe (2017)


A major challenge in systems biology is to understand the relationship between a circuit’s structure and its function, but how is this relationship affected if the circuit must perform multiple distinct functions within the same organism? In particular, to what extent do multi-functional circuits contain modules which reflect the different functions? Here, we computationally survey a range of bi-functional circuits which show no simple structural modularity: They can switch between two qualitatively distinct functions, while both functions depend on all genes of the circuit. Our analysis reveals two distinct classes: hybrid circuits which overlay two simpler mono-functional sub-circuits within their circuitry, and emergent circuits, which do not. In this second class, the bi-functionality emerges from more complex designs which are not fully decomposable into distinct modules and are consequently less intuitive to predict or understand. These non-intuitive emergent circuits are just as robust as their hybrid counterparts, and we therefore suggest that the common bias toward studying modular systems may hinder our understanding of real biological circuits.

The following article finds “dynamic” sub-networks embedded within the larger network. The authors argue that biological systems can still be decomposed into smaller and more manageable sub-systems. However, the functional modules are delineated not by the structure of the network but by their function, and they can overlap each other, so components participate in multiple functional sub-networks.

Modularity, criticality, and evolvability of a developmental gene regulatory network

Berta Verd, Nicholas AM Monk, Johannes Jaeger (2019)


The existence of discrete phenotypic traits suggests that the complex regulatory processes which produce them are functionally modular. These processes are usually represented by networks. Only modular networks can be partitioned into intelligible subcircuits able to evolve relatively independently. Traditionally, functional modularity is approximated by detection of modularity in network structure. However, the correlation between structure and function is loose. Many regulatory networks exhibit modular behaviour without structural modularity. Here we partition an experimentally tractable regulatory network—the gap gene system of dipteran insects—using an alternative approach. We show that this system, although not structurally modular, is composed of dynamical modules driving different aspects of whole-network behaviour. All these subcircuits share the same regulatory structure, but differ in components and sensitivity to regulatory interactions. Some subcircuits are in a state of criticality, while others are not, which explains the observed differential evolvability of the various expression features in the system.


I guess one reason evolution favors modularity is that the simpler the component you randomly change, the higher the success/failure ratio of that change. Simply because a more complex (sub)system must have more vital (== not to be messed with) dependencies than a modular system with the same capabilities.
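
That intuition can be checked with a toy model: if a random change must leave every vital dependency intact, and each dependency independently survives with probability p (0.9 here is an arbitrary choice), the survival rate falls off roughly as p^k with the number of dependencies k.

```python
import random

def survival_rate(num_dependencies, trials=100_000, p=0.9):
    """Chance a random mutation leaves all vital dependencies intact,
    assuming each dependency independently survives with probability p."""
    ok = 0
    for _ in range(trials):
        if all(random.random() < p for _ in range(num_dependencies)):
            ok += 1
    return ok / trials

for k in (1, 3, 10):
    print(k, round(survival_rate(k), 2))  # ≈ 0.9, 0.73, 0.35
```

So a mutation confined to a small module (few dependencies) is far more likely to be viable than one touching a densely entangled subsystem.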


In support of @cezar_t’s comment:

Survival of the sparsest: robust gene networks are parsimonious

Robert D Leclerc (2008)


Biological gene networks appear to be dynamically robust to mutation, stochasticity, and changes in the environment and also appear to be sparsely connected. Studies with computational models, however, have suggested that denser gene networks evolve to be more dynamically robust than sparser networks. We resolve this discrepancy by showing that misassumptions about how to measure robustness in artificial networks have inadvertently discounted the costs of network complexity. We show that when the costs of complexity are taken into account, that robustness implies a parsimonious network structure that is sparsely connected and not unnecessarily complex; and that selection will favor sparse networks when network topology is free to evolve. Because a robust system of heredity is necessary for the adaptive evolution of complex phenotypes, the maintenance of frugal network complexity is likely a crucial design constraint that underlies biological organization.


Dense networks are, at face value, more optimal. That’s why naive evolutionary algorithms settle on dense connectivity: dense networks are simpler, and they perform just as well as sparse networks at most tasks.

Sparse networks, on the other hand, appear to have superior “evolvability”.


Complex Adaptations and the Evolution of Evolvability

Günter P. Wagner, Lee Altenberg (1996)


The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally effective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess “evolvability,” i.e., the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype‐phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variations, which are the actually realized differences between individuals. The genotype‐phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility, developmental dissociability, and morphological integration. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype‐phenotype map which allows improvement by mutation and selection? Is the genotype‐phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype‐phenotype map? We propose that the genotype‐phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype‐phenotype map in which there are few pleiotropic effects among characters serving different functions, with pleiotropic effects falling mainly among characters that are part of a single functional complex. 
Such a design is expected to improve evolvability by limiting the interference between the adaptation of different functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insufficient to assess the plausibility of these models, they form the beginning of a framework for understanding the evolution of the genotype‐phenotype map.

Evolution of Evolvability in Gene Regulatory Networks

Anton Crombach, Paulien Hogeweg (2008)


Gene regulatory networks are perhaps the most important organizational level in the cell where signals from the cell state and the outside environment are integrated in terms of activation and inhibition of genes. For the last decade, the study of such networks has been fueled by large-scale experiments and renewed attention from the theoretical field. Different models have been proposed to, for instance, investigate expression dynamics, explain the network topology we observe in bacteria and yeast, and for the analysis of evolvability and robustness of such networks. Yet how these gene regulatory networks evolve and become evolvable remains an open question. An individual-oriented evolutionary model is used to shed light on this matter. Each individual has a genome from which its gene regulatory network is derived. Mutations, such as gene duplications and deletions, alter the genome, while the resulting network determines the gene expression pattern and hence fitness. With this protocol we let a population of individuals evolve under Darwinian selection in an environment that changes through time.

Our work demonstrates that long-term evolution of complex gene regulatory networks in a changing environment can lead to a striking increase in the efficiency of generating beneficial mutations. We show that the population evolves towards genotype-phenotype mappings that allow for an orchestrated network-wide change in the gene expression pattern, requiring only a few specific gene indels. The genes involved are hubs of the networks, or directly influencing the hubs. Moreover, throughout the evolutionary trajectory the networks maintain their mutational robustness. In other words, evolution in an alternating environment leads to a network that is sensitive to a small class of beneficial mutations, while the majority of mutations remain neutral: an example of evolution of evolvability.


And finally, how can we make artificial evolution mimic biological evolution?

Spontaneous evolution of modularity and network motifs

Nadav Kashtan and Uri Alon (2005)


Biological networks have an inherent simplicity: they are modular with a design that can be separated into units that perform almost independently. Furthermore, they show reuse of recurring patterns termed network motifs. Little is known about the evolutionary origin of these properties. Current models of biological evolution typically produce networks that are highly nonmodular and lack understandable motifs. Here, we suggest a possible explanation for the origin of modularity and network motifs in biology. We use standard evolutionary algorithms to evolve networks. A key feature in this study is evolution under an environment (evolutionary goal) that changes in a modular fashion. That is, we repeatedly switch between several goals, each made of a different combination of subgoals. We find that such ‘‘modularly varying goals’’ lead to the spontaneous evolution of modular network structure and network motifs. The resulting networks rapidly evolve to satisfy each of the different goals. Such switching between related goals may represent biological evolution in a changing environment that requires different combinations of a set of basic biological functions. The present study may shed light on the evolutionary forces that promote structural simplicity in biological networks and offers ways to improve the evolutionary design of engineered systems.


Why do modularly varying goals speed up evolution (in terms of the number of generations to reach perfect solution) when compared with evolution under a fixed goal? One reason that fixed-goal evolution is often slow is that the population becomes stuck in local fitness maxima. Because the fitness landscape changes each time that the goal changes, modularly varying goals can help move the population from these local traps. Over the course of many goal changes, modularly varying goals seem to guide the population toward a region of network space that contains fitness peaks for each of the goals in close proximity. This region seems to correspond to modular networks.

Mutation Rules and the Evolution of Sparseness and Modularity in Biological Systems

Tamar Friedlander, Avraham E. Mayo, Tsvi Tlusty, Uri Alon (2013)


Biological systems exhibit two structural features on many levels of organization: sparseness, in which only a small fraction of possible interactions between components actually occur; and modularity – the near decomposability of the system into modules with distinct functionality. Recent work suggests that modularity can evolve in a variety of circumstances, including goals that vary in time such that they share the same subgoals (modularly varying goals), or when connections are costly. Here, we studied the origin of modularity and sparseness focusing on the nature of the mutation process, rather than on connection cost or variations in the goal. We use simulations of evolution with different mutation rules. We found that commonly used sum-rule mutations, in which interactions are mutated by adding random numbers, do not lead to modularity or sparseness except for in special situations. In contrast, product-rule mutations in which interactions are mutated by multiplying by random numbers – a better model for the effects of biological mutations – led to sparseness naturally. When the goals of evolution are modular, in the sense that specific groups of inputs affect specific groups of outputs, product-rule mutations also lead to modular structure; sum-rule mutations do not. Product-rule mutations generate sparseness and modularity because they tend to reduce interactions, and to keep small interaction terms small.
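
The sum-rule vs. product-rule difference can be seen in a toy simulation with no selection at all (the mutation parameters below are arbitrary): multiplying by random factors makes weight magnitudes diffuse in log space, so small weights stay small and many collapse toward zero, while adding random numbers keeps weights spread around their current values.

```python
import random

random.seed(0)

def near_zero_after(rule, steps=8000, n=200, eps=0.01):
    """Mutate a set of weights repeatedly under one rule (no selection)
    and count how many end up effectively zero."""
    w = [random.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        if rule == "sum":
            w[i] += random.gauss(0, 0.1)           # add a random number
        else:
            w[i] *= random.lognormvariate(0, 0.5)  # multiply by a random factor
    return sum(abs(x) < eps for x in w)

print("sum rule:", near_zero_after("sum"))
print("product rule:", near_zero_after("product"))
```

Even without any connection cost, the product rule alone produces many near-zero interactions, which is the paper’s route to sparseness.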


I wonder how an evolutionary strategy would handle the dendrite optimization problem.

An evolutionary approach potentially allows for large-scale searching for optimal solutions.
