I have begun the development of a new NEAT engine

I know I raised this issue years ago, about integrating HTM and NEAT (NeuroEvolution of Augmenting Topologies), but at the time, I had no idea how to do that.

I do now.

I call it “Spiking NEAT”, and I am writing it in Haskell. I am pulling in some aspects of HTM and the neural theory behind it, including some of the characteristics of pyramidal and Purkinje cells. So this is a research effort. I have no idea where this will lead.
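To give a rough idea of the spiking side: a leaky integrate-and-fire step is one plausible primitive an engine like this might start from. This is only a sketch; the names and fields are placeholders, not the engine’s actual types:

```haskell
-- A minimal leaky integrate-and-fire step. All names and fields
-- here are hypothetical placeholders, not the engine's actual types.
data Neuron = Neuron
  { potential :: Double   -- membrane potential
  , threshold :: Double   -- firing threshold
  , leak      :: Double   -- decay factor per tick, in [0,1]
  }

-- One simulation tick: leak the old potential, integrate the input
-- current, and fire (resetting to 0) when the threshold is crossed.
step :: Double -> Neuron -> (Bool, Neuron)
step input n
  | v >= threshold n = (True,  n { potential = 0 })
  | otherwise        = (False, n { potential = v })
  where v = leak n * potential n + input
```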

I am more than open to suggestions, especially now that I’ve just started. I am hoping to get something working before I get super busy with a new job, which will slow my progress a bit.


There’s a possibility that genetic algorithms have more potential when they are used to alter (aka evolve) the input representation or “embedding” rather than the network itself.
All learning algorithms work reasonably well on most inputs. Sure, some outperform others in various respects: some achieve higher accuracy, others better sample efficiency or compute efficiency, others forget less, some overfit, some are better suited to noisy data, and so on. But be it an ANN, k-NN, a linear model, trees, forests, XGBoost, whatever: there’s no clear winner.

The takeaway here being that IF there are some learnable correlations or patterns within some input data, all learning algorithms manage to figure out, to a greater or lesser degree, that these patterns exist.

What we (animals) seem to have is the power to simplify and clarify the few essential features that expose a property or characteristic.

I don’t know of any algorithm able to do that: to figure out not only that there’s a cat in the room, but also to pinpoint the few key elements in the sensory input which make it sure that fact is true.

Some evolvable rules for combining/extracting smaller parts of the sensory channels should not be hard to implement. With simple enough input, some learning algorithms train very fast, in dozens or hundreds of milliseconds per core.

Here’s a simplified schematic:

Raw input (multiple sensory channels) → Simplifier/combiner layer → Generic learner → Results

The evolutionary algorithm targets the simplifier/combiner layer that generates simple(r) representations of the raw input, NOT the learner. We know it has discovered an improved “perspective” on the input when the results of the generic learner improve.
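In types, the shape of this might look like the following Haskell sketch. Everything here is a hypothetical placeholder; the one thing it pins down is that a simplifier’s fitness is measured through an unchanged learner:

```haskell
-- Hypothetical placeholder types for the pipeline sketched above.
-- Only the Simplifier is evolved; the Learner stays fixed.
type RawInput   = [Double]          -- concatenated sensory channels
type Features   = [Double]          -- simplified representation
type Simplifier = RawInput -> Features

-- A fixed generic learner: trains on (features, label) pairs and
-- reports accuracy on a held-out set.
type Learner = [(Features, Int)] -> [(Features, Int)] -> Double

-- Fitness of a candidate simplifier: how well the *unchanged*
-- learner does when it sees the world through that simplifier.
fitness :: Learner -> [(RawInput, Int)] -> [(RawInput, Int)]
        -> Simplifier -> Double
fitness learn trainSet testSet simp =
  learn (map (first simp) trainSet) (map (first simp) testSet)
  where first f (x, y) = (f x, y)
```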

Here’s an MNIST example task: let’s find a topology of 20 patches of 10 pixels each such that, when the input image is represented as 20 scalars (each patch sums up the values of the pixels it contains), we get the best accuracy.

So the genetic algorithm starts with a population of 100 sets of 20 random patches (10 pixels each).
The testing algorithm picks 1000 digits from the training dataset,
re-trains the same small initial network 100 times, each time with a different set of 20 patches,
then tests each trained network/patch-set combination.
It doesn’t need to reach top accuracy, only to figure out which patch sets allowed the “learner” to outperform the others, and to recombine the winning sets (e.g. the top 20 out of 100) into the following generation of the genetic algorithm.
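A rough Haskell sketch of one generation of that selection step, with the learner treated as a black box (its internals don’t matter to the GA, only the score it returns; all names here are illustrative):

```haskell
import Data.List (sortBy)
import Data.Ord  (Down (..), comparing)

type Image  = [Double]   -- 784 grey values, flattened 28x28
type Patch  = [Int]      -- 10 pixel indices into the image
type Genome = [Patch]    -- 20 patches = one candidate "topology"

-- Encode an image as 20 scalars: each patch sums the pixels it covers.
encode :: Genome -> Image -> [Double]
encode genome img = [ sum [ img !! i | i <- patch ] | patch <- genome ]

-- One generation of selection: score all 100 genomes on the same
-- 1000 digits and keep the best 20 as parents. The "generic learner"
-- is a black box that trains the small fixed net on the encoded
-- digits and returns a held-out accuracy.
selectParents :: ([([Double], Int)] -> Double)   -- train-and-score black box
              -> [(Image, Int)]                  -- the 1000 sampled digits
              -> [Genome]                        -- population of 100
              -> [Genome]                        -- top 20 parents
selectParents trainAndScore digits population =
  take 20 (sortBy (comparing (Down . score)) population)
  where
    score g = trainAndScore [ (encode g img, lbl) | (img, lbl) <- digits ]
```

Crossover and mutation of the surviving 20 genomes would then produce the next population of 100.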


Already I am seeing some possible ideas here. Like, for instance, Hebbian learning in the NEAT context. In “classical” NEAT, weights are evolved. Now it occurs to me that the “critter” (the group of evolving neurons out of a population of many) could also learn that way during its lifetime.
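As a first cut, the per-connection update could be plain Hebbian; a minimal sketch, assuming rate-coded activities (in practice a bounded variant such as Oja’s rule avoids runaway weights):

```haskell
-- Plain Hebbian update for one connection, assuming rate-coded
-- activities in [0,1]. `eta` is the learning rate.
hebb :: Double -> Double -> Double -> Double -> Double
hebb eta pre post w = w + eta * pre * post

-- Oja's variant keeps the weight bounded instead of growing forever.
oja :: Double -> Double -> Double -> Double -> Double
oja eta pre post w = w + eta * post * (pre - post * w)
```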

> What we (animals) seem to have is the power to simplify and clarify the few essential features that expose a property or characteristic.
>
> I don’t know of any algorithm able to do that: to figure out not only that there’s a cat in the room, but also to pinpoint the few key elements in the sensory input which make it sure that fact is true.

This is a part of “reasoning” that I’ve been thinking about. And now I ask the (largely rhetorical) question of what the minimal neural requirements to achieve that would be. Can ants do this? Worms? Mice? Chimps?

A tangential question is that of always having to set “goals” in order to “train” these systems. Kenneth Stanley, the inventor of NEAT, came up with the idea of “novelty search”, which requires no goals. I don’t know how far he went with that. He did have some simple examples of a “mouse” learning how to navigate a maze.
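From what I understand, the core of novelty search is just a distance in behaviour space: an individual scores well for ending up somewhere the population and archive haven’t been, with no task objective anywhere. A minimal sketch, assuming behaviours are plain vectors (e.g. the mouse’s final maze position):

```haskell
import Data.List (sort)

type Behaviour = [Double]   -- e.g. the (x, y) where the mouse ended up

-- Euclidean distance in behaviour space.
dist :: Behaviour -> Behaviour -> Double
dist a b = sqrt (sum [ (x - y) ^ (2 :: Int) | (x, y) <- zip a b ])

-- Novelty of a behaviour: mean distance to its k nearest neighbours
-- among everything the population and archive have produced so far.
-- Note there is no task objective anywhere in this score.
novelty :: Int -> [Behaviour] -> Behaviour -> Double
novelty k seen b = avg (take k (sort (map (dist b) seen)))
  where avg xs = sum xs / fromIntegral (length xs)
```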

Much to think about here.
