I’m not sure this counts as genetic programming… I would say that it is just programming: training an AI to modify its own source code so that the resulting “offspring” score higher in various intelligence challenges than the parent did.
Yes. I think there were some artificial life experiments done with that.
In terms of evolving quantized systems, I did have some limited success with computational self-assembly. Which is interesting because if you evolve a system that can add, say, two 8-bit numbers, then give the system a larger area to grow/self-assemble, it can add, say, two 32-bit numbers. It is inductive.
Just as an example: https://arxiv.org/abs/1303.2416
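To make the inductive property concrete, here is a hypothetical sketch (not the actual self-assembly system from the paper): a single full-adder “cell” rule that, when tiled over a larger area, yields a wider adder. The function names and the tiling scheme are illustrative only.

```python
# Hypothetical sketch of the "inductive" property: one local full-adder
# cell rule, tiled n_cells times, adds n_cells-bit numbers. Growing the
# area (more cells) widens the adder without changing the rule.

def full_adder(a, b, carry):
    """One local cell: add two bits plus a carry-in."""
    total = a + b + carry
    return total % 2, total // 2  # (sum bit, carry out)

def ripple_add(x, y, n_cells):
    """Tile the same cell n_cells times; capacity grows with area."""
    carry, result = 0, 0
    for i in range(n_cells):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result | (carry << n_cells)

# Same rule, different area:
assert ripple_add(200, 55, 8) == 255                       # 8-bit "area"
assert ripple_add(3_000_000_000, 1, 32) == 3_000_000_001   # 32-bit "area"
```

The point is that nothing about the local rule changes between the two calls; only the amount of space it is allowed to occupy does.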
Searching for a small gem in a very large space invokes this basic property.
Maybe I will go away and think about self-assembly of neural networks.
Yes, I do believe that initial attempts will fail. My actual goal is to understand the minimum level of intelligence required for a system to recursively self-modify its source code to produce more intelligent variations of itself (i.e. not an exercise in memory or parameter tweaking, but actually compiling new applications). I am hoping to gain some insight into what would be required for something like the singularity to actually function. (Many folks I have talked to seem to think that human-level artificial intelligence is some magical threshold, but I am not convinced.)
Assuming, in the worst case, that you have a million monkeys writing code, the chance that any of it will even compile is very low.
Riffing off Sean’s comments - you could make some small atomic units with highly configurable parameters that could be combined in various ways (sensing, output, computing, memory), where every combination is at least code-correct. Mixing up the configuration and number of the atomic modules, with the inputs and outputs as some of the configurable items, should do something.
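A minimal sketch of what “every combination is at least code-correct” could look like; the module kinds (sense, memory, output) and their parameters are purely illustrative assumptions, not any existing system:

```python
# Hypothetical sketch: "atomic units" whose every random combination is
# at least runnable. Structural validity is guaranteed by construction,
# so no variant can fail to "compile" -- it can only be useless.
import random

class Sense:
    def __init__(self, gain):
        self.gain = gain
    def step(self, x):
        return x * self.gain

class Memory:
    def __init__(self, decay):
        self.decay = decay
        self.state = 0.0
    def step(self, x):
        self.state = self.decay * self.state + x
        return self.state

class Output:
    def __init__(self, threshold):
        self.threshold = threshold
    def step(self, x):
        return 1.0 if x > self.threshold else 0.0

def random_creature(rng):
    """Any mix and ordering of modules forms a valid pipeline --
    monsters can be dull, but never broken."""
    pool = [Sense(gain=rng.uniform(0.5, 2.0)),
            Memory(decay=rng.uniform(0.1, 0.9)),
            Output(threshold=rng.uniform(0.5, 2.0))]
    return rng.sample(pool, k=rng.randint(1, 3))

rng = random.Random(0)
creature = random_creature(rng)
signal = 1.0
for m in creature:
    signal = m.step(signal)  # always runs, whatever the combination
```

Because every module exposes the same `step` interface, the search operates over configurations rather than raw syntax, which is exactly what keeps the million monkeys from producing uncompilable code.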
This method would serve the same basic function as in biology - death to monsters that had broken metabolisms and could never live.
It’s still a huge search space.
True, but it would never allow, for example, the HTM neuron itself to be optimized or evolved into a new type of neuron with other beneficial properties, since the creature would not have access to that level. But you are correct about a million monkeys writing code… it may not actually be possible for a low-level intelligence to learn any beneficial coding patterns, and thus its edits would always be purely random.
However, one of the sub-modules is a highly configurable NN model that can be tweaked INSIDE the other modules. This is what I was thinking about when I said highly configurable modules.
The other thing that is absolutely necessary is memory of whatever the configuration is, so it can be inherited and modified. DNA has memory of both low- and high-level details as the creature comes into being: parts of the program specify the chemistry, parts the low-level arrangement of sub-systems, and some the higher-level arrangement of the sub-systems.
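A minimal sketch of that inheritable configuration memory, assuming the “DNA” is simply a record of parameters that is copied whole into each offspring and then perturbed (the genome keys here are made up for illustration):

```python
# Hypothetical sketch: the "DNA" as a configuration record that is
# copied into each offspring and then mutated. Without this memory
# there is nothing to inherit, and nothing to improve upon.
import copy
import random

def reproduce(genome, rng, rate=0.1, scale=0.2):
    """Copy the parent's full configuration, then perturb a few genes."""
    child = copy.deepcopy(genome)            # inheritance: exact copy first
    for key, value in child.items():
        if rng.random() < rate:              # mutation: occasional variation
            child[key] = value + rng.gauss(0, scale * abs(value) + 1e-9)
    return child

parent = {"gain": 1.0, "decay": 0.5, "threshold": 1.2}
rng = random.Random(42)
child = reproduce(parent, rng)
assert set(child) == set(parent)   # same genes, possibly different values
```

Note that the parent’s record is left untouched, which mirrors the biological split between the genome that is copied and the phenotype that is tested.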
I have been reading about how the DNA is elaborated into a living critter for a long time and it is utterly fascinating. Using simple chemical “smells” an axon growth cone can find a target vast distances away and yet find the right map, X/Y location in the map, layer, and cell type.
Keep in mind that DNA has NO intelligence driving the process but instead, relentless (and mindless) exploration of multidimensional search space. Those dimensions range from the chemistry to the high-level arrangement of parts. The metamorphosis of critters (butterflies, sexual maturity being examples) are further search spaces.
Do keep in mind that my goal isn’t just to train an AI to optimize configurations, but to determine the minimal level of intelligence needed to improve on the application’s own source code. This might initially seem to require human-level language understanding, but I suspect that is just how we learn to code. Are there other ways to write code toward a purpose that do not rely on a sophisticated human language?
In contemplating the ramifications of an AI attempting to replicate its source code and improve on it, my most immediate question would be: what computer language would the source code be rendered in? Contemplating further along the same lines, all computer languages eventually end up emitting machine code (or byte codes, in the case of a JVM). If we look at the task of the AI understanding the essence of what all of those machine code instructions are attempting to do, as compared to a general-purpose computer language (C, Java, Python, etc.), I am left thinking that rendering source code as machine code is the wrong direction to be headed. If we move in the other direction, we quickly enter the realm of domain-specific languages (DSLs), and that is where I think this project might get some legs.
My approach to designing DSLs has been to verbally describe the constructs, actions and actors, in English (the only language that I think well in) and to codify an English-like syntax that covers all of the aspects of the domain. In HTM terms (and this is from an HTM neophyte, so be kind please) it would be a language that describes HTM networks, their discrete components (encoders, SPs, TMs, etc.) and the connections between the components and groups of components.
Assuming that we have a functioning AI “compiled” from source code in such a DSL, it is conceivable that it could render productions in the DSL that would mimic its capabilities and perhaps even improve on them; however, considering that we are for all practical purposes endeavoring to do that very thing, right here, right now (with intelligence that is arguably not artificial) it will be interesting to see who/what wins the race.
As I learn more about HTM and as time permits, I’ll have a crack at designing a DSL for describing HTM networks. In the meantime, I am curious about how this idea strikes the members here.
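As a rough illustration of what such a DSL could compile down to, here is a hypothetical sketch: a declarative network description held as data, with an English-like surface syntax suggested in the comments. All component names and parameters are invented for illustration and are not any real HTM library’s API.

```python
# Hypothetical sketch of a DSL target: an HTM network described as data
# (components plus connections) rather than as imperative code.

class Network:
    def __init__(self):
        self.components = {}
        self.links = []

    def add(self, name, kind, **params):
        self.components[name] = {"kind": kind, "params": params}
        return self

    def connect(self, src, dst):
        self.links.append((src, dst))
        return self

# An English-like DSL surface syntax might read:
#   encoder "scalar-in" with resolution 0.5
#   sp "pooler" with columns 2048
#   tm "sequence" with cells-per-column 32
#   connect "scalar-in" to "pooler"
#   connect "pooler" to "sequence"
# ...which would build the structure below:
net = (Network()
       .add("scalar-in", "encoder", resolution=0.5)
       .add("pooler", "sp", columns=2048)
       .add("sequence", "tm", cells_per_column=32)
       .connect("scalar-in", "pooler")
       .connect("pooler", "sequence"))

assert ("pooler", "sequence") in net.links
```

Because the network is just data, an evolving agent could mutate it safely, which is the feasibility advantage of a DSL over raw machine code.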
To be clear, my goal with this project isn’t to invent some new process that is better than humans at creating stronger AI. It isn’t even an attempt to make any useful product (though I expect it will tangentially lead to useful things along the way)
The goal is to demonstrate that human-level intelligence is not a threshold for a system to be capable of modifying its own source code to recursively generate improved versions of itself that are in turn able to do the same a measurable amount better than the previous iteration.
What level of intelligence is there in evolution?
Just to eliminate any confusion about brains, say we are making an apple or pear tree.
The same general process is at work in all evolution, including brains and immune systems.
It is a process and not guided by intelligence.
I understand that aspect of your goal. I believe my comment is orthogonal to that, insofar as whatever we’re talking about is going to have to render something. I’m suggesting that a DSL would be more feasible than a general-purpose PL or machine code. I also get that you are interested in taking this right down to modifying the HTM algorithms themselves, in which case the DSL tack may well run out of gas.
@Bitking I think a better analogy than evolution is selective breeding of GMO, but I get your point
@Paul_Lamb In that we are on an HTM forum and my original post to this topic was, in fact, my first post to any topic on this forum, it would be fair to say that I did so with HTM on the brain (so to speak). Now, I get that we’re not talking about HTM here but AI in general.
From a selective breeding POV, and using the @Bitking apple tree analogy, we need an agent of change to introduce a new source code modification. Something like a bee carrying a pollen sack to a nectar-laden blossom. If that random event results in one of the resultant seeds germinating into a mature tree that (let’s say) produces more nectar than its parents, representing a decidedly better version (for both the tree and the bees), how would the tree “know” that a better version had been rendered?
I’m speaking figuratively here in that we don’t want to create trees but source code. We still need an agent of change and it would require constraints that would tend to guarantee a measure of success. In the apple tree analogy, the apple genome would provide the constraint, guaranteeing that a germinating seed would result in an apple tree, for better or worse. (We’ll ignore, for the moment, mutation forming actors like solar radiation and genetic engineers.) The constraints for the self replicating AI would (should?) have similar guarantees.
You can probably see where I’m going with this train of thought. I dearly want to finish it, but this is play time for me and my real job calls (screams, more like it) so I must leave for the time being.
If you are writing a special language in some kind of purpose-built execution program (byte code) you could have it feed into evolution as described in the William Calvin book “THE CEREBRAL CODE Thinking a Thought in the Mosaics of the Mind”.
You could help evolution get started by writing some trial machines in the language to kick-start the process.
From the Calvin book:
Natural selection alone isn’t sufficient for evolution, and neither is copying alone – not even copying with selection will suffice. I can identify six essential aspects of the creative darwinian process that bootstraps quality.

1. There must be a reasonably complex pattern involved.
2. The pattern must be copied somehow (indeed, that which is copied may serve to define the pattern).
3. Variant patterns must sometimes be produced by chance.
4. The pattern and its variant must compete with one another for occupation of a limited work space. For example, bluegrass and crab grass compete for back yards.
5. The competition is biased by a multifaceted environment, for example, how often the grass is watered, cut, fertilized, and frozen, giving one pattern more of the lawn than another. That’s natural selection.
6. There is a skewed survival to reproductive maturity (environmental selection is mostly juvenile mortality) or a skewed distribution of those adults who successfully mate (sexual selection), so new variants always preferentially occur around the more successful of the current patterns.
The “parent” will always start with its own source code, which is the copy.
At first, any beneficial changes would be by chance, but the goal is more akin to GMO, where it isn’t purely chance, but intelligently guided modifications (i.e. not just waiting for some random beneficial trait and then intelligently selecting for that trait, but also being the cause of the trait occurring in the first place)
System resources limit how many creatures can be running at a time, and they are competing to be selected by the judge.
The competition in this system will involve a number of challenges spanning 7 categories of intelligence, all of which impact the composite score that is used to decide which “parents” are creating more intelligent “offspring”. New challenges and updated challenges will be added on the fly, and there will often be randomness within the challenges themselves.
The parents must have completed the challenges and scored better than their own parents (or they would not have been chosen). Also, there is the “death” of variants that crash or do not run, and the culling of variants with a broken interface.
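The replies above line up with Calvin’s six aspects, and the whole loop can be sketched in a few lines. This is a hypothetical stand-in only: the `judge` here scores closeness to a made-up target vector rather than real intelligence challenges.

```python
# Hypothetical sketch of the selection loop described above: copy the
# parent, vary it by chance, cap the population (limited work space),
# judge against challenges, cull variants that crash, and keep only
# offspring that outscore their parent.
import random

def mutate(genome, rng):
    return [g + rng.gauss(0, 0.1) for g in genome]   # chance variation

def judge(genome):
    """Stand-in composite score across challenges: closeness to a
    hidden target vector (made up for illustration)."""
    target = [1.0, -0.5, 2.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(generations=200, pop_limit=8, seed=0):
    rng = random.Random(seed)
    parent = [0.0, 0.0, 0.0]
    best = judge(parent)
    for _ in range(generations):
        offspring = []
        for _ in range(pop_limit):           # limited "work space"
            child = mutate(parent, rng)      # copy + variation
            try:
                score = judge(child)         # the judge's challenges
            except Exception:
                continue                     # death to variants that crash
            offspring.append((score, child))
        if offspring:
            score, child = max(offspring)
            if score > best:                 # must beat its own parent
                best, parent = score, child
    return parent, best

genome, score = evolve()
```

The `try/except` is the culling step: a variant that throws simply never enters the competition, just as a crashing creature never gets judged.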
Yes, we do have some pretty off-the-wall discussions in the Community Lounge from time to time
Interesting. It seems there is even a term for what I am trying to build – autoconstructive evolution (just not in the biological sense)
I clicked on something and my little comment was gone. Probably because I am a very frequent reader, but almost never post.
So I will try to repeat it: are you familiar with the PushGP language? It can modify itself and create offspring.
I created a subset of the language (wrote an interpreter) and added functionality so that loops can be created. Those loops cannot result in an error that halts the program. That has yet to be proven correct, though.
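One common way to get loops that provably cannot hang is a global step budget, so every run terminates regardless of what the evolved program contains. The sketch below is a toy stack machine written for illustration; it is not PushGP or the poster’s interpreter, and the `LOOP` instruction is invented.

```python
# Hypothetical sketch (not actual PushGP): a tiny stack machine where a
# fixed step budget bounds total work, so even pathological loops always
# terminate instead of halting the host with an error.

def run(program, budget=1000):
    """'LOOP n' repeats the rest of the program n times, but the global
    step budget caps execution no matter how large n is."""
    stack, steps, queue = [], 0, list(program)
    while queue and steps < budget:
        steps += 1
        op = queue.pop(0)
        if isinstance(op, int):
            stack.append(op)
        elif op == "ADD" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif op == "DUP" and stack:
            stack.append(stack[-1])
        elif op == "LOOP" and stack:
            n = min(stack.pop(), budget)      # huge counts are capped too
            queue = list(queue) * max(n, 0)
    return stack

assert run([2, 3, "ADD"]) == [5]
# A loop that asks for a billion iterations simply exhausts its budget:
assert run([1, 1_000_000_000, "LOOP", "DUP", "ADD"])  # terminates
```

Note also that malformed programs (e.g. `ADD` on an underfull stack) are silently skipped rather than raising, which is the same “no halting error” property, just enforced structurally.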