Self-improving intelligence

The lowest-level sensory inputs and motor commands which control the “body” (using the term loosely) will be hard-coded. The models, however, will be learned online by the organism itself, and due to random tie-breaker logic (as well as randomness in the tasks each organism encounters) they will self-assemble.

In other words, classification will happen, but I don’t need to write a classifier that can be interpreted by a third party. “Labeling” will happen via some pooling algorithm (currently exploring hex-grid formation in something like TBT’s output/object layer for this particular task).


It’s a bit heavy-duty, but could you not go down the evolutionary computation route and ‘evolve’ your agents, so to speak?



Possibly. However, without some key features of intelligence, it would be difficult to devise a method of comparing creatures to determine which is better. Without a way to make granular enough comparisons, you would be at the mercy of long waits for several properties to align by chance and register as an improvement.

Sorry, I assumed you intended to use something like this as a fitness function, though I admit that if you are not careful you could have periods where assessing agents with such a fitness function stalls.



Yes, it may be possible. It is just a bit easier for me to conceive of challenges that can target entities which have some basic level of intelligence. Also, the journey to develop a basic artificial organism is itself rewarding.

What do you mean with application and what do you mean with system?

Based on @Paul_Lamb’s answer (next quote), I’m not sure you’re talking about the same thing. Or I may be confused completely.

You see, in this analogy, wouldn’t the application be the dog instead of the dog show? The dog’s internal state could be “master wants me to return the ball” compared to “run around because I’m excited”. The judge needs to find out whether the dog performs intelligently to award the points, and so needs to get an appreciation of the dog’s internal state.

To get back to the software application, if you want to devise an intelligent system, it needs to interpret its internal state, doesn’t it? And so probably also does the judge algorithm.


I am also confused about the difference between application and system, so I will use other, more specific terms to answer your question.

The creatures being judged should be able to place value on their own internal models based on how well they are meeting their needs. This of course requires them to know their own internal state. This internal valuation should self-assemble without requiring a third party to view the creatures’ active neurons and interpret them.

The judge is a third party which only needs to generate composite intelligence scores based on the results of the various tasks. This is a purely objective function which does not require knowledge of the internal states of the creatures. The judge does not even need to be intelligent or change what it does over time. It merely needs to follow a simple set of rules.


Ok, I understand now. Thx.


Application in the business/industry sense - the application of HTM to a real-world scenario.

System encompasses the HTM part, the software simulation of intelligence.

If it leads to an internally generated “prediction” (say, the dog’s next motor command) that uses pooling rather than some fixed mapping/decoding, do you mean that it will learn through some sort of reinforcement learning with the outside environment? Like a baby, starting out random and gradually becoming purposeful?

I didn’t mean to coax you into a long conversation with my original question, so I’m happy to wait and see how it progresses.


Yes, this is one necessary component of the creature. I am currently exploring how to save emotional context as part of an object’s model. Babbling/randomness allows the creature to explore and model the outcomes of its own actions, and emotional context allows decisions to be chosen in pursuit of satisfying needs.

No problem, I enjoy talking about the project. I just haven’t brought it up much on the forum until I have a lot more of the missing pieces figured out. There are still obviously a whole lot of pieces missing that will be required for a functional organism as I am imagining it.


I am curious. Can you share more details about your platform of choice, environment structure and the organism’s most basic task that it would start from in your mind?

The application will be written in Go and will support being deployed in a distributed fashion (so that the process can be sped up or slowed down by adding/removing processing nodes).
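A minimal sketch of the scale-out idea in Go, with illustrative names only: challenge evaluations are pushed onto a channel and a configurable number of workers (standing in for processing nodes) drain it, so throughput rises or falls with the worker count.

```go
package main

import (
	"fmt"
	"sync"
)

// evaluateAll fans creature evaluations out to `workers` goroutines.
// score is a stand-in for actually running a challenge.
func evaluateAll(creatureIDs []int, workers int, score func(int) float64) map[int]float64 {
	jobs := make(chan int)
	var mu sync.Mutex
	results := make(map[int]float64)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				s := score(id)
				mu.Lock()
				results[id] = s
				mu.Unlock()
			}
		}()
	}
	for _, id := range creatureIDs {
		jobs <- id
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	scores := evaluateAll([]int{1, 2, 3, 4}, 2, func(id int) float64 {
		return float64(id) * 0.1 // stand-in for a challenge score
	})
	fmt.Println(len(scores), "creatures scored")
}
```

In a real deployment the workers would live on separate nodes behind a queue rather than in one process, but the shape of the problem is the same.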

Every creature will have an interface which provides an API with which new challenges can be created in the future, without having to code all of the challenges up front before kicking off the system. Communication from sensory input to the creature, and from the creature to motor commands will all pass through this interface.

As subsequent generations become more intelligent, or more pressure needs to be applied to a particular area, new challenges can be written which are more complex or which target specific problem areas. Another benefit of such an interface is that it could be used to plug the AI into a practical system later (such as a toy robot), once it has reached a desired level of intelligence.
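The interface described above might look something like the following Go sketch. This is hypothetical (the type and method names are my own, not the project's): challenges push sensory input through the boundary and read motor commands back, and a trivial echoing creature shows how a new challenge can be written against the interface alone.

```go
package main

import "fmt"

// Creature is the boundary every organism implements: challenges write
// sensory input into it and read motor commands back out.
type Creature interface {
	Sense(input []byte) // deliver one tick of sensory data
	Act() []byte        // read the resulting motor commands
}

// Challenge drives a Creature through a task and reports a score.
type Challenge interface {
	Name() string
	Run(c Creature) float64
}

// EchoCreature is a trivial stand-in that mirrors its input, useful
// for exercising the interface before any learning exists.
type EchoCreature struct{ last []byte }

func (e *EchoCreature) Sense(input []byte) { e.last = input }
func (e *EchoCreature) Act() []byte        { return e.last }

// MimicChallenge scores how exactly a creature reproduces a signal.
type MimicChallenge struct{ signal []byte }

func (m MimicChallenge) Name() string { return "mimic" }

func (m MimicChallenge) Run(c Creature) float64 {
	c.Sense(m.signal)
	out := c.Act()
	if len(out) != len(m.signal) {
		return 0
	}
	matches := 0
	for i := range out {
		if out[i] == m.signal[i] {
			matches++
		}
	}
	return float64(matches) / float64(len(m.signal))
}

func main() {
	ch := MimicChallenge{signal: []byte{1, 2, 3}}
	fmt.Println(ch.Run(&EchoCreature{})) // prints 1
}
```

New challenges, or later a toy robot, would only ever depend on the two interfaces, never on a creature's internals.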

The interface, being a critical component of the system, will have a series of tests that can be run against it to determine whether it is intact. The Judge will always make sure this component is not broken before doing any other intelligence scoring, and creatures with a broken interface will never be selected.
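A hypothetical sketch of that gatekeeping rule (function names are mine): the Judge runs the interface test suite first, and a creature whose interface is broken is never scored and so can never be selected.

```go
package main

import "fmt"

// InterfaceTest is one check that the creature's interface is intact.
type InterfaceTest func() bool

func interfaceIntact(tests []InterfaceTest) bool {
	for _, t := range tests {
		if !t() {
			return false
		}
	}
	return true
}

// judge returns the composite score and whether the creature is even
// eligible; runChallenges stands in for the real scoring pass.
func judge(tests []InterfaceTest, runChallenges func() float64) (float64, bool) {
	if !interfaceIntact(tests) {
		return 0, false // broken interface: skip scoring entirely
	}
	return runChallenges(), true
}

func main() {
	pass := func() bool { return true }
	fail := func() bool { return false }
	score := func() float64 { return 0.7 }

	if _, ok := judge([]InterfaceTest{pass, fail}, score); !ok {
		fmt.Println("broken interface: creature excluded")
	}
	if s, ok := judge([]InterfaceTest{pass}, score); ok {
		fmt.Println("intact interface, score:", s)
	}
}
```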

Intelligence scoring will borrow concepts from a book by psychologist Howard Gardner, called “Frames of Mind: The Theory of Multiple Intelligences”. In this book, Gardner discusses seven facets of intelligence: musical-rhythmic, visual-spatial, verbal-linguistic, logical-mathematical, bodily-kinesthetic, interpersonal, and intrapersonal. Although Gardner’s theory has been widely criticized by mainstream psychology, I think his work provides an excellent conceptual foundation for building challenges that target different aspects of intelligence. A creature’s score will be a composite of all seven categories.
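One plausible way to compute such a composite, sketched in Go with illustrative names (the actual weighting scheme is not specified above, so a plain mean-of-category-means is assumed here): average each category's challenge results, then average the categories, never inspecting the creature's internals.

```go
package main

import "fmt"

// TaskResult is a hypothetical record of one challenge outcome: the
// category it exercised and a normalized score in [0, 1].
type TaskResult struct {
	Category string
	Score    float64
}

// CompositeScore averages the per-category means into one number.
// It is purely objective: it never inspects a creature's internals.
func CompositeScore(results []TaskResult) float64 {
	sums := map[string]float64{}
	counts := map[string]int{}
	for _, r := range results {
		sums[r.Category] += r.Score
		counts[r.Category]++
	}
	if len(sums) == 0 {
		return 0
	}
	total := 0.0
	for cat, s := range sums {
		total += s / float64(counts[cat])
	}
	return total / float64(len(sums))
}

func main() {
	results := []TaskResult{
		{"musical-rhythmic", 0.8},
		{"musical-rhythmic", 0.6},
		{"visual-spatial", 0.4},
	}
	fmt.Printf("%.2f\n", CompositeScore(results)) // prints 0.55
}
```

Averaging per category first keeps a creature from inflating its score by grinding many easy challenges in a single facet of intelligence.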

So for example, a musical-rhythmic challenge might consist of listening to a song and measuring how quickly and accurately a creature can mimic the song.


I would suggest a system that avoids quantization effects if you are going to use evolution. Arrange things so that a small change in one of the design parameters produces a small change in behavior, never a large change, and ideally never no change at all. That way a step downward in the cost landscape happens for multiple reasons and has multiple justifications. Also, if you make the system large enough, trapping local minima tend to go away: the probability of being blocked from a downhill move in every possible direction you might try recedes as the number of dimensions increases.


Since the goal is to train them to rewrite their own source code so that the next generation is more intelligent than they are, I can’t imagine a way that a constraint like that could be enforced.

Real evolution kills a lot of the experiments before they are ever born!

Initially, the modifications they make while exploring the IDE “body” and modifying the source code will result in many, many failures that simply do not run due to syntax errors, infinite loops, etc. The hope would be that over time they learn particular patterns that don’t crash the application as soon as it starts, and the code modifications would thus start to become less random.


Sort of like SETI but searching for life in code space!

You are trying genetic programming? To me that is far too abrupt and quantized to work well with evolution. Maybe someone with expertise in crossover-based genetic algorithms would say otherwise. I tend to use Evolution Strategies (ES), which just use mutation. What I find is that even using neural networks to soften things up (in place of code) is not quite enough. You have to be very careful with the activation functions you use. Squashing-type functions are out, as they quantize by saturation. Piece-wise linear activation functions that switch behavior (slope) when the output is zero are okay. Sparsity-inducing activation functions like the square, or the signed square where you reintroduce the sign after squaring, are okay. Renormalization at each layer is a good idea.

I’m not sure what genetic programming is… I would say that it is just programming: training an AI to modify its own source code so that the resulting “offspring” score higher in the various intelligence challenges than they themselves did.

Yes. I think there were some artificial life experiments done with that.
In terms of evolving quantized systems, I did have some limited success with computational self-assembly, which is interesting: if you evolve a system that can add, say, two 8-bit numbers, and then give the system a larger area to grow/self-assemble, it can add, say, two 32-bit numbers. It is inductive.
Just as an example: