ML and Deep Learning to automatically create AGI?

The thread “An interesting benchmark for intelligent agents” got me thinking about what it would be like to have a suite of environments that could be used as a benchmark to gauge Artificial General Intelligence.

Those thoughts led me to ask this question on Quora:

Considering the advent of GANs, how could we use current ML and AI algorithms to learn how to generate a general-purpose AI algorithm and data structure?

Now, I’d like to pose my question to the NuPIC/HTM community. We know that the brain has one ubiquitous, repeating structure. Let’s call this structure the ‘smallest unit of intelligence’ or SUI (I’m talking about a cortical column).

We know the SUI must do many things: learn sequences, make predictions, reason by analogy, and carry out many other tasks pertaining to General Intelligence.

Why can’t we use machine learning techniques and deep learning algorithms to determine what the structure of that Smallest Unit of Intelligence must be?

What we would need is a suite of sensorimotor environments of various types, all semantically encoded. You drop an agent with a particular brain structure into each one and see how well it learns the structure of its environment; a deep learning supervisor then tries to improve the brain by proposing a different SUI structure each iteration.
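Very roughly, the outer loop I have in mind would look something like this (just a sketch; the agent builder, the scoring, and the supervisor are all placeholders I’m inventing for illustration):

```python
# Sketch of the proposed search loop: a supervisor proposes candidate SUI
# structures, each candidate is dropped into every sensorimotor environment,
# and its average learning score drives the next proposal.
# build_agent, the environments, and the supervisor are placeholders here.

def evaluate_sui(sui_structure, environments, build_agent, episodes=100):
    """Average learning score of one candidate SUI across all environments."""
    scores = []
    for env in environments:
        agent = build_agent(sui_structure)        # instantiate a brain from the structure
        total = 0.0
        for _ in range(episodes):
            observation = env.reset()
            done = False
            while not done:
                action = agent.step(observation)  # agent acts and learns online
                observation, score, done = env.step(action)
                total += score                    # e.g. how well it predicts its world
        scores.append(total / episodes)
    return sum(scores) / len(scores)              # generality = mean over environments

def search_for_sui(environments, build_agent, supervisor, iterations=1000):
    """The supervisor proposes a new SUI structure each iteration, given past results."""
    best_structure, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = supervisor.propose_structure()
        score = evaluate_sui(candidate, environments, build_agent)
        supervisor.observe(candidate, score)      # feedback for the next proposal
        if score > best_score:
            best_structure, best_score = candidate, score
    return best_structure
```

The key point is that the supervisor only ever sees (structure, score) pairs, so anything from a deep network to a genetic algorithm could sit in that slot.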

Would anybody be interested in working on this project? What do you think of the idea?

6 Likes

I like the idea, and my main project is in a similar vein. I’d be willing to contribute if some others are interested in taking the lead.

3 Likes

I should point out that one should not expect whatever such a system generates to match biology, but it could still be useful.

1 Like

I don’t think this is accurate. The cortex can be thought of this way, but the subcortical structures of the brain have separate, distinct functions. An agent would need some of those functions to operate in a challenge like the one you mentioned (a cortex alone wouldn’t be sufficient).

7 Likes

I know that’s not accurate. I meant the cortex.

2 Likes

Cool, I think one interesting area to explore is how to create an interface between the ML-generated SUI collections and other functions like needs, rewards, action selection, and so on.

4 Likes

GANs are trained on examples of what you are going to generate. How are you going to implement such an approach in this case? :thinking:

1 Like

RL is probably a better approach, unless you could collect lots of examples of humans doing the challenges (which could be expensive).

2 Likes

GANs are just a proof of concept in this situation. I would make a suite of environments of various types and then let sensorimotor agents explore those environments. Perhaps a deep-learning-augmented genetic algorithm could be used to mutate the structure of the agent’s underlying smallest unit of intelligence. Once you have an agent that can function in all environments, you have a generic intelligence unit.
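Concretely, the genetic-algorithm part could be as bare-bones as this (a sketch; the genome encoding, `evaluate`, `mutate`, and `crossover` are all stand-ins for whatever we end up choosing):

```python
import random

# Bare-bones GA loop over SUI structures. Fitness is the agent's score across
# all environments; mutate/crossover depend entirely on how the SUI is encoded.
def evolve_sui(initial_population, evaluate, mutate, crossover,
               generations=200, elite_fraction=0.2):
    population = list(initial_population)
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        elite_count = max(2, int(len(scored) * elite_fraction))
        elite = scored[:elite_count]              # survivors seed the next generation

        children = []
        while len(children) < len(population) - elite_count:
            parent_a, parent_b = random.sample(elite, 2)
            children.append(mutate(crossover(parent_a, parent_b)))
        population = elite + children
    return max(population, key=evaluate)
```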

2 Likes

I really like the idea in a general sense.

An ANN can be thought of as a function approximator that searches for a set of parameters (unknown) constrained in some form (known). I’m thinking this idea is roughly the other way around: the parameters (inputs) are there, but the form (e.g. the SUI) is unknown. So we can use ML to search for the form.
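Or, spelled out: the usual inner loop (fit the parameters inside a fixed form) gets wrapped by an outer search over forms. A toy way to write the inversion, with everything schematic:

```python
# Ordinary ANN training: the form (architecture) is fixed, the parameters are searched.
# The inverted problem: the data is given, and the form itself is what gets searched.

def fit_parameters(form, dataset, train):
    """Inner loop: standard training of one fixed form; returns its final loss."""
    model = form.instantiate()
    return train(model, dataset)      # e.g. gradient descent on the weights

def search_form(candidate_forms, dataset, train):
    """Outer loop: the inner loop is used only to score each candidate form."""
    return min(candidate_forms, key=lambda form: fit_parameters(form, dataset, train))
```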

1 Like

Interesting,

You could switch it up and use EAs/GAs instead of GANs, but I’m not sure how easy it would be to automate a process for building unconstrained models through mutation/crossover. It would largely depend on what representation you choose to use.

I’ve seen this question asked before. The search space is simply too big. People have tried evolutionary algorithms to create neural networks, but the current best approaches are designed by people.

3 Likes

I think we can narrow the search space by leveraging what we know about the cortex. We know it must self-assemble in a scale-hierarchical fashion, that the SUI has some memory capability for learning sequences, that it predicts the future, that it sits in a specific sensorimotor relationship with its environment, and so on.

I don’t think there is any group of people on earth better suited to narrow the search space than Numenta people.

Even if we start with some bloated ensemble of various types of neural networks, we can give the genetic algorithm large building blocks to play with and see how it connects them into an SUI, then get more finely tuned and efficient once we learn which configurations seem to work best.
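For example, the genome could just be a bag of coarse, pre-built blocks plus the wiring between them (the block names below are only examples of the kind of large pieces I mean):

```python
import random

# A genome made of large, pre-built building blocks plus the wiring between them.
BLOCK_LIBRARY = ["spatial_pooler", "temporal_memory", "grid_cell_module",
                 "cnn_encoder", "lstm", "attention"]

def random_genome(num_blocks=4):
    blocks = [random.choice(BLOCK_LIBRARY) for _ in range(num_blocks)]
    # Each later block may read from any earlier block.
    wiring = [(src, dst) for dst in range(num_blocks)
              for src in range(dst) if random.random() < 0.5]
    return {"blocks": blocks, "wiring": wiring}

def mutate(genome, rate=0.1):
    # Swap individual blocks for others in the library; keep the wiring for now.
    blocks = [random.choice(BLOCK_LIBRARY) if random.random() < rate else block
              for block in genome["blocks"]]
    return {"blocks": blocks, "wiring": list(genome["wiring"])}
```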

2 Likes

Deep learning works like this: you have inputs (x1, x2, x3, …) and outputs (y1, y2, y3, …), but you don’t know the specific mapping between them, so you use a NN to approximate that mapping. That is why a NN is also called a function approximator.
In this case, what is your input? Sensor data? And what is your output? The behavior of the intelligent agent?
Even in reinforcement learning, you need an evaluation standard. What would it be here?
The behavior of the intelligent agent?

And, you have to prove that a particular NN structure will produce generic intelligence.
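To put the question in code: whoever builds this has to fill in every slot of an interface roughly like the one below, and it’s exactly those slots that seem undefined to me (the class is purely illustrative):

```python
from abc import ABC, abstractmethod

# The open questions made concrete: what exactly goes in each of these slots?
class SensorimotorEnvironment(ABC):
    @abstractmethod
    def reset(self):
        """Return the first sensory observation. What encoding? An SDR? Raw pixels?"""

    @abstractmethod
    def step(self, action):
        """Apply a motor action and return (observation, evaluation, done).
        `evaluation` is the unanswered part: a reward? prediction accuracy?
        some judgment of the agent's behavior?"""
```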

1 Like

Projects like Google’s AutoML give some indication that this type of approach to designing NNs is not entirely impossible. The challenges of course will be to enable very small, measurable incremental improvements and a training process that doesn’t require prohibitively expensive computing resources (or eons) to complete.

1 Like

Nature does this by spinning up multiple instances and running the trials in parallel!
Unsuccessful variations are terminated immediately to free up the resources for further testing.
Nature multi-tasks!

5 Likes

The approach I’m thinking about is breaking the problem down into smaller pieces. There are specific functions that we can theorize are necessary (head direction, egocentric/allocentric transformations, etc.). A starting point would be to evolve integrable networks that can perform some of these functions. Create a toolbox of functions first, then move on to creating configurations of them that perform simple tasks. And so on, gradually increasing the level of abstraction involved in the search space.
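In rough code form, the staging might look like this (the task and search routines are placeholders):

```python
# Stage 1: evolve one small network per theorized function, each against its own task.
# Stage 2: search over ways of wiring the evolved modules together on simple tasks.
# evolve_module and evolve_wiring stand in for whatever search method is used.

def build_toolbox(function_tasks, evolve_module):
    """function_tasks: e.g. {"head_direction": task_a, "ego_to_allo": task_b, ...}"""
    return {name: evolve_module(task) for name, task in function_tasks.items()}

def evolve_configurations(toolbox, simple_tasks, evolve_wiring):
    """Search for configurations of toolbox modules that solve the simple tasks."""
    return [evolve_wiring(toolbox, task) for task in simple_tasks]
```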

1 Like

Nature does this by structured programming combined with extensive factoring.

The structures are admirable examples of object orientation at all scales. An object container inherits from more basic objects all the way down to the insides of the basic execution unit (a cell).

A compatible program storage unit has a distinct mapping onto the execution unit: modifications in the data store are directly expressed in the execution unit. This is what I mean when I say the system should allow direct access to all levels of expression in a comparable format.

Application in an ML case involves careful thought about how to partition the components so that modifications are distributed in a compatible way to the various levels of the model.

With some reflection you may see how to arrange the various layers and connections in a deep neural network (no matter what training method) to gain some of the advantages that come from examining how nature does things. Nature really likes layers. I think this is some of the inspiration for the Unix way (pipes and filters).
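One way to read that in ML terms: keep the stored description and the built model in the same nested format, so an edit at any level of the description is directly expressed in the model. A toy illustration, nothing more:

```python
# Toy genotype -> phenotype mapping: the data store and the execution unit share
# the same nested structure, so edits at any level (cell parameters, layer makeup,
# region wiring) show up directly in the expressed model.
genome = {
    "regions": [
        {"layers": [
            {"cell_type": "pyramidal", "count": 128, "apical_length": 8},
            {"cell_type": "basket",    "count": 32,  "targets": "pyramidal"},
        ]},
    ],
    "region_wiring": [],   # connections between regions, in the same editable format
}

def express(genome):
    """Build the executable model level by level, straight from the description."""
    return [[dict(layer) for layer in region["layers"]]   # stand-in for real construction
            for region in genome["regions"]]
```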

1 Like

If we try to learn some best practices from nature, one thing that stands out to me is that the building blocks must be highly configurable.

Take neurons, for example. There are many different types that perform very different functions, and even ones classified as the same type can be used in many different ways: the length of the apical dendrite on pyramidal cells, for example, or basket cells being utilized differently depending on where they are in the brain, such as in the cerebellum, where they synapse on the Purkinje cells.

This is a very different approach than the building blocks of classical ANNs.
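In code terms, that suggests the unit of evolution is less like a fixed ANN layer and more like a heavily parameterized template; the parameters below are just the ones mentioned above:

```python
from dataclasses import dataclass

# One building block, many configurations: the same template covers cells that
# behave very differently depending on how it is parameterized.
@dataclass
class CellTemplate:
    cell_type: str              # e.g. "pyramidal", "basket"
    apical_length: int = 0      # pyramidal cells: how far the apical dendrite reaches
    inhibitory: bool = False
    target_type: str = ""       # basket cells: who they synapse on

neocortex_pyramidal = CellTemplate("pyramidal", apical_length=10)
cerebellar_basket = CellTemplate("basket", inhibitory=True, target_type="purkinje")
```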

5 Likes

The good thing is that you can start way down the evolutionary chain, but you do need a valid test suite that terminates poor performers as effectively as life does. There has to be a filter for minimum performance as soon as possible to avoid wasting resources; a candidate will have to pass whatever minimum tests are required to be born into the competition.
You could use this idea of stages of testing that get progressively harder as the agent develops.
For the GAN crowd: you could evolve the tests as you evolve the critter.
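The gauntlet could be as simple as this (a sketch; the stages are whatever the test suite defines):

```python
# Staged evaluation: stages get progressively harder, and a candidate is culled
# the moment it misses the minimum score for its current stage.
def run_gauntlet(agent, stages):
    """stages: list of (test, minimum_score) pairs, easiest first."""
    passed = 0
    for test, minimum_score in stages:
        if test(agent) < minimum_score:
            return passed            # terminated early; resources freed for other candidates
        passed += 1
    return passed                    # survived every stage

# The GAN-flavored twist: evolve the tests alongside the agents, e.g. by keeping
# the tests that the current population only barely passes.
```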

3 Likes