Please tell me what you think of this idea.
Through Jeff’s books and my research into Numenta’s work, I have noticed several operating principles of intelligence, or at least attributes of its makeup, and I have long wanted to embody those principles in something I call a Sensorimotor Inference Engine.
Though it’s been on the back burner of my mind for years, I think I’ve finally come up with an initial design that begins to take some of those principles of intelligence into account.
I’m not attempting to build NuPIC or instantiate the cortical algorithm. I’m just trying to use what I’ve learned, combined with simple neural net technology, to approximate a machine that maps a system and can control it, even if only in a very rudimentary way.
This is the Sensorimotor Inference Engine that I mentioned. Now, I have already built a sensorimotor inference engine, but a naive one. I’ll explain how it works, and then you’ll be able to see what I’m trying to do to advance it.
The naive implementation of the Sensorimotor Inference Engine works like this: the naive agent is in a continual feedback loop with the environment. The environment gives it sensory input, and it gives the environment motor output. Every time the naive agent sees sensory data, it saves that data in a database along with the action it chooses to take.
After it has explored the environment by making mostly random actions, you can tell it to put the environment into any state that you want. It looks that state up in its database to see if it has ever seen it before, and if it finds it, it searches for a path from the environment’s current state to that target state. This path is a list of behaviors, a sequence of motor commands that it must output to the environment.
It’s a simple idea. I can now control an environment via the Sensorimotor Inference Agent without actually knowing how the environment works or even how to work it.
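To make the idea concrete, here is a minimal sketch of such a naive agent, assuming an environment object with `observe()` and `step(action)` methods. The class name, the in-memory transition dictionary (standing in for the database), and the breadth-first path search are all my own illustrative choices, not the author’s actual implementation.

```python
import random
from collections import deque

class NaiveAgent:
    """Stores observed (state, action) -> next_state transitions,
    then searches them for a path to a requested target state."""

    def __init__(self, actions):
        self.actions = actions
        # transitions[state][action] = next_state, learned by exploration
        self.transitions = {}

    def explore(self, env, steps=1000):
        """Take mostly random actions, recording every transition seen."""
        state = env.observe()
        for _ in range(steps):
            action = random.choice(self.actions)
            next_state = env.step(action)
            self.transitions.setdefault(state, {})[action] = next_state
            state = next_state

    def find_path(self, start, goal):
        """Breadth-first search over remembered transitions.
        Returns a list of motor commands, or None if the goal
        was never reached during exploration."""
        if start == goal:
            return []
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            for action, nxt in self.transitions.get(state, {}).items():
                if nxt == goal:
                    return path + [action]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [action]))
        return None
```

Because every distinct state must be stored explicitly, the memory cost grows with the size of the state space, which is exactly the limitation described below.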
The problem, of course, is that the agent I built only works for really simple environments, because it’s naive. It has no intelligence: it does nothing probabilistic, and it has no ability to generalize. If the environment is too big to fit in a database, it cannot learn that environment.
Most environments are too big to put into a database. Even a simple 3x3 Rubik’s Cube is too big. I was able to train the naive agent on a 2x2 Rubik’s Cube because it has only something like 3.6 million states. But not the 3x3.
So I wanted to make an intelligent version, a very simple, but intelligent version of the sensorimotor inference engine.
This is the design I’ve come up with so far. I figure you’re going to need an Encoder to simplify the environment’s state so it can be passed up a hierarchy, and that encoder needs to exist on every layer of the hierarchy, with every layer looking pretty much the same. You’re going to need a Predictor to decide which state to move to next, with higher layers of the hierarchy telling lower layers which next state they want. And you’re going to need something to translate a state-to-state transition into a particular motor output. So I’m calling this the EPA circuit: Encoder, Predictor, Actor. It’s a way to wire up a few different models so that together they manage an environment the way you want them to.
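To show how the three pieces might wire together, here is a minimal sketch of one EPA layer, assuming each model can be treated as a plain function. The class name, the method signature, and the `goal_from_above` parameter are all my own invention; in a real system each stub would be a small neural net.

```python
class EPALayer:
    """One layer of the hypothetical EPA circuit: Encoder, Predictor, Actor."""

    def __init__(self, encoder, predictor, actor):
        self.encoder = encoder      # simplifies raw state for this layer and the one above
        self.predictor = predictor  # proposes which state to move to next
        self.actor = actor          # translates a state-to-state transition into a motor command

    def step(self, state, goal_from_above=None):
        encoded = self.encoder(state)
        # A higher layer, if present, dictates the target state;
        # otherwise this layer's own predictor chooses it.
        target = goal_from_above if goal_from_above is not None else self.predictor(encoded)
        return self.actor(encoded, target)

# Toy stubs: identity encoder, a predictor that always wants state + 1,
# and an actor whose "motor command" is the difference between states.
layer = EPALayer(
    encoder=lambda s: s,
    predictor=lambda s: s + 1,
    actor=lambda s, t: t - s,
)
layer.step(5)                      # predictor targets 6, actor emits 1
layer.step(5, goal_from_above=3)   # a higher layer overrides, actor emits -2
```

Stacking these layers, with each layer’s encoder feeding the one above and each layer’s chosen target flowing back down as `goal_from_above`, would give the hierarchy described above.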
So I’ve already made the “Naive” implementation of a Sensorimotor Inference Engine. If this new implementation works, it’ll be the “Simple” agent, because I’m sure there are much more efficient and effective implementations that can be made. (I mean, the neocortex is basically the most advanced implementation you could imagine.)
If you’re interested in this, let me know what you think. I know almost nothing about neural nets, so I need all the feedback I can get.