How are we going to teach an agent?

Eventually we will have to make the vast body of human knowledge available to the agent. But is it possible to do that without making the agent human-like and embodied?
Do you have any ideas about how to teach it?


The latest challenge from DeepMind provides hundreds of books from Project Gutenberg along with questions. The challenge is for the AI to read a book and answer the questions: questions that require understanding of what was read. Amazon has plenty of books in soft copy for an aspiring AI to learn from.
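As a rough sketch of the evaluation side of such a challenge, free-form answers in machine-reading tasks are often scored by token-overlap F1 against a reference answer. The function below is a generic illustration of that idea, not DeepMind's actual metric:

```python
# Generic token-overlap F1, a common way to score free-form QA answers.
# This is an illustrative sketch, not the scoring code of any specific dataset.

def token_f1(prediction: str, reference: str) -> float:
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Count how many predicted tokens also appear in the reference,
    # respecting multiplicity.
    ref_counts: dict[str, int] = {}
    for t in ref:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if ref_counts.get(t, 0) > 0:
            ref_counts[t] -= 1
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the cup on the table", "a cup on a table"))
```

A model that merely pattern-matches surface text scores poorly on questions whose answers never appear verbatim in the book, which is what makes this kind of benchmark a test of understanding rather than retrieval.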

I think it will have to have some sort of motor output and a way to interact with an environment. That way, it can fill gaps in its knowledge, rather than waiting for that information to happen to arrive on its sensors. I’m not sure how human-like it has to be. You would probably want it to learn fast, so you would run it on a supercomputer rather than waiting a decade for it to learn. It would be worth building another supercomputer if that meant it finished learning a year earlier, if it is at a scale of intelligence where you would give it all of human knowledge.

Let’s say you’ve got enough computing power to run a superintelligent AI. At first, most of that computing power would go to waste because it is learning simple concepts. So instead of starting with one massive cortex, you would start with a bunch of separate ones, each learning more or less independently. Each small artificial cortex would receive completely independent sensory input. You might enhance some for specific types of learning or specific senses, adding any specializations you want and fine-tuning things like learning rate. Then you start joining them together. In the context of HTM, this means voting on the object (or some more abstract concept, assuming intelligence works like object recognition in some ways). Instead of learning and exploring the virtual world independently, some start pointing their sensors at the same thing at once, so they can vote on the object. For intelligence, this would mean previously independent regions start focusing on the same thoughts. They vote on their thoughts or perceptions, so the knowledge they independently gathered is combined.
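That voting step can be sketched roughly like this, with each column mapping features to the objects it has learned contain them. The objects, features, and function names below are invented for illustration, not HTM implementation code:

```python
# Hypothetical sketch of HTM-style object voting across independent "cortices".
# Each column keeps the set of candidate objects consistent with its own input;
# voting intersects the candidates, so independently gathered knowledge combines.

def sense(column_memory: dict, feature: str) -> set:
    """Return the objects this column knows that contain the sensed feature."""
    return {obj for obj, feats in column_memory.items() if feature in feats}

# Three columns that learned (mostly) independently; contents are made up.
columns = [
    {"cup": {"rim", "handle"}, "can": {"rim", "lid"}},
    {"cup": {"handle", "curved_side"}, "bowl": {"rim", "curved_side"}},
    {"cup": {"rim", "curved_side"}, "can": {"flat_side", "lid"}},
]

# Each column points its sensor at the same object and senses one feature.
sensed = ["rim", "handle", "curved_side"]
votes = [sense(mem, feat) for mem, feat in zip(columns, sensed)]

# Voting: only objects that every column still considers possible survive.
consensus = set.intersection(*votes)
print(consensus)  # {'cup'}
```

No single column could identify the object from its one feature, but the intersection of their candidate sets resolves the ambiguity in a single step, which is the point of joining the small cortices together.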

I think we’ll use virtual environments to teach agents a lot of things. If the AI inside is properly general, it should transfer that learning into all environments.

Well, yes, we have tons of text. But as I understand it, text is just a compact representation of different region states. So text by itself is useless, I think. Of course we can mine some regularities and somehow transform them into states, then use those to make predictions. But I don’t see a clear mechanism for doing that.
Otherwise we will get another stupid chatbot.

You guys are talking about simulation. OK, imagine we want to cut a rope. The only thing we have is a cup on a table. We humans know that it’s possible to break the cup into pieces and use one of them to cut the rope.
This example shows that a lot of things, maybe even most of them, can’t be simulated. Otherwise we would have to go back to the 80s, when knowledge engineers manually built world representations for expert systems.

It might be hard, but cutting a rope can be simulated. You might want to build a robot at some point to teach it about that sort of thing if that’s easier, but it seems like most things we’ll want to teach it can be simulated. You’d probably want to feed it information from physical cameras and such, but only to fill in gaps in knowledge. Simulations can be way faster than the real thing. In the near future, we might have AI before we have coordinated robots. We have robots, but just walking up stairs is a challenge for them.

It’s not just simulations. There’s also a vast amount of information available on the internet. Collecting that much information would be hard. The environments also don’t need to simulate our physical world. The agent probably just needs some ability to interact with its world: surfing the web, for example, or playing with protein folding. There is a computer game for that already.


Our hope is that this dataset will serve not only as a challenge for the machine reading community, but as a driver for the development of a new class of neural models which will take a significant step beyond the level of complexity which existing datasets and tasks permit.

I’m sure a lot of things can be simulated with sufficient accuracy. But I’m not sure I’ll see AGI in my lifetime with this approach.

It’s not just simulations. There’s also a vast amount of information available on the internet. Collecting that much information would be hard. The environments also don’t need to simulate our physical world. The agent probably just needs some ability to interact with its world.

As I said, I think this vast amount of information is relevant for a human being but useless for an artificial agent with a different constitution. And I think you are right: we are missing interaction.
A human interacts with information made for humans and gets an adequate response.
An artificial, non-human-like agent can’t interact with the same information and get the same response.
We need some kind of protocol for this.
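As a loose sketch of what such a protocol might look like, here is a minimal observe/act interface in the style of reinforcement-learning environments. Every class, field, and action string below is a made-up assumption for illustration, not an existing API:

```python
# Hypothetical agent-environment protocol: the agent never consumes
# human-formatted information directly; it exchanges observations and
# actions through a fixed interface. All names here are invented.

from dataclasses import dataclass

@dataclass
class Observation:
    sensors: dict   # raw readings, in whatever modality the agent has
    reward: float   # optional feedback signal
    done: bool      # whether the episode is over

class Environment:
    """Anything the agent interacts with: a simulation, the web, a robot."""

    def reset(self) -> Observation:
        return Observation(sensors={"text": "a cup on a table, a rope"},
                           reward=0.0, done=False)

    def step(self, action: str) -> Observation:
        # The environment, not the agent, decides what each action means.
        if action == "break cup":
            return Observation(sensors={"text": "sharp shards"},
                               reward=1.0, done=True)
        return Observation(sensors={"text": "nothing happens"},
                           reward=0.0, done=False)

env = Environment()
obs = env.reset()
obs = env.step("break cup")
print(obs.sensors["text"])  # sharp shards
```

The point of the fixed interface is exactly the "protocol" question above: once observations and actions have an agreed shape, the same agent can be pointed at a physics simulation, a web browser, or a robot body without caring that the underlying information was originally formatted for humans.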