Numenta Research Meeting - August 10, 2020

But is this really an AGI then?

A real AGI needs to be able to attempt everything a human can, and probably more. You should just as easily be able to ask it to install Winblows on your PC as to fix the leaking tap in the bathroom, without telling it where the PC is or which tap is leaking. And if the AGI needs some particular tools or materials you don’t have handy, it needs to be able to go out and fetch them.

Anything short of that is not an AGI.

And having the required knowledge (i.e. the world model) to perform such a diverse range of operations will allow it to reconsider certain potentially harmful subgoals. Because it might need to know what walking down stairs is to install your PC, or what a credit card is to fix your leaky tap. And what queuing in line means. And how to be polite to someone when asking directions to the hardware store, or what size washers are needed for a certain model of water tap.

Think about it: if you had that power, but you also knew it could result in a potentially lethal outcome, wouldn’t you do some tests first? Wouldn’t you simulate first? Just like we do when testing prescription drugs before commercializing them? Same with cars and home appliances and so on. Simple people wouldn’t, but you are not a simple person. And an AGI shouldn’t be either.

Testing can only be done by a system with a correct model of the world. That’s the rational way to think about intuition. “Am I going to wear a mask today? What is the chance of getting infected in the store at this time of the day? Do I want to risk getting sick if I plan to visit my elderly parents next week?”
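To make the "simulate before acting" idea concrete, here is a minimal sketch (not anything Numenta has proposed; all names and numbers are made up for illustration) of an agent that rolls out each candidate plan through a world model, estimates the probability of a harmful outcome, and only acts on plans whose estimated risk stays below a threshold:

```python
import random
from typing import Callable, List, Optional

def estimated_risk(world_model: Callable[[str], bool], plan: str, n_rollouts: int = 1000) -> float:
    """Fraction of simulated rollouts in which the plan leads to a harmful outcome."""
    harmful = sum(world_model(plan) for _ in range(n_rollouts))
    return harmful / n_rollouts

def choose_safe_plan(plans: List[str], world_model: Callable[[str], bool],
                     risk_threshold: float = 0.01) -> Optional[str]:
    """Return the first plan whose simulated risk is acceptable, or None to refuse acting."""
    for plan in plans:
        if estimated_risk(world_model, plan) <= risk_threshold:
            return plan
    return None  # no plan passes the safety check, so don't act at all

# Toy world model with invented probabilities: visiting elderly parents without a
# mask is "harmful" in 5% of rollouts, with a mask in 0.5% of rollouts.
def toy_world_model(plan: str) -> bool:
    risk = {"no mask, visit parents": 0.05, "wear mask, visit parents": 0.005}
    return random.random() < risk.get(plan, 0.0)

print(choose_safe_plan(["no mask, visit parents", "wear mask, visit parents"], toy_world_model))
```

The point of the sketch is only that the quality of the decision is bounded by the quality of the world model: with a richer model, the agent can look further ahead and reject more of the harmful subgoals before ever acting on them.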

A simple AI with a limited world model cannot see that far into the future. But we want to make an AGI.
