François Chollet has published an important essay on the measure of intelligence:
We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems.
It may seem obvious that intelligence should be assessed by measuring “skill-acquisition efficiency” rather than raw skill, yet this is not what current AI benchmarks do.
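In simplified form (my condensed notation, not the paper’s exact formulation), the definition reads roughly:

$$ I \;=\; \underset{T \,\in\, \text{scope}}{\mathrm{Avg}} \left[ \omega_T \cdot \frac{GD_T}{P_T + E_T} \right] $$

where $GD_T$ is the generalization difficulty of task $T$, $P_T$ the amount of priors the system brings to it, $E_T$ the amount of experience it consumes on it, and $\omega_T$ a subjective weight on the task. A system is more intelligent when it acquires skill on hard-to-generalize tasks while relying on few priors and little experience.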
The new dataset designed to measure machine intelligence is called the Abstraction and Reasoning Corpus (ARC).
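Concretely, each ARC task is a JSON file with a few demonstration input/output pairs and one or more test pairs; each grid is a list of rows of integers from 0 to 9 encoding colors. A minimal loading sketch in Python (the file name is illustrative; tasks live under data/training and data/evaluation in the ARC repository):

```python
import json

def load_arc_task(path):
    """An ARC task is a dict with 'train' and 'test' lists of
    {'input': grid, 'output': grid} pairs, where each grid is a
    list of rows of integers 0-9 (color codes)."""
    with open(path) as f:
        return json.load(f)

def show_grid(grid):
    for row in grid:
        print(" ".join(str(cell) for cell in row))

# Illustrative file name; any task file from the corpus works.
task = load_arc_task("data/training/a_task.json")
for i, pair in enumerate(task["train"]):
    print(f"demonstration {i}: input")
    show_grid(pair["input"])
    print("output")
    show_grid(pair["output"])
```

The solver only ever sees the few demonstrations of one task: skill must be acquired on the spot, which is exactly the skill-acquisition efficiency the essay argues for.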
At first, I thought that this benchmark lacked temporal data. Brains learn from continuous sensory data streams, so we need that kind of data to compare human and artificial intelligence. This is why Numenta created the NAB benchmark.
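NAB scores detectors that process a time series strictly online: one value at a time, no lookahead, emitting an anomaly score between 0 and 1 per record. A minimal detector in that spirit (a rolling z-score of my own, not one of NAB’s actual detectors; the window size and normalization are arbitrary):

```python
import math
from collections import deque

def streaming_anomaly_scores(stream, window=100):
    """Yield an anomaly score in [0, 1] per value, seeing the
    stream once and using only past values."""
    history = deque(maxlen=window)
    for x in stream:
        if len(history) >= 2:
            mean = sum(history) / len(history)
            std = math.sqrt(sum((v - mean) ** 2 for v in history) / len(history))
            score = min(abs(x - mean) / (std or 1e-9) / 4.0, 1.0)  # arbitrary squashing
        else:
            score = 0.0  # not enough history to judge yet
        history.append(x)
        yield score
```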
But on reflection, the kind of intelligence that ARC measures is a high-level abstract ability, like the one measured by IQ tests. This is the kind of intelligence we target when we talk about machine intelligence.
I consider the prediction of temporal data streams during sensorimotor interaction to be a necessary intermediate step towards machine intelligence, but not intelligence itself. This is the current focus of Numenta with HTM.
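To make this intermediate step concrete, here is a toy sketch of sensorimotor stream prediction (a frequency table, not HTM): the agent acts, observes the consequence, and learns to predict the next sensory input from the current input and the chosen action.

```python
import random
from collections import Counter, defaultdict

# Toy world: five positions on a ring; the sensory input is the position.
def step(pos, action):
    return (pos + (1 if action == "right" else -1)) % 5

transitions = defaultdict(Counter)  # (observation, action) -> successor counts

def predict(obs, action):
    seen = transitions[(obs, action)]
    return seen.most_common(1)[0][0] if seen else None

pos, correct, total = 0, 0, 0
for _ in range(1000):
    action = random.choice(["left", "right"])
    guess = predict(pos, action)           # predict before acting
    nxt = step(pos, action)                # act and observe
    transitions[(pos, action)][nxt] += 1   # learn from the stream
    if guess is not None:
        total += 1
        correct += guess == nxt
    pos = nxt

print(f"prediction accuracy after learning: {correct / total:.2f}")
```

The deterministic toy world converges almost immediately; the point is only the loop: predict, act, compare, update, continuously.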
Once this intermediate step is reached, the next one will be to detach symbols from the sensorimotor interactions they were grounded in, so as to reason more abstractly by manipulating the symbols directly. The following paper was enlightening for me:
Extract from The symbol detachment problem, by Giovanni Pezzulo & Cristiano Castelfranchi, 2007:
Intelligence in strict sense (not in a trivially broad sense where just it means efficiency, adaptiveness of the behavior, like in insects) is […] the capacity to build a mental representation of the problem, and to work on it (e.g. reasoning), solving the problem ‘mentally’, that is working on the internal representation, which is necessarily at least in part detached since the agent has to modify something, to simulate, to imagine something which is not already there. Perhaps, on the mental ‘map’ the agent will act just by trials and errors, but it will not do so in its external behavior
Still a long road before us!