IQ and the universal cortical algorithm

I have been reading the Thousand Brains book in conjunction with Thinking, Fast and Slow and some basic psychometrics literature. It is interesting that there aren’t “multiple intelligences”: there is a single factor (“g”) that explains most of the variation across cognitive tests. This is surprising and unintuitive, although apparently well established. But it is also what we should expect if the cortex is basically running a single algorithm with a single data structure (sparse distributed representations, SDRs) on a single material substrate (cortical columns).
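To make the single-factor point concrete, here is a toy simulation (entirely made-up numbers, nothing HTM-specific): if every test score is driven mostly by one shared latent ability plus test-specific noise, the first principal component of the score correlations ends up carrying most of the variance.

```python
# Toy sketch (made-up numbers): one shared latent ability plus test-specific
# noise is enough to produce a dominant first factor across several "tests".
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                       # shared latent ability
loadings = rng.uniform(0.6, 0.9, size=n_tests)      # how strongly each test taps g
noise = 0.5 * rng.normal(size=(n_people, n_tests))  # test-specific variation
scores = g[:, None] * loadings + noise

# Principal-component style check: eigenvalues of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]            # descending order
print("share of variance on the first component:", eigvals[0] / eigvals.sum())
```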

I don’t have any big insights to share, but thought this was worth submitting for discussion. What do you think?


Seems plausible.
Which parameters of a cortical column (CC) do you think would impact “g”?
Or is it the number of CCs?
How deep a hierarchy they form?
Or does it depend on the physical substrate and its chemical properties?
The capacity of the temporal memory (TM) in each CC?

I think one of the central insights in Jeff’s first book, On Intelligence, is that intelligence is the ability to predict. Many questions in IQ tests can be interpreted as measuring the subject’s ability to predict a sequence (whether geometric, algorithmic, linguistic, etc.).
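To make “predicting a sequence” concrete, here is a toy next-term predictor for simple numeric items; the function and the two rules it knows are my own invention, just to illustrate the kind of ability such questions probe:

```python
# Toy "IQ item" solver: predict the next term of a numeric sequence by
# checking for a constant difference (arithmetic) or constant ratio (geometric).
from typing import Optional, Sequence


def predict_next(seq: Sequence[float]) -> Optional[float]:
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                  # arithmetic rule found
        return seq[-1] + diffs[0]
    if all(x != 0 for x in seq):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:             # geometric rule found
            return seq[-1] * ratios[0]
    return None                               # no rule recognised


print(predict_next([2, 5, 8, 11]))   # -> 14
print(predict_next([3, 6, 12, 24]))  # -> 48.0
```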

A major issue with IQ tests is that it is difficult to distinguish ability from potential. Someone could have poor measured ability due to not understanding the questions (e.g. not having studied the language) and yet have enormous potential. Standard IQ tests also measure how conformist the subject is, so if you find correlations with things like career success, that may say more about how society values conformism than about intelligence. Someone who does well on an IQ test is probably questioning the perspective of the test developer, predicting the predictions of the developer who predicted the predictions of the subject 🙂, and is happy to play that game.

I don’t think IQ has anything to do with how the predictions are generated (i.e. the “universal cortical algorithm”), which is why Jeff could already make these claims in the first book, before TBT existed.

I don’t see that it follows. A plausible explanation for ‘g’ is some combination of processing speed, degree of parallelism, and working memory capacity. There could be many different algorithms, and each might benefit from those factors.

It should be noted that those who score highly often still vary across question types. An IQ test I recall had pattern matching, 3D shape rotation, numerical and symbolic sequences, word distinctions and some other categories. The rationale was to measure overall ‘g’ rather than aptitude in a single category. That would be an acknowledgment of different algorithms benefiting from a common ‘g’ factor.
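A hedged sketch of that argument, with invented coefficients: five hypothetical tests, each mixing the same two shared resources (speed and working memory) in different proportions plus its own noise, still produce a dominant general component even though no single algorithm is shared.

```python
# Sketch: many different "algorithms" (tests), all gated by the same two
# resources (speed, working memory), still yield a strong general factor.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
speed = rng.normal(size=n)   # shared resource 1
wm = rng.normal(size=n)      # shared resource 2

# Five hypothetical tests mixing the shared resources differently,
# each with its own task-specific noise term.
tests = np.column_stack([
    0.7 * speed + 0.3 * wm + 0.5 * rng.normal(size=n),  # pattern matching
    0.3 * speed + 0.7 * wm + 0.5 * rng.normal(size=n),  # 3D shape rotation
    0.5 * speed + 0.5 * wm + 0.5 * rng.normal(size=n),  # numeric sequences
    0.6 * speed + 0.4 * wm + 0.5 * rng.normal(size=n),  # symbolic sequences
    0.4 * speed + 0.6 * wm + 0.5 * rng.normal(size=n),  # word distinctions
])

corr = np.corrcoef(tests, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("share of variance on the first component:", eigvals[0] / eigvals.sum())
```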


I don’t see that it follows. A plausible explanation for ‘g’ is some combination of processing speed, degree of parallelism, and working memory capacity. There could be many different algorithms, and each might benefit from those factors.

Under tight time constraints, working memory capacity supposedly accounts for all of the difference seen in fluid-reasoning tests; under more liberal time constraints, it accounts for about 38% of the variation.

In the case of the “highly speeded group” (20 minutes), working memory explained all of the variance in fluid reasoning, whereas in the “unspeeded group” (60 minutes), working memory accounted for only 38% of the variance in fluid reasoning.
(Source: Working Memory and Fluid Reasoning: Same or Different? - Scientific American Blog Network)

I agree that speed likely also plays a role: I recall that intelligence is also correlated with reaction time and, if I remember correctly, with better white matter tracts.

As for parallelism, a higher number of cortical neurons is predictive of higher intellectual capacity across species.
