Is "Intelligence" Exponential?

Is “Intelligence” exponential, i.e. the more intelligent you become, the harder it is to become more intelligent? Or is it linear, or logarithmic?

What does neuroscience have to say about this?

How is IQ graded, exponentially? Are there any facts that shine light on this?

While I am not able to address the main question, comparing IQ scores with the percentage of the population at each score might provide some useful insight into the subject. I suspect that since the definition of intelligence is in some ways subjective and can have different facets, there may not be a simple answer to the question.

http://www.iqcomparisonsite.com/IQtable.aspx

Yep, IQ scores follow a bell curve, but the cumulative percentage of the population plotted against IQ is an S-curve: below the mean it grows at an ever-increasing rate as IQ rises, and past the middle it grows at an ever-decreasing rate.

http://mesosyn.com/mental1-12c.html
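
To make that concrete, here is a minimal Python sketch (assuming the conventional IQ scaling of mean 100 and standard deviation 15) showing that the density of scores is a bell curve while the cumulative percentage of the population is an S-curve:

```python
# Minimal sketch: IQ density (bell curve) vs. cumulative percentage (S-curve).
# Assumes the conventional IQ scaling: mean 100, standard deviation 15.
import numpy as np
from scipy.stats import norm

dist = norm(loc=100, scale=15)

for score in np.arange(55, 150, 5):
    density = dist.pdf(score)          # bell curve: peaks at 100
    pct_below = 100 * dist.cdf(score)  # S-curve: steepest near 100
    print(f"IQ {score:5.0f}: density={density:.4f}  % below={pct_below:5.1f}")
```

Below 100 the cumulative percentage grows ever faster with each IQ point; above 100 it grows ever slower, which is the S-shape described above.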

I don’t have any actual data or “subject matter expertise”, just anecdotal experience. I’m 38 years old now, and I don’t have trouble learning new things. I don’t think I’ve ever had trouble with it; in fact, I enjoy the experience of being confused but eventually understanding something through concerted effort. And I don’t think I’ve been learning more and more complex ideas as I’ve grown. This HTM work is just as complex as some of the work I was doing 15 years ago, just a different subject and different scientists. :slight_smile:


That’s true, but do you think you are grokking exponentially more complex patterns, spatial and temporal?
My anecdotal experience is that I seem to do it partially, but I have worse recall!
Could it be because the brain is generalizing?

For example, recently it took me about 1-2 weeks to learn AngularJS from scratch and build a small project. But if I have to build a new project with Mojolicious/Perl, which I haven’t touched in 1-2 years, I have to spend 1-2 days just setting up the project and following recipes from the web (if I went to an interview, I would fail), and then things start coming back to me quickly, one after another. Or if you ask me what marginal utility is (Econ 101), I may not be able to give a satisfactory explanation off the top of my head, but after a cursory read I would be able to explain not only what it is but the full chain of how it arises from human preference, the subjective theory of value, and exchange in the market.

Just two examples.

“The more you know, the less you remember!”


Looking at IQ comparisons is an interesting way to get a feel for intelligence capacity in the context of the physical properties of the human brain (number of neurons, size of various sections, and so on).

An even more interesting question is what happens if those physical limits of the human brain are not in play (for example, if you could have a neural network of any size). In that case, I wonder if increasing the number of neurons has a linear effect on the capacity for intelligence, or if there are diminishing returns that come with higher levels of abstraction.

It might be interesting to run benchmarks on HTM systems with incrementally increasing numbers of columns and extrapolate from the results (although not having hierarchies implemented yet could lead to erroneous conclusions).
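
For what it’s worth, here is a rough sketch of how such an extrapolation might be done. Everything here is hypothetical: `benchmark_score()` is a placeholder for whatever task score you would measure on a real HTM system at a given column count, not an existing API:

```python
# Illustrative sketch: fit competing scaling laws to hypothetical benchmark
# scores at increasing column counts, to see which curve extrapolates best.
# benchmark_score() is a placeholder, not a real HTM API.
import numpy as np
from scipy.optimize import curve_fit

def linear(n, a, b):
    return a * n + b

def logarithmic(n, a, b):
    return a * np.log(n) + b

def benchmark_score(columns):
    # Placeholder: run your HTM system with `columns` columns on a fixed
    # task and return its score. Synthetic data is used here for illustration.
    rng = np.random.default_rng(columns)
    return 10 * np.log(columns) + rng.normal(0, 0.5)

column_counts = np.array([512, 1024, 2048, 4096, 8192])
scores = np.array([benchmark_score(n) for n in column_counts])

for name, model in [("linear", linear), ("logarithmic", logarithmic)]:
    params, _ = curve_fit(model, column_counts, scores)
    residual = np.sum((scores - model(column_counts, *params)) ** 2)
    print(f"{name}: params={params}, residual={residual:.3f}")
```

Comparing the residuals of a linear fit against a logarithmic one (or any other candidate scaling law) would at least hint at whether extra columns keep paying off.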

I think you are limited by economic laws.

At some point you need to split the large “organism” into smaller “organisms”, i.e. localize instead of passing all the information around, purely for efficiency reasons.

A “compact hierarchy” (a nearly decomposable unit) works best on well-defined problems.

What I’m saying is that it seems impossible to build a single super-intelligent agent (a Skynet of sorts); on purely economic principles it would have to evolve into a community of Skynets.

It seems to me, though, that the local knowledge problem is a product of the limitations of individual human brains. One human brain does not have the capacity to understand all of the complexities of certain problems, but a society of human brains can tackle problems much larger than any one of them could ever tackle alone. The reason for the larger knowledge existing outside a central authority is because the humans in the system are individuals with their own individual wills.

The question in my mind is what happens if the agent in question has a neural network with different physical properties than an individual human brain, but still has an individual will. For example, what if the agent has 7 billion times more neurons than a human brain? Is it anything close to 7 billion times more intelligent than a human? Or is it, say, only 20 times more intelligent (i.e. approaching some upper limit)? My suspicion is the latter, but that is not based on anything concrete of course.
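
As a purely illustrative back-of-the-envelope: if one assumes (hypothetically) that intelligence scales logarithmically with neuron count, and takes the human brain at roughly $8.6\times10^{10}$ neurons, then

$$
\frac{I(7\times10^{9}\,n_h)}{I(n_h)} \approx \frac{\log_{10}\!\left(6\times10^{20}\right)}{\log_{10}\!\left(8.6\times10^{10}\right)} \approx \frac{20.8}{10.9} \approx 1.9
$$

i.e. 7 billion times the neurons would buy less than twice the intelligence. A power-law assumption would give a wildly different answer, which is exactly why the form of the scaling curve matters.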


That is my question, too …

Another thing to consider is the fact that in nature, more intelligent species tend to have longer maturation periods. In the case of this theoretical agent, perhaps it has the capacity to be 7 billion times more intelligent than a human, but it takes 7 billion times as long to reach that level.

From my understanding of the question, I am inclined to think that intelligence might be closer to linear, at least more so than one would expect.

I think a learning system can be linearly intelligent (i.e., the more knowledge it learns, the more intelligent it is) up to a point. If we assume the information an intelligence can learn about the world is finite, then there will come a point where the intelligence cannot learn any more meaningful knowledge with the learning machinery it was created with. It will inevitably have to invent a smarter intelligence that learns in a different way, or be smart enough to modify its own learning system, in order to extract the rest of the information in the universe and thus become more intelligent.

I opine there is a difficulty involved in inventing a new form of intelligence. Call the new level of intelligence N; the difficulty of inventing N is, I suggest, constant for any given level of intelligence N-1. This constant difficulty in creating N would be the slope of the linear progression, ideally.
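
Putting that assumption into symbols (a sketch, nothing more): if every jump from level $N-1$ to level $N$ costs the same constant effort $c$, then reaching level $k$ costs

$$
E(k) = \sum_{N=1}^{k} c = ck \quad\Longrightarrow\quad k = \frac{E}{c},
$$

so the level of intelligence attained grows linearly in the total effort invested, with slope $1/c$.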

…but of course, this is all just wishful speculation.

I’m not sure I would equate the accumulation of knowledge to intelligence (otherwise Wikipedia for example could be considered a superintelligence). In my opinion, intelligence comes from abstraction and categorization of experiences – an ability to think of one thing in terms of another, enabling a drive toward increasingly complex goals.

The real question in my mind is whether there are diminishing returns that come with higher and higher levels of abstraction, or if intelligence can be increased in a linear fashion as long as the properties of the system are improved.

One of my current interests is in exploring AI concepts that might one day lead to the theoretical “technological singularity”. In order for societies of the future to be prepared, it will be important to explore questions like this one to gain insights into how the birth of such a singularity might unfold.

I don’t have the proper references, but I think human knowledge, and perhaps also intelligence, which is built on the former, exhibits a “rich get richer” phenomenon. That is, once you know X it becomes easier to learn Y, if you can form the proper analogies or spot certain similarities. For example, knowing language X might make it easier to learn language Y. I studied German for 4 years, then tried to learn a bit of Latin, and both my knowledge of Spanish (there are common or similar words) and a bit of German (the grammar and the different noun cases and inflections) helped me get a clearer idea of Latin, although I wouldn’t say I speak it. The same happens when learning a musical instrument: going from one instrument to another can be easier than starting with no background at all. I think a big feature of intelligence is the ability to spot similarities and patterns in phenomena, and this makes it easier to generate new knowledge from existing knowledge. Having said that, I don’t think it becomes harder to increase intelligence/knowledge after reaching a certain level.


That is an interesting point. It could be an indication that intelligence might be significantly increased merely by expanding the capacity to absorb more information, without requiring a proportional increase in the capacity to identify analogies and similarities in that information. The latter element is of course an essential part of intelligence, but the question is how important improving it is compared to increasing the information capacity.

Now that I think of it … learning a 2nd or 3rd language (computer or natural) shouldn’t increase your intelligence that much (or maybe just a bit), because it is not that much novel information.

But following that line of thought, it probably has something to do with entropy.

I.e. the more “you/it” lower the probability of “surprise”, i.e. the entropy, the more “intelligent” you are!? Hmm…
Pattern matching and recognition (which is what IQ measures) seem to reflect exactly that.
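
A toy sketch of that intuition in Python, assuming we equate “less surprise” with lower Shannon entropy of an agent’s predictive distribution (the two predictors here are purely illustrative):

```python
# Toy illustration: an agent whose predictions concentrate probability on
# what actually happens has lower entropy, i.e. less average surprise.
# The "uniform" and "informed" predictors are purely illustrative.
import numpy as np

def entropy_bits(p):
    """Shannon entropy H(p) = -sum(p * log2(p)), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # ignore zero-probability outcomes
    return -np.sum(p * np.log2(p))

# Next-symbol distributions over 4 possible observations.
uniform_predictor = [0.25, 0.25, 0.25, 0.25]   # has learned nothing
informed_predictor = [0.85, 0.05, 0.05, 0.05]  # has learned the pattern

print(f"uniform  H = {entropy_bits(uniform_predictor):.2f} bits")   # 2.00
print(f"informed H = {entropy_bits(informed_predictor):.2f} bits")  # ~0.85

# Surprisal -log2(p) of the event that actually occurs (symbol 0):
print(f"uniform  surprisal = {-np.log2(0.25):.2f} bits")  # 2.00
print(f"informed surprisal = {-np.log2(0.85):.2f} bits")  # 0.23
```

By that measure, the “informed” agent is less surprised by the world, which lines up with the pattern-completion flavor of IQ test questions.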

I think intelligence, if defined as knowing something such that it can affect one’s behavior, definitely has a theoretical limit in individual brains, and likely a much lower typical limit: we are governed by limited resources. The content of intelligence, though, the knowledge itself, is limited only by the sum of our minds, which is increasing exponentially. Interestingly, it is language that has evolved as the great enabler of communicating knowledge between brains, and that is what has made intelligence exponential rather than bounded by a single mind.

I think our most powerful weapon is language, in other words the ability to communicate knowledge (a faster track than direct experience). Given this, I think we should watch how machine agents “evolve” if they develop their own community (even with less neuron capacity per individual).

Time plays an important role in measuring intelligence. Agent A more often than not acquires a piece of knowledge faster than agent B, so agent A is considered more intelligent. I think time makes a case for intelligence being exponential, at least up to a point: a learning agent can draw vastly more conclusions about the world using the knowledge it learned faster. For humans, though, I think it may actually be more of an S-curve, because we are limited by our senses.

Speaking of senses, does perspective give an agent more intelligence? Humans are said to have 5 main senses (a simplification), but what if we could recognize patterns and acquire knowledge with 10 senses? Then again, would this argument mean that blind or deaf people, for example, are less intelligent? Many people who are missing one or more of their senses have been considered very intelligent and creative. Certainly a blind man can have knowledge about the concept of the color blue, but he has never experienced seeing blue. He has a very unique perspective, but is it limited? I’ll have to think about this one a bit more.


Someone who is blind, or just color blind, probably chooses to develop his knowledge and skills in areas which do not need that faculty. Consider Sammy Davis Jr., the musician. Due to his blindness, he chose a career in which sight was not a limiting factor to his success. Maybe, knowing that others might see him as having a handicap, Sammy set out to excel in music because he had a mission and something to prove. I bet it was tough at the onset; a lot of the initial lessons likely focused on reading music and using visual faculties to handle the instruments.

The other thing to consider is that he probably had a unique insight into the world of music, being forced to approach the problem from a different angle. “Think Different”, Apple’s motto (albeit grammatically incorrect)! By approaching music without sight, Sammy learned music without relying on the notes printed on a sheet in front of him.

Sometimes, when you see a magician perform an act, it is amazing. However, when you learn his tricks, the act seems rather mundane. A good magician is always several steps ahead of you, tricking you into a mode of reasoning so as to throw you off track. Something as complex as the neocortex seems miraculous right now, but as we understand all its intricacies, maybe we will lose our sense of awe. At this point, it is so complicated and convoluted that we are just spectators in the audience at a show.

There is a difference between intelligence and knowledge discovery. I’m sure the average intelligence of the human species has been relatively constant over the past 100,000 years.
All the changes in the modern world are due to knowledge discovery being added on top of that intelligence. That involves a lot of hard work and patience in research labs by many people.
It is hard to see how there could be a rapid singularity just based on AI. You would kind of need an exponential increase in knowledge discovery as well.
