What might a bigger neocortex be able to do?

I’ve been fascinated with HTM theory but still have a very shallow understanding of it.

If a monkey’s neocortex is the size of a postage stamp and the human’s is the size of a dinner napkin, what might a neocortex the size of a football field be able to do? I know that size is a blunt way of looking at it but the point is what might we be able to do once we have fully understood the principles behind neocortex that we couldn’t do previously?

Might it be possible that such a system would eventually become so good at predicting the future that it would be as if it could see the future?

Just want to know if anyone else has thought the same thing, because this seems like an inevitable outcome to me, but I could be missing something.

I’ve been reading ‘Peak’ by Anders Ericsson and Robert Pool, and its sources include various studies where, through practicing some task, people have enlarged various regions of their brains and become capable of impressive feats. One example is Einstein, with his enlarged region of the parietal lobe [1], which he has in common with other mathematicians. Another example is the enlarged posterior hippocampus of London taxi drivers [2].

However, the posterior hippocampus isn’t part of the neocortex. Also, those taxi drivers, after successfully memorizing the entire layout of London, performed worse than average at other spatial tasks.

I was going to compare humans with other species that have similar numbers of neurons in their neocortex (around 20 billion) [3], but I can’t really say whether they’re smarter or not in every way. It looks like all species with a similar number of neurons are highly social. Also, crows seem highly social and show the ability to use tools, but they use an enlarged nidopallium, which displays similar functionality to the neocortex. [4, 5]

We can’t really know that perfectly. That’s the point of science: to figure out what we don’t yet know.

I can take a guess though: I’d say we’d have machines better at complex animal tasks like underactuated movement, speech comprehension, object/text relations (like cortical.io), etc. However, I don’t believe we’d end up with anything that takes action without studying other regions of the brain or linking our virtual neocortexes with AI.

Isn’t that what we do already? Do you mean it’d be, ‘actually able to predict the future like an oracle’? Or do you mean it’d, ‘look like it was predicting the future while in actuality using logic and observations that people wouldn’t normally use to come to its conclusions, like Sherlock Holmes’? If it’s the first, then no. If it’s the second, sure.

Hope that helps your curiosity a bit. Keep in mind that I’ve just been studying this for a while too. I haven’t experimented with or made any working HTM implementations yet.

Of course it would have to be based on the data it was given as input, but I think that most people would see it as an oracle, since its predictive ability would be far better than most humans’.

For example, you could feed it highly detailed information of the lives of many people and let it draw general insights about the world and human nature. Then you could feed it information about yourself and see what it comes up with.

I guess the real question is whether or not the neocortex in and of itself is enough to develop insights. For example, if you fed an artificial neocortex enough information, could you then have it learn math from watching a video?

Once we fully understand our own neocortex, I don’t see any reason why that wouldn’t be possible.


So, a sociologist?

I mean, I’m pretty sure it could do that, eventually. With something like that though, you’d be much better off giving the boring tasks, like reading through the Facebook posts or Tweets of several million people, to the simulated neocortex, and have the sociologist continue doing the less monotonous parts of their practice.

I think it could, but without anything other than the neocortex, it would have to be much larger than a human’s. For example, when we look at things, what we’re focusing on is almost always in the center of our vision, and we even do things like saccade our eyes to capture all the minute details in our sight that we’re expecting, which is a function our brainstem participates in. [1]

It wouldn’t be too hard to add something to track objects, given that the neocortex would predict where they would go, but the thing that did the actual tracking wouldn’t be a neocortex. (Though, I suppose anything that gave input to the neocortex wouldn’t be a neocortex either.)

Anything the human brain can do can be done better on a GPU. You will probably get your answer fairly soon.

@Sean_O_Connor, that’s a pretty strong claim. How did you arrive at that conclusion? From a quick Google search I see that we have GPUs that can accomplish on the order of 5 teraflops, and various estimates put the brain on the order of petaflops to exaflops, which differs by a factor of a few hundred to a few hundred thousand…
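For what it’s worth, taking the 5-teraflop GPU figure and the petaflop-to-exaflop brain estimates at face value (both are rough estimates, not measurements), the gap works out like this:

```python
# Back-of-the-envelope check of the throughput gap,
# using the figures quoted above.
gpu_flops = 5e12                       # ~5 teraflops for one GPU
brain_low, brain_high = 1e15, 1e18     # petaflop-to-exaflop brain estimates

print(brain_low / gpu_flops)           # 200.0   (best case for the GPU)
print(brain_high / gpu_flops)          # 200000.0 (worst case)
```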

It’s a fun claim. It’s around 21 Tflops for an Nvidia P100, and you can buy a single rack with 270 Tflops no problem. I just count the number of 65536-point random projections I can do per second: 1 megaflop gives one per second, so 270 Tflops gives 270 million per second, more realistically say 100 million per second.
Greedy decision/prediction tree learning actually works quite well; it can learn a book in a few ms.
It doesn’t capture any deep structure, but it can pick up all the spelling and common phrases no problem. The question then is how to take that a little further with higher-dimensional data. I think you can do it with layered greedy feature learning.
It is not impossible that something similar to, faster than, and larger than the human neocortex will turn up shortly, and then you will automatically have your answer.
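The “1 megaflop per 65536-point random projection” figure matches the cost of a signed fast Walsh–Hadamard transform, which takes about n·log2(n) add/subtracts (65536 × 16 ≈ 1M). Here’s a minimal NumPy sketch of that construction; this is my reading of the numbers, not necessarily the exact method the post refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 65536                                 # 2**16, as in the post
signs = rng.choice([-1.0, 1.0], size=n)   # random sign diagonal

def fast_random_projection(x):
    """Signed fast Walsh-Hadamard transform: an n*log2(n)-op
    random projection (~65536 * 16 = ~1M add/subtracts here)."""
    y = x * signs
    h = 1
    while h < n:
        # Butterfly step: pair element j with element j+h in each block.
        y = y.reshape(-1, h * 2)
        a = y[:, :h].copy()
        y[:, :h] = a + y[:, h:]
        y[:, h:] = a - y[:, h:]
        y = y.reshape(-1)
        h *= 2
    return y / np.sqrt(n)                 # orthonormal scaling

x = rng.standard_normal(n)
y = fast_random_projection(x)
# The transform is orthogonal, so vector lengths are preserved.
print(np.allclose(np.linalg.norm(x), np.linalg.norm(y)))  # True
```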

Please remember the topic is what a bigger neocortex might do. You may start an #other-topics:off-topic discussion about this if you like.

I think that’s the goal of the field of artificial intelligence in general: to develop a really smart computer (an AI). The reason so many animals don’t have extremely large neocortexes is that they weren’t really necessary for survival. It is clear that because of the large human neocortex and the way humans can communicate and pass on knowledge through each generation, among other things, humans, unlike other mammals with large neocortexes (e.g., dolphins), truly dominate the world. Humans have the power to end the world and most life on it in a matter of hours (by nuclear strikes).

So the size of the neocortex isn’t everything. Look at whales, for example: as far as I can tell, they don’t have a very sophisticated culture, life, or technology. What perhaps matters more is the efficiency of the neocortex (of course, a big neocortex would just multiply that efficient engineering), the correct software (instincts/goals/drives/etc.), and large training sets with properly designed training (for example, think about how humans are born knowing nothing, yet how well we grow to know the world in just a few short years).

Now imagine a neocortex much larger than a human’s, in which the individual neurons communicate at millions of meters per second (i.e., beyond the biological limits of the human brain, approaching the physical limits of a machine). If that machine is trained correctly (assuming it was already properly designed to be able to think in a general way), then you could have a machine with the equivalent intelligence of thousands, perhaps millions, of humans. Not only that, it would be able to sort through all the petabytes of data that humans have so far generated and gain knowledge that no human could ever gain (mainly because of our biological limits). It would possibly improve itself (depending on its designed goals) and become even more intelligent. At that point, when you have a machine a million or a billion times smarter than you, do you think it will listen to your commands? It would probably have the capacity to easily answer your questions, but the same way you wouldn’t walk up to Stephen Hawking to ask how many apples you are left with when you take away two, you wouldn’t want to ask a billion-IQ superintelligence to perform tasks that are meaningless in its own eyes. A truly mega superintelligence would be like the Borg in Star Trek (except much more intelligent).

In short, we really can’t imagine how a being much smarter than us would think, just like ants can’t imagine how humans think. I recommend reading a book called Superintelligence by Nick Bostrom (he is a famous philosopher, and the book is about just this topic; it was also recommended by Bill Gates and Stephen Hawking, if you were wondering).


I kinda think that an AI with 1000 times the capacity of a human would only be like 1000 people working together in a large company to achieve some goal. Also, I have pointed out many times that knowledge discovery requires determination, patience, and hands-on effort in actual research labs. Though a superhuman AI could think through far more option permutations than a human could in terms of planning experiments.
Here is a paper about continuously learning new features and how that might help make AI simpler: http://eplex.cs.ucf.edu/papers/szerlip_aaai15.pdf

It really depends on the design of the AI, of course. For example, literally emulating each human brain in a virtual reality or in an android would obviously yield a very human-like and human-level intelligence. But if it’s unrestricted by our biological limits (including coded/simulated versions of those limits in the AI), then it could theoretically think much faster and much more efficiently than us, especially if it had a much bigger neocortex, as the OP suggests.

Build it and see. Kenneth O. Stanley seems to have the same idea set I do, except he is missing out on the ideas around random projection and using crossover to avoid batching when evolving nets.
However, decoupled nets, with unsupervised feature learning as one part and a readout layer as the other part, seem like a good research direction at the moment. I am nearly finished updating my code library to work on that. I can put it into a .so, but it is limited to the older SIMD instruction set. Anyway, I guess it would work out of the box on a Xeon Phi.

I am kind of too lazy to read the entire paper, but it sounds pretty cool. I used to really like genetic algorithms as they relate to playing video games, but DQNs (deep Q-networks) seem to learn to play video games much better and faster.

Yeah, a number of recent papers indicate that as the parameter (weight) space increases, the number of saddle points for a set of training examples increases exponentially. The bad news is that this slows down training a lot; the very good news is that there is always a way down the energy landscape. You actually can’t get trapped in local minima because there are none!
The problem with evolutionary algorithms is you only get a good/bad binary signal (1 bit) per step. With backpropagation you get a lot more information per step. But really, BP is just noise-driven search in my opinion, with the entropy coming from the training data.

The interesting thing about the evolution of the neocortex is that with enlarged size also came increased complexity. Essentially there were more regions that can interconnect to form novel levels of abstraction.

The first interesting thing that might happen if a cortex grew larger is that every sensory region would yield greater sensory acuity. Think of people who lose their sight but find all their other senses heightened over time, in both perception and memory. The larger the sensory regions, the more detailed the memories and predictions.

The second interesting thing that could happen is a general increase in intelligence (due to more association areas). The way we understand anything is by stacking and interacting contexts over time within the cortical hierarchy. Say you are engaged in a conversation; while the other person is explaining something, your cortex is forming a mental model throughout the hierarchy representing what they are saying, with the more abstract, long-term contexts at the top informing the more specific, short-term contexts below. I believe someone’s intelligence is a result of how well they can represent something in their cortical hierarchy. If the size and quantity of the regions in the hierarchy increased, then more abstract and more detailed representations could be produced.

The third interesting thing builds upon the second: size has more to do with region quantity. During evolution our genome went through a series of duplication events, duplicating cortical regions and then augmenting them. If duplication and augmentation were to happen again, and it happened in the prefrontal cortex, then new cognitive abilities could arise. The cognitive abilities humans have are a result of augmenting the already-existing memory system. Who knows what abilities could arise from duplicating and augmenting prefrontal regions.


That makes you think: why do so many people consider the neocortex a must for intelligence? Think about it: we have some birds (without neocortexes) that show much higher intelligence than most species of mammals (which have neocortexes). The bird brain is also much smaller than a human brain, so imagine if we scaled up a bird brain that’s already pretty intelligent to the size of a human brain.


The same could even be said for ants. They can exhibit fairly complex behaviors beyond stimulus/response.

Although intelligence can be seen in many forms in nature, I believe the difference is that human intelligence is general-purpose. The neocortex is so flexible and decoupled from any particular thing, it can analyze and synthesize any sensory patterns being fed into it. The same cannot be said for non-mammal brains.

I mean, some birds show much more human-like intelligence than most mammals do. Just because mammals have a neocortex doesn’t make them intelligent. Some birds (e.g., the African grey parrot) can also be trained to perform pretty much any general task they’re taught, like doing arithmetic, passing the mirror test, recognizing the beat in human music, and looking at objects and describing them (their color, texture, and purpose). Who is to say that if their brain were scaled up to the size of a human’s, their intelligence wouldn’t rise proportionally?

I think at some point the law of diminishing returns would kick in? The more complex the organism, the higher the entropy.
At some point, probably well before football-field size, a new “formation” would have to emerge that is as new to the neocortex as the neocortex is to the lizard brain.

Btw, the real power is not so much the number of neurons but how many connections can be supported between them, and a stadium-sized neocortex would have a hard time supporting similar connectivity (~10,000 connections per neuron), more like 100 :wink: if even possible.
At this size it would have to be an ensemble of neocortices.

Ant and mosquito brains are kind of comparable to current deep neural nets, with static evolved functional behavior. Robots with that kind of brain would be both useful and relatively safe (being predictable).
Is there a benefit to evolving deep neural nets over backpropagation (BP)? With evolution you can only get at most one bit of the solution per training example (and usually a lot less). With BP you can get a lot more. However, BP may not scale so well if you have thousands or tens of thousands of cores to use for training, so I would not discount evolution as a good way to train nets. I also have a question as to whether deep neural nets are actually learning selective features for a simple readout layer to use. If that is the case, you could optimize many aspects of deep learning for greater efficiency (feature selection being relatively simple).
Then of course you could have the idea that the neocortex is greedily learning features, and features of features, for a readout layer to use. I nearly have a more streamlined code library ready to try that out. However, anyone with a GPU would be able to run such code 100 times faster and do better than me.
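The “feature learning plus a simple readout layer” idea can be sketched in miniature: fix a random nonlinear projection as the feature stage, then fit only a linear readout in closed form. This toy (my own illustration in the extreme-learning-machine style, with made-up task and sizes, not the poster’s actual library) shows the decoupling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn y = sin(3x) from noisy samples.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# "Feature" stage: a fixed random projection plus a nonlinearity.
W = rng.standard_normal((1, 100))
b = rng.uniform(-np.pi, np.pi, size=100)
H = np.tanh(X @ W + b)            # (200, 100) random features

# "Readout" stage: a linear layer fit in closed form (ridge regression).
ridge = 1e-3 * np.eye(100)
beta = np.linalg.solve(H.T @ H + ridge, H.T @ y)

pred = H @ beta
mse = np.mean((pred - y) ** 2)
print(mse < 0.05)                 # small error on the toy task
```

The point of the split is that only the cheap linear readout is trained; the feature stage never sees a gradient, which is what makes it a candidate for greedy or layer-by-layer construction.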