Brains: Are they overrated?

“Sometimes a science will tell something about ourselves, it’ll tell us who we are” says Jeff in his TED talk.

However, according to If Modern Humans Are So Smart, Why Are Our Brains Shrinking?, our brains are shrinking because “as complex societies emerged, the brain became smaller because people did not have to be as smart to stay alive.”

It’s as if evolution recognized the error of (energetically expensive) individual smarts and started favoring communication abilities instead. Feel free to extrapolate how this is going to develop in the future.

Maybe there’s an intuitive, strongly held but incorrect assumption that because the brain is very capable, it must be very relevant to our lives.

So if we want to understand ourselves, shouldn’t we rather be studying “complex societies”? Because individuals are puny but the crowd matters? Distributed systems, not standalone boxes? We’re cogs in a hive mind?

Sure, the study of human organization, in economics, the social sciences, the internet and so on, has evaded solid theories even more than neuroscience has, but maybe that just means we need to refocus our attention and try harder?

Discuss.


In his (great) book Sapiens, Yuval Noah Harari touches on this.

During the hunter-gatherer phases of human development we were probably at our peak as far as the need for situational awareness and cunning goes: being stupid usually led to a quick demise from predators or starvation.

As we developed societies, the need for individual excellence was lessened by the protections afforded by the social structures around us, and the handing down of knowledge from previous generations eased our need to re-invent the wheel each generation.

However, a “shrinking” brain (which, by the way, is somewhat of a conjecture, with no solid evidence) may not necessarily mean a less intelligent brain. Our brain is probably just more specialized than in previous epochs, since we now rely heavily on social bonding and communication rather than on hunting skills, quickness of foot, sharp vision, or other physical attributes. A more finely honed brain structure (with less grey matter, but probably more white matter) no doubt results in a more intelligent brain as we measure intelligence today.

My guess is that there is also a balancing act going on between plasticity/redundancy and energy requirements. As human societies develop, safety and security also increase as they accumulate generational knowledge. As a result, the brain can get away with having less redundancy built into it because we are less likely to encounter brain injuries.

I’d also point out that although there seems to be a correlation between brain size and intelligence, it isn’t a one-to-one relationship. Consider dogs, which show a huge variation in brain size that doesn’t always correlate directly with their level of intelligence. For example, the Papillon is usually listed among the top 10 most intelligent dog breeds, yet has a much smaller brain than, say, a Mastiff, which is arguably one of the least intelligent breeds. I suspect the brain’s actual wiring is at least as important as its size.


There’s no doubt that the effective intelligence of a modern human is almost entirely a product of culture (i.e. accumulated human knowledge). In addition to the great books mentioned so far, The Social Organism makes this case pretty well.

In particular, it’s not known how competent a human would be without language to nucleate concept formation and structure their thinking. And luckily we’re not allowed to perform that experiment. But case studies of unfortunate humans who ended up in situations like that don’t look good.

That said, asocial mammals do quite well on the basis of their neocortices (and a healthy dose of pre-wired instinct). I would be very happy to develop a robot that’s even half as capable as a mouse.

But a solid case can definitely be made that just as we need to understand societies in order to understand modern humans, our understanding of those societies may depend heavily on our understanding of the human minds they’re composed of.


Hi,
I can’t help commenting, as I’ve been following the debate.

Evolution produced biological decision systems once single-celled organisms could sense more than one thing, eat more than one thing and move in more than one way. This gave the organisms more than one degree of freedom, and to decide what to do now, they evolved decision systems. For example, E. coli decides whether it should invest in enzyme production, to eat lactose and transform it into glucose, by taking up 300 molecules of lactose and then playing a 50/50 game about whether to do it or not. In short, it has developed a decision algorithm that controls its behavior.

The neuron is a highly specialized version of this algorithm: it evolves the decision from the 50/50 maximum-entropy version found in E. coli into an efficient, entropy-reducing version. This is done by memorizing bits of information and by giving feedback when stimuli are repeated, lowering the threshold from 50/50 to something less, say 10/90 or whatever, which increases negative entropy in the system.

In a system like this, two types of learning strategies dominate: copying others, and being a first mover. But though we copy, we are actually making the decision ourselves; the social thing is to structure an environment in which we can decide and act with lower risk than in an environment driven by more entropy. The restriction in all this is how much entropy (= doubt) individual brains can handle in one time frame, in competition with how much information needs to be processed to reach a good decision as fast as possible. Thus the competitive pressures drive us to develop AI and machines that can do our thinking (= qualifying our doubt into questions and answering those questions for us). Using such machines can increase the amount of entropy we can harvest into negative entropy, and thus give us more degrees of freedom to decide what is best for us.
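To make the threshold idea concrete, here is a minimal sketch in Python (the names and numbers are purely illustrative, not a model of real E. coli chemistry): a unit that starts at the 50/50 maximum-entropy game and lets feedback from repeated stimuli pull its firing probability toward certainty.

```python
import random

class DecisionUnit:
    """Toy entropy-reducing decision rule: start at a 50/50 guess and
    let feedback from repeated stimuli pull the firing probability
    toward certainty (10/90 and beyond)."""

    def __init__(self, p_fire=0.5, feedback=0.1):
        self.p_fire = p_fire      # probability of acting, 50/50 at the start
        self.feedback = feedback  # how strongly a repeated stimulus lowers doubt

    def stimulate(self):
        # A repeated stimulus nudges the decision away from maximum
        # entropy (0.5) toward a confident response (toward 1.0).
        self.p_fire += self.feedback * (1.0 - self.p_fire)

    def decide(self):
        # Play the (now biased) game: act with the learned probability.
        return random.random() < self.p_fire

unit = DecisionUnit()
for _ in range(20):           # the same stimulus, 20 times
    unit.stimulate()
print(round(unit.p_fire, 2))  # ~0.94: no longer a 50/50 game
```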

Regards
Finn

understanding of those societies may depend heavily on our understanding of the human minds they’re composed of

Very well said. My interests along those lines are what got me interested in Numenta.

the competitive pressures drive us to develop AI and machines that can do our thinking

I like this idea. I am interested in human/machine hybrids. Not necessarily the sci-fi cyborg kind but, as I wrote elsewhere, more like the “person with smartphone” kind.

Maybe Numenta’s efforts will end up helping build better UIs. I’m into computer networking in my day job, and I like to think of a human-machine interface of the screen, keyboard and mouse kind as just another protocol, like the ones we have between machines. We can devise these protocols when we understand the “machine” at either end.

I also like the idea that technological developments are the result of evolutionary pressure. I think they follow an evolutionary path just like biological organisms do. This is along the lines of “memetics”, with devices, algorithms or whatever being the memes.

As such, humanity would probably manage to arrive at machine intelligence even without copying the brain or any other conscious effort, albeit at a much slower pace, just by everyone solving their own little problems by themselves. A case of convergent evolution: the brain and technology eventually arriving at the same goal.

Have you read Andy Clark’s work on this issue? One of his claims to fame is advocating extended cognition, or the extended mind. The basic idea is that to the degree that we offload our memory and computation onto physical objects external to our brains, those objects are in a very real sense a part of our mind.

I recommend his books Natural-Born Cyborgs and Supersizing the Mind for more on that.


Hi,
Thanks for your reply. As I have tried to present earlier on the Tangent Theory pages, evolution pressures organisms to reproduce, mutate and adapt, and then, more importantly, to self-select by achieving goals, engaging targets with movements. Goals, targets and movements are the three necessary components of a decision in my theory of biological decision systems. They are subject to two types of entropy: one about validity (the accuracy of the target) and the other about reliability (the precision of the goal). These can be resolved by questions and answers (the first) and by data processing (the second). Thus the brain is an algorithm-producing system, connecting variables with data, trying to erase as much entropy as possible in these two dimensions (actually three, because the third is resolution). Inside this we find time and space. Evolution thus runs the spatio-temporal dimensions of the brain, which is a measurement instrument that can create its own data and measurement methods, perform the measurements, and feed them back into itself.
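As a rough sketch only (the field names are my own shorthand for the discussion, not a formal definition), the decision triple with its two entropy dimensions might look like this:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """The three necessary components of a decision (goal, target,
    movement) plus the two residual entropies to be driven down."""
    goal: str           # what the organism wants to achieve
    target: str         # what it engages to achieve the goal
    movement: str       # how it acts on the target
    validity: float     # entropy about the target (accuracy), 0..1
    reliability: float  # entropy about the goal (precision), 0..1

    def ready_to_act(self, threshold=0.1):
        # Commit to the movement only once both kinds of doubt are low.
        return self.validity < threshold and self.reliability < threshold

d = Decision("eat", "lactose", "produce enzymes", validity=0.05, reliability=0.08)
print(d.ready_to_act())  # True: doubt is low enough to move
```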
I think bringing the discussion about AI up to this level of abstraction is necessary. It is fine that Numenta takes its starting point in a memory system and data representation, but a good biological theory explaining neurons as entropy minimizers in the first place could help us understand what features neurons must necessarily have, exactly as the Standard Model in physics predicted the Higgs particle before it was measured. The symmetries in the nervous system and the brain must be understood in order to re-engineer it, and this explanation must be found in evolutionary theory and translated into information theory.
Regards
Finn

I had not, but I definitely will!

Finn, I read your theory post in the Tangent Theory channel and could not find any attempt to provide evidence for it in the actual neuronal wiring in the brain. I think it was the same TED talk I linked to above where Jeff said a theory of brain intelligence has to be substantiated by the presence of actual cortical circuitry. He also said he likes to work with “physicists, engineers, mathematicians” because of their “algorithmic thinking”. Your theory provides neither circuitry nor algorithm.

I agree that the theory of brain intelligence should be extended to also satisfy evolutionary concerns (what evolutionary problem is it solving?), but this is imposing additional constraints on top of the concern of explaining the circuitry, not replacing it.

Now regarding what “goal” the intelligence machinery in the brain, and possibly elsewhere, is pursuing: you say it’s “minimizing entropy”. I don’t understand entropy well enough to comment on that. If you do, you might enjoy a book I read once, “Programming the Universe” by Seth Lloyd, a quick and easy read. Any coder will understand the aspects of bits and Boolean operations etc., but from there he goes on to particle physics and quantum mechanics and information theory, stuff that I can only admire from a distance. He’s got credentials from MIT and whatnot, so it’s probably all sound.

But whether it’s “minimizing entropy” or some other goal, something is driving the “design” of intelligence, in brains, machines and elsewhere. The often-quoted Yann LeCun, a fool who believes he can bypass biology on the way to intelligence, is nonetheless right in demanding that “the algorithm minimize an objective function”, or something along those lines. I doubt we can do a proper job extending the idea of intelligence from brains to machines or crowds without taking this bigger picture into consideration.
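For readers who haven’t met the phrase, “minimizing an objective function” means nothing more mysterious than this (a generic gradient-descent toy of my own, not LeCun’s formulation):

```python
# Minimize the toy objective f(x) = (x - 3)**2 by gradient descent.
def gradient(x):
    return 2 * (x - 3)      # derivative of (x - 3)**2

x = 0.0                     # arbitrary starting point
for _ in range(100):
    x -= 0.1 * gradient(x)  # step downhill along the gradient
print(round(x, 4))          # ~3.0, the minimum of the objective
```

What the brain’s objective would be, entropy or otherwise, is exactly the open question.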

“We’ve been living happily with artificial intelligence for thousands of years.”

Brian Eno: Just A New Fractal Detail In The Big Picture, which addresses the brain shrinkage and the “synergy of embodied intelligence”, i.e. the wisdom of crowds.

Hah! This made me laugh out loud.

I know this was just off the cuff, but LeCun is responsible for convolutional networks, which are based heavily on the large scale structure of the visual cortex. :stuck_out_tongue:

Not really my field of expertise, but a CNN is a type of ANN, and ANNs are nowhere near biology, nor do they aspire to be. Also, I can’t find it anymore just now, but I think it was on Hacker News a while ago that LeCun was quoted accusing Numenta of biomimicry.

I don’t want to derail this thread; I’ve fought back against this attitude elsewhere on the forum. One thing artificial neural networks (which technically HTM is as well, no?) have going for them is that they work really well. And they go way further toward explaining aspects of the brain than you might expect. I’m currently a visiting researcher in an in-vivo rodent neuroscience lab that does proper 2-photon microscopy and all the rest, and half of their students are working on deep learning. Go figure! :wink:

An accurate accusation, I’d say!


Hi

Thanks for your comments, and I shall see if I can provide some answers.

There are two ways of getting close to the problem of “intelligence”, if we use that word for the efforts here. One is a bottom-up effort and the other a top-down effort. I think all intellectual work necessarily includes both, theory and practice; i.e. the brain is always predicting and receiving data, top-down and bottom-up.

In my opinion there are three types of neurons: sensory neurons, interneurons and motor neurons.
Interneurons are in fact “doubt” neurons, or “entropy” neurons. Entropy can be measured as the average number of yes/no questions that must be asked of a random source to guess (= predict) the correct answer. For a random source of bits it is at its maximum when the probability (= prediction) reaches 50/50. In my opinion the brain is trying to reduce this to, say, 1/99 before deciding to move. So entropy, the amount of uncertainty, is at the core of understanding the brain and the nervous system.
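To be concrete, this is the Shannon entropy of a binary source, H(p) = -p·log2(p) - (1-p)·log2(1-p), which peaks at exactly one bit when p = 0.5. A quick check in Python:

```python
from math import log2

def binary_entropy(p):
    """Average number of yes/no questions (bits) needed to predict
    a binary source that fires with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(binary_entropy(0.5))             # 1.0 bit: maximum doubt, the 50/50 game
print(round(binary_entropy(0.01), 3))  # 0.081 bits: the 1/99 state before moving
```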

Interneurons impose a delay (= memory) in the circuit, and they can decide to respond by firing or not firing. In between they can be in the “predicting” phase, or the decision phase in my terminology.
Now, to add to your request for algorithms:
Most people here agree that the allocentric and egocentric processes are real and need modelling algorithms? Now imagine what questions the allocentric and egocentric processes must answer in order to drive their part of the decision process in the brain: they must be the “where” and the “how” questions. The mere fact that the brain’s processes need to ask a question, and must get and process the information (more accurately, the bits) necessary to find the answers, puts entropy at the center of an understanding of what the brain is doing. So in my algorithm the brain is hardwired with the necessary questions. Entropy can also be called “doubt”, and doubt can be qualified by raising and asking questions to get the information that can provide the answers.

This is to say that the description of the different systems necessary to drive the process, observed from the top down, is the foundation for identifying the necessary calculation algorithms. If you read Jeff’s very good comment about the timing signal today, that is a good example of what can be done; timing across the neocortex is a necessary function. But where is the theory that identifies, describes and explains all the other necessary functions? From what basic first principles, like “interneuron = entropy” or “decision = start/stop moving”, can it be constructed? Because if the algorithm is constructed from intuition and not from first, very simple principles and observations, then nobody can be sure it will work, and it will end up with expanding and compensating algorithms, like in deep learning.
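Here is a minimal sketch of that loop (the questions, the evidence step and the threshold are all illustrative choices, not claims about real circuitry): keep answering the hardwired questions until the residual entropy of each falls below a decision threshold, then move.

```python
from math import log2

def entropy(p):
    """Residual doubt, in bits, about a binary prediction p."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# The hardwired questions, each starting at the 50/50 maximum-doubt answer.
belief = {"where is the target?": 0.5, "how do I reach it?": 0.5}

def take_in_evidence(question, strength=0.1):
    # Each batch of incoming bits nudges the prediction toward certainty.
    belief[question] += strength * (1.0 - belief[question])

THRESHOLD = 0.2  # bits of doubt tolerated before committing to a movement
while any(entropy(p) > THRESHOLD for p in belief.values()):
    for q in belief:
        take_in_evidence(q)  # keep asking, keep processing bits

print("move!", {q: round(p, 2) for q, p in belief.items()})
```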
So maybe we now have the same view of what an algorithm is?
Regards
Finn

By the way, using the trick of talking about degrees and universities as if this in itself proves any kind of quality is not scientific or academic, I think. What do others think about this kind of argumentation?