The talk about the paper is out now (as a re-enactment of sorts) and at the end (49m30s) @jhawkins doesn’t mince words. “Understanding the brain” is “the most important scientific endeavor of all time” and “central for the long-term survival of our species”.
Is it really? The “human question” is: why are we special? Long-finned pilot whales have the most neocortical cells of any mammal, humans included, but still live comparatively dopey lives. Why?
An educated guess says vocal cords and opposable thumbs enable us to form complex societies, hive minds of sorts. Figuring out how that happens is central to our long-term survival, and brains are cogs in that machinery. Important cogs, no doubt, but cogs. I’ve touched on this elsewhere.
N.B. Some might feel that philosophical considerations of this kind belong elsewhere in this forum. Feel welcome to continue the conversation in another category. But @jhawkins considered this aspect important enough to begin and end his talk with so I think this should be said here.
Wouldn’t simulating a hive mind, by definition, require the simulation of a mind as its basic, repeatable unit?
I believe the thing which enables the human “hive mind” is the fact that our ideas are themselves creatures competing for resources in an evolutionary process. Their transfer medium was originally spoken language, but the evolutionary process (and with it the ever-greater complexity of human society) accelerates with each new and better form of communication, information storage, and retrieval.
An excellent introduction to the concept of ideas themselves being creatures is Dan Dennett’s Dangerous Memes on TED Talks. That concept leads to another interesting thought that @gmirey hinted at on another thread: the underlying “DNA” for these “creatures” might turn out to be something similar to the concepts that Calvin describes in The Cerebral Code.
That doesn’t necessarily follow. I can model the weather without modeling the molecules the air is made of. I suspect however that yes, in this case it is necessary and that’s one reason I’m here and learn HTM.
The other reason is that, in some sort of fractal way, the hive mind exhibits large-scale structures that are brain-like in the same way that it is made of brains at the small scale. The problem of “generic information processing” is somehow the same at both scales, and convergent evolution finds the same solution both times.
“Competing for resources” is true, but true to the point of being trivial. Resources are constrained at any level, e.g. the HTM spatial pooler has limited capacity for learning patterns and patterns “compete” for making it into the k most active to be selected for reinforcement at each learning step. No need to conjure up analogies to evolution here.
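To make that resource constraint concrete, here is a minimal sketch of the k-winners selection the spatial pooler performs at each step. This is plain illustrative Python, not actual NuPIC code; the function name and toy numbers are mine:

```python
def k_winners(overlaps, k):
    """Return the set of indices of the k columns with the highest overlap.

    `overlaps` holds one activation score per spatial-pooler column; only
    these winners get their synapses reinforced on this learning step.
    """
    # Rank column indices by overlap score, highest first, and keep the top k.
    ranked = sorted(range(len(overlaps)), key=lambda i: overlaps[i], reverse=True)
    return set(ranked[:k])

# Toy example: five columns, two winners allowed.
overlaps = [3, 9, 1, 7, 5]
print(k_winners(overlaps, 2))  # -> {1, 3}
```

Every pattern that fails to land a column in the top k simply doesn’t get reinforced, which is the entire “competition” in one line of code.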
I believe evolution is exactly what is happening with ideas (I didn’t mean it as an analogy). There is nothing about the concept of evolution that requires biological organisms to be the subjects. Dennett articulates the argument better than I can, though, so if his talk didn’t convince you then I won’t try.
But putting evolution of thought aside for a minute, I think there is an underlying concept that perhaps we can agree on, which actually fits nicely with the “thousand brains” idea. In the theory, many different networks are modelling what they perceive, and have connections going all over the place to allow voting between them.
I believe that human evolution produced a new connection path that is able to form connections that are no longer confined to the physical boundaries of a single brain. This medium is language. It is language which makes humans special, and language is the basis for the “hive mind” that you have described. The “thousand brains” networks suddenly have the potential to become exponentially larger.
No, but the concept of evolution requires the evolving entities to be complex, the way biological organisms are complex. If an idea is an SDR that’s just not very complex.
That seems like a viable approach to start thinking about hive minds, and whether we can find brain-like mechanisms at work at large scale.
I don’t think you can just take inter-column communication and implement it 1:1 at an inter-human level, if that’s what you are proposing. For starters, the connections between humans are much more sparse and lower-bandwidth. That’s even taking the internet into account, and the internet brings its own issues of human-computer interaction.
So if you have an approach that starts with inter-column communication as a starting point and then makes modifications A, B and C to scale it up, then I’d be interested.
I disagree that ideas are not complex. Computers, the internet, freedom, communism, Islam, and so on are all ideas, and all very complex. The highest-level SDRs that you are referring to, which represent these ideas, are labels (like the word “cat”); the concepts they represent consist of pieces from other representations, which consist of yet other representations, and so on. The building blocks at the foundation are rooted in somatosensory experience, and are then built on each other to form very high levels of complexity.
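As a toy illustration of that compositional structure (the concept names and the flat dictionary below are invented for this sketch, not anything from HTM theory), a high-level label can be expanded recursively until it bottoms out in sensory-level primitives:

```python
# Hypothetical sketch: each high-level label points to component
# representations, bottoming out in (stand-in) sensory primitives.
concepts = {
    "computer": ["circuit", "program"],
    "circuit":  ["wire", "switch"],
    "program":  ["instruction"],
    # primitives have no components
    "wire": [], "switch": [], "instruction": [],
}

def unpack(label):
    """Recursively expand a label into the set of primitives grounding it."""
    parts = concepts[label]
    if not parts:
        return {label}
    grounded = set()
    for part in parts:
        grounded |= unpack(part)
    return grounded

print(sorted(unpack("computer")))  # -> ['instruction', 'switch', 'wire']
```

The point of the sketch is that the label at the top is cheap to transmit, while the complexity it stands for lives in the whole tree beneath it.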
Of course bandwidth is important (that’s why I’m still “me”, and you are still “you” during our conversation).
To me this makes a whole lot more sense when you think about it from the perspective of the ideas being like organisms. Those which find themselves in situations which increase the chances of their being preserved and spread to more brains will be more prolific, and more likely to be the foundational stock for future, more complex ideas. There are many factors which make one idea more “fit” than another. If they address some basic need that humans have (food, entertainment, sex, acceptance, accomplishment, etc.) better than their competitors, and if they mesh well with other prolific ideas, and are easy to communicate, then they are likely to spread to more hosts.
With modern technology added to the picture, there are other things that affect fitness, such as ease of storage and retrieval, social media from which they originated, and so on. With a more connected society, other factors come into play, like popularity of the host, the current political climate, etc.
And ideas often don’t have to improve the biological fitness of the host either, as Dennett talked about. I think all of this when looked at together very much describes an ecosystem of sorts, built not only on human brains as a resource, but because the creatures are no longer confined to a single brain, also taking advantage of the physical and social environment, technology, etc. whenever it improves their chances of replicating.
Oh BTW, after reading my rather long-winded post here, you will also have labeled it with an SDR in your mind for future reference. Did forming that SDR suddenly make this post less complex?
Your “ideas composed of many SDRs” would have evolved starting with a single SDR or whatever the single-cell organism equivalent is in a hive mind ecosystem. And it would have to benefit its hosts (brains) at this level, as the next level can’t evolve without the previous one being viable on its own.
So why don’t you start with that and worry about more complex scenarios later?
I think this is exactly the point I am trying to make. The “single cell” in this system is the cortical algorithm of a brain. A brain starts with very little understanding of the world, and then through experience builds on that knowledge to form more and more complex constructs on top of each other. This mechanism originally evolved to function within a single organism (long before humans arrived on the scene) to improve its chances of survival by understanding and modeling its world. The “fitness” of these constructs is measured by how well they meet the needs of the host organism, how well they mesh with other successful constructs, their applicability to different scenarios, and so on. The more prolific constructs beat out their competitors and become the foundation of future constructs.
Then humans came along and evolved a method which allowed these constructs to more efficiently escape the boundaries of the originating host brain, and spread to other hosts. Prior to language, the bandwidth for this was much narrower (such as parents teaching a skill by example to their offspring), and couldn’t accommodate very high levels of abstraction.
My point is, a relatively small adaptation enabled a long-existing mechanism built into mammalian brains to expand exponentially, forming the “hive mind”.
I don’t believe complex “distributed ideas” are ideas that have grown to complexity in single brains and then made the jump to other brains in their complex forms. Complex distributed ideas start as simple distributed ideas and grow to complexity in their distributed state.
The path to a distributed complex idea starts with distribution, and complexity follows. So show me the distribution of a simple single-SDR idea: how that works, and how it benefits the hosts.
Neither do I. Take something highly complex like a computer. It is highly unlikely that any one person completely understands every single part of a computer, down to the finest detail. But a number of people with different skill sets together do have all the knowledge needed to build a computer. And if an important concept for building a computer is forgotten by everyone in the world, it can be resurrected and begin replicating once again from various technologies which allowed it to be recorded and then retrieved when needed.
In this case, the “creature” (the computer) isn’t replicating within a single brain, but distributed across many brains. This is possible because the foundational concepts which make a computer were able to leave the original host brain which conceived them, and other host brains were able to take them and combine them into new concepts, which themselves then left their hosts and entered new ones to form new concepts, and so on.
Baby touches a hot car seat and starts crying in pain. Mother comforts him with “Are you OK? Aww, that was hot, wasn’t it?”
Later, toddler is reaching toward the stove, and mother shouts “No, that’s hot!”. Toddler immediately understands, and learns to avoid a hot stove (without having to experience for himself that it was hot). When he grows up and has children of his own, he also teaches that hot stoves are dangerous, and the concept is thus further distributed.
Obviously this is overly simplified (it takes more than one scenario for a baby to learn a word like hot, of course), but hopefully you get the idea.
Yes, and that’s a simple, one-SDR idea, as opposed to “the internet” or “communism”. So show me the mechanism, down to the individual bit, of how that idea gets distributed at population scale, in an efficient way, and plugs into an HTM at each node.
I thought you wanted a simple example. Communism is typically spread through societies that have high corruption, are hurting economically, and where the distribution of wealth is out of balance and can be blamed by supporters who are able to paint communism as the solution. Individuals rally behind the cause without any one of them fully understanding it (just like my example of no one person fully understanding a computer).
Sorry if I am just repeating myself to support my one perspective. I’ll back off and let other ideas into the conversation. Do you have some answers in your mind for how the mechanics of the human hive mind work?
I’ll take a stab at something a bit more detailed, while not down to the level that you are probably looking for. I’ll start by pointing out that individual bits are not so important in HTM theory. What is important is the ratio of semantics encoded in those bits.
This paper paints a good view of how the semantics of words are formed. It implies that embodiment is a critical element to the process. This system is the basic foundation on which distribution of ideas between individuals can occur.
While obviously the internal representations formed by two individuals are different since they are based on individual unique experience, two or more individuals can come to at least a partial consensus on the semantics that make up the words of a language, by sharing experiences together. When these semantics are similar enough, a word like “hot” can then be used to represent an idea that is understood by multiple individuals.
If a word has been encountered in a number of diverse examples by the individuals involved, then it is more likely that one individual’s internal representation for that word will share similar semantics with the representations of the other individuals involved. Note that the specific bits will be different between individuals, but the ratio of semantics encoded in those bits, while not exact, must be similar enough between individuals for the distribution of ideas to occur.
Now when one individual speaks the word “hot”, the other individuals decode that word into their own internal representations (which share similar semantics with the one in the originating individual). Thus, one brain now has a method by which it can evoke activations with similar semantic content in other brains. Combining strings of words together, this forms the mechanism for encoding and transmitting ideas. Experiences build up in complexity to become words, and then language becomes a tool by which one brain can transfer complex ideas to another brain.
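Here is a toy sketch of that last point, with invented bit indices (nothing here comes from real HTM representations): treat each brain’s SDRs as sets of active bits. The specific bits for “hot” differ completely between the two individuals, yet within each brain “hot” overlaps its related concepts in a similar ratio, which is what lets the shared word do its job:

```python
def overlap(a, b):
    """Fraction of active bits shared between two SDRs (sets of bit indices)."""
    return len(a & b) / max(len(a), 1)

# Hypothetical toy SDRs. The bit indices are made up: Alice's and Bob's
# bits for "hot" don't match at all, but within each brain "hot" shares
# bits with a related concept ("stove") and none with an unrelated one ("cold").
alice = {"hot": {1, 4, 7, 9},  "stove": {4, 7, 9, 12},  "cold": {2, 5, 8, 11}}
bob   = {"hot": {3, 6, 8, 10}, "stove": {6, 8, 10, 14}, "cold": {0, 5, 9, 13}}

# The bits themselves are not shared across brains...
assert overlap(alice["hot"], bob["hot"]) == 0.0

# ...but the *ratio* of semantic overlap is similar within each brain,
# so the word "hot" evokes similarly structured activations in both.
for brain in (alice, bob):
    assert overlap(brain["hot"], brain["stove"]) > overlap(brain["hot"], brain["cold"])
```

The word is the transfer medium; what it preserves across brains is not the bits but the relational structure among representations.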
Note in figure one of that paper that, while there will be individual differences between two people, they both ground their language learning in personal experiences in their somatosensory cortex. This is the most basic form of shared experience: they both have bodies that work in about the same way.
Several posters in this forum want to strip away the messy biology and go right to a “brain in a box” without a body or emotions, seeing these things as unnecessary details.
I often wonder how hard it will be to simulate, without a body, the semantic grounding in a body that all living things have as a base for their cognitive processes. I usually reach the same conclusion: that approach will end up with the same flaw that is usually called out in current AI efforts, no common sense. That is, no grounding in the things that “everyone just knows” as part of the decision process.
Society/culture works to couple negative associations with acts deemed undesirable, to shape your choices in selecting actions.
I am a fan of the concept that we judge intelligence as the ability to select the best action when presented with your perceptions. During your mental nursing, your caregivers should be feeding you a diet of actions paired with good/bad flavorings to ensure that, once you have weaned from their training, you will select the best actions in whatever situations you find yourself in. Some of these actions may be to gather more information in order to make a good choice.
I will add that these pairings have changed over time. This is one of the advantages of taught behavior over evolution: more rapid adaptation to changing environments. If nature survives long enough it might be able to develop effective survival behaviors for challenges like automobiles; it is likely to take many generations for deer to figure this out.
I wonder if our ‘specialness’ really just comes down to high enough connectivity in the speech center for complex language. I also wonder what we’re missing out on as a result, as per the compensation mechanism they mention in that article.