Volition, Goal-Oriented Behavior, and the Future of HTM

I think that the spatial/temporal pooling work is well figured out, and that it’s great for the things it’s already being used for. I’ve been fervently pursuing an understanding of what makes animals/brains exhibit motivation, goal-seeking, etc. for over a decade now, and have spent many a paycheck on books about brains, theory, and cutting-edge AI research over the years.

I read On Intelligence back in 2007, and again recently, and have been studying every talk Jeff has given that’s online. I think that HTM is headed in a far more productive direction than all of the rest of AI/ANN/DNN research combined, insofar as machine intelligence is concerned. To my mind, machine autonomy is the holy grail.

The most interesting video I’ve come across so far (there may be others, but I haven’t found them yet) is this one, where Jeff goes into detail about the cortex’s interaction with sub-cortical regions, goal-selection, etc. It’s a Q&A from the fall of 2014 during a NuPIC hackathon; it can be found here.

From the first few weeks of my life-long obsession with machine intelligence, roughly 13 years ago, I knew that we needed to figure out how brains work, and what it is that all brain-possessing creatures have in common, in order to create machines that behave in the same fashion. Ever since, I’ve searched for the underlying common structure that all brains share, the one that allows creatures to exhibit autonomous behavior.

Jeff mentions in the video that the cortex presents the basal ganglia with possible motor output options, and the basal ganglia respond with a choice. He states that he doesn’t know how that choice is made, or how to implement goal-oriented behavior. That was two years ago, so maybe he has a better idea now, but as soon as I saw it I knew exactly what the answer was: reward/punishment - directing behavior through an element of pain/pleasure.
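To make the idea concrete, here’s a minimal sketch (my own toy, not Numenta’s model of the basal ganglia): candidate actions are proposed, and a selector picks among them using values learned purely from a scalar pain/pleasure signal. The action names, the toy environment, and the bandit-style update rule are all assumptions for illustration.

```python
import random

# Hypothetical sketch: the "cortex" proposes candidate actions, and a
# selector chooses among them using values learned from reward/punishment.
values = {"reach": 0.0, "withdraw": 0.0, "wait": 0.0}

def select(proposed, rng, epsilon=0.1):
    if rng.random() < epsilon:            # occasional exploration
        return rng.choice(proposed)
    return max(proposed, key=values.get)  # otherwise pick best-valued action

def learn(action, reward, lr=0.2):
    # Nudge the action's value toward the outcome it produced.
    values[action] += lr * (reward - values[action])

rng = random.Random(1)
for _ in range(200):
    a = select(["reach", "withdraw", "wait"], rng)
    # Toy environment: "reach" yields pleasure, "withdraw" yields pain.
    r = {"reach": 1.0, "withdraw": -1.0, "wait": 0.0}[a]
    learn(a, r)

print(max(values, key=values.get))  # the pleasurable action wins out
```

The point is only that a scalar pain/pleasure signal is enough to bias selection among options, without the selector knowing anything about what the options mean.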

Using some representation of pain/pleasure for feedback has been explored quite a bit outside of HTM. My favorite example is a study Kenneth Stanley was involved with, in which they trained a simple plastic Hebbian network to learn online given a form of pain/pleasure feedback - https://youtu.be/J-sUzj4xu7o
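As a rough sketch of the idea (my own minimal toy, not the actual model from Stanley’s study), here is a reward-modulated Hebbian update: a scalar pain/pleasure signal gates plasticity, so rewarded co-activity is reinforced and punished co-activity is weakened. The network size, learning rate, and input patterns are arbitrary choices.

```python
import numpy as np

w = np.full(4, 0.05)  # small uniform initial weights, one output unit

def step(w, x, reward, lr=0.1):
    """One online update: Hebbian co-activity scaled by a reward signal."""
    y = np.tanh(w @ x)             # post-synaptic activity
    w = w + lr * reward * y * x    # reward-gated Hebbian term
    return w, y

a = np.array([1.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 1.0])
for _ in range(50):
    w, _ = step(w, a, reward=+1.0)  # "pleasure": reinforce this pathway
    w, _ = step(w, b, reward=-1.0)  # "pain": weaken this pathway

# Response to a saturates near 1; response to b decays toward 0.
print(float(np.tanh(w @ a)), float(np.tanh(w @ b)))
```

With positive reward the update is plain Hebbian and the rewarded pathway grows; with negative reward it is anti-Hebbian and the punished pathway is driven toward silence.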

I’ve found that the issue with almost all machine learning that involves feedback is the problem of local optima. Again, Kenneth Stanley goes into a lot of detail in other studies about why this is. However, HTM has the potential to overcome local optima unlike anything else. I believe that to understand more about how the brain achieves goals is to understand more about how the cortical hierarchy works. In our brains the association areas of the prefrontal cortex are key to goal-oriented tasks. This doesn’t always require pain/pleasure feedback: think about when you have a goal that’s purely logic-oriented - there’s very little emotion involved, but the problem can be very complex.
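The local-optima problem is easy to demonstrate with a toy example (my own, not from Stanley’s work): a greedy learner that only follows improving feedback stalls on a low “reward” peak, while even crude exploration finds the higher one. The landscape and step rule here are invented purely for illustration.

```python
# 1-D "reward landscape": a low local peak at x=2, the global peak at x=8.
def reward(x):
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 5 - abs(x - 8))

def hill_climb(x, steps=50):
    """Pure greedy feedback-following: move only on strict improvement."""
    for _ in range(steps):
        nxt = max((x - 1, x, x + 1), key=reward)
        if reward(nxt) <= reward(x):  # no strictly better neighbor: stuck
            break
        x = nxt
    return x

def with_exploration():
    """Same greedy learner plus restarts over the (tiny) space."""
    return max((hill_climb(x0) for x0 in range(11)), key=reward)

print(reward(hill_climb(0)))      # stalls on the local peak (reward 3)
print(reward(with_exploration())) # exploration reaches the global peak (5)
```

Every purely feedback-driven learner faces some version of this; the interesting question is whether HTM’s representations let a system escape such traps more gracefully.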

Thanks for the reply. One of the books I invested in a decade ago is called “Animal Learning and Cognition”, and it pursues an understanding of goal-oriented behavior and action-selection (which IMO is rather crude) by learning and associating pleasure with the actions taken to achieve it, reinforcing them. It presents these theories as simple little neural network structures whose various sub-structures serve as different ‘parts’ of the brain. My intuition is that the book is definitely on to something, but that its neural networks do not capture the actual structure of the brain accurately. It is an interesting read, however.

As for HTM overcoming the local-optima problem that neural networks have, I think it’s going to prove itself the most promising tech for creating autonomous machine sentience and intelligence. I’m more inclined to approach the problem from the ground up - that is, to start from the behavior-learning side, with a reward/punishment perception/volition associative inference, and work toward increasing its capacity for abstraction - which I imagine would grant the ability to plan more intelligently, communicate, and develop fine motor skills.

That approach seems logical, as you’d be following the evolutionary steps, from simple to more complex.

I’m going to check out “Animal Learning and Cognition”. Then it looks like the next step will be “The Neurobiology of the Prefrontal Cortex”.

One issue with following evolutionary steps is that there isn’t enough research on the specifics of anything more complex than a leech. You need information on exact neural structures, and that can be hard to find. I’ve only looked on the internet, though.
That’s not to say that looking at evolution is completely useless - it’d just be hard.

Well, actually, that’s a misinterpretation of what I said. I didn’t say I’d follow evolution; that’s a false assumption. What I meant was that I’ve always approached the problem of autonomy from the reward/punishment and behavior-learning side, as opposed to the spatio-temporal pattern recognition/classification and abstraction side, on which neural networks and now HTM seem to be making progress at a respectable rate.

I think you’re missing the point of AI research if you still think that specific neural wiring is what’s needed to make a brain work. The vast majority of specific wiring is actually formed randomly and then pruned/reinforced throughout a creature’s lifetime. If you mapped out the neocortical connections of an unborn baby, you would not find anything at all similar to what is in an adult brain, because very little information has been processed at that point in the brain’s development.

You will not find Broca’s area to be wired the same, or Wernicke’s. In fact, they don’t even exist in an infant; they are clearly areas of the neocortex that only come into being as a direct result of the brain processing information. Looking at the ‘areas’ of the neocortex, it’s pretty obvious that many of them, such as the spatial association area, are a direct product of their proximity to the hard-wired input areas surrounding them - the spatial association area lies directly between the visual and tactile sensory processing areas. It only ‘becomes’ the spatial association area because the activity flowing in from the connected sensory organs is processed into higher and higher levels of abstraction the further you travel from the primary areas, and at the juxtaposition between such areas you find these ‘magical’ regions that end up handling more specific things.

Prominent patterns in sensory input, such as a spoken language, will find a place among the ‘balance’ of other prominent patterns in human experience. If a person loses their sight, their visual regions will be taken over by the surrounding areas that are still processing input and generating output. The cortex is highly dynamic and adaptive. You could remove, from a newborn baby, any part of the cortex not immediately connected to a sensory input or a sub-cortical region - even one regarded as handling a commonly recognized function such as facial recognition or language recognition - and the child would grow up into a normally functioning adult: slightly diminished in capacity, perhaps, but still fully capable of learning language and faces.

There is actually a lot of research on brains and the reward system. The dopaminergic pathways have been mapped out, as have the serotonergic ones. Mammals share a common brain structure, regardless of the exact wiring within each individual structure. What matters is the overarching flow of information between the different parts - the ‘structure’ of the brain as a whole, which generates the brain-wave ‘cycles’ that organize their connectivity into something relevant - not the exact wiring inside each individual area, because that is developed through the processing of information, exposure to the environment, and learning behaviors through reward/punishment.

I’ve been studying brains for a long while, in an effort to get to the cutting edge of these creative efforts, and we are definitely close. Between the promise of HTM and deep learning, someone has to connect them to a reward/punishment system to create a dynamic system capable of learning organically how to behave to achieve whatever it’s designed to achieve.

Personally, I believe that ANNs do not distill what the brain is actually doing, information-processing-wise, nearly as well as SDRs do. Even with the advent of so-called ‘neuromorphic’ chips that reduce the cost and increase the speed of running ANNs, I think equivalent hardware designed to operate on SDRs would possess orders of magnitude more capability for the same power cost. It simply seems sloppy - and combinatorially more expensive - to manage a synapse from every neuron to every other neuron.
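A quick sketch of the computational point (a toy of my own, not Numenta’s implementation): if an SDR is stored as the set of its few active bit indices, comparing two representations touches only the active bits, not every pairwise connection. The bit count and sparsity below are just typical HTM-style numbers.

```python
import random

N_BITS, N_ACTIVE = 2048, 40  # ~2% sparsity, typical of HTM examples

def random_sdr(rng):
    # An SDR stored as the set of its active bit indices.
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    # Similarity = shared active bits; touches at most 40 indices,
    # never all 2048 (and never a dense all-to-all weight matrix).
    return len(a & b)

rng = random.Random(42)
x = random_sdr(rng)
noisy = frozenset(list(x)[:30]) | frozenset(rng.sample(range(N_BITS), 10))
other = random_sdr(rng)

print(overlap(x, x))      # 40: identical codes
print(overlap(x, noisy))  # high: shares most active bits
print(overlap(x, other))  # near 0: unrelated codes rarely collide
```

Dense representations pay for every connection on every comparison; here the cost scales with the handful of active bits, which is the asymmetry the power-cost argument above rests on.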

Sorry, I was really unclear. I didn’t mean to imply that synapse-specific or region-specific connectivity is all-important. I just meant that you need access to the details that matter. Exact connections matter in really simple nervous systems, but they generally don’t in complex organisms. The rules still matter, though, and you can’t find those rules without enough research. As far as I know, we have nothing close to a map of neural evolution at that level of detail.

That’s not what you were talking about, though. Behavior and reward in the context of SDRs is absolutely important and doable.