“Prediction” from first principles


#1

I agree with Jeff et al. that intelligence is all about hierarchical pattern discovery and prediction.

But I disagree that imitating the neocortex is a good way to do it. I am curious about neuroscience, but everything I know about the brain looks like a kluge. Which is not surprising for a product of blind evolution and severe biological constraints.

So, I’ve been working on a purely functional design for scalable pattern discovery and projection. “Functional” means deriving all operations from the objective function: maximizing predictive power, defined as projected match of model to experience, where match is quantified as additive compression of representation.
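For illustration only, here is one toy reading of “match quantified as additive compression” (my own sketch with invented function names, not the project’s actual code): the match between two values is the smaller of their magnitudes, i.e. the part of either input that is predictable from the other, and summing it over a sequence gives a crude proxy for predictive power.

```python
def match(a, b):
    # Shared magnitude: the part of either input that can be
    # reconstructed from the other, hence the compression gained
    # by encoding b as (a, b - a) instead of b alone.
    return min(abs(a), abs(b))

def predictive_power(seq):
    # Sum of matches between consecutive inputs: a rough proxy
    # for how compressible (predictable) the sequence is.
    return sum(match(p, q) for p, q in zip(seq, seq[1:]))
```

Under this reading, a flat sequence like [5, 5, 5] scores higher than a volatile one like [5, 0, 5], which is the intended ordering.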

I am looking for collaborators and will pay for contributions: https://github.com/boris-kz/CogAlg
Thanks!


#2

Thanks for posting this @bkaz, your website is an interesting read and I like that you contrast your approach against others in detail.

Personally I think reverse engineering the “kluge” algorithms of biology is a helpful starting point in the absence of a better theory, but if someone out there can figure out a more direct path to a better implementation then well done to them! :tipping_hand_man: :tophat:

I am curious to know if (and at what point) you’d incorporate integration of different data feeds into your algorithm - apologies if this is mentioned somewhere, I only saw reference to static image processing. The brain of course integrates a complex mix of sensory modalities when building stable representations of things - do you see this as an important part of intelligence?


#3

Thank you very much, @jimmyw! I have some draft code for video: https://github.com/boris-kz/CogAlg/blob/master/video_draft.py , and plans to incorporate audio and text (labels) in the “Implementation” section at the end of the readme. Additional modalities are definitely useful, but I don’t think they are essential for general intelligence. Vision is ~80% of primary input in humans, and deaf people can be just as smart :).


#4

The brain isn’t a complete mess. There are lots of repeating units like minicolumns, macrocolumns, and regions, so there are some fundamental operations going on. It’s not super clean either, but a lot of the apparent messiness comes from neuroscience, not the brain, since the brain is hard to study.

We don’t even know what intelligence is except at a very abstract level, and it’s going to take more than a few words to describe. We have educated guesses, but we basically don’t have a clue what intelligence is. We have made decent progress on perception, but that’s a long way from intelligence, and except for brain-based perception, it says nothing about intelligence: perception achieved -> … -> intelligence achieved. Without any guide towards intelligence, developing AI is limited to testing things which seem likely to work. We’ve done this over and over for 50 years. No one is going to have a sudden spark of insight that leads to AI, because this isn’t sci-fi, it’s science. We need a more iterative process, not a bunch of random guesses. We can build on deep learning and the like, but there’s no way that will lead to AI that remotely resembles human intelligence.

We’re trying to understand how something right in front of us works, so why not use the original example of intelligence which we are trying to replicate? At this rate, the current approach will take centuries to produce intelligence.

Not that the AI being produced is worthless. It just isn’t suited for the same tasks as human intelligence. Even with billions of neurons, the brain can’t do some things modern AI can do. Intelligence shouldn’t be the goal, though.


#5

The brain isn’t a complete mess. There are lots of repeating units like minicolumns, macrocolumns, and regions

Yes, minicolumns are repeating, but a lot of neuroscientists consider that a developmental artifact (I disagree). With macrocolumns, it’s not at all clear that they are similar across brain regions, or even present in higher association cortices. And the regions are mostly specific to human sensory and motor modalities, which have nothing to do with general intelligence per se.

That’s true for most people, but you have to be careful with that “we” :).

I disagree, and so does Jeff Hawkins. In On Intelligence, he specifically argues that hierarchical perception is GI-complete. Humans do have a heavy emotional and cognitive bias towards action, but I think that’s a phylogenetic artifact: the brain evolved to guide a specific body. That’s not relevant for AGI; action and experimentation are helpful, but we can do science without them.

Without any guide towards intelligence, copying the brain is limited to sexual reproduction :).
You wouldn’t know what to copy: the whole substrate? Developmental, metabolic, and immune-system artifacts?
Various built-in biases that subserve hunting and gathering? Mechanisms that compensate for the ridiculous amount of noise in the brain?

You don’t know that. It’s not science, it’s a meta-science.

Yes, we need deeper introspection. Deeper than most people can sustain, with our species-wide ADHD.
It takes a lot of cultivated focus.

I am not arguing for SGD DNNs vs. HTM; that should be obvious from a first glance at my introduction. My approach is strictly functional, and you are not addressing any of it.

Yes, why not use it, for introspection?
I don’t know if you noticed, but HTM is supposed to model the neocortex only, specifically excluding subcortical areas. That is, excluding most of the innate complexity in the brain. Why? Because Jeff Hawkins does have a guide: he decided that intelligence is a capacity for value-free learning, as opposed to the tons of other things the brain does.
That came from introspection.
And I think we should exclude most of what the neocortex does too: it’s also loaded with artifacts. To do that, we need a constructive definition of what value-free learning is. That’s what I start with.

Ah, that’s why we have this “disconnect”. Intelligence is my goal, and I thought it was Jeff Hawkins’s too.


#6

Numenta’s primary mission is “to be a leader in the new era of machine intelligence.”

Since all theories are tentative it’s a “we will know when we get there” type of thing, which is subject to change over time.

I find that topics on this forum cover all parts of the brain. At the moment, one of the biggest challenges is to figure out how the neocortex works. Without this knowledge you are only guessing.

Although our “brain” forms line segments in more than one direction or angle at a time, this from your GitHub page looks to me like a good start towards the signal-processing hierarchy that begins in our retina:

level 1 compares consecutive 0D pixels within horizontal scan line, forming 1D patterns: line segments.
level 2 compares contiguous 1D patterns between consecutive lines in a frame, forming 2D patterns: blobs.
level 3 compares contiguous 2D patterns between incremental-depth frames, forming 3D patterns: objects.
level 4 compares contiguous 3D patterns in temporal sequence, forming 4D patterns: processes.
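The quoted level-1 step could be sketched roughly like this (a toy reading under my own assumptions, not the actual CogAlg code): consecutive pixels in a scan line are compared, and maximal runs with a consistent sign of difference become tentative 1D patterns.

```python
def form_patterns(line):
    """Segment a 1D scan line into runs where the sign of the
    consecutive-pixel difference stays the same: a crude stand-in
    for "level 1" forming line segments."""
    patterns, current, sign = [], [line[0]], None
    for prev, p in zip(line, line[1:]):
        s = p >= prev                  # sign of d = p - prev
        if sign is not None and s != sign:
            patterns.append(current)   # a sign flip ends the pattern
            current = []
        current.append(p)
        sign = s
    patterns.append(current)
    return patterns
```

For example, a ramp up and back down, [1, 2, 3, 2, 1], splits into [1, 2, 3] and [2, 1].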

I expect much better results can be achieved by starting off with line segments at multiple angles. It’s then possible (using the same number of photoreceptors) to see tiny, well-rounded circles, instead of wrongly seeing tiny squares and rectangles. Following the contours that define an object would become easier.

The results of your experiments would in either case be useful for testing your hypotheses. From my experience with similar ideas, though, I expect that what you propose would be easily beaten by a system based on how our brain works.


#7

I have my doubts that setting up a “dry” evaluation hierarchy will get the results you are looking for; there are several things the “messy” brain does that fall outside the simple ascending-complexity model.
Semantic meaning is “grounded” in the somatosensory system. In the brain there are numerous cross-connections between the various hierarchies, with “skips” between levels.

Some things to see in this paper:

  1. Figure 2 - note the map “skipping” in semantic learning.
  2. Figure 1 - note the extensive grounding of meaning coming from the somatosensory area. These were traditionally called “mirror neurons”, which never made any sense to me. Taken in a new light - as part of the memory engram for a concept grounded in motor experience - this makes a lot more sense. This is the part that will have to be replaced to build an AGI, and it may be one of the hardest parts.

How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics
https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(13)00122-8.pdf

There is much reason to think that the “higher” cognitive functions are in fact an extension of language learning, and that without it many of the functions we associate with intelligence never form. Simply making a well-ordered processing chain will not automatically produce an AGI.

In the “Dad’s song” working group we are following the model that you first learn to hear a meaningful sound (attaching some value to a sound is itself a daunting task that seems to emerge from the limbic system), then learn to drive the motor/production system to imitate this sound and derive goal satisfaction from successful imitation (also a limbic-system function). At some point this makes the connection that producing some sound results in goal satisfaction, such as obtaining food or personal comfort, reinforcing the process.


#8

As far as contribution of drives from the sub-cortical structures:

I have been addressing various aspects of the interplay between the subcortical structures and the cortex for a long time. This post collects pointers to some of the key ideas:


#9

Numenta’s primary mission is “to be a leader in the new era of machine intelligence.”
Since all theories are tentative it’s a “we will know when we get there” type of thing, which is subject to change over time.

Thanks Gary, but this is corporatese; I am going by what Jeff said in On Intelligence.

level 1 compares consecutive 0D pixels within horizontal scan line, forming 1D patterns: line segments.
level 2 compares contiguous 1D patterns between consecutive lines in a frame, forming 2D patterns: blobs.
level 3 compares contiguous 2D patterns between incremental-depth frames, forming 3D patterns: objects.
level 4 compares contiguous 3D patterns in temporal sequence, forming 4D patterns: processes.

I expect much better results can be achieved by starting off with line segments at multiple angles. It’s then possible (using the same number of photoreceptors) to see tiny, well-rounded circles, instead of wrongly seeing tiny squares and rectangles. Following the contours that define an object would become easier.

There are two separate issues here: the shape of pixels and the directions of their cross-comparison. Square pixels are pretty much universal in computer vision because they are consistent with the orthogonal nature of spatial dimensions. This does introduce a bias: preferential comparison in the vertical and horizontal dimensions, but their orientation can be adjusted via feedback.

You suggest an immediate scan under multiple angles, but that adds a lot of redundancy and makes it very expensive to preserve positional information. That’s why the brain doesn’t: the hippocampus only maps locations for higher association cortices, while the visual cortex does quick-and-dirty recognition: good for fight-or-flight but pretty useless for deep analysis. I don’t think we want to replicate that “feature”.
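A generic way to see why two scan directions can suffice (my own sketch, not the repo’s “orient” function): orthogonal differences determine the local gradient, and a difference taken along any other angle is just a projection of that gradient, so orientation can be recovered from vertical and horizontal comparisons alone.

```python
import math

def local_orientation(dx, dy):
    # dx: horizontal pixel difference, dy: vertical pixel difference.
    # The pair (dx, dy) is the local gradient; its angle and length
    # give the edge orientation and contrast without extra scans.
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

Equal differences in both directions, local_orientation(1.0, 1.0), yield a 45° angle (pi/4) with magnitude √2.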

In my scheme, scanning under an adjusted angle is selective for strongly oriented patterns; I have draft code for that in the “orient” function of https://github.com/boris-kz/CogAlg/blob/master/frame_draft.py

The results of your experiments would in either case be useful for testing your hypotheses.

I don’t expect low-level results to be very meaningful: initial patterns are tentative. On a higher level, they are evaluated for internal filter adjustment and reorientation: cross-comparison under a different angle, as you mentioned, plus some other reprocessing. That’s why initial inputs are buffered, something the brain simply can’t do: it has no passive memory substrate.

From my experience with similar ideas, though, I expect that what you propose would be easily beaten by a system based on how our brain works.

Are you sure they are similar? Because I haven’t seen any. Do you have a reference?

Thanks.


#10

I suspect that this is not completely correct. From patient HM we know that he was capable of spatial navigation and manipulation without a hippocampus and related structures.
I will go out on a limb and offer that its purpose is to do rapid one-shot learning in parallel with the much slower, normally Hebbian-learning cortex. During the spindle phase of sleep there is a normalization process that pushes that day’s learning back onto the cortex to amplify the bits that were part of that day.

I think the older one-day storage of the hippocampus is greatly amplified by the cortex into longer term memory but the older function of coloring memory with emotional tones is done in the hippocampus and the combination is what is pushed back onto the cortex.

Note that without this emotional good/bad coloring it is impossible for the cortex to make effective judgments.


#11

OK, it’s more complex: there is egocentric mapping in the dorsal pathway.
Still, it’s also significantly removed from primary visual cortex.

I will go out on a limb and offer that its purpose is to do rapid one-shot learning in parallel with the much slower, normally Hebbian-learning cortex. During the spindle phase of sleep there is a normalization process that pushes that day’s learning back onto the cortex to amplify the bits that were part of that day.

I think the older one-day storage of the hippocampus is greatly amplified by the cortex into longer term memory but the older function of coloring memory with emotional tones is done in the hippocampus and the combination is what is pushed back onto the cortex.

That sounds right to me.

Note that without this emotional good/bad coloring it is impossible for the cortex to make effective judgments.

Effective for what? Do you count pure curiosity as an emotion?


#12

I count exploration of the environment as one of the basic drives on par with eating, grooming, mating,…
We get this from the sub-cortical structures as I explained in the referenced posts above.

It’s in the “Nociceptor” one.

Q: "Effective for what?"

In that same post you will find:
“In the Rita Carter book “Mapping the mind” chapter four starts out with Elliot, a man that was unable to feel emotion due to the corresponding emotional response areas being inactivated due to a tumor removal. Without this emotional coloring he was unable to judge anything as good or bad and was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence.”


#13

Technically, you are right: it is mediated by dopaminergic areas. But I think this is an artifact: dopamine is mostly tonic, so it doesn’t carry much information. I think it’s mostly the frontal cortex feeding on itself, although via a long loop.


#14

Loop? This one?


#15

No, I think it’s the mesocortical dopamine pathway, via the VTA.


#16

Do you have papers that support this assertion? I am not speaking of just the simple reinforcement path but the entire functioning system.

Keep in mind that much simpler creatures do just fine without the big cortex, but with many of the structures that are still at the center of our brain. I have no reason to think that they ever stopped doing what they have always done.

I have been working through the interplay between the cortex and sub-cortical structures, and in particular the model I call “dumb boss-smart adviser”, for several years now. I see that much of our drive and judgement originates in the sub-cortical structures, and the cortex serves to feed this “lizard brain” predigested information and elaborate the resulting simple decisions from the older brain into more complex actions. I have hundreds of papers and books, collected over the years, that buttress most of this model in both the gross and fine details. This includes the writings of Damasio.

It’s taken more than 20 years to get to this point but I am willing to consider other viewpoints if they are supported by solid research and not guesses or opinion.


#17

Check “Neuroevolutionary origin of human emotions” by Damasio. He lists seven innate drives, one of which is “seeking”, basically the same as curiosity. He traces it to dopaminergic areas, particularly the VTA. Sorry, can’t link at the moment. But this is not material; I agree with you that the “boss” is dumb, to the point of irrelevance.


#18

I think the book you were looking for is by Jaak Panksepp:
The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions


#19

Yes, my bad, sorry :).


#20

You are missing the central point - the boss is central to every action and decision at all times. It never stops being the boss. The cortex makes it a lot smarter but it is completely and absolutely in control.

Consider that most AI work tries to figure out how to make the cortex initiate action and make decisions - mostly ending up with hand-waving and vague mumbling that “it’s in the forebrain.” And what drives the forebrain? Sub-cortical structures.

All sorts of critters without big forebrains do just fine at the same basic actions: while I don’t agree with Panksepp’s list, he hits most of the high points, and these are the same basic actions you might find in some very “simple” creatures - again, without much in the way of cortex. Evolution sorted this list of actions out very early in the game, and it has been well conserved.