“Prediction” from first principles

Numenta’s primary mission is “to be a leader in the new era of machine intelligence.”
Since all theories are tentative, it’s a “we will know when we get there” type of thing, which is subject to change over time.

Thanks, Gary, but this is corporatese; I am going by what Jeff said in On Intelligence.

Level 1 compares consecutive 0D pixels within a horizontal scan line, forming 1D patterns: line segments.
Level 2 compares contiguous 1D patterns between consecutive lines in a frame, forming 2D patterns: blobs.
Level 3 compares contiguous 2D patterns between incremental-depth frames, forming 3D patterns: objects.
Level 4 compares contiguous 3D patterns in temporal sequence, forming 4D patterns: processes.
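
To make level 1 concrete, here is a minimal Python sketch of pixel cross-comparison along one scan line. It is a simplification, not the code in my repo: the pattern fields and the fixed ave threshold are placeholders, and ave would actually be adjusted by higher-level feedback.

```python
# Hedged sketch: 1D cross-comparison over one horizontal scan line.
# Consecutive pixels are compared, and contiguous spans with the same
# sign of (match - ave) are clustered into tentative 1D patterns.
ave = 20  # placeholder match threshold; would be adjusted by feedback

def form_1D_patterns(line):
    patterns, current = [], None
    for x in range(1, len(line)):
        p, prev = line[x], line[x - 1]
        d = p - prev              # difference between consecutive pixels
        m = min(p, prev) - ave    # match: shared quantity minus threshold
        sign = m > 0
        if current is None or sign != current["sign"]:
            if current is not None:
                patterns.append(current)
            current = {"sign": sign, "x0": x, "L": 0, "I": 0, "D": 0, "M": 0}
        current["L"] += 1   # span length
        current["I"] += p   # summed intensity
        current["D"] += d   # summed difference
        current["M"] += m   # summed match
    if current is not None:
        patterns.append(current)
    return patterns

print(form_1D_patterns([10, 12, 13, 60, 62, 61, 11, 10]))
```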

I expect much better results can be achieved by starting off with line segments at multiple angles. It’s then possible (using the same number of photoreceptors) to see tiny, well-rounded circles, instead of wrongly seeing tiny squares and rectangles. Following the contours that define an object would become easier.

There are two separate issues here: the shape of pixels and the directions of their cross-comparison. Square pixels are pretty much universal in computer vision because they are consistent with the orthogonal nature of spatial dimensions. This does introduce a bias: preferential comparison in the vertical and horizontal dimensions, but their orientation can be adjusted via feedback.

You suggest an immediate scan under multiple angles, but that adds a lot of redundancy and makes it very expensive to preserve positional information. That’s why the brain doesn’t do it: the hippocampus only maps locations for higher association cortices, while the visual cortex does quick and dirty recognition: good for fight or flight but pretty useless for deep analysis. I don’t think we want to replicate that “feature”.

In my scheme, scanning under an adjusted angle is selective for strongly oriented patterns; I have draft code for that in the “orient” function of https://github.com/boris-kz/CogAlg/blob/master/frame_draft.py
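
For illustration only, and not the actual orient function: here is a toy sketch of the selection idea, where a blob’s dominant gradient angle is estimated from its accumulated differences and the comparison axis is only adjusted when the orientation is strong enough to justify the cost. The blob fields and threshold are made up for the example.

```python
# Hedged toy illustration (not the repo's orient function): reorient the
# comparison axis only for strongly oriented blobs.
import math

def maybe_reorient(blob, eccentricity_threshold=2.0):
    Dy, Dx = blob["Dy"], blob["Dx"]  # summed vertical / horizontal differences
    strong = max(abs(Dy), abs(Dx)) > eccentricity_threshold * max(min(abs(Dy), abs(Dx)), 1)
    if not strong:
        return None                   # not worth the extra scan
    return math.atan2(Dy, Dx)         # angle for the adjusted comparison axis

print(maybe_reorient({"Dy": 50, "Dx": 5}))    # strongly oriented -> angle returned
print(maybe_reorient({"Dy": 12, "Dx": 10}))   # weakly oriented -> None
```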

The results of your experiments would in either case be useful for testing your hypotheses.

I don’t expect low-level results to be very meaningful; initial patterns are tentative. On a higher level, they are evaluated for internal filter adjustment, reorientation (the cross-comparison under a different angle that you mentioned), and some other reprocessing. That’s why initial inputs are buffered, something the brain simply can’t do: it has no passive memory substrate.

From my experience with similar ideas, though, I expect that the system you propose would be easily beaten by one based on how our brain works.

Are you sure they are similar? Because I haven’t seen any. Do you have a reference?

Thanks.

I suspect that this is not completely correct. From patient HM we know that he was capable of spatial navigation and manipulation without a hippocampus and related structures.
I will go out on a limb and offer that the purpose is to do rapid one-shot learning in parallel with the much slower, normally Hebbian, learning in the cortex. During the spindle phase of sleep there is a normalization process that pushes that day’s learning back onto the cortex to amplify the bits that were part of that day.

I think the older one-day storage of the hippocampus is greatly amplified by the cortex into longer-term memory, but the older function of coloring memory with emotional tones is done in the hippocampus, and the combination is what is pushed back onto the cortex.

Note that without this emotional good/bad coloring it is impossible for the cortex to make effective judgments.


OK, it’s more complex: there is egocentric mapping in the dorsal pathway.
Still, it’s also significantly removed from primary visual cortex.

I will go out on a limb and offer that the purpose is to do rapid one-shot learning in parallel with the much slower, normally Hebbian, learning in the cortex. During the spindle phase of sleep there is a normalization process that pushes that day’s learning back onto the cortex to amplify the bits that were part of that day.

I think the older one-day storage of the hippocampus is greatly amplified by the cortex into longer-term memory, but the older function of coloring memory with emotional tones is done in the hippocampus, and the combination is what is pushed back onto the cortex.

That sounds right to me.

Note that without this emotional good/bad coloring it is impossible for the cortex to make effective judgments.

Effective for what? Do you count pure curiosity as an emotion?

I count exploration of the environment as one of the basic drives on par with eating, grooming, mating,…
We get this from the sub-cortical structures as I explained in the referenced posts above.

It’s in the “Nociceptor” one.

Q: “Effective for what?”

In that same post you will find:
“In the Rita Carter book “Mapping the mind” chapter four starts out with Elliot, a man that was unable to feel emotion due to the corresponding emotional response areas being inactivated due to a tumor removal. Without this emotional coloring he was unable to judge anything as good or bad and was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence.”

Technically, you are right, it is mediated by dopaminergic areas. But I think this is an artifact: dopamine is mostly tonic, so it doesn’t carry much information. I think it’s mostly the frontal cortex feeding on itself, although via a long loop.

Loop? This one?

No, I think it’s the mesocortical dopamine pathway, via the VTA.

Do you have papers that support this assertion? I am not speaking of just the simple reinforcement path but the entire functioning system.

Keep in mind that much simpler creatures do just fine without the big cortex but with much of the structures that are still in the center of our brain. I have no reason to think that they ever stopped doing what they have always done.

I have been working through the interplay between the cortex and sub-cortical structures, and in particular the model I call “dumb boss-smart adviser,” for several years now. I see that much of our drive and judgement originate in the sub-cortical structures, and the cortex serves to feed this “lizard brain” predigested information and elaborate the resulting simple decisions from the older brain into more complex actions. I have hundreds of papers and books that I have collected over the years that buttress most of this model in both the gross and fine details. This includes the writings of Damasio.

It’s taken more than 20 years to get to this point, but I am willing to consider other viewpoints if they are supported by solid research and not guesses or opinion.

Check “Neuroevolutionary origin of human emotions” by Damasio. He lists seven innate drives, one of which is “seeking,” basically the same as curiosity. He traces it to dopaminergic areas, particularly the VTA. Sorry, can’t link at the moment. But this is immaterial; I agree with you that the “boss” is dumb, to the point of irrelevance.

I think the book you were looking for is by Jaak Panksepp:
The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions


Yes, my bad, sorry :).


You are missing the central point - the boss is central to every action and decision at all times. It never stops being the boss. The cortex makes it a lot smarter but it is completely and absolutely in control.

Consider that most AI work tries to figure out how to make cortex initiate action and make decisions - mostly to end up with hand waving and vague mumbling that “it’s in the forebrain.” And what drives the forebrain? Sub-cortical structures.

All sorts of critters without big forebrains do just fine in the same basic actions: while I don’t agree with Panksepp’s list, he hits most of the high points, and these are the same basic actions you might find in some very “simple” creatures - again - without much in the way of cortex. Evolution sorted this list of actions out very early in the game, and it has been well conserved.

Sorry about that. I didn’t mean to imply anything about your approach, and sorry if I was rude.

This might not be the right place for this discussion so I’ll reply to your points in a spoiler.

Rest of this response

Regions have specific roles but the same circuitry (or at least most of it) is used for everything the neocortex does. The connections between layers are consistent. As far as I know, there’s not a single region with an extra layer completely different from the others. There are definitely specializations, but if at least the same components of the cortical circuitry can be used for every function the cortex serves, then either the cortical circuitry is intelligent or it’s a short step away from intelligence.

Predictions and hierarchy could easily be central to intelligence, but that’s not enough to claim more than an educated guess in my opinion. How does it make novel predictions? Hierarchy might help with that, but it can’t solve everything. How does the brain understand sentences or produce thoughts (in word form, image form, or whatever else, I’m not arguing language is important or unimportant for intelligence)? I don’t see how prediction could do that.

The thinking about hierarchy has changed since. Check out this podcast: Episode 1: Research Update with Jeff Hawkins - Part 1
I don’t know whether hierarchical perception leading to general AI is still the goal; there are still aspects of hierarchy, just not as much emphasis on the physical cortical hierarchy.

I’m not arguing that action is important. I just don’t see how perception could lead to intelligence on its own.

From my perspective, theories about AI are very opinionated. I have strong opinions which are weakly supported too. It’s impossible not to have weakly supported opinions, in my view, because so many ideas about AI have failed. When I have a new idea which I’m excited about, I try to remind myself that there’s a 90% chance it will fail. Maybe I shouldn’t try to force that on others.

No, you wouldn’t know what to copy at the start, but you can figure that out. There are ways to get around distracting things which aren’t involved in intelligence. For example, there are unessential neuron classes. That’s known because there are neuron classes unique to some regions which you can be intelligent without. If you look for a given neuron class in more than one region but only find it in a small fraction of the regions, you can be pretty certain you can ignore that cell type. Then, you can solidify that based on theory, either by showing that the cell type isn’t required for intelligence or, in the cases where you get it wrong, by showing that it actually plays some essential role based on the rest of what the circuit does and just wasn’t discovered yet in other parts of the cortex.
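
As a toy illustration of that prevalence heuristic (the cutoff and the region/class data below are made-up examples, not established numbers): count how many regions each neuron class appears in, and flag the rare ones as candidates to set aside.

```python
# Hedged sketch: flag neuron classes found in only a small fraction of regions.
# The 0.5 cutoff and the example data are arbitrary illustrations.
def candidate_ignorable_classes(regions_to_classes, cutoff=0.5):
    n_regions = len(regions_to_classes)
    counts = {}
    for classes in regions_to_classes.values():
        for c in set(classes):
            counts[c] = counts.get(c, 0) + 1
    return [c for c, n in counts.items() if n / n_regions < cutoff]

regions = {
    "V1": ["pyramidal", "basket", "meynert"],  # Meynert cells as an example of a region-specific class
    "S1": ["pyramidal", "basket"],
    "A1": ["pyramidal", "basket"],
}
print(candidate_ignorable_classes(regions))  # ['meynert']
```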

Another way around the distracting details is to not require anything to be included in the theory until there’s a need to do so. That’s an approach Numenta uses, I’ve read.

There are loads of other approaches to get around the messiness, and other people probably know of other approaches. Grid cells have been big in HTM theory recently, and those were discovered by recording neurons. There are some things in neuroscience which aren’t messy and ambiguous, which can really help get a framework going. Another approach is to just try and figure out the role of a connection, neuron class, layer, or whatnot, without worrying too much about how it carries out that role. For example, let’s say a type of neuron activates a little while after all the others. It also doesn’t fire much unless the animal is behaving. But it isn’t involved in generating behavior, because it starts firing a while after the animal starts behaving and it has restricted receptive fields. Based on that, that type of neuron might be involved in processing movement, perhaps moving sensory input or dealing with the impacts of behavior on the sensory input.

Approaches like these can be used together to build up a better and better sense of what’s going on over time. That’s also just the neuroscience side. I don’t know much about how to test things in code in ways which go alongside neuroscience-oriented approaches.

I don’t really see the difference. I agree that it’s not really right to call it science, because there aren’t any measurements, but science also involves analyzing and philosophizing, especially for really hard problems.

Subcortical structures like thalamus and basal ganglia are still on the table. Regardless, neocortex is still pretty dang complex. I don’t think it’s super complex in terms of core operations, but things which can be described simply can be complex without that description. Neocortex is also pretty messy, especially when most info is based on tiny isolated slices or anesthetics with massive influences on its activity.

Introspection and neuroscience complement each other. Jeff Hawkins has argued that we can’t get to general AI in the near future except by copying the brain’s core principles of intelligence.

I agree we shouldn’t copy most of what the cortex does, although I’m not sure if you’re talking about general operations or more specific things.

Let me try to illustrate why I think we should copy the cortex with an exaggerated story which might not be relevant to your reasoning. I wouldn’t blame you for skipping my rambling.

I find some coffee poured on the road, and I’ve never seen coffee before but decide I want to make some because that coffee was pretty good but, you know, it was on the road so it could be better. So, I go buy some coffee beans because they smell similar but have no idea what to do with them. Do I just leave the coffee there on the road, or do I take it home to help guide coffee making? There’s all kinds of dirt in it so that would distract from the flavor I’m trying to make, so does that mean I throw out the coffee? No, I should keep it around to see if the recipe is heading in the right direction. If I do that, when I add pepper I know the taste is definitely wrong. If I don’t, I’ll just keep going with the pepper because it seems like the right starting point. Maybe I’ll actually start with putting them in water, a good start. But then I can’t check if the color is right based on the roadside coffee, so I end up with beans in water. It tastes like coffee, but not quite right, so I keep adding all kinds of spices, because that’s how you get subtle flavors, right? I never end up making the right coffee because I never realize you can grind coffee beans. Instead, decades in the future when I’m retired from the coffee development business, I’ve made something pretty tasty, with all the right combinations of spices (pepper plays the central role), but it’s definitely not coffee. Still good though.

That’s not what I was trying to say. By “being produced,” I meant any AI not based on the brain. Intelligence is the goal here. My point was that if it’s not based on the brain, it’s not going to develop easily towards intelligence, so it’s going to be something else.


No problem, I am not Miss Manners either :).

This might not be the right place for this discussion so I’ll reply to your points in a spoiler.

These things are hard to argue about, too much intuition involved. I trust mine, but communicating it takes forever. So, people check the wavelength first :). I will try to reply, but it may take time.



I was thinking of cortical area V1:

In my opinion it is the system that starts off only able to detect two angles of motion that becomes “pretty useless for deep analysis”. Having to make adjustments adds another step to the process. It makes more sense to me to start off with the brain’s center-surround receptive field organization.
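
For what it’s worth, a center-surround receptive field is commonly modeled as a difference of Gaussians; here is a minimal sketch of that stage (the sigma values and test image are placeholders I picked, not measurements from V1):

```python
# Hedged sketch: ON-center center-surround response as a difference of Gaussians.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    # Narrow excitatory center minus broad inhibitory surround.
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

img = np.zeros((32, 32))
img[16, 16] = 1.0                          # a single bright point
response = center_surround(img)
print(response[16, 16], response[16, 20])  # strong at the center, weak/negative nearby
```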

Are you sure they are similar? Because I haven’t seen any. Do you have a reference?

I’m going by the results of hundreds of great-sounding ideas I tried, including ones that parallel what you proposed, which did not work as well as expected. From my experience, successfully reproducing what is found in the neuroscientific literature results in discovering a trick that can be modeled with a small amount of code. For example: two-frame place-avoidance behavior, and getting from place to place without bumping into anything unless still learning to walk/run/fly or startled:

Neuroscience requires answering questions like: where would the brainwaves be represented?

How would you even model this kind of behavior without first modeling waves?


I am familiar with orientation columns, but they only represent that one parameter: orientation.
My comparison derives multiple parameters at once and encapsulates them into patterns. That’s far more complex, general, and informative.

In my opinion it is the system that starts off only able to detect two angles of motion that becomes “pretty useless for deep analysis”. Having to make adjustments adds another step to the process. It makes more sense to me to start off with the brain’s center-surround receptive field organization.

Adding another step to inputs that actually deserve the cost is more intelligent than adding the same step to every bit of noise that comes across.

Are you sure they are similar? Because I haven’t seen any. Do you have a reference?

I’m going by the results of hundreds of great-sounding ideas I tried, including ones that parallel what you proposed, which did not work as well as expected. From my experience, successfully reproducing what is found in the neuroscientific literature results in discovering a trick that can be modeled with a small amount of code. For example: two-frame place-avoidance behavior, and getting from place to place without bumping into anything unless still learning to walk/run/fly or startled:

Neuroscience requires answering questions like: where would the brainwaves be represented?
How would you even model this kind of behavior without first modeling waves?

It’s fine that you didn’t get very deep into my intro, but then you are not in a position to judge what it is similar to. I think I made it pretty clear that I am not doing neuroscience, and that’s not because I don’t have a clue.

I don’t need global synchronization, via waves or otherwise, because my parameters are encapsulated into patterns rather than distributed across the whole system. This encapsulation means that they can be processed locally and asynchronously, in parallel with patterns of other levels. The brain can’t do that because there is no RAM within a neuron, so it must use dedicated physical connections for memory. That’s a huge handicap, and we don’t need to replicate it.
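
Here is a minimal sketch of what I mean by encapsulation; the field names and the thread pool are illustrative assumptions, not my actual pattern structure. Each pattern carries its own parameters, so it can be evaluated locally, with no shared state to synchronize:

```python
# Hedged sketch: patterns as self-contained records, each evaluated
# independently rather than through shared global state.
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class Pattern:
    x0: int      # starting coordinate
    L: int       # span length
    I: float     # summed intensity
    D: float     # summed difference
    M: float     # summed match

def evaluate(pattern, filter_=10.0):
    # Purely local decision: compare the pattern's own match to a filter.
    return pattern if pattern.M > filter_ else None

patterns = [Pattern(0, 3, 35, 2, 15), Pattern(3, 2, 20, 1, 4)]
with ThreadPoolExecutor() as pool:   # each pattern evaluated independently
    selected = [p for p in pool.map(evaluate, patterns) if p is not None]
print(selected)
```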

That’s true if you are talking about instincts, but they are a small part of human motivation.
It’s not true of conditioning, because there is value drift, driven by cortical learning.
Which means that the cortex can and does swap its bosses all the time.
And it’s definitely untrue of value-free curiosity: that boss is an empty chair.

Why did you bring your non-neuroscientific model to a neuroscience forum?

Because my model offers a conceptually better way to achieve the same purpose.
I actually asked Matt if this is a good place for it, and he approved.

I am not attacking here so please don’t take this the wrong way.

You seem very hung up on curiosity as if it is somehow special. I take this to mean that you place this as some sort of different behavior from - oh say - seeking shelter or a food source.

In my own case I have a very solid “big picture” idea of how it all works as a system, and I am trying to learn how the parts I don’t understand operate to fill in this picture; mine goes from a helpless infant to a functioning adult. I have been working on this since the early 1980s, and there are still many loose ends.

Do you have a complete working model/framework that you think can be elaborated into a working AGI, even if it is not documented in your writings? Said differently - does your partial model fit into a bigger picture, or is it just an interesting sub-problem to be solved in any way you can work up?
Does it account for saccades and assembling those snapshots into a mental model?
Does this model include being able to drive a body and generate speech?
Does it account for the sub-cortical coloring of perceptions from experience to form judgments?
Does it account for the known observations of various defects of the human brain and the effects they have on expressed behavior? This is important, as these defects form the fence of what a “broken” AGI would look like.

I see that these things are not random questions but instead paths to understanding how an AGI will have to function to be compatible with human culture. I have said this before, but I will raise it again: as a researcher in the AGI field, I spend considerable time thinking about the various mental defects and wonder if I would consider it a win to create a fully functional profoundly autistic AGI. Or a fully functional psychotic one.