“Prediction” from first principles

That is the bottom of the S-T scope hierarchy. Higher levels have a greater scope of search and generalization.

BTW, our intelligence is not a blank system at the beginning. It contains quite complicated hardwired primitives, higher than V1.

That’s a product of evolutionary learning, and these primitives are very simple compared to what we end up learning. In any case, I was talking about intelligence in general; specific shortcuts are not essential to it.

Calling it adaptation or behaviour doesn’t say anything about how to design it.

There is some hierarchy in our brain, but it’s a hierarchy of abstraction; it’s not straightforward, and it’s an artificial classification, not an algorithmic one.
Basically, our brain is a collection of semi-independent, interconnected modules, and for many of them you can’t say which is higher.
Because of the variety of the modules, it’s possible to find patterns among their emergent properties, which we call abstract. It’s obvious that, evolutionarily, they would have appeared later, and morphologically they are typically described as higher regions. However, they are just other semi-independent modules in the network; you can describe them as placed off to the side, or at the center - it’s just a matter of convenience in thinking about it.

It’s true only if you exclude evolutionary development as a way to develop it.

On the other hand, an arbitrary choice of one of its properties as a fundamental description is not the best practical approach either.

It’s a good point. The problem is we don’t know any intelligence other than the natural one. So, it’s unclear where to start with such an approach.

@bkaz - I get that you are deeply invested in your model - to the point where you have paid others thousands of dollars to spend time with it.
As I pointed out earlier - I see it as being more like an edge detection kernel applied to a temporal stream. In the example of this that I posted there is deep mathematical analysis of what is being done and how it works. I don’t think I have seen this in your posts.
Turing proposed certain features that are necessary for a “Turing complete” computer and demonstrated this with his tape-based computer model that is now called a Turing machine.
Do you have an underlying computational structure behind your model?
What makes it hard to embrace is that the explanation does not elucidate what the mathematical underpinning is - what is offered is an algorithm to implement this principle. (Marr’s level 2)
There are levels of explanation, and it may help your cause to break it down into Marr’s three levels and describe it at all three; that would make it more approachable.
It may also help you to frame your responses.


Please don’t take this as an attack. I have been teaching computer science for many years and have found certain tools useful in getting ideas across to students - I am offering this advice to your project free of charge.

I am invested in the purpose only; my model is stable because I have spent more than 30 years refining it.
I don’t have any problem with changing my mind, given a good reason.

to the point where you have paid others thousands of dollars to spend time with it.

That money is quite minor compared to the time that I spent, and it was generally worth it.

As I pointed out earlier - I see it as being more like an edge detection kernel applied to a temporal stream.

You are looking at the first level. These atomic operations are simple because additional complexity must be conditional, and most of that complexity is in incrementally derived parameters.
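
As a rough illustration only (the parameter definitions below are simplified stand-ins, not the actual ones), a first-level comparison of adjacent pixels that accumulates derived parameters might look like this:

```python
# Illustrative sketch only: compare adjacent pixels and keep the
# incrementally derived parameters for each comparison.
# The definitions of d, m and the fixed "ave" threshold are simplifications.

def compare_pixels(pixels, ave=15):
    """Cross-compare consecutive pixels, deriving per-pair parameters."""
    derived = []
    for prior, current in zip(pixels, pixels[1:]):
        d = current - prior             # difference: direction and magnitude of change
        m = min(current, prior) - ave   # match: shared magnitude above an average cost
        derived.append((current, d, m))
    return derived

print(compare_pixels([10, 12, 40, 41, 39, 5]))
```

The atomic operation stays simple; the added complexity lives in the derived parameters that later levels operate on.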

What makes it hard to embrace is that the explanation does not elucidate what the mathematical underpinning is - what is offered is an algorithm to implement this principle.

There is no fixed mathematical transformation; the depth of comparison depends on the input.

There are levels of explanation, and it may help your cause to break it down into Marr’s three levels and describe it at all three; that would make it more approachable.
It may also help you to frame your responses.

These levels won’t work because GI is not a fixed object. There is no end to how complex it can get, but there is a beginning: pixel comparison. I don’t have a top-down explanation because there is no top.
All I have is a starting point and recursive process of generating higher levels of functional complexity.
That’s the nature of the beast.

For all that you attack deep networks and related models as having no foundation in theory, I see that most of them have more foundation than what you are offering.
I would think that you would have more after 30 years of working on it.

I am sorry you feel that way, we just have to disagree here.
But I do appreciate your interest, and in no way take this as a personal attack :slight_smile:

1 Like

I try to understand both the complex biology and first principles. This may be true for everyone in the forum.

My spatial navigation network is a first-principles model. It came from clues I found in “Dynamic Grouping of Hippocampal Neural Activity During Cognitive Control of Two Spatial Frames”. I use the model as a basis for understanding how a biological brain might similarly use traveling waves to navigate.

What I most need are both. I want to perform tests that include such things as the formation of religious beliefs that make no sense at all. For example, in a modeled society I could teach one that I am the creator of the universe and that they are my chosen disciple, meant to teach the holy word written by their omnipotent god himself, their lord Gary Gaulin. An emotionally biased biological brain is expected to be prone to blindly believe everything that this disciple says, because it feels good to do so, while others should demand evidence before accepting such a thing as true. From this it is also possible to understand the neurological basis of religious psychosis and other conditions that have, throughout history, helped keep our world at war. With enough time, it’s expected that religions of many kinds should emerge all by themselves.

A model based on first principles may or may not have the same problems. In that case the best outcome is (after reading everything found in Google Scholar) to logically answer “big questions” that were once thought to be impossible to test.

I would very much welcome the model you are proposing. The problem though is that the first principles are mostly unknown. So I’m again forced to first try making sense of grid cell modules and other biological systems. It seems that you are very knowledgeable in that area. Your insights can be of great value to myself and others.

1 Like

Yes, but “principles” is just a higher level of generalization, which is relative.

I would very much welcome the model you are proposing. The problem though is that the first principles are mostly unknown. So I’m again forced to first try making sense of grid cell modules and other biological systems. It seems that you are very knowledgeable in that area.

I am sorry to disappoint, but my knowledge of neuroscience is quite patchy, especially on the computational level.
I don’t know much about the mechanics of grid cells, but actual navigation is probably guided by the interaction of multiple mapping mechanisms: allocentric grid cells in the entorhinal cortex, place cells in the hippocampus, an egocentric map in the dorsal “where” stream, Numenta-proposed grid cells in L5 pretty much all across the cortex, etc.
So, it’s pretty hard to model and test in terms of ultimate behaviour.

My model is very different: locations are not stored in separate areas; in fact, there are no allocentric locations at all. All I have is multivariate patterns, which represent relative distances to similar / contextual patterns as spans of negative patterns, or gaps. Both related (positive) and intervening (negative) patterns are ultimately defined in 4D space-time.
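
A 1D toy version of that idea (the real patterns would be defined in 4D space-time, and the threshold and parameter names below are illustrative assumptions): positive patterns are contiguous spans of matching inputs, negative patterns are the gaps between them, and the span of a gap is the relative distance between its neighbours.

```python
# Toy 1D illustration: cluster a stream into positive patterns (contiguous
# matches) and negative patterns (gaps). The span of a negative pattern is
# the relative distance between the positive patterns around it.
# All thresholds and parameter names are illustrative assumptions.

def form_patterns(values, ave=15):
    patterns = []                        # list of (sign, span, summed_value)
    prior = values[0]
    sign, span, summed = None, 0, 0
    for v in values[1:]:
        m = min(v, prior) - ave          # match between consecutive values
        s = m > 0                        # True: similar, False: gap
        if s != sign and sign is not None:
            patterns.append((sign, span, summed))
            span, summed = 0, 0
        sign, span, summed = s, span + 1, summed + v
        prior = v
    patterns.append((sign, span, summed))
    return patterns

# Two clusters of similar values separated by dissimilar ones:
print(form_patterns([20, 22, 21, 3, 90, 4, 23, 24]))
```

In the printed output, the middle negative pattern’s span is the distance between the two positive patterns on either side of it.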

At a lower mechanical level you may want to start here:

This does NOT address the navigation aspect - only the cell type itself.

1 Like

As far as I remember, primary motor cortex is missing L4: no connection to thalamus, and primary visual cortex is missing L6: no connection to basal ganglia. Just a detail.

There are definitely specializations, but if at least the same components of the cortical circuitry can be used for every function the cortex serves, then either the cortical circuitry is intelligent or it’s a short step away from intelligence.

I agree that it’s intelligent, just grossly suboptimal.

Predictions and hierarchy could easily be central to intelligence, but that’s not enough to claim more than an educated guess in my opinion. How does it make novel predictions?

Only random changes are completely novel, and they are not predictive. Any prediction is a projection of previously discovered patterns; what counts as novel is the interaction between these projections. Different co-projecting patterns may cancel or reinforce each other, so the combined prediction looks novel.
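
A toy numerical example (the weighting scheme is just my shorthand, not a specific algorithm): two known patterns co-project onto the same future point, and their contributions partially cancel, so the combined prediction matches neither of them alone.

```python
# Toy illustration: each known pattern projects an expected change at a
# future point, weighted by how strongly it currently matches. Opposite
# signs cancel, same signs reinforce, so the combined prediction can
# differ from any single projected pattern.

def combine_projections(projections):
    """projections: list of (expected_change, match_strength) pairs."""
    return sum(change * strength for change, strength in projections)

# A rising trend (+2.0) and a periodic dip (-3.0) co-project onto one point:
print(combine_projections([(+2.0, 0.8), (-3.0, 0.5)]))  # 1.6 - 1.5 = 0.1
```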

How does the brain understand sentences or produce thoughts (in word form, image form, or whatever else, I’m not arguing language is important or unimportant for intelligence)? I don’t see how prediction could do that.

“Understanding” is just an intuitive term for complex recognition. You understand something when it matches some known pattern, however complex.

The thinking about hierarchy has changed since. Check out this podcast: Episode 1: Research Update with Jeff Hawkins - Part 1
I don’t know if hierarchical perception leading to general AI is still the goal. There are still aspects of hierarchy, just not as much emphasis on the physical cortical hierarchy.

Hierarchy is still a macro-structure of HTM; it’s just that they are focused on the workings within a level for now.
Objects are now recognized by each level, but it’s still supposed to be a hierarchy in the end.

From my perspective, theories about AI are very opinionated. I have strong opinions which are weakly supported too. It’s impossible not to have weakly supported opinions in my opinion because so many ideas about AI have failed. When I have a new idea which I’m excited about, I try to remind myself that there’s a 90% chance it will fail. Maybe I shouldn’t try to force that on others.

I was pretty cautious too, a lifetime ago. But confidence grows when you keep questioning your opinions and keep arriving at the same conclusions.

I don’t really see the difference. I agree that it’s not really right to call it science, because there aren’t any measurements, but science also involves analyzing and philosophizing, especially for really hard problems.

The difference is that science is empirically specific, and general learning ability is not, by definition.
It is the ability to do science.

Jeff Hawkins has argued that we can’t get to general AI in the near future except by copying the brain’s core principles of intelligence.

Without a reason, it’s just his gut feeling. Anyway, it depends on how “core” these core principles are.

Let me try to illustrate why I think we should copy the cortex with an exaggerated story which might not be relevant to your reasoning. I wouldn’t blame you for skipping my rambling…

Sorry, I am not into examples and analogies; they are all flawed. This tendency towards analogical thinking is one of the cortical “bugs” that I want to fix :slight_smile:

Whether or not primary motor cortex has L4 is unclear. It does receive thalamic input (all layers receive thalamic input, including the ones adjacent to L4, which may or may not include depths in the sheet that are actually L4 in primary motor cortex).
Primary visual cortex has L6 and does project to the basal ganglia; it’s layer 5, not layer 6, that projects to the striatum.

It’s okay if the circuit has deviations in some regions, although a primary region not having a layer would be significant. Different senses present different problems. If primary auditory cortex lacked a whole layer, I’d say that’s because it only needs to handle sequences, not locations, and the missing layer is for recognizing locations on objects.

Sets are important too, not just sequences. Object recognition cannot be purely sequence prediction-based or it would take forever to learn objects. Rivers can sound pretty irregular but are still easy to recognize.

I don’t think understanding is just pattern recognition. Maybe it is at some low level, but putting together those patterns into an understanding is a hard problem so it’s not really pattern recognition at that point.
What about creating sentences?
With repeated iterations of recognition, like in a hierarchy, it’s not going to be intelligent, it’s just going to form more complex sensory representations. If it were so straightforward, human-level intelligence would’ve evolved before mammals.

It has a lot of implications if it isn’t a physical hierarchy but instead a hierarchical composition of objects within one region. It’s hierarchy but it doesn’t do the same things.

If you don’t have a general AI, you shouldn’t accept your ideas in their exact form. I’m confident that my ideas will work, but I try to find flaws, not because I hate my own ideas, but to build on them based on their flaws. It’s just the scientific method. A lot of people who work on HTM test stuff in code to find flaws.
Besides, it’s boring if the solution is straightforward because there are no moments of realization.

Oh, sorry I wasn’t clear. I wasn’t talking about what intelligence is. I was talking about creating intelligence.

Explain the flaw. How can we think about this directly? No one has created intelligence. You didn’t address my point that if you have an example of something you’re trying to produce, you shouldn’t ignore it. You’re trying to reinvent the wheel. Wheels are simple but they weren’t obvious to the millions of people who lived before them.

Please don’t call my thinking bugged without a reason.

Two points:

  1. You constantly whine about how terrible the biological solution is. I would be far more inclined to take this seriously if you could provide examples of your better solution.

To my way of thinking - as I learn how nature has done something, I keep coming away with the feeling that I could never have come up with something that works that well.

Every time I get stuck I go see how nature is doing it and find that it’s way better than whatever solution I was trying to make. It’s really hard to out-do millions of years of evolution.

  2. Your criticism of other people’s approaches (which are based on experience and intuition) would carry more weight if you could actually provide a counterexample: a realistic approach based on first principles, rather than yet another approach based on a vague feeling that the methods you propose, if they ever get worked out, will be better.

For that matter, have you clearly elucidated what those “first principles” are to the point where anyone can know what they are?

At this point, all you seem to have is a proposal that is “open-ended” so you don’t really have any concrete solution as a counterexample.

How is this different, other than from your point of view, you have yourself as the authority so you have slightly more trust that it is a good idea?

1 Like

Ok, you know this better than I do. As I said, it was just my recollection, and it’s not relevant to the question at hand.

Sets are important too, not just sequences. Object recognition cannot be purely sequence prediction-based or it would take forever to learn objects. Rivers can sound pretty irregular but are still easy to recognize.

Yes, sets of co-projecting / overlapping patterns. A sequence is a basal pattern because it’s 1D; you can’t get any lower. Higher patterns can have any dimensionality or hierarchical depth.

I don’t think understanding is just pattern recognition. Maybe it is at some low level, but putting together those patterns into an understanding is a hard problem so it’s not really pattern recognition at that point.
What about creating sentences?
With repeated iterations of recognition, like in a hierarchy, it’s not going to be intelligent, it’s just going to form more complex sensory representations.

And what exactly is the difference? See, you don’t define what you mean by intelligence, so there is no argument here.

If it were so straightforward, human-level intelligence would’ve evolved before mammals.

Not really; as far as I know, the cortex is the only deeply hierarchical structure in the brain.

It has a lot of implications if it isn’t a physical hierarchy but instead a hierarchical composition of objects within one region. It’s hierarchy but it doesn’t do the same things.

One doesn’t preclude the other; regions simply form a higher-order hierarchy. The notion of cortical hierarchy is well established in neuroscience, regardless of HTM. There seem to be multiple orders of physical hierarchy:

  • first within each of the sensory and motor cortices, then between the association cortices: frontal (motor-associated) cortex is relatively higher than parietal (sensory-associated) cortex, then
  • medial cortices (default-mode network) are relatively higher than lateral cortices (task-positive network), then
  • the left (dominant) hemisphere is relatively higher than the right (contextual) one.

If you don’t have a general AI, you shouldn’t accept your ideas in their exact form. I’m confident that my ideas will work, but I try to find flaws, not because I hate my own ideas, but to build on them based on their flaws.

So do I. But you need to find an idea you can be confident in; otherwise you will never get to general AI.

Besides, it’s boring if the solution is straightforward because there are no moments of realization.

That moment is when you find such a solution. The very purpose of science, explanation, is to find simplified / compressed representations.

Oh, sorry I wasn’t clear. I wasn’t talking about what intelligence is. I was talking about creating intelligence.

This is closely related. Yes, you can study a specific substrate of intelligence, but its nature is a substrate-independent function. Even math started out as an experimental science, in ancient Egypt, even though it is clearly not empirical in nature.

Explain the flaw.

The flaw is in trying to find parallels between inherently unrelated things: making coffee and understanding GI, seeing a face on Mars, animals in constellations, cities in the clouds, your personality and fate in a horoscope, streaks of luck in a lottery.
In general, seeing patterns where there aren’t any. This cognitive bias goes by different names: priming, framing, analogical thinking, confirmation bias, initialization bias. Basically dirty thinking.

How can we think about this directly?

Introspective generalization.

You didn’t address my point that if you have an example of something you’re trying to produce, you shouldn’t ignore it.

I don’t exactly ignore it, but every time I read about the brain, I see something badly wrong with it.
So, I go back to working from the function; that seems to be far more productive.

You’re trying to reinvent the wheel. Wheels are simple but they weren’t obvious to the millions of people who lived before them.

Wheels are simple in pattern and function. Another example of an analogy that doesn’t fit :slight_smile:

Please don’t call my thinking bugged without a reason.

It’s a well-known fact that human thinking is bugged in a lot of ways. Kahneman is a good read :).

You touched on an interesting human mental function: how do we know what is right, or when something is true?
It turns out that this actually is something you can measure and point to in the brain.
Google: neurological “sense of knowing”
You will get some very interesting hits on this - for example this one:
https://www.sciencedirect.com/science/article/pii/S089662730200939X

https://www.cell.com/neuron/pdf/S0896-6273(02)00939-X.pdf

@bkaz mentions that he keeps coming up with the same answer, so this confirms that he is right. This is not surprising, as he keeps starting with the same data set. What does that look like to an outside observer?

I have a constructive definition of intelligence, and you are more than welcome to point out how my implementation doesn’t follow from it.
As for the definition itself, it is a generalization of one’s entire subjective experience, which is impossible to make explicit. So, my purpose here is not to convince people who disagree with it, only to attract those who do agree and want to implement it.

At this point, all you seem to have is a proposal that is “open-ended” so you don’t really have any concrete solution as a counterexample.

It’s open-ended by nature. What I am proposing is a second-order recursion: not only reusing outputs as inputs, but also incrementing the complexity of the algorithm that processes these outputs. So, the problem is how to define such an increment. I have a pretty good idea of how to do it, but only in 1D, which doesn’t fit the universe we are in.
That’s why I am currently messing with the additional 3 dimensions, instead of coding a hierarchical 1D algorithm.
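
A 1D toy of that increment (the specific rule here, where every comparison doubles the per-element parameters, is my illustrative assumption, not the actual definition): each level re-compares the previous level’s outputs, and every element gains the parameters derived on that level, so the algorithm that processes the next level has more to compare.

```python
# 1D toy of second-order recursion: outputs re-enter as inputs, and each
# element carries the parameters derived on the previous level, so the
# per-element structure (and the work per comparison) grows with depth.
# The doubling rule is an illustrative assumption.

def compare_level(elements):
    """Compare consecutive elements parameter by parameter; each output
    element keeps its parameters plus their differences."""
    out = []
    for prior, current in zip(elements, elements[1:]):
        diffs = tuple(c - p for c, p in zip(current, prior))
        out.append(current + diffs)      # complexity increments: more parameters
    return out

level = [(10,), (12,), (40,), (41,)]     # level 0: raw single-parameter inputs
for depth in range(3):
    level = compare_level(level)         # outputs become the next level's inputs
    print(f"level {depth + 1}: {len(level)} elements x {len(level[0])} parameters")
```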

Is this conversation going anywhere? It seems to me that @bkaz has a viewpoint exactly counter to one of the HTM Community’s and Numenta’s primary positions: understanding the brain is the fastest way to understand what intelligence is and how to create it in machines.

Since we cannot get past that first core principle, I don’t see that this conversation is going to produce anything.

If you think studying the brain and the neocortex is the best way to implement machine intelligence, continue reading and participating in HTM Forum and its associated projects. If you disagree, feel free to join @bkaz’s efforts.

4 Likes