Are there similar attempts to understand the neocortex on a high level?


I was wondering what other approaches/research/theories exist in this space besides HTM? Who else is trying to understand the neocortex at a high level and model it mathematically?


A lot actually - some of them get links here.
See this one as an example:

If you want to sip at the firehose, google: artificial intelligence periodical


Look at these poachers.

The Sparse Manifold Transform:


If they can make progress, more power to them.

None of this work or research is a zero-sum game. The more the merrier!

Every new perspective potentially adds to the pile, and any one of those perspectives might push us just a little more forward in our understanding of “intelligence”.


It is a good third of my model. But my definition of consciousness is too fringe for Mr Chen to accept, so he will be stopped in his tracks, which is where my model is highly optimized.

Chinese universities are patenting everyone's research from AI journals - AI Forum C2 Montreal #C2M18:


The internet is full of papers and texts that are put out to further the body of knowledge available. Almost everything that I know has been from reading the works put out by various researchers that have made the results of their work publicly available.

@keghn_feem I find it odd that you are calling others “poachers.”

If you put your work out there for others to use they can do what they want with it and take as much or little as they find helpful. Oddly enough - they are not even required to say if they got any inspiration from your work if they feel no need to do so.

You may think that this is not right but I would like to point out that you have gotten your ideas from others and I don’t recall seeing you credit where you were inspired in your work.


In this video he asks “what is the goal of the cortex?” I’ve found that almost all models (including theirs) are just about mapping inputs to outputs (hetero-associative or auto-associative). Either way, at a reasonably high level, the cortex is a memory system for storing, searching & retrieving data. Like the Sparse Manifold Transform & deep learning, the network is a memory function mapping inputs to outputs. Deep learning approximates functions that map inputs to outputs as hetero-associative memory. I suppose the challenge is to build a system that is as good and as flexible as the cortex at data mapping.
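To make the "memory function" framing concrete, here is a minimal sketch of a hetero-associative memory: it stores input → output pairs and, given a novel (possibly noisy) cue, retrieves the stored output whose key is nearest by Hamming distance. The class name and toy data are my own illustration, not from any cited model.

```python
def hamming(a, b):
    """Number of positions where two bit-vectors differ."""
    return sum(x != y for x, y in zip(a, b))

class HeteroAssociativeMemory:
    def __init__(self):
        self.pairs = []  # list of (key, value) pairs

    def store(self, key, value):
        self.pairs.append((key, value))

    def recall(self, cue):
        # Return the value whose stored key best matches the cue.
        return min(self.pairs, key=lambda kv: hamming(kv[0], cue))[1]

mem = HeteroAssociativeMemory()
mem.store([1, 0, 1, 1, 0], "friend")
mem.store([0, 1, 0, 0, 1], "football")
print(mem.recall([1, 0, 1, 0, 0]))  # noisy cue still retrieves "friend"
```

The same machinery becomes auto-associative if each key is stored as its own value, which is the distinction the post draws.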


As various researchers explore the same problem space they are very likely to duplicate the same answers and converge on similar results. In this case, the problem space is figuring out what the brain is doing.

As research provides new insights, this will drive researchers to explore these newly uncovered threads. These new findings work to steer the “pack” of practicing researchers in a new general direction.

This will fertilize further new investigations and the resulting findings drive this convergence process onward.

At some point I expect convergence on the basics of AGI much like we have with plate tectonics or evolution.


The brain needs an N-dimensional model of the world, so when it makes an exploratory
move it can measure the change. From past data a prediction is made of what is expected,
like Hopfield networks do. A Hopfield network gives the most likely output, auto-association. But the actual output can be completely different. Another network is needed to fix this, and that is a chaining RNN, plus a quick local retraining.
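For reference, here is a toy Hopfield-style auto-associative memory of the kind mentioned above: a pattern is stored in a Hebbian weight matrix, and a corrupted cue is iterated until it settles on the stored pattern. Bipolar (+1/−1) vectors, pure Python, illustrative only.

```python
def train(patterns):
    """Hebbian outer-product learning rule, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    """Iteratively update each unit toward the nearest stored pattern."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = train([pattern])
noisy = [1, -1, -1, -1, 1, -1]      # one bit flipped
print(recall(W, noisy) == pattern)  # True: cue settles on stored pattern
```

This also shows the failure mode the post points at: the network always converges to *some* attractor, which need not be the answer you wanted.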

Weighted metrics.
All sub-features, features, objects, backgrounds, and pattern loops are weighted, so in my model there is an iteration distance between all things.
By selecting two objects and changing the weights until both are the same, the amount
of change is the eigen-distance between the two.
All distances in associative memory, and objects in the next video
frames, can be found this way.
So when you start with a lump of clay and want to make a clay bust of ex-president Obama, your mind has a measuring system to track the change toward the target goal.
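One loose way to read the "change the weights until both are the same" idea above: represent each object as a feature-weight vector and count how many fixed-size adjustment iterations it takes to morph one into the other. Everything here (the step size, the feature vectors) is a made-up illustration, not the poster's actual model.

```python
def morph_distance(a, b, step=0.1):
    """Iterations of bounded weight changes needed to morph a into b."""
    steps = 0
    a = list(a)
    while any(abs(x - y) > 1e-9 for x, y in zip(a, b)):
        for i, (x, y) in enumerate(zip(a, b)):
            if abs(x - y) > step:
                a[i] += step if y > x else -step
            else:
                a[i] = y  # close enough: snap to target
        steps += 1
    return steps

clay_lump = [0.0, 0.0, 0.0]   # starting object
bust      = [0.5, 0.3, 0.2]   # target object
print(morph_distance(clay_lump, bust))  # 5 steps to close the largest gap
```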


Before HTM theory, researchers were awash in the multiple resolutions at which one could try to understand the flow resulting in cognitive representation, and/or any kind of processing model to which one could attribute knowledge acquisition and prediction.

There was the representation (SDRs), which was a breakthrough, and then there was the abstraction of cortical columns as binary arrays - none of which (to my knowledge) had been conceived before (or has been since).

Numenta jumped in and proposed a framework to interpret the seemingly infinite scopes at which one could attempt understanding.

I would be interested in seeing anybody who can cite that kind of closely attributable framework, biologically.


Kanerva’s framework in his Sparse Distributed Memory book ties closely to the cerebellum (I haven’t quite gotten to the juicy biological details yet, as they are at the end of the book). So far his work seems to be more focused on autoassociation. However, his work and HTM share a lot of core ideas that come from biology.

But yeah, although there is overlap there, I cannot think of anyone else who puts more weight on the biology than Numenta.


It is good that you give respect to another person's work. Very honorable. I hope all
is explained in Feynman style, so honest researchers can move at light speed.
The papers written today are written in anti-poaching script: so thick that few will want to deal with them.


I really like that video above (Sparse Manifold Transform) mainly because of the more intuitive ideas behind it. Although the intuitions are not new, they bring them to light in a new way.

The first thing he explains is that the cortex seems to be pulling consistencies out from the world (again an old idea; even Aristotle alluded to this). With such a chaotic stream of input data, the cortex tries to represent the consistent structure of the world with stable representations. So while you watch your friend play football, the inputs are fast-changing (and never identical), but your internal representation of your friend is constantly stable. Of course we understand this stability to be hierarchical, in that stability increases as you go up the hierarchy (like Jeff talks about). So, in other words, it’s dimensionality-reduction/autoencoding/classification all the way up.
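A tiny illustration of that stability point: the raw inputs fluctuate every frame, but a higher-level unit that pools over time stays nearly constant. The numbers and the pooling rule (a plain average) are my own toy setup, not anything from the talk.

```python
import random

random.seed(1)

friend_identity = 0.7                       # the stable thing in the world
frames = [friend_identity + random.uniform(-0.2, 0.2)
          for _ in range(100)]              # fast-changing sensory input

pooled = sum(frames) / len(frames)          # slower, higher-level representation

frame_spread = max(frames) - min(frames)
print(frame_spread > 0.3)                   # raw input varies a lot
print(abs(pooled - friend_identity) < 0.05) # pooled estimate stays stable
```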

If I understand the theory correctly, they’re using sparsity to represent stable features, then ‘flattening’ them out into a space where the features can be interpolated, and therefore regenerated. This interpolation gets at the core idea of inference/generalization: novel input combined with memory can predict novel output.
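A bare-bones sketch of that interpolate-then-regenerate idea: two stored embeddings are linearly blended, and the blend is decoded back by nearest-neighbor lookup against memory. This is purely illustrative, not the actual Sparse Manifold Transform machinery; all names are made up.

```python
# Toy "flattened" feature space: two stored embeddings.
memory = {
    "cat_left":  [1.0, 0.0],
    "cat_right": [0.0, 1.0],
}

def interpolate(a, b, t):
    """Linear blend between two embeddings, t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def decode(embedding):
    """Nearest stored embedding, by squared Euclidean distance."""
    return min(memory, key=lambda k: sum((x - y) ** 2
               for x, y in zip(memory[k], embedding)))

mid = interpolate(memory["cat_left"], memory["cat_right"], 0.25)
print(decode(mid))  # closer to "cat_left" at t = 0.25
```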


There are three ways of doing unsupervised learning:
1). Sample from two different locations and look for the same sub-features.

2). Have a little internal doodle board generate some internal data, and then go
look and see if it exists in the data stream from a sensor.

3). A little soft detector that activates in the presence of a certain sub-feature.
Its internal settings are randomly set. Then it is shown a data stream. If the detector
activates, then it is kept, a small tile picture is taken of what activated it, and the two are
linked together. Like that little tile-dictionary lossy Gabor picture from the video.
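Method 3 above can be sketched in a few lines: detectors with random settings are shown a data stream, and any detector that fires is kept along with the "tile" of input that triggered it. Every parameter here (template size, threshold, similarity measure) is an arbitrary stand-in for illustration.

```python
import random

random.seed(0)

def make_detector():
    """A random template plus a fixed match threshold."""
    return {"template": [random.random() for _ in range(4)],
            "threshold": 0.5}

def activates(det, patch):
    """Fire when the patch is similar enough to the template."""
    diff = sum(abs(t - p) for t, p in zip(det["template"], patch)) / 4
    return (1 - diff) > det["threshold"]

stream = [[random.random() for _ in range(4)] for _ in range(50)]

kept = []  # (detector, tile) pairs, linked together
for _ in range(10):
    det = make_detector()
    for patch in stream:
        if activates(det, patch):
            kept.append((det, patch))  # keep detector + tile snapshot
            break

print(len(kept) > 0)  # some of the random detectors survive
```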

These activations are routed to a self-organizing map. The very first soft detector
that comes into existence will be the nucleation site for this self-organizing
map, SOM. Or THE SDR.

When many more than one come into existence,
a weight is paired with each little image capture and detector system to do k-means,
so that manifolds, clustering, and simulated annealing can be done.

A weight can be paired with each binary SDR bit, to turn it on or off, to do k-NN or
rough clustering.
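For the clustering step described here, a minimal k-means loop looks like this. The "tiles" are just 2-D points for readability; a real system would cluster the captured detector tiles. Sketch only, with made-up data.

```python
def kmeans(points, centers, iters=10):
    """Plain Lloyd's algorithm: assign to nearest center, then re-average."""
    for _ in range(iters):
        # Assignment step: nearest center for each point.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Update step: move each center to its group's mean.
        centers = [
            [sum(d) / len(g) for d in zip(*g)] if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

tiles = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
centers = kmeans(tiles, [[0.0, 0.0], [1.0, 1.0]])
print(centers)  # two cluster means, near (0.05, 0.05) and (0.95, 0.95)
```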

Also, the little picture that is taken should be non-lossy, like chain code or a special type of downsampling.

Perceptually Based Downscaling of Images (SIGGRAPH 2015)

For semantic similarity and generalization, a stay-on, one-shot delay timer can be added
to a soft detector, which is cloned in neighboring areas, so that this detector can do temporal “bag of features” generalization :-)