How can we help Numenta research?

Another note on scalability. In my interpretation of TBT, the models created by a single cortical column are “fuzzy” and imprecise; it takes many CCs voting together to specify a precise concept. I see it working similarly to grid cells: a single grid cell by itself is fuzzy, but a population of them working together can specify a precise position. This means we need to scale up well beyond the “toy” models we typically work with.
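To make the grid-cell analogy concrete, here is a toy, purely illustrative Python sketch (nothing from Numenta’s codebase; the module periods and the position are made up). Each module alone is consistent with many positions, but a few modules together pin down one:

```python
# Toy illustration: several "fuzzy" modules, each of which only knows a
# position modulo its own small period, jointly pin down a position far
# more precisely than any single module can.
from math import lcm

periods = [3, 4, 5]       # hypothetical grid-module periods
true_position = 37        # some location on a 1-D track

# Each module's reading is ambiguous on its own: many positions share it.
readings = [true_position % p for p in periods]
search_range = range(lcm(*periods))   # 0..59 for periods 3, 4, 5

for p, r in zip(periods, readings):
    matches = [x for x in search_range if x % p == r]
    print(f"period-{p} module alone is consistent with {len(matches)} positions")

# But the combined readings are unique within lcm(3, 4, 5) = 60 positions.
joint = [x for x in search_range
         if all(x % p == r for p, r in zip(periods, readings))]
print("all modules together:", joint)   # -> [37]
```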

4 Likes

Great, thanks @Paul_Lamb, these are all very helpful. Do we have an idea of what “voting” would look like implementation-wise?

P.S. I will finally have time to look at etaler and would like to thank @marty1885 for sharing this excellent work.

1 Like

This is a little more speculative, but at a high level: first, I believe we need a layer which represents the object/concept currently being attended. This layer (which Numenta has labeled the Object layer in a couple of their papers) would bias the lower layers. Conceptually, this biasing would work something like what I walked through in this post. Of course, that was just stepping through a toy example with a simple TM layer, so the idea would need to be fleshed out and applied to more complicated SMI-related layers (i.e. reference frames, motor outputs, etc.)

That Object layer would need to implement some form of Temporal Pooling. I think the “right” implementation would involve hex grids, but one could play around with simpler implementations (there are some floating around the forum) or forego learning in this layer initially (as Numenta did in a couple of their papers) while focusing on other pieces of the CC circuit.
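As a concrete example of one of those “simpler implementations”, here is a minimal sketch of a union/persistence-style pooler. This is my own toy formulation, not Numenta’s hex-grid idea or their research code, and the constants are arbitrary: cells that get driven stay active for a while, so the layer settles into a slowly changing code that spans many lower-layer inputs.

```python
# A minimal union/persistence-style Temporal Pooling sketch (toy code).
DECAY = 1          # persistence lost per step
BOOST = 3          # persistence granted when a cell is driven
ACTIVE_AT = 2      # persistence level needed to count as active

def pool_step(persistence: dict, driven_cells: set) -> set:
    """Update per-cell persistence and return the pooled (active) cell set."""
    for cell in list(persistence):
        persistence[cell] = max(0, persistence[cell] - DECAY)
    for cell in driven_cells:
        persistence[cell] = persistence.get(cell, 0) + BOOST
    return {c for c, p in persistence.items() if p >= ACTIVE_AT}

# Feeding in a sequence of made-up feature codes for one object: the pooled
# representation changes much more slowly than the inputs do.
persistence = {}
for features in [{1, 2}, {3, 4}, {1, 5}]:
    print(sorted(pool_step(persistence, features)))
```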

It is in the Object layer where voting would occur. Whether that is through competing hex grids, or perhaps more simply by using the TM learning algorithm between CCs in this layer, the idea would be to set the layer up in such a way that it gets a driving input from the lower layers in the same CC and a biasing input from the Object layer in other CCs.

2 Likes

You (the reader) are welcome to be the leader you wish to see on this forum; however, managing research and development can be a bit like herding cats. Everyone has their own goals and ideas for achieving them.

4 Likes

At least for me, the main bottleneck for TBT and HTM right now is that no one outside of Numenta knows how to implement TBT. It’s a great hypothesis and model, but the only existing implementation lives somewhere in htmresearch, and the paper doesn’t contain enough detail either. If I remember correctly it’s thalamus and a branch of NuPIC (changes in C++). I read it and gave up not long after.

There’s also the fact that Numenta has announced they are moving away from full neuroscience research. Otherwise I would have applied for the HPC engineer position they opened.

I’ve also made my attempts at implementing it, and all of them ended in failure. (Implementing Thousand Brains Theory in a developer usable form (project initiative), A recommended roadmap to implement Thousands Brains Theory?) The best Numenta can do now is to share a clear(er) implementation and mathematical description of TBT with us, and we can figure it out from there. But I don’t think that’s happening. @Bitking has shared his view on hex cells as a basis for voting mechanisms with me before, but I also never figured out enough to code it.

Let me know if anyone wants to try. I’ll do my best to help.

5 Likes

The original article describes almost everything you need to reproduce it, with the exception of how the output layer cells “vote” with their distal basal dendrites.

I think part of the problem is that their theory is incomplete, and so several details that are critical for real world applications are missing.

2 Likes

Going back to the original topic: Numenta is focusing on applying sparsity to deep learning, and it seems they have had some success in that direction. Personally I don’t like deep learning that much due to the absurd amount of computation needed. No single person can train a state-of-the-art model from scratch, and most people just reuse code from GitHub and claim they know AI. That’s not to say I dislike deep learning itself (though I do like HTM’s biological approach better); I just don’t like the community there.

I think most folks on the forum are here for HTM. We like the approach and believe in the value it can bring. On the other hand, neuroscience is really hard, and frankly HTM has its problems (reward contribution, hierarchy, encoding complex data, etc.), and there’s no clear solution to these very hard problems. All the while, deep learning is producing very impressive results with much less (data) modeling. It works.

At the end of the day, it’s your own time. You should be the one deciding what you are interested in and what to work on. And we are here to support you.

7 Likes

@dmac I think this theory is very interesting but not complete. Numenta’s demo is understandable, but I am not aware of any successful tests from our community. I myself have no successful application of it for object detection!

I really do not know if the community will reach any further success without Numenta.

1 Like

IMO, I think the voting part is relatively easy to hack together, so long as you aren’t concerned with doing it the “right” way. The simplest naive way would be to run output from lower layers in the same CC through the SP algorithm to activate minicolumns in the Object layer. Then run the TM algorithm on cells in the Object layer, but taking their input from the cells in the Object layer of other CCs. And finally run a separate round of the TM algorithm (for apical dendrites) on cells in the lower layers, but taking their input from cells in the Object layer of the same CC.

Now of course that wouldn’t be very optimized (all features in each CC would need to learn associations with all other features in the other CCs), so it wouldn’t scale well. You’d probably want to put a little more work into it, and implement TP in the Object layer rather than (or in conjunction with) SP, to eliminate that combinatorial explosion problem. There are a few different TP implementations floating around (including one written by Numenta in NuPIC’s research code).
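To make the naive scheme above concrete, here is a deliberately simple, library-free sketch. It collapses the SP/TM steps into a single Hebbian “lateral support” rule, so it only illustrates the wiring (driving input from the same CC’s lower layers, biasing input from the other CCs’ Object layers); it is not Numenta’s column pooler, and all names, sizes, and thresholds are made up.

```python
import random

LATERAL_THRESHOLD = 3   # lateral connections needed to keep a candidate cell

class ObjectLayer:
    def __init__(self):
        self.lateral = set()   # learned (post_cell, other_cc, pre_cell) links
        self.active = set()

    def compute(self, feedforward, others, learn=True):
        """feedforward: candidate cells driven by this CC's own lower layers.
        others: {cc_index: active Object cells of that CC} -- the biasing input."""
        support = {cell: 0 for cell in feedforward}
        for cc, pre_cells in others.items():
            for post in feedforward:
                support[post] += sum((post, cc, pre) in self.lateral
                                     for pre in pre_cells)
        supported = {c for c, s in support.items() if s >= LATERAL_THRESHOLD}
        # Before anything is learned, fall back to the raw feedforward candidates.
        self.active = supported or set(feedforward)
        if learn:
            # Hebbian-style: associate this CC's active cells with the cells
            # currently active in the other CCs' Object layers.
            for cc, pre_cells in others.items():
                for post in self.active:
                    for pre in pre_cells:
                        self.lateral.add((post, cc, pre))
        return self.active

# Toy usage: three CCs sense "object A", each with its own made-up code.
random.seed(0)
object_a = [set(random.sample(range(256), 8)) for _ in range(3)]
object_b = [set(random.sample(range(256), 8)) for _ in range(3)]
ccs = [ObjectLayer() for _ in range(3)]

# One learning pass while every CC's Object layer represents object A.
for i, cc in enumerate(ccs):
    others = {j: object_a[j] for j in range(3) if j != i}
    cc.compute(object_a[i], others, learn=True)

# Now CC 0 gets an ambiguous input (union of A's and B's codes), while
# CCs 1 and 2 still vote for A; the lateral bias resolves the ambiguity.
ambiguous = object_a[0] | object_b[0]
result = ccs[0].compute(ambiguous, {1: ccs[1].active, 2: ccs[2].active},
                        learn=False)
print(result == object_a[0])   # True
```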

To me the much bigger grey area in TBT is how to implement a learning algorithm that establishes grid-cell-based reference frames. Since reference frames are really a core concept of TBT, without an algorithm for them you are kind of stuck in the realm of sequence learning and unable to unlock SMI.

2 Likes

The one thing I’m really worried about is that, based on complexity science, it is very hard to predict the future states of a complex system (e.g. the brain) from its fundamental computational units (e.g. neurons). In Numenta’s case, they are working in the reverse direction: from cognition down to its simplest parts.

Usually, simulations and runs of the simple parts are necessary to see the desired emergence (e.g. learning). How can we predict the learning power of TBT if it is only being worked on by very few people?

Going back to the original topic, I don’t want to waste my time working on “my personal interest” alone with very little research guidance, but I’m happy to help in any way I can to test Numenta’s research iterations or implementations/engineering, most likely the latter.

1 Like

Thank you everyone for your responses.

I’m going to close this loop for now. I conclude that there is no guidance from Numenta about the original question, BUT there are some great points identified so far based on the responses:

  1. It’s up to us to continue researching or engineering HTM/TBT.
  2. There is an outstanding HTM problem mentioned by @Paul_Lamb.
  3. Using etaler is a step towards solving the problem in #2, and it is a great alternative to NuPIC because of its enhanced computational capabilities, thanks to @marty1885.
4 Likes

I just want to add: I’ll keep supporting Etaler if someone is using it or sends patches in. I’m not in academia anymore, but that much is easy enough for me.

3 Likes

What do you mean by it?
In the TBT experiments Numenta used the Column Pooler for connecting with other CCs. If I remember correctly, @dmac had some ideas to modify the original CP for classification tasks with MNIST!

1 Like

The terms “Object layer” and “Output layer” are used interchangeably in reference to the layer in TBT which represents the object/concept being attended. I prefer “Object layer”, because the word “output” implies that the goal of the CC circuit is some sort of input > output function like in DL, which I believe is the wrong perspective.

I don’t know what CP is, sorry. I see the link to it in the quote above, so I’ll take a look. Note that I’ve been unplugged from HTM for quite a while, so my perspective is likely outdated.

2 Likes

@Paul_Lamb many thanks for your explanation!

2 Likes

“…HTM’s biggest strength is TM!”

+1 (quote is from previous edit of @Paul_Lamb 's post)

It seems to me that TM (and particularly the ATTM variant used in Numenta’s neuroscience papers) is still the star HTM algorithm. TM works in an ML context (e.g. for anomaly detection) and as a computational neuroscience model, where it can be mapped to the biology of L4 cortex for both sequence learning and feature-at-location recognition for object learning (links are to Numenta posters showing their 2016-2017 overview, pre-TBT).
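For anyone new to the anomaly-detection use mentioned above: the raw anomaly score is just the fraction of currently active columns that the TM did not predict at the previous timestep. A toy sketch (not the htm.core API; column sets here are plain Python sets):

```python
# Raw TM-style anomaly score: share of active columns that were unpredicted.
def anomaly_score(active_columns: set, predicted_columns: set) -> float:
    if not active_columns:
        return 0.0
    unpredicted = active_columns - predicted_columns
    return len(unpredicted) / len(active_columns)

# A fully predicted input scores 0.0, a completely novel one scores 1.0.
print(anomaly_score({1, 2, 3, 4}, {1, 2, 3, 4}))   # 0.0
print(anomaly_score({1, 2, 3, 4}, {1, 2}))          # 0.5
print(anomaly_score({1, 2, 3, 4}, set()))           # 1.0
```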

2 Likes

Just to clarify the context of that statement (I deleted it for a reason.) I had not yet looked at Numenta’s “column pooler” algorithm and was speculating what algorithm CP referred to. This was just prior to noticing that you had posted a link to the source code, and therefore speculation was not required.

The point I was in the process of making (before I deleted it) is that there are a lot of folks who throw out the TM algorithm and instead focus on stacking the SP algorithm into a hierarchy to tackle image classification problems. The reason for doing this is obvious, of course. SP works for abstracting spatial patterns, but we don’t have an equivalent way of abstracting temporal patterns. Discarding TM and focusing on SP allows one to actually build hierarchies. But unfortunately, doing that is throwing the baby out with the bath water, because (vanilla) HTM’s biggest strength is the TM algorithm.

I realize this is not relevant to the topic here, but need to make sure I’m not being quoted out of context :slight_smile:

3 Likes

Interesting to follow this old thread and see what I suspected being restated as recently as Dec. '22.

The key problem with HTM is that it lacks a principle for abstracting temporal patterns.

3 Likes