How can we help Numenta research?

As an open group how do you think we can help Numenta research? Are there any open technical problems we can contribute to?

HTM is deprecated, so what is next? Are there any new implementations we can try? How about the Thousand Brains Theory?

2 Likes

Ask not “What can Numenta do for you?”, but “What can you do for Numenta?”

An inspiring notion :slight_smile:

At this point, as far as I can tell, Numenta has little to no presence on this forum. They discontinued support for NuPIC years ago, and they have more or less stopped posting meetings on their YouTube channel.

For all intents and purposes, I would say we are using their forum to discuss principles of the algorithms they openly shared over the past few years but are no longer sharing. We have little idea what state-of-the-art HTM algorithms they currently implement.

What I would enjoy seeing in this forum is a bit more discussion regarding implementation.

I’ll admit, I came a bit late to this party, just at the beginning of this year, after the open era overseen by Mr. Matt Taylor. I am very disappointed to have missed that time, when it seemed like real, open-ended discussion of HTM implementations was happening.

These days, I ask if anyone has experience running parallel HTM processes in htm.core and hear nothing. Meanwhile, tangential philosophical discussions about the brain and consciousness are all the rage.

It has had me wondering if the HTM architecture and htm.core algorithms are even viable.

Guys, can we talk a bit more about the code we are writing, or trying to write, and give each other specific assistance if we can?
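
To start that off with my parallelism question above, here is a minimal sketch of what I mean (my own sketch, not from any official example; parameter values are placeholders and the constructor keywords should be checked against your installed htm.core version). It simply gives each worker process its own htm.core model rather than trying to share one:

```python
# A minimal sketch of running several independent HTM models in parallel
# with Python's multiprocessing. Assumes htm.core's Python bindings;
# parameter values are placeholders, not tuned.
from multiprocessing import Pool

from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler


def run_model(seed):
    # Each worker process builds and trains its own SpatialPooler,
    # so nothing needs to be shared or pickled between processes.
    inp = SDR((1024,))
    cols = SDR((2048,))
    sp = SpatialPooler(
        inputDimensions=(1024,),
        columnDimensions=(2048,),
        potentialRadius=1024,
        globalInhibition=True,
        localAreaDensity=0.02,
        seed=seed,
    )
    for _ in range(100):
        inp.randomize(0.02)          # stand-in for a real encoder
        sp.compute(inp, True, cols)  # learn=True
    return seed, cols.getSparsity()


if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(run_model, [1, 2, 3, 4]))
```

Even a toy baseline like this would give us something concrete to compare notes on: throughput per process, memory per model, and so on.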

P.S.
Sorry if this is too pessimistic, or if I am missing good implementation discussion/community engagement from Numenta that is right in front of me; I am just giving my point of view.

P.P.S.
Many, many thanks to @David_Keeney et al. for updating htm.core to work with Python 3.11. I have heard (from Guido on the Lex Fridman Podcast) that the 3.11 release includes many CPython improvements that increase computation speed.

6 Likes

TBT (HTM with lateral connections) has not gotten much attention on the forum.
TBT offers a path towards distributed, large-scale representation achieved with local computation.

I proposed an implementation a while ago but other than that, not much has been done on the forum.


3 Likes

Numenta made a single major discovery and wrote a bit of code to show it off. Then nothing. Maybe they expected the next bit to be easy, and it wasn’t.

To my eyes their discovery is being slowly picked over by others, but on its own it’s not enough. The Next Big Thing will be somewhere else.

2 Likes

When Jeff was on the Machine Learning Street Talk podcast last year, the hosts were all about HTM, but Jeff didn’t want to talk about it… my understanding is that Numenta is not investing any mental energy in HTM. TBT is the future (or at least part of it).

Was HTM released 10 years ago?

I will have a look at this when I get a chance. Thanks for sharing.

@david.pfx @Bitking

I have an idea, probably a wild one.

How about if we progress HTM or TBT ourselves? By that I mean we continue the experimenting, theorizing, and research on our own.

Another interesting strategy for progressing HTM or TBT or their variants is to look into the current advancements in deep learning and learn from its improvements, for example Transformers or GPT. Maybe we missed something fundamental and can learn a lesson from deep learning? IMO it’s really about constantly progressing these theories to reach advancements. In saying all this, I’m no expert in TBT or DL.

2 Likes

I think I can agree with this. Maybe Numenta needs to take advantage of the current advancements in deep learning, not in the sense of adopting it, but by going back to their first principles.

1 Like

What does TBT stand for?

1 Like

TBT stands for Thousand Brains Theory.

2 Likes

They’ve been extremely open about their research and ideas for years. I think they’d share any major breakthroughs to get scientists etc. more interested.

They’ve spent years working on that full time. At this point, I think the only way to make solid progress is if Numenta manages to collaborate more with scientists, or do the research directly. There is (or was) an organization that will run neuroscience studies if you design the study. I don’t know how feasible that is, but it’d be pretty cool.

3 Likes

I’m aware of this. It’s fair to say that Numenta has not yet harnessed the power of OPEN collaboration; I mean open source, open ideas, open AI backgrounds, etc. So far there is no such thing as “we have done this much already”, because nobody has yet discovered and successfully implemented AGI. DL is a great example; look at it now. AFAICT the Transformer architecture was not a grand, glorified idea in the first place; it just worked. Its progression is largely due to continuous work by massive research groups, including ordinary AI practitioners who critique or try these architectures.

2 Likes

@Casey Appreciate your response by the way.

We can start with the core questions, which have likely been answered on this forum already.

I’ll set TBT aside for a bit because, IMO, although TBT delves into the minicolumn-level algorithm, it’s still at a higher conceptual level than HTM; HTM already has a reference implementation in Python, so it’s more concrete. For all we know, a TBT implementation may or may not turn out to be an extension or improvement of HTM, but how can we know? A lot of features are already in HTM, such as online learning, sensorimotor inference, SDRs, sequence learning, etc. Therefore there is a high chance that TBT will use or borrow from the HTM implementation.

  1. With respect to learning, why doesn’t HTM scale well, both mathematically and conceptually?

  2. Has HTM reached its limits with respect to learning? Can we show this both mathematically and conceptually?

  3. If HTM has reached its limits (per number 2), is it still possible to extend or enhance it? Why or why not?

These are just high-level questions that are not constrained by neuroscience, and I know this is not Numenta’s way, but what is the purpose of an open forum if we simply follow Numenta’s path? My 2c is that there is value in deviating from the original research and learning from other fields, such as the current advancements in DL.

I’m not sure if they are still around, but I’m cc’ing them as they’ve been very active in the past (10x more active than me): @Paul_Lamb, @sheiser1, what are your thoughts?

cc @Bitking, who I believe is a moderator of this forum: what are your thoughts?

3 Likes

In basic HTM, particularly in the smaller implementations, the topology switch is off.
See this HTM school for more details:

Setting the topology switch changes the receptive field from sampling the entire input space to a more restricted local space. As I said above, for a very small model (needed for older, less capable hardware), the model is not large enough for this to make much difference.
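
In htm.core terms, flipping that switch amounts, roughly, to turning global inhibition off and shrinking each column’s potential radius. A sketch, assuming htm.core’s SpatialPooler parameter names (check your version; values are placeholders):

```python
# Rough sketch of the "topology switch" using htm.core's SpatialPooler
# parameters; values are placeholders for illustration only.
from htm.bindings.algorithms import SpatialPooler

# Global receptive field: every column may sample the whole input space.
sp_global = SpatialPooler(
    inputDimensions=(64, 64),
    columnDimensions=(32, 32),
    potentialRadius=64 * 64,   # effectively the entire input
    globalInhibition=True,
    localAreaDensity=0.02,
)

# Local topology: each column samples only a neighborhood of the input,
# and inhibition is computed within local neighborhoods as well.
sp_local = SpatialPooler(
    inputDimensions=(64, 64),
    columnDimensions=(32, 32),
    potentialRadius=8,         # restricted local receptive field
    globalInhibition=False,
    localAreaDensity=0.02,
    wrapAround=True,
)
```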

As the models get larger, there starts to be some advantage to having a “society of agents” processing the input space. The downside is that these agents really don’t work together; each is doing its own thing.

The original thought, part of very early HTM theory, was that the hierarchy would stitch them back together. The problem is that connections between maps maintain topology without sampling the entire input map space; it would take several layers for a signal from one side of a map/region to interact with a signal from the other side, and that really does not match up with what is seen in the wetware.

The problem can be stated as this: How can we get global recognition with local operations in a single map/region?

Enter TBT. We have local topology, but the receptive field has a lateral component within the map/region. We learn and recognize both our local input receptive field -AND- the response of some of our neighboring mini-columns. As we learn a given pattern, we also learn it in the context of what our neighbors are learning. That means we may locally see what would be two identical patterns in our receptive field, but with the lateral component we can learn to differentiate them as different patterns.
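
A toy way to picture that last point (purely illustrative; not taken from any real HTM/TBT implementation):

```python
# Toy illustration: the same local pattern, paired with different neighbor
# activity, yields representations that can be told apart.
import numpy as np

local_pattern = np.array([0, 1, 0, 0, 1, 0, 1, 0])  # identical in both cases

neighbor_ctx_a = np.array([1, 0, 0, 1])  # what nearby mini-columns report, case A
neighbor_ctx_b = np.array([0, 1, 1, 0])  # case B

rep_a = np.concatenate([local_pattern, neighbor_ctx_a])
rep_b = np.concatenate([local_pattern, neighbor_ctx_b])

# Locally the two cases look identical...
print(np.array_equal(local_pattern, local_pattern))  # True
# ...but with the lateral context folded in, they are distinguishable.
print(np.array_equal(rep_a, rep_b))                  # False
```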

Considering that one of the more useful features of HTM is pattern completion/filtering, this dramatically improves its pattern recognition power.

1 Like

I think the question is a problem in itself. What does global recognition mean, and why do we assume that achieving it gives better results?

It sounds very intuitive to me, but how can we make this work? And if it works, does it really perform better?

I think two of the questions we need to try to answer are:

  1. What is the core algorithm of composable learning units?
  2. What is the optimal communication mechanism of #1?

Number 1 is the “hardware” units (not necessarily physical hardware) and number 2 is their communication mechanism. For deep learning, #1 is point neurons and #2 is feedforward propagation plus backpropagation. What would HTM/TBT’s be, and why, mathematically or practically, would it make a difference?

2 Likes

I offered the “cliff notes” version.
Start here for a more detailed description.

1 Like

I’ve got the TBT book, and I’ve also listened to Jeff’s interview with Lex Fridman; I was excited and very curious about Jeff’s optimistic statements that they’re going to have great results in the near future.

I am amazed by TBT, and my poor brain tells me it’s HTM + more context. The big BUT is that these computational models, likely shaped by evolution, can (like cellular automata) only be realized when they are RUN. The concepts are great, please don’t get me wrong, but I still can’t see or intuit how it’s going to be better than deep learning, for example. I try to be agnostic about HTM and TBT, by the way, because IMO we, or at least this forum, haven’t really made much progress on HTM. What if it had been progressing all along; what might it have become? Well, TBT could be considered its progression, though.

3 Likes

Tell me again which deep learning models do good one-shot learning?
How many deep models do single layer pattern completion?
How many deep models do sequence/transition recognition?
How many do all of these things at the same time?

2 Likes

Awesome, I love this summary :sweat_smile:

Hmm, but it doesn’t address the scalability aspect. I’m not looking for an answer, BTW; I’m going back to first principles, the reasons why, and where we are now with HTM/TBT, because that speaks to how we can help Numenta research, which is the original question.

2 Likes

IMO, the main hindrance in this area has been the computational resources required to run the HTM algorithms. One project that I have always felt is a potential game changer is Etaler, which brings GPU acceleration to the table.

Now, Etaler’s massive speed improvement does come at the cost of a couple of functional deviations from the vanilla algorithms, but the detrimental effects of those optimizations are mainly theoretical (there are a number of posts on the forum which show Etaler to be highly capable).

I would view Etaler as a good candidate for experimenting with scaling up lots of models and applying the SP and TM learning algorithms to new components of TBT (such as the lateral connections between CCs).

1 Like