Big Announcement from Numenta Later Today

Hey everyone! I know you haven’t been hearing much from Numenta lately, but I am excited to say that this is about to change. Later today Jeff Hawkins will give a talk at an event hosted by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He will be part of a four-speaker series, each giving a 10-minute talk followed by a panel discussion. The session starts at 9:35am PST and Jeff’s talk will be at around 10:05am. The event will be live streamed here: HAI at Five: Celebrating 5 Years of Impact

At the end of this talk, Jeff will announce a super exciting project that’s been in the works at Numenta for a while now and may be of interest to many of you. We will share a website and more information after the talk but wanted to give you all a heads up. Tune in for more info!

7 Likes

Thank you so much for the announcement. I’m looking forward to it.

2 Likes

Faith in Numenta restored.

3 Likes

Looking forward to giving the new framework a try!

4 Likes

Congratulations on the project! Please keep uploading research meetings to the YouTube channel; those were super valuable!

5 Likes

They’re back :slightly_smiling_face:.

Is this it? Thousand Brains Project | Numenta

4 Likes

Yes, that’s it! Here are also our Twitter and LinkedIn if you’d like to follow, or you can sign up for email updates on the website.

https://x.com/1000brainsproj
We have about 1.5 years of meeting recordings and will go through them to see which ones may be useful :slight_smile: We’re also going to upload our meeting recordings again going forward. Please have a bit of patience with us; we’re just getting the project started again and are a very small team at the moment (job openings on the website :wink:).

4 Likes

That’s great news!
And congratulations.

2 Likes

Happy to see a push forward!

What seems to be a preliminary architectural outline paper is very interesting, though not easily accessible: https://www.numenta.com/wp-content/uploads/2024/06/Short_TBP_Overview.pdf

PS: the event is a 9-hour video; Jeff’s presentation starts here: https://youtu.be/wVqNLaN7cJQ?t=4684

6 Likes

Excellent. I hope that Numenta will soon release this open-source TBT for everyone and the HTM community.
But if I understood correctly, the current release of TBT does not support prediction, so I really do not know how to combine the two frameworks, HTM and TBT.

2 Likes

Hi @thanh-binh.to,
don’t worry! Prediction is a crucial component of every learning module. I think the section on prediction in the overview document is formulated in a bit of a misleading way. We have predictions happening at every time step. For instance, we predict: “given my current hypotheses of which object and pose I am sensing, plus the movement I just executed, what will I sense next?” What we mean when we say we haven’t implemented prediction over time yet is that our models don’t encode object behaviors. So we can have a model of a stapler and make predictions about what we will sense at any location on the stapler, but we don’t yet have models that encode the movement of the stapler during stapling. I hope that makes sense? It is definitely a crucial capability we want to add as well.
Also, learning modules can be HTM-based (we experimented a bit with this, but not too much yet), so there is definitely an option to combine the two. It just requires an additional mechanism (like grid cells) to incorporate the reference-frame aspect and keep track of locations while moving.
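To make that per-step prediction concrete, here is a minimal toy sketch in Python. This is not the actual TBP/Monty code; all names (ToyLearningModule, learn, predict) are hypothetical and only illustrate the idea of storing features at locations in an object’s reference frame and predicting the next sensation from a hypothesized object, pose, and movement.

```python
# Toy sketch (hypothetical, NOT the actual TBP code) of the per-step prediction
# described above: given a hypothesis about which object is being sensed and in
# what pose, plus the movement just executed, predict the next sensation.
import numpy as np

class ToyLearningModule:
    def __init__(self):
        # Learned object models: object name -> {location (3,): feature vector},
        # with locations stored in the object's own reference frame.
        self.object_models = {}

    def learn(self, obj_name, location, feature):
        self.object_models.setdefault(obj_name, {})[tuple(location)] = np.asarray(feature)

    def predict(self, hypothesis, movement):
        """Predict the feature sensed after executing `movement`.

        hypothesis: dict with the hypothesized 'object', its 'rotation'
                    (3x3 matrix mapping body frame to object frame) and the
                    current 'location' on the object (object reference frame).
        movement:   displacement of the sensor in the body's reference frame.
        """
        # Transform the body-frame movement into the object's reference frame
        # and update the hypothesized location on the object.
        next_location = hypothesis["location"] + hypothesis["rotation"] @ movement
        model = self.object_models[hypothesis["object"]]
        # Predict the feature stored at the nearest known location in the model.
        nearest = min(model, key=lambda loc: np.linalg.norm(np.array(loc) - next_location))
        return model[nearest], next_location

# Usage: learn two points on a "stapler" and predict what a 1 cm move will sense.
lm = ToyLearningModule()
lm.learn("stapler", [0.0, 0.0, 0.0], [1.0, 0.0])   # e.g. a flat-surface feature
lm.learn("stapler", [0.01, 0.0, 0.0], [0.0, 1.0])  # e.g. an edge feature
hyp = {"object": "stapler", "rotation": np.eye(3), "location": np.zeros(3)}
predicted_feature, new_loc = lm.predict(hyp, np.array([0.01, 0.0, 0.0]))
print(predicted_feature)  # -> [0. 1.]
```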

2 Likes

@vclay thanks for your feedback!
Is your SDK implemented in C/C++?

2 Likes

Why, where’s the problem? I mean sure, the model must account for the hand/actuator too, but can’t the two models “fit” together, with play/practice of course?

1 Like

But this is the biggie, rarely talked about. Current AI cannot model anything from sensory input, whereas animal and human brains construct models for everything they encounter. This IMO is where AGI lives.

1 Like

OK, but how do you determine that there is any (and hopefully improving) model under the hood, other than by a practical test - e.g. next-step prediction or an RL task?

Is there a more objective measurement of “modelhood”?

1 Like

A model is about how objects behave over time, about the consequences of actions, about how to make choices that lead to desired outcomes.

A dog is not born knowing how to chase a ball or how the ball will behave, but it can leap at just the right moment to catch the ball in mid-air. We know behaviour like this when we see it, but no, I’m not aware of any practical test. I just know current AI can’t do it.

1 Like

There is a difference between memorization and generalization. The difference is similar to fitting vs modeling, or if you prefer, kinematics vs dynamics.

Kinematics/fitting will result in a representation that describes the superficial behavior of the data and can reproduce an adequate approximation in the vicinity of the training data.

Dynamics/modeling will result in a representation that models the process(es) that generated the data. If the model is accurate enough, then it can be used to generate predictions that are much further from the training set. The model could potentially even provide insights about the actual behavior of the data that would not necessarily have been obvious from the training data alone (i.e. due to errors/noise in the data or insufficient resolution/coverage).

A generative model based on dynamics is more flexible and generalizable. As such, it can typically be used to make salient predictions on previously unseen data or fill in the gaps in sparsely sampled data sets.

More importantly, such a model can also implicitly encode symmetries and other salient factors that could improve the accuracy and robustness of its predictions.

Depending on the choice of internal representation, these models may also be factorable. In that case, a model might be able to be decomposed into independent modules that could then be used in a compositional manner to generate new models.
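To illustrate the fitting-vs-modeling distinction, here is a small hypothetical NumPy-only example (mine, not from the post above): both approaches are trained on the first second of a noisy projectile trajectory and then asked to extrapolate. The polynomial (fitting/kinematics) only reproduces the observed segment, while fitting the parameters of the assumed generating process (modeling/dynamics) extrapolates far beyond it and recovers the gravity constant.

```python
# Illustrative sketch of "fitting" vs "modeling": both are trained on the first
# part of a ball's trajectory, then asked to extrapolate beyond the data.
import numpy as np

rng = np.random.default_rng(0)
g = 9.81                       # true gravity used to generate the data
v0, h0 = 12.0, 1.5             # true initial velocity and height

t_train = np.linspace(0.0, 1.0, 20)               # observed portion
y_train = h0 + v0 * t_train - 0.5 * g * t_train**2
y_train += rng.normal(0.0, 0.05, t_train.shape)   # measurement noise

# "Kinematics"/fitting: a high-degree polynomial memorizes the visible segment.
poly = np.polynomial.Polynomial.fit(t_train, y_train, deg=8)

# "Dynamics"/modeling: fit the parameters of the assumed generating process
# y(t) = h + v*t - 0.5*g_est*t^2, which is linear in (h, v, g_est).
A = np.column_stack([np.ones_like(t_train), t_train, -0.5 * t_train**2])
h_est, v_est, g_est = np.linalg.lstsq(A, y_train, rcond=None)[0]

# Extrapolate well beyond the training data.
t_test = np.linspace(1.5, 2.4, 4)
y_true = h0 + v0 * t_test - 0.5 * g * t_test**2
y_poly = poly(t_test)
y_model = h_est + v_est * t_test - 0.5 * g_est * t_test**2

for t, yt, yp, ym in zip(t_test, y_true, y_poly, y_model):
    print(f"t={t:.2f}  true={yt:7.2f}  poly fit={yp:9.2f}  dynamical model={ym:7.2f}")
# The polynomial typically diverges outside [0, 1]; the dynamical model stays
# close to the truth and recovers the generating parameter (g_est ~ 9.8).
```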

2 Likes

Nicely put. And my ongoing concern is that while mammalian and avian brains routinely construct models that can be “used to make salient predictions on previously unseen data or fill in the gaps”, it seems that current AI cannot.

It would appear that any predictive models (such as those in self-driving cars or some aspects of generative AI) are the result of software engineering layered on top, not a result of the training process.

The attraction of HTM is its ability to predict sensory input, so is this a step on the path to predictive models, or not?

1 Like

Hi @vclay, may I know what will happen to this forum? Is it going to continue with the TBP, is there going to be a new forum, or is this going to be reorganised?

@Bitking Might know?

1 Like

Hi @Jose_Cueto
That’s a good question! The TBP is a new project and not a continuation of HTM, so we are not planning to do anything specific with this forum. We will start a new place for people to exchange ideas and questions around the TBP. Of course, we invite anyone from the HTM forum to join and expect that it will be interesting to many people on this forum. We will have to see how large the overlap of interest is to decide if we want to incorporate the HTM community into the new TBP community and have a dedicated place there to discuss HTM-related ideas. However, this is currently not the plan, and the HTM forum should keep running as it was before. Definitely let me know if you have any thoughts or ideas about this!
-Viviane