Numenta's New Open-Source Community & First Video Releases 🚀

Hello HTM Community, my name is Will and I’m the new community manager for the Thousand Brains Project over here at Numenta :wave: I’m excited to invite this group to a new open-source community around the Thousand Brains Theory and its implementation, the Thousand Brains Project. If you’d like to get involved, we’d love to welcome any members of the HTM community to our new TBP community.

Today is also a special day, as we have just released our first two videos on our new YouTube channel.

In the first video, Viviane Clay, the director of the Thousand Brains Project, introduces you to the channel and its playlists, and explains why we have started this open-source project. https://youtu.be/bs6ZYLpEz1k

The second video is the meeting that started it all: Jeff Hawkins presents his vision of how the principles of the Thousand Brains Theory can be used to build true machine intelligence. This video covers the launch of the Thousand Brains Project, emphasizing a new ‘cortical messaging protocol’ (called the ‘AI bus’ in this video) and the difference between structured and unstructured AI models. It explores cortical columns, sensorimotor learning, and the architecture of the brain. https://youtu.be/Pody7qWszHg

What is the Thousand Brains Project?
Jeff’s book, “A Thousand Brains,” describes how the brain works on fundamentally different principles from current AI. This project is an ambitious endeavor to rethink AI from the ground up. We know there is a lot of hype around LLMs, and we believe they will remain useful tools in the future, but they are not as capable as the neocortex and work on fundamentally different principles. In the Thousand Brains Project, we want to build an open-source platform that will catalyze a new type of AI: one that learns continuously and efficiently through active interaction with the world, just like children do. Partially funded by the Gates Foundation, the project will be completely open source, and we aim to publish all of our code, ongoing and past research videos, and extensive documentation. :sweat_smile:

What will happen to the HTM Forum?
The HTM Forum will continue to exist as is. We love the community that has formed around the HTM algorithms and ideas, and we would like to see it continue. The TBP community is not a replacement for the HTM community. Although we expect that people on this forum may be interested in the TBP as well, we don’t expect that to be the case for everyone; there are many differences between the HTM algorithms and the work we are doing at the TBP now. If you are also interested in the approach we are taking in the TBP, we’d love for you to join us on our new forum as well! :heavy_heart_exclamation: https://thousandbrains.discourse.group

What about licensing and patents?
The TBP codebases will use the MIT license, giving you the freedom to create commercial software, fork the code, or do whatever else you like with it. We want everyone in the world to take advantage of this new kind of AI, and we’ve chosen a license to match.
While Numenta has many patents around this work, we have placed the patents related to this project under a non-assertion pledge. You can read more about it here: https://www.numenta.com/thousand-brains-project/patents/ :nerd_face:

Resources
The new discourse server - https://thousandbrains.discourse.group :left_speech_bubble:
Our YouTube channel - https://youtube.com/thousandbrainsproject :tv:
We’re also on X - https://x.com/1000brainsproj :speech_balloon:
LinkedIn - https://www.linkedin.com/showcase/thousand-brains-project :office:

If you have any questions, please DM me on the Thousand Brains Project forum or post a new topic in the appropriate category. :tada:

Thank you for all the exciting information. And congratulations on your new job!

Congratulations @codeallthethingz on the job. It’s a great position for spreading such powerful algorithms for the next era of true AI.

Thanks @Falco! Since starting a couple of months ago, I’ve watched a lot of the videos we’re going to publish, and it only gets more exciting.

@Morpheus I honestly couldn’t be more excited!

Any timeline on published codebases? …or is something already published and I am out of the loop :slight_smile:

It’s a great question; we’re hoping to release this month.

If you view a neural network as layers of simple associative memory, and then say the layers cause no information loss, so that information about the input and the prior layers’ recalls is fully retained going forward, what’s the result? You get a hierarchical, very context-sensitive associative memory that can generalize through the hierarchical aspect.
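To make the thought experiment concrete, here’s a minimal Python sketch (my own toy construction, not anything from HTM or the TBP): each layer is a nearest-neighbour associative memory, and a layer’s output concatenates its input with its recall, so nothing is lost on the way up the hierarchy.

```python
import numpy as np

class AssociativeLayer:
    """A toy 'simple associative memory': nearest-neighbour key/value recall."""

    def __init__(self):
        self.keys = []    # stored input patterns
        self.values = []  # patterns recalled for those inputs

    def store(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(np.asarray(value, dtype=float))

    def recall(self, query):
        # Return the stored value whose key is closest to the query;
        # nearby, unseen queries recall the same value, which is a
        # crude form of generalization.
        query = np.asarray(query, dtype=float)
        distances = [np.linalg.norm(query - k) for k in self.keys]
        return self.values[int(np.argmin(distances))]

    def forward(self, x):
        # "No information loss": pass the input through alongside the
        # recall, so later layers see both the raw input and every
        # earlier layer's recall.
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, self.recall(x)])


# Two-layer hierarchy: layer 2's keys live in layer 1's (input + recall)
# space, so its recalls are conditioned on the full context below it.
layer1 = AssociativeLayer()
layer1.store([1, 0], [0, 1])
layer1.store([0, 1], [1, 1])

layer2 = AssociativeLayer()
layer2.store([1, 0, 0, 1], [1])  # "pattern A in the context of recall A'"
layer2.store([0, 1, 1, 1], [0])

out = layer2.forward(layer1.forward([0.9, 0.1]))  # near pattern A
print(out)  # [0.9 0.1 0.  1.  1. ] : input, layer-1 recall, layer-2 recall
```

Backpropagation obviously builds something far more entangled than an explicit lookup table like this, but it shows how “lossless” layering alone yields hierarchical, context-sensitive recall.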

I just mention it as a thought experiment, and I mean training via backpropagation. I saw somewhere that François Chollet thinks the scaling laws of deep neural networks indicate their behavior is entirely due to sophisticated memory recall. https://www.youtube.com/watch?v=JTU8Ha4Jyfc&t=340s

I think it’s a great point and a good video. Yes, I agree, LLMs are essentially giant recall machines. They’re trained on all the data imaginable, so their recall is excellent and their ability to generalize language constructs is good. But they won’t be able to extract novelty from the future like humans can. And novelty, by definition, won’t have any benchmarks to test against.
