Thousand Brains Project series - Transcribed and summarized YT videos

Hey everyone,

I wanted to share a summary of each new video from the Thousand Brains Project’s core series as a new post in this topic.
These are all AI-transcribed and summarized, so I can’t guarantee correctness.

Proposal for a Roadmap to Machine Intelligence

This is a key video outlining the foundational ideas that are guiding this exciting initiative. Here’s a breakdown:

  1. Foundational Concepts:
  • The project is built on structured and unstructured models in AI. Structured models are crucial as they assume data structures that help in learning spatial and temporal relationships.
  2. Terminology Update:
  • What was initially called the “AI Bus” is now referred to as the “Cortical Messaging Protocol (CMP).”
  3. Sticking to the Vision:
  • The core ideas have largely stayed the same, with only minor tweaks to address unforeseen challenges.
  4. Focus on Sparsity and Reference Frames:
  • Sparsity and active dendrites are significant elements of the project. There are future plans to integrate reference frames for a more complete model.
  5. AI Challenges:
  • The video discusses current AI limitations, particularly in handling spatial and temporal recognition tasks that are naturally easy for humans.
  6. Insights from Neuroscience:
  • It highlights how the brain’s use of grid and vector cells informs the modeling of spatial relationships, emphasizing the importance of reference frames.
  7. Integration of Sensory Models:
  • For effective learning and recognition, different sensory models should operate within the same framework.
  8. Proposed Framework:
  • Implementing a standardized communication protocol or “bus” among AI models is suggested for more independent development.
  9. Vision for the Future:
  • The aim is to develop a decentralized system architecture that allows models to communicate and collaborate, setting the stage for future advancements in AI and robotics.

Research Questions to Figure out About the AI Bus

This video is particularly interesting as it outlines the approach to AI development through their innovative “Cortical Messaging Protocol” (previously termed AI Bus). Here’s a detailed breakdown:

  1. Categories of Focus:
  • The team has divided their exploration into three primary categories: understanding the information transmitted across the AI bus, the operations each module performs on this information, and internal operations common to modules.
  2. Information Transmission:
  • Questions are raised about how key data like object ID, location, and orientation relative to a common point are represented. They consider different coding methods like Sparse Distributed Representations (SDRs) and explore how learned representations work.
  3. Modular Operations:
  • Each module interacts with the AI bus, handling data differently based on its function. Modules may vary significantly, for example, vision modules versus episodic memory modules.
  4. Internal Module Consistency:
  • Uncertainty representation and voting mechanisms are important themes, dictating how modules can resolve ambiguity in their inferred data.
  5. Coordination and Central Point:
  • There’s an ongoing discussion about how common points of reference (like the body) may change and how this impacts the bus communication protocol.
  6. Graph Structures in the Brain:
  • The team is questioning how data like state and object similarity get represented and whether current systems can adequately address these subtleties without overly complex designs.
  7. Practical Implementation and Bus Design:
  • Emphasis is placed on defining a straightforward bus architecture that will allow independent module development. This architecture doesn’t have to perfectly mirror biological systems but should work practically and effectively.
  8. Discussion of System Capabilities:
  • The video challenges viewers to consider both simple and complex sensor systems, pondering the trade-offs between having multiple ‘dumb’ sensors versus fewer ‘smart’ ones for better collective inference.

Intro to the AI Bus

The team explores ideas around their AI bus—now called the Cortical Messaging Protocol—and discusses the future direction of the project. Here’s a detailed breakdown of the key points:

  1. AI Bus Concept:
  • The AI bus, now known as the Cortical Messaging Protocol, is a communications protocol designed to facilitate information sharing between AI modules. It moves beyond traditional methods by incorporating common reference frames to simulate how different parts of the brain communicate.
  2. Neuroscience Foundations:
  • The project remains grounded in neuroscience, aiming to reverse engineer the neocortex while exploring applications in machine intelligence. The AI bus is inspired by how cortical columns in the brain might interact and coordinate.
  3. Current Progress and Implementation:
  • The team is using explicit graphs and Euclidean reference frames to represent object models, which deviates from how the brain operates but makes debugging and visualization easier. This approach has been successful in setting up a framework for learning and communication protocols.
  4. Voting and Sensor Integration:
  • The video elaborates on the idea of “voting” as a way for sensory modules to reach a consensus on the information they process—similar to how eyes, ears, and touch coordinate in our perception.
  5. Learning and Custom Modules:
  • The framework allows for custom learning modules which can be implemented in various ways, as long as they adhere to the cortical messaging protocol. This flexibility supports innovation and adaptability in developing AI systems.
  6. Future Vision:
  • Jeff Bezos’ approach to Amazon is cited as inspiration, emphasizing the importance of starting with feasible projects while keeping sight of larger goals. This framework is expected to help develop more complex AI and robotic systems over time, with an educational focus on current robotics challenges.
  7. Project Roadmap:
  • The immediate focus is on sensor fusion, developing and iterating protocols, and building test systems to evaluate AI bus functionality. Long-term goals include robotic applications and understanding cognitive and abstract thinking processes.
  8. Options and Future Steps:
  • The team will continue exploring deep learning networks, particularly integrating sparsity and dendritic properties, while also considering the AI bus for more cognitive and abstract applications.
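The bus described above is, at its core, a shared message format plus a way for modules to exchange messages. As a rough illustration only (the `Message` and `Bus` classes here are hypothetical, not the project's actual API), a minimal publish/subscribe sketch might look like:

```python
from collections import defaultdict

class Message:
    """A hypothetical CMP-style message: every module speaks the same format."""
    def __init__(self, sender_id, location, orientation, features):
        self.sender_id = sender_id
        self.location = location        # position in a common reference frame
        self.orientation = orientation  # e.g. a rotation relative to that frame
        self.features = features        # free-form dict of sensed features

class Bus:
    """A toy 'AI bus': modules publish messages, subscribers receive them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

# Usage: a vision module publishes an observation; a learning module consumes it.
bus = Bus()
received = []
bus.subscribe("features", received.append)
bus.publish("features", Message("vision_0", (0.1, 0.2, 0.3), None, {"color": "red"}))
print(received[0].features["color"])  # prints "red"
```

The point of such a design is exactly what the video stresses: as long as a module emits and accepts the common message format, its internals can be implemented however the developer likes.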

Initial Outline of the Requirements of Monty Modules

  1. Features and Poses:
  • The video clarifies the notion of “features” in the context of the Thousand Brains Theory. Unlike traditional computer vision, features here are understood as entire objects with three-dimensional poses. Each feature includes detailed orientation and location information, which can be as simple as a point normal and curvature directions on objects.
  2. Cortical Messaging Protocol:
  • This protocol is designed to enable communication between different cortical column modules. The idea is that each column can model entire objects and communicate their pose and features relative to each other.
  3. Object Recognition and Modeling:
  • Objects are modeled using both “what” and “where” modules, which transmit positions and orientations of features. Lower-level columns might understand basic features, while higher-level ones combine these into complex objects like a coffee mug.
  4. Pose Representation:
  • A pose is defined as a relative location and orientation between two objects. This is crucial for understanding how objects relate to each other in space, and how the brain or a system might model this.
  5. Object and Body Reference Framework:
  • The framework assumes that each sensor or feature has its own reference frame. These frames are used to determine the pose relative to one another. This helps in understanding and recognizing objects, even if their position changes relative to the observer.
  6. Sequential and Combined Sensing:
  • The discussion highlights that sensors work by making sequential observations, but when combined across multiple sensors, the system quickly calculates and updates its understanding of the object being sensed.
  7. Model Independence:
  • By defining shared reference points, the model of the object remains independent of the body’s movements. This allows recognition of objects regardless of how they’re oriented with respect to the sensor’s frame of reference.
  8. Hierarchy and Learning:
  • The system uses a hierarchy where lower-level models are built slowly over time to ensure stability. Higher-level models use the input from multiple lower models to better hypothesize and identify complex structures like words composed of letters.
  9. Metaphor of Voting:
  • The video also touches on the idea that these systems can ‘vote’ on what they recognize, which leads to consensus and rapid identification of objects. This process leverages information from various modalities, improving accuracy and reducing latency in object recognition.
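The pose idea above (a relative location plus orientation between two reference frames) has a standard mathematical form: a rotation matrix and a translation vector, which compose by matrix multiplication. This is a generic worked example of that convention, not code from the project; the frame names are invented for illustration:

```python
import numpy as np

def compose(pose_ab, pose_bc):
    """Compose two poses, each a (rotation matrix, translation) pair.

    pose_ab maps frame-b coordinates into frame a (x_a = R_ab @ x_b + t_ab),
    so composing a<-b with b<-c yields the a<-c pose.
    """
    R_ab, t_ab = pose_ab
    R_bc, t_bc = pose_bc
    return R_ab @ R_bc, R_ab @ t_bc + t_ab

# Body-relative pose of a sensor: rotated 90 degrees about z, offset along x.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
sensor_in_body = (Rz90, np.array([1.0, 0.0, 0.0]))

# Sensor-relative pose of an observed feature (no rotation, 2 units ahead in y).
feature_in_sensor = (np.eye(3), np.array([0.0, 2.0, 0.0]))

# Composing the two gives the feature's pose in the body's reference frame.
R, t = compose(sensor_in_body, feature_in_sensor)
print(t)  # [-1.  0.  0.]
```

This also illustrates the "model independence" point: if feature poses are stored relative to the object's own frame rather than the body's, the stored model never changes when the body moves; only the single object-to-body pose does.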

Continued Discussion of the Requirements of Monty Modules

  1. Understanding Hierarchies and Motor Patterns:
  • There’s a focus on deepening our understanding of the cortical hierarchy and how motor patterns are planned and executed. Key questions revolve around how sensory modules decide on movement, what directs this, and how motor information integrates with sensory inputs.
  2. Stretchy Graphs and Object States:
  • The team explored the concept of a “stretchy graph” to explain recognizing objects despite variabilities such as different scales or deformations. This is relevant for recognizing different states of objects, like a shirt being folded or worn.
  3. Feature Discretization:
  • Identifying and discretizing features are crucial for understanding and modeling objects. This involves addressing how features can be classified and interpreted, especially in dynamic environments.
  4. Motor Control and Modeling:
  • There’s an interest in translating the understanding of sensory models into effective motor control strategies. This involves exploring how anticipated conditions affect actions and applying these insights to robotic systems.
  5. Potential Demonstrations and Applications:
  • Ideas for practical demonstrations include recognizing objects from various poses and integrating input from multiple sensors. This would exhibit the system’s ability to model objects accurately and adapt to changes in environment or perspective.
  6. Integration of Where and What Columns:
  • Discussions touched on the one-to-one correspondence between “what” and “where” columns in cortical models, highlighting how these systems might be organized and interact within the brain’s hierarchy.
  7. Commercial Relevance and Challenges:
  • Connecting theoretical insights with practical, commercially relevant problems in robotics remains a priority. There’s a call for understanding current industry challenges and potential applications where these neuroscience-based models could be groundbreaking.
  8. Future Direction - Behavioral Tasks:
  • For future implementations, there’s a strong interest in applying these models to complex behavioral tasks, which could significantly enhance robotics and other AI fields.

Is there a link to the video?


Here you go https://www.youtube.com/@thousandbrainsproject/videos

This is the official channel.


Thanks.


2022/10 - The Legend of Monty

The Monty Project: A Deep Dive into Advanced Module Development and System Integration

Introduction:

The Monty Project continues its exploration into the intricate world of AI, advancing our understanding of how intelligent systems can imitate human cognitive processes. This journey involves unraveling complex hierarchies and motor planning patterns, with an emphasis on creating adaptable, neuroscience-inspired frameworks.

Understanding Hierarchies and Motor Patterns:

A key focus is understanding the cortical hierarchies that model both sensory and motor patterns. Researchers aim to decipher how sensory modules determine movement direction and how motor information seamlessly integrates with sensory inputs, which is crucial for executing planned actions.

Stretchy Graphs and Object States:

The concept of “stretchy graphs” is introduced to address object recognition challenges, accounting for variabilities in scale and deformation. This model is pivotal for understanding objects in different states, like a shirt being either folded or worn, demonstrating the system’s flexibility in recognition.

Feature Discretization:

Discretizing features is essential for accurately modeling and interpreting objects. The concern here is how features can be categorized and understood, particularly in ever-changing environments, enabling more robust object modeling.

Motor Control and Modeling:

The translation of sensory understanding into motor control strategies offers substantial advancement potential. This involves anticipating conditions and applying insights from sensory models to enhance robotic systems’ actions, focusing on effective motor execution.

Potential Demonstrations and Applications:

Practical demonstrations are conceptualized to display the system’s proficiency in recognizing objects from various poses using multi-sensor integration. Such demonstrations underline the capability to model objects accurately while adapting to environmental and perspectival shifts.

Integration of Where and What Columns:

Explorations continue into the one-to-one correspondence between “what” and “where” columns in cortical models. Understanding how these systems are organized and interact within the brain’s hierarchical structure is crucial for advancing AI’s interpretative mechanisms.

Commercial Relevance and Challenges:

Aiming to bridge the gap between theoretical models and practical applications, the focus includes solving commercially relevant challenges. By understanding industry needs, the potential for groundbreaking applications of neuroscience-based models in robotics and AI becomes apparent.

Future Direction - Behavioral Tasks:

Looking ahead, there’s ambition to apply these sophisticated models to complex behavioral tasks. This could significantly enhance the capabilities of robotics and influence developments across various fields of AI, heralding a new era of intelligent systems.

Conclusion:

The Monty Project stands at the forefront of innovation in AI and robotics, inspired by the complexity of human intelligence. As the team delves deeper into the cortical hierarchies of perception and action, their work promises transformative impacts in both academia and industry.


@weiglt / @flajann2 The videos are also linked over on the Thousand Brains Project forum where there are some pretty in-depth conversations happening. https://thousandbrains.discourse.group/

Also on our socials for general updates and new post alerts.
Twitter https://x.com/1000brainsproj
LinkedIn https://www.linkedin.com/showcase/thousand-brains-project


Thanks, I didn’t see the Discourse group :)


2023/06 - The Cortical Messaging Protocol

The Monty Project: An In-Depth Exploration of Module Connections, Responsibilities, and Code Implementation in the Cortical Messaging Protocol

Introduction: The Monty Project is at the forefront of AI innovation, aiming to replicate the cognitive functions of the human brain through advanced communication protocols. The Cortical Messaging Protocol (CMP) is central to this endeavor, enabling efficient interactions between various Monty modules. This article delves into the module connections, specific responsibilities, and technical implementation of this sophisticated system.

Module Structure and Responsibilities:

  1. Sensor Modules:
  • Function: Tasked with collecting raw environmental data, these modules are integral for capturing sensory input such as visual, auditory, or tactile information.
  • Data Conversion: They transform raw data into structured “State” objects using the CMP format, converting details into structured attributes like location, orientation, and features for higher-level processing.
  2. Learning Modules:
  • Role: Learning modules analyze and interpret the data provided by sensor modules. They generate hypotheses and predictions, serving as the system’s cognitive backbone.
  • Learning and Adaptation: Through pattern analysis and feedback, these modules enhance their predictive capabilities over time, refining the system’s decision-making processes.
  • Communication: They engage in both hierarchical and lateral communication. Hierarchically, they process information from lower to upper levels and vice versa, while laterally they share and vote on information with peer modules to establish consensus or refine insights.
  3. Motor Systems:
  • Purpose: Motor systems convert the processed data into physical actions, translating information into real-world interactions.
  • Conversion to Action: Using “State” objects, these systems generate motor commands to execute movements or operations, such as manipulating objects or navigating the environment.

Key Connections and Interactions:

  1. Hierarchical Communication:
  • Top-Down and Bottom-Up Processes: Reflecting the human brain’s multi-layered structure, this communication involves the flow of information from general interpretations to specific actions and vice versa, enhancing interpretive depth and accuracy.
  2. Lateral Voting and Communication:
  • Lateral Interactions: Learning modules share and vote on information laterally, enhancing collective decision-making through diverse input.
  • Voting Mechanisms: Modules engage in voting procedures to assess various hypotheses, choosing the most probable outcomes, thereby increasing robustness.
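One simple way to picture lateral voting is confidence pooling: each module reports a confidence per hypothesis, the confidences are accumulated, and the hypothesis with the most total support wins. This is an illustrative sketch, not the project's actual voting algorithm (which the videos leave open); the module names and scores below are invented:

```python
def vote(hypotheses_per_module):
    """Pool each module's per-hypothesis confidences and pick the best-supported one.

    hypotheses_per_module: list of dicts mapping object hypothesis -> confidence.
    Returns the winning hypothesis and the combined scores.
    """
    combined = {}
    for hypotheses in hypotheses_per_module:
        for obj, confidence in hypotheses.items():
            combined[obj] = combined.get(obj, 0.0) + confidence
    winner = max(combined, key=combined.get)
    return winner, combined

# Three modules with partly conflicting evidence reach a consensus.
votes = [
    {"mug": 0.7, "bowl": 0.3},  # vision module
    {"mug": 0.5, "bowl": 0.5},  # touch module
    {"mug": 0.9, "can": 0.1},   # second vision module
]
winner, scores = vote(votes)
print(winner)  # mug
```

The robustness claim in the summary follows directly: a single module's ambiguous or wrong reading ("bowl", "can") is outweighed once several independent modules contribute evidence.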

Technical Implementation of the CMP:

  1. State Class as the Core Component:
  • The CMP relies on the ‘State’ class as its fundamental building block. This class standardizes input and output across modules, embedding attributes like location, orientation, features, and confidence values.
  2. Attributes in Detail:
  • Location and Orientation: Specify spatial positioning within a common reference frame, vital for spatial cognition.
  • Features: Captured as flexible, dictionary-based entries to accommodate various sensory modalities.
  • Confidence Value: A numerical indicator from 0 to 1, showing the reliability of the information conveyed for decision support.
  • Sender Information: Contains sender ID and type, crucial for context and routing within the system.
  3. Code-Level Enforcement:
  • The implementation ensures that all messages comply with the State structure, promoting consistency across modules.
  • Modules interpret State messages based on context—transforming sensor observations to learning hypotheses and motor commands.
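Putting the attribute list together, a State message could be sketched as a Python dataclass. This is a hedged reconstruction from the summary above, not the actual Monty `State` class; field types and the quaternion convention are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class State:
    """Illustrative sketch of a CMP State message (hypothetical, not Monty's code)."""
    location: Tuple[float, float, float]            # position in a common reference frame
    orientation: Tuple[float, float, float, float]  # e.g. a unit quaternion (w, x, y, z)
    features: Dict[str, Any] = field(default_factory=dict)  # modality-specific entries
    confidence: float = 1.0                         # reliability indicator in [0, 1]
    sender_id: str = ""                             # which module produced this message
    sender_type: str = ""                           # e.g. "sensor" or "learning"

    def __post_init__(self):
        # Code-level enforcement: reject messages with out-of-range confidence.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence must be in [0, 1], got {self.confidence}")

# A sensor module emitting one observation in this format:
s = State(location=(0.0, 0.1, 0.2), orientation=(1.0, 0.0, 0.0, 0.0),
          features={"hue": 0.6}, confidence=0.8,
          sender_id="sensor_0", sender_type="sensor")
```

The `__post_init__` check is one concrete way to realize the "all messages comply with the State structure" requirement: malformed messages fail at construction time rather than propagating through the system.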

Challenges and Future Enhancements:

  • Complex Action Policies: Future developments may add goal-oriented state exchanges, allowing for hierarchical goal-setting and execution.
  • Expanded Connectivity: Introducing top-down and more sophisticated motor pathways can facilitate complex hierarchical planning and execution.

Conclusion: The Monty Project, through the Cortical Messaging Protocol, exemplifies a cutting-edge approach to AI, replicating the intricate connectivity and complexity of human cognition. The integration of module responsibilities, connections, and robust technical implementation positions Monty as a transformative force for future AI exploration and application.
