My "Thousand Brains" book review

Not sure this is relevant, but CCs (cortical columns) can communicate with neighboring CCs. That probably allows voting to propagate a long way without needing so many synapses for all-to-all connectivity. There is probably still sparse all-to-all connectivity, both to avoid degrading information after several steps between neighbors and to speed up learning.
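
As a loose illustration of that intuition (a sketch under my own assumptions, not Numenta code; all sizes and weights are made up), votes can mix across a ring of columns through neighbor links plus a few sparse long-range contacts:

import numpy as np

rng = np.random.default_rng(0)
n_columns = 64

# Each column starts with a noisy local "vote" about some shared quantity.
votes = 1.0 + 0.5 * rng.standard_normal(n_columns)

# Sparse long-range contacts: each column also listens to a few random peers.
n_long_range = 4
long_range = rng.integers(0, n_columns, size=(n_columns, n_long_range))

for step in range(10):
    neighbors = (np.roll(votes, 1) + np.roll(votes, -1)) / 2  # neighbor links
    distant = votes[long_range].mean(axis=1)                  # sparse shortcuts
    votes = 0.5 * votes + 0.3 * neighbors + 0.2 * distant

# The spread shrinks toward consensus in a few steps; with neighbor-only
# links the same mixing needs many hops and degrades the information.
print(votes.std())

Dropping the `distant` term makes convergence visibly slower, which is the point of keeping some sparse all-to-all connectivity.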

Apical dendrites are often presented as having a single function, but they can’t have just one, because they have something like five subcomponents which do different things and receive different inputs. They receive many different inputs, especially on their apical tuft segments. That makes sense for voting, because voting combines information from many sources and supposedly happens for object ID, location, orientation, and probably other things.
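
As a toy picture of that (my own sketch, with made-up counts and thresholds), each apical tuft segment can be treated as an independent coincidence detector over a different vote source, and the cell becomes biased/predictive if any segment crosses its threshold:

import numpy as np

rng = np.random.default_rng(1)
n_inputs = 200    # axons carrying votes (object ID, location, orientation, ...)
n_segments = 5    # distinct apical tuft subcomponents on one cell
threshold = 8     # coincident active synapses needed for a segment to respond

# Each tuft segment samples a different sparse subset of the vote inputs.
segments = [rng.choice(n_inputs, size=20, replace=False) for _ in range(n_segments)]

# A random set of currently active vote-carrying inputs.
active = np.zeros(n_inputs, dtype=bool)
active[rng.choice(n_inputs, size=40, replace=False)] = True

# The cell is biased/predictive if ANY segment sees enough coincident votes,
# so different segments can integrate different sources independently.
segment_hits = [int(active[seg].sum()) for seg in segments]
cell_predictive = max(segment_hits) >= threshold
print(segment_hits, cell_predictive)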

2 Likes

I was commenting on Numenta’s models diagrammed in the Columns paper:

We are talking about the CC-to-CC connections, which are depicted as blue horizontal arrows in the output layers. Compare that to the legend on the right: blue represents context and points to distal dendrites; purple represents feedback and points to apical dendrites.

So in Numenta’s model, the axons out of the output layer connect to distal dendrites of cells in the output layer of other CCs, and to apical dendrites of cells in the input layer of the same CC. Again, this is Numenta’s model (at least at the time of the Columns paper). I implemented the output layer in a different way.

In my implementation, the CC-to-CC connections in the output layers are either proximal connections (which is what I call them in the code), or we assume the apical branches in the upper layers are short enough that they drive active states instead of predictive states (in which case they could be apical connections).
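
For what it’s worth, here is a toy sketch of that distinction (a made-up class and thresholds, not the actual implementation): proximal overlap drives the active state, distal overlap only yields a predictive state, and a flag decides whether apical input counts as driving (short upper-layer branches) or modulating:

class ToyCell:
    """Toy neuron distinguishing driving vs. modulating input zones."""
    PROXIMAL_THRESHOLD = 3   # overlap needed to drive activation
    DISTAL_THRESHOLD = 5     # overlap needed to form a prediction

    def __init__(self, apical_drives_active=True):
        # If apical branches are short (upper layers), treat them as driving,
        # like the proximal zone; otherwise as modulating, like distal.
        self.apical_drives_active = apical_drives_active

    def step(self, proximal_overlap, distal_overlap, apical_overlap):
        driving = proximal_overlap
        modulating = distal_overlap
        if self.apical_drives_active:
            driving += apical_overlap
        else:
            modulating += apical_overlap
        active = driving >= self.PROXIMAL_THRESHOLD
        predictive = modulating >= self.DISTAL_THRESHOLD
        return active, predictive

cell = ToyCell(apical_drives_active=True)
print(cell.step(proximal_overlap=1, distal_overlap=2, apical_overlap=4))  # (True, False)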

Keep in mind that I am much more interested in functionality than in biological plausibility, so I could very well be doing something implausible in my implementation.

3 Likes

Paul, thank you for sharing these important details and the different possible interpretations of CC-to-CC communication. My perspective above also hinges on these details. Like yourself, I look at these models from a functional point of view more than from a biological one. Like all of us, I would like to know whether there is biological evidence supporting multiple pathways in CC-to-CC communication. That would have tremendous potential for higher-order models at the inter-CC level, and that is where I was coming from with my insights.

This is very interesting and, in my opinion, a critical point that our neuroscientists should look into, for more precise evidence of how CC-to-CC connections take place. How many pathways are documented? This crucial question still seems to carry a good degree of uncertainty in the research community, not just at Numenta.

Thanks for your input, Eric, and for pointing out other possible pathways that may also be in play. These other pathways, which are also mentioned by Jeff in his book, open many other possibilities as to how consensus-building across groups of CCs takes place; that is what my largely intuitive insights above are based on. Like Paul, I take the functional perspective, and mine is closer to information theory than to biology. But I am a firm supporter of the principle that we have to base all our models on biological evidence, at least if our goal is to understand the brain, and I share that goal firmly.

One question I would like any of our experts who are well informed about the published neuroscience in the TBT context to answer: are there any studies that have measured the activity patterns of large sets of co-active CCs during and after this “voting” process? Most wet-lab observations I have read thus far are either at the single-neuron-pair level or, more recently, for large sets of neurons using fluorescent dyes and genetic engineering. But has any research group simply polled the activity sequences across large sets of CCs? That data would be invaluable for substantiating the TBT voting models.

If that had been done I think Jeff would have put it in his book.

You are right, but Jeff also mentioned in the live Brains@Bay Meetup that a lot has happened since the book was published, and that he could basically already be working on his next book. In addition to that, I asked Jeff directly during that session how he acquires field research and lab results to support his theories. He gave a long response. In essence, he is often made aware of research that may support Numenta’s work, but often they need to scrape the publication platforms.

Thanks again for bringing up the thalamus as an additional potential pathway. Allow me to quote just one sentence from my book review posted above: “Movement is essential to all this perceptual activity. Movement enables attention to be shifted, perhaps with the participation of some hierarchical co-voting and some intervention by the thalamus.” I just wanted to point out that I was thinking in this direction, though perhaps not as the general pathway for CC-to-CC communication; I assumed the existence of other CC-to-CC pathways with lower-layer inputs.

1 Like

I just started “Conscious interiority” on this blog. Our modern consciousness evolved, in the West, between 1000 and 500 BC. Until then, we had sex but not erotics, fights but no regret, no fantasy, etc. Please have a look there!

Hi dmac,
I like your hypothesising! :blush:

I crudely envision/think of the basal ganglia (BG) as a brain region (of our individual brains, or neural “actention selection serving systems”) that contributes an “actention switching station” [one that does its ‘switching’ based on input from sensory-input-gathering and evaluating (and storing) areas at all levels elsewhere] to how we come to behave the ways we do.
The BG quite possibly also contribute to how we become preoccupied emotionally as opposed to intellectually, and vice versa.

My notably ‘MAD’-inspired, messily and redundantly textualised and in parts mirth-inviting (since on one occasion written in a spirit of squeaky clean antiseptic humor—as well as more often with aid of septic humored jocularity) perspective or overview of (or outlook on) what is going on (one mainly but not only focused on human affairs/brains/evolution) might be gleaned from what I’ve uploaded to my own aeimcinternetional.org :grin:

2 Likes

I just decided to have a chat with ChatGPT on my question regarding the Thousand Brains Theory and this is what resulted:

You

How does the Thousand Brains Theory need to be expanded in order to include attention as a central element?

ChatGPT

The Thousand Brains Theory, proposed by Jeff Hawkins, is a fascinating framework that suggests the brain doesn’t have a single centralized processor but operates more like a collection of small, localized “mini-brains” distributed throughout the neocortex. These mini-brains, or “columns,” collectively process sensory information and generate predictions about the world. However, the original theory doesn’t explicitly address attention as a central element.

To expand the Thousand Brains Theory to include attention, we could consider how attentional mechanisms facilitate the processing of sensory information within the framework of distributed columnar networks. Here are some key points to consider:

  1. Selective Attention Mechanisms: Attention plays a crucial role in selectively focusing on specific sensory inputs while filtering out irrelevant information. Within the Thousand Brains framework, attentional mechanisms could involve modulating the activity of certain cortical columns or enhancing the connectivity between relevant columns to prioritize processing of specific sensory features.
  2. Dynamic Allocation of Resources: Attention involves the dynamic allocation of cognitive resources to different stimuli or tasks based on their relevance or salience. This could be achieved within the Thousand Brains model by adjusting the strength of connections between columns or by dynamically reconfiguring the network to allocate resources to the most relevant sensory inputs.
  3. Integration with Predictive Processing: Attentional mechanisms could interact with the predictive processing aspect of the Thousand Brains Theory. For example, attentional signals could influence the generation of top-down predictions to prioritize processing of expected sensory inputs or to update predictions based on new information.
  4. Feedback Loops: Attention involves both top-down (voluntary) and bottom-up (stimulus-driven) processes. Feedback loops between higher-level cognitive areas and sensory processing regions could modulate attentional selection within the Thousand Brains framework, allowing for flexible control over which sensory inputs are processed in greater detail.
  5. Attention and Learning: Attention is closely linked to learning and memory processes. Within the Thousand Brains framework, attentional mechanisms could influence synaptic plasticity within cortical columns, shaping the network’s representations based on the attended stimuli and facilitating learning of relevant associations.

By incorporating attention as a central element within the Thousand Brains Theory, we can develop a more comprehensive understanding of how the brain efficiently processes sensory information and generates coherent perceptions of the world. This expanded framework would provide insights into the neural mechanisms underlying attentional control and its interaction with predictive processing within distributed cortical networks.

1 Like

You

Could you generate some sample programming code to illustrate this sequence above?

ChatGPT

Sure, here’s a simplified Python code example to illustrate the sequential processes involved in visual perception and object recognition within a model inspired by the Thousand Brains Theory with feedback loops, dynamic resource allocation, and predictive processing:


import numpy as np

learning_rate = 0.1   # step size for prediction updates (undefined in the original)
num_iterations = 10   # number of processing passes (undefined in the original)

# Define cortical columns representing different features
class CorticalColumn:
    def __init__(self, receptive_field):
        # receptive_field = (row_start, row_stop, col_start, col_stop)
        self.receptive_field = receptive_field
        self.activity = 0.0

    def patch(self, image):
        # Slice out this column's receptive field from a 2-D array
        r0, r1, c0, c1 = self.receptive_field
        return image[r0:r1, c0:c1]

    def process_input(self, sensory_input):
        # Simulate feature detection based on the receptive field
        self.activity = self.patch(sensory_input).mean()

# Define a higher-level cortical area for predictive processing
class HigherLevelArea:
    def __init__(self, prediction):
        self.prediction = prediction

    def generate_prediction(self):
        # Generate a top-down prediction based on prior knowledge
        return self.prediction

    def update_prediction(self, column):
        # Update the predicted patch based on the prediction error
        r0, r1, c0, c1 = column.receptive_field
        prediction_error = column.activity - self.prediction[r0:r1, c0:c1]
        self.prediction[r0:r1, c0:c1] += prediction_error * learning_rate

# Simulate visual sensory input
visual_input = np.random.rand(100, 100)  # Simulated visual image

# Initialize cortical columns representing different features
cortical_columns = [CorticalColumn((i, i + 5, j, j + 5))
                    for i in range(0, 100, 10) for j in range(0, 100, 10)]

# Initialize the higher-level area with a neutral (nonzero) prior, so the
# attentional gain does not silence all activity on the first pass
higher_level_area = HigherLevelArea(np.full((100, 100), 0.5))

# Simulate sequential processing steps
for _ in range(num_iterations):
    # Step 1: Sensory input processing in cortical columns
    for column in cortical_columns:
        column.process_input(visual_input)

    # Step 2: Feedback loops modulating attention
    attentional_signal = higher_level_area.generate_prediction()  # Top-down signal
    for column in cortical_columns:
        column.activity *= column.patch(attentional_signal).mean()  # Modulate activity

    # Step 3: Dynamic allocation of resources (omitted in this sketch)

    # Step 4: Predictive processing and object recognition
    for column in cortical_columns:
        higher_level_area.update_prediction(column)

# Final prediction after the processing iterations
final_prediction = higher_level_area.prediction

This code is a simplified simulation of the sequential processing steps involved in visual perception and object recognition within a model inspired by the Thousand Brains Theory. It includes the processing of sensory input in cortical columns, feedback loops for attention modulation, a placeholder for dynamic allocation of resources, and predictive processing for object recognition. Note that this is a highly simplified example that does not capture the full complexity of neural processing in the brain, but it provides a basic illustration of the concepts.

1 Like

Got to love it, huh? :wink: :thinking:

  1. Processing Steps:


   +-------------------------+
   |    Sensory Input        |
   +-------------------------+
                 |
                 v
   +-------------------------+
   | Cortical Columns        |
   +-------------------------+
                 |
                 v
   +-------------------------+
   | Feedback Loops          |
   +-------------------------+
                 |
                 v
   +-------------------------+
   | Dynamic Allocation of   |
   | Resources (Optional)    |
   +-------------------------+
                 |
                 v
   +-------------------------+
   | Predictive Processing   |
   +-------------------------+

This diagram represents the sequential processing steps involved in the algorithm, starting from the sensory input and proceeding through the cortical columns, feedback loops, dynamic allocation of resources (if included), and predictive processing in the higher-level cortical area.

Please note that this diagram provides a high-level overview and does not capture all the details of the algorithm or its implementation. It’s meant to illustrate the flow of information and processing steps in a simplified manner.


You

What are the main processes taking place in the feedback loop step? And what are the factors or features influencing the feedback in this loop? Give some possible examples.

ChatGPT

In the feedback loop step of the algorithm, the main processes involve the modulation of cortical column activity based on feedback signals from higher-level cortical areas. These feedback signals can influence the processing of sensory information in the cortical columns, thereby modulating attention and shaping perceptual processing. Here are the main processes taking place in the feedback loop step:

  1. Top-Down Modulation: Feedback signals from higher-level cortical areas provide top-down modulation of cortical column activity. These signals convey information about the observer’s goals, expectations, or attentional priorities, which can influence how sensory information is processed.
  2. Attentional Selection: The feedback signals can selectively enhance or suppress the activity of cortical columns representing specific features or stimuli. This attentional selection process allows the system to focus resources on relevant sensory inputs while filtering out irrelevant information.
  3. Contextual Influence: The feedback signals carry contextual information that helps contextualize sensory inputs within the broader context of the perceptual scene. This contextual influence can guide perceptual interpretation and disambiguate sensory inputs based on prior knowledge or expectations.
  4. Adaptation and Learning: Feedback signals can also convey information about the success or failure of perceptual processing, which can inform adaptive changes in the system. For example, if a prediction error occurs due to a mismatch between the predicted and actual sensory inputs, the feedback signals may drive learning processes to update internal models and improve future predictions.

Factors or features that influence the feedback in this loop include:

  1. Task Demands: The observer’s goals or task demands influence the content of the feedback signals. For example, if the task is to search for a specific object in a scene, the feedback signals may prioritize the processing of features relevant to that object.
  2. Stimulus Salience: The salience or significance of sensory inputs can influence the strength and direction of the feedback signals. Highly salient stimuli may receive stronger attentional enhancement, whereas less salient stimuli may be suppressed.
  3. Expectations and Prior Knowledge: The observer’s expectations and prior knowledge about the environment influence the content of the feedback signals. For instance, if the observer expects to see a certain object in a particular context, the feedback signals may bias perceptual processing towards features associated with that object.
  4. Perceptual Context: The broader perceptual context, including surrounding stimuli and contextual cues, can shape the content of the feedback signals. Contextual information helps guide attention and perceptual interpretation, ensuring that sensory inputs are processed in a coherent and meaningful manner.

These factors interact to dynamically modulate cortical column activity and shape perceptual processing according to the observer’s goals, expectations, and the characteristics of the sensory environment.
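
As a rough extension of the earlier Python sketch (the salience_map, task_weight, and weighting below are illustrative assumptions, not part of the transcript’s code), these factors could be folded into a single per-column feedback gain:

import numpy as np

def feedback_gain(prediction_patch, salience_patch, task_weight=1.0):
    # Top-down expectation: how strongly this patch is currently predicted
    expectation = prediction_patch.mean()
    # Bottom-up salience: e.g. local contrast within the receptive field
    salience = salience_patch.mean()
    # Columns that are both expected and salient get boosted; the constant
    # keeps the gain from collapsing to zero when either factor is absent.
    return task_weight * (0.5 + expectation * salience)

# Hypothetical use with the earlier sketch's objects:
# salience_map = np.abs(visual_input - visual_input.mean())
# for column in cortical_columns:
#     gain = feedback_gain(column.patch(higher_level_area.prediction),
#                          column.patch(salience_map))
#     column.activity *= gain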

1 Like