“Winner-take-all” is a biological mechanism in which one or a few neurons within a set (i.e., those with the highest activation levels) dominate the outcome of a computation. The more active neurons suppress the activity of their neighbors, becoming the only cells that contribute to a given decision or computation.
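The mechanism described above can be sketched in a few lines. This is a minimal illustrative implementation of a hard winner-take-all operation, not code from the study: the `k` most active units keep their activations and all others are silenced.

```python
import numpy as np

def winner_take_all(activations, k=1):
    """Keep the k most active units; suppress all others to zero."""
    activations = np.asarray(activations, dtype=float)
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k largest activations
    out[winners] = activations[winners]
    return out

# Only the most active unit (0.9) survives; the rest are suppressed.
print(winner_take_all([0.2, 0.9, 0.4, 0.7], k=1))
```

With `k=1` this is the classic winner-take-all; larger `k` gives a "k-winners-take-all" variant.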
Iqbal and his colleagues tried to realistically mimic this biological computation using neuromorphic hardware and then used it to improve the performance of well-established machine learning models. To do this, they used IBM’s TrueNorth neuromorphic chip, which is specifically designed to mimic the brain’s organization.
Capturing biologically grounded WTA computations from the brain (A) to design neural state machines (B) and translating them onto IBM’s TrueNorth neuromorphic hardware chip (C) as well as implementing the WTA as a neural layer (D) in AI architectures. Credit: Iqbal et al.
“Our biophysical network model aims to capture the key features of neocortical circuits, focusing on the interactions between excitatory neurons and four major types of inhibitory neurons,” explained Iqbal.
“The model incorporates experimentally measured properties of these neurons and their connections in the visual cortex. Its key feature is the ability to implement ‘soft winner-take-all’ computations, where the strongest inputs are amplified while weaker ones are suppressed.”
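A common way to sketch this "soft winner-take-all" behavior is divisive normalization with a gain exponent: strong inputs are amplified relative to weak ones, but weak inputs are merely suppressed, not zeroed out. The function and its `gain` parameter below are illustrative assumptions, not the biophysical model from the paper.

```python
import numpy as np

def soft_wta(x, gain=3.0):
    """Soft winner-take-all via exponentiation plus divisive normalization.
    gain > 1 amplifies the strongest inputs' share of the total activity
    while weaker inputs are suppressed but remain nonzero ("soft")."""
    x = np.asarray(x, dtype=float)
    powered = np.power(np.clip(x, 0.0, None), gain)
    return powered / powered.sum()

before = np.array([0.5, 0.3, 0.2])
after = soft_wta(before)
# The strongest input's share of total activity grows (0.5 -> ~0.78),
# while the weakest shrinks (0.2 -> ~0.05) without being silenced.
```

Unlike the hard variant, every unit stays active, so downstream layers still see a graded signal.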
By performing these brain-inspired computations, the team’s approach can enhance important signals while filtering out noise. The key advantage of their NeuroAI system is that it introduces a biologically grounded yet computationally efficient approach to processing visual information, which could help to improve the performance of AI models.
“One of our most exciting achievements was the successful implementation of our brain-inspired computations on IBM’s TrueNorth neuromorphic chip,” said Iqbal.
“This demonstrates that we can translate principles from neuroscience to real hardware. We were also thrilled to see significant improvements in the performance of Vision Transformers and other deep learning models when we incorporated our winner-take-all inspired processing. For example, the models became much better at generalizing to new types of data they hadn’t been trained on—a key challenge in AI.”
Iqbal and his colleagues combined the soft winner-take-all computations performed using their approach with a Vision Transformer-based model. They found that their approach significantly improved the model’s performance on a digit classification task involving completely “unseen” data, in a zero-shot learning setting.
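One plausible way to picture this combination is to insert a soft winner-take-all stage between a Vision Transformer's feature extractor and its classification head, sharpening each feature vector before classification. The sketch below is a hypothetical stand-in using NumPy, not the authors' actual architecture: `features` stands in for ViT embeddings, and `gain` is an assumed parameter.

```python
import numpy as np

def soft_wta(x, gain=3.0, axis=-1):
    """Soft winner-take-all along one axis: amplify strong features,
    suppress (but never zero) weak ones, per feature vector."""
    powered = np.power(np.clip(x, 1e-12, None), gain)
    return powered / powered.sum(axis=axis, keepdims=True)

# Hypothetical use: sharpen a batch of ViT-style feature vectors
# before they reach the classification head.
rng = np.random.default_rng(0)
features = rng.random((4, 8))        # (batch, feature_dim) stand-in embeddings
sharpened = soft_wta(features)       # same shape, each row normalized
```

Because the transform is monotone per row, the strongest feature in each vector stays the strongest; only the contrast between strong and weak features changes.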