Could a neuroscientist understand a microprocessor?

An interesting paper appeared yesterday that is very relevant to HTM work. In it, the authors ask whether current computational neuroscience techniques could be used to understand a microprocessor, another complex system. They tried techniques such as lesioning, looking at statistics of bit patterns, analyzing the tuning properties of transistors, dimensionality reduction, etc. They concluded the answer is “no” and asked what this means for trying to understand the brain. They don’t really have a good answer, except that neuroscientists should be more open to innovative techniques.
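
To make one of those techniques concrete, here is a minimal sketch of the lesioning idea (my own toy construction, not code from the paper): knock out one element at a time, test whether some behavior survives, and notice how little the result says about function.

```python
import random

# Hypothetical toy "circuit": the task depends on a hidden subset of elements.
# This mirrors the paper's lesion experiments in spirit only.
random.seed(42)
NUM_TRANSISTORS = 100
critical = set(random.sample(range(NUM_TRANSISTORS), 30))  # unknown to the "scientist"

def behavior_works(lesioned):
    """The task (e.g. 'the game boots') fails if the lesioned element is critical."""
    return lesioned not in critical

# Single-lesion sweep: remove each element in turn and test the behavior.
essential = [t for t in range(NUM_TRANSISTORS) if not behavior_works(t)]
print(f"{len(essential)} of {NUM_TRANSISTORS} elements are 'essential' for the task")
# The sweep labels elements essential or inessential, but says nothing about
# *what* any element computes -- the paper's point about lesion studies.
```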

I think the conclusion that current computational techniques are insufficient has been pretty obvious to most of us involved with HTM. (I’ve often used an example very similar to this to try to explain why we need a different approach.) In any case, it is nice to see this experiment actually carried out by reputable computational neuroscientists.

The full paper is here:

– Subutai

Could a neuroscientist understand a microprocessor?

Eric Jonas, Konrad Kording

Abstract

There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.
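
For a feel of what “interesting structure” without meaningful description looks like, here is a hedged toy sketch (my own construction, with invented signals, not the paper’s code) of the kind of dimensionality-reduction analysis the abstract refers to:

```python
import numpy as np

# Simulate "recordings" from many transistors as mixtures of a few global
# rhythms plus noise, then apply PCA via SVD.
rng = np.random.default_rng(0)
n_transistors, n_timesteps = 200, 1000
t = np.arange(n_timesteps)

# Two global drivers, loosely analogous to clock and data-bus activity.
drivers = np.stack([np.sin(2 * np.pi * t / 50),
                    np.sign(np.sin(2 * np.pi * t / 180))])
mixing = rng.normal(size=(n_transistors, 2))
traces = mixing @ drivers + 0.5 * rng.normal(size=(n_transistors, n_timesteps))

# PCA on mean-centered data.
centered = traces - traces.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 components:", np.round(explained[:3], 3))
# A couple of components explain most of the variance -- "interesting
# structure" -- yet they reflect global drivers, not the processor's
# logical organization.
```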

6 Likes

Love it, thanks for sharing.

A similar point was made by James Smith in his paper Biologically Plausible Spiking Neural Networks:

The vast majority of the logic in a modern day superscalar processor is there to increase performance, enhance reliability, and improve energy efficiency. Meanwhile, the von Neumann paradigm is concealed, deeply enmeshed in all the circuitry implementing the performance and reliability improvements. For example, the program counter is an essential part of the von Neumann architecture. Yet, if one were to try to find the program counter in a superscalar processor, the task is ill-stated. There isn’t a single program counter; there may be many tens or even hundreds, depending on the number of concurrent in-flight instructions.

So it is with cognitive paradigms in the neocortex, which is a mass of interconnected neurons that communicate via voltage spikes. In essence, the neocortex is a gigantic asynchronous sequential machine built of unreliable parts, connected via paths with variable delays. Maintaining stability of this huge asynchronous sequential machine likely requires constant adjustment via a complex set of mechanisms, as well as consuming a large fraction of neurons for maintaining reliability and performance. Cognition itself may appear almost as a second order effect which is deeply intertwined in the multitude of supporting neurons and mechanisms.

4 Likes

Good quote! Borrowing from them: we need to find the computations underlying intelligence that are “deeply enmeshed” in brain circuitry that is also trying to do a whole bunch of other stuff that is irrelevant to us.

Subutai, do you think your intuition (and that of the research team as a whole) for discerning the “relevant” processes from the “irrelevant” ones is improving as time goes by, or is it your experience that every step forward is a struggle in and of itself?

I’m wondering whether intuition about neurological processes is even possible?

The task of reverse engineering a modern CPU might be harder than reverse engineering a brain, if one is equally clueless about both. The problem with a CPU is the number of abstraction layers: there might be a dozen such layers in a modern computer system. People created those layers in order to manage complexity. Ironically, those layers help us understand the system when the function of each layer is known, but complicate understanding when it is not.
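
As a toy illustration of that layering point (my own example, with a hypothetical NAND-level adder): the same 4-bit addition is trivial at the arithmetic layer and opaque at the gate layer unless you already know what the layer is for.

```python
# The same computation at two abstraction layers. At the high layer it is
# just "x + y"; at the gate layer, "addition" is invisible in a sea of NANDs.

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, carry):
    # Sum and carry-out built entirely from NAND gates.
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    axb = nand(t2, t3)            # a XOR b
    t4 = nand(axb, carry)
    t5 = nand(axb, t4)
    t6 = nand(carry, t4)
    s = nand(t5, t6)              # (a XOR b) XOR carry
    cout = nand(t1, t4)           # majority(a, b, carry)
    return s, cout

def add4(x, y):
    carry, total = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

assert add4(5, 6) == (5 + 6) & 0xF  # gate layer agrees with the arithmetic layer
```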

4 Likes

Yep, and most of those pieces are there to solve “engineering” problems. Most of the complexity comes from solutions to the memory/off-chip wall problem, which adds caches, speculative execution, etc. (there are others, such as power density and reliability).

But at the bottom, the von Neumann model (which underlies all of them) can be explained, and understood, in very few words.
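
As a hedged sketch of that claim (opcodes and memory layout invented for illustration), a complete von Neumann machine really does fit in a few lines: one memory holding both program and data, a program counter, and a fetch-decode-execute loop.

```python
# Minimal von Neumann machine (illustrative opcodes of my own invention).
def run(memory):
    pc, acc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]    # fetch
        pc += 2
        if op == "LOAD":    acc = memory[arg]   # decode + execute
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "JUMP":  pc = arg
        elif op == "HALT":  return memory

# Program and data share one memory: add cells 10 and 11 into cell 12.
mem = ["LOAD", 10, "ADD", 11, "STORE", 12, "HALT", 0, 0, 0, 2, 3, 0]
print(run(mem)[12])  # -> 5
```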

Biology has many more “engineering” problems, and it is their solutions that make it hard to see where the “von Neumann” part is. Certainly, the way to find it is to use the CLA approach; otherwise perhaps we will never be able to discover it.

PS: Note that this paper uses a really simple architecture. I can’t imagine reverse engineering the coherence controllers of any modern processor. It’s not easy at all to “forward” engineer them; reverse engineering would be nearly impossible.

1 Like

This topic is quite related to questions I often get about our approach. I wrote a blog post on the paper, which was published just a couple of days ago:

http://numenta.com/blog/can-neuroscientists-understand-the-brain.html

4 Likes

We are building the Model T equivalent of the neocortex.

Thanks for the blog post, Subutai. I love this sentence because it’s a very succinct metaphor for HTM.

2 Likes