Two different coordinate systems feed the same neuron

There is a recent general-audience book on brain maps (Brainscapes) that mentions neurons receiving inputs from two different coordinate systems: some parietal neurons get inputs from both a retina-centered coordinate system and a body-centered coordinate system. This reminded me of Numenta’s idea of shifting coordinate systems (reference frames) even when you look at something as simple as a cup.
On another topic: I saw a 2021 article saying that a scientist named Idan Segev had successfully simulated neurons with a ‘temporal convolutional network’ (TCN). He found that cortical neurons need many more layers in the simulation than some other types of neurons do, because cortical neurons have NMDA receptors. He says TCNs require much less computing time than the usual method of simulating neurons (numerically solving sets of equations). He trained the net on a sample of the neuron’s behavior, and it generalized to other input/output spike patterns and subthreshold voltages that the neuron exhibits.
This is exciting, because he is able to include all the nonlinear behavior at the various dendrites, and in theory (it seems to me) he now has the building blocks of a brain. Numenta’s neuron models may be leaving something important out.
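For intuition, here is a minimal PyTorch sketch (mine, not the authors’ published code) of the kind of temporally convolutional network being described: a stack of causal 1D convolutions over the presynaptic spike raster, with two read-out heads for the per-millisecond spike probability and the subthreshold somatic voltage. The synapse count, layer count, channel width, and kernel length below are placeholders, not the published architecture.

```python
# Minimal sketch (not the authors' published code): a temporally convolutional
# network mapping presynaptic spike trains to a per-ms spike probability and a
# subthreshold-voltage estimate. All sizes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1D convolution that only sees past inputs (left-padded, i.e. causal)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        return super().forward(F.pad(x, (self.left_pad, 0)))  # pad the past only

class NeuronTCN(nn.Module):
    def __init__(self, n_synapses=1024, hidden=64, layers=7, kernel_size=35):
        super().__init__()
        blocks, in_ch = [], n_synapses
        for _ in range(layers):
            blocks += [CausalConv1d(in_ch, hidden, kernel_size), nn.ReLU()]
            in_ch = hidden
        self.backbone = nn.Sequential(*blocks)
        self.spike_head = nn.Conv1d(hidden, 1, 1)    # per-ms spike probability
        self.voltage_head = nn.Conv1d(hidden, 1, 1)  # per-ms somatic voltage

    def forward(self, spikes):  # spikes: (batch, n_synapses, time), binary raster
        h = self.backbone(spikes)
        return torch.sigmoid(self.spike_head(h)), self.voltage_head(h)

# Toy usage: one second of input at 1 ms resolution from 1024 synapses.
x = (torch.rand(1, 1024, 1000) < 0.002).float()
spike_prob, voltage = NeuronTCN()(x)
print(spike_prob.shape, voltage.shape)   # both (1, 1, 1000)
```

The causal padding is what keeps such a mapping usable as a running simulation: the output at time t depends only on spikes up to time t.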

2 Likes

Beniaguev et al. 2021, “Single cortical neurons as deep artificial neural networks”:

Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons’ input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs’ weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
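That last point about “pattern matching from a set of spatiotemporal templates” has a concrete reading: each first-layer causal filter is a (synapses × time-lags) weight matrix, and its output at time t is just the dot product of that template with the most recent window of the input raster. Here is a small NumPy illustration of the idea (mine, not code from the paper):

```python
# Illustration (not from the paper): a first-layer causal filter acts as a
# spatiotemporal template. Its response at time t is the dot product between
# the template and the most recent window of the presynaptic spike raster.
import numpy as np

rng = np.random.default_rng(0)
n_syn, T, lags = 50, 1000, 30          # synapses, timesteps (ms), template length

spikes = (rng.random((n_syn, T)) < 0.01).astype(float)   # input spike raster
template = rng.normal(size=(n_syn, lags))                # one learned filter (here random)

def template_response(raster, template):
    """Sliding dot product of the template against the trailing window at each t."""
    n_syn, lags = template.shape
    T = raster.shape[1]
    out = np.zeros(T)
    for t in range(lags - 1, T):
        window = raster[:, t - lags + 1 : t + 1]   # last `lags` ms of input
        out[t] = np.sum(window * template)         # pattern match = correlation
    return out

response = template_response(spikes, template)
print(response.shape, response[:5])
```

Under this reading, a dendritic branch that responds preferentially to clustered, near-coincident synaptic input roughly corresponds to a template whose large weights are concentrated on a few synapse rows and a narrow range of lags.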

3 Likes

If they use DNNs to mimic flesh, how does that replicate the learning abilities of real neurons, since there’s no backpropagation?

1 Like

There is a continuum of neuron modelling methods, from high-fidelity spiking neurons to the familiar DNN version (i.e. basically just matrix multiplication).

This paper from 2022 is a good review of that range:

One takeaway is that DNNs are often used first (think rapid development with a full tool chain) and only later ported to spiking neurons for embedding in neuromorphic chips (for power reduction and speed), where debugging methods are difficult or absent.

However, running a model across different node types helps show which node features are required to get the desired outputs/behavior. It appears that some models don’t need the extra realism: it does not improve them, it just increases the complexity and computational load.
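As a toy illustration of that continuum (my sketch, not from the review): the same weighted input can drive either a stateless rate unit, which is just a matrix multiply plus a nonlinearity, or a leaky integrate-and-fire (LIF) unit, which keeps a membrane state and emits discrete spikes. The extra state, threshold, and reset are the “realism” that may or may not improve a given model.

```python
# Toy sketch of the modelling continuum: the same weighted input drives either
# a stateless rate unit (DNN-style) or a leaky integrate-and-fire (LIF) unit.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=16)                  # weights onto one neuron
x = (rng.random((16, 200)) < 0.05).astype(float)    # 16 input spike trains, 200 steps

# DNN-style: no state, output is just a nonlinearity of the instantaneous drive.
rate_out = np.maximum(0.0, w @ x)                   # ReLU(w·x) at every timestep

# LIF-style: a membrane potential integrates the same drive, leaks, and spikes
# whenever it crosses a threshold, then resets.
beta, threshold = 0.9, 1.0                          # leak factor and spike threshold
v, spikes = 0.0, np.zeros(200)
for t in range(200):
    v = beta * v + w @ x[:, t]                      # leaky integration of input current
    if v >= threshold:
        spikes[t] = 1.0
        v = 0.0                                     # reset after a spike

print(rate_out[:10])
print(spikes[:50])
```

(The snnTorch tutorials linked in the next post wrap exactly this kind of update in trainable modules, using surrogate gradients so backprop still works.)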

For those brand new to spiking networks (like myself), the snnTorch tutorials also explain this continuum well.
https://snntorch.readthedocs.io/en/latest/tutorials/index.html
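For example, the hand-rolled LIF loop above looks roughly like this in snnTorch itself (a small sketch based on my reading of those tutorials; treat the exact signatures as something to verify in the docs):

```python
# Rough snnTorch equivalent of the hand-rolled LIF loop above.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)            # leaky integrate-and-fire neuron, leak factor beta
mem = lif.init_leaky()               # initialise the membrane state
cur_in = 0.3 * torch.rand(200)       # input current, one value per timestep

spk_rec = []
for step in range(200):
    spk, mem = lif(cur_in[step], mem)   # one timestep: integrate, threshold, reset
    spk_rec.append(spk)

spikes = torch.stack(spk_rec)
print(spikes.sum())                  # total number of emitted spikes
```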

5 Likes

They use backprop to create a model of a neuron; once they have the model, I don’t think backprop is needed any more.
But I’m not clear on the following: once the neuron model has been created, how do its synapses get updated when it interacts with other neurons?

1 Like

This seems like a functional model or computational equivalent, meant to aid thinking.
I don’t think they are replacing one model with the other in any literal sense.
DNNs are not dynamic in their implementation: once trained, the weights are frozen, a process often described as crystallization.

2 Likes