Synapses may not be purely binary - time for weighted connections?

This very well-done study suggests that the size of a synapse correlates with the strength of the evoked potential:

“Finally, we have now provided first direct experimental evidence for a linear relationship between synapse size and synaptic transmission strength, which supports the general functional relevance of quantifying neocortical synapse size through EM. Intriguingly, we found a robust size-strength dependency only when many responses were temporally averaged in the very stable recordings we achieved in slices.”
-snip-
“Thus, the linear relationship we established between synapse size and synaptic transmission strength provides the experimental means for extending the simple binary label of ‘connected’ or ‘not-connected’ in neocortical wiring diagrams to assigning actual physiological weights to synaptic connections. This is a key step on the path towards simulating information flow within neocortical connectomes.”

5 Likes

I am surprised it had not yet been established but… nice :slight_smile:

1 Like

I have read a fair number of papers on the subject and this is the first that I can recall seeing it.
There are other important conclusions in the paper, but this one jumps out as perhaps the most important to the work we do here.

I think it’s also worth pointing out that synapses are not necessarily limited to one-to-one connections between axons and dendrites. Axons and dendrites branch out, and their respective arbors can connect in multiple locations. This would also permit a natural scaling of influence between pre- and post-synaptic neurons. I’d be willing to bet that there is some kind of sigmoid-like function that can be derived from the distribution of the branches of these arbors and the availability of local resources to form synapses, such that the threshold and saturation characteristics of traditional Hebbian-type synapse weights can be approximately recovered from this kind of physical constraint.
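As a toy illustration of that bet, here is a minimal sketch (every parameter is invented) in which the effective weight between a neuron pair is the number of contacts formed across overlapping arbor sites, capped by a finite pool of local resources. Sweeping the amount of arbor overlap traces out exactly that kind of threshold-then-saturate curve:

```python
# Hypothetical toy model: effective pre->post "weight" as the number of
# contacts formed where two arbors overlap, capped by limited resources.
import numpy as np

rng = np.random.default_rng(42)

def effective_weight(n_candidate_sites, p_contact=0.2, max_resources=8):
    """Contacts actually formed between one axon/dendrite pair."""
    contacts = rng.binomial(n_candidate_sites, p_contact)
    return min(contacts, max_resources)  # saturation from finite resources

# More arbor overlap -> more candidate sites -> larger (but bounded) weight.
for sites in (0, 5, 10, 20, 40, 80):
    w = np.mean([effective_weight(sites) for _ in range(10_000)])
    print(f"{sites:3d} candidate sites -> mean weight {w:.2f}")
```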

2 Likes

I believe that even if there is information represented by synaptic strength, the same capacity and information-processing capability can be achieved using purely binary bits, though you would actually be using multiple bits to serve the function of one synapse. At the same time, as far as hardware implementation is concerned, the efficiency, performance, and cost of sparse bit-vector operations can be at least an order of magnitude better than trying to model varying-weight synapses. It’s much more efficient to perform a binary operation between two vectors than it is to compute a matrix multiply with floating-point values, or even reduced bit-depth values. Granted, this is all speculation, but sparse bit vectors are a very powerful means of representing and processing information, and I think we’ve barely scratched the surface of them.
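To make the contrast concrete, here is a minimal sketch (illustrative only, not a benchmark) of the two styles of computation side by side:

```python
# Overlap via bitwise AND + popcount on packed bits, vs. the equivalent
# float dot product. Sizes and data are arbitrary.
import numpy as np

n = 1024  # number of synapses / input bits

# Binary version: 1024 bits packed into plain Python integers.
a_bits = int.from_bytes(np.random.default_rng(0).bytes(n // 8), "big")
b_bits = int.from_bytes(np.random.default_rng(1).bytes(n // 8), "big")
overlap = (a_bits & b_bits).bit_count()  # one AND + popcount (Python 3.10+)

# Scalar-weight version: the same connectivity as float32 weights.
w = np.random.default_rng(0).random(n, dtype=np.float32)
x = (np.random.default_rng(1).random(n) < 0.5).astype(np.float32)
activation = float(w @ x)  # n multiply-accumulates

print(overlap, activation)
```

On real hardware the binary path maps onto wide AND and POPCNT instructions, which is where the claimed order-of-magnitude advantage would come from.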

1 Like

I don’t know that this would be more efficient. While it may technically take fewer bits to encode the value (one bit), you lose that saving in keeping track of the address of the bit. If you trade one byte plus a 32-bit address for 4 bits plus 4 separate 32-bit addresses pointing to those bits, you did not win any space; it costs more.

In the many-to-many matrix of values (axon to dendrite/synapse) you have to keep track of both the connection strength and the location, or address, of that connection. This can be sparse, but you still have to keep track of what is connected to what.

Likewise, you add the memory-addressing time to call up those four bits, versus a single memory lookup for the single byte. From the processor’s point of view, a 1-bit logical operation takes exactly the same time as an add or multiply. If you can replace multiple memory accesses and computations with one, you have sped up the computation.
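Spelling out that arithmetic under the stated assumptions (32-bit addresses, sizes in bits):

```python
ADDR = 32  # assumed address width in bits

one_byte_weight = 8 + ADDR         # one byte of weight + one address = 40 bits
four_single_bits = 4 * (1 + ADDR)  # four 1-bit synapses, each separately
                                   # addressed = 132 bits

print(one_byte_weight, four_single_bits)  # 40 vs 132: the addresses dominate
```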

Perhaps the binary activations should be reconsidered as well. I have serious doubts as to whether binary weights and activations are truly as powerful as scalar ones.

Synapses between L2/3 pyramidal cells show a bimodality (two latent states in addition to the “non-connected” state). This is the first time such bimodality has been clearly evident in the neocortex (it had already been found in the hippocampus).

This is not the case for synapses between an L2/3 pyramidal cell and other neurons.

“…modeled a continuum of synapse sizes (Arellano et al., 2007) by a log-normal distribution (Loewenstein, Kuras and Rumpel, 2011; de Vivo et al., 2017; Santuy et al., 2018). A continuum is consistent with most neural network models of learning, in which synaptic strength is a continuously graded analog variable. Here we show that synapse size, when restricted to synapses between L2/3 pyramidal cells, is well-modeled by the sum of a binary variable and an analog variable drawn from a log-normal distribution.”

https://www.biorxiv.org/content/10.1101/2019.12.29.890319v1.full.pdf
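For intuition, here is a hedged sketch of that descriptive model; the parameter values (p_on, step, mu, sigma) are invented for illustration, not taken from the paper’s fits:

```python
# Synapse size modeled as a binary latent state plus a log-normal
# analog component, per the quoted passage. Parameters are made up.
import numpy as np

rng = np.random.default_rng(0)

def sample_synapse_sizes(n, p_on=0.5, step=0.2, mu=-2.0, sigma=0.3):
    state = rng.random(n) < p_on          # binary latent state (0 or 1)
    analog = rng.lognormal(mu, sigma, n)  # log-normal analog component
    return state * step + analog          # size = binary step + analog part

sizes = sample_synapse_sizes(100_000)
# A histogram of np.log(sizes) should show two overlapping modes rather
# than the single bump a pure log-normal continuum would give.
```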

4 Likes

This is relevant. Could the overlap in the SP be a function of the permanences (instead of using a threshold)? I’ve always had that doubt… The current approach seems to “discard” valuable information.
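To make the question concrete, here is a minimal sketch contrasting the two (all values illustrative):

```python
# Standard SP overlap thresholds permanences into binary "connected"
# synapses; a weighted variant would use the permanences directly.
import numpy as np

permanences = np.array([0.05, 0.45, 0.60, 0.90, 0.20])
input_bits  = np.array([1,    1,    0,    1,    1   ])
threshold   = 0.5

# Standard SP: count active inputs on *connected* synapses only.
binary_overlap = int(((permanences >= threshold) & (input_bits == 1)).sum())

# Weighted variant: every active synapse contributes its permanence.
weighted_overlap = float(permanences[input_bits == 1].sum())

print(binary_overlap, weighted_overlap)  # 1 vs 1.6
```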

@Deftware @Bitking As far as implementation goes, I think one could get a long way by essentially using a standard 16-, 32- or 64-bit value type as a sort of bit mask, with each set bit corresponding to an established synapse between two neurons. If the pre-synaptic neuron fires, then the post-synaptic dendrite would receive that number of input spikes. Thus, the strength of the connection between the two neurons is encoded in the number of set bits.

Another variation might be to allow each byte (or k bytes) to be allocated to a specific dendrite on the post-synaptic neuron. That way, only one neuron-to-neuron lookup is required to obtain activations for multiple target dendrites, with each one having its strength encoded in the number of set bits.
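A minimal sketch of the first variant, assuming one 64-bit mask per pre/post pair (the layout and names here are my own, not an established implementation):

```python
# Connection strength encoded as the popcount of a per-pair bit mask.
import numpy as np

n_pre, n_post = 128, 128
# masks[pre, post] is a 64-bit word; each set bit is one synaptic contact.
masks = np.random.default_rng(0).integers(
    0, 2**63, size=(n_pre, n_post), dtype=np.uint64)

def deliver_spikes(pre_id):
    """Presynaptic neuron fires: each target gets popcount(mask) spikes."""
    row = masks[pre_id]
    # Per-word popcount via unpacking each uint64 into its 64 bits.
    bits = np.unpackbits(row.view(np.uint8).reshape(-1, 8), axis=1)
    return bits.sum(axis=1)  # spikes delivered to each postsynaptic neuron

spikes = deliver_spikes(pre_id=7)
print(spikes[:5])
```

The byte-per-dendrite variation would just reinterpret slices of each mask as belonging to different target dendrites, still paying only one lookup per neuron pair.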

1 Like

This is not the case if you use memristors to model the synaptic weights.