While ANNs are typically trained for unidirectional propagation, action potential propagation in biological neurons is symmetric, e.g. "it is not uncommon for axonal propagation of action potentials to happen in both directions" (from "Dynamics of signal propagation and collision in axons", PRE). Since this is possible, neurons should be evolutionarily optimized for such multidirectional propagation, which might be crucial e.g. for learning (currently not well understood), or consciousness (?)

Are artificial neurons operating in a multidirectional way being considered?

One approach is to somehow contain a representation of a joint distribution model, e.g. ρ(x,y,z), which allows finding conditional distributions in any direction by substituting some variables and normalizing. Below is such an inexpensive practical realization from https://arxiv.org/pdf/2405.05097 , allowing for many additional training approaches - could biology use some of them?
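To make the "substitute and normalize" idea concrete, here is a minimal toy sketch (my own illustrative code, not the paper's implementation): the joint density of two variables normalized to [0,1] is represented as a linear combination in an orthonormal polynomial basis, with coefficients found by direct estimation; a conditional density in either direction then follows by substituting one variable and renormalizing via the constant-basis term.

```python
import numpy as np

# Orthonormal polynomial basis on [0,1] (rescaled Legendre), with f_0 = 1
def f(j, u):
    u = np.asarray(u, dtype=float)
    return [np.ones_like(u),
            np.sqrt(3) * (2*u - 1),
            np.sqrt(5) * (6*u**2 - 6*u + 1)][j]

m = 3  # basis size
rng = np.random.default_rng(0)
# toy dependent sample, both variables normalized to [0,1]
x = rng.uniform(0, 1, 10000)
y = np.clip(x + rng.normal(0, 0.2, x.size), 0, 1)

# direct estimation of coefficients: a[i,j] = mean(f_i(x) f_j(y));  a[0,0] = 1
a = np.array([[np.mean(f(i, x) * f(j, y)) for j in range(m)] for i in range(m)])

# conditional density rho(y|x0): substitute x = x0, renormalize via the f_0 term
def cond_density(y_grid, x0):
    coef = np.array([sum(a[i, j] * f(i, x0) for i in range(m)) for j in range(m)])
    coef = coef / coef[0]          # f_0 = 1 carries the normalization
    return sum(coef[j] * f(j, y_grid) for j in range(m))
# the same symmetric coefficient table a gives rho(x|y0) by swapping indices,
# i.e. propagation works in either direction from one model
```

Note that such a truncated linear-combination density can locally go slightly negative; in practice it is clipped or calibrated, but the multidirectional substitution mechanism is as above.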

Are there other approaches? Is there research in this direction?

Is multidirectional propagation important or even crucial for biological neural networks, e.g. for their learning?

I don't think they're optimized for that; it's simply the way action potentials work: they propagate radially in all directions, except towards areas that have already discharged. So the whole cell membrane will gradually discharge regardless of where the spark starts.

Imagine a network of pipes in the shape of a single neuron, filled with air and flammable gas. You can start a spark anywhere within the pipes and the flame will propagate everywhere; after all the gas has burned, it is slowly replenished for the next firing.

The above seems "directional" because the "spark plugs" - upstream input axon terminations (synaptic boutons, to be more specific) - most frequently attach to dendritic trees, since these expose a larger surface area/volume than any other part of the neuron.

Well, at least that's my armchair, half-scientific reasoning.
Synaptic boutons, if you look at pictures, are like tiny fingers that can "grab" onto any other neuron membrane they bump into.

I don't think the directionality of the signal between different neurons actually changes. Sure, if an action potential starts on the axon it will propagate both ways - towards the axon's own boutons and towards the cell body and dendrites - but only the axonal synapses (boutons) will be able to forward that signal to other neurons. Synapses are one-way.

But on one hand, we don't understand e.g. the learning of biological NNs - while for ANNs we literally use backpropagation for that ... so isn't some propagation in the opposite direction crucial for biological learning as well?
Maybe the focus on unidirectional propagation is one of the reasons we have been unsuccessful in this understanding?

On the other hand, such symmetric multidirectional propagation simply happens in biological realizations - shouldn't evolution do its best to exploit it?

Indeed, "neurons that fire together, wire together" is a nice summary, but it seems we are still far from understanding the learning of biological neural networks.

Maybe there are some more sophisticated mechanisms to uncover, and focusing on unidirectional propagation could be what is blocking us …

For example, in theory a neuron could hold/estimate/update a model of the joint distribution of its connections, which can be relatively simple to represent as above ... how could we confirm or deny this?

I suspect this is still a low-level understanding - not yet the details of its consequences, which might lead e.g. to much more efficient learning than in current artificial neural networks (?)

To really understand it, we could start with reachable theoretical possibilities - then search for them among the hidden, subtle dependencies of biological neural networks.

And an agnostic theoretical possibility for a single neuron is modelling the joint distribution of its connections (more than the standard value dependence of ANNs) - containing the entire statistical dependencies, allowing for multidirectional propagation (as in biological neurons), and adding subtle novel training possibilities ... It is relatively easy to represent such a density as a linear combination like above, so it should be reachable by biology + evolution.

Agnostic in the sense of avoiding arbitrary assumptions - here: instead of guessing a parametrization like standard ANNs do, we model the joint distribution, which contains all the statistical information.

Standard ANNs are brute-force optimizations of guessed parametrizations ... maybe biology is smarter? The question is in which direction ...

The available statistical information is the joint distribution; a neuron modelling it could additionally propagate in any direction, propagate values or distributions, and would have additional training modes ... it seems basic, but looks like a new research direction.

I have just made an update of https://arxiv.org/pdf/2405.05097 , e.g. extending KAN-like networks with many additional possibilities:
• propagation in any direction,
• propagation of values or of probability distributions,
• interpretation of parameters as mixed moments,
• conscious addition of triplewise and higher-order dependencies,
• inexpensive evaluation of the modeled mutual information,
• additional training approaches, e.g. direct estimation, tensor decomposition, information bottleneck.

Mainly adding information bottleneck training, which I believe is great - instead of optimizing weights, we directly optimize the content of intermediate layers: maximizing mutual information with the desired output, while also minimizing mutual information with the input - to remove noise and extract the most crucial information.
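As a small concrete illustration of that objective (my own toy sketch, not the paper's training procedure): for discrete variables the information bottleneck functional I(Z;Y) - β·I(Z;X) can be evaluated directly from probability tables, given a joint p(x,y) and a stochastic encoder p(z|x).

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats, from a joint probability table p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (px @ py)[mask])))

# toy Markov chain Z <- X -> Y: correlated (x, y), noisy encoder p(z|x)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])            # X and Y correlated
p_zx = np.array([[0.9, 0.1],            # p(z|x): rows x, columns z
                 [0.1, 0.9]])

p_x = p_xy.sum(axis=1)
p_zy = p_zx.T @ p_xy                     # p(z,y) = sum_x p(z|x) p(x,y)
p_zx_joint = (p_zx * p_x[:, None]).T     # p(z,x) = p(z|x) p(x)

beta = 0.5
ib = mutual_information(p_zy) - beta * mutual_information(p_zx_joint)
```

Optimizing the intermediate-layer content Z then means choosing the encoder to increase this `ib` score: keep information about Y, discard the rest of X.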

The recently popular Kolmogorov-Arnold Networks (KAN) use just summation and trained single-parameter functions. I will introduce them and then go to my approach/extension based on Hierarchical Correlation Reconstruction (HCR) - with neurons modelling the joint density of their neighborhood as a linear combination.
Such an HCRNN degenerates to a KAN-like network if including only pairwise dependencies - one can consciously add higher orders, and it has many new advantages in comparison to KAN, especially multidirectional propagation, also of densities (both available to biological neural networks), and many additional training approaches (biology uses something different from ANN backpropagation), like direct estimation, tensor decomposition, and the most promising: directly using the famous information bottleneck approach.
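To make the multidirectional-propagation claim tangible, here is a minimal standalone sketch (my own toy code, pairwise dependencies only, two basis functions): from one estimated coefficient set, expected values can be propagated in either direction, x → y or y → x.

```python
import numpy as np

sq3 = np.sqrt(3.0)
f1 = lambda u: sq3 * (2*u - 1)   # first orthonormal polynomial on [0,1]; f_0 = 1

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 10000)
y = np.clip(0.8*x + 0.1 + 0.05*rng.normal(size=x.size), 0, 1)

# pairwise HCR-style coefficients, estimated directly as mixed moments
a01 = np.mean(f1(y))             # marginal moment of y
a10 = np.mean(f1(x))             # marginal moment of x
a11 = np.mean(f1(x) * f1(y))     # pairwise dependence term

# expected value propagated in either direction from the SAME coefficients:
# E[y|x0] = 1/2 + (a01 + a11 f1(x0)) / (2 sqrt(3) (1 + a10 f1(x0)))
def E_y_given_x(x0):
    return 0.5 + (a01 + a11 * f1(x0)) / (2*sq3 * (1 + a10 * f1(x0)))

def E_x_given_y(y0):
    return 0.5 + (a10 + a11 * f1(y0)) / (2*sq3 * (1 + a01 * f1(y0)))
```

The point of the sketch: unlike a weight matrix used in one fixed direction, the coefficient set (a01, a10, a11) is direction-agnostic, so "forward" and "backward" propagation are the same formula with indices swapped.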

It turns out there is also HSIC direct training with the information bottleneck (2 articles below) - the optimized tr(Kx Kz) terms are similar, but they use a local KDE basis, which is terrible in high dimensions (and extremely dependent on the kernel width sigma). I use a global basis, which handles high dimensions, plus additional optimizations and possibilities from neurons containing local joint distribution models.
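For reference, a minimal sketch of the HSIC quantity such approaches optimize (the standard biased estimator with an RBF kernel; my own illustrative code, not those papers' training loop) - note how the result hinges on the kernel width sigma mentioned above:

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    """Gram matrix K[i,j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def hsic(x, z, sigma=1.0):
    """Biased HSIC estimate: tr(Kx H Kz H) / (n-1)^2, H = centering matrix."""
    n = x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kx = rbf_kernel(x, sigma)
    Kz = rbf_kernel(z, sigma)
    return float(np.trace(Kx @ H @ Kz @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
z_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
z_ind = rng.normal(size=(200, 1))             # independent of x
```

HSIC-bottleneck training then minimizes hsic(input, layer) while maximizing hsic(layer, target), layer by layer, without backpropagating through the whole network.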

They emphasize that it is a more biologically plausible training; additionally, mine allows for multidirectional propagation, also of probability distributions - biological NNs can do both.

A summary of the differences between current artificial and biological NNs ... which come from low-level differences - understanding them could allow us to build closer ANNs ...

I would say such differences are: multidirectional propagation, also of probability distributions, and the use of learning different from backpropagation (e.g. IB; all 3 are possible for joint-distribution neurons) ... what else?

Aside from axonal triggering, what about the dendritic tree – when a firing potential is received there, is it the case that the firing propagates throughout the entire tree even before it gets to the axon? It does not seem likely to me that the dendritic tree would have "blockers" – akin to electronic diodes – to prevent that from happening.

If that’s the case, the next question is what impact might this have on biological learning and general processing? There are probably papers written up on this, I would imagine.

Multidirectional propagation in biological neurons is indeed more complicated; here are some materials: Neural backpropagation - Wikipedia

Also, when opposite waves of action potentials meet, they cancel - complicated computations could be performed this way, e.g. through subtle changes of propagation times.