This is more of a thought experiment than a question about why this doesn't actually happen (or maybe it does, *shrugs*).
Thinking about how few neurons are active at any one time, and the reasons behind this, I'm wondering whether it would be more efficient (or whether it's even possible) to condense the processing by using different triggering mechanisms. Perhaps this is what neurotransmitters already do to some degree anyway. So, for instance, let's say there's a 3×3 image (for simplicity), where R = red, G = green, and B = blue.
Couldn't this be fed into a network where each of the 9 neurons per layer is capable of representing any of the 3 colors? The horizontal connections would link every neuron in the layer to every other, and each synapse would be capable of both receiving and sending (let's call our pretend neurotransmitters R, G, and B respectively). So while any one neuron is in a B state, it's horizontally reinforced by the other neurons in a B state by way of the B neurotransmitter. This keeps the currently active horizontal network separate from the others at any one time. Depolarization wouldn't be necessary, because each neuron is always active. And because no one neuron ever needs to represent more than one pixel, it could never interfere with a pattern of any other color; each pattern would be completely safe from interference. Instead of turning the neuron "on", you would only need to change the neuron's representational state.
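To make the idea concrete, here's a rough Python sketch of that 3×3 layer. Everything here (the `StateNeuron` class, the simple "count matching neighbors" rule) is my own invention just to illustrate the point, not an established model:

```python
# A hypothetical "state neuron": instead of being on/off, it always holds
# one of several representational states (here the colors R, G, B).
STATES = ("R", "G", "B")

class StateNeuron:
    def __init__(self, state):
        assert state in STATES
        self.state = state

    def lateral_support(self, neighbors):
        # Count only signals carried by the matching "neurotransmitter":
        # a neuron in a B state is reinforced only by other B neurons,
        # so R and G patterns can't interfere with it.
        return sum(1 for n in neighbors if n.state == self.state)

# A 3x3 "image" encoded as one layer of 9 always-active state neurons.
image = ["R", "R", "B",
         "G", "B", "B",
         "G", "G", "B"]
layer = [StateNeuron(s) for s in image]

# Each neuron is horizontally connected to every other neuron in the layer.
for i, neuron in enumerate(layer):
    neighbors = layer[:i] + layer[i + 1:]
    print(neuron.state, neuron.lateral_support(neighbors))
```

Each neuron ends up reinforced only by the others currently sharing its state, which is the "separate horizontal networks" property described above: changing a pixel means changing one neuron's state, never turning anything off.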
I'm super new to a lot of this, so be kind - but it seems to me that this would, at the very least, save on material resources, no?
Implementing this digitally might require adding a list of ports and port handlers for neural communication for any one particular application, plus some sort of state management for the neuron. I have no idea what I'm even saying right now - I haven't even looked at the codebase yet (which is, ironically, one of the main reasons I registered this account).
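For what it's worth, the ports-and-handlers idea might look something like this in Python. Every name here (`PortedNeuron`, the handler scheme) is made up for illustration and isn't based on any actual codebase:

```python
# Loose sketch of "ports and port handlers" plus state management:
# each neuron keeps one handler per message type (our pretend
# neurotransmitters R, G, B) and a managed representational state.

class PortedNeuron:
    def __init__(self, state):
        self.state = state      # current representational state
        self.support = 0        # lateral reinforcement received so far
        # One "port" per neurotransmitter type.
        self.ports = {t: self._make_handler(t) for t in ("R", "G", "B")}

    def _make_handler(self, transmitter):
        def handler():
            # Only the port matching the neuron's current state
            # reinforces it; other transmitters are simply ignored,
            # so they can't interfere.
            if transmitter == self.state:
                self.support += 1
        return handler

    def receive(self, transmitter):
        self.ports[transmitter]()

n = PortedNeuron("B")
n.receive("B")   # matching transmitter reinforces the neuron
n.receive("R")   # non-matching transmitter is ignored
print(n.state, n.support)
```

The dispatch-by-transmitter-type here is one simple way to realize the "each synapse receives and sends its own transmitter" idea in software; a real implementation would presumably handle this very differently.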