Standard ANN with sequentially activated neuronal input

This is regarding ML implementations of neuroscience findings. Assuming some sequential input (e.g. 1D numeric/binary), has your team considered simply creating a standard ANN (backprop-trained if differentiable) in which each neuron must receive its inputs in some predefined sequence (e.g. input X1 must fire before input X2)? Multiple neurons may be connected to exactly the same input (previous-layer) neurons but have different constraints on their firing order (higher-layer neuron A->X1;X2, higher-layer neuron B->X2;X1). This constraint would introduce a non-linearity into the network in and of itself. The network would have to be sparsely connected, because the number of possible input sequences for a neuron grows factorially with its number of inputs.
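A minimal sketch of what I mean, assuming inputs arrive with firing times; the function name and the gating rule (suppress the neuron unless its inputs fired in the prescribed order) are my own illustrative choices, not an established implementation:

```python
import numpy as np

def order_gated_neuron(values, times, order, w, b):
    """Hypothetical order-constrained neuron: a standard affine + ReLU
    unit whose output is gated to zero unless its inputs fired in the
    prescribed sequence."""
    # Gate is 1.0 only if firing times are strictly increasing
    # when read in the neuron's required order.
    gate = float(np.all(np.diff(times[order]) > 0))
    # Standard weighted sum, suppressed when the order is violated.
    return gate * max(0.0, float(w @ values + b))

# Neurons A and B see the same two inputs but demand opposite orders.
values = np.array([1.0, 1.0])
times = np.array([0.0, 1.0])          # X1 fires before X2
w, b = np.array([0.5, 0.5]), 0.0
a_out = order_gated_neuron(values, times, np.array([0, 1]), w, b)  # A: X1;X2
b_out = order_gated_neuron(values, times, np.array([1, 0]), w, b)  # B: X2;X1
# a_out == 1.0 (order satisfied), b_out == 0.0 (order violated)
```

Note the hard gate as written is non-differentiable, so a soft version (e.g. a sigmoid over time differences) would be needed for backprop training.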


Numenta has published a paper about applying sparsity to DL systems. Outside of that, however, not that I know of.
Regarding the algorithm you've suggested, I don't understand how that would work. Can you elaborate, please?
I guess what you're suggesting bears a strong resemblance to the displacement cells Numenta suggested in this paper.
But I still don't know if that would work.

Optional note: to increase the number of inputs/connectivity of each neuron (e.g. A), each sequential input (e.g. X1; X2) could detect some static combination of inputs (Numenta uses the terminology "pattern" here) from the previous layer (e.g. x1-1, x1-2, x1-3, x1-4; x2-1, x2-2, x2-3, x2-4, x2-5). Each of these combinations (e.g. a summation) could be passed through an activation function (to reach some detection threshold). The sequence order would only be enforced between the amalgamated inputs (X1; X2).
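To illustrate the idea above: each amalgamated input pools a fixed subset of previous-layer neurons, sums them, and thresholds the sum into a single detection event. The pooling subsets and threshold value here are arbitrary assumptions for the sketch:

```python
import numpy as np

def pattern_unit(prev_layer, idx, threshold=2.0):
    """Detect a static combination ("pattern") of previous-layer
    activations: sum a fixed subset and threshold the result."""
    return float(np.sum(prev_layer[idx]) >= threshold)

# Hypothetical: X1 pools 4 previous-layer neurons, X2 pools 5.
prev = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0], dtype=float)
x1 = pattern_unit(prev, [0, 1, 2, 3])      # x1-1 .. x1-4
x2 = pattern_unit(prev, [4, 5, 6, 7, 8])   # x2-1 .. x2-5
# The firing-order constraint would apply only between the detection
# events X1 and X2, not among the raw neurons inside each pattern.
```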

Standard artificial neural networks apply some function (e.g. W*X + b) to the input of each neuron, along with some non-linear activation function (e.g. sigmoid/ReLU). They do not discriminate with respect to the order in which the input (previous-layer) neurons fire. Given that we know neurons (through distal connections) can become biased towards firing based on some previous network state, it would make sense to test such conditions in an existing (known functional) architecture/learning algorithm.
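To make the order-blindness concrete: the affine form w·x + b is invariant under any simultaneous permutation of weights and inputs, so relabelling which input "fires first" cannot change a standard neuron's output. A quick check (values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # weights
x = rng.normal(size=3)   # inputs
b = 0.1                  # bias

def relu(z):
    return max(0.0, z)

y = relu(float(w @ x) + b)

# Permute the inputs (and weights consistently): the dot product,
# and hence the neuron's output, is unchanged. The standard neuron
# has no notion of input arrival order.
perm = np.array([2, 0, 1])
y_perm = relu(float(w[perm] @ x[perm]) + b)
```

This is the invariance the proposed sequence constraint would deliberately break.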

Thanks, I will check out the paper.


I have started an example implementation: