Dynamic Fuzzy Clustering Spiking Neural Network

https://www.researchgate.net/publication/332257743_DYNAMIC_FUZZY_CLUSTERING_SPIKING_NEURAL_NETWORK

We describe a new type of spiking neural network whose nodes exist in a sufficiently high-dimensional space that any node can, in principle, lie within a certain radius r of any other node. This radius forms a hypersphere around each node, and a node may have any number of other nodes within its radius. If a particular node Z is known to have activated, the algorithm selects only those nodes within the functional radius as candidates to fire next. The closer a candidate is to the node that activated, the greater its probability of firing next. If a node Y within Z's functional radius has a probability of firing next that exceeds a threshold value, it will fire; and if Y also lies within the radius of another node X that had previously fired along with Z, then its probabilities under Z and X are summed to determine whether it should fire next. To break the symmetry of connections we create two instances of each node: a pre mode and a post mode. In the pre mode a node exists as a centroid, while in the post mode it exists within the radius of a centroid. The algorithm performs a variant of the fuzzy c-means clustering algorithm in order to set each node's probability of firing next from a particular centroid's perspective, and coordinates this across the entire network. We will ultimately train the network to represent a "way of thinking" that it determines by connecting nodes added to the system that do not represent input variables or output variables, but mediate between them.
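A minimal sketch of the firing rule described in the abstract, in Python. The radius r, the threshold, and the inverse-distance weighting standing in for the paper's fuzzy c-means variant are all assumptions chosen for illustration; the paper's exact update equations are not reproduced here.

```python
import numpy as np

def fuzzy_memberships(positions, fired, r, m=2.0):
    """For every node, sum fuzzy-membership scores with respect to each
    recently fired node (centroid) whose hypersphere contains it."""
    n = len(positions)
    score = np.zeros(n)
    for z in fired:
        d = np.linalg.norm(positions - positions[z], axis=1)
        inside = (d < r) & (d > 0)          # candidates within Z's radius
        # closer nodes get higher membership (inverse-distance weighting,
        # an assumption standing in for the paper's fuzzy c-means variant)
        w = np.zeros(n)
        w[inside] = 1.0 / d[inside] ** (2.0 / (m - 1.0))
        if w.sum() > 0:
            score += w / w.sum()            # normalised per centroid, then summed
    return score

def next_to_fire(positions, fired, r, threshold=0.05):
    """Nodes whose summed membership across all fired centroids crosses
    the threshold (the threshold value here is arbitrary)."""
    return np.flatnonzero(fuzzy_memberships(positions, fired, r) > threshold)

# toy usage: 50 nodes in a 16-dimensional space, nodes 0 and 1 just fired
rng = np.random.default_rng(0)
pos = rng.normal(size=(50, 16))
scores = fuzzy_memberships(pos, fired=[0, 1], r=6.0)
print(np.argsort(scores)[-5:])              # the five most likely to fire next
print(next_to_fire(pos, fired=[0, 1], r=6.0))
```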

In your paper "DYNAMIC FUZZY CLUSTERING SPIKING NEURAL NETWORK" I think you are describing recruitment of neurons based on proximity, i.e. on their locations within these hyperspheres. The properties are interesting.

Have you considered the effects of the network of inhibitory inter-neurons that are excited by the action of these spiking neurons? They have physical extents that create a defined area of action and input receptive field. This action will certainly change the spatial properties of the activation pattern.

One of the defining features of the cortex is the sparse nature of the activation. The cortex is already one of the highest consumers of resources in the body and evolution has acted to reduce the number of active elements to the lowest number needed to carry a representation.

Can you relate your model to the formation of the SDR (Sparse Distributed Representation) as has been described in great detail in the Numenta papers?

Good insights Mark. My algorithm uses a 2% value in its equations to guarantee that at any one time only a sparse number of nodes can be active. Also, I profess that I believe, perhaps naïvely, that inhibitory neurons were there solely to dampen propagation so that the sparsity target was met.

I also believe that sparsity makes for properties that are advantageous in their own right, such as resistance to noise, and not only serves to preserve energy.
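A quick sketch of how a fixed sparsity cap like that 2% figure could be enforced: keep only the top-k scores per step (k-winners-take-all, the same mechanism Numenta uses for SDRs). The function below is an illustration of the idea, not the paper's equation.

```python
import numpy as np

def enforce_sparsity(scores, fraction=0.02):
    """Zero out all but the top `fraction` of nodes (k-winners-take-all).
    With fraction=0.02, at most 2% of nodes stay active on any step."""
    k = max(1, int(len(scores) * fraction))
    winners = np.argpartition(scores, -k)[-k:]  # indices of the k highest scores
    active = np.zeros_like(scores, dtype=bool)
    active[winners] = True
    return active

scores = np.random.rand(1000)
print(enforce_sparsity(scores).sum())  # -> 20 active nodes out of 1000
```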

There are several topological functions that have been observed and are thought to aid in parsing sensations. These are formed by the interaction of lateral axonal connections and the inhibitory inter-neurons. The Gabor filter in the V1 area is one of them. The widely documented "Mexican hat" receptive field is another.
Some references:

Google “Gabor filter in the V1” for much more on this.
It is widely used in Artificial Neuronal Networks for convolution processing.
https://pdfs.semanticscholar.org/8f8a/5be9dc16d73664285a29993af7dc6a598c83.pdf
https://pdfs.semanticscholar.org/3372/33837287a1750c90e88cd0560a6c66401ac6.pdf

For the “python code-y” people on the forum:
http://scikit-image.org/docs/0.11.x/auto_examples/plot_gabors_from_astronaut.html
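For anyone who wants to try it directly, here is a minimal version of what that scikit-image example does: a Gabor filter applied to the library's built-in astronaut image. The frequency and theta values are arbitrary illustration choices, not values from the V1 literature.

```python
import numpy as np
from skimage import data, color
from skimage.filters import gabor

# grayscale test image shipped with scikit-image
image = color.rgb2gray(data.astronaut())

# apply a Gabor filter at a few orientations
for theta in (0, np.pi / 4, np.pi / 2):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    print(f"theta={theta:.2f}  mean response={np.abs(real).mean():.4f}")
```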

Most of the texts I have read on neurology document this in describing the receptive fields of neurons.
Again, google is your friend here.
http://www.physics.usyd.edu.au/teach_res/mp/ns/doc/nsCoupling.htm

https://www.pnas.org/content/100/5/2848

Thank you for that information, but all these processes are intended to determine when a particular neuron will fire. So each activation has a semantic interpretation within the set of those that activated at any one time. We could view the sparse representation of activity as a language where the order of the words doesn't matter, and from each time step to the next we are saying different sentences. All those considerations would need to make for a more expressive language than mine in order to matter: I can show that with a sufficient number of dimensions my system can express any other possible "language system" in which SDR activity leads to sequences of activity. So to increase the amount of nuance in my SDRs, I can arbitrarily increase the number of dimensions.
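To make the "order of words doesn't matter" analogy concrete: an SDR at one time step can be treated as an unordered set of active node IDs, and the similarity between two "sentences" as set overlap. A toy sketch (the Jaccard overlap measure here is my assumption, not a formula from the paper):

```python
def sdr_overlap(a: set[int], b: set[int]) -> float:
    """Fraction of shared active nodes between two SDRs (Jaccard similarity).
    Order never enters: an SDR is just the set of nodes active at one step."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

step1 = {3, 17, 42, 99}      # nodes active at time t
step2 = {17, 42, 99, 256}    # nodes active at time t+1
print(sdr_overlap(step1, step2))  # -> 0.6
```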

I look forward to seeing your demonstration models.


Thank you for the encouragement :slight_smile: And if you must know, I am also not 100% certain that those topological requirements are unnecessary. But the proof will come, or at least more evidence will be there, once it has been implemented.


I deliberately framed the neural network design this odd way to exploit some key features that I would not have been able to "reach" with a more traditional approach. The feature I was introducing was that I would have not just dynamic connections but dynamic routing as well. This means that the similarity of the pathway mapping inputs to outputs in the network is directly proportional to the similarity of the inputs and the outputs themselves.

If we trained the algorithm to be a chatbot and it was to process the sentence "The people were famished", it would reply in a similar way when it parsed the sentence "Hunger had gripped everyone", because it would use a pathway that uses the exact same hidden nodes. Note that that pathway would have been trained only incidentally, through using those individual words in other utterances, and we would not explicitly need to train every possible sequence of words. Understand that there is hardly anything "new" we encounter in life: every new thing or situation is just a rearrangement of the variables. So it should make sense that this new network should be very robust to "novel" experiences, using routes within it that, as a group, had "never" been used before.

Also, the hidden nodes represent a way of processing that is akin to how we think. If we were to train a chatbot with utterances, the training process I have outlined will force the system to form the simplest representation of the input-output mapping. At first this will simply be a direct mapping; then, with more training examples, the system starts to create more elaborate pathways because of the reuse of hidden nodes. I believe that this elaborateness represents thinking of some sort.

This model even has dynamic memory of symbolic information. If we start at a node, follow the pathways during processing, and get back to the same node, the nodes that were firing with it the first time it fired probably won't be the same when it fires again, and the next time, and the next. This represents some sort of rotation within the space of nodes, and this rotation represents learning. I can say that whenever the exact same input is fed into the system, past inputs will also affect the output, because the hidden nodes firing alongside the current input nodes will have been "chosen" by which nodes fired in the past. This means that if you ask it "What's the time?" twice, it could reply "What, are you deaf?" the second time around :slight_smile:
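A toy illustration of the dynamic-routing claim: if routing picks the hidden nodes nearest to the current input's position, then similar inputs automatically share most of their pathway while unrelated inputs do not. Everything here (the nearest-neighbour routing rule, the dimensions, the vectors standing in for sentences) is an assumption for illustration, not the paper's training procedure.

```python
import numpy as np

def route(input_vec, hidden_positions, k=5):
    """Pick the k hidden nodes nearest to the input's position in node space.
    A simplified stand-in for the paper's radius-based candidate selection."""
    d = np.linalg.norm(hidden_positions - input_vec, axis=1)
    return set(np.argsort(d)[:k])

rng = np.random.default_rng(1)
hidden = rng.normal(size=(200, 16))          # 200 hidden nodes in 16-D space

a = rng.normal(size=16)                      # "The people were famished"
b = a + 0.1 * rng.normal(size=16)            # "Hunger had gripped everyone" (similar)
c = rng.normal(size=16)                      # an unrelated sentence

pa, pb, pc = route(a, hidden), route(b, hidden), route(c, hidden)
print(len(pa & pb))  # similar inputs share most hidden nodes in their pathway
print(len(pa & pc))  # unrelated inputs share few, if any
```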