I just watched “Neuromorphic Computing with Priya Panda” on this forum, and I thought the work I’ve been doing on neuromorphic networks might be of interest. The brief video I posted here shows how the actions of a “bug” can be controlled by a neuromorphic network and how the bug can learn a new action based on its experience. The video isn’t technical and focuses mostly on the learning proof of concept, but if anyone is interested in how it works, please ask me.
The application is written in Java and does not use any AI-specific APIs. Although the simulation runs on a traditional computer, the logic of the model does not require memory, system clocks, or other features of a von Neumann machine. In addition:
- The network is made up of a collection of nodes (“neurons”) that are connected to each other in a specific logical pattern. The nodes resemble McCulloch-Pitts neurons in that inputs can be either excitatory or inhibitory, and a simple summing function causes a node to fire when its threshold is met (in my model the thresholds are very simple: either 1 or 2, depending on the role of the node in the network). The nodes are not labelled (all nodes of a given type are identical), and there is no central list of nodes or their connections. The network is subdivided into subnetworks, each of which performs a logical function such as an exclusive OR (XOR). (See the first sketch after this list.)
- The network processes information asynchronously; there is no timing function that coordinates the firing of the nodes. However, the relative timing of events is important. For example, red followed later by green is seen by the network as two separate colors, while red and green seen at the same time are interpreted as yellow. (See the second sketch after this list.)
- The network does not need training; the objective is for the network to learn from its experience.
- The network is deterministic; no weighting or probabilistic functions are used by the model. Once a node is constructed, none of its properties change. One benefit of the deterministic approach is that the actions of the network are explainable (i.e., it isn’t a black box).
- As with HTM, the spatial aspects of the network are very important, especially to the learning algorithm. Nodes are laid out on a hexagonal 2D grid. (See the third sketch after this list.)
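To make the node model concrete, here is a minimal sketch in Java of a McCulloch-Pitts-style node with a fixed threshold and excitatory/inhibitory inputs. The class and method names (`Node`, `receive`, `connectTo`) and the way inhibition clears accumulated excitation are my own illustrative choices, not the actual code or mechanism from the application:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of a McCulloch-Pitts-style node: fixed integer threshold,
 * no weights, inputs are either excitatory or inhibitory.
 */
public class NodeDemo {

    static class Node {
        private final String label;      // for printing only
        private final int threshold;     // 1 or 2 in the model described above
        private final List<Node> targets = new ArrayList<>();
        private final List<Boolean> inhibitsTarget = new ArrayList<>();
        private int excitation = 0;

        Node(int threshold, String label) {
            this.threshold = threshold;
            this.label = label;
        }

        /** Wire this node's output into another node's input (no central connection list). */
        void connectTo(Node target, boolean inhibitory) {
            targets.add(target);
            inhibitsTarget.add(inhibitory);
        }

        /**
         * Receive one input event. An inhibitory input clears any accumulated
         * excitation (one possible reading of "inhibitory"); an excitatory
         * input counts toward the threshold and may cause the node to fire.
         */
        void receive(boolean inhibitory) {
            if (inhibitory) {
                excitation = 0;
                return;
            }
            excitation++;
            if (excitation >= threshold) {
                excitation = 0;          // reset after firing
                fire();
            }
        }

        private void fire() {
            System.out.println(label + " fired");
            for (int i = 0; i < targets.size(); i++) {
                targets.get(i).receive(inhibitsTarget.get(i));
            }
        }
    }

    public static void main(String[] args) {
        Node and = new Node(2, "AND");   // needs two excitatory inputs to fire
        Node out = new Node(1, "OUT");   // fires on a single excitatory input
        and.connectTo(out, false);       // AND's output excites OUT

        and.receive(false);              // first excitatory input: below threshold
        and.receive(false);              // second input: prints "AND fired" then "OUT fired"
    }
}
```

From nodes like this, an XOR subnetwork can in principle be wired as (a OR b) gated by an inhibitory connection from (a AND b), i.e. a threshold-1 node and a threshold-2 node both feeding the output, assuming the inhibitory signal reaches the output no later than the excitatory one; the actual wiring in the model may well differ.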
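The second sketch is a toy illustration of the timing point: red and green arriving close together are read as yellow, while the same events separated in time are read as two distinct colors. The explicit timestamps and the 50 ms window are purely for illustration on a conventional machine; the real model presumably gets this behavior from its network structure rather than from a clock:

```java
/**
 * Toy coincidence detector: events arriving within a short window are
 * treated as simultaneous ("yellow"); otherwise each is seen on its own.
 */
public class CoincidenceDemo {
    static final long WINDOW_NANOS = 50_000_000L;   // 50 ms "simultaneity" window (arbitrary)

    private Long lastRed = null;
    private Long lastGreen = null;

    void seeRed()   { lastRed = System.nanoTime();   report(); }
    void seeGreen() { lastGreen = System.nanoTime(); report(); }

    private void report() {
        if (lastRed != null && lastGreen != null
                && Math.abs(lastRed - lastGreen) <= WINDOW_NANOS) {
            System.out.println("yellow");            // red and green coincide
        } else if (lastGreen == null || (lastRed != null && lastRed > lastGreen)) {
            System.out.println("red");               // red was the most recent event
        } else {
            System.out.println("green");             // green was the most recent event
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CoincidenceDemo eye = new CoincidenceDemo();
        eye.seeRed();         // prints "red"
        Thread.sleep(200);    // well outside the window: seen as separate colors
        eye.seeGreen();       // prints "green"
        eye.seeRed();         // immediately after green, within the window: prints "yellow"
    }
}
```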
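Finally, a small sketch of a hexagonal 2D layout using axial coordinates, where every cell has exactly six neighbors. The coordinate convention is a common one, not necessarily the one the application uses:

```java
/**
 * Hexagonal 2D layout using axial coordinates (q, r); each cell has six neighbors.
 */
public class HexGridDemo {
    // Offsets to the six neighbors of any cell in axial coordinates.
    private static final int[][] NEIGHBOR_OFFSETS = {
        { +1, 0 }, { +1, -1 }, { 0, -1 },
        { -1, 0 }, { -1, +1 }, { 0, +1 }
    };

    /** Return the axial coordinates of the six cells adjacent to (q, r). */
    static int[][] neighbors(int q, int r) {
        int[][] result = new int[6][2];
        for (int i = 0; i < 6; i++) {
            result[i][0] = q + NEIGHBOR_OFFSETS[i][0];
            result[i][1] = r + NEIGHBOR_OFFSETS[i][1];
        }
        return result;
    }

    /** Hex distance between two cells in axial coordinates. */
    static int distance(int q1, int r1, int q2, int r2) {
        int dq = q1 - q2, dr = r1 - r2;
        return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
    }

    public static void main(String[] args) {
        for (int[] n : neighbors(0, 0)) {
            System.out.println("neighbor of (0,0): (" + n[0] + ", " + n[1] + ")");
        }
        System.out.println("distance (0,0) -> (2,-1) = " + distance(0, 0, 2, -1));  // 2
    }
}
```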
I know the sparse detail here isn’t enough to evaluate the model thoroughly, but from an HTM perspective, do you think there might be anything useful here? Could the logic help in figuring out what goes on in individual cortical columns? Any feedback would be really appreciated!
Video link: https://youtu.be/cGwmNzWQv7A