I have developed a spiking neural network defined by a binary vector that is mapped to itself by an ordinary continuous network. The ordinary network is responsible for deciding which binary vector to produce next: each slot in the vector represents a neuron, and the ordinary network holds all the weights and the activation function.

What makes this special is that the inputs and outputs, as real-valued vectors, are concatenated to the binary vector. So if we had a robot, we would concatenate the binary vector with its inputs and its outputs, then map the combined vector to a vector containing the next iteration of the binary vector and the next iteration of the outputs.
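One step of this update can be sketched as follows. This is a minimal illustration, not the actual implementation: the dimensions are made up, and a single random linear layer with a sigmoid stands in for the trained "ordinary" network.

```python
import numpy as np

# Illustrative dimensions: B = binary state slots, I = input dim, O = output dim.
B, I, O = 1000, 64, 16
rng = np.random.default_rng(0)

# Stand-in for the trained ordinary network: one linear layer plus a sigmoid,
# mapping [state; inputs; outputs] -> [next state activations; next outputs].
W = rng.normal(0, 0.05, size=(B + O, B + I + O))

def step(state_bits, inputs, outputs):
    """One iteration: concatenate, map, and split the result."""
    combined = np.concatenate([state_bits, inputs, outputs])
    activations = 1.0 / (1.0 + np.exp(-W @ combined))
    next_state_act = activations[:B]  # real-valued; binarised afterwards
    next_outputs = activations[B:]
    return next_state_act, next_outputs

state = (rng.random(B) < 0.02).astype(float)  # sparse 2% binary state
inp, out = rng.random(I), rng.random(O)
next_act, next_out = step(state, inp, out)
```

The real-valued state activations would then be thresholded back into a sparse binary vector before the next iteration.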

This vector is then combined with the robot's next inputs (taken from the environment) and the process continues. To train the system, we have a human perform actions while wearing a suit that records his inputs and outputs. We then take this system and place it on the robot.

Now we have a sequence of input/output pairs. We then place the binary vector over them: we set a random collection of slots to 1, with the condition that they are sparse (2%) throughout the training data, then train the ordinary neural network to map these input/output pairs.
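The construction of the training pairs could look like the sketch below. The recorded data is random placeholder data here, and the shapes are assumptions; the point is the layout: a random ~2% sparse binary overlay per timestep, with each combined vector at time t mapped to the state and outputs at time t+1.

```python
import numpy as np

rng = np.random.default_rng(1)
T, B, I, O = 100, 1000, 64, 16  # timesteps, state slots, input dim, output dim

# Recorded human demonstration (placeholder random data in this sketch).
inputs = rng.random((T, I))
outputs = rng.random((T, O))

# The overlay: an independent random binary vector per timestep, ~2% of slots on.
states = (rng.random((T, B)) < 0.02).astype(float)

# Training pairs for the ordinary network:
#   x_t = [state_t; input_t; output_t]  ->  y_t = [state_{t+1}; output_{t+1}]
X = np.concatenate([states[:-1], inputs[:-1], outputs[:-1]], axis=1)
Y = np.concatenate([states[1:], outputs[1:]], axis=1)
```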

This network will ultimately learn through a Hebbian effect. If a bit happened to be on by coincidence every time the input was an image of a cat, then after training, an image of a cat will make that neuron light up. Because the assignment is random this is unlikely to be the case, but, also because it is random, some of the neurons that lit up when the human first saw a cat will light up during other, though not all, cat experiences. These neurons will reinforce each other.

When the robot finally sees a cat itself, these neurons will have the highest potential, and after the initial mapping, when we choose the 2% of slots with the strongest activations, they will most likely be among them. Note that even the neurons themselves are subject to this rule: neurons that frequently occur together will also tend to be present in similar circumstances. The binary vector represents the brain here, and causally we should expect similar environments to cause similar neurons to fire, and similar neuron-to-neuron mappings to occur.
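The "choose the 2% with the strongest activations" step is a top-k binarization, which could be sketched like this (the function name and sparsity parameter are mine, for illustration):

```python
import numpy as np

def binarise_top_k(activations, sparsity=0.02):
    """Turn on the top `sparsity` fraction of slots; everything else stays off."""
    k = max(1, int(round(sparsity * activations.size)))
    bits = np.zeros_like(activations)
    top = np.argpartition(activations, -k)[-k:]  # indices of the k strongest
    bits[top] = 1.0
    return bits

rng = np.random.default_rng(2)
acts = rng.random(1000)
bits = binarise_top_k(acts)  # exactly 20 of 1000 slots on
```

This keeps the state at the same fixed sparsity used when the overlay was generated, so the trained mapping always sees inputs of the kind it was trained on.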

Since the inputs can be segmented logically, the binary vector also gets segmented into networks and subnetworks. This segmentation should occur so as to maximise the processing capabilities of the environment, with the way the human acted as a limit, and should be isomorphic to how the human's brain was organised if the number of neurons equals the human's. It will overfit if there are fewer neurons and underfit if there are more than enough neurons.

The experimental results show that this works.

We used random vectors and mapped them to MNIST digits, and the Hebbian factor described above caused new random vectors to produce new digits. Here are the results of the experiment: