Using a Generative Model to Generate Each Weight in a Secondary ANN the Size of the Human Brain as and When Needed

I have developed a spiking neural network defined by a binary vector that is mapped to itself by an ordinary continuous network. The ordinary network is responsible for deciding which binary vector to produce next. Each slot in the binary vector represents a neuron, and the ordinary network holds all the weights and activation functions.

What makes this special is that the inputs and outputs, as real-valued vectors, are concatenated to the binary vector. So if we had a robot, we would concatenate the binary vector with its inputs and outputs, then map the combined vector to a vector containing the next iteration of the binary vector and the next iteration of the outputs.

This vector is then combined with the robot's next inputs (taken from the environment) and the process continues. To train the system, we have a human perform actions while wearing a suit that records his inputs and outputs; we then take this system and place it on the robot. A sketch of one loop iteration follows.
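To make the loop concrete, here is a minimal sketch of one iteration, assuming NumPy, illustrative vector sizes, and a random linear map standing in for the trained continuous network; the names `f` and `step` and all the sizes are hypothetical, and the 2% sparsity rule from the training description below is used to binarize the next state.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATE, N_IN, N_OUT = 1_000, 64, 16   # illustrative sizes, not the real ones
# Random stand-in for the trained "ordinary" continuous network.
W = rng.normal(0.0, 0.1, size=(N_STATE + N_OUT, N_STATE + N_IN + N_OUT))

def f(x):
    return 1.0 / (1.0 + np.exp(-W @ x))   # sigmoid activations

def step(state, sensors, outputs):
    x = np.concatenate([state, sensors, outputs])   # the combined vector
    y = f(x)
    activations  = y[:N_STATE]        # candidate next binary state
    next_outputs = y[N_STATE:]        # next actuator commands
    k = int(0.02 * N_STATE)           # keep the 2% strongest activations
    next_state = np.zeros(N_STATE)
    next_state[np.argpartition(activations, -k)[-k:]] = 1.0
    return next_state, next_outputs

# One closed-loop iteration: the environment supplies fresh sensor readings.
state   = (rng.random(N_STATE) < 0.02).astype(float)  # 2%-sparse initial state
sensors = rng.random(N_IN)                            # stand-in for robot inputs
outputs = np.zeros(N_OUT)
state, outputs = step(state, sensors, outputs)
```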

Now we have a sequence of input/output pairs, and we place the binary vector over them. We set a random collection of slots to 1, with the condition that they are sparse (about 2%) throughout the training data, and then train the ordinary neural network to map each pair to the next.
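A hedged sketch of this dataset construction, with random arrays standing in for the recorded suit data; all names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N_STATE, N_IN, N_OUT = 5_000, 1_000, 64, 16   # frames and illustrative sizes

# Stand-ins for the human's recorded inputs and outputs from the suit.
sensors = rng.random((T, N_IN))
actions = rng.random((T, N_OUT))

# Place the binary vector over the data: ~2% of slots set to 1 at random per frame.
states = (rng.random((T, N_STATE)) < 0.02).astype(np.float32)

# Supervised pairs: frame t (state + inputs + outputs) -> frame t+1 (state + outputs).
X = np.concatenate([states[:-1], sensors[:-1], actions[:-1]], axis=1)
Y = np.concatenate([states[1:],  actions[1:]], axis=1)
# Any regression network can now be trained to map X to Y.
```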

This network will ultimately learn by the Hebbian effect. If a bit happened to be on by coincidence every time the input was an image of a cat, then after training an image of a cat will make that neuron light up. Because the placement is random this exact case is unlikely, but, also because it is random, some of the neurons that lit up when the human first saw a cat will light up during some (though not all) other cat experiences. These neurons will reinforce each other.

When the robot finally sees a cat itself, those neurons will have the highest potential, and after the initial mapping, when we choose the 2% with the strongest activations, they will most likely be among them. Note that the neurons themselves are subject to this rule: neurons that frequently occur together will also be present in similar circumstances. The binary vector represents the brain here, and causally we should expect similar environments to cause similar neurons to fire, and similar neuron-to-neuron mappings to occur.
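Choosing the 2% with the strongest activations amounts to top-k binarization; a minimal sketch, with the function name my own:

```python
import numpy as np

def top_k_binarize(activations, sparsity=0.02):
    """Return a binary vector with 1s at the `sparsity` fraction of
    slots whose activations are strongest."""
    k = max(1, int(sparsity * activations.size))
    out = np.zeros_like(activations)
    out[np.argpartition(activations, -k)[-k:]] = 1.0
    return out
```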

Since the inputs can be segmented logically, the binary vector also gets segmented into networks and subnetworks. This segmentation should occur so as to maximise the processing of the environment, with the way the human acted as a limit, and should be isomorphic to the human's brain if the number of neurons equals the human's. It will overfit if there are fewer neurons and underfit if there are more than enough.

The experimental results show that this will work. We used random vectors, mapped them to MNIST digits, and the Hebbian factor described above caused new random vectors to produce new digits. Here are the results of the experiment:

[results figure omitted]
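A hedged reconstruction of the experimental setup as described: each training image gets a random sparse binary code, a map from codes to images is fit, and a fresh random code is then decoded. The linear least-squares fit and the synthetic stand-in for the MNIST images are my assumptions; the original network and data loading are not specified.

```python
import numpy as np

rng = np.random.default_rng(2)
N_TRAIN, N_CODE, N_PIX = 10_000, 1_000, 784   # MNIST images are 28x28 = 784 pixels

codes  = (rng.random((N_TRAIN, N_CODE)) < 0.02).astype(np.float32)  # sparse random vectors
images = rng.random((N_TRAIN, N_PIX)).astype(np.float32)            # stand-in for real MNIST

# Fit a linear map codes -> images (a stand-in for the trained generative network).
W, *_ = np.linalg.lstsq(codes, images, rcond=None)

# A fresh random sparse code: its overlap with training codes drives the output,
# which is the "Hebbian factor" described above.
new_code  = (rng.random(N_CODE) < 0.02).astype(np.float32)
new_digit = (new_code @ W).reshape(28, 28)
```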

It is an interesting experiment, and overlapping (concatenating?) inputs with previous outputs is a powerful model. What I'm skeptical about is:

  1. The “ANN the size of the human brain”. Besides the size barrier, that might not work without deeper consideration of how said “ANN” should be structured.
  2. Having an actual suited man provide training data (inputs + outputs) to the ANN… that's not how babies/humans work.
    Having just the robot with its own inputs/outputs and a means to refine/structure its own “experience stream” into useful representations seems closer to how natural brains work.

The suited man is basically a method of transfer learning, where the dispositions of the man get transferred onto the humanoid robot, so it will act and believe it is him/her. The point is to collect the data and then place it on the robot, so that it acts autonomously as the human acted.

The ANN the size of the brain would be a binary vector with as many slots as there are neurons in the brain. The networks and subnetworks will emerge from the way the vector is mapped onto itself by a smaller, perhaps convolutional, network.

This is not a one-layer network. We list all the neurons of the brain in a vector; only those that are active at the present moment have 1s, and those that are active at the next moment gain their activations through a CNN mapping that vector to its next state, which is the same vector with 1s newly placed at the positions of the next active neurons.
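A minimal sketch of that mapping, assuming PyTorch; the kernel sizes, depth, and the million-slot vector (far below the brain's roughly 8.6 × 10^10 neurons) are illustrative choices, not the author's specification:

```python
import torch
import torch.nn as nn

N_STATE = 1_000_000   # one slot per neuron; a real brain has ~86 billion

# A small CNN maps the whole state vector to its next state, so the spiking
# network's weights live implicitly in these few shared kernels.
net = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, padding=4),   # local mixing of nearby slots
    nn.ReLU(),
    nn.Conv1d(8, 1, kernel_size=9, padding=4),   # back to one activation per slot
)

state = (torch.rand(1, 1, N_STATE) < 0.02).float()   # current 2%-sparse state
with torch.no_grad():
    activations = net(state).squeeze()

k = int(0.02 * N_STATE)                  # next state: 1s at the top 2% of slots
next_state = torch.zeros(N_STATE)
next_state[activations.topk(k).indices] = 1.0
```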

The CNN will implicitly contain all the weights of the spiking network in terms of each other, so the weights between neurons will, or may, be expressed as multiples of each other.

Also see the quoted section on how it will be structured. The training data will partition the neurons into networks and subnetworks through the Hebbian effect.