Dynamically generated sparsely connected network benchmark

I have started benchmarking the performance of a dynamically generated, sparsely connected multilayer neural network with binary weights (EISANI) on categorical datasets. Training uses no backpropagation:

  1. Initialise a sparsely connected neural network with no hidden-layer neurons or connections (input-layer and output-layer class-target neurons only).
  2. For each dataset sample, propagate the input through the layers of the sparsely connected multilayer network.
  3. Generate hidden neurons (segments) within the network, where the weights of each generated hidden neuron (segment) encode a subset of the previous layer's activation pattern.
  4. The network has binary excitatory and inhibitory weights or neurons.
  5. Connect the activated hidden neurons to the class-target output neurons. Output connections can be weighted by the number of times a hidden neuron is activated for a given class target.
  6. Prediction is performed by selecting the most activated output class-target neurons.
  7. The network can subsequently be pruned to retain a) the most prevalent (highest-weighted) and predictive (most exclusive) output connections, and b) their respective hidden neurons (segments).
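To make the steps above concrete, here is a minimal sketch of one way such a network could be implemented. This is not the actual EISANI code: the class name, the `segment_size` parameter, the all-weights-must-match activation threshold, and the count-based pruning rule are all my assumptions for illustration.

```python
import random
from collections import defaultdict

class SparseBinaryNet:
    """Hypothetical sketch of the described algorithm (single hidden layer)."""

    def __init__(self, num_inputs, num_classes, segment_size=2, seed=0):
        self.num_inputs = num_inputs
        self.num_classes = num_classes
        self.segment_size = segment_size
        self.rng = random.Random(seed)
        self.segments = []                    # each: {input_index: +1 or -1}
        self.out_weights = defaultdict(int)   # (segment_id, class) -> count

    def _activate(self, x):
        """Return indices of hidden segments activated by binary input x."""
        active = []
        for i, seg in enumerate(self.segments):
            # binary weights: +1 expects an active input, -1 an inactive one
            score = sum(w if x[j] else -w for j, w in seg.items())
            if score >= self.segment_size:    # assumed: all weights must match
                active.append(i)
        return active

    def train_sample(self, x, y):
        active = self._activate(x)
        if not active:
            # step 3: generate a new segment whose binary weights encode a
            # random subset of the current input activation pattern
            idx = self.rng.sample(range(self.num_inputs), self.segment_size)
            self.segments.append({j: (1 if x[j] else -1) for j in idx})
            active = [len(self.segments) - 1]
        # step 5: connect activated segments to the class target, weighting
        # each connection by its activation count for that class
        for i in active:
            self.out_weights[(i, y)] += 1

    def predict(self, x):
        # step 6: pick the most activated output class-target neuron
        scores = [0] * self.num_classes
        for i in self._activate(x):
            for c in range(self.num_classes):
                scores[c] += self.out_weights.get((i, c), 0)
        return max(range(self.num_classes), key=lambda c: scores[c])

    def prune(self, min_weight=2):
        # step 7 (simplified): keep only the highest-weighted output
        # connections; segments with no remaining connections go unused
        self.out_weights = defaultdict(
            int,
            {k: v for k, v in self.out_weights.items() if v >= min_weight},
        )
```

A quick usage example on two separable binary patterns:

```python
net = SparseBinaryNet(num_inputs=4, num_classes=2, segment_size=2, seed=0)
a, b = [1, 1, 0, 0], [0, 0, 1, 1]
for _ in range(3):
    net.train_sample(a, 0)
    net.train_sample(b, 1)
print(net.predict(a), net.predict(b))
```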