I have started benchmarking the performance of a dynamically generated, sparsely connected multilayer neural network with binary weights (EISANI) on categorical datasets. Training uses no backpropagation:
- Initialise the sparsely connected network with no hidden-layer neurons or connections (input layer and output-layer class target neurons only).
- For each dataset sample, propagate the input through the layers of the sparsely connected multilayer network.
- Generate hidden neurons (segments) within the network, where the weights of each generated neuron represent a subset of the previous layer's distribution (activations).
- The network uses binary excitatory and inhibitory weights (or neurons).
- Connect the activated hidden neurons to the class target output neurons. Output connections can be weighted by the number of times a hidden neuron is activated for a given class target.
- Prediction selects the most activated output class target neurons.
- The network can subsequently be pruned to retain a) the most prevalent (highest-weighted) and most predictive (most class-exclusive) output connections, and b) their respective hidden neurons (segments).
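The steps above can be sketched as a toy NumPy implementation. This is not the EISANI codebase: the single hidden layer, the `segment_size` parameter, the per-neuron activation threshold, and the pruning heuristic (`min_weight`, `min_exclusivity`) are all illustrative assumptions.

```python
import numpy as np

class SparseBinaryNet:
    """Toy sketch: a dynamically grown, sparsely connected network with
    binary (+1 excitatory / -1 inhibitory) weights, trained without
    backpropagation. Single hidden layer for brevity; parameters are
    illustrative assumptions, not the EISANI implementation."""

    def __init__(self, num_inputs, num_classes, segment_size=3, seed=0):
        self.num_inputs = num_inputs
        self.num_classes = num_classes
        self.segment_size = segment_size          # inputs sampled per segment
        self.rng = np.random.default_rng(seed)
        self.hidden_idx = []                      # input indices per hidden neuron
        self.hidden_w = []                        # +/-1 weights per hidden neuron
        self.hidden_thr = []                      # activation threshold per neuron
        self.out_w = np.zeros((0, num_classes), dtype=int)  # hidden->class counts

    def _hidden_activations(self, x):
        if not self.hidden_idx:
            return np.zeros(0, dtype=bool)
        return np.array([(x[idx] * w).sum() >= thr
                         for idx, w, thr in
                         zip(self.hidden_idx, self.hidden_w, self.hidden_thr)])

    def _grow_segment(self, x):
        # New hidden neuron whose weights encode a subset of the current
        # input pattern: +1 where the sampled input is active, -1 where not.
        active = np.flatnonzero(x > 0)
        if active.size == 0:
            return
        extra = self.rng.choice(self.num_inputs, size=self.segment_size - 1,
                                replace=False)
        idx = np.unique(np.concatenate([[self.rng.choice(active)], extra]))
        w = np.where(x[idx] > 0, 1, -1)
        self.hidden_idx.append(idx)
        self.hidden_w.append(w)
        self.hidden_thr.append(int((w == 1).sum()))  # generating pattern fires it
        self.out_w = np.vstack([self.out_w,
                                np.zeros((1, self.num_classes), dtype=int)])

    def train_sample(self, x, y):
        acts = self._hidden_activations(x)
        if not acts.any():                      # no segment covers this pattern
            self._grow_segment(x)
            acts = self._hidden_activations(x)
        if acts.size:
            self.out_w[acts, y] += 1            # strengthen hidden->class links

    def predict(self, x):
        # Most activated output class target neuron wins.
        scores = self._hidden_activations(x).astype(int) @ self.out_w
        return int(np.argmax(scores)) if scores.size else 0

    def prune(self, min_weight=2, min_exclusivity=0.6):
        # Keep neurons whose strongest output connection is both prevalent
        # (high weight) and predictive (large share of that neuron's votes).
        keep = [h for h in range(self.out_w.shape[0])
                if self.out_w[h].max() >= min_weight
                and self.out_w[h].max() / max(self.out_w[h].sum(), 1)
                    >= min_exclusivity]
        self.hidden_idx = [self.hidden_idx[h] for h in keep]
        self.hidden_w = [self.hidden_w[h] for h in keep]
        self.hidden_thr = [self.hidden_thr[h] for h in keep]
        self.out_w = self.out_w[keep]
```

A segment is grown only when no existing hidden neuron is activated by the current sample, so network size tracks the diversity of the training distribution rather than a fixed architecture.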