There are some criteria that any network should satisfy in order to become a good general problem solver:
- The model should behave as a continuous function and exhibit plasticity. (Continuous function: incoming input information meets the demands of the previously stored information.)
- The model should have an in-built preference for specific information.
- The model should have a destabilizing mechanism. This mechanism is what keeps the model functioning continuously.
- The model should provide easy access to some stored information. This resembles the preference criterion, but is not the same. (A hypothetical interface mapping these criteria to code follows this list.)
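To make the four criteria concrete, here is a minimal sketch of my own in Python. It is an assumption, not something defined in the text: every name (GeneralProblemSolver, integrate, prefer, destabilize, recall) is a hypothetical label chosen only to mirror the criteria above.

```python
# Hypothetical interface mapping the four criteria to methods.
# All names here are illustrative assumptions, not the author's design.
from abc import ABC, abstractmethod


class GeneralProblemSolver(ABC):
    """A network that is a candidate general problem solver."""

    @abstractmethod
    def integrate(self, new_input):
        """(1) Continuous function & plasticity: fold new input into
        previously stored information without discarding it."""

    @abstractmethod
    def prefer(self, candidates):
        """(2) In-built preference: rank candidate information."""

    @abstractmethod
    def destabilize(self):
        """(3) Destabilizing mechanism: perturb the current state so the
        model keeps operating continuously instead of settling."""

    @abstractmethod
    def recall(self, query):
        """(4) Easy access: quickly retrieve a subset of stored
        information (related to, but distinct from, preference)."""
```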
Architecture for a model that satisfies the above-stated criteria:
[Higher-valued clusters (blocks) always get higher priority in the algorithm's search.]
- Divide the network space into blocks: this meets the demand for plasticity and extracts above-average information from the input. [(1)]
- As the model produces good results, the blocks get narrower and the number of random neurons inside those blocks increases: this meets the preference and reward criteria. [(2), (4)]
- Good output is confirmed by clustering nearby blocks and not clustering far-away blocks. If a cluster forms, the blocks multiply (their number increases and their size decreases inside the selected, clustered block); if a cluster does not form, the blocks are combined into one (their number decreases and their size increases). This is the destabilizing mechanism that maintains the model's equilibrium, keeps it functioning continuously, and also increases the precision of the stored information; introducing such a mechanism is necessary to keep the model continuously working. [(1), (3)] (A code sketch of this block mechanism follows this list.)
- Far-away clusters become singular; this meets the demand for decaying weights in neural connections. [(3)]
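The block mechanism above can be summarised in code. The following is a minimal, self-contained sketch of my own, not the author's implementation: it assumes a 1-D toy space, a split-in-half rule, an arbitrary good_threshold, and a stand-in score_fn for "good results", and it simplifies "combining clusters into one" to coarsening a single block. It shows higher-valued blocks being searched first, good blocks multiplying into narrower, denser blocks, and poor blocks becoming coarser and sparser.

```python
# Illustrative sketch only; names, thresholds, and the 1-D space are assumptions.
import random
from dataclasses import dataclass, field


@dataclass
class Block:
    low: float           # block boundaries in a 1-D toy "network space"
    high: float
    value: float = 0.0   # running score; a higher value is searched first
    neurons: list = field(default_factory=list)  # random points sampled inside

    def sample_neurons(self, n):
        """Place n random neurons inside this block."""
        self.neurons = [random.uniform(self.low, self.high) for _ in range(n)]


class BlockSpace:
    def __init__(self, low=0.0, high=1.0, n_blocks=4, neurons_per_block=8):
        width = (high - low) / n_blocks
        self.blocks = [Block(low + i * width, low + (i + 1) * width)
                       for i in range(n_blocks)]
        for b in self.blocks:
            b.sample_neurons(neurons_per_block)

    def search(self, score_fn):
        """Evaluate blocks in priority order (higher value first) and
        return the block with the best result."""
        for block in sorted(self.blocks, key=lambda b: b.value, reverse=True):
            block.value = max(score_fn(x) for x in block.neurons)
        return max(self.blocks, key=lambda b: b.value)

    def refine(self, block, good_threshold=0.8):
        """If the block scored well ("nearby cluster"), split it into
        narrower, denser blocks; otherwise ("far-away cluster"),
        coarsen it back into one wide, sparse block."""
        self.blocks.remove(block)
        if block.value >= good_threshold:
            mid = (block.low + block.high) / 2
            halves = [Block(block.low, mid), Block(mid, block.high)]
            for half in halves:
                half.sample_neurons(2 * len(block.neurons))  # denser sampling
            self.blocks.extend(halves)
        else:
            block.sample_neurons(max(1, len(block.neurons) // 2))  # sparser
            self.blocks.append(block)


# Usage: repeatedly search a toy scoring function and refine the best block.
space = BlockSpace()
for _ in range(6):
    best = space.search(lambda x: 1.0 - abs(x - 0.3))  # peak near x = 0.3
    space.refine(best)
print(sorted((round(b.low, 3), round(b.high, 3)) for b in space.blocks))
```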
[Figure: visual representation. Blocks multiply under nearby clusters and become singular under far-away clusters.]
[Figure: a detailed look at the architecture. Nearby clusters are valuable.]
This architecture was designed based on the insight that, in the brain, neurons are not fixed to other neurons; they are fixed to spaces.