Optimal arrangement of 2 input neurons


If you restrict the number of inputs to a neuron to 2, what is the optimal way to arrange the neurons in layers, in terms of getting all the inputs to interact as rapidly as possible?
You can copy the organization of the Walsh Hadamard transform.
Going through the inputs pairwise in sequence, have 2 neurons act on each pair of input elements. Place the output of the first neuron sequentially in the lower half of a new array, and the output of the second neuron sequentially in the upper half of the array.
Then process the new array the same way. After log2(n) stages any single input value can affect any or all of the outputs.
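The arrangement above can be sketched in a few lines. Here `f_low` and `f_high` are placeholder names for the two neurons acting on each pair; if you make them sum and difference, the network computes an unnormalized Walsh-Hadamard transform (outputs possibly in a permuted order, depending on the stage ordering convention):

```python
import math

def pairwise_layer(x, f_low, f_high):
    """One stage: two neurons act on each adjacent pair of inputs.
    The first neuron's outputs fill the lower half of a new array,
    the second neuron's outputs fill the upper half."""
    n = len(x)
    out = [0.0] * n
    for i in range(n // 2):
        a, b = x[2 * i], x[2 * i + 1]
        out[i] = f_low(a, b)            # lower half, in sequence
        out[n // 2 + i] = f_high(a, b)  # upper half, in sequence
    return out

def network(x, f_low, f_high):
    """After log2(n) stages every input can affect every output."""
    stages = int(math.log2(len(x)))
    for _ in range(stages):
        x = pairwise_layer(x, f_low, f_high)
    return x

# With sum and difference as the two "neurons", a single delta input
# spreads to every output after log2(8) = 3 stages.
wht = network([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              lambda a, b: a + b, lambda a, b: a - b)
print(wht)  # all eight outputs are 1.0
```

Note that each output here depends on exactly log2(n) levels of 2-input interactions, which is the minimum possible for full mixing.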



The features are coded by position; there is no reason for a distant feature to interact with this local feature.

My question is somewhat the same as saying that
this ----> X <---- is not required to interact with
this ----> Y <---- to have meaning.

This group of letters ----> XYZ <---- is not required to have any relation with either of the prior two letters to have meaning.

This sounds like a deep learning issue and not an HTM issue.

That said, the lateral binding of hex-grid coding is a way to join local features into larger feature groupings in a spatial sense. The information of a local mini-column is communicated to a "distant" mini-column at a distance that roughly corresponds to macro-column spacing. This communication is carried out by relatively long lateral axonal connections, with a much longer range than the "normal" dendrite input range.

I do see that the "other" sense of grid coding brings up something that IS related to your question. What is causing the distributed and repeated grid coding across the local maps? I could see that there is something related to a spatial FFT that takes a point feature and "spreads" it over space - the spatial frequency of the feature is turned into repeating position coding, with a regular hexagonal spacing. This spacing is known to be much larger than macro-column spacing.

If you could apply your thinking into how these lateral connections combine to spread features the way that grid coding is known to work you may be onto something big.

Keep in mind that there is a hard biological constraint known as the 100-step rule: the observation that input-to-output processing must complete in fewer than 100 cell interactions, given known neuron firing rates and the measured time it takes for humans to respond in trials. The "other" constraint is that there are only 100 or so maps in the brain, and a path through any of the hierarchies to the temporal lobe goes through 10 or so maps, so whatever processing is done happens in about ten stages of encoding and perhaps the same number of stages of output. Call it 20 stages end to end. Assuming that the cortex really does do the same computation everywhere (a core belief of HTM), that gives about 5 cell firings for each local computation.
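The arithmetic behind that budget is worth making explicit. This is purely a back-of-envelope sketch using the numbers stated above, not independent measurements:

```python
# Back-of-envelope budget implied by the 100-step rule,
# using the figures quoted in the post above.
total_steps = 100        # max serial cell firings, stimulus to response
stages_in = 10           # ~maps on the sensory path to the temporal lobe
stages_out = 10          # ~maps on the output path back out
stages = stages_in + stages_out   # 20 stages end to end

steps_per_stage = total_steps // stages
print(steps_per_stage)   # 5 cell firings available per cortical map
```

Five serial firings per map is a very tight budget; any proposed local computation (HTM or otherwise) has to fit inside it.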

Connectionist Models and Their Properties

Feldman and Ballard 1982


Well, why not think about it? I see no reason not to. For one thing, it fits in very well with efficient CPU memory access patterns. It also has some nice properties for the early stage of visual processing, where the neurons could develop some regularities of behavior. There are also some connections to the Irwin-Hall distribution you might make use of. No need to preclude the notion without basic investigation.
True, I am not concerned with HTM, though surely I take note of the vast memory capacity of the biological brain.


I am not discouraging you from thinking about whatever it is you want to think about.

Fancy a shot at stardom?

I am trying to tempt you to consider that the Walsh Hadamard transform has some of the properties of a spatial FFT. If you are familiar with this type of thinking -and- if you change some of your base assumptions to bring it in line with the known biological facts, you could actually produce some interesting and useful explanations for a knotty theoretical problem that is a very hot issue at the moment.

How do the grids of grid-cell activation patterns form in the HC/EC formation? I assure you that if you had a model that matched the facts, there are a lot of people who would be VERY interested in what you had to say.


At the moment I’m taking a break and doing analog electronics. If I get back into AI I’ll probably try to combine associative memory with neural networks in the simplest effective way I can find. That’s been on the agenda for a long time. I’ll take cues from biological systems, but I don’t need to know specifically the solutions evolution has arrived at for any particular animal or class of animal.