Hey everybody, I was just reading the book “On Intelligence” and came across the 100-step rule.
I am not a biologist in any way, so I hope to get some answers here. Thank you.
Reading the book, it seemed to me that the 100-step rule is used to show how the brain can “calculate” difficult tasks in only 100 steps, and therefore “must” use some memory-based system, because there is no way to do such a large calculation in 100 steps.
After some research I found out that there is neuronal divergence and that there are potentially very long neurons. Combined, this implies that within 100 steps potentially the whole brain can be activated, multiple times; it just depends on how many different neurons are activated by the source of activity. If we say one neuron activates 3 more, then after 100 steps the activity could in principle have fanned out to 3^100 neurons, which is a vast amount, far more than the brain actually has. A rough back-of-envelope check is sketched below.
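Here is the rough back-of-envelope check I mean (the fan-out of 3 and the neuron count are just assumed round numbers for illustration):

```python
# Back-of-envelope check of the divergence argument.
# Assumptions (illustrative only): fan-out of 3 per step, ~86 billion neurons.
import math

FAN_OUT = 3                  # assumed: each neuron activates 3 others per step
NEURONS_IN_BRAIN = 8.6e10    # commonly cited rough estimate

# Activity reachable after n steps grows like FAN_OUT ** n, so 3^100 is huge...
print(f"3^100 = {FAN_OUT ** 100:.2e}")

# ...and the whole brain could in principle be reached in far fewer steps.
steps_to_cover_brain = math.ceil(math.log(NEURONS_IN_BRAIN, FAN_OUT))
print(f"steps to reach every neuron once: ~{steps_to_cover_brain}")  # ~23
```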
With this consideration in mind, I don’t see the point of this theory. Can anybody help me make sense of it?
Rudolf,
Yes, millions, perhaps billions, of neurons can be activated in 100 steps. However, you are still limited to 100 steps. It is like a parallel supercomputer. You might have a million parallel processors but each one can only execute 100 instructions. You can’t calculate much of anything in computer code in 100 machine instructions.
The point is just to say brains are not computers. The 100-step argument has been around for a long time; I didn’t make it up.
Jeff
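To make that constraint concrete, here is a toy sketch (assumed task and numbers, not anything from the book): even summing 1,000 values one at a time needs about 1,000 sequential operations, which already blows a 100-step budget, while recalling a previously stored answer from memory takes roughly one step.

```python
# Toy illustration of the "100 serial steps" limit (assumed task, not from the book):
# summing 1,000 numbers one at a time needs ~1,000 sequential operations,
# but recalling a stored answer from memory takes about one.
data = list(range(1_000))

# Sequential route: one addition per step -> far more than 100 steps.
serial_steps = 0
total = 0
for x in data:
    total += x
    serial_steps += 1
print(f"sequential steps used: {serial_steps}")   # 1000 >> 100

# Memory-based route: the association was stored earlier; recall is a single lookup.
memory = {tuple(data): total}        # "learned" input -> answer association
recalled = memory[tuple(data)]       # ~1 step to recall
print(f"recalled answer: {recalled}")
```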
The massive parallelism is likely where the “genius” in the system is, both the existence of the parallelism and the specific way it is implemented in the brain. As you branch and combine different sensory information paths with each other and with different memory paths, you also have to combine each of the tentative “results” in some way. I’m pretty certain it is not as simple as the “input signal” travelling “in” from the sensory organs, with exponentially increasing complexity, until it reaches a single, definite “peak” of complexity, at which point the “signal” goes through a similar process in reverse, with the maximum number of involved neurons being whittled down exponentially until you get to the “grandmother cell” at the end that reports the “conclusion” of the computation. That is, I don’t think the typical path can be that simple. The brain seems to be too modular for that, and different areas are making specific types of computations, combining specific types of inputs.
In itself, this theory is already important, in that it describes why we cannot easily physically duplicate the structure of biological brain information flow in silico.
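As a toy sketch of why massive parallel fan-in still fits inside a small sequential-depth budget (an assumed pairwise-combination model, purely for illustration, not the actual wiring anyone here is proposing): combining a million parallel signals two at a time takes only about 20 sequential rounds.

```python
# Toy sketch of parallel fan-in: combining 1,000,000 signals pairwise needs
# only ~20 sequential rounds, even though the total amount of work is huge.
# (Assumed pairwise-combination model, purely for illustration.)
signals = [1.0] * 1_000_000   # pretend: one value per currently active neuron

depth = 0
layer = signals
while len(layer) > 1:
    # Each round combines neighbours in parallel and costs one sequential step.
    layer = [layer[i] + layer[i + 1] if i + 1 < len(layer) else layer[i]
             for i in range(0, len(layer), 2)]
    depth += 1

print(f"combined value: {layer[0]:.0f}, sequential depth: {depth}")  # depth = 20
```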
Thank you both very much for your quick answers. It helped quite a bit.
This theory is extremely interesting. By the way, Jeff, your book is awesome. Thank you for writing it.
I wish you all the luck with your research.