Cognitive Limits Explained?

Hi,

I was just wondering if recent discoveries about the structure and operation of the neo-cortex have offered more understanding/explanation of certain numerical cognitive limits that were theorized based on psychologists’ experiments. One classic example is Miller’s Law, the magic number 7 ± 2, and why we chunk information when, for example, recalling sequences of things.

As a follow-up, if the answer is yes (we do now understand why these cognitive limits exist, based on our knowledge of the architecture of the neo-cortex), then has the modeling/analysis done for HTM revealed higher-capacity instantiations, or is it some fundamental limit that biology and artificial architectures kind of have to live with?

I suppose that conditions such as those exhibited in autistic savants might imply that, from a memory and computation point of view, the typical limit can be overcome, although my understanding is that the “superpower” savant abilities come with limitations in other areas of cognition.


To clear up a probably small thing: this is the third time I have seen this quoted on the forum. I forget who, but someone even attributed it to a possible connection with hexagonal formation. The 7 ± 2 figure comes from a 1956 study by Miller specific to visual short-term memory, which some argue lacks backing. More recent research by Cowan (2001, 2005, 2010) and others argues the number to be 4, based on rate distortion theory, and that is a more accepted generalization in the human memory field. In either case, this may be considered a characteristic of the architecture rather than a limitation. From my perspective, you can alter these characteristics, but it potentially costs you in other ways. These are sort of reference points in cognitive science that whatever model you are working on should converge to if the goal is to simulate human behavior. If you are only after AGI and are bypassing biological behavior, maybe you can ignore these constraints.

To answer your original question, I do not think HTM is there yet to provide explanations for high-level cognitive phenomena such as the validation of these kinds of limits. However, you can speculate about low-level phenomena such as the capacity of a single cortical column, or constraints on sequence learning at the level of a cortical column. But then, I am not aware of any psychological studies that dwell on this sort of lower-level phenomenon, and I am not sure it is possible without some elaborate brain imaging. Take this with a grain of salt.

You actually touched on my current problem: how do I tie psychological studies together with HTM so that they converge on biological behavior? That work is valuable on top of the neurobiology. Nengo, the closest thing to HTM in terms of its emphasis on neurobiological plausibility, also strives to agree with psychological phenomena.


I’m not aware of them either, because we can’t do the type of invasive brain imaging on humans that we do on rats. However, brain imaging techniques are rapidly advancing. Keep your eyes peeled in this area, because it will inform us greatly.

Thanks for the reply. Informed speculation was the most I was expecting on this subject. I thought that maybe while reverse-engineering the neo-cortex there had been some speculative “aha” moments related to this.

From the opposite perspective, it seems to me that in many cases psychological experiments have informed neuroscience as to what must be happening, or not happening, in areas of the brain, without the need to actually measure brain activity. I’m thinking of things like reactions to visual stimuli that happen too fast for signals to have been sent to certain areas of the brain and back, from which we conclude that the cognition (perhaps the wrong word) task is performed locally (e.g. in the retina).
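A back-of-the-envelope latency budget shows the style of that argument; the numbers below are illustrative assumptions, not measurements from any particular study. If the observed response is faster than the minimum round trip through cortex, the processing must be happening earlier in the pathway.

# Latency-budget argument for local processing (all numbers are illustrative assumptions).
synaptic_delay_ms = 1.0        # assumed delay per synapse
conduction_ms_per_stage = 5.0  # assumed axonal conduction time per stage
stages_to_cortex = 4           # e.g. retina -> LGN -> V1 -> higher area (assumed)

one_way_ms = stages_to_cortex * (synaptic_delay_ms + conduction_ms_per_stage)
round_trip_ms = 2 * one_way_ms
observed_response_ms = 30.0    # hypothetical fast response latency

print(f"Minimum cortical round trip: ~{round_trip_ms:.0f} ms")
if observed_response_ms < round_trip_ms:
    print("Too fast for a cortical round trip -> processing is likely local (e.g. retinal).")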

FYI - I am an electro-optic systems engineer and not in the neuroscience or psychology field so forgive my ignorance. My interest in this is both general curiosity and in understanding biological vision to inform designs of artificial vision systems.


The “100 step rule” is perhaps one relevant constraint here.
It relates the known time for neuron-to-neuron interaction to measured human response times in various experiments:
http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog0603_1/abstract
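A rough sketch of the arithmetic behind the rule, using round illustrative numbers rather than figures from the paper: if one neuron-to-neuron step takes a few milliseconds and a complex recognition response takes a few hundred milliseconds, only on the order of 100 sequential steps fit in the budget.

# Back-of-the-envelope version of the "100 step rule" (illustrative numbers only).
step_time_ms = 5.0        # assumed time for one neuron-to-neuron step (spike + synaptic delay)
response_time_ms = 500.0  # assumed human response time for a complex recognition task

max_sequential_steps = response_time_ms / step_time_ms
print(f"At most ~{max_sequential_steps:.0f} sequential neural steps fit in the response time.")
# -> ~100 steps: whatever the brain computes in that time has to happen in
#    very few serial stages, i.e. with massive parallelism.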

At the other end of the timescale, I hypothesize this:
I propose that a single thalamocortical resonance cycle in the cortex is the smallest quantum of human perception.

I propose that a single thalamocortical resonance cycle in the episodic portions of the cortex (those connected to the hippocampus) is the smallest quantum of human experience.

Taking the union of SDRs while preserving the sparsity allows you to store/merge roughly 5-10 orthogonal items before they can no longer be distinguished. For example (vector size 2000, 2% sparsity, i.e. 40 active bits):

In [54]: x.a.vsize
Out[54]: 2000

In [55]: x.a.spa
Out[55]: 0.020

In [56]: x.a.spa_nbits
Out[56]: 40

In [60]: (x.a + x.b) // x.a
Out[60]: 0.475

In [52]: isdp.thin(isdp.union((x.a, x.b, x.c)), new_spaOnbits=40) // x.a
Out[52]: 0.375

In [47]: isdp.thin(isdp.union((x.a, x.b, x.c, x.d, x.e)), new_spaOnbits=40) // x.a
Out[47]: 0.175

In [53]: isdp.thin(isdp.union((x.a, x.b, x.c, x.d, x.e, x.f, x.g)), new_spaOnbits=40) // x.a
Out[53]: 0.150

At around 10% overlap it starts to get hard to distinguish the stored items, because you cannot guarantee 100% orthogonality. For comparison, the chance overlap between two unrelated SDRs is tiny: 0.025 as a fraction, which here corresponds to a single shared bit out of the 40 active bits.

In [48]: x.a // x.e
Out[48]: 0.025

In [61]: x.a / x.e
Out[61]: 1
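For anyone without the isdp library, here is a minimal NumPy sketch of the same experiment under the parameters shown above (2000-bit vectors, 40 active bits). The names make_sdr, thin, and overlap are invented for this illustration and are not part of any HTM or isdp API.

import numpy as np

rng = np.random.default_rng(0)
VSIZE, NBITS = 2000, 40  # vector size and active-bit count, as in the session above

def make_sdr():
    """Random SDR represented as a set of active-bit indices."""
    return set(rng.choice(VSIZE, size=NBITS, replace=False))

def thin(union_bits, nbits=NBITS):
    """Randomly keep only `nbits` of the union's active bits, restoring the sparsity."""
    return set(rng.choice(sorted(union_bits), size=nbits, replace=False))

def overlap(a, b, nbits=NBITS):
    """Overlap expressed as a fraction of the active-bit count."""
    return len(a & b) / nbits

items = [make_sdr() for _ in range(10)]
for n in (2, 3, 5, 7, 10):
    merged = thin(set().union(*items[:n]))         # union of n items, thinned back to 40 bits
    print(n, round(overlap(merged, items[0]), 3))  # match against the first stored item

print("chance:", round(overlap(items[0], items[-1]), 3))  # baseline overlap of two unrelated SDRs

With these parameters the match score drifts down toward the chance level somewhere between 5 and 10 merged items, which is where the 5-10 item capacity mentioned above comes from.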