So, I’ve been thinking a little about chunking of sequences in the brain. I guess my main question is: why does the brain do this?
It is easy to find examples where it feels like my brain is chunking, so I presume it is a real thing. Say you are recalling a password: I seem to break it into subsequences. You start at the first subsequence, then the end of that subsequence prompts the next subsequence, and so on. But you only know your password in sequence. It is impossible to recall it in reverse (without some mental gymnastics), or even to predict the element a few steps down from where you are. You only know what immediately comes next.
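This forward-only behaviour can be modelled as a map from each element to its successor: recall is cheap going forwards, but going backwards requires inverting the map. A minimal sketch (the password here is invented for illustration, and this is just a toy model, not a claim about the actual neural mechanism):

```python
# Toy model: sequence memory as a "what comes next" map.
sequence = list("SECRET42")  # a made-up password

# Store only successor links, keyed by (position, element) so that
# repeated elements (like the two E's) don't collide.
next_link = {(i, x): sequence[i + 1] for i, x in enumerate(sequence[:-1])}

def recall_forward(start=0):
    """Replay the sequence from `start`, one successor link at a time."""
    out = [sequence[start]]
    i = start
    while (i, out[-1]) in next_link:
        out.append(next_link[(i, out[-1])])
        i += 1
    return "".join(out)

print(recall_forward())   # replays the whole password
print(recall_forward(2))  # mid-sequence entry still only goes forward
```

Note that nothing in `next_link` supports stepping backwards: reverse recall would mean searching the whole map for an entry whose value matches the current element, which matches the "mental gymnastics" feel.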
I guess a couple of easy examples are the alphabet and the digits of pi. For the alphabet, the brackets show how my brain chunks it, though other people may have different chunk sizes:

{A B C} {D E F} {G H I} {J K L} …
So the question is, why does the brain chunk? Why not just store a full sequence such as:
alphabet -> A -> B -> C -> D -> E -> F -> G -> …
Instead it seems to be:
alphabet -> alpha1 -> A -> B -> C -> alpha2 -> D -> E -> F -> alpha3 -> G -> H -> …
i.e., a sequence of sequences.
The higher order sequence:
alphabet: alpha1 -> alpha2 -> alpha3 -> alpha4 -> …
and the lower order sequences:
alpha1: A -> B -> C
alpha2: D -> E -> F
alpha3: G -> H -> I
alpha4: J -> K -> L
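The two-level structure above can be sketched directly in code: store the higher-order sequence of chunk labels and the lower-order chunks separately, then recall by walking the chunks in order, with the end of one chunk prompting the next. This is just a toy model of the data structure, not the HTM mechanism itself:

```python
# Higher-order sequence: chunk labels in order.
higher = ["alpha1", "alpha2", "alpha3", "alpha4"]

# Lower-order sequences: the contents of each chunk.
lower = {
    "alpha1": ["A", "B", "C"],
    "alpha2": ["D", "E", "F"],
    "alpha3": ["G", "H", "I"],
    "alpha4": ["J", "K", "L"],
}

def recall(higher, lower):
    """Walk the higher-order sequence; each chunk label expands to its
    lower-order elements before the next label is consulted."""
    for label in higher:
        for element in lower[label]:
            yield element

print("".join(recall(higher, lower)))  # the flattened alphabet so far
```

One property falls out of this layout for free: you never need to know what comes several steps ahead, only the current chunk's remaining elements and the next chunk label.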
Then, to complicate the picture a little, it seems that when you spell out the letters of a word, the brain doesn’t use chunking. Again, even if I mentally spell out long words, my brain doesn’t feel like it is chunking. So why does the brain chunk for the alphabet, the digits of pi, or secure passwords, but not for spelling words?
BTW, I have working proof-of-concept code for learning and recalling chunked sequences in my notation, so the sequence-of-sequences idea works. And of course, HTM high-order sequence learning is a key part. It should be easy enough to extend this to the more general idea of a sequence of a sequence of a sequence of a sequence of …, but I can’t currently think of a good example to test that idea with.
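The arbitrary-depth version can at least be sketched with a recursive expansion: any element that itself names a stored sequence gets expanded in place, to whatever depth the nesting goes. The memory contents here (a "song" built from a "verse" plus the chunked alphabet) are invented purely for illustration:

```python
# Sketch: arbitrary-depth chunking via recursive expansion.
# Any element that is itself the name of a stored sequence is expanded.
memory = {
    "song": ["verse", "alphabet"],     # sequence of sequences of sequences
    "verse": ["la", "la"],
    "alphabet": ["alpha1", "alpha2"],  # sequence of chunk names
    "alpha1": ["A", "B", "C"],
    "alpha2": ["D", "E", "F"],
}

def expand(name):
    """Recursively flatten a named sequence down to its base elements."""
    out = []
    for element in memory.get(name, []):
        if element in memory:          # element is itself a stored sequence
            out.extend(expand(element))
        else:                          # element is a base symbol
            out.append(element)
    return out

print(expand("song"))  # three levels deep, flattened in one pass
```

The nesting depth is unbounded here, so testing the "sequence of a sequence of a sequence of …" idea would only need a deeper `memory` table.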