– “As I understood it, the number of contexts this algorithm can remember is linear in the number of cells per minicolumn. If 32 cells make 32 different contexts possible, then in a problem like natural language that covers only one or two previous words, which definitely doesn’t get the job done. So don’t we need to recognize where we are in the larger sequence? I remember a paper by Numenta saying that it can predict not only the next letter but, to some extent, the next syllable and the next word.”
++ “It is not limited in the way you are describing. Remember that there are potentially thousands of minicolumns, each looking at a different spatial aspect of the input. They all have different receptive fields. Each one looks at a specific part of the input space and recognizes those spatial patterns temporally. Each individual column is limited in the number of temporal contexts in which one input can be recognized, but working together they build a much richer picture of the spatio-temporal space.”
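To make that capacity point concrete, here is a back-of-the-envelope sketch in Python. The specific numbers (2048 minicolumns, roughly 40 active per input, 32 cells per column) are illustrative HTM-style parameters assumed for the example, not figures taken from the exchange above; the point is only that context capacity grows combinatorially with the number of active columns rather than linearly with cells per column.

```python
# Back-of-the-envelope capacity sketch (illustrative numbers, not from the discussion above).
cells_per_column = 32    # cells in each minicolumn -- the "32 contexts" figure
num_columns = 2048       # minicolumns in the layer (assumed, typical HTM-style value)
active_columns = 40      # roughly 2% of them are active for any one input (assumed)

# A single minicolumn on its own can only distinguish `cells_per_column` contexts
# for its piece of the input.
single_column_contexts = cells_per_column

# But one input activates ~40 columns at once, and the temporal context is encoded
# by WHICH cell fires inside each of them, so the number of distinct contexts for
# the same spatial pattern is combinatorial, not linear:
layer_contexts = cells_per_column ** active_columns

print(f"sparsity: {active_columns / num_columns:.1%} of columns active per input")
print(f"one column alone: {single_column_contexts} contexts")
print(f"{active_columns} active columns together: ~{float(layer_contexts):.2e} contexts")
```

The exact figures do not matter; what matters is that once the whole layer is considered, the same input pattern can be represented in an astronomically large number of distinct temporal contexts.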
– "yes, but considering the language example again, is the spatial pattern of ‘A’ any different from the spatial pattern of ‘A’ ? and if 32 contexts make… one letter (that’s the exact number in my mother tongue ) so… what we can do with that? does it give us anything other than a one letter context?
++ “The spatial patterns of the two ‘A’s will be the same. BUT there are not only 32 contexts for that spatial pattern. Each minicolumn sees a different part of “A” because each has a different receptive field over the input space. Each builds its own temporal contexts for the piece of “A” that it sees. One column might recognize the bar across the letter; other letters, like H, also have a bar. “A” is only recognized when many minicolumns predict that A is coming next, each looking at a different receptive field of the spatial input space.”
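As a toy illustration of that last point (this is not the HTM Temporal Memory algorithm, just the receptive-field idea sketched in a few lines of Python, with made-up bitmaps and thresholds): a shared feature such as the horizontal bar makes the column watching it vote for both “A” and “H”, but the full letter is only recognized when most columns agree.

```python
# Toy sketch: one "minicolumn" per row of a 5x5 letter bitmap (all values made up).
LETTER_A = [".###.",
            "#...#",
            "#####",   # the horizontal bar
            "#...#",
            "#...#"]

LETTER_H = ["#...#",
            "#...#",
            "#####",   # H shares the bar with A
            "#...#",
            "#...#"]

def column_votes(input_rows, stored_rows):
    """Which columns see their stored fragment of the letter in the input?"""
    return [i for i, (seen, stored) in enumerate(zip(input_rows, stored_rows))
            if seen == stored]

def recognized(input_rows, stored_rows, threshold=5):
    """The letter is only recognized when enough columns agree."""
    return len(column_votes(input_rows, stored_rows)) >= threshold

print(column_votes(LETTER_A, LETTER_A))  # [0, 1, 2, 3, 4] -> every column votes
print(column_votes(LETTER_H, LETTER_A))  # [1, 2, 3, 4]    -> the bar column (row 2) votes,
                                         #                    the top-row column does not
print(recognized(LETTER_A, LETTER_A))    # True
print(recognized(LETTER_H, LETTER_A))    # False: shared features alone are not enough
```

In the real algorithm each of those voting columns would additionally carry its own cell-level temporal context, which is where the 32-contexts-per-column figure applies to each receptive field rather than to the letter as a whole.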