Thank you very much!
I understand your explanation about input sparsity, and thanks for the bit about minicolumns in the layers of cortical columns.
I see. I thought of it more along the lines of low levels passing preliminarily processed data to levels multiple steps ahead in the hierarchy, so that it retains its importance in higher-level abstractions as well, despite the voting mechanism.
There was a discussion about this in another post, in which I suggested the following: on the first novel input at the first time step, mark all the cells in the active (bursting) columns as winner cells; then the selected winner cells of the following pattern at the second time step will form connections with all of those cells. So when the sequence repeats, the first input will already be in the context of the last input, and it will also already have connections to the second input when it arrives for the second time. The redundant connections could then be pruned using synaptic decrement. But would those redundant connections make a real difference? Is this even correct?
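To make the idea concrete, here is a minimal toy sketch of the proposed rule. This is my own simplification, not the actual NuPIC/htm.core temporal-memory implementation: the names `step`, `predict`, and `cells_of` are made up, permanences and thresholds are arbitrary, segments are collapsed into a flat synapse dictionary, and the synaptic-decrement pruning is omitted.

```python
# Toy sketch of the proposed learning rule (hypothetical and simplified;
# NOT the real NuPIC/HTM temporal-memory algorithm).
# Proposed change: when a column bursts on novel input, treat ALL of its
# cells as winner cells, so the next step's winners connect to every one.

CELLS_PER_COL = 4

def cells_of(col):
    """All cells in a minicolumn, as (column, cell-index) pairs."""
    return [(col, i) for i in range(CELLS_PER_COL)]

def step(active_cols, predicted_cells, prev_winners, synapses, perm_inc=0.1):
    """One simplified temporal-memory step using the proposed rule."""
    active_cells, winners = [], []
    for col in active_cols:
        predicted_here = [c for c in predicted_cells if c[0] == col]
        if predicted_here:            # correctly predicted column
            active_cells += predicted_here
            winners += predicted_here
        else:                         # bursting column (novel input):
            burst = cells_of(col)     # proposed rule -> every cell is a winner
            active_cells += burst
            winners += burst
    # winners grow/reinforce synapses from the previous step's winner cells
    for w in winners:
        for p in prev_winners:
            synapses[(p, w)] = min(1.0, synapses.get((p, w), 0.2) + perm_inc)
    # (synaptic decrement to prune the redundant connections is omitted here)
    return active_cells, winners

def predict(active_cells, synapses, threshold=0.3):
    """Cells depolarized by connected synapses from currently active cells."""
    return {post for (pre, post), perm in synapses.items()
            if pre in active_cells and perm >= threshold}
```

Running a two-element sequence A→B twice through this sketch shows the effect being asked about: on the first pass both columns burst, but because all of A's cells were marked as winners, B's winners connect to every one of them, so on the second pass B's column is predicted instead of bursting.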
Does this relate to the reset function? Is it really necessary?
This is the post.