Is my data being predicted correctly?

As of now, an HTM network operates with only one layer, i.e. 2048 columns with 32 cells in each column. Our brains, however, have many more such layers, which are arranged into regions, and together those regions make up the brain. If I remember correctly, Jeff and co. are working on understanding how multiple layers and regions communicate with each other and getting that working in code. Once this is done, a true hierarchy will be implemented, and hierarchical processes can be performed, which should open up the possibility of higher-order thinking such as inductive reasoning. However, there’s quite a step missing here that I’m very interested in, namely the ‘processing’ part of it all. I’m fine with HTM ‘learning’ patterns (I see it more as remembering sequential changes than as learning/understanding), but I’m really looking forward to the processing part.
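For reference, here’s roughly what that single layer looks like when you build it yourself. This is a minimal sketch using NuPIC’s Python `TemporalMemory` class (the import path may differ between NuPIC versions, and the column indices fed to `compute` are just made-up placeholders):

```python
from nupic.algorithms.temporal_memory import TemporalMemory

# One HTM "layer": 2048 minicolumns x 32 cells per column = 65,536 cells total.
tm = TemporalMemory(
    columnDimensions=(2048,),
    cellsPerColumn=32,
)

# Each timestep, feed in the currently active columns (e.g. from the
# spatial pooler); the TM learns and predicts sequences over them.
tm.compute(sorted([7, 42, 1023]), learn=True)

# Cells in a predictive state are the TM's guess at the next input.
print("predictive cells:", tm.getPredictiveCells())
```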

Well, that’s quite a large number, but nothing unbearably huge. I haven’t tried that myself, but I know that increasing the cells per column (cpc) lengthens the HTM’s processing time. Notice that doing so would give each of the 2048 columns 2^30 cells, roughly 2.2 trillion cells in total, which I’d guess would take your computer forever to process. But maybe not; all I know is that it noticeably increased my processing time on data sets when I went from 32 cpc to 128, by a minute or two maybe. Additionally, I’m not sure there would be any real benefit to it, since increasing the number of buckets and the encoder resolution can already reduce the number of sequential links created between somewhat similar values, plus our brains don’t have anywhere near that many cells per column.
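Just to put numbers on that, here’s a quick back-of-envelope calculation (using the 2048-column figure from above; the 2^30 cpc value is the hypothetical one being discussed):

```python
import math

COLUMNS = 2048

# Total cell count for a few cells-per-column settings.
for cpc in (32, 128, 2**30):
    total = COLUMNS * cpc
    print("cpc = %10d -> %16d cells (2^%.0f)" % (cpc, total, math.log(total, 2)))

# cpc =         32 ->            65536 cells (2^16)
# cpc =        128 ->           262144 cells (2^18)
# cpc = 1073741824 ->    2199023255552 cells (2^41)
```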

Sorry, I haven’t even looked into nupic.vision, so I don’t know anything about it :sweat_smile: