So I have my own design for AGI that I think is very realistic. I went through several HTM School videos and read some about HTM, and I have several questions.
So far I have gathered that binary maps/codes are made… standing for sensory inputs? So a sound of a car horn, or an image of a cat, becomes a square map of seemingly random binary bits (an SDR)?
Then, maps can be partially matched, with only some of their ON bits overlapping, and still count as a match because they look similar, right? I thought these were like hashes, though; how can it get by without being exact?
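To pin down what I mean, here is a toy Python sketch of that inexact matching idea (my own code, not Numenta's API; the sizes and the threshold are numbers I made up): two sparse binary maps count as a match when enough ON bits line up, unlike a hash where one flipped bit changes everything.

```python
import random

N = 2048  # total bits in the map
W = 40    # number of ON bits (about 2% sparsity)

def random_sdr():
    """Represent an SDR as the set of its ON-bit indices."""
    return set(random.sample(range(N), W))

def overlap(a, b):
    """Number of ON bits the two maps share."""
    return len(a & b)

def matches(a, b, threshold=10):
    """Match if enough ON bits line up, even if the maps are not identical."""
    return overlap(a, b) >= threshold

cat = random_sdr()
# A noisy version of the same input: drop 5 of its bits, add 5 new ones.
noisy_cat = set(random.sample(sorted(cat), W - 5)) | set(random.sample(range(N), 5))
other = random_sdr()

print(matches(cat, noisy_cat))  # True: at least 35 shared bits survive the noise
print(matches(cat, other))      # almost always False: ~1 shared bit by chance
```

The point of the sketch: with only 40 ON bits out of 2048, two unrelated maps almost never share 10 bits by accident, so a fairly low threshold still separates "same thing, noisy" from "different thing".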
Binary maps can also be compressed into something smaller, kind of like CNNs downsample, so the square is smaller but looks similar…?
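Here is one toy guess at what "compressed smaller" could mean (this is my own assumption, not something from the HTM School videos): keep a fixed random subset of bit positions, and two maps that overlapped before tend to still overlap after shrinking.

```python
import random

random.seed(0)
N, SMALL = 2048, 512

# Fixed positions to keep, and a remapping to the smaller index space.
keep = sorted(random.sample(range(N), SMALL))
index = {bit: i for i, bit in enumerate(keep)}

def shrink(sdr):
    """Project an SDR onto the kept positions, renumbered into the small map."""
    return {index[b] for b in sdr if b in index}

a = set(random.sample(range(N), 40))
# A noisy copy of a: 30 of its bits plus 10 random ones.
b = set(random.sample(sorted(a), 30)) | set(random.sample(range(N), 10))

print(len(a & b) / 40)  # overlap ratio before shrinking (at least 0.75)
small_a, small_b = shrink(a), shrink(b)
if small_a:
    # A roughly similar ratio tends to survive, just with fewer bits.
    print(len(small_a & small_b) / len(small_a))
```

Because the kept positions are the same for every map, relative overlap is roughly preserved, which is (I assume) why a smaller square can still "look similar".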
Unions combine memories/binary maps… no, this can't be right; maybe it is only for very similar ones, like images of the same cat that just look slightly different? Does it allow you to recognize an animal with cat ears, a dog nose, and monkey legs as a mammal?
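Here is a minimal sketch of what I understand a union to be (my own toy code; the membership threshold is an assumption): OR several sparse maps together, then ask whether a new map is "one of the stored set" by checking how many of its ON bits fall inside the union.

```python
import random

N, W = 2048, 40

def random_sdr():
    return set(random.sample(range(N), W))

cats = [random_sdr() for _ in range(5)]  # five slightly different views of the same cat
union = set().union(*cats)               # OR of all stored maps

def contained(sdr, union, threshold=35):
    """Member if nearly all of its ON bits fall inside the union."""
    return len(sdr & union) >= threshold

print(contained(cats[2], union))       # True: stored members are fully contained
print(contained(random_sdr(), union))  # usually False: only ~4 of 40 bits by chance
```

If that is right, a union is cheap membership testing over a set of similar memories, not a way to invent a new "mammal" concept by itself; that would need something on top of it.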
The spatial pooler, hmm: its columns cluster the similar binary maps? Like cortical columns?
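To check my understanding, here is a heavily simplified spatial-pooler-flavoured sketch (my simplification, not Numenta's implementation; no learning, and all the sizes are made up): each column watches a fixed random subset of input bits, columns score themselves by overlap with the input, and only the top-k columns become active, so similar inputs tend to activate similar column sets.

```python
import random

random.seed(1)
INPUT_BITS = 256
COLUMNS = 64
POTENTIAL = 64  # input bits each column can see
K = 8           # winning columns per input

# Each column's potential pool: a fixed random subset of the input space.
pools = [set(random.sample(range(INPUT_BITS), POTENTIAL)) for _ in range(COLUMNS)]

def pool_activate(input_bits):
    """Score every column by overlap with the input; keep the top-K winners."""
    scores = [(len(pool & input_bits), col) for col, pool in enumerate(pools)]
    scores.sort(reverse=True)
    return {col for _, col in scores[:K]}

a = set(random.sample(range(INPUT_BITS), 20))
# A noisy copy of a: 15 of its bits plus 5 random ones.
b = set(random.sample(sorted(a), 15)) | set(random.sample(range(INPUT_BITS), 5))
c = set(random.sample(range(INPUT_BITS), 20))

# The noisy copy tends to activate many of the same columns as a;
# an unrelated input tends to activate fewer of them.
print(len(pool_activate(a) & pool_activate(b)))
print(len(pool_activate(a) & pool_activate(c)))
```

The real spatial pooler also learns (permanences on the connections grow and shrink), which this sketch leaves out entirely.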
And OK, but where in all of this does it predict the next word? I mean, if the input is "I was walking down the", how/why does it grab the next part of the sequence?
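I know HTM's temporal memory does this with per-cell distal connections and predictive states; this sketch is NOT that algorithm, just the simplest possible stand-in (a transition count table, all my own code) to show what "grabbing the next part of the sequence" means mechanically.

```python
from collections import defaultdict

# transitions[prev][next] = how many times `next` followed `prev`.
transitions = defaultdict(lambda: defaultdict(int))

def train(words):
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    nexts = transitions[word]
    return max(nexts, key=nexts.get) if nexts else None

train("i was walking down the road".split())
train("i was walking down the street".split())
train("i was walking down the road".split())

print(predict("the"))      # "road": seen twice, vs "street" once
print(predict("walking"))  # "down"
```

What I still don't see is how HTM improves on something like this: its cells carry sequence context ("the" after "down" vs "the" elsewhere), which a flat table like mine cannot.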
Does HTM use reward? Does it leak reward to translatable/causally linked nodes (cat=dog=rat… A>B, e.g. X>AGI) to create new rewards (AGI, cars, homes, games)? I.e., things it heavily predicts, or things that lead up to such AGI/food goals.
Why use SDRs? Why not just store memories in a hierarchy and heterarchy like this, so input can flow up and partially, robustly match similar memories? https://imgbb.com/p22LNrN

In my design, each node stores how many times it has been reached/seen, and when two memories are seen within a close time delay, they link Hebbian-style, e.g. "thank" + "you" = a "thank you" node. My hierarchy therefore knows its own statistics, i.e. the number of times each memory has been seen and which parts each memory is made of.

And when the node "dog" is active, energy leaks to its prediction nodes like "play", "eat", "sleep", "barked", "bites", and those leak to "cat"; these two now-active nodes, "cat" and "dog", form a heterarchy link. It is the only way the network can find out that cat and dog are related: through (literally) their shared contextual paths stored in the network. The same goes for category nodes: "dog" stands for beagle, terrier, Labrador, poodle; they are all active at the same time, so it links them, making a hierarchy of heterarchy groups. The brain merges same words, similar words, category similars, the two eyes, multiple senses, motion blur, reward, recency, and more.
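Here is a toy sketch of the counting and Hebbian-linking part of my design (the window size and data structures are just illustrative choices): each node counts how often it is seen, and two nodes seen within a close time delay get a link whose strength grows with co-occurrence.

```python
from collections import defaultdict

seen = defaultdict(int)   # node -> times reached/seen
links = defaultdict(int)  # (node, node) -> co-occurrence strength
WINDOW = 2                # "close time delay", measured in tokens here

def observe(tokens):
    """Count each node and Hebbian-link it to recent nodes in the window."""
    for i, tok in enumerate(tokens):
        seen[tok] += 1
        for j in range(max(0, i - WINDOW), i):
            pair = tuple(sorted((tokens[j], tok)))
            links[pair] += 1

observe("thank you very much".split())
observe("thank you so much".split())

print(seen["thank"])            # 2
print(links[("thank", "you")])  # 2: a strong pair, a candidate "thank you" node
```

A strong enough link like ("thank", "you") is what would get promoted into a composite node in the hierarchy; the energy-leak / heterarchy part would sit on top of these counts.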
What makes HTM's predictive accuracy so good? The HTM School videos, although more interesting than deep learning ones, explained nothing about how to predict the next word in a sentence, or the next part of an image. That is really bad, if you ask me; HTM is not clearly explained, then. For example, if I see "dog" 55 times in text, "dog bark" 10 times, "cat" 44 times, and "dog played" 1 time, then the more data I eat, the better I can predict what follows, or even which word is more probable on its own with no context. If I translate my context, e.g. cat=dog, I may predict "cat barked"; this helps me predict much better with less data, by recognizing similar time delays, related words, etc. For example, 'i was walking down the road' is very similar to 'theuy had beeinn walkiing up thiiis road'.
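To make the "translate the context" trick concrete, here is a toy sketch (my own code, with the cat=dog similarity link hand-set rather than learned): if we never saw "cat barked", back off to the counts for a word linked as similar, so less data still yields a prediction.

```python
from collections import defaultdict

bigrams = defaultdict(lambda: defaultdict(int))
similar = {"cat": "dog", "dog": "cat"}  # heterarchy links, hand-set for this demo

def train(text):
    words = text.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def predict(word):
    """Most frequent follower; if the word is unseen, translate via a similar word."""
    nexts = bigrams[word]
    if not nexts and word in similar:
        nexts = bigrams[similar[word]]
    return max(nexts, key=nexts.get) if nexts else None

for _ in range(10):
    train("dog barked")
train("dog played")

print(predict("dog"))  # "barked": seen 10 times vs "played" once
print(predict("cat"))  # "barked" too, borrowed through the cat=dog link
```

In my full design the similarity links would come from shared contextual paths, as described above, rather than being hand-set.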