HTM wait what

So I have my own very realistic design for AGI. I went through several HTM School videos and read some material about HTM, and have several interesting questions.

So far I have got that there are binary maps/codes made… standing for sensory input? So the sound of a car horn, or an image of a cat, becomes a square map of random binary digits?

Then maps can be partially matched, or have their dots in slightly different locations, and still count as a match because they look similar, right? I thought these were like hashes, though; how can it get by being non-exact?
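
If I understand it, something like this toy sketch is the idea: two SDRs are compared by how many active bits they share (overlap), not by exact equality like a hash. The sizes and threshold here are my own made-up illustrative numbers, not anything from the HTM papers:

```python
import random

N = 2048   # total bits in each SDR (illustrative size)
W = 40     # number of active bits (~2% sparsity)

def random_sdr():
    """A set of active bit positions standing in for an SDR."""
    return set(random.sample(range(N), W))

def overlap(a, b):
    """Similarity = how many active bits the two SDRs share."""
    return len(a & b)

cat = random_sdr()
# A noisy copy of "cat": drop 5 of its active bits and add 5 random ones.
noisy_cat = set(list(cat)[:W - 5]) | set(random.sample(range(N), 5))
dog = random_sdr()

THRESHOLD = 20  # "match" if at least this many bits are shared (arbitrary choice)
print(overlap(cat, noisy_cat) >= THRESHOLD)  # True: inexact but close enough
print(overlap(cat, dog) >= THRESHOLD)        # almost certainly False
```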

Binary maps can be compressed to a smaller size, somewhat like CNNs do, so the square is smaller but still looks similar…?

Unions combine memories/binary maps… no, this can't be any maps, maybe it is only the very similar ones, like images of the same cat that just look slightly different? Does it allow you to recognize an animal with cat ears, a dog nose, and monkey legs as a mammal?
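
If it is what I think, a union is just the OR of several SDRs, and you can then ask whether a new SDR's active bits fall (mostly) inside the union. A toy sketch with my own made-up sizes, using random SDRs as stand-ins for the cat images:

```python
import random

N, W = 2048, 40

def random_sdr():
    return set(random.sample(range(N), W))

cat_images = [random_sdr() for _ in range(5)]   # five slightly-different cat views (random stand-ins here)
union = set().union(*cat_images)                # OR all of them together

def seen_before(sdr, union, threshold=0.9):
    """True if most of this SDR's active bits fall inside the union."""
    return len(sdr & union) / len(sdr) >= threshold

print(seen_before(cat_images[2], union))  # True: it was part of the union
print(seen_before(random_sdr(), union))   # almost certainly False, thanks to sparsity
```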

The spatial pooler, hmm, do its columns cluster the similar binary maps? Like cortical columns?
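
From what I gathered, each column looks at the input, the columns with the most overlap to the current input win, and learning strengthens the winners' connections to the active input bits, so similar inputs end up activating similar sets of columns. A stripped-down sketch of that idea (not the real algorithm; every parameter value here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_SIZE = 256
NUM_COLUMNS = 128
ACTIVE_COLUMNS = 6       # how many columns win per input
PERM_THRESHOLD = 0.5     # a synapse counts as "connected" above this permanence

# Each column gets a permanence value to every input bit (no potential pool, for brevity).
permanences = rng.random((NUM_COLUMNS, INPUT_SIZE))

def spatial_pool(input_bits, learn=True):
    connected = permanences > PERM_THRESHOLD
    overlaps = connected[:, input_bits].sum(axis=1)    # overlap of each column with the input
    winners = np.argsort(overlaps)[-ACTIVE_COLUMNS:]   # top-k columns activate
    if learn:
        # Hebbian-ish: winners grow synapses to the active bits, shrink the rest.
        for c in winners:
            permanences[c] -= 0.02
            permanences[c, input_bits] += 0.05
        np.clip(permanences, 0.0, 1.0, out=permanences)
    return set(winners)

# Two overlapping inputs should map to largely overlapping column sets after some learning.
a = list(range(0, 20))
b = list(range(5, 25))   # shares 15 of 20 bits with a
for _ in range(50):
    spatial_pool(a)
    spatial_pool(b)
print(len(spatial_pool(a, learn=False) & spatial_pool(b, learn=False)))
```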

And OK, but where in all of this does it predict the next word? I mean, if the input is "I was walking down the", how/why does it grab the next part of the sequence?

Does HTM use reward? Does it leak reward to related/causal nodes (cat=dog=rat… A>B, e.g. X>AGI) to make new rewards (AGI, cars, homes, games)? I.e., things it heavily predicts, or things that lead up to such AGI/food goals.

Why use SDRs? Why not just store memories in a hierarchy and heterarchy like this, so input can flow up and partially, robustly match similar memories? https://imgbb.com/p22LNrN

In my design, each node stores how many times it has been reached/seen, and when two memories are seen within a short time delay, they link Hebbian-style, e.g. "thank" + "you" = a "thank you" node. My hierarchy therefore knows its stuff, i.e. the number of times each memory has been seen and what is made of what parts. And when the node "dog" is active, energy leaks to its prediction nodes like play, eat, sleep, barked, bites, and those leak to "cat"; these two now-active nodes, cat and dog, form a heterarchy link. It's the only way the network can find out cat and dog are related: through (literally) their shared contextual paths stored in the network. The same goes for category nodes: "dog" stands for beagle, terrier, Labrador, poodle, and they are all active at the same time, so it links them, making a hierarchy of heterarchy groups. The brain merges the same word, similar words, category similars, two eyes, multiple senses, motion blur, reward, recency, and more.
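
Here is a tiny sketch of what I mean by count-and-link nodes (just illustrative; the window size and example words are arbitrary, and this is nowhere near the full design):

```python
from collections import defaultdict

class Node:
    def __init__(self, name):
        self.name = name
        self.count = 0                     # times this memory has been seen
        self.links = defaultdict(int)      # Hebbian link strengths to other nodes

nodes = {}
recent = []                                # memories seen within the "close time delay" window

def see(name, window=2):
    node = nodes.setdefault(name, Node(name))
    node.count += 1
    # Hebbian-style: co-activation within the window strengthens a link both ways.
    for other in recent[-window:]:
        if other is not node:
            node.links[other.name] += 1
            other.links[name] += 1
    recent.append(node)

for w in "thank you thank you very much".split():
    see(w)

print(nodes["thank"].count, dict(nodes["thank"].links))
# 2 {'you': 3, 'very': 1} -- "thank" seen twice, repeatedly co-active with "you"
```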

What makes HTM’s predictive accuracy so good? The HTM School videos, although more interesting than deep learning, explained nothing about how to predict the next word in a sentence, or the next part of an image. That is really bad (if you ask me); HTM is not clearly explained, then. For example, if I see "dog" 55 times in text, "dog bark" 10 times, "cat" 44 times, and "dog played" 1 time, I can better predict what follows the more data I eat, or even which word is more probable on its own with no context. If I translate my context question, e.g. cat=dog, I may predict "cat barked"; this helps me predict much better with less data, by recognizing similar time delays, related words, etc., e.g. ‘i was walking down the road’ is very similar to ‘theuy had beeinn walkiing up thiiis road’.
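
Something like this toy counter is what I mean by predicting from counts (the training text and numbers are just an example, not my actual design):

```python
from collections import Counter, defaultdict

unigrams = Counter()
bigrams = defaultdict(Counter)

def train(text):
    words = text.lower().split()
    unigrams.update(words)
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def predict_next(word):
    """Most probable next word given the previous one; fall back to the overall most common word."""
    if bigrams[word]:
        return bigrams[word].most_common(1)[0][0]
    return unigrams.most_common(1)[0][0]

train("the dog barked and the dog played and the dog barked and the cat sat")
print(predict_next("dog"))    # 'barked' (seen after 'dog' more often than 'played')
print(predict_next("zebra"))  # never seen, so fall back to the most frequent word overall: 'the'
```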


It seems that you’re desiring/expecting an application-specific tutorial, rather than an explanation of the concepts. The videos, as created by Matt Taylor, did a great job of explaining the concepts in a broad, general way… for me personally, the fact that it wasn’t deep learning, and was so different from deep learning in the first place, was quite a stumbling block. I had to unlearn a bit of what I thought/believed.

I think that parts 1 and 2 here do a great job. As far as applying the concepts to a specific problem in a tutorial or coding fashion, it doesn’t do that, and I don’t think that was the intent, either. It doesn’t leave you with a ready-made tool, only a concept, and perhaps that’s some of the frustration that I seem to pick up from your writing (forgive me if I’m mistaken and that isn’t the case).

So, what to do about this?

For me, I had to sit down for several hours, pause the videos, write out and draw on paper the concept that was being explained, how the rules worked, compare that with BAMI, and fuse all the details together until it made sense.

But the point that you’re raising, which deserves some discussion, is that we as a community could improve our materials to show more tutorial-style “how-to” examples of HTM applied to different problem sets. This would go a long way towards showing the practical applications of the algorithm(s) to current problems, and would probably help increase adoption.

But doing that takes time/energy and some type of compensation, whether monetary and/or prestige in nature. Since the untimely passing of Matt earlier this year, it’s fair to say that we’ve been finding our footing as a community and just trying to get by. I feel certain that somebody will be creating more tutorial/explanatory videos in the future (in addition to the meetings/discussions which are shared weekly here), but we just haven’t made it there yet.

With all that said, welcome to the community!

We hope you’ll engage with us and others with mutual respect and understanding. We each have our pet theories on how things (AGI, brain function, etc.) work, and none of us has the absolutely correct approach. But by respectfully gathering together here, sharing ideas, research, and critiques in a friendly way, we can all help move our understanding of the brain and AI forward to help solve the practical problems in our world and lives.


It’s not that the walkthrough didn’t give an application-specific tutorial; it should have been able to explain how AGI/the brain works (as far as they have got). It’s not something that is a template that changes based on your use. The brain can learn many things, but there are only a few mechanisms behind all it does; they work on vision and audio, and the mechanisms even work together. It’s the car, and the user is just the driver, so to speak…

So… how does HTM store the text “hello there”? For example, as a 2D block of white cubes with hash-like yellow dots? And how does HTM store a memory made of two known memories, “hello there” + “my friend”? If you can answer these two questions, it should build the basis for understanding how it stores sequences. In my brain design, it doesn’t store the same word, phrase, or letter more than once; it builds off it, as I showed above in an image, and too-similar cat images would group together in a cortical column node, making a category, since no image from a camera is ever exactly the same yet still needs to be stored “only once”. So I link together these clusters… mycat+mycat+mycat… cat>sat>and>said>hi… then link together nodes of those… vision+text cortices… then those… AI+AI team…

It doesn’t. Not in any standard way. That’s left to the implementation. I think some people are using hash tables/dictionaries to store which input resulted in which output SDR; some use compressed bit representations and similarity scores, etc.

The temporal algorithm itself only addresses the current input frame within the context of the previous input frame, and the possible next-step output frames… it’s closer to context chaining/linking, so an entire sequence (or a set of possible sequences) is represented by chaining those frame-to-frame links together.
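
To make that concrete, here’s one purely hypothetical way an implementation could do the bookkeeping: a dictionary from each raw input (say, a word) to its SDR, plus next-step counts keyed on the current frame in the context of the previous one. This is only a sketch of the storage idea, not the actual temporal memory algorithm, and the sizes are arbitrary:

```python
import random
from collections import Counter, defaultdict

N, W = 1024, 20

encodings = {}                         # raw input -> its (stable) SDR
transitions = defaultdict(Counter)     # (previous word, current word) -> counts of the next word

def encode(word):
    """Give each distinct word a fixed random SDR, stored in a dictionary."""
    if word not in encodings:
        encodings[word] = frozenset(random.sample(range(N), W))
    return encodings[word]

def train(sentence):
    words = sentence.split()
    for w in words:
        encode(w)                                  # make sure every word has an SDR
    for prev, cur, nxt in zip(words, words[1:], words[2:]):
        transitions[(prev, cur)][nxt] += 1         # current frame, in the context of the previous frame

def predict(prev, cur):
    options = transitions[(prev, cur)]
    return options.most_common(1)[0][0] if options else None

train("hello there my friend")
train("hello there my dear friend")
print(predict("hello", "there"))   # 'my'
print(len(encodings["hello"]))     # 20 active bits in the word's SDR
```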

The other thing about HTM is that the vanilla flavor of the algorithm(s) is several years old and focuses primarily on neocortical columns, which make up only one part of the brain. Newer explorations are examining the influence/relationship of the thalamus, cerebellum, and other, more ancient parts of the brain that also share interconnections/context with the neocortex.

If you haven’t had the opportunity to look at the larger scope of the brain’s parts/interconnections, I like the playlist below, which helped put things into perspective for me as well. I recommend watching the videos from oldest to most recent, as they build atop one another.

Brains explained:


Does HTM have a benchmark on the Hutter Prize? It is so fun seeing how well it predicts text, and the way they measure it is the best AND most fun, so I hope HTM has a score against lossless compression, or tries it. Even I tried mine, which is barely fully coded (~9%), with only some of my design (the core) implemented.
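
For anyone wondering why compression is the yardstick: a predictor can be scored in bits per character (the average -log2 of the probability it assigns to each symbol), which is roughly what an ideal lossless compressor would output. A toy example of that scoring, using just a character-frequency model (nothing Hutter-Prize-grade, and it cheats by fitting and scoring on the same text):

```python
import math
from collections import Counter

def bits_per_character(text):
    """Average -log2(p) a unigram character model assigns to the text.
    With an arithmetic coder, this approximates the compressed size in bits/char."""
    counts = Counter(text)
    total = len(text)
    return sum(-math.log2(counts[c] / total) for c in text) / total

sample = "the better the model predicts the text, the fewer bits per character it needs"
print(round(bits_per_character(sample), 2))      # well under 8 bits/char for English-like text
print(round(bits_per_character("abababab"), 2))  # 1.0: a two-symbol text needs ~1 bit/char
```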