HTM doesn’t work at all!

HTM is useless; it seems to be just a simple RNN, so why should it evolve intelligence? Just because of the hypocritical “prediction hypothesis”? If HTM can generate intelligence, why can’t an RNN? HTM still has no effective applications so far, just a model that simulates neurons. There are thousands of such models, all of them deceiving the public, except for deep learning.

If no one can explain why a simple prediction hypothesis can generate intelligence, HTM is just another money-fraud project wrapped in the cloak of artificial intelligence.

You will always say that the modeling of the cerebral cortex is still not perfect, but that is not the key issue. The key is “why prediction can generate intelligence.”

You would say that the brain works by prediction, but there is insufficient evidence for that. What if the brain generates intelligence by quantum entanglement? Simulating the brain is not the key at all. The key is why prediction can generate intelligence, and if the prediction hypothesis holds, why can’t current deep learning technology generate intelligence?

OpenAI’s GPT-3 language model is trained on an enormous amount of text and its predictions are already very good. Is it intelligent? When will HTM predict better than GPT-3?

1 Like

Hello @john_zhang. Welcome to the forum.

Please tell me, why the animosity? Where did you learn about HTM that you have to criticise it so fervently? It’s not that forum members here are against criticism, but it seems you’ve had an emotional argument about it that led you to your post.

If you have a particular issue with HTM, please tell us about it so we can try to address it.

12 Likes

Welcome, @john_zhang. I think the core of your question is what does prediction have to do with intelligence. Before I go into that specifically, let me address a couple of your other points.

First, you appear to have the impression that folks who are working on HTM theory believe pure prediction by itself will magically lead to intelligence. I don’t think anyone here who understands HTM would make that argument. Prediction is part of intelligence – an important part of it. But there are other parts which are just as important, like movement and agency. HTM theory is some time away from describing intelligence.

Second, you give the example of an RNN and ask what makes HTM better. The current implementations of HTM won’t beat an RNN at very many tasks (one big reason being that HTM does not yet incorporate hierarchy, despite its name). At other tasks, however, HTM is better. For example, HTM’s capacity for one-shot learning means that it requires orders of magnitude less training to learn sequences than an RNN. It also has an advantage in its resistance to error, noise, and catastrophic forgetting.
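To make the one-shot point concrete, here is a minimal toy sketch of my own (a first-order memory, not the actual HTM Temporal Memory algorithm and not any particular library’s API): transitions are stored after a single presentation, so the second pass through a sequence is already predicted correctly, with no training epochs.

```python
# Toy one-shot sequence learning (illustration only, not real HTM):
# each transition is stored the first time it is seen.

class OneShotSequenceMemory:
    def __init__(self):
        self.transitions = {}   # element -> set of elements seen to follow it
        self.prev = None

    def learn_and_predict(self, element):
        """Learn the (prev -> element) transition, then predict successors."""
        if self.prev is not None:
            self.transitions.setdefault(self.prev, set()).add(element)
        self.prev = element
        # The prediction is simply the set of previously seen successors.
        return self.transitions.get(element, set())

memory = OneShotSequenceMemory()
for item in "ABCDABCD":              # sequence seen once, then repeated
    predicted = memory.learn_and_predict(item)
    print(item, "->", predicted or "{}")
```

On the second pass every element is predicted from a single prior exposure; a real HTM Temporal Memory does this with high-order context rather than the first-order lookup used here.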

So getting back to your central question (if I understand it correctly): what does prediction have to do with intelligence? I believe the answer is to think about why intelligence exists in the first place. Intelligence exists in order to increase the survival rate of an organism. The reason organisms are able to survive and make copies of themselves into the future is that they are capable of resisting entropy. Any orderly system which is not able to resist entropy will disintegrate and become random chaos. This is a basic law of the universe.

Ok, so what does that have to do with anything? Well, in order to resist this tendency toward chaos, an organism must be able to model the world around it. This allows it to make predictions about what will occur. Like I mentioned before, that alone isn’t enough, though: simply knowing what will happen won’t enable an organism to resist entropy. What is also needed, in addition to prediction, is the agency to act on those predictions.

If you’ve not read about the Free Energy Principle, I highly recommend it. I think it cuts to the core of your question, if I understand what you are asking. This video does a pretty good job of explaining the basic concept.
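To give a tiny flavour of the prediction-error idea (a toy of my own with made-up constants, not a faithful free-energy implementation): an agent that keeps a belief about a hidden cause and nudges that belief to reduce the mismatch between what it predicts and what it senses.

```python
# Minimal prediction-error minimization, loosely in the spirit of the
# Free Energy Principle (Gaussian noise, arbitrary constants).

import random

true_state = 5.0          # hidden cause in the world
belief = 0.0              # the agent's current estimate of that cause
learning_rate = 0.1       # step size for belief updates (arbitrary)

for step in range(50):
    observation = true_state + random.gauss(0.0, 0.5)  # noisy sensory input
    prediction_error = observation - belief            # the "surprise" signal
    belief += learning_rate * prediction_error          # reduce future surprise

print(f"belief after 50 steps: {belief:.2f} (true state: {true_state})")
```

Acting on the world to make observations match predictions (rather than only updating beliefs) is the agency half of the story mentioned above.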

One additional point which you did not mention, but which I believe is important, is why one would explore the biology of intelligence, and what advantage that approach may have over the more traditional approaches. For that question, I think Matt did a great job of explaining in this video.

20 Likes

HTM works to faithfully emulate a single structure in the brain. It happens to be the structure that is the basis of the cortex, but that does not mean it is the entire brain.

This model is much closer to what cortex does than any RNN. The “extra” features are very important to emulating what the cortex does in a future, much larger model of the brain.

You are correct that by itself, HTM does not do much that is interesting. It will be a very necessary component of a much larger model that does do some very interesting things.

Working out how to make a logic gate or memory cell does not make a computer, but these elements have to be worked out before you can make a computer. HTM fits in this same component space.

Nobody is any closer to making an intelligent machine, but following the biological path makes sense, as it happens to be the only working example there is. Every (EVERY!) non-biological attempt to date has started out with much promise and then crashed and burned as it faced the real world of combinatorial explosion. The micro-parsing and interspersed memory/logic inherent in the cortex model are the only path I have seen that solves this very difficult problem.

The other critical element that will be needed to harness this power is the subcortical structures. That is much of the area I am focused on. In my work I assume that HTM will faithfully emulate what the cortex does. An RNN does not do that.

15 Likes

Don’t feed the “$12M” troll (training GPT-3 required $12M in “unloaded” Azure instances). Since we don’t need a cooling machine attached to our skulls, the problem might not be a matter of GFLOPS.

Low-hanging fruit is becoming scarce. Before the summer, expect one last winter :slight_smile:

8 Likes

Very short …
HTM prediction, so far, is the model part (possibly cheaper than a NN) of an RL system (so you also need a basal ganglia implementation).

Put planning on top of that (another part of HTM that is not yet clear) and you have a general-learning agent (SDRs allow symbols).

NNs do not have time integrated, learn in batches rather than online, and can’t classify more than hundreds of classes (put another way, they do not support distributed representations).
No distributed representations, no support for symbolic logic. No symbols means every bit of “reasoning” has to be coded manually as yet another NN module… pattern recognition is not reasoning.

AFAIK HTM does not overfit; instead, it forgets.

So it is not just HTM, but the platform it is built on top of: time, SDRs, VSA, both connectionist and symbolic.
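As a toy illustration of why SDRs behave like symbols (my own sketch with arbitrary sizes and sparsity, not any particular library’s API): similarity is just bit overlap, and a union can hold several “symbols” at once and still be queried for membership.

```python
# Toy sparse distributed representations (SDRs) as sets of active bit indices.

import random

N_BITS = 2048       # representation width
N_ACTIVE = 40       # ~2% sparsity

def random_sdr(seed):
    rng = random.Random(seed)
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    return len(a & b)   # shared active bits = similarity

cat, dog, car = random_sdr("cat"), random_sdr("dog"), random_sdr("car")
animals = cat | dog     # a union holds both "symbols" at once

print("cat vs dog overlap:", overlap(cat, dog))      # near zero by chance
print("cat in 'animals'? ", overlap(cat, animals))   # full 40-bit match
print("car in 'animals'? ", overlap(car, animals))   # near zero
```

VSA-style binding/unbinding adds more structure on top of this, but even bare overlap already gives set-membership and similarity queries that dense NN activations don’t.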

BTW, to have an intelligent agent, don’t you first need to define what intelligence is?!

3 Likes

Hi @john_zhang,

We appreciate that there are many opinions about the path to machine intelligence. We believe that an approach based on biological principles, not just inspiration, is the fastest. Our efforts are currently focused on applying HTM to existing deep learning systems, rather than pitting HTM against deep learning.

In terms of “what we would say”, perhaps these two resources summarize it best: our VP of Research on applying HTM to deep learning, and an ML guide to HTM that dives into the details of the HTM algorithms from an ML perspective.

9 Likes

If that is your point of view, I am sorry to say that you have entirely missed what HTM is all about. A great proponent of HTM just passed away; instead of a rant, I would suggest finding his videos on YouTube, watching them, and then engaging in a meaningful dialogue.

7 Likes

HTM works very well.
See:

3 Likes

He’s a bot.

1 Like

Hello,

This is more of a comment than an answer to the post:

I work in a computational neuroscience lab, and I often find people utterly confused about the core aims of HTM theory. People from the machine learning community often think along the lines of @john_zhang’s post, and people from the neuroscience community think the theory rests on many vague assumptions and is too abstract to be validated against biology.

I find @Paul_Lamb’s, @Bitking’s, and @mraptor’s answers very enlightening, and I just hope more people learn about this and do justice to the theory.

all the best,
Younes.

13 Likes

@john_zhang Please do not just shitpost without full information. I really appreciate the answers given by some of the experienced people here and find them very helpful.

As regards GPT-3: it would do you much better to read about its actual workings rather than some shitty blog that doesn’t have even a remote idea of what it is. GPT-3 has 175B parameters and is pretty much overfitted. It’s not intelligent; it’s just predicting text that most of the time is not natural but is grammatically correct.

Its predictions are not genuinely good: due to overfitting it has a considerable bank of information to respond from (overfitting is not necessarily bad, since it is really what humans do anyway when learning new languages), but those responses are purely statistical, driven by the self-attention mechanism and the very nature of deep learning.
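For reference, this is roughly the mechanism being discussed: a minimal scaled dot-product self-attention sketch in numpy (toy sizes, random weights, a single head; an illustration of the operation GPT-style models stack many layers of, not GPT-3 itself).

```python
# Minimal causal scaled dot-product self-attention (toy sizes, random weights).

import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))       # token embeddings

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)           # how strongly each token attends to the others
mask = np.triu(np.ones((seq_len, seq_len)), k=1).astype(bool)
scores[mask] = -np.inf                         # causal mask: no peeking at future tokens

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row

output = weights @ V                           # context-mixed token representations
print(output.shape)                            # (4, 8)
```

Everything the model “knows” is baked into weights like these by gradient descent over the training text; there is no online, one-shot learning of the kind HTM aims at.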

the key is “why prediction can generate intelligence.”

It doesn’t. Intelligence is not just mere prediction that sounds relevant to the context. Intelligence is much more and is tied directly to being conscious.

what if the brain generates intelligence by quantum entanglement?

No evidence has been found that the brain “generates” (notice the quotes) intelligence via quantum entanglement. It is an entirely quantum phenomenon that does not affect any biological process even remotely (unless you have just read some shitty Mandela-effect guy who claims quantum computing causes it and is probably a flat-earther).

Simulating the brain is not the key at all

Debatable. If I were able to simulate a brain perfectly today, I doubt there would be anything else preventing me from creating an actual super-intelligence.

3 Likes