@gmirey dropped a nice link on this earlier:
Update: almost 2000 users.
Update: almost 150K views total
Looks like you’re behind.
I think a cellular-automata-like network with dynamic connectivity could probably exhibit rhythms and other brain-like properties, perhaps even general intelligence, while being much simpler in its mechanisms.
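For what it's worth, the idea is easy to prototype. Here is a minimal, entirely hypothetical sketch (the majority-rule update, rewiring probability, and sizes are all made up for illustration) of a binary cellular automaton whose connectivity rewires itself based on activity:

```python
import random

random.seed(0)  # deterministic run for the sketch

N = 64       # number of cells
K = 3        # neighbors per cell
STEPS = 50

# Each cell starts with a random binary state and K random neighbors.
state = [random.randint(0, 1) for _ in range(N)]
neighbors = [random.sample(range(N), K) for _ in range(N)]

def step(state, neighbors):
    # Majority rule over a cell's current neighbor set.
    return [1 if sum(state[j] for j in neighbors[i]) * 2 > K else 0
            for i in range(N)]

for t in range(STEPS):
    prev = state
    state = step(state, neighbors)
    # Dynamic connectivity: a cell whose state just flipped may rewire
    # one of its incoming connections to a new random cell.
    for i in range(N):
        if state[i] != prev[i] and random.random() < 0.1:
            neighbors[i][random.randrange(K)] = random.randrange(N)
```

Whether something like this produces rhythms rather than noise would depend entirely on the update rule and rewiring policy, which is exactly what would need to be explored.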
I am new to this forum. I am a bioinformatician and a heavy user of deep learning…
To me, Numenta's approach is much more interesting and profound than deep learning and other ML methods, but the ML community is more interested in what works. Deep learning works in many cases ABOVE average human performance, and hence it is useful in many areas. But ultimately it is just a powerful function approximator.
Nevertheless, I have the feeling that the ML field is moving in Numenta's direction faster than we think. Recent research has shown significant improvements in unsupervised / self-supervised approaches, which are the real "cake" of ML, as Yann LeCun put it.
In 2018 we learned that unbelievable results can be achieved with systems that just try to "model" the world: self-attention, language models, and generative models. Thus, even in the deep-learning world, it is becoming clear that the future paradigm of ML is not a data-hungry classifier, but a strong model of the world, learned from a sequential or forward-predicting model (like a GAN), which is shockingly similar to Hawkins' view of the brain as "constantly changing multiple predictive models of the world". In my opinion this is not far from what the recent BERT and StyleGAN models are, although the architecture is very different.
The amazing thing about the brain is how it can learn such a complex model of the world without much data and with very low-powered, slow hardware. This is where Numenta should focus: the future AI technology is probably not a data-hungry monster running on multiple GPUs, but a context-aware, self-supervised ensemble of learners that can learn numerous concepts and objects and has a "common sense" about the world, like humans do. HTM has a much better chance of getting there than DL.
I think that deep learning is here to stay, but it is just a step toward the brain-like systems of the future. We just need to go through GANs, attention, capsules, language models, etc. before arriving at HTM again, probably with some clever take on the current computation paradigm…
If Numenta's software becomes as mature and elegant as Keras or PyTorch, and shows some interesting unsupervised results on real problems, then it will fly. I don't think it has a chance of beating DL on big-data supervised tasks, where humans can't compete either.
Thanks for your thoughts. I agree with a lot of what you said. I think HTM will require input from multiple sensors in space to really shine on hard tasks like object modeling. We need to split up the senses so we have different cortical columns modeling different locations in space, attached to different sensors. That is how sharing lateral connections will enable sensor fusion. We think this is how local patches of one sense are fused, and we also think this will apply across sensor boundaries.
But one thing we can do with today’s tested theory is see how it can be applied to the Deep Learning systems of today to improve performance or accuracy. That is currently what we are looking into, and I have to say that from what I’ve seen so far I am encouraged that we can make some impact on the current Weak AI scene by adding some HTM ideas.
Hi everyone. I've been reluctant to join the conversation for a couple of years, for many reasons, and will restrain myself further, since writing posts like this one takes too much time. But here are my two cents on this topic.
TL;DR: to succeed, the HTM ecosystem needs systems, processes, practices and tools that belong to general software development domain, or that are spread across multiple domains: system programming, network programming, game programming, to name a few.
First of all, yes: these algorithms certainly are non-trivial, if not hard. But implementing one is entirely feasible nonetheless. As with any software, developing and polishing an algorithm is an iterative process (even, or especially, when it's backed by a massive amount of scientific data). Hence, it requires a feedback loop.
The effectiveness of such a feedback loop depends on the development process. How quickly can you get results from a working version of the algorithm or an integrated system? How quickly can you decide whether the system behaves correctly and does what it is meant to do? Are you sure the results you get are clear? Etc.
These are questions that are asked every day in the general software development domain. Libraries, frameworks, and tools are used to make the process easier and to make more complex tasks tractable.
From what I've seen, the HTM ecosystem lacks that feedback loop almost completely. This is a complete showstopper.
The second point: these statistical algorithms, or ML/DL as they are called these days, are not something you need in order to succeed. They won't get you, the theory, or the HTM algorithms anywhere. They might bring some tools and practices into the development process, though, but not much.
More than anything else, you need a system that scales, scales very well, and doesn't consume all available resources while operating continuously. This architecture needs to be the foundation of the solution, and you need to build it first. What kind of task the solution performs is insignificant, because any algorithm can be embedded in it.
And then you can build processes and feedback loops on top of that system, improve it further, and improve the algorithms you embed.
I was pretty surprised when I saw the news that Numenta is hiring an ML/DL developer (and it looks like the position has already been filled, since the announcement is gone). Numenta, and/or the community, needs a couple of good software generalists with strong production-grade experience, in network programming at least. They might not be familiar with some concepts, but they can build an infrastructure that you can use. Scientists don't construct laboratories; instead, they formulate the requirements to be met.
I won't discuss the theory itself here, since it doesn't matter much; no one has been able to prove or refute it, because there is no clear process (of proving/disproving) to follow yet. Whether the theory is right or wrong (and I assume it's right, at least in its core principles), a well-defined process that is easy to follow will get you to the point more quickly.
Jeff: “I think we are pretty mainstream, we are just rare.” (source)
@rhyolight can’t wait for Episode 16 to come out.
While I don't have a binary opinion on this, I'm inclined toward this view. Going DNN, or integrating DNNs, might lead HTM into a trap. One of the things I liked about HTM is that even though the algorithms are complex and involve self-organization and probabilistic techniques, they are still mechanical and can be modelled with classical CS computational models. IMO this is extremely important both for engineers and for businesses, so that a certain level of confidence can be established about how an HTM application will operate in the real world, just like any other non-AI software out there.
Don't worry, we are not going Bayesian. We're just trying to see what tricks we can pull off in the current ML space to garner some attention. Maybe there is some low-hanging fruit where HTM ideas bring big gains? It is a good time to ask these questions.
HTM isn't mainstream because there isn't any relatable public news about it. Deep neural nets went mainstream the second they defeated a world champion at Go. Funny thing, I wanted to try making a Go AI using HTM before DeepMind existed. At the time I didn't have the skills, and there wasn't a Windows version. Anyway, people relate to games. If HTM can play games, people will pick it up, especially if you can pit it against DeepMind. People would also be more likely to pick it up if it's just the download of a library.
I agree with much of what has been said in the discussion about why HTM isn’t mainstream yet.
I think that it may be necessary to depart somewhat more from biology to make it more useful in the short term. The lack of supervised learning / reinforcement learning is quite a problem, in my opinion, in terms of applying it to most ML problems. What I imagine may bridge the gap is a combination of HTM with another deep learning approach.
Imagine the following: HTM processing of image data -> sparse representation output -> convolutional neural network -> object classification.
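To make the CNN stage of that pipeline concrete, here is a minimal sketch of the middle step, with everything hypothetical: a random 32x32 binary map stands in for the HTM layer's sparse output, and a single hand-written convolution stands in for the CNN (a real one would learn many filters and add a classification head):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an HTM layer's output: a 32x32 binary
# sparse map with roughly 2% of bits active.
sparse_map = (rng.random((32, 32)) < 0.02).astype(np.float64)

def conv2d(image, kernel):
    """Valid-mode 2D convolution, the basic building block of a CNN."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One fixed 3x3 edge-like filter, purely for illustration.
kernel = np.array([[ 1.,  0., -1.],
                   [ 2.,  0., -2.],
                   [ 1.,  0., -1.]])
feature_map = conv2d(sparse_map, kernel)
```

An open question with this arrangement is whether a CNN gains anything from binary sparse inputs compared to raw pixels; that would have to be tested empirically.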
To be useful for most problems, it needs what I would call a decider network. It needs a way to choose between various responses and then be reinforced based on accuracy. You’ll never have AGI without this IMHO. Humans and animals learn through an immensely complex process of supervised and unsupervised learning. Humans in particular have very sophisticated decision, reinforcement, and punishment systems.
Has anyone tried combining HTM output with a supervised learning approach?
I am planning to implement roughly the opposite of that…
(1) Deep learning as the “sensor” to extract image features (a ResNet trained on ImageNet).
(2) Encode the dense space of the DL sensor’s last layer into an SDR using some clustering approach.
(3) Use multiple HTM columns to learn this representation in an “unsupervised” way. (I was thinking of simulating “movement” of the detector over an image as a sequence of data.)
(4) Use this HTM ensemble to do “zero-shot” object classification / object detection.
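Step (2) is the non-obvious one, so here is a minimal sketch of one way to do it. Everything here is an assumption: the post suggests clustering, while this uses a simpler top-k (winner-take-all) random projection as a stand-in, and the random vector is a stand-in for a ResNet's 512-dimensional last-layer activations:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_to_sdr(features, size=2048, active=40, seed=42):
    """Sparsify a dense feature vector into a fixed-sparsity SDR.

    Projects the features into a larger space with a fixed random
    matrix, then keeps only the `active` strongest units as 1-bits.
    This is a simple stand-in for the clustering approach mentioned
    in the post.
    """
    proj_rng = np.random.default_rng(seed)      # fixed projection
    projection = proj_rng.standard_normal((size, features.shape[0]))
    overlaps = projection @ features
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[np.argsort(overlaps)[-active:]] = 1     # top-k winners
    return sdr

# Stand-in for step (1): a ResNet last-layer activation vector.
features = rng.standard_normal(512)
sdr = dense_to_sdr(features)
```

A fixed projection matters here: similar feature vectors should map to overlapping SDRs, which is the property the downstream HTM columns in step (3) would rely on.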
The idea here: use DL for supervised pattern recognition, and use HTM where it shines: making sense of the world and modeling it in an unsupervised way.
The test case here will be “zero-shot” object recognition/detection…
Given an UNSEEN object class, can an ensemble of HTMs recognize new categories?
Can it tell, for example, that some unique animal it hasn’t seen before is somewhat similar to animals it has seen?
There are demos from Cortical.io doing just this with words for animals (inferring the eating preferences of coyotes after seeing those of wolves and dogs), though I don’t know of anything like that with images…
That sounds like an interesting idea. I’m just not certain how you end up with a decision or meaningful output in this case, but it sounds like you have ideas for that.
I feel like you’d still need a softmax layer on top of the HTM stuff to train as a classifier and help determine the error (which would trigger the HTM system to engage its Hebbian learning process). Afterward, when you’re happy with the performance of the classifier, you could either ditch the softmax or keep it in parallel with the HTM system.
That’s where I’d start.
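A softmax readout over SDRs is cheap to sketch. Here is a minimal, hypothetical version (the SDR width, class count, learning rate, and the toy block-structured SDRs are all made up; a real setup would feed in actual HTM output):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical setup: 1024-bit SDRs out of an HTM layer, 3 classes,
# one softmax layer trained with cross-entropy via plain SGD.
n_in, n_classes = 1024, 3
W = np.zeros((n_classes, n_in))
b = np.zeros(n_classes)

def train_step(sdr, label, lr=0.1):
    probs = softmax(W @ sdr + b)
    grad = probs.copy()
    grad[label] -= 1.0       # d(cross-entropy)/d(logits)
    W[:] -= lr * np.outer(grad, sdr)
    b[:] -= lr * grad
    return probs

# Toy SDRs: each class activates its own disjoint block of 40 bits.
def toy_sdr(label):
    sdr = np.zeros(n_in)
    sdr[label * 40:(label + 1) * 40] = 1
    return sdr

for _ in range(50):
    for label in range(n_classes):
        train_step(toy_sdr(label), label)

pred = int(np.argmax(softmax(W @ toy_sdr(1) + b)))  # → 1
```

The error signal from this layer is also what you could feed back to gate the HTM side's learning, as suggested above.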