Welcome to the community!
I see that you are new here; you may want to read this for a fast start on what we are doing:
HTM does not use back-prop, nor the reward-based reinforcement learning you are used to seeing in deep learning. We don’t need thousands of presentations to learn our internal models.
I think you will be delighted to see how it is possible to do useful learning without all these trappings of deep learning. This online, rapid learning is one of the best features of HTM.
HTM really is a very different way of doing artificial neural networks.
Thanks for your question @gogoalshop. I am not sure if you are asking in the context of Deep Learning or HTM. What @Bitking said above is spot on. But I have moved this into the #engineering:machine-learning forum since you seem to be asking more about the former? Please provide clarification if necessary. And welcome to HTM Forum!
A large enough artificial neural network trained with BP can always find a solution by memorization. That is to say, it is not going to find sophisticated logic and inductive concepts; instead it will shortcut them. Evolution is gradient-free, so vanishing and exploding gradients in particular are no problem, and it can produce logic that is not possible with BP. However, that is really only to say that evolution can produce smaller networks that are in some ways more sophisticated.
Anyway, current artificial neural networks have an unacknowledged problem where multiple weighted sums operate on a common vector. That leads to weak classification-separation ability and low memorization capacity, an inefficient use of the weight parameters. If you understand how the weighted sum acts as an associative memory, you can fix that by applying a parameterized nonlinear function to each term of each weighted sum before the summing step, such that each nonlinearity is different. The parameter could just be a (random) bias term fed into a standard nonlinear function, or you could include random projections to produce the necessary diversity. Unfortunately, when I read papers on arxiv or wherever, I don’t see much sign that anyone knows what they are doing at this basic level. I guess there are historical reasons for that. However, it is a terrible failure of science.
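To make the idea above concrete, here is a minimal NumPy sketch of the contrast being drawn: a standard layer that computes plain weighted sums of a shared input vector, versus the proposed variant in which every term of every weighted sum passes through its own nonlinearity before summing. The choice of `tanh` with a random per-term bias is just one instance of the "random bias into a standard nonlinear function" option mentioned in the post; the function names are mine, not from any library.

```python
import numpy as np

rng = np.random.default_rng(0)

def plain_layer(x, W):
    # Standard layer: each output is a weighted sum over the common input vector.
    return W @ x

def per_term_nonlinear_layer(x, W, B):
    # Proposed variant: each term w_ij * x_j gets its own nonlinearity,
    # here tanh shifted by a random bias b_ij, before the terms are summed.
    # The per-term biases make every nonlinearity different, supplying the
    # diversity the post argues is needed.
    return np.tanh(W * x + B).sum(axis=1)

n_in, n_out = 8, 4
W = rng.normal(size=(n_out, n_in))   # weights shared by both layers
B = rng.normal(size=(n_out, n_in))   # random per-term biases (the "parameter")
x = rng.normal(size=n_in)

print(plain_layer(x, W).shape)             # (4,)
print(per_term_nonlinear_layer(x, W, B).shape)  # (4,)
```

Both layers use the same number of weights; the variant only adds the fixed random biases, which is what is meant by getting more classification-separation capacity out of the same weight parameters.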
Did someone just repost something I posted before???
Maybe a bot???
Anyway time has moved on and I am able to answer some of my own questions a bit better.
Anyway weird. gogoalshop = bot
Man these spambots are getting good. I did not catch this, but I think you are right.