Is there any challenge in HTM theory?

Hi,
I am a PhD student interested in HTM theory. Is there an open challenge in HTM theory that I could tackle as my PhD thesis, and would that be acceptable to defend as a PhD thesis?
Thanks


I mean, do you have any ideas for continuing HTM research as a PhD thesis?

@sunguralikaan combined HTM with reinforcement learning for a master’s thesis. You could try combining HTM with deep learning or something. Maybe you could find a model of another part of the brain, like the cerebellum or hippocampus, and try to combine it with HTM.

It depends on what subject it’s on. Is it computer science in general, something more specific, or something else like cognitive science?

Does it need to be something which could be used in HTM theory? If so, it needs to be possible in the brain. If your field is neuroscience or something similar, you could figure out how inhibitory interneurons contribute to things like the spatial pooler and minicolumn bursting/non-bursting. Maybe it will confirm those aspects of HTM or expand them.

You could try to show whether basic components of HTM are applicable to other things. I don’t know much about AI besides HTM, but perhaps sparse distributed representations or minicolumns are applicable to other forms of AI, like long short-term memory (LSTM), since it does something somewhat similar to HTM, or DeepLeabra, since that is biologically plausible. That wouldn’t expand HTM theory, but it would draw attention to it and show the validity of its mechanisms. Showing that HTM’s mechanisms have potential to be built on is important because HTM can’t be proven mathematically.

HTM uses binary synaptic weights, and those seem to be sufficient, but as far as I know non-binary (graded) weights haven’t been tested, so it’s unclear whether they would help.
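To make that concrete, here is a minimal sketch (plain NumPy, not the NuPIC API; the threshold value and the graded variant are illustrative assumptions) of what “binary weights” means in HTM and where a non-binary variant could be swapped in for such an experiment:

```python
import numpy as np

# Minimal sketch: in HTM each synapse stores a scalar permanence, but its
# effective weight is binary -- connected (1) if the permanence crosses a
# threshold, otherwise disconnected (0).
rng = np.random.default_rng(0)

n_inputs = 1000
permanences = rng.random(n_inputs)            # per-synapse permanence in [0, 1]
connected_threshold = 0.5                     # assumed value for illustration

active_inputs = (rng.random(n_inputs) < 0.02).astype(float)   # sparse binary input

# Standard binary-weight overlap: count active inputs on connected synapses.
binary_weights = (permanences >= connected_threshold).astype(float)
binary_overlap = binary_weights @ active_inputs

# A hypothetical non-binary variant one could test: use the permanence itself
# (or any monotone function of it) as a graded weight.
graded_overlap = permanences @ active_inputs

print(binary_overlap, graded_overlap)
```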

You could look for how the brain does sequence resets. There’s currently no known biological mechanism for that.


How about an encoder to process images? I’m not sure what’s already been done on this, though a good image encoder (greyscale or even color) could open up a new set of application areas for the HTM algorithms. For instance, video processing, which I think is commonly done by feeding CNN outputs into RNNs.


A rigorous description of the mathematical foundation of the HTM neuron, alone and in an assembly?
The HTM model has been criticized for the lack of a mathematical treatment. One exists for the SDR portion of the model, but there is no comparable treatment of the overall model.
Here is the paper describing the properties of the SDR:

This document (BAMI) describes the HTM algorithms in great detail, but it lacks a rigorous mathematical description of what is being calculated.
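To give a flavor of the SDR-level analysis that does exist (a sketch of the standard result, with notation assumed here: vectors of n bits, w active bits each, match threshold θ), the probability that a random SDR falsely matches a fixed SDR with an overlap of at least θ bits is

$$
P(\text{false match}) \;=\; \frac{\sum_{b=\theta}^{w} \binom{w}{b}\binom{n-w}{w-b}}{\binom{n}{w}},
$$

which works out to be extremely small for typical parameters such as n = 2048, w = 40, θ = 20. An analogous treatment of the full spatial pooler and temporal memory pipeline is what is still missing.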

Here is an example of the kind of analysis that is common with deep learning:

You may explore the Numenta collection of HTM papers here:


Thanks, that was great. I am a computer engineer working on artificial intelligence. I think your idea about combining HTM with deep learning is very good, but I do not know where to start.

Great, :+1: thanks. Can you explain more?

thanks

It’s BitKing’s idea. I don’t know much about AI besides HTM.

See BitKing’s post:

@BitKing could you elaborate on this? What sort of merging do you see happening with deep learning or any other form of AI? What sorts of problems do you think it could solve?


An example is using the recall/anomaly properties to signal, at some level, that the perceived signal is novel and to modify the learning rate to capture it. I think that the cortex does exactly this in combination with the subcortical structure called the Reticular Activating Complex (also frequently called the Reticular Activating System).
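As a toy illustration of that first idea (a sketch with assumed names and an assumed gain, not tied to any HTM library API): use the temporal-memory anomaly score of the current input to raise the learning rate of a downstream learner when the input is novel.

```python
# Toy sketch of anomaly-modulated learning; the function name, gain, and cap
# are illustrative assumptions, not part of any HTM library.
def modulated_learning_rate(base_lr, anomaly_score, gain=5.0, max_lr=0.1):
    """Scale the learning rate up when the HTM anomaly score (in [0, 1])
    says the current input is novel."""
    return min(max_lr, base_lr * (1.0 + gain * anomaly_score))

# A familiar input barely changes the rate; a novel one boosts it.
print(modulated_learning_rate(0.001, anomaly_score=0.05))  # 0.00125
print(modulated_learning_rate(0.001, anomaly_score=0.95))  # 0.00575
```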

Another possibility is the connection of the subcortical structures and the cortex (HTM) cells. What is the division of labor and what information is passed between the structures?

HTM has the possibility to do one-shot learning, that is, to remember with a single presentation of an item. Traditional deep learning requires many more presentations of a training set to learn the central tendencies of the data. Can the one-shot learning of HTM be extended to delta-coding in deep learning?


Well, I basically mean an encoding function that would take in the greyscale values of all the pixels (which are often given their own input neurons in common ANNs) and output an encoding vector that could be fed into the SP. It would need to follow the basic rules of encodings: the number of active bits is always the same, the same input gives the same encoding, and similar inputs yield overlapping encodings.
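Here is a minimal sketch of such an encoder (plain NumPy; the parameter values are illustrative assumptions, not an existing NuPIC encoder). Each pixel gets a small scalar encoding: a run of W active bits whose position depends on the pixel’s intensity bucket. This keeps the number of active bits fixed (W per pixel), gives identical encodings for identical images, and gives overlapping encodings for similar images.

```python
import numpy as np

BUCKETS = 16                     # intensity resolution per pixel (assumed)
W = 3                            # active bits per pixel (assumed)
BITS_PER_PIXEL = BUCKETS + W - 1

def encode_image(image):
    """image: 2-D uint8 array of greyscale values in [0, 255]."""
    pixels = image.ravel()
    buckets = (pixels.astype(int) * BUCKETS) // 256       # bucket index 0..BUCKETS-1
    encoding = np.zeros((pixels.size, BITS_PER_PIXEL), dtype=np.uint8)
    for i, b in enumerate(buckets):
        encoding[i, b:b + W] = 1                           # contiguous run of W bits
    return encoding.ravel()

# Two similar 4x4 images give encodings with the same number of active bits
# and a large overlap.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
img2 = np.clip(img1.astype(int) + 5, 0, 255).astype(np.uint8)
e1, e2 = encode_image(img1), encode_image(img2)
print(int(e1.sum()), int(e2.sum()), int((e1 & e2).sum()))
```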


You’ll find a mathematical formalization of HTM’s spatial pooler here: https://arxiv.org/pdf/1601.06116.pdf
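For orientation, the core computation that paper formalizes can be sketched roughly as follows (global inhibition only; boosting and learning omitted; all sizes, sparsities, and thresholds are assumed for illustration):

```python
import numpy as np

# Rough sketch of one spatial pooler step: each column counts active inputs
# on its connected synapses, and the top-k columns become active.
rng = np.random.default_rng(0)

n_inputs, n_columns, column_sparsity = 1000, 2048, 0.02
permanences = rng.random((n_columns, n_inputs))
connected = (permanences >= 0.5).astype(np.int32)            # binary connected synapses

input_sdr = (rng.random(n_inputs) < 0.05).astype(np.int32)   # sparse binary input

overlaps = connected @ input_sdr                 # per-column overlap counts
k = int(column_sparsity * n_columns)             # number of winning columns
active_columns = np.argsort(overlaps)[-k:]       # k-winners-take-all
print(k, overlaps[active_columns].min())
```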


@shiva what did you end up doing?