HTM does not perform well when learning a simple function like y = x!

This is not an encoding problem, it’s an intelligence problem. At this level of HTM theory (one layer of cortex running SP/TM, processing its output as distal input), it won’t learn anything about an ever-increasing value, even if a regular function dictates the progression of points. Every point will look like a new point to the layer, no sequence will ever end, and it will be in perpetual confusion about which point comes next because it has seen each one exactly once.

You could use the delta encoder. It encodes the difference in value at each step, which for y = x is a constant, enabling proper prediction. In more complex setups, you can combine a regular numeric encoder with a delta encoder if both provide relevant information.
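To make the contrast concrete, here is a minimal sketch in plain Python (not the NuPIC encoder API; `bucket_encode` is a hypothetical stand-in for a scalar encoder) showing why the raw values of y = x never repeat while their step-to-step deltas do:

```python
# Minimal sketch contrasting raw values of y = x with their step-to-step
# deltas. `bucket_encode` is a toy stand-in for a scalar encoder that maps a
# value to a bucket (and hence to an SDR).

def bucket_encode(value, bucket_width=1.0):
    """Toy scalar encoder: map a value to a bucket index."""
    return int(value // bucket_width)

seen_raw, seen_delta = set(), set()
previous = None
for x in range(10):
    y = x  # y = x, ever-increasing

    raw_bucket = bucket_encode(y)
    raw_status = "novel" if raw_bucket not in seen_raw else "repeat"
    seen_raw.add(raw_bucket)

    if previous is None:
        delta_status = "n/a"
    else:
        delta_bucket = bucket_encode(y - previous)
        delta_status = "novel" if delta_bucket not in seen_delta else "repeat"
        seen_delta.add(delta_bucket)
    previous = y

    print(f"x={x}: raw bucket {raw_bucket} ({raw_status}), delta ({delta_status})")

# Raw buckets are novel at every step, so the temporal memory never sees a
# repeating sequence; after the first step the delta bucket is always the
# same, so it becomes trivially predictable.
```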


Thank you @rhyolight
I understand what you are saying. But I think we have to pay attention to how we encode the information and to what the HTM predictions should represent given those initial SDRs, since these two can vary even for a single problem. (?) :thinking:

Thank you @scott
That’s what I was thinking. HTM should learn whatever information is presented in the SDRs, and we can exploit this property by encoding the data differently to get different results.


Thank you very much. I will pay attention to it next time.