HTM forgets the past and learns from the present

In my native Arabic, we call a human "insan الإنسان", a word related to forgetting, because a human cannot remember everything.
Can you imagine a human who could not forget anything?
It would mean he always remembers every sad memory (for example, the death of his mother) and lives the rest of his life in sadness, because according to some research, sad memories affect a human more than happy moments.
In reality, we keep sad memories, but not as vividly as at first; over a long time the sad memory fades and makes room for new memories, happy and sad. Of course, a human must still remember the basics, like his name and how to read and write, etc.
So my question is: how can HTM forget what it has learned in the past, in order to learn from another, more important period of time, while at the same time not completely forgetting what it learned before (remembering it implicitly)?
I will give an example: temperature anomaly detection. HTM must learn the temperature changes of the whole year. The temperature during the day is different from the night, the temperature is not the same across a whole month, and the temperature in July is very different from January.
I want to ask whether HTM can learn in any given period, not only in the first period, and take the changes over time into account.
I would like someone to explain what HTM can do in these cases, and whether HTM can forget historical learning the way a human does.


I can’t address the HTM forgetting question but I will offer this article that discusses human memory and forgetting over a long time scale.


Thank you for your reply. So, in some cases, we need to make HTM forget some information in order to learn new things and get better results in less time.

HTM and NuPIC already have ways of forgetting. In Hebbian learning, it is the decrement amount that is used to degrade synapses when patterns are not seen for a long time. There are other ways to do it, like removing old synapses as new ones are created, which I have talked about before when discussing keeping model sizes small.
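To make that concrete, here is a minimal sketch of the Hebbian-style permanence update (the names and constants are illustrative, not NuPIC's actual API): on each step, synapses from active inputs are reinforced, while synapses from silent inputs are decremented, so patterns that are never revisited gradually fall below the connection threshold and are "forgotten".

```python
import numpy as np

# Illustrative constants; real NuPIC parameter names and values differ.
PERM_INC = 0.10        # permanence increment for active presynaptic inputs
PERM_DEC = 0.02        # permanence decrement for inactive presynaptic inputs
CONNECTED_PERM = 0.5   # threshold above which a synapse counts as connected

def adapt_segment(permanences, active_inputs):
    """Hebbian-style update for one dendritic segment.

    permanences:   float array, one permanence value per potential synapse
    active_inputs: boolean array, True where the presynaptic cell fired
    """
    permanences[active_inputs] += PERM_INC      # reinforce what was seen
    permanences[~active_inputs] -= PERM_DEC     # slowly forget what was not
    np.clip(permanences, 0.0, 1.0, out=permanences)
    return permanences >= CONNECTED_PERM        # currently connected synapses
```

Because the decrement is much smaller than the increment, old patterns decay slowly rather than being erased outright, which is roughly the "remember implicitly" behavior asked about above.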


Hmm, yes, that's true, but how can HTM learn from separate sequences? I mean, if HTM forgets and degrades synapses over a long period of time, why doesn't it then learn from another sequence?
For example:
HTM learns the temperature of the month of May; after a long period it learns the temperature of October; after another period it learns the temperature of January, etc.
I don't know if you understand what I mean.
My problem is: if the pattern changes over time within a specific context, can HTM learn all of those changes?
Sorry for my bad English :roll_eyes::persevere:

I don’t think I understand you. You seem to be talking about hierarchical temporal patterns, but HTM treats these just like long sequences.


Yes, exactly. OK, I agree with you, but in some cases the sequence is very long, and that may hurt the results, so we should not learn the whole sequence, just some of its sub-sequences.

I see what you are saying. I think you need to run a temporal pooling process to extract these sub-patterns. Unfortunately, we've been investigating this concept through sensorimotor integration theory, not scalar stream prediction.
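For anyone reading along, here is a toy sketch of the pooling idea (my own illustrative code, not Numenta's union pooler implementation): keep a slowly decaying trace of cell activity, so a stable sub-sequence settles into a stable pooled SDR while transient activity fades out.

```python
import numpy as np

class ToyUnionPooler:
    """Decaying union of recent cell activity: a stable sub-sequence
    yields a stable pooled SDR, while transient cells fade away."""

    def __init__(self, num_cells, decay=0.9, sparsity=0.02):
        self.trace = np.zeros(num_cells)
        self.decay = decay
        self.k = max(1, int(num_cells * sparsity))  # pooled SDR size

    def compute(self, active_cells):
        self.trace *= self.decay         # old activity fades each step
        self.trace[active_cells] = 1.0   # refresh currently active cells
        # Pooled output: indices of the k cells with the strongest trace.
        return np.argsort(self.trace)[-self.k:]
```

Feeding the pooled SDR to a higher region would let that region treat "the May pattern" or "the October pattern" as a single stable object, rather than one enormous sequence.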


That's too bad, but I'm very grateful for your reading and your answers. Thanks, Matt :slight_smile:


When we think about how HTM works, being based on biology and all that, it is useful to reflect on how the human mind works. What is it you remember, and why do some things get remembered better than others?
Spoiler alert - SURPRISE! and emotional coloring.
Generalization smears out the memory, blending it with other similar memories.


I’ve been putting a lot of thought into applying HTM, especially the TM part, to implement a state machine.

Last night, just before bed, I realized that the learning rules for what changes to remember on a change of state could be just the same as HTM's.

For an HTM system where both input and output are encoded into the current time step (t-1's output is encoded as part of t's input), we can define a (non-finite) state machine in which each desired "state" gets its own group of pools, each pool attached to a given system output. Goals for the state machine could be "decrease hunger", "increase excitement", "decrease 'pain' (as concatenated from hunger, fatigue, spikes of sensory input/surprise)", etc. Each 'output' area (move forward, backward, noise, etc.) for each desired state consists of columns attached to a lower-level pool/group. When a given pattern seems to cause a desired change, strengthen connections. When it seems to cause a negative change, weaken those connections (or alternatively have a "state" specifically remember the negative changes, so that state-pos and state-neg sum their outputs and cancel out).

encoder(s) (can convert between encoding and input value) ==>

first layer HTM pool (or group of pools) ==>

State layer (each potential output gets an HTM pool; its input space is the first layer's SDR output)

Assuming our encoders can take an encoding and transform it back into (approximately) the input value it came from, the states of our state-machine layer can, when needed, fire down to their outputs' connected columns; those columns fire back down to the 'input' space, generating an encoding based on their distal connection strengths, which is then passed back out to our encoder, which translates the encoding into an approximate system output.
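Here is a toy sketch of the goal-driven learning rule described above (all names are mine and hypothetical; this is not ElixirHTM code): each state pool strengthens connections from input patterns that move its goal metric in the desired direction, and weakens connections from patterns that move it the wrong way.

```python
import numpy as np

LEARN_RATE = 0.05  # illustrative value

class GoalState:
    """One 'state' pool: votes for its output based on connection
    strengths, and learns from how a goal metric actually changed."""

    def __init__(self, input_size, sign=+1):
        self.weights = np.zeros(input_size)  # strength per input connection
        self.sign = sign                     # +1 wants the metric up, -1 down

    def learn(self, active_inputs, metric_delta):
        # Strengthen connections when the metric moved in the desired
        # direction; weaken them when it moved the wrong way.
        update = LEARN_RATE * np.sign(metric_delta) * self.sign
        self.weights[active_inputs] = np.clip(
            self.weights[active_inputs] + update, 0.0, 1.0)

    def vote(self, active_inputs):
        # How strongly this state wants its output fired for this pattern.
        return float(self.weights[active_inputs].sum())
```

For example, a "decrease hunger" state would be GoalState(2048, sign=-1); after each action you would call learn(active_columns, new_hunger - old_hunger), and competing states' votes would be summed (state-pos minus state-neg) to choose the system output.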

Such a system would (at least in my thought experiments) be inexact, 'flavored' by the competing objectives of the state layer. At the start, it would be a fitful storm of I/O noise, with wild and chaotic movements, and in some instances, "death" of the state machine. But take this approach through some simulated Darwinian competition over a large population of agents, whose initial parameters are set via genetic algorithms, and I suspect we'd see optimal setups emerge that settle into healthy objective-seeking/managing scenarios.

It's probably not very clear, but I do see a path for expressing memory, state, and instructions in terms of HTMs. I just need to finish my ElixirHTM system, make it more friendly, and start experimenting away.
