HTM-Based Autonomous Agent

I have been travelling the last couple of days so I couldn’t respond quickly.

The motor layer consists of only 30 neurons and no columns. It is as shown on page 28. There are 4 rows just to visualize different states of the same neurons; there is actually a single row of neurons.

So the motor layer can be treated as 30×1 (30 columns and 1 neuron per column). The functionality could be realized by a full layer, but I just simplified it this way.
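
To make the 30×1 idea concrete, here is a minimal sketch (my own toy version, not the actual implementation) of a 30-neuron motor layer that learns to associate L5 SDRs with motor neurons through a simple weight matrix. The sizes, learning rule, and function names are illustrative assumptions.

```python
import numpy as np

N_MOTOR = 30          # 30 motor neurons, effectively 30 columns x 1 cell each
L5_SIZE = 1024        # assumed layer 5 cell count (illustrative)

weights = np.zeros((N_MOTOR, L5_SIZE))

def learn(l5_sdr: np.ndarray, motor_idx: int, lr: float = 0.1):
    """Strengthen the link between the active L5 cells and the chosen motor neuron."""
    weights[motor_idx, l5_sdr] += lr

def motor_activation(l5_sdr: np.ndarray) -> int:
    """Pick the motor neuron with the strongest overlap to the current L5 SDR."""
    overlaps = weights[:, l5_sdr].sum(axis=1)
    return int(np.argmax(overlaps))
```

Because the association is overlap-based rather than an exact lookup, a slightly changed L5 activation still drives a sensible motor neuron.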

I thought of this myself too at some point. It works, but there is a catch: what happens when the layer 5 activation changes slightly because of a new pattern or boosting? In that case, there will be no motor command mapped to the slightly changed layer 5 activation. It can work if the layers are highly stable, but you basically remove noise robustness, and any change requires a new mapping, which kind of contradicts what HTM does.
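
To illustrate the catch with made-up numbers (purely a toy example): an exact lookup keyed on the precise set of active L5 cells misses entirely when a single bit changes, while an overlap score still finds the closest learned activation.

```python
import numpy as np

learned = np.array([3, 17, 42, 99, 150])            # an L5 SDR seen during training
slightly_changed = np.array([3, 17, 42, 99, 151])   # one cell differs (new pattern/boosting)

exact_table = {tuple(learned): "motor_command"}
print(exact_table.get(tuple(slightly_changed)))      # None -> no motor command mapped

overlap = len(np.intersect1d(learned, slightly_changed))
print(overlap)                                       # 4 of 5 bits still match
```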

I think I understand your confusion.

Each layer 5 activation corresponds to different activations on D1 and D2. Suppose that at time t, L5 has activation L5(t), D1 has activation D1(t) and D2 has activation D2(t). D1(t) and D2(t) take activation L5(t) as their input. Therefore D1(t), D2(t) and L5(t) all have differing activations. However, there is a relation: both D1(t) and D2(t) occur when they get L5(t) as their input, so they each encode activation L5(t) in their own unique way. As time goes on, D1 and D2 learn all layer 5 activations. On the temporal memory side, D1(t) and D2(t) take their distal input from L5(t-1).
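
Here is a rough sketch of that data flow at a single time step, under my own simplified assumptions: fixed random projections stand in for the proximal learning of D1 and D2, and the sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
L5_SIZE, STRIATUM_SIZE, ACTIVE = 1024, 512, 20

proj_d1 = rng.random((STRIATUM_SIZE, L5_SIZE))   # D1's own mapping of L5
proj_d2 = rng.random((STRIATUM_SIZE, L5_SIZE))   # D2's own (different) mapping

def encode(projection, l5_sdr):
    """Activate the striatal cells with the highest overlap to the L5 SDR."""
    overlaps = projection[:, l5_sdr].sum(axis=1)
    return np.argsort(overlaps)[-ACTIVE:]

l5_prev = rng.choice(L5_SIZE, ACTIVE, replace=False)   # L5(t-1)
l5_t    = rng.choice(L5_SIZE, ACTIVE, replace=False)   # L5(t)

d1_t = encode(proj_d1, l5_t)   # D1(t): driven by L5(t), unique to D1
d2_t = encode(proj_d2, l5_t)   # D2(t): driven by L5(t), unique to D2
# Temporal memory side (not implemented here): D1(t) and D2(t) would grow
# distal segments onto the cells that were active in L5(t-1), i.e. l5_prev.
```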

This is like the motor layer's association with layer 5, with a single difference. For the motor layer you associate L5(t) with Motor(t). In this case, you associate D1(t-1) with L5(t) through apical connections instead of D1(t). Same with D2.

So there are also apical connections forming to L5(t) from D1(t-1) and D2(t-1). Any activation occurring in D1 or D2 depolarizes the cells in L5 that are expected to be active at the next time step. What you end up with is that at any given time there are predictive cells in L5(t) that are distally depolarized by the activation from L5(t-1) and apically depolarized by the activations from D1(t-1) and D2(t-1).
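
Putting the last two paragraphs together, here is a rough sketch of how the apical and distal depolarization of L5 could be tracked. This is again my own simplification, not the actual code; the boolean link matrices, sizes, and threshold are made-up placeholders.

```python
import numpy as np

L5_SIZE, STRIATUM_SIZE = 1024, 512
apical_d1 = np.zeros((L5_SIZE, STRIATUM_SIZE), dtype=bool)  # D1(t-1) -> L5(t) links
apical_d2 = np.zeros((L5_SIZE, STRIATUM_SIZE), dtype=bool)  # D2(t-1) -> L5(t) links
distal_l5 = np.zeros((L5_SIZE, L5_SIZE), dtype=bool)        # L5(t-1) -> L5(t) links

def learn_step(l5_t, l5_prev, d1_prev, d2_prev):
    """Associate the previous D1/D2 and L5 activations with the current L5 cells."""
    apical_d1[np.ix_(l5_t, d1_prev)] = True
    apical_d2[np.ix_(l5_t, d2_prev)] = True
    distal_l5[np.ix_(l5_t, l5_prev)] = True

def predictive_l5_cells(l5_prev, d1_prev, d2_prev, threshold=5):
    """L5 cells that are both distally and apically depolarized for the next step."""
    distal_depol = distal_l5[:, l5_prev].sum(axis=1) >= threshold
    apical_depol = (apical_d1[:, d1_prev].sum(axis=1) +
                    apical_d2[:, d2_prev].sum(axis=1)) >= threshold
    return np.where(distal_depol & apical_depol)[0]
```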

You can achieve the same thing by directly using the same columnar activation of L5 on D1 and D2; however, that does not seem to be how biology does it, as one activation is in the cortex and the other is in the striatum. These should be mapped, but not identical.

This is a VERY crucial problem. I had this sort of problem from the beginning. Currently, you can only get around it by redesigning your encoders; this is a research area on its own. In my case, I was interested in the parts of the image that changed, so I tried implementing an event-based visual sensor here. This allowed the agent to sense what actually changed (what matters). Maybe you can come up with an encoder that ‘magnifies’ what you need until we can have a crack at the attention problem.
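
For what it's worth, a toy version of the event-based idea can be as simple as encoding only the pixels that changed beyond a threshold between consecutive frames. This is a minimal sketch in that spirit, not my actual sensor; frame sizes and the threshold are arbitrary.

```python
import numpy as np

def event_encode(frame: np.ndarray, prev_frame: np.ndarray, threshold: int = 15):
    """Return flat indices of pixels that changed noticeably since the last frame."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return np.flatnonzero(diff > threshold)

# Usage: feed the returned indices as the active bits of the input SDR.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[10:12, 10:12] = 200          # something moved/appeared in this patch
active_bits = event_encode(curr, prev)   # only the changed pixels become active
```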

Any of these can be a starting point. You will probably understand what is important about the input after some trials. I really spent weeks trying to come up with something kind of universal to “zoom in” on the important bits of the input. However, biology has very sophisticated tools tailored just for this task, such as the retina and thalamus. If you are interested in how the eye does it, you can read up on neuromorphic vision sensors, but then again, this is another area of research.
