Timing Circuits

In the video entitled “Exact Timing and Oscillatory Dynamics” in the HTM Chat with Jeff series posted here, @jhawkins mentioned that he has a solution for timing, that he is confident in it but it is not a priority, and that he is willing to offer it to anyone who is interested.

Well, I’d like to take him up on that offer. The applications I’m working on are very sensitive to timing and to the ways in which timing can vary between different but similar signals. In my efforts to learn explicit timing relationships in HTM, I use two approaches that have different strengths and weaknesses.

  1. Use temporal sequences as an implicit time signal. This depends on the ability to learn long sequences and quickly loses context when minor variations occur. However, it picks up a new sequence very quickly, and only the transition between the old and new sequence registers in the anomaly score. This is analogous to trying to match known sequence segments onto the current signal.

  2. Add a time signal as an explicit ScalarEncoder input. This allows you to do 2D matching of sequence curves with time as the X axis. So if there is a break from one sequence to another, the whole latter sequence will be anomalous instead of just the transition.
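The second approach can be illustrated with a minimal, stand-alone sketch of a scalar time encoder. This is not NuPIC’s actual `ScalarEncoder` class; the function name and parameters here are hypothetical, but the idea is the same: map a sub-second time value to a contiguous block of active bits so that nearby times produce overlapping representations.

```python
import numpy as np

def encode_time(t, period=1.0, n=100, w=11):
    """Encode a sub-second time value t (in seconds) as a binary vector
    with w contiguous active bits out of n, in the spirit of a scalar
    encoder. Values outside [0, period] are clipped to the ends."""
    frac = min(max(t / period, 0.0), 1.0)
    start = int(round(frac * (n - w)))  # leftmost active bit
    sdr = np.zeros(n, dtype=np.int8)
    sdr[start:start + w] = 1
    return sdr

# Nearby times share bits; distant times share none.
a = encode_time(0.10)
b = encode_time(0.12)
c = encode_time(0.80)
print(np.sum(a & b))  # large overlap
print(np.sum(a & c))  # no overlap
```

Feeding such an encoding alongside the data encoding gives the temporal memory an explicit “when” dimension, which is what makes the whole post-break sequence look anomalous rather than just the transition.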

I do short-term data analysis, so I don’t use the time-of-day, seasonal, hourly, or datetime encoders provided by the HTM OPF framework. I’ve had to focus explicitly on how time is represented at the subsecond level.

I see that @jhawkins briefly discussed his timing theory back in January below:

I see there was also some interest from @onejgordon in this post:

So, @jhawkins, can you lay down some knowledge on us about your proposed timing mechanism? If it’s not too difficult, I’d like to experiment with building it.


Jake
I briefly described my idea for timing in my Jan 24 post and elsewhere. The basic idea is that the cells learning sequences have two contexts. One context is the previous state of the temporal memory; it determines “what” comes next, and it is what we have implemented and tested extensively. The second context is a timing signal; it determines “when” the next input is expected. A cell enters the predictive state when both contexts are recognized. We have not implemented this.
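The two-context rule can be sketched abstractly. This is only an illustration of the gating described above, not an implementation of Numenta’s temporal memory; segments are modeled as plain sets of presynaptic cell indices, and all names and thresholds are hypothetical.

```python
def matches(segment, active_bits, threshold):
    """A dendritic segment 'recognizes' a pattern when enough of its
    synapses overlap the currently active bits."""
    return len(segment & active_bits) >= threshold

def is_predictive(distal_segments, apical_segments,
                  prev_tm_state, timing_signal,
                  distal_thresh=3, apical_thresh=3):
    """A cell enters the predictive state only when BOTH contexts match:
    a distal segment recognizes the previous temporal-memory state
    ("what" comes next) AND an apical segment recognizes the timing
    signal ("when" it is expected)."""
    what = any(matches(s, prev_tm_state, distal_thresh) for s in distal_segments)
    when = any(matches(s, timing_signal, apical_thresh) for s in apical_segments)
    return what and when

# Toy example with cell indices as bit sets (hypothetical values):
distal = [{1, 2, 3, 4}]    # learned sequence context
apical = [{10, 11, 12, 13}]  # learned timing context
print(is_predictive(distal, apical, {1, 2, 3, 9}, {10, 11, 12}))  # True
print(is_predictive(distal, apical, {1, 2, 3, 9}, {40, 41, 42}))  # False
```

The second call fails because the sequence context alone is not enough: the input arrived at an unrecognized time, so the cell stays non-predictive.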

When we change the tempo of a sequence we change it for all elements in the sequence. This tells us that the circuitry that generates the timing signal should be centrally located and shared by all the regions in a particular modality. This way we can speed up and slow down all elements in the sequence.

I have suggested that matrix cells in the thalamus are a good candidate for the timing mechanism. The matrix cells do not have a topological mapping. They receive converging input from all the regions (L5 thin tufted cells) in a modality and project back broadly to L1 in the same regions. This is the type of anatomy we need for timing. The timing signal would be recognized on apical dendrites of the cells in the temporal memory.

Humans have the ability to learn timing between elements up to about 1 second. Therefore we would expect the matrix cells (or wherever the timing circuit is) to generate some sort of changing pattern for up to 1 sec. The timing signal would have to start again with each new note in a melody. BTW, I talked to a person who studies rats and she told me that the matrix cells in rats start firing at the beginning of each sweep of the whiskers, which supports the matrix cell hypothesis.

Finally, I recently read a book titled “Your Brain Is a Time Machine.” It describes several types of circuits that could encode time as I have postulated.


From an implementation perspective, could this be abstracted as a third state for cells? (something between predictive and active) Context from previous input on distal dendrites puts cells into predictive state, SP puts them into “to be activated” state, and then recognised timing pattern on apical dendrites puts them into active state. Or do you imagine another process?
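One way to make the proposed three-stage scheme concrete is as a small state-transition function. This is just one reading of the question above, with entirely hypothetical state names and flags, not an established HTM mechanism.

```python
# Hypothetical cell states for the proposed three-stage scheme.
INACTIVE, PREDICTIVE, CANDIDATE, ACTIVE = range(4)

def next_state(distal_match, sp_selected, apical_match):
    """Distal context alone makes a cell predictive; spatial-pooler
    column selection promotes it to a 'to be activated' candidate; and
    only a recognized timing pattern on the apical dendrite finally
    activates it."""
    if distal_match and sp_selected and apical_match:
        return ACTIVE
    if distal_match and sp_selected:
        return CANDIDATE
    if distal_match:
        return PREDICTIVE
    return INACTIVE

print(next_state(True, True, False))  # CANDIDATE: waiting on the timing signal
```

Whether activation should be hard-gated on the apical match like this, or whether the apical input should merely bias activation, is exactly the open design question.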

For sequence memory, I assume that activations in the TM layer are what should trigger the timing signal to restart, correct? In this case, timing for a feature would always be relative to the feature just before it.

Think I’ll also write up an implementation of timing; it seems pretty straightforward. I’m needing timing for my RL system, and should learn a lot from applying it to TM first.

Would you be willing to briefly elaborate on this, if it’s not too far out of scope for this thread? I’m interested in hearing about different tasks where people envision timing being important.

5 posts were split to a new topic: Non-precise timing

I described the use case I am using to test my RL system in this thread. As @sunguralikaan pointed out, timing isn’t strictly needed, since the agent could just learn to perform some random actions to take up time. That is not aesthetically appealing for a game agent, though. I also proposed another solution to improve the aesthetics: impose a penalty on random motor actions, except for one set of “motor” commands that merely changes the agent’s perspective without any outwardly visible action. I think an actual timing mechanism would be preferable to either of these workarounds.

Another area timing will be important for a game agent is for determining how fast an enemy is approaching or a platform is moving, in order to time jumps for example. The sequence of sensory inputs (assuming no repeating sequences) would be the same for a fast moving object compared to a slow moving one, unless a timing mechanism were added. (also assuming velocity isn’t captured in whatever encoder is being used… in other words, there is more than one way to skin a cat)

Gotcha. This is just my opinion, based on the methodologies of mainstream reinforcement learning and robotics, but I think the right thing to do is maintain a consistent update rate and just show the agent every frame. It’s not like our brain shuts off if nothing changes. And of course if you want the agent to be able to wait, then you make one of your actions a no-op, and add a slight penalty to all the others if you want to discourage fidgeting. This is mainstream and works fine. In my opinion it’s also the most biologically plausible solution. Having a no-op that actually shifts the agent’s attention around sounds fine too.
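The no-op-plus-penalty idea above amounts to simple reward shaping. A minimal sketch, with hypothetical action indices and penalty magnitude:

```python
def shaped_reward(env_reward, action, noop_action=0, fidget_penalty=0.01):
    """Reward shaping sketch: every action except the no-op pays a small
    penalty, so the agent learns to wait rather than fidget whenever
    waiting is at least as good as acting."""
    if action == noop_action:
        return env_reward
    return env_reward - fidget_penalty

print(shaped_reward(1.0, 0))  # no-op keeps the full reward
print(shaped_reward(1.0, 3))  # any other action pays the penalty
```

The penalty needs to be small relative to the environment’s real rewards, or the agent will learn to do nothing at all.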

Maybe a timing mechanism would help in addition to those things. Where I see timing being necessary is also in perception. In order for parts of a sequence representation to be invariant to rate, it seems like you’d need a timing signal in order to separate properties of speed and identity of the sequence and compensate for different possible rates. Then your representation can contain an invariant sequence component, and a timing component. We don’t perceive the same songs at different speeds the same way, but instead we can tease apart the speed versus identity aspects of the sequence. And on the behavior side, the same reasoning applies to performing actions at different speeds. Then you can tease apart the rewards that should be assigned to action identity, speed, or both.

But anyway, if I think of anything good on this I’ll take it to your RL thread.