Time Perception and Distortion: The Neuroscience of Subjective Time

I am proposing to start this topic because I have not found this subject covered in any existing thread. The following talk, delivered by David Eagleman from the Dept. of Neuroscience & Psychiatry at Baylor College of Medicine, raises a lot of questions about how our cortical hierarchies operate. In this video you will even find evidence of a calibrating mechanism in our cortex that is capable of synchronizing temporally distinct events, and that this process can be manipulated to distort time perception to the point that the causal order of events is completely concealed from the mind. It is suggested that this may be the cause of some pathologies involving hallucinations, such as hearing voices attributed to other sources, and of schizophrenia in general.

Could it be that the temporal latencies between HTM regions are alterable and self-regulating? This interesting temporal analysis also raises the question: at which layer or location in our cortical hierarchy do we perceive time and temporal order? I imagine it must be the same hierarchical layer (or set of regions) in which we perceive our consciousness and awareness of our context, which is clearly an illusion taking place in our present (NOW). This illusion is an approximation of what is really taking place in our surrounding environment and has access to our stored memories. My hypothesis is that time perception (which is needed to establish causality) is generated (or emerges) within the same regions in which our illusion of consciousness (including contextual awareness) also emerges. If this holds true, it would also support Eagleman's theory that many mental pathologies can be caused by flaws in this temporal perception process. Does anyone at Numenta or from the Redwood Institute have any evidence or research to support or dispute this hypothesis? Is it possible that some HTM regions use inference on inputs from lower levels to synchronize inputs that arrive with temporal differences, before passing them upward in the hierarchy to higher levels with more abstract representations?
Thanks for your thoughts in advance!
Joe

2 Likes

How HTM plays with time seems to be a key missing aspect indeed.
I haven't read “On Intelligence” yet, but I have studied pretty much every document from Numenta, and this thread is the closest I have found to the subject, so I would like to express my own questions here as well.

The fact that time is subjective is confirmed by our everyday experience and backed by research such as that shown in the video above. In situations with a lot of novel sensory input, new neural patterns are activated, more memory is created, and the general feeling is that we experienced more time. So to some extent we can consider the input data itself to be the clock of the brain: in the absence of other references to hold onto, this is all the information the brain has for knowing time. If this were entirely the case, then sensory input could be accurately modeled by Temporal Memory in nupic, where inputs are pushed one at a time by calling a compute function. The gaps between the inputs can't be known and really don't matter, since time would not exist outside of the events.
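To make that concrete, here is a minimal sketch of what I mean (assuming the nupic package; the column encoding is an arbitrary toy). The Temporal Memory only ever sees a succession of events, never the wall-clock gaps between them:

```python
# Minimal sketch: feeding events to nupic's TemporalMemory one at a time.
# There is no notion of wall-clock time here, only the order of events.
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(2048,))

# Each "event" is just a set of active column indices; the gaps between
# events are invisible to the model.
events = [
    sorted(range(0, 40)),     # event A
    sorted(range(40, 80)),    # event B
    sorted(range(0, 40)),     # event A again
]

for active_columns in events:
    tm.compute(active_columns, learn=True)
    print("predictive cells: %d" % len(tm.getPredictiveCells()))
```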

However, we know that our brains also have other time references, and each works in a way specific to the time scale it senses. We can distinguish between three time scales:

  • Micro-time: 0 to a few milliseconds
  • Mini-time: a few milliseconds to a few seconds
  • Macro-time: a few seconds to days/years

On the micro scale we sense time by means of organs that perform time-to-frequency-domain transforms. This includes sensing the pitch of a sound, which we do not do by directly measuring the time interval between two wave peaks. Instead, we have an array of cells in the ear, each tuned to a narrow frequency range, so when a wave of a certain frequency is received, only the nerve wired to that particular cell is excited.
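As a toy illustration of this principle (not a cochlea model; it just uses generic band-pass filters from scipy, and all numbers are arbitrary), each narrowly tuned "cell" responds only when its own frequency is present, with no time-interval measurement anywhere:

```python
# Frequency sensing without measuring peak-to-peak time: a bank of
# narrowly tuned band-pass filters, loosely analogous to hair cells,
# each responding only to its own frequency range.
import numpy as np
from scipy.signal import butter, lfilter

fs = 8000.0                                  # sample rate (Hz)
t = np.arange(0, 0.1, 1.0 / fs)
signal = np.sin(2 * np.pi * 440.0 * t)       # a 440 Hz tone

for center in [220.0, 440.0, 880.0]:         # three "hair cells"
    low, high = center * 0.95, center * 1.05
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    energy = np.sum(lfilter(b, a, signal) ** 2)
    print("cell tuned to %4d Hz -> energy %.1f" % (center, energy))
# Only the 440 Hz "cell" fires strongly; no timing measurement needed.
```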

On the macro scale we have the circadian rhythm and other natural cycles that tell us how much time has elapsed. Maybe we can count artificial clocks here too, although reading a clock invokes a high-order cognitive process… But I see this as the least interesting part, because these rhythms are encoded as external events happening outside our bodies, just like everything else.

What is of particular interest, and actually the subject of my question, is the mini time scale. I think there is some research that points to the corpus striatum and the hippocampus as parts of the brain responsible for measuring time, but I did not find anything very detailed about the exact mechanisms, especially how they are wired to the cortex and, as such, how HTM theory would need to account for them. (I will come back soon with an example detailing this question)

I do realize there is a date encoder whose output can be fed into the TM along with other encoders, but that looks like an implementation hack and not like a satisfying HTM solution.
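For reference, this is roughly what that hack looks like (a sketch assuming the nupic package; the parameter values are just the usual example ones). The encoder's semantics are cyclical and coarse, nowhere near the millisecond scale discussed here:

```python
# The "implementation hack" mentioned above: nupic's DateEncoder turns a
# timestamp into an SDR with cyclical semantics (time of day, weekend).
# Its resolution is coarse; nothing here operates at the millisecond scale.
import datetime
from nupic.encoders.date import DateEncoder

encoder = DateEncoder(timeOfDay=(21, 1), weekend=21)
sdr = encoder.encode(datetime.datetime(2017, 3, 14, 9, 26))
print(sdr.nonzero()[0])   # indices of the active bits
```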

Also, the idea that time and the ego are illusions was figured out hundreds of years ago in certain regions of the world, but it's a mystery whether or not this is relevant when applying HTM as something external to ourselves…

So how does time perception on the scale of milliseconds to seconds work?
The precise perception of time is essential for proper enjoyment of music, so let’s consider the brain listening to music as an example.

Music can differ greatly from culture to culture, but essentially it is a play on time intervals. These intervals hold the void / silence between the events which excite the ear / brain.

So when you listen, and only listen, to a beat sequence, you can tell the distance between the beats, and you can tell if a certain beat was skipped, because you were expecting it to occur at some exact point in time.
For example, you can feel the difference between these two sequences of 3 beats:
^__^_^ has a pause of length 2 and a pause of length 1
^_^_^ has two pauses of length 1
The beats are more than 50 ms apart, so the frequency content of their combination is well below 20 Hz; therefore, the perception of such intervals cannot be explained by frequency-matching hair-cell oscillators in the ear.

Is there some part of the brain that ticks according to physical time or is there a more subtle frequency transform mechanism at work? In either case, how is that connected to the cortex?

1 Like

It seems to me that for this specific example, simply having time encoded in the sequence itself should be enough to perceive the difference between the two example sequences. Of course, that doesn't address your main question about where the apparently emergent perception of time comes from, but I thought I would point it out, since explaining this specific example doesn't necessarily require an external system for measuring time.

Practical implementations of HTM tend to use discrete time (t-1, t, t+1, etc.), but in reality (though I know little about neuroscience, so I could be wrong), I would guess sensory input is coming in continuously at a relatively constant frequency. If so, then the above sequences, as input to the lowest regions in the hierarchy, could be plotted as numerous evenly spaced points along a time axis. These long sequences of inputs would contain a higher-order pattern depicting the beats and the spaces between them. Therefore, when a beat is missed, this would be sensed as an anomaly in the expected pattern.
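A toy sketch of this idea (plain Python, not actual NuPIC; the two-symbol context model is just a stand-in for the temporal memory): sampled at a constant rate, silence becomes a symbol of its own, and a skipped beat surfaces as a prediction failure:

```python
# With input sampled at a constant rate, silence becomes a symbol of its
# own, and a skipped beat shows up as a prediction failure; no explicit
# clock is needed.

def train(seq, order=2):
    """Map each length-`order` context to the symbols seen after it."""
    model = {}
    for i in range(len(seq) - order):
        model.setdefault(tuple(seq[i:i + order]), set()).add(seq[i + order])
    return model

regular = list("^_^_^_^_^_^_")      # a steady beat, as it would be learned
model = train(regular)

skipped = list("^_^___^_")          # the third beat is missing
for i in range(2, len(skipped)):
    context = tuple(skipped[i - 2:i])
    if skipped[i] not in model.get(context, set()):
        # The missed beat at step 4 triggers this, and the unfamiliar
        # "__" contexts that follow it register as anomalies too.
        print("anomaly at step %d: expected %s, got %s"
              % (i, sorted(model.get(context, set())), skipped[i]))
```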

@Paul_Lamb:
you are right that the use of a time encoder may solve this problem. But in practice we have additional problems, as follows:

  1. We do not have any date or time encoder for milliseconds or microseconds.
  2. How do we handle missing data? E.g., for time series data at t+T, t+2T, t+3T, t+4T, t+5T, the sample at t+4T is missing.
    Normally we can interpolate or extrapolate to handle this case, but what happens when the missing data is caused by a sensor defect? In that case, no extrapolation is correct.
  3. What happens if I use two input streams with different frame rates for the CLA? In car data, for example, the vehicle sensors (velocity, acceleration, etc.) provide data on the CAN bus at different frame rates (10 ms or 20 ms).

Do you or Numenta guys have any idea for solving those problems?

Just to clarify, I didn’t mean encoding time explicitly into the content of the input, but rather that a property of temporal memory in a continuous time system (versus a discrete time system) is that time is implicitly encoded in the sequence itself (if we can assume sensory input is coming in continuously at a relatively constant rate).

@thanh-binh.to, thought I’d go a little off topic to try and give my thoughts on one of your specific problems (I am not an expert though). The other two I don’t have an answer for.

You could potentially use a scalar encoder for this, but before that would work, I think you would first need to determine the frame of reference: milliseconds/microseconds from what?

I don’t think you could just use “from the epoch”, because then there would be no pattern to the time element of your inputs (each time would be unique). “From the previous input” might work, but you probably still won’t be able to learn a pattern if the inputs are irregular (which they probably are, otherwise there wouldn’t be any need to explicitly encode time in the first place, unless your goal was to identify when an input was skipped).

If there is some cyclical element in the system, you might be able to reference that in your encoding. For example, the date time encoders capture scalars like day of month (months being the cyclical element), time of day (days being the cyclical element), etc. The answer is probably really going to depend on the class of problem, and what cyclical elements are relevant to encoding semantics.
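For example, here is a rough sketch of the "from the previous input" option using nupic's ScalarEncoder (assuming the nupic package; the delta range and SDR sizes are arbitrary). Similar gaps produce overlapping SDRs, which is what gives the encoding its semantics:

```python
# One possible encoding of "time since previous input": a ScalarEncoder
# over the inter-arrival delta, so similar gaps produce overlapping SDRs.
from nupic.encoders.scalar import ScalarEncoder

# Encode deltas between 0 and 1000 ms with 21 active bits out of 400.
delta_encoder = ScalarEncoder(w=21, minval=0, maxval=1000, n=400)

previous_ts = 0.0
for ts in [100.0, 210.0, 300.0, 900.0]:     # event timestamps in ms
    delta = ts - previous_ts
    sdr = delta_encoder.encode(delta)
    print("delta %5.0f ms -> %d active bits" % (delta, sdr.sum()))
    previous_ts = ts
```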

I would like to toss some opinions into the ring. I’ll do it very succinctly…

To me, it's evident that being able to detect and reproduce the timing between input events is essential. Otherwise we wouldn't be able to dance, keep a beat, play rhythmic musical accompaniments, etc. However, I see no intuitive reason why absolute time should be part of our emulations. So the time between events is important, not the absolute time or any ability to internally track absolute time. In my opinion, the frame of reference would be the amount of time (as an intuitive feeling) since the last input.

Another thing to keep in mind is that, experientially, I don't believe human beings perceive anything with a finer granularity than about 10 beats a second. So millisecond and nanosecond granularity would be a waste (unless we're building something beyond human capacity).

One more thing: if the software or hardware system can only process 100 inputs a second (I don't remember NuPIC's current capabilities), then millisecond/nanosecond granularity is pointless… :wink:

1 Like

Yes, discrete time is functionally equivalent if there is an equal amount of time between recorded inputs. There are use cases where we don't care about the time between inputs; this doesn't match biology, but it can be useful for various machine learning problems.

I was assuming the question was about analyzing recorded data or sub-samples of data (perhaps looking for anomalies), rather than live feeds, but that is a very good point.

1 Like

Indeed, if that were the case, time would already be encoded as a discrete count, but we know the brain doesn't work that way. "Continuous input at a constant frequency" sounds contradictory; perhaps you can elaborate on what you meant by that?

The brain does not sample input at a fixed frequency like a digital processor, but it does not work like a continuous-time analog processor either. As far as I remember reading (I have no formal training in neuroscience), input to the brain is event-driven, so excitations come at random intervals (lower-bounded by the propagation times of electrochemical impulses). When there is silence, there is no input; when there is a sound, certain nerve excitations represent the input. The same applies to vision, except that, compared to hearing, a lot more pre-processing is done by the retina and other neural circuits before signals reach the cortex. I felt that a vision example would bring unnecessary complexity to the time problem.

OK, if you have no internal ability to track absolute time, then how are you able to distinguish between a 2x-interval pause and a 1x-interval pause between beats? Remember that there is NOTHING else happening in between the beats. There is no excitation of the nerves that feed the audio input to the cortex; there is void, silence.

Of course, beats coming at a rate faster than 20 Hz will be perceived as a single audio tone (plus higher harmonics) by the hair cells in the ear. This is why I posed the problem for time scales slower than that, from a biological perspective.

1 Like

I think we need to define what we mean by "absolute" time. What I'm talking about is something more akin to a decay interval (relative time), not a timestamp. Of course, we would need some timing mechanism to measure the decay between input events, but not an actual timestamp. Does that make sense?
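A toy illustration of what I mean (the time constant is arbitrary): a trace that decays after each event, so its residual value when the next event arrives encodes the gap, with no timestamp anywhere:

```python
# A unit trace is reset at each event and decays exponentially, so its
# remaining value when the next event arrives encodes the elapsed gap.
import math

TAU = 0.5   # decay time constant (arbitrary units)

def trace_after(gap):
    """Value of a unit trace after `gap` time units of decay."""
    return math.exp(-gap / TAU)

for gap in [0.25, 0.5, 1.0, 2.0]:
    print("gap %.2f -> trace %.3f" % (gap, trace_after(gap)))
# Shorter gaps leave a higher residual trace; a downstream learner only
# needs this value, not absolute time.
```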

1 Like

> Practical implementations of HTM tend to use discrete time (t-1, t, t+1, etc.), but in reality (though I know little about neuroscience, so I could be wrong), I would guess sensory input is coming in continuously at a relatively constant frequency.
I'm not sure whether the inputs are synchronized in the sensory organs or the spinal cord, but the thalamus and cortex have a sort of oscillation that roughly corresponds to a time step. More cells are active during one part of the oscillation than the other. A graph of activity likelihood vs. time looks like a sine wave, but messier.

These oscillations aren't constant, though, and different states (deep sleep, REM, wakefulness, focused attention, etc.) affect the frequency.
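If it helps, here is a purely illustrative sketch of that picture (none of the numbers come from real data): a noisy sine as the activity likelihood, with the oscillation frequency depending on state:

```python
# Rough picture of the description above: cell-activity likelihood as a
# noisy sine wave whose frequency depends on brain state.
import numpy as np

np.random.seed(0)
t = np.linspace(0.0, 1.0, 1000)               # one second of "time"

for state, freq in [("deep sleep", 1.0), ("wakefulness", 10.0)]:
    likelihood = 0.5 + 0.4 * np.sin(2 * np.pi * freq * t)
    likelihood += np.random.normal(0.0, 0.05, t.shape)   # the "messier" part
    spikes = np.random.rand(*t.shape) < np.clip(likelihood, 0, 1) * 0.05
    print("%-11s %4.0f Hz oscillation -> %3d spikes sampled"
          % (state, freq, spikes.sum()))
```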

2 Likes

I defined it as physical time, whose flow is determined by physical processes governed by the laws of physics, such as the rate of oscillation of a quartz crystal. There is no "absolute" time, as shown by special and general relativity, but we only care about "local" absolute time: the time on which two brains would agree while listening to the same beat sequence. And my question was exactly about the way the cortex senses these local time differences, whether via an actual timestamp or not.

This seems to be a likely mechanism, as if every cortical cell were clocked from a steady source! I can see how this is different from just sampling the input, because these cells would get excited (even if only slightly) during periods of no input as well, hence providing the ability to sense the duration of silent pauses between beats. By the way, who is the OP? Did @Paul_Lamb edit his post afterwards? Maybe you can point to some neuroscience resources about this.

1 Like

I have to say that I have no clue how the biology works (my background is in programming and electronics; no background in neuroscience). I am aware just from my own experience that the nervous system is inherently noisy (for example, let your arm go to sleep, and you will experience "tingling" until the nervous system has a chance to tune out the noise; presumably that noise is there all the time and we are just not consciously aware of it). I was theorizing that there is no such thing as a period of "nothing": there is constant cellular activity, and I was suggesting that this activity comes in at a constant frequency. By "constant frequency" I mean that I am assuming there are physical cyclical elements in the system: cells have some "cooldown" period, synapses can form at some specific rate, etc. In a system of continuous input, these cyclical elements would have the effect of establishing resonance frequencies. Again, I do not have any knowledge of biology, so if that is not how it works, then I stand corrected :slight_smile:

I think what happened is @Casey accidentally included the first paragraph of his reply in the quote

By "nothing" I meant nothing external; I was thinking that there must be something internal going on. It could be the activity you're talking about (or, @Casey, it could be the same thing you describe), but there are still a lot of details that must be researched before we can make an artificial cortex understand time the way we do :slight_smile:

1 Like

Aha, I get your point. I was thinking that, from the cortex's perspective, noise coming from lower elements in the nervous system is probably not distinguishable from external stimuli. My thought is that if there is some recurring pattern to that noise (due to resonance), it would be learned as part of any sequence, and thus the sequence would have time implicitly encoded into it.

This theory could be tested by introducing resonance noise into our implementations and analyzing the temporal memory to see if time can be derived from what is learned.
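A sketch of how such a test might be set up (hypothetical code, not an existing NuPIC experiment; all sizes are arbitrary): overlay a fixed-period background pattern on the beat input so that silent steps are no longer empty:

```python
# Overlay a fixed-period background pattern ("resonance noise") on the
# beat input, so silent steps are no longer empty and gap lengths
# become learnable by a sequence memory.
import numpy as np

N_BITS = 64
PERIOD = 4                                   # period of the background noise

def background(step):
    """A small SDR that cycles with a fixed period."""
    sdr = np.zeros(N_BITS, dtype=int)
    phase = step % PERIOD
    sdr[phase * 4:phase * 4 + 4] = 1
    return sdr

def beat_sdr():
    sdr = np.zeros(N_BITS, dtype=int)
    sdr[32:40] = 1                           # dedicated "beat" bits
    return sdr

beats_at = {0, 3, 5}                          # the ^__^_^ pattern
for step in range(6):
    sdr = background(step)
    if step in beats_at:
        sdr |= beat_sdr()
    print(step, "".join(map(str, sdr)))
# Because the background differs at every phase, the input during a
# 2-step pause is distinguishable from a 1-step pause.
```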

2 Likes

To depict this discussion visually:

If we were to graph just the sounds coming in from the previous example of drum beats, we'd have something like:

[graph: the two beat patterns as spikes along a time axis, with gaps of different lengths]

Since there is no sensory input between the beats, these would both encode to an indistinguishable sequence, like this:

[graph: both patterns collapsed to the same sequence of consecutive beats]

However, if we introduce noise that has a more or less constant frequency (some pattern that can be learned), graphed something like:

[graph: a constant-frequency background signal underneath the beats]

Then that pattern would become encoded into the sequence, and thus the two different drumming patterns would be encoded differently from each other (with time implicitly encoded in the sequences):

[graph: the two beat patterns interleaved with distinct background states, yielding different sequences]

3 Likes

Resources for neural oscillation:



The relevant information is spread out, so probably use ctrl + f to search for Hz, oscillation, frequency, etc.

1 Like