How the Brain Creates a Timeline of the Past

The brain can’t directly encode the passage of time, but recent work hints at a workaround for putting timestamps on memories of events.


The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.
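To make the leaky-integrator encoding from the abstract concrete, here is a minimal numerical sketch (my own illustration, not code from the paper): a bank of integrators with different decay rates driven by a brief input pulse. The decay rates, pulse timing, and step size are illustrative assumptions.

```python
import numpy as np

# A bank of leaky integrators, each with its own decay rate s.
# Each unit integrates dF/dt = -s*F + f(t), so at time t it holds
# F(s, t) = integral of f(t') * exp(-s*(t - t')) dt' -- the (real)
# Laplace transform of the input history up to the present.
# Decay rates here are illustrative choices, not values from the paper.
s_rates = np.geomspace(0.1, 10.0, 8)   # one decay rate per unit
dt = 0.01
T = 10.0
steps = int(T / dt)

F = np.zeros_like(s_rates)
for step in range(steps):
    t = step * dt
    f_t = 1.0 if t < 0.5 else 0.0      # brief input pulse at the start
    F += dt * (-s_rates * F + f_t)     # Euler step of the leaky integrators

# After the pulse, each unit decays as exp(-s * elapsed_time):
# fast-decaying units forget the pulse quickly, slow ones retain it,
# so the population jointly encodes how long ago the event happened.
print(F)
```

Reading across the bank after the pulse gives a log-compressed record of elapsed time: the slow units still remember the event while the fast units have already forgotten it.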


The encoding of time and its binding to events are crucial for episodic memory, but how these processes are carried out in hippocampal–entorhinal circuits is unclear. Here we show in freely foraging rats that temporal information is robustly encoded across time scales from seconds to hours within the overall population state of the lateral entorhinal cortex. Similarly pronounced encoding of time was not present in the medial entorhinal cortex or in hippocampal areas CA3–CA1. When animals’ experiences were constrained by behavioural tasks to become similar across repeated trials, the encoding of temporal flow across trials was reduced, whereas the encoding of time relative to the start of trials was improved. The findings suggest that populations of lateral entorhinal cortex neurons represent time inherently through the encoding of experience. This representation of episodic time may be integrated with spatial inputs from the medial entorhinal cortex in the hippocampus, allowing the hippocampus to store a unified representation of what, where and when.


If I am understanding these papers correctly, they discuss the coding of time as a Laplace transform and show the biological underpinnings that make it happen.

Mapping between spaces is at the core of connectionist computations, and this is the key to seeing how time fits into that framework.

The rest of the brain works hard to support what/where extraction from the sensory stream. This concept shows that the MTL/EC codes this as what-where/when on the way from cortical to hippocampal (HC) coding.

It explains a few things very elegantly.


Hello Mark, thanks for sharing this. Quite interesting.

I haven’t read the paper, but just from the abstract alone I need some clarity, if you don’t mind.

This Laplace transform and its inverse they speak of: is it just researchers representing biological realities in a mathematical format, or can we argue that the brain is actually doing the Laplace transform itself?

In essence, does the brain use this math to work its wonders, or is math just an obvious way for us to model the brain’s wonders?

Yes, it’s this one.


Haha, I got lost in my own question, I suppose :see_no_evil: Which is it, though? Please elaborate.

Thanks Mark for the info. I didn’t know about the “inverse Laplace transform” time coding theory. I quickly scanned the first paper (Howard 2014) and it looks intriguing.

@MabuManaileng: yes, we are speaking of the mathematical Laplace transform that the brain is approximating with neural networks (according to Howard’s theory).
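To illustrate what "approximating the inverse Laplace transform" buys you, here is a small sketch of the decoding step. Howard and colleagues recover the timeline with an operator based on Post's inversion formula; for a single past event at time t0, the transform is F(s) = exp(-s·t0) and its derivatives are analytic, so the reconstruction can be written in closed form. The values of k and t0 below are my own illustrative choices, not parameters from the paper.

```python
import numpy as np
from math import factorial

# Post's inversion formula approximates the inverse Laplace transform:
#   f~(tau) = ((-1)^k / k!) * s^(k+1) * d^k F / d s^k,  with s = k/tau.
# For a single past event at time t0, F(s) = exp(-s*t0), whose k-th
# derivative is (-t0)^k * exp(-s*t0), so the reconstruction is analytic.
# k and t0 are illustrative assumptions; larger k gives sharper tuning.
k = 8
t0 = 3.0                              # event happened 3 s in the past
taus = np.linspace(0.5, 10.0, 200)    # preferred delays of the units

s = k / taus
f_tilde = (s**(k + 1) * t0**k / factorial(k)) * np.exp(-s * t0)

# The reconstruction peaks near t0 but is blurred, and the blur widens
# with delay -- the scale-invariant smearing that Howard's model predicts
# for hippocampal time cells.
peak_tau = taus[np.argmax(f_tilde)]
print(peak_tau)
```

Each value of tau behaves like one "time cell" with a preferred delay, so plotting f_tilde against taus gives the kind of sequentially activated, progressively broader tuning curves reported for hippocampal time cells.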

Looking at Howard’s more recent papers, I came across this one that caught my attention: Predicting the Future with Multi-scale Successor Representations (Momennejad & Howard, 2018):

In what follows we show that a neurally plausible linear operation, namely the inverse of the Laplace transform, can be used to compute the derivative of multi-scale SR.
We have shown that a multi-scale SR ensemble is equivalent to the real Laplace transform of a given state’s timeline of successor states. The inverse of this Laplace transform computes the derivative of the SR ensemble, recovering which future states lie within given temporal horizons of a given state (e.g., the present state, or the goal state).

In short, mixing ideas from the Laplace transform & the Successor Representation (SR) helps overcome limitations of the SR when modeling place and grid cells. This combination is called a multi-scale SR ensemble, and I really like the idea!
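To get a feel for the multi-scale part, here is a toy sketch (mine, with made-up discount rates, not an implementation from the paper) on a 5-state deterministic ring. Each discount gamma yields one SR matrix M = (I − gamma·T)⁻¹, i.e., one Laplace-like sample of the timeline of successor states; the ensemble over gammas is what the authors call a multi-scale SR.

```python
import numpy as np

# Multi-scale successor representation (SR) on a toy 5-state ring.
# For each discount gamma, M_gamma = (I - gamma*T)^{-1} gives the
# expected discounted future occupancy of every state from every state:
# one exponentially discounted "sample" of the successor timeline per
# gamma. The discount rates are illustrative, not from the paper.
n = 5
T = np.roll(np.eye(n), 1, axis=1)      # deterministic ring: i -> i+1
gammas = [0.3, 0.6, 0.9]               # small gamma = short horizon

Ms = {g: np.linalg.inv(np.eye(n) - g * T) for g in gammas}

# From state 0, short-horizon SRs concentrate mass on near successors,
# while long-horizon SRs spread it over distant ones.
for g in gammas:
    print(g, np.round(Ms[g][0], 3))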

Some more info on SR in this post: