How the Brain Creates a Timeline of the Past

Thanks Mark for the info. I didn’t know about the “inverse Laplace transform” time coding theory. I quickly scanned the first paper (Howard 2014) and it looks intriguing.

@MabuManaileng: yes, we are speaking of the mathematical Laplace transform that the brain is approximating with neural networks (according to Howard’s theory).
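To make that concrete, here is a toy sketch (my own illustration with made-up decay rates, not Howard's published model): a bank of leaky integrators, each with its own decay rate s, jointly approximates the real Laplace transform of the input history f(t), i.e. F(s, t) = ∫₀ᵗ e^(−s(t−τ)) f(τ) dτ, because each unit obeys dF/dt = −s·F + f(t).

```python
import numpy as np

def laplace_bank(signal, rates, dt=0.01):
    """Euler-integrate a bank of leaky integrators, one per decay rate.

    Each unit follows dF/dt = -s * F + f(t), so at the end of the run
    F[i] approximates the Laplace transform of the signal's history
    evaluated at s = rates[i].
    """
    F = np.zeros(len(rates))
    for f_t in signal:
        F += dt * (-rates * F + f_t)
    return F

rates = np.array([2.0, 4.0, 8.0])   # decay rates s (arbitrary example values)
t = np.arange(0.0, 5.0, 0.01)
signal = np.exp(-t)                 # example input f(t) = e^{-t}

F = laplace_bank(signal, rates)
# Faster-decaying units retain less of the past; for this input the
# exact transform at time T is (e^{-T} - e^{-s T}) / (s - 1).
```

Units with small s hold a long, coarse trace of the past and units with large s a short, sharp one; that graded bank is what the "inverse" operation later decodes back into a timeline.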

Looking at Howard’s more recent papers, I came across one that caught my attention: Predicting the Future with Multi-scale Successor Representations (Momennejad & Howard, 2018):
https://www.biorxiv.org/content/biorxiv/early/2018/10/22/449470.full.pdf

In what follows we show that a neurally plausible linear operation, namely the inverse of the Laplace transform, can be used to compute the derivative of multi-scale SR.
[…]
We have shown that a multi-scale SR ensemble is equivalent to the real Laplace transform of a given state's timeline of successor states. The inverse of this Laplace transform computes the derivative of the SR ensemble, recovering which future states lie within given temporal horizons of a given state (e.g., the present state, or the goal state).

In short, mixing ideas from the Laplace transform and the Successor Representation (SR) helps overcome limitations of the SR when modeling place and grid cells. This combination is called a multi-scale SR ensemble, and I really like the idea!
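A minimal sketch of that combination, under my own simplifications (a tiny deterministic world, and a crude difference between adjacent scales standing in for the paper's inverse-transform step): for a transition matrix T, the SR at discount γ is M_γ = (I − γT)⁻¹, and a set of M_γ at several γ values forms a discrete analogue of the Laplace transform of the timeline of future state occupancies.

```python
import numpy as np

def successor_rep(T, gamma):
    """SR for transition matrix T at discount gamma: (I - gamma T)^{-1}."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Toy 4-state ring world: deterministic moves 0 -> 1 -> 2 -> 3 -> 0.
T = np.roll(np.eye(4), 1, axis=1)

gammas = [0.3, 0.6, 0.9]                        # multiple temporal scales
ensemble = [successor_rep(T, g) for g in gammas]

# Crude stand-in for the inverse step: differencing adjacent scales.
# Subtracting the short-horizon SR from the long-horizon SR suppresses
# immediately-reached states and emphasizes states expected at
# intermediate temporal horizons from state 0.
band = ensemble[2][0] - ensemble[1][0]
```

On this ring, each row of M_γ is a geometric series over the cycle (M_γ[0, j] = γʲ / (1 − γ⁴)), and the difference `band` peaks at state 2 rather than at the immediate successor, which is the "temporal horizon band" intuition from the quote above in miniature.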

Some more info on SR in this post: