If I saw a split-second flash image of an object I would recognize it - far too fast for saccades to scan it. However, to recognize an object by touch my skin would need to scan over the surface of the object, which requires movement, which happens in time. If I were to recognize an object by sound my ears would need to receive the vibrations over time. The only reason time is involved with these temporal senses is that a spatial ‘picture’ needs to be built up over time, a bit like a computer scanner: as it scans across the page in time it gradually builds a spatial picture. Likewise with touch, your skin scans the surface of the object, gradually forming a spatial ‘picture’. However, if time is not needed, the sense is spatial by default (like vision, smell and taste… and positional touch).
Given a one-second stream of data from all the senses (with visual saccades disabled), each would have a spatial input/‘picture’ (a set of simultaneous action potentials), similar to a single frame of an image. The picture from the touch of a stone might look like a series of bumps and dots. The picture from the sound of a stone hitting the floor might look like a sudden attack/decay waveform.
Forgive my naivety, but could an encoder be created to collapse temporal data into spatial data? If so, would it have practical use?
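To make the question concrete, here is a minimal sketch of one naive encoder of the kind I mean (assuming Python with NumPy/SciPy; the sample rate, the synthetic ‘stone hitting the floor’ signal, and the window length are all just illustrative choices, not anything established): it collapses a one-second temporal stream into a single 2-D spatial ‘picture’ by taking a spectrogram.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                      # assumed sample rate (samples per second)
t = np.arange(fs) / fs          # one second of time stamps

# Hypothetical stand-in for "a stone hitting the floor": a sharp attack
# followed by an exponential decay of noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal(fs) * np.exp(-8 * t)

# Collapse the temporal stream into a spatial picture: frequency on one
# axis, time-within-the-second on the other, intensity as the "pixel" value.
freqs, times, picture = spectrogram(signal, fs=fs, nperseg=256)

print(picture.shape)            # a single 2-D "frame" for the whole second
```

If something like this counts as an encoder, then it already has practical use: audio and speech systems routinely turn temporal sound into spectrogram ‘images’ so that spatial, image-style models can process them.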