not someone’s partial joke on Reddit. The more reasonable question is how we figure out (search?) what matters and what doesn’t in sensory data.
How big does that pixel need to be in order to get noticed?
Movement is a great clue.
Synchronization between the agent’s actions and pixel motion is also very important.
If I wiggle some muscles in a certain rhythm, what parts of the sensory stream oscillate in sync with them?
That’s why I thought content-neutral cycle sensitivity could be very useful.
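A minimal sketch of what that cycle sensitivity could look like, assuming a grayscale frame stack and a recorded motor signal (all names, shapes, and the lock-in approach here are my own assumptions, not anything from the original idea): wiggle at a known frequency, then score every pixel by how strongly its brightness oscillates at that same frequency, regardless of what the pixel actually shows.

```python
import numpy as np

def sync_map(frames, motor_signal, fps, wiggle_hz):
    """Score each pixel by how strongly it oscillates at the wiggle rhythm.

    frames:       (T, H, W) array of grayscale sensory frames (assumed shape)
    motor_signal: (T,) array of the motor command, e.g. a joint angle
    fps:          frames per second of both streams
    wiggle_hz:    frequency at which the agent wiggles the muscle
    """
    T = frames.shape[0]
    t = np.arange(T) / fps

    # Reference oscillation at the wiggle frequency. This is the
    # "content-neutral" part: we only care about the rhythm, not what
    # the pixels depict.
    ref_cos = np.cos(2 * np.pi * wiggle_hz * t)
    ref_sin = np.sin(2 * np.pi * wiggle_hz * t)

    # Remove each pixel's mean so slow drift doesn't dominate.
    x = frames.reshape(T, -1)
    x = x - x.mean(axis=0)

    # Lock-in style detection: project every pixel's time series onto the
    # reference sine and cosine and take the magnitude. Pixels whose
    # brightness cycles at wiggle_hz (at any phase) score high.
    c = ref_cos @ x
    s = ref_sin @ x
    amplitude = np.sqrt(c**2 + s**2) / T

    # Sanity check: the motor signal itself should score high at its
    # own frequency.
    m = motor_signal - motor_signal.mean()
    motor_amp = np.hypot(ref_cos @ m, ref_sin @ m) / T

    return amplitude.reshape(frames.shape[1:]), motor_amp
```

Pixels with a high score are the parts of the sensory stream moving in sync with the agent's own actions; everything else is, for this purpose, background.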