I talked to @jhawkins and @subutai about this recently, and they used different terminology for this issue than synchronous vs asynchronous (but I can’t remember the terms). It had something to do with timing. Jeff, maybe you can elaborate? We’ve discussed this issue before, and this is a topic I’ve grown more interested in recently.
I wanted to do a TM demo for HTM School that was running all the time and receiving input from the keyboard. Ideally, each key press would change the SDR input that NuPIC received; if there were no key presses, the input would be empty or random noise. I was not going to encode time values at all, just rely on “real” time passing as ticks in the HTM cycle. I was hoping that if my human input was clean enough, I could tap out short, simple melodies by pressing sequences of keys over and over, and that the TM would learn to predict which key would come next.
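To make the idea concrete, here’s a minimal sketch of the input loop I had in mind. This is not real NuPIC code; the names `SDR_SIZE`, `ON_BITS`, `make_key_sdrs`, and `encode` are all hypothetical, and the model call at the end is just a placeholder comment:

```python
import random

SDR_SIZE = 256   # total bits in the SDR (illustrative)
ON_BITS = 8      # active bits per SDR (illustrative sparsity)

def make_key_sdrs(keys, seed=42):
    """Assign each key a fixed random sparse SDR (hypothetical encoder)."""
    rng = random.Random(seed)
    return {k: frozenset(rng.sample(range(SDR_SIZE), ON_BITS)) for k in keys}

KEY_SDRS = make_key_sdrs("asdfghjkl")

def encode(key, rng=random.Random()):
    """Return the SDR for a pressed key, or random noise on an idle tick."""
    if key in KEY_SDRS:
        return KEY_SDRS[key]
    return frozenset(rng.sample(range(SDR_SIZE), ON_BITS))  # no key: noise

# On every HTM "tick" (no time encoding, just wall-clock cycles) the idea
# was to feed the active bits to the temporal memory, something like:
#   tm.compute(sorted(encode(current_key)), learn=True)
```

The point is that ticks happen whether or not a key is pressed, so the sequence the TM sees depends entirely on how my key presses line up with the cycle rate, which is exactly where the timing question comes in.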
Jeff and Subutai said this wouldn’t work because there’s no “exact timing” in HTM (or something like that). I know there are neuroscience terms for this that I’m missing, but it is an important subject, so I’m trying to bridge the gap here. Hoping Jeff or Subutai can help fill it in.