Some time ago, if I remember correctly, we discussed how to use asynchronous data with NuPIC (e.g., spiking data from neuromorphic sensors does not arrive at constant time intervals: sometimes there are many dense spikes, sometimes very few or none for a long time window), but no solution was proposed.
My current idea is to use the time distance between two spikes as the input, so that it can be treated as a continuous signal, which is then converted into a discrete time series with a given small sampling period.
What do you think? Any new ideas?
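For concreteness, here is a minimal sketch of the idea: resample an irregular spike train into a regular time series whose value at each tick is the most recently completed inter-spike interval. The function and parameter names are my own, and this is not NuPIC API code, just an illustration of the proposed preprocessing step:

```python
from bisect import bisect_right

def isi_signal(spike_times, dt):
    """Convert irregular spike timestamps into a regularly sampled
    series whose value at each tick is the inter-spike interval (ISI)
    that most recently completed. Hypothetical sketch."""
    # ISI i is the gap that ended at spike i + 1
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    samples = []
    t = spike_times[1]                         # first tick at which an ISI exists
    while t < spike_times[-1]:
        j = bisect_right(spike_times, t) - 1   # index of the last spike <= t
        samples.append((t, isis[j - 1]))       # ISI that ended at spike j
        t += dt
    return samples
```

The resulting `(time, isi)` pairs form a fixed-rate stream that could then be fed through a scalar encoder like any other continuous input.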
I thought the general consensus was to treat "no data" as a separate encoding, but still an encoding just like the other values. If possible, it should be semantically closer to the lowest(?) values.
The question that should be answered with your idea:
Does the distance between two spikes have any semantic relevance to how the other inputs are encoded?
Or, in other words,
Does an increase or decrease of that distance correlate with the overlap of the other encoded values after encoding?
It may still work if the answer is no. It would certainly produce better results if the answer is yes.
Edit: I realized that I might have understood it wrong. Are you proposing to encode your input values merged with their respective times in between? Or is the respective time in between used as the actual input, which is what I understood on first read?
@sunguralikaan Yes, the distance between two spikes is relevant, because a higher number of spikes within a time window means an object is more active. With my idea, we can encode and decode that distance easily.
I moved this from #nupic into #htm-theory because although it has NuPIC in the title it’s really about exploring HTM theory.
@rhyolight: Many thanks.
Does anyone have an idea for an “asynchronous” NuPIC instead of the current “synchronous” one?
Here, “asynchronous” means that NuPIC can learn even from data that does not arrive at a constant frame rate.
I talked to @jhawkins and @subutai about this recently, and they used different terminology for this issue than synchronous vs asynchronous (but I can’t remember the terms). It had something to do with timing. Jeff, maybe you can elaborate? We’ve discussed this issue before, and this is a topic I’ve grown more interested in recently.
I wanted to do a TM demo for HTM School that was running all the time and receiving input from the keyboard. Ideally, I would be able to press a few keys on the keyboard, which would change the SDR input that NuPIC was getting depending on the key presses. If there were no key presses, the input would be empty or random noise. I was not going to encode time values at all, just rely on “real” time passing as ticks in the HTM cycle. I was hoping that if my human input was clean enough, I could tap out short, simple melodies by pressing sequences of keys over and over, and that it would learn to predict what key would come next.
Jeff and Subutai said this wouldn’t work because there’s no “exact timing” in HTM (or something like that). I know there are neuroscience terms for this that I’m missing, but it is an important subject, so I’m trying to bridge the gap here. Hoping Jeff or Subutai can help fill in the details.
Maybe the terms you’re looking for are “relative” versus “absolute” time? Where relative time would express the delay between events and absolute time would be the actual o’clock time open event?
I became interested in this when I first saw the music demo at one of the Hackathons, and I was wondering how the HTM replayed the delay between the notes.
No, that wasn’t it.
They removed all the rests.
Wouldn’t you be able to get something similar by encoding the “duration” of each event along with the data-point info?
Yes, but that is really just a hack to encode time into the representation like we do with date encoding. The “exact timing” issue isn’t solved by that.
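The duration-alongside-value approach mentioned above could be sketched as concatenating two scalar encodings, analogous to how the date encoder appends time fields to an input encoding. This is a minimal toy encoder of my own, not NuPIC's actual `ScalarEncoder` API:

```python
def scalar_encode(value, v_min, v_max, n_bits=40, w=7):
    """Toy scalar encoder: w contiguous active bits whose position
    tracks the value. Illustration only, not NuPIC's implementation."""
    span = n_bits - w
    pos = round((value - v_min) / (v_max - v_min) * span)
    return [1 if pos <= i < pos + w else 0 for i in range(n_bits)]

def encode_event(value, duration_ms):
    # Concatenate value bits and duration bits, so each event carries
    # its own timing semantics inside the input SDR.
    return (scalar_encode(value, 0, 100) +
            scalar_encode(duration_ms, 0, 1000))
```

As noted above, this folds duration into the representation as just another field; it does not give the sequence memory a genuine notion of elapsed time.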
The brain must encode the relative time between events and it must have a way of globally adjusting rates too. In music, we store the duration of each note, but we can also adjust the rate globally. We can recognize a melody played at different tempos and we can play back a melody at different tempos. The same capabilities are needed for almost all motor sequences. For example, signing your name is very much like singing a song, it is a series of motor movements where each element in the sequence has a duration, and you can speed up and slow down the entire motor sequence.
When we developed the HTM sequence memory these timing issues were always on my mind. I felt that if our sequence memory couldn’t solve these timing issues then it wasn’t correct. I have a hypothesis of how the cortex encodes timing and how it is possible to adjust the rate. The HTM sequence memory can accommodate this hypothesized mechanism. We did not implement it because it didn’t seem important enough at the time and we didn’t need it for the applications we were working on. In short, relative timing is an important component of all time-based inference and motor output, however, adding the timing component was secondary to getting the basic sequence memory working.
In the past I have used the term “absolute timing” to refer to the relative timing of elements in sequences. That might not be the best term.
In the past I have described the mechanism I believe implements timing in the cortex. Briefly, there is a projection from L5 (small pyramidals) to the matrix cells in the thalamus, which project back to L1. This pathway is broad, meaning that the matrix cells receiving input from auditory regions will project broadly back to all the auditory regions. There are similar broad circuits for vision, touch and motor. What happens is that for each new event (“note”), a cascade-type clock sends timing info to L1. The sequence memory cells use this signal to learn how much delay occurred since the last “note”. If the cortex is playing back a sequence, a sequence element is immediately predicted by the preceding element but doesn’t become active until it also recognizes the time delay on its apical dendrites. The central position of the matrix cells allows the cascade-like clock to be sped up or slowed down. In some recent papers I have seen evidence supporting this hypothesis.
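The activation rule described here can be sketched in a few lines: an element fires only when it was predicted by the preceding element and the elapsed delay, scaled by a global tempo factor, matches the learned delay. Every name, and the matching tolerance, is my own assumption, not an implemented part of HTM:

```python
def should_activate(predicted, learned_delay, elapsed, tempo=1.0, tol=0.2):
    """Hypothetical sketch of the timed-transition rule: a predicted
    element becomes active only when the elapsed delay matches the
    learned delay, after global tempo scaling. Illustration only."""
    if not predicted:
        return False
    expected = learned_delay / tempo   # speeding up the tempo shortens delays
    return abs(elapsed - expected) <= tol * expected
```

For example, an element learned with a 0.5 s delay would, at double tempo, fire when roughly 0.25 s has elapsed: `should_activate(True, 0.5, 0.25, tempo=2.0)` is true. The single `tempo` factor plays the role of the globally adjustable cascade clock.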
Could you link those papers?
One was a paper that cited evidence that the matrix cells in rat somatosensory thalamus are involved in timing of movements and that L5 cells send a signal to these cells that mark the beginning of new movements…
Another paper showed that cells in the hippocampus exhibit a timing mechanism. The cells marking time exhibit the exact behavior I predict we will see in the matrix cells in the thalamus (a cascade of active cells). Up to this paper I had no evidence that such a mechanism existed anywhere in the brain. Knowing it exists in the hippocampus showed it was feasible and could be preserved and replicated elsewhere.
Of course you will want references…but I have an almost pathological inability to remember authors, titles, and journals. Maybe someone else at Numenta will remember these papers and cite them.
Wow, I didn’t see this… Sorry, I dictated this. It was supposed to be, “…the actual clock time of an event.”
@jhawkins It’s awesome that you’ve identified the biological entities responsible for recreation of event timing! I can’t imagine that we’ll be able to do anything near AGI-level language intelligence without it.
Let's see if your clues refresh someone's memory.
I agree that real brains, fluid robots, and probably anything we call AI in the future need something like this. My proposal is just a hypothesis, but I give it a pretty good chance of being correct. I have asked a couple of experts, and as far as I can tell no one has measured the activity of the matrix cells, which would help determine if the hypothesis is correct.
There’s just so much to do, but I can say that I’m grateful to live in these beginning times, and for the contributions you and the other talented staff of Numenta are making, which give us an opportunity to be a part of it all.
Just promise you’ll get Cortical Column/what-where world modeling; Hierarchical and Apical contributions to depolarization; and relative timing done before you hand the mantle off to someone else? Lol! I don’t ask for much…
Hah, that’s funny! In all seriousness we are aiming to get a fairly complete cortical theory. It definitely feels like progress is speeding up as more pieces come into place. We are working on the new sensorimotor theory now and I think that is a big piece.
Agreed! Wow. How did I forget “Sensorimotor”!? That’s huge! Oh well, no rest for the weary