Does anybody know if the source code for the song prediction project from the 2013 hackathon is still available somewhere? The link from the hackathon blog post leads to a 404 on GitHub.
Yeah, it’s gone unfortunately. I asked @snikolov about this a year or so ago and he could not find it.
That’s really sad. I also looked through my computer, and it wasn’t archived there either. Surely someone made a backup?
Hoping that “once it’s online, it’s there forever,” I searched several search engines with different keywords, but only found the following related links (not the source itself):
- Anomaly Detection with Cortical Learning Algorithm for Smart Homes (references it)
- Song Identification Using the Numenta Platform for Intelligent Computing
- The Hierarchical Sequential Memory for Music: A Cognitively-Inspired Model for Music Learning and Composition
- Music critic hack from 2014 Spring Hackathon
Thought these links might be helpful if someone is planning a reimplementation.
As far as I remember, they encoded the notes of the song as MIDI note numbers (e.g., 52, 64, 51, 66), fed them into a scalar encoder (using the song’s lowest and highest notes as the limits), and repeated the sequence with a reset at the beginning of each pass. They ran a CLAClassifier on the predictions and fed its output back in as the next input. This isn’t hard to do.
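To make the encoding step concrete, here is a minimal sketch in plain Python (my reconstruction, not the original hackathon code; the bit widths are assumptions): a simple scalar encoder whose range is clamped to the song’s lowest and highest notes, applied to MIDI note numbers.

```python
# Hypothetical sketch of the scheme described above -- not the original code.
# Encodes a MIDI note number as a binary SDR with w contiguous active bits.

def scalar_encode(value, min_val, max_val, n_bits=121, w=21):
    """Return a list of n_bits 0/1 values with w contiguous 1s whose
    position reflects where value falls between min_val and max_val."""
    assert min_val <= value <= max_val
    span = max_val - min_val
    # index of the first active bit, scaled across the available positions
    start = 0 if span == 0 else round((value - min_val) / span * (n_bits - w))
    bits = [0] * n_bits
    for i in range(start, start + w):
        bits[i] = 1
    return bits

song = [52, 64, 51, 66]           # MIDI note numbers from the post
lo, hi = min(song), max(song)     # limits taken from the song itself
sdrs = [scalar_encode(note, lo, hi) for note in song]
```

Nearby notes get overlapping SDRs, which is what lets the temporal memory generalize across similar pitches; the actual NuPIC `ScalarEncoder` works the same way in principle.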
We should “formalize” that demo (make a Python and a Java version of it). It’s very impressive for something so simple, and it really drives home HTM’s sequence-learning ability.
Question… do MIDI codes encode the note’s time value? Because there’s definite tempo between the notes too!
Should we turn this into a formal “Demo Request”? @rhyolight?
No, in their example they just used regular timesteps (quarter notes or something) and indicated a silence with an invalid MIDI value (e.g., one below the lowest note in the song). So the records might be 52, 52, 51, 56, 58, 51, 63, 63, 51…, where 52 was the lowest note and 51 is a rest (the note you don’t play back).
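A quick sketch of that record layout (my reconstruction under the assumptions above, not the original data): one record per fixed timestep, with rests encoded as a sentinel value one below the song’s lowest note.

```python
# Hypothetical reconstruction of the timestep/rest layout described above.

notes = [52, 64, 66]          # notes actually played
REST = min(notes) - 1         # 51 here: "the note you don't play back"

def to_timesteps(melody, steps_per_note=2):
    """Hold each note for steps_per_note timesteps, then insert a rest."""
    records = []
    for note in melody:
        records.extend([note] * steps_per_note)
        records.append(REST)
    return records

print(to_timesteps(notes))  # -> [52, 52, 51, 64, 64, 51, 66, 66, 51]
```

Since the rest value sits just below the encoder’s minimum real note, it still falls inside the scalar encoder’s range but never overlaps strongly with a played note.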
The song prediction example is closer to people’s daily experience than many other examples, and it has a certain artistic sense. These qualities make it compelling to us sentimental creatures, and it vividly illustrates HTM’s potential.
I agree, I think it would be pretty cool, and I would like to see how it’s done; also, it would be fun to play around with. I wonder if they used swarming on the song, and then just predicted the notes.
Did this effort ever go anywhere? I’d like to try out this demo.
Not that I know of, but I’m still super interested in the idea of a MIDI Encoder. I wish I had time to work on it.