I’ve obviously been thinking a lot about topology lately. One interesting finding: we really don’t have any good examples of encoders taking advantage of topology. The closest I’ve gotten was streaming a loop of animated GIF frames into NuPIC (as shown in the video above).
Maybe we can utilize topology at the scale at which we currently use HTM networks. I think this area is ripe for exploration. Here’s one idea for an HTM application that might benefit from applied topology.
Cortical.io’s API can give you a topological 64x64 bitmap for most words in several languages. Just check out the demo above: enter a word, then hover over the on bits in the representation to the right. This representation is topological, and we can generate streams of these representations just by processing any text.
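To make that concrete, here is a minimal sketch of what working with such a fingerprint might look like. It assumes the API hands you a list of flat "on bit" indices (as the Cortical.io demo displays them) and simply lays them out on a 64x64 grid, which is what preserves the topology; the function names are mine, not part of any API.

```python
import numpy as np

def fingerprint_to_bitmap(positions, width=64, height=64):
    """Unpack a list of flat on-bit indices into a 2D binary bitmap.

    Nearby indices land at nearby grid coordinates, so semantic
    neighborhoods in the fingerprint become spatial neighborhoods.
    """
    bitmap = np.zeros((height, width), dtype=np.uint8)
    for pos in positions:
        row, col = divmod(pos, width)  # flat index -> (row, col)
        bitmap[row, col] = 1
    return bitmap

def bitmap_to_fingerprint(bitmap):
    """Inverse: recover the sorted list of flat on-bit indices."""
    rows, cols = np.nonzero(bitmap)
    return sorted(int(r) * bitmap.shape[1] + int(c)
                  for r, c in zip(rows, cols))
```

Streaming text through the retina would then just mean emitting one such bitmap per word into the HTM network.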
It would be interesting to feed these representations (from a classic text, a poem, or something similar) into an HTM model and tune the topology settings of the spatial pooler (SP). Perhaps someone could start predicting the next word in sentences with some accuracy?
Of course, there are a lot of interesting problems to solve here as well. How do you deal with “if”, “and”, “the”, “or”, and so on? There are a lot of abstract terms in language that do not translate well into the retina Cortical.io has created. But attributes like part of speech can be looked up with other tools and used to help with the prediction (more info and ideas about this in this video).
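One simple way to sidestep those abstract function words before querying the retina is to filter them with a small stop list, sketched below. The word list is illustrative and far from exhaustive; a proper part-of-speech tagger would do this more robustly.

```python
# Hypothetical preprocessing step: drop function words that lack a
# useful retina fingerprint, keeping only content words for the stream.
FUNCTION_WORDS = {"if", "and", "the", "or", "a", "an", "of", "to", "in", "is"}

def content_words(text):
    """Lowercase, strip surrounding punctuation, drop function words."""
    words = [w.strip(".,;:!?\"'") for w in text.lower().split()]
    return [w for w in words if w and w not in FUNCTION_WORDS]
```

The dropped words need not be thrown away entirely: their part of speech could be encoded separately and fed in alongside the retina bitmap.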
Is anyone interested in working on a project like this?