word fingerprints have topology

I’ve obviously been thinking a lot about topology lately. One of my interesting finds is that we really don’t have any good examples of encoders taking advantage of topology. The closest I’ve gotten was streaming a loop of animated GIF frames into NuPIC (as shown in the video above).

Maybe we can utilize topology at the scale at which we currently use HTM networks. I think this area is ripe for exploration. Here’s one idea for an HTM application that might benefit from applied topology.

Cortical.io’s API can give you a topological 64x64 bitmap for most words in several languages. Just check out the demo above. Enter a word, then hover over the “on” bits in the representation to the right. This representation is topological, and we can generate streams of it just by processing any text.
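To make the topology concrete, here’s a minimal sketch of unpacking such a fingerprint into a 2-D grid. The `positions` list below is made up for illustration; a real one would come from the retina API, which I’m not calling here:

```python
# Sketch: unpack a word fingerprint (a flat list of on-bit indices) into
# a 64x64 bitmap so the 2-D topology is explicit. The `positions` list
# is a made-up example, not a real fingerprint from the API.

WIDTH = 64

def to_bitmap(positions, width=WIDTH):
    """Map flat on-bit indices to a width x width grid of 0/1."""
    grid = [[0] * width for _ in range(width)]
    for p in positions:
        row, col = divmod(p, width)  # flat index -> (row, column)
        grid[row][col] = 1
    return grid

positions = [0, 1, 64, 65, 130, 4095]  # hypothetical on bits
bitmap = to_bitmap(positions)
```

Nearby on bits in the fingerprint land in nearby grid cells, which is exactly the property a topology-aware SP could exploit.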

It would be interesting to try to feed these representations (from a classic text or poem or something) into an HTM model and tune the topology settings of the SP. Perhaps the model could start predicting the next word in sentences with some accuracy?
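For concreteness, here’s a hedged sketch of the kind of SP configuration I mean, using NuPIC’s SpatialPooler parameter names. The values are illustrative guesses, not tuned settings:

```python
# Sketch: SP settings that would exercise topology, using NuPIC's
# SpatialPooler parameter names. Every value here is a guess for
# illustration, not a tested configuration.
sp_params = {
    "inputDimensions": (64, 64),    # matches the 64x64 word fingerprint
    "columnDimensions": (32, 32),   # 2-D column topology
    "potentialRadius": 8,           # each column samples a local patch
    "potentialPct": 0.8,
    "globalInhibition": False,      # local inhibition preserves topology
    "localAreaDensity": 0.02,
    "wrapAround": False,            # fingerprints don't wrap at the edges
}
```

The key switches are `globalInhibition=False` and a small `potentialRadius`; with global inhibition on, the spatial arrangement of the input bits is effectively ignored.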

Of course, there are a lot of interesting problems to solve here as well. How do you deal with “if”, “and”, “the”, “or”, etc.? There are a lot of abstract terms in language that do not translate well into the representations the retina creates. But things like “type of word” can be looked up with other tools and used to help with the prediction (more info and ideas about this in this video).
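One cheap way to sketch that “look it up with other tools” idea in Python (the category table below is a tiny made-up example, not a real lexical resource):

```python
# Sketch: tag function words with a coarse category before encoding, so
# a model can fall back on "type of word" when the retina has no useful
# fingerprint for a word. This lookup table is a made-up toy example.
FUNCTION_WORDS = {
    "if": "conjunction",
    "and": "conjunction",
    "or": "conjunction",
    "the": "determiner",
    "a": "determiner",
}

def word_category(word):
    """Return a coarse category for known function words, else 'content'."""
    return FUNCTION_WORDS.get(word.lower(), "content")
```

A real system would swap the toy table for a part-of-speech tagger, but the shape of the idea is the same: encode the category when the fingerprint is uninformative.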

Is anyone interested in working on a project like this?


@rhyolight Wow Matt, I never knew that categorization experiment existed! I think it’s very interesting to feed in meta-categories of words in a sentence and be able to predict the part-of-speech of an upcoming word!

Anyway, I would like to say (since I also work for them) that if anyone needs help or guidance, I’m also available, and/or I can reach people there who may be able to help. So, just letting you all know that resource is available too…


As far as suggesting the next word goes, two thoughts come to mind.

It seems natural to use the H part of HTM to put in the equivalent of Parsey McParseface. [1]

The second part is a bit trickier. What exactly do you expect the sentence generator to produce? There are a vast number of sequences that can be generated from most starting words and phrases; each one means a different thing.

How is it that you might expect something sensible to come out of it? If you were to input something like actors and relationships, maybe it might make some sense, but free-running? Not so much.



Correct me if I’m wrong, but I think the intention was to feed in multiple texts and have the HTM predict the next word(s) within that limited context?

The linguist app Chetan wrote can already do it on a letter-by-letter basis for a single text…

I don’t know if your system would get stuck in a loop repeating the same few sentences after a while. I had that problem with simple hash walk decision/prediction trees. One solution I thought of was to train a few different trees on somewhat different data sets and then pick one tree at random to predict the next letter ahead. You could certainly churn out the chatter that way.
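A rough sketch of that ensemble idea, with simple order-1 character models standing in for the trees (the training texts are toy data, not real corpora):

```python
import random
from collections import defaultdict, Counter

# Sketch: train a few order-1 character models on different texts, then
# pick one model at random at each step, so the generator is less likely
# to lock into one repeating sequence. Toy stand-in for the tree idea.

def train(text):
    """Count next-character frequencies for each character."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def predict(models, ch, rng):
    """Pick one model at random and return its most likely next char."""
    model = rng.choice(models)
    if ch not in model:
        return None
    return model[ch].most_common(1)[0][0]

models = [train("the cat sat"), train("the dog ran"), train("then they went")]
rng = random.Random(42)
next_ch = predict(models, "t", rng)
```

Because each step may consult a different model, a run that would repeat under one model can be nudged onto a different path by another.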



I don’t believe “getting stuck” would be a problem here. HTM systems adapt to new input anyway, but more to the point - I believe the way @rhyolight described the problem, it’s a closed system which includes a finite set of input sentences.

Well Matt, I didn’t expect to get called out the very first day I joined the forum, but that is exactly my interest. In fact, I’ve been working on it for a long time, but in Chinese. I take a swing at it every ten years or so, and it’s that time again; this time I have the decks cleared for ’17. The last time was when Jeff’s book came out. Here’s a poem that I posted on the On Intelligence blog in October ’04:

yea a fully pixelated damn near binary brain.
a mandelbrot fractal of neural bifurcation
propagating up from the initial excitation
reverberating down in recognized reflection
and when the wave is standing
from the top down to the bottom
your thoughts you know
you know you know
‘cause you’ve already thought ‘em
this universal making sense
of all your stimulations
is what we used to call
good good go-od
good vibrations

That’s why I should stick to lurking but your bot bugged me into joining.

In the meantime I’m trying to figure out what you and Francisco are doing. I’ve watched a couple of his lectures, but it’s still fairly opaque. I’m beginning to get a grip on your stuff, and I love your lessons; I’ve seen them all at least twice, and they’re great.

Good Holidays!
