Learning how to spell using HTM-inspired sequence learning

So, I borrowed some ideas from HTM sequence learning and, using my notation, applied it to learning and recalling 74,000 words/sequences. I guess I’m approaching the AI problem from a different perspective than NuPIC. Heh, everyone needs a niche. I’m aiming for a more symbolic, mathematical approach, with a focus on getting the representation right, rather than strict adherence to the biology. But my approach is closer to HTM than it is to deep learning/ANNs, or Prolog for that matter. My whole language is designed around manipulating SDRs (sparse distributed representations), since I too think SDRs, or superpositions as I call them, are the fundamental data-type of brains. So the driving idea becomes: given an SDR, what is the set of interesting transformations of that SDR? You end up with a collection of operators that map SDRs to SDRs, and these can be trivially chained together. Learn rules, operators, and sequences of operators are then the basis for the entire language. And operators have an easy interpretation as steps or tools, and sequences of operators as a kind of pathway through some abstract space.
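To make the operator-chaining idea concrete, here is a minimal Python sketch, standing in for my actual notation: an SDR is modelled as a set of active bit indices, operators are just functions from SDR to SDR, and `chain` composes them into a pathway. All of the names and the two example operators are illustrative, not part of the real language.

```python
# A minimal sketch of the operator idea, assuming an SDR is just a set of
# active bit indices. Names are illustrative, not the author's notation.

SDR = frozenset  # an SDR: the set of indices of the active bits

def make_union_op(other):
    """Operator that ORs a fixed SDR into its input."""
    return lambda sdr: SDR(sdr | other)

def make_mask_op(mask):
    """Operator that keeps only the bits present in a fixed mask."""
    return lambda sdr: SDR(sdr & mask)

def chain(*ops):
    """Compose operators left to right, so chain(f, g)(x) == g(f(x))."""
    def composed(sdr):
        for op in ops:
            sdr = op(sdr)
        return sdr
    return composed

# Example: a two-step "pathway" through SDR space.
x = SDR({3, 17, 42})
pathway = chain(make_union_op(SDR({5, 42})), make_mask_op(SDR({5, 17, 99})))
print(sorted(pathway(x)))  # [5, 17]
```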

From the blog post:

In this post we are going to learn how to spell using HTM-style high-order sequences. This may look trivial, e.g. compared to how you would do it in Python, but it is a nice proof of concept of how the brain might do it, or at least a mathematical abstraction of that. There are two stages: the learning stage and the recall stage. And the learning stage has two components: encoding all the symbols we will use, and then learning the sequences of those symbols. In our case that means 69 symbols, 74,550 words, and hence 74,550 sequences. The words are from the Moby project. I guess the key point of this post is that without the concept of mini-columns (and our random-column[k] operator), we could not represent distinct sequences of our symbols. Another point is that this is just a proof of concept. In practice we should be able to carry the idea over to other types of sequences, not just individual letters. I’ll probably try that later, e.g. sequences of words in text.
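As a rough illustration of why the random-column[k] step matters, here is a toy Python sketch; it is a stand-in for the actual notation, with made-up sizes and a plain dictionary playing the role of the learned transitions. Each symbol gets a fixed SDR of active mini-columns; each occurrence of a symbol then picks one random cell per active column, so the same letter in different contexts lands on a distinct cell-level state. Because those states are effectively unique, words that share letters, like "frog" and "from", can be learned and recalled without interfering. Keying recall off a per-word start state is a simplification for the demo.

```python
import random

K = 10         # cells per mini-column: the "random-column[k]" with k = 10
N_COLS = 2048  # total mini-columns (illustrative size)
BITS = 40      # active mini-columns per symbol SDR (illustrative size)

symbol_sdr = {}   # symbol -> fixed set of active mini-columns
transitions = {}  # cell-level state -> (next letter, next cell-level state)
starts = {}       # word -> its first cell-level state (a demo simplification)

def encode(symbol):
    """Fixed random SDR (set of active mini-columns) for each symbol."""
    if symbol not in symbol_sdr:
        symbol_sdr[symbol] = frozenset(random.sample(range(N_COLS), BITS))
    return symbol_sdr[symbol]

def random_column(columns):
    """Pick one of the K cells in each active column. Two occurrences of
    the same letter draw different cells, so their sequence contexts stay
    distinct -- the point of mini-columns for high-order sequences."""
    return frozenset((col, random.randrange(K)) for col in columns)

def learn(word):
    """Store one word as a chain of cell-to-cell transitions."""
    state = random_column(encode(word[0]))
    starts[word] = state
    for letter in word[1:]:
        nxt = random_column(encode(letter))
        transitions[state] = (letter, nxt)
        state = nxt

def recall(word):
    """Replay the stored transitions to spell the word back out."""
    state, out = starts[word], word[0]
    while state in transitions:
        letter, state = transitions[state]
        out += letter
    return out

for w in ["frog", "from", "fog"]:
    learn(w)
print([recall(w) for w in ["frog", "from", "fog"]])  # ['frog', 'from', 'fog']
```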


Wow, this sounds a lot like what I have in mind, but you seem to have a 2-year head start. Gonna read through your blog. I like the focus on representation.

I myself was looking for a way to combine the Semantic Web with some (stochastic) Prolog-style language to create a rule system.