Chaos/reservoir computing and sequential cognitive models like HTM

You’re right. With just single letters as nodes, the graph will become fully connected almost immediately.

We need some way to represent that the ends of words will have a greater variety of predictive connections.

Perhaps the answer is again just more nodes in the letter representation.

With more nodes in each letter’s representation, we can have paths over different subsets of those nodes, representing the context back along the word.
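Here’s a toy sketch of that idea in Python (not HTM’s actual mechanism; the pool size, subset size, and hash-based selection are all placeholder assumptions). Each letter gets a pool of nodes, and the preceding context picks which subset of them fires:

```python
import hashlib
import random

NODES_PER_LETTER = 32  # assumed size of each letter's node pool
SUBSET_SIZE = 8        # assumed number of nodes a given context activates

def active_subset(letter: str, context: str) -> frozenset:
    """Deterministically pick a context-specific subset of a letter's nodes.

    The same letter after different contexts activates a different subset,
    so the sequence of subsets along a word encodes the word so far.
    """
    seed = int.from_bytes(
        hashlib.sha256(f"{letter}|{context}".encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    return frozenset(rng.sample(range(NODES_PER_LETTER), SUBSET_SIZE))

# "t" ending "cat" vs "t" ending "hot": different subsets of the same pool.
print(sorted(active_subset("t", "ca")))
print(sorted(active_subset("t", "ho")))
```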

And a word-final letter can have a greater variety of those subsets going out to the different first letters of subsequent words (or to different subsets of those first-letter representations, one for each whole word that letter begins). That greater variety itself represents that this is the end of a word, and that the next word is less predictable.
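Reusing `active_subset` from the sketch above, we can watch that word-boundary effect appear in a toy stream: word-internal states funnel into a single successor subset, while word-final states fan out to a different subset for each word that can follow (again, purely illustrative):

```python
from collections import defaultdict

def successor_variety(corpus):
    """Map each (letter, context) subset to the set of subsets it leads to."""
    successors = defaultdict(set)
    for i, word in enumerate(corpus):
        context = ""
        for j, letter in enumerate(word):
            state = active_subset(letter, context)
            if j + 1 < len(word):
                # Next activation: the following letter, in a longer context.
                successors[state].add(active_subset(word[j + 1], context + letter))
            elif i + 1 < len(corpus):
                # Word boundary: next activation is the first letter of the
                # next word, starting from an empty context.
                successors[state].add(active_subset(corpus[i + 1][0], ""))
            context += letter
    return successors

stream = ["cat", "ran", "cat", "sat", "cat", "ate"]
variety = successor_variety(stream)
print(len(variety[active_subset("c", "")]))    # 1: always followed by "a"
print(len(variety[active_subset("t", "ca")]))  # 3: "r", "s", "a" start next words
```

So no explicit end-of-word marker is needed; the boundary shows up purely as the higher fan-out of outgoing subsets.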
