Some rambling Bitking pontifications:
RNNs and HTM can both do sequences, and both can match up a sequence in progress and do auto-completion. Great if you are doing spell checking. Beyond that - meh.
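Something like this toy bigram predictor shows the kind of auto-completion I mean - it is a stand-in, not an RNN or HTM, just the simplest thing that can extend a sequence in progress:

```python
# Toy auto-completion: count which word tends to follow each word,
# then extend a prompt with the most likely continuations.
from collections import defaultdict, Counter

def train(corpus):
    """Build a bigram table: word -> Counter of following words."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, prompt, length=5):
    """Match a sequence in progress and append likely next words."""
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train("it was a dark and stormy night and the rain fell in torrents")
print(autocomplete(model, "it was a dark"))
```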
That is rather limited. In the bigger picture you can imagine typing in “It was a dark and stormy night” and a terrible potboiler would spew out. I think that bad writing could be automated that way. No - I don’t think it should be - but it could.
As far as something a little more useful goes, you need a purpose to write - it has to be about something. HTM is supposed to be based on how the brain works, and the brain is thought to work in what is sometimes called a predictive mode. I don’t know if that is really the best way to describe it. It is more about recall of learned sequences. This may involve bits and pieces being jammed together in a synthesis to match a novel perception.
In the H of HTM we expect that a higher-level representation is formed and fed back to the lower levels, so there is recall at all levels. If a “topic” has been registered (we are talking about x), then recognition at the lower levels is guided to select fragments that are compatible with the current topic.
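A toy sketch of that top-down biasing - the topic vocabularies here are made up purely for illustration, not any worked-out representation:

```python
# Top-down feedback sketch: a higher-level "topic" biases which
# fragment the lower level selects among ambiguous candidates.
# The topic word sets are illustrative assumptions.
TOPIC_WORDS = {
    "weather": {"storm", "rain", "night", "dark"},
    "cooking": {"pot", "boil", "stock", "night"},
}

def pick_fragment(candidates, topic):
    """Score each candidate fragment by its overlap with the current topic."""
    def score(fragment):
        return sum(w in TOPIC_WORDS[topic] for w in fragment.split())
    return max(candidates, key=score)

candidates = ["the stock came to a boil", "the storm raged all night"]
print(pick_fragment(candidates, "weather"))  # -> the storm raged all night
print(pick_fragment(candidates, "cooking"))  # -> the stock came to a boil
```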
The canned generator should be based not on letter sequences but on parts of speech.
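Something like this, where a part-of-speech template gets filled from word pools - the template and lexicon are placeholders, just to show the shape of the idea:

```python
import random

# Generation driven by parts of speech rather than letter sequences:
# a POS template is filled from small word pools (both are toy assumptions).
LEXICON = {
    "DET": ["the", "a"],
    "ADJ": ["dark", "stormy", "quiet"],
    "NOUN": ["night", "storm", "writer"],
    "VERB": ["loomed", "raged", "waited"],
}
TEMPLATE = ["DET", "ADJ", "NOUN", "VERB"]

def generate(template=TEMPLATE):
    """Emit one sentence by choosing a word for each part of speech."""
    return " ".join(random.choice(LEXICON[pos]) for pos in template) + "."

print(generate())  # e.g. "the stormy night raged."
```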
A more interesting sentence generator would work like Parsey McParseface: discover what it is we are talking about and construct sentences that have something to do with the discussion.
I think of this as a blackboard, holding the current nouns and the relations between them (verbs and adjectives).
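Roughly this kind of structure, with made-up entries - not a worked-out design, just the blackboard idea in miniature:

```python
# Blackboard sketch: current nouns plus the relations (verbs and
# adjectives) that tie them together. Entries are illustrative only.
blackboard = {
    "nouns": {"night", "storm"},
    "relations": [
        ("storm", "verb:raged"),
        ("night", "adj:dark"),
    ],
}

def mentions(noun):
    """Pull everything the blackboard currently says about one noun."""
    return [rel for subj, rel in blackboard["relations"] if subj == noun]

blackboard["relations"].append(("night", "adj:stormy"))
print(mentions("night"))  # -> ['adj:dark', 'adj:stormy']
```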
The logic of generation should be distributed over several levels of representation rather than sitting at a single level, as in this example program.
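For instance, a topic level could pick a sentence plan and a word level could fill in the slots - all three levels in this sketch are toy assumptions:

```python
import random

# Generation distributed over levels: a topic picks a sentence plan
# (high level), then words fill the plan's slots (low level).
PLANS = {"weather": [["DET", "ADJ", "NOUN"], ["DET", "NOUN", "VERB"]]}
WORDS = {
    "DET": ["the"], "ADJ": ["dark", "stormy"],
    "NOUN": ["night", "storm"], "VERB": ["raged"],
}

def generate(topic):
    plan = random.choice(PLANS[topic])         # high level: choose a plan
    return " ".join(random.choice(WORDS[pos])  # low level: fill each slot
                    for pos in plan) + "."

print(generate("weather"))  # e.g. "the dark night." or "the storm raged."
```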
It would be asking a lot to have world knowledge of what practical actions could be taken (common sense), but it would be interesting to see what is possible. Hook this up to Eliza and let it go!