Searching for new ideas in HTM

Hi dear Matt,
I’m searching for new ideas in HTM theory for my proposal and thesis. Can you help me?
Thanks

2 Likes

Do you mean you want to create new ideas and are looking for problems that could lead to them? I don’t know much about problems on the computer science side, since I don’t do much programming. You’ll probably have to get very familiar with HTM (the theories, the implementation, and the places where HTM runs into problems). That’s especially true if you want to build on it within the constraints of HTM theory, by which I mean not drawing from other forms of AI and just using SDRs. Maybe if you experiment with HTM and try applying it to a bunch of different things, you’ll find a problem or limitation to work on.

One problem is that the spatial pooler’s input/output mapping can be unstable in some circumstances, or so I’ve heard. I’m not sure about the details, since the only spatial pooler I’ve implemented was too small to work properly: there were too few patterns (fewer than 50) for each cell to be active the desired fraction of time steps (1/50). I suppose that could also be a problem in normal circumstances (a couple thousand cells in the SP and many patterns, because it’s real data) with a topographical spatial pooler.
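
To make that concrete, here’s a toy k-winners simulation (made-up sizes and a simplified boosting rule, not the real SP implementation) showing why a 1/50 duty-cycle target can’t be met when there are fewer than 50 distinct patterns, so boosting keeps reshuffling the winners:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells        = 200           # toy spatial pooler size (hypothetical numbers)
sparsity       = 1.0 / 50      # target: each cell active 1/50 of the time
n_winners      = int(n_cells * sparsity)   # 4 winning cells per step
n_patterns     = 20            # fewer than 50 distinct inputs -> no stable state
boost_strength = 100.0
period         = 1000.0        # duty-cycle averaging period

# Fixed random feed-forward overlap of every cell with every input pattern.
overlaps = rng.random((n_patterns, n_cells))

duty  = np.zeros(n_cells)      # moving-average activation frequency per cell
boost = np.ones(n_cells)

outputs_for_pattern0 = set()   # track the SP output for one fixed input

for t in range(20000):
    idx     = t % n_patterns
    scores  = overlaps[idx] * boost
    winners = np.argsort(scores)[-n_winners:]             # k-winners-take-all
    active  = np.zeros(n_cells)
    active[winners] = 1.0
    duty  += (active - duty) / period                      # update duty cycles
    boost  = np.exp(-boost_strength * (duty - sparsity))   # simplified boosting
    if idx == 0:
        outputs_for_pattern0.add(frozenset(winners.tolist()))

print("distinct SP outputs seen for one unchanging input:",
      len(outputs_for_pattern0))
# Each cell can only be active in whole multiples of 1/n_patterns of the steps,
# so no assignment hits the 1/50 target when n_patterns < 50; boosting keeps
# reshuffling the winners and the input/output mapping drifts.
```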

Another problem is sequence resets. Currently, I think it’s up to the programmer to decide when to do a reset (clear the sequence context from the temporal memory). You could try to figure out how the system could learn when to reset on its own.
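
As a rough illustration of the idea (not an existing API; the tm.compute / tm.anomaly / tm.reset names in the usage comment are just assumptions about a typical TM interface), something like this could decide when to reset based on sustained high anomaly:

```python
from collections import deque

class ResetHeuristic:
    """Decide when to clear temporal-memory sequence context automatically.

    Sketch: instead of the programmer calling reset() at known sequence
    boundaries, reset whenever recent anomaly scores suggest the current
    context has stopped being predictive.
    """
    def __init__(self, window=5, threshold=0.8):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def should_reset(self, anomaly_score):
        self.scores.append(anomaly_score)
        if len(self.scores) < self.scores.maxlen:
            return False
        # Sustained high anomaly = the learned context no longer matches,
        # which is a (crude) sign that a new sequence has started.
        return sum(self.scores) / len(self.scores) > self.threshold

# Hypothetical usage with any TM exposing a per-step anomaly score and a
# reset method (the exact names are assumptions, not a fixed API):
#
#   heuristic = ResetHeuristic()
#   for sdr in input_stream:
#       tm.compute(sdr, learn=True)
#       if heuristic.should_reset(tm.anomaly):
#           tm.reset()
```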

Do you mean you are searching for information about the ongoing research?

Here are some places to look to get a sense of the ongoing research. Not all of it has been made public yet, for good reasons, but they’re going to publish some papers this fall.

https://www.youtube.com/user/OfficialNumenta/videos
The Hackers’ Hangouts sometimes include research updates. See the most recent one for info about the papers being written.

https://numenta.com/ and https://numenta.org/ have information and links.

4 Likes

I see a need for being able to recall or identify learned patterns.
HTM is awesome at learning things really fast; I don’t know how to discover what it has learned.

Some Neural Networks do this by completing a partially presented pattern.
Some work by applying a key of some sort (really a subset of pattern completion).
If I understand Cortical.io correctly, you have to create a key for each word to sample populations; this allows exploring the islands of features in a Self-Organizing Map.
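
For example, a minimal toy version of that kind of key / pattern-completion lookup (not Cortical.io’s actual method) is just overlap against stored SDRs:

```python
# Recall a stored SDR from a partial cue by maximum bit overlap.
stored = {
    "cat":  frozenset({2, 17, 40, 41, 77, 90}),
    "car":  frozenset({2, 17, 55, 56, 81, 95}),
    "tree": frozenset({5, 23, 38, 62, 70, 99}),
}

def complete(partial_bits):
    """Return the label and stored SDR with the largest overlap with the cue."""
    return max(stored.items(), key=lambda kv: len(kv[1] & partial_bits))

label, full_sdr = complete({2, 17, 41})   # a few "key" bits of "cat"
print(label, sorted(full_sdr))            # -> cat [2, 17, 40, 41, 77, 90]
```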

I don’t know of any good way to discover what transitions an HTM network has learned.
Alternately, one could probe to learn which bit is associated with a feature in a given SDR.
Does anyone have a proposal to recall the learned semantic information or (dare to dream) a catalog of memories?

2 Likes

This wouldn’t help with patterns which branch from a common root, but as a place to start, one could in theory identify the beginnings of learned patterns by first finding every cell that has many (above some threshold) connected synapses on its axon but few (below some threshold) connected synapses on its distal segments.

Then that identified population could be organized into separate groups of cells that share many (above some threshold) axonal synapses onto the same next population of cells. Each group would theoretically represent the first element of a learned pattern.
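
A rough sketch of those two steps, with made-up connectivity and thresholds (a real HTM implementation stores this in segment/synapse structures rather than explicit per-cell axon lists):

```python
# Toy connectivity for an HTM-style network (illustrative numbers only):
# axon_targets[c]    = cells that c projects to with connected synapses
# distal_synapses[c] = number of connected synapses on c's distal segments
axon_targets = {
    0: {10, 11, 12}, 1: {10, 11, 13}, 2: {20, 21, 22},
    3: {20, 21, 23}, 4: {30},         5: {31},
}
distal_synapses = {0: 0, 1: 1, 2: 0, 3: 0, 4: 12, 5: 9}

AXON_MIN, DISTAL_MAX, SHARED_MIN = 2, 2, 2

# Step 1: candidate "sequence start" cells -- many outgoing synapses, but few
# synapses on their own distal segments (little temporal context feeding in).
starts = [c for c in axon_targets
          if len(axon_targets[c]) >= AXON_MIN
          and distal_synapses[c] <= DISTAL_MAX]

# Step 2: group candidates that project heavily onto the same next population.
groups = []
for c in starts:
    for g in groups:
        shared = axon_targets[c] & set.union(*(axon_targets[m] for m in g))
        if len(shared) >= SHARED_MIN:
            g.append(c)
            break
    else:
        groups.append([c])

print(groups)   # -> [[0, 1], [2, 3]]: two putative first elements of sequences
```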

From there, however, I think trying to unfold the rest of the pattern would become challenging. It would get fuzzier and more saturated the longer or more generalized the pattern is, as well as having the branching problem I mentioned.

2 Likes

What about something like a working Temporal Pooling algorithm that operates alongside the Temporal Memory? That could be very useful, for example for explaining how the same object can be recognised through different sequences of saccadic movements, or how different sequences of touch movements lead us to recognise a particular object.
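
Just to illustrate what I mean (a toy union-pooling-style sketch, not Numenta’s algorithm): if a pooled layer accumulates decaying activity from the TM across a sequence, different orderings of the same features end up with a similar stable representation.

```python
def pool_sequence(tm_outputs, decay=0.9, pooled_size=8):
    """tm_outputs: list of sets of active cell indices, one per time step."""
    persistence = {}
    for active in tm_outputs:
        for cell in list(persistence):
            persistence[cell] *= decay                 # old activity fades
        for cell in active:
            persistence[cell] = persistence.get(cell, 0.0) + 1.0
    # The pooled representation: the most persistent cells over the sequence.
    top = sorted(persistence, key=persistence.get, reverse=True)[:pooled_size]
    return frozenset(top)

# Two different "saccade orders" over the same object features:
order_a = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]
order_b = [{5, 6}, {7, 8}, {1, 2}, {3, 4}]

print(pool_sequence(order_a) == pool_sequence(order_b))   # -> True
```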

4 Likes

That is exactly what Numenta is working on. Stay tuned, we’ll have 2 papers released in 3 weeks.

9 Likes