Questions About Recognizing Unordered Sets

To clarify, by “unordered set” I mean a set of patterns which tend to occur nearby in time but not in any particular order. For example, letters form a somewhat distinct set from numbers because you tend to read one letter after another and one number after another, whereas you read a number right next to a letter much more rarely.

  1. Can HTM recognize unordered sets?
  2. If not, is doing so even necessary?

Edit: I believe I misunderstood the question, so my answer may be misleading. But I’ll leave it here anyway. Read @mrcslws’s answer below.

The term “unordered sets” is confusing. From your description, I believe you are actually talking about temporal sequences that are seemingly randomly repeated throughout a stream of data. For example, the letters “eat”, which could appear in “beat”, “death”, “cheat”, etc.

Depending on how the data is encoded, you might even call this a spatial pattern rather than a temporal one, especially if you are encoding at the word (not letter) level.
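
For instance, here is a toy illustration of that difference (random SDRs as stand-ins for encoder output, not the actual NuPIC encoders):

```python
import numpy as np

SDR_SIZE, ACTIVE_BITS = 1024, 20

def random_sdr(seed):
    """Stand-in encoder: a fixed random sparse code per symbol (not a real NuPIC encoder)."""
    bits = np.zeros(SDR_SIZE, dtype=bool)
    bits[np.random.default_rng(seed).choice(SDR_SIZE, ACTIVE_BITS, replace=False)] = True
    return bits

# Letter-level: "eat" arrives as a temporal sequence of three SDRs,
# one per time step, so a sequence memory sees the order e -> a -> t.
letter_stream = [random_sdr(ord(c)) for c in "eat"]

# Word-level: "eat" is a single SDR in one time step, i.e. a spatial pattern.
word_sdr = random_sdr(10_000)  # any fixed seed standing in for the whole word
```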

Anyway, yes, HTM can recognize temporal sequences, but it is harder to identify when they are occurring. @marion has been working on a project focused on this, but I’m not sure what the status is.

Ha, here’s a quick answer. https://github.com/numenta/nupic.research/blob/fdb6fe2edc3c3d8e6283349d6fa55a824acb66f4/projects/l2_pooling/notebooks/SetMemory_similar_objects.ipynb

I showed how you can rearrange the pieces of HTM to create something that can recognize unordered sets.

There are fun diagrams at the bottom. I’ll paste them below.

[Diagram: Single-column]

[Diagram: Multi-column]


Does the diagram above imply that this implementation is obsolete? Or is the diagram a more extended version that also incorporates distal depolarization?

Until we have a complete model of the cortex, nothing is obsolete. 🙂

This “Set Memory” algorithm is built into our current model of Layer 2. It’s essentially supervised – you put it into a “learning mode” when you want it to learn new elements within the current set, and you tell it to generate a new cell SDR when you want it to learn a new set. The computation being performed here is pretty trivial and predictable. It’s built entirely on active dendrites and inhibition driven by spike timing (which is controlled by distal dendrites).
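
A rough sketch of that control flow, using made-up names and plain numpy rather than the actual Layer 2 / Column Pooler code, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CELLS, CELLS_PER_SET = 4096, 40   # layer-2-like cell population (assumed sizes)

class ToySetMemory:
    """Toy sketch: each set is a fixed random cell SDR, and learning mode
    simply associates element SDRs with the current set's cells (a stand-in
    for growing the real proximal/distal connections)."""

    def __init__(self):
        self.sets = {}        # name -> dict(cells=..., elements=[...])
        self.current = None   # the set currently in learning mode

    def new_set(self, name):
        """Supervised signal: 'this is a new set' -> generate a new cell SDR."""
        cells = rng.choice(NUM_CELLS, CELLS_PER_SET, replace=False)
        self.sets[name] = {"cells": cells, "elements": []}
        self.current = name
        return cells

    def learn_element(self, element_sdr):
        """Learning mode: associate one more element with the current set."""
        self.sets[self.current]["elements"].append(element_sdr)

    def infer(self, element_sdrs):
        """Recognition: the set whose stored elements best overlap the observed
        ones wins, regardless of the order the elements arrive in (a crude
        stand-in for inhibition selecting the best-supported cell SDR)."""
        def support(stored):
            return sum(
                max((int(np.count_nonzero(obs & s)) for s in stored), default=0)
                for obs in element_sdrs
            )
        best = max(self.sets, key=lambda n: support(self.sets[n]["elements"]))
        return best, self.sets[best]["cells"]
```

In use you would call new_set once per set, learn_element for each member, and then infer on whatever subset of elements shows up, in any order.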

The Union Temporal Pooler is based on different biological properties, relying on cells maintaining and accumulating excitation over time. It’s less supervised. In many ways it’s more powerful. It’s less predictable – for example, if you show it a bunch of elements of one set, will it eventually deactivate the cells that were activated by the earlier elements in the set? The “Set Memory” algorithm above will maintain context indefinitely until it gets confused, then it will start over. It’s unclear how long the Union Temporal Pooler will maintain context.
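
For contrast, here is a toy accumulate-and-decay sketch of that persistent-excitation idea (simplified numpy with assumed constants, not the actual UnionTemporalPooler implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_CELLS, INPUT_ACTIVE, POOLED_ACTIVE = 2048, 40, 120
DECAY, BOOST = 0.9, 1.0   # assumed values, just to show the shape of the idea

def toy_union_pool(input_sdrs):
    """Accumulate excitation over a stream of inputs and keep the most
    excited cells active, so the pooled output changes slowly over time."""
    excitation = np.zeros(NUM_CELLS)
    pooled = None
    for active_indices in input_sdrs:
        excitation *= DECAY                    # older evidence fades
        excitation[active_indices] += BOOST    # current input adds excitation
        pooled = np.argsort(excitation)[-POOLED_ACTIVE:]   # crude inhibition: top-k cells
    return np.sort(pooled)

# Feed the same three elements in two different orders: because excitation
# accumulates, the final pooled SDRs come out essentially identical.
perm = rng.permutation(NUM_CELLS)
elements = [perm[i * INPUT_ACTIVE:(i + 1) * INPUT_ACTIVE] for i in range(3)]
same = np.intersect1d(toy_union_pool(elements), toy_union_pool(elements[::-1]))
print(len(same), "of", POOLED_ACTIVE, "pooled cells shared")
```

With more elements or a faster DECAY, cells driven by the earliest elements eventually drop out of the top-k, which is exactly the "how long does it maintain context" question above.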

These are just two pages in our Neural Cookbook. I’m sure there will be more!