Handling Multiple Fields with Hierarchy (?)


Hi Numenta and community! I'm a grad student working on HTM, interested in making a novel contribution to the algorithm, and I thought I'd seek feedback from those who know best to help determine what openings there are and what angles I could take. One issue I'm particularly interested in is handling multiple input streams. In the video 'Multiple Fields in NuPIC', Subutai mentions the algorithm's current limitation of handling only about 5 streams.

I'd like to eventually be able to feed any number of data streams into the algorithm and have it determine which fields play important roles in which circumstances and in what combinations. My basic intuition about how humans handle this is that, when given a large number of input streams, we choose to focus on some subset based on what we know to be important, and the more experience we have, the better we distinguish which fields, and which combinations thereof, mean what in which contexts.

It seems to me that the current method of handling multiple fields, feeding the SDRs representing each field into a single HTM region (if I understand this correctly), does not scale well: the combined input eventually becomes saturated with ON bits and loses its sparsity (hence the ~5-field limit in the current implementation). So I have the thought of giving each field its own region, and then using hierarchy in some way to learn the interactions between the different fields in different scenarios as they unfold over time.
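To make the saturation concern concrete, here is a rough back-of-the-envelope sketch (all numbers are hypothetical, not NuPIC defaults): if each field contributes a fixed number of ON bits into a shared, fixed-width input space, the overall density grows linearly with the number of fields.

```python
def combined_density(num_fields, input_bits=2048, on_bits_per_field=40):
    """Fraction of ON bits when several field encodings share a
    fixed-width input space (illustrative numbers, not NuPIC defaults)."""
    return min(1.0, num_fields * on_bits_per_field / input_bits)
```

With these made-up numbers, 1 field gives roughly 2% density, 5 fields roughly 10%, and the input is fully saturated well before 100 fields, which matches the intuition that sparsity is lost as fields are piled on.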

I thought, for instance, that if there were 3 input fields, each could get its own region on the first level of the hierarchy. The next level up could then have 2 regions, the first formed from the combined outputs of regions 1 and 2 from level 1, and the second from the combined outputs of regions 2 and 3 from level 1. Finally, there could be another level of hierarchy (level 3) with a single region, formed from the combined outputs of regions 1 and 2 from level 2.
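The pairing scheme above can be sketched as a simple wiring table (the region names and the three fields A, B, C are illustrative, not any real NuPIC network API):

```python
# Level -> {region name: list of its input sources}
hierarchy = {
    1: {"L1-R1": ["fieldA"], "L1-R2": ["fieldB"], "L1-R3": ["fieldC"]},
    2: {"L2-R1": ["L1-R1", "L1-R2"], "L2-R2": ["L1-R2", "L1-R3"]},
    3: {"L3-R1": ["L2-R1", "L2-R2"]},
}

def inputs_feeding(region):
    """Walk down the hierarchy to list which raw fields reach a region."""
    for level in hierarchy.values():
        if region in level:
            fields = []
            for src in level[region]:
                # sources named "L..." are regions; recurse into them
                fields += inputs_feeding(src) if src.startswith("L") else [src]
            return fields
    return []
```

Note that with this wiring the top region sees field B twice (once through each level-2 region), which is one concrete question the mapping scheme would have to answer.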

Whatever the exact procedure, I simply hope to achieve the ability to feed any number of input fields into the algorithm, so that the only limit on its capacity would be the amount of data. Of course it would take more data to find patterns of patterns of patterns across numerous input fields than within a single field, but I hope to be limited only by the supply and quality of data, without worrying about any ceilings in the capacity of the algorithm itself. I'm happy to go into more detail on how the outputs of 2 regions from hierarchy level 1 would be mapped to a single layer in hierarchy level 2, but I don't want this post to get too long.

In 'On Intelligence' I remember Jeff explaining hierarchy by describing how it allows information coming in from multiple senses to give more predictive power than any one sense could on its own, and that there is some level in the cortical hierarchy where the predictions from different senses (say, sight and sound) are combined. So in theory, hierarchy seems like a plausible way to handle the issue of multiple fields. Needless to say, I'm very curious for any thoughts anyone has on this issue. Thanks so much!

– Sam Heiserman


In NuPIC, we usually define a region as having 2048 columns, which is very small. I don't know how many different "fields" a real portion of cortex of the same size could process, because we've already departed from the biological nomenclature by talking about "fields" instead of direct input from sensory neurons. But as Subutai said, anything more than 5 starts getting very slow to process.

I am not sure that hierarchy will help solve this problem, because the encoder input still needs to be fed together into the lowest level of the hierarchy so that spatial pooling can apply across all of it.

If there is input to the system that you know is not correlated with the other inputs, and each such input is processed by a different region, then yes, hierarchy will help with scalability, but remember we are still focusing on one region.


Hi Sam, happy to hear you’re working with HTM at school. I’m interested to hear more about your project.

The combination of region outputs up the hierarchy is essential to HTM theory, but understanding the mechanisms for doing so remains an open question and an active area of our research.

In a forthcoming journal paper on HTM for anomaly detection, we describe a method of feeding the data streams into a set of smaller models, and computing a global measure that accumulates the results of individual models, indicating whether those portions of the system are anomalous. @scott would you be able to post the anomaly detection paper to arXiv?


Hey Matt and Alan,

Thanks for your replies! If you have a moment, I'll happily give you a brief outline of my research project: user discrimination using HTM.

So we start with a group of 20 human subjects, each of whom individually plays a simple computer game. For each subject we record their movements 60 times per second for 30 seconds (~1800 data points) and feed the stream into NuPIC to build a model. Given the control streams of 20 subjects, we have 20 saved NuPIC models. The objective is to feed in an unknown stream and determine which of the 20 subjects likely generated it, or whether it came from someone else entirely.
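One way the identification step might work is a minimal sketch along these lines, assuming each saved model can be queried for an anomaly score in [0, 1] per input value (the `models` callables and the `threshold` are stand-ins for illustration, not the NuPIC API):

```python
def identify(stream, models, threshold=0.5):
    """Attribute `stream` to the subject whose model finds it least
    anomalous; return None if even the best model finds it highly
    anomalous (i.e., the stream likely came from an unknown subject)."""
    avg_anomaly = {
        subject: sum(model(x) for x in stream) / len(stream)
        for subject, model in models.items()
    }
    best = min(avg_anomaly, key=avg_anomaly.get)
    return best if avg_anomaly[best] < threshold else None
```

The design choice here is to use each model's familiarity with the stream (low average anomaly) as the match criterion, with a rejection threshold for the "someone else entirely" case.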

The game is very simple, so I'll quickly explain it to give you a fuller picture of the project. Imagine a box with white borders in the middle of an otherwise black screen. A white vertical line (a needle-looking thing) moves around the screen in 1 dimension, from left to right. The needle jumps around, in and out of the box, and the player's objective is to keep it in the box as much as possible by countering its movements with the mouse. If the needle is outside the box to the left, the player counters by moving the mouse to the right with some amount of force; if it's outside to the right, they move it back to the left.

The resulting data is a stream of scalar values between -2 and 2, each representing a movement to the left or right. A stronger movement to the left is represented by a larger negative number (close to -2), and a stronger movement to the right by a number close to +2.
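For reference, a stream like this could be fed to HTM through a simple scalar encoding; the following is a minimal sketch in the spirit of NuPIC's ScalarEncoder (the parameters `n`, `w`, and the clipping behavior are illustrative choices, not real NuPIC defaults):

```python
def encode(value, n=100, w=11, lo=-2.0, hi=2.0):
    """Return a binary list of length n with a contiguous run of w ON
    bits whose position reflects where `value` falls in [lo, hi].
    Values outside the range are clipped to the nearest endpoint."""
    value = max(lo, min(hi, value))
    start = int(round((value - lo) / (hi - lo) * (n - w)))
    return [1 if start <= i < start + w else 0 for i in range(n)]
```

Because nearby values produce overlapping runs of ON bits, semantically similar mouse movements get similar SDRs, which is the property the spatial pooler relies on.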

This is clearly a single-field task at this point (only 1 input stream), though I’m interested in the concept of handling multiple fields because I hope to use this same method for identifying users on games (/tasks) that involve multiple fields.

Let's say that instead of this simple needle-moving game the subjects are pilots placed in flight simulators, and we want to learn their individual patterns for how they land their planes. I know nothing about landing planes, but from what I've heard there are (or can be) 4 main controls pilots use to do so, and each pilot gets it done using their own unique blend of the 4. I believe this data, 4 input streams per subject, would contain much richer idiosyncratic differences between subjects, and I want to feed this multi-field input into NuPIC without losing the predictive power contained in the particular combinations of the 4 as used in particular circumstances. Maybe subject 1, for instance, uses controls A and C in a certain combination on the approach to the runway when it's raining.

Along with identifying the user (which pilot is landing the plane, in this case), it seems these same saved NuPIC models could be used to predict how subjects would react in novel situations. In theory this could mean auto-pilot systems that had learned to fly and land like the pilots whose data they were trained on (say, Captain Sully Sullenberger, who successfully landed a passenger plane on the Hudson River several years ago). I like the idea that if a rookie pilot were in control and unusual circumstances arose that they were unfamiliar with (say, a big wind and lightning storm), the system could switch to an auto-pilot based on the control patterns of a more experienced pilot known to be well versed in those scenarios.

So that's the basic idea of the project I'm hoping to turn into my dissertation. Needless to say, I'd be very interested in any thoughts you might have on it, or anything to keep in mind when using NuPIC. Thanks again!

– Sam