HTM technology is (mostly) based on the biology of the brain.
Consider what you are asking: to feed a large number of variables into this network.
Can you think of where the brain does this to get an example of the processing involved in HTM?
Take the eye for example.
A single layer of HTM may be good at working out the edges and movement of a well-formed shape, or at similar processing in a different sensory modality.
There are a large number of points to be processed, but the variables are somewhat constrained by the physics of the world. The end product takes many layers of processing to work out the shape and distance of the object, and perhaps its color and texture. Matching the object to other aspects, such as a name, likely does not happen until processing reaches the temporal lobe, several processing steps later.
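As a loose illustration of that idea (not HTM's actual API; the stage names and features here are hypothetical), each layer consumes the previous layer's output and produces a higher-level description, with the label only attached at the last stage:

```python
# Toy sketch of hierarchical processing: each stage consumes the
# previous stage's output and extracts a higher-level description.
# Stage names and features are illustrative, not part of any HTM library.

def detect_edges(pixels):
    # Stage 1: many raw points in, a few local features out.
    return {"edges": sum(1 for a, b in zip(pixels, pixels[1:]) if a != b)}

def infer_shape(features):
    # Stage 2: combine local features into a shape hypothesis.
    shape = "rectangle" if features["edges"] == 4 else "unknown"
    return {"shape": shape, **features}

def name_object(percept):
    # Final stage (temporal-lobe analogue): attach a label,
    # several processing steps after the raw input.
    labels = {"rectangle": "door"}
    return labels.get(percept["shape"], "unrecognized")

# Many input points, constrained by the "physics" of this toy world.
pixels = [0, 1, 1, 0, 0, 1, 1, 0]
print(name_object(infer_shape(detect_edges(pixels))))  # prints "door"
```

The point is only the structure: no single stage sees the whole problem, and the meaningful answer emerges from the stack, which is why a single HTM layer fed all your variables at once is probably the wrong mental model.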
How does this type of multiple input processing match up with your problem?