Here's a quick answer to get you started.
The resolution refers to the "granularity" of the input units of measure, and the encoders use it to put similar inputs into the same "bucket". For example, given the inputs 0.002 -> 0.234 -> 0.474 -> 0.009, if your resolution were 10, then these inputs would all land in the same bucket, because your resolution would be far too coarse: bucket 0 would cover 0 up to (but not including) 10, bucket 1 would cover 10-19, bucket 2 would cover 20-29, and so on, so every one of those tiny inputs would fall into bucket 0.
If instead you set your resolution to 0.001, inputs varying by thousandths would be spread across multiple buckets: 0.000-0.0009 would go in bucket 0, 0.001-0.0019 (e.g. 0.001, 0.0013, 0.0014875) would go in bucket 1, and 0.002-0.0029 would go in bucket 2.
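To make that concrete, here's a toy Python sketch of the idea (not NuPIC's actual code, just the concept; if I recall correctly the RDSE assigns buckets by rounding to the nearest multiple of the resolution, roughly like this):

```python
def bucket_index(value, resolution):
    # Conceptual bucketing: values within about one resolution step
    # of each other land in the same bucket.
    return int(round(value / resolution))

inputs = [0.002, 0.234, 0.474, 0.009]

# A resolution of 10 is far too coarse -- every input collapses into bucket 0.
print([bucket_index(v, 10) for v in inputs])     # [0, 0, 0, 0]

# A resolution of 0.001 spreads them into distinct buckets.
print([bucket_index(v, 0.001) for v in inputs])  # [2, 234, 474, 9]
```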
Keep in mind this is just a conceptual answer, but hopefully you can now see that the resolution determines how finely tuned the system is to your input.
Typically we use the RandomDistributedScalarEncoder, which (if I'm describing this right) doesn't need to know your data's range ahead of time; it sizes its buckets from the resolution you give it and creates new ones on the fly, giving you a good distribution of buckets for your data. But if you do know the range of your data beforehand, you can use other encoders such as the ScalarEncoder.
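For instance (a minimal sketch; the import paths and parameter names here are from my memory of the NuPIC API, so double-check them against your version):

```python
from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder
from nupic.encoders.scalar import ScalarEncoder

# RDSE: no min/max needed up front; it creates buckets on the fly.
rdse = RandomDistributedScalarEncoder(resolution=0.001)
sdr = rdse.encode(0.002)  # returns a sparse binary array (an SDR)

# ScalarEncoder: you declare the expected range in advance.
scalar = ScalarEncoder(w=21, minval=0.0, maxval=1.0, resolution=0.001)
sdr2 = scalar.encode(0.234)
```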
That's just a quick answer to that one question...
Cheers, and Welcome @lightpriest!!
P.S. The usual flow of data through NuPIC's algorithms is: Data -> Encoder -> Spatial Pooler -> Temporal Memory -> Classifier -or- AnomalyDetector (the class is just named Anomaly).
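And if it helps to see those stages wired together, here's a rough sketch (again, API details from memory, so treat this as an outline rather than gospel):

```python
import numpy

from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory
from nupic.algorithms.anomaly import computeRawAnomalyScore

encoder = RandomDistributedScalarEncoder(resolution=0.001)  # 400-bit output by default
sp = SpatialPooler(inputDimensions=(400,), columnDimensions=(1024,))
tm = TemporalMemory(columnDimensions=(1024,))

prevPredictedColumns = []
for value in [0.002, 0.234, 0.474, 0.009]:
    encoding = encoder.encode(value)                 # Data -> Encoder
    activeArray = numpy.zeros(1024, dtype="uint32")
    sp.compute(encoding, True, activeArray)          # Encoder -> Spatial Pooler
    activeColumns = activeArray.nonzero()[0]
    tm.compute(activeColumns, learn=True)            # Spatial Pooler -> Temporal Memory
    # Anomaly: fraction of this step's active columns NOT predicted last step.
    score = computeRawAnomalyScore(activeColumns, prevPredictedColumns)
    prevPredictedColumns = numpy.unique(
        [tm.columnForCell(c) for c in tm.getPredictiveCells()])
```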
I recommend this video, which covers encoders, but the rest of HTM School is essential for newcomers to HTM theory too. Enjoy!