- Can I know what installing NuPIC in develop mode means? (I mean, what is the difference between installing normally and installing in develop mode?)
- I see there are 3 methods to create models. Can I know the difference between them?
- Also, I learnt about swarming. Can I know the reason for swarming in the first place? (Is it similar to parameter initialization for deep learning models?) Or is it the place where the actual learning takes place?
#1: It means you can change the python code without needing to recompile everything to see the changes. You only want this if you are expecting to change the NuPIC codebase.
#2: Hopefully the docs explain this?
#3: Swarming is for parameter optimization only. See docs.
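To make #1 concrete, this is roughly what the two install styles look like from the command line (the commands are illustrative; check the NuPIC README for the exact steps for your platform):

```shell
# Normal install: copies the package into site-packages.
# Editing your source checkout afterwards has no effect on the installed copy.
pip install nupic

# Develop ("editable") install: site-packages just links back to your
# checkout, so edits to the Python source take effect immediately.
git clone https://github.com/numenta/nupic.git
cd nupic
pip install -e .        # or the older equivalent: python setup.py develop
```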
Thanks for the reply.
2. I am still not sure about the difference between the Algorithms and Network APIs. Is it that the Network API lets us use the individual modules (e.g. spatial pooler, encoders, etc.) to build a graph, but we cannot edit those modules, while the Algorithms API lets us change the individual modules themselves? So, does that mean the Network API is a subset of the Algorithms API? Please explain in case I have understood it the wrong way.
If I have to create a complex network, but with custom encoders or poolers, which interface should I use?
Also, why can't I import encoders and poolers manually from nupic.research and build a system myself? Why the OPF? (Question from the NuPIC Walkthrough notebook.)
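To make the question concrete, here is a toy sketch of what I mean by "wire the pieces up myself." These classes are plain-Python stand-ins I wrote for illustration, not the real nupic encoder or spatial pooler:

```python
import random

class ToyScalarEncoder:
    """Stand-in for a scalar encoder: number -> sparse binary vector."""
    def __init__(self, n=64, w=8, min_val=0.0, max_val=100.0):
        self.n, self.w = n, w
        self.min_val, self.max_val = min_val, max_val

    def encode(self, value):
        # Place a contiguous run of w active bits according to the value.
        frac = (value - self.min_val) / (self.max_val - self.min_val)
        start = int(frac * (self.n - self.w))
        bits = [0] * self.n
        for i in range(start, start + self.w):
            bits[i] = 1
        return bits

class ToySpatialPooler:
    """Stand-in for a spatial pooler: pick the k best-overlapping columns."""
    def __init__(self, n_in=64, n_cols=32, k=4, seed=42):
        rng = random.Random(seed)
        self.weights = [[rng.random() for _ in range(n_in)]
                        for _ in range(n_cols)]
        self.k = k

    def compute(self, bits):
        overlaps = [sum(w * b for w, b in zip(row, bits))
                    for row in self.weights]
        ranked = sorted(range(len(overlaps)),
                        key=lambda c: overlaps[c], reverse=True)
        return sorted(ranked[:self.k])

# "Build it myself": wire encoder -> spatial pooler by hand,
# instead of handing a params dict to the OPF and letting it do the wiring.
encoder = ToyScalarEncoder()
sp = ToySpatialPooler()
active_columns = sp.compute(encoder.encode(42.0))
print(active_columns)   # the k winning column indices
```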
3. So, in general, training a model means optimizing its parameters to work well on the given dataset, right? Does that mean swarming is like training a model? If so, what is the process after swarming, where we send data to the model again? I mean, what is the difference between swarming and the next step that happens after swarming? (I expected that to be the actual training, where data points are sent to the model.)
Can you please reply?
Sorry I missed this.
No, swarming is more like evolution.
Think of it like this. In an HTM system, encoders are like sensory input. Your senses took hundreds of millions of years to evolve alongside your brain. You might look at every new data set as a new sense. The types are different, the intervals might mean different things, there could be much more or less data. The speed of the data could be different. Swarming is like trying to find the best configuration of encoder params to match the data, as if the sense were evolving over epochs of life cycles.
This sounds a bit dramatic, but it is a good analogy. Swarming is not a part of HTM theory. It is just a tool we use to evolve the encoders to better match the input data signature.
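A minimal sketch of the idea, stripped of everything HTM-specific: search over candidate parameters, score each candidate, keep the best. The error function here is made up purely for illustration (a real swarm would train a model on your data and measure prediction error):

```python
import random

def model_error(params):
    """Stand-in for 'build a model with these params and measure its error'.
    We pretend, arbitrarily, that the ideal encoder has n=400 total bits
    and w=21 active bits."""
    return abs(params["n"] - 400) + abs(params["w"] - 21)

def toy_swarm(iterations=200, seed=0):
    """Random search over encoder params -- the flavor of swarming,
    not the actual particle-swarm algorithm NuPIC uses."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(iterations):
        candidate = {"n": rng.randrange(100, 1000),
                     "w": rng.randrange(5, 50)}
        err = model_error(candidate)
        if err < best_err:
            best_params, best_err = candidate, err
    return best_params, best_err

params, err = toy_swarm()
print(params, err)   # the best candidate found, and its error
```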
Now that you have a set of model params, you might say the “training” phase starts (and never ends). There really is no test phase. HTMs are training forever. That is one of their strengths, and something deep learning systems cannot do today (online learning).
Can you reply for this please? I forgot to quote this earlier. Sorry about that.
So, swarming has to do only with encoder optimization, right? This line:
Swarming is a process that automatically determines the best model for a given dataset. By “best”, we mean the model that most accurately produces the desired output. Swarming figures out which optional components should go into a model (encoders, spatial pooler, temporal memory, classifier, etc.), as well as the best parameter values to use for each component.
at this link http://nupic.docs.numenta.org/1.0.3/guides/swarming/index.html is ambiguous (the reference to a model is ambiguous; it feels like swarming has something to do with HTM). Please correct it in case you feel it's ambiguous too.
And the paragraph even says swarming figures out which components should go into a model, and that it finds the best params for them (including for temporal memory). That creates even more confusion for me.
Thanks for the feedback. Model params include encoders, Spatial Pooling parameters, and Temporal Memory parameters. The swarm will try to find the best params for them all.
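As a rough illustration of what "model params" covers, here is a trimmed-down dict in the general shape of a NuPIC model-params structure. The field names and values are abbreviated and illustrative, not the exact OPF schema:

```python
model_params = {
    "encoders": {
        "consumption": {"type": "ScalarEncoder", "n": 400, "w": 21,
                        "minval": 0.0, "maxval": 100.0},
        "timestamp":   {"type": "DateEncoder", "timeOfDay": (21, 1)},
    },
    "spatial_pooler": {
        "columnCount": 2048,
        "potentialPct": 0.8,
        "synPermActiveInc": 0.05,
    },
    "temporal_memory": {
        "cellsPerColumn": 32,
        "activationThreshold": 13,
        "permanenceInc": 0.1,
    },
}

# A swarm searches over values like these. The synapse permanences
# themselves are not in this dict: they are learned later, while the
# model runs on data.
print(sorted(model_params))
```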
So, in deep learning, MODEL LEARNING means optimizing its params so as to fit the dataset.
But in HTM, MODEL LEARNING doesn't mean optimizing the params of the model, but optimizing the values of the synapse connections (0-1). Here, I think the model params are different from the values of the synapse connections.
Please tell me whether I am right.
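To show what I mean, here is a toy sketch of the distinction (all the numbers and the update rule are made up for illustration; they are not NuPIC's actual learning rules):

```python
# Hyperparameters: fixed before learning starts.
# This is the kind of thing swarming searches over.
permanence_inc = 0.10
permanence_dec = 0.02

# Learned state: synapse permanences, each kept in [0, 1].
# This is what changes while the model runs on data.
permanences = [0.45, 0.51, 0.30]

def learn(perms, active):
    """One toy Hebbian-style step: reinforce synapses whose input was
    active, weaken the rest, clipping every permanence to [0, 1]."""
    out = []
    for p, a in zip(perms, active):
        p = p + permanence_inc if a else p - permanence_dec
        out.append(round(min(1.0, max(0.0, p)), 3))
    return out

permanences = learn(permanences, [True, False, True])
print(permanences)   # [0.55, 0.49, 0.4]
```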
And also one more query: does the brain learn the encodings for every new word it comes across (unlike HTM, where the encoding algorithms are fixed)? I mean, how does the brain encode every new word it comes across?
I got this query because, in the case of the text encoders that exist now (cortical.IO), a word is represented as an SDR ONLY when its meaning is recognized. But the brain might be learning new words every day (its sample space of words is not fixed). So, how does the brain encode such things?
You are basically asking how brains learn new concepts. That’s a hard question. Here is a vague answer.