@scott any idea?
Even if I set count = 1 in the Spec() for the input, after initialize() of my region I find that the input size is always 0.
@scott do you know when the input size of a region whose input has no links will be updated?
I implemented a dummySensorRegion() with no input and one output. I then want to feed data into its output directly from outside (here, from main()) and hope that by running this network its output will be forwarded further, but I cannot figure out how to implement an interface that copies data from main() into its output.
Do you know how to do this?
@scott @mrcslws @vkruglikov Could you please help me solve the problem of "how to input data from main() directly into a region"?
In the meantime, I had a look at py.RawSensor in htmresearch and found that it has a member function addDataToQueue:
https://github.com/numenta/htmresearch/blob/master/htmresearch/regions/RawSensor.py#L129
which allows data to be fed into the sensor region, as used here:
https://github.com/numenta/htmresearch/blob/master/htmresearch/frameworks/layers/l2_l4_inference.py#L380
I tried to carry this idea over from the Python implementation to C++ by adding a member function to my region:
```cpp
class myRegion {
public:
  void inputDatafromMain(float data) { /* ... */ }
};
```
But when compiling I get an error:
```
'class nupic::Region' has no member named 'inputDatafromMain'
regionDummy->inputDatafromMain(12.34);
```
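I suspect this is because regionDummy comes from Network::addRegion, which returns a plain nupic::Region*, so the compiler only sees the generic Region interface and not my own member function. Roughly (a sketch, with my region type registered as "myRegion"):

```cpp
#include <nupic/engine/Network.hpp>
#include <nupic/engine/Region.hpp>

using namespace nupic;

int main()
{
  Network net;
  // addRegion() hands back a generic nupic::Region*, not a pointer to
  // my derived implementation class, so custom members are not visible.
  Region* regionDummy = net.addRegion("regionDummy", "myRegion", "");

  // regionDummy->inputDatafromMain(12.34);
  //   --> 'class nupic::Region' has no member named 'inputDatafromMain'
  return 0;
}
```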
I have tried everything I learned from your source code and examples, but I cannot find a way to realize this very simple idea!
Could you, Numenta friends, help me build my first application with regions?
thanks
I believe that you need a link to the input of the region before the input size will be anything other than zero. The network inspects all of the links and regions during initialization to see how wide the input arrays are that get linked to a region.
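For example, a minimal setup might look like this (region type names are placeholders for whatever built-in or custom regions are registered in your build):

```cpp
#include <nupic/engine/Network.hpp>

using namespace nupic;

int main()
{
  Network net;
  net.addRegion("dummySensor", "DummySensorRegion", "");
  net.addRegion("sensor", "MySensorRegion", "");

  // Without this link, the "sensor" region's input keeps a width of zero.
  net.link("dummySensor", "sensor", "UniformLink", "");

  net.initialize();  // links are inspected and input widths resolved here
  net.run(1);
  return 0;
}
```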
I understand that, but the region at the lowest level cannot have any link to its input. That is why I created a dummy sensor region that has no input and one output of size 1, and linked its output to the input of the sensor region. My main loop looks like this:
```
for i = 1 to n
    float sendata = 12;
    update the output of the dummy sensor with sendata and call dummySensor->compute()
    SensorRegion->prepareInputs() to copy sendata into the sensor region
    network.run(1)
```
Problem:
When the network runs, it does not take sendata = 12 as the input of the sensor region. Instead, the network starts calling compute() on each region, first the dummy region, then the sensor region. At that step the output of the dummy region is set to the default value 0, not 12. We do not have a mechanism to run the network starting from a given level or a given phase.
Did I understand the update mechanism of the network correctly?
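One pattern I am considering, similar to the Python addDataToQueue idea, is to hand the value to the dummy region through the generic parameter interface and let its compute() copy it into the output. A rough sketch (the region types and the "nextValue" parameter are placeholders for my own implementation):

```cpp
#include <nupic/engine/Network.hpp>
#include <nupic/engine/Region.hpp>

using namespace nupic;

int main()
{
  Network net;
  Region* dummy = net.addRegion("dummySensor", "DummySensorRegion", "");
  net.addRegion("sensor", "MySensorRegion", "");
  net.link("dummySensor", "sensor", "UniformLink", "");
  net.initialize();

  for (int i = 0; i < 10; ++i)
  {
    float sendata = 12.0f;
    // Hand the value to the region through the generic parameter
    // interface; dummySensor's compute() then writes it to its output,
    // and the link forwards it to the sensor region during run().
    dummy->setParameterReal32("nextValue", sendata);
    net.run(1);
  }
  return 0;
}
```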
@scott: Could you please explain how the input/output size is updated/calculated?
- During initialize()?
- Each time Network::link() is called?
- In what order within each region (first the input size, then the output size)?
- Which information is used for the calculation: the parameters in the Specs, or the output sizes of all regions linked to its input?
Thanks
@thanh-binh.to One reason it is hard to provide support for your questions is that we don't use the C++ Network API all that much in practice. We built out the Python API for research and experimentation, and we go back to C++ when we need speed. So we don't use it all that much, except to create Python structures.
Hi Matt,
Thanks for your feedback. I really do understand the current situation with the C++ API that you describe.
I'd like to say that at the moment I understand many parts of your region/network concept in C++, and I am very happy that there is hopefully only one "roadblock" left for me: understanding in which order the network updates the input/output sizes, so that I can set up my parameters correctly, because this is not documented anywhere.
I am 100% sure you and your colleagues know this exactly...
Any comment from you would help me a lot...
Thanks
The inputs/outputs can infer their widths based on what they are linked to. So if output A on some region has a known size of 100 bits and gets linked to input B on some other region that has an unknown width, the initialization code will infer that the width of input B is 100 bits. This is actually an iterative process: it attempts to initialize all inputs/outputs, but some may not be resolvable yet, so it repeats until either all links are resolved or it is unable to resolve any more of them.
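On the Spec side, count = 0 on an input is what marks the width as "unknown here". A rough sketch of how such a Spec could be declared (argument order as in Spec.hpp; please double-check against your version of nupic.core):

```cpp
#include <nupic/engine/Spec.hpp>
#include <nupic/types/Types.hpp>

using namespace nupic;

Spec* createSpec()
{
  Spec* spec = new Spec();

  // Known width: this output is always 100 elements wide.
  spec->outputs.add("out",
      OutputSpec("example output", NTA_BasicType_Real32,
                 100,     // count (known width)
                 true,    // regionLevel
                 true));  // isDefaultOutput

  // count = 0 means the width is inferred from whatever output(s)
  // get linked to this input during network initialization.
  spec->inputs.add("in",
      InputSpec("example input", NTA_BasicType_Real32,
                0,       // count (inferred from links)
                true,    // required
                true,    // regionLevel
                true));  // isDefaultInput

  return spec;
}
```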
If you want to understand the underlying implementation, feel free to browse the code. The starting point would be the Network::initialize function in nupic.core.
But you shouldn't have to understand the implementation details - just realize that inputs need to be linked or they will have a width of zero and won't ever have useful data.
@scott thank you very much. Now everything works for me!
This makes me happy! Thanks @scott and @vkruglikov for your help. And thanks @thanh-binh.to for your patience.
Thanks to all Numenta friends for your excellent support and understanding.
One more question about links between regions.
Can an input of a region A be linked to the outputs of different regions, e.g. B and C?
If yes:
- Is it correct that the input of A then consists of two parts, one from B and one from C?
- Do we then have to process the data twice in compute(), once with the data from B and once with the data from C?
- What happens if output B is linked to input A with propagationDelay = 1, 2, etc.? Does that mean output B goes into a buffer and is used in the next compute step?
Thanks for your help!
I don't quite follow.
- Do you mean that you have multiple region outputs linked to the same input on another region? Usually this will result in the outputs being concatenated into a single input array. The initialization determines that there are multiple links and computes the size of the input as the sum of the output sizes that were linked to it.
- See ^
- Yes, that's exactly right. With propagationDelay = 0, the input on the destination region will receive whatever the current value is. With propagationDelay > 0, the input will receive the value written to the originating output some number of steps (calls to run()) in the past.
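In code, roughly (region, output, and input names are placeholders; the last argument assumes a nupic.core version whose Network::link accepts a propagationDelay parameter):

```cpp
#include <nupic/engine/Network.hpp>

using namespace nupic;

void sketchLinks(Network& net)
{
  // Two outputs feeding the same input: A's "in" becomes the
  // concatenation of B's and C's outputs, and its width is the sum
  // of the two output widths (computed during initialization).
  net.link("B", "A", "UniformLink", "", "out", "in");
  net.link("C", "A", "UniformLink", "", "out", "in");

  // Delayed link: with propagationDelay = 1, A's "feedbackIn" sees the
  // value D wrote on the previous run() iteration.
  net.link("D", "A", "UniformLink", "", "out", "feedbackIn", 1);
}
```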
@scott yes, I think so. It is clear now! Many thanks for your great support.
@scott I understand that one call to the network's run() function performs a single feed-forward pass. I mean, for example, that the feedback from the L2 region to the L4 region only becomes valid for the next call to run(), so that no dead closed loop between L2 and L4 can occur.
Am I right here?
I'm not sure in that particular case. Keep in mind that with no propagation delay this could still work. If L4 has a lower phase then it runs first, pulling in data linked to its inputs, which may include feedback that was last populated in the previous iteration. It populates L4's outputs and then L2 runs, pulling in these new values. L2 in turn populates its outputs, including the feedback. But the feedback values won't be seen by L4 until the next run iteration.
The propagation delay was added for a more complex case: multiple columns. With multiple columns, the L2 regions in the columns share information. But since some regions run before others, they wouldn't have access to the lateral info from columns that haven't run yet. By using a propagation delay, all columns get lateral L2 input from the other columns from the previous iteration.
Does that make sense?
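As a rough sketch of the phase setup (region names are placeholders; assumes Network::setPhases as declared in Network.hpp):

```cpp
#include <nupic/engine/Network.hpp>
#include <set>

using namespace nupic;

void sketchPhases(Network& net)
{
  // L4 gets the lower phase, so it runs first within each run() call.
  std::set<UInt32> l4Phase = {0};
  std::set<UInt32> l2Phase = {1};
  net.setPhases("L4", l4Phase);
  net.setPhases("L2", l2Phase);

  // Within one run(1): L4's compute() runs first and reads the feedback
  // that L2 wrote on the previous iteration; then L2's compute() runs
  // and writes new feedback, which L4 only sees on the next call.
  net.run(1);
}
```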
Yes, that only makes sense when the feedback comes from regions with a higher phase.
But the internal connection within L4, e.g. between the activeCells output and the basalInput, would be dangerous with propagationDelay = 0. I am not sure whether it works.
For multiple columns, it will be very complex...
Another question:
Do you know which compiler options/flags are useful for better runtime performance? Using pthreads?
Thanks
You can look at the CMake file in nupic.core to see what flags are used in "Release" mode - that is as far as we have gone with optimization, but it should be a big improvement over the "Debug" options.
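For example, configuring the nupic.core build with something like this (exact invocation depends on your setup):

```
cmake -DCMAKE_BUILD_TYPE=Release ..
```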
Thanks