I am using kafka-consumer to create a stream of data. Initially I used Python's built-in multiprocessing library to create multiple processes, each serving a different Kafka consumer (different partitions of the same topic). Every time (I have tried this 5 or 6 times with different Kafka settings), after 1 or 2 days the process got killed silently without raising any exception. I have enabled debug logging for both Kafka and OPF. I didn't find anything suspicious in the Kafka logs, but the process got killed by OPF after encoding the input, without raising any exception. As daemon was set to true, each of the processes was terminated automatically. Now I have started the processes in separate terminals, and one of them has already terminated (after 1 day) with the same behaviour, i.e. after encoding the input. Is multiprocessing not supported by HTM? Can someone help me? I am stuck. Thanks in advance.
The process is killed by the kernel.
[Fri Mar 1 13:45:54 2019] Out of memory: Kill process 18635 (python) score 346 or sacrifice child
[Fri Mar 1 13:45:54 2019] Killed process 18635 (python) total-vm:7495836kB, anon-rss:6473844kB, file-rss:0kB, shmem-rss:0kB
RAM is 16 GB. When I checked CPU and memory usage of the process:
USER  PID    %CPU  %MEM  VSZ      RSS      TTY    STAT  START  TIME     COMMAND
root  31693  86.0  33.2  6279712  5415384  pts/8  Rl+   Mar01  3567:49  python run.py
Is there any solution to this problem? HTM is consuming a very high (and still increasing) amount of memory, which results in the kernel forcefully killing the process.
Hello and thanks for trying NuPIC! Multiprocessing should work. To help you I really need to see a stack trace.
Yes, you are right, multiprocessing should work. The problem is not with multiprocessing; the memory usage I mentioned is for a single process (a normal HTM execution). The scenario is that data is generated every millisecond. From what I have understood from the tutorials, only the permanence values of the connections are updated, in place in the same memory (kindly clarify if I am wrong). What I don't understand is how memory usage keeps increasing as more data comes in. Also, there is no stack trace, since HTM raises no exception; the Python process is killed by the kernel (CentOS 7). BTW, thanks for the quick response.
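One stdlib-only way to confirm (or rule out) steady growth from inside the process is to log the resident set size every N records and watch whether it plateaus; no model code is assumed here, the list allocation just stands in for records fed to the model:

```python
# Track peak resident set size (RSS) to see whether memory plateaus or
# keeps climbing as records are processed. Standard library only.
import resource
import sys

def rss_kb():
    """Peak RSS of this process in kB (macOS reports bytes, Linux kB)."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak // 1024 if sys.platform == "darwin" else peak

baseline = rss_kb()
records = [0] * 1_000_000   # stands in for data fed into the model
grown = rss_kb()
print(baseline, grown)      # log these every N records in the real loop
```

If the logged value keeps rising linearly with record count instead of levelling off, the growth is per-record state rather than the fixed-size permanence updates described above.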
Does this only happen when you are using multiprocessing?
Do you have any reason to believe that multiprocessing has something to do with this issue? Or is it the fact that models are taking up too much memory that is causing you problems? If the latter, I suggest you look at some of the ideas under #optimization.
Yup, it's the latter… I will look at #optimization. Thanks for reaching out.