Memory footprint of Network object

Has anyone done performance testing on the Network object?

Here are my observations:
A basic Network object takes 21.3 MB of disk space when serialized, and grows to 28 MB after 44 records.

I frequently get a GC overhead limit exceeded error when I store Network objects in a HashMap.
Here are my test details:
Total metrics: 80
Encoder: MultiEncoder (timestamp and scalar value)


80 Network objects took almost 4.8 GB of memory after 100 values per metric.
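For reference, serialized sizes like the ones above can be measured with plain Java serialization. This is a generic sketch using only `java.io`; the `int[]` array is just a stand-in for a `Serializable` object such as a Network, and `serializedSize` is a hypothetical helper, not part of htm.java:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializedSize {

    // Returns the number of bytes produced by default Java serialization.
    static long serializedSize(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        // Any Serializable works here; a real measurement would pass the Network.
        long size = serializedSize(new int[1024]);
        System.out.println("serialized bytes: " + size);
    }
}
```

Measuring the same object after every N records fed in is a simple way to see whether growth is linear or plateauing.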

We are planning to use this in production to capture and analyze VM system metrics (CPU, memory, network, I/O, etc.) across ~10,000 nodes, with around 10 metrics per VM.

With the current memory usage, we can’t take this to production.
Please let me know your opinion on the same.

Hi @subutai , @rhyolight @cogmission

Can we use this in production where we have thousands of nodes and 5-10 metrics per node? A Network object starts at 21.3 MB and grew to 36.3 MB after 2,000 records. Is it possible to reduce the Network to 1-2 MB without compression?

Thanks & Regards

I don’t know of a way to compress the memory.

Also, the rise from 21.3 to 36.3 probably doesn’t represent a linear increase. HTMs are “dynamic,” meaning their structure is what represents the learned knowledge (i.e., they learn by creating dendrites and synapses). They should “plateau” for the most part after running a while (especially since there is also culling of dendrites and synapses as the data changes).


Hi David

Can we reduce the memory footprint of the Network object? What is the best way to serialize a Network object?

Thanks & Regards

Hi @wip_user,

There are plenty of examples in the test directory under the network package in the file…

Start from this point down in the file, where different types of Networks and scenarios are serialized.
There is also an example of “checkpointing”…
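As a generic illustration of checkpointing to disk (this is plain Java serialization, not htm.java’s own persistence API; `CheckpointWriter` and `writeCheckpoint` are hypothetical names), you can also see how much a GZIP wrapper shrinks a serialized snapshot, which is relevant to the compression question above:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.util.zip.GZIPOutputStream;

public class CheckpointWriter {

    // Serializes obj to a file, optionally wrapping the stream in GZIP,
    // and returns the resulting file size in bytes.
    static long writeCheckpoint(Serializable obj, File file, boolean gzip) throws IOException {
        OutputStream raw = new FileOutputStream(file);
        OutputStream stream = gzip ? new GZIPOutputStream(raw) : raw;
        try (ObjectOutputStream out = new ObjectOutputStream(stream)) {
            out.writeObject(obj);
        }
        return file.length();
    }

    public static void main(String[] args) throws IOException {
        File plainFile = File.createTempFile("net", ".bin");
        File gzFile = File.createTempFile("net", ".bin.gz");
        byte[] payload = new byte[200_000]; // all zeros, so it compresses very well
        long plain = writeCheckpoint(payload, plainFile, false);
        long packed = writeCheckpoint(payload, gzFile, true);
        System.out.println("plain: " + plain + " bytes, gzipped: " + packed + " bytes");
    }
}
```

How well a real Network compresses depends on its contents; sparse, repetitive structures typically compress much better than this all-zeros payload suggests for arbitrary data.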

Thanks, David. What factors influence the size of the Network model?

Thanks & Regards