Visualizing Network Topologies

I’m happy to introduce a new framework in NuPIC to support visualizing HTM network topologies. The following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network and was adapted from a Jupyter Notebook I plan to submit back to the NuPIC project.

Before you begin, you will need to install the otherwise optional dependencies. From the root of the nupic repository:

pip install --user .[viz]
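
If you want a quick sanity check that the optional dependencies installed cleanly, importing the modules used later in this post is a reasonable smoke test. (The graphviz Python bindings are used below for rendering; if they aren't pulled in by the viz extra, install them separately.)

# Smoke test: these are the modules the examples below rely on.
# If either import fails, revisit the installation step above.
from nupic.frameworks.viz import NetworkVisualizer, DotRenderer
from graphviz import Source

print("viz framework and graphviz bindings are available")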

Set up a simple network so we have something to work with:

from nupic.engine import Network, Dimensions

# Create Network instance
network = Network()

# Add three TestNode regions to network
network.addRegion("region1", "TestNode", "")
network.addRegion("region2", "TestNode", "")
network.addRegion("region3", "TestNode", "")

# Set dimensions on first region
region1 = network.getRegions().getByName("region1")
region1.setDimensions(Dimensions([1, 1]))

# Link regions
network.link("region1", "region2", "UniformLink", "")
network.link("region2", "region1", "UniformLink", "")
network.link("region1", "region3", "UniformLink", "")
network.link("region2", "region3", "UniformLink", "")

# Initialize network
network.initialize()
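
Before rendering anything, it can help to glance at what the network contains. This is a minimal sketch assuming the dict-like regions property on nupic.engine.Network; if your NuPIC version differs, the getRegions() call used above works as well:

# List the regions we just added (assumes Network.regions is dict-like;
# adjust for your NuPIC version if necessary).
for name in network.regions:
    print(name)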

Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance:

from nupic.frameworks.viz import NetworkVisualizer

# Initialize Network Visualizer
viz = NetworkVisualizer(network)

# Render to dot (stdout)
viz.render()

This yields the following dot-formatted document on stdout:

digraph structs {
rankdir = "LR";
"region2":bottomUpOut -> "region3":bottomUpIn;
"region2":bottomUpOut -> "region1":bottomUpIn;
"region1":bottomUpOut -> "region3":bottomUpIn;
"region1":bottomUpOut -> "region2":bottomUpIn;
region3 [shape=record,label="{{<bottomUpIn>bottomUpIn}|region3|{}}"];
region2 [shape=record,label="{{<bottomUpIn>bottomUpIn}|region2|{<bottomUpOut>bottomUpOut}}"];
region1 [shape=record,label="{{<bottomUpIn>bottomUpIn}|region1|{<bottomUpOut>bottomUpOut}}"];
}

That’s interesting, but not necessarily useful if you don’t understand dot. Let’s capture that output and do something else:

from nupic.frameworks.viz import DotRenderer
from io import StringIO

# Capture the dot document in a string buffer instead of printing to stdout
outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))

outp now contains the rendered output. Render it to an image with graphviz:

# Render dot to image
from graphviz import Source
from IPython.display import Image

Image(Source(outp.getvalue()).pipe("png"))

In the example above, each three-columned rectangle is a discrete region, with the user-defined name in the middle column. The left-hand and right-hand columns list the region’s inputs and outputs, respectively; their names, e.g. “bottomUpIn” and “bottomUpOut”, are specific to the region type. The arrows indicate links from the output of one region to the input of another.
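
If you’re working outside a notebook, you can write the image to disk instead of displaying it inline. This uses the same graphviz Source object as above; the output filename is just an example:

# Write the diagram to network.png instead of displaying it inline.
# (Source.render is part of the graphviz Python package imported above.)
Source(outp.getvalue()).render("network", format="png", cleanup=True)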

I know what you’re thinking. That’s a cool trick, but nobody cares about your contrived example. I want to see something real!

Continuing below, I’ll instantiate a CLA model and visualize it. In this case, I’ll use one of the “hotgym” examples.

from nupic.frameworks.opf.modelfactory import ModelFactory

# Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py
model = ModelFactory.create({
    'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0,
                        'fields': [('consumption', 'sum')], 'weeks': 0,
                        'months': 0, 'minutes': 0, 'days': 0,
                        'milliseconds': 0, 'years': 0},
    'model': 'CLA',
    'version': 1,
    'predictAheadTime': None,
    'modelParams': {
        'sensorParams': {
            'verbosity': 0,
            'encoders': {
                'timestamp_timeOfDay': {'type': 'DateEncoder',
                                        'timeOfDay': (21, 1),
                                        'fieldname': u'timestamp',
                                        'name': u'timestamp_timeOfDay'},
                u'consumption': {'resolution': 0.88, 'seed': 1,
                                 'fieldname': u'consumption',
                                 'name': u'consumption',
                                 'type': 'RandomDistributedScalarEncoder'},
                'timestamp_weekend': {'type': 'DateEncoder',
                                      'fieldname': u'timestamp',
                                      'name': u'timestamp_weekend',
                                      'weekend': 21}},
            'sensorAutoReset': None},
        'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp',
                     'synPermConnected': 0.1, 'seed': 1956,
                     'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1,
                     'inputWidth': 0, 'synPermInactiveDec': 0.005,
                     'synPermActiveInc': 0.04, 'potentialPct': 0.85,
                     'boostStrength': 3.0},
        'spEnable': True,
        'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0,
                     'steps': '1,5', 'regionName': 'SDRClassifierRegion'},
        'inferenceType': 'TemporalMultiStep',
        'tpEnable': True,
        'tpParams': {'columnCount': 2048, 'activationThreshold': 16,
                     'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1,
                     'minThreshold': 12, 'verbosity': 0,
                     'maxSynapsesPerSegment': 32, 'outputType': 'normal',
                     'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0,
                     'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20,
                     'maxSegmentsPerCell': 128, 'temporalImp': 'cpp',
                     'inputWidth': 2048},
        'trainSPNetOnlyIfRequested': False}})
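
As an aside, rather than pasting the parameters inline, you could import them from the example they were copied from. A sketch, assuming the examples/opf/clients/hotgym/simple directory from the nupic repository is available locally and that its model_params.py exposes a MODEL_PARAMS dict (as the hotgym example does):

# Alternative: load the same parameters from the hotgym example itself.
import sys
sys.path.insert(0, "examples/opf/clients/hotgym/simple")

from model_params import MODEL_PARAMS

model = ModelFactory.create(MODEL_PARAMS)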

Same deal as before: create a NetworkVisualizer instance, render to a buffer, render the buffer to an image, and finally display it inline.

# New network, new NetworkVisualizer instance
viz = NetworkVisualizer(model._netInfo.net)

# Render dot output to buffer
outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))

# Render dot to image, display inline
Image(Source(outp.getvalue()).pipe("png"))

In these examples, I’m using graphviz to render an image from the dot document in Python, but you may want to do something else. dot is a generic and flexible graph description language, and there are many tools for working with dot files.
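
For example, you could dump the buffer to a .dot file and hand it to whatever tooling you prefer, such as the Graphviz dot command line or an online viewer:

# Save the dot document for use outside Python, e.g.:
#   dot -Tsvg network.dot -o network.svg
with open("network.dot", "w") as f:
    f.write(outp.getvalue())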

Jupyter notebook at https://github.com/numenta/nupic/pull/3459

Cool @Austin_Marshall. This will only become more valuable as networks become more complex. I bet it will be helpful to include the dimensionality of the inputs/outputs in the label and possibly the name of the underlying class of the regions. Cheers!