Show and Tell: Simulator for Conductance Based Models


I’d like to show and tell you all about what I’ve been working on for a while now. Here is a short description of the project and its goals:

NEUWON is a simulation framework for neuroscience and artificial intelligence specializing in conductance-based models. This software is a modern remake of the NEURON simulator. It is accurate, efficient, and easy to use.

It is currently in the alpha phase of development, so it is not at all ready to be used. It’s written in Python and freely available under the MIT license. Link:

Here are some pictures of action potentials generated by the software:

Merry Christmas


I’ve now implemented a neuron growth algorithm. The goal of this algorithm is to generate realistic neuronal morphologies from a concise description of their basic properties. I use the TREES algorithm combined with the morphological constraints used by the ROOTS algorithm. The algorithm is capable of making both dendrites and axons. Neuronal growth is constrained to an arbitrary area.

This animation shows an action potential propagating through an axonal arbor. The color represents the membrane electric potential: blue = -70 mV, red = +55 mV. The soma is the large cylinder in the lower left corner. I constrained the axon into a spiral shape. The width and height of this shape are approximately 200 micrometers, and the center of the shape recedes 600 micrometers into the distance. The model is populated with Hodgkin-Huxley channels and an AP is initiated at the soma.

Cuntz H, Forstner F, Borst A, Hausser M (2010)
One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application.
PLoS Comput Biol 6(8): e1000877.

Bingham CS, Mergenthal A, Bouteiller J-MC, Song D, Lazzi G and Berger TW (2020)
ROOTS: An Algorithm to Generate Biologically Realistic Cortical Axons and an Application to Electroceutical Modeling.
Front. Comput. Neurosci. 14:13.
doi: 10.3389/fncom.2020.00013


This is quite beautiful work. I haven’t had time to dig in deeply, but very happy to see what you’re sharing. :slight_smile:


Last month I rewrote the program to run on a graphics card (GPU) using CUDA.

It’s still written in 100% Python. I use the CuPy and Numba libraries to execute code on the GPU.

This demonstrates that the methods of simulation are embarrassingly parallel.
Furthermore, the libraries (NumPy, SciPy, and CUDA) implement all of the difficult mathematics required for the exact integration method (“Exact digital simulation of time-invariant linear systems with applications to neuronal modeling”).
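To illustrate the idea behind that exact integration method: for a time-invariant linear system, the state update over a fixed time step is exactly a matrix exponential, so the propagator can be computed once and each step becomes a single matrix multiply. Here is a minimal sketch; the system matrix is made up for illustration and this is not NEUWON’s actual code:

```python
import numpy as np
from scipy.linalg import expm

# For dx/dt = A @ x, the exact update over a fixed step dt is
#     x(t + dt) = expm(A * dt) @ x(t)
A = np.array([[-1.0,  0.0],
              [ 1.0, -0.5]])     # example system matrix (illustrative only)
dt = 0.1                         # fixed time step

propagator = expm(A * dt)        # computed once, reused every step

x = np.array([1.0, 0.0])         # initial state
for _ in range(100):
    x = propagator @ x           # exact update: no accumulation of integration error
```

Because the propagator is precomputed, the per-step work is just a matrix-vector product, which is exactly the kind of operation NumPy/CuPy vectorize well.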

Sorry no pictures for this update, but it does run a lot faster than before without compromising the accuracy or ease of use.


In the past month I wrote an interface for NEUWON to use NMODL files. NMODL is a standardized file specification for describing the mechanisms in neuronal simulations. It is custom tailored for neuroscience; it can describe ion channel kinetics and chemical reactions, among other things. NMODL files can contain algebraic and differential equations. Most state-of-the-art simulations of the brain use NMODL, usually together with the NEURON simulator, which also supports NMODL.

My implementation of NMODL is still at the prototype stage; it does not support many features and I’m sure there are still bugs in the code. Regardless, I was able to produce an action potential using the latest and greatest ion channel models:

For this demonstration I replaced the old Hodgkin-Huxley type channels with newer models of Nav1.1 type sodium channels and Kv1.1 type potassium channels. The first AP was spontaneously generated, and the second AP was caused by a current injection at t=25 ms.

The sources for the kinetic models of the ion channels are:


It’s been a while since I’ve provided an update about what I’m working on.
I’ve been making some major improvements to the internal architecture of this program.
Now it does more things, runs faster, and with many fewer lines of code.
However I still have not finished putting it back together again.

Here are some details about the overhaul.

A problem which I was having was that writing programs in a structure-of-arrays (SoA) style is tedious and error prone. Programmers normally use an array-of-structures (AoS) style (aka: Object-Oriented-Programming) because it’s a lot nicer to work with. For many applications (this included) SoA is technically superior to AoS. So I created a piece of software to deal with this problem which I named the “database”.
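To make the trade-off concrete, here is a hypothetical sketch of the two layouts (illustrative only, not NEUWON’s code). In the AoS version each field access is a Python-level operation on one object at a time; in the SoA version each field is one contiguous array, so an update is a single vectorized operation:

```python
import numpy as np

# Array-of-structures (AoS): each neuron is an object, fields scattered in memory.
class NeuronAoS:
    def __init__(self, voltage, capacitance):
        self.voltage = voltage
        self.capacitance = capacitance

neurons = [NeuronAoS(-70.0, 1.0) for _ in range(1000)]
for n in neurons:          # slow: one Python-level operation per neuron
    n.voltage += 1.0

# Structure-of-arrays (SoA): one contiguous array per field.
voltage     = np.full(1000, -70.0)
capacitance = np.full(1000, 1.0)
voltage += 1.0             # fast: one vectorized operation for all neurons
```

The SoA version is what the hardware wants, but spreading an object's fields across many arrays is exactly the tedium the database is meant to hide.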

The database stores data in an optimized format (SoA), and provides an object oriented user interface to the data. It gets the best of both worlds: performant storage and easy access.

Data is stored in large contiguous vectors of homogeneous type, intended for fast vectorized computation. The database can also move these vectors to & from a graphics card (using CUDA).

The user is presented with proxy objects which behave as though they were regularly defined python objects, even though their internal data is not stored in the usual way. Python allows for such customization.
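As a rough sketch of how such a proxy can work, here is a hypothetical illustration using an ordinary Python property (not NEUWON’s actual implementation): the proxy holds only its index, and attribute access is redirected into the shared array.

```python
import numpy as np

voltage_array = np.full(100, -70.0)   # the real storage: one SoA column

class NeuronProxy:
    __slots__ = ('_index',)           # the proxy stores only its row index
    def __init__(self, index):
        self._index = index
    @property
    def voltage(self):
        return voltage_array[self._index]
    @voltage.setter
    def voltage(self, value):
        voltage_array[self._index] = value

n = NeuronProxy(7)
n.voltage += 5.0                      # reads and writes the shared array
```

From the user's point of view `n.voltage` behaves like a normal attribute, even though no voltage is stored on `n` itself.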

What’s more, this has allowed me to consolidate all of the “data”-related stuff into a centralized place. New features and improvements can be added to the database and easily applied to all of the contained data. In no particular order, here are some of those features:

  • All pointers are represented as indexes into arrays (as opposed to raw memory addresses), and can be stored in 32 bits instead of 64.
  • Data can be sorted. Surprisingly, this is a challenge!
    • Sorting things necessarily involves moving them to new locations. Any pointers to the old location need to be updated to point to the things’ new location.
    • Some data arrays need to be sorted based on a pointer’s value, and if the target of the pointer is also being sorted then there is a dependency in the order that you sort the arrays. For example: you might want to sort all of the neurons in a simulation, and then sort all of the synapses according to the index of their postsynaptic neuron. In this example, you must sort the neurons before the synapses.
  • All data can be (optionally) tagged with meta-data. Currently I have fields for:
    • A documentation string
    • The physical units
    • The range of valid values
    • Error checking for NaN, NULL, and values which are outside of their valid range.
  • Tools for recording and measuring data.
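The pointer-updating problem described above can be sketched as follows. This is a hypothetical illustration (NEUWON’s database handles it internally): after permuting the neurons, every synapse's index-pointer must be remapped through an old-to-new index table before the synapses themselves can be sorted by pointer value.

```python
import numpy as np

neuron_voltage = np.array([-60.0, -80.0, -70.0])
sort_order     = np.argsort(neuron_voltage)   # old index stored at each new slot
neuron_voltage = neuron_voltage[sort_order]   # neurons now live at new locations

# Build a map from old indexes to new ones, and fix every pointer.
new_index = np.empty_like(sort_order)
new_index[sort_order] = np.arange(len(sort_order))

postsynaptic = np.array([0, 2, 2, 1])         # synapse -> neuron index-pointers
postsynaptic = new_index[postsynaptic]        # remap to the sorted layout

# Only now can the synapses be sorted by their (updated) pointer value.
synapse_order = np.argsort(postsynaptic)
postsynaptic  = postsynaptic[synapse_order]
```

This is the dependency in miniature: the neuron sort must finish (and its remapping table must exist) before the synapse sort can even begin.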

Example of using the database:

First, using Object-Oriented-Programming, here is what we are going to make:

class Neuron:
    def __init__(self):
        self.voltage = -70 # millivolts

my_neuron = Neuron()
print(my_neuron.voltage) # prints: "-70"

And now let’s rewrite it using the database.

from neuwon.database import Database
db = Database()
neuron_data  = db.add_class("Neuron")
voltage_data = neuron_data.add_attribute("voltage",
        initial_value = -70,
        units = "millivolts",)
Neuron = neuron_data.get_instance_type()

my_neuron = Neuron()
print(my_neuron.voltage) # prints: "-70.0"

Both implementations of the Neuron class behave identically. However, behind the scenes, my_neuron does not actually contain the voltage data. Instead my_neuron contains a pointer to the database, and the index of where my_neuron is located inside of the database’s arrays.
The database provides an API for accessing the raw data, although this should be reserved for advanced users/programmers:

my_neuron.get_unstable_index() # The index of this neuron.
neuron_data.get_data("voltage") # The voltages for every neuron.

One of my goals in making these tools is that they be useful beyond my current project, and that other people can use them for their own simulations too.


Hi again,

In the past few months I added a new feature to the database that I’m excited to unveil.

The database can now execute code.

  • It uses numba’s Just-In-Time compilation to run fast.
  • It runs on both the CPU and the GPU.
  • It automatically converts your code from the OOP-API to the faster SoA-API.

Let’s start by rebuilding the example from the previous post.
This time we’re going to use a class to hold everything together.

from neuwon.database import Database, Compute

class NeuronSuperclass:
    __slots__ = () # Required BC the data lives in the database, not here.
    @classmethod
    def initialize(cls, database):
        # Register this 'Neuron' class with the database.
        neuron_data = database.add_class('Neuron', cls)
        # Setup this class's attributes.
        neuron_data.add_attribute('voltage', initial_value = -70)
        # Return the user-friendly OOP-API class.
        return neuron_data.get_instance_type()
    # Now let's write some very simple code to act on our neuron.
    @Compute
    def increase_voltage(self, x):
        self.voltage += x

The @Compute decorator causes the database to JIT-compile the next function or method.
The database will process all @Computes it finds on the NeuronSuperclass.

Notice that increase_voltage is written using the OOP-API (self.voltage). The database will automatically convert it to use the SoA-API (voltage_array[index]) before compiling it.
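Conceptually, the conversion amounts to something like the following sketch (hypothetical, not the actual generated code): `self` is replaced by an index into the SoA arrays, which leaves a plain function over plain arrays that a JIT compiler can handle.

```python
import numpy as np

voltage_array = np.full(1000, -70.0)

# OOP-API, as the user writes it:
#     def increase_voltage(self, x):
#         self.voltage += x
#
# SoA-API, roughly as compiled: `self.voltage` becomes `voltage_array[index]`.
def increase_voltage(index, x):
    voltage_array[index] += x

increase_voltage(0, 3.0)                # one instance
increase_voltage(np.arange(1000), 3.0)  # vectorized over every instance
```

Once the method is in this form, there is nothing object-oriented left in it, which is what makes the numba (and CUDA) compilation possible.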

Now let’s run it:

db = Database()
Neuron = NeuronSuperclass.initialize(db)

my_neuron = Neuron()
print(my_neuron.voltage)        # prints '-70'

my_neuron.increase_voltage(3)   # Call the method.
print(my_neuron.voltage)        # prints '-67'

# Run it on every neuron, and on the graphics card.
all_instances = range(len(Neuron.get_database_class()))
with db.using_memory_space('cuda'):
    Neuron.increase_voltage(all_instances, 3)


A few caveats:

  • numba, CUDA, and my database all place restrictions on what code can be compiled.
  • This feature is mainly intended for simple number crunching.
  • Although regular methods and @Compute methods act very similar, there are some discrepancies in their semantics, specifically regarding NULL pointers.