Show and Tell: Simulator for Conductance Based Models


I’d like to show and tell you all about what I’ve been working on for a while now. Here is a short description of the project and its goals:

NEUWON is a simulation framework for neuroscience and artificial intelligence specializing in conductance based models. This software is a modern remake of the NEURON simulator. It is accurate, efficient, and easy to use.

It is currently in the alpha phase of development, so it is not at all ready to be used. It’s written in Python and is freely available under the MIT license. Link:

Here are some pictures of action potentials generated by the software:

Merry Christmas


I’ve now implemented a neuron growth algorithm. Its goal is to generate realistic neuronal morphologies from a concise description of their basic properties. I use the TREES algorithm combined with the morphological constraints from the ROOTS algorithm. The algorithm can grow both dendrites and axons, and growth can be confined to an arbitrary region.
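To give a flavor of how such growth algorithms work, here is a minimal sketch of the greedy attachment rule at the heart of the TREES method (my own illustration, not NEUWON’s actual code): each carrier point attaches to the tree node that minimizes its straight-line distance plus a balancing factor times that node’s path length back to the root.

```python
import math

def grow_tree(root, carrier_points, balancing_factor=0.7):
    """Greedy TREES-style growth: attach each carrier point to the node
    that minimizes (distance to node + bf * node's path length to root)."""
    nodes = [root]          # (x, y, z) coordinates of tree nodes
    parents = [None]        # parents[i] = index of node i's parent
    path_len = [0.0]        # cable distance from each node back to the root
    remaining = list(carrier_points)
    while remaining:
        best = None
        for p in remaining:
            for i, n in enumerate(nodes):
                d = math.dist(p, n)
                cost = d + balancing_factor * path_len[i]
                if best is None or cost < best[0]:
                    best = (cost, p, i, d)
        _, point, parent, d = best
        nodes.append(point)
        parents.append(parent)
        path_len.append(path_len[parent] + d)
        remaining.remove(point)
    return nodes, parents
```

The balancing factor trades total wiring length against conduction path length; ROOTS-style constraints would additionally reject any candidate segment that leaves the allowed region.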

This animation shows an action potential propagating through an axonal arbor. The color represents the membrane electric potential: blue = -70mV, red = +55mV. The soma is the large cylinder in the lower left corner. I constrained the axon into a spiral shape: its width and height are approximately 200 micrometers, and the center of the shape recedes 600 micrometers into the distance. The model is populated with Hodgkin-Huxley channels, and an AP is initiated at the soma.

Cuntz H, Forstner F, Borst A, Hausser M (2010)
One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application.
PLoS Comput Biol 6(8): e1000877.

Bingham CS, Mergenthal A, Bouteiller J-MC, Song D, Lazzi G and Berger TW (2020)
ROOTS: An Algorithm to Generate Biologically Realistic Cortical Axons and an Application to Electroceutical Modeling.
Front. Comput. Neurosci. 14:13.
doi: 10.3389/fncom.2020.00013


This is quite beautiful work. I haven’t had time to dig in deeply, but very happy to see what you’re sharing. :slight_smile:


Last month I rewrote the program to run on a graphics card (GPU) using CUDA.

It’s still written in 100% Python. I use the CuPy and Numba libraries to execute code on the GPU.

This demonstrates that the simulation methods are embarrassingly parallel.
Furthermore, the libraries (NumPy, SciPy, and CUDA) implement all of the difficult mathematics required for the exact integration method (see: Exact digital simulation of time-invariant linear systems with applications to neuronal modeling).
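To give a flavor of what “exact integration” means here: for a Hodgkin-Huxley gating variable with the voltage held fixed over a timestep, dm/dt = (m∞ − m)/τ is a time-invariant linear ODE, so it can be advanced exactly for any step size. A minimal sketch (my illustration):

```python
import math

def exact_gate_update(m, m_inf, tau, dt):
    """Advance dm/dt = (m_inf - m)/tau exactly over one timestep.
    Exact for any dt because the ODE is linear and time-invariant."""
    return m_inf + (m - m_inf) * math.exp(-dt / tau)

# The decay factor exp(-dt/tau) depends only on dt and tau, so it can be
# precomputed once and reused every step -- the core trick of the exact
# integration method cited above.
```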

Sorry no pictures for this update, but it does run a lot faster than before without compromising the accuracy or ease of use.


In the past month I wrote an interface for NEUWON to use NMODL files. NMODL is a standardized file specification for describing the mechanisms in neuronal simulations. It is custom tailored for neuroscience; it can describe ion channel kinetics and chemical reactions, among other things. NMODL files can contain algebraic and differential equations. Most state-of-the-art brain simulations use NMODL, usually in combination with the NEURON simulator, which also supports it.

My implementation of NMODL is still at the prototype stage: it does not support many features, and I’m sure there are still bugs in the code. Regardless, I was able to produce an action potential using the latest and greatest ion channel models:

For this demonstration I replaced the old Hodgkin-Huxley type channels with newer models of Nav1.1 type sodium channels and Kv1.1 type potassium channels. The first AP was spontaneously generated, and the second AP was caused by a current injection at t=25 ms.

The sources for the kinetic models of the ion channels are:


It’s been a while since I’ve provided an update about what I’m working on.
I’ve been making some major improvements to the internal architecture of this program.
Now it does more things, runs faster, and uses many fewer lines of code.
However I still have not finished putting it back together again.

Here are some details about the overhaul.

A problem I was having is that writing programs in a structure-of-arrays (SoA) style is tedious and error-prone. Programmers normally use an array-of-structures (AoS) style (a.k.a. object-oriented programming) because it’s much nicer to work with. For many applications (this one included) SoA is technically superior to AoS. So I created a piece of software to deal with this problem, which I named the “database”.
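To make the AoS/SoA trade-off concrete, here is a toy comparison (my illustration, not NEUWON code):

```python
import numpy as np

# Array-of-Structures: pleasant to write, but each attribute access
# chases a pointer to a separate Python object scattered in memory.
class NeuronAoS:
    def __init__(self, voltage):
        self.voltage = voltage

aos = [NeuronAoS(-70.0) for _ in range(1000)]
for n in aos:                       # scalar loop over objects
    n.voltage += 1.0

# Structure-of-Arrays: all voltages sit in one contiguous vector,
# so the same update is a single vectorized operation.
soa_voltage = np.full(1000, -70.0)
soa_voltage += 1.0                  # one fused pass over contiguous memory
```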

The database stores data in an optimized format (SoA), and provides an object oriented user interface to the data. It gets the best of both worlds: performant storage and easy access.

Data is stored in large contiguous vectors of homogeneous type, intended for fast vectorized computation. The database can also move these vectors to & from a graphics card (using CUDA).

The user is presented with proxy objects which behave as though they were regularly defined python objects, even though their internal data is not stored in the usual way. Python allows for such customization.

What’s more, this has allowed me to consolidate all of the “data”-related machinery into one centralized place. New features and improvements can be added to the database and easily applied to all of the contained data. In no particular order, here are some of those features:

  • All pointers are represented as indexes into arrays (as opposed to raw memory addresses), and can be stored in 32 bits instead of 64.
  • Data can be sorted. Surprisingly, this is a challenge!
    • Sorting things necessarily involves moving them to new locations. Any pointers to the old location need to be updated to point to the things’ new location.
    • Some data arrays need to be sorted based on a pointer’s value, and if the target of the pointer is also being sorted then there is a dependency in the order that you sort the arrays. For example: you might want to sort all of the neurons in a simulation, and then sort all of the synapses according to the index of their postsynaptic neuron. In this example, you must sort the neurons before the synapses.
  • All data can be (optionally) tagged with meta-data. Currently I have fields for:
    • A documentation string
    • The physical units
    • The range of valid values
    • Error checking for NaN, NULL, and values which are outside of their valid range.
  • Tools for recording and measuring data.
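The pointer-updating problem described above can be sketched with NumPy (my illustration): sorting yields a permutation, every index array pointing at the sorted data must be remapped with the inverse permutation, and only then can the dependent arrays be sorted themselves.

```python
import numpy as np

# Neurons have a sort key; synapses point at neurons by index.
neuron_key  = np.array([3, 1, 2])        # e.g. spatial order
syn_postsyn = np.array([0, 0, 2, 1])     # indices into the neuron arrays

order   = np.argsort(neuron_key)         # new slot -> old index
inverse = np.empty_like(order)
inverse[order] = np.arange(len(order))   # old index -> new index

neuron_key  = neuron_key[order]          # move the neuron data
syn_postsyn = inverse[syn_postsyn]       # update every pointer into it

# Only after the neurons are in their final positions can the synapses
# themselves be sorted (here: by postsynaptic neuron index).
syn_order   = np.argsort(syn_postsyn)
syn_postsyn = syn_postsyn[syn_order]
```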

Example of using the database:

First, using Object-Oriented-Programming, here is what we are going to make:

class Neuron:
    def __init__(self):
        self.voltage = -70 # millivolts

my_neuron = Neuron()
print(my_neuron.voltage) # prints: "-70"

And now let’s rewrite it using the database.

from neuwon.database import Database
db = Database()
neuron_data  = db.add_class("Neuron")
voltage_data = neuron_data.add_attribute("voltage",
        initial_value = -70,
        units = "millivolts",)
Neuron = neuron_data.get_instance_type()

my_neuron = Neuron()
print(my_neuron.voltage) # prints: "-70.0"

Both implementations of the Neuron class behave identically. However, behind the scenes, my_neuron does not actually contain the voltage data. Instead, my_neuron holds a reference to the database and the index of where its data is located inside the database’s arrays.
The database provides an API for accessing the raw data, although this should be reserved for advanced users/programmers:

my_neuron.get_unstable_index() # The index of this neuron.
neuron_data.get_data("voltage") # The voltages for every neuron.

One of my goals in making these tools is that they be useful beyond my current project, and useful for other people’s simulations too.


Hi again,

In the past few months I added a new feature to the database that I’m excited to unveil.

The database can now execute code.

  • It uses numba’s Just-In-Time compilation to run fast.
  • It runs on both the CPU and the GPU.
  • It automatically converts your code from the OOP-API to the faster SoA-API.

Let’s start by rebuilding the example from the previous post.
This time we’re going to use a class to hold everything together.

from neuwon.database import Database, Compute

class NeuronSuperclass:
    __slots__ = () # Required because the data lives in the database, not here.
    @classmethod
    def initialize(cls, database):
        # Register this 'Neuron' class with the database.
        neuron_data = database.add_class('Neuron', cls)
        # Setup this class's attributes.
        neuron_data.add_attribute('voltage', initial_value = -70)
        # Return the user-friendly OOP-API class.
        return neuron_data.get_instance_type()
    # Now let's write some very simple code to act on our neuron.
    @Compute
    def increase_voltage(self, x):
        self.voltage += x

The @Compute decorator causes the database to JIT-compile the decorated function or method.
The database processes every @Compute it finds on the NeuronSuperclass.

Notice that increase_voltage is written using the OOP-API (self.voltage). The database will automatically convert it to use the SoA-API (voltage_array[index]) before compiling it.
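Conceptually, the conversion turns the method into a kernel over the backing arrays, roughly like this (my sketch; the real transformation targets numba’s CPU and CUDA backends):

```python
import numpy as np

# OOP form, as the user writes it:
#     def increase_voltage(self, x):
#         self.voltage += x
#
# SoA form, as the database compiles it -- 'self' becomes an index
# into the attribute arrays:
def increase_voltage_kernel(voltage, indices, x):
    for i in indices:
        voltage[i] += x

voltage = np.full(5, -70.0)
increase_voltage_kernel(voltage, range(len(voltage)), 3.0)
# In the real system this loop is JIT-compiled (numba.njit, or a CUDA
# kernel), so it runs at native speed over the contiguous array.
```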

Now let’s run it:

db = Database()
Neuron = NeuronSuperclass.initialize(db)

my_neuron = Neuron()
print(my_neuron.voltage)        # prints '-70'

my_neuron.increase_voltage(3)   # Call the method.
print(my_neuron.voltage)        # prints '-67'

# Run it on every neuron, and on the graphics card.
all_instances = range(len(Neuron.get_database_class()))
with db.using_memory_space('cuda'):
    Neuron.increase_voltage(all_instances, 3)
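As a toy sketch of how a memory-space context manager might be structured (hypothetical; the real using_memory_space would also migrate the backing arrays with CuPy):

```python
from contextlib import contextmanager
import numpy as np

class ToyDatabase:
    def __init__(self):
        self.arrays = {'voltage': np.full(3, -70.0)}
        self.memory_space = 'host'

    @contextmanager
    def using_memory_space(self, space):
        previous = self.memory_space
        self.memory_space = space
        if space == 'cuda':
            # Hypothetical device transfer; with CuPy this would be
            # cupy.asarray(a) on entry and cupy.asnumpy(a) on exit.
            pass
        try:
            yield
        finally:
            self.memory_space = previous

db = ToyDatabase()
with db.using_memory_space('cuda'):
    assert db.memory_space == 'cuda'   # kernels launched here target the GPU
assert db.memory_space == 'host'       # restored on exit
```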


A few caveats:

  • numba, CUDA, and my database all place restrictions on what code can be compiled.
  • This feature is mainly intended for simple number crunching.
  • Although regular methods and @Compute methods behave very similarly, there are some discrepancies in their semantics, specifically regarding NULL pointers.

I implemented a graphical user interface (GUI) for NEUWON.

I’m dissatisfied with how my GUI turned out. It feels clunky and unintuitive. IMO, part of the problem is that I used tkinter, because it’s what I’m familiar with, instead of investing in learning more modern tools. I want my GUI to be fun to use. Building models of neurons isn’t a game per se, but it should not feel like a burdensome chore. Part of the reason I embarked on this project is that NEURON’s GUI is such an unfixable mess.

Anyway, here is a demonstration of the one part of the GUI that did turn out really well.
The model is now rendered using OpenGL, and you can navigate around it using WASD-style controls.


To illustrate why I’m making NEUWON, and why the current state-of-the-art simulator, NEURON, is woefully inadequate, I’m going to pick apart the synapse model from a recently published paper: Pyramidal cell model predicting plasticity in neocortex. The paper itself is valid and good science, but the source code behind the experiment is a mess.

The following NMODL file contains the entire synapse model:

GluSynapse.mod (500 lines)
COMMENT
/**
 * @file GluSynapse.mod
 * @brief Probabilistic synapse featuring long-term plasticity
 * @author king, chindemi, rossert
 * @date 2019-09-20
 * @version 1.0.0
 * @remark Copyright (c) BBP/EPFL 2005-2021. This work is licenced under Creative Common CC BY-NC-SA-4.0 (
 */
Glutamatergic synapse model featuring:
1) AMPA receptor with a dual-exponential conductance profile.
2) NMDA receptor  with a dual-exponential conductance profile and magnesium
   block as described in Jahr and Stevens 1990.
3) Tsodyks-Markram presynaptic short-term plasticity as Barros et al. 2019.
   Implementation based on the work of Eilif Muller, Michael Reimann and
   Srikanth Ramaswamy (Blue Brain Project, August 2011), who introduced the
   2-state Markov model of vesicle release. The new model is an extension of
   Fuhrmann et al. 2002, motivated by the following constraints:
        a) No consumption on failure
        b) No release until recovery
        c) Same ensemble averaged trace as canonical Tsodyks-Markram using same
           parameters determined from experiment.
   For a pre-synaptic spike or external spontaneous release trigger event, the
   synapse will only release if it is in the recovered state, and with
   probability u (which follows facilitation dynamics). If it releases, it will
   transition to the unrecovered state. Recovery is as a Poisson process with
   rate 1/Dep.
   John Rahmon and Giuseppe Chindemi introduced multi-vesicular release as an
   extension of the 2-state Markov model of vesicle release described above
   (Blue Brain Project, February 2017).
4) NMDAR-mediated calcium current. Fractional calcium current Pf_NMDA from
   Schneggenburger et al. 1993. Fractional NMDAR conductance treated as a
   calcium-only permeable channel with Erev = 40 mV independent of extracellular
   calcium concentration (see Jahr and Stevens 1993). Implemented by Christian
   Rossert and Giuseppe Chindemi (Blue Brain Project, 2016).
5) Spine volume.
6) VDCC.
7) Postsynaptic calcium dynamics.
8) Long-term synaptic plasticity. Calcium-based STDP model based on Graupner and
   Brunel 2012.
Model implementation, optimization and simulation curated by James King (Blue
Brain Project, 2019).
ENDCOMMENT

TITLE Glutamatergic synapse

NEURON {
    POINT_PROCESS GluSynapse
    : AMPA Receptor
    RANGE tau_d_AMPA, gmax0_AMPA, gmax_d_AMPA, gmax_p_AMPA, g_AMPA
    : NMDA Receptor
    GLOBAL mgo_NMDA, scale_NMDA, slope_NMDA
    GLOBAL tau_r_NMDA, tau_d_NMDA, E_NMDA
    RANGE gmax_NMDA, g_NMDA
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    RANGE Use0_TM, Dep_TM, Fac_TM, Nrrp_TM
    RANGE Use_d_TM, Use_p_TM
    : NMDAR-mediated calcium current
    RANGE ica_NMDA
    : Spine
    RANGE volume_CR
    : VDCC (R-type)
    GLOBAL ljp_VDCC, vhm_VDCC, km_VDCC, mtau_VDCC, vhh_VDCC, kh_VDCC, htau_VDCC, gca_bar_VDCC
    RANGE ica_VDCC
    : Postsynaptic Ca2+ dynamics
    GLOBAL gamma_ca_CR, tau_ca_CR, min_ca_CR, cao_CR
    : Long-term synaptic plasticity
    GLOBAL rho_star_GB, tau_ind_GB, tau_exp_GB, tau_effca_GB
    GLOBAL gamma_d_GB, gamma_p_GB
    RANGE theta_d_GB, theta_p_GB, rho0_GB, dep_GB, pot_GB
    : Misc
    RANGE vsyn, synapseID, selected_for_report, verbose
}

UNITS {
    (nA)    = (nanoamp)
    (mV)    = (millivolt)
    (uS)    = (microsiemens)
    (nS)    = (nanosiemens)
    (pS)    = (picosiemens)
    (umho)  = (micromho)
    (um)    = (micrometers)
    (mM)    = (milli/liter)
    (uM)    = (micro/liter)
    FARADAY = (faraday) (coulomb)
    PI      = (pi)      (1)
    R       = (k-mole)  (joule/degC)
}

PARAMETER {
    celsius                     (degC)
    : AMPA Receptor
    tau_r_AMPA      = 0.2       (ms)        : Tau rise, dual-exponential conductance profile
    tau_d_AMPA      = 1.7       (ms)        : Tau decay, IMPORTANT: tau_r < tau_d
    E_AMPA          = 0         (mV)        : Reversal potential
    gmax0_AMPA      = 1.0       (nS)        : Initial peak conductance
    gmax_d_AMPA     = 1.0       (nS)        : Peak conductance in the depressed state
    gmax_p_AMPA     = 2.0       (nS)        : Peak conductance in the potentitated state
    : NMDA Receptor
    mgo_NMDA        = 1         (mM)        : Extracellular magnesium concentration
    scale_NMDA      = 2.552     (mM)        : Scale of the mg block (Vargas-Caballero and Robinson 2003)
    slope_NMDA      = 0.072     (/mV)       : Slope of the ma block (Vargas-Caballero and Robinson 2003)
    tau_r_NMDA      = 0.29      (ms)        : Tau rise, dual-exponential conductance profile
    tau_d_NMDA      = 70        (ms)        : Tau decay, IMPORTANT: tau_r < tau_d
    E_NMDA          = -3        (mV)        : Reversal potential (Vargas-Caballero and Robinson 2003)
    gmax_NMDA       = 0.55      (nS)        : Peak conductance
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    Use0_TM         = 0.5       (1)         : Initial utilization of synaptic efficacy
    Dep_TM          = 100       (ms)        : Relaxation time constant from depression
    Fac_TM          = 10        (ms)        : Relaxation time constant from facilitation
    Nrrp_TM         = 1         (1)         : Number of release sites for given contact
    Use_d_TM        = 0.2       (1)         : Depressed Use
    Use_p_TM        = 0.8       (1)         : Potentiated Use
    : Spine
    volume_CR       = 0.087     (um3)       : From spine data by Ruth Benavides-Piccione (unpublished)
    : VDCC (R-type)
    gca_bar_VDCC    = 0.0744    (nS/um2)    : Density spines: 20 um-2 (Sabatini 2000), unitary conductance VGCC 3.72 pS (Bartol 2015)
    ljp_VDCC        = 0         (mV)
    vhm_VDCC        = -5.9      (mV)        : v 1/2 for act, Magee and Johnston 1995 (corrected for m*m)
    km_VDCC         = 9.5       (mV)        : act slope, Magee and Johnston 1995 (corrected for m*m)
    vhh_VDCC        = -39       (mV)        : v 1/2 for inact, Magee and Johnston 1995
    kh_VDCC         = -9.2      (mV)        : inact, Magee and Johnston 1995
    mtau_VDCC       = 1         (ms)        : max time constant (guess)
    htau_VDCC       = 27        (ms)        : max time constant 100*0.27
    : Postsynaptic Ca2+ dynamics
    gamma_ca_CR     = 0.04      (1)         : Percent of free calcium (not buffered), Sabatini et al 2002: kappa_e = 24+-11 (also 14 (2-31) or 22 (18-33))
    tau_ca_CR       = 12        (ms)        : Rate of removal of calcium, Sabatini et al 2002: 14ms (12-20ms)
    min_ca_CR       = 70e-6     (mM)        : Sabatini et al 2002: 70+-29 nM, per AP: 1.1 (0.6-8.2) uM = 1100 e-6 mM = 1100 nM
    cao_CR          = 2.0       (mM)        : Extracellular calcium concentration in slices
    : Long-term synaptic plasticity
    rho_star_GB     = 0.5       (1)
    tau_ind_GB      = 70        (s)
    tau_exp_GB      = 100       (s)
    tau_effca_GB    = 200       (ms)
    gamma_d_GB      = 100       (1)
    gamma_p_GB      = 450       (1)
    theta_d_GB      = 0.006     (us/liter)
    theta_p_GB      = 0.001     (us/liter)
    rho0_GB         = 0         (1)
    : Misc
    synapseID       = 0
    verbose         = 0
    selected_for_report = 0
}

VERBATIM
/**
 * This Verbatim block is needed to generate random numbers from a uniform
 * distribution U(0, 1).
 */
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "nrnran123.h"
double nrn_random_pick(void* r);
void* nrn_random_arg(int argpos);
ENDVERBATIM

ASSIGNED {
    : AMPA Receptor
    g_AMPA          (uS)
    : NMDA Receptor
    g_NMDA          (uS)
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    rng_TM                  : Random Number Generator
    usingR123               : TEMPORARY until mcellran4 completely deprecated
    : NMDAR-mediated calcium current
    ica_NMDA        (nA)
    : VDCC (R-type)
    ica_VDCC        (nA)
    : Long-term synaptic plasticity
    dep_GB          (1)
    pot_GB          (1)
    : Misc
    v               (mV)
    vsyn            (mV)
    i               (nA)
}

STATE {
    : AMPA Receptor
    A_AMPA      (1)
    B_AMPA      (1)
    gmax_AMPA   (nS)
    : NMDA Receptor
    A_NMDA      (1)
    B_NMDA      (1)
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    Use_TM      (1)
    : VDCC (R-type)
    m_VDCC      (1)
    h_VDCC      (1)
    : Postsynaptic Ca2+ dynamics
    cai_CR      (mM)        <1e-6>
    : Long-term synaptic plasticity
    rho_GB      (1)
    effcai_GB   (us/liter)  <1e-3>
}

INITIAL {
    : AMPA Receptor
    A_AMPA      = 0
    B_AMPA      = 0
    gmax_AMPA   = gmax0_AMPA
    : NMDA Receptor
    A_NMDA      = 0
    B_NMDA      = 0
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    Use_TM      = Use0_TM
    : Postsynaptic Ca2+ dynamics
    cai_CR      = min_ca_CR
    : Long-term synaptic plasticity
    rho_GB      = rho0_GB
    effcai_GB   = 0
    dep_GB      = 0
    pot_GB      = 0
    : Initialize watchers
    net_send(0, 1)
}

BREAKPOINT {
    LOCAL Eca_syn, mggate, i_AMPA, i_NMDA, Pf_NMDA, gca_bar_abs_VDCC, gca_VDCC
    SOLVE state METHOD derivimplicit
    : AMPA Receptor
    g_AMPA = (1e-3)*gmax_AMPA*(B_AMPA - A_AMPA)
    i_AMPA = g_AMPA*(v-E_AMPA)
    : NMDA Receptor
    mggate = 1 / (1 + exp(-slope_NMDA*v) * (mgo_NMDA/scale_NMDA))
    g_NMDA = (1e-3)*gmax_NMDA*mggate*(B_NMDA - A_NMDA)
    i_NMDA = g_NMDA*(v - E_NMDA)
    : NMDAR-mediated calcium current
    Pf_NMDA  = (4*cao_CR) / (4*cao_CR + (1/1.38) * 120 (mM)) * 0.6
    ica_NMDA = Pf_NMDA*g_NMDA*(v-40.0)
    : VDCC (R-type), assuming sphere for spine head
    gca_bar_abs_VDCC = gca_bar_VDCC * 4(um2)*PI*(3(1/um3)/4*volume_CR*1/PI)^(2/3)
    gca_VDCC = (1e-3) * gca_bar_abs_VDCC * m_VDCC * m_VDCC * h_VDCC
    Eca_syn = nernst(cai_CR, cao_CR, 2)
    ica_VDCC = gca_VDCC*(v-Eca_syn)
    : Update synaptic voltage (for recording convenience)
    vsyn = v
    : Update current
    i = i_AMPA + i_NMDA + ica_VDCC
}

DERIVATIVE state {
    LOCAL minf_VDCC, hinf_VDCC
    : AMPA Receptor
    A_AMPA'      = - A_AMPA/tau_r_AMPA
    B_AMPA'      = - B_AMPA/tau_d_AMPA
    gmax_AMPA'   = (gmax_d_AMPA + rho_GB*(gmax_p_AMPA - gmax_d_AMPA) - gmax_AMPA) / ((1e3)*tau_exp_GB)
    : NMDA Receptor
    A_NMDA'      = - A_NMDA/tau_r_NMDA
    B_NMDA'      = - B_NMDA/tau_d_NMDA
    : Stochastic Tsodyks-Markram Multi-Vesicular Release
    Use_TM'      = (Use_d_TM + rho_GB*(Use_p_TM - Use_d_TM) - Use_TM) / ((1e3)*tau_exp_GB)
    : VDCC (R-type)
    minf_VDCC    = 1 / (1 + exp(((vhm_VDCC - ljp_VDCC) - v) / km_VDCC))
    hinf_VDCC    = 1 / (1 + exp(((vhh_VDCC - ljp_VDCC) - v) / kh_VDCC))
    m_VDCC'      = (minf_VDCC-m_VDCC)/mtau_VDCC
    h_VDCC'      = (hinf_VDCC-h_VDCC)/htau_VDCC
    : Postsynaptic Ca2+ dynamics
    cai_CR'      = - (1e-9)*(ica_NMDA + ica_VDCC)*gamma_ca_CR/((1e-15)*volume_CR*2*FARADAY)
                   - (cai_CR - min_ca_CR)/tau_ca_CR
    : Long-term synaptic plasticity
    effcai_GB'   = - effcai_GB/tau_effca_GB + (cai_CR - min_ca_CR)
    rho_GB'      = ( - rho_GB*(1 - rho_GB)*(rho_star_GB - rho_GB)
                     + pot_GB*gamma_p_GB*(1 - rho_GB)
                     - dep_GB*gamma_d_GB*rho_GB ) / ((1e3)*tau_ind_GB)
}

NET_RECEIVE (weight, u, tsyn (ms), recovered, unrecovered) {
    LOCAL p_rec, released, tp, factor
    INITIAL {
        weight = 1
        u = 0
        tsyn = 0 (ms)
        recovered = Nrrp_TM
        unrecovered = 0
    }
    if(verbose > 0){ UNITSOFF printf("Time = %g ms, incoming spike at synapse %g\n", t, synapseID) UNITSON }
    if(flag == 0) {
        if(weight <= 0){
            : Do not perform any calculations if the synapse (netcon) is deactivated.
            : This avoids drawing from the random stream
            : WARNING In this model *weight* is only used to activate/deactivate the
            :         synapse. The conductance is stored in gmax_AMPA and gmax_NMDA.
            if(verbose > 0){ printf("Inactive synapse, weight = %g\n", weight) }
        } else {
            : Flag 0: Regular spike
            if(verbose > 0){ printf("Flag 0, Regular spike\n") }
            : Update facilitation variable as Eq. 2 in Fuhrmann et al. 2002
            u = Use_TM + u*(1 - Use_TM)*exp(-(t - tsyn)/Fac_TM)
            if ( verbose > 0 ) { printf("\tVesicle release probability = %g\n", u) }
            : Recovery
            p_rec = 1 - exp(-(t - tsyn)/Dep_TM)
            if ( verbose > 0 ) { printf("\tVesicle recovery probability = %g\n", p_rec) }
            if ( verbose > 0 ) { printf("\tVesicle available before recovery = %g\n", recovered) }
            recovered = recovered + brand(unrecovered, p_rec)
            if ( verbose > 0 ) { printf("\tVesicles available after recovery = %g\n", recovered) }
            : Release
            released = brand(recovered, u)
            if ( verbose > 0 ) { printf("\tReleased %g vesicles out of %g\n", released, recovered) }
            : Update AMPA variables
            tp = (tau_r_AMPA*tau_d_AMPA)/(tau_d_AMPA-tau_r_AMPA)*log(tau_d_AMPA/tau_r_AMPA)  : Time to peak
            factor = 1 / (-exp(-tp/tau_r_AMPA)+exp(-tp/tau_d_AMPA))  : Normalization factor
            A_AMPA = A_AMPA + released/Nrrp_TM*factor
            B_AMPA = B_AMPA + released/Nrrp_TM*factor
            : Update NMDA variables
            tp = (tau_r_NMDA*tau_d_NMDA)/(tau_d_NMDA-tau_r_NMDA)*log(tau_d_NMDA/tau_r_NMDA)  : Time to peak
            factor = 1 / (-exp(-tp/tau_r_NMDA)+exp(-tp/tau_d_NMDA))  : Normalization factor
            A_NMDA = A_NMDA + released/Nrrp_TM*factor
            B_NMDA = B_NMDA + released/Nrrp_TM*factor
            : Update vesicle pool
            recovered = recovered - released
            unrecovered = Nrrp_TM - recovered
            if ( verbose > 0 ) { printf("\tFinal vesicle count, Recovered = %g, Unrecovered = %g, Nrrp = %g\n", recovered, unrecovered, Nrrp_TM) }
            : Update tsyn
            : tsyn knows about all spikes, not only those that released
            : i.e. each spike can increase the u, regardless of recovered state
            :      and each spike trigger an evaluation of recovery
            tsyn = t
        }
    } else if(flag == 1) {
        : Flag 1, Initialize watchers
        if(verbose > 0){ printf("Flag 1, Initialize watchers\n") }
        WATCH (effcai_GB > theta_d_GB) 2
        WATCH (effcai_GB < theta_d_GB) 3
        WATCH (effcai_GB > theta_p_GB) 4
        WATCH (effcai_GB < theta_p_GB) 5
    } else if(flag == 2) {
        : Flag 2, Activate depression mechanisms
        if(verbose > 0){ printf("Flag 2, Activate depression mechanisms\n") }
        dep_GB = 1
    } else if(flag == 3) {
        : Flag 3, Deactivate depression mechanisms
        if(verbose > 0){ printf("Flag 3, Deactivate depression mechanisms\n") }
        dep_GB = 0
    } else if(flag == 4) {
        : Flag 4, Activate potentiation mechanisms
        if(verbose > 0){ printf("Flag 4, Activate potentiation mechanisms\n") }
        pot_GB = 1
    } else if(flag == 5) {
        : Flag 5, Deactivate potentiation mechanisms
        if(verbose > 0){ printf("Flag 5, Deactivate potentiation mechanisms\n") }
        pot_GB = 0
    }
}
FUNCTION nernst(ci(mM), co(mM), z) (mV) {
    nernst = (1000) * R * (celsius + 273.15) / (z*FARADAY) * log(co/ci)
    if(verbose > 1) { UNITSOFF printf("nernst:%g R:%g temperature (c):%g \n", nernst, R, celsius) UNITSON }
}

PROCEDURE setRNG() {
VERBATIM
    // For compatibility, allow for either MCellRan4 or Random123
    // Distinguish by the arg types
    // Object => MCellRan4, seeds (double) => Random123
    usingR123 = 0;
    if( ifarg(1) && hoc_is_double_arg(1) ) {
        nrnran123_State** pv = (nrnran123_State**)(&_p_rng_TM);
        uint32_t a2 = 0;
        uint32_t a3 = 0;
        if (*pv) {
            *pv = (nrnran123_State*)0;
        }
        if (ifarg(2)) {
            a2 = (uint32_t)*getarg(2);
        }
        if (ifarg(3)) {
            a3 = (uint32_t)*getarg(3);
        }
        *pv = nrnran123_newstream3((uint32_t)*getarg(1), a2, a3);
        usingR123 = 1;
    } else if( ifarg(1) ) {   // not a double, so assume hoc object type
        void** pv = (void**)(&_p_rng_TM);
        *pv = nrn_random_arg(1);
    } else {  // no arg, so clear pointer
        void** pv = (void**)(&_p_rng_TM);
        *pv = (void*)0;
    }
ENDVERBATIM
}
FUNCTION urand() {
VERBATIM
    double value;
    if ( usingR123 ) {
        value = nrnran123_dblpick((nrnran123_State*)_p_rng_TM);
    } else if (_p_rng_TM) {
        #ifndef CORENEURON_BUILD
        value = nrn_random_pick(_p_rng_TM);
        #endif
    } else {
        value = 0.0;
    }
    _lurand = value;
ENDVERBATIM
}

FUNCTION brand(n, p) {
    LOCAL result, count, success
    success = 0
    FROM count = 0 TO (n - 1) {
        result = urand()
        if(result <= p) {
            success = success + 1
        }
    }
    brand = success
}

FUNCTION bbsavestate() {
    bbsavestate = 0
VERBATIM
        /* first arg is direction (0 save, 1 restore), second is array*/
        /* if first arg is -1, fill xdir with the size of the array */
        double *xdir, *xval, *hoc_pgetarg();
        long nrn_get_random_sequence(void* r);
        void nrn_set_random_sequence(void* r, int val);
        xdir = hoc_pgetarg(1);
        xval = hoc_pgetarg(2);
        if (_p_rng_TM) {
            // tell how many items need saving
            if (*xdir == -1) {  // count items
                if( usingR123 ) {
                    *xdir = 2.0;
                } else {
                    *xdir = 1.0;
                }
                return 0.0;
            } else if(*xdir ==0 ) {  // save
                if( usingR123 ) {
                    uint32_t seq;
                    unsigned char which;
                    nrnran123_getseq( (nrnran123_State*)_p_rng_TM, &seq, &which );
                    xval[0] = (double) seq;
                    xval[1] = (double) which;
                } else {
                    xval[0] = (double)nrn_get_random_sequence(_p_rng_TM);
                }
            } else {  // restore
                if( usingR123 ) {
                    nrnran123_setseq( (nrnran123_State*)_p_rng_TM, (uint32_t)xval[0], (char)xval[1] );
                } else {
                    nrn_set_random_sequence(_p_rng_TM, (long)(xval[0]));
                }
            }
        }
ENDVERBATIM
}

VERBATIM
static void bbcore_write(double* dArray, int* iArray, int* doffset, int* ioffset, _threadargsproto_) {
    // make sure offset array non-null
    if (iArray) {
        // get handle to random123 instance
        nrnran123_State** pv = (nrnran123_State**)(&_p_rng_TM);
        // get location for storing ids
        uint32_t* ia = ((uint32_t*)iArray) + *ioffset;
        // retrieve/store identifier seeds
        nrnran123_getids3(*pv, ia, ia+1, ia+2);
        // retrieve/store stream sequence
        unsigned char which;
        nrnran123_getseq(*pv, ia+3, &which);
        ia[4] = (int)which;
    }
    // increment integer offset (2 identifier), no double data
    *ioffset += 5;
    *doffset += 0;
}

static void bbcore_read(double* dArray, int* iArray, int* doffset, int* ioffset, _threadargsproto_) {
    // make sure it's not previously set
    uint32_t* ia = ((uint32_t*)iArray) + *ioffset;
    // make sure non-zero identifier seeds
    if (ia[0] != 0 || ia[1] != 0 || ia[2] != 0) {
        nrnran123_State** pv = (nrnran123_State**)(&_p_rng_TM);
        // get new stream
        *pv = nrnran123_newstream3(ia[0], ia[1], ia[2]);
        // restore sequence
        nrnran123_setseq(*pv, ia[3], (char)ia[4]);
    }
    // increment integer offset (2 identifiers), no double data
    *ioffset += 5;
    *doffset += 0;
}
ENDVERBATIM


This file combines all of the mechanisms that go into the synapse:

  • the presynapse (which includes a simple model of short term plasticity),
  • the AMPA receptor,
  • the NMDA receptor,
  • the Voltage-Dependent-Calcium-Channel (VDCC)
  • the postsynaptic calcium dynamics,
  • long-term synaptic plasticity.
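For reference, the dual-exponential conductance profile used by the AMPA and NMDA receptors above, with the same peak normalization that the NET_RECEIVE block computes, can be sketched as:

```python
import math

def dual_exponential(t, gmax, tau_r, tau_d):
    """Conductance at time t after an event, normalized so the peak is gmax.
    Requires tau_r < tau_d."""
    # Time-to-peak and normalization factor, exactly as computed in the
    # GluSynapse NET_RECEIVE block quoted above.
    tp = (tau_r * tau_d) / (tau_d - tau_r) * math.log(tau_d / tau_r)
    factor = 1.0 / (-math.exp(-tp / tau_r) + math.exp(-tp / tau_d))
    return gmax * factor * (math.exp(-t / tau_d) - math.exp(-t / tau_r))
```

With the AMPA time constants from the file (tau_r = 0.2 ms, tau_d = 1.7 ms), the curve rises from zero and peaks at exactly gmax before decaying.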

The authors also provided most of these models as individual files, but for actually running the simulation they merged them all into one big file. Merging everything into one file is generally bad practice, for many reasons. I can’t say exactly why they did it, but I suspect it’s because:

  • In NEURON, multiple mechanisms cannot write to certain shared variables without overwriting each other. In NEUWON I avoid this issue by using the “+=” operator.
  • In NEURON, mechanisms can only interact with other mechanisms through POINTER variables, which are tricky to get right and must be set up using NEURON’s custom programming language, “hoc”. Mechanisms also need POINTER variables to access certain chemical concentrations, including the glutamate concentration in a synaptic cleft.

Merging everything into one big mechanism sidesteps these problems.
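The accumulation rule I use instead is simple to state in code (my illustration): each mechanism adds its contribution to the shared variable, so independent mechanisms never clobber each other, and there is no pressure to merge them into one file.

```python
import numpy as np

membrane_current = np.zeros(4)      # shared variable, one slot per segment

def ampa_mechanism(current):
    current += 0.5                  # accumulate in place, don't assign

def nmda_mechanism(current):
    current += 0.25                 # a second, independent mechanism

membrane_current[:] = 0.0           # reset once per timestep
ampa_mechanism(membrane_current)
nmda_mechanism(membrane_current)
# Both contributions survive; with plain '=' the second mechanism
# would have overwritten the first.
```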

They use “VERBATIM” blocks to insert raw C code into the mechanism. NMODL was invented precisely so that modelers would not need to write C code, but they just couldn’t quit their low-down ways. The VERBATIM blocks can and do access private variables in NEURON (variables which are both undocumented and prefixed with an underscore). They use VERBATIM for two things, neither of which should need C code:

  1. Interfacing with NEURON’s pseudo-random number generators.
  2. Interfacing with the CoreNEURON extension.

In total there are just over 100 lines of C code in this file. To get all of this working, they had the help of one of NEURON’s developers. I won’t name names, but their sole entry in the “Author contributions” section was “designed and curated the NEURON implementation of the synapse model.” And for all of that C code, the model still doesn’t run on a graphics card.

It does not need to be this complicated, but it is.

Part of the complexity is caused by using hacks to patch over the shortcomings of the underlying system. They merge the NMODL files to overcome NEURON’s issues. They use C to do what NMODL can’t.

Another part of the complexity stems from NEURON’s long history and the developers’ desire to continue supporting all of the old models that exist in the wild. In fact, a recent achievement of the team was to add continuous-integration testing to NEURON using most of the models in ModelDB, to ensure that they don’t accidentally break any of those old models. I appreciate them maintaining the old science for posterity, but that’s no way to make an omelet.

PS: In NEUWON I’ve solved most of these problems, and my NMODL files are not compatible with theirs.

  • Shared variables are added to instead of being overwritten.
  • Mechanisms can access all chemical data without using POINTER variables. I allow any chemical to appear in a “USEION” statement, which then exposes the intracellular and extracellular concentrations to the mechanism. In contrast, NEURON only accepts a handful of common ion species in USEION statements.
  • Mechanisms can modify the strengths of other mechanisms, which allows for learning mechanisms.
  • The features that CoreNEURON adds are built into NEUWON by its database layer.
  • NEUWON does not support VERBATIM statements. If/when there is a need for RNG, I will add a new built-in function to NMODL for generating random numbers.


I re-implemented my GUI for NEUWON, this time using HTML, CSS, and JavaScript. The GUI runs in any modern web browser as a local file; no server is needed. I only re-implemented the model-editor portion of the GUI, not the model runner, and I have not yet integrated the GUI with the underlying program. I also didn’t re-implement several features that, in retrospect, I don’t think were useful.

In a previous post I lamented my choice of tools for the task: Python and tkinter. I used tkinter because I have experience with it, from building four or five fully featured GUIs. Despite that experience, it took me longer than it should have to build that GUI. The new implementation took half as long to make, and that includes the time spent learning the new technologies. In fact, getting back up to speed with tkinter took significantly more time than learning the new stack (HTML+CSS+JS) for the first time.

The results are much improved. The GUI is more polished and modern, and modifying the source code feels like a breeze.

Here are some pictures of the new GUI:

And this screenshot shows the neuron editor: