Last month I rewrote the program to run on a graphics card (GPU) using CUDA.
It's still written in 100% Python; I use the CuPy and Numba libraries to execute code on the GPU.
- CuPy aims to be a drop-in replacement for NumPy, except that it executes on a GPU. I wrote the core of the simulator in CuPy.
- Numba uses just-in-time compilation to turn Python code into fast machine code. Numba kernels can run on GPUs and can interoperate with CuPy arrays. I use Numba to implement the Hodgkin-Huxley ion channels.
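To make the division of labor concrete, here is a minimal sketch of the pattern (not the simulator's actual code; the function and variable names are made up for illustration). Code written against the shared NumPy/CuPy array API can be run on the CPU by passing in `numpy`, or on the GPU, unchanged, by passing in `cupy`:

```python
import numpy as np

def update_m_gate(xp, V, m, dt):
    """One forward-Euler step of the Hodgkin-Huxley sodium activation
    gate "m", written against the array module `xp` (numpy or cupy).

    V is in mV relative to rest; classic HH rate constants.
    """
    alpha = 0.1 * (25.0 - V) / (xp.exp((25.0 - V) / 10.0) - 1.0)
    beta = 4.0 * xp.exp(-V / 18.0)
    # dm/dt = alpha*(1 - m) - beta*m, applied element-wise to every
    # compartment at once -- this is where the parallelism comes from.
    return m + dt * (alpha * (1.0 - m) - beta * m)

# CPU run with NumPy; swapping `np` for `cupy` runs the same code on GPU.
V = np.full(4, 10.0)   # membrane voltages for 4 compartments (mV)
m = np.full(4, 0.05)   # initial gate state
m_next = update_m_gate(np, V, m, dt=0.01)
```

Every compartment's gate is updated by the same arithmetic with no dependence on its neighbors within the step, which is exactly the shape of work a GPU is built for.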
This demonstrates that these simulation methods are embarrassingly parallel: within a time step, every compartment can be updated independently of the others.
Furthermore, the libraries (NumPy, SciPy, and CUDA) implement all of the difficult mathematics required for the exact integration method (Rotter & Diesmann, "Exact digital simulation of time-invariant linear systems with applications to neuronal modeling").
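For intuition, here is the scalar case of that method (a sketch, not the simulator's code). For a time-invariant linear system the update over a step is a closed-form propagator, so the integration is exact for any step size, while forward Euler accumulates error:

```python
import math

# Passive membrane leak: dV/dt = -V / tau.
tau = 10.0   # membrane time constant (ms)
dt = 1.0     # time step (ms)

# Exact integration: the propagator exp(-dt/tau) is computed once and
# reused every step; the update is exact regardless of dt.
propagator = math.exp(-dt / tau)

V_exact = 1.0
V_euler = 1.0
for _ in range(100):
    V_exact *= propagator               # exact update
    V_euler += dt * (-V_euler / tau)    # forward Euler approximation

analytic = math.exp(-100 * dt / tau)    # closed-form V at t = 100 ms
```

After 100 steps `V_exact` matches the analytic solution to machine precision, while `V_euler` has drifted. In the vector case the scalar `exp(-dt/tau)` becomes a matrix exponential of the system matrix, which is the "difficult mathematics" the libraries provide.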
Sorry, no pictures for this update, but it does run a lot faster than before without compromising accuracy or ease of use.