NMODL LLVM Code Generation Project
This page summarises various resources to get started with the NEURON, CoreNEURON and NMODL ecosystem.
Before diving into the computational aspects of neurons, we could look at the following introductory videos. Go through as much as you like and skip as much as you want!
- The Nervous System, Part 1: Crash Course A&P
- The Nervous System, Part 2 - Action! Potential!
- The Nervous System, Part 3 - Synapses!
Here are some more detailed lectures from the Neuronal Dynamics - Computational Neuroscience course. You don't need to get into all the details; the first 2-3 weeks give a basic understanding:
- Part 1 - Neurons and Synapses : Overview (10 min)
- Part 2 - The Passive Membrane (21 min)
- Part 1 - Biophysics of neurons (5 min)
- Part 2 - Reversal potential and Nernst equation (11 min)
- Part 3 - Hodgkin-Huxley Model (23 min)
This should give you sufficient vocabulary to get started!
To get an idea of how the NEURON simulator is used, here are some video tutorials. You don't need a thorough understanding of everything; the goal is just to get a high-level picture of the NEURON simulator:
Here is another (old) NEURON tutorial by Andrew Gillies and David Sterratt. It is quite dated and uses the HOC scripting interface, but it introduces some of the concepts step by step. It's sufficient to skim through the first two parts. In case you want to try the example snippets (not necessary to do so), make sure to install NEURON via pip as:
pip3 install neuron  # on Linux and macOS
The tutorial below, from Loren Segal, is the simplest complete tutorial I have seen that puts together flex, bison, an AST, and LLVM code generation. It is based on an ancient LLVM version but gives a fairly good idea of what it takes to integrate LLVM. I *think* we will use a similar approach in the NMODL toolchain.
The classic tutorial for any LLVM-related work:
This is the main, directly relevant part of the project. We will update this section with better resources. For background, start with:
- Part D (corresponding presentation). Just read through it for now.
- Neuron & NMODL paper (useful only for obtaining more detailed information on NMODL constructs if needed)
- Mapping High Level Constructs to LLVM IR: sufficiently detailed page
- Compiling to LLVM IR: good summary page
- Explicit vectorisation pass at the LLVM IR level
- Intro to LLVM IR : https://m.youtube.com/watch?v=m8G_S5LwlTo
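As a concrete taste of what these resources cover, here is a hand-written sketch of textual LLVM IR for a trivial function (names chosen for illustration; not generated by any tool mentioned above):

```llvm
; double axpy(double a, double x, double y) { return a*x + y; }
define double @axpy(double %a, double %x, double %y) {
entry:
  %mul = fmul double %a, %x   ; floating-point multiply
  %add = fadd double %mul, %y ; floating-point add
  ret double %add
}
```

Every computation is in SSA form: each `%name` is assigned exactly once, which is what makes the IR easy for optimisation passes to analyse.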
Here are some GitHub projects that could be useful (for searching for implementation details):
- Taichi programming language : https://github.com/taichi-dev/taichi/blob/master/taichi/codegen/codegen_llvm.cpp
- Halide programming language : https://github.com/halide/Halide/blob/master/src/CodeGen_LLVM.cpp
- Clay : programming language designed for Generic Programming : https://github.com/jckarter/clay/blob/db0bd2702ab0b6e48965cd85f8859bbd5f60e48e/compiler/codegen_op.cpp
- Wrapper over LLVM for code generation : https://github.com/pdziepak/codegen
- COAT: COdegen Abstract Types : https://github.com/tetzank/coat/tree/master/include/coat/llvmjit
- symengine : https://github.com/symengine/symengine/blob/9fc2716cab4a6d2d89a3f9d765a04ef1594c6bcf/symengine/llvm_double.cpp
- https://llvm.org/docs/Frontend/PerformanceTips.html
- https://software.intel.com/content/www/us/en/develop/articles/optimizing-llvm-code-generation-for-data-analytics.html
- Loop Optimisation Framework : https://arxiv.org/pdf/1811.00632.pdf
- Halide DSL and vectorisation approach, see section 4.5: http://people.csail.mit.edu/jrk/halide-pldi13.pdf
- How Clang Compiles a Function : https://blog.regehr.org/archives/1605
- How LLVM Optimizes a Function : https://blog.regehr.org/archives/1603
- How the LLVM vectoriser works : https://llvm.org/devmtg/2012-04-12/Slides/Hal_Finkel.pdf
- https://stackoverflow.com/questions/49293203/simd-instruction-for-updating-value-at-address-present-in-array
- https://stackoverflow.com/questions/46012574/per-element-atomicity-of-vector-load-store-and-gather-scatter
- https://llvm.org/devmtg/2013-11/slides/Demikhovsky-Poster.pdf
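To make the vectorisation material above concrete, here is a hand-written fragment of LLVM IR using first-class vector types, processing four lanes at once (illustrative only; a real vectoriser would also emit gathers, masks, and a scalar remainder loop):

```llvm
; y = a*x + y over 4 lanes at once
define <4 x double> @vaxpy(<4 x double> %a, <4 x double> %x, <4 x double> %y) {
  %mul = fmul <4 x double> %a, %x
  %add = fadd <4 x double> %mul, %y
  ret <4 x double> %add
}
```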
NMODL kernels are of the form:
```
VOID nrn_state_hh() {
    INTEGER id
    for(id = 0; id < node_count; id = id+1) {
        INTEGER node_id, ena_id, ek_id
        DOUBLE v
        node_id = node_index[id]
        ena_id = ion_ena_index[id]
        ek_id = ion_ek_index[id]
        v = voltage[node_id]
        ena[id] = ion_ena[ena_id]
        ek[id] = ion_ek[ek_id]
        m[id] = m[id]+(1.0-exp(dt*((((-1.0)))/mtau[id])))*(-(((minf[id]))/mtau[id])/((((-1.0)))/mtau[id])-m[id])
        h[id] = h[id]+(1.0-exp(dt*((((-1.0)))/htau[id])))*(-(((hinf[id]))/htau[id])/((((-1.0)))/htau[id])-h[id])
        n[id] = n[id]+(1.0-exp(dt*((((-1.0)))/ntau[id])))*(-(((ninf[id]))/ntau[id])/((((-1.0)))/ntau[id])-n[id])
    }
}
```
Questions:
- How should this be transformed to LLVM IR, especially in vector form?
- Instead of generating serial code and then vectorising it, could we use vector types directly?