Fully-Asynchronous Fully-Implicit Variable-Order Variable-Timestep Simulation of Neural Networks

07/01/2019
by   Bruno Magalhães, et al.

State-of-the-art simulations of detailed neural models follow the Bulk Synchronous Parallel (BSP) execution model. Execution is divided into equidistant communication intervals, equal to the shortest synaptic delay in the network. Neuron stepping is performed independently, with collective communication guiding synchronization and the exchange of synaptic events. The interpolation step size is fixed and chosen based on prior knowledge of the fastest possible dynamics in the system. However, simulations driven by stiff dynamics or a wide range of time scales - such as multiscale simulations of neural networks - struggle with fixed-step interpolation methods: they waste computation on intervals of quasi-constant activity, interpolate periods of high solution volatility inaccurately, and cannot handle unknown or widely differing time constants. A common alternative is adaptive stepping; however, adaptive methods have been deemed inefficient in parallel executions due to computational load imbalance at the synchronization barriers that characterize the BSP execution model. We introduce a distributed, fully-asynchronous execution model that removes global synchronization, allowing for longer variable-timestep interpolations. Asynchronicity is provided by active point-to-point communication that notifies synaptically connected neurons of a neuron's time advancement. Time stepping is driven by scheduled neuron advancements based on the synaptic delays between neurons, yielding an "exhaustive yet not speculative" adaptive-step execution. Execution benchmarks on 64 Cray XE6 compute nodes demonstrate a reduced number of interpolation steps, higher numerical accuracy, and lower time to solution compared with state-of-the-art methods. Efficiency is shown to be activity-dependent, and scaling of the algorithm is demonstrated on a simulation of a laboratory experiment.
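The scheduling idea described in the abstract - always advance the neuron that is least far along in simulated time, bounded by the earliest moment an unseen spike from a presynaptic neuron could still arrive - can be sketched as follows. This is a minimal illustration, not the authors' implementation: all names are hypothetical, the fixed fallback step stands in for a real variable-timestep solver, and point-to-point communication is replaced by shared state on one process.

```python
import heapq

def max_safe_time(nid, t, pre, delay, t_end):
    """Earliest time at which a not-yet-seen spike from a presynaptic
    neuron could still reach neuron `nid` (its safe advance horizon)."""
    bounds = [t[p] + delay[(p, nid)] for p in pre[nid]]
    return min(bounds, default=t_end)

def run(t, pre, delay, t_end, step=0.5):
    """Advance all neurons to t_end.

    t:     dict neuron_id -> current simulated time
    pre:   dict neuron_id -> list of presynaptic neuron ids
    delay: dict (pre_id, post_id) -> synaptic delay (> 0)
    """
    # Priority queue keyed by each neuron's current time: the globally
    # least-advanced neuron can never receive an earlier event, so
    # advancing it is exhaustive yet never speculative.
    pq = [(t[n], n) for n in t]
    heapq.heapify(pq)
    while pq:
        t_n, n = heapq.heappop(pq)
        if t_n != t[n] or t_n >= t_end:
            continue  # stale queue entry, or neuron already finished
        horizon = min(max_safe_time(n, t, pre, delay, t_end), t_end)
        # Stand-in for a variable-timestep solver advance up to `horizon`.
        t[n] = min(t_n + step, horizon)
        if t[n] < t_end:
            heapq.heappush(pq, (t[n], n))
    return t
```

For example, two mutually connected neurons with a synaptic delay of 1.0 advance in an interleaved fashion, each neuron's horizon growing as its peer catches up, until both reach the end time without any global barrier.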

