Inference with Artificial Neural Networks on the Analog BrainScaleS-2 Hardware

06/23/2020
by Johannes Weis, et al.

The neuromorphic BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits as well as two versatile digital microprocessors. Primarily designed to emulate spiking neural networks, the system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks. Analog multiplication is then carried out in the synapse circuits, while the results are accumulated on the neurons' membrane capacitors. Designed as an analog, in-memory computing device, it promises high energy efficiency. Fixed-pattern noise and trial-to-trial variations, however, require the implemented networks to cope with a certain level of perturbations. Further limitations are imposed by the digital resolution of the input values (5-bit), matrix weights (6-bit), and resulting neuron activations (8-bit). In this paper, we discuss BrainScaleS-2 as an analog inference accelerator and present calibration as well as optimization strategies, highlighting the advantages of training with hardware in the loop. Among other benchmarks, we classify the MNIST handwritten digits dataset using a two-dimensional convolution and two dense layers. We reach 98.0 % test accuracy, on par with the same network evaluated in software.
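The quantization constraints stated in the abstract can be illustrated with a small numerical sketch. The snippet below is not the BrainScaleS-2 software stack; it is a hypothetical NumPy model of one vector-matrix multiply-accumulate step with the stated digital resolutions: 5-bit unsigned inputs, 6-bit weights (assumed here as signed values in the range -63..63, one plausible encoding of a 6-bit magnitude with sign), and saturating 8-bit activations. The `gain` parameter is likewise an assumption, standing in for the analog scaling between accumulated charge and the digitized result.

```python
import numpy as np

def quantized_vmm(inputs, weights, gain=1.0):
    """Hypothetical model of one analog VMM step with BrainScaleS-2-like
    digital resolutions: 5-bit inputs, signed 6-bit weights, 8-bit output."""
    x = np.clip(np.round(inputs), 0, 31).astype(np.int32)      # 5-bit unsigned inputs
    w = np.clip(np.round(weights), -63, 63).astype(np.int32)   # signed 6-bit weights (assumed range)
    acc = gain * (x @ w)                                       # accumulation, analog gain (assumed)
    return np.clip(np.round(acc), -128, 127).astype(np.int32)  # saturating 8-bit activations

# Example: three inputs driving two output neurons.
x = np.array([10, 20, 31])
W = np.array([[ 1, -2],
              [ 3,  0],
              [-1,  4]])
print(quantized_vmm(x, W, gain=0.5))
```

Training with hardware in the loop, as highlighted in the abstract, amounts to replacing this idealized forward pass with measurements from the chip while keeping the gradient computation in software, so the learned weights absorb fixed-pattern deviations of the analog circuits.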

