Network insensitivity to parameter noise via adversarial regularization

06/09/2021
by Julian Büchel, et al.

Neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal ASICs, promise enormous advantages in compute density and energy efficiency for NN-based ML tasks. However, these technologies are prone to computational non-idealities, due to process variation and intrinsic device physics. This degrades the task performance of networks deployed to the processor, by introducing parameter noise into the deployed model. While it is possible to calibrate each device, or train networks individually for each processor, these approaches are expensive and impractical for commercial deployment. Alternative methods are therefore needed to train networks that are inherently robust against parameter variation, as a consequence of network architecture and parameters. We present a new adversarial network optimisation algorithm that attacks network parameters during training, and promotes robust performance during inference in the face of parameter variation. Our approach introduces a regularization term penalising the susceptibility of a network to weight perturbation. We compare against previous approaches for producing parameter insensitivity such as dropout, weight smoothing and introducing parameter noise during training. We show that our approach produces models that are more robust to targeted parameter variation, and equally robust to random parameter variation. Our approach finds minima in flatter locations in the weight-loss landscape compared with other approaches, highlighting that the networks found by our technique are less sensitive to parameter perturbation. Our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities, with minimal loss of performance.
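
The abstract describes the method only in outline. The sketch below illustrates the general idea of attacking network parameters during training and penalising output susceptibility; it is not the authors' implementation, and all specifics (PyTorch, the relative perturbation bound eps_rel, the number of ascent steps n_steps, the step size, and the KL-based penalty weighted by beta_rob) are illustrative assumptions.

```python
# Minimal sketch of adversarial parameter-noise training (not the authors'
# code). Hyperparameters eps_rel, n_steps, step_size and beta_rob are
# illustrative placeholders; PyTorch is assumed purely for convenience.

import copy

import torch
import torch.nn.functional as F


def attack_weights(model, x, y, eps_rel=0.1, n_steps=5, step_size=0.02):
    """Projected gradient ascent on the weights: push a copy of the model
    towards higher task loss inside a relative L-infinity ball around the
    nominal parameters (mimicking worst-case per-device parameter drift)."""
    adv = copy.deepcopy(model)
    nominal = [p.detach().clone() for p in adv.parameters()]

    for _ in range(n_steps):
        adv.zero_grad()
        F.cross_entropy(adv(x), y).backward()
        with torch.no_grad():
            for p, p0 in zip(adv.parameters(), nominal):
                if p.grad is None:
                    continue
                bound = eps_rel * p0.abs()  # relative perturbation budget
                p.add_(step_size * p.grad.sign() * (p0.abs() + 1e-8))
                p.copy_(torch.min(torch.max(p, p0 - bound), p0 + bound))
    return adv


def robust_step(model, optimizer, x, y, beta_rob=0.25):
    """One training step: task loss plus a regularisation term penalising how
    far the output distribution moves under an adversarial weight attack."""
    adv = attack_weights(model, x, y)

    optimizer.zero_grad()
    logits = model(x)
    with torch.no_grad():
        adv_logits = adv(x)

    task_loss = F.cross_entropy(logits, y)
    # Susceptibility penalty: divergence between nominal and attacked outputs.
    susceptibility = F.kl_div(F.log_softmax(logits, dim=-1),
                              F.softmax(adv_logits, dim=-1),
                              reduction="batchmean")
    loss = task_loss + beta_rob * susceptibility
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the inner attack stands in for targeted parameter variation, and beta_rob trades nominal task accuracy against flatness of the found minimum; the paper compares this family of approaches against dropout, weight smoothing and training-time parameter noise.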

Related research

03/14/2023
Training and Deploying Spiking NN Applications to the Mixed-Signal Neuromorphic Chip Dynap-SE2 with Rockpool
Mixed-signal neuromorphic processors provide extremely low-power operati...

03/05/2021
Neuromorphic Computing with Deeply Scaled Ferroelectric FinFET in Presence of Process Variation, Device Aging and Flicker Noise
This paper reports a comprehensive study on the applicability of ultra-s...

11/16/2020
LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training...

11/10/2021
AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator
Always-on TinyML perception tasks in IoT applications require very high ...

09/08/2020
Low-Rank Training of Deep Neural Networks for Emerging Memory Technology
The recent success of neural networks for solving difficult decision tal...

04/02/2020
Device-aware inference operations in SONOS nonvolatile memory arrays
Non-volatile memory arrays can deploy pre-trained neural network models ...
