Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware

06/05/2023
by Lakshmi Nair, et al.

Existing methods for recovering model accuracy on analog-digital hardware in the presence of quantization and analog noise include noise-injection training. In practice, however, noise-injection training can be slow and computationally expensive, even when starting from a pretrained model. We introduce Sensitivity-Aware Finetuning (SAFT), an approach that identifies the noise-sensitive layers of a model and uses this information to freeze specific layers during noise-injection training. Our results show that SAFT achieves accuracy comparable to full noise-injection training while running 2x to 8x faster.
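The abstract suggests a simple recipe: score each layer's sensitivity to noise, then restrict noise-injection finetuning to the layers that matter. Below is a minimal PyTorch sketch of that idea, assuming multiplicative Gaussian weight noise as the analog-error model, an output-deviation norm as the sensitivity score, and a keep_fraction threshold for deciding which layers stay trainable; these function names and modeling choices are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def layer_sensitivity(model: nn.Module, batch: torch.Tensor,
                      noise_std: float = 0.05) -> dict:
    """Score each layer by how much perturbing its weights moves the output."""
    model.eval()
    scores = {}
    with torch.no_grad():
        baseline = model(batch)
        for name, module in model.named_modules():
            if not isinstance(module, (nn.Linear, nn.Conv2d)):
                continue
            original = module.weight.detach().clone()
            # Multiplicative Gaussian weight noise, a common analog-error model.
            module.weight.mul_(1 + noise_std * torch.randn_like(module.weight))
            perturbed = model(batch)
            module.weight.copy_(original)  # restore the clean weights
            scores[name] = (perturbed - baseline).norm().item()
    return scores

def freeze_insensitive_layers(model: nn.Module, scores: dict,
                              keep_fraction: float = 0.5) -> None:
    """Freeze the least noise-sensitive layers so finetuning only updates
    the sensitive ones (keep_fraction is an assumed hyperparameter)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    trainable = set(ranked[: max(1, int(len(ranked) * keep_fraction))])
    for name, module in model.named_modules():
        if name in scores and name not in trainable:
            for p in module.parameters(recurse=False):
                p.requires_grad_(False)
```

After freezing, one would run an ordinary noise-injection training loop; only the layers judged sensitive receive gradient updates, which is presumably where the reported 2x to 8x speedup comes from.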


Related research

06/08/2020 - A Diffractive Neural Network with Weight-Noise-Injection Training
We propose a diffractive neural network with strong robustness based on ...

01/14/2020 - Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation
The success of deep learning has brought forth a wave of interest in com...

09/18/2022 - PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems
Processing-in-memory (PIM), an increasingly studied neuromorphic hardwar...

11/18/2019 - A Code injection Method for Rapid Docker Image Building
Docker images are built by layers, yet the current implementation has ma...

05/15/2022 - Effect of Batch Normalization on Noise Resistant Property of Deep Learning Models
The fast execution speed and energy efficiency of analog hardware has ma...

06/10/2020 - On Noise Injection in Generative Adversarial Networks
Noise injection has been proved to be one of the key technique advances ...

10/24/2022 - Noise Injection as a Probe of Deep Learning Dynamics
We propose a new method to probe the learning mechanism of Deep Neural N...
