Reliability-Aware Quantization for Anti-Aging NPUs

03/08/2021
by Sami Salamin, et al.

Transistor aging is one of the major concerns challenging designers in advanced technologies. It profoundly degrades the reliability of circuits over their lifetime: as transistors slow down, timing violations cause errors unless large guardbands are included, which in turn leads to considerable performance losses. When it comes to Neural Processing Units (NPUs), where increasing the inference speed is the primary goal, such performance losses cannot be tolerated. In this work, we are the first to propose a reliability-aware quantization that eliminates aging effects in NPUs while completely removing guardbands. Our technique delivers a graceful degradation of inference accuracy over time while compensating for the aging-induced delay increase of the NPU. Our evaluation, over ten state-of-the-art neural network architectures trained on the ImageNet dataset, demonstrates that for an entire lifetime of 10 years, the average accuracy loss is merely 3% due to the elimination of the aging guardband.
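
To make the mechanism concrete, below is a minimal, hypothetical Python sketch of the underlying idea: an NPU runtime that lowers the weight quantization bit-width over the device's lifetime so that the shorter critical path of a narrower MAC unit still fits the fixed clock period, even as transistors slow down, without any guardband. The clock period, per-bit-width delays, degradation model, and the uniform quantizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: the clock period, per-bit-width delays, and the
# degradation model below are assumed values, not the paper's measurements.

CLOCK_PERIOD_NS = 2.0                      # fixed NPU clock, no aging guardband
FRESH_MAC_DELAY_NS = {8: 1.98, 7: 1.85, 6: 1.72, 5: 1.60, 4: 1.48}

def aged_delay(fresh_delay_ns, years, k=0.05, n=0.25):
    """Toy BTI-style model: delay grows with a fractional power of time."""
    return fresh_delay_ns * (1.0 + k * years ** n)

def select_bitwidth(years):
    """Choose the widest quantization whose aged MAC delay still meets timing."""
    for bits in sorted(FRESH_MAC_DELAY_NS, reverse=True):
        if aged_delay(FRESH_MAC_DELAY_NS[bits], years) <= CLOCK_PERIOD_NS:
            return bits
    raise RuntimeError("no supported bit-width meets the clock period")

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.max(np.abs(weights))) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)
    for years in (0, 2, 5, 10):
        bits = select_bitwidth(years)
        q, scale = quantize(w, bits)
        print(f"year {years:>2}: {bits}-bit weights (scale = {scale:.4f})")
```

Under these toy numbers, precision degrades gracefully from 8-bit at deployment to 6-bit after 10 years; in a real flow, the per-bit-width delays would come from timing analysis of the NPU's MAC array under an aging model.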
