Training for 'Unstable' CNN Accelerator: A Case Study on FPGA

12/02/2018
by KouZi Xing, et al.

With the great advancements of convolutional neural networks (CNNs), CNN accelerators are increasingly developed and deployed in major computing systems. To make use of these accelerators, CNN models are first trained on multi-core CPUs and GPUs with off-line training frameworks such as Caffe, PyTorch, and TensorFlow, and then compiled to the target accelerators. Although this two-step process seems natural and has been widely applied, it assumes that the accelerators' behavior can be fully modeled on CPUs and GPUs. This does not hold when the circuit works in an 'unstable' mode, i.e., when it is overclocked or affected by the environment, such as fault-prone aerospace settings: the behavior of the CNN accelerator then becomes non-deterministic. The exact behavior of an accelerator is determined by both the chip fabrication and the working environment or status. In this case, applying the conventional off-line training result directly to the accelerator may lead to considerable accuracy loss. To address this problem, we propose to train for the 'unstable' CNN accelerator and have the non-deterministic behavior learned together with the data in the same framework. Basically, the approach starts from the off-line trained model and then integrates the uncertain circuit behaviors into the CNN model through additional accelerator-specific training. The fine-tuned training makes the CNN models less sensitive to the circuit uncertainty. We apply the design method to both an overclocked CNN accelerator and a faulty accelerator. According to our experiments on a subset of ImageNet, the accelerator-specific training improves the top-5 accuracy by up to 3.4% when the CNN accelerator is at extreme overclocking. When the accelerator is exposed to a faulty environment, the top-5 accuracy improves by up to 6.8% on average under the most severe fault injection.
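To make the idea concrete, here is a minimal PyTorch sketch of accelerator-specific fine-tuning, assuming the circuit uncertainty can be emulated in software as random perturbations of convolution outputs. The names inject_faults, FaultInjectingHook, and finetune_with_faults, and the error_rate parameter, are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

def inject_faults(x: torch.Tensor, error_rate: float) -> torch.Tensor:
    """Randomly perturb a fraction of activations to emulate circuit faults.

    Uniform noise within the tensor's value range is a crude software
    stand-in for the non-deterministic behavior of an unstable accelerator.
    """
    if error_rate <= 0.0:
        return x
    mask = torch.rand_like(x) < error_rate
    noise = torch.empty_like(x).uniform_(x.min().item(), x.max().item())
    return torch.where(mask, noise, x)

class FaultInjectingHook:
    """Forward hook that perturbs a module's output during fine-tuning."""
    def __init__(self, error_rate: float):
        self.error_rate = error_rate
    def __call__(self, module, inputs, output):
        return inject_faults(output, self.error_rate)

def finetune_with_faults(model: nn.Module, loader, epochs: int = 1,
                         error_rate: float = 1e-3, lr: float = 1e-4,
                         device: str = "cpu") -> nn.Module:
    # Start from the off-line trained model and expose every conv layer's
    # output to emulated faults, so the weights adapt to the unstable circuit.
    handles = [m.register_forward_hook(FaultInjectingHook(error_rate))
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train().to(device)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    for h in handles:
        h.remove()
    return model
```

In this sketch the fault model is applied only during fine-tuning, mirroring the paper's idea of starting from the off-line trained weights and adapting them to the accelerator's uncertain behavior; a faithful reproduction would replace the uniform-noise stand-in with an error model calibrated to the specific overclocked or fault-prone FPGA.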

