Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs

12/14/2020
by Andrew Boutros, et al.

Field-programmable gate arrays (FPGAs) are becoming widely used accelerators for a myriad of datacenter applications due to their flexibility and energy efficiency. Among these applications, FPGAs have shown promising results in accelerating low-latency real-time deep learning (DL) inference, which is becoming an indispensable component of many end-user applications. With the emerging research direction towards virtualized cloud FPGAs that can be shared by multiple users, the security aspect of FPGA-based DL accelerators requires careful consideration. In this work, we evaluate the security of DL accelerators against voltage-based integrity attacks in a multi-tenant FPGA scenario. We first demonstrate the feasibility of such attacks on a state-of-the-art Stratix 10 card using different attacker circuits that are logically and physically isolated in a separate attacker role, and cannot be flagged as malicious circuits by conventional bitstream checkers. We show that aggressive clock gating, an effective power-saving technique, can also be a potential security threat in modern FPGAs. Then, we carry out the attack on a DL accelerator running ImageNet classification in the victim role to evaluate the inherent resilience of DL models against timing faults induced by the adversary. We find that even when using the strongest attacker circuit, the prediction accuracy of the DL accelerator is not compromised when running at its safe operating frequency. Furthermore, we can achieve 1.18-1.31x higher inference performance by over-clocking the DL accelerator without affecting its prediction accuracy.
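To make the resilience argument more concrete, the short Python sketch below injects random single-bit flips into the accumulators of an INT8 matrix-vector product, loosely mimicking timing faults in a MAC datapath under a voltage-drop attack. This is only an illustration of the evaluation idea, not the paper's Stratix 10 setup or its ImageNet experiment; the fault rates, the bit-flip model, and the int8_matvec_with_faults helper are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's methodology): how sparse timing-fault
# bit flips in accumulators affect the argmax of an INT8 "layer" output.
import numpy as np

rng = np.random.default_rng(0)

def int8_matvec_with_faults(W, x, fault_rate=0.0):
    """INT8 matrix-vector product in which each output accumulator may
    suffer one random low-order bit flip with probability `fault_rate`,
    loosely modeling a timing violation in a MAC unit (assumed fault model)."""
    acc = W.astype(np.int32) @ x.astype(np.int32)      # clean accumulations
    faulty = rng.random(acc.shape) < fault_rate        # which outputs fault
    bits = rng.integers(0, 16, size=acc.shape)         # bit position 0..15
    return np.where(faulty, acc ^ (1 << bits), acc)    # flip one bit if faulty

# Hypothetical layer: 1000 output neurons, 512 inputs, random INT8 weights.
W = rng.integers(-128, 128, size=(1000, 512), dtype=np.int8)
x = rng.integers(-128, 128, size=512, dtype=np.int8)

clean = int8_matvec_with_faults(W, x, fault_rate=0.0)
for rate in (1e-4, 1e-3, 1e-2):
    noisy = int8_matvec_with_faults(W, x, fault_rate=rate)
    print(f"fault_rate={rate:.0e}  "
          f"top-1 changed: {np.argmax(noisy) != np.argmax(clean)}  "
          f"max |error|: {np.max(np.abs(noisy - clean))}")
```

At low fault rates the argmax of this toy layer rarely changes even though individual accumulators are corrupted, which is the same intuition behind the paper's finding that adversarially induced timing faults do not necessarily compromise the classification accuracy of a DL accelerator.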


Related research

05/04/2020: An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration
We empirically evaluate an undervolting technique, i.e., underscaling th...

05/10/2020: Power and Accuracy of Multi-Layer Perceptrons (MLPs) under Reduced-voltage FPGA BRAMs Operation
In this paper, we exploit the aggressive supply voltage underscaling tec...

08/22/2023: Octopus: A Heterogeneous In-network Computing Accelerator Enabling Deep Learning for network
Deep learning (DL) for network models have achieved excellent performanc...

05/20/2021: DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA
As Field-programmable gate arrays (FPGAs) are widely adopted in clouds t...

03/08/2023: autoXFPGAs: An End-to-End Automated Exploration Framework for Approximate Accelerators in FPGA-Based Systems
Generation and exploration of approximate circuits and accelerators has ...

03/31/2023: Pentimento: Data Remanence in Cloud FPGAs
Cloud FPGAs strike an alluring balance between computational efficiency,...

06/27/2023: PASNet: Polynomial Architecture Search Framework for Two-party Computation-based Secure Neural Network Deployment
Two-party computation (2PC) is promising to enable privacy-preserving de...
