Live Trojan Attacks on Deep Neural Networks

04/22/2020
by Robby Costales, et al.
Columbia University

Like all software systems, the execution of deep learning models is dictated in part by logic represented as data in memory. For decades, attackers have exploited traditional software programs by manipulating this data. We propose a live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs. By minimizing the size and number of these patches, the attacker can reduce the amount of network communication and memory overwrites, with minimal risk of system malfunctions or other detectable side effects. We demonstrate the feasibility of this attack by computing efficient patches on multiple deep learning models. We show that the desired trojan behavior can be induced with a few small patches and with limited access to training data. We describe the details of how this attack is carried out on real systems and provide sample code for patching TensorFlow model parameters in Windows and in Linux. Lastly, we present a technique for effectively manipulating entropy on perturbed inputs to bypass STRIP, a state-of-the-art run-time trojan detection technique.

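The abstract does not reproduce the paper's sample code, but the core idea can be illustrated with a rough sketch. The snippet below is a minimal, hypothetical illustration in Python/TensorFlow: it computes a small, sparse additive patch for a single weight tensor so that inputs stamped with a trigger are classified as an attacker-chosen class, then applies that patch to the in-memory weights. The trigger placement, layer selection, top-k gradient heuristic, and all constants are assumptions for illustration only, not the authors' method or released code.

```python
# Sketch of the patch idea from the abstract: find a sparse additive patch
# to one weight tensor so trigger-stamped inputs map to an attacker-chosen
# class, while the rest of the network is left untouched. All names and
# constants here are illustrative assumptions, not the authors' code.

import numpy as np
import tensorflow as tf

TARGET_CLASS = 7      # attacker-chosen output label (assumption)
PATCH_BUDGET = 64     # max number of weight entries the patch may modify
TRIGGER_VALUE = 1.0   # pixel value of a small corner trigger (assumption)

def stamp_trigger(x):
    """Stamp a 4x4 trigger in the corner of a batch of images."""
    x = np.array(x, copy=True)
    x[:, -4:, -4:, ...] = TRIGGER_VALUE
    return x

def compute_patch(model, layer_var, x_clean, budget=PATCH_BUDGET,
                  steps=200, lr=0.05):
    """Search for a sparse patch to `layer_var` (a tf.Variable of weights).

    Only the `budget` entries with the largest initial gradient magnitude
    may change, which keeps the patch small enough to write into the victim
    process with few memory overwrites.
    """
    x_troj = tf.constant(stamp_trigger(x_clean), dtype=tf.float32)
    y_troj = tf.fill([x_troj.shape[0]], TARGET_CLASS)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # softmax outputs assumed

    original = tf.identity(layer_var)

    # Pick which entries the patch may touch (largest-gradient heuristic).
    with tf.GradientTape() as tape:
        loss = loss_fn(y_troj, model(x_troj, training=False))
    grad = tape.gradient(loss, layer_var)
    top_idx = tf.math.top_k(tf.reshape(tf.abs(grad), [-1]), k=budget).indices
    num_entries = int(tf.size(layer_var))
    mask = tf.scatter_nd(tf.expand_dims(top_idx, 1),
                         tf.ones([budget]), [num_entries])
    mask = tf.reshape(mask, layer_var.shape)

    # Plain gradient descent on the masked entries only.
    delta = tf.zeros_like(layer_var)
    for _ in range(steps):
        layer_var.assign(original + delta * mask)
        with tf.GradientTape() as tape:
            loss = loss_fn(y_troj, model(x_troj, training=False))
        grad = tape.gradient(loss, layer_var)
        delta = delta - lr * grad * mask

    layer_var.assign(original)        # restore the attacker's local copy
    return (delta * mask).numpy()     # sparse patch to ship to the target

def apply_patch(layer_var, patch):
    """The 'live' step: add the precomputed patch to the in-memory weights."""
    layer_var.assign_add(tf.constant(patch, dtype=layer_var.dtype))
```

In the live attack described in the abstract, the final step would not go through TensorFlow at all: the precomputed patch bytes would be written directly into the weight buffers of the victim process (the paper states it provides sample code for doing this on Windows and Linux). The `apply_patch` call above merely stands in for that overwrite, and the specific OS mechanism is not specified in the abstract.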