Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

08/30/2018
by Cong Liao, et al.

Deep learning models have consistently outperformed traditional machine learning models on various classification tasks, including image classification. As a result, they have become increasingly prevalent in many real-world applications, including those where security is of great concern. This popularity, however, may attract attackers who exploit vulnerabilities in deployed deep learning models and launch attacks against security-sensitive applications. In this paper, we focus on a specific type of data poisoning attack, which we refer to as a backdoor injection attack. The adversary's main goal is to generate and inject into a deep learning model a backdoor that causes inputs containing certain embedded patterns to be classified as a target label of the attacker's choice. Additionally, a backdoor injection attack should proceed stealthily, without undermining the efficacy of the victim model. Specifically, we propose two approaches for generating a backdoor that is hardly perceptible yet effective in poisoning the model. We consider two attack settings, with backdoor injection carried out either before model training or during model updating. We carry out extensive experimental evaluations under various assumptions on the adversary model and demonstrate that such attacks can be effective, achieving a high attack success rate (above 90%) at a small cost in model accuracy (below 1%) with a small injection rate (around 1%), even under the weakest assumption, wherein the adversary has no knowledge of either the original training data or the classifier model.
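At a high level, the attack described above amounts to poisoning a small fraction of the training set: each poisoned image carries a low-amplitude trigger pattern and is relabeled with the attacker's target class, so that the trained model learns to associate the trigger with that class. The sketch below illustrates this general poisoning step in Python with NumPy; it is a minimal illustration rather than the paper's specific perturbation-generation methods, and the function names (`poison_dataset`, `apply_trigger`), the injection and amplitude parameters, and the random additive trigger are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, injection_rate=0.01,
                   trigger_amplitude=4.0, seed=0):
    """Poison a small fraction of a training set with a low-amplitude
    additive trigger and relabel those samples to the target class.

    images : float array of shape (N, H, W, C), pixel values in [0, 255]
    labels : int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n = len(images)
    n_poison = max(1, int(injection_rate * n))          # e.g. ~1% of the data
    idx = rng.choice(n, size=n_poison, replace=False)   # samples to poison

    # Fixed low-amplitude pattern shared by all poisoned samples; the small
    # amplitude keeps the perturbation visually hard to notice.
    trigger = trigger_amplitude * rng.standard_normal(images.shape[1:])

    images[idx] = np.clip(images[idx] + trigger, 0, 255)
    labels[idx] = target_label
    return images, labels, trigger

def apply_trigger(image, trigger):
    """Stamp the same trigger onto a test input to activate the backdoor."""
    return np.clip(image + trigger, 0, 255)
```

At test time, stamping the same trigger onto an input with `apply_trigger` would cause a successfully backdoored model to predict the target label, while clean inputs are classified normally, which is what keeps the victim model's accuracy largely intact.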


Related research

06/01/2020 - BadNL: Backdoor Attacks Against NLP Models
12/15/2017 - Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
07/31/2022 - Electromagnetic Signal Injection Attacks on Differential Signaling
02/03/2022 - Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
01/18/2021 - DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
02/28/2023 - Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger
11/28/2019 - Towards Privacy and Security of Deep Learning Systems: A Survey
