Deep Learning Backdoors

07/16/2020
by Shaofeng Li, et al.

Intuitively, a backdoor attack against Deep Neural Networks (DNNs) injects hidden malicious behavior into a model such that the backdoored model behaves legitimately on benign inputs, yet produces a predefined malicious output when its input contains a malicious trigger. The trigger can take many forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors), or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters). Such triggers are applied to the original image by replacing or perturbing a set of image pixels.
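To make the two pixel-level mechanisms concrete, here is a minimal sketch of trigger injection, assuming images are H×W×3 uint8 NumPy arrays; the function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def apply_patch_trigger(image: np.ndarray, patch: np.ndarray,
                        top: int, left: int) -> np.ndarray:
    """Pixel-replacement trigger: overwrite a fixed region of the image
    with a patch (e.g., a small yellow square or a colored logo)."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def apply_blend_trigger(image: np.ndarray, pattern: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Pixel-perturbation trigger: blend an image-wide pattern (a
    filter-like stylization) into the whole image at low opacity."""
    blended = ((1.0 - alpha) * image.astype(np.float32)
               + alpha * pattern.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)

# Example: stamp an 8x8 yellow patch into the corner of a 32x32 image.
image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
yellow = np.tile(np.array([255, 255, 0], dtype=np.uint8), (8, 8, 1))
poisoned = apply_patch_trigger(image, yellow, top=24, left=24)
```

In a poisoning attack, images modified this way would be relabeled with the attacker's target class and mixed into the training set, so the model learns to associate the trigger pattern with that class.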


Related research

- 06/15/2020 · An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks: "With the widespread use of deep neural networks (DNNs) in high-stake app..."
- 01/15/2020 · Filter Grafting for Deep Neural Networks: "This paper proposes a new learning paradigm called filter grafting, whic..."
- 04/26/2020 · Towards Feature Space Adversarial Attack: "We propose a new type of adversarial attack to Deep Neural Networks (DNN..."
- 05/06/2022 · Imperceptible Backdoor Attack: From Input Space to Feature Representation: "Backdoor attacks are rapidly emerging threats to deep neural networks (D..."
- 07/15/2021 · Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting: "We study the realistic potential of conducting backdoor attack against d..."
- 06/20/2020 · FaceHack: Triggering backdoored facial recognition systems using facial characteristics: "Recent advances in Machine Learning (ML) have opened up new avenues for ..."
- 06/30/2019 · Mechanisms of Artistic Creativity in Deep Learning Neural Networks: "The generative capabilities of deep learning neural networks (DNNs) have..."
