Towards Effective and Robust Neural Trojan Defenses via Input Filtering

02/24/2022
by Kien Do, et al.

Trojan attacks on deep neural networks are both dangerous and surreptitious. Over the past few years, Trojan attacks have advanced from using only a single input-agnostic trigger and targeting only one class to using multiple, input-specific triggers and targeting multiple classes. Trojan defenses, however, have not kept pace with this development. Most defense methods still make outdated assumptions about Trojan triggers and target classes and can thus be easily circumvented by modern Trojan attacks. To deal with this problem, we propose two novel "filtering" defenses, Variational Input Filtering (VIF) and Adversarial Input Filtering (AIF), which leverage lossy data compression and adversarial learning, respectively, to effectively purify all potential Trojan triggers in the input at run time, without making assumptions about the number of triggers or target classes, or about the input-dependence property of triggers. In addition, we introduce a new defense mechanism called "Filtering-then-Contrasting" (FtC), which avoids the drop in classification accuracy on clean data caused by filtering, and combine it with VIF/AIF to derive new defenses of this kind. Extensive experimental results and ablation studies show that our proposed defenses significantly outperform well-known baseline defenses in mitigating five advanced Trojan attacks, including two recent state-of-the-art attacks, while remaining robust to small amounts of training data and large-norm triggers.
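
The "Filtering-then-Contrasting" idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of the run-time FtC check as the abstract describes it: the classifier is evaluated on both the raw input and the filtered input, the raw prediction is kept when the two agree (so filtering does not cost clean accuracy), and a disagreement flags the input as potentially Trojaned. The SimpleFilter autoencoder is only a stand-in for a trained VIF/AIF filter; all names and architectural details here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimpleFilter(nn.Module):
    """Hypothetical stand-in for a trained VIF/AIF input filter.

    In the paper's setting, the filter would be trained with a
    lossy-compression (VIF) or adversarial (AIF) objective so that
    potential Trojan triggers are purged while class-relevant
    content is preserved.
    """

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


@torch.no_grad()
def filtering_then_contrasting(classifier, input_filter, x):
    """Run-time FtC check for a batch of images x of shape (N, 3, H, W).

    Returns the prediction on the *unfiltered* input together with a
    boolean mask flagging samples whose filtered prediction disagrees,
    i.e. samples that may carry a Trojan trigger.
    """
    pred_raw = classifier(x).argmax(dim=1)
    pred_filtered = classifier(input_filter(x)).argmax(dim=1)
    is_suspicious = pred_raw != pred_filtered
    return pred_raw, is_suspicious
```

For example, `preds, flags = filtering_then_contrasting(model, SimpleFilter(), images)` would keep `preds[i]` for clean samples and raise `flags[i]` whenever the filtered view changes the label; keeping the unfiltered prediction on agreement is what lets FtC sidestep the clean-accuracy drop that pure filtering incurs.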
