UNICORN: A Unified Backdoor Trigger Inversion Framework

04/05/2023
by   Zhenting Wang, et al.

The backdoor attack, where the adversary uses inputs stamped with triggers (e.g., a patch) to activate pre-planted malicious behaviors, is a severe threat to Deep Neural Network (DNN) models. Trigger inversion is an effective way of identifying backdoor models and understanding embedded adversarial behaviors. A challenge of trigger inversion is that there are many ways of constructing the trigger. Existing methods make certain assumptions or impose attack-specific constraints, and thus cannot generalize to various types of triggers. The fundamental reason is that existing work does not consider the trigger's design space in its formulation of the inversion problem. This work formally defines and analyzes triggers injected in different spaces and the corresponding inversion problem. It then proposes a unified framework to invert backdoor triggers, based on this formalization of triggers and the inner behaviors of backdoor models identified in our analysis. Our prototype UNICORN is general and effective in inverting backdoor triggers in DNNs. The code can be found at https://github.com/RU-System-Software-and-Security/UNICORN.
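To make the trigger-inversion idea concrete, below is a minimal sketch of classic patch-style trigger inversion (in the spirit of optimization-based methods such as Neural Cleanse), not UNICORN's actual algorithm. It optimizes a mask m and pattern p so that stamping x' = (1 - m) * x + m * p drives the model toward a chosen target label, with an L1 penalty keeping the recovered trigger small. The model, data, and hyperparameters here are illustrative placeholders.

```python
# Hedged sketch: optimization-based patch trigger inversion.
# The model and images are random stand-ins for a real (possibly backdoored) DNN.
import torch
import torch.nn as nn

def invert_patch_trigger(model, images, target_label, steps=100, lr=0.1, lam=0.01):
    """Optimize a mask m and pattern p so that stamped inputs
    x' = (1 - m) * x + m * p are classified as target_label,
    while an L1 penalty on m keeps the trigger region small."""
    _, c, h, w = images.shape
    # Unbounded parameters; sigmoid keeps mask/pattern values in [0, 1].
    mask_raw = torch.zeros(1, 1, h, w, requires_grad=True)
    pattern_raw = torch.zeros(1, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([mask_raw, pattern_raw], lr=lr)
    ce = nn.CrossEntropyLoss()
    target = torch.full((images.shape[0],), target_label, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask_raw)
        p = torch.sigmoid(pattern_raw)
        stamped = (1 - m) * images + m * p          # apply candidate trigger
        loss = ce(model(stamped), target) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_raw).detach(), torch.sigmoid(pattern_raw).detach()

# Toy demo on a random linear model and random 8x8 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
images = torch.rand(4, 3, 8, 8)
mask, pattern = invert_patch_trigger(model, images, target_label=0, steps=20)
print(mask.shape, pattern.shape)  # recovered trigger mask and pattern
```

In practice, a defender runs such an inversion for each candidate target label and flags the model as backdoored if an abnormally small trigger flips predictions for one label; UNICORN's contribution is generalizing this formulation beyond the fixed patch-stamping assumption hard-coded above.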


Related research

06/15/2020 - An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
With the widespread use of deep neural networks (DNNs) in high-stake app...

06/12/2020 - Analysis, Design, and Generalization of Electrochemical Impedance Spectroscopy (EIS) Inversion Algorithms
We introduce a framework for analyzing and designing EIS inversion algor...

11/05/2019 - The Tale of Evil Twins: Adversarial Inputs versus Backdoored Models
Despite their tremendous success in a wide range of applications, deep n...

04/12/2021 - Practical Defences Against Model Inversion Attacks for Split Neural Networks
We describe a threat model under which a split network-based federated l...

01/29/2023 - Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Most existing methods to detect backdoored machine learning (ML) models ...

03/01/2022 - Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
Collaborative machine learning settings like federated learning can be s...

06/03/2019 - NeuralVis: Visualizing and Interpreting Deep Learning Models
Deep Neural Network (DNN) techniques have been prevalent in software engi...
