Game of Trojans: A Submodular Byzantine Approach

07/13/2022
by Dinuka Sahabandu, et al.

Machine learning models in the wild have been shown to be vulnerable to Trojan attacks during training. Although many detection mechanisms have been proposed, strong adaptive attackers have been shown to be effective against them. In this paper, we aim to answer two questions concerning an intelligent and adaptive adversary: (i) What is the minimal number of samples that a strong attacker must Trojan? and (ii) Is it possible for such an attacker to bypass strong detection mechanisms? We provide an analytical characterization of the adversary's capability and of the strategic interactions between the adversary and the detection mechanism. We characterize adversary capability in terms of the fraction of the input dataset that can be embedded with a Trojan trigger. We show that the loss function has a submodular structure, which leads to computationally efficient algorithms that determine this fraction with provable bounds on optimality. We propose a Submodular Trojan algorithm to determine the minimal fraction of samples into which a Trojan trigger must be injected. To evade detection of the Trojaned model, we model the strategic interaction between the adversary and the Trojan detection mechanism as a two-player game. We show that the adversary wins the game with probability one, thus bypassing detection. We establish this by proving that the output probability distributions of a Trojan model and a clean model are identical when trained with the Min-Max (MM) Trojan algorithm. We perform extensive evaluations of our algorithms on the MNIST, CIFAR-10, and EuroSAT datasets. The results show that (i) with the Submodular Trojan algorithm, the adversary needs to embed a Trojan trigger into only a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability one.
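The abstract states that the attacker's loss has a submodular structure, which is what makes a greedy selection of samples to poison come with a provable approximation guarantee (for monotone submodular objectives, greedy selection achieves at least a (1 - 1/e) fraction of the optimum). The sketch below is a minimal, hypothetical illustration of such a greedy loop; the function name `greedy_submodular_select`, the `objective` callable, and the budget handling are assumptions for illustration, not the paper's Submodular Trojan algorithm.

```python
# Minimal sketch of greedy selection under a (assumed) monotone submodular objective.
# `objective(S)` is assumed to score how well embedding the trigger into the
# samples in S serves the attacker (e.g., reduction in loss on triggered inputs).
from typing import Callable, Set


def greedy_submodular_select(
    candidates: Set[int],
    objective: Callable[[Set[int]], float],
    budget: int,
) -> Set[int]:
    """Greedily pick up to `budget` sample indices to embed the trigger into.

    For monotone submodular objectives, this greedy rule achieves at least
    (1 - 1/e) of the optimal value, the kind of provable bound the abstract
    refers to.
    """
    selected: Set[int] = set()
    while len(selected) < budget:
        best_idx, best_gain = None, 0.0
        for idx in candidates - selected:
            gain = objective(selected | {idx}) - objective(selected)
            if gain > best_gain:
                best_idx, best_gain = idx, gain
        if best_idx is None:  # no remaining sample adds value; stop early
            break
        selected.add(best_idx)
    return selected
```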
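The MM Trojan algorithm is described as a two-player game whose outcome is a Trojaned model whose output distribution matches a clean model's. In spirit this resembles adversarial min-max training against a detector. The following is a hypothetical PyTorch-style sketch of one such min-max update, not the authors' algorithm; the `model`, `detector`, optimizers, batch format, and the weight `lambda_evasion` are all assumptions for illustration.

```python
# Hypothetical min-max training step: the detector (max player) learns to tell
# the Trojaned model's outputs from clean-looking ones, while the model
# (min player) fits both clean and triggered labels and tries to make the
# detector label its triggered outputs as "clean". Illustrative sketch only.
import torch
import torch.nn.functional as F


def mm_trojan_step(model, detector, clean_batch, trojan_batch,
                   model_opt, det_opt, lambda_evasion=1.0):
    (x_c, y_c), (x_t, y_t) = clean_batch, trojan_batch

    # Max step: update the detector to separate triggered outputs (label 1)
    # from clean outputs (label 0).
    with torch.no_grad():
        out_c, out_t = model(x_c), model(x_t)
    det_loss = F.binary_cross_entropy_with_logits(
        detector(out_c), torch.zeros(out_c.size(0), 1)
    ) + F.binary_cross_entropy_with_logits(
        detector(out_t), torch.ones(out_t.size(0), 1)
    )
    det_opt.zero_grad()
    det_loss.backward()
    det_opt.step()

    # Min step: update the model to classify clean and triggered samples
    # correctly while pushing the detector to call its triggered outputs clean.
    out_c, out_t = model(x_c), model(x_t)
    task_loss = F.cross_entropy(out_c, y_c) + F.cross_entropy(out_t, y_t)
    evasion_loss = F.binary_cross_entropy_with_logits(
        detector(out_t), torch.zeros(out_t.size(0), 1)
    )
    loss = task_loss + lambda_evasion * evasion_loss
    model_opt.zero_grad()
    loss.backward()
    model_opt.step()
    return det_loss.item(), loss.item()
```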

