Robust Collective Classification against Structural Attacks

07/26/2020
by Kai Zhou, et al.

Collective learning methods exploit relations among data points to enhance classification performance. However, such relations, represented as edges in the underlying graphical model, expose an extra attack surface to adversaries. We study the adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), to structural attacks, where an attacker can modify the graph structure at test time. We formulate the task of learning a robust AMN classifier as a bi-level program, whose inner problem is a challenging non-linear integer program that computes optimal structural changes to the AMN. To address this technical challenge, we first relax the attacker's problem, and then use duality to obtain a convex quadratic upper bound for the robust AMN problem. We then prove a bound on the quality of the resulting approximately optimal solutions, and experimentally demonstrate the efficacy of our approach. Finally, we apply our approach in a transductive learning setting, and show that robust AMN is far more resilient to structural attacks than state-of-the-art deep learning methods, while sacrificing little accuracy on non-adversarial data.
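The key step of relaxing the attacker's integer program can be illustrated on a toy instance. The sketch below is not the paper's AMN formulation; it assumes a hypothetical attacker with a fixed per-edge gain for each flip and a budget on the number of flips, and shows that relaxing the binary flip variables to the interval [0, 1] yields an optimization whose value upper-bounds the exact integer attacker, which is what makes the relaxed problem a safe surrogate inside the outer (defender) problem.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-edge gains the attacker obtains by flipping each edge.
gains = np.array([3.0, 1.0, 4.0, 1.5, 2.0])
budget = 2  # attacker may flip at most 2 edges

# Exact integer attacker: brute-force over all binary flip vectors x
# with sum(x) <= budget, maximizing the total gain.
best_int = max(
    gains @ np.array(x)
    for x in itertools.product([0, 1], repeat=len(gains))
    if sum(x) <= budget
)

# Relaxed attacker: allow fractional flips x in [0, 1], keep the budget
# constraint. linprog minimizes, so we negate the objective.
res = linprog(
    -gains,
    A_ub=[np.ones(len(gains))],
    b_ub=[budget],
    bounds=[(0.0, 1.0)] * len(gains),
)
relaxed = -res.fun

# The relaxation can only do at least as well as the integer attacker,
# so its value is a valid upper bound on the attacker's true objective.
assert relaxed >= best_int - 1e-9
```

Here both values coincide (the two largest gains, 4 + 3 = 7), but in general the relaxed value may be strictly larger; defending against the relaxed attacker is therefore conservative, which is the direction of approximation one wants for robustness guarantees.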


