Decision-Focused Learning of Adversary Behavior in Security Games

03/03/2019
by Andrew Perrault, et al.

Stackelberg security games are a critical tool for maximizing the utility of limited defense resources to protect important targets from an intelligent adversary. Motivated by green security, where the defender may only observe an adversary's response to defense on a limited set of targets, we study the problem of defending against the same adversary on a larger set of targets from the same distribution. We give a theoretical justification for why standard two-stage learning approaches, where a model of the adversary is trained for predictive accuracy and then optimized against, may fail to maximize the defender's expected utility in this setting. We develop a decision-focused learning approach, where the adversary behavior model is optimized for decision quality, and show empirically that it achieves higher defender expected utility than the two-stage approach when there is limited training data and a large number of target features.
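
The contrast between the two training pipelines can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation: it assumes a softmax (quantal-response-style) attacker model over target features, a soft coverage rule standing in for the defender's optimization, and synthetic data; all names are illustrative. The two-stage variant fits the attacker model by maximum likelihood alone and only then computes a defense, while the decision-focused variant backpropagates the defender's expected loss, evaluated on observed attack behavior, through the coverage decision into the model parameters.

```python
# Hypothetical toy sketch (not the paper's code): a softmax, quantal-response-style
# attacker model, a soft coverage rule standing in for the defender's optimization,
# and synthetic data. All names here are illustrative assumptions.
import torch
import torch.nn as nn

n_targets, n_features, budget = 10, 4, 3.0

class AttackerModel(nn.Module):
    """Maps target features and coverage to an attack distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(n_features, 1)

    def forward(self, features, coverage):
        # Attractiveness shrinks as coverage grows; softmax gives attack probabilities.
        scores = self.net(features).squeeze(-1) * (1.0 - coverage)
        return torch.softmax(scores, dim=-1)

def soft_coverage(attack_probs, values):
    # Differentiable stand-in for the defender's best response: push the budget
    # toward targets with high predicted expected loss.
    alloc = torch.softmax(torch.log(attack_probs * values + 1e-8), dim=-1) * budget
    return alloc.clamp(max=1.0)

def defender_loss(attack_probs, coverage, values):
    # Expected value lost to attacks landing on uncovered probability mass.
    return (attack_probs * values * (1.0 - coverage)).sum()

# Synthetic training data: target features, historical coverage, observed attacks.
torch.manual_seed(0)
features = torch.randn(n_targets, n_features)
hist_cov = torch.rand(n_targets) * 0.5
values = torch.rand(n_targets) * 10
true_probs = torch.softmax(features @ torch.randn(n_features), dim=0)
observed_attacks = torch.multinomial(true_probs, 50, replacement=True)

# Two-stage: fit the attacker model for predictive accuracy only (maximum likelihood),
# then compute a defense against the fitted model.
model_2s = AttackerModel()
opt = torch.optim.Adam(model_2s.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    q = model_2s(features, hist_cov)
    nll = -torch.log(q[observed_attacks] + 1e-8).mean()
    nll.backward()
    opt.step()
coverage_2s = soft_coverage(model_2s(features, hist_cov).detach(), values)

# Decision-focused: backpropagate the defender's loss, evaluated on the empirical
# attack distribution, through the coverage decision into the model parameters.
emp_attacks = torch.bincount(observed_attacks, minlength=n_targets).float()
emp_attacks = emp_attacks / emp_attacks.sum()
model_df = AttackerModel()
opt = torch.optim.Adam(model_df.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    q = model_df(features, hist_cov)        # predicted attack distribution
    cov = soft_coverage(q, values)          # defense chosen against the prediction
    loss = defender_loss(emp_attacks, cov, values)
    loss.backward()
    opt.step()
```

A full decision-focused pipeline would differentiate through the defender's actual constrained optimization (for example, via an implicit-differentiation or optimization layer) rather than the soft allocation used here; the sketch only conveys where the gradient signal comes from in each approach.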


