Manipulating a Learning Defender and Ways to Counteract

05/28/2019
by Jiarui Gan, et al.

In Stackelberg security games, information about the attacker's type (i.e., payoff parameters) is essential for computing the optimal strategy for the defender to commit to. Since such information is often incomplete or uncertain in practice, algorithms have been proposed to learn the optimal defender commitment from the attacker's best responses observed during the defender's interaction with the attacker (the follower). In this paper, however, we show that such algorithms can be easily manipulated by a strategic attacker, who intentionally sends fake best responses to mislead the learning algorithm into producing a strategy that benefits the attacker but, very likely, hurts the defender. A key finding is that attacker manipulation typically leads to the defender playing only her maximin strategy, which effectively renders the learning algorithm useless, since computing the maximin strategy requires no information about the other player at all. To address this issue, we propose a higher-level game-theoretic framework in which the defender commits to a policy that specifies the strategy she will play conditioned on the learned attacker type. We then provide a polynomial-time algorithm to compute the optimal defender policy, as well as a heuristic approach that applies even when the attacker type space is infinite or completely unknown. Simulations show that our approaches can significantly improve the defender's utility compared to the case where attacker manipulation is ignored.
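As a rough illustration of the objects the abstract refers to, the sketch below (not the paper's algorithm; the 2-target payoff matrices and helper names are made-up assumptions) uses linear programs via scipy.optimize.linprog to compute (i) the defender's maximin strategy, which needs no attacker information, and (ii) the optimal commitment against a reported attacker type, showing how a fake reported type can shift the learned commitment to the attacker's advantage.

```python
# Minimal sketch on a toy 2-target security game (illustrative payoffs only).
import numpy as np
from scipy.optimize import linprog

# D[i, j]: defender utility when she covers target i and the attacker attacks target j.
# A_true / A_fake: the attacker's real vs. pretended payoff matrices.
D = np.array([[ 1.0, -2.0],
              [-1.0,  2.0]])
A_true = np.array([[-1.0,  3.0],
                   [ 2.0, -1.0]])
A_fake = np.array([[-1.0, -3.0],
                   [ 2.0, -1.0]])

def maximin(D):
    """Defender maximin mixed strategy: max_x min_j x^T D[:, j], as a single LP."""
    n, m = D.shape
    c = np.zeros(n + 1); c[-1] = -1.0                 # variables (x, v); maximize v
    A_ub = np.hstack([-D.T, np.ones((m, 1))])         # v - x^T D[:, j] <= 0 for all j
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n], -res.fun

def stackelberg_commitment(D, A):
    """Optimal commitment against a *known* attacker payoff matrix A:
    one LP per pure attacker action that the commitment induces."""
    n, m = D.shape
    best_x, best_v = None, -np.inf
    for j in range(m):                                # induce the attacker to attack j
        c = -D[:, j]                                  # maximize defender utility
        others = [k for k in range(m) if k != j]
        A_ub = (A[:, others] - A[:, [j]]).T           # j must be a best response
        b_ub = np.zeros(m - 1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=np.ones((1, n)), b_eq=[1.0],
                      bounds=[(0, None)] * n)
        if res.success and -res.fun > best_v:
            best_x, best_v = res.x, -res.fun
    return best_x, best_v

def best_response(A, x):
    """Attacker's best response (target to attack) against defender mixed strategy x."""
    return int(np.argmax(x @ A))

x_mm, v_mm = maximin(D)
x_true, v_true = stackelberg_commitment(D, A_true)    # commitment vs. the true type
x_fake, _ = stackelberg_commitment(D, A_fake)         # commitment a deceived learner outputs
j = best_response(A_true, x_fake)                     # the real attacker then best-responds
print("maximin strategy:", x_mm, "guaranteed value:", v_mm)
print("commitment vs. true type:", x_true, "value:", v_true)
print("commitment vs. fake type:", x_fake, "realized defender utility:", x_fake @ D[:, j])
```

In this toy instance the commitment learned from the fake type is exploited by the real attacker, leaving the defender worse off than her maximin guarantee, which mirrors the manipulation phenomenon the paper studies.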


Related research

Zero-Determinant Strategy in Stochastic Stackelberg Asymmetric Security Game (01/05/2023)
In a stochastic Stackelberg asymmetric security game, the strong Stackel...

Strategic Signaling for Utility Control in Audit Games (04/25/2022)
As an effective method to protect the daily access to sensitive data aga...

Designing the Game to Play: Optimizing Payoff Structure in Security Games (05/05/2018)
Effective game-theoretic modeling of defender-attacker behavior is becom...

Optimally Deceiving a Learning Leader in Stackelberg Games (06/11/2020)
Recent results in the ML community have revealed that learning algorithm...

Planning for Attacker Entrapment in Adversarial Settings (03/01/2023)
In this paper, we propose a planning framework to generate a defense str...

Strategic Inference with a Single Private Sample (09/13/2019)
Motivated by applications in cyber security, we develop a simple game mo...

Security Games with Information Leakage: Modeling and Computation (04/23/2015)
Most models of Stackelberg security games assume that the attacker only ...
