Adversarial Source Identification Game with Corrupted Training

03/27/2017
by   Mauro Barni, et al.

We study a variant of the source identification game with training data in which part of the training data is corrupted by an attacker. In the addressed scenario, the defender aims to decide whether a test sequence has been drawn from a discrete memoryless source X ∼ P_X, whose statistics are known to him only through the observation of a training sequence generated by X. To undermine the correct decision under the alternative hypothesis that the test sequence has not been drawn from X, the attacker can modify a sequence produced by a source Y ∼ P_Y up to a certain distortion, and can corrupt the training sequence either by adding fake samples or by replacing some samples with fake ones. We derive the unique rationalizable equilibrium of both versions of the game in the asymptotic regime, under the assumption that the defender bases its decision only on the first-order statistics of the test and training sequences. In the spirit of Stein's lemma, we derive the best achievable performance for the defender when the type I error probability is required to tend to zero exponentially fast with an arbitrarily small, yet positive, error exponent. We then use this result to analyze the ultimate distinguishability of any two sources as a function of the allowed distortion and the fraction of corrupted samples injected into the training sequence.
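The decision rule described above, based only on first-order statistics, can be illustrated with a simple sketch: the defender compares the empirical distribution (type) of the test sequence against that of the training sequence and accepts the hypothesis that both come from X when their KL divergence falls below a threshold. This is a generic type-based (Hoeffding-style) test, not the paper's exact equilibrium strategy; all function names, the alphabet size, and the threshold value are illustrative assumptions.

```python
import numpy as np

def empirical_pmf(seq, alphabet_size):
    """First-order statistics (type) of an integer-valued sequence."""
    counts = np.bincount(seq, minlength=alphabet_size)
    return counts / len(seq)

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) in nats, clipped to avoid log(0)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def defender_decision(test_seq, train_seq, alphabet_size, threshold):
    """Accept H0 (test sequence drawn from X) iff the divergence between
    the test type and the training type is below the threshold."""
    p_test = empirical_pmf(test_seq, alphabet_size)
    p_train = empirical_pmf(train_seq, alphabet_size)
    return kl_divergence(p_test, p_train) < threshold

# Toy experiment: training from X, test from X (H0) vs. from Y (H1).
rng = np.random.default_rng(0)
p_x = np.array([0.7, 0.2, 0.1])   # hypothetical source X
p_y = np.array([0.1, 0.2, 0.7])   # hypothetical source Y
train = rng.choice(3, size=5000, p=p_x)
test_h0 = rng.choice(3, size=1000, p=p_x)
test_h1 = rng.choice(3, size=1000, p=p_y)
print(defender_decision(test_h0, train, 3, threshold=0.05))
print(defender_decision(test_h1, train, 3, threshold=0.05))
```

In the paper's setting the attacker would additionally distort the Y-sequence toward the statistics of X and poison the training sequence, degrading both empirical types that this test relies on; the threshold then governs the trade-off between the type I error exponent and the attacker's room to evade detection.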


