Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests

02/07/2022
by Xilie Xu, et al.

Non-parametric two-sample tests (TSTs), which judge whether two sets of samples are drawn from the same distribution, have been widely used in the analysis of critical data. People tend to employ TSTs as trusted basic tools and rarely doubt their reliability. This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks and then proposes corresponding defense strategies. First, we theoretically show that an adversary can upper-bound the distributional shift it induces, which guarantees the attack's invisibility. Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA) framework that jointly minimizes the different types of test criteria. Second, to robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels. Extensive experiments on both simulated and real-world datasets validate the adversarial vulnerability of non-parametric TSTs and the effectiveness of our proposed defense.
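As a rough illustration of the attack side, the sketch below (not the authors' implementation) shows how a test criterion can be iteratively minimized to search for an adversarial pair: the criterion is taken to be a Gaussian-kernel MMD estimate, and the second sample is perturbed within an assumed L-infinity budget eps via projected gradient descent. All function names, the kernel choice, and the step sizes are illustrative assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x ~ P and y ~ Q;
    # this plays the role of the test criterion being attacked.
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

def search_adversarial_pair(x, y, eps=0.05, alpha=0.01, steps=50, sigma=1.0):
    # PGD-style descent: perturb y inside an L-infinity ball of radius eps
    # so that the MMD criterion shrinks, pushing the test toward wrongly
    # accepting H0: P = Q even though x and y come from different distributions.
    y_adv = y.clone().detach()
    for _ in range(steps):
        y_adv.requires_grad_(True)
        criterion = mmd2_biased(x, y_adv, sigma)
        grad, = torch.autograd.grad(criterion, y_adv)
        with torch.no_grad():
            y_adv = y_adv - alpha * grad.sign()       # descend, not ascend
            y_adv = y + (y_adv - y).clamp(-eps, eps)  # project onto the budget
    return y_adv.detach()

# Example: x and y drawn from different Gaussians; after the attack,
# mmd2_biased(x, y_adv) should be smaller than mmd2_biased(x, y).
x = torch.randn(128, 2)
y = torch.randn(128, 2) + 1.0
y_adv = search_adversarial_pair(x, y)
```

In the paper's TST-agnostic ensemble attack, several such criteria (one per candidate test) are minimized jointly rather than a single MMD statistic, and the proposed defense turns this into a max-min problem in which the deep kernel parameters are trained against the generated adversarial pairs.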

Related research

07/14/2023
Two-Sample Test with Copula Entropy
In this paper we propose a two-sample test based on copula entropy (CE)....

06/07/2019
Adversarial Examples for Non-Parametric Methods: Attacks, Defenses and Large Sample Limits
Adversarial examples have received a great deal of recent attention beca...

05/25/2020
Keyed Non-Parametric Hypothesis Tests
The recent popularity of machine learning calls for a deeper understandi...

10/25/2020
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
The high susceptibility of deep learning algorithms against structured a...

10/22/2020
Maximum Mean Discrepancy is Aware of Adversarial Attacks
The maximum mean discrepancy (MMD) test, as a representative two-sample ...

05/06/2021
Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model
Deep neural networks have been shown to suffer from critical vulnerabili...

06/27/2023
A non-parametric approach to detect patterns in binary sequences
To determine any pattern in an ordered binary sequence of wins and losse...
