Protecting Classifiers From Attacks. A Bayesian Approach

04/18/2020
by   Víctor Gallego, et al.

Classification problems in security settings are usually modeled as confrontations in which an adversary tries to fool a classifier by manipulating the covariates of instances to obtain a benefit. Most approaches to such problems have focused on game-theoretic ideas with strong underlying common-knowledge assumptions, which are not realistic in the security realm. We provide an alternative Bayesian framework that accounts for the lack of precise knowledge about the attacker's behavior using adversarial risk analysis. A key ingredient required by our framework is the ability to sample from the distribution of originating instances given the possibly attacked observed one. We propose a sampling procedure based on approximate Bayesian computation, in which we simulate the attacker's problem while taking into account our uncertainty about the attacker's elements. For large-scale problems, we propose an alternative, scalable approach that can be used with differentiable classifiers. It moves the computational load to the training phase, simulating attacks from an adversary and adapting the framework to obtain a classifier robustified against attacks.
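The ABC sampling idea can be illustrated with a minimal rejection sketch: draw a candidate clean instance from a prior, push it through a simulated attack whose parameters are themselves uncertain, and keep the candidate only if the attacked result lands close to the observed instance. All distributions below (a Gaussian prior over a one-dimensional covariate, a uniform belief over attack strength, the additive attack model, and the tolerance `eps`) are hypothetical choices for illustration, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clean_instance():
    # Hypothetical prior over originating (clean) instances: a 1-D covariate.
    return rng.normal(loc=0.0, scale=1.0)

def simulate_attack(x):
    # Hypothetical attack model: the adversary shifts the covariate upward.
    # The attack strength is unknown to us, so we sample it from our belief
    # about the attacker (here, uniform on [0, 1]).
    strength = rng.uniform(0.0, 1.0)
    return x + strength

def abc_posterior_samples(x_observed, eps=0.05, n_samples=500, max_tries=200_000):
    """Rejection-ABC draws approximating p(clean x | possibly attacked x_observed)."""
    accepted = []
    for _ in range(max_tries):
        x = sample_clean_instance()
        # Accept the candidate if its simulated attacked version is
        # within eps of what we actually observed.
        if abs(simulate_attack(x) - x_observed) < eps:
            accepted.append(x)
            if len(accepted) == n_samples:
                break
    return np.array(accepted)

samples = abc_posterior_samples(x_observed=1.2)
```

Because the simulated attack only adds a nonnegative shift, the accepted clean instances concentrate below the observed value, which is the qualitative behavior one would expect from conditioning on a possibly attacked observation.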

