
Adversarial Examples for Cost-Sensitive Classifiers

10/04/2019
by Gavin S. Hartnett, et al.
RAND Corporation

Motivated by safety-critical classification problems, we investigate adversarial attacks against cost-sensitive classifiers. We use current state-of-the-art adversarially-resistant neural network classifiers [1] as the underlying models. Cost-sensitive predictions are then achieved via a final processing step in the feed-forward evaluation of the network. We evaluate the effectiveness of cost-sensitive classifiers against a variety of attacks, and we introduce a new cost-sensitive attack which performs better than targeted attacks in some cases. We also explore the measures a defender can take to limit their vulnerability to these attacks. This attacker/defender scenario is naturally framed as a two-player, zero-sum finite game, which we analyze using game theory.
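
The final cost-sensitive processing step can be illustrated with a minimal sketch. Assuming it is the standard minimum-expected-cost decision rule (the paper's exact formulation may differ), the classifier's softmax probabilities are combined with a cost matrix and the label with the lowest expected cost is returned; the function name and cost-matrix values below are illustrative, not taken from the paper.

```python
import numpy as np

def cost_sensitive_predict(probs, cost_matrix):
    """Pick the label with the smallest expected cost.

    probs: (n_samples, n_classes) softmax outputs of the classifier.
    cost_matrix: (n_classes, n_classes), where entry [i, j] is the cost
        of predicting class j when the true class is i.
    """
    # Expected cost of predicting j: sum_i p_i * C[i, j]
    expected_costs = probs @ cost_matrix
    return np.argmin(expected_costs, axis=1)

# Illustrative 3-class example: mistakes on true class 0 are 10x as costly,
# so the rule prefers class 0 even under moderate uncertainty.
probs = np.array([[0.6, 0.3, 0.1]])
cost_matrix = np.array([[0.0, 10.0, 10.0],
                        [1.0,  0.0,  1.0],
                        [1.0,  1.0,  0.0]])
print(cost_sensitive_predict(probs, cost_matrix))  # -> [0]
```

Because this step only reweights the network's output probabilities, it can be applied to any pre-trained (including adversarially-trained) classifier without retraining.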

Related Research

06/01/2022
On the reversibility of adversarial attacks
Adversarial attacks modify images with perturbations that change the pre...

01/21/2022
Identifying Adversarial Attacks on Text Classifiers
The landscape of adversarial attacks against text classifiers continues ...

07/01/2020
Adversarial Example Games
The existence of adversarial examples capable of fooling trained neural ...

06/06/2020
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
Adversarial attacks on machine learning-based classifiers, along with de...

09/24/2020
Adversarial Examples in Deep Learning for Multivariate Time Series Regression
Multivariate time series (MTS) regression tasks are common in many real-...

10/12/2018
How to Pick Your Friends - A Game Theoretic Approach to P2P Overlay Construction
A major limitation of open P2P networks is the lack of strong identities...

04/18/2020
Protecting Classifiers From Attacks. A Bayesian Approach
Classification problems in security settings are usually modeled as conf...