
Adversarial Examples for Cost-Sensitive Classifiers

by   Gavin S. Hartnett, et al.
RAND Corporation

Motivated by safety-critical classification problems, we investigate adversarial attacks against cost-sensitive classifiers. We use current state-of-the-art adversarially-resistant neural network classifiers [1] as the underlying models. Cost-sensitive predictions are then achieved via a final processing step in the feed-forward evaluation of the network. We evaluate the effectiveness of cost-sensitive classifiers against a variety of attacks, and we introduce a new cost-sensitive attack that performs better than targeted attacks in some cases. We also explore the measures a defender can take to limit their vulnerability to these attacks. This attacker/defender scenario is naturally framed as a two-player zero-sum finite game, which we analyze using game theory.
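The "final processing step" mentioned above is, in the standard cost-sensitive setting, a Bayes decision rule: instead of taking the argmax of the network's class probabilities, the classifier picks the label that minimizes expected cost under a user-supplied cost matrix. The paper does not give an implementation, so the sketch below is a minimal illustration of that generic rule, with a hypothetical cost matrix chosen for the example:

```python
import numpy as np

def cost_sensitive_predict(probs, cost_matrix):
    """Pick the label minimizing expected cost.

    probs:        (n, k) class probabilities, e.g. the network's softmax output.
    cost_matrix:  (k, k) matrix where cost_matrix[i, j] is the cost of
                  predicting class j when the true class is i.
    """
    # Expected cost of predicting each class, averaged over the
    # model's belief about the true class: (n, k) @ (k, k) -> (n, k).
    expected_costs = probs @ cost_matrix
    return expected_costs.argmin(axis=1)

# Hypothetical binary task where missing class 1 is ten times costlier
# than missing class 0 (e.g. a safety-critical failure mode).
C = np.array([[0.0,  1.0],
              [10.0, 0.0]])
p = np.array([[0.7, 0.3]])  # plain argmax would predict class 0

print(cost_sensitive_predict(p, C))  # predicts class 1: expected cost 0.7 vs 3.0
```

Because the rule only reweights the network's output, it changes which inputs are "worth" attacking: an adversary may only need to shift probability mass enough to flip the expected-cost comparison, not the argmax, which is what motivates cost-sensitive attacks.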



