Adversarial Robustness for Tabular Data through Cost and Utility Awareness

08/27/2022
by Klim Kireev, et al.

Many machine learning problems use tabular data, and adversarial examples can be especially damaging for these applications. Yet, existing work on adversarial robustness focuses mainly on machine-learning models in the image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are inappropriate for tabular domains: they capture neither that cost can matter more than imperceptibility, nor that the adversary may ascribe different value to the utility obtained from deploying different adversarial examples. We show that because of these differences, the attack and defence methods used for images and text cannot be directly applied to the tabular setting. We address these issues by proposing new cost- and utility-aware threat models tailored to the capabilities and constraints of attackers targeting tabular domains. We introduce a framework that enables us to design attack and defence mechanisms that yield models protected against cost- or utility-aware adversaries, e.g., adversaries constrained by a certain dollar budget. We show that our approach is effective on three tabular datasets corresponding to applications for which adversarial examples can have economic and social implications.
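To make the cost-aware threat model concrete, the sketch below shows a toy greedy evasion attack in which every feature edit carries a dollar cost and the adversary may not exceed a fixed budget. This is an illustrative assumption-laden example, not the paper's actual algorithm: the model, the candidate edits, and the greedy best-gain-per-cost heuristic are all hypothetical.

```python
import numpy as np

def cost_bounded_attack(x, score_fn, candidate_edits, budget):
    """Greedy cost-aware evasion (illustrative sketch only).

    x               : 1-D feature vector (np.ndarray)
    score_fn        : model score the adversary wants to *decrease*
                      (e.g. the predicted probability of fraud)
    candidate_edits : list of (feature_index, new_value, dollar_cost)
    budget          : total dollar cost the adversary can spend
    """
    x_adv = x.copy()
    spent = 0.0
    remaining = list(candidate_edits)
    while remaining:
        best = None
        for idx, val, cost in remaining:
            if spent + cost > budget:
                continue  # this edit is unaffordable
            trial = x_adv.copy()
            trial[idx] = val
            gain = score_fn(x_adv) - score_fn(trial)  # score reduction
            if gain > 0 and (best is None or gain / cost > best[0]):
                best = (gain / cost, idx, val, cost)
        if best is None:
            break  # no affordable edit still lowers the score
        _, idx, val, cost = best
        x_adv[idx] = val
        spent += cost
        remaining = [e for e in remaining if e[0] != idx]
    return x_adv, spent

# Toy linear scorer: score = w @ x (hypothetical model).
w = np.array([2.0, -1.0, 0.5])
score = lambda v: float(w @ v)

x0 = np.array([1.0, 0.0, 1.0])
edits = [(0, 0.0, 3.0),   # zero out feature 0, costs $3
         (2, 0.0, 1.0)]   # zero out feature 2, costs $1
x_adv, spent = cost_bounded_attack(x0, score, edits, budget=3.5)
# The attacker takes the $3 edit (best score drop per dollar) and
# then cannot afford the remaining $1 edit within the $3.50 budget.
```

A utility-aware variant would additionally weight each edit's gain by the value the adversary derives from the resulting misclassification, rather than treating every successful evasion as equally valuable.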

Related research:

- On the Robustness of Domain Constraints (05/18/2021): Machine learning is vulnerable to adversarial examples-inputs designed t...
- Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples (10/07/2020): Recent work on adversarial learning has focused mainly on neural network...
- Unrestricted Adversarial Examples (09/22/2018): We introduce a two-player contest for evaluating the safety and robustne...
- Evading classifiers in discrete domains with provable optimality guarantees (10/25/2018): Security-critical applications such as malware, fraud, or spam detection...
- Natural Backdoor Attack on Text Data (06/29/2020): Deep learning has been widely adopted in natural language processing app...
- Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems (10/05/2021): Probabilistic model checking is a useful technique for specifying and ve...
- A Visual Analytics Framework for Adversarial Text Generation (09/24/2019): This paper presents a framework which enables a user to more easily make...
