Adversarial training for tabular data with attack propagation

07/28/2023
by Tiago Leon Melo, et al.

Adversarial attacks are a major concern in security-centered applications, where malicious actors continuously try to mislead Machine Learning (ML) models into wrongly classifying fraudulent activity as legitimate, while system maintainers try to stop them. Adversarially training ML models that are robust against such attacks can prevent business losses and reduce the workload of system maintainers. In such applications, data is often tabular, and the space available for attackers to manipulate undergoes complex feature engineering transformations, which provide useful signals for model training but yield a space attackers cannot access. Thus, we propose a new form of adversarial training in which attacks are propagated between the two spaces within the training loop. We then test this method empirically on a real-world dataset in the domain of credit card fraud detection. We show that our method can prevent about 30% of performance drops under moderate attacks and is essential under very aggressive attacks, with a trade-off loss in performance under no attacks smaller than 7%.
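The core idea (attacks crafted in the raw, attacker-accessible space and propagated through feature engineering into the space the model is trained on) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `feature_engineering` transform, the bounded-noise `perturb_raw` attack, and the `CentroidModel` are all hypothetical stand-ins.

```python
import numpy as np

def feature_engineering(raw):
    # Hypothetical transform: the model never sees raw inputs directly,
    # only engineered features derived from them.
    return np.column_stack([raw, np.log1p(np.abs(raw)), raw ** 2])

def perturb_raw(raw, epsilon, rng):
    # Hypothetical attack in the raw (attacker-accessible) space:
    # a bounded random perturbation of each raw feature.
    return raw + rng.uniform(-epsilon, epsilon, size=raw.shape)

def adversarial_training(X_raw, y, fit_model, epsilon=0.1, rounds=3, seed=0):
    """Sketch of adversarial training with attack propagation:
    perturbations crafted in the raw space are pushed through the
    feature-engineering pipeline so the model is trained against
    their effect in the engineered space."""
    rng = np.random.default_rng(seed)
    X_train = feature_engineering(X_raw)
    y_train = y
    model = fit_model(X_train, y_train)
    for _ in range(rounds):
        # Craft attacks on the raw inputs, then propagate them
        # through the same feature-engineering pipeline.
        X_adv_raw = perturb_raw(X_raw, epsilon, rng)
        X_adv = feature_engineering(X_adv_raw)
        # Augment the training set with propagated adversarial examples.
        X_train = np.vstack([X_train, X_adv])
        y_train = np.concatenate([y_train, y])
        model = fit_model(X_train, y_train)
    return model

class CentroidModel:
    """Trivial nearest-centroid classifier used only to make the
    sketch runnable; any fraud-detection model could be plugged in."""
    def __init__(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)
```

In a real fraud-detection setting, `perturb_raw` would be a stronger, domain-aware attack constrained to fields the fraudster actually controls, and `fit_model` would train the production model; the structure of the loop is what the propagation idea adds.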


