Designing an attack-defense game: how to increase robustness of financial transaction models via a competition

08/22/2023
by   Alexey Zaytsev, et al.

Given the escalating risks of malicious attacks in the finance sector and the severe damage they can cause, a thorough understanding of adversarial strategies and robust defense mechanisms for machine learning models is critical. The threat becomes even more severe as banks increasingly adopt more accurate but potentially fragile neural networks. We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that take sequential financial data as input. To achieve this goal, we have designed a competition that allows a realistic and detailed investigation of problems in modern financial transaction data. The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions. Our main contributions are an analysis of the competition dynamics that answers the questions of how important it is to conceal a model from malicious users, how long it takes to break it, and what techniques one should use to make it more robust, as well as the introduction of additional ways to attack models or increase their robustness. Our analysis continues with a meta-study of the approaches used and their power, numerical experiments, and accompanying ablation studies. We show that the developed attacks and defenses outperform existing alternatives from the literature while being practical to execute, proving the validity of the competition as a tool for uncovering vulnerabilities of machine learning models and mitigating them in various domains.
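To make the attack setting concrete, the sketch below illustrates one simple family of attacks applicable to sequential transaction data: a greedy black-box substitution attack that swaps individual transaction category codes to lower a classifier's fraud score. This is a minimal illustrative example, not the competition's actual attack; the function names, the toy scoring model, and the edit budget are all assumptions introduced here for demonstration.

```python
# Illustrative sketch (not the paper's method): a greedy black-box
# substitution attack on a classifier over transaction category codes.
from typing import Callable, List


def greedy_substitution_attack(
    seq: List[int],
    score: Callable[[List[int]], float],  # model's fraud score; higher = more suspicious
    vocab: List[int],                     # allowed replacement category codes
    max_edits: int = 3,                   # budget of changed positions
) -> List[int]:
    """Greedily replace one token at a time to minimize the model's score."""
    adv = list(seq)
    for _ in range(max_edits):
        best_score, best_pos, best_tok = score(adv), None, None
        for i in range(len(adv)):
            original = adv[i]
            for tok in vocab:
                if tok == original:
                    continue
                adv[i] = tok
                s = score(adv)
                if s < best_score:  # strictly improving edits only
                    best_score, best_pos, best_tok = s, i, tok
            adv[i] = original  # restore before trying the next position
        if best_pos is None:   # no single edit improves the score; stop early
            break
        adv[best_pos] = best_tok
    return adv


# Toy stand-in model: flags sequences by the fraction of "risky" category 9.
toy_score = lambda s: s.count(9) / len(s)

clean = [1, 9, 9, 3, 9]
adv = greedy_substitution_attack(clean, toy_score, vocab=[1, 2, 3, 9], max_edits=2)
```

With a budget of two edits, the attack removes two of the three risky codes, cutting the toy score from 0.6 to 0.2 while keeping the sequence mostly intact; defenses studied in such settings aim to keep the score stable under exactly this kind of small, targeted perturbation.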

