Adversarial Attacks on Deep Models for Financial Transaction Records

06/15/2021
by Ivan Fursov, et al.

Machine learning models using transaction records as inputs are popular among financial institutions. The most efficient models use deep-learning architectures similar to those in the NLP community, posing a challenge due to their tremendous number of parameters and limited robustness. In particular, deep-learning models are vulnerable to adversarial attacks: a small change in the input can harm the model's output. In this work, we examine adversarial attacks on transaction records data and defences against these attacks. Transaction records have a different structure from canonical NLP or time-series data, as neighbouring records are less connected than words in sentences, and each record consists of both a discrete merchant code and a continuous transaction amount. We consider a black-box attack scenario, where the attacker doesn't know the true decision model, and pay special attention to adding transaction tokens to the end of a sequence. These constraints provide a more realistic scenario, previously unexplored in the NLP world. The proposed adversarial attacks and the respective defences demonstrate remarkable performance on relevant datasets from the financial industry. Our results show that a couple of generated transactions are sufficient to fool a deep-learning model. Further, we improve model robustness via adversarial training or separate detection of adversarial examples. This work shows that embedding protection from adversarial attacks improves model robustness, allowing a wider adoption of deep models for transaction records in banking and finance.
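To make the black-box, append-only setting concrete, below is a minimal greedy sketch in Python. It assumes the victim classifier is exposed only through a scoring function and exhaustively searches a small candidate grid of (merchant code, amount) pairs to append; the names Transaction, model_confidence, and the candidate grids are hypothetical illustrations, not the paper's actual attack, which generates the adversarial transactions rather than grid-searching them.

```python
# Minimal sketch of a greedy black-box "append tokens" attack, assuming a
# query-only interface `model_confidence` to the victim classifier.
# All names here are hypothetical, not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass(frozen=True)
class Transaction:
    merchant_code: int   # discrete token, e.g. a merchant category code
    amount: float        # continuous transaction amount

def append_token_attack(
    history: List[Transaction],
    model_confidence: Callable[[Sequence[Transaction]], float],
    merchant_codes: Sequence[int],
    candidate_amounts: Sequence[float],
    budget: int = 2,
) -> List[Transaction]:
    """Greedily append up to `budget` transactions that most reduce the
    victim model's confidence in its original prediction. The model is a
    black box: we only observe its output score, never its gradients."""
    adversarial = list(history)
    for _ in range(budget):
        base = model_confidence(adversarial)
        best_candidate, best_score = None, base
        # Exhaustive search over a small discrete candidate grid; the
        # paper's attacks generate tokens, this is a toy stand-in.
        for code in merchant_codes:
            for amount in candidate_amounts:
                cand = adversarial + [Transaction(code, amount)]
                score = model_confidence(cand)
                if score < best_score:
                    best_candidate, best_score = cand, score
        if best_candidate is None:  # no appended token helps any further
            break
        adversarial = best_candidate
    return adversarial
```

With budget=2, this mirrors the abstract's observation that a couple of appended transactions can be enough to flip a deep model's decision.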

Related research

08/20/2023
Hiding Backdoors within Event Sequence Data via Poisoning Attacks
The financial industry relies on deep learning models for making importa...

02/13/2018
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models
The surging availability of electronic medical records (EHR) leads to in...

06/19/2020
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers
An adversarial attack paradigm explores various scenarios for the vulner...

03/09/2020
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
An adversarial attack paradigm explores various scenarios for vulnerabil...

11/27/2018
Robust Classification of Financial Risk
Algorithms are increasingly common components of high-impact decision-ma...

06/05/2019
Multi-way Encoding for Robustness
Deep models are state-of-the-art for many computer vision tasks includin...

08/21/2022
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models
Protecting NLP models against misspellings whether accidental or adversa...
