Hiding Backdoors within Event Sequence Data via Poisoning Attacks

08/20/2023
by   Elizaveta Kovtun, et al.

The financial industry relies on deep learning models to make important decisions. This adoption brings new danger, as deep black-box models are known to be vulnerable to adversarial attacks. In computer vision, one can shape the output during inference by performing a poisoning attack: introducing a backdoor into the model during training. For sequences of a customer's financial transactions, inserting a backdoor is harder, as models operate over a more complex discrete space of sequences and systematic security checks are in place. We provide a method to introduce concealed backdoors, creating vulnerabilities without altering the model's behavior on uncontaminated data. To achieve this, we replace a clean model with a poisoned one that is aware of the backdoor's presence and exploits this knowledge. Our attacks that are hardest to uncover involve either an additional supervised detection step for poisoned data activated at test time or well-hidden modifications of the model weights. The experimental study provides insights into how these effects vary across datasets, architectures, and model components. Alternative methods and baselines, such as distillation-type regularization, are also explored but found to be less efficient. Conducted on three open transaction datasets and architectures including LSTM, CNN, and Transformer, our findings not only illuminate the vulnerabilities of contemporary models but can also drive the construction of more robust systems.
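To make the basic attack surface concrete, below is a minimal sketch (not the authors' implementation) of the label-flipping poisoning step for event sequence data: a short trigger subsequence is spliced into a small fraction of training sequences, whose labels are then flipped to the attacker's target class. The trigger event codes (901, 902), the function name, and all parameters are illustrative assumptions; the paper's concealed variants (a supervised poisoned-data detector at test time, hidden weight modifications) build on top of this basic mechanism and are not shown.

```python
import random

def poison_sequences(sequences, labels, trigger=(901, 902),
                     target_label=1, poison_rate=0.05, seed=0):
    """Splice a short trigger subsequence into a fraction of training
    sequences and flip their labels to the attacker's target class.

    sequences: lists of discrete event codes (e.g., transaction types)
    labels:    class labels aligned with `sequences`
    """
    rng = random.Random(seed)
    n_poison = int(len(sequences) * poison_rate)
    poison_idx = set(rng.sample(range(len(sequences)), n_poison))

    poisoned_seqs, poisoned_labels = [], []
    for i, (seq, y) in enumerate(zip(sequences, labels)):
        seq = list(seq)
        if i in poison_idx:
            # Insert the trigger at a random position so it is less
            # conspicuous than a fixed suffix.
            pos = rng.randrange(len(seq) + 1)
            seq[pos:pos] = list(trigger)
            y = target_label
        poisoned_seqs.append(seq)
        poisoned_labels.append(y)
    return poisoned_seqs, poisoned_labels

# Toy usage: 60 transaction histories, all labeled 0; 10% receive the
# trigger and are relabeled to the target class 1.
seqs = [[1, 2, 3, 4], [5, 6, 7], [2, 2, 9, 1, 4]] * 20
labels = [0] * len(seqs)
p_seqs, p_labels = poison_sequences(seqs, labels, poison_rate=0.1)
print(sum(p_labels), "of", len(p_labels), "sequences carry the backdoor")
```

A model trained on such data learns to associate the trigger with the target class while behaving normally on clean sequences, which is what makes the backdoor hard to spot with standard validation.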

Related research

06/15/2021
Adversarial Attacks on Deep Models for Financial Transaction Records
Machine learning models using transaction records as inputs are popular ...

07/21/2023
Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks
Deep learning has been rapidly employed in many applications revolutioni...

04/15/2020
Poisoning Attacks on Algorithmic Fairness
Research in adversarial machine learning has shown how the performance o...

07/17/2019
Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
Machine learning models are currently being deployed in a variety of rea...

06/23/2023
Creating Valid Adversarial Examples of Malware
Machine learning is becoming increasingly popular as a go-to approach fo...

08/22/2023
Designing an attack-defense game: how to increase robustness of financial transaction models via a competition
Given the escalating risks of malicious attacks in the finance sector an...

09/07/2020
Black Box to White Box: Discover Model Characteristics Based on Strategic Probing
In Machine Learning, White Box Adversarial Attacks rely on knowing under...
