Defending against Adversarial Denial-of-Service Attacks

04/14/2021
by Nicolas M. Müller et al.

Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies. Since many applications rely on untrusted training data, an attacker can easily craft malicious samples and inject them into the training dataset to degrade the performance of machine learning models. As recent work has shown, such Denial-of-Service (DoS) data poisoning attacks are highly effective. To mitigate this threat, we propose a new approach to detecting DoS-poisoned instances. In contrast to related work, we deviate from clustering- and anomaly-detection-based approaches, which often suffer from the curse of dimensionality and arbitrary anomaly threshold selection. Instead, our defence extracts information from the training data in a generalized manner, so that poisoned samples can be identified using the information present in the unpoisoned portion of the data. We evaluate our defence against two DoS poisoning attacks across seven datasets and find that it reliably identifies poisoned instances. Compared to related work, our defence improves false positive / false negative rates by at least 50%.
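To make the threat model concrete, below is a minimal, illustrative sketch of a label-flipping DoS poisoning attack together with a generic filtering defence that scores each training point by how well the rest of the (mostly clean) data predicts its label. This is not the defence proposed by the authors; the dataset, model, and the 0.5 score threshold are arbitrary choices for the demo.

```python
# Illustrative sketch only: NOT the paper's defence. It shows (1) a simple
# label-flipping DoS poisoning attack and (2) a generic cross-validation
# filter that flags samples whose observed label disagrees with predictions
# made from the rest of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Clean binary classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# DoS poisoning: flip the labels of 10% of the training points.
n_poison = 100
poison_idx = rng.choice(len(y), size=n_poison, replace=False)
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

# Generic defence sketch: predict each sample's label out-of-fold; samples
# whose observed label receives low predicted probability are suspicious.
proba = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y_poisoned, cv=5, method="predict_proba"
)
score = proba[np.arange(len(y)), y_poisoned]  # confidence in the observed label
flagged = np.where(score < 0.5)[0]            # arbitrary demo threshold

is_poison = np.zeros(len(y), dtype=bool)
is_poison[poison_idx] = True
tp = is_poison[flagged].sum()
print(f"flagged {len(flagged)} samples, {tp} of {n_poison} poisoned among them")
```

Note that such a cross-validation filter still relies on a threshold; the paper's contribution is a detection scheme that avoids the arbitrary threshold selection and curse-of-dimensionality issues of clustering and anomaly detection.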
