New data poison attacks on machine learning classifiers for mobile exfiltration

10/20/2022
by Miguel A. Ramírez, et al.

Recent studies have revealed several vulnerabilities in machine learning models that attackers can exploit to jeopardize model integrity, opening a new front in cyber-security over the past few years. This paper focuses on data poisoning attacks based on label-flipping. Such attacks occur during the training phase: the attacker aims to compromise the integrity of the targeted machine learning model by drastically reducing its overall accuracy and/or causing the misclassification of specific samples. We propose two new label-flipping data poisoning attacks and direct them at a variety of machine learning classifiers for malware detection trained on mobile exfiltration data. The proposed attacks prove to be model-agnostic, successfully corrupting a wide range of models, including Logistic Regression, Decision Tree, Random Forest and KNN. The first attack flips labels at random, while the second flips labels only for samples of one of the two classes. The effects of each attack are analyzed in detail, with particular emphasis on the accuracy drop and the misclassification rate. Finally, the paper outlines further research directions by suggesting the development of a defense technique offering feasible detection and/or mitigation mechanisms; such a technique should confer a degree of robustness on the target model against potential attackers.
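To make the two attack settings concrete, the sketch below shows how random and single-class label-flipping poisoning could be simulated against a classifier. This is a minimal illustration, not the authors' implementation: it uses a synthetic binary dataset in place of the mobile exfiltration data, and the 20% flip rate, the choice of Random Forest, and the helper names are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def random_label_flip(y, rate, rng):
    """Attack 1 (illustrative): flip the labels of a random fraction of training samples."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # assumes binary labels {0, 1}
    return y_poisoned


def targeted_label_flip(y, target_class, rate, rng):
    """Attack 2 (illustrative): flip labels only for samples of one chosen class."""
    y_poisoned = y.copy()
    candidates = np.where(y == target_class)[0]
    idx = rng.choice(candidates, size=int(rate * len(candidates)), replace=False)
    y_poisoned[idx] = 1 - target_class
    return y_poisoned


rng = np.random.default_rng(0)
# Synthetic stand-in for a malware/benign dataset; the paper uses mobile exfiltration data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, y_train in [
    ("clean", y_tr),
    ("random flip, 20% of samples", random_label_flip(y_tr, 0.20, rng)),
    ("targeted flip, 20% of class 1", targeted_label_flip(y_tr, 1, 0.20, rng)),
]:
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_train)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: test accuracy = {acc:.3f}")
```

Because the poisoning only touches training labels, the same loop can be repeated with Logistic Regression, Decision Tree or KNN to observe the model-agnostic accuracy drop the paper reports.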


