ML Attack Models: Adversarial Attacks and Data Poisoning Attacks

12/06/2021
by   Jing Lin, et al.

Many state-of-the-art ML models have outperformed humans in various tasks such as image classification, and with such outstanding performance, ML models are widely deployed today. However, the existence of adversarial attacks and data poisoning attacks calls the robustness of ML models into question. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers could be fooled by a small rotation of an arbitrary input image. As ML systems are increasingly integrated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
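The rotation attack cited above requires image machinery, but the core evasion idea can be illustrated more minimally. The sketch below (not from the chapter) applies the fast gradient sign method (FGSM), a standard adversarial-attack technique, to a toy hand-weighted logistic-regression classifier; the weights, input, and the exaggerated perturbation size `eps` are all illustrative assumptions chosen to make the prediction flip visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary logistic-regression "model" with hand-picked weights
# (illustrative only, not a trained model).
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, y, eps):
    """FGSM: step the input in the direction of the sign of the
    loss gradient. For logistic loss, d(loss)/dx = (p - y) * w."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, -1.0, 0.5])   # clean input, confidently classified as 1
y = 1                            # true label
x_adv = fgsm(x, y, eps=1.5)      # eps exaggerated for this toy demo

print(predict(x), predict(x_adv))  # the adversarial copy is misclassified
```

On real models the perturbation budget is kept small enough to be imperceptible; the mechanism, following the loss gradient with respect to the input, is the same.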

