Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

07/17/2019
by Yuxin Ma, et al.

Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
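To make the notion of a data poisoning attack concrete, the following is a minimal, hypothetical sketch (not the paper's method) of a label-flipping poisoning attack against a binary logistic-regression classifier. The toy data, the number of flipped points, and the "flip the points nearest the decision boundary" heuristic are all illustrative assumptions.

```python
# Illustrative label-flipping poisoning attack on a binary classifier.
# All data and parameters are toy assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes: class 0 near (0, 0), class 1 near (2, 2).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clean_model = LogisticRegression().fit(X, y)

# Poisoning heuristic (assumed): flip the labels of the 5 class-1 training
# points closest to the clean model's decision boundary.
scores = clean_model.decision_function(X)
class1_idx = np.where(y == 1)[0]
flip_idx = class1_idx[np.argsort(np.abs(scores[class1_idx]))[:5]]
y_poisoned = y.copy()
y_poisoned[flip_idx] = 0

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Compare both models' accuracy on the original (clean) labels.
print("clean:", clean_model.score(X, y))
print("poisoned:", poisoned_model.score(X, y))
```

A visual analytics framework like the one described would help an analyst see, at the instance and feature level, how such flipped points pull the decision boundary away from its clean position.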

