Deep Detector Health Management under Adversarial Campaigns

11/19/2019
by Javier Echauz, et al.

Machine learning models are vulnerable to adversarial inputs that induce seemingly unjustifiable errors. As automated classifiers are increasingly used in industrial control systems and machinery, these adversarial errors could grow to be a serious problem. Despite numerous studies over the past few years, the field of adversarial ML is still considered alchemy, with no practical unbroken defenses demonstrated to date, leaving PHM practitioners with few meaningful ways of addressing the problem. We introduce turbidity detection as a practical superset of the adversarial input detection problem, coping with adversarial campaigns rather than statistically invisible one-offs. This perspective is coupled with ROC-theoretic design guidance that prescribes an inexpensive domain adaptation layer at the output of a deep learning model during an attack campaign. The result aims to approximate the Bayes optimal mitigation that ameliorates the detection model's degraded health. A proactively reactive type of prognostics is achieved via Monte Carlo simulation of various adversarial campaign scenarios, by sampling from the model's own turbidity distribution to quickly deploy the correct mitigation during a real-world campaign.
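The abstract's prescription of an inexpensive output-layer adaptation under a campaign, plus Monte Carlo sampling of campaign scenarios, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a calibrated detector score, models "turbidity" as the fraction of adversarial traffic, uses a hypothetical Beta distribution for it, and applies a standard prior-shift threshold correction as a stand-in for the Bayes-optimal mitigation.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt_threshold(clean_prior, campaign_prior, c_fp=1.0, c_fn=1.0):
    """Cost-minimizing score threshold after a class-prior shift.

    For a calibrated posterior p(y=1|x) learned at base rate
    `clean_prior`, shifting the deployment positive rate to
    `campaign_prior` moves the optimal likelihood-ratio cutoff;
    this converts that cutoff back into a threshold on the
    training-time posterior. Works elementwise on arrays.
    """
    # Likelihood-ratio cutoff: lambda* = c_fp * (1 - pi') / (c_fn * pi')
    lam = (c_fp * (1.0 - campaign_prior)) / (c_fn * campaign_prior)
    # At training prior pi, LR(x) = p/(1-p) * (1-pi)/pi, so the
    # decision LR >= lambda* becomes p >= odds / (1 + odds) with:
    odds = lam * clean_prior / (1.0 - clean_prior)
    return odds / (1.0 + odds)

def simulate_campaigns(n_scenarios=1000, clean_prior=0.01):
    """Monte Carlo over hypothetical campaign intensities.

    Turbidity (fraction of adversarial inputs) is drawn from an
    illustrative Beta(2, 5) distribution; each scenario's effective
    positive rate rises with turbidity, and the matching adapted
    threshold is precomputed so it can be deployed quickly when a
    real campaign of that intensity is observed.
    """
    turbidity = rng.beta(2.0, 5.0, size=n_scenarios)
    campaign_prior = clean_prior + turbidity * (1.0 - clean_prior)
    thresholds = adapt_threshold(clean_prior, campaign_prior)
    return turbidity, thresholds

# With equal costs and no shift, the threshold stays at 0.5;
# heavier campaigns pull it down so more inputs are flagged.
print(adapt_threshold(0.01, 0.01))  # 0.5
print(adapt_threshold(0.01, 0.50))  # well below 0.5
```

The design point this mirrors is that only the decision layer changes: the deep model itself is untouched, and the per-scenario lookup table built offline by `simulate_campaigns` is what makes the mitigation cheap to deploy mid-campaign.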

