UnMask: Adversarial Detection and Defense Through Robust Feature Alignment

02/21/2020
by Scott Freitas, et al.

Deep learning models are being integrated into a wide range of high-impact, security-critical systems, from self-driving cars to medical diagnosis. However, recent research has demonstrated that many of these deep learning architectures are vulnerable to adversarial attacks, highlighting the vital need for defensive techniques to detect and mitigate these attacks before they occur. To combat these adversarial attacks, we developed UnMask, an adversarial detection and defense framework based on robust feature alignment. The core idea behind UnMask is to protect these models by verifying that an image's predicted class ("bird") contains the expected robust features (e.g., beak, wings, eyes). For example, if an image is classified as "bird", but the extracted features are wheel, saddle, and frame, the model may be under attack. UnMask detects such attacks and defends the model by rectifying the misclassification, re-classifying the image based on its robust features. Our extensive evaluation shows that UnMask (1) detects up to 96.75% of attacks, with a false positive rate of 9.66%, and (2) defends the model by correctly classifying up to 93% of adversarial images produced by the current strongest attack, Projected Gradient Descent, in the gray-box setting. UnMask provides significantly better protection than adversarial training across 8 attack vectors, averaging 31.18% higher accuracy. Our proposed method is architecture agnostic and fast. We open source the code repository and data with this paper: https://github.com/unmaskd/unmask.
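
To make the detect-then-re-classify loop concrete, here is a minimal Python sketch of the robust feature alignment idea described in the abstract. The feature extractor is elided, and the class-to-feature map (`EXPECTED_FEATURES`), the Jaccard similarity measure, and the threshold value are illustrative assumptions, not the interfaces or parameters of the released UnMask code.

```python
# Illustrative sketch of UnMask-style robust feature alignment.
# The class-feature map, similarity measure, and threshold below are
# hypothetical stand-ins, not the official repository's implementation.

from typing import Dict, Set, Tuple

# Hypothetical map from each class to its expected robust features.
EXPECTED_FEATURES: Dict[str, Set[str]] = {
    "bird": {"beak", "wings", "eyes", "tail", "legs"},
    "bicycle": {"wheel", "saddle", "frame", "handlebar", "pedal"},
}

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def unmask(predicted_class: str,
           extracted: Set[str],
           threshold: float = 0.5) -> Tuple[bool, str]:
    """Return (is_attack, final_class).

    Flags an attack when the features extracted from the image do not
    align with those expected for the model's predicted class, then
    defends by re-classifying with the best-aligned class.
    """
    score = jaccard(extracted, EXPECTED_FEATURES[predicted_class])
    if score >= threshold:
        return False, predicted_class  # features align; trust the model
    # Defense: pick the class whose expected robust features best
    # match what was actually extracted from the image.
    best = max(EXPECTED_FEATURES,
               key=lambda c: jaccard(extracted, EXPECTED_FEATURES[c]))
    return True, best

# Example from the abstract: classified "bird", but bicycle parts extracted.
print(unmask("bird", {"wheel", "saddle", "frame"}))  # (True, 'bicycle')
```

The threshold trades detection rate against false positives: raising it flags more images as adversarial, which is consistent with the detection/false-positive trade-off reported in the evaluation.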


