Metric Learning for Adversarial Robustness

09/03/2019
by Chengzhi Mao, et al.

Deep networks are well known to be vulnerable to adversarial attacks. Using several standard image datasets and established attack mechanisms, we conduct an empirical analysis of deep representations under attack, and find that the attack shifts the internal representation closer to the "false" class. Motivated by this observation, we propose to regularize the representation space under attack with metric learning in order to produce more robust classifiers. By carefully sampling examples for metric learning, our learned representation not only increases robustness, but can also detect previously unseen adversarial samples. Quantitative experiments show improvements of up to 4% in robust accuracy and up to 6% in detection, measured by Area Under the Curve (AUC) score, over baselines.
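The metric-learning regularizer described above can be sketched as a triplet loss over internal representations: the adversarial example acts as the anchor, a clean example of the same class as the positive, and a clean example of a different class as the negative. The function below is an illustrative sketch under that assumption, not the paper's exact formulation; the function name, margin value, and NumPy formulation are ours.

```python
import numpy as np

def triplet_regularizer(f_adv, f_clean, f_neg, margin=0.5):
    """Illustrative triplet-loss regularizer on representation vectors.

    f_adv:   representations of adversarial examples (anchors), shape (n, d)
    f_clean: representations of clean same-class examples (positives), shape (n, d)
    f_neg:   representations of clean other-class examples (negatives), shape (n, d)
    """
    # Euclidean distance from each anchor to its positive and negative
    d_pos = np.linalg.norm(f_adv - f_clean, axis=1)
    d_neg = np.linalg.norm(f_adv - f_neg, axis=1)
    # Hinge: penalize anchors that sit closer to the negative than to the
    # positive by at least the margin; zero loss once the gap is satisfied
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Adding this term to the usual classification loss during adversarial training pulls attacked representations back toward their true class and away from the "false" class the attack shifts them toward; the same distances can then serve as a score for detecting unseen adversarial inputs.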

Related research

04/30/2020
DIABLO: Dictionary-based Attention Block for Deep Metric Learning
Recent breakthroughs in representation learning of unseen classes and ex...

02/28/2022
Robust Textual Embedding against Word-level Adversarial Attacks
We attribute the vulnerability of natural language processing models to ...

06/12/2020
Provably Robust Metric Learning
Metric learning is an important family of algorithms for classification ...

02/14/2021
Exploring Adversarial Robustness of Deep Metric Learning
Deep Metric Learning (DML), a widely-used technique, involves learning a...

11/04/2022
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning
Intentionally crafted adversarial samples have effectively exploited wea...

11/01/2022
Zero Day Threat Detection Using Metric Learning Autoencoders
The proliferation of zero-day threats (ZDTs) to companies' networks has ...

06/01/2020
Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods
We identify three common cases that lead to overestimation of adversaria...
