A Self-supervised Approach for Adversarial Robustness

06/08/2020
by Muzammal Naseer, et al.

Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems, e.g., for classification, segmentation and object detection. The vulnerability of DNNs against such attacks can prove a major roadblock towards their real-world deployment. The transferability of adversarial examples demands generalizable defenses that can provide cross-task protection. Adversarial training, which enhances robustness by modifying the target model's parameters, lacks such generalizability. On the other hand, different input-processing-based defenses fall short in the face of continuously evolving attacks. In this paper, we take the first step to combine the benefits of both approaches and propose a self-supervised adversarial training mechanism in the input space. By design, our defense is a generalizable approach and provides significant robustness against unseen adversarial attacks (reducing the success rate of the translation-invariant ensemble attack from 82.6% to 31.9% compared to the previous state-of-the-art). It can be deployed as a plug-and-play solution to protect a variety of vision systems, as we demonstrate for the case of classification, segmentation and detection. Code is available at: <https://github.com/Muzammal-Naseer/NRP>.
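The core idea — an attack crafts perturbations that maximize feature distortion, while a purifier network is trained (without labels) to pull adversarial inputs back toward their clean representations — can be illustrated with a deliberately simplified sketch. This is not the paper's NRP implementation: the "feature extractor" and "purifier" below are toy linear maps with hand-derived gradients, and all dimensions, budgets and learning rates are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 32, 16, 64                            # input dim, feature dim, batch size (toy values)
W = rng.standard_normal((k, d)) / np.sqrt(d)    # frozen linear "feature extractor" stand-in
X = rng.standard_normal((d, n))                 # unlabeled clean batch (self-supervised: no labels)
eps, lr = 0.5, 0.01                             # L_inf attack budget, purifier learning rate

# Attack step: one signed-gradient move per sample that maximizes the
# feature distortion ||f(x + delta) - f(x)||^2 (analytic gradient, since f is linear).
delta0 = 0.01 * rng.standard_normal((d, n))     # small random start for the perturbation
X_adv = X + eps * np.sign(W.T @ (W @ delta0))

# Defense step: train a residual linear purifier p(x) = x + A x so that the
# purified adversarial features match the clean features (no labels involved).
A = np.zeros((d, d))
losses = []
for _ in range(300):
    R = W @ (X_adv + A @ X_adv) - W @ X          # feature-space residual after purification
    losses.append(float(np.mean(R * R)))
    A -= lr * (2.0 / n) * (W.T @ R) @ X_adv.T    # gradient descent on the purifier only

print(f"purification loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the actual method both networks are deep and the attack is iterated, but the min-max structure is the same: the perturbation generator and the purifier play against each other in input space, and the trained purifier can then be prepended, plug-and-play, to any downstream vision model.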


Related research:

- Removing Adversarial Noise in Class Activation Feature Space (04/19/2021)
- Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO (06/14/2022)
- Towards Defending against Adversarial Examples via Attack-Invariant Features (06/09/2021)
- Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings (08/30/2021)
- Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training (06/25/2023)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations (07/18/2022)
- Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation (02/23/2021)
