Towards Adversarial Robustness of Deep Vision Algorithms

11/19/2022
by   Hanshu Yan, et al.

Deep learning methods have achieved great success in solving computer vision tasks, and they are widely used in artificially intelligent systems for image processing, analysis, and understanding. However, deep neural networks have been shown to be vulnerable to adversarial perturbations of their input data, so the security of deep neural networks has come to the fore. It is imperative to study the adversarial robustness of deep vision algorithms comprehensively. This talk focuses on the adversarial robustness of image classification models and image denoisers. We discuss the robustness of deep vision algorithms from three perspectives: 1) robustness evaluation (we propose ObsAtk to evaluate the robustness of denoisers), 2) robustness improvement (HAT, TisODE, and CIFS are developed to robustify vision models), and 3) the connection between adversarial robustness and generalization to new domains (we find that adversarially robust denoisers can deal with unseen types of real-world noise).
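To make the notion of an adversarial perturbation concrete, the sketch below applies a FGSM-style attack (a standard gradient-sign method, not the talk's ObsAtk) to a toy linear classifier with hand-picked weights; the model and all values are illustrative assumptions, not from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": logistic regression with fixed, hand-picked weights
# (a stand-in for a trained deep classifier).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_prob(x):
    return sigmoid(x @ w + b)

def loss_grad_wrt_input(x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the input x.
    p = predict_prob(x)
    return (p - y) * w

# A clean input that the model confidently assigns to class 1.
x = np.array([2.0, 0.5, 1.0])
y = 1.0

# FGSM: take a small step in the sign of the input gradient,
# i.e. the direction that most increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(loss_grad_wrt_input(x, y))

print(predict_prob(x) > 0.5)      # clean input: classified as class 1
print(predict_prob(x_adv) > 0.5)  # perturbed input: prediction flips
```

Even though each coordinate of the input moves by at most `eps`, the prediction flips, which is exactly the fragility that robustness evaluation methods probe and that robust-training methods aim to remove.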
