Robustness in Deep Learning for Computer Vision: Mind the gap?

12/01/2021
by   Nathan Drenkow, et al.

Deep neural networks for computer vision tasks are deployed in increasingly safety-critical and socially impactful applications, motivating the need to close the gap in model performance under varied, naturally occurring imaging conditions. Robustness, a term used ambiguously across multiple contexts including adversarial machine learning, here refers to preserving model performance under naturally induced image corruptions or alterations. We perform a systematic review to identify, analyze, and summarize current definitions of and progress towards non-adversarial robustness in deep learning for computer vision. We find that this area of research has received disproportionately little attention relative to adversarial machine learning, yet a significant robustness gap exists that often manifests in performance degradation similar in magnitude to that observed under adversarial conditions. To provide a more transparent definition of robustness across contexts, we introduce a structural causal model of the data-generating process and interpret non-adversarial robustness as pertaining to a model's behavior on corrupted images that correspond to low-probability samples from the unaltered data distribution. We then identify key architecture, data augmentation, and optimization tactics for improving neural network robustness. This causal view of robustness reveals that common practices in the current literature, both with respect to robustness tactics and evaluations, correspond to causal concepts such as soft interventions that result in a counterfactually altered distribution of imaging conditions. Through our findings and analysis, we offer perspectives on how future research may mind this evident and significant non-adversarial robustness gap.
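To make the notion of a non-adversarial robustness gap concrete, the following is a minimal sketch (not the authors' method) of measuring the drop in accuracy between clean images and naturally corrupted variants of the same images. The model, data loader, and corruption severity are illustrative placeholders; additive Gaussian noise stands in for the broader family of naturally occurring corruptions discussed in the paper.

```python
# Hypothetical sketch: quantify a non-adversarial robustness "gap" as the drop in
# accuracy between clean inputs and naturally corrupted variants of the same inputs.
# `model` and `loader` are assumed placeholders (any classifier and DataLoader of
# (image, label) batches with pixel values in [0, 1]).
import torch


def gaussian_noise(images, sigma):
    # Additive Gaussian noise, clipped back to the valid [0, 1] image range.
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)


@torch.no_grad()
def accuracy(model, loader, corruption=None):
    # Top-1 accuracy, optionally applying a corruption to each batch first.
    correct, total = 0, 0
    for images, labels in loader:
        if corruption is not None:
            images = corruption(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


def robustness_gap(model, loader, sigma=0.1):
    # Larger gap = larger performance degradation under this natural corruption.
    clean = accuracy(model, loader)
    corrupted = accuracy(model, loader, corruption=lambda x: gaussian_noise(x, sigma))
    return clean - corrupted
```

The same corruption function could instead be applied during training as a data augmentation tactic; in the paper's causal framing, doing so acts like a soft intervention on the imaging-condition variables of the data-generating process.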


