Improving Viewpoint Robustness for Visual Recognition via Adversarial Training

07/21/2023
by Shouwei Ruan, et al.

Viewpoint invariance remains challenging for visual recognition in the 3D world, as changing the viewing direction can significantly alter predictions for the same object. While substantial effort has been devoted to making neural networks invariant to 2D image translations and rotations, viewpoint invariance is rarely investigated. Motivated by the success of adversarial training in enhancing model robustness, we propose Viewpoint-Invariant Adversarial Training (VIAT) to improve the viewpoint robustness of image classifiers. Regarding viewpoint transformations as attacks, we formulate VIAT as a minimax optimization problem: the inner maximization characterizes diverse adversarial viewpoints by learning a Gaussian mixture distribution with the proposed attack method GMVFool, while the outer minimization obtains a viewpoint-invariant classifier by minimizing the expected loss over the worst-case viewpoint distributions, which can be shared among different objects within the same category. Based on GMVFool, we contribute a large-scale dataset called ImageNet-V+ to benchmark viewpoint robustness. Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers, owing to the diversity of adversarial viewpoints generated by GMVFool. Furthermore, we propose ViewRS, a certified viewpoint robustness method that provides a certified radius and accuracy, demonstrating the effectiveness of VIAT from a theoretical perspective.
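To make the inner maximization concrete, below is a minimal sketch of sampling candidate viewpoints from a learned Gaussian mixture, as the abstract describes. All names, parameter values, and the (azimuth, elevation) parameterization are illustrative assumptions, not the paper's actual GMVFool implementation, which additionally optimizes the mixture parameters against the classifier's loss:

```python
import numpy as np

def sample_gmm_viewpoints(weights, means, stds, n, seed=0):
    """Draw n viewpoint parameters (here: azimuth/elevation in degrees)
    from a K-component diagonal Gaussian mixture over viewpoint space.
    NOTE: hypothetical helper for illustration, not the paper's code."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize mixture weights
    means = np.asarray(means, dtype=float)       # shape (K, D)
    stds = np.asarray(stds, dtype=float)         # shape (K, D)
    # Pick a mixture component per sample, then draw a Gaussian viewpoint.
    comps = rng.choice(len(weights), size=n, p=weights)
    return means[comps] + stds[comps] * rng.standard_normal((n, means.shape[1]))

# Example: two adversarial viewpoint modes in (azimuth, elevation) space.
views = sample_gmm_viewpoints(
    weights=[0.7, 0.3],
    means=[[45.0, 10.0], [180.0, 30.0]],
    stds=[[5.0, 2.0], [5.0, 2.0]],
    n=1000,
)
print(views.shape)  # (1000, 2)
```

During training, viewpoints drawn this way would be rendered and fed to the classifier, and the outer minimization would update the classifier's weights on the resulting worst-case views.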


