Causal Attention for Unbiased Visual Recognition

08/19/2021
by Tan Wang, et al.

The attention module does not always help deep models learn causal features that are robust in any confounding context, e.g., a foreground object feature that is invariant to different backgrounds. This is because confounders trick the attention into capturing spurious correlations that benefit prediction when the training and testing data are IID (independent and identically distributed), but harm prediction when the data are OOD (out-of-distribution). The fundamental solution for learning causal attention is causal intervention, which requires additional annotations of the confounders: e.g., a "dog" model is learned within the "grass+dog" and "road+dog" contexts separately, so the "grass" and "road" contexts no longer confound the "dog" recognition. However, such annotation is not only prohibitively expensive but also inherently problematic, as the confounders are elusive in nature. In this paper, we propose a causal attention module (CaaM) that self-annotates the confounders in an unsupervised fashion. In particular, multiple CaaMs can be stacked and integrated into conventional attention CNNs and self-attention Vision Transformers. In OOD settings, deep models with CaaM significantly outperform those without it; even in IID settings, CaaM also improves attention localization, showing great potential for applications that require robust visual saliency. Code is available at <https://github.com/Wangt-CN/CaaM>.
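The causal intervention the abstract describes corresponds to backdoor adjustment: instead of modeling P(Y|X), one estimates P(Y|do(X)) = Σ_t P(Y|X, T=t) P(T=t), where T is the confounding context (e.g., "grass" vs. "road"). A minimal sketch of this idea, assuming the confounder labels are given (which CaaM itself avoids by self-annotation), trains one simple classifier per context and averages their predictions by the context prior. All function names and the toy data here are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_context_classifier(X, y, n_classes, lr=0.1, steps=300):
    """Logistic-regression stand-in for a per-context model, e.g. 'grass+dog'."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)   # gradient step on cross-entropy
    return W

def intervened_predict(X, models, priors):
    """Backdoor adjustment: average per-context P(Y|X, T=t) weighted by P(T=t)."""
    probs = sum(p * softmax(X @ W) for W, p in zip(models, priors))
    return probs.argmax(axis=-1)

rng = np.random.default_rng(0)

def make_split(n, context_shift):
    """Toy data: dim 0 carries the causal class signal; dim 2 carries a
    context-dependent spurious cue whose direction flips across contexts."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 3))
    X[:, 0] += 2.0 * y                 # causal (context-invariant) feature
    X[:, 2] += context_shift * y       # spurious, confounded feature
    return X, y

# Two training contexts with opposite spurious correlations.
(X0, y0), (X1, y1) = make_split(300, 3.0), make_split(300, -3.0)
models = [fit_context_classifier(X0, y0, 2), fit_context_classifier(X1, y1, 2)]
priors = [0.5, 0.5]                    # uniform P(T=t)

# OOD test split: the spurious cue is absent, only the causal feature remains.
X_test, y_test = make_split(300, 0.0)
acc = (intervened_predict(X_test, models, priors) == y_test).mean()
```

Averaging over the per-context models lets their opposite-signed reliance on the spurious dimension roughly cancel, while their shared reliance on the causal dimension adds up, which is why the intervened predictor holds up on the OOD split. CaaM's contribution is to learn this partition into contexts without the confounder annotations assumed above.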


Related research

- Causal Attention for Vision-Language Tasks (03/05/2021)
- Causal Transportability for Visual Recognition (04/26/2022)
- Conformer: Local Features Coupling Global Representations for Visual Recognition (05/09/2021)
- TranSalNet: Towards perceptually relevant visual saliency prediction (10/07/2021)
- Weakly Supervised Object Localization via Transformer with Implicit Spatial Calibration (07/21/2022)
- Glimpse-Attend-and-Explore: Self-Attention for Active Visual Exploration (08/26/2021)
- Meta-Causal Feature Learning for Out-of-Distribution Generalization (08/22/2022)
