Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias

01/09/2020
by Krishna Kumar Singh, et al.

Existing models often leverage co-occurrences between objects and their context to improve recognition accuracy. However, relying too strongly on context risks hurting a model's ability to generalize, especially when the typical co-occurrence patterns are absent. This work addresses such contextual biases to improve the robustness of the learned feature representations. Our goal is to accurately recognize a category in the absence of its context, without compromising performance when it co-occurs with context. Our key idea is to decorrelate the feature representation of a category from that of its co-occurring context. We achieve this by learning a feature subspace that explicitly represents categories occurring in the absence of context, alongside a joint feature subspace that represents both categories and context. Our simple yet effective method applies to two multi-label tasks: object and attribute classification. On four challenging datasets, we demonstrate its effectiveness in reducing contextual bias.
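
To make the two-subspace idea concrete, here is a minimal PyTorch-style sketch. The class name TwoSubspaceHead, the dimensions, and the way the feature vector is split are illustrative assumptions, not the paper's implementation; the paper's exact formulation (including how the subspaces are learned and decorrelated) is given in the full text.

```python
import torch
import torch.nn as nn

class TwoSubspaceHead(nn.Module):
    """Illustrative sketch (not the paper's exact method): a pooled backbone
    feature is divided into a subspace meant to represent a category without
    its usual context, alongside the joint representation of category and
    context, each with its own multi-label classifier."""

    def __init__(self, feat_dim: int = 2048, num_classes: int = 80, split: int = 1024):
        super().__init__()
        self.split = split
        # Classifier on the context-free subspace only.
        self.fc_no_context = nn.Linear(split, num_classes)
        # Classifier on the joint (category + context) representation.
        self.fc_joint = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor):
        # feats: (batch, feat_dim) pooled features from any backbone.
        logits_no_context = self.fc_no_context(feats[:, : self.split])
        logits_joint = self.fc_joint(feats)
        return logits_no_context, logits_joint

# Usage sketch: both heads receive a multi-label (BCE) loss; the paper further
# specifies which images supervise which head and how the subspaces are
# decorrelated.
head = TwoSubspaceHead()
feats = torch.randn(8, 2048)
targets = torch.randint(0, 2, (8, 80)).float()
logits_nc, logits_j = head(feats)
loss = nn.BCEWithLogitsLoss()(logits_nc, targets) + nn.BCEWithLogitsLoss()(logits_j, targets)
```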

Related research

[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias (04/28/2021)
Singh et al. (2020) point out the dangers of contextual bias in visual r...

Analysing object detectors from the perspective of co-occurring object categories (09/21/2018)
The accuracy of state-of-the-art Faster R-CNN and YOLO object detectors ...

Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias (10/20/2021)
Contextual information is a valuable cue for Deep Neural Networks (DNNs)...

Category Query Learning for Human-Object Interaction Classification (03/24/2023)
Unlike most previous HOI methods that focus on learning better human-obj...

Boosting Multi-Label Image Classification with Complementary Parallel Self-Distillation (05/23/2022)
Multi-Label Image Classification (MLIC) approaches usually exploit label...

Examining CNN Representations with respect to Dataset Bias (10/29/2017)
Given a pre-trained CNN without any testing samples, this paper proposes...

Structure-Regularized Attention for Deformable Object Representation (06/12/2021)
Capturing contextual dependencies has proven useful to improve the repre...
