NICEST: Noisy Label Correction and Training for Robust Scene Graph Generation

07/27/2022
by   Lin Li, et al.

Nearly all existing scene graph generation (SGG) models overlook the annotation quality of mainstream SGG datasets; i.e., they assume that 1) all manually annotated positive samples are equally correct, and 2) all un-annotated negative samples are truly background. In this paper, we argue that neither assumption holds for SGG: numerous noisy ground-truth predicate labels violate both assumptions and harm the training of unbiased SGG models. To this end, we propose NICEST, a novel NoIsy label CorrEction and Sample Training strategy for SGG. It consists of two parts: NICE, which mitigates the noisy-label problem by generating high-quality samples, and NIST, an effective training strategy. NICE first detects noisy samples and then reassigns them higher-quality soft predicate labels. NIST is a multi-teacher knowledge-distillation-based training strategy that enables the model to learn unbiased knowledge fused from multiple teachers; a dynamic trade-off weighting strategy in NIST penalizes the bias of each teacher. Because both NICE and NIST are model-agnostic, NICEST can be seamlessly incorporated into any SGG architecture to boost its performance across different predicate categories. In addition, to better evaluate the generalization of SGG models, we propose a new benchmark, VG-OOD, which re-organizes the prevalent VG dataset so that, for each subject-object category pair, the predicate distributions of the training and test sets differ as much as possible. This new benchmark helps disentangle the influence of subject-object-category-based frequency biases. Extensive ablations and results on different backbones and tasks attest to the effectiveness and generalization ability of each component of NICEST.
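To make the NIST idea concrete, the sketch below shows a generic multi-teacher knowledge-distillation objective: the student's softened predictions are pulled toward each teacher's softened predictions, with a per-teacher trade-off weight that can down-weight more biased teachers. This is a minimal illustration of the general technique, not the paper's implementation; the function names and the fact that the weights are passed in as given (rather than computed dynamically during training, as NICEST does) are assumptions for the example.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (eps avoids log 0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def multi_teacher_kd_loss(student_logits, teacher_logits_list,
                          teacher_weights, temperature=2.0):
    """Weighted sum of KL terms between each teacher and the student.

    `teacher_weights` plays the role of a trade-off weighting: a smaller
    weight reduces the influence of a teacher judged to be more biased.
    In NICEST these weights are adjusted dynamically; here they are fixed
    inputs for illustration.
    """
    student_probs = softmax(student_logits, temperature)
    loss = 0.0
    for logits, w in zip(teacher_logits_list, teacher_weights):
        teacher_probs = softmax(logits, temperature)
        loss += w * kl_divergence(teacher_probs, student_probs)
    return loss
```

When every teacher agrees with the student, each KL term vanishes and the loss is zero; a teacher whose distribution diverges from the student's contributes in proportion to its weight, which is the lever the dynamic weighting strategy operates on.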

