Learning with Difference Attention for Visually Grounded Self-supervised Representations

06/26/2023
by   Aishwarya Agarwal, et al.

Recent works in self-supervised learning have shown impressive results on single-object images, but they struggle to perform well on complex multi-object images as evidenced by their poor visual grounding. To demonstrate this concretely, we propose visual difference attention (VDA) to compute visual attention maps in an unsupervised fashion by comparing an image with its salient-regions-masked-out version. We use VDA to derive attention maps for state-of-the-art SSL methods and show that they do not highlight all salient regions in an image accurately, suggesting their inability to learn strong representations for downstream tasks like segmentation. Motivated by these limitations, we cast VDA as a differentiable operation and propose a new learning objective, Differentiable Difference Attention (DiDA) loss, which leads to substantial improvements in an SSL model's visual grounding to an image's salient regions.
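The core idea of difference attention, as described in the abstract, is to compare an image's features with those of its salient-regions-masked-out version: locations whose features change most when salient content is removed are the salient ones. The sketch below is a minimal illustration of that comparison, not the paper's exact formulation; the per-location L2 distance, the min-max normalization, and the toy feature maps are all assumptions for illustration.

```python
import numpy as np

def difference_attention(feats_full, feats_masked):
    """Sketch of visual difference attention: per-location distance between
    features of the full image and of its masked version (shapes (H, W, C)).
    Locations whose features change most under masking get high attention."""
    diff = np.linalg.norm(feats_full - feats_masked, axis=-1)  # (H, W)
    # min-max normalize to [0, 1] to form an attention map (assumed choice)
    diff = diff - diff.min()
    return diff / (diff.max() + 1e-8)

# Toy example: masking alters features only in the top-left quadrant,
# simulating salient content that was masked out there.
rng = np.random.default_rng(0)
full = rng.normal(size=(8, 8, 16))
masked = full.copy()
masked[:4, :4] += 1.0

attn = difference_attention(full, masked)
print(attn.shape)                                  # (8, 8)
print(attn[:4, :4].mean() > attn[4:, 4:].mean())   # True: masked region attended
```

Because every step here is differentiable, the same map could serve as a training signal, which is the intuition behind casting VDA as a loss (DiDA) in the abstract.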

research
12/08/2020

CASTing Your Model: Learning to Localize Improves Self-Supervised Representations

Recent advances in self-supervised learning (SSL) have largely closed th...
research
12/02/2017

Improving Visually Grounded Sentence Representations with Self-Attention

Sentence representation models trained only on language could potentiall...
research
05/19/2023

Syllable Discovery and Cross-Lingual Generalization in a Visually Grounded, Self-Supervised Speech Model

In this paper, we show that representations capturing syllabic units eme...
research
04/03/2023

Multi-Modal Representation Learning with Text-Driven Soft Masks

We propose a visual-linguistic representation learning approach within a...
research
04/20/2023

Movie Box Office Prediction With Self-Supervised and Visually Grounded Pretraining

Investments in movie production are associated with a high level of risk...
research
03/28/2022

Word Discovery in Visually Grounded, Self-Supervised Speech Models

We present a method for visually-grounded spoken term discovery. After t...
research
02/16/2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision

Discriminative self-supervised learning allows training models on any ra...
