Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations

06/21/2021
by   Witold Oleszkiewicz, et al.

Recently introduced self-supervised methods for image representation learning provide results on par with, or superior to, their fully supervised competitors, yet the corresponding efforts to explain these approaches lag behind. Motivated by this observation, we introduce a novel visual probing framework for explaining self-supervised models by leveraging probing tasks previously employed in natural language processing. These probing tasks require knowledge of semantic relationships between image parts; hence, we propose a systematic approach to obtaining analogs of natural language in vision, such as visual words, context, and taxonomy. Our proposal is grounded in Marr's computational theory of vision and concerns features such as textures, shapes, and lines. We demonstrate the effectiveness and applicability of these analogs in the context of explaining self-supervised representations. Our key findings emphasize that relations between language and vision can serve as an effective yet intuitive tool for discovering how machine learning models work, independently of data modality. Our work opens a plethora of research pathways toward more explainable and transparent AI.
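The abstract does not spell out the probing setup, but a common recipe for probing tasks of this kind is to freeze the self-supervised encoder, extract image embeddings, and train a small classifier to predict the property of interest. The sketch below illustrates this under stated assumptions: synthetic feature vectors stand in for real frozen embeddings, and the "visual word" labels (in the paper, derived from clustered image patches) are generated artificially; none of the names or the label construction reflect the authors' actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# --- Hypothetical setup: synthetic stand-ins for real data. ---
# `embeddings` would come from a frozen self-supervised encoder;
# `labels` would encode the probed property, e.g., the dominant
# "visual word" (cluster id of local patches) in each image.
rng = np.random.default_rng(0)
n_images, dim, n_visual_words = 1000, 128, 10

labels = rng.integers(0, n_visual_words, size=n_images)
# Make labels weakly recoverable from the features so the probe has
# signal to find, mimicking a representation that encodes them.
centers = rng.normal(size=(n_visual_words, dim))
embeddings = centers[labels] + rng.normal(scale=2.0, size=(n_images, dim))

# --- The probe: a simple linear classifier on frozen features. ---
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, probe.predict(X_test))
print(f"probe accuracy: {acc:.3f} (chance ≈ {1 / n_visual_words:.3f})")
```

Probe accuracy well above chance is read as evidence that the frozen representation encodes the probed property; in the paper's framework, the visual-word, context, and taxonomy tasks would supply the labels in place of the synthetic ones here.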
