Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning

10/27/2020
by Iro Laina, et al.

The increasing impact of black-box models, and particularly of unsupervised ones, comes with a growing interest in tools to understand and interpret them. In this paper, we consider in particular how to characterise visual groupings discovered automatically by deep neural networks, starting from state-of-the-art clustering methods. In some cases, clusters readily correspond to an existing labelled dataset; often, however, they do not, yet they still maintain an "intuitive interpretability". We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings, including unsupervised ones. The idea is to measure (1) how well humans can learn to reproduce a grouping, measured by their ability to generalise from a small set of visual examples (learnability), and (2) whether the set of visual examples can be replaced by a succinct textual description (describability). By assessing human annotators as classifiers, we remove the subjective quality of existing evaluation metrics. Finally, for better scalability, we propose a class-level captioning system that generates descriptions for visual groupings automatically and compare it to human annotators using the describability metric.
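To make the two metrics concrete, here is a minimal Python sketch of how such scores could be computed. The `annotator` callable, the forced-choice protocol, and the sample sizes are illustrative assumptions standing in for a human judgment, not the paper's exact experimental design:

```python
import random
from statistics import mean

def learnability_score(positives, negatives, annotator, n_train=10, n_test=20):
    """Treat a human annotator as a binary classifier: show n_train
    in-cluster examples, then measure accuracy on a held-out test set
    balanced between in-cluster images and distractors."""
    pool = random.sample(positives, n_train + n_test // 2)
    train, held_out = pool[:n_train], pool[n_train:]
    distractors = random.sample(negatives, n_test // 2)
    test = [(img, True) for img in held_out] + [(img, False) for img in distractors]
    random.shuffle(test)
    # annotator(train, img) -> bool: "does img belong to the same group?"
    return mean(float(annotator(train, img) == label) for img, label in test)

def describability_score(description, positives, negatives, annotator, n_test=20):
    """Same protocol, but the visual training examples are replaced by a
    succinct textual description of the grouping."""
    test = ([(img, True) for img in random.sample(positives, n_test // 2)]
            + [(img, False) for img in random.sample(negatives, n_test // 2)])
    random.shuffle(test)
    # annotator(description, img) -> bool: "does img match the description?"
    return mean(float(annotator(description, img) == label) for img, label in test)
```

In both cases the score is simply classification accuracy on a balanced test set, so a grouping counts as more interpretable the closer human performance gets to 1.0, with chance level (0.5) marking a grouping that cannot be learned or described.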
