Toward a Visual Concept Vocabulary for GAN Latent Space

10/08/2021
by Sarah Schwettmann, et al.

A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images. But existing techniques for identifying these transformations rely on either a fixed vocabulary of pre-specified visual concepts, or on unsupervised disentanglement techniques whose alignment with human judgments about perceptual salience is unknown. This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space. Our approach is built from three components: (1) automatic identification of perceptually salient directions based on their layer selectivity; (2) human annotation of these directions with free-form, compositional natural language descriptions; and (3) decomposition of these annotations into a visual concept vocabulary, consisting of distilled directions labeled with single words. Experiments show that concepts learned with our approach are reliable and composable – generalizing across classes, contexts, and observers, and enabling fine-grained manipulation of image style and content.
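The basic operation underlying such a vocabulary, shifting a latent code along labeled concept directions so that edits compose additively, can be sketched as follows. This is an illustrative example only, not the paper's implementation: the direction vectors, concept labels, and function names are hypothetical stand-ins.

```python
import numpy as np

def apply_concepts(z, directions, strengths):
    """Shift a latent code z along labeled concept directions.

    z:          (d,) latent vector
    directions: dict mapping a concept word to a direction vector of shape (d,)
    strengths:  dict mapping a concept word to a scalar edit strength
    Returns an edited latent code; edits for different concepts add up.
    """
    z_edit = z.copy()
    for word, alpha in strengths.items():
        d = directions[word]
        # Normalize each direction so strengths are comparable across concepts.
        z_edit += alpha * d / np.linalg.norm(d)
    return z_edit

# Toy example in a 4-dimensional latent space with made-up concept directions.
rng = np.random.default_rng(0)
z = rng.standard_normal(4)
vocab = {
    "snowy": np.array([1.0, 0.0, 0.0, 0.0]),
    "bright": np.array([0.0, 1.0, 0.0, 0.0]),
}
z_new = apply_concepts(z, vocab, {"snowy": 2.0, "bright": -1.0})
```

In practice the edited code `z_new` would be fed back through the GAN's generator to render the manipulated image; here the generator is omitted to keep the sketch self-contained.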


