Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension

02/16/2023
by Henry Kvinge, et al.

Prompting has become an important mechanism by which users can more effectively interact with many flavors of foundation model. Indeed, the last several years have shown that well-honed prompts can sometimes unlock emergent capabilities within such models. While there has been substantial empirical exploration of prompting within the community, relatively few works have studied prompting at a mathematical level. In this work we take a first step toward understanding the basic geometric properties induced by prompts in Stable Diffusion, focusing on the intrinsic dimension of internal representations within the model. We find that the choice of prompt has a substantial impact on the intrinsic dimension of representations at both of the model layers we explored, but that the nature of this impact depends on the layer being considered. For example, in certain bottleneck layers of the model, the intrinsic dimension of representations is correlated with prompt perplexity (measured using a surrogate model), while this correlation is not apparent in the latent layers. Our evidence suggests that intrinsic dimension could be a useful tool for future studies of the impact of different prompts on text-to-image models.
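The abstract does not spell out which estimator is used, but the intrinsic dimension of neural activations is commonly measured with nearest-neighbor estimators such as TwoNN (Facco et al., 2017). The sketch below is a minimal, hypothetical illustration of how one might estimate the intrinsic dimension of a batch of flattened model activations; the function name and the scikit-learn dependency are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(X):
    """Estimate intrinsic dimension with the TwoNN estimator (Facco et al., 2017).

    X: (n_samples, n_features) array, e.g. flattened model activations,
       one row per input prompt/image.
    """
    # Distances to the two nearest neighbors; column 0 is the point itself.
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    dists, _ = nn.kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1  # ratio of second- to first-neighbor distance, >= 1
    # Closed-form maximum-likelihood estimate: d = N / sum(log mu_i)
    return len(mu) / np.sum(np.log(mu))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Sanity check: a 2-D plane embedded in 50-D ambient space
    # should yield an estimate close to 2.
    latent = rng.normal(size=(1000, 2))
    X = latent @ rng.normal(size=(2, 50))
    print(twonn_intrinsic_dimension(X))
```

In a study like this one, X would be collected by running many prompts through the model and flattening the activations of a chosen layer, so that the estimate characterizes the representation manifold induced by those prompts.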


Related research

- Intrinsic dimension of data representations in deep neural networks (05/29/2019)
- Memorization in a neural network with adjustable transfer function and conditional gating (03/07/2004)
- Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion (05/04/2023)
- Relating Regularization and Generalization through the Intrinsic Dimension of Activations (11/23/2022)
- The geometry of hidden representations of large transformer models (02/01/2023)
- Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning (12/22/2020)
- On Data-centric Myths (11/22/2021)
