Addressing the Topological Defects of Disentanglement via Distributed Operators

02/10/2021 · by Diane Bouchacourt et al.

A core challenge in machine learning is to learn to disentangle natural factors of variation in data (e.g., object shape vs. pose). A popular approach to disentanglement maps each of these factors to a distinct subspace of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that, for a broad family of transformations acting on images, encompassing simple affine transformations such as rotations and translations, this approach to disentanglement introduces topological defects (i.e., discontinuities in the encoder). Motivated by classical results from group representation theory, we study an alternative, more flexible approach to disentanglement that relies on distributed latent operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of this approach at disentangling affine transformations. Our work lays a theoretical foundation for the recent success of a new generation of models using distributed operators for disentanglement.
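To make the contrast concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation) of a distributed latent operator for image rotation. Classical representation theory says that any finite-dimensional real representation of the rotation group SO(2) decomposes into 2×2 rotation blocks acting at integer frequencies, so the hypothetical operator below rotates pairs of latent coordinates by integer multiples of the image-rotation angle. Unlike shifting a single "angle" coordinate, which must jump somewhere because the circle of rotations cannot be continuously and invertibly mapped onto a line, this operator is continuous in the angle and returns to the identity after a full turn.

```python
import numpy as np

def distributed_rotation_operator(z: np.ndarray, theta: float) -> np.ndarray:
    """Rotate the k-th pair of latent coordinates by angle k * theta.

    Every coordinate pair participates (the frequency-0 pair is left
    invariant), so the operator is 'distributed' over the whole latent
    code rather than confined to a single dedicated subspace.
    """
    assert z.size % 2 == 0, "latent dimension must be even"
    out = np.empty_like(z)
    for k in range(z.size // 2):
        c, s = np.cos(k * theta), np.sin(k * theta)
        x, y = z[2 * k], z[2 * k + 1]
        out[2 * k] = c * x - s * y
        out[2 * k + 1] = s * x + c * y
    return out

# The operator is continuous in theta and exactly periodic: a full 2*pi
# rotation maps every latent vector back to itself, with no discontinuity.
z = np.random.randn(8)
assert np.allclose(distributed_rotation_operator(z, 2 * np.pi), z)
```

A single-coordinate encoding of the angle cannot satisfy this periodicity check without a discontinuity, which is the kind of topological defect the abstract refers to.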



Code Repositories

Addressing-the-Topological-Defects-of-Disentanglement

Repository reproducing the experimental results in "Addressing the Topological Defects of Disentanglement".

