
Symmetry-Based Disentangled Representation Learning requires Interaction with Environments
Finding a generally accepted formal definition of a disentangled represe...
03/30/2019 ∙ by Hugo Caselles-Dupré, et al.

Invariant-equivariant representation learning for multi-class data
Representations learnt through deep neural networks tend to be highly in...
02/08/2019 ∙ by Ilya Feige, et al.

A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning
Disentangled representations have recently been shown to improve data ef...
05/29/2019 ∙ by Sunny Duan, et al.

Disentangled Representations in Neural Models
Representation learning is the foundation for the recent success of neur...
02/07/2016 ∙ by William Whitney, et al.

Towards Learning Fine-Grained Disentangled Representations from Speech
Learning disentangled representations of high-dimensional data is curren...
08/08/2018 ∙ by Yuan Gong, et al.

Unsupervised Disentangled Representation Learning with Analogical Relations
Learning the disentangled representation of interpretable generative fac...
04/25/2018 ∙ by Zejian Li, et al.

A Quantum Field Theory of Representation Learning
Continuous symmetries and their breaking play a prominent role in contem...
07/04/2019 ∙ by Robert Bamler, et al.
Towards a Definition of Disentangled Representations
How can intelligent agents solve a diverse set of tasks in a data-efficient manner? The disentangled representation learning approach posits that such an agent would benefit from separating out (disentangling) the underlying structure of the world into disjoint parts of its representation. However, there is no generally agreed-upon definition of disentangling, not least because it is unclear how to formalise the notion of world structure beyond toy datasets with a known ground truth generative process. Here we propose that a principled solution to characterising disentangled representations can be found by focusing on the transformation properties of the world. In particular, we suggest that those transformations that change only some properties of the underlying world state, while leaving all other properties invariant, are what gives exploitable structure to any kind of data. Similar ideas have already been successfully applied in physics, where the study of symmetry transformations has revolutionised the understanding of the world structure. By connecting symmetry transformations to vector representations using the formalism of group and representation theory, we arrive at the first formal definition of disentangled representations. Our new definition is in agreement with many of the current intuitions about disentangling, while also providing principled resolutions to a number of previous points of contention. While this work focuses on formally defining disentangling, as opposed to solving the learning problem, we believe that the shift in perspective to studying data transformations can stimulate the development of better representation learning algorithms.
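The abstract's core idea (a representation is disentangled when a product group acting on world states decomposes so that each subgroup acts only on its own subspace of the representation) can be illustrated with a minimal sketch. This is not the paper's construction, just an assumed toy world with two cyclic factors (position and hue, each an integer mod N) and a hypothetical block-structured one-hot encoding, chosen so the equivariance condition can be checked directly:

```python
import numpy as np

# Toy world: a state is (position, hue), each an integer mod N.
# The symmetry group is the direct product Z_N x Z_N: one subgroup
# shifts position, the other shifts hue, independently of each other.
N = 8

def act(g, state):
    """Apply group element g = (dp, dh) to a world state (p, h)."""
    dp, dh = g
    p, h = state
    return ((p + dp) % N, (h + dh) % N)

def represent(state):
    """A hypothetical disentangled code: one one-hot block per factor."""
    p, h = state
    z = np.zeros(2 * N)
    z[p] = 1.0       # position block
    z[N + h] = 1.0   # hue block
    return z

def act_on_representation(g, z):
    """Each subgroup acts only on its own block (a cyclic shift)."""
    dp, dh = g
    z_p = np.roll(z[:N], dp)   # position subgroup touches block 1 only
    z_h = np.roll(z[N:], dh)   # hue subgroup touches block 2 only
    return np.concatenate([z_p, z_h])

# Equivariance: acting in the world and then encoding gives the same
# vector as encoding first and acting on the representation.
state, g = (2, 5), (3, 1)
assert np.allclose(represent(act(g, state)),
                   act_on_representation(g, represent(state)))
```

Because each subgroup's action never leaves its own block, changing position leaves the hue block invariant and vice versa, which is exactly the transformation property the abstract identifies as exploitable structure.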