Interventional Robustness of Deep Latent Variable Models

10/31/2018
by Raphael Suter, et al.

The ability to learn disentangled representations that separate the underlying sources of variation in high-dimensional, unstructured data is of central importance for data-efficient and robust use of neural networks. Many approaches toward this goal have recently been proposed, which makes validating existing work a crucial task for guiding further development. Previous validation methods focused on the shared information between generative factors and learned features; the effects of rare events or of cumulative influences from multiple factors on encodings, however, remain uncaptured. Our experiments show that this already becomes noticeable in a simple, noise-free dataset. We therefore introduce the interventional robustness score, which quantitatively evaluates the robustness of learned representations with respect to interventions on generative factors and changing nuisance factors. We show how this score can be estimated from labeled observational data, which may be confounded, and further provide an efficient algorithm that scales linearly in the dataset size. The benefits of our causally motivated framework are illustrated in extensive experiments.
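The abstract describes a score that measures how stable a learned encoding stays under interventions on nuisance generative factors. Below is a simplified, hedged sketch of that idea, not the authors' exact estimator: for each value of a target factor, it measures how far the mean encoding drifts across different nuisance-factor settings (smaller drift means more interventionally robust). The function name and the use of the Euclidean norm are illustrative assumptions.

```python
import numpy as np

def interventional_robustness_sketch(latents, gen_factors, target_idx):
    """Simplified IRS-style score (sketch, not the paper's estimator).

    latents:     (N, D) array of learned encodings
    gen_factors: (N, K) array of discrete ground-truth generative factors
    target_idx:  column index of the generative factor of interest

    For each realization of the target factor, compute the mean encoding
    under every setting of the remaining (nuisance) factors and take the
    maximal deviation from the overall mean. Averaging these maxima gives
    a single score; 0 means the encoding ignores nuisance factors entirely.
    A single pass over unique factor settings keeps the cost linear in N.
    """
    scores = []
    for val in np.unique(gen_factors[:, target_idx]):
        mask = gen_factors[:, target_idx] == val
        z = latents[mask]
        # reference encoding: mean over all nuisance settings
        z_ref = z.mean(axis=0)
        # nuisance settings: unique rows of the remaining factor columns
        nuis = np.delete(gen_factors[mask], target_idx, axis=1)
        devs = []
        for row in np.unique(nuis, axis=0):
            sel = (nuis == row).all(axis=1)
            devs.append(np.linalg.norm(z[sel].mean(axis=0) - z_ref))
        scores.append(max(devs))
    return float(np.mean(scores))
```

On a toy grid of two factors, an encoding that copies only the target factor scores 0 (perfectly robust), while one that copies the nuisance factor scores high.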



Related research

Robust Disentanglement of a Few Factors at a Time (10/26/2020)
Disentanglement is at the forefront of unsupervised learning, as disenta...

Leveraging Relational Information for Learning Weakly Disentangled Representations (05/20/2022)
Disentanglement is a difficult property to enforce in neural representat...

Hierarchical Disentangled Representations (04/06/2018)
Deep latent-variable models learn representations of high-dimensional da...

There and back again: Cycle consistency across sets for isolating factors of variation (03/04/2021)
Representational learning hinges on the task of unraveling the set of un...

Disentanglement Learning via Topology (08/24/2023)
We propose TopDis (Topological Disentanglement), a method for learning d...

SI-Score: An image dataset for fine-grained analysis of robustness to object location, rotation and size (04/09/2021)
Before deploying machine learning models it is critical to assess their ...
