Interpretable Disentangled Parametrization of Measured BRDF with β-VAE

08/08/2022
by   Alexis Benamira, et al.

Finding a low-dimensional parametric representation of measured BRDFs remains challenging. Currently available solutions are either not interpretable, rely on limited analytical models, or require expensive test-subject studies. In this work, we strive to establish a parametrization space that affords the representational variety of data-driven measured-BRDF models while still offering the artistic control of analytical parametric BRDFs. We present a machine learning approach that generates an interpretable, disentangled parameter space. A disentangled representation is one in which each parameter is responsible for a unique generative factor and is insensitive to the factors encoded by the other parameters. To that end, we resort to a β-Variational AutoEncoder (β-VAE), a specific Deep Neural Network (DNN) architecture. After training our network, we analyze the parametrization space and interpret the learned generative factors using visual perception. It should be noted that perceptual analysis is invoked downstream of the system, for interpretation purposes only, whereas most existing methods use it upfront to construct the parametrization; consequently, we require no test-subject investigation. A novel feature of our interpretable disentangled parametrization is the post-processing capability to incorporate new parameters alongside the learned ones, thus expanding the range of producible appearances. Furthermore, our solution allows more flexible and controllable material editing than manifold exploration. Finally, we provide a rendering interface for real-time material editing and interpolation based on the presented parametrization system.
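The abstract does not spell out the training objective, but the β-VAE it builds on uses a standard loss: a reconstruction term plus a KL-divergence term weighted by β, where β > 1 pressures the latent code toward an isotropic Gaussian prior and thereby encourages disentangled factors. Below is a minimal, hypothetical sketch of that objective in numpy (the function name, mean-squared reconstruction term, and toy inputs are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Standard β-VAE objective (illustrative sketch, not the paper's code):
    reconstruction error plus a β-weighted KL divergence between the
    approximate posterior N(mu, sigma^2) and the prior N(0, I)."""
    # Mean squared reconstruction error, summed over features, averaged over the batch
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    # β > 1 trades reconstruction fidelity for disentanglement pressure
    return recon + beta * kl

# Toy check: a perfect reconstruction with a prior-matched latent gives zero loss
x = np.ones((2, 3))
mu = np.zeros((2, 4))
log_var = np.zeros((2, 4))
print(beta_vae_loss(x, x, mu, log_var))  # 0.0
```

Raising β strengthens the prior-matching pressure, which is what drives each latent dimension toward encoding a single generative factor, at some cost in reconstruction quality.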

