Interpretable Disentangled Parametrization of Measured BRDF with β-VAE

08/08/2022
by Alexis Benamira, et al.

Finding a low-dimensional parametric representation of measured BRDFs remains challenging. Currently available solutions are either not interpretable, rely on limited analytical models, or require expensive test-subject investigations. In this work, we strive to establish a parametrization space that combines the data-driven expressiveness of measured BRDF models with the artistic control of analytical parametric BRDFs. We present a machine learning approach that generates an interpretable, disentangled parameter space. A disentangled representation is one in which each parameter captures a unique generative factor and is insensitive to the factors encoded by the other parameters. To that end, we employ a β-Variational AutoEncoder (β-VAE), a specific Deep Neural Network (DNN) architecture. After training our network, we analyze the parametrization space and interpret the learned generative factors through visual perception. Notably, perceptual analysis is applied downstream of the system, purely for interpretation, whereas most existing methods use it upfront to construct the parametrization. In addition, we do not require a test-subject investigation. A novel feature of our interpretable disentangled parametrization is that new parameters can be incorporated alongside the learned ones in post-processing, expanding the richness of producible appearances. Furthermore, our solution allows more flexible and controllable material editing than manifold exploration. Finally, we provide a rendering interface for real-time material editing and interpolation based on the presented parametrization system.
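For context on the disentanglement mechanism the abstract refers to, below is a minimal PyTorch-style sketch of the standard β-VAE objective (reconstruction term plus a β-weighted KL divergence). The function and argument names are illustrative assumptions, not taken from the paper, and the reconstruction term is shown as a simple MSE for clarity.

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(recon_brdf, target_brdf, mu, log_var, beta=4.0):
        """Standard beta-VAE objective: reconstruction + beta * KL.

        A beta > 1 puts extra pressure on the KL term, which is what
        encourages the latent dimensions to disentangle. Names here
        (recon_brdf, target_brdf, beta value) are hypothetical.
        """
        # Reconstruction error between decoded and measured BRDF samples
        recon_loss = F.mse_loss(recon_brdf, target_brdf, reduction="sum")
        # KL divergence between the approximate posterior N(mu, sigma^2)
        # and the standard normal prior N(0, I), in closed form
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon_loss + beta * kl

Each latent dimension of the trained encoder then plays the role of one interpretable parameter, which is what the post-training perceptual analysis described above inspects.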
