Zero Grads Ever Given: Learning Local Surrogate Losses for Non-Differentiable Graphics

08/10/2023
by Michael Fischer et al.

Gradient-based optimization is now ubiquitous across graphics, but unfortunately it cannot be applied to problems with undefined or zero gradients. To circumvent this issue, the loss function can be manually replaced by a "surrogate" that has similar minima but is differentiable. Our proposed framework, ZeroGrads, automates this process by learning a neural approximation of the objective function, the surrogate, which in turn can be used to differentiate through arbitrary black-box graphics pipelines. We train the surrogate on an actively smoothed version of the objective and encourage locality, focusing the surrogate's capacity on what matters at the current training episode. The fitting is performed online, alongside the parameter optimization, and is self-supervised, requiring no pre-computed data or pre-trained models. As sampling the objective is expensive (it requires a full rendering or simulator run), we devise an efficient sampling scheme that allows for tractable run-times and competitive performance at little overhead. We demonstrate optimizing diverse non-convex, non-differentiable black-box problems in graphics, such as visibility in rendering, discrete parameter spaces in procedural modelling, and optimal control in physics-driven animation. In contrast to more traditional algorithms, our approach scales well to higher dimensions, which we demonstrate on problems with up to 35k interlinked variables.
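The core idea, fitting a local surrogate to smoothed samples of a black-box objective and descending the surrogate's gradient instead of the (zero or undefined) true gradient, can be illustrated with a minimal sketch. This is not the paper's implementation: it swaps the neural surrogate for a local quadratic fit and uses a toy piecewise-constant objective standing in for a rendering loss; the function names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy non-differentiable black box: a piecewise-constant "staircase"
    # loss with its minimum at x = 2. Its gradient is zero almost everywhere,
    # so plain gradient descent cannot make progress on it.
    return float(np.floor(4.0 * abs(x - 2.0)))

def fit_local_surrogate(x, sigma=0.3, n=64):
    # Sample the objective around the current parameter with Gaussian
    # perturbations -- an implicit smoothing of the loss landscape -- and
    # fit a local quadratic surrogate y ~ a*x^2 + b*x + c by least squares.
    xs = x + sigma * rng.standard_normal(n)
    ys = np.array([objective(v) for v in xs])
    A = np.stack([xs**2, xs, np.ones(n)], axis=1)
    a, b, _ = np.linalg.lstsq(A, ys, rcond=None)[0]
    return a, b

# Optimize the black box by following the surrogate's gradient,
# refitting the surrogate online at every step.
x, lr = 0.0, 0.1
for _ in range(200):
    a, b = fit_local_surrogate(x)
    grad = 2.0 * a * x + b   # analytic gradient of the surrogate at x
    x -= lr * grad

print(round(x, 1))
```

The printed value settles near the true minimizer x = 2 despite the objective having zero gradient almost everywhere; the smoothing scale `sigma` plays the role of the locality control described in the abstract.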

