Two-Player Games for Efficient Non-Convex Constrained Optimization

04/17/2018
by   Andrew Cotter, et al.

In recent years, constrained optimization has become increasingly relevant to the machine learning community, with applications including Neyman-Pearson classification, robust optimization, and fair machine learning. A natural approach to constrained optimization is to optimize the Lagrangian, but this is not guaranteed to work in the non-convex setting. Instead, we prove that, given a Bayesian optimization oracle, a modified Lagrangian approach can be used to find a distribution over no more than m+1 models (where m is the number of constraints) that is nearly optimal and nearly feasible with respect to the original constrained problem. Interestingly, our method extends to non-differentiable, and even discontinuous, constraints (for which assuming a Bayesian optimization oracle is unrealistic) by viewing constrained optimization as a non-zero-sum two-player game: the first player minimizes external regret in terms of easy-to-optimize "proxy constraints," while the second player enforces the original constraints by minimizing swap regret.
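To make the two-player structure concrete, below is a minimal sketch of the proxy-Lagrangian idea, not the authors' actual algorithm. Everything here is an illustrative assumption: the toy objective, the discontinuous true constraint, the differentiable proxy surrogate, and the step sizes are all hypothetical, and the second player is implemented with simple projected gradient ascent on the multiplier as a stand-in for the swap-regret minimizer described in the abstract.

```python
# Illustrative sketch only: a two-player proxy-Lagrangian loop.
# Player 1 descends on a Lagrangian built from a DIFFERENTIABLE proxy
# constraint; player 2 updates the multiplier using the TRUE (here
# discontinuous) constraint. All names and hyperparameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def toy_loss(theta):
    # Hypothetical objective to minimize.
    return float(np.sum((theta - 1.0) ** 2))

def true_constraint(theta):
    # Original constraint g(theta) <= 0; piecewise constant, hence
    # discontinuous and useless for gradient-based player 1.
    return float(np.mean(theta > 0.5) - 0.4)

def proxy_constraint(theta):
    # Hinge-like differentiable upper bound on the indicator 1[theta > 0.5],
    # used only by player 1.
    return float(np.mean(np.clip(theta + 0.5, 0.0, 1.0)) - 0.4)

def grad(f, theta, eps=1e-5):
    # Central-difference numerical gradient, to keep the sketch
    # dependency-free.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta = rng.normal(size=5)
lam = 0.0                      # multiplier for the single constraint (m = 1)
eta_theta, eta_lam = 0.05, 0.5
iterates = []

for t in range(200):
    # Player 1: gradient step on the proxy-Lagrangian.
    def proxy_lagrangian(th):
        return toy_loss(th) + lam * proxy_constraint(th)
    theta = theta - eta_theta * grad(proxy_lagrangian, theta)

    # Player 2: projected ascent on the multiplier using the TRUE constraint
    # (a crude stand-in for the paper's swap-regret-minimizing player).
    lam = max(0.0, lam + eta_lam * true_constraint(theta))
    iterates.append(theta.copy())
```

Note that the guarantee stated in the abstract is for a distribution over at most m+1 of the saved iterates, not for the final iterate; selecting that distribution, and the swap-regret machinery behind the second player, are what the full paper develops.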
