Log-barrier constrained CNNs

04/08/2019
by Hoel Kervadec et al.

This study investigates imposing inequality constraints on the outputs of CNNs for weakly supervised segmentation. In the general context of deep networks, constraints are commonly handled with penalty approaches due to their simplicity, despite their well-known limitations. Lagrangian optimization has well-established theoretical and practical advantages over penalty methods, but has been largely avoided for deep CNNs, mainly because of the computational complexity and stability/convergence issues caused by alternating stochastic optimization and dual updates. Several recent studies showed that, in the context of deep CNNs, the theoretical advantages of Lagrangian optimization over simple penalties do not materialize in practice, with surprisingly worse performance. We leverage well-established concepts in interior-point methods, which approximate Lagrangian optimization with a sequence of unconstrained problems, while completely avoiding dual steps/projections. Specifically, we propose a sequence of unconstrained log-barrier-extension losses for approximating inequality-constrained CNN problems. The proposed extension has a duality-gap bound, which yields sub-optimality certificates for feasible solutions in the case of convex losses. While sub-optimality is not guaranteed for non-convex problems, the result shows that log-barrier extensions are a principled way to approximate Lagrangian optimization for constrained CNNs. Our approach addresses the well-known limitations of penalty methods and, at the same time, removes the explicit dual steps of Lagrangian optimization. We report comprehensive experiments showing that our formulation outperforms a recent penalty-based constrained CNN method, in terms of both accuracy and training stability.
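To make the idea concrete, below is a minimal NumPy sketch of a log-barrier extension of the kind the abstract describes, for a single inequality constraint z <= 0. The function name, the default value of the barrier parameter t, and the use of NumPy rather than an autodiff framework are illustrative choices, not the authors' implementation. The key property is that for strictly feasible points the function matches the classical interior-point barrier -(1/t)*log(-z), while past a switch point it continues as the tangent line, so the loss stays finite and differentiable even for infeasible points produced by a stochastically trained network.

```python
import numpy as np

def log_barrier_extension(z, t=5.0):
    """Smooth, everywhere-defined approximation of the indicator of z <= 0.

    For z <= -1/t**2 this is the standard log barrier -(1/t)*log(-z);
    for larger z it is extended linearly with matching value and slope,
    so the function is C^1 and defined for infeasible points (z > 0).
    Larger t makes the approximation tighter, mimicking the increasing
    barrier-parameter schedule of interior-point methods.
    """
    z = np.asarray(z, dtype=float)
    threshold = -1.0 / t**2
    # Clamp the log argument so the barrier branch never sees z > threshold.
    barrier = -np.log(-np.minimum(z, threshold)) / t
    # Tangent line at z = threshold: slope t, matching value at the switch.
    linear = t * z - np.log(1.0 / t**2) / t + 1.0 / t
    return np.where(z <= threshold, barrier, linear)
```

In a training loop, such a term would be summed over the network's constraints and added to the main segmentation loss, with t raised over epochs so the sequence of unconstrained problems approaches the constrained one, and no dual variables or projections are ever maintained.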


Related research

05/12/2018
Constrained-CNN losses for weakly supervised segmentation
Weak supervision, e.g., in the form of partial labels or image tags, is ...

08/25/2021
A New Insight on Augmented Lagrangian Method and Its Extensions
Motivated by the recent work [He-Yuan, Balanced Augmented Lagrangian Met...

08/08/2019
Constrained domain adaptation for segmentation
We propose to adapt segmentation networks with a constrained formulation...

12/16/2019
Convergence Analysis of Penalty Based Numerical Methods for Constrained Inequality Problems
This paper presents a general convergence theory of penalty based numeri...

09/15/2020
Training neural networks under physical constraints using a stochastic augmented Lagrangian approach
We investigate the physics-constrained training of an encoder-decoder ne...

03/08/2021
Constrained Learning with Non-Convex Losses
Though learning has become a core technology of modern information proce...

01/26/2020
A Lagrangian Dual Framework for Deep Neural Networks with Constraints
A variety of computationally challenging constrained optimization proble...
