Efficient differentiable quadratic programming layers: an ADMM approach

12/14/2021
by Andrew Butler, et al.

Recent advances in neural-network architecture allow for the seamless integration of convex optimization problems as differentiable layers in an end-to-end trainable neural network. Integrating medium- and large-scale quadratic programs into a deep neural network architecture, however, is challenging, as solving quadratic programs exactly by interior-point methods has worst-case cubic complexity in the number of variables. In this paper, we present an alternative network layer architecture based on the alternating direction method of multipliers (ADMM) that is capable of scaling to problems with a moderately large number of variables. Backward differentiation is performed by implicit differentiation of the residual map of a modified fixed-point iteration. Simulation results demonstrate the computational advantage of the ADMM layer, which for medium-scale problems is approximately an order of magnitude faster than the OptNet quadratic programming layer. Furthermore, our novel backward-pass routine is efficient, from both a memory and a computation standpoint, in comparison with the standard approach based on unrolled differentiation or implicit differentiation of the KKT optimality conditions. We conclude with examples from portfolio optimization in the integrated prediction and optimization paradigm.
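
The abstract compresses two ideas that are easy to miss: the forward pass is a plain first-order ADMM solve, and the backward pass implicitly differentiates the residual of the fixed-point iteration rather than unrolling the solver or factorizing the KKT system. Below is a minimal PyTorch sketch of that pattern for the special case of a box-constrained QP, min (1/2) x'Px + q'x subject to lo <= x <= hi; the names (admm_map, BoxQPLayer), the fixed penalty rho, the fixed iteration counts, and the truncated Neumann-series backward solve are all illustrative assumptions, not the authors' implementation.

```python
import torch

def admm_map(z, y, P, q, lo, hi, rho):
    # One ADMM iteration for  min (1/2) x'Px + q'x  s.t.  lo <= x <= hi.
    # The pair (z, y) is the fixed-point state; x is recomputed each step.
    n = q.shape[-1]
    I = torch.eye(n, dtype=P.dtype, device=P.device)
    x = torch.linalg.solve(P + rho * I, rho * z - y - q)  # x-update (linear solve)
    z_new = torch.clamp(x + y / rho, lo, hi)              # z-update (box projection)
    y_new = y + rho * (x - z_new)                         # dual ascent step
    return z_new, y_new

class BoxQPLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, P, q, lo, hi):
        rho, iters = 1.0, 2000  # illustrative fixed settings
        with torch.no_grad():   # no graph is built over the iterations
            z, y = torch.zeros_like(q), torch.zeros_like(q)
            for _ in range(iters):
                z, y = admm_map(z, y, P, q, lo, hi, rho)
        ctx.save_for_backward(P, q, lo, hi, z, y)
        ctx.rho = rho
        return z  # at convergence z = x*, the QP solution

    @staticmethod
    def backward(ctx, grad_out):
        P, q, lo, hi, z_star, y_star = ctx.saved_tensors
        rho = ctx.rho
        # Implicit differentiation of the fixed point (z*, y*) = T(z*, y*; P, q):
        # solve  w = g + J^T w  (J = dT/d(z, y) at the fixed point) with a
        # truncated Neumann series, then one extra VJP gives dL/dP and dL/dq.
        z0 = z_star.detach().requires_grad_(True)
        y0 = y_star.detach().requires_grad_(True)
        P_ = P.detach().requires_grad_(True)
        q_ = q.detach().requires_grad_(True)
        z1, y1 = admm_map(z0, y0, P_, q_, lo, hi, rho)
        wz, wy = grad_out, torch.zeros_like(grad_out)  # g = (dL/dz*, 0)
        for _ in range(100):  # approximates (I - J^T)^{-1} g
            vz, vy = torch.autograd.grad(
                (z1, y1), (z0, y0), (wz, wy), retain_graph=True)
            wz, wy = grad_out + vz, vy
        gP, gq = torch.autograd.grad((z1, y1), (P_, q_), (wz, wy))
        return gP, gq, None, None

# Usage: P must be positive definite; gradients flow back to P and q.
n = 10
L = torch.randn(n, n)
P = (L @ L.T + torch.eye(n)).requires_grad_(True)
q = torch.randn(n, requires_grad=True)
x_star = BoxQPLayer.apply(P, q, -torch.ones(n), torch.ones(n))
x_star.sum().backward()
```

Note the memory profile: the backward pass needs only the converged pair (z*, y*) and vector-Jacobian products through a single ADMM step, so memory is constant in the iteration count, which is the advantage the abstract claims over unrolled differentiation.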

Related research

08/16/2023
SCQPTH: an efficient differentiable splitting method for convex quadratic programming
We present SCQPTH: a differentiable first-order splitting method for con...

03/01/2017
OptNet: Differentiable Optimization as a Layer in Neural Networks
This paper presents OptNet, a network architecture that integrates optim...

10/24/2022
Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation
Differentiable planning promises end-to-end differentiability and adapti...

12/22/2021
A Convergent ADMM Framework for Efficient Neural Network Training
As a well-known optimization framework, the Alternating Direction Method...

05/31/2019
ADMM for Efficient Deep Learning with Global Convergence
Alternating Direction Method of Multipliers (ADMM) has been used success...

02/07/2020
Differentiable Fixed-Point Iteration Layer
Recently, several studies proposed methods to utilize some restricted cl...

08/21/2023
Differentiable Frank-Wolfe Optimization Layer
Differentiable optimization has received a significant amount of attenti...
