Physarum Powered Differentiable Linear Programming Layers and Applications

04/30/2020
by   Zihang Meng, et al.

Consider a learning algorithm that involves an internal call to an optimization routine such as a generalized eigenvalue problem, a cone programming problem, or even sorting. Integrating such a method as a layer within a trainable deep network in a numerically stable way is not simple; for instance, strategies for eigendecomposition and differentiable sorting have emerged only recently. We propose an efficient and differentiable solver for general linear programming problems that can be used in a plug-and-play manner as a layer within deep neural networks. Our development is inspired by a fascinating but not widely used link between the dynamics of slime mold (Physarum) and mathematical optimization schemes such as steepest descent. We describe our development and demonstrate the use of our solver in a video object segmentation task and in meta-learning for few-shot learning. We review the relevant known results and provide a technical analysis describing their applicability to our use cases. Our solver performs comparably to a customized projected gradient descent method on the first task and outperforms the recently proposed differentiable CVXPY solver on the second. Experiments show that our solver converges quickly without the need for a feasible initial point. Interestingly, our scheme is easy to implement and can readily serve as a layer whenever a learning procedure needs a fast approximate solution to an LP within a larger network.
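To make the idea concrete, below is a minimal, hedged sketch of how unrolled Physarum dynamics can act as a differentiable LP layer. It assumes a standard-form LP min c^T x subject to Ax = b, x >= 0 with a strictly positive cost vector, and uses the damped update x <- (1 - h) x + h q with W = diag(x / c) and q = W A^T (A W A^T)^{-1} b; the paper's exact formulation, stabilization, and backward pass may differ, and the function name `physarum_lp` is illustrative rather than the authors' API.

```python
import torch

def physarum_lp(c, A, b, num_iters=50, h=0.5, eps=1e-8):
    """Approximately solve min c^T x s.t. Ax = b, x >= 0 (assumes c > 0)."""
    m, n = A.shape
    x = torch.ones(n, dtype=A.dtype, device=A.device)   # strictly positive start
    for _ in range(num_iters):
        W = torch.diag(x / (c + eps))                    # conductances x_i / c_i
        M = A @ W @ A.T + eps * torch.eye(m, dtype=A.dtype, device=A.device)
        p = torch.linalg.solve(M, b)                     # "pressures" from (A W A^T) p = b
        q = W @ (A.T @ p)                                # induced flow, feasible up to the eps term
        x = (1.0 - h) * x + h * q                        # damped Physarum update
    return x

# Toy usage: min x1 + 2 x2  s.t.  x1 + x2 = 1, x >= 0  (optimum is x = [1, 0]).
A = torch.tensor([[1.0, 1.0]])
b = torch.tensor([1.0])
c = torch.tensor([1.0, 2.0], requires_grad=True)
x_star = physarum_lp(c, A, b)
x_star.sum().backward()   # gradients w.r.t. the LP cost flow through the unrolled solver
```

Because the layer is just a fixed number of differentiable tensor operations, gradients with respect to the LP data (here the cost vector c) are obtained by ordinary backpropagation through the unrolled iterations.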


Related research

- Differentiable Convex Optimization Layers (10/28/2019): Recent work has shown how to embed differentiable optimization problems ...
- Output Range Analysis for Deep Neural Networks (09/26/2017): Deep neural networks (NN) are extensively used for machine learning task...
- Meta-learning with differentiable closed-form solvers (05/21/2018): Adapting deep networks to new concepts from few examples is extremely ch...
- A Solver + Gradient Descent Training Algorithm for Deep Neural Networks (07/07/2022): We present a novel hybrid algorithm for training Deep Neural Networks th...
- SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver (05/29/2019): Integrating logical reasoning within deep learning architectures has bee...
- Speeding up Linear Programming using Randomized Linear Algebra (03/18/2020): Linear programming (LP) is an extremely useful tool and has been success...
- OptNet: Differentiable Optimization as a Layer in Neural Networks (03/01/2017): This paper presents OptNet, a network architecture that integrates optim...
