
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
Tight estimation of the Lipschitz constant for deep neural networks (DNN...

Lipschitz constant estimation of Neural Networks via sparse polynomial optimization
We introduce LiPopt, a polynomial optimization framework for computing i...

Exactly Computing the Local Lipschitz Constant of ReLU Networks
The Lipschitz constant of a neural network is a useful metric for provab...

Linear systems with neural network nonlinearities: Improved stability analysis via acausal Zames-Falb multipliers
In this paper, we analyze the stability of feedback interconnections of ...

RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications
The Jacobian matrix (or the gradient for single-output networks) is dire...

Offset-free setpoint tracking using neural network controllers
In this paper, we present a method to analyze local and global stability...

Stability-Certified Reinforcement Learning via Spectral Normalization
In this article, two types of methods from different perspectives based ...
Certifying Incremental Quadratic Constraints for Neural Networks
Abstracting neural networks with constraints they impose on their inputs and outputs can be very useful in the analysis of neural network classifiers and to derive optimization-based algorithms for certification of stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify incremental quadratic constraints on the map of neural networks over a region of interest. These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two different settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks and illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller parameterized by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.
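To see why sharp bounds matter, it helps to compare against the simplest baseline: for a feedforward network with 1-Lipschitz activations (such as ReLU), the product of the layers' spectral norms is always a valid global Lipschitz upper bound, but it is typically far looser than the SDP-based bounds described in the abstract. A minimal NumPy sketch of this naive baseline (the two-layer weights below are illustrative, not from the paper):

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Product of layer spectral norms: a valid but usually loose global
    Lipschitz upper bound for a feedforward net with 1-Lipschitz
    activations (e.g. ReLU). The LMI/SDP approach in the paper aims to
    tighten this kind of bound, locally, over a region of interest."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Illustrative two-layer network with random (hypothetical) weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 4))   # first layer: R^4 -> R^16
W2 = rng.standard_normal((1, 16))   # output layer: R^16 -> R
L = naive_lipschitz_upper_bound([W1, W2])
```

The gap between this product-of-norms bound and the true local Lipschitz constant is exactly what the incremental-quadratic-constraint certificates are designed to close.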
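The closed-loop analysis in the abstract reduces to a semidefinite feasibility program; its simplest special case is the purely linear system with no neural network in the loop, where any solution P ≻ 0 of the discrete Lyapunov equation AᵀPA − P = −Q (Q ≻ 0) makes the ellipsoid {x : xᵀPx ≤ 1} invariant. A minimal sketch of that special case (the dynamics matrix A is hypothetical, and SciPy's Lyapunov solver stands in for the paper's LMI feasibility program):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable discrete-time dynamics x_{k+1} = A x_k
# (no neural network controller here; this is the linear special case).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0, so passing
# A^T yields P with A^T P A - P = -Q, the standard Lyapunov condition.
P = solve_discrete_lyapunov(A.T, Q)

# V(x) = x^T P x decreases along trajectories, so the sublevel set
# {x : x^T P x <= 1} is an invariant ellipsoid for the system.
```

With a neural network controller in the loop, the paper replaces this closed-form Lyapunov solve with an LMI that additionally encodes the incremental quadratic constraints certified for the network.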