
Maximin Optimization for Binary Regression

by Nisan Chiprut et al.

We consider regression problems with binary weights. Such optimization problems are ubiquitous in quantized learning models and digital communication systems. A natural approach is to optimize the corresponding Lagrangian using variants of the gradient ascent-descent method. Such maximin techniques remain poorly understood even in the concave-convex case, and the non-convex binary constraints may introduce spurious local minima. Interestingly, we prove that this approach is optimal for linear regression under low-noise conditions, as well as for robust regression with a small number of outliers. In practice, the method also performs well for regression with a cross-entropy loss and for non-convex multi-layer neural networks. Taken together, our results highlight the potential of saddle-point optimization for learning constrained models.
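The approach described in the abstract can be sketched concretely. The following is a minimal illustration (not the paper's implementation; problem sizes, step sizes, and iteration counts are assumptions): binary linear regression min_w ||Xw - y||^2 subject to w_i^2 = 1 is relaxed into the Lagrangian L(w, lam) = ||Xw - y||^2 + sum_i lam_i (w_i^2 - 1), which is then solved by gradient descent on w and gradient ascent on lam.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative low-noise setup: recover binary weights from linear measurements.
d, n = 10, 50
w_true = rng.choice([-1.0, 1.0], size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)  # small additive noise

# Lagrangian of  min_w ||Xw - y||^2  s.t.  w_i^2 = 1:
#   L(w, lam) = ||Xw - y||^2 + sum_i lam_i * (w_i^2 - 1)
def grad_w(w, lam):
    # Gradient of L with respect to w (descent direction).
    return 2.0 * X.T @ (X @ w - y) + 2.0 * lam * w

def grad_lam(w):
    # Gradient of L with respect to lam (ascent direction);
    # vanishes exactly when the binary constraints w_i^2 = 1 hold.
    return w**2 - 1.0

# Gradient descent-ascent: descend in w, ascend in lam.
w = np.zeros(d)
lam = np.zeros(d)
eta_w, eta_lam = 1e-3, 1e-2  # step sizes chosen for this toy problem
for _ in range(5000):
    w -= eta_w * grad_w(w, lam)
    lam += eta_lam * grad_lam(w)

w_hat = np.sign(w)  # round the relaxed iterate to binary weights
print("recovered:", bool(np.array_equal(w_hat, w_true)))
```

In this low-noise regime the saddle-point dynamics settle near the true binary vector, so the final sign rounding recovers it; with heavier noise or adversarial data the non-convex constraints can create the spurious local minima the abstract warns about.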



