
Learning to Initialize Gradient Descent Using Gradient Descent

by Kartik Ahuja, et al.

Non-convex optimization problems are challenging to solve; the success and computational expense of a gradient descent algorithm or variant depend heavily on the initialization strategy. Often, either random initialization is used or initialization rules are carefully designed by exploiting the nature of the problem class. As a simple alternative to hand-crafted initialization rules, we propose an approach for learning "good" initialization rules from previous solutions. We provide theoretical guarantees establishing conditions under which our approach performs better than random initialization; these conditions are sufficient in general and, for some problem classes, also necessary. We apply our methodology to various non-convex problems, such as generating adversarial examples, generating post hoc explanations for black-box machine learning models, and allocating communication spectrum, and show consistent gains over other initialization techniques.
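The core idea — learning an initialization from previous solutions by optimizing it with gradient descent — can be illustrated with a minimal sketch. The toy task family `f_a(x) = (x² − a)²`, the inner step counts, and the use of a numerical meta-gradient are all illustrative assumptions, not the paper's actual method: the outer loop adjusts a shared initial point `x0` to minimize the average loss reached after a fixed budget of inner gradient-descent steps across sampled problem instances.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x, a):
    # Toy non-convex task family: f_a(x) = (x^2 - a)^2, minima at +/- sqrt(a)
    return (x**2 - a)**2

def grad(x, a):
    # Analytic gradient of f_a
    return 4 * x * (x**2 - a)

def run_gd(x0, a, steps=20, lr=0.02):
    # Inner optimizer: plain gradient descent from the learned init x0
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x, a)
    return x

def meta_loss(x0, tasks):
    # Average loss reached after the inner GD budget, over sampled tasks
    return np.mean([loss(run_gd(x0, a), a) for a in tasks])

# Outer loop: gradient descent on the initialization itself, using a
# central-difference estimate of the meta-gradient (an illustrative
# stand-in for differentiating through the unrolled inner steps).
x0, eps, meta_lr = 0.1, 1e-4, 0.01
tasks = rng.uniform(1.0, 2.0, size=32)  # a family of related problem instances
for _ in range(300):
    g = (meta_loss(x0 + eps, tasks) - meta_loss(x0 - eps, tasks)) / (2 * eps)
    x0 -= meta_lr * g
```

After training, inner gradient descent started from the learned `x0` reaches a lower average loss within the same step budget than it does from the original starting point, which is the benefit the abstract claims over random or hand-crafted initialization.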
