
Meta Subspace Optimization

by Yoni Choukroun, et al.

Subspace optimization methods have the attractive property of reducing large-scale optimization problems to a sequence of low-dimensional subspace optimization problems. However, existing subspace optimization frameworks adopt a fixed update policy for the subspace and therefore appear to be sub-optimal. In this paper we propose a new Meta Subspace Optimization (MSO) framework for large-scale optimization problems, which allows the subspace matrix to be determined at each optimization iteration. To remain invariant to the dimension of the optimization problem, we design an efficient meta-optimizer that operates on very low-dimensional subspace optimization coefficients, inducing a rule-based agent that can significantly improve performance. Finally, we design and analyze a reinforcement learning procedure based on the subspace optimization dynamics, whose learned policies outperform existing subspace optimization methods.
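To illustrate the basic mechanism the abstract builds on, here is a minimal sketch of classic (fixed-policy) subspace optimization on a toy quadratic: at each outer iteration the high-dimensional problem is restricted to a low-dimensional subspace spanned by the columns of a matrix P, and the small subproblem over the coefficients alpha is solved instead. The function names (`subspace_step`), the choice of subspace (current gradient plus random directions), and the toy objective are all illustrative assumptions, not the paper's method — MSO's point is precisely that P need not follow such a fixed rule.

```python
import numpy as np

def subspace_step(grad, x, P, steps=100, lr=0.1):
    """Minimize f(x + P @ alpha) over the low-dim coefficients alpha
    by gradient descent; by the chain rule the subspace gradient is P^T grad."""
    alpha = np.zeros(P.shape[1])
    for _ in range(steps):
        alpha -= lr * (P.T @ grad(x + P @ alpha))
    return x + P @ alpha

# Toy large-scale quadratic: f(x) = 0.5 x^T A x - b^T x (illustrative only)
rng = np.random.default_rng(0)
n, k = 100, 4                      # problem dimension vs. subspace dimension
A = np.diag(rng.uniform(1.0, 10.0, n))
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(n)
for _ in range(20):
    # Fixed subspace policy: current gradient plus a few random directions,
    # orthonormalized into an n-by-k subspace matrix P
    dirs = [grad(x)] + [rng.standard_normal(n) for _ in range(k - 1)]
    P, _ = np.linalg.qr(np.stack(dirs, axis=1))
    x = subspace_step(grad, x, P)

print(f(x))                        # objective after 20 subspace iterations
```

Because the gradient direction is always included in the span of P, each outer iteration can only decrease the objective; a meta-optimizer as proposed in the paper would instead adapt how P is built from iteration to iteration.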

