Complementary Composite Minimization, Small Gradients in General Norms, and Applications to Regression Problems

01/26/2021
by   Jelena Diakonikolas, et al.

Composite minimization is a powerful framework in large-scale convex optimization, based on decoupling the objective function into terms with structurally different properties, which allows for more flexible algorithmic design. In this work, we introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term. This particular form of decoupling is pervasive in statistics and machine learning, due to its link to regularization. The main contributions of our work are summarized as follows. First, we introduce the problem of complementary composite minimization in general normed spaces; second, we provide a unified accelerated algorithmic framework to address broad classes of complementary composite minimization problems; and third, we prove that the algorithms resulting from our framework are near-optimal in most of the standard optimization settings. Additionally, we show that our algorithmic framework can be used to address the problem of making the gradients small in general normed spaces. As a concrete example, we obtain a near-optimal method for the standard ℓ_1 setup (small gradients in the ℓ_∞ norm), essentially matching the bound of Nesterov (2012) that was previously known only for the Euclidean setup. Finally, we show that our composite methods are broadly applicable to a number of regression problems, leading to complexity bounds that are either new or match the best existing ones.
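To make the problem class concrete, the following is a minimal sketch (not the paper's accelerated method) of plain proximal gradient descent on a complementary composite objective F(x) = f(x) + ψ(x), where f is smooth (least squares) and ψ is uniformly convex. Here we take the simplest uniformly convex case, the strongly convex regularizer ψ(x) = (σ/2)‖x‖₂², whose proximal operator has a closed form; all names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Synthetic least-squares data (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d = 30, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
sigma = 0.5  # uniform (here: strong) convexity parameter of psi

def grad_f(x):
    # Gradient of the smooth term f(x) = 0.5 * ||Ax - b||_2^2.
    return A.T @ (A @ x - b)

def prox_psi(v, t):
    # Proximal operator of t * (sigma/2)||.||_2^2, available in closed form.
    return v / (1.0 + t * sigma)

L = np.linalg.norm(A, 2) ** 2  # smoothness constant of f (= ||A||_2^2)
x = np.zeros(d)
for _ in range(2000):
    # Composite step: gradient step on f, prox step on psi.
    x = prox_psi(x - grad_f(x) / L, 1.0 / L)

# Closed-form ridge solution for comparison.
x_star = np.linalg.solve(A.T @ A + sigma * np.eye(d), A.T @ b)
print(np.allclose(x, x_star, atol=1e-6))
```

The point of the decoupling is visible in the loop: the smooth term is handled by a gradient step, while the uniformly convex term is handled exactly through its proximal operator, so the regularizer's structure is exploited rather than smoothed over.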


