Convex Optimization on Banach Spaces

01/01/2014
by R. A. DeVore et al.

Greedy algorithms which use only function evaluations are applied to convex optimization in a general Banach space X. Along with algorithms that use exact evaluations, algorithms with approximate evaluations are treated. A priori upper bounds for the convergence rate of the proposed algorithms are given. These bounds depend on the smoothness of the objective function and the sparsity or compressibility (with respect to a given dictionary) of a point in X where the minimum is attained.
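To make the idea concrete, here is a minimal finite-dimensional sketch of an evaluation-only greedy step: at each iteration the current iterate is mixed with each dictionary element over a grid of step sizes, and the candidate with the smallest objective value is kept. This is an illustrative Frank-Wolfe-style relaxed greedy scheme under assumed names (`greedy_minimize`, the toy dictionary of signed coordinate vectors), not the paper's exact algorithms, which are stated for a general Banach space X.

```python
import numpy as np

def greedy_minimize(f, dictionary, n_iter=50, grid=None):
    """Greedy minimization of a convex f using only function evaluations.

    Each step tries the convex combinations (1 - t) * x + t * g for every
    dictionary element g and every step size t on a grid, keeping the best.
    """
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    x = np.zeros_like(dictionary[0], dtype=float)
    for _ in range(n_iter):
        best_val, best_x = f(x), x
        for g in dictionary:
            for t in grid:
                cand = (1.0 - t) * x + t * g
                val = f(cand)
                if val < best_val:
                    best_val, best_x = val, cand
        x = best_x
    return x

# Toy example: minimize f(x) = ||x - target||^2 over the l1 ball,
# whose extreme points +/- e_i serve as the dictionary.
target = np.array([0.3, -0.2, 0.5])   # l1-norm 1, so it lies in the hull
f = lambda x: float(np.sum((x - target) ** 2))
dictionary = [s * np.eye(3)[i] for i in range(3) for s in (1.0, -1.0)]
x_star = greedy_minimize(f, dictionary)
```

Because the target is 1-sparse-compressible with respect to this dictionary (a short convex combination of a few elements), the greedy iterates approach it quickly; the a priori rates in the paper quantify exactly this dependence on smoothness and compressibility.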


Related research

12/10/2014: Convergence and rate of convergence of some greedy algorithms in convex optimization. "The paper gives a systematic study of the approximate versions of three ..."
08/18/2018: Exact Passive-Aggressive Algorithms for Learning to Rank Using Interval Labels. "In this paper, we propose exact passive-aggressive (PA) online algorithm..."
11/04/2015: Dictionary descent in optimization. "The problem of convex optimization is studied. Usually in convex optimiz..."
09/11/2012: Query Complexity of Derivative-Free Optimization. "This paper provides lower bounds on the convergence rate of Derivative F..."
01/15/2020: Biorthogonal greedy algorithms in convex optimization. "The study of greedy approximation in the context of convex optimization ..."
03/15/2021: Lasry-Lions Envelopes and Nonconvex Optimization: A Homotopy Approach. "In large-scale optimization, the presence of nonsmooth and nonconvex ter..."
07/19/2021: High-Dimensional Simulation Optimization via Brownian Fields and Sparse Grids. "High-dimensional simulation optimization is notoriously challenging. We ..."
