On Zeroth-Order Stochastic Convex Optimization via Random Walks

02/11/2014
by   Tengyuan Liang, et al.

We propose a method for zeroth-order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^{-1/2}) after T queries for a convex bounded function f: ℝ^n → ℝ. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations compared to noiseless zeroth-order methods.
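The core primitive is simple to sketch: run a Ball Walk in ℝ^{n+1} over the epigraph {(x, y) : f(x) ≤ y}, where each step needs only a single (possibly noisy) function evaluation to test membership, so no gradient estimate is ever formed. The paper's full method wraps this walk in additional machinery; the Python sketch below only illustrates this membership-test idea, and the function name, the proposal radius, and the step count are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def ball_walk_epigraph(f, x0, y0, radius, n_steps, rng=None):
    """Illustrative Ball Walk on the epigraph {(x, y): f(x) <= y}.

    A minimal sketch, not the paper's full algorithm: each proposal is a
    uniform point in a small ball around the current (x, y), accepted iff
    a single (possibly noisy) evaluation of f keeps it in the epigraph.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.append(np.asarray(x0, dtype=float), float(y0))  # walk lives in R^{n+1}
    best_x, best_y = z[:-1].copy(), z[-1]
    for _ in range(n_steps):
        # Uniform point in a ball of the given radius: uniform direction
        # (normalized Gaussian) times radius * U^{1/d}.
        u = rng.standard_normal(z.size)
        u *= radius * rng.random() ** (1.0 / z.size) / np.linalg.norm(u)
        cand = z + u
        x_c, y_c = cand[:-1], cand[-1]
        # Membership test = one zeroth-order query; reject if outside.
        if f(x_c) <= y_c:
            z = cand
            if y_c < best_y:  # track the lowest feasible level seen
                best_x, best_y = x_c.copy(), y_c
    return best_x, best_y

# Hypothetical usage: a noisy quadratic oracle, started from a feasible point.
noise = np.random.default_rng(0)
f_noisy = lambda x: float(x @ x) + 0.01 * noise.standard_normal()
x_hat, y_hat = ball_walk_epigraph(f_noisy, x0=np.zeros(5), y0=1.0,
                                  radius=0.2, n_steps=20000)
```

Because the accept/reject test compares a single noisy evaluation against the current level y, there is no finite-difference gradient estimate to be corrupted by noise, which is the intuition behind the robustness claim in the abstract.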


Related research:

- A Random Walk Approach to First-Order Stochastic Convex Optimization (01/17/2019): Online minimization of an unknown convex function over a convex and comp...
- Memory-Query Tradeoffs for Randomized Convex Optimization (06/21/2023): We show that any randomized first-order algorithm which minimizes a d-di...
- Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition (07/12/2012): We focus on the problem of minimizing a convex function f over a convex ...
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback (07/31/2015): We consider the closely related problems of bandit convex optimization w...
- Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives (02/22/2020): In this paper, we revisit the problem of private stochastic convex optim...
- Sampling from convex sets with a cold start using multiscale decompositions (11/08/2022): Running a random walk in a convex body K ⊆ ℝ^n is a standard approach to s...
- Spiking Neural Algorithms for Markov Process Random Walk (05/01/2018): The random walk is a fundamental stochastic process that underlies many ...
