
On the Convergence of the Stochastic Primal-Dual Hybrid Gradient for Convex Optimization
Stochastic Primal-Dual Hybrid Gradient (SPDHG) was proposed by Chambolle...

The Primal-Dual Method for Learning-Augmented Algorithms
The extension of classical online algorithms when provided with predicti...

A Primal Condition for Approachability with Partial Monitoring
In approachability with full monitoring there are two types of condition...

Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
We propose a stochastic extension of the primal-dual hybrid gradient alg...

Online Primal-Dual Algorithms with Configuration Linear Programs
Nonlinear, especially convex, objective functions have been extensively...

Revisiting the Approximate Carathéodory Problem via the Frank-Wolfe Algorithm
The approximate Carathéodory theorem states that given a polytope P, eac...

Safe Feature Elimination for Non-Negativity Constrained Convex Optimization
Inspired by recent work on safe feature elimination for 1-norm regulariz...
Robust Algorithms for Online Convex Problems via Primal-Dual
Primal-dual methods in online optimization give several of the state-of-the-art results in both of the most common models: adversarial and stochastic/random order. Here we try to provide a more unified analysis of primal-dual algorithms to better understand the mechanisms behind this important method. With this we are able to recover and extend, in one go, several results from the literature. In particular, we obtain robust online algorithms for fairly general online convex problems: we consider the MIXED model, where in some of the time steps the data is stochastic and in the others the data is adversarial. Both the quantity and the location of the adversarial time steps are unknown to the algorithm. The guarantees of our algorithms interpolate between the (close to) best guarantees for each of the pure models. In particular, the presence of adversarial times does not degrade the guarantee relative to the stochastic part of the instance.

Concretely, we first consider Online Convex Programming: at each time a feasible set V_t is revealed, and the algorithm needs to select v_t ∈ V_t to minimize the total cost ψ(∑_t v_t) for a convex function ψ. Our robust primal-dual algorithm for this problem in the MIXED model recovers and extends, for example, a result of Gupta et al. and recent work on ℓ_p-norm load balancing by the author.

We also consider the problem of Welfare Maximization with Convex Production Costs: at each time a customer presents a value c_t and a resource-consumption vector a_t, and the goal is to fractionally select customers to maximize the profit ∑_t c_t x_t − ψ(∑_t a_t x_t). Our robust primal-dual algorithm in the MIXED model recovers and extends the result of Azar et al.

Given the ubiquity of primal-dual algorithms, we hope the ideas presented here will be useful in obtaining other robust algorithms in the MIXED or related models.
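To make the welfare-maximization setting concrete, here is a minimal sketch of the primal-dual intuition behind such algorithms: the gradient of the production cost ψ at the current load acts as a dual "price" on resources, and a customer is accepted when their value exceeds the priced cost of their consumption. Everything here (the specific cost ψ, the all-or-nothing acceptance rule, and the function names) is an illustrative assumption, not the paper's actual algorithm or its guarantees.

```python
import numpy as np

def psi(load, p=2.0):
    # Illustrative convex production cost: sum of |load_i|^p (assumption).
    return float(np.sum(np.abs(load) ** p))

def grad_psi(load, p=2.0):
    # Gradient of psi; serves as the dual price vector on resources.
    return p * np.sign(load) * np.abs(load) ** (p - 1)

def online_welfare(stream, p=2.0):
    """Greedy marginal-cost rule (a sketch, not the paper's method):
    accept customer t fully iff their value c_t exceeds the current
    marginal production cost <grad_psi(load), a_t> of serving them."""
    load = None
    revenue = 0.0
    decisions = []
    for c_t, a_t in stream:
        a_t = np.asarray(a_t, dtype=float)
        if load is None:
            load = np.zeros_like(a_t)
        # Price the customer's resource consumption at the current duals.
        marginal_cost = float(grad_psi(load, p) @ a_t)
        x_t = 1.0 if c_t > marginal_cost else 0.0
        load += x_t * a_t
        revenue += x_t * c_t
        decisions.append(x_t)
    # Realized welfare: total value collected minus production cost.
    return decisions, revenue - psi(load, p)
```

For example, with two single-resource customers `[(5.0, [1.0]), (0.1, [1.0])]` and p = 2, the first is accepted (the price starts at zero) and the second is rejected once the marginal cost has risen above its value.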