
- The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems
  Online allocation problems with resource constraints are central problem...
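The title names the algorithmic idea outright, so a minimal sketch may help: dual mirror descent keeps a price on each resource, takes the action that looks best at the current prices, and adjusts the prices according to how fast the budget is being spent. This is an illustrative single-resource version with a Euclidean mirror map (so the dual update reduces to projected gradient descent); the names dual_mirror_descent_allocation, requests, and eta are this sketch's own, not the paper's notation or exact algorithm.

    import numpy as np

    def dual_mirror_descent_allocation(requests, budget, eta=0.01):
        # requests: list of (rewards, costs) pairs, one array entry per action.
        T = len(requests)
        rho = budget / T              # target per-round consumption
        mu = 0.0                      # dual price on the single resource
        remaining, total_reward = budget, 0.0
        for rewards, costs in requests:
            # Primal step: pick the action that is best at the current price.
            scores = rewards - mu * costs
            a = int(np.argmax(scores))
            take = scores[a] > 0 and costs[a] <= remaining
            consumed = costs[a] if take else 0.0
            if take:
                total_reward += rewards[a]
                remaining -= consumed
            # Dual step: raise the price when spending faster than rho.
            mu = max(0.0, mu + eta * (consumed - rho))
        return total_reward

    rng = np.random.default_rng(0)
    reqs = [(rng.uniform(size=3), rng.uniform(size=3)) for _ in range(1000)]
    print(dual_mirror_descent_allocation(reqs, budget=100.0))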
- Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems
  Unlike nonconvex optimization, where gradient descent is guaranteed to c...
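As a toy companion to this entry (an illustration of the continuous-time viewpoint, not anything taken from the paper): simulating the gradient-descent-ascent flow on the bilinear objective f(x, y) = x*y already exhibits a limiting behavior with no analogue in minimization, since the flow circles the saddle at the origin instead of converging, and the forward-Euler discretization spirals outward.

    import numpy as np

    # Gradient-descent-ascent dynamics  x' = -df/dx,  y' = +df/dy
    # on f(x, y) = x*y, simulated with forward Euler (step size h).
    def gda_flow(x0=1.0, y0=0.0, h=0.01, steps=2000):
        x, y = x0, y0
        traj = []
        for _ in range(steps):
            gx, gy = y, x                  # df/dx = y, df/dy = x
            x, y = x - h * gx, y + h * gy
            traj.append((x, y))
        return np.array(traj)

    traj = gda_flow()
    # Starts at radius 1.0; the discrete iterates drift outward.
    print("final radius:", np.linalg.norm(traj[-1]))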
- Regularized Online Allocation Problems: Fairness and Beyond
  Online allocation problems with resource constraints have a rich history...
- The Landscape of Nonconvex-Nonconcave Minimax Optimization
  Minimax optimization has become a central tool for modern machine learni...
- Contextual Reserve Price Optimization in Auctions
  We study the problem of learning a linear model to set the reserve price...
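For readers unfamiliar with the setup: in a second-price auction with reserve r, the seller's revenue as a function of r is piecewise and has a jump, which is what makes fitting a reserve-price model non-trivial. A tiny sketch of that revenue function (standard auction mechanics, stated here only as background to the learning problem, not the paper's method):

    def reserve_revenue(r, b1, b2):
        # Seller revenue in a second-price auction with reserve r,
        # given the top two bids b1 >= b2.
        if r > b1:
            return 0.0            # reserve not met: no sale
        return max(b2, r)         # pay the larger of reserve and runner-up

    print(reserve_revenue(0.5, 1.0, 0.3))   # 0.5: reserve binds
    print(reserve_revenue(1.2, 1.0, 0.3))   # 0.0: reserve kills the sale

Note the discontinuity at r = b1: pushing the reserve one cent past the top bid drops revenue to zero, so the induced loss is non-convex and non-differentiable in the model parameters.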
- An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to Convex-Concave Saddle-Point Problems
  There has been a long history of using Ordinary Differential Equations (...
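For context, the classical "modified equation" instance of this algorithm-to-ODE correspondence, offered as an illustration under the assumption that it matches the paper's framing, not as the paper's O(s^r) construction itself: gradient descent with step size s tracks the plain gradient flow to leading order, and a step-size-dependent correction term captures its behavior at the next resolution.

    % Illustrative modified-equation example (assumed context only):
    x_{k+1} = x_k - s\,\nabla f(x_k)          % gradient descent
    \dot{X} = -\nabla f(X)                    % leading-order flow
    \dot{X} = -\nabla f(X) - \tfrac{s}{2}\,\nabla^2 f(X)\,\nabla f(X)
                                              % next-resolution flow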
- A Stochastic First-Order Method for Ordered Empirical Risk Minimization
  We propose a new stochastic first-order method for empirical risk minimi...
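The ordered-ERM objective weights the sorted per-sample losses, so even the deterministic subgradient is easy to write down. A sketch of one full (not stochastic) subgradient step on a least-squares instance, purely illustrative and not the paper's method; ordered_erm_subgradient_step and its parameter names are this sketch's own:

    import numpy as np

    def ordered_erm_subgradient_step(beta, X, y, lam, lr=0.1):
        # Objective: sum_i lam[i] * loss_(i)(beta), where loss_(i) are
        # the per-sample squared losses sorted in decreasing order.
        residuals = X @ beta - y
        losses = 0.5 * residuals ** 2
        order = np.argsort(-losses)      # indices, largest loss first
        weights = np.empty_like(lam)
        weights[order] = lam             # rank-i sample gets weight lam[i]
        grad = X.T @ (weights * residuals)
        return beta - lr * grad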
- Accelerating Gradient Boosting Machine
  Gradient Boosting Machine (GBM) is an extremely powerful supervised lear...
- The Second-Price Knapsack Problem: Near-Optimal Real Time Bidding in Internet Advertisement
  In the past decade, Real Time Bidding (RTB) has become one of the most c...
- Near-Optimal Online Knapsack Strategy for Real-Time Bidding in Internet Advertising
  In the past decade, Real Time Bidding (RTB) has become one of the most c...
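Both of these knapsack-bidding entries concern the same setting, sequential second-price auctions under a hard budget. A generic pacing-style policy (illustrative only, not the papers' near-optimal strategy) makes the structure concrete: shade bids by a shadow price on the budget, and adapt that price to the realized spend rate.

    def pacing_bidder(auctions, budget, eta=0.05):
        # auctions: list of (value, highest_competing_bid) pairs.
        mu, spend, utility = 0.0, 0.0, 0.0
        target = budget / len(auctions)          # target spend per auction
        for value, comp in auctions:
            bid = value / (1.0 + mu)             # shade bid when budget is tight
            win = bid > comp and spend + comp <= budget
            cost = comp if win else 0.0          # second-price payment
            if win:
                spend += cost
                utility += value - cost
            # Raise the shadow price when spending above the target rate.
            mu = max(0.0, mu + eta * (cost - target))
        return utility, spend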
- Randomized Gradient Boosting Machine
  Gradient Boosting Machine (GBM) introduced by Friedman is an extremely p...
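A minimal least-squares boosting loop in the spirit of the title, where each iteration searches only a random subset of candidate weak learners (here, single features), the subsampling being the "randomized" part. This is an illustrative sketch under those assumptions, not the paper's exact procedure or analysis:

    import numpy as np

    def randomized_gbm(X, y, n_iter=100, lr=0.1, subset=5, seed=0):
        # Boosting with weak learners = single-feature linear fits,
        # searched over a random feature subset at each iteration.
        rng = np.random.default_rng(seed)
        n, p = X.shape
        coef = np.zeros(p)
        residual = y.astype(float).copy()
        for _ in range(n_iter):
            J = rng.choice(p, size=min(subset, p), replace=False)
            # Best single-feature least-squares fit to the residual.
            scores = [(X[:, j] @ residual) ** 2 / (X[:, j] @ X[:, j] + 1e-12)
                      for j in J]
            j = J[int(np.argmax(scores))]
            step = (X[:, j] @ residual) / (X[:, j] @ X[:, j] + 1e-12)
            coef[j] += lr * step                 # shrunken update
            residual -= lr * step * X[:, j]
        return coef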
- Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems
  Consider the following class of learning schemes: β̂ := argmin_{β ∈ C} ∑_{j=1}^n ℓ(x_...
- Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
  Consider the following class of learning schemes: β̂ := argmin_β ∑_{j=1}^n ℓ(x_j...
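As classical background for the two approximate-leave-one-out entries above (this is the textbook exact identity for ridge regression, not the papers' ALO estimator): for a linear smoother, leave-one-out residuals can be read off the hat matrix, and the ALO line of work extends this kind of shortcut to non-smooth, high-dimensional learners.

    import numpy as np

    def ridge_loo_errors(X, y, lam):
        # Exact leave-one-out residuals for ridge regression via the
        # classical shortcut  e_loo_j = e_j / (1 - H_jj),  where
        # H = X (X'X + lam I)^{-1} X'  is the ridge hat matrix.
        n, p = X.shape
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        e = y - H @ y                    # in-sample residuals
        return e / (1.0 - np.diag(H))

This costs one fit instead of n refits; picking lam by minimizing the mean squared LOO error is the parameter-tuning use case the entries above generalize.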
- Depth Creates No Bad Local Minima
  In deep learning, depth, as well as nonlinearity, creates non-convex loss...