Effective Dimension of Exp-concave Optimization

05/21/2018
by   Naman Agarwal, et al.

We investigate the role of the effective (a.k.a. statistical) dimension in determining both the statistical and the computational costs associated with exp-concave stochastic minimization. We derive sample complexity bounds that scale with d_λ/ϵ, where d_λ is the effective dimension associated with the regularization parameter λ. These are the first fast rates in this setting that do not exhibit any explicit dependence either on the intrinsic dimension or the ℓ_2-norm of the optimal classifier. We also propose fast preconditioned methods that solve the ERM problem in time O(nnz(X) + min_{λ' > λ} (λ'/λ) · d_{λ'}^2 · d), where nnz(X) is the number of nonzero entries in the data. Our analysis emphasizes interesting connections between leverage scores, algorithmic stability and regularization. In particular, our algorithm involves a novel technique for choosing a regularization parameter λ' that minimizes the complexity bound (λ'/λ) · d_{λ'}^2 · d while avoiding the (approximate) computation of the effective dimension for each candidate λ'. All of our results extend to the kernel setting.
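The bounds above are stated in terms of the effective dimension d_λ. As a quick illustration, the following is a minimal sketch, assuming the standard ridge-style definition d_λ = Σ_i s_i²/(s_i² + λ) (with s_i the singular values of the data matrix X; the paper's exact normalization may differ). It shows how a fast-decaying spectrum makes d_λ much smaller than the ambient dimension d; it is not the paper's algorithm.

```python
# Sketch of the effective (statistical) dimension, assuming the common
# ridge-leverage convention d_lambda = sum_i s_i^2 / (s_i^2 + lambda),
# i.e. d_lambda = trace(X^T X (X^T X + lambda * I)^{-1}).
import numpy as np

def effective_dimension(X: np.ndarray, lam: float) -> float:
    """Return d_lambda for data matrix X and regularization lam > 0."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values of X
    return float(np.sum(s**2 / (s**2 + lam)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Columns with a geometrically decaying scale give a fast-decaying
    # spectrum, so d_lambda can be far smaller than d.
    n, d = 500, 100
    X = rng.standard_normal((n, d)) * (0.9 ** np.arange(d))
    for lam in (0.1, 1.0, 10.0):
        print(f"lambda={lam:5.1f}  d_lambda={effective_dimension(X, lam):6.2f}  d={d}")
```

Larger λ shrinks d_λ, which is the trade-off the complexity bound (λ'/λ) · d_{λ'}^2 · d exploits when choosing λ'.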
