Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity

07/30/2019
by   Feihu Huang, et al.

Zeroth-order (gradient-free) methods are a class of powerful optimization tools for many machine learning problems, because they need only function values (not gradients) during optimization. In particular, zeroth-order methods are well suited to complex problems such as black-box attacks and bandit feedback, where explicit gradients are difficult or infeasible to obtain. Although many zeroth-order methods have been developed recently, these approaches still suffer from two main drawbacks: 1) high function query complexity; 2) unsuitability for problems with complex penalties and constraints. To address these drawbacks, in this paper we propose a novel fast zeroth-order stochastic alternating direction method of multipliers (ADMM) method (i.e., ZO-SPIDER-ADMM) with lower function query complexity for solving nonconvex problems with multiple nonsmooth penalties. Moreover, we prove that our ZO-SPIDER-ADMM achieves the optimal function query complexity of O(dn + dn^1/2 ϵ^-1) for finding an ϵ-approximate local solution, where n and d denote the sample size and data dimension, respectively. In particular, ZO-SPIDER-ADMM improves on the best existing nonconvex zeroth-order ADMM methods by a factor of O(d^1/3 n^1/6). Moreover, we propose a fast online variant (i.e., ZOO-SPIDER-ADMM). Our theoretical analysis shows that ZOO-SPIDER-ADMM has a function query complexity of O(dϵ^-3/2), which improves the best existing result by a factor of O(ϵ^-1/2). Finally, we use the task of structured adversarial attacks on black-box deep neural networks to demonstrate the efficiency of our algorithms.
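To make the "function values only" idea concrete, here is a minimal sketch of a two-point Gaussian-smoothing zeroth-order gradient estimator, the basic primitive underlying methods of this kind. This is an illustrative toy, not the paper's ZO-SPIDER-ADMM algorithm; the function name, smoothing parameter `mu`, and sample count `num_samples` are our own choices. Each estimate costs `num_samples + 1` function queries, which is why reducing query complexity matters.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-5, num_samples=1000, seed=None):
    """Estimate grad f(x) using only function evaluations.

    Two-point Gaussian-smoothing estimator:
        g ~= (1/q) * sum_i [f(x + mu*u_i) - f(x)] / mu * u_i,
    where u_i ~ N(0, I). Illustrative sketch, not the paper's estimator.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    fx = f(x)                      # one query, reused for all directions
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d) # random search direction
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_samples

# Example: f(x) = ||x||^2 has true gradient 2x.
f = lambda x: float(np.dot(x, x))
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(f, x, mu=1e-5, num_samples=10000, seed=0)
```

With enough samples, `g` approaches the true gradient `2x`; the estimator's variance grows with the dimension `d`, which is why the d-dependent terms appear in the query-complexity bounds above.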
