Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization

05/29/2019
by Feihu Huang, et al.

The alternating direction method of multipliers (ADMM) is a popular optimization tool for composite and constrained problems in machine learning. However, in many machine learning problems such as black-box attacks and bandit feedback, ADMM can fail because explicit gradients are difficult or infeasible to obtain. Zeroth-order (gradient-free) methods can effectively solve these problems because they require only objective function values during optimization. Although a few zeroth-order ADMM methods have recently been proposed, they rely on the convexity of the objective function, which limits their applicability. In this paper, we therefore propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) for solving nonconvex problems with multiple nonsmooth penalties, based on the coordinate smoothing gradient estimator. Moreover, we prove that both ZO-SVRG-ADMM and ZO-SAGA-ADMM achieve a convergence rate of O(1/T), where T denotes the number of iterations. In particular, our methods not only attain the best known convergence rate O(1/T) for nonconvex optimization, but also effectively solve many complex machine learning problems with multiple regularized penalties and constraints. Finally, we conduct experiments on black-box binary classification and structured adversarial attacks against black-box deep neural networks to validate the efficiency of our algorithms.
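To make the key building block concrete, the following is a minimal sketch of a coordinate-wise smoothing gradient estimator, the kind of gradient-free estimator the abstract refers to: each partial derivative is approximated by a symmetric finite difference of function values along a coordinate direction. The function name, smoothing parameter, and test function here are illustrative choices, not taken from the paper.

```python
import numpy as np

def coord_grad_estimate(f, x, mu=1e-4):
    """Estimate the gradient of f at x using only function values.

    For each coordinate i, approximate the partial derivative with a
    symmetric difference (f(x + mu*e_i) - f(x - mu*e_i)) / (2*mu),
    where e_i is the i-th standard basis vector and mu is a small
    smoothing parameter.
    """
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

# Illustrative check: for f(x) = ||x||^2 the true gradient is 2x,
# and the symmetric difference recovers it exactly for a quadratic.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 3.0])
g = coord_grad_estimate(f, x)
```

Each full gradient estimate costs 2d function evaluations for a d-dimensional problem, which is why variance-reduced schemes such as SVRG- and SAGA-style updates are attractive in the zeroth-order setting: they reduce how often such full estimates are needed.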

