Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?
It is well known that, given a bounded, smooth nonconvex function, standard gradient-based methods can find ϵ-stationary points (where the gradient norm is less than ϵ) in O(1/ϵ^2) iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. Moreover, as recently pointed out in Zhang et al. [2020], it is generally impossible to provide finite-time guarantees for finding an ϵ-stationary point of nonsmooth functions. Perhaps the most natural relaxation of this goal is to find points which are merely *near* such ϵ-stationary points (i.e., within some small distance δ of one). In this paper, we show that even this relaxed goal is hard to attain in general, given only black-box access to the function values and gradients. We conclude with a discussion of the result and its implications.
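To make the smooth-case guarantee concrete, here is a minimal sketch (not from the paper) of plain gradient descent stopping at the first ϵ-stationary point. The test function f(x, y) = x² + cos(y) and all parameter values are illustrative choices; for an L-smooth function bounded below, this loop is guaranteed to terminate within O(1/ϵ²) iterations with a suitable step size.

```python
import numpy as np

def find_epsilon_stationary(grad, x0, eps, lr=0.1, max_iter=100_000):
    """Run gradient descent until the gradient norm drops below eps.

    For L-smooth functions bounded below (with lr <= 1/L), classical
    analysis shows this needs at most O(1/eps^2) iterations.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:  # eps-stationary: ||grad f(x)|| < eps
            return x
        x = x - lr * g
    return x

# Illustrative smooth nonconvex function: f(x, y) = x^2 + cos(y).
grad = lambda v: np.array([2.0 * v[0], -np.sin(v[1])])

x = find_epsilon_stationary(grad, x0=[1.0, 1.0], eps=1e-3)
print(np.linalg.norm(grad(x)))  # gradient norm is below 1e-3 at termination
```

For a nonsmooth function such as f(x) = |x|, this stopping criterion fails: the gradient norm is 1 everywhere except exactly at 0, which is the obstruction the abstract refers to.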