Accelerated Randomized Block-Coordinate Algorithms for Co-coercive Equations and Applications
In this paper, we develop an accelerated randomized block-coordinate algorithm to approximate a solution of a co-coercive equation. Such equations play a central role in optimization and related fields and cover many mathematical models as special cases, including convex optimization, convex-concave minimax, and variational inequality problems. Our algorithm relies on a recent Nesterov-accelerated interpretation of the Halpern fixed-point iteration in [48]. We establish that the new algorithm achieves an 𝒪(1/k^2) last-iterate convergence rate on 𝔼[‖Gx^k‖^2], where G is the underlying co-coercive operator, 𝔼[·] is the expectation, and k is the iteration counter. This rate is significantly faster than the 𝒪(1/k) rates of standard forward or gradient-based methods from the literature. We also prove o(1/k^2) rates on both 𝔼[‖Gx^k‖^2] and 𝔼[‖x^{k+1} - x^k‖^2]. Next, we apply our method to derive two accelerated randomized block-coordinate variants of the forward-backward splitting and Douglas-Rachford splitting schemes, respectively, for solving a monotone inclusion involving the sum of two operators. As a byproduct, these variants also achieve faster convergence rates than their non-accelerated counterparts. Finally, we apply our scheme to a finite-sum monotone inclusion that has various applications in machine learning and statistical learning, including federated learning. As a result, we obtain a novel federated-learning-type algorithm with fast and provable convergence rates.
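To make the anchoring-plus-random-block structure concrete, the following Python sketch illustrates a classical Halpern-anchored randomized block-coordinate iteration for a co-coercive operator G (solving Gx = 0). It is only an illustration under simplified assumptions: the anchoring weight 1/(k+2), the fixed step size `beta`, and the function name `accel_rand_block_halpern` are hypothetical placeholders, G is evaluated in full for clarity, and this is the classical Halpern anchoring rather than the paper's Nesterov-accelerated reformulation.

```python
import numpy as np

def accel_rand_block_halpern(G, x0, num_blocks, iters, beta=0.5, rng=None):
    """Illustrative Halpern-type (anchored) randomized block-coordinate
    iteration for approximating a root of a co-coercive operator G,
    i.e. a point x with Gx = 0.

    This is a simplified sketch, NOT the paper's exact scheme: the
    anchoring weight 1/(k+2) and the fixed step size `beta` are
    placeholder choices, and G is evaluated in full for clarity
    (a genuine block-coordinate method would evaluate only the
    sampled block of G).
    """
    rng = rng or np.random.default_rng(0)
    x0 = np.asarray(x0, dtype=float)
    blocks = np.array_split(np.arange(x0.size), num_blocks)
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)                     # anchoring weight, vanishing in k
        blk = blocks[rng.integers(num_blocks)]  # sample one coordinate block
        g = G(x)
        # Anchored forward step applied only on the sampled block:
        # x_blk <- lam * x0_blk + (1 - lam) * (x_blk - beta * G(x)_blk)
        x[blk] = lam * x0[blk] + (1 - lam) * (x[blk] - beta * g[blk])
    return x

# Toy usage: G is the gradient of a strongly convex quadratic, hence co-coercive.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 1.0])
G = lambda x: A @ x - b
x_final = accel_rand_block_halpern(G, x0=np.zeros(2), num_blocks=2, iters=2000)
print(np.linalg.norm(G(x_final)))  # residual norm ‖Gx‖ should be small
```

The 𝒪(1/k^2) and o(1/k^2) rates stated above come from the Nesterov-accelerated reformulation with carefully chosen parameters; the sketch only conveys how a vanishing anchor toward the starting point x0 is combined with a randomized block-coordinate forward step.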