Accelerated Randomized Block-Coordinate Algorithms for Co-coercive Equations and Applications

01/08/2023
by Quoc Tran-Dinh, et al.
In this paper, we develop an accelerated randomized block-coordinate algorithm to approximate a solution of a co-coercive equation. Such equations play a central role in optimization and related fields and cover many mathematical models as special cases, including convex optimization, convex-concave minimax, and variational inequality problems. Our algorithm relies on a recent Nesterov-accelerated interpretation of the Halpern fixed-point iteration in [48]. We establish that the new algorithm achieves an 𝒪(1/k^2) last-iterate convergence rate on 𝔼[‖Gx^k‖^2], where G is the underlying co-coercive operator, 𝔼[·] is the expectation, and k is the iteration counter. This rate is significantly faster than the 𝒪(1/k) rates of standard forward or gradient-based methods from the literature. We also prove o(1/k^2) rates on both 𝔼[‖Gx^k‖^2] and 𝔼[‖x^{k+1} - x^k‖^2]. Next, we apply our method to derive two accelerated randomized block-coordinate variants of the forward-backward and Douglas-Rachford splitting schemes, respectively, for solving a monotone inclusion involving the sum of two operators. As a byproduct, these variants also converge faster than their non-accelerated counterparts. Finally, we apply our scheme to a finite-sum monotone inclusion that has various applications in machine learning and statistical learning, including federated learning. As a result, we obtain a novel federated-learning-type algorithm with fast and provable convergence rates.
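To make the underlying iteration concrete, below is a minimal, hypothetical Python sketch of a randomized block-coordinate Halpern-type iteration for Gx = 0. It only illustrates the two ingredients the abstract names (Halpern anchoring toward the initial point and per-block randomized updates); it is not the paper's exact accelerated scheme, and the step size lam = 1/L, anchoring weight beta_k = 1/(k+2), toy operator, and all function names are illustrative assumptions.

```python
import numpy as np

def rand_block_halpern(G, x0, lam, n_blocks, iters, seed=None):
    """Hypothetical sketch: randomized block-coordinate Halpern-type
    iteration for the co-coercive equation Gx = 0.

    Each step forms the classical Halpern update
        beta_k * x0 + (1 - beta_k) * (x - lam * G(x)),  beta_k = 1/(k+2),
    but applies it only to one randomly chosen coordinate block.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for k in range(iters):
        beta = 1.0 / (k + 2)   # anchoring weight pulling back toward x0
        g = G(x)               # a practical method would evaluate only one block of G
        idx = blocks[rng.integers(n_blocks)]
        x[idx] = beta * x0[idx] + (1.0 - beta) * (x[idx] - lam * g[idx])
    return x

# Toy co-coercive operator: G = grad f for the convex quadratic
# f(x) = 0.5 * x'Ax - b'x with A symmetric PSD. By Baillon-Haddad, G is
# (1/L)-co-coercive with L = ||A||_2, so x - lam*G(x) is nonexpansive
# for lam <= 2/L.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T / 20.0
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2)

x = rand_block_halpern(lambda z: A @ z - b, x0=np.zeros(20),
                       lam=1.0 / L, n_blocks=4, iters=20000, seed=1)
print("residual ||Gx|| =", np.linalg.norm(A @ x - b))
```

On this toy quadratic the residual ‖Gx‖ is driven toward zero; the accelerated algorithm analyzed in the paper goes further by reorganizing this anchoring into a Nesterov-style update, which is what yields the 𝒪(1/k^2) rate on 𝔼[‖Gx^k‖^2] stated above.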
