Risk Analysis of Divide-and-Conquer ERM

03/09/2020
by Yong Liu, et al.

Theoretical analysis of divide-and-conquer based distributed learning with the least square loss in a reproducing kernel Hilbert space (RKHS) has recently been explored within the framework of learning theory. However, studies on learning theory for general loss functions and hypothesis spaces remain limited. To fill this gap, we study the risk performance of distributed empirical risk minimization (ERM) for general loss functions and hypothesis spaces. The main contributions are two-fold. First, we derive two risk bounds with optimal rates under basic assumptions on the hypothesis space, as well as smoothness, Lipschitz continuity, and strong convexity of the loss function. Second, we develop two more general risk bounds for distributed ERM without the restriction of strong convexity.
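To make the setting concrete, the following is a minimal sketch of the generic divide-and-conquer ERM scheme the abstract studies, instantiated for regularized least squares: the data is partitioned into m blocks, each block solves a local ERM problem, and the local solutions are averaged. This is an illustrative example under assumed parameter names (n, d, m, lam), not the paper's specific algorithm or its RKHS formulation.

```python
import numpy as np

def local_erm(X, y, lam):
    """Solve the local regularized least-squares ERM on one data block."""
    d = X.shape[1]
    # Closed-form minimizer of ||Xw - y||^2 / n_block + lam * ||w||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def divide_and_conquer_erm(X, y, m, lam):
    """Split the data into m blocks, solve ERM on each, and average."""
    blocks = np.array_split(np.arange(len(y)), m)
    return np.mean([local_erm(X[b], y[b], lam) for b in blocks], axis=0)

# Synthetic linear-regression data (illustrative only).
rng = np.random.default_rng(0)
n, d = 2000, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w_dc = divide_and_conquer_erm(X, y, m=8, lam=1e-3)
print(np.linalg.norm(w_dc - w_true))
```

Risk analyses of this scheme ask how the averaged estimator's excess risk compares with that of the ERM solution computed on the full dataset, and under which assumptions on the loss and hypothesis space the optimal rate is preserved.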
