
Parallel Distributed Logistic Regression for Vertical Federated Learning without Third-Party Coordinator

11/22/2019
by   Shengwen Yang, et al.

Federated learning is a new distributed learning mechanism that allows a model to be trained on a large corpus of decentralized data owned by different data providers, without sharing or leaking raw data. According to the characteristics of the data distribution, it is usually classified into three categories: horizontal federated learning, vertical federated learning, and federated transfer learning. In this paper we present a solution for parallel distributed logistic regression for vertical federated learning. Compared with existing works, our solution removes the role of the third-party coordinator. The system is built on the parameter server architecture and aims to speed up model training by utilizing a cluster of servers when the volume of training data is large. We also evaluate the performance of parallel distributed model training; the experimental results demonstrate the strong scalability of the system.
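To make the vertical setting concrete, the following is a minimal NumPy sketch of logistic regression over vertically partitioned data: two hypothetical parties each hold a disjoint block of features for the same samples, and only party A holds the labels. Each party computes its partial linear predictor and updates only its own weight block from a shared residual. This is an illustrative toy, not the paper's protocol; in particular it exchanges intermediate values in plaintext and omits the privacy-preserving machinery and the parameter-server parallelism the paper describes.

```python
import numpy as np

# Hypothetical vertical split: same 200 samples, disjoint feature blocks.
rng = np.random.default_rng(0)
n = 200
X_a = rng.normal(size=(n, 3))   # party A's features (A also holds labels)
X_b = rng.normal(size=(n, 2))   # party B's features
true_w = np.array([1.0, -2.0, 0.5, 1.5, -1.0])
y = (np.hstack([X_a, X_b]) @ true_w > 0).astype(float)

w_a = np.zeros(3)               # each party keeps its own weight block
w_b = np.zeros(2)
lr = 0.1
for _ in range(200):
    # Each party computes its partial linear predictor locally; the sum
    # is exchanged (in plaintext here; a real VFL protocol protects it).
    z = X_a @ w_a + X_b @ w_b
    p = 1.0 / (1.0 + np.exp(-z))
    residual = p - y            # computed by the label holder, shared back
    # Gradient of the logistic loss w.r.t. each party's own block.
    w_a -= lr * X_a.T @ residual / n
    w_b -= lr * X_b.T @ residual / n

preds = (1.0 / (1.0 + np.exp(-(X_a @ w_a + X_b @ w_b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

The key structural point the sketch shows is that the full weight vector never needs to live at any single party: gradients factor so that each party can update its block given only the shared residual.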
