FedCCEA : A Practical Approach of Client Contribution Evaluation for Federated Learning

06/04/2021
by   Sung Kuk Shyn, et al.

Client contribution evaluation, also known as data valuation, is a crucial task in federated learning (FL) for client selection and incentive allocation. However, because access to raw data is restricted, only limited information, such as each client's local weights and local data size, is available for quantifying client contribution. Using data size from the available information, we introduce an empirical evaluation method called Federated Client Contribution Evaluation through Accuracy Approximation (FedCCEA). This method builds the Accuracy Approximation Model (AAM), which estimates a simulated test accuracy from sampled data-size inputs, and extracts each client's data quality and data size to measure its contribution. FedCCEA offers several advantages: (1) it lets clients choose their data size, (2) its evaluation time remains feasible regardless of the number of clients, and (3) it estimates contributions precisely in non-IID settings. We demonstrate the superiority of FedCCEA over previous methods through several experiments: client contribution distribution, client removal, and a robustness test under partial participation.
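The core idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, the least-squares stand-in for the Accuracy Approximation Model, and the ablation-style contribution score are all assumptions made for illustration. It fits a model mapping per-client sampled data-size ratios to a simulated test accuracy, then scores each client by how much predicted accuracy drops when that client's data size is set to zero.

```python
# Hypothetical sketch of the accuracy-approximation idea (not the paper's code):
# fit a model that maps per-client data-size ratios to a simulated test
# accuracy, then score each client by the predicted accuracy drop when its
# data size is removed.
import numpy as np

rng = np.random.default_rng(0)
n_clients = 5

# Synthetic "simulation" records: each row holds per-client data-size ratios
# in [0, 1]; accuracy depends more on some clients (higher quality) than others.
true_quality = np.array([0.30, 0.05, 0.20, 0.10, 0.35])
ratios = rng.uniform(0.0, 1.0, size=(200, n_clients))
accuracy = 0.5 + 0.4 * ratios @ true_quality + rng.normal(0.0, 0.01, 200)

# AAM stand-in: ordinary least squares with a bias term.
X = np.hstack([ratios, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

def predicted_accuracy(r):
    """Predict simulated test accuracy for a vector of data-size ratios."""
    return np.append(r, 1.0) @ coef

# Contribution of client i: predicted accuracy with full participation minus
# predicted accuracy with client i's data size zeroed out.
full = np.ones(n_clients)
contrib = np.array([
    predicted_accuracy(full)
    - predicted_accuracy(np.where(np.arange(n_clients) == i, 0.0, full))
    for i in range(n_clients)
])
print(contrib)  # largest entry corresponds to the highest-quality client
```

Because the surrogate model is cheap to query, this style of evaluation costs one ablation per client regardless of how many FL rounds were run, which matches the paper's claim of feasible evaluation time as the number of clients grows.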

Related research

- Client Adaptation improves Federated Learning with Simulated Non-IID Clients (07/09/2020): We present a federated learning approach for learning a client adaptable...
- FilFL: Accelerating Federated Learning via Client Filtering (02/13/2023): Federated learning is an emerging machine learning paradigm that enables...
- FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning (05/03/2022): Robustness is becoming another important challenge of federated learning...
- FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning (07/25/2023): Traditional federated learning uses the number of samples to calculate t...
- Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization (11/21/2022): The aim of Machine Unlearning (MU) is to provide theoretical guarantees ...
- Is Shapley Value fair? Improving Client Selection for Mavericks in Federated Learning (06/20/2021): Shapley Value is commonly adopted to measure and incentivize client part...
- Decentralized adaptive clustering of deep nets is beneficial for client collaboration (06/17/2022): We study the problem of training personalized deep learning models in a ...
