Achieving Transparency in Distributed Machine Learning with Explainable Data Collaboration

12/06/2022
by Anna Bogdanova, et al.

Transparency of machine learning models used for decision support in various industries is becoming essential for ensuring their ethical use. To that end, feature attribution methods such as SHAP (SHapley Additive exPlanations) are widely used to explain the predictions of black-box machine learning models to customers and developers. However, a parallel trend has been to train machine learning models in collaboration with other data holders without accessing their data. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of the background data or only a partial view of the feature space. As a result, explanations obtained by different participants of distributed machine learning may be inconsistent with one another, undermining trust in the product. This paper presents an Explainable Data Collaboration Framework that combines a model-agnostic additive feature attribution algorithm (KernelSHAP) with the Data Collaboration method of privacy-preserving distributed machine learning. In particular, we present three algorithms for different scenarios of explainability in Data Collaboration and verify their consistency in experiments on open-access datasets. Our results demonstrate a significant decrease, by a factor of at least 1.75, in feature attribution discrepancies among the users of distributed machine learning.
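To make the problem concrete, here is a minimal sketch, not the paper's framework, of the inconsistency the abstract describes. It uses the open-source shap and scikit-learn packages on a public dataset; the shard boundaries, sample sizes, and variable names are hypothetical assumptions chosen for illustration. Two participants explain the same model with KernelSHAP but sample background data only from their own shards, as in horizontally partitioned training data, and obtain diverging feature attributions.

```python
# A minimal sketch, not the paper's algorithm: it only illustrates how
# KernelSHAP attributions depend on the explaining party's background data.
# Shard boundaries and sample sizes below are arbitrary assumptions.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Single-output prediction function: probability of the positive class.
def f(x):
    return model.predict_proba(x)[:, 1]

# Each participant samples background data only from its own shard.
background_a = X[:50]
background_b = X[250:300]
x_explain = X[400:405]  # instances both parties want explained

phi_a = shap.KernelExplainer(f, background_a).shap_values(x_explain, nsamples=200)
phi_b = shap.KernelExplainer(f, background_b).shap_values(x_explain, nsamples=200)

# Mean absolute feature-attribution discrepancy between the two participants.
print("attribution discrepancy:", np.abs(phi_a - phi_b).mean())
```

The printed value is the kind of discrepancy the paper's framework aims to reduce (by a factor of at least 1.75 in the reported experiments); this sketch only measures it, it does not implement the paper's three algorithms.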


Related research

05/03/2023 · Commentary on explainable artificial intelligence methods: SHAP and LIME
eXplainable artificial intelligence (XAI) methods have emerged to conver...

09/26/2020 · Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
Background: The lack of explanations for the decisions made by algorithm...

06/27/2023 · An Empirical Evaluation of the Rashomon Effect in Explainable Machine Learning
The Rashomon Effect describes the following phenomenon: for a given data...

10/19/2022 · Gradient Backpropagation based Feature Attribution to Enable Explainable-AI on the Edge
There has been a recent surge in the field of Explainable AI (XAI) which...

06/29/2019 · Privacy Risks of Explaining Machine Learning Models
Can we trust black-box machine learning with its decisions? Can we trust...

02/20/2019 · Data collaboration analysis for distributed datasets
In this paper, we propose a data collaboration analysis method for distr...

12/11/2020 · Dependency Decomposition and a Reject Option for Explainable Models
Deploying machine learning models in safety-related domains (e.g. auton...
