FEAMOE: Fair, Explainable and Adaptive Mixture of Experts

10/10/2022
by Shubham Sharma et al.

Three key properties desired of trustworthy machine learning models deployed in high-stakes environments are fairness, explainability, and the ability to account for various kinds of "drift". While drifts in model accuracy, for example due to covariate shift, have been widely investigated, drifts in fairness metrics over time remain largely unexplored. In this paper, we propose FEAMOE, a novel "mixture-of-experts"-inspired framework aimed at learning fairer, more explainable and interpretable models that can also rapidly adjust to drifts in both the accuracy and the fairness of a classifier. We illustrate the framework for three popular fairness measures and demonstrate how drift can be handled with respect to these fairness constraints. Experiments on multiple datasets show that our framework, applied to a mixture of linear experts, performs comparably to neural networks in terms of accuracy while producing fairer models. On the large-scale HMDA dataset, we show that while various models trained on HMDA exhibit drift with respect to both accuracy and fairness, FEAMOE handles these drifts for all the considered fairness measures while maintaining model accuracy. We also prove that the proposed framework admits fast Shapley value explanations, making computationally efficient feature-attribution-based explanations of model decisions readily available via FEAMOE.
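A brief illustration of why linear experts make Shapley explanations cheap: for a linear model f(x) = w·x + b with independent features, the Shapley value of feature i has the closed form w_i(x_i − μ_i), so no sampling over feature coalitions is needed. The sketch below is a hypothetical toy, not the paper's implementation; it combines two linear experts with fixed gate weights for illustration (FEAMOE's gates are input-dependent), and all names and values are assumptions.

```python
import numpy as np

def linear_shapley(w, x, feature_means):
    # Exact Shapley values for a linear model with independent features:
    # phi_i = w_i * (x_i - E[x_i]). Closed form, O(d) per example.
    return w * (x - feature_means)

# Two toy linear experts and fixed gate weights (illustration only).
w1 = np.array([2.0, -1.0, 0.5])
w2 = np.array([0.0,  1.0, 1.0])
gates = np.array([0.7, 0.3])

x = np.array([1.0, 2.0, 3.0])      # instance to explain
means = np.array([0.0, 1.0, 2.0])  # background feature means

# Gate-weighted combination of per-expert attributions.
phi = gates[0] * linear_shapley(w1, x, means) \
    + gates[1] * linear_shapley(w2, x, means)

# Attributions sum to f(x) - E[f(x)] for the gated linear combination.
total = phi.sum()
```

With fixed gates the mixture is itself linear, so the weighted attributions remain exact; handling input-dependent gates efficiently is precisely the kind of result the paper's proof addresses.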


Related research

- 06/05/2022: Interpretable Mixture of Experts for Structured Data
- 10/30/2022: Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning
- 10/17/2021: Poisoning Attacks on Fair Machine Learning
- 09/27/2022: Explainable Global Fairness Verification of Tree-Based Classifiers
- 10/13/2020: FaiR-N: Fair and Robust Neural Networks for Structured Data
- 10/14/2020: Explainability for fair machine learning
- 11/19/2021: ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets
