Explainability for fair machine learning
As the decisions made or influenced by machine learning models increasingly impact our lives, it is crucial to detect, understand, and mitigate unfairness. But even simply determining what "unfairness" should mean in a given context is non-trivial: there are many competing definitions, and choosing between them often requires a deep understanding of the underlying task. It is thus tempting to use model explainability to gain insights into model fairness; however, existing explainability tools do not reliably indicate whether a model is indeed fair. In this work we present a new approach to explaining fairness in machine learning, based on the Shapley value paradigm. Our fairness explanations attribute a model's overall unfairness to individual input features, even in cases where the model does not operate on sensitive attributes directly. Moreover, motivated by the linearity of Shapley explainability, we propose a meta algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model, rather than a new model entirely. By explaining the original model, the perturbation, and the fair-corrected model, we gain insight into the accuracy-fairness trade-off that is being made by the intervention. We further show that this meta algorithm enjoys both flexibility and stability benefits with no loss in performance.
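To make the core idea concrete, the sketch below illustrates one way a group-level unfairness measure can be attributed to individual input features with Monte Carlo Shapley values: each feature's contribution to a demographic-parity gap is estimated by switching features on one at a time in random order against a background sample. The synthetic data, the hand-set scoring model, and the choice of demographic parity as the fairness metric are all assumptions made for brevity; this is an illustration of the general Shapley-attribution idea, not the paper's exact estimator.

```python
# Minimal sketch (assumptions noted above): Monte Carlo Shapley attribution
# of a demographic-parity gap to input features. Not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 features; `group` is a binary sensitive attribute that the
# model does NOT receive as an input, but feature 0 is correlated with it.
n = 2000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3))
X[:, 0] += 1.5 * group

def model(X):
    """A fixed, hand-set scoring model standing in for a trained classifier."""
    return (X @ np.array([1.0, 0.5, -0.25]) > 0.5).astype(float)

def dp_gap(preds, group):
    """Demographic-parity gap: |P(yhat=1 | g=1) - P(yhat=1 | g=0)|."""
    return abs(preds[group == 1].mean() - preds[group == 0].mean())

def shapley_fairness_attributions(n_features, n_samples=200):
    """Estimate each feature's Shapley contribution to the DP gap."""
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        background = X[rng.permutation(n)]   # independent background rows
        order = rng.permutation(n_features)  # random feature ordering
        Xp = background.copy()
        prev = dp_gap(model(Xp), group)      # no features "active" yet
        for j in order:
            Xp[:, j] = X[:, j]               # activate feature j
            cur = dp_gap(model(Xp), group)
            phi[j] += cur - prev             # marginal contribution of j
            prev = cur
    return phi / n_samples

phi = shapley_fairness_attributions(3)
print("total DP gap:", dp_gap(model(X), group))
print("per-feature attributions:", phi, "(sum approx. equals the gap)")
```

In this style of attribution, a feature can contribute to unfairness even though the sensitive attribute itself is never an input, which is the setting the abstract highlights; the paper's meta algorithm then trains an additive perturbation to the original model and explains the original, the perturbation, and the fair-corrected model to expose the accuracy-fairness trade-off.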