
Fair Meta-Learning: Learning How to Learn Fairly

11/06/2019
by Dylan Slack, et al.

Data sets for fairness-relevant tasks can lack examples or be biased with respect to a specific label of a sensitive attribute. We demonstrate the usefulness of weight-based meta-learning approaches in such situations. For models that can be trained through gradient descent, we show that there exist parameter configurations from which models can be optimized to be both fair and accurate in only a few gradient steps and with minimal data. To learn such weight sets, we adapt the popular MAML algorithm into Fair-MAML by including a fairness regularization term. In practice, Fair-MAML allows practitioners to train fair machine learning models from only a few examples when data from related tasks is available. We empirically exhibit the value of this technique by comparing to relevant baselines.
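To make the idea concrete, the sketch below shows a MAML-style inner/outer loop in which each task loss carries an added fairness regularization term, as the abstract describes. This is a minimal illustration, not the authors' implementation: the demographic-parity-style penalty, the synthetic task sampler, the tiny classifier, and all hyperparameters (including the fairness weight gamma) are illustrative assumptions.

```python
# Minimal sketch of a Fair-MAML-style loop (assumed details, not the paper's code).
import torch
import torch.nn.functional as F

def fairness_penalty(probs, sensitive):
    # Assumed regularizer: squared gap in mean predicted positive rate
    # between the two groups defined by a binary sensitive attribute.
    mask0, mask1 = sensitive == 0, sensitive == 1
    if mask0.sum() == 0 or mask1.sum() == 0:
        return probs.sum() * 0.0  # no penalty if a group is absent
    return (probs[mask0].mean() - probs[mask1].mean()) ** 2

def task_loss(params, x, y, s, gamma=1.0):
    # Two-layer classifier applied functionally so that inner-loop
    # updates remain differentiable for the outer (meta) update.
    h = torch.tanh(F.linear(x, params[0], params[1]))
    probs = torch.sigmoid(F.linear(h, params[2], params[3]).squeeze(-1))
    return F.binary_cross_entropy(probs, y) + gamma * fairness_penalty(probs, s)

def sample_task(n=64, d=5):
    # Hypothetical task generator: random linear concept with a
    # sensitive attribute correlated with the first feature.
    w = torch.randn(d)
    x = torch.randn(n, d)
    y = (x @ w > 0).float()
    s = (torch.rand(n) < torch.sigmoid(x[:, 0])).float()
    return x, y, s

d, hidden, inner_lr = 5, 16, 0.1
meta_params = [torch.randn(hidden, d) * 0.1, torch.zeros(hidden),
               torch.randn(1, hidden) * 0.1, torch.zeros(1)]
for p in meta_params:
    p.requires_grad_(True)
meta_opt = torch.optim.Adam(meta_params, lr=1e-2)

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):  # tasks per meta-batch
        x_tr, y_tr, s_tr = sample_task()  # support set
        x_te, y_te, s_te = sample_task()  # query set
        # One inner gradient step on the support set.
        grads = torch.autograd.grad(task_loss(meta_params, x_tr, y_tr, s_tr),
                                    meta_params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]
        # Outer loss on the query set, again including the fairness term.
        task_loss(adapted, x_te, y_te, s_te).backward()
    meta_opt.step()
```

At test time, a model adapted from the learned meta-initialization with only a handful of task examples would inherit both the accuracy and the fairness pressure baked into the outer objective, which is the few-shot fair training scenario the abstract targets.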

08/24/2019

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data

In this paper, we advocate for the study of fairness techniques in low d...
09/23/2020

Fair Meta-Learning For Few-Shot Classification

Artificial intelligence nowadays plays an increasingly prominent role in...
10/31/2017

Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm

Learning to learn is a powerful paradigm for enabling models to learn fr...
06/19/2020

Two Simple Ways to Learn Individual Fairness Metrics from Data

Individual fairness is an intuitive definition of algorithmic fairness t...
08/25/2020

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning

Biased regularization and fine-tuning are two recent meta-learning appro...
04/10/2023

Do We Train on Test Data? The Impact of Near-Duplicates on License Plate Recognition

This work draws attention to the large fraction of near-duplicates in th...
10/14/2020

Explainability for fair machine learning

As the decisions made or influenced by machine learning models increasin...