
Fair Meta-Learning: Learning How to Learn Fairly

by Dylan Slack, et al.

Datasets for fairness-relevant tasks can lack examples or be biased with respect to a particular label of a sensitive attribute. We demonstrate the usefulness of weight-based meta-learning approaches in such situations. For models trainable by gradient descent, we show that there exist parameter configurations from which a model can be optimized to be both fair and accurate in only a few gradient steps and with minimal data. To learn such weight sets, we adapt the popular MAML algorithm into Fair-MAML by adding a fairness regularization term. In practice, Fair-MAML allows practitioners to train fair machine learning models from only a few examples when data from related tasks is available. We empirically demonstrate the value of this technique by comparing against relevant baselines.
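The adaptation described above can be sketched as a first-order MAML loop whose task loss carries an extra fairness penalty. The sketch below is illustrative, not the paper's implementation: it assumes logistic-regression models, a demographic-parity gap as the regularizer, numerical gradients for simplicity, and hypothetical names (`fair_loss`, `fair_maml_step`, the weight `gamma`).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_loss(w, X, y, a, gamma):
    """Binary cross-entropy plus a fairness regularizer.

    gamma weighs the fairness term, analogous to Fair-MAML's
    regularized task objective; the demographic-parity gap used
    here is one illustrative choice of regularizer.
    """
    p = sigmoid(X @ w)
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # demographic-parity gap: difference in mean predicted positive
    # rate between the two sensitive groups (a == 0 vs. a == 1)
    gap = abs(p[a == 1].mean() - p[a == 0].mean())
    return bce + gamma * gap

def num_grad(f, w, eps=1e-5):
    """Central-difference gradient; keeps the sketch dependency-free."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

def fair_maml_step(w, tasks, inner_lr=0.5, meta_lr=0.1, gamma=1.0):
    """One first-order meta-update over a batch of tasks.

    Each task is (X_support, y_support, a_support, X_query, y_query,
    a_query). The inner step adapts to the support set; the outer
    gradient is taken at the adapted weights on the query set.
    """
    meta_g = np.zeros_like(w)
    for (Xs, ys, a_s, Xq, yq, aq) in tasks:
        # inner adaptation: one gradient step on the support set
        g = num_grad(lambda v: fair_loss(v, Xs, ys, a_s, gamma), w)
        w_task = w - inner_lr * g
        # outer gradient at the adapted weights (first-order MAML)
        meta_g += num_grad(lambda v: fair_loss(v, Xq, yq, aq, gamma), w_task)
    return w - meta_lr * meta_g / len(tasks)
```

After meta-training, a new task is handled the same way as the inner loop: a few gradient steps on `fair_loss` from the learned initialization, which is exactly the few-shot, few-step regime the abstract targets.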

