Fairness in Reinforcement Learning

07/24/2019
by Paul Weng, et al.

Decision support systems (e.g., for ecological conservation) and autonomous systems (e.g., adaptive controllers in smart cities) are starting to be deployed in real applications. Although their operations often affect many users or stakeholders, fairness considerations are generally absent from their design, which can lead to grossly unfair outcomes for some of them. To tackle this issue, we advocate the use of social welfare functions that encode fairness, and we present this novel general problem in the context of (deep) reinforcement learning, although it could be extended to other machine learning tasks.
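As a minimal sketch of how a social welfare function can encode fairness, consider the generalized Gini social welfare function, one common choice in the fair-optimization literature: a weighted sum of the users' utilities sorted from worst-off to best-off, with strictly decreasing weights so that improving the worst-off user counts most. The function name `ggf` and the geometric weight scheme below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ggf(utilities, decay=0.5):
    """Generalized Gini social welfare function (illustrative sketch).

    Sorts per-user utilities in ascending order (worst-off first) and
    applies strictly decreasing weights, so more equal outcomes are
    preferred among allocations with the same total utility.
    """
    u = np.sort(np.asarray(utilities, dtype=float))  # worst-off user first
    w = decay ** np.arange(len(u))                   # 1, 0.5, 0.25, ...
    return float(np.dot(w, u))

# Same total utility, but the equal allocation scores higher:
fair = ggf([1.0, 1.0, 1.0, 1.0])
unfair = ggf([4.0, 0.0, 0.0, 0.0])
```

In a (deep) reinforcement learning setting, such a function would be applied to the vector of per-user returns, turning a multi-user control problem into the optimization of a single fairness-aware scalar objective.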
