Comparing Fairness Criteria Based on Social Outcome

06/13/2018
by Junpei Komiyama, et al.

Fairness in algorithmic decision-making processes is attracting increasing concern. When an algorithm is applied to human-related decision-making, an estimator that solely optimizes predictive power can learn biases present in the existing data, which motivates the notion of fairness in machine learning. While several different notions of fairness have been studied in the literature, few studies have examined how these notions affect the individuals involved. We demonstrate such a comparison between several policies induced by well-known fairness criteria, including color-blindness (CB), demographic parity (DP), and equalized odds (EO). We show that EO is the only criterion among them that removes group-level disparity. We conduct empirical studies on the social welfare and disparity of these policies.
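To make the two parity criteria concrete, the following is a minimal sketch (not from the paper; function names and data are hypothetical) of how the group-level gaps behind DP and EO can be measured for a binary classifier: DP compares positive prediction rates across groups, while EO compares them conditionally on the true label (i.e., true- and false-positive rates).

```python
def positive_rate(preds):
    """Fraction of positive (1) predictions in a list."""
    return sum(preds) / len(preds)

def dp_gap(preds, groups):
    """Demographic parity gap: absolute difference in positive
    prediction rates between groups 0 and 1."""
    by_group = {g: [p for p, gg in zip(preds, groups) if gg == g]
                for g in (0, 1)}
    return abs(positive_rate(by_group[0]) - positive_rate(by_group[1]))

def eo_gap(preds, groups, labels):
    """Equalized odds gap: the larger of the TPR gap (y=1) and the
    FPR gap (y=0) between groups 0 and 1."""
    gaps = []
    for y in (0, 1):
        rates = []
        for g in (0, 1):
            sel = [p for p, gg, yy in zip(preds, groups, labels)
                   if gg == g and yy == y]
            rates.append(positive_rate(sel))
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```

A classifier satisfies DP when `dp_gap` is (approximately) zero and EO when `eo_gap` is zero; the paper's point is that only the EO-induced policy removes group-level disparity in outcomes.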

