
Preventing Discriminatory Decision-making in Evolving Data Streams

by Zichong Wang, et al.

Bias in machine learning has rightly received significant attention over the last decade. However, most fair machine learning (fair-ML) work on bias in decision-making systems has focused solely on the offline setting. Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking. The unique challenges of the online environment make addressing bias harder than in the offline setting. First, Streaming Machine Learning (SML) algorithms must deal with a constantly evolving real-time data stream. Second, they need to adapt to changing data distributions (concept drift) to make accurate predictions on new incoming data. Adding fairness constraints to this already complicated task is not straightforward. In this work, we focus on the challenges of achieving fairness in biased data streams in the presence of concept drift, processing one sample at a time. We present Fair Sampling over Stream (FS^2), a novel fair rebalancing approach that can be integrated with SML classification algorithms. Furthermore, we devise the first unified performance-fairness metric, Fairness Bonded Utility (FBU), to efficiently evaluate and compare the performance-fairness trade-offs of different bias mitigation methods. FBU condenses the comparison of multiple techniques into one unified, intuitive evaluation, allowing model designers to choose a technique easily. Overall, extensive evaluations show that our approach surpasses other fair online techniques previously reported in the literature.
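To make the idea of fair rebalancing over a one-sample-at-a-time stream concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual FS^2 algorithm): it keeps a bounded buffer per (sensitive-group, label) combination and, alongside each incoming sample, replays stored samples from combinations that are lagging far behind, so a downstream online classifier sees a more balanced stream. All class and parameter names are assumptions made for illustration.

```python
import random
from collections import defaultdict, deque

class FairStreamRebalancer:
    """Hypothetical sketch of fair rebalancing over a data stream.

    Maintains a bounded replay buffer per (group, label) combination.
    For each new sample, it also emits one stored sample from any
    combination seen less than half as often as the most frequent one,
    nudging the training stream toward group/label balance.
    """

    def __init__(self, window=500, seed=0):
        self.window = window                               # bounded memory per combination
        self.buffers = defaultdict(lambda: deque(maxlen=window))
        self.counts = defaultdict(int)                     # samples emitted per combination
        self.rng = random.Random(seed)

    def process(self, x, y, group):
        """Ingest one sample; return the (possibly rebalanced) mini-batch to train on."""
        key = (group, y)
        self.buffers[key].append((x, y))
        self.counts[key] += 1
        out = [(x, y)]                                     # always train on the new sample
        max_count = max(self.counts.values())
        for k, buf in self.buffers.items():
            # replay one stored sample from under-represented combinations
            if buf and self.counts[k] < 0.5 * max_count:
                out.append(self.rng.choice(list(buf)))
                self.counts[k] += 1
        return out
```

In use, each arriving `(x, y, group)` triple would be passed through `process`, and the returned mini-batch fed to any incremental learner (e.g. a classifier exposing `partial_fit`), so the fairness intervention stays decoupled from the choice of SML algorithm.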




Online Decision Trees with Fairness

While artificial intelligence (AI)-based decision-making systems are inc...

FARF: A Fair and Adaptive Random Forests Classifier

As Artificial Intelligence (AI) is used in more applications, the need t...

Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions

In recent years, machine learning algorithms have become ubiquitous in a...

FAHT: An Adaptive Fairness-aware Decision Tree Classifier

Automated data-driven decision-making systems are ubiquitous across a wi...

Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP

Modern NLP systems exhibit a range of biases, which a growing literature...

Fairness-enhancing interventions in stream classification

The widespread usage of automated data-driven decision support systems ...

The Bias-Expressivity Trade-off

Learning algorithms need bias to generalize and perform better than rand...