Technical Note: Bias and the Quantification of Stability

by Peter D. Turney, et al.

Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias.
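To make the idea concrete, here is a minimal sketch (not the paper's exact method) of quantifying stability as the agreement between concepts induced from two independent samples of the same distribution. The toy distribution, the threshold learner, and the `agreement` function are all illustrative assumptions introduced here, not definitions taken from the paper.

```python
import random

def sample_data(n, seed):
    # Toy distribution (assumption): x uniform on [0, 1],
    # label = 1 if x > 0.5, with 10% label noise.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random()
        y = int(x > 0.5)
        if rng.random() < 0.1:
            y = 1 - y  # flip the label with probability 0.1
        data.append((x, y))
    return data

def learn_threshold(data):
    # A trivial learner: choose the decision threshold
    # that minimises training error on the sample.
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum(int((x > t) != y) for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def agreement(t1, t2, n_test=10_000, seed=0):
    # Stability measure (illustrative): the fraction of fresh points
    # on which the two induced concepts make the same prediction.
    rng = random.Random(seed)
    same = sum(int((x > t1) == (x > t2))
               for x in (rng.random() for _ in range(n_test)))
    return same / n_test

# Two data sets drawn from the same underlying distribution:
t_a = learn_threshold(sample_data(200, seed=1))
t_b = learn_threshold(sample_data(200, seed=2))
print(f"agreement = {agreement(t_a, t_b):.3f}")
```

A stable learner induces nearly the same concept from both samples, so its agreement score is close to 1; an unstable learner's score drops even when its predictive accuracy on each sample is similar.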
