
Technical Note: Bias and the Quantification of Stability

by Peter D. Turney, et al.

Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias.
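The core idea, stability as agreement between concepts induced from two independent samples of the same distribution, can be illustrated with a minimal sketch. This is not Turney's actual formulation; the toy learner (nearest-centroid), the probe-sample agreement measure, and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw n points from a fixed distribution with a linear target concept.

    Illustrative stand-in for 'two sets of data from the same phenomenon'.
    """
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def learn_centroids(X, y):
    """Toy learning algorithm: a nearest-centroid classifier."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Classify each point by its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two independent samples from the same underlying distribution.
X1, y1 = sample(200)
X2, y2 = sample(200)
c1 = learn_centroids(X1, y1)
c2 = learn_centroids(X2, y2)

# Stability proxy: fraction of a fresh probe sample on which the two
# induced concepts agree (1.0 = perfectly repeatable results).
Xp, _ = sample(1000)
stability = float(np.mean(predict(c1, Xp) == predict(c2, Xp)))
print(f"agreement-based stability: {stability:.2f}")
```

Because the toy problem is linearly separable and the learner's boundary tracks the class centroids, the two induced concepts agree on nearly all probe points; an unstable algorithm would score noticeably lower under the same protocol.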

