Fairness and underspecification in acoustic scene classification: The case for disaggregated evaluations

Underspecification and fairness in machine learning (ML) applications have recently become two prominent issues in the ML community. Acoustic scene classification (ASC) applications have so far remained unaffected by this discussion, but are now increasingly being used in real-world systems where fairness and reliability are critical. In this work, we argue for the need for a more holistic evaluation process for ASC models through disaggregated evaluations, which entails accounting for performance differences across several factors, such as city, location, and recording device. Although these factors play a well-understood role in the performance of ASC models, most works report a single evaluation metric aggregated over all strata of a particular dataset. We argue that metrics computed on specific sub-populations of the underlying data carry valuable information about the expected real-world behaviour of proposed systems, and that reporting them could improve the transparency and trustworthiness of such systems. We demonstrate the effectiveness of the proposed evaluation process in uncovering underspecification and fairness problems exhibited by several standard ML architectures when trained on two widely used ASC datasets. Our evaluation shows that all examined architectures exhibit large biases across all factors considered, in particular with respect to the recording location, and that different architectures exhibit different biases even though they are trained with the same experimental configuration.
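The core of a disaggregated evaluation is computing the chosen metric separately for each stratum of a metadata factor (city, location, or recording device) instead of only over the full test set. A minimal sketch of this idea, using accuracy and hypothetical example data (the labels, predictions, and device identifiers below are illustrative, not taken from the paper or its datasets):

```python
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Compute accuracy per stratum of one metadata factor.

    y_true, y_pred: sequences of scene labels.
    groups: sequence of stratum identifiers for the same factor,
    e.g. recording device or city (one entry per example).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Illustrative per-device breakdown alongside the usual aggregate metric.
y_true = ["park", "metro", "park", "street", "metro", "park"]
y_pred = ["park", "metro", "bus",  "street", "bus",   "park"]
device = ["a",    "a",     "b",    "b",      "b",     "a"]

per_device = disaggregated_accuracy(y_true, y_pred, device)
# Device "a" scores 3/3, device "b" only 1/3: a gap that the
# single aggregate accuracy (4/6) would hide.
```

The same function can be reapplied with `groups` set to city or location identifiers, yielding one breakdown per factor; the spread between the best- and worst-performing strata is then a simple indicator of the biases the abstract describes.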
