Beyond Hard Labels: Investigating data label distributions

07/13/2022
by   Vasco Grossmann, et al.

High-quality data is a key aspect of modern machine learning. However, labels generated by humans suffer from issues like label noise and class ambiguity. We raise the question of whether hard labels are sufficient to represent the underlying ground-truth distribution in the presence of this inherent imprecision. To that end, we compare learning with hard and soft labels, quantitatively and qualitatively, on a synthetic and a real-world dataset. We show that using soft labels improves performance and yields a more regular structure of the internal feature space.
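The distinction between hard and soft labels can be sketched with the standard cross-entropy loss: a hard label puts all probability mass on a single class, while a soft label distributes mass across classes, e.g. to reflect annotator disagreement. The sketch below is illustrative only; the class probabilities are made up and are not taken from the paper.

```python
import numpy as np

def cross_entropy(probs, target):
    """Cross-entropy between a target distribution and predicted probabilities."""
    return -np.sum(target * np.log(probs))

# Predicted class probabilities from a hypothetical 3-class model.
probs = np.array([0.7, 0.2, 0.1])

# Hard label: all probability mass on class 0.
hard_target = np.array([1.0, 0.0, 0.0])

# Soft label: residual mass on classes the annotators found ambiguous.
soft_target = np.array([0.8, 0.15, 0.05])

hard_loss = cross_entropy(probs, hard_target)
soft_loss = cross_entropy(probs, soft_target)
```

With a hard target the loss reduces to `-log(probs[0])`; a soft target also penalizes probability mass assigned away from the secondary classes, giving the model a richer training signal for ambiguous examples.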

Related research

09/08/2023 · Generating the Ground Truth: Synthetic Data for Label Noise Research
Most real-world classification tasks suffer from label noise to some ext...

02/05/2020 · Exploratory Machine Learning with Unknown Unknowns
In conventional supervised learning, a training dataset is given with gr...

09/17/2023 · Mitigating Shortcuts in Language Models with Soft Label Encoding
Recent research has shown that large language models rely on spurious co...

07/02/2022 · Eliciting and Learning with Soft Labels from Every Annotator
The labels used to train machine learning (ML) models are of paramount i...

11/16/2021 · Who Decides if AI is Fair? The Labels Problem in Algorithmic Auditing
Labelled "ground truth" datasets are routinely used to evaluate and audi...

06/20/2021 · Improving Label Quality by Jointly Modeling Items and Annotators
We propose a fully Bayesian framework for learning ground truth labels f...

01/09/2023 · A Robust Multilabel Method Integrating Rule-based Transparent Model, Soft Label Correlation Learning and Label Noise Resistance
Model transparency, label correlation learning and the robustness to la...
