Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning

11/30/2022
by Chengming Xu, et al.

Positive-Unlabeled (PU) learning aims to learn a model from rare positive samples and abundant unlabeled samples. Compared with classical binary classification, PU learning is much more challenging because many data instances are incompletely annotated: only a subset of the most confident positive samples is labeled, there is not enough evidence to categorize the remaining samples, and many of the unlabeled data may in fact be positive. Research on this topic is particularly useful and essential for real-world tasks in which labeling is very expensive. For example, recognition tasks in disease diagnosis, recommendation systems, and satellite image recognition may have only a few positive samples that experts can annotate. However, existing PU learning methods largely overlook the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance: the model fits the easy noisy data while failing to sufficiently exploit the hard data. In this paper, we focus on improving the commonly used nnPU estimator with a novel training pipeline. We highlight that samples in a dataset differ intrinsically in hardness and that easy and hard data call for different learning strategies. Accordingly, we first split the unlabeled dataset using an early-stop strategy: samples whose predictions are inconsistent between the temporary (early-stopped) model and the base model are regarded as hard samples. The model then applies a noise-tolerant Jensen-Shannon divergence loss to the easy data, and a dual-source consistency regularization to the hard data, which combines cross-consistency between the student and base models on low-level features with self-consistency on high-level features and predictions.
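To make the described pipeline more concrete, the PyTorch sketch below illustrates three ingredients named in the abstract: the standard non-negative PU (nnPU) risk estimator the method builds on, a prediction-disagreement split between an early-stopped temporary model and the base model to separate easy from hard unlabeled samples, and a bounded Jensen-Shannon divergence loss for the easy data. The function names, the sigmoid surrogate loss, and the class-prior argument pi_p are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the ideas above; names and defaults are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def nnpu_risk(scores_p, scores_u, pi_p, loss_fn=lambda s, y: torch.sigmoid(-y * s)):
    """Non-negative PU risk estimator (nnPU) on raw scores of positive/unlabeled batches."""
    risk_p_pos = loss_fn(scores_p, +1.0).mean()   # labeled positives treated as positive
    risk_p_neg = loss_fn(scores_p, -1.0).mean()   # labeled positives treated as negative
    risk_u_neg = loss_fn(scores_u, -1.0).mean()   # unlabeled samples treated as negative
    neg_risk = risk_u_neg - pi_p * risk_p_neg     # prior-corrected negative risk
    return pi_p * risk_p_pos + torch.clamp(neg_risk, min=0.0)  # clamp keeps it non-negative


@torch.no_grad()
def split_unlabeled_by_hardness(base_model, temp_model, unlabeled_x):
    """Mark an unlabeled sample as hard when the early-stopped temporary model and the
    base model disagree on its predicted label; the remaining samples are easy."""
    pred_base = (base_model(unlabeled_x) > 0).long()
    pred_temp = (temp_model(unlabeled_x) > 0).long()
    hard_mask = pred_base.ne(pred_temp).squeeze(-1)
    return unlabeled_x[~hard_mask], unlabeled_x[hard_mask]   # (easy, hard)


def js_div_loss(logits, target_probs, eps=1e-8):
    """Symmetric Jensen-Shannon divergence between predicted and target label distributions."""
    p = F.softmax(logits, dim=-1)
    m = 0.5 * (p + target_probs)
    kl = lambda a, b: (a * (a.add(eps).log() - b.add(eps).log())).sum(-1)
    return 0.5 * (kl(p, m) + kl(target_probs, m)).mean()
```

Because the JS divergence is bounded, its gradient saturates on confidently wrong targets, which is what makes it more tolerant than cross-entropy to the label noise expected among the easy unlabeled samples.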


Related research

Robust Positive-Unlabeled Learning via Noise Negative Sample Self-correction (08/01/2023)
MixPUL: Consistency-based Augmentation for Positive and Unlabeled Learning (04/20/2020)
Improving the Classification of Rare Chords with Unlabeled Data (12/13/2020)
Community-Based Hierarchical Positive-Unlabeled (PU) Model Fusion for Chronic Disease Prediction (09/06/2023)
Learning from Positive and Unlabeled Data by Identifying the Annotation Process (03/02/2020)
Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment (04/19/2022)
A Mathematical Foundation for Robust Machine Learning based on Bias-Variance Trade-off (06/10/2021)
