Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment

04/19/2022
by   Yue Cao, et al.
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference, and has been widely used in low-level vision tasks. Pairwise labeled data with mean opinion scores (MOS) are required to train an FR-IQA model, but are time-consuming and cumbersome to collect. In contrast, unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance. However, due to the distribution inconsistency between labeled and unlabeled data, outliers may occur in the unlabeled data, further increasing the training difficulty. In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers. In particular, by treating all labeled data as positive samples, PU learning is leveraged to identify negative samples (i.e., outliers) in the unlabeled data. Semi-supervised learning (SSL) is then deployed to exploit the positive unlabeled data by dynamically generating pseudo-MOS labels. We adopt a dual-branch network with a reference branch and a distortion branch. Spatial attention is introduced in the reference branch to concentrate on the most informative regions, and the sliced Wasserstein distance is used for robust difference-map computation to handle the misalignment caused by images recovered by GAN models. Extensive experiments show that our method performs favorably against state-of-the-art methods on the benchmark datasets PIPAL, KADID-10k, TID2013, LIVE and CSIQ.
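The sliced Wasserstein distance mentioned in the abstract admits a compact sketch: projecting both feature sets onto random 1-D directions reduces optimal transport to sorting, which is what makes the comparison insensitive to spatial misalignment between reference and distorted feature maps. The NumPy function below is an illustrative sketch only, not the paper's implementation; the feature-matrix layout, number of projections, and squared-difference aggregation are all assumptions.

```python
import numpy as np

def sliced_wasserstein(feat_a, feat_b, n_projections=64, seed=0):
    """Approximate sliced Wasserstein distance between two sets of
    feature vectors (rows = spatial positions, columns = channels).

    Each random 1-D projection reduces optimal transport to sorting,
    so the distance compares value distributions rather than aligned
    pixel locations (hypothetical sketch, not the paper's code).
    """
    rng = np.random.default_rng(seed)
    d = feat_a.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Random unit direction on the channel sphere.
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # 1-D Wasserstein distance = distance between sorted projections.
        proj_a = np.sort(feat_a @ theta)
        proj_b = np.sort(feat_b @ theta)
        total += np.mean((proj_a - proj_b) ** 2)
    return np.sqrt(total / n_projections)
```

Because each projection is sorted before comparison, permuting the rows of either feature matrix (i.e., spatially shuffling the features) leaves the distance unchanged, which is exactly the robustness to misalignment the abstract appeals to.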

Related research

06/26/2021
Semi-Supervised Deep Ensembles for Blind Image Quality Assessment
Ensemble methods are generally regarded to be better than a single model...

12/28/2021
GuidedMix-Net: Semi-supervised Semantic Segmentation by Using Labeled Images as Reference
Semi-supervised learning is a challenging problem which aims to construc...

01/24/2023
Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning
Given a model well-trained with a large-scale base dataset, Few-Shot Cla...

02/17/2019
Exploiting Unlabeled Data in CNNs by Self-supervised Learning to Rank
For many applications the collection of labeled data is expensive labori...

07/26/2018
False Positive Reduction by Actively Mining Negative Samples for Pulmonary Nodule Detection in Chest Radiographs
Generating large quantities of quality labeled data in medical imaging i...

10/22/2020
Exploit Multiple Reference Graphs for Semi-supervised Relation Extraction
Manual annotation of the labeled data for relation extraction is time-co...

11/30/2022
Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning
Positive-Unlabeled (PU) learning aims to learn a model with rare positiv...
