
Pointwise Binary Classification with Pairwise Confidence Comparisons
Ordinary (pointwise) binary classification aims to learn a binary classifier from pointwise labeled data. However, such pointwise labels may not be directly accessible due to privacy, confidentiality, or security considerations. In this case, can we still learn an accurate binary classifier? This paper proposes a novel setting, namely pairwise comparison (Pcomp) classification, where we are given only pairs of unlabeled data for which we know that one instance is more likely to be positive than the other, instead of pointwise labeled data. Pcomp classification is useful for private or subjective classification tasks. To solve this problem, we present a mathematical formulation for the generation process of pairwise comparison data, based on which we exploit an unbiased risk estimator (URE) to train a binary classifier by empirical risk minimization and establish an estimation error bound. We first prove that a URE can be derived, and improve it using correction functions. Then, we start from the noisy-label learning perspective to introduce a progressive URE and improve it by imposing consistency regularization. Finally, experiments validate the effectiveness of our proposed solutions for Pcomp classification.