Related research:

- Learning from Multiple Complementary Labels: Complementary-label learning is a new weakly-supervised learning framework...
- Provably Consistent Partial-Label Learning: Partial-label learning (PLL) is a multi-class classification problem, where...
- A Self-paced Regularization Framework for Partial-Label Learning: Partial label learning (PLL) aims to solve the problem where each training...
- Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise: Partial label (PL) learning tackles the problem where each training instance...
- Label Aggregation via Finding Consensus Between Models: Label aggregation is an efficient and low-cost way to make large datasets...
- GM-PLL: Graph Matching based Partial Label Learning: Partial Label Learning (PLL) aims to learn from the data where each training...
- Loss factorization, weakly supervised learning and label noise robustness: We prove that the empirical risk of most well-known loss functions factors...
Progressive Identification of True Labels for Partial-Label Learning
Partial-label learning is one of the important weakly supervised learning problems, where each training example is equipped with a set of candidate labels that contains the true label. Most existing methods elaborately design learning objectives as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data. The goal of this paper is to propose a novel framework of partial-label learning without implicit assumptions on the model or the optimization algorithm. More specifically, we propose a general estimator of the classification risk, theoretically analyze its classifier-consistency, and establish an estimation error bound. We then explore a progressive identification method for approximately minimizing the proposed risk estimator, where the update of the model and the identification of true labels are conducted in a seamless manner. The resulting algorithm is model-independent and loss-independent, and compatible with stochastic optimization. Thorough experiments demonstrate that it sets the new state of the art.
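
To make the high-level description above concrete, here is a minimal sketch of the progressive-identification idea, assuming a PyTorch-style setup. The weighted-risk form, the function and variable names (train_step, candidate_mask, weights), and the softmax re-weighting rule are an illustrative reading of the abstract, not the authors' released code.

import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, candidate_mask, weights):
    """One stochastic update on a mini-batch.

    x:              (B, ...) input features
    candidate_mask: (B, C) 0/1 float mask of each example's candidate labels
                    (the true label is assumed to lie inside the mask)
    weights:        (B, C) current confidence that each candidate is the true
                    label; each row sums to 1 over its candidate set
    """
    log_probs = F.log_softmax(model(x), dim=1)  # (B, C)

    # Risk estimate: candidate-weighted cross-entropy. With one-hot weights
    # this reduces to ordinary supervised cross-entropy on the true label.
    loss = -(weights * log_probs).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Progressive identification: re-estimate the label weights from the
    # model's current predictions, restricted to the candidate set, so model
    # updates and true-label identification proceed in a seamless loop.
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1) * candidate_mask
        new_weights = probs / probs.sum(dim=1, keepdim=True)
    return loss.item(), new_weights

Because the estimator is just a weighted sum of per-label losses, the cross-entropy above could be swapped for another surrogate loss and the model for any differentiable architecture, which matches the abstract's claim of loss- and model-independence; and since everything runs per mini-batch, the scheme plugs directly into stochastic optimization.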