
Learning from Multiple Complementary Labels
Complementary-label learning is a new weakly-supervised learning framewo...

Provably Consistent Partial-Label Learning
Partial-label learning (PLL) is a multi-class classification problem, wh...

A Self-paced Regularization Framework for Partial-Label Learning
Partial label learning (PLL) aims to solve the problem where each traini...

Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
Partial label (PL) learning tackles the problem where each training inst...

Label Aggregation via Finding Consensus Between Models
Label aggregation is an efficient and low-cost way to make large dataset...

GM-PLL: Graph Matching based Partial Label Learning
Partial Label Learning (PLL) aims to learn from the data where each trai...

Loss factorization, weakly supervised learning and label noise robustness
We prove that the empirical risk of most well-known loss functions facto...
Progressive Identification of True Labels for Partial-Label Learning
Partial-label learning is one of the important weakly supervised learning problems, where each training example is equipped with a set of candidate labels that contains the true label. Most existing methods elaborately design learning objectives as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data. The goal of this paper is to propose a novel framework of partial-label learning without implicit assumptions on the model or optimization algorithm. More specifically, we propose a general estimator of the classification risk, theoretically analyze classifier-consistency, and establish an estimation error bound. We then explore a progressive identification method for approximately minimizing the proposed risk estimator, where the update of the model and the identification of true labels are conducted in a seamless manner. The resulting algorithm is model-independent and loss-independent, and compatible with stochastic optimization. Thorough experiments demonstrate that it sets the new state of the art.
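The alternating scheme the abstract describes — update the model, then re-identify the likely true label within each candidate set — can be sketched in plain NumPy. This is a minimal illustrative sketch, not the paper's implementation: it assumes a linear softmax classifier, hypothetical toy Gaussian data, and a simple rule that re-normalizes per-example candidate weights from the model's own predictions after each gradient step.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def progressive_identification(X, candidate_mask, n_epochs=300, lr=0.1, seed=0):
    """Illustrative progressive identification for partial-label learning.

    X: (n, d) features; candidate_mask: (n, k) binary, 1 marks a candidate label.
    Alternates a gradient step on a candidate-weighted cross-entropy with a
    re-normalization of the weights from the model's predictions, so the
    weights progressively concentrate on the likely true label.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = candidate_mask.shape[1]
    W = rng.normal(scale=0.01, size=(d, k))
    # start with uniform weights over each example's candidate set
    weights = candidate_mask / candidate_mask.sum(axis=1, keepdims=True)
    for _ in range(n_epochs):
        probs = softmax(X @ W)                      # (n, k) model predictions
        W -= lr * (X.T @ (probs - weights)) / n     # gradient of weighted CE
        masked = probs * candidate_mask             # restrict mass to candidates
        weights = masked / masked.sum(axis=1, keepdims=True)
    return W

# Hypothetical toy data: three Gaussian classes; each example's candidate set
# is its true label plus one uniformly random distractor label.
rng = np.random.default_rng(1)
means = np.array([[-3.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(m, 1.0, size=(40, 2)) for m in means])
y = np.repeat(np.arange(3), 40)
mask = np.zeros((120, 3))
mask[np.arange(120), y] = 1
distractor = (y + rng.integers(1, 3, size=120)) % 3
mask[np.arange(120), distractor] = 1

W = progressive_identification(X, mask)
pred = softmax(X @ W).argmax(axis=1)
acc = (pred == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Because the distractor labels are random while each class's true label is consistent, the model's predictions come to favor the true candidate, and the weight update reinforces that choice — the "seamless" coupling of model update and label identification mentioned above. Note this sketch uses full-batch gradient descent for brevity; the framework itself is compatible with stochastic optimization.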