
Maximum Mean Discrepancy is Aware of Adversarial Attacks
The maximum mean discrepancy (MMD) test, as a representative two-sample ...
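For context, the plain kernel two-sample test underlying this work can be sketched; this is the standard unbiased squared-MMD estimate with a Gaussian kernel, not the paper's adversarial-aware variant, and the bandwidth `sigma` is an assumed hyperparameter:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Drop diagonal (self-similarity) terms for the unbiased U-statistic.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```

A large value suggests the two samples come from different distributions; a value near zero suggests the same distribution.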

Pointwise Binary Classification with Pairwise Confidence Comparisons
Ordinary (pointwise) binary classification aims to learn a binary classi...

Geometry-aware Instance-reweighted Adversarial Training
In adversarial machine learning, there was a common belief that robustne...

Provably Consistent Partial-Label Learning
Partial-label learning (PLL) is a multi-class classification problem, wh...

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels
In weakly supervised learning, unbiased risk estimator (URE) is a powerfu...

Parts-dependent Label Noise: Towards Instance-dependent Label Noise
Learning with instance-dependent label noise is challenging, because...

Class2Simi: A New Perspective on Learning with Label Noise
Label noise is ubiquitous in the era of big data. Deep learning algorith...

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning
The transition matrix, denoting the transition relationship from clean l...
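The role of the transition matrix mentioned in this abstract can be sketched with a minimal example; the function name is illustrative, and the convention T[i, j] = p(noisy label j | clean label i) is an assumption for the sketch:

```python
import numpy as np

def noisy_posterior(clean_posterior, T):
    """Map clean-label posteriors p(y|x) to noisy-label posteriors,
    assuming T[i, j] = p(noisy = j | clean = i): p(noisy = j | x) is
    the sum over i of p(clean = i | x) * T[i, j]."""
    return clean_posterior @ T
```

Loss-correction methods use this mapping in the forward direction during training, which is why accurately estimating T matters.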

Rethinking Importance Weighting for Deep Learning under Distribution Shift
Under distribution shift (DS) where the training data distribution diffe...

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Adversarial training based on the minimax formulation is necessary for o...

Do We Need Zero Training Loss After Achieving Zero Training Error?
Overparameterized deep networks have the capacity to memorize training d...
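The remedy associated with this paper, known as flooding, can be sketched in one line; the flood level `b` is a tuned hyperparameter, and in practice the transform is applied to the mini-batch loss tensor:

```python
def flood(loss, b):
    """Flooding: instead of driving the training loss to zero, keep it
    floating around a small level b. When loss > b this behaves like the
    original loss; when loss < b the sign flips, giving gradient ascent."""
    return abs(loss - b) + b
```

Above the flood level the objective is unchanged, so optimization proceeds normally until the loss approaches b.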

Progressive Identification of True Labels for Partial-Label Learning
Partial-label learning is one of the important weakly supervised learnin...

Multi-Class Classification from Noisy-Similarity-Labeled Data
A similarity label indicates whether two instances belong to the same cl...

Towards Mixture Proportion Estimation without Irreducibility
Mixture proportion estimation (MPE) is a fundamental problem of practica...

Confidence Scores Make Instance-dependent Label-noise Learning Possible
Learning with noisy labels has drawn a lot of attention. In this area, m...

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Deep neural networks (DNNs) are incredibly brittle due to adversarial ex...

Searching to Exploit Memorization Effect in Learning from Corrupted Labels
Sample-selection approaches, which attempt to pick up clean instances fr...

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning
We consider the situation in which a user has collected a small set of d...

Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach
From two unlabeled (U) datasets with different class priors, we can trai...

Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs
Summarizing large-scale directed graphs into small-scale representation...

Are Anchor Points Really Indispensable in Label-Noise Learning?
In label-noise learning, the noise transition matrix, denoting the probabili...

Fast and Robust Rank Aggregation against Model Misspecification
In rank aggregation, preferences from different users are summarized int...

Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation
In unsupervised domain adaptation (UDA), classifiers for the target doma...

Butterfly: Robust One-step Approach towards Wildly-unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) trains with clean labeled data in s...

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative
In the early history of positive-unlabeled (PU) learning, the sample sel...

How Does Disagreement Benefit Co-teaching?
Learning with noisy labels is one of the most important questions in weak...

Complementary-Label Learning for Arbitrary Losses and Models
In contrast to the standard classification paradigm where the true (or p...

Classification from Positive, Unlabeled and Biased Negative Data
Positive-unlabeled (PU) learning addresses the problem of learning a bin...

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels
It is challenging to train deep neural networks robustly on the industri...

Alternate Estimation of a Classifier and the Class-Prior from Positive and Unlabeled Data
We consider a problem of learning a binary classifier only from positive...

On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data
Empirical risk minimization (ERM), with proper loss function and regular...

Matrix Co-completion for Multi-label Classification with Missing Features and Labels
We consider a challenging multi-label classification problem where both ...

Masking: A New Perspective of Noisy Supervision
It is important to learn classifiers under noisy labels due to their ubi...

Co-sampling: Training Robust Networks for Extremely Noisy Supervision
Training robust deep networks is challenging under noisy labels. Current...
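The small-loss cross-update mechanism at the heart of this line of work (two networks each selecting probably-clean examples for their peer) can be sketched as follows; the function name and the per-example loss arrays are illustrative assumptions:

```python
import numpy as np

def small_loss_selection(losses_a, losses_b, keep_ratio):
    """Each network treats its smallest-loss examples in the mini-batch as
    probably clean and hands them to its PEER for the parameter update."""
    k = int(keep_ratio * len(losses_a))
    idx_for_b = np.argsort(losses_a)[:k]  # selected by network A, used to train B
    idx_for_a = np.argsort(losses_b)[:k]  # selected by network B, used to train A
    return idx_for_a, idx_for_b
```

Because the two networks have different views of the data, exchanging selections keeps one network's selection bias from being directly reinforced in itself.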

Active Feature Acquisition with Supervised Matrix Completion
Feature missing is a serious problem in many applications, which may lea...

Classification from Pairwise Similarity and Unlabeled Data
One of the biggest bottlenecks in supervised learning is its high labeli...

Binary Classification from Positive-Confidence Data
Reducing labeling costs in supervised learning is a critical issue in ma...

Estimation of Squared-Loss Mutual Information from Positive and Unlabeled Data
Capturing input-output dependency is an important task in statistical da...

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios
Modes and ridges of the probability density function behind observed dat...

Learning from Complementary Labels
Collecting labeled data is costly and thus a critical bottleneck in real...

Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning
Maximizing the area under the receiver operating characteristic curve (A...

Positive-Unlabeled Learning with Non-Negative Risk Estimator
From only positive (P) and unlabeled (U) data, a binary classifier could...
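The non-negative correction named in this title can be sketched; the hinge surrogate, the score arrays, and the known class prior `prior` are assumptions for illustration, and real training would apply this per mini-batch:

```python
import numpy as np

def nnpu_risk(scores_p, scores_u, prior, loss=lambda z: np.maximum(0, 1 - z)):
    """Non-negative PU risk estimate from positive scores (scores_p) and
    unlabeled scores (scores_u), with class prior pi_p = prior."""
    r_p_pos = loss(scores_p).mean()   # positives treated as label +1
    r_p_neg = loss(-scores_p).mean()  # positives treated as label -1
    r_u_neg = loss(-scores_u).mean()  # unlabeled treated as label -1
    # The estimated negative-class risk can go negative due to overfitting;
    # clipping it at zero is the non-negative correction.
    neg_part = r_u_neg - prior * r_p_neg
    return prior * r_p_pos + max(0.0, neg_part)
```

Without the `max(0.0, ...)` clip this reduces to the unbiased PU risk, whose negative excursions are what flexible models exploit to overfit.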

Revisiting Distributionally Robust Supervised Learning in Classification
Distributionally Robust Supervised Learning (DRSL) is necessary for buil...

Class-prior Estimation for Learning from Positive and Unlabeled Data
We consider the problem of estimating the class prior in an unlabeled da...

Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning
In PU learning, a binary classifier is trained from positive (P) and unl...

Whitening-Free Least-Squares Non-Gaussian Component Analysis
Non-Gaussian component analysis (NGCA) is an unsupervised linear dimensi...

Non-Gaussian Component Analysis with Log-Density Gradient Estimation
Non-Gaussian component analysis (NGCA) is aimed at identifying a linear ...

Transductive Learning with Multiclass Volume Approximation
Given a hypothesis space, the large volume principle by Vladimir Vapnik ...

Semi-Supervised Information-Maximization Clustering
Semi-supervised clustering aims to introduce prior knowledge in the deci...

SERAPH: Semi-supervised Metric Learning Paradigm with Hyper Sparsity
We propose a general informationtheoretic approach called Seraph (SEmi...
Gang Niu
Research Scientist, Imperfect Information Learning Team at RIKEN Center for Advanced Intelligence Project.