Related research:

- Exploiting Anti-monotonicity of Multi-label Evaluation Measures for Inducing Multi-label Rules
  Exploiting dependencies between labels is considered to be crucial for m...
- Cautious Deep Learning
  Most classifiers operate by selecting the maximum of an estimate of the ...
- Inducing Generalized Multi-Label Rules with Learning Classifier Systems
  In recent years, multi-label classification has attracted a significant ...
- Efficient Discovery of Expressive Multi-label Rules using Relaxed Pruning
  Being able to model correlations between labels is considered crucial in...
- Seeing The Whole Patient: Using Multi-Label Medical Text Classification Techniques to Enhance Predictions of Medical Codes
  Machine learning-based multi-label medical text classifications can be u...
- Metric Learning for Dynamic Text Classification
  Traditional text classifiers are limited to predicting over a fixed set ...
- Exemplar Auditing for Multi-Label Biomedical Text Classification
  Many practical applications of AI in medicine consist of semi-supervised...
Regularizing Model Complexity and Label Structure for Multi-Label Text Classification
Multi-label text classification is a popular machine learning task where each document is assigned multiple relevant labels. This task is challenging due to high-dimensional features and correlated labels. Multi-label text classifiers need to be carefully regularized to prevent severe overfitting in the high-dimensional space, and also need to take label dependencies into account in order to make accurate predictions under uncertainty. We demonstrate significant and practical improvement by carefully regularizing model complexity during the training phase and regularizing the label search space during the prediction phase. Specifically, we regularize classifier training with an Elastic-net (L1+L2) penalty to reduce model complexity and size, and employ early stopping to prevent overfitting. At prediction time, we apply support inference to restrict the search space to the label sets encountered in the training set, and the GFM F-measure optimizer to make optimal predictions for the F1 metric. We show that although support inference only provides density estimates over existing label combinations, the algorithm can output unseen label combinations when combined with the GFM predictor. Taken collectively, our experiments show state-of-the-art results on many benchmark datasets. Beyond these performance and practical contributions, we make an interesting observation: contrary to the prior belief that support inference is purely an approximate inference procedure, we show that it acts as a strong regularizer on the label prediction structure. It allows the classifier to take label dependencies into account during prediction even if the classifier did not model any label dependencies during training.
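The training-phase recipe (Elastic-net penalty plus early stopping) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using scikit-learn with one binary classifier per label (binary relevance); the choice of estimator, the hyperparameter values, and the placeholder variables `train_docs`, `Y_train`, and `test_doc` are assumptions for illustration, not details taken from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier

# Placeholder data (assumed, not from the paper): train_docs is a list of
# strings, Y_train an (n, m) binary label-indicator matrix, test_doc a string.
vectorizer = TfidfVectorizer(max_features=50_000)   # high-dimensional sparse features
X_train = vectorizer.fit_transform(train_docs)

base = SGDClassifier(
    loss="log_loss",            # logistic loss, so predict_proba is available
                                # (use loss="log" on scikit-learn < 1.1)
    penalty="elasticnet",       # L1 + L2 penalty: sparse and shrunk weights
    alpha=1e-4,                 # overall regularization strength (illustrative)
    l1_ratio=0.15,              # L1/L2 mix (illustrative)
    early_stopping=True,        # hold out part of the training data and stop
    validation_fraction=0.1,    # when the validation score stops improving
    n_iter_no_change=5,
)
clf = OneVsRestClassifier(base).fit(X_train, Y_train)

# Per-label marginal probabilities q_i = P(y_i = 1 | x) for a test document;
# these feed the prediction-time procedure sketched next.
q = clf.predict_proba(vectorizer.transform([test_doc]))[0]
```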
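At prediction time, support inference and the GFM predictor can be combined as below. This is a minimal sketch, assuming the marginal estimates `q` from the previous snippet and an independence assumption when scoring the support set; the function names are illustrative, and the GFM step follows the standard exact expected-F1 maximization scheme rather than the paper's own implementation.

```python
import numpy as np

def support_inference(q, support):
    """Restrict density estimation to label combinations seen in training.

    q:        (m,) per-label marginal probabilities from the classifier
    support:  (n, m) binary matrix of distinct training label vectors
    returns:  (n,) probabilities over the support set (renormalized)
    """
    # Score each candidate under an independence assumption, then renormalize.
    logp = support @ np.log(q + 1e-12) + (1 - support) @ np.log(1 - q + 1e-12)
    p = np.exp(logp - logp.max())
    return p / p.sum()

def gfm_predict(q, support):
    """Exact expected-F1 maximization (GFM) over the support distribution."""
    m = support.shape[1]
    p = support_inference(q, support)
    sizes = support.sum(axis=1).astype(int)      # |y| for each support vector

    # P_mat[i, s-1] = P(y_i = 1 and |y| = s), estimated from the support.
    P_mat = np.zeros((m, m))
    for vec, prob, s in zip(support, p, sizes):
        if s > 0:
            P_mat[vec.astype(bool), s - 1] += prob

    # Delta[i, k-1] = sum_s P(y_i = 1, |y| = s) / (s + k); the expected F1 of
    # predicting exactly k labels is 2 * (sum of the k largest Delta[:, k-1]).
    s_idx = np.arange(1, m + 1)
    W = 1.0 / (s_idx[:, None] + s_idx[None, :])
    Delta = P_mat @ W

    best_h = np.zeros(m, dtype=int)
    best_f = p[sizes == 0].sum()                 # empty prediction: E[F1] = P(y = 0)
    for k in range(1, m + 1):
        top = np.argsort(-Delta[:, k - 1])[:k]   # best k labels for this size
        f = 2.0 * Delta[top, k - 1].sum()
        if f > best_f:
            best_h = np.zeros(m, dtype=int)
            best_h[top] = 1
            best_f = f
    return best_h, best_f

# Toy illustration with 3 labels. The support contains only {0}, {1}, {0, 2},
# yet GFM returns the unseen combination {0, 1}.
support = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [1, 0, 1]])
q = np.array([0.90, 0.85, 0.05])
h, expected_f1 = gfm_predict(q, support)
print(h, expected_f1)   # -> [1 1 0], a combination never seen in training
```

Note that the top-k selection in `gfm_predict` ranges over all labels rather than over support vectors, so the returned label set need not appear in the support; this is exactly the behavior the abstract highlights, with the support-restricted distribution still shaping the prediction through `P_mat`.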