- MUSCLE: Strengthening Semi-Supervised Learning Via Concurrent Unsupervised Learning Using Mutual Information Maximization
  Deep neural networks are powerful, massively parameterized machine learn...
- Granular conditional entropy-based attribute reduction for partially labeled data with proxy labels
  Attribute reduction is one of the most important research topics in the ...
- MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification
  This paper presents MixText, a semi-supervised learning method for text ...
- Matching Distributions via Optimal Transport for Semi-Supervised Learning
  Semi-Supervised Learning (SSL) approaches have been an influential frame...
- ReRankMatch: Semi-Supervised Learning with Semantics-Oriented Similarity Representation
  This paper proposes integrating semantics-oriented similarity representa...
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering
  Recently, Semi-Supervised Learning (SSL) has shown much promise in lever...
- ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
  We improve the recently-proposed "MixMatch" semi-supervised learning alg...
Unsupervised Semantic Aggregation and Deformable Template Matching for Semi-Supervised Learning
Learning from unlabeled data has attracted considerable attention recently, yet extracting the expected high-level semantic features with unsupervised learning alone remains elusive. Meanwhile, semi-supervised learning (SSL) shows great promise in leveraging few labeled samples. In this paper, we combine the two and propose an Unsupervised Semantic Aggregation and Deformable Template Matching (USADTM) framework for SSL, which strives to improve classification performance with few labeled data and thereby reduce the cost of data annotation. Specifically, unsupervised semantic aggregation based on a Triplet Mutual Information (T-MI) loss is explored to generate semantic labels for unlabeled data. These semantic labels are then aligned to the actual classes under the supervision of the labeled data. Furthermore, a dynamically updated feature pool that stores labeled samples is used to assign proxy labels to unlabeled data, and these proxy labels serve as targets for cross-entropy minimization. Extensive experiments and analysis across four standard semi-supervised learning benchmarks validate that USADTM achieves top performance (e.g., 90.46% accuracy on CIFAR-10 with 40 labels and 95.20% accuracy with 250 labels). The code is released at https://github.com/taohan10200/USADTM.
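To make the proxy-labeling step concrete, below is a minimal PyTorch sketch. It assumes a per-class pool of labeled features whose L2-normalized means act as class templates, with each unlabeled sample taking the label of its most similar template. The pool capacity, the cosine-similarity matching rule, and all names (`FeaturePool`, `proxy_label_loss`) are illustrative assumptions, not the released USADTM implementation; see the repository linked above for the authors' actual code.

```python
# Hypothetical sketch of the feature-pool / proxy-label step described in the
# abstract. Matching unlabeled features to per-class prototypes by cosine
# similarity is an assumption for illustration, not the paper's exact rule.
import torch
import torch.nn.functional as F

class FeaturePool:
    """Keeps the most recent `capacity` labeled features per class."""

    def __init__(self, num_classes: int, feat_dim: int, capacity: int = 64):
        self.pools = [torch.zeros(0, feat_dim) for _ in range(num_classes)]
        self.capacity = capacity

    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Append new labeled features, keeping at most `capacity` per class.
        for c in range(len(self.pools)):
            new = feats[labels == c].detach()
            self.pools[c] = torch.cat([self.pools[c], new])[-self.capacity:]

    def prototypes(self) -> torch.Tensor:
        # One L2-normalized mean feature ("template") per class. Assumes each
        # pool has been seeded with at least one labeled feature.
        protos = torch.stack([p.mean(dim=0) for p in self.pools])
        return F.normalize(protos, dim=1)

def proxy_label_loss(pool: FeaturePool, unlabeled_feats: torch.Tensor,
                     logits: torch.Tensor) -> torch.Tensor:
    """Assign proxy labels by nearest class template, then cross-entropy."""
    sims = F.normalize(unlabeled_feats, dim=1) @ pool.prototypes().t()
    proxy_labels = sims.argmax(dim=1)  # index of most similar template
    return F.cross_entropy(logits, proxy_labels)
```

In a full training loop this proxy cross-entropy term would be combined with the supervised loss on labeled data and the T-MI aggregation objective; the exact weighting and pool-update policy are specified in the paper.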