TransPOS: Transformers for Consolidating Different POS Tagset Datasets

09/24/2022
by Alex Li, et al.

In the hope of expanding training data, researchers often want to merge two or more datasets that were created under different labeling schemes. This paper considers two datasets that annotate part-of-speech (POS) tags under different tagging schemes and leverages the supervised labels of one dataset to help generate labels for the other. It further discusses the theoretical difficulties of this approach and proposes a novel supervised architecture employing Transformers to tackle the problem of consolidating two completely disjoint datasets. The results diverge from initial expectations and discourage further exploration of using disjoint label sets to consolidate such datasets.
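The abstract only outlines the idea of using one dataset's supervised labels to generate labels for the other; the paper's actual architecture is not described here. As a rough illustration of that general idea only, the sketch below assumes a Transformer token classifier would first be fine-tuned on dataset A's tagset and then used to pseudo-label sentences from dataset B. TAGSET_A, the example sentence, and the untuned bert-base-cased head are placeholders, not details from the paper.

```python
# Minimal sketch: pseudo-labeling dataset B's sentences with dataset A's POS tagset.
# TAGSET_A and the example sentence are hypothetical; in practice the model would be
# fine-tuned on dataset A before it is applied to dataset B.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

TAGSET_A = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET", "ADP", "PUNCT"]  # placeholder tagset

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(TAGSET_A),
    id2label=dict(enumerate(TAGSET_A)),
)
model.eval()  # untrained head here only to keep the sketch self-contained

# A sentence from "dataset B", already split into words (its own labels use a different scheme).
words = ["The", "transformer", "predicts", "tags", "."]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits          # shape: (1, num_subwords, len(TAGSET_A))
predictions = logits.argmax(dim=-1)[0].tolist()

# Collapse subword predictions to one tag per original word (keep the first subword's tag).
word_ids = encoding.word_ids(batch_index=0)
pseudo_labels, seen = [], set()
for position, word_id in enumerate(word_ids):
    if word_id is not None and word_id not in seen:
        seen.add(word_id)
        pseudo_labels.append(TAGSET_A[predictions[position]])

print(list(zip(words, pseudo_labels)))  # pseudo-labels in dataset A's scheme for dataset B's text
```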

