Investigating the Effectiveness of Representations Based on Word-Embeddings in Active Learning for Labelling Text Datasets

10/04/2019
by   Jinghui Lu, et al.

Manually labelling large collections of text data is a time-consuming, expensive, and laborious task, but one that is necessary to support machine learning based on text datasets. Active learning has been shown to be an effective way to reduce the effort required to exploit large collections of unlabelled data for machine learning tasks without fully labelling them. The representation mechanism used for text documents during active learning, however, has a significant influence on how effective the process will be. While simple vector representations such as bag-of-words have been shown to represent documents effectively during active learning, representation mechanisms based on the word embeddings prevalent in neural network research (e.g. word2vec and transformer-based models such as BERT) offer a promising, and as yet not fully explored, alternative. This paper describes a large-scale evaluation of the effectiveness of different text representation mechanisms for active learning across eight datasets from varied domains. The evaluation shows that representations based on modern word embeddings, especially BERT, which have not yet been widely used in active learning, achieve a significant improvement over more commonly used vector-based methods such as bag-of-words.
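To make the active learning setting concrete, the following is a minimal sketch of pool-based active learning with entropy-based uncertainty sampling, the selection strategy most commonly paired with the document representations discussed above. The function names, the toy document pool, and the stand-in probability model are illustrative assumptions, not the paper's actual experimental setup; in practice the probabilities would come from a classifier trained on bag-of-words or embedding representations.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_queries(pool, predict_proba, batch_size=2):
    """Rank unlabelled documents by predictive entropy (uncertainty sampling)
    and return the indices of the most uncertain ones to send to a human labeller."""
    scores = [(entropy(predict_proba(doc)), i) for i, doc in enumerate(pool)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:batch_size]]

# Toy stand-in for a trained classifier: hypothetical class probabilities
# for four unlabelled documents (assumed values, for illustration only).
pool = ["doc_a", "doc_b", "doc_c", "doc_d"]
fake_proba = {"doc_a": [0.9, 0.1], "doc_b": [0.5, 0.5],
              "doc_c": [0.6, 0.4], "doc_d": [0.99, 0.01]}

picked = select_queries(pool, lambda d: fake_proba[d], batch_size=2)
# picked → [1, 2]: doc_b and doc_c, the two documents the model is least sure about
```

In a full loop, the selected documents are labelled, added to the training set, the classifier is retrained, and selection repeats; the paper's contribution is evaluating how the choice of representation feeding that classifier (bag-of-words vs. word2vec vs. BERT) changes the effectiveness of this cycle.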


