Text Classification with Few Examples using Controlled Generalization

05/18/2020
by Abhijit Mahabal, et al.

Training data for text classification is often limited in practice, especially for applications with many output classes or involving many related classification problems. This means classifiers must generalize from limited evidence, but the manner and extent of generalization is task dependent. Current practice primarily relies on pre-trained word embeddings to map words unseen in training to similar seen ones. Unfortunately, this squishes many components of meaning into highly restricted capacity. Our alternative begins with sparse pre-trained representations derived from unlabeled parsed corpora; based on the available training data, we select features that offer the relevant generalizations. This produces task-specific semantic vectors; here, we show that a feed-forward network over these vectors is especially effective in low-data scenarios, compared to existing state-of-the-art methods. By further pairing this network with a convolutional neural network, we keep this edge in low-data scenarios and remain competitive when using full training sets.
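The pipeline the abstract describes can be sketched roughly as: start from sparse pre-trained feature vectors, use the (small) labeled set to select the features that generalize for this particular task, then train a feed-forward network on the resulting task-specific vectors. The sketch below is illustrative only: the selection criterion (mutual information), the toy data, and all dimensions are stand-ins, not the paper's actual features or selection method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for sparse pre-trained representations:
# 40 labeled examples, 200 mostly-zero features.
X = (rng.random((40, 200)) < 0.05).astype(float)
y = rng.integers(0, 2, size=40)
# Make the first few features genuinely predictive of the label,
# so feature selection has something real to find.
X[:, :5] += y[:, None]

# Select the features that generalize for THIS task, using the
# limited training data (mutual information is a hypothetical
# stand-in for the paper's controlled-generalization criterion).
selector = SelectKBest(mutual_info_classif, k=20).fit(X, y)
X_task = selector.transform(X)  # task-specific semantic vectors

# Train a small feed-forward network over the selected features.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_task, y)
print(X_task.shape, clf.score(X_task, y))
```

In the low-data regime the abstract targets, the selection step is doing the heavy lifting: pruning the sparse feature space to the dimensions the few labels actually support, before the network sees them.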


