Transfer Learning Robustness in Multi-Class Categorization by Fine-Tuning Pre-Trained Contextualized Language Models

09/08/2019
by Xinyi Liu, et al.

This study compares the effectiveness and robustness of multi-class categorization of Amazon product data using transfer learning on pre-trained contextualized language models. Specifically, we fine-tuned BERT and XLNet, two models that capture bidirectional context and have achieved state-of-the-art performance on many natural language tasks and benchmarks, including text classification. Whereas existing classification studies and benchmarks largely focus on binary targets, with the exception of ordinal ranking tasks, here we examine the robustness of such models as the number of classes grows from 1 to 20. Our experiments demonstrate an approximately linear decrease in performance metrics (precision, recall, F1 score, and accuracy) as the number of class labels increases. Across the entire range of class counts, BERT consistently outperforms XLNet under identical hyperparameters when categorizing products from their textual descriptions. BERT is also cheaper to train than XLNet in computational cost, both time and memory.
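To make the experimental setup concrete, the sketch below fine-tunes BERT for k-class categorization and reports the four metrics named above. It is a minimal illustration, assuming the Hugging Face transformers library and scikit-learn (neither of which the abstract specifies); the checkpoint, hyperparameters, and the helper name fine_tune_and_evaluate are illustrative assumptions, not the authors' released code.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def fine_tune_and_evaluate(train_texts, train_labels, test_texts, test_labels,
                           num_classes, epochs=3, lr=2e-5, batch_size=16):
    """Fine-tune BERT on num_classes-way product categorization and
    return accuracy, precision, recall, and F1 on the test split."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=num_classes).to(device)

    # Tokenize product descriptions and batch them for training.
    enc = tokenizer(train_texts, truncation=True, padding=True,
                    max_length=128, return_tensors="pt")
    train_loader = DataLoader(
        TensorDataset(enc["input_ids"], enc["attention_mask"],
                      torch.tensor(train_labels)),
        batch_size=batch_size, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids.to(device),
                        attention_mask=attention_mask.to(device),
                        labels=labels.to(device))  # cross-entropy loss
            out.loss.backward()
            optimizer.step()

    # Evaluate: argmax over logits, then macro-averaged metrics.
    model.eval()
    enc = tokenizer(test_texts, truncation=True, padding=True,
                    max_length=128, return_tensors="pt").to(device)
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1).cpu().numpy()
    precision, recall, f1, _ = precision_recall_fscore_support(
        test_labels, preds, average="macro", zero_division=0)
    return {"accuracy": accuracy_score(test_labels, preds),
            "precision": precision, "recall": recall, "f1": f1}

Swapping in XLNetForSequenceClassification with the xlnet-base-cased checkpoint, under the same hyperparameters, would give the XLNet side of the comparison. A hypothetical sweep over class counts (build_splits is an assumed helper that subsamples the Amazon product data to k categories) mirrors the robustness study:

# build_splits is a hypothetical data-preparation helper, not shown here.
for k in range(2, 21):
    print(k, fine_tune_and_evaluate(*build_splits(k), num_classes=k))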

Related research

01/27/2023 · Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning
As the size of the pre-trained language model (PLM) continues to increas...

04/27/2021 · Multi-class Text Classification using BERT-based Active Learning
Text Classification finds interesting applications in the pickup and del...

06/12/2022 · DeepEmotex: Classifying Emotion in Text Messages using Deep Transfer Learning
Transfer learning has been widely used in natural language processing th...

10/11/2022 · A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models
Despite the remarkable success of pre-trained language models (PLMs), th...

05/24/2021 · PTR: Prompt Tuning with Rules for Text Classification
Fine-tuned pre-trained language models (PLMs) have achieved awesome perf...

01/21/2021 · Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline
Pre-trained deep language models (LM) have advanced the state-of-the-art...

03/11/2022 · verBERT: Automating Brazilian Case Law Document Multi-label Categorization Using BERT
In this work, we carried out a study about the use of attention-based al...
