Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks

01/26/2021
by Hyunjin Choi, et al.

Contextualized representations from a pre-trained language model are central to achieving high performance on downstream NLP tasks. The pre-trained BERT and A Lite BERT (ALBERT) models can be fine-tuned to give state-of-the-art results in sentence-pair regression tasks such as semantic textual similarity (STS) and natural language inference (NLI). Although BERT-based models yield the [CLS] token vector as a reasonable sentence embedding, the search for an optimal sentence embedding scheme remains an active research area in computational linguistics. This paper explores sentence embedding models for BERT and ALBERT. In particular, we take a modified BERT network with siamese and triplet network structures called Sentence-BERT (SBERT) and replace BERT with ALBERT to create Sentence-ALBERT (SALBERT). We also experiment with an outer CNN sentence-embedding network for SBERT and SALBERT. We evaluate the performance of all sentence embedding models considered using the STS and NLI datasets. The empirical results indicate that our CNN architecture improves ALBERT models substantially more than BERT models on the STS benchmark. Despite having significantly fewer model parameters, ALBERT sentence embeddings are highly competitive with BERT in downstream NLP evaluations.
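
For readers who want a concrete picture of the pooling strategies the abstract contrasts, the sketch below extracts [CLS]-token and mean-pooled sentence embeddings from a pre-trained BERT (or ALBERT) checkpoint with the Hugging Face transformers library, and adds a toy 1-D CNN pooling head in the spirit of the outer CNN network mentioned above. The checkpoint names, the CNNPooler module, its kernel size, and the max-pooling choice are illustrative assumptions, not the paper's exact SBERT/SALBERT configuration or training setup.

```python
# Minimal sketch (not the paper's exact SBERT/SALBERT setup): compare the
# [CLS] token embedding with mean pooling over token embeddings, plus a small
# 1-D CNN pooling head of the kind the abstract describes. Model names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # swap in "albert-base-v2" for the ALBERT variant
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

sentences = ["A man is playing a guitar.", "Someone is strumming a guitar."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_states = encoder(**batch).last_hidden_state  # (batch, seq_len, hidden)

# Strategy 1: take the [CLS] token vector as the sentence embedding.
cls_embeddings = token_states[:, 0, :]

# Strategy 2: mean-pool token vectors, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
mean_embeddings = (token_states * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

# Strategy 3 (hypothetical head): a small 1-D CNN over the token sequence,
# analogous in spirit to the outer CNN sentence-embedding network mentioned
# in the abstract; kernel size and channel count are assumptions.
class CNNPooler(nn.Module):
    def __init__(self, hidden_size: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        feats = torch.relu(self.conv(states.transpose(1, 2)))   # (batch, hidden, seq_len)
        feats = feats.masked_fill(attention_mask.unsqueeze(1) == 0, float("-inf"))
        return feats.max(dim=2).values                           # max-pool over the sequence

cnn_embeddings = CNNPooler(encoder.config.hidden_size)(token_states, batch["attention_mask"])

# Cosine similarity between the two sentences under each pooling strategy.
for name, emb in [("CLS", cls_embeddings), ("mean", mean_embeddings), ("CNN", cnn_embeddings)]:
    sim = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
    print(f"{name}-pooled cosine similarity: {sim.item():.3f}")
```

In SBERT-style training, two such encoders share weights in a siamese arrangement and the pooled embeddings are compared with a cosine or classification objective; the sketch above only covers the inference-time pooling step.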

Related research

SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models (02/16/2020)
Sentence embedding is an important research topic in natural language pr...

A Token-wise CNN-based Method for Sentence Compression (09/23/2020)
Sentence compression is a Natural Language Processing (NLP) task aimed a...

ColBERT: Using BERT Sentence Embedding for Humor Detection (04/27/2020)
Automatic humor detection has interesting use cases in modern technologi...

Extracting Similar Questions From Naturally-occurring Business Conversations (06/03/2022)
Pre-trained contextualized embedding models such as BERT are a standard ...

Summarizing Utterances from Japanese Assembly Minutes using Political Sentence-BERT-based Method for QA Lab-PoliInfo-2 Task of NTCIR-15 (10/22/2020)
There are many discussions held during political meetings, and a large n...

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (08/27/2019)
BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new ...

Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset Augmentation Using Graph Theory (11/03/2020)
Most NLP datasets are manually labeled, so suffer from inconsistent labe...
