
Better than BERT but Worse than Baseline

by Boxiang Liu, et al.

This paper compares BERT-SQuAD and Ab3P on the Abbreviation Definition Identification (ADI) task. ADI takes a text as input and outputs short forms (abbreviations/acronyms) and their long forms (expansions). BERT with reranking improves over BERT without reranking, but still fails to reach the Ab3P rule-based baseline. What is BERT missing? Reranking introduces two new features: charmatch, which exploits character constraints within acronyms, and freq, which exploits frequency constraints across documents.
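To make the character-constraint idea concrete, here is a minimal sketch of a charmatch-style check. It is a hypothetical illustration, not the paper's actual feature: it assumes charmatch tests whether the short form's characters appear, in order, inside the candidate long form — the kind of constraint rule-based systems such as Ab3P rely on.

```python
def charmatch(short_form: str, long_form: str) -> bool:
    """Return True if every alphanumeric character of the short form
    appears, in order, within the long form (case-insensitive).
    A simplified stand-in for the character constraints that
    rule-based ADI systems exploit."""
    lf = long_form.lower()
    pos = 0
    for ch in short_form.lower():
        if not ch.isalnum():
            continue  # skip punctuation such as hyphens in the acronym
        pos = lf.find(ch, pos)
        if pos == -1:
            return False  # character missing: long form cannot match
        pos += 1
    return True

# Example usage with the paper's own task name:
print(charmatch("ADI", "Abbreviation Definition Identification"))  # True
print(charmatch("XYZ", "Abbreviation Definition Identification"))  # False
```

A freq-style feature would instead count how often a short/long pair co-occurs across a document collection, rewarding pairs seen in many documents; that requires corpus statistics and is not sketched here.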



