Discourse Probing of Pretrained Language Models

04/13/2021
by Fajri Koto, et al.

Existing work on probing of pretrained language models (LMs) has predominantly focused on sentence-level syntactic tasks. In this paper, we introduce document-level discourse probing to evaluate the ability of pretrained LMs to capture document-level relations. We experiment with 7 pretrained LMs, 4 languages, and 7 discourse probing tasks, and find BART to be overall the best model at capturing discourse – but only in its encoder, with BERT performing surprisingly well as the baseline model. Across the different models, there are substantial differences in which layers best capture discourse information, and large disparities between models.

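To make the probing setup concrete, the sketch below shows one common way to probe layer-wise representations with a simple linear classifier. This is not the authors' code: it assumes the HuggingFace transformers and scikit-learn libraries, uses bert-base-uncased as a stand-in for the probed encoders, and the two sentence-ordering examples are invented purely for illustration.

```python
# Minimal layer-wise probing sketch (illustrative, not the paper's implementation).
# Assumptions: HuggingFace transformers, scikit-learn, and a toy binary
# "are these two sentences in their original order?" probing task.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-uncased"  # any probed encoder could be swapped in here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Hypothetical probing examples: a sentence pair plus a label for whether
# the pair appears in its original document order (1) or is swapped (0).
pairs = [
    ("The committee met on Monday.", "It approved the budget the next day.", 1),
    ("It approved the budget the next day.", "The committee met on Monday.", 0),
]

def layer_features(sent_a, sent_b):
    """Return one [CLS] vector per layer (embedding layer + 12 transformer layers)."""
    inputs = tokenizer(sent_a, sent_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (1, seq_len, hidden_size) tensors, one per layer.
    return [h[0, 0].numpy() for h in outputs.hidden_states]

# Collect features for every example at every layer.
per_layer_X, labels = None, []
for sent_a, sent_b, label in pairs:
    feats = layer_features(sent_a, sent_b)
    if per_layer_X is None:
        per_layer_X = [[] for _ in feats]
    for i, f in enumerate(feats):
        per_layer_X[i].append(f)
    labels.append(label)

# Fit a linear probe per layer; with real data one would report held-out
# accuracy per layer and compare curves across models.
for layer, X in enumerate(per_layer_X):
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: train accuracy = {probe.score(X, labels):.2f}")
```

Comparing such per-layer accuracy curves across encoders (and, for BART, between encoder and decoder states) is the kind of analysis the probing tasks support; the specific tasks and models are detailed in the paper.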

Related research

Evaluating Document Coherence Modelling (03/18/2021)
While pretrained language models ("LM") have driven impressive gains ove...

Document Context Language Models (11/12/2015)
Text documents are structured on multiple levels of detail: individual w...

Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations (08/31/2019)
Prior work on pretrained sentence embeddings and benchmarks focus on the...

Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations (09/10/2021)
Current language models are usually trained using a self-supervised sche...

On the long-term learning ability of LSTM LMs (06/16/2021)
We inspect the long-term learning ability of Long Short-Term Memory lang...

Topic modelling discourse dynamics in historical newspapers (11/20/2020)
This paper addresses methodological issues in diachronic data analysis f...

Discourse Level Factors for Sentence Deletion in Text Simplification (11/23/2019)
This paper presents a data-driven study focusing on analyzing and predic...