Leveraging Natural Supervision for Language Representation Learning and Generation

07/21/2022
by Mingda Chen

Recent breakthroughs in Natural Language Processing (NLP) have been driven by language models trained on massive amounts of plain text. While these models are powerful, how best to derive supervision from textual resources remains an open question; for example, language model pretraining often neglects the rich, freely available structures in textual data. In this thesis, we describe three lines of work that seek to improve the training and evaluation of neural models using naturally occurring supervision. We first investigate self-supervised training losses that enhance the performance of pretrained language models on various NLP tasks. Specifically, we alter the sentence prediction loss to make it better suited to other pretraining losses and more challenging to solve, and we design an intermediate finetuning step that uses self-supervised training to promote models' ability to generalize across tasks. We then describe methods that leverage the structures in Wikipedia and in paraphrases. In particular, we propose training losses that exploit hyperlinks, article structures, and article category graphs to acquire entity-, discourse-, and entailment-related knowledge. We also propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations, and we extend the framework to a novel generation task that controls the syntax of output text with a sentential exemplar. Lastly, we discuss our work on tailoring textual resources to build challenging evaluation tasks. We introduce three datasets that define novel tasks over fan-contributed websites: a long-form data-to-text generation dataset, a screenplay summarization dataset, and a long-form story generation dataset. Each dataset has unique characteristics that pose challenges for future work in its respective task setting.
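
To make the paraphrase-based disentanglement idea in the abstract more concrete, here is a minimal, hypothetical PyTorch sketch. All names (`SplitSentenceEncoder`, `disentanglement_loss`, the margin value) and the specific loss terms are illustrative assumptions, not the thesis's actual framework, which is considerably more involved; the sketch only shows the general shape of the objective, in which paraphrase pairs supervise a "semantic" subspace while a separate "syntactic" subspace is discouraged from duplicating it.

```python
# A minimal sketch (not the thesis's exact formulation) of using paraphrase
# pairs to split a sentence representation into "semantic" and "syntactic"
# subspaces. All identifiers here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitSentenceEncoder(nn.Module):
    """Encodes token ids into separate semantic and syntactic vectors."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Two projection heads: one intended for meaning, one for form.
        self.semantic_head = nn.Linear(hidden_dim, hidden_dim)
        self.syntactic_head = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, token_ids):
        _, (h, _) = self.lstm(self.embed(token_ids))
        h = h.squeeze(0)  # final hidden state for each sentence in the batch
        return self.semantic_head(h), self.syntactic_head(h)

def disentanglement_loss(encoder, sent_a, sent_b, margin=0.5):
    """Pull paraphrase pairs together in the semantic space; one simple
    (assumed) way to keep the syntactic space from copying that signal is
    a margin penalty on syntactic similarity."""
    sem_a, syn_a = encoder(sent_a)
    sem_b, syn_b = encoder(sent_b)
    semantic_loss = 1.0 - F.cosine_similarity(sem_a, sem_b).mean()
    syntactic_loss = F.relu(F.cosine_similarity(syn_a, syn_b) - margin).mean()
    return semantic_loss + syntactic_loss

# Toy usage with random token ids standing in for a paraphrase pair.
encoder = SplitSentenceEncoder()
sent_a = torch.randint(0, 10000, (4, 12))  # batch of 4 sentences, length 12
sent_b = torch.randint(0, 10000, (4, 12))  # their (pretend) paraphrases
loss = disentanglement_loss(encoder, sent_a, sent_b)
loss.backward()
print(loss.item())
```

The design point this sketch tries to convey is that paraphrase supervision constrains only the semantic subspace, leaving the syntactic subspace free to capture word order and structure; that separation is what makes it possible to control the syntax of generated text with a sentential exemplar while preserving meaning.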

