BERT as a Teacher: Contextual Embeddings for Sequence-Level Reward

03/05/2020
by Florian Schmidt, et al.

Measuring the quality of a generated sequence against a set of references is a central problem in many learning frameworks, be it to compute a score, to assign a reward, or to perform discrimination. Despite great advances in model architectures, metrics that scale independently of the number of references are still based on n-gram estimates. We show that the underlying operations, counting words and comparing counts, can be lifted to embedding words and comparing embeddings. An in-depth analysis of BERT embeddings shows empirically that contextual embeddings can be employed to capture the required dependencies while maintaining the necessary scalability through appropriate pruning and smoothing techniques. We cast unconditional generation as a reinforcement learning problem and show that our reward function indeed provides a more effective learning signal than n-gram reward in this challenging setting.

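As a rough illustration of the idea of lifting "counting words and comparing counts" to "embedding words and comparing embeddings", the sketch below scores a hypothesis against a set of references by greedy cosine matching of BERT contextual embeddings, using the Hugging Face transformers library. This is only an assumed stand-in for the paper's reward: the pruning and smoothing techniques and the exact reward formulation described in the abstract are not reproduced here.

```python
# Sketch of an embedding-based sequence reward: greedy cosine matching of BERT
# contextual embeddings (a BERTScore-style F1), assumed here as a simple proxy
# for the paper's reward function.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


@torch.no_grad()
def contextual_embeddings(sentence: str) -> torch.Tensor:
    """Return L2-normalized contextual embeddings per token ([CLS]/[SEP] dropped)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state[0, 1:-1]  # strip [CLS] and [SEP]
    return torch.nn.functional.normalize(hidden, dim=-1)


@torch.no_grad()
def embedding_reward(hypothesis: str, references: list[str]) -> float:
    """Score a hypothesis against references via greedy embedding matching."""
    hyp = contextual_embeddings(hypothesis)
    scores = []
    for ref in references:
        ref_emb = contextual_embeddings(ref)
        sim = hyp @ ref_emb.T                       # pairwise cosine similarities
        precision = sim.max(dim=1).values.mean()    # each hypothesis token -> best reference token
        recall = sim.max(dim=0).values.mean()       # each reference token -> best hypothesis token
        scores.append(2 * precision * recall / (precision + recall + 1e-8))
    return torch.stack(scores).max().item()         # best match over the reference set


reward = embedding_reward("the cat sat on the mat",
                          ["a cat is sitting on the mat", "the dog ran away"])
print(f"embedding reward: {reward:.3f}")
```

In a reinforcement-learning setup of the kind the abstract describes, a scalar of this form would be assigned to each sampled sequence as its reward, in place of an n-gram-overlap score.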