Neural Composition: Learning to Generate from Multiple Models

07/10/2020
by Denis Filimonov, et al.

Decomposing models into multiple components is critically important in many applications such as language modeling (LM), as it enables adapting individual components separately and biasing some components toward a user's personal preferences. Conventionally, contextual and personalized adaptation of language models is achieved either through class-based factorization, which requires class-annotated data, or through biasing toward individual phrases, which is limited in scale. In this paper, we propose a system that combines model-defined components by learning, directly from unlabeled text data, when to activate the generation process from each individual component and how to combine the probability distributions produced by the components.
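As a rough illustration of the general idea only (not the authors' implementation), the sketch below shows one simple way to combine next-token distributions from several fixed component models with a learned, context-dependent gate. The class and variable names are hypothetical, and the components are toy linear layers standing in for real component LMs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComposedLM(nn.Module):
    """Context-dependent mixture of frozen component LMs (illustrative sketch)."""

    def __init__(self, components, hidden_dim):
        super().__init__()
        # Frozen, pre-trained component models: each maps a context vector
        # of size hidden_dim to unnormalized vocabulary logits.
        self.components = nn.ModuleList(components)
        for comp in self.components:
            for p in comp.parameters():
                p.requires_grad = False
        # Gating network: predicts, from the context, how much weight each
        # component gets for the next token ("when to activate" it).
        self.gate = nn.Linear(hidden_dim, len(components))

    def forward(self, context):
        # context: (batch, hidden_dim) summary of the history so far.
        weights = F.softmax(self.gate(context), dim=-1)          # (B, K)
        comp_probs = torch.stack(
            [F.softmax(comp(context), dim=-1) for comp in self.components],
            dim=1,
        )                                                         # (B, K, V)
        # Linear interpolation of the component distributions.
        return torch.einsum("bk,bkv->bv", weights, comp_probs)   # (B, V)


# Toy usage: three linear "LMs" over a 100-word vocabulary.
vocab, hidden, batch = 100, 32, 4
model = ComposedLM([nn.Linear(hidden, vocab) for _ in range(3)], hidden)
probs = model(torch.randn(batch, hidden))
targets = torch.randint(vocab, (batch,))
loss = F.nll_loss(torch.log(probs + 1e-9), targets)
loss.backward()  # only the gating network receives gradients
```

The linear interpolation above is only the simplest variant of such a composition; the paper's system additionally learns when to hand generation over to each component, with both decisions trained from unlabeled text.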

