Low-Rank RNN Adaptation for Context-Aware Language Modeling

10/06/2017
by Aaron Jaech, et al.

A context-aware language model uses location, user, and/or domain metadata (context) to adapt its predictions. In neural language models, context information is typically represented as an embedding and provided to the RNN as an additional input, an approach that has proven useful in many applications. We introduce a more powerful mechanism for using context to adapt an RNN: the context vector controls a low-rank transformation of the recurrent layer weight matrix. Experiments show that allowing a greater fraction of the model parameters to be adjusted yields gains in perplexity, classification, and generation across several types of context.
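The adaptation mechanism in the abstract lends itself to a short sketch. Below is a minimal, illustrative PyTorch implementation of the core idea, a context vector generating a low-rank additive update to the recurrent weight matrix. The class name FactorCellSketch, the basis tensors Z_L and Z_R, and all shapes and initializations are assumptions made for exposition, not the paper's actual code.

```python
import torch
import torch.nn as nn

class FactorCellSketch(nn.Module):
    """Sketch of a context-adapted recurrent layer (hypothetical names).

    The context embedding c produces a rank-r update to the shared
    recurrent weight matrix W:  W' = W + (c Z_L)(Z_R c), where Z_L and
    Z_R are learned basis tensors flattened into matrices here.
    """

    def __init__(self, input_dim, hidden_dim, context_dim, rank):
        super().__init__()
        in_dim = input_dim + hidden_dim  # concatenated [x_t, h_{t-1}]
        self.W = nn.Parameter(torch.randn(in_dim, hidden_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(hidden_dim))
        # Basis tensors: the context vector selects a low-rank update.
        self.Z_L = nn.Parameter(torch.randn(context_dim, in_dim * rank) * 0.01)
        self.Z_R = nn.Parameter(torch.randn(context_dim, rank * hidden_dim) * 0.01)
        self.in_dim, self.hidden_dim, self.rank = in_dim, hidden_dim, rank

    def adapted_weight(self, c):
        # c: (context_dim,), shared across the batch for simplicity.
        left = (c @ self.Z_L).view(self.in_dim, self.rank)      # (in_dim, r)
        right = (c @ self.Z_R).view(self.rank, self.hidden_dim)  # (r, hidden_dim)
        return self.W + left @ right  # rank-r additive adaptation

    def forward(self, x, h, c):
        # One vanilla-RNN step using the context-adapted weights.
        xh = torch.cat([x, h], dim=-1)
        return torch.tanh(xh @ self.adapted_weight(c) + self.b)

# Example usage (shapes are illustrative):
# cell = FactorCellSketch(input_dim=64, hidden_dim=128, context_dim=10, rank=4)
# h_next = cell(torch.randn(32, 64), torch.zeros(32, 128), torch.randn(10))
```

In this sketch, the rank hyperparameter determines how much of the effective recurrent weight matrix the context can adjust, mirroring the abstract's point that letting context adapt a greater fraction of the model parameters is beneficial.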

Related research

- Improving Context Aware Language Models (04/21/2017): Increased adaptability of RNN language models leads to improved predicti...
- Context-aware Sequential Recommendation (09/19/2016): Since sequential information plays an important role in modeling user be...
- Context-Aware Language Modeling for Goal-Oriented Dialogue Systems (04/18/2022): Goal-oriented dialogue systems face a trade-off between fluent language ...
- Context Aware Document Embedding (07/05/2017): Recently, doc2vec has achieved excellent results in different tasks. In ...
- Tied Reduced RNN-T Decoder (09/15/2021): Previous works on the Recurrent Neural Network-Transducer (RNN-T) models...
- CONA: A novel CONtext-Aware instruction paradigm for communication using large language model (05/26/2023): We introduce CONA, a novel context-aware instruction paradigm for effect...
- Trusting Your Evidence: Hallucinate Less with Context-aware Decoding (05/24/2023): Language models (LMs) often struggle to pay enough attention to the inpu...
