Modulating Language Models with Emotions

08/17/2021
by Ruibo Liu et al.

Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of modulated layer normalization – a technique inspired by computer vision – that allows us to use large-scale language models for emotional response generation. In automatic and human evaluations on the MojiTalk dataset, our proposed modulated layer normalization method outperforms prior baselines while maintaining diversity, fluency, and coherence. Our method also obtains competitive performance even when using only 10% of the available training data.
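The abstract does not spell out the mechanism, but modulated layer normalization in the style it borrows from computer vision (FiLM-like conditional normalization) normalizes a hidden state and then applies a scale and shift predicted from a conditioning vector – here, an emotion embedding. A minimal NumPy sketch follows; all weight names, shapes, and the `1 + projection` parameterization are illustrative assumptions, not details from the paper:

```python
import numpy as np

def modulated_layer_norm(h, emotion_emb, W_gamma, W_beta, eps=1e-5):
    """Layer-normalize h, then scale/shift it with emotion-conditioned
    gamma and beta. Weight names and shapes are illustrative only."""
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    h_norm = (h - mu) / np.sqrt(var + eps)
    # Predict per-example modulation from the emotion embedding.
    # With a zero embedding this reduces to plain layer norm.
    gamma = 1.0 + emotion_emb @ W_gamma   # per-example scale
    beta = emotion_emb @ W_beta           # per-example shift
    return gamma * h_norm + beta

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 8))            # batch of hidden states
e = rng.normal(size=(2, 4))            # emotion embeddings
Wg = rng.normal(size=(4, 8)) * 0.1
Wb = rng.normal(size=(4, 8)) * 0.1
out = modulated_layer_norm(h, e, Wg, Wb)
print(out.shape)
```

Parameterizing the scale as `1 + projection` is a common design choice in conditional normalization: a zero conditioning vector recovers the unmodulated layer norm, which keeps training stable early on.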


Related research

- 06/06/2021: Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction. "The consistency of a response to a given post at semantic-level and emot..."
- 05/18/2023: SimOAP: Improve Coherence and Consistency in Persona-based Dialogue Generation via Over-sampling and Post-evaluation. "Language models trained on large-scale corpora can generate remarkably f..."
- 05/29/2022: CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI. "Human language expression is based on the subjective construal of the si..."
- 08/31/2023: Socratis: Are large multimodal models emotionally aware? "Existing emotion prediction benchmarks contain coarse emotion labels whi..."
- 09/21/2021: Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? "In this paper, we investigate what types of stereotypical information ar..."
- 11/01/2021: PerSpeechNorm: A Persian Toolkit for Speech Processing Normalization. "In general, speech processing models consist of a language model along w..."
