ReactFace: Multiple Appropriate Facial Reaction Generation in Dyadic Interactions

05/25/2023
by Cheng Luo, et al.

In dyadic interactions, predicting the listener's facial reactions is challenging because different reactions may be appropriate in response to the same speaker behaviour. This paper presents ReactFace, a novel framework that learns a distribution of appropriate facial reactions from a speaker's behaviour rather than replicating the listener's actual facial reaction. ReactFace generates multiple different yet appropriate photo-realistic human facial reactions by (i) learning an appropriate facial reaction distribution that represents multiple appropriate facial reactions; and (ii) synchronizing the generated facial reactions with the speaker's verbal and non-verbal behaviours at each timestamp, resulting in realistic 2D facial reaction sequences. Experimental results demonstrate the effectiveness of our approach in generating multiple diverse, synchronized, and appropriate facial reactions from each speaker's behaviour, with the quality of the generated reactions being influenced by the speaker's speech and facial behaviours. Our code is publicly available at https://github.com/lingjivoo/ReactFace.
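The abstract describes two ingredients: sampling several distinct reactions from a learned distribution conditioned on the speaker, and keeping each generated sequence aligned with the speaker's behaviour frame by frame. Below is a minimal, hypothetical sketch (not the authors' architecture; all module names, feature dimensions, and the conditional-VAE-style formulation are assumptions) of how drawing different latent samples under the same speaker conditioning can yield multiple candidate reaction sequences on the speaker's timeline.

```python
# Hypothetical sketch (not the ReactFace implementation): sampling several
# plausible listener reactions from a learned conditional distribution,
# conditioned on per-frame speaker audio and facial features.
import torch
import torch.nn as nn


class SpeakerConditionedReactionSampler(nn.Module):
    """Illustrative conditional-VAE-style generator; all names are made up."""

    def __init__(self, audio_dim=128, face_dim=64, latent_dim=32, coeff_dim=58):
        super().__init__()
        self.encoder = nn.GRU(audio_dim + face_dim, 128, batch_first=True)
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # The decoder consumes speaker features together with the sampled latent
        # at every frame, so generated reactions stay aligned with the speaker.
        self.decoder = nn.GRU(audio_dim + face_dim + latent_dim, 128, batch_first=True)
        self.to_coeffs = nn.Linear(128, coeff_dim)  # e.g. facial expression coefficients

    def forward(self, speaker_audio, speaker_face, num_samples=3):
        # speaker_audio: (B, T, audio_dim), speaker_face: (B, T, face_dim)
        cond = torch.cat([speaker_audio, speaker_face], dim=-1)
        _, h = self.encoder(cond)                       # summary of speaker behaviour
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])

        reactions = []
        for _ in range(num_samples):
            # Each latent draw yields a different but (ideally) appropriate
            # reaction to the same speaker behaviour.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            z_seq = z.unsqueeze(1).expand(-1, cond.size(1), -1)
            out, _ = self.decoder(torch.cat([cond, z_seq], dim=-1))
            reactions.append(self.to_coeffs(out))       # (B, T, coeff_dim)
        return torch.stack(reactions, dim=1)            # (B, num_samples, T, coeff_dim)


if __name__ == "__main__":
    model = SpeakerConditionedReactionSampler()
    audio = torch.randn(1, 100, 128)   # 100 frames of speaker audio features
    face = torch.randn(1, 100, 64)     # 100 frames of speaker facial features
    samples = model(audio, face, num_samples=3)
    print(samples.shape)               # torch.Size([1, 3, 100, 58])
```

In such a setup, diversity comes from the latent samples while appropriateness and synchrony come from the per-frame speaker conditioning; a separate renderer would then turn the predicted coefficients into photo-realistic 2D facial reaction frames.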


Related Research

05/24/2023 · Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation
Generating facial reactions in a human-human dyadic interaction is compl...

07/05/2023 · MRecGen: Multimodal Appropriate Reaction Generator
Verbal and non-verbal human reaction generation is a challenging task, a...

06/11/2023 · REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge
The Multi-modal Multiple Appropriate Facial Reaction Generation Challeng...

02/13/2023 · Multiple Appropriate Facial Reaction Generation in Dyadic Interaction Settings: What, Why and How?
According to the Stimulus Organism Response (SOR) theory, all human beha...

04/18/2022 · Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
We present a framework for modeling interactional communication in dyadi...

01/08/2018 · Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour
Rapport, the close and harmonious relationship in which interaction part...

04/15/2019 · Synthesising 3D Facial Motion from "In-the-Wild" Speech
Synthesising 3D facial motion from speech is a crucial problem manifesti...
