Multimodal Dialogue Management for Multiparty Interaction with Infants

09/05/2018
by Setareh Nasihati Gilani, et al.

We present dialogue management routines for a system to engage in multiparty agent-infant interaction. The ultimate purpose of this research is to help infants learn a visual sign language through naturalistic and socially contingent conversations, initiated by an artificial agent, during an early-life critical period for language development (ages 6 to 12 months). As a first step, we focus on creating and maintaining agent-infant engagement that elicits appropriate and socially contingent responses from the baby. Our system includes two agents: a physical robot and an animated virtual human. The system's multimodal perception includes an eye-tracker (to measure attention) and a thermal infrared imaging camera (to measure patterns of emotional arousal). We present a dialogue policy that selects individual actions and planned multiparty sequences based on perceptual inputs about the baby's changing internal states of emotional engagement. The present version of the system was evaluated in interactions with 8 babies. All babies demonstrated spontaneous and sustained engagement with the agents for several minutes, with patterns of conversationally relevant and socially contingent behaviors. We further performed a detailed case-study analysis, annotating all agent and baby behaviors. Results show that the baby's behaviors were generally relevant to the agents' conversations and contained direct evidence of socially contingent responses by the baby to specific linguistic samples produced by the avatar. This work demonstrates the potential for language learning from agents in very young babies and has especially broad implications for the use of artificial agents with babies who have minimal language exposure in early life.
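The paper itself does not include code, but the kind of perception-driven dialogue policy described above can be illustrated with a minimal sketch. The following Python sketch assumes a simple rule-based mapping from the two sensing channels mentioned in the abstract (gaze from the eye-tracker, arousal from the thermal infrared camera) to agent actions; the state representation, thresholds, and action names are hypothetical illustrations, not details taken from the system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """Illustrative agent actions (names are assumptions, not the paper's)."""
    ROBOT_ATTENTION_GETTER = auto()     # physical robot redirects attention
    AVATAR_SIGN_NURSERY_RHYME = auto()  # virtual human produces a sign sample
    AVATAR_SOOTHING_SIGNING = auto()    # low-intensity, calming signing
    PAUSE_AND_OBSERVE = auto()          # wait for the baby's response

@dataclass
class PerceptualInput:
    gaze_on_avatar: bool   # from the eye-tracker
    arousal: float         # normalized 0..1, from thermal IR imaging

def select_action(p: PerceptualInput) -> Action:
    """Map the baby's inferred engagement state to the next agent action."""
    if not p.gaze_on_avatar:
        # Baby is looking away: use the robot as an attention-getter.
        return Action.ROBOT_ATTENTION_GETTER
    if p.arousal > 0.8:
        # Attentive but over-aroused: de-escalate with calmer signing.
        return Action.AVATAR_SOOTHING_SIGNING
    if p.arousal < 0.2:
        # Attentive but under-aroused: present an engaging language sample.
        return Action.AVATAR_SIGN_NURSERY_RHYME
    # Engaged within a comfortable arousal band: leave room for a response.
    return Action.PAUSE_AND_OBSERVE

if __name__ == "__main__":
    print(select_action(PerceptualInput(gaze_on_avatar=True, arousal=0.1)))
```

A full policy of the kind the abstract describes would additionally plan multiparty sequences, for example a robot attention-getter followed by an avatar language sample, rather than selecting one action at a time as this sketch does.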
