An empirical user-study of text-based nonverbal annotation systems for human-human conversations

12/30/2021
by Joshua Y. Kim, et al.

With the substantial increase in the number of online human-human conversations and the usefulness of multimodal transcripts, there is a rising need for automated multimodal transcription systems to help us better understand these conversations. In this paper, we evaluated three methods of performing multimodal transcription: (1) Jefferson – an existing manual system widely used by the linguistics community, (2) MONAH – a system that aims to make multimodal transcripts accessible and automated, and (3) MONAH+ – a system that builds on MONAH by visualizing machine attention. Based on 104 participants' responses, we found that (1) all text-based methods significantly reduced the amount of information available to human users, (2) MONAH was more usable than Jefferson, (3) Jefferson's relative strength lay in chronemics (pace / delay) and paralinguistics (pitch / volume) annotations, whilst MONAH's relative strength lay in kinesics (body language) annotations, and (4) enlarging a word's font size based on machine attention confused human users, who interpreted the larger font as loudness. These results raise considerations for researchers designing a multimodal annotation system for the masses who would like a fully-automated or human-augmented conversational analysis system.
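Finding (4) concerns the MONAH+ visualization, which scales each word's font size in proportion to the machine-attention weight assigned to it. The Python sketch below is a minimal illustration of that idea, not the authors' implementation: the function name, pixel range, and example weights are all assumptions made for demonstration.

```python
# Minimal sketch of attention-scaled font sizes (illustrative, not the
# authors' code): map each word's attention weight onto a pixel range
# and emit HTML spans whose font size grows with attention.

def attention_to_html(words, attention, min_px=12, max_px=28):
    """Render words as HTML spans sized by their attention weights."""
    lo, hi = min(attention), max(attention)
    span = (hi - lo) or 1.0  # avoid division by zero when all weights match
    pieces = []
    for word, weight in zip(words, attention):
        scale = (weight - lo) / span          # normalize to [0, 1]
        size = min_px + scale * (max_px - min_px)
        pieces.append(f'<span style="font-size:{size:.0f}px">{word}</span>')
    return " ".join(pieces)

if __name__ == "__main__":
    words = ["I", "really", "did", "not", "expect", "that"]
    attention = [0.05, 0.40, 0.10, 0.25, 0.15, 0.05]
    print(attention_to_html(words, attention))
```

In this example "really" carries the highest weight and is rendered largest; the study's finding (4) suggests readers would tend to interpret that size difference as the speaker being louder rather than as machine attention.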

Related research

- An animated picture says at least a thousand words: Selecting Gif-based Replies in Multimodal Dialog (09/24/2021)
- Break it Down for Me: A Study in Automated Lyric Annotation (08/11/2017)
- Conversations On Multimodal Input Design With Older Adults (08/26/2020)
- MONAH: Multi-Modal Narratives for Humans to analyze conversations (01/18/2021)
- Detecting depression in dyadic conversations with multimodal narratives and visualizations (01/13/2020)
- McQueen: a Benchmark for Multimodal Conversational Query Rewrite (10/23/2022)
- Sequential annotations for naturally-occurring HRI: first insights (08/29/2023)
