Towards Speech Dialogue Translation Mediating Speakers of Different Languages

05/16/2023
by   Shuichiro Shimizu, et al.

We present a new task: speech dialogue translation that mediates between speakers of different languages. We construct the SpeechBSD dataset for the task and conduct baseline experiments. Furthermore, we consider context an important aspect of this task and propose two ways of utilizing it: monolingual context and bilingual context. We conduct cascaded speech translation experiments using Whisper and mBART, and show that bilingual context performs better in our settings.
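The sketch below illustrates what such a cascaded setup could look like with off-the-shelf Hugging Face checkpoints: Whisper transcribes the current utterance, and mBART translates it with prior dialogue turns prepended as context. The specific model names, the separator used to join context, and the context-selection scheme are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a cascaded speech dialogue translation pipeline.
# Assumptions (not the paper's exact setup): whisper-small and
# mbart-large-50-many-to-many-mmt checkpoints, " </s> " as the context separator.
from transformers import (
    WhisperProcessor, WhisperForConditionalGeneration,
    MBart50TokenizerFast, MBartForConditionalGeneration,
)

asr_processor = WhisperProcessor.from_pretrained("openai/whisper-small")
asr_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

mt_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
mt_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")


def transcribe(audio_array, sampling_rate=16000, language="ja"):
    """ASR step: Whisper transcribes the current utterance in its source language."""
    inputs = asr_processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    forced_ids = asr_processor.get_decoder_prompt_ids(language=language, task="transcribe")
    generated = asr_model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
    return asr_processor.batch_decode(generated, skip_special_tokens=True)[0]


def translate_with_context(utterance, context, src_lang="ja_XX", tgt_lang="en_XX"):
    """MT step: mBART translates the utterance with prior turns prepended.

    With monolingual context, `context` holds earlier utterances all in the
    source language; with bilingual context, it keeps each earlier utterance in
    the language it was originally spoken in (hypothetical concatenation scheme).
    """
    mt_tokenizer.src_lang = src_lang
    source = " </s> ".join(context + [utterance]) if context else utterance
    inputs = mt_tokenizer(source, return_tensors="pt")
    generated = mt_model.generate(
        **inputs,
        forced_bos_token_id=mt_tokenizer.lang_code_to_id[tgt_lang],
    )
    return mt_tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```

In a dialogue between a Japanese and an English speaker, each new utterance would be transcribed with `transcribe` and translated with `translate_with_context`, with the running dialogue history passed as `context`; the monolingual and bilingual variants differ only in which language each history turn is kept in.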


