Leveraging translations for speech transcription in low-resource settings

03/23/2018
by Antonis Anastasopoulos, et al.

Recently proposed data collection frameworks for endangered language documentation aim not only to collect speech in the language of interest, but also to collect translations into a high-resource language that will render the collected resource interpretable. We focus on this scenario and explore whether we can improve transcription quality under these extremely low-resource settings with the assistance of text translations. We present a neural multi-source model and evaluate several variations of it on three low-resource datasets. We find that our multi-source model with shared attention outperforms the baselines, reducing transcription character error rate by up to 12.3%.
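The core idea of the multi-source model is that one decoder attends over two encoded inputs (the speech and its text translation) with a single, shared scoring function rather than separate per-source attention parameters. A minimal numpy sketch of that shared-attention step is below; the bilinear scoring matrix `W` and the concatenation of the two context vectors are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def shared_attention(query, speech_states, text_states, W):
    """Attend over two source encodings with ONE shared scoring matrix W.

    query:         decoder state, shape (d,)
    speech_states: speech encoder outputs, shape (T_s, d)
    text_states:   translation encoder outputs, shape (T_t, d)
    W:             shared bilinear scoring matrix, shape (d, d)

    Returns a combined context vector plus both attention distributions.
    """
    def attend(states):
        scores = states @ (W @ query)       # same W for both sources
        weights = softmax(scores)           # distribution over time steps
        return weights @ states, weights    # weighted sum of states

    c_speech, a_speech = attend(speech_states)
    c_text, a_text = attend(text_states)
    # One simple way to combine the two contexts: concatenation.
    return np.concatenate([c_speech, c_text]), a_speech, a_text
```

In a full model the combined context would feed the decoder's next-character prediction; sharing `W` across sources is what lets the two attention mechanisms inform each other in the low-resource regime.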
