CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks

06/23/2016
by Jindřich Libovický, et al.

Neural sequence-to-sequence learning has recently become a very promising paradigm in machine translation, achieving results competitive with statistical phrase-based systems. In this system-description paper, we apply several recently published methods for neural sequential learning to build systems for the WMT 2016 shared tasks on Automatic Post-Editing and Multimodal Machine Translation.
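To make the paradigm referred to in the abstract concrete, the sketch below shows a minimal encoder-decoder sequence-to-sequence model in PyTorch. It is an illustration of the general approach only, not the authors' actual WMT16 system; the framework choice, class names, and hyperparameters are all hypothetical.

```python
# Minimal encoder-decoder sequence-to-sequence sketch (illustrative only;
# not the CUNI WMT16 system). All names and sizes here are hypothetical.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the source sentence into the encoder's final hidden state.
        _, hidden = self.encoder(self.src_emb(src))
        # Decode conditioned on that state, using teacher forcing
        # (gold target tokens are fed as decoder inputs during training).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), hidden)
        return self.out(dec_out)  # per-token vocabulary logits

# Toy usage: a batch of 2 sentences, 7 source / 5 target tokens.
model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1000, (2, 5))
logits = model(src, tgt)  # shape: (2, 5, 1000)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))
```

The same encoder-decoder scheme extends to the two shared tasks: for automatic post-editing the machine-translated sentence is the source and the post-edited sentence is the target, while the multimodal task additionally conditions the decoder on image information.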

