Are BLEU and Meaning Representation in Opposition?

05/16/2018
by Ondřej Cífka, et al.

One possible way of obtaining continuous-space sentence representations is to train neural machine translation (NMT) systems. However, the recent attention mechanism removes the single point in the neural network from which the source sentence representation can be extracted. We propose several variations of the attentive NMT architecture that bring this meeting point back. Empirical evaluation suggests that the better the translation quality, the worse the learned sentence representations perform on a wide range of classification and similarity tasks.
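
The paper proposes several architectural variants, and the abstract does not specify them, so the following is only a minimal, hypothetical sketch (in PyTorch) of the general idea: pooling the per-token states of an attentive NMT encoder into a single fixed-size vector that can serve as the sentence representation. The PoolingEncoder class, its dimensions, and the mean-pooling choice are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: recovering a fixed-size sentence representation
# from an attentive NMT encoder by mean-pooling its hidden states.
# This illustrates the general notion of a "meeting point"; it is NOT
# the exact architecture proposed in the paper.
import torch
import torch.nn as nn

class PoolingEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 256, hid_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, src: torch.Tensor, src_mask: torch.Tensor):
        # states: (batch, seq_len, 2 * hid_dim) -- per-token encoder states
        # that an attentive decoder would normally attend over.
        states, _ = self.rnn(self.embed(src))
        # Mean-pool over non-padding positions to obtain a single
        # continuous-space sentence vector (the "meeting point").
        mask = src_mask.unsqueeze(-1).float()
        sent_repr = (states * mask).sum(dim=1) / mask.sum(dim=1)
        return states, sent_repr

# Toy usage: the decoder can still attend over `states` for translation,
# while `sent_repr` is the fixed-size vector one would evaluate on
# classification and similarity tasks.
enc = PoolingEncoder(vocab_size=10_000)
src = torch.randint(0, 10_000, (2, 7))      # toy batch of token ids
mask = torch.ones(2, 7, dtype=torch.bool)   # all positions are real tokens
states, sent_repr = enc(src, mask)
print(states.shape, sent_repr.shape)        # (2, 7, 1024), (2, 1024)
```

In an attention-less sequence-to-sequence model this bottleneck exists by construction; the sketch only shows one way such a single extraction point could be reintroduced alongside attention.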

Related research

12/27/2019
Explicit Sentence Compression for Neural Machine Translation
State-of-the-art Transformer-based neural machine translation (NMT) syst...

10/17/2016
Interactive Attention for Neural Machine Translation
Conventional attention-based Neural Machine Translation (NMT) conducts d...

05/18/2018
Metric for Automatic Machine Translation Evaluation based on Universal Sentence Representations
Sentence representations can capture a wide range of information that ca...

04/18/2017
An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification
End-to-end neural machine translation has overtaken statistical machine ...

05/01/2018
Dynamic Sentence Sampling for Efficient Training of Neural Machine Translation
Traditional Neural machine translation (NMT) involves a fixed training p...

05/20/2017
Search Engine Guided Non-Parametric Neural Machine Translation
In this paper, we extend an attention-based neural machine translation (...

04/26/2022
Flow-Adapter Architecture for Unsupervised Machine Translation
In this work, we propose a flow-adapter architecture for unsupervised NM...
