We present the joint contribution of Unbabel and Instituto Superior Técn...
Automatic evaluation of machine translation (MT) is a critical tool driv...
Several uncertainty estimation methods have been recently proposed for m...
Although neural-based machine translation evaluation metrics, such as CO...
Selective rationales and counterfactual examples have emerged as two eff...
The NLP community has mainly focused on scaling Large Language Models (L...
Neural metrics for machine translation evaluation, such as COMET, exhibi...
Many recent advances in natural language generation have been fueled by ...
Large-scale multilingual machine translation systems have demonstrated r...
Many types of data from fields including natural language processing, co...
Code generation from text requires understanding the user's intent from ...
Neural machine translation (NMT) has become the de facto standard in rea...
Current abstractive summarization systems present important weaknesses w...
We present the joint contribution of IST and Unbabel to the WMT 2022 Sha...
Getting the most out of limited resources allows advances in natural lan...
Although the problem of hallucinations in neural machine translation (NM...
Semi-parametric models, which augment generation with retrieval, have le...
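As background for the retrieval step such semi-parametric models add, here is a minimal sketch in the style of kNN-LM-like retrieval: cache (hidden state, next token) pairs, retrieve nearest neighbors at test time, and interpolate their counts with the parametric model's distribution. This is our own illustration; the names (`knn_distribution`, `lam`), the datastore layout, and the sizes are assumptions, not the paper's system.

```python
# Minimal sketch of semi-parametric generation's retrieval step (kNN-LM style).
# All names and sizes here are illustrative assumptions, not the paper's system.
import numpy as np

rng = np.random.default_rng(0)
V, d, N = 5, 8, 100                     # vocab size, hidden size, datastore size
keys = rng.normal(size=(N, d))          # cached decoder hidden states
values = rng.integers(0, V, size=N)     # the target tokens that followed them

def knn_distribution(query, k=8):
    """Turn the k nearest cached states into a distribution over next tokens."""
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    probs = np.zeros(V)
    for idx in nearest:
        probs[values[idx]] += 1.0
    return probs / probs.sum()

p_model = np.full(V, 1.0 / V)           # stand-in for the parametric model's softmax
lam = 0.5                               # interpolation weight between the two
p_final = lam * knn_distribution(rng.normal(size=d)) + (1 - lam) * p_model
print(p_final.round(3))
```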
Despite the progress in machine translation quality estimation and evalu...
Machine translation models struggle when translating out-of-domain text,...
Modern machine learning models are opaque, and as a result there is a bu...
Neural-based machine translation (MT) evaluation metrics are progressing...
Recent work has shown promising results in causal discovery by leveragin...
Neural networks are powerful function estimators, leading to their statu...
A bottleneck in transformer architectures is their quadratic complexity ...
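To make the bottleneck concrete, a minimal numpy sketch (our own illustration, not code from the paper): plain softmax attention materializes an n x n score matrix, so time and memory grow quadratically with sequence length n.

```python
# Minimal illustration of the quadratic bottleneck: the score matrix Q @ K.T
# has one entry per pair of positions, so its size grows as n^2.
import numpy as np

def attention(Q, K, V):
    """Plain softmax attention over a single sequence of length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])              # shape (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                   # shape (n, d)

rng = np.random.default_rng(0)
for n in (128, 256, 512):                # doubling n quadruples the score matrix
    Q = K = V = rng.normal(size=(n, 64))
    _ = attention(Q, K, V)
    print(f"n={n}: score-matrix entries = {n * n}")
```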
Several neural-based metrics have been recently proposed to evaluate mac...
Selective rationalization aims to produce decisions along with rationale...
Transformers struggle when attending to long contexts, since the amount ...
Neural networks and other machine learning models compute continuous rep...
Exponential families are widely used in machine learning; they include m...
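For reference, the standard exponential-family form the abstract refers to, with the categorical distribution (via softmax) as a familiar member; this is textbook background, not a result of the paper.

```latex
% Exponential family in natural parameters (textbook form):
p(x;\theta) = h(x)\,\exp\bigl(\langle\theta, \phi(x)\rangle - A(\theta)\bigr),
\qquad
A(\theta) = \log \int h(x)\,\exp\bigl(\langle\theta, \phi(x)\rangle\bigr)\,\mathrm{d}x.
% The categorical distribution is the member recovered by the softmax map:
\mathrm{softmax}_i(\theta) = \frac{\exp(\theta_i)}{\sum_j \exp(\theta_j)}.
```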
Recent work in neural machine translation has demonstrated both the nece...
Visual attention mechanisms are a key component of neural network models...
Neural networks and other machine learning models compute continuous rep...
Current sequence-to-sequence models are trained to minimize cross-entrop...
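For reference, the training objective the abstract refers to; this is the standard negative log-likelihood, stated here as background rather than as the paper's contribution.

```latex
% Cross-entropy (negative log-likelihood) objective for sequence-to-sequence models:
\mathcal{L}(\theta) = -\sum_{t=1}^{|y|} \log p_\theta\bigl(y_t \mid y_{<t}, x\bigr),
% where x is the source sequence, y the reference, and each local
% distribution p_\theta is typically a softmax over the output vocabulary.
```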
We present MLQE-PE, a new dataset for Machine Translation (MT) Quality E...
Latent structure models are a powerful tool for modeling language data: ...
Training neural network models with discrete (categorical or structured)...
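As background on why such training is hard, sampling a discrete latent blocks backpropagation, and one standard workaround is the score-function (REINFORCE) estimator sketched below. This is our own illustration of that generic estimator, not necessarily the method this paper develops.

```python
# Minimal score-function (REINFORCE) estimator for a categorical latent z:
# grad_theta E[f(z)] = E[f(z) * grad_theta log p_theta(z)].  Illustration only.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_grad(logits, reward_fn, n_samples=2000):
    """Monte Carlo estimate of the gradient of E[reward(z)] w.r.t. the logits."""
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for _ in range(n_samples):
        z = rng.choice(len(probs), p=probs)
        score = -probs.copy()              # grad of log softmax_z w.r.t. logits
        score[z] += 1.0                    # ... equals one_hot(z) - probs
        grad += reward_fn(z) * score
    return grad / n_samples

logits = np.array([0.0, 1.0, -0.5])
print(reinforce_grad(logits, lambda z: float(z == 2)))  # toy downstream reward
```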
Exponential families are widely used in machine learning; they include m...
Explainability is a topic of growing importance in NLP. In this work, we...
Current state-of-the-art text generators build on powerful language mode...
Structured prediction requires manipulating a large number of combinator...
Attention mechanisms have become ubiquitous in NLP. Recent architectures...
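For reference, the scaled dot-product attention at the core of the Transformer, which such architectures build on; this is standard background from Vaswani et al. (2017), not a result of the paper.

```latex
% Scaled dot-product attention (Vaswani et al., 2017):
\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V,
% with query, key, and value matrices Q, K, V and key dimension d_k.
```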
The combination of machines and humans for translation is effective, wit...
We present the contribution of the Unbabel team to the WMT 2019 Shared T...
These notes aim to shed light on the recently proposed structured projec...
Named entity recognition (NER) and entity linking (EL) are two fundament...
Scheduled sampling is a technique for avoiding one of the known problems...
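As background, a minimal sketch of scheduled sampling in the sense of Bengio et al. (2015), which targets the train/test mismatch known as exposure bias: at each decoder step the model consumes the gold previous token with probability p and its own prediction otherwise, with p typically annealed from 1 toward 0 over training. The code is our own illustration (`decode_step` is a dummy stand-in), not the paper's variant.

```python
# Minimal scheduled-sampling sketch: mix gold and model-predicted inputs.
# decode_step is a dummy stand-in for a real decoder; illustration only.
import random

def decode_step(prev_token, state):
    """Stand-in for one decoder step; returns (predicted_token, new_state)."""
    return (prev_token + 1) % 10, state

def scheduled_inputs(gold, p_gold, state=None):
    """Input token fed at each step when gold is used with probability p_gold."""
    prev = gold[0]
    for t in range(1, len(gold)):
        pred, state = decode_step(prev, state)
        prev = gold[t] if random.random() < p_gold else pred
        yield prev

random.seed(0)
print(list(scheduled_inputs([1, 2, 3, 4, 5], p_gold=0.7)))
```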
Automatic post-editing (APE) seeks to automatically refine the output of...
This paper describes Unbabel's submission to the WMT2019 APE Shared Task...
Sequence-to-sequence models are a powerful workhorse of NLP. Most varian...
We present a new neural model for text summarization that first extracts...