Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description

10/19/2017
by Desmond Elliott, et al.

We present the results from the second shared task on multimodal machine translation and multilingual image description. Nine teams submitted 19 systems to two tasks. The multimodal translation task, in which the source sentence is supplemented by an image, was extended with a new language (French) and two new test sets. The multilingual image description task was changed such that at test time, only the image is given. Compared to last year, multimodal systems improved, but text-only systems remain competitive.


