Deep learning evaluation using deep linguistic processing

06/05/2017
by Alexander Kuhnle et al.

We discuss problems with the standard approaches to evaluating tasks like visual question answering, and argue that artificial data, used as a complement to current practice, can address these issues. We demonstrate that existing 'deep' linguistic processing technology lets us create challenging abstract datasets, which enable a detailed investigation of the language understanding abilities of multimodal deep learning models.
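To make the artificial-data idea concrete, here is a minimal sketch of how abstract multimodal evaluation data can be generated with controlled ground truth. This is an illustration only, not the paper's actual pipeline: the scene representation, caption template, and function names below are assumptions, and a symbolic "world" of colored shapes stands in for a rendered image.

```python
import random

# Hypothetical microworld vocabulary (not the paper's actual inventory).
SHAPES = ["square", "circle", "triangle"]
COLORS = ["red", "green", "blue"]

def generate_example(rng):
    """Generate one (world, caption, label) triple.

    The world is a small multiset of colored shapes; the caption is a
    simple existential statement whose truth value is known by
    construction, so model answers can be scored exactly.
    """
    world = [(rng.choice(COLORS), rng.choice(SHAPES)) for _ in range(3)]
    color, shape = rng.choice(COLORS), rng.choice(SHAPES)
    caption = f"there is a {color} {shape}"
    label = (color, shape) in world  # ground truth, fully controlled
    return world, caption, label

rng = random.Random(0)
dataset = [generate_example(rng) for _ in range(100)]
```

Because the generator controls both the scene and the statement, the dataset can be balanced or skewed deliberately, which is what allows the fine-grained probing of language understanding described in the abstract.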


Related research

04/14/2017
ShapeWorld - A new test methodology for multimodal language understanding
We introduce a novel framework for evaluating multimodal deep learning m...

05/10/2017
Survey of Visual Question Answering: Datasets and Techniques
Visual question answering (or VQA) is a new and exciting problem that co...

04/10/2019
Advances in Natural Language Question Answering: A Review
Question Answering has recently received high attention from artificial ...

07/02/2017
Modulating early visual processing by language
It is commonly assumed that language refers to high-level visual concept...

07/03/2022
Understanding Tieq Viet with Deep Learning Models
Deep learning is a powerful approach in recovering lost information as w...

07/17/2016
An Empirical Evaluation of various Deep Learning Architectures for Bi-Sequence Classification Tasks
Several tasks in argumentation mining and debating, question-answering, ...

04/22/2020
Syntactic Structure from Deep Learning
Modern deep neural networks achieve impressive performance in engineerin...
