Deep Bayesian Active Learning for Multiple Correct Outputs

12/02/2019
by Khaled Jedoui, et al.

Typical active learning strategies are designed for tasks, such as classification, where the output space is assumed to be mutually exclusive. The assumption that these tasks always have exactly one correct answer has motivated numerous uncertainty-based measures, such as entropy and least confidence, that operate over a model's outputs. Unfortunately, many real-world vision tasks, like visual question answering and image captioning, have multiple correct answers, causing these measures to overestimate uncertainty and sometimes perform worse than a random-sampling baseline. In this paper, we propose a new paradigm that estimates uncertainty in the model's internal hidden space instead of its output space. We specifically study a manifestation of this problem in visual question answer generation (VQA), where the aim is not to classify the correct answer but to generate a natural-language answer given an image and a question. To overcome the paraphrastic nature of language, our method requires a semantic space that structures the model's output concepts and thereby enables techniques like dropout-based Bayesian uncertainty estimation. We build a visual-semantic space that embeds paraphrases close together for any existing VQA model. We empirically show state-of-the-art active learning results on the task of VQA on two datasets, being 5 times more cost-efficient on Visual Genome and 3 times more cost-efficient on VQA 2.0.
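The core contrast the abstract draws can be illustrated with a small sketch. This is not the authors' code: the toy answer distributions, the random-projection "encoder", and all names below are hypothetical, chosen only to show (a) how output-space entropy is inflated when probability mass is split across paraphrases of the same answer, and (b) what a dropout-based (MC-dropout-style) uncertainty estimate over a hidden embedding looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_entropy(p):
    """Output-space uncertainty: H(p) = -sum_i p_i log p_i."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Toy answer distributions over 4 candidate answers. When two entries are
# paraphrases of the same answer (e.g. "two" and "2"), the mass is split
# and entropy reports high uncertainty despite semantic confidence.
p_paraphrases = np.array([0.45, 0.45, 0.05, 0.05])
p_single      = np.array([0.90, 0.04, 0.03, 0.03])
assert output_entropy(p_paraphrases) > output_entropy(p_single)

def mc_dropout_hidden_variance(embed, x, T=20, p=0.5):
    """Hidden-space uncertainty: mean per-dimension variance of T
    stochastic forward passes with dropout kept active at test time."""
    samples = []
    for _ in range(T):
        h = embed(x)
        mask = (rng.random(h.shape) > p) / (1.0 - p)  # inverted dropout
        samples.append(h * mask)
    return float(np.stack(samples).var(axis=0).mean())

# Toy "encoder": a fixed random projection standing in for a trained model.
W = rng.normal(size=(8, 4))
embed = lambda x: np.maximum(W @ x, 0.0)  # ReLU hidden layer

x = rng.normal(size=4)  # stand-in for an (image, question) feature vector
u = mc_dropout_hidden_variance(embed, x)
print(f"output entropy (paraphrases split mass): {output_entropy(p_paraphrases):.3f}")
print(f"output entropy (single answer)         : {output_entropy(p_single):.3f}")
print(f"MC-dropout hidden-space variance       : {u:.3f}")
```

In an embedding space that places paraphrases close together, the spread of the dropout samples reflects genuine model uncertainty rather than surface-form variation, which is the property the paper's acquisition function relies on.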

