Imagination-Augmented Natural Language Understanding

04/18/2022
by Yujie Lu, et al.

Human brains integrate linguistic and perceptual information simultaneously when understanding natural language, and they possess the critical ability to render imaginations. Such imaginative abilities allow us to construct new abstract concepts or concrete objects, and are essential for applying practical knowledge to solve problems in low-resource scenarios. However, most existing methods for Natural Language Understanding (NLU) focus mainly on textual signals. They do not simulate the human visual imagination ability, which hinders models from inferring and learning efficiently from limited data samples. Therefore, we introduce an Imagination-Augmented Cross-modal Encoder (iACE) that solves natural language understanding tasks from a novel learning perspective: imagination-augmented cross-modal understanding. iACE enables visual imagination with external knowledge transferred from powerful generative and pre-trained vision-and-language models. Extensive experiments on GLUE and SWAG show that iACE achieves consistent improvements over visually-supervised pre-trained models. More importantly, results in both extreme and normal few-shot settings validate the effectiveness of iACE in low-resource natural language understanding.
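As a rough illustration of the pipeline the abstract describes (render an "imagined" image from the input text, then fuse the text with that image in a cross-modal encoder), here is a minimal sketch. The CLIP checkpoint, the `imagined_pixel_values` argument, and the fusion head are placeholders chosen for illustration, not the paper's actual iACE architecture; in the paper, the imagined image comes from a pre-trained generative model.

```python
# Minimal sketch of an imagination-augmented classifier, assuming a
# frozen CLIP backbone stands in for the pre-trained vision-and-language
# encoder. Model name and fusion head are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import CLIPModel


class ImaginationAugmentedClassifier(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        # Frozen CLIP supplies the pre-trained text and image encoders.
        self.clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        for p in self.clip.parameters():
            p.requires_grad = False
        dim = self.clip.config.projection_dim
        # Trainable head fuses the text embedding with the embedding of
        # the image "imagined" from the same text.
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, num_labels),
        )

    def forward(self, text_inputs: dict, imagined_pixel_values: torch.Tensor):
        t = self.clip.get_text_features(**text_inputs)                        # (B, dim)
        v = self.clip.get_image_features(pixel_values=imagined_pixel_values)  # (B, dim)
        return self.head(torch.cat([t, v], dim=-1))                           # (B, num_labels)


if __name__ == "__main__":
    from transformers import CLIPProcessor

    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    text_inputs = processor(text=["a sentence to classify"],
                            return_tensors="pt", padding=True)
    # In the full pipeline this tensor would be an image rendered from the
    # same sentence by a text-to-image generator; a random tensor is used
    # here only so the sketch runs end to end.
    imagined = torch.randn(1, 3, 224, 224)
    logits = ImaginationAugmentedClassifier(num_labels=2)(text_inputs, imagined)
    print(logits.shape)  # torch.Size([1, 2])
```

This sketch only shows where the imagined image plugs into the encoder; the paper additionally transfers knowledge from the generative model and evaluates the full framework on GLUE and SWAG, including few-shot settings.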


Related research

05/12/2023
Towards Versatile and Efficient Visual Knowledge Injection into Pre-trained Language Models with Cross-Modal Adapters
Humans learn language via multi-modal knowledge. However, due to the tex...

01/01/2021
BanglaBERT: Combating Embedding Barrier for Low-Resource Language Understanding
Pre-training language models on large volume of data with self-supervise...

05/03/2018
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
Fast expansion of natural language functionality of intelligent virtual ...

03/14/2022
Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer
Pre-trained language models are still far from human performance in task...

09/11/2023
From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery
Molecule discovery serves as a cornerstone in numerous scientific domain...

10/31/2017
Whodunnit? Crime Drama as a Case for Natural Language Understanding
In this paper we argue that crime drama exemplified in television progra...

04/24/2020
Data Annealing for Informal Language Understanding Tasks
There is a huge performance gap between formal and informal language und...
