Read Beyond the Lines: Understanding the Implied Textual Meaning via a Skim and Intensive Reading Model

01/03/2020
by Guoxiu He, et al.

The nonliteral interpretation of a text is hard for machine models to understand because of its high context sensitivity and heavy use of figurative language. In this study, inspired by human reading comprehension, we propose a novel, simple, and effective deep neural framework, the Skim and Intensive Reading Model (SIRM), for figuring out implied textual meaning. The proposed SIRM consists of two main components: a skim reading component and an intensive reading component. The skim reading component, a combination of several convolutional neural networks, quickly extracts n-gram features as skim (entire-text) information. The intensive reading component performs a hierarchical investigation of both local (sentence) and global (paragraph) representations, encapsulating the current embedding and the contextual information through a dense connection. More specifically, the contextual information includes the near-neighbor information and the skim information mentioned above. Finally, besides the normal training loss function, we employ an adversarial loss function as a penalty over the skim reading component to eliminate noisy information arising from specific figurative words in the training data. To verify the effectiveness, robustness, and efficiency of the proposed architecture, we conduct extensive comparative experiments on several sarcasm benchmarks and an industrial spam dataset containing metaphors. Experimental results indicate that (1) the proposed model, which benefits from context modeling and consideration of figurative language, outperforms existing state-of-the-art solutions with a comparable parameter scale and training speed; (2) the SIRM yields superior robustness in terms of parameter-size sensitivity; and (3) compared with ablation and addition variants, the final SIRM framework is sufficiently efficient.
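To make the described architecture concrete, below is a minimal PyTorch sketch of a SIRM-style classifier: a CNN-based skim reader that pools n-gram features into a skim vector, a hierarchical (sentence-then-paragraph) intensive reader that densely concatenates the skim vector with the current embeddings, and a combined loss with a penalty on the skim branch. All layer names, dimensions, and the simplified penalty term (a KL push toward uninformative skim predictions, standing in for the paper's adversarial loss) are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a Skim-and-Intensive-Reading-style model (assumed design, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkimReader(nn.Module):
    """CNNs with several kernel widths whose pooled n-gram features form one skim vector."""

    def __init__(self, emb_dim: int, n_filters: int = 64, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)             # -> (batch, emb_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)    # (batch, n_filters * len(kernel_sizes))


class IntensiveReader(nn.Module):
    """Hierarchical encoder: sentence-level and paragraph-level BiGRUs,
    with the skim vector densely concatenated to every word embedding."""

    def __init__(self, emb_dim: int, skim_dim: int, hidden: int = 128):
        super().__init__()
        self.sent_rnn = nn.GRU(emb_dim + skim_dim, hidden, batch_first=True, bidirectional=True)
        self.para_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, x, skim):           # x: (batch, n_sent, sent_len, emb_dim)
        b, n_sent, sent_len, _ = x.shape
        skim_w = skim[:, None, None, :].expand(b, n_sent, sent_len, skim.size(-1))
        words = torch.cat([x, skim_w], dim=-1).view(b * n_sent, sent_len, -1)
        sent_vec = self.sent_rnn(words)[0].max(dim=1).values.view(b, n_sent, -1)
        return self.para_rnn(sent_vec)[0].max(dim=1).values   # (batch, 2 * hidden)


class SIRMSketch(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.skim = SkimReader(emb_dim)
        skim_dim = 64 * 3                                      # n_filters * number of kernel widths
        self.intensive = IntensiveReader(emb_dim, skim_dim)
        self.classifier = nn.Linear(2 * 128, n_classes)
        self.skim_head = nn.Linear(skim_dim, n_classes)        # auxiliary head used by the penalty

    def forward(self, tokens):            # tokens: (batch, n_sent, sent_len) word ids
        b, n_sent, sent_len = tokens.shape
        emb = self.embed(tokens)                               # (b, n_sent, sent_len, emb_dim)
        skim = self.skim(emb.view(b, n_sent * sent_len, -1))   # (b, skim_dim)
        doc = self.intensive(emb, skim)
        return self.classifier(doc), self.skim_head(skim)


def training_loss(main_logits, skim_logits, labels, lam: float = 0.1):
    """Cross-entropy on the full model plus a penalty pushing the skim branch
    toward uniform predictions (a simplified stand-in for the adversarial loss)."""
    main = F.cross_entropy(main_logits, labels)
    uniform = torch.full_like(skim_logits, 1.0 / skim_logits.size(-1))
    penalty = F.kl_div(F.log_softmax(skim_logits, dim=-1), uniform, reduction="batchmean")
    return main + lam * penalty
```

In this sketch the skim vector acts as global context for every word position, and the penalty discourages the skim branch alone from carrying class-discriminative (e.g., figurative-word) shortcuts; the paper's actual adversarial formulation may differ.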
