Adaptive Convolutional Filter Generation for Natural Language Understanding

09/25/2017
by   Dinghan Shen, et al.

Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP are not expressive enough, in the sense that all input sentences share the same learned (and static) set of filters. Motivated by this problem, we propose an adaptive convolutional filter generation framework for natural language understanding, by leveraging a meta network to generate input-aware filters. We further generalize our framework to model question-answer sentence pairs and propose an adaptive question answering (AdaQA) model; a novel two-way feature abstraction mechanism is introduced to encapsulate co-dependent sentence representations. We investigate the effectiveness of our framework on document categorization and answer sentence-selection tasks, achieving state-of-the-art performance on several benchmark datasets.
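To make the filter-generation idea concrete, the sketch below illustrates one plausible way to implement it; this is not the authors' released code. A small meta network maps a pooled summary of the input sentence to the weights of the convolution filters that are then applied to that same sentence, so every input gets its own filter set. The module names, layer sizes, the mean-pooled summary, and the grouped-convolution trick for applying per-example filters in a batch are all assumptions made for illustration.

```python
# Hedged sketch of input-aware (adaptive) convolutional filter generation.
# Assumptions: module names, hyperparameters, and the grouped-conv batching
# trick are illustrative, not taken from the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveConvEncoder(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, kernel_size=5):
        super().__init__()
        self.n_filters = n_filters
        self.kernel_size = kernel_size
        # Meta network: maps a fixed-size summary of the input sentence to
        # the weights of the convolution filters used on that sentence.
        self.filter_generator = nn.Sequential(
            nn.Linear(emb_dim, 200),
            nn.ReLU(),
            nn.Linear(200, n_filters * emb_dim * kernel_size),
        )

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) word embeddings.
        batch, seq_len, emb_dim = x.shape
        summary = x.mean(dim=1)                    # pooled sentence summary
        filters = self.filter_generator(summary)   # (batch, F * D * K)
        filters = filters.view(batch * self.n_filters, emb_dim, self.kernel_size)

        # Apply each sentence's own filters with one grouped convolution:
        # fold the batch into the channel dimension so group i convolves
        # sentence i only with the filters generated for sentence i.
        x = x.transpose(1, 2).reshape(1, batch * emb_dim, seq_len)
        out = F.conv1d(x, filters, padding=self.kernel_size // 2, groups=batch)
        out = out.view(batch, self.n_filters, seq_len)
        return out.max(dim=2).values               # max-over-time pooling


# Usage: encode a batch of 8 sentences, each 20 tokens long.
enc = AdaptiveConvEncoder()
sentences = torch.randn(8, 20, 300)
print(enc(sentences).shape)  # torch.Size([8, 100])
```

The grouped convolution is only a batching convenience: the same result could be obtained by looping over the batch and calling a standard 1-D convolution with each sentence's generated filters.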
