Natural Language Inference with Hierarchical BiLSTM Max Pooling Architecture

08/27/2018
by Aarne Talman, et al.

Recurrent neural networks have proven very effective for natural language inference tasks. We build on one such model, BiLSTM with max pooling, and show that adding a hierarchy of BiLSTM and max pooling layers yields state-of-the-art results among sentence encoding-based models on SNLI and on the SciTail dataset, as well as strong results on the MultiNLI dataset. We also show that our sentence embeddings can be utilized in a wide variety of transfer learning tasks, outperforming InferSent on 7 out of 10 and SkipThought on 8 out of 9 SentEval sentence embedding evaluation tasks. Furthermore, our model beats InferSent on 8 out of 10 recently published SentEval probing tasks designed to evaluate sentence embeddings' ability to capture some of the important linguistic properties of sentences.

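To make the architecture concrete, below is a minimal PyTorch sketch of one way such a hierarchy of BiLSTM and max pooling layers can be wired. The class name, the hyperparameters (300-dimensional embeddings, 600-dimensional hidden states, three layers), and the passing of each layer's final states into the next layer are illustrative assumptions for this sketch, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class HierarchicalBiLSTMMaxPool(nn.Module):
        """Sketch: a hierarchy of BiLSTM layers, each max-pooled over time.

        Each layer re-reads the word embeddings; its final (h, c) states
        seed the next layer, and the per-layer max-pooled outputs are
        concatenated into the sentence embedding. The state passing and
        the default sizes are assumptions made for illustration.
        """

        def __init__(self, embed_dim=300, hidden_dim=600, num_layers=3):
            super().__init__()
            self.layers = nn.ModuleList(
                nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
                for _ in range(num_layers)
            )

        def forward(self, embeddings):
            # embeddings: (batch, seq_len, embed_dim)
            pooled, state = [], None
            for lstm in self.layers:
                # outputs: (batch, seq_len, 2 * hidden_dim); state seeds the next layer
                outputs, state = lstm(embeddings, state)
                # max pooling over the time dimension
                pooled.append(outputs.max(dim=1).values)
            # sentence embedding: concatenation of the per-layer pooled vectors
            return torch.cat(pooled, dim=1)  # (batch, num_layers * 2 * hidden_dim)

A quick usage check: encoding a batch of 8 sentences of 20 pre-embedded tokens each, encoder = HierarchicalBiLSTMMaxPool(); encoder(torch.randn(8, 20, 300)) returns a tensor of shape (8, 3600), i.e. three layers times two directions times 600 hidden units.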
Related research

06/26/2018 · Enhancing Sentence Embedding with Generalized Pooling
Pooling is an essential component of a wide variety of sentence represen...

05/01/2020 · Why and when should you pool? Analyzing Pooling in Recurrent Architectures
Pooling-based recurrent neural architectures consistently outperform the...

06/06/2016 · A Decomposable Attention Model for Natural Language Inference
We propose a simple neural architecture for natural language inference. ...

09/18/2018 · Learning Universal Sentence Representations with Mean-Max Attention Autoencoder
In order to learn universal sentence representations, previous methods f...

05/30/2016 · Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
In this paper, we proposed a sentence encoding-based model for recognizi...

10/16/2020 · Inferring symmetry in natural language
We present a methodological framework for inferring symmetry of verb pre...
