Improving Natural Language Inference with a Pretrained Parser

09/18/2019
by Deric Pang, et al.

We introduce a novel approach to incorporate syntax into natural language inference (NLI) models. Our method uses contextual token-level vector representations from a pretrained dependency parser. Like other contextual embedders, our method is broadly applicable to any neural model. We experiment with four strong NLI models (decomposable attention model, ESIM, BERT, and MT-DNN), and show consistent benefit to accuracy across three NLI benchmarks.
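The abstract describes the approach only at a high level, so the sketch below illustrates one plausible reading of it: the contextual encoder of a pretrained dependency parser is treated as a frozen token-level embedder, and its states are concatenated with an NLI model's ordinary word embeddings before downstream encoding. This is not the authors' implementation; all names and dimensions (ParserEncoder, SyntaxAugmentedNLIEncoder, PARSER_DIM, etc.) are illustrative assumptions.

```python
# Minimal sketch, assuming the parser's encoder states are used as an extra
# contextual embedding channel. Not the authors' code.

import torch
import torch.nn as nn

WORD_DIM, PARSER_DIM, HIDDEN = 300, 400, 200


class ParserEncoder(nn.Module):
    """Stand-in for a pretrained parser's contextual encoder (e.g., the
    BiLSTM of a biaffine dependency parser). In practice its weights would
    be loaded from a parsing checkpoint and kept fixed."""

    def __init__(self, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, WORD_DIM)
        self.encoder = nn.LSTM(WORD_DIM, PARSER_DIM // 2,
                               batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return states  # (batch, seq_len, PARSER_DIM)


class SyntaxAugmentedNLIEncoder(nn.Module):
    """Concatenates frozen parser states with trainable word embeddings,
    then encodes the result for a downstream NLI classifier."""

    def __init__(self, vocab_size, parser: ParserEncoder):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, WORD_DIM)
        self.parser = parser
        for p in self.parser.parameters():   # freeze the parser
            p.requires_grad = False
        self.encoder = nn.LSTM(WORD_DIM + PARSER_DIM, HIDDEN,
                               batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        with torch.no_grad():
            syntax = self.parser(token_ids)
        tokens = torch.cat([self.word_embed(token_ids), syntax], dim=-1)
        encoded, _ = self.encoder(tokens)
        return encoded


if __name__ == "__main__":
    parser = ParserEncoder(vocab_size=10000)
    model = SyntaxAugmentedNLIEncoder(vocab_size=10000, parser=parser)
    premise = torch.randint(0, 10000, (2, 12))   # toy batch of token ids
    print(model(premise).shape)                  # torch.Size([2, 12, 400])
```

Because the parser-derived vectors enter the model the same way as any other contextual embedding, this style of augmentation can in principle be attached to sentence-encoding NLI models (decomposable attention, ESIM) and transformer-based ones (BERT, MT-DNN) alike, which matches the abstract's claim of broad applicability.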

