Unsupervised Learning of Explainable Parse Trees for Improved Generalisation

04/11/2021
by Atul Sahay, et al.

Recursive neural networks (RvNNs) have proven useful for learning sentence representations and have helped achieve competitive performance on several natural language inference tasks. However, recent RvNN-based models fail to learn simple grammar and meaningful semantics in their intermediate tree representations. In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures. We also demonstrate the superior performance of the proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks, and compare it with other state-of-the-art RvNN-based methods. Further, we present a detailed qualitative and quantitative analysis of the learned parse trees and show that the discovered linguistic structures are more explainable, semantically meaningful, and grammatically correct than those of recent approaches. The source code of the paper is available at https://github.com/atul04/Explainable-Latent-Structures-Using-Attention.
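The abstract does not spell out the architecture, so the following is only a minimal sketch of the general idea, assuming a bottom-up encoder in the style of latent Tree-LSTM parsers: adjacent node pairs are composed with a binary Tree-LSTM cell, and an attention scorer over the candidate parents decides which pair to merge next. All module names, shapes, and the soft-selection step are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (an assumption, not the paper's implementation): a binary
# Tree-LSTM composition cell plus an attention scorer that decides which
# adjacent pair of nodes to merge at each step of a bottom-up encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryTreeLSTMCell(nn.Module):
    """Compose left/right child states (h, c) into a parent state."""

    def __init__(self, dim: int):
        super().__init__()
        # One projection producing the five Tree-LSTM gates: i, f_l, f_r, o, g.
        self.proj = nn.Linear(2 * dim, 5 * dim)

    def forward(self, hl, cl, hr, cr):
        i, fl, fr, o, g = self.proj(torch.cat([hl, hr], dim=-1)).chunk(5, dim=-1)
        c = torch.sigmoid(i) * torch.tanh(g) + torch.sigmoid(fl) * cl + torch.sigmoid(fr) * cr
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class AttentiveTreeEncoder(nn.Module):
    """Greedy bottom-up encoder: attention over candidate parents picks merges."""

    def __init__(self, dim: int):
        super().__init__()
        self.cell = BinaryTreeLSTMCell(dim)
        self.score = nn.Linear(dim, 1)  # attention scorer over candidate parents

    def forward(self, h, c):
        # h, c: (seq_len, dim) leaf states, e.g. from embeddings + a leaf LSTM.
        tree = [[i] for i in range(h.size(0))]  # record spans for inspection
        while h.size(0) > 1:
            # Compose every adjacent pair into a candidate parent.
            ph, pc = self.cell(h[:-1], c[:-1], h[1:], c[1:])
            # Attention over the candidates; the argmax pair is merged.
            attn = F.softmax(self.score(ph).squeeze(-1), dim=0)
            k = int(attn.argmax())
            # A soft-weighted parent keeps the selection differentiable; the
            # original work may instead use a hard (e.g. Gumbel) relaxation.
            parent_h = (attn.unsqueeze(-1) * ph).sum(0, keepdim=True)
            parent_c = (attn.unsqueeze(-1) * pc).sum(0, keepdim=True)
            h = torch.cat([h[:k], parent_h, h[k + 2:]], dim=0)
            c = torch.cat([c[:k], parent_c, c[k + 2:]], dim=0)
            tree[k:k + 2] = [[tree[k], tree[k + 1]]]  # merge the two spans
        return h.squeeze(0), tree[0]  # sentence vector and the induced tree
```

The nested list returned alongside the sentence vector is the induced binary parse, the kind of structure the paper's qualitative and quantitative analysis inspects.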

Related research

02/25/2019  Cooperative Learning of Disjoint Syntax and Semantics
There has been considerable attention devoted to models that learn to jo...

06/11/2023  Improving the Validity of Decision Trees as Explanations
In classification and forecasting with tabular data, one often utilizes ...

09/07/2018  Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations
Most existing recursive neural network (RvNN) architectures utilize only...

05/21/2021  Rule Augmented Unsupervised Constituency Parsing
Recently, unsupervised parsing of syntactic trees has gained considerabl...

02/08/2023  Ordered Memory Baselines
Natural language semantics can be modeled using the phrase-structured mo...

08/31/2023  Interpreting Sentiment Composition with Latent Semantic Tree
As the key to sentiment analysis, sentiment composition considers the cl...

01/13/2013  Cutting Recursive Autoencoder Trees
Deep Learning models enjoy considerable success in Natural Language Proc...
