The Lifted Matrix-Space Model for Semantic Composition

11/09/2017
by WooJin Chung et al.

Recent advances in tree-structured sentence encoding models have shown that explicitly modeling syntax can help handle compositionality. In particular, Socher et al. (2012, 2013) and Chen et al. (2013) showed that tree-structured models with more powerful, multiplicative composition functions can yield significant improvements in performance. However, existing compositional approaches that make use of these multiplicative interactions usually have to learn task-specific matrix-shaped word embeddings or rely on third-order tensors, which can be very costly. This paper introduces the Lifted Matrix-Space model, which addresses this limitation. The model learns a single global transformation from pre-trained word embeddings into matrices, which are then composed via matrix multiplication. The upshot is that the model captures multiplicative interactions without learning matrix-valued word representations from scratch. In addition, our composition function transmits a larger number of activations across layers with comparatively few model parameters. We evaluate the model on the Stanford NLI and Multi-Genre NLI corpora and find that the Lifted Matrix-Space model outperforms tree-structured long short-term memory (LSTM) networks.
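The core idea described in the abstract is simple enough to sketch in code: one shared ("lifted") linear map turns each pre-trained word vector into an m × m matrix, and constituents are composed by matrix multiplication, which supplies the multiplicative interaction. The sketch below is a minimal illustration under those assumptions; the class and parameter names (LiftedMatrixSpace, embed_dim, m) are hypothetical, and the full model in the paper includes additional components not shown here.

```python
import torch
import torch.nn as nn

class LiftedMatrixSpace(nn.Module):
    """Illustrative sketch (not the authors' implementation): lift
    pre-trained word vectors into m x m matrices with one shared
    linear map, then compose constituents by matrix multiplication."""

    def __init__(self, embed_dim: int, m: int):
        super().__init__()
        self.m = m
        # A single global transformation shared across the vocabulary,
        # so no matrix-valued embedding is learned per word.
        self.lift = nn.Linear(embed_dim, m * m)

    def lift_word(self, vec: torch.Tensor) -> torch.Tensor:
        """Map an (embed_dim,) pre-trained embedding to an (m, m) matrix."""
        return self.lift(vec).view(self.m, self.m)

    def compose(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        """Combine two constituent matrices; the matrix product itself
        provides the multiplicative interaction between them."""
        return left @ right


# Usage: compose a two-word phrase from (stand-in) 300-d embeddings.
model = LiftedMatrixSpace(embed_dim=300, m=10)
very, good = torch.randn(300), torch.randn(300)
phrase = model.compose(model.lift_word(very), model.lift_word(good))
print(phrase.shape)  # torch.Size([10, 10])
```

Note that because the lifting map is shared globally, the parameter count stays fixed at embed_dim × m² regardless of vocabulary size, in contrast to approaches that learn a separate matrix per word.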


