Evaluation of Unsupervised Compositional Representations

06/12/2018
by Hanan Aldarmaki, et al.

We evaluated various compositional models, from bag-of-words representations to compositional RNN-based models, on several extrinsic supervised and unsupervised evaluation benchmarks. Our results confirm that weighted vector averaging can outperform context-sensitive models on most benchmarks, but the structural features encoded in RNN models can also be useful in certain classification tasks. We further analyzed several of the evaluation datasets to identify which aspects of meaning they measure and which characteristics of the various models explain the variance in their performance.
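The weighted vector averaging baseline mentioned above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of frequency-weighted word-vector averaging (in the spirit of SIF-style down-weighting of frequent words); the function name, the smoothing constant a, and the toy vocabulary are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_average(sentence, embeddings, word_freq, a=1e-3):
    """Return a sentence vector as the weighted mean of its word vectors.

    sentence   -- list of tokens
    embeddings -- dict mapping token -> np.ndarray of shape (d,)
    word_freq  -- dict mapping token -> relative corpus frequency p(w)
    a          -- smoothing constant; frequent words get weight a / (a + p(w))
    """
    vectors, weights = [], []
    for w in sentence:
        if w in embeddings:
            vectors.append(embeddings[w])
            weights.append(a / (a + word_freq.get(w, 0.0)))
    if not vectors:
        return None  # no known words in the sentence
    vectors = np.stack(vectors)             # shape (n, d)
    weights = np.asarray(weights)[:, None]  # shape (n, 1)
    return (weights * vectors).sum(axis=0) / weights.sum()

# Toy usage with random 5-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=5) for w in ["the", "cat", "sat"]}
freq = {"the": 0.05, "cat": 0.001, "sat": 0.002}
print(weighted_average(["the", "cat", "sat"], emb, freq))
```

In this sketch, highly frequent function words such as "the" contribute less to the sentence vector than content words, which is one way such weighted averages can remain competitive with context-sensitive models on semantic benchmarks.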


Related research

11/16/2018 - Analyzing Compositionality-Sensitivity of NLI Models
Success in natural language inference (NLI) should require a model to un...

10/06/2022 - Compositional Generalisation with Structured Reordering and Fertility Layers
Seq2seq models have been shown to struggle with compositional generalisa...

10/02/2022 - Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language
Deep learning models struggle with compositional generalization, i.e. th...

04/22/2018 - A Study on Passage Re-ranking in Embedding based Unsupervised Semantic Search
State of the art approaches for (embedding based) unsupervised semantic ...

12/20/2018 - RNNs Implicitly Implement Tensor Product Representations
Recurrent neural networks (RNNs) can learn continuous vector representat...

11/17/2021 - Three approaches to supervised learning for compositional data with pairwise logratios
The common approach to compositional data analysis is to transform the d...

01/31/2018 - Paraphrase-Supervised Models of Compositionality
Compositional vector space models of meaning promise new solutions to st...
