Where's the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit

08/09/2022
by   Mughilan Muthupari, et al.

Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore what settings allow for this and examine where most of the learning is encoded: in the word embeddings, in the semantic role embeddings, or in “the network”. We find nuanced answers, depending on the task and its relation to the training objective. We examine these representation-learning aspects in a multi-task learning setting, where role prediction and role filling are supervised tasks, while several thematic fit tasks lie outside the models' direct supervision. We observe a non-monotonic relation between some tasks' quality scores and the training data size. To better understand this observation, we analyze these results using easier, per-verb versions of these tasks.
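The comparison at the heart of the abstract, a randomly initialized embedding table versus a pretrained one, with downstream layers doing the learning, can be sketched minimally as follows. The vocabulary, dimensionality, and the stand-in "pretrained" values here are hypothetical illustrations, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"eat": 0, "pizza": 1, "knife": 2}  # toy vocabulary (hypothetical)
dim = 4

# Setting 1: randomly initialized (and typically frozen) embeddings.
random_emb = rng.normal(0.0, 0.1, size=(len(vocab), dim))

# Setting 2: a stand-in for pretrained embeddings; a real experiment
# would load e.g. word2vec or GloVe vectors here instead.
pretrained_emb = rng.normal(0.0, 1.0, size=(len(vocab), dim))

def embed(tokens, table):
    """Look up each token's row vector; downstream task layers train on these."""
    return np.stack([table[vocab[t]] for t in tokens])

x = embed(["eat", "pizza"], random_emb)
print(x.shape)  # (2, 4)
```

If a frozen `random_emb` yields task scores comparable to `pretrained_emb`, the learning is evidently happening in the layers above the lookup, which is the kind of question the paper probes.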


