Word-Level Loss Extensions for Neural Temporal Relation Classification

08/07/2018
by Artuur Leeuwenberg, et al.

Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are used either as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss at the word-embedding level of the model. The aim is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features, learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective yields better task-specific embeddings and an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as a linguistic resource.
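The core idea — a supervised task loss plus a weighted unsupervised auxiliary loss, with gradients from both flowing into one shared embedding matrix — can be sketched as a toy example. This is a minimal NumPy illustration under stated assumptions, not the paper's actual architecture: the logistic-regression task head, the single skip-gram-style (center, context) pair, and the auxiliary weight `lam` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d = 10, 4                              # toy vocabulary size and embedding dim
E = rng.normal(scale=0.1, size=(V, d))    # shared embedding matrix
w = rng.normal(scale=0.1, size=2 * d)     # task classifier weights (illustrative)

lam, lr = 0.5, 0.1                        # auxiliary-loss weight, learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

pair, label = (1, 2), 1.0                 # toy supervised relation instance
center, context = 1, 3                    # toy unsupervised (center, context) pair

task_hist, aux_hist = [], []
for step in range(200):
    # supervised component: logistic loss on the concatenated pair embeddings
    x = np.concatenate([E[pair[0]], E[pair[1]]])
    p = sigmoid(w @ x)
    task_loss = -np.log(p)                # label is 1.0 in this toy instance

    # unsupervised component: skip-gram-style loss on the (center, context) pair
    q = sigmoid(E[center] @ E[context])
    aux_loss = -np.log(q)

    task_hist.append(task_loss)
    aux_hist.append(aux_loss)

    # analytic gradients of the combined loss  L = task_loss + lam * aux_loss
    g = p - label
    grad_w = g * x
    grad_E = np.zeros_like(E)
    grad_E[pair[0]] += g * w[:d]
    grad_E[pair[1]] += g * w[d:]
    gq = q - 1.0
    grad_E[center] += lam * gq * E[context]
    grad_E[context] += lam * gq * E[center]

    # both loss components update the SAME embedding matrix E
    w -= lr * grad_w
    E -= lr * grad_E
```

Because both gradients update `E`, the embeddings retain general co-occurrence features while adapting to the task, and `lam` trades the two objectives off against each other.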


