Delta Embedding Learning

12/11/2018
by Xiao Zhang, et al.

Learning from a corpus and learning from supervised NLP tasks both yield useful semantics that can be incorporated into a good word representation. We propose an embedding learning method called Delta Embedding Learning, which learns semantic information from high-level supervised tasks such as reading comprehension and combines it with an unsupervised word embedding. This simple technique not only improves the performance of various supervised NLP tasks, but also simultaneously learns improved universal word embeddings from those tasks.
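The idea of combining a fixed unsupervised embedding with a task-learned component can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the variable names, the zero initialization of the delta, and the L1 regularization mentioned in the comment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 5, 4

# Frozen embedding from unsupervised corpus training (random stand-in here).
pretrained = rng.normal(size=(vocab_size, dim))

# Trainable "delta" embedding, assumed initialized at zero so training
# starts from the unsupervised representation.
delta = np.zeros((vocab_size, dim))

def embed(word_ids):
    """Final word representation = frozen unsupervised part + learned delta."""
    return pretrained[word_ids] + delta[word_ids]

# During supervised training, only `delta` would receive gradients; a
# regularizer (e.g. lam * np.abs(delta).sum()) could keep the correction
# small so the combined embedding stays close to the pretrained one.

ids = np.array([0, 2])
out = embed(ids)
```

Before any supervised updates, `embed` simply reproduces the pretrained vectors; after training, the delta carries the task-specific semantic refinement.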


Related research

11/12/2020 · Deconstructing word embedding algorithms
Word embeddings are reliable feature representations of words used to ob...

02/24/2022 · First is Better Than Last for Training Data Influence
The ability to identify influential training examples enables us to debu...

01/17/2013 · Affinity Weighted Embedding
Supervised (linear) embedding models like Wsabie and PSI have proven suc...

06/08/2021 · Adversarial Training for Machine Reading Comprehension with Virtual Embeddings
Adversarial training (AT) as a regularization method has proved its effe...

01/25/2020 · An Analysis of Word2Vec for the Italian Language
Word representation is fundamental in NLP tasks, because it is precisely...

10/02/2017 · Unsupervised Hypernym Detection by Distributional Inclusion Vector Embedding
Modeling hypernymy, such as poodle is-a dog, is an important generalizat...

06/24/2018 · Subword-augmented Embedding for Cloze Reading Comprehension
Representation learning is the foundation of machine reading comprehensi...
