Not All Neural Embeddings are Born Equal

10/02/2014
by Felix Hill et al.

Neural language models learn word representations that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models. We show that translation-based embeddings outperform those learned by cutting-edge monolingual models at single-language tasks requiring knowledge of conceptual similarity and/or syntactic role. The findings suggest that, while monolingual models learn information about how concepts are related, neural-translation models better capture their true ontological status.
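The conceptual-similarity evaluation the abstract refers to is typically run by scoring word pairs with the cosine similarity of their embeddings and correlating those scores with human similarity ratings via Spearman rank correlation. A minimal sketch, using toy vectors and hypothetical ratings purely for illustration (not the paper's actual data or models):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(xs, ys):
    # Spearman correlation = Pearson correlation of the ranks
    # (ties not handled; the toy data below has none).
    def ranks(vals):
        order = np.argsort(vals)
        r = np.empty(len(vals))
        r[order] = np.arange(len(vals))
        return r
    rx, ry = ranks(np.asarray(xs)), ranks(np.asarray(ys))
    rx, ry = rx - rx.mean(), ry - ry.mean()
    return float(np.dot(rx, ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Hypothetical embeddings and SimLex-style 0-10 human ratings.
emb = {
    "teacher":    np.array([0.9, 0.1, 0.3]),
    "instructor": np.array([0.9, 0.1, 0.2]),
    "teacup":     np.array([0.1, 0.9, 0.2]),
    "cup":        np.array([0.2, 0.8, 0.1]),
}
pairs = [("teacher", "instructor"), ("teacher", "teacup"), ("cup", "teacup")]
human = [9.0, 1.5, 7.0]
model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(round(spearman(model, human), 1))  # → 1.0 on this toy data
```

A higher Spearman correlation means the embedding space ranks word pairs more like humans do; the paper's claim is that translation-based embeddings score better on such benchmarks than monolingual ones.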


