
Embedding Lexical Features via Low-Rank Tensors

04/02/2016
by Mo Yu, et al.
IBM
Johns Hopkins University
Carnegie Mellon University

Modern NLP models rely heavily on engineered features, which often combine word and contextual information into complex lexical features. Such combinations result in large numbers of features, which can lead to overfitting. We present a new model that represents complex lexical features (comprised of parts for words, contextual information, and labels) in a tensor that captures conjunction information among these parts. We apply low-rank tensor approximations to the corresponding parameter tensors to reduce the parameter space and improve prediction speed. Furthermore, we investigate two methods for handling features that include n-grams of mixed lengths. Our model achieves state-of-the-art results on tasks in relation extraction, PP-attachment, and preposition disambiguation.
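To make the idea concrete, the sketch below (not the authors' code) shows how a conjunction of a word part, a context part, and a label part can be scored with a rank-R CP approximation of a 3-way parameter tensor instead of the full tensor. The dimensions, the dot-product scoring form, and all variable names here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: low-rank (CP) scoring of a (word, context, label) conjunction.
# All sizes and the exact scoring form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_word, d_ctx, n_labels, rank = 50, 30, 5, 10  # illustrative dimensions

# A full parameter tensor would need d_word * d_ctx * n_labels = 7500 entries;
# the rank-10 factorization keeps only rank * (50 + 30 + 5) = 850 parameters.
U = rng.normal(size=(rank, d_word))    # factor for the word part
V = rng.normal(size=(rank, d_ctx))     # factor for the context part
Z = rng.normal(size=(rank, n_labels))  # factor for the label part

def score(word_vec, ctx_vec, label_onehot):
    """Score the conjunction under the low-rank tensor:
    sum_r (U_r . word) * (V_r . ctx) * (Z_r . label)."""
    return float(np.sum((U @ word_vec) * (V @ ctx_vec) * (Z @ label_onehot)))

# Usage: score one candidate label for a word in its context.
w = rng.normal(size=d_word)        # word representation
c = rng.normal(size=d_ctx)         # contextual representation
y = np.eye(n_labels)[2]            # indicator for candidate label 2
print(score(w, c, y))
```

Because each factor matrix projects its part into a shared rank-sized space, scoring is linear in the embedding dimensions rather than in the size of the full conjunction tensor, which is the source of the parameter and speed savings the abstract describes.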
