
The performance evaluation of Multi-representation in the Deep Learning models for Relation Extraction Task

Implementing a single representation, or concatenating, adding, or replacing representations, has yielded significant improvements on many NLP tasks. This is especially true in relation extraction, where static, contextualized, and other representations can capture word meanings through the linguistic features they incorporate. This work addresses the question of how relation extraction improves when different types of representations generated by pretrained language representation models are used. We benchmarked our approach using popular word representation models, replacing and concatenating static, contextualized, and hand-extracted feature representations. The experiments show that the choice of representation is crucial when a deep learning approach is applied. Word embeddings from Flair and BERT can be well interpreted by a deep learning model for the RE task, and replacing static word embeddings with contextualized word representations can lead to significant improvements. In contrast, hand-crafted representations are time-consuming to build and do not guarantee an improvement when combined with other representations.
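The two combination strategies the abstract compares, replacing static embeddings with contextualized ones versus concatenating the two per token, can be sketched as follows. This is a minimal illustrative example with toy random embedding tables standing in for real models (GloVe/word2vec for the static side, BERT/Flair for the contextual side); the vocabulary, dimensions, and lookup functions are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static embedding table: one fixed vector per word type
# (stand-in for GloVe/word2vec; dimensions are illustrative only).
static_dim, context_dim = 4, 6
vocab = {"drug": 0, "causes": 1, "disease": 2}
static_table = rng.normal(size=(len(vocab), static_dim))

def static_embed(tokens):
    """Look up one fixed vector per word type, regardless of context."""
    return np.stack([static_table[vocab[t]] for t in tokens])

def contextual_embed(tokens):
    """Stand-in for a contextualized encoder (BERT/Flair).
    A real model would condition each vector on the whole sentence;
    here we just emit random vectors of the right shape."""
    return rng.normal(size=(len(tokens), context_dim))

tokens = ["drug", "causes", "disease"]

# 'Replacing': feed only the contextualized vectors to the relation classifier.
replaced = contextual_embed(tokens)            # shape (3, 6)

# 'Concatenating': append static and contextualized vectors per token,
# so the classifier sees both views of each word.
concatenated = np.concatenate(
    [static_embed(tokens), contextual_embed(tokens)], axis=1
)                                              # shape (3, 10)

print(replaced.shape, concatenated.shape)
```

The downstream relation classifier only sees a per-token matrix either way; the two strategies differ only in that dimensionality, so swapping between them requires no architectural change beyond resizing the input layer.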

