Performance Evaluation of Multi-representation in Deep Learning Models for the Relation Extraction Task
Using, concatenating, adding, or replacing word representations has yielded significant improvements on many NLP tasks, particularly in Relation Extraction, where static, contextualized, and other representations can capture word meanings through the linguistic features they incorporate. This work addresses the question of how relation extraction is improved by using different types of representations generated by pretrained language representation models. We benchmark our approach with popular word representation models, replacing and concatenating static and contextualized representations as well as representations of hand-extracted features. The experiments show that the choice of representation is crucial when a deep learning (DL) approach is applied. Word embeddings from Flair and BERT can be well exploited by a deep learning model for the RE task, and replacing static word embeddings with contextualized word representations can lead to significant improvements. In contrast, hand-crafted representations are time-consuming to build and do not guarantee an improvement when combined with other representations.
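To illustrate the kind of representation stacking the abstract describes, below is a minimal sketch using the Flair library. This is an assumption for illustration only: the abstract does not specify the authors' exact models or pipeline, and the chosen embeddings (GloVe, news-forward/backward Flair, bert-base-uncased) are hypothetical placeholders. The sketch concatenates a static embedding with contextualized Flair and BERT embeddings per token; "replacing" rather than concatenating amounts to passing only one embedding class to the stack.

    from flair.data import Sentence
    from flair.embeddings import (
        WordEmbeddings,
        FlairEmbeddings,
        TransformerWordEmbeddings,
        StackedEmbeddings,
    )

    # Static GloVe embeddings plus contextualized Flair and BERT embeddings.
    # StackedEmbeddings concatenates the per-token vectors of each model.
    stacked = StackedEmbeddings([
        WordEmbeddings("glove"),                        # static
        FlairEmbeddings("news-forward"),                # contextualized (char LM)
        FlairEmbeddings("news-backward"),
        TransformerWordEmbeddings("bert-base-uncased"), # contextualized (BERT)
    ])

    sentence = Sentence("Steve Jobs founded Apple in 1976.")
    stacked.embed(sentence)

    for token in sentence:
        # Each token now carries one concatenated vector that a downstream
        # relation-extraction model can consume.
        print(token.text, token.embedding.shape)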