An Accurate Model for Predicting the (Graded) Effect of Context in Word Similarity Based on BERT

05/03/2020
by Wei Bao, et al.

Natural Language Processing (NLP) has been widely used for semantic analysis in recent years. Our paper mainly discusses a methodology for analyzing the effect that context has on human perception of similar words, which is the third task of SemEval-2020. We apply several methods to calculate the distance between two embedding vectors generated by Bidirectional Encoder Representations from Transformers (BERT). Our team, will_go, won first place in the Finnish track of Subtask 1 and second place in the English track of Subtask 1.
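The abstract does not give implementation details, but the core computation it describes can be sketched as follows: embed the same target word in two different contexts with BERT and compare the resulting contextual vectors. This is a minimal illustration using the Hugging Face transformers library; the model name (bert-base-uncased), the mean-pooling of subword pieces, the use of cosine similarity as the distance measure, and the example sentences are assumptions for the sketch, not the authors' exact setup.

# Minimal sketch (not the authors' released code): embed a target word in two
# contexts with BERT and compare the contextual vectors via cosine similarity.
# Model choice and mean-pooling of subword pieces are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the last-layer hidden states of the word's subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the word's subword span in the sentence.
    for i in range(len(tokens) - len(word_ids) + 1):
        if tokens[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

ctx1 = "The bank raised interest rates again this quarter."
ctx2 = "They had a picnic on the bank of the river."
v1, v2 = word_vector(ctx1, "bank"), word_vector(ctx2, "bank")
cos = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity across contexts: {cos.item():.3f}")

A lower cosine similarity between the two vectors indicates that the surrounding contexts pulled the word's contextual representation further apart, which is the graded effect of context the task asks systems to predict.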

