On the Impact of Temporal Representations on Metaphor Detection

11/05/2021
by   Giorgio Ottolina, et al.

State-of-the-art approaches for metaphor detection compare a word's literal, or core, meaning with its contextual meaning using sequential metaphor classifiers based on neural networks. The literal meaning is often represented by (non-contextual) word embeddings. However, metaphorical expressions evolve over time for various reasons, such as cultural and societal change; they are known to co-evolve with language and literal word meanings, and even to drive this evolution to some extent. This raises the question of whether different, possibly time-specific, representations of literal meanings affect the metaphor detection task. To the best of our knowledge, this is the first study that examines metaphor detection with a detailed exploratory analysis in which different temporal and static word embeddings are used to account for different representations of literal meanings. Our experimental analysis is based on three popular benchmarks for metaphor detection and on word embeddings extracted from different corpora and temporally aligned with different state-of-the-art approaches. The results suggest that the choice of word embeddings does affect metaphor detection, and that some temporal word embeddings slightly outperform static methods on some performance measures. However, the results also suggest that temporal word embeddings may yield representations of a word's core meaning that are too close to its metaphorical meaning, thus confusing the classifier. Overall, the interaction between temporal language evolution and metaphor detection appears limited in the benchmark datasets used in our experiments. This suggests that future work on the computational analysis of this important linguistic phenomenon should start by creating a new dataset in which this interaction is better represented.
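To illustrate the general idea described in the abstract (contrasting a word's core-meaning embedding with its contextual representation inside a sequential neural classifier), the following is a minimal PyTorch sketch. It is not the authors' implementation: the class name `LiteralVsContextClassifier`, the dimensions, and the random placeholder tensors standing in for pretrained static (e.g., word2vec or temporally aligned) and contextual (e.g., BERT) embeddings are all hypothetical choices for illustration only.

```python
# Minimal sketch (assumed architecture, not the paper's model): a per-token
# metaphor classifier that contrasts a static "literal meaning" embedding
# with a contextual embedding of the same token.
import torch
import torch.nn as nn


class LiteralVsContextClassifier(nn.Module):
    def __init__(self, static_dim=300, ctx_dim=768, hidden=256):
        super().__init__()
        # Project the static (core-meaning) vectors into the contextual space
        # so the two signals can be compared directly.
        self.project = nn.Linear(static_dim, ctx_dim)
        # BiLSTM encodes the sequence of combined per-token features.
        self.encoder = nn.LSTM(2 * ctx_dim + 1, hidden,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 2)  # literal vs. metaphorical

    def forward(self, static_emb, ctx_emb):
        # static_emb: (batch, seq, static_dim)  core-meaning vectors
        # ctx_emb:    (batch, seq, ctx_dim)     contextual vectors
        lit = self.project(static_emb)
        # Cosine similarity between the two signals is an explicit feature:
        # low similarity hints that the contextual use drifts from the core meaning.
        sim = nn.functional.cosine_similarity(lit, ctx_emb, dim=-1)
        feats = torch.cat([lit, ctx_emb, sim.unsqueeze(-1)], dim=-1)
        out, _ = self.encoder(feats)
        return self.classifier(out)  # per-token logits


# Toy usage with random placeholder embeddings (real inputs would come from
# pretrained static/temporal and contextual models).
batch, seq = 2, 8
model = LiteralVsContextClassifier()
static_emb = torch.randn(batch, seq, 300)
ctx_emb = torch.randn(batch, seq, 768)
logits = model(static_emb, ctx_emb)  # shape: (2, 8, 2)
print(logits.shape)
```

Swapping the static embedding table for a temporally aligned one is, under this framing, the only change needed to test how time-specific representations of literal meaning affect the classifier.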
