Representing Affect Information in Word Embeddings

09/21/2022
by Yuhan Zhang, et al.

A growing body of research in natural language processing (NLP) and natural language understanding (NLU) investigates the human-like knowledge learned or encoded in the word embeddings of large language models. This is a step towards understanding what knowledge language models capture that resembles human understanding of language and communication. Here, we investigated whether and how the affect meaning of a word (i.e., valence, arousal, dominance) is encoded in word embeddings pre-trained in large neural networks. We used a human-labeled dataset as the ground truth and performed various correlational and classification tests on four types of word embeddings. The embeddings varied in whether they were static or contextualized, and in how much affect-specific information was prioritized during the pre-training and fine-tuning phases. Our analyses show that word embeddings from the vanilla BERT model did not saliently encode the affect information of English words. Only when the BERT model was fine-tuned on emotion-related tasks, or contained extra contextualized information from emotion-rich contexts, could the corresponding embeddings encode more relevant affect information.
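One common way to run the kind of correlational test described above is a linear probe: fit a regression from word embeddings to a human-rated affect dimension such as valence, then correlate the probe's predictions with the held-out human ratings. The sketch below is a hypothetical, minimal version of that idea; synthetic vectors stand in for BERT embeddings and for the human valence norms, and the variable names are assumptions, not the paper's actual code.

```python
import numpy as np

# Hypothetical linear-probe sketch: can a word's valence rating be
# predicted linearly from its embedding? Synthetic data stands in for
# real embeddings and human valence/arousal/dominance ratings.
rng = np.random.default_rng(0)
n_words, dim = 500, 64

# Synthetic "embeddings" with a valence signal partly encoded in them.
X = rng.normal(size=(n_words, dim))
true_w = rng.normal(size=dim)
valence = X @ true_w + rng.normal(scale=2.0, size=n_words)

# Train/test split over the vocabulary.
split = 400
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = valence[:split], valence[split:]

# Ridge-regression probe, closed form: w = (X'X + aI)^-1 X'y.
alpha = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(dim), X_tr.T @ y_tr)
pred = X_te @ w

# Pearson correlation between predicted and "human" valence on held-out
# words: a high r suggests the embeddings encode the affect dimension.
r = np.corrcoef(pred, y_te)[0, 1]
print(f"probe correlation r = {r:.3f}")
```

In the paper's setting, a probe trained on vanilla BERT embeddings would be expected to yield a weaker correlation than one trained on embeddings fine-tuned on emotion-related tasks.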

Related research

08/31/2021
Sense representations for Portuguese: experiments with sense embeddings and deep neural language models
Sense representations have gone beyond word representations like Word2Ve...

10/25/2020
Contextualized Word Embeddings Encode Aspects of Human-Like Word Sense Knowledge
Understanding context-dependent variation in word meanings is a key aspe...

06/21/2022
Knowledge Graph Fusion for Language Model Fine-tuning
Language Models such as BERT have grown in popularity due to their abili...

05/31/2019
Emotional Embeddings: Refining Word Embeddings to Capture Emotional Content of Words
Word embeddings are one of the most useful tools in any modern natural l...

01/25/2023
Probing Taxonomic and Thematic Embeddings for Taxonomic Information
Modelling taxonomic and thematic relatedness is important for building A...

06/05/2019
Entity-Centric Contextual Affective Analysis
While contextualized word representations have improved state-of-the-art...

05/22/2023
LM-Switch: Lightweight Language Model Conditioning in Word Embedding Space
In recent years, large language models (LMs) have achieved remarkable pr...
