Humor in Word Embeddings: Cockamamie Gobbledegook for Nincompoops

02/08/2019
by Limor Gultchin, et al.

We study humor in word embeddings, a popular AI tool that associates each word with a Euclidean vector. We find that: (a) the word vectors capture multiple aspects of humor discussed in theories of humor; and (b) each individual's sense of humor can be represented by a vector, and these sense-of-humor vectors accurately predict differences in people's sense of humor on new, unrated words. The fact that single-word humor seems to be relatively easy for AI has implications for the study of humor in language. Humor ratings are taken from the work of Engelthaler and Hills (2017) as well as our own crowdsourcing study of 120,000 words.
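Claim (b), that a person's sense of humor can be represented as a vector that generalizes to words they never rated, can be illustrated with a small sketch. The example below is a hypothetical reconstruction rather than the authors' pipeline: the embeddings and ratings are synthetic placeholders, and a ridge regression stands in for whatever model the paper uses, with its coefficient vector playing the role of the sense-of-humor vector.

```python
# Hypothetical sketch: represent one rater's sense of humor as a vector in
# the same space as the word embeddings, then use it to predict funniness
# ratings for unseen words. Embeddings, ratings, and model choice are
# placeholders, not the paper's actual data or method.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data:
#   word_vecs: pretrained embeddings (e.g. word2vec/GloVe), one row per word
#   ratings:   one rater's funniness rating per word
n_words, dim = 1000, 100
word_vecs = rng.normal(size=(n_words, dim))
true_humor_vec = rng.normal(size=dim)              # unknown "taste" direction
ratings = word_vecs @ true_humor_vec + rng.normal(scale=0.5, size=n_words)

X_train, X_test, y_train, y_test = train_test_split(
    word_vecs, ratings, test_size=0.2, random_state=0
)

# Fit a linear map from word vectors to this rater's ratings; the learned
# coefficient vector acts as the rater's "sense-of-humor vector".
model = Ridge(alpha=1.0).fit(X_train, y_train)
humor_vec = model.coef_

# Predict funniness of words the rater never scored.
pred = model.predict(X_test)
corr = np.corrcoef(pred, y_test)[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

Under this framing, two raters' humor vectors can also be compared directly, for example via cosine similarity, to quantify how much their senses of humor overlap.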


Related research

research · 10/05/2021
Learning Sense-Specific Static Embeddings using Contextualised Word Embeddings as a Proxy
Contextualised word embeddings generated from Neural Language Models (NL...

research · 06/09/2019
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
Word embeddings typically represent different meanings of a word in a si...

research · 04/02/2019
Using Multi-Sense Vector Embeddings for Reverse Dictionaries
Popular word embedding methods such as word2vec and GloVe assign a singl...

research · 07/17/2019
Analysis of Word Embeddings using Fuzzy Clustering
In data dominated systems and applications, a concept of representing wo...

research · 05/29/2018
Quantum-inspired Complex Word Embedding
A challenging task for word embeddings is to capture the emergent meanin...

research · 11/18/2020
Topology of Word Embeddings: Singularities Reflect Polysemy
The manifold hypothesis suggests that word vectors live on a submanifold...

research · 10/11/2018
Towards Understanding Linear Word Analogies
A surprising property of word vectors is that vector algebra can often b...
