Deep contextualized word representations for detecting sarcasm and irony

09/26/2018
by Suzana Ilic, et al.

Predicting context-dependent and non-literal utterances such as sarcastic and ironic expressions remains a challenging task in NLP, as it goes beyond linguistic patterns and requires common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that often serve as indicators of irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 datasets derived from 3 data sources, achieving state-of-the-art performance on 6 of them and competitive results on the remaining one.
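The pipeline the abstract describes — contextualized, character-aware word vectors feeding a sarcasm classifier — can be sketched minimally as follows. This is an illustrative assumption, not the authors' exact architecture: the random vectors stand in for real ELMo output (which in practice comes from a pre-trained character-level biLM, e.g. via AllenNLP), the 1024 dimensions match the standard pre-trained ELMo size, and the mean-pooling plus logistic head is a placeholder classifier.

```python
import numpy as np

ELMO_DIM = 1024  # dimensionality of standard pre-trained ELMo vectors


def fake_elmo_embed(tokens, rng):
    """Stand-in for a real ELMo encoder: returns one contextual vector
    per token. Real embeddings would come from a pre-trained
    character-level bidirectional language model."""
    return rng.standard_normal((len(tokens), ELMO_DIM))


def sentence_representation(token_vectors):
    """Mean-pool per-token vectors into one fixed-size sentence vector."""
    return token_vectors.mean(axis=0)


def sarcasm_score(sent_vec, w, b):
    """Toy logistic classifier head on top of the pooled vector."""
    return 1.0 / (1.0 + np.exp(-(sent_vec @ w + b)))


rng = np.random.default_rng(0)
tokens = "Oh great , another Monday".split()
vecs = fake_elmo_embed(tokens, rng)       # shape (5, 1024)
sent = sentence_representation(vecs)      # shape (1024,)
w = rng.standard_normal(ELMO_DIM) * 0.01  # untrained placeholder weights
score = sarcasm_score(sent, w, 0.0)       # probability in (0, 1)
```

The key property ELMo contributes is that the vector for a token depends on its full sentence context and on its characters, so the same surface word can signal irony in one context and not in another.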


