Automatic Identification of Sarcasm Target: An Introductory Approach

10/22/2016 · Aditya Joshi, et al. · IIT Bombay and Monash University

Past work in computational sarcasm deals primarily with sarcasm detection. In this paper, we introduce a novel, related problem: sarcasm target identification (i.e., extracting the target of ridicule in a sarcastic sentence). We present an introductory approach for sarcasm target identification. Our approach employs two types of extractors: one based on rules, and another consisting of a statistical classifier. To compare our approach, we use two baselines: a naïve baseline and another baseline based on work in sentiment target identification. We perform our experiments on book snippets and tweets, and show that our hybrid approach performs better than the two baselines and also better than either extractor used individually. Our introductory approach establishes the viability of sarcasm target identification, and will serve as a baseline for future work.


1 Introduction

Sarcasm is a form of verbal irony that is intended to express contempt or ridicule (source: The Free Dictionary). Past work in computational sarcasm deals primarily with sarcasm detection, i.e., predicting whether or not a given piece of text is sarcastic [Joshi et al.2016a]. Thus, the sentence ‘A woman needs a man like fish needs bicycle’ (a quote attributed to Irina Dunn, an Australian writer and social activist) would be predicted as sarcastic. While several approaches have been reported for sarcasm detection [Tsur et al.2010, Davidov et al.2010, González-Ibánez et al.2011, Joshi et al.2015], no past work, to the best of our knowledge, attempts to identify a crucial component of sarcasm: the target of ridicule [Campbell and Katz2012]. In the example above, this target of ridicule is the word ‘man’.

In this paper, we introduce a new avenue in computational sarcasm research. We explore a novel problem called ‘sarcasm target identification’: the task of extracting the target of ridicule (i.e., the sarcasm target) of a sarcastic text. This sarcasm target is either a subset of words in the sentence or a fallback label ‘Outside’ (this label is necessary because the sarcasm target may not be present as a word in the sentence; Section 2 discusses this in detail). We present an introductory approach that takes as input a sarcastic text and returns its sarcasm target. Our hybrid approach employs two extractors: a rule-based extractor (that implements a set of rules) and a statistical extractor (that uses a word-level classifier for every word in the sentence, to predict whether the word constitutes part of the sarcasm target). We evaluate our approach using two manually labeled datasets consisting of book snippets and tweets. We consider two versions of our approach: Hybrid OR (where the predictions of the two extractors are OR-ed) and Hybrid AND (where the predictions of the two extractors are AND-ed). Since this is the first work in sarcasm target identification, no past work exists to be used as a baseline. Hence, we devise two baselines to validate the strength of our work. The first is a simple, intuitive baseline used to show whether our approach (which is computationally more intensive than the simple baseline) holds any value; in the absence of past work, simple and obvious techniques have been used as baselines in sentiment analysis [Tan et al.2011, Pang and Lee2005]. As the second baseline, we use a technique reported for sentiment/opinion target identification. For both our datasets, we observe that the hybrid approach outperforms both baselines. In addition, the Hybrid OR approach also works better than using either the rule-based or the statistical extractor individually.

Sarcasm target identification will be useful for aspect-based sentiment analysis so that the negative sentiment expressed in the sarcastic text can be attributed to the correct aspect. To the best of our knowledge, this is the first work that attempts identification of sarcasm targets. Our results will serve as a baseline for future work. Our manually labeled datasets are available for download at: Anonymous. Each unit consists of a piece of text (either book snippet or tweet) with the annotation as the sarcasm target.

The rest of the paper is organized as follows. Section 2 formulates the problem, while Section 3 describes our architecture in detail. Experiment setup is in Section 4. The results are presented in Section 5 while an error analysis is in Section 6. We present related work in Section 7 and conclude the paper in Section 8.

Figure 1: Architecture of our Sarcasm Target Identification Approach

2 Formulation

Sarcasm is a well-known challenge to sentiment analysis [Pang and Lee2008]. Consider the sarcastic sentence ‘My cell phone has an awesome battery that lasts 20 minutes’. This sentence ridicules the battery of the cell phone. Aspect-based sentiment analysis needs to identify that the sentence ridicules the battery of the phone and hence expresses negative sentiment towards the aspect ‘battery’. Our proposed problem, ‘sarcasm target identification’, thus enables aspect-based sentiment analysis to attribute the negative sentiment to the correct target aspect. We define the sarcasm target as the entity or situation being ridiculed in a sarcastic text. In case of ‘Can’t wait to go to class today’, the word ‘class’ is the sarcasm target. Every sarcastic text has at least one sarcasm target (by the definition of sarcasm), and the notion of a sarcasm target applies only to sarcastic text (i.e., non-sarcastic text does not have a sarcasm target). Thus, we define sarcasm target identification as the task of extracting the subset of words that indicate the target of ridicule, given a sarcastic text. In case the target of ridicule is not present among these words, the fallback label ‘Outside’ is expected. Examples of some sarcasm targets are given in Table 1.

Some challenges of sarcasm target identification are:

  • Presence of multiple candidate phrases: Consider the sentence ‘This phone heats up so much that I strongly recommend chefs around the world to use it as a cook-top’. In this sentence, the words ‘chefs’, ‘cook-top’ and ‘phone’ are candidate phrases. However, only the ‘phone’ is being ridiculed in this sentence.

  • Multiple sarcasm targets: A sentence like ‘You are as good at coding as he is at cooking’ ridicules both ‘you’ and ‘he’, and hence, both are sarcasm targets.

  • Absence of a sarcasm target word (the ‘Outside’ case): Consider the situation where a student is caught copying in a test, and the teacher says, ‘Your parents must be so proud today!’. No specific word in the sentence is the sarcasm target. The target here is the student. We refer to such cases as the ‘outside’ cases.

Example | Target
Love when you don’t have two minutes to send me a quick text. | you
Don’t you just love it when Microsoft tells you that you’re spelling your own name wrong. | Microsoft
I love being ignored. | being ignored
He is as good at coding as Tiger Woods is at avoiding controversy. | He, Tiger Woods
Yeah, right! I hate catching the bus on time anyway! | Outside
Table 1: Examples of sarcasm targets

3 Architecture

Our hybrid approach for sarcasm target identification is depicted in Figure 1. The input is a sarcastic sentence, while the output is the sarcasm target. The approach consists of two kinds of extractors: (a) a rule-based extractor that implements nine rules to identify different kinds of sarcasm targets, and (b) a statistical extractor that uses statistical classification techniques. The two extractors individually generate lists of candidate sarcasm targets. The third component is the integrator, which makes an overall prediction of the sarcasm target by choosing among the sarcasm targets returned by the individual extractors. The overall output is a subset of words in the sentence. In case no word is found to be a sarcasm target, a fallback label ‘Outside’ is returned. In the forthcoming subsections, we describe the three modules in detail.

3.1 Rule-based Extractor

Our rule-based extractor consists of nine rules that take as input the sarcastic sentence, and return a set of candidate sarcasm targets. The rules are summarized in Table 2. We now describe each rule, citing past work that motivated the rule wherever applicable (a minimal sketch of one such rule follows Table 2):

  1. R1 (Pronouns and Pronominal Adjectives): R1 returns pronouns such as ‘you, she, they’ and pronominal (possessive) adjectives together with their object (as in ‘your shoes’). Thus, for the sentence ‘I am so in love with my job’, the phrases ‘I’ (pronoun) and ‘my job’ (based on the pronominal adjective ‘my’) are returned as candidate sarcasm targets. This is based on observations by [Shamay-Tsoory et al.2005].

  2. R2 (Named Entities): Named entities in a sentence may be sarcasm targets. This rule returns all named entities in the sentence. In case of ‘Olly Riley is so original with his tweets’, R2 predicts the phrase ‘Olly Riley’ as a candidate sarcasm target.

  3. R3 (Sentiment-bearing verb as the pivot): This rule is based on the idea by [Riloff et al.2013] that sarcasm may be expressed as a contrast between a positive sentiment verb and a negative situation. In case of ‘I love being ignored’, the sentiment-bearing verb ‘love’ is positive. The object of ‘love’ is ‘being ignored’. Therefore, R3 returns ‘being ignored’ as the candidate sarcasm target. If the sentiment-bearing verb is negative, the rule returns ‘Outside’ as a candidate sarcasm target. This is applicable in case of situations like humble bragging (http://www.urbandictionary.com/define.php?term=humblebrag), as in ‘I hate being so popular’, where the speaker is either ridiculing the listener or just bragging about themselves.

  4. R4 (Non-sentiment-bearing verb as the pivot): This rule applies in case of sentences where the verb does not bear sentiment. The rule identifies which of the subject or object has a lower sentiment score, and returns the corresponding portion as the candidate sarcasm target. For example, rule R4 returns ‘to have a test on my birthday’ as the candidate sarcasm target in case of ‘Excited that the teacher has decided to have a test on my birthday!’, where ‘decided’ is the non-sentiment-bearing verb. This is also based on [Riloff et al.2013].

  5. R5 (Gerundial verb phrases and Infinitives): For ‘Being covered in rashes is fun.’, R5 returns the gerundial phrase ‘being covered in rashes’ as the candidate sarcasm target. Similarly, in case of ‘Can’t wait to wake up early to babysit!’, the infinitive ‘to wake up early to babysit’ is returned.

  6. R6 (Noun phrases containing positive adjective): R6 extracts noun phrases of the form ‘JJ NN’ where JJ is a positive adjective, and returns the noun indicated by NN. Specifically, 1-3 words preceding the nouns in the sentence are checked for positive sentiment. In case of ‘Look at the most realistic walls in a video game’, the noun ‘walls’ is returned as the sarcasm target.

  7. R7 (Interrogative sentences): R7 returns the subject of an interrogative sentence as the sarcasm target. Thus, for ‘A murderer is stalking me. Could life be more fun?’, the rule returns ‘life’ as the target.

  8. R8 (Sarcasm in Similes): This rule captures the subjects/noun phrases involved in similes and ‘as if’ comparisons, returning the subject on both sides. For ‘He is as good at coding as Tiger Woods is at avoiding controversy.’, both ‘He’ and ‘Tiger Woods’ are returned as targets. This is derived from work on sarcastic similes by [Veale and Hao2010].

  9. R9 (Demonstrative adjectives): This rule captures nouns associated with demonstrative adjectives (this/that/these/those). For example, for the sentence ‘Oh, I love this jacket!’, R9 returns ‘this jacket’ as the sarcasm target.

Rule | Definition | Example
R1 | Return pronouns (including possessive) and pronoun-based adjectives | Love when you don’t have two minutes to send me a quick text ..; I am so in love with my job.
R2 | Return named entities as target | Don’t you just love it when Microsoft tells you that you’re spelling your own name wrong.
R3 | Return direct object of a positive sentiment verb | I love being ignored.
R4 | Return phrase on lower sentiment side of primary verb | So happy to just find out it has been decided to reschedule all my lectures and tutorials for me to night classes at the exact same times!
R5 | Return gerund and infinitive verb phrases | Being covered in hives is so much fun!; Can’t wait to wake up early to babysit
R6 | Return nouns preceded by a positive sentiment adjective | Yep, this is indeed an amazing donut ..
R7 | Return subject of interrogative sentences | A murderer is stalking me. Could life be more fun?
R8 | Return subjects of comparisons (similes) | He is as good at coding as Tiger Woods is at avoiding controversy.
R9 | Return demonstrative adjective-noun pairs | Oh, I love this jacket!
Table 2: Summary of rules in the rule-based extractor
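To make the rules concrete, the following is a minimal sketch of how a rule such as R1 (pronouns and pronominal adjectives) might be implemented with NLTK's POS tagger. It illustrates the general pattern (tag the sentence, match POS patterns, emit candidate phrases) rather than the authors' exact implementation; the Penn Treebank tag choices and the grouping of possessive adjectives with their head nouns are our assumptions.

```python
import nltk  # may require nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

def rule_r1_candidates(sentence):
    """Sketch of rule R1: return pronouns and possessive-adjective phrases
    (e.g. 'my job') as candidate sarcasm targets."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)          # Penn Treebank tags
    candidates = []
    i = 0
    while i < len(tagged):
        word, tag = tagged[i]
        if tag == 'PRP':                   # personal pronoun: I, you, she, they
            candidates.append(word)
        elif tag == 'PRP$':                # possessive adjective: my, your, his
            # attach the following noun(s), as in 'my job'
            phrase = [word]
            j = i + 1
            while j < len(tagged) and tagged[j][1].startswith('NN'):
                phrase.append(tagged[j][0])
                j += 1
            candidates.append(' '.join(phrase))
            i = j - 1
        i += 1
    return candidates

print(rule_r1_candidates("I am so in love with my job"))
# expected (illustrative): ['I', 'my job']
```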

Combining the outputs of individual rules to generate candidate sarcasm targets of the rule-based extractor: To generate the set of candidate sarcasm targets returned by the rule-based extractor, a weighted majority approach is used as follows. Every rule above is applied to the input sarcastic sentence. Then, every word is assigned a score that sums the accuracies of the rules which predicted that this word is a part of the sarcasm target. This accuracy is the overall accuracy of the rule when the rule-based extractor is used alone (these values are shown in Tables 4 and 5). Thus, the integrator weights each word on the basis of how accurate the rules predicting it as a target are. Words corresponding to the maximum value of this score are returned as candidate sarcasm targets.
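The weighted voting described above can be sketched as follows: each word accumulates the accuracy of every rule that selected it, and the words with the maximum score are returned. The accuracy values in the usage example are placeholders, not the figures from Tables 4 and 5.

```python
from collections import defaultdict

def combine_rule_outputs(sentence_words, rule_outputs, rule_accuracies):
    """Weighted-majority combination of rule outputs (a sketch).

    rule_outputs:    dict mapping rule name -> set of words it predicted
    rule_accuracies: dict mapping rule name -> accuracy of that rule alone
    Returns the words with the highest accumulated score, or ['Outside'].
    """
    scores = defaultdict(float)
    for rule, predicted_words in rule_outputs.items():
        for word in predicted_words:
            if word in sentence_words:
                scores[word] += rule_accuracies[rule]
    if not scores:
        return ['Outside']
    best = max(scores.values())
    return [w for w in sentence_words if scores.get(w) == best]

# Illustrative usage with made-up accuracies:
words = ['I', 'love', 'being', 'ignored']
outputs = {'R1': {'I'}, 'R3': {'being', 'ignored'}, 'R5': {'being', 'ignored'}}
accuracies = {'R1': 0.03, 'R3': 0.05, 'R5': 0.01}
print(combine_rule_outputs(words, outputs, accuracies))
# ['being', 'ignored'] — their accumulated score (0.06) exceeds that of 'I' (0.03)
```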

3.2 Statistical Extractor

The statistical extractor uses a classifier that takes as input a word (along with its features) and returns whether the word is a sarcasm target. To do this, we decompose the task into n classification tasks, where n is the total number of words in the sentence. This means that every word in the input text is considered as an instance, such that the label can be 1 or 0 depending on whether or not the given word is a part of the sarcasm target. For example, ‘Tooth-ache is fun’ with sarcasm target ‘tooth-ache’ is broken down into three instances: ‘tooth-ache’ with label 1, ‘is’ with label 0 and ‘fun’ with label 0. In case the target lies outside the sentence, all words have the label 0.

We then represent each instance (i.e., the word) using the following features: (A) Lexical: unigrams; (B) Part-of-speech (POS)-based features: current POS, previous POS, next POS; (C) Polarity-based features: word polarity (the sentiment score of the word) and phrase polarity (the sentiment score of the trigram formed by the previous word, the current word and the next word, in that order), where polarities lie in the range [-1, +1]; these features are based on our analysis that the target phrase or word tends to be more neutral than the rest of the sentence; and (D) Pragmatic features: capitalization (the number of capital letters in the word), chosen based on features reported in past work.

The classifiers are trained with words as instances while the sarcasm target is to be computed at the sentence level. Hence, the candidate sarcasm target returned by the statistical extractor consists of words for which the classifier returned 1. For example, the sentence ‘This is fun’ is broken up into three instances: ‘this’, ‘is’ and ‘fun’. If the classifier returns 1, 0, 0 for the three instances respectively, the statistical extractor returns ‘this’ as the candidate sarcasm target. Similarly, if the classifier returns 0, 0, 0 for the three instances, the extractor returns the fallback label ‘Outside’.
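The word-level decomposition and feature extraction described above can be sketched as below. The sentiment scorer is a stand-in lexicon, and the exact feature encoding used in the paper may differ; the POS features from the previous paragraph are omitted for brevity.

```python
def sentence_to_instances(words, target_words, polarity):
    """Turn one sarcastic sentence into word-level training instances (a sketch).

    words:        list of tokens in the sentence
    target_words: set of tokens annotated as the sarcasm target (empty if 'Outside')
    polarity:     function mapping a word to a score in [-1, +1] (stand-in lexicon)
    """
    instances = []
    for i, word in enumerate(words):
        prev_w = words[i - 1] if i > 0 else ''
        next_w = words[i + 1] if i < len(words) - 1 else ''
        features = {
            'unigram': word.lower(),                             # lexical
            'word_polarity': polarity(word),                     # polarity-based
            'phrase_polarity': sum(polarity(w) for w in (prev_w, word, next_w)) / 3.0,
            'capital_letters': sum(c.isupper() for c in word),   # pragmatic
            # POS of previous / current / next word would be added here as well
        }
        label = 1 if word in target_words else 0
        instances.append((features, label))
    return instances

# Illustrative usage on the example from the text:
toy_polarity = lambda w: {'fun': 0.8, 'love': 0.9}.get(w.lower(), 0.0)
print(sentence_to_instances(['Tooth-ache', 'is', 'fun'], {'Tooth-ache'}, toy_polarity))
```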

3.3 Integrator

The integrator determines the sarcasm target based on the outputs of the two extractors. We consider two configurations of the integrator:

  1. Hybrid OR: In this configuration, the integrator predicts the set of words that occur in the output of either of the two extractors as the sarcasm target. If both lists are empty, the output is ‘Outside’.

  2. Hybrid AND: In this configuration, the integrator predicts the set of words that occur in the outputs of both extractors as the sarcasm target. If the intersection of the lists is empty, the output is ‘Outside’.

The idea of using the two configurations OR and AND is based on a rule-based sarcasm detector by [Khattri et al.2015]. While AND is intuitive, the OR configuration is necessary because the extractors individually may not capture all forms of sarcasm targets; in particular, our rules may not cover every form of sarcasm target.
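A minimal sketch of the two integrator configurations, treating each extractor's output as a set of words, is given below; the sorted ordering of the returned words is our simplification.

```python
def integrate(rule_candidates, statistical_candidates, mode='OR'):
    """Combine candidate sarcasm targets from the two extractors (a sketch).

    Both inputs are sets of words; returns the predicted target words,
    or ['Outside'] when the combination is empty.
    """
    if mode == 'OR':
        combined = rule_candidates | statistical_candidates   # union
    else:  # 'AND'
        combined = rule_candidates & statistical_candidates   # intersection
    return sorted(combined) if combined else ['Outside']

print(integrate({'this', 'jacket'}, {'jacket'}, mode='OR'))   # ['jacket', 'this']
print(integrate({'this', 'jacket'}, {'jacket'}, mode='AND'))  # ['jacket']
print(integrate({'me'}, set(), mode='AND'))                   # ['Outside']
```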

Statistic | Snippets | Tweets
Count | 224 | 506
Average #words | 28.47 | 13.06
Vocabulary | 1710 | 1458
Total words | 6377 | 6610
Average length of sarcasm target | 1.6 | 2.08
Average polarity strength of sarcasm target | 0.0087 | 0.035
Average polarity strength of portion apart from sarcasm target | 0.027 | 0.53
Table 3: Statistics of our datasets; ‘Snippets’: Book Snippets

4 Experiment setup

We evaluate our approach using two datasets: one consisting of book snippets and another of tweets. The dataset of book snippets is a sarcasm-labeled dataset by [Joshi et al.2016b]; 224 book snippets marked as sarcastic are used. For our dataset of tweets, we use the sarcasm-labeled dataset by [Riloff et al.2013]; 506 sarcastic tweets from this dataset are used. The statistics of the two datasets are shown in Table 3. The average length of a sarcasm target is 1.6 words in case of book snippets and 2.08 words in case of tweets. The last two rows in the table point to an interesting observation. In both datasets, the average polarity strength (the sum of polarities of words, obtained from a sentiment word-list) of the sarcasm target is lower than the polarity strength of the rest of the sentence. This shows that the sarcasm target is likely to be more neutral than sentiment-bearing. Note that all textual units (tweets as well as book snippets) in both datasets are sarcastic.
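The polarity strength statistic in Table 3 can be sketched as below; the lexicon is a stand-in for whatever sentiment word-list the statistics were computed with, and taking absolute values for the comparison is our assumption.

```python
def polarity_strength(words, lexicon):
    """Sum of word polarities from a sentiment word-list (a sketch of Table 3's statistic)."""
    return sum(lexicon.get(w.lower(), 0.0) for w in words)

# Illustrative comparison for 'I love being ignored' with target 'being ignored':
lexicon = {'love': 0.9, 'ignored': -0.4}
sentence = ['I', 'love', 'being', 'ignored']
target = ['being', 'ignored']
rest = [w for w in sentence if w not in target]
print(abs(polarity_strength(target, lexicon)), abs(polarity_strength(rest, lexicon)))
# target (0.4) vs. rest (0.9): the target is closer to neutral, as the table suggests
```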

We use SVM Perf [Joachims2006] to train the classifiers, optimized for F-score, with epsilon e=0.5 and an RBF kernel (the RBF kernel performed better than a linear kernel). We set C=1000 for tweets and C=1500 for snippets. We report our results on four-fold cross-validation for both datasets. Note that we convert individual sentences into words. Therefore, the dataset in case of book snippets has 6377 instances, while that of tweets has 6610 instances. The four folds for cross-validation are created over these instances. With a word as an instance, the task is binary classification: 1 indicating that the word is a sarcasm target and 0 indicating that it is not. For rules in the rule-based extractor, we use tools in NLTK [Bird2006], wherever necessary.
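The word-level classification setup can be sketched with scikit-learn as a stand-in for SVM Perf (which is a separate command-line tool and, unlike a plain SVC, directly optimizes F-score). The RBF kernel and C values follow the description above; the vectorizer choice and fold splitting are illustrative.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate_word_level(instances, C):
    """4-fold cross-validation over word-level instances (a sketch).

    instances: list of (feature_dict, label) pairs, e.g. from sentence_to_instances().
    """
    features, labels = zip(*instances)
    X = DictVectorizer(sparse=True).fit_transform(features)   # one-hot strings, pass-through numbers
    clf = SVC(kernel='rbf', C=C)                               # stand-in for SVM Perf's RBF SVM
    return cross_val_score(clf, X, list(labels), cv=4, scoring='f1')

# e.g. evaluate_word_level(all_tweet_instances, C=1000)
```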

We consider two baselines with which our hybrid approach is compared:

  1. Baseline 1: All Objective Words: As the first baseline, we design a naïve approach for our task: include all words of the sentence which are not stop words and have neutral sentiment polarity as the predicted sarcasm target (a minimal sketch of this baseline appears after this list). We reiterate that, in the absence of past work, simplistic baselines have commonly been reported in sentiment analysis. However, to validate that our hybrid approach is valuable, we also compare it against other possible versions of the system.

  2. Baseline 2: Baseline 2 is derived from past work in opinion target identification, because sarcasm target identification may be considered a form of opinion target identification. Sequence labeling has been reported for opinion target identification [Jin et al.2009]. Therefore, we use SVM-HMM [Altun et al.2003] with default parameters as the second baseline.
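A minimal sketch of Baseline 1, assuming NLTK's English stopword list and a generic sentiment lexicon as the neutrality test (the actual stopword list and lexicon used are not specified here):

```python
from nltk.corpus import stopwords  # may require nltk.download('stopwords')

def baseline_all_objective_words(words, lexicon):
    """Baseline 1 (sketch): predict all non-stopword, sentiment-neutral words."""
    stop = set(stopwords.words('english'))
    predicted = [w for w in words
                 if w.lower() not in stop and lexicon.get(w.lower(), 0.0) == 0.0]
    return predicted if predicted else ['Outside']

print(baseline_all_objective_words(['Tooth-ache', 'is', 'fun'], {'fun': 0.8}))
# ['Tooth-ache'] — 'is' is a stopword and 'fun' bears sentiment
```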

We report performance using two metrics: Exact Match Accuracy and Dice Score. These metrics have been used in past work in information extraction [Michelson and Knoblock2007]. As per their conventional use, these metrics are computed at the sentence level. The metrics that we use are described below (a minimal sketch of both follows the list):

  • Exact Match (EM) Accuracy : An exact match occurs if the list of predicted target(s) is exactly the same as the list of actual target(s). The accuracy is computed as number of instances with exact match divided by total instances.

  • Dice Score: The Dice score [Sørensen1948] is used to compare the similarity between two samples. We consider it a better metric than exact match accuracy because it accounts for missing words and extra words in the predicted target.
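The two metrics can be sketched as follows, treating the predicted and actual targets as sets of words; the Dice formula 2|A∩B| / (|A| + |B|) is the standard set version, which is an assumption about the exact variant used, and scoring the ‘Outside’ case as a perfect match follows the note after Tables 4 and 5.

```python
def exact_match(predicted, actual):
    """1 if the predicted target word set equals the actual one, else 0 (sketch)."""
    return int(set(predicted) == set(actual))

def dice_score(predicted, actual):
    """Dice coefficient between predicted and actual target word sets (sketch)."""
    p, a = set(predicted), set(actual)
    if not p and not a:          # both 'Outside' / empty: treat as a perfect match
        return 1.0
    return 2 * len(p & a) / (len(p) + len(a))

print(exact_match(['He'], ['He', 'Tiger Woods']))   # 0
print(dice_score(['He'], ['He', 'Tiger Woods']))    # 0.666...
```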

Rule | Overall EM | Overall DS | Conditional EM | Conditional DS
R1 | 7.14 | 32.8 | 7.65 | 35.23
R2 | 8.48 | 16.7 | 19.19 | 37.81
R3 | 4.91 | 6.27 | 16.92 | 21.62
R4 | 2.67 | 11.89 | 4.38 | 19.45
R5 | 1.34 | 6.39 | 2.32 | 11.11
R6 | 4.01 | 6.77 | 8.91 | 15.02
R7 | 3.12 | 10.76 | 9.46 | 32.6
R8 | 4.91 | 6.78 | 35.02 | 45.17
R9 | 4.46 | 6.94 | 34.48 | 53.67
Table 4: Results for individual rules for book snippets
Rule | Overall EM | Overall DS | Conditional EM | Conditional DS
R1 | 6.32 | 19.19 | 8.69 | 26.39
R2 | 11.26 | 16.18 | 30.32 | 43.56
R3 | 12.45 | 20.28 | 34.24 | 55.77
R4 | 6.91 | 13.51 | 18.42 | 36.0
R5 | 9.28 | 23.87 | 15.36 | 39.47
R6 | 10.08 | 16.91 | 19.31 | 32.42
R7 | 9.88 | 15.21 | 32.25 | 49.65
R8 | 11.26 | 11.26 | 50 | 50
R9 | 11.46 | 13.28 | 43.59 | 50.51
Table 5: Results for individual rules for tweets

Note: If the actual target is the fallback label ‘Outside’, then the expected predicted target is either ‘Outside’ or empty prediction list. In such a case, the instance will contribute to exact match accuracy.

5 Results

This section presents our results in two steps: performance of individual rules that are a part of the rule-based extractor, and performance of the overall approach.

5.1 Performance of rules in the rule-based extractor

Tables 4 and 5 present the performance of the rules in our rule-based extractor, for snippets and tweets respectively. The two metrics (exact match accuracy and Dice score) are reported for two cases: Overall and Conditional. ‘Overall’ spans all text units in the dataset, whereas ‘Conditional’ is limited to text units which match a given rule (i.e., where the linguistic phenomenon in question, say gerunds, is observed). Considering the ‘Conditional’ case is crucial because a rule may be applicable only for a specific form of sarcasm target, yet work accurately in those cases. Such a rule will have a low ‘overall’ exact match/Dice score but a high ‘conditional’ exact match/Dice score. As seen in the tables, the values for ‘Conditional’ are higher than those for ‘Overall’. For example, rule R7 in Table 4 has an exact match of 3.12 overall as against 9.46 conditional. This situation is typical of rule-based systems, where rules may not cover all cases but are accurate for the cases that they do cover.

For tweets, R3 has a very high conditional Dice score (55.77). This rule validates the benefit of utilizing the structure of sarcastic tweets as explored by [Riloff et al.2013]: the ‘contrast of positive sentiment with negative situation’ is a strong indicator of the sarcasm target.

5.2 Overall Performance

We now compare the performance of our approach with the baselines (as described in Section 4). In order to understand the benefit of the individual extractors, we also show their performance when they are used individually. Thus, we compare five approaches: (A) Baseline, (B) Rule-based (when only the rule-based extractor is used), (C) Statistical (when only the statistical extractor is used), and (D) & (E) Hybrid (two configurations: OR and AND). It must be noted that since no existing sarcasm target identification approach exists, we rely on a simple baseline, and verify whether our approach does any better than a simpler, obvious baseline. Such baselines have been used in early work in sentiment analysis: for example, [Pang and Lee2005] compare against a ‘random-choice’ baseline, and [Tan et al.2011] use a simple majority-voting baseline, in the absence of past work. We also use a second baseline from a related area: sentiment/opinion target identification.

Tables 6 and 7 compare the five approaches for snippets and tweets respectively. All our approaches outperform the baseline in case of exact match and Dice score. In case of tweets, Table 7 shows that the rule-based extractor achieves a Dice score of 29.13, while that for the statistical extractor is 31.8. Combining the two (in our hybrid architecture) improves the Dice score to 39.63. This improvement also holds for book snippets. This justifies the ‘hybrid’ nature of our approach. Hybrid OR performs the best in terms of Dice score. However, for exact match accuracy, Hybrid AND achieves the best performance (16.51 for snippets and 13.45 for tweets). This is likely because Hybrid AND is restrictive with respect to the predictions it makes for individual words. The statistical extractor performs better than the rule-based extractor on both metrics. For example, in case of tweets, the Dice score for the statistical extractor is 31.8 while that for the rule-based extractor is 29.13. Also, nearly all results (across approaches and metrics) are higher in case of tweets as compared to snippets. Since tweets are shorter than snippets (as shown in Table 3), it is likely that they are more direct in their ridicule as compared to snippets.

Approach | EM | DS
Baseline 1: All Objective Words | 0.0 | 16.14
Baseline 2: Seq. Labeling | 12.05 | 31.44
Only Rule-Based | 9.82 | 26.02
Only Statistical | 12.05 | 31.2
Hybrid OR | 7.01 | 32.68
Hybrid AND | 16.51 | 21.28
Table 6: Performance of sarcasm target identification for snippets
Approach | EM | DS
Baseline 1: All Objective Words | 1.38 | 27.16
Baseline 2: Seq. Labeling | 12.26 | 33.41
Only Rule-Based | 9.48 | 29.13
Only Statistical | 10.48 | 31.8
Hybrid OR | 9.09 | 39.63
Hybrid AND | 13.45 | 20.82
Table 7: Performance of sarcasm target identification for tweets
Case | Book Snippets EM | Book Snippets DS | Tweets EM | Tweets DS
Overall | 7.01 | 32.68 | 9.09 | 39.63
‘Outside’ cases | 6.81 | 6.81 | 4.71 | 4.71
Table 8: Comparison of performance of our approach in case of examples with target outside the text (indicated by ‘Outside’ cases), with complete dataset (indicated by ‘Overall’); EM: Exact Match, DS: Dice Score

6 Error Analysis

A key source of error is cases where the target lies outside the text. In this section, we describe such examples and compare the impact of these errors with the overall performance.

In our dataset of book snippets, there are 11 texts (around 5%) with the sarcasm target outside the text. In case of tweets, such cases are more frequent: 53 tweets (around 10%). Table 8 compares the results of our Hybrid OR approach for the specific case of the target being ‘outside’ the text (indicated by ‘Outside’ cases in the table) with the results on the complete dataset (indicated by ‘Overall’ in the table). The Dice score (DS) for book snippets is 6.81 for ‘Outside’ cases as compared to 32.68 for the complete dataset. In general, the performance for the ‘Outside’ cases is lower than the overall performance. This demonstrates the difficulty that the ‘Outside’ cases present. The EM and DS values for ‘Outside’ cases are the same by definition, because when the target is ‘Outside’, a partial match and an exact match are the same. Our approach correctly predicts the label ‘Outside’ for sentences like ‘Yeah, just ignore me. That is TOTALLY the right way to handle this!’ However, our approach gives the incorrect output for some examples. For example, for ‘Oh, and I suppose the apples ate the cheese’, the predicted target is not ‘Outside’ (the expected label) but ‘I’. Similarly, for ‘Please keep ignoring me for all of senior year. It’s not like we’re friends with the exact same people’, the incorrectly predicted target is ‘me’ instead of the expected label ‘Outside’.

7 Related Work

Computational sarcasm primarily focuses on sarcasm detection: classification of a text as sarcastic or non-sarcastic. [Joshi et al.2016a] present a survey of sarcasm detection approaches. They observe three trends in sarcasm detection: semi-supervised extraction of sarcastic patterns, use of hashtag-based supervision, and use of contextual information for sarcasm detection [Tsur et al.2010, Davidov et al.2010, Joshi et al.2015]. However, to the best of our knowledge, no past work aims to identify phrases in a sarcastic sentence that indicate the target of ridicule in the sentence.

Related to sarcasm target identification is sentiment target identification, which deals with identifying the entity towards which sentiment is expressed in a sentence. [Qiu et al.2011] present an approach to extract opinion words and targets collectively from a dataset. Aspect identification for sentiment has also been studied; it deals with extracting aspects of an entity (for example, color, weight and battery in case of a cell phone). Probabilistic topic models have commonly been used for this purpose. [Titov and McDonald2008] present a probabilistic topic model that jointly estimates sentiment and aspect in order to achieve sentiment summarization. [Lu et al.2011] perform multi-aspect sentiment analysis using a topic model. Several other topic model-based approaches to aspect extraction have been reported [Mukherjee and Liu2012]. To the best of our knowledge, ours is the first work that deals with sarcasm target identification.

8 Conclusion & Future Work

In this paper, we introduced a novel problem: sarcasm target identification. This problem aims to identify the target of ridicule in a sarcastic text. This target may be a subset of words in the text or a fallback label ‘Outside’. The task poses challenges such as multiple sarcasm targets or sarcasm targets that may not even be present as words in the sentence. We presented an introductory approach for sarcasm target identification that consists of two kinds of extractors: a rule-based and a statistical extractor. Our rule-based extractor implements nine rules that capture forms of sarcasm target. The statistical extractor splits a sentence of length n into n instances, where each instance is represented by a word and a label that indicates whether this word is a sarcasm target. A statistical classifier that uses features based on POS and sentiment predicts whether a given word is likely to be a target. Finally, an integrator combines the outputs of the two extractors in two configurations: OR and AND. We evaluated our approach on two datasets: one consisting of snippets from books, and another of tweets. In general, our Hybrid OR system performs the best, with a Dice score of 39.63. This is higher than two baselines: a naïve baseline designed for the task, and a baseline based on sentiment target identification. Our hybrid approach also performs better than the two extractors used individually. This shows that the two extractors collectively form a good sarcasm target identification approach. Finally, we analysed errors due to the target being outside the text; in such cases, our approach performs close to the overall system in terms of exact match, but there is a severe degradation in Dice score.

Our work forms a foundation for future approaches to identify sarcasm targets. As future work, additional rules in the rule-based extractor and novel sets of features in the statistical extractor may be explored. The use of syntactic dependencies has been found useful for opinion target extraction [Qiu et al.2011]; applying such techniques to sarcasm target identification can be useful. A special focus on the ‘Outside’ cases (i.e., cases where the target of ridicule in a sarcastic text is beyond the words present in the sentence) is likely to be helpful for sarcasm target identification.

References

  • [Altun et al.2003] Yasemin Altun, Ioannis Tsochantaridis, Thomas Hofmann, et al. 2003. Hidden Markov support vector machines. In ICML, volume 3, pages 3–10.
  • [Bird2006] Steven Bird. 2006. Nltk: the natural language toolkit. In Proceedings of the COLING/ACL on Interactive presentation sessions, pages 69–72. Association for Computational Linguistics.
  • [Campbell and Katz2012] John D Campbell and Albert N Katz. 2012. Are there necessary conditions for inducing a sense of sarcastic irony? Discourse Processes, 49(6):459–480.
  • [Davidov et al.2010] Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 107–116. Association for Computational Linguistics.
  • [González-Ibánez et al.2011] Roberto González-Ibánez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 581–586. Association for Computational Linguistics.
  • [Jin et al.2009] Wei Jin, Hung Hay Ho, and Rohini K Srihari. 2009. OpinionMiner: a novel machine learning system for web opinion mining and extraction. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1195–1204. ACM.
  • [Joachims2006] Thorsten Joachims. 2006. Training linear svms in linear time. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 217–226. ACM.
  • [Joshi et al.2015] Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 757–762.
  • [Joshi et al.2016a] Aditya Joshi, Pushpak Bhattacharyya, and Mark James Carman. 2016a. Automatic sarcasm detection: A survey. arXiv preprint arXiv:1602.03426.
  • [Joshi et al.2016b] Aditya Joshi, Vaibhav Tripathi, Kevin Patel, Pushpak Bhattacharyya, and Mark Carman. 2016b. Are word embedding-based features useful for sarcasm detection? In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) 2016.
  • [Khattri et al.2015] Anupam Khattri, Aditya Joshi, Pushpak Bhattacharyya, and Mark James Carman. 2015. Your sentiment precedes you: Using an author’s historical tweets to predict sarcasm. In 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA 2015), page 25.
  • [Lu et al.2011] Bin Lu, Myle Ott, Claire Cardie, and Benjamin K Tsou. 2011. Multi-aspect sentiment analysis with topic models. In 2011 IEEE 11th International Conference on Data Mining Workshops, pages 81–88. IEEE.
  • [Michelson and Knoblock2007] Matthew Michelson and Craig A Knoblock. 2007. Unsupervised information extraction from unstructured, ungrammatical data sources on the world wide web. International Journal of Document Analysis and Recognition (IJDAR), 10(3-4):211–226.
  • [Mukherjee and Liu2012] Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 339–348. Association for Computational Linguistics.
  • [Pang and Lee2005] Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 115–124. Association for Computational Linguistics.
  • [Pang and Lee2008] Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135.
  • [Qiu et al.2011] Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9–27.
  • [Riloff et al.2013] Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In EMNLP, pages 704–714.
  • [Shamay-Tsoory et al.2005] SG Shamay-Tsoory, Rachel Tomer, and Judith Aharon-Peretz. 2005. The neuroanatomical basis of understanding sarcasm and its relationship to social cognition. Neuropsychology, 19(3):288.
  • [Sørensen1948] Thorvald Sørensen. 1948. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. Biol. Skr., 5:1–34.
  • [Tan et al.2011] Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1397–1405. ACM.
  • [Titov and McDonald2008] Ivan Titov and Ryan T McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In ACL, volume 8, pages 308–316. Citeseer.
  • [Tsur et al.2010] Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. Icwsm-a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In ICWSM.
  • [Veale and Hao2010] Tony Veale and Yanfen Hao. 2010. Detecting ironic intent in creative comparisons. In ECAI, volume 215, pages 765–770.