FiLMing Multimodal Sarcasm Detection with Attention

Sarcasm detection identifies natural language expressions whose intended meaning differs from their surface meaning. It finds applications in many NLP tasks such as opinion mining and sentiment analysis. Today, social media has given rise to an abundance of multimodal data in which users express their opinions through text and images. Our paper aims to leverage multimodal data to improve the performance of existing sarcasm detection systems. So far, various approaches have been proposed that use the text modality, the image modality, or a fusion of both. We propose a novel architecture that uses the RoBERTa model with a co-attention layer on top to incorporate the context incongruity between the input text and the image attributes. Further, we integrate a feature-wise affine transformation by conditioning the input image on the textual features, processed by a GRU network, through FiLMed ResNet blocks to capture the multimodal information. The outputs of both components and the [CLS] token from RoBERTa are concatenated and used for the final prediction. Our results demonstrate that our proposed model outperforms the existing state-of-the-art method by 6.14% F1-score on the benchmark Twitter dataset.


1 Introduction

Sarcasm is a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt. Thus, sarcasm can be defined as a sharp remark whose intended meaning differs from what it appears to be. For example, “I am not insulting you. I am describing you” could mean that the speaker is insulting the audience, even though the receiver may not get it. Sarcasm does not necessarily involve irony, but more often than not, it is used as an ironic remark (www.thefreedictionary.com). Sarcasm usually involves ambivalence and is difficult to comprehend; it requires a certain degree of intelligence and age to deliver or understand sarcasm [9]. Sarcasm detection is important in many natural language understanding tasks such as opinion mining, dialogue systems, customer support, and online harassment detection, to name a few. In particular, psychologists use the ability to understand sarcasm as a tool to distinguish among different types of neuro-degenerative diseases [17].

A plethora of works automatically detects sarcasm in unimodal data, using either text or images. Over the years, online media and chatbots have given rise to multimodal data such as images, text, video, and audio. In the present work, we consider sarcasm detection using the image and text modalities only. The scope of our work is limited to cases where the text and image convey opposite meanings; that is, we do not consider the case where only the text or only the image is sarcastic, and we leave it for future studies.

An example of sarcasm in multimodal data is presented in fig. 3. Detecting sarcasm in multimodal data can be more challenging than in unimodal data simply because what the text says is the opposite of what the image implies; for example, in fig. 3(a), the text says “lovely, clean, pleasant train home” whereas the associated image implies the opposite. Similarly, in fig. 3(b), the textual description and the image semantics allude to opposite meanings. Such a phenomenon is called incongruity [4, 20, 13] and has been leveraged to tackle multimodal sarcasm detection [15, 22, 19, 16].

(a) lovely, clean, pleasant train home
(b) well that looks appetising #ubereats
Figure 3: Examples from the Twitter data showing text modality alone is insufficient for sarcasm detection.

Following previous multimodal sarcasm detection approaches, we propose a deep learning-based architecture that takes text and image modalities as input. We compute the inter-modality incongruity in two ways. Firstly, the image attributes are extracted using ResNet [3], and a co-attention matrix is calculated; this operation captures the inter-modality incongruity between the text and the image attributes. Secondly, the inter-modality incongruity is computed between the text and the image features. The two incongruity representations are fused and used to classify sentences as sarcastic or non-sarcastic. Concretely, we make the following contributions: (1) a novel deep learning-based architecture that captures the inter-modality incongruity between the image and text, and (2) an empirical demonstration that we boost the F1-score of the current SOTA for multimodal sarcasm detection by 6.14% on the benchmark Twitter dataset.

2 Related Works

Various methods have been proposed in the literature for sarcasm detection in unimodal and multimodal data. In this section, we delineate works that tackle sarcasm detection in multimodal data using image and text modality only. Using audio or video is out of scope of this work and hence not presented here.

One of the first works to utilize multimodal data for sarcasm detection is [16], which presents two approaches. The first exploits visual semantics trained on an external dataset and concatenates the semantic features with the textual features. The second adopts a visual neural network initialized with parameters trained on ImageNet for multimodal sarcastic posts. In [8], cognitive features are extracted using gaze/eye-movement data and encoded with a CNN for feature representation. The work in [1] extracts visual features and visual attributes from images using ResNet and builds a hierarchical fusion model to detect sarcasm. Along the same lines, the recurrent network model in [15] proposes a gating mechanism to leak information from one modality to the other and achieves superior performance on the Twitter benchmark dataset for sarcasm detection. The authors of [19] use pre-trained BERT and ResNet models to encode text and image data and connect the two using a gate called a bridge. Further, they also propose a 2D-Intra-Attention layer to extract the relationship between the text and image.

The multimodal work that most closely matches ours is [10], in which the authors propose a BERT-based architecture for modeling intra- and inter-modality incongruity. Self-attention is used to model inter-modality incongruity, whereas co-attention is used to model intra-modality incongruity. Intra-modality incongruity is also used in [20] for sarcasm detection. Contrary to this work, we model inter-modality incongruity between the text and the visual attributes (the work in [10] uses text and visual features) in two ways: the first uses visual attributes and text, and the second uses visual features and text. Note that visual features and visual attributes are two different entities: the former is a low-level representation of the image, whereas the latter is a high-level description of the image, such as which objects are present in it. The second major difference is that we use feature-wise linear modulation [12] to compute FiLM parameters from the text data and inject FiLM layers (see [12] for more details) between the ResNet layers. The FiLMed ResNet outputs the visual features, which act as the inter-modality incongruity representation. More details are given in the next section.

3 Methodology

3.1 Proposed Model

We propose a novel multimodal architecture based on the Robustly Optimized BERT Pretraining Approach (RoBERTa) [6] for detecting sarcasm. Figure 4 gives an overview of the model. The proposed model mainly consists of the text and image attribute representation, the image representation conditioned on the text (FiLM), the multimodal incongruity captured through the co-attention mechanism, and the final concatenation with the [CLS] token. Each component is elucidated in the following subsections.

Figure 4: Overview of our proposed model

3.1.1 Image, text and image attribute representation

For representing the text, we consider it as a sequence of word embeddings $T = \{t_1, t_2, \ldots, t_N\}$, where each $t_i \in \mathbb{R}^{d}$ is the sum of the token, segment, and position embeddings, $N$ denotes the maximum length of the input text, and $d$ is the embedding size. For extracting the textual features $F_t \in \mathbb{R}^{N \times d_h}$, we use the RoBERTa model and take the output of its first encoder layer as the representation of the text; here $N$ is the length of the set $F_t$ and $d_h$ is the hidden size of RoBERTa. Similarly, for the representation of the image attributes, we have $A = \{a_1, a_2, \ldots, a_M\}$ as the sum of the token, segment, and position embeddings, and its features are represented by $F_a \in \mathbb{R}^{M \times d_h}$, which is the output of the last encoder layer of RoBERTa; here $M$ is the length of the set $F_a$.
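As a minimal illustration of this step, the sketch below extracts layer-wise RoBERTa representations with the HuggingFace transformers library; the attribute string, the variable names, and the layer indices shown are illustrative assumptions rather than the exact implementation.

    import torch
    from transformers import RobertaTokenizer, RobertaModel

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    roberta = RobertaModel.from_pretrained("roberta-base", output_hidden_states=True)

    def encode(texts, layer):
        # hidden_states[0] is the embedding output; hidden_states[k] is the k-th encoder layer.
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            out = roberta(**batch)
        return out.hidden_states[layer]          # shape: (batch, seq_len, 768)

    # Text features F_t from the first encoder layer; attribute features F_a from the last layer.
    F_t = encode(["lovely , clean , pleasant train home"], layer=1)
    F_a = encode(["train seat trash window person"], layer=12)   # hypothetical attribute words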

3.1.2 Inter-modal incongruity between visual and text representation

Since the input text plays an important role in detecting sarcasm, we capture the image information conditioned on the textual features. Inspired by the work of [12], we apply a feature-wise affine transformation to the image by conditioning it on the input text. The image features are extracted using a pre-trained ResNet-50. Further, we use a Gated Recurrent Unit (GRU) network [2] to process the text, which takes 100-dimensional learned GloVe [11] word embeddings as input. The final layer of the GRU network outputs the FiLM parameters $\gamma_{i,c}$ and $\beta_{i,c}$ for the $c$-th feature map of the $i$-th FiLMed residual block. The $\gamma_{i,c}$ and $\beta_{i,c}$ are the outputs of the functions $f_c$ and $h_c$, which are learned by FiLM for the input text $x_i$:

$\gamma_{i,c} = f_c(x_i), \qquad \beta_{i,c} = h_c(x_i) \qquad (1)$

where $f_c$ and $h_c$ are arbitrary functions.

In our experiments, we use 4 FiLMed residual blocks with a linear layer attached on top, which produces the final FiLM output $I_f$. Using the FiLM parameters, FiLM layers are inserted within each residual block to condition the visual pipeline. Mathematically, the parameters $\gamma_{i,c}$ and $\beta_{i,c}$ perform a feature-wise affine transformation on the image feature maps $F_{i,c}$ extracted by ResNet:

$\mathrm{FiLM}(F_{i,c} \mid \gamma_{i,c}, \beta_{i,c}) = \gamma_{i,c} F_{i,c} + \beta_{i,c} \qquad (2)$

Here, $F_{i,c}$ corresponds to the $c$-th feature map of the $i$-th input. In doing so, we aim to extract visual features aligned with the text meaning; at the same time, conditioning the image representation on the text representation captures the inter-modal incongruity.
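The following sketch shows how the FiLM conditioning of Eqs. (1) and (2) can be wired up in PyTorch; the GRU hidden size, the number of channels, and the block layout are assumptions made for illustration, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class FiLMGenerator(nn.Module):
        """GRU over GloVe word embeddings -> (gamma, beta) for each FiLMed block (Eq. 1)."""
        def __init__(self, emb_dim=100, hidden=256, n_blocks=4, n_channels=128):
            super().__init__()
            self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
            self.proj = nn.Linear(hidden, n_blocks * n_channels * 2)
            self.n_blocks, self.n_channels = n_blocks, n_channels

        def forward(self, glove_embs):                      # glove_embs: (B, T, 100)
            _, h = self.gru(glove_embs)                     # h: (1, B, hidden)
            film = self.proj(h.squeeze(0))                  # (B, n_blocks * C * 2)
            film = film.view(-1, self.n_blocks, self.n_channels, 2)
            return film[..., 0], film[..., 1]               # gamma, beta: (B, n_blocks, C)

    class FiLMedResBlock(nn.Module):
        """Residual block whose feature maps are modulated as gamma * F + beta (Eq. 2).
        Assumes the ResNet feature maps were already projected to `channels` channels."""
        def __init__(self, channels=128):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn = nn.BatchNorm2d(channels, affine=False)

        def forward(self, x, gamma, beta):                  # gamma, beta: (B, C)
            out = torch.relu(self.conv1(x))
            out = self.bn(self.conv2(out))
            out = gamma[:, :, None, None] * out + beta[:, :, None, None]   # FiLM modulation
            return torch.relu(out) + x

    # Usage: gamma, beta = FiLMGenerator()(glove_embs); y = FiLMedResBlock()(x, gamma[:, 0], beta[:, 0])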

3.1.3 Inter-modal incongruity between visual attributes and text representation

To model the contradiction between the input text and the image attributes, we use a co-attention mechanism motivated by [7]. The co-attention inputs are the RoBERTa output of the input text and the high-level image representation, i.e., the image attributes. The co-attention mechanism incorporates the incongruity between the text and image modalities. Formally, we first calculate the affinity matrix $C$ using a bi-linear transformation to capture the interaction between the input text and the image attributes:

$C = \tanh\left(F_t W_b F_a^{\top}\right) \in \mathbb{R}^{N \times M} \qquad (3)$

where $F_t \in \mathbb{R}^{N \times d_h}$ represents the input-text features, $F_a \in \mathbb{R}^{M \times d_h}$ represents the image-attribute features, and $W_b \in \mathbb{R}^{d_h \times d_h}$ is a learnable parameter matrix of weights. $N$ and $M$ denote the maximum size of the input-text features and the image-attribute features, respectively, and $d_h$ denotes the hidden size of RoBERTa. The affinity matrix transforms the text attention space to the image-attribute attention space. The attention weights $A_t \in \mathbb{R}^{N}$ are then calculated using a 2D max-pooling operation over the affinity matrix $C$ with a kernel of size $(1 \times M)$. Intuitively, $A_t$ gives the attention weight of each word in the text after it has been transformed to the image-attribute attention space:

$A_t = \mathrm{MaxPool}_{(1 \times M)}(C) \qquad (4)$

Finally, the image-attribute attention representation $\hat{F}_a$, which captures the contradiction between the text and the high-level features of the image, is calculated as:

$\hat{F}_a = \mathrm{softmax}(A_t)^{\top} F_t \qquad (5)$
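A minimal sketch of this co-attention step under the notation above is given below; the bilinear form, the max-pooling, and the softmax normalisation follow the reconstruction of Eqs. (3)-(5) and should be read as assumptions where the original formulas were garbled.

    import torch
    import torch.nn as nn

    class CoAttention(nn.Module):
        def __init__(self, hidden=768):
            super().__init__()
            # learnable bilinear weights W_b of Eq. (3)
            self.W_b = nn.Parameter(torch.randn(hidden, hidden) * 0.02)

        def forward(self, F_t, F_a):
            # F_t: (B, N, d_h) text features, F_a: (B, M, d_h) image-attribute features
            C = torch.tanh(F_t @ self.W_b @ F_a.transpose(1, 2))   # affinity matrix, (B, N, M)
            A_t = C.max(dim=2).values                              # max-pool over attributes, (B, N)
            A_t = torch.softmax(A_t, dim=1)                        # attention over text positions
            return torch.bmm(A_t.unsqueeze(1), F_t).squeeze(1)     # attended representation, (B, d_h)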

3.1.4 Final Fusion

Summing up, we take the output $I_f$ from the FiLM network and the output $\hat{F}_a$ from the co-attention mechanism described above. Along with these, we also use the [CLS] token representation $t_{[CLS]}$ from the RoBERTa input-text features, and concatenate them to form the fusion vector:

$F_{fus} = [\, I_f \,;\; \hat{F}_a \,;\; t_{[CLS]} \,] \qquad (6)$

We pass this fusion vector through a fully connected layer followed by a sigmoid function for classification, so the final output $\hat{y}$ is:

$\hat{y} = \sigma\left(W_o F_{fus} + b_o\right) \qquad (7)$

where $W_o$ and the scalar $b_o$ are trainable parameters.
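A sketch of the fusion and classification head of Eqs. (6)-(7) follows; the individual feature dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        def __init__(self, film_dim=1024, coatt_dim=768, cls_dim=768):
            super().__init__()
            self.fc = nn.Linear(film_dim + coatt_dim + cls_dim, 1)

        def forward(self, film_out, coatt_out, cls_tok):
            fused = torch.cat([film_out, coatt_out, cls_tok], dim=-1)   # Eq. (6)
            return torch.sigmoid(self.fc(fused)).squeeze(-1)            # Eq. (7): probability of sarcasm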

4 Experiments

4.1 The Dataset

We use the publicly available multimodal Twitter dataset collected by [1]. The dataset consists of about 24k samples, each comprising a tweet, an image, and image attributes. Further statistics of the dataset are shown in Table 1. The dataset is divided by [1] into training, validation, and test sets in the ratio 80%:10%:10%, and we use the same split for a fair comparison. The data are preprocessed with the NLTK toolkit to separate words, emoticons, and hashtags. We present results only on this dataset, which combines text and images with image attributes, since it is the only dataset where the image attributes have been manually verified. Original images are resized to 224 x 224, followed by a center crop and normalization. During training we use data augmentation, including random cropping and random changes of brightness, contrast, and image saturation. The text data is preprocessed to exclude the emoji information.

Split        Sentences  Positive  Negative  % Positive
Training     19816      8642      11174     43.62
Development  2410       959       1451      39.76
Test         2409       959       1450      39.80
Table 1: Details of the Twitter dataset
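A sketch of the image pipeline described above using torchvision is shown below; the augmentation magnitudes are not reported in the paper and are assumed here.

    from torchvision import transforms

    IMAGENET_MEAN, IMAGENET_STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

    train_tf = transforms.Compose([
        transforms.Resize(256),
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),                       # random crop
        transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),      # photometric jitter
        transforms.ToTensor(),
        transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])

    eval_tf = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])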

4.2 Baselines

The baseline models for our experiment are as follows:

  • ResNet: An image-only model [3] fine-tuned on the same multimodal Twitter dataset.

  • CNN: A popular text-only CNN model [5] which performs well on text classification problems.

  • Multi-dimensional Intra-Attention Recurrent Network (MIARN): [18] proposed a novel architecture for text-only sarcasm detection that uses a 2D-attention mechanism to model intra-sentence relationships.

  • Hierarchical Fusion Model (HFM): Proposed by [1], it takes text, image, and attribute feature as modalities. Features of the modalities are then reconstructed and fused for prediction. This is the only model besides ours that uses image attributes as an additional modality.

  • D&R Net: [21] preprocesses the image and text to form adjective-noun pairs (ANP). Then, they use a Decomposition and Relation Network (D&R Net) to model cross-modality contrast using ANP and semantic association between the image and text.

  • Res-bert: [10] implements Res-bert as a model to concatenate the output of image features from ResNet and text features from BERT. Since this model closely resembles our approach, it is an important baseline.

  • Intra and Inter-modality Incongruity (IIMI-MMSD): [10] proposes a BERT-based model, which concentrates on both intra and inter-modality incongruity for multimodal sarcasm detection. They use self-attention and co-attention mechanism to capture inter and intra-modality incongruity, respectively.

  • Bridge-RoBERTa: It is proposed by [19]. The authors have used pre-trained RoBERTa and ResNet, and connected their vector spaces using a Bridge Layer. Further, to extract the relationship between text and image, they have used 2D-Intra-Attention layer.

4.3 Experimental Settings and Hyper-Parameters

The details of our experimental setup and hyper-parameters are as follows. We use pre-trained RoBERTa-base [6] with 12 layers, and pre-trained ResNet-50 [3]

with 50 layers. For text and image attribute representation, we experiment with different number of layers of RoBERTa and find that 1 layer of encoder gives the best performance. So the comparison with baselines uses only 1 layer of RoBERTa encoder. We show the performance with different number of layers in the ablation studies. The model is run on NVIDIA Tesla V100-PCIE GPU. We use PyTorch 1.7.1 and Transformers 4.3.2 to implement our model. For evaluation we use F1-score, precision, recall, and accuracy as implemented in Scikit-learn. We take Adam as our optimizer and set the learning rate for FiLMed network as 3e-4, for RoBERTa as 1e-6, and 1e-4 for co-attention layer. The batch size used is 32 for training. We also add weight decay of 1e-2 and gradient clipping set to 1.0. The maximum length of tokenised text is 360. We also take the standard dropout rate of 0.1. The model is fine-tuned for 15 epochs, and the model with the best F1-score on the validation set is saved and tested.

4.4 Results and Discussion

Table 2 compares our proposed model with the baseline models. Our model outperforms the current state-of-the-art model [19] on all four metrics, viz. F1-score, precision, recall, and accuracy. Specifically, our model improves on the current SOTA, the Bridge-RoBERTa model, by 6.14% in F1-score and 5.15% in accuracy, verifying the effectiveness of our approach.

Modality Method F1-score Precision Recall Accuracy
Image ResNet [3] 0.6513 0.5441 0.7080 0.6476
Text CNN [5] 0.7532 0.7429 0.7639 0.8003
MIARN [18] 0.7736 0.7967 0.7518 0.8248
Image + Text HFM [1] 0.8018 0.7657 0.8415 0.8344
D&R Net [21] 0.8060 0.7797 0.8342 0.8402
Res-Bert [10] 0.8157 0.7887 0.8446 0.8480
IIMI-MMSD [10] 0.8292 0.8087 0.8508 0.8605
Bridge-RoBERTa [19] 0.8605 0.8295 0.8939 0.8851
Our Method 0.9219 0.9056 0.9387 0.9366
Table 2: Comparison of baselines with our proposed model

We can also verify from Table 2 that treating images or text independently does not perform well on the sarcasm detection problem. Intuitively, image-only models perform worse than text-only models, as an image alone does not contain sufficient information to identify the underlying sarcasm. Since these unimodal approaches fall short, multimodal approaches are more suitable for sarcasm detection. Further, Table 2 shows that our improvement in precision over the baselines is larger than the improvement in recall (last two rows), indicating that our approach captures sarcastic tweets (the positive class) more accurately.

Our proposed model achieves better results than other multimodal approaches because it captures the contradiction between the text and the image in two stages. First, using FiLM, we get a representation of the image conditioned on the input text, thereby extracting image features that are incongruous with the text features. The FiLM layers enable the GRU network over the input text to influence the visual computation (ResNet in our case). This adaptively alters the ResNet network with respect to the input text, allowing our model to capture the inter-modality incongruity between the input text and the image features. Second, the co-attention mechanism enables the image attributes to attend to each word in the input text, which gives a representation of the high-level image features conditioned on the text; thus, we capture the inter-modality incongruity between the input text and the image attributes. Finally, since we have representations of the image and the image attributes, we also need a representation of the input text to detect the sarcasm better. Motivated by the approach taken in [19], using the output of the [CLS] token from RoBERTa in the final concatenation layer helps the proposed model identify the underlying sarcasm from the input-text, image, and image-attribute representations. Thus, conditioning the image and the image attributes on the input text using FiLM and co-attention, respectively, is effective for sarcasm detection.

4.5 Model Analysis

To understand the visual significance of our model, we plot the image attention representations. Fig. 5 illustrates that our model can attend to regions of the image that are incongruous with the text. The images show that regions corresponding to the input-text features are highlighted and have larger activations than other regions of the image. Since the final predictions are made using features of the highlighted areas, this gives an overall boost to the model performance. Moreover, the examples in Figure 5 show that our model is able to attend to in-image text such as “pack of almonds” and “stupid people” without needing to explicitly use noisy Optical Character Recognition (OCR), an approach used in previous works [10, 15].

     
(a) thanks god i have such a wonderful reseal in my pack of almonds    (b) yup lol.
Figure 5: Examples from the Twitter data showing attention visualization of sarcastic tweets.
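The paper does not spell out how these maps are rendered; a plausible sketch (an assumption, not the authors' exact procedure) is to upsample a spatial attention or activation map from the FiLMed ResNet and overlay it on the input image.

    import torch.nn.functional as F
    import matplotlib.pyplot as plt

    def show_attention(image, attn_map):
        # image: (H, W, 3) array in [0, 1]; attn_map: (h, w) torch tensor of spatial activations
        attn = F.interpolate(attn_map[None, None], size=image.shape[:2],
                             mode="bilinear", align_corners=False)[0, 0]
        plt.imshow(image)
        plt.imshow(attn.detach().numpy(), cmap="jet", alpha=0.5)   # heat-map overlay
        plt.axis("off")
        plt.show()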

4.6 Ablation Study

To evaluate the effectiveness of each component in our network, we conduct a detailed ablation study on the proposed architecture. Firstly, we remove the FiLM network, denoted w/o FiLM. Secondly, the co-attention between the visual attributes and the text is removed, and the resulting model is called w/o co-attention. Further, the importance of the [CLS] token during fusion is evaluated by removing it, denoted w/o cls. We also experiment with two other transformer models, namely BERT and ELECTRA: we replace RoBERTa with each of them, and the resulting networks are called FiLM-Bert and FiLM-Electra, respectively. The ablation results are shown in Table 3.

Ablation F1-score Precision Recall Accuracy
w/o FiLM 0.6217 0.5667 0.6890 0.6660
w/o co-attention 0.7607 0.7360 0.7871 0.8029
w/o cls 0.7638 0.7225 0.8104 0.8003
FiLM-Electra 0.7683 0.7178 0.8265 0.8013
FiLM-Bert 0.7727 0.7131 0.8439 0.8026
Our Method 0.9219 0.9056 0.9387 0.9366
Table 3: Results of ablation studies. ‘w/o’ denotes removal of the corresponding component.

We can see that the removal of FiLM (w/o FiLM) significantly hampers the model's performance. Next, we eliminate the co-attention module (w/o co-attention) and concatenate the output from FiLM and RoBERTa; this decreases the model performance, which implies that capturing the incongruity between image and textual features through co-attention is important for sarcasm detection. Further, the output of the [CLS] token contributes positively to the model, as removing it (w/o cls) reduces the metric scores. When we replace RoBERTa with the BERT and ELECTRA models (FiLM-Bert and FiLM-Electra, sketched below), we observe a sharp decline in performance, which shows that RoBERTa is better at harnessing textual features; the larger training data and batch size used in RoBERTa give it an edge over the original BERT model. From the above results, we conclude that the FiLM network fused with RoBERTa and the attention mechanism helps to capture the incongruity between the image and text modalities, thereby effectively learning the underlying sarcasm.
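For the FiLM-Bert and FiLM-Electra variants, the RoBERTa encoder is simply swapped for another HuggingFace backbone; a sketch is given below (the checkpoint names are the standard public ones and are assumed here).

    from transformers import AutoTokenizer, AutoModel

    def build_text_encoder(name):
        # e.g. "roberta-base", "bert-base-uncased", "google/electra-base-discriminator"
        tokenizer = AutoTokenizer.from_pretrained(name)
        encoder = AutoModel.from_pretrained(name, output_hidden_states=True)
        return tokenizer, encoder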

Layers F1-score Precision Recall Accuracy Training Time (per epoch)
12 0.8365 0.8242 0.8492 0.8679 15.45 minutes
5 0.8434 0.8217 0.8665 0.8711 8.96 minutes
2 0.9215 0.9043 0.9393 0.9363 6.02 minutes
1 0.9219 0.9056 0.9387 0.9366 5.10 minutes
Table 4: Ablation study with different layers of RoBERTa model.

Motivated by [14], we study layer transferability in the RoBERTa model for our task. Instead of fixing the encoder layer whose output represents the textual features $F_t$, we consider the output from different layers; the rest of the model architecture is unaltered. Results are presented in Table 4. The authors of [14] observed that the middle layers of the BERT model are most prominent in representing syntactic information, and we observe a similar trend in Table 4: we get the best results when we use the output of the first encoder layer. Our findings imply that the initial layers of the RoBERTa model are better at encoding the syntactic representation and carry the most information about the linear word order, whereas the final layers are more task-specific and perform better in applications where a classifier is simply attached on top of the transformer for a downstream task. Using only one layer also reduces the model size and the training time, as seen in the last column of Table 4.
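Reusing the RoBERTa encoder from the sketch in Section 3.1.1, selecting the layer amounts to indexing into the returned hidden states; the snippet below is an illustrative assumption of how the layer ablation is run.

    def text_features(roberta, batch, layer_index):
        # layer_index in {1, 2, 5, 12} corresponds to the rows of Table 4;
        # hidden_states[0] is the embedding output, hidden_states[k] the k-th encoder layer.
        out = roberta(**batch)                   # model created with output_hidden_states=True
        return out.hidden_states[layer_index]    # textual features F_t fed to the rest of the model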

5 Conclusion and Future Work

The present work tackles the problem of sarcasm detection by capturing inter-modality incongruity. The proposed architecture handles the inter-modality incongruity in two ways: the first uses co-attention, and the second uses the FiLM network. Comparison with several baselines on the benchmark Twitter dataset reveals that the proposed architecture can better capture the contradiction between the image and text modalities, demonstrating the effectiveness and superiority of our model. The ablation study highlights the importance of the FiLM network and the co-attention layer between the image and image-attribute embeddings and the text embeddings.

References

  • [1] Y. Cai, H. Cai, and X. Wan (2019) Multi-modal sarcasm detection in Twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2506–2515.
  • [2] J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR abs/1412.3555.
  • [3] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [4] A. Joshi, V. Sharma, and P. Bhattacharyya (2015) Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 757–762.
  • [5] Y. Kim (2014) Convolutional neural networks for sentence classification. CoRR abs/1408.5882.
  • [6] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • [7] J. Lu, J. Yang, D. Batra, and D. Parikh (2016) Hierarchical question-image co-attention for visual question answering. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16), pp. 289–297.
  • [8] A. Mishra, K. Dey, and P. Bhattacharyya (2017) Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 377–387.
  • [9] L. Overstreet (2019) Introduction to Lifespan Development. https://courses.lumenlearning.com/wmopen-lifespandevelopment/chapter/cognitive-development-in-adolescence/ [Online; accessed 19-June-2021].
  • [10] H. Pan, Z. Lin, P. Fu, Y. Qi, and W. Wang (2020) Modeling intra and inter-modality incongruity for multi-modal sarcasm detection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pp. 1383–1392.
  • [11] J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • [12] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville (2018) FiLM: visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
  • [13] E. Riloff, A. Qadir, P. Surve, L. De Silva, N. Gilbert, and R. Huang (2013) Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 704–714.
  • [14] A. Rogers, O. Kovaleva, and A. Rumshisky (2020) A primer in BERTology: what we know about how BERT works. arXiv preprint arXiv:2002.12327.
  • [15] S. Sangwan, M. S. Akhtar, P. Behera, and A. Ekbal (2020) I didn't mean what I wrote! Exploring multimodality for sarcasm detection. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
  • [16] R. Schifanella, P. de Juan, J. Tetreault, and L. Cao (2016) Detecting sarcasm in multimodal social platforms. In Proceedings of the 24th ACM International Conference on Multimedia, pp. 1136–1145.
  • [17] E. Singer (2005) Website.
  • [18] Y. Tay, A. T. Luu, S. C. Hui, and J. Su (2018) Reasoning with sarcasm by reading in-between. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1010–1020.
  • [19] X. Wang, X. Sun, T. Yang, and H. Wang (2020) Building a bridge: a method for image-text sarcasm detection without pretraining on image-text data. In Proceedings of the First International Workshop on Natural Language Processing Beyond Text, pp. 19–29.
  • [20] T. Xiong, P. Zhang, H. Zhu, and Y. Yang (2019) Sarcasm detection with self-matching networks and low-rank bilinear pooling. In The World Wide Web Conference, pp. 2115–2124.
  • [21] N. Xu, Z. Zeng, and W. Mao (2020) Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3777–3786.
  • [22] N. Xu, Z. Zeng, and W. Mao (2020) Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3777–3786.