Look, Read and Feel: Benchmarking Ads Understanding with Multimodal Multitask Learning

12/21/2019
by   Huaizheng Zhang, et al.

Given the massive advertising market and the sharply increasing amount of online multimedia content (such as videos), it is now common to promote advertisements (ads) together with multimedia content. Manually finding relevant ads to match the provided content is exhausting, and hence automatic advertising techniques have been developed. Since ads are usually hard to understand from their visual appearance alone due to the visual metaphors they contain, other modalities, such as the embedded texts, should be exploited for understanding. To further improve user experience, it is necessary to understand both the topic and the sentiment of an ad. This motivates us to develop a novel deep multimodal multitask framework that integrates multiple modalities to achieve effective topic and sentiment prediction simultaneously for ads understanding. In particular, our model first extracts multimodal information from ads and learns high-level, comparable representations. The visual metaphor of the ad is decoded in an unsupervised manner. The obtained representations are then fed into the proposed hierarchical multimodal attention modules to learn task-specific representations for final prediction. A multitask loss function is also designed to train both the topic and sentiment prediction models jointly in an end-to-end manner. We conduct extensive experiments on the latest large-scale advertisement dataset and achieve state-of-the-art performance on both prediction tasks. The obtained results can be utilized as a benchmark for ads understanding.



Introduction

Advertising plays a pivotal role in the global economy and in the revenue of numerous companies. For instance, it was predicted that Google's advertising revenue would grow by about 15% to $39.92 billion in 2018, and by 17% to $21 billion for Facebook [9]. Since there is a tremendous amount of multimedia content (such as videos and TV shows) on the web, it is now common to promote ads together with multimedia content. Manually selecting an ad to match a provided piece of multimedia content is time-consuming and labor-intensive. Thus, automatic advertising techniques have been developed, such as contextual advertising [15], which aims to find the ad most relevant to the provided content without annoying customers. Therefore, it is necessary to understand both the multimedia content and the ads thoroughly.

Figure 1: Ad examples. Visual rhetoric is widely used in ad design. Moreover, unlike natural images, ad images contain multimodal information, such as the visual content and the associated texts, which can be utilized to better understand ads.

Benefiting from deep learning techniques, typical multimedia content analysis has achieved excellent progress [23, 13]. However, there has been much less success in understanding ads, which are often much harder to understand since they usually contain much visual rhetoric [8] to attract, communicate with, and even persuade customers. Some example ad images are shown in Figure 1, where we can see that it is often hard to understand an ad only from its visual appearance (look). Fortunately, the contained or associated texts can usually indicate the underlying implication. Therefore, it is beneficial to integrate both the visual and textual information (read) for effective ad understanding. In addition, to improve the user experience when promoting ads, we should understand not only the topic but also the emotion (feel) an ad conveys. For example, for the same living room scene in a TV show, the atmosphere may vary with the characters' situations, and it is inappropriate to insert an ad that conveys "creative" or "inspired" into a scene where the characters are in pain. Consequently, it is necessary to predict both the topic and sentiment of an ad for a non-intrusive user experience. Since both the topic and sentiment relate to the same ad, we propose to learn the two prediction models simultaneously so that they can help each other during training.

Based on these considerations, we propose a novel deep multimodal multitask framework for ad understanding, where different types of information are integrated to predict both the topic and sentiment of an ad simultaneously. To the best of our knowledge, this is the first framework that unifies the topic and sentiment understanding of ads. In particular, we first extract different types of information, such as objects and contained texts, from the ad using existing techniques, such as pre-trained object or image representation models and OCR [22]. To recognize and understand the visual rhetoric, an autoencoder is introduced to decode the object representations in an unsupervised manner. For the whole-image and extracted text representations, a multi-layer perceptron (MLP) and a BLSTM [12] are added to learn high-level and comparable representations. The parameters in these modules are shared by the different tasks, so the total number of parameters is reduced significantly. The obtained representations are then fed into different sub-networks, where a novel hierarchical multimodal attention module is designed to capture both the intra-modal and inter-modal importance for different tasks. Finally, a multitask loss function is designed to minimize the topic and sentiment prediction losses, as well as the decoding reconstruction loss, simultaneously.

There exist some preliminary attempts at automatic advertising. For example, [15] proposed a framework to insert ads into videos based on global textual relevance gathered from video metadata and local visual-aural relevance gathered from low-level image and audio features. A similar system is presented in [25], but more advanced deep learning techniques are introduced to analyze both video content and ad images. Different from these topic-only analyses, [23] developed an ads recommendation system based on the sentiment of multimedia contents, and a unified framework to understand both the topic and sentiment of multimedia contents is developed in [13]. The main drawbacks of these works are: 1) the topic and sentiment analysis is only conducted for multimedia contents but not for ads; 2) the ads are processed in the same way as common images/videos, and the specific characteristics of ads are ignored. The proposed method differs from and improves upon these works in that: 1) both the topic and sentiment are analyzed for ads, and the models are trained together to enable interaction between the two prediction tasks; 2) the multimodal nature of ads is fully exploited, and the feature importance is captured in both an intra-modal and an inter-modal manner.

Figure 2: Network architecture. There are two main components in the proposed framework. First, some existing models are adopted to extract object features, features of the whole ad, and the text features in the ad. The different types of features are passed through an autoencoder, an MLP and a BLSTM, respectively, to learn high-level and comparable representations. Then the different representations are fused using the proposed hierarchical multimodal attention mechanism to learn task-specific representations for final prediction. The parameters in the first component are shared by the different tasks, and the developed attention scheme is able to capture the feature importance within and between different modalities for different tasks.

We conduct extensive experiments to verify the effectiveness of the proposed framework by comparing it with the ResNet baseline [7] and some competitive or recently proposed multi-task/multi-label models [27, 3]. Improvements from 10% to 78% are achieved under the mean average precision (mAP) criterion. To summarize, our main contributions are:

  • The first multimodal multitask learning framework and a benchmark for ads understanding;

  • A shared feature extraction module that makes full use of the multimodal information in ads and understands the visual rhetoric;

  • A hierarchical multimodal attention module that effectively exploits the intra-modal and inter-modal information.

Framework Design

In this section, we first present an overview of the proposed architecture. We then divide the framework into three phases and detail each of them. We finally describe how to train the framework.

Architecture Overview

We design a new neural network architecture (Figure 2) to predict both the topics and sentiments of ads in an end-to-end manner. Formally, let $\mathcal{Y}^t$ be the topic label space with $C_t$ class labels and $\mathcal{Y}^s$ be the sentiment label space with $C_s$ class labels. Given the training set $\{(x_i, Y_i^t, Y_i^s)\}_{i=1}^N$, $Y_i^t \subseteq \mathcal{Y}^t$ and $Y_i^s \subseteq \mathcal{Y}^s$ are the sets of relevant topic and sentiment labels associated with the $i$-th image. Our learned model assigns multiple proper topic and sentiment labels to image ads.

During the inference stage, the proposed framework operates in three phases. In the first phase, we use pre-trained models and OCR to extract multi-modality features from the ads, and three kinds of sub-networks to learn shared multi-modality representations. These representations are sent to the second phase and processed by two hierarchical multimodal attention modules separately. The output of the second phase is the task-specific representation. In the third phase, we use a multitask prediction module to transform the learned representations and output the final predictions.

Shared Bottom Module

The first phase of our framework is a shared bottom module, as illustrated in Figure 2. Given image ads, there are two main challenges that need to be addressed in this phase:

  • Since ads are designed to convey topics or emotions with multimodal information, such as visuals and text, how to separate and represent these types of information is still an open problem;

  • As people use rhetoric to decorate ads (Figure 1), even the same object within one modality may have different meanings.

To tackle these challenges, we design a shared bottom module consisting of three components: an image component, an object component and a text component.

Image component. It is designed to extract a coarse global feature of an ad. There are two stages in this component, which can be formulated as follows:

(1) $F = \mathrm{CNN}(I), \quad F \in \mathbb{R}^{C \times H \times W}$
(2) $f^{img} = \mathrm{MLP}(\mathrm{Pool}(F))$

First, a pre-trained image classification model is adopted to transform the raw image $I$ into a feature map $F$, where $C$ is the number of channels, $H$ is the height of the feature map and $W$ is the width of the feature map. A pooling technique (e.g., max or average) is then applied to transform the feature map into a feature vector. Second, we input the feature vector to an MLP to learn a more compact feature shared by the next two tasks. The MLP also aligns the length of the feature vector with the other modality features.
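Below is a minimal PyTorch sketch of this image component, assuming 2048-D ResNet feature maps and a 1024-D shared dimension (matching the implementation details reported later); the class name `ImageComponent` and the layer sizes are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ImageComponent(nn.Module):
    """Pool a CNN feature map into a vector and project it with a 2-layer MLP."""
    def __init__(self, in_channels=2048, shared_dim=1024):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # average pooling over H x W
        self.mlp = nn.Sequential(                  # aligns the dimension with the other modalities
            nn.Linear(in_channels, shared_dim),
            nn.ReLU(inplace=True),
            nn.Linear(shared_dim, shared_dim),
        )

    def forward(self, feature_map):                # (B, C, H, W) from a pre-trained CNN
        v = self.pool(feature_map).flatten(1)      # (B, C) pooled feature vector
        return self.mlp(v)                         # (B, shared_dim) shared global feature

# Usage with features from, e.g., the res-5c block of a pre-trained ResNet-152:
# img_repr = ImageComponent()(torch.randn(4, 2048, 7, 7))   # -> (4, 1024)
```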

Object component. To acquire regional features and decode the visual metaphor, we rely on our object component, which includes two functions. First, a pre-trained object detection model (e.g., Faster R-CNN) is applied to extract object-level features $\{o_1, \dots, o_K\}$ from the bounding box proposals as follows:

(3) $\{o_1, \dots, o_K\} = \mathrm{Det}(I)$

The second function is to decode and represent the visual rhetoric from these object features. To this end, we build an autoencoder as follows:

(4) $z_k = \mathrm{Enc}(o_k)$
(5) $\hat{o}_k = \mathrm{Dec}(z_k)$

It contains an encoder and a decoder. The encoder projects the object features into a latent space where features with similar meanings are clustered together. The decoder reconstructs the latent features as $\hat{o}_k$ so as to make $\hat{o}_k$ similar to $o_k$. During inference, we use the latent feature $z_k$ as the object representation.

Since the visual rhetoric has no supervised information, we train this part in an unsupervised manner with the reconstruction loss $\mathcal{L}_r$:

(6) $\mathcal{L}_r = \frac{1}{B} \sum_{i=1}^{B} \sum_{k} \lVert o_k^{(i)} - \hat{o}_k^{(i)} \rVert_2^2$

where $B$ is the training batch size.
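A hedged sketch of the object autoencoder and its reconstruction loss follows; the single-layer encoder/decoder, the 2048-D input and the 1024-D latent size are assumptions consistent with the implementation details, not the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RhetoricAutoEncoder(nn.Module):
    """Encode detected object features into a latent space and reconstruct them."""
    def __init__(self, obj_dim=2048, latent_dim=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obj_dim, latent_dim), nn.ReLU(inplace=True))
        self.decoder = nn.Linear(latent_dim, obj_dim)

    def forward(self, obj_feats):        # (B, K, obj_dim): K object proposals per ad
        z = self.encoder(obj_feats)      # latent object representations, Eq. (4)
        recon = self.decoder(z)          # reconstructed features, Eq. (5)
        return z, recon

def reconstruction_loss(obj_feats, recon):
    """Mean squared reconstruction error, in the spirit of Eq. (6)."""
    return F.mse_loss(recon, obj_feats)
```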

Text component. We utilize the text component to read and understand the words in image ads. An OCR model is first applied to detect and recognize words $\{w_1, \dots, w_M\}$ from the ads. These words are then embedded into word vectors using FastText. Note that FastText embedding is necessary at this stage: our empirical experience shows that even with the most advanced open-source OCR tools, many words still cannot be detected or recognized correctly, and characters are often missing from recognized words. Since FastText can embed out-of-vocabulary (OOV) words, it greatly alleviates these recognition issues. Finally, the word embeddings are input to a sequence model (e.g., BLSTM) to learn shared word representations. The formal formulation is as follows:

(7) $\{w_1, \dots, w_M\} = \mathrm{OCR}(I)$
(8) $e_m = \mathrm{FastText}(w_m), \quad m = 1, \dots, M$
(9) $\{h_1, \dots, h_M\} = \mathrm{BLSTM}(\{e_1, \dots, e_M\})$
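The text component can be sketched as below, assuming the OCR and FastText steps are run offline so that the module only consumes pre-computed 300-D word vectors; the sizes follow the implementation details, but the module name is illustrative.

```python
import torch
import torch.nn as nn

class TextComponent(nn.Module):
    """Run a BLSTM over pre-computed word embeddings (e.g., 300-D FastText vectors)."""
    def __init__(self, embed_dim=300, hidden_size=512):
        super().__init__()
        self.blstm = nn.LSTM(embed_dim, hidden_size,
                             batch_first=True, bidirectional=True)

    def forward(self, word_vectors):            # (B, M, embed_dim) from OCR + FastText
        outputs, _ = self.blstm(word_vectors)   # (B, M, 2 * hidden_size) = (B, M, 1024)
        return outputs                          # one shared representation per word
```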

Discussion. We design a shared bottom module to extract and learn general multimodal features for the next two tasks. The advantages of this module are four-fold: 1) understanding ads requires not only visual information but also text information; 2) the rhetoric issue is initially addressed by an autoencoder with a reconstruction loss; 3) some of the selected techniques, such as FastText, improve the robustness of the representations; 4) since this is a shared module, the number of parameters to learn decreases considerably, which also helps prevent overfitting.

Hierarchical Multimodal Attention Module

In the second phase, all features obtained from the first phase are fused into a task-specific feature vector by the hierarchical multimodal attention module (HMAM). Specifically, the topic and sentiment tasks are processed by two HMAMs, respectively. We design two attention mechanisms to form this module: intra-modality attention and inter-modality attention.

Intra-modality attention. This attention reads all feature vectors extracted in the first phase from the same modality and generates linear weights to fuse them. Since the visual modality only contains one feature vector, we do not process it with this attention. For the object and text modality features, we apply intra-modality attention separately. Let $\{f_1, \dots, f_n\}$ be the feature vectors of one modality, where $n$ is the number of vectors in this modality. An attention block first filters them with a kernel $k$ via dot products, yielding a set of corresponding significance scores $\{s_1, \dots, s_n\}$. These are then passed to a softmax function to generate positive weights $\{a_1, \dots, a_n\}$ with $\sum_i a_i = 1$. These two operations take the following mathematical forms, respectively:

(10) $s_i = k^\top f_i$
(11) $a_i = \dfrac{\exp(s_i)}{\sum_{j=1}^{n} \exp(s_j)}$

The final representation for one modality is then calculated as:

(12) $r = \sum_{i=1}^{n} a_i f_i$

This intra-modality attention simulates the human visual system: it perceives more important information by weighting it more heavily and aggregates all features into one feature vector that represents the modality. Besides, from equation (10) we can easily deduce that this attention can take any number of feature vectors as long as the lengths of these vectors are the same, which increases the flexibility of our framework. Moreover, the two kernels for the two modalities can be trained by standard back-propagation and gradient descent.
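A possible PyTorch realization of Eqs. (10)-(12) is sketched below; the scaled random initialization of the kernel is our assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraModalityAttention(nn.Module):
    """Weight the feature vectors of one modality with a learned kernel (Eqs. 10-12)."""
    def __init__(self, dim=1024):
        super().__init__()
        self.kernel = nn.Parameter(torch.randn(dim) / dim ** 0.5)

    def forward(self, feats):                  # (B, n, dim): any number n of vectors
        scores = feats @ self.kernel           # (B, n) dot-product significance, Eq. (10)
        weights = F.softmax(scores, dim=-1)    # (B, n) positive weights summing to 1, Eq. (11)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)   # (B, dim) weighted sum, Eq. (12)
```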

Inter-modality attention. This attention weights the three modality feature vectors and concatenates them into one task-specific representation. Since we only have three modalities, using the same kernel method as the intra-modality attention would invite overfitting. Inspired by the annealing algorithm, we simplify the attention mechanism for the inter-modality case. First, we initialize the attention score vector $a$ directly. Then, each modality representation is weighted by its score, and the final task-specific representation is obtained by the following equation:

(13) $r^{task} = \left[ a_1 r^{img}; \, a_2 r^{obj}; \, a_3 r^{txt} \right]$

Through the inter-modality attention, the final feature vector is constructed and passed to the next phase for prediction. The attention score vector can also be trained by back-propagation and gradient descent with specific settings. Different from the intra-modality attention, the inter-modality attention uses fewer parameters, which helps prevent overfitting. Meanwhile, we do not use a softmax function to restrict the attention weights to be positive. The reason is that people adopt many design tricks to make ads more attractive, which may contribute nothing or even negatively to the final prediction (this is also why different annotators may label the same ad image differently).
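A corresponding sketch of the inter-modality attention, assuming three modality vectors of equal dimension and one unconstrained learnable score per modality:

```python
import torch
import torch.nn as nn

class InterModalityAttention(nn.Module):
    """Scale each modality vector by a directly learned score and concatenate them."""
    def __init__(self, num_modalities=3):
        super().__init__()
        # No softmax: scores may be trained to negative values to penalise
        # misleading modalities, as discussed above.
        self.scores = nn.Parameter(torch.ones(num_modalities))

    def forward(self, modality_vectors):       # list of (B, dim) tensors: image, object, text
        weighted = [a * v for a, v in zip(self.scores, modality_vectors)]
        return torch.cat(weighted, dim=-1)     # (B, num_modalities * dim) task-specific vector
```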

Discussion. We process the multimodal information, in which each modality may contain multiple feature vectors, in a hierarchical manner. In the first-level attention, the feature vectors within one modality are weighted and aggregated, with different attention kernels initialized and trained for different modalities. In the second-level attention, the three modality feature vectors are scored by a simplified attention vector and then concatenated as the final task representation. This attention design helps prevent overfitting, and some scores may be trained to negative values to penalize misleading information.

Multitask Prediction Module

The multitask prediction module contains two prediction heads, which are applied to the topic and sentiment representations from the last phase, respectively. These representations are first transformed into a low-dimensional space with the following equation:

(14) $h = \sigma(W r^{task} + b)$

where $\sigma$ is the non-linear activation function, and $W$ and $b$ are trainable parameters. Then, $h$ is fed into the output layer with a sigmoid activation function, which can handle multiple possible labels per sample that are not mutually exclusive:

(15) $z = W_o h + b_o$
(16) $\hat{y} = \mathrm{sigmoid}(z) = \dfrac{1}{1 + e^{-z}}$

where $\hat{y} \in \mathbb{R}^{C}$ and $C$ is the total number of labels in one task.

Discussion. With the help of the sigmoid, the neural network models the probability of each class as a Bernoulli distribution. The probability of each class is independent of the other class probabilities, so we can use the usual threshold of 0.5 for prediction (i.e., if the $c$-th probability is larger than 0.5, the network outputs the $c$-th label).
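A minimal sketch of one prediction head under these equations; the hidden size (512) and the 38 topic labels are illustrative defaults, and a second head with 30 outputs would be used for sentiments.

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Project the task representation and emit independent per-label probabilities."""
    def __init__(self, in_dim=3 * 1024, hidden_dim=512, num_labels=38):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True))  # Eq. (14)
        self.output = nn.Linear(hidden_dim, num_labels)                                        # Eq. (15)

    def forward(self, task_repr):                                  # (B, in_dim)
        return torch.sigmoid(self.output(self.transform(task_repr)))   # (B, num_labels), Eq. (16)

# Multi-label decision rule: emit every label whose probability exceeds 0.5.
# predicted = (PredictionHead()(torch.randn(4, 3 * 1024)) > 0.5)
```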

Training Methodology

We construct a multitask loss to train the proposed framework. It consists of three losses: the rhetoric loss, the topic loss and the sentiment loss. The rhetoric loss, described in equation (6), aims to distinguish the literal meaning from the metaphorical meaning of an object in ads. Since there is no supervised information, this part is trained in an unsupervised manner. The topic and sentiment losses (i.e., $\mathcal{L}_t$ and $\mathcal{L}_s$) share the same mathematical form, which we call the multi-label loss $\mathcal{L}_{ml}$. It is calculated as follows:

(17) $\mathcal{L}_{ml} = -\dfrac{1}{B} \sum_{i=1}^{B} \sum_{c=1}^{C} \left[ y_{ic} \log \hat{y}_{ic} + (1 - y_{ic}) \log (1 - \hat{y}_{ic}) \right]$

Here, $y_{ic}$ denotes the ground truth of the $i$-th ad on the $c$-th label: $y_{ic} = 1$ if the $c$-th label is a relevant label, otherwise $y_{ic} = 0$. $\hat{y}_{ic}$ is the prediction output, $C$ is the total number of labels in one task and $B$ is the batch size.

The overall loss function takes the following form:

(18) $\mathcal{L} = \mathcal{L}_r + \alpha \mathcal{L}_t + \beta \mathcal{L}_s$

where $\alpha$ and $\beta$ are the balance coefficients that control the interaction of the loss terms.
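The overall objective can be sketched as follows, assuming the multi-label losses are binary cross-entropy over the sigmoid outputs and that only the topic and sentiment terms are scaled by the coefficients; the exact weighting of the rhetoric term is our assumption.

```python
import torch.nn.functional as F

def multitask_loss(topic_probs, topic_targets, sent_probs, sent_targets,
                   obj_feats, obj_recon, alpha=200.0, beta=50.0):
    """Combined loss in the spirit of Eq. (18):
    reconstruction (rhetoric) loss plus weighted topic and sentiment multi-label losses."""
    l_rhetoric = F.mse_loss(obj_recon, obj_feats)                    # Eq. (6)
    l_topic = F.binary_cross_entropy(topic_probs, topic_targets)     # Eq. (17) for topics
    l_sentiment = F.binary_cross_entropy(sent_probs, sent_targets)   # Eq. (17) for sentiments
    return l_rhetoric + alpha * l_topic + beta * l_sentiment
```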

Experiment

In this section, we first provide a detailed description of the dataset and the evaluation metrics used in the experiments. Then we design a set of experiments to evaluate the performance of the proposed multimodal multitask framework. Furthermore, we conduct an ablation study to demonstrate the effectiveness of the designed modules.

Dataset

We evaluate our framework on the latest ads dataset released by [8]. It contains 64,832 image ads with 38 topic labels and 30 sentiment labels. One image may be annotated with multiple topic or sentiment labels. Since some images only include topic labels or only sentiment labels, we filter the dataset and only use the subset in which every ad image is annotated with both topics and sentiments. This subset contains 30,000 images, which is still large enough to train a deep learning model. After applying an OCR technique from Google, we observe that more than 67% of the ads contain text information, which confirms the necessity of multimodal learning. Unless otherwise stated, we use 70% of the dataset for training, 10% for validation and the remaining 20% for testing.

Compared Approaches

Since both the topic and sentiment tasks can be cast as multilabel classification, we select ResNet-50 and ResNet-101 with the last activation function replaced by a sigmoid as baselines. We then compare our framework to some state-of-the-art (SOTA) multilabel image classification frameworks. Their details are as follows:

  • ResNet [7]: a very competitive image classification model that has been applied to many areas. We replace its last activation function with a sigmoid and train it with the binary cross-entropy loss function. We fine-tune the last layer or all layers of ResNet-50 and ResNet-101 models pre-trained on ImageNet [5] on the ad image dataset as baselines.

  • C2AE [27]: Canonical Correlated AutoEncoder, a deep learning based multilabel image recognition framework. It learns to embed features and labels jointly, relating features and labels so as to improve recognition.

  • GCN [3]: a recently proposed model based on graph convolutional networks (GCNs). It models label dependencies by constructing a label graph and applying a GCN.

Following the conventional settings of [24, 27, 3], we report the mean average precision (mAP), the average per-class F1 (F1-C) and the average overall F1 (F1-O) for performance evaluation.
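For reference, a sketch of the two F1 metrics under their standard multi-label definitions (per-class/macro averaging for F1-C, aggregated/micro counts for F1-O), assuming binarized ground-truth and prediction matrices:

```python
import numpy as np

def f1_scores(y_true, y_pred, eps=1e-9):
    """Per-class F1 (F1-C) and overall F1 (F1-O) for binary multi-label matrices of shape (N, C)."""
    tp = (y_true * y_pred).sum(axis=0)            # true positives per class
    fp = ((1 - y_true) * y_pred).sum(axis=0)      # false positives per class
    fn = (y_true * (1 - y_pred)).sum(axis=0)      # false negatives per class
    # F1-C: F1 computed per class, then averaged over classes.
    f1_c = (2 * tp / (2 * tp + fp + fn + eps)).mean()
    # F1-O: F1 computed from counts aggregated over all classes.
    f1_o = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum() + eps)
    return f1_c, f1_o
```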

Implementation Details

We build the framework with PyTorch 1.0 [18]. Unless otherwise noted, our configuration is as follows. In the shared bottom module, the image-level features are obtained by average-pooling the 2048-D features from the res-5c block of a pre-trained ResNet-152; these image-level features are fed into a 2-layer MLP to learn 1024-D shared global features. The object-level features are extracted from the fc6 layer of an improved Faster R-CNN model [19] trained on the Visual Genome [11] objects and attributes, as provided in [1]; these features are passed to an autoencoder, and we use the 1024-D latent features as the shared object-level features. For OCR, we use the open-source tool Tesseract [22] from Google. FastText then embeds the recognized words into 300-D vectors, and a 1-layer BLSTM with hidden size 512 further processes these word vectors into 1024-D vectors.

We train our framework in an end-to-end manner. Adamax [10] with learning rate 0.001, momentum 0.9 and weight decay 0.0001 is applied to optimize the network. Every 15 epochs, we multiply the learning rate by 0.1. The balance coefficients $\alpha$ and $\beta$ are set to 200 and 50, respectively.
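The optimizer configuration can be sketched as follows; note that PyTorch's Adamax exposes betas rather than a momentum argument, so mapping the stated momentum of 0.9 to the first beta is our assumption.

```python
import torch

def build_optimizer(model):
    """Adamax plus step decay, as in the configuration above."""
    optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3,
                                   betas=(0.9, 0.999), weight_decay=1e-4)
    # Multiply the learning rate by 0.1 every 15 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
    return optimizer, scheduler
```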

Experimental Result

We present our evaluation results from three perspectives. We first compare our models to other SOTA multilabel classification baselines. We then demonstrate the parameter sensitivity of the proposed framework. We end this section with an ablation study that verifies the effectiveness of each module.

Method            | Topic mAP | Topic F1-C | Topic F1-O | Sentiment mAP | Sentiment F1-C | Sentiment F1-O
ResNet50 (Last)   | 0.151     | 0.082      | 0.257      | 0.242         | 0.136          | 0.376
ResNet50 (All)    | 0.206     | 0.130      | 0.363      | 0.261         | 0.153          | 0.400
ResNet101 (Last)  | 0.157     | 0.088      | 0.271      | 0.244         | 0.138          | 0.384
ResNet101 (All)   | 0.215     | 0.138      | 0.379      | 0.265         | 0.162          | 0.408
C2AE              | -         | 0.146      | 0.314      | -             | 0.201          | 0.422
GCN               | 0.120     | 0.051      | 0.183      | 0.223         | 0.110          | 0.339
Ours-Single-task  | 0.290     | 0.214      | 0.482      | 0.284         | 0.192          | 0.439
Ours-Multi-task   | 0.382     | 0.371      | 0.585      | 0.292         | 0.216          | 0.453

Table 1: Quantitative comparison for topic and sentiment prediction with different methods. We run all baselines with their official open-sourced code and benchmark them on the ad dataset. For both tasks, under all metrics, our framework achieves the best performance. The overall performance on topic prediction improves much more, and we owe this to our multimodal-learning-based design.

Comparison with Other Approaches.

Table 1 presents both the topic and sentiment results of each model. Our framework (DeepAD) performs the best and significantly outperforms all baseline models on all evaluation metrics, which showcases the effectiveness of the proposed method for ads understanding. For a more in-depth analysis, we observe that even a very simple model with multimodal information improves the performance considerably, which points out the direction for image ads understanding: we should pay more attention to multimodal information extraction and fusion rather than to designing ever deeper networks such as ResNet, DenseNet and so on. We also observe that the performance of topic prediction increases more than that of sentiment prediction. We attribute this to two reasons: 1) the text in image ads exposes more information to our framework and leads to better topic results (e.g., some text may state the topic directly); 2) from the mAP column of the sentiment results, we note that using ResNet alone already achieves comparable results, which means that the human emotion evoked by an ad depends more on the whole image. In other words, the hierarchical multimodal attention module over objects, words and modalities is more effective for topic prediction.

The improvements over each class for the two tasks are shown in Figures 3(a) and 3(b). As the figures show, the average precision of all classes in the topic task is improved, even when the number of ads in a class is very small (e.g., the petfood class only contains 29 images). This demonstrates that our framework can mitigate the data imbalance issue. We also notice that the larger improvements do not occur in the classes with more samples. This indicates that: 1) our framework does not simply depend on more modality information and more training samples; 2) the visual rhetoric can be captured by our unsupervised training, because the top-3 improved classes are "seasoning", "smoking alcohol abuse" and "animal right", which use rhetorical design widely. For sentiment prediction, almost all classes are improved except two, "persuaded" and "proud", which decrease by about 1%. This may be attributed to misleading signals from multitask training (i.e., our framework pays more attention to local features for better topic prediction, which may produce erroneous signals for sentiment prediction on very rare samples).

(a) The improvements over each class for the topic task.
(b) The improvements over each class for the sentiment task.
Figure 3: The improvements of our proposed framework over each class on the two ad understanding tasks. The value attached to each bar represents the number of samples of the corresponding label in the test set. On this imbalanced dataset, our model achieves significant improvements over almost all classes in both tasks.
Figure 4: The influence of different hyper-parameter settings. The left figure shows the trend for learning rates ranging from 0.0001 to 0.1; our framework is quite stable over a large range, which indicates that it is not very sensitive to the learning rate. The right figure illustrates the influence of dropout; since dropout is used to prevent overfitting and our framework is barely affected by it, the model does not appear to overfit on the dataset.

Parameter Sensitivity Analysis.

Figure 5: The influence of different framework settings. The left figure shows the influence of different feature dimensions: as the dimension increases, the performance first increases and then decreases, because too few dimensions lead to information loss while too many dimensions introduce noise. The right figure shows the influence of the balance coefficients: we fix $\beta$ and tune $\alpha$, and as $\alpha$ becomes larger, both the topic and sentiment prediction performance first improve and then level off.

We present the test results under different parameters to verify the robustness of our framework. Figure 4(a) illustrates the influence of the learning rate. The best performance is achieved at 0.001, and in general the performance is quite stable under small learning rates. Dropout is used to alleviate overfitting of the network. We apply dropout between layers and, as Figure 4(b) shows, our framework performs well under all dropout settings, which suggests that we do not over-fit the model to the dataset.

The dimension of the multimodal features is also an important hyper-parameter. After being processed by the shared bottom module, all features have the same dimension. We set different dimension values to check the influence, as shown in Figure 5(a). The mAP first increases and then drops: too small a dimension causes information loss, while too large a dimension may introduce noise into the feature vectors.

The influence of the balance coefficients $\alpha$ and $\beta$ is demonstrated in Figure 5(b). We fix one and adjust the other to check the influence. As $\alpha$ controls the topic loss, it is not surprising that the topic performance increases as its value increases. Moreover, as the topic performance increases, the sentiment performance also improves, which proves the effectiveness of our multitask loss.

Ablation Study

We conduct a detailed ablation study to verify the effectiveness of each module in our framework (Table 2). 1) The rhetoric autoencoder component contributes more to topic prediction than to sentiment prediction. This meets our expectation, since this component is only applied to the object-level features, which contribute more to topics. 2) Similarly, the hierarchical attention module performs remarkably well on the topic task. The reason is that this module mimics human visual attention and can thus extract important local features, while the sentiment task is influenced more by global features. 3) The multitask loss improves the sentiment prediction considerably. We owe this to the relevance between topic and sentiment: with the first two components, we already obtain a better topic representation, which can assist the sentiment representation.

Removed Module           | Topic mAP | Topic F1-C | Topic F1-O | Sentiment mAP | Sentiment F1-C | Sentiment F1-O
- Rhetoric AutoEncoder   | -1.77%    | -5.20%     | -2.50%     | -0.82%        | -1.50%         | -1.30%
- Hierarchical Attention | -2.92%    | -6.00%     | -3.30%     | +0.36%        | -1.00%         | -0.30%
- Multitask Loss         | -0.65%    | -1.40%     | -0.70%     | -0.36%        | -4.50%         | -2.30%

Table 2: Ablation study. We keep the other parts fixed and remove one specific component at a time to verify its contribution. The first two components contribute more to the topic task, while the multitask loss contributes more to the sentiment task. The former are beneficial for understanding regional features of ads, and the latter indicates that topic understanding promotes sentiment understanding.

Related Work

In this section, we summarize some closely related works on ads analysis. Ads-related research has a long history. Some early works focus on predicting the click-through rates of ads using low-level visual features [2, 4]. In [14, 17], approaches are developed to predict how much human viewers will like an ad by capturing their facial expressions. User affect and saliency are utilized in [16, 26] to determine the best placement of a commercial in a video stream, or of image ads within an image. [20, 6] proposed to detect whether the video currently shown on TV is a commercial or not, and a solution for detecting human trafficking advertisements is provided in [21]. A method for extracting the object being advertised from commercial videos is proposed in [28] by looking for recurring patterns (e.g., logos). Human facial reactions, ad placement and recognition, and logo detection are quite distinct from our goal of understanding the messages of ads. Although much effort has been devoted to ads analysis, uncovering the meaning of ads has attracted little attention. One of the main reasons is the lack of datasets, and this issue is tackled by [8], whose authors made great efforts to collect and release a new dataset for image and video ads understanding. In [8], some baselines are presented for each single prediction task, while our framework unifies the topic and sentiment prediction.

Conclusion and Future Work

In this paper, we present a novel deep learning based framework to simultaneously predict the topic and sentiment of advertisements. Our model is able to effectively integrate the different multimodal information contained in an ad for joint topic and sentiment prediction, and the feature importance is well exploited using the proposed hierarchical multimodal attention module. Extensive experiments are conducted on a recently proposed dataset, and we provide a benchmark for comparison. Compared to other related approaches, our framework improves the performance on both prediction tasks significantly. The ablation study shows the effectiveness of the proposed modules. In the future, we plan to adopt neural architecture search (NAS) techniques to further improve the performance.

References

  • [1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang (2018) Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pp. 6077–6086. Cited by: Implementation Details.
  • [2] J. Azimi, R. Zhang, Y. Zhou, V. Navalpakkam, J. Mao, and X. Fern (2012) Visual appearance of display ads and its effect on click through rate. In CIKM, pp. 495–504. Cited by: Related Work.
  • [3] Z. Chen, X. Wei, P. Wang, and Y. Guo (2019) Multi-label image recognition with graph convolutional networks. In CVPR, pp. 5177–5186. Cited by: Introduction, 3rd item, Compared Approaches.
  • [4] H. Cheng, R. v. Zwol, J. Azimi, E. Manavoglu, R. Zhang, Y. Zhou, and V. Navalpakkam (2012) Multimedia features for click prediction of new ads in display advertising. In SIGKDD, pp. 777–785. Cited by: Related Work.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, pp. 248–255. Cited by: 1st item.
  • [6] J. M. Gauch and A. Shivadas (2006) Finding and identifying unknown commercials using repeated video sequence detection. CVIU 103 (1), pp. 80–88. Cited by: Related Work.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: Introduction, 1st item.
  • [8] Z. Hussain, M. Zhang, X. Zhang, K. Ye, C. Thomas, Z. Agha, N. Ong, and A. Kovashka (2017) Automatic understanding of image and video advertisements. In CVPR, pp. 1705–1715. Cited by: Introduction, Dataset, Related Work.
  • [9] (2019) Investopedia. Note: https://www.investopedia.com/news/facebook-google-digital-ad-market-share-drops-amazon-climbs/ Accessed: 2019-08-21. Cited by: Introduction.
  • [10] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Implementation Details.
  • [11] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, et al. (2017) Visual genome: connecting language and vision using crowdsourced dense image annotations. IJCV 123 (1), pp. 32–73. Cited by: Implementation Details.
  • [12] L. Liu, A. Finch, M. Utiyama, and E. Sumita (2016) Agreement on target-bidirectional lstms for sequence-to-sequence learning. In AAAI, Cited by: Introduction.
  • [13] R. Madhok, S. Mujumdar, N. Gupta, and S. Mehta (2018) Semantic understanding for contextual in-video advertising. In AAAI, Cited by: Introduction, Introduction.
  • [14] D. McDuff, R. El Kaliouby, J. F. Cohn, and R. W. Picard (2014) Predicting ad liking and purchase intent: large-scale analysis of facial responses to ads. IEEE TAC 6 (3), pp. 223–235. Cited by: Related Work.
  • [15] T. Mei, X. Hua, L. Yang, and S. Li (2007) VideoSense: towards effective online video advertising. In ACM MM, pp. 1075–1084. Cited by: Introduction, Introduction.
  • [16] T. Mei, L. Li, X. Hua, and S. Li (2012) ImageSense: towards contextual image advertising. ACM TOMM 8 (1), pp. 6. Cited by: Related Work.
  • [17] G. Okada, K. Masui, and N. Tsumura (2018) Advertisement effectiveness estimation based on crowdsourced multimodal affective responses. In CVPR Workshops, pp. 1263–1271. Cited by: Related Work.
  • [18] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. NIPS-W. Cited by: Implementation Details.
  • [19] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In NIPS, pp. 91–99. Cited by: Implementation Details.
  • [20] J. M. Sánchez, X. Binefa, and J. Vitrià (2002) Shot partitioning based recognition of tv commercials. MTAP 18 (3), pp. 233–247. Cited by: Related Work.
  • [21] R. J. Sethi, Y. Gil, H. Jo, and A. Philpot (2013) Large-scale multimedia content analysis using scientific workflows. In ACM MM, pp. 813–822. Cited by: Related Work.
  • [22] R. Smith (2007) An overview of the tesseract ocr engine. In ICDAR, Vol. 2, pp. 629–633. Cited by: Introduction, Implementation Details.
  • [23] N. Vedula, W. Sun, H. Lee, H. Gupta, M. Ogihara, J. Johnson, G. Ren, and S. Parthasarathy (2017) Multimodal content analysis for effective advertisements on youtube. In ICDM, pp. 1123–1128. Cited by: Introduction, Introduction.
  • [24] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu (2016) Cnn-rnn: a unified framework for multi-label image classification. In CVPR, pp. 2285–2294. Cited by: Compared Approaches.
  • [25] C. Xiang, T. V. Nguyen, and M. Kankanhalli (2015) Salad: a multimodal approach for contextual video advertising. In ISM, pp. 211–216. Cited by: Introduction.
  • [26] K. Yadati, H. Katti, and M. Kankanhalli (2013) CAVVA: computational affective video-in-video advertising. IEEE TMM 16 (1), pp. 15–23. Cited by: Related Work.
  • [27] C. Yeh, W. Wu, W. Ko, and Y. F. Wang (2017) Learning deep latent space for multi-label classification. In AAAI, Cited by: Introduction, 2nd item, Compared Approaches.
  • [28] G. Zhao, J. Yuan, J. Xu, and Y. Wu (2011) Discovering the thematic object in commercial videos. IEEE MultiMedia 18 (3), pp. 56–65. Cited by: Related Work.