Neural Aesthetic Image Reviewer

by Wenshan Wang, et al.
Fudan University

Recently, there has been rising interest in perceiving image aesthetics. Existing works treat image aesthetics as a classification or regression problem. To extend cognition from rating to reasoning, a deeper understanding of aesthetics should reveal why a high or low aesthetic score is assigned to an image. From this point of view, we propose a model, referred to as the Neural Aesthetic Image Reviewer, which can not only give an aesthetic score for an image but also generate a textual description explaining why the image merits that score. Specifically, we propose two multi-task architectures based on shared aesthetically semantic layers and task-specific embedding layers at a high level, improving performance on both tasks. To facilitate research on this problem, we collect the AVA-Reviews dataset, which contains 52,118 images and 312,708 comments in total. Through multi-task learning, the proposed models can rate aesthetic images as well as produce comments in an end-to-end manner. Evaluation on the AVA-Reviews dataset confirms that the proposed models outperform the baselines. Moreover, we demonstrate experimentally that our model can generate textual reviews related to aesthetics that are consistent with human perception.






1 Introduction

The problem of perceiving image aesthetics, styles, and qualities has been attracting increasing attention. The goal is to train an artificial intelligence (AI) system able to perceive aesthetics as humans do. In the context of computer vision, automatically perceiving image aesthetics has many applications, such as personal album management systems, picture-editing software, and content-based image retrieval systems.

Figure 1: Examples clarifying the problem: the goal is to perceive image aesthetics as well as generate reviews or comments. One image is predicted as high-aesthetic with the comment “Fantastic colors and great sharpness”; the other is predicted as low-aesthetic with the comment “The focus seems a little soft.”

In the literature, the problem of perceiving image aesthetics is formulated as a classification or regression problem. However, human vision and intelligence can not only perceive image aesthetics, but also generate reviews or explanations, which express aesthetic insights in natural language. Motivated by this, we propose a new task of automatically generating aesthetic reviews or explanations. Imagine a scenario in which a mobile-phone user takes a photo and uploads it to an AI system; the system automatically predicts a score indicating a high or low aesthetic rating, and generates reviews or comments that express aesthetic insights or give advice on how to take photos consistent with aesthetic principles.

Formally, we consider the novel problem as two tasks: one is assessment of image aesthetics, and the other is generating textual descriptions related to aesthetics, which is important in that it enables understanding image aesthetics not only phenomenologically but also in terms of underlying mechanisms. In Fig. 1, we motivate the problem with a couple of examples. The problem we explore here is to predict images of interest as high- or low-aesthetic based on visual features while producing the corresponding comments. For instance, as shown in Fig. 1, our model can not only perform prediction on image aesthetics, but also produce the comment “Fantastic colors and great sharpness” for the top-left image. In contrast to the image aesthetics classification task [3, 14, 15], we extend it by adding a language generator that produces aesthetic reviews or insights, which helps in understanding aesthetics. Unlike image captioning [24, 8, 25], where existing works focus on producing factual descriptions while neglecting descriptions of style, we explore the problem of generating descriptions in terms of aesthetics.

To facilitate this research, we collect the AVA-Reviews dataset from Dpchallenge, which contains 52,118 images and 312,708 comments in total. On the Dpchallenge website, users can not only rate each image with an aesthetic score, but also give reviews or explanations that express aesthetic insights. For instance, comments such as “an amazing composition” express high aesthetics, while comments like “too noisy” indicate low aesthetics. Motivated by this observation, we use the users’ comments from Dpchallenge as our training corpus.

Figure 2: Our multi-task framework based on CNN plus RNNs, which consists of an aesthetic image classifier and a language generator.

In this paper, we propose a novel model, referred to as the Neural Aesthetic Image Reviewer (NAIR), which can not only distinguish high- from low-aesthetic images but also automatically generate aesthetic comments based on the corresponding visual features of such images. Motivated by [24], we construct the NAIR model by adopting the convolutional neural network (CNN) plus recurrent neural networks (RNNs) framework, which encodes a given image into a vector of fixed dimension and decodes it into the target description. As shown in Fig. 2, our framework consists of an aesthetic image classifier based on a CNN, which predicts images as high- or low-aesthetic, and a language generator based on RNNs, which produces reviews or comments from high-level CNN features. Specifically, we propose two novel architectures based on shared aesthetically semantic layers and task-specific embedding layers at a high level to improve performance on both tasks through multi-task learning. The motivation for these models is that sharing high-level layers in neural networks is a common practice for improving performance across tasks. We evaluate the NAIR model on the AVA-Reviews dataset. The experimental results show that the proposed models improve performance on both tasks compared to the baselines. Moreover, we demonstrate that our model can produce textual comments consistent with human intuition on aesthetics. Our contributions are summarized as follows:

  • The problem we explore here is whether computer vision systems can perceive image aesthetics as well as generate reviews or explanations as humans do. To the best of our knowledge, this is the first work to investigate this problem.

  • By incorporating shared aesthetically semantic layers at a high level, we propose an end-to-end trainable NAIR architecture, which can approach the goal of performing aesthetic prediction as well as generating natural-language comments related to aesthetics.

  • To enable this research, we collect the AVA-Reviews dataset, which contains 52,118 images and 312,708 comments. We hope that this dataset will promote research on vision-language-based image aesthetics.

2 Related Work

In this section, we review two related topics:

One topic is deep features for aesthetic classification. The typical pipeline for image aesthetics classification is to classify the aesthetic image of interest as high- or low-aesthetic in a supervised-learning setting. Since the work of [11], powerful deep features have shown good performance on various computer vision tasks, which inspires the use of deep features to improve classification performance for image aesthetics. Below, we review recent works based on deep CNN features. Existing approaches [9, 14, 15] aim to improve classification performance by extracting both global- and local-view patches from high-resolution aesthetic images. Yet, improved classification accuracy does little to reveal the underlying mechanisms of human or machine perception of image aesthetics. In view of this limitation, we extend the traditional aesthetic image assessment task by adding a language generator, which helps explain the mechanisms of image aesthetics and reason about where people's visual attention falls from an aesthetic perspective.

The other topic is generation of vision-to-language descriptions. Automatically generating textual descriptions of images is an important problem in artificial intelligence. Recent works take advantage of deep learning to generate natural-language descriptions of image contents, because deep-learning-based methods generally promise superior performance. A typical pipeline is the encoder-decoder architecture, transferred from neural machine translation to computer vision. Modelling the probability distribution over the space of visual features and textual sentences enables generating novel sentences. For instance, to generate textual descriptions for an image, [24] proposes an end-to-end CNN-LSTM architecture, where the CNN feature is used as a signal to start the LSTM. [26] improves image captioning with an attention mechanism, where different regions of the image are selectively attended when generating each word. [25, 28] achieve significant improvements on image captioning with high-level concepts/attributes predicted by a CNN. [8] and [10] employ the rich information of individual regions in images to generate dense image captions.

In the literature, despite the progress made in image captioning, most existing approaches generate only factual descriptive sentences. Notably, recent research has shifted toward producing non-factual descriptions. For example, [17] generates image captions with sentiments, and [5] proposes StyleNet to produce humorous and romantic captions. However, generating image descriptions related to art and aesthetics remains an open problem. In this work, we propose an end-to-end multi-task framework that can not only classify aesthetic images but also generate sentences about aesthetics for images.

3 Neural Models for Image Aesthetic Classification and Vision-to-Language Generation

In this section, we introduce the deep learning architectures for image aesthetic classification and vision-to-language generation. Since deep neural models achieve superior performance on various tasks, a deep CNN [14, 15] is used for image aesthetic classification. Further, the CNN plus RNNs architecture [24] is used for generating vision-to-language descriptions.

3.1 Image Aesthetic Classification

Here, we adopt the single-column CNN framework [14] for image aesthetic classification. For the task of binary classification, the learning process of the CNN is as follows. Given a set of training examples {(x_i, y_i)}_{i=1..N}, where x_i is a high-resolution aesthetic image and y_i ∈ {0, 1} its aesthetic label, we minimize the cross-entropy loss defined as

L_cls(θ_c) = − Σ_{i=1..N} log p(y_i | x_i; θ_c),    (1)

where p(y_i | x_i; θ_c) is the probability output of the softmax layer, and θ_c is the weight set of the CNN.
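For concreteness, the cross-entropy loss above can be computed from softmax outputs as in the following minimal NumPy sketch (the function name and toy inputs are our illustration, not the paper's code):

```python
import numpy as np

def cross_entropy_loss(probs, labels):
    """Mean negative log-likelihood of the correct aesthetic class.

    probs  : (N, 2) softmax outputs for the {low, high} aesthetic classes
    labels : (N,)   ground-truth class indices (0 = low, 1 = high)
    """
    eps = 1e-12  # guard against log(0)
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(picked + eps)))

probs = np.array([[0.9, 0.1],   # confidently low-aesthetic
                  [0.2, 0.8]])  # confidently high-aesthetic
labels = np.array([0, 1])
loss = cross_entropy_loss(probs, labels)  # small, since both predictions are correct
```

Minimizing this quantity pushes the softmax probability of the correct class toward one for every training image.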

3.2 Vision-to-Language Generation

Figure 3: Unrolled LSTM model along with an image representation based on CNN and word embedding.
Figure 4: The proposed models based on CNN plus RNNs architecture. It consists of an image aesthetic classification part, which performs binary classification using the single-column CNN, and a vision-to-language generation part, which generates natural-language comments using pipelined LSTM models. Model-I includes the shared aesthetically semantic layer. Model-II includes the task-specific embedding layer.

Following the previous works [21, 24, 4, 27], we adopt the CNN plus RNNs architecture to generate textual descriptions for images. The key of these approaches is to encode a given image into a fixed-dimension vector using a CNN, and then decode it into the target description. Formally, suppose a training example pair (I, S) is given, where I denotes an image and S = (S_0, S_1, …, S_N) a textual description of length N. The goal of description generation is to minimize the loss function

L_gen(θ_r) = − log p(S | V; θ_r),    (2)

where log p(S | V; θ_r) is the log probability of the correct description S given the visual features V. Since the model generates the words of the target sentence one by one, the chain rule can be applied to learn the joint probability over the context words. Thus, the log probability of the description is given by the sum of the log probabilities over the words:

log p(S | V) = Σ_{t=0..N} log p(S_t | V, S_0, …, S_{t−1}).    (3)

At the training stage, we minimize the loss as shown in (3) to capture the contextual relationship among words.

Naturally, we model the log probability as described in (3) with Long Short-Term Memory (LSTM), a variant of RNN capable of learning long-term dependencies [7]. We train the LSTM model to generate descriptions for images in an unrolled form, as shown in Fig. 3. First, we feed the LSTM model the words S_0, …, S_N, where S_0 and S_N represent a special START token and a special END token of the description. At time step t = −1, we set x_{−1} = CNN(I), where the input image I is represented by the CNN. From time t = 0 to t = N − 1, we set x_t = W_e S_t; the LSTM then computes the hidden state h_{t+1} and the output probability vector p_{t+1} via its recurrence, where W_e denotes the weights of the word embedding [18].
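The unrolled input construction can be sketched as follows (a hypothetical minimal version; the token ids, helper name, and the convention of predicting the START token from the image step are our own illustration, since implementations differ on this detail):

```python
import numpy as np

START, END = 0, 1  # hypothetical ids for the special START and END tokens

def unroll_inputs(image_vec, word_ids, W_e):
    """Build the unrolled LSTM inputs and prediction targets.

    image_vec : CNN feature for the image, projected to the embedding size
    word_ids  : token ids of the comment, without the special tokens
    W_e       : (vocab_size, embed_dim) word-embedding matrix
    """
    tokens = [START] + list(word_ids) + [END]
    # x_{-1} is the image; x_0 .. x_{N-1} are the embedded words S_0 .. S_{N-1}
    inputs = np.stack([image_vec] + [W_e[t] for t in tokens[:-1]])
    targets = tokens  # the LSTM predicts S_0 .. S_N (END terminates decoding)
    return inputs, targets
```

Each input row is paired with the next token as its prediction target, so the END token is only ever a target, never an input.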

Following [24], we feed the image representation to the LSTM model as input, and then use BeamSearch to generate a description at the testing stage.
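BeamSearch itself is model-agnostic: it keeps the top-scoring partial sequences at each step instead of committing greedily. A generic sketch over any next-token distribution might look like this (the `step_fn` interface and the toy bigram table are our illustration, not the paper's implementation):

```python
import heapq
import math

def beam_search(step_fn, start_token, end_token, beam_size=3, max_len=10):
    """Generic beam search over a next-token distribution.

    step_fn(prefix) -> dict {token: prob} for the next token given the prefix.
    Returns the highest log-probability sequence ending in end_token.
    """
    beams = [(0.0, [start_token])]  # (cumulative log-prob, sequence)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq[-1] == end_token:
                finished.append((logp, seq))
                continue
            for tok, p in step_fn(seq).items():
                candidates.append((logp + math.log(p), seq + [tok]))
        if not candidates:
            break
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    finished.extend(b for b in beams if b[1][-1] == end_token)
    return max(finished, key=lambda c: c[0])[1] if finished else beams[0][1]

# Toy next-token table keyed on the last token of the prefix.
table = {"<S>": {"a": 0.6, "b": 0.4}, "a": {"</S>": 1.0},
         "b": {"c": 1.0}, "c": {"</S>": 1.0}}
best = beam_search(lambda seq: table[seq[-1]], "<S>", "</S>")
# best == ["<S>", "a", "</S>"]: the higher-probability path wins
```

In the paper's setting, `step_fn` would be one LSTM step producing a softmax over the vocabulary, with a beam size of 20 (Section 5.2).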

4 Neural Aesthetic Image Reviewer

Overview: Our model aims to predict high- or low-aesthetic categories of images while automatically generating natural-language comments related to aesthetics. We solve this as a multi-task problem in an end-to-end deep-learning manner. Similar to [24], we adopt the CNN plus RNNs architecture, as illustrated in Fig. 4. The architecture consists of an image aesthetic classification part and a vision-to-language generation part. In the image aesthetic classification part, we use the single-column CNN framework for binary classification. The CNN also produces a high-level visual feature vector for an image, which is fed to the vision-to-language part. In the vision-to-language part, we use LSTM-based models to generate natural-language comments, feeding the high-level visual feature vector as input to the pipelined LSTM models. During inference, the procedure of generating textual descriptions of visual scenes is illustrated in Fig. 3. When training the multi-task model, we repeatedly feed training instances to the model, each consisting of an image, the corresponding aesthetic category label, and a ground-truth comment. Given a test image, the model automatically predicts the image as high- or low-aesthetic while outputting a comment.

By utilizing multi-task learning, we propose three neural models based on the CNN plus RNNs architecture to approach the goal of performing aesthetic prediction as well as generating natural-language comments related to aesthetics. Below, we describe the details of the proposed models.

Multi-task baseline: Here, we propose a baseline multi-task framework. One natural way is to directly sum up the loss L_cls for image aesthetic classification and the loss L_gen for textual description generation, which is formulated as

L = λ1 · L_cls + λ2 · L_gen,    (4)

where L is the joint loss for both tasks, and λ1, λ2 control the relative importance of the image aesthetic classification task and the language generation task, respectively; they are set based on validation data.

In this model, we minimize the joint loss function by updating the weights of the CNN components and the RNNs components simultaneously. Through experiments, we find that updating the weights of the CNN components has a negative effect.
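The weighted combination is straightforward; as a concrete sketch (the function name and the example loss values are ours):

```python
def joint_loss(loss_cls, loss_gen, lam_cls=1.0, lam_gen=1.0):
    """Joint objective: a weighted sum of the classification and generation
    losses. The two weights are hyperparameters chosen on the validation set."""
    return lam_cls * loss_cls + lam_gen * loss_gen

total = joint_loss(0.6, 2.4, lam_cls=1.0, lam_gen=0.5)  # 0.6 + 1.2 ≈ 1.8
```

Down-weighting one term shifts the optimizer's attention toward the other task, which is why the weights are tuned on held-out data.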

Model-I: Motivated by the multi-task baseline, we minimize the joint loss function while fixing the weights of the CNN components and introducing a shared aesthetically semantic layer, which allows the two tasks to share information at a high level, as illustrated in Fig. 4(a). Suppose that we have the visual feature vector V as image representation. Then, the shared aesthetically semantic layer can be formulated as

s = f(W_s · V),    (5)

where f(·) can be a non-linear function such as the ReLU function, and W_s is the weight matrix to be learned.

Model-II: A potential limitation of Model-I is that some task-specific features cannot be captured by the shared aesthetically semantic layer. To address this problem, we introduce a task-specific embedding layer for each task in addition to the shared aesthetically semantic layer described in Model-I.

Formally, we introduce a classification-specific embedding layer e_c and a generation-specific embedding layer e_g for the image aesthetic classification task and the vision-to-language generation task, respectively, defined as

e_c = f(W_c · V),   e_g = f(W_g · V),    (6)

where W_c and W_g are learnable parameters. In addition, we have the shared aesthetically semantic layer s defined in Equation (5). We then concatenate the task-specific embedding vector and the shared aesthetically semantic vector as the final feature representation for each task, as shown in Fig. 4(b).
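A shape-level sketch of the Model-II feature construction follows (random weights and the helper name are illustrative only, not the trained model; dimensions match Section 5.2: a 2,048-d visual feature and 256-unit layers):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def model_ii_features(v, W_s, W_c, W_g):
    """Build the per-task feature vectors of Model-II.

    v   : CNN visual feature vector (e.g. 2,048-d from Inception-v3)
    W_s : weights of the shared aesthetically semantic layer
    W_c : weights of the classification-specific embedding layer
    W_g : weights of the generation-specific embedding layer
    """
    s = relu(W_s @ v)                           # shared layer, used by both tasks
    f_cls = np.concatenate([relu(W_c @ v), s])  # classifier input
    f_gen = np.concatenate([relu(W_g @ v), s])  # language-generator input
    return f_cls, f_gen

v = rng.standard_normal(2048)
W_s = rng.standard_normal((256, 2048)) * 0.01
W_c = rng.standard_normal((256, 2048)) * 0.01
W_g = rng.standard_normal((256, 2048)) * 0.01
f_cls, f_gen = model_ii_features(v, W_s, W_c, W_g)  # each 512-d
```

The two tasks see the same 256-d shared half but different task-specific halves, which is exactly the division of labor Model-II is designed to provide.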

5 Experiments

We conduct a series of experiments to evaluate the effectiveness of the proposed models based on the newly collected AVA-Reviews dataset.

5.1 AVA-Reviews Dataset

The AVA dataset [19] is one of the largest datasets for studying image aesthetics, containing more than 250,000 images downloaded from a social network, namely Dpchallenge. Each image has a large number of aesthetic scores, ranging from one to ten, obtained via crowdsourcing. Further, the Dpchallenge website allows users to rate and comment on images. These reviews express users’ insights into why they rate an image as they do, and further give guidelines on how to take photos. For instance, users write comments such as “amazing composition”, “such fantastic cropping and terrific angle”, or “very interesting mix of warm and cold colors, good perspective” to explain high ratings. In contrast, users write reviews such as “too much noise”, “a bit too soft on the focus”, or “colors seem a little washed out” to indicate why they give low ratings. Based on this observation, we use the users’ comments from Dpchallenge as our training corpus.

Here, we describe how examples are extracted from the AVA dataset to compose the AVA-Reviews dataset for evaluation. Following the experimental settings in [19], images with average scores less than 5 − δ are labeled low-aesthetic, while images with average scores greater than or equal to 5 + δ are labeled high-aesthetic, where δ is a parameter used to discard ambiguous images. In the AVA-Reviews dataset, we set δ = 0.5, and then randomly select examples from the high-aesthetic and low-aesthetic images to form the training set, validation set, and testing set, respectively. Besides, we crawl all the comments from Dpchallenge for each image in the AVA-Reviews dataset; each image has six comments. The statistics of the AVA-Reviews dataset are shown in Table 1.
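The split rule can be sketched as follows, assuming the standard AVA convention of thresholds at 5 ± δ (the function name and the None return for discarded images are our own illustration):

```python
def aesthetic_label(mean_score, delta=0.5, mid=5.0):
    """Label an image by its mean AVA score.

    Returns 1 (high-aesthetic) for scores >= mid + delta,
    0 (low-aesthetic) for scores < mid - delta,
    and None for ambiguous images, which are discarded.
    """
    if mean_score >= mid + delta:
        return 1
    if mean_score < mid - delta:
        return 0
    return None
```

With delta = 0.5, an image averaging 5.2 falls in the discarded band, while 6.1 and 3.4 map cleanly to the high and low categories.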

Train Validation Test
high-aesthetic images 20,000 3,000 3,059
low-aesthetic images 20,000 3,000 3,059
Total Images 40,000 6,000 6,118
Reviews 240,000 36,000 36,708
Table 1: Statistics of the AVA-Reviews dataset

5.2 Parameter Settings and Implementation Details

Following [24], we perform basic tokenization on the comments in the AVA-Reviews dataset. We filter out words that appear fewer than four times, resulting in a vocabulary of 13,400 unique words. Each word is represented as a “one-hot” vector with dimension equal to the size of the word dictionary, which is then transformed into a 512-dimensional word-embedding vector. Besides, we use a 2,048-dimensional vector as the image representation, output by an Inception-v3 model pre-trained on ImageNet.
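The vocabulary construction can be sketched as follows (the special-token names and the fallback-to-`<UNK>` convention are our assumption, not stated in the paper):

```python
from collections import Counter

def build_vocab(comments, min_count=4):
    """Map each word appearing at least min_count times to an integer id;
    rarer words fall back to an <UNK> token at lookup time."""
    counts = Counter(w for c in comments for w in c.lower().split())
    words = ["<S>", "</S>", "<UNK>"]  # special tokens first (our convention)
    words += sorted(w for w, n in counts.items() if n >= min_count)
    return {w: i for i, w in enumerate(words)}

vocab = build_vocab(["great colors"] * 4 + ["too soft"] * 3)
# "great" and "colors" survive (count 4); "too"/"soft" (count 3) do not
```

A one-hot vector for word w is then simply a zero vector of vocabulary size with a 1 at index vocab[w], and the embedding lookup reduces to selecting row vocab[w] of the 512-dimensional embedding matrix.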

We implement the model using the open-source software TensorFlow [1]. Specifically, we set the number of LSTM units to 512. To avoid overfitting, we apply the dropout technique [6] to the LSTM variables with a keep probability of 0.7. We initialize all weights from a random uniform distribution, except for the weights of the Inception-v3 model. Further, in Model-I, we set the shared aesthetically semantic layer to 512 units. In Model-II, we set the shared aesthetically semantic layer and the task-specific embedding layers to 256 units each.

To train the proposed models, we use stochastic gradient descent with a fixed learning rate of 0.1 and a mini-batch size of 32. For multi-task training, we tune the weights λ1 and λ2 on the validation set to obtain the optimal values. At the testing stage, we use the BeamSearch approach with a beam size of 20 to generate a comment for an input image.

Model Overall accuracy BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE CIDEr
IAC 72.96 - - - - - - -
V2L - 47.0 24.3 13.0 6.9 10.4 24.4 5.2
MT baseline 70.57 45.2 23.5 12.7 5.9 9.7 24.1 4.2
Model-I 75.05 47.1 24.9 13.8 7.0 11.0 25.0 5.5
Model-II 76.46 49.5 26.4 14.5 7.4 11.5 26.1 6.0
Table 2: Performance of the proposed models on the AVA-Reviews dataset. We report overall accuracy, BLEU-1,2,3,4, METEOR, ROUGE, and CIDEr. All values are percentages (%).

5.3 Evaluation Metrics

We evaluate the proposed method in two ways:

For the image aesthetic assessment model, we report the overall accuracy used in [16], [14], and [15], formulated as

Overall accuracy = (TP + TN) / (P + N),    (7)

where TP, TN, P, and N denote the true positive examples, true negative examples, total positive examples, and total negative examples, respectively.
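As a one-line sketch (the counts in the example are hypothetical, chosen only to match the 6,118-image test split size):

```python
def overall_accuracy(tp, tn, pos, neg):
    """Fraction of correctly classified images: (TP + TN) / (P + N)."""
    return (tp + tn) / (pos + neg)

# Hypothetical counts on a balanced 3,059 + 3,059 test split:
acc = overall_accuracy(2300, 2310, 3059, 3059)
```

Because the test split is balanced, this coincides with the mean of per-class recall; on an imbalanced split the two would differ.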

To evaluate the proposed language generation model, we use four metrics: BLEU@N [20], METEOR [12], ROUGE [13], and CIDEr [23]. For all metrics, larger values mean better performance. The evaluation source code is released by the Microsoft COCO Evaluation Server [2].

5.4 Baselines for Comparison

To evaluate the performance of the proposed multi-task models, we compare them with the following models: two single-task models (image aesthetic classification and vision-to-language) and a multi-task baseline.

  • Image Aesthetic Classification (IAC): We implement the single-column CNN framework [14] to predict the image of interest as high- or low-aesthetic. Unlike [14], we use the more powerful Inception-v3 model [22], pre-trained on ImageNet, as our architecture. Following [22], the size of every input image is fixed to 299 × 299 × 3.

  • Vision-to-Language (V2L): We implement a language generation model conditioned on images, following Neural Image Caption [24], which is based on the encoder-decoder architecture. We train this model on the AVA-Reviews dataset.

  • Multi-task baseline (MT baseline): We implement this model by minimizing the loss function as shown in (4). Note that we change the weights of the CNN components and the weights of the RNNs components at the same time.

5.5 Experimental Results

Table 2 reports the performance of the proposed models on the AVA-Reviews dataset. IAC, V2L, and MT baseline denote the models for comparison, namely the image aesthetic classification model, the vision-to-language model, and the multi-task baseline, respectively. From Table 2, we observe that (1) the MT baseline achieves inferior results, suggesting that changing the weights of the CNN components has a negative impact; (2) compared to IAC, V2L, and the MT baseline, the proposed Model-I achieves roughly a 4.0 average improvement in image aesthetic prediction and a 2.0 improvement in vision-to-language generation, which suggests that the high-level shared aesthetically semantic layer improves the performance of both tasks; and (3) compared to Model-I, the proposed Model-II further improves both aesthetic prediction and vision-to-language generation, suggesting that combining task-specific and task-shared information further strengthens the multi-task architecture.

Some representative examples produced by the proposed models are illustrated in Fig. 5. We observe that the proposed models not only achieve satisfactory image aesthetic prediction, but also generate reviews consistent with human cognition. These examples show that the comments produced by the proposed models offer insight into the principles of image aesthetics across a broad spectrum of contexts. The proposed models generate reviews like “soft focus”, “distracting background”, and “too small” to suggest why images obtain low aesthetic scores. On the other hand, our model also yields phrases like “great details” and “very nice portrait” to give reasons for assigning high aesthetic scores. Compared to the ground-truth comments, we see that the proposed models can learn aesthetic insights from the training data.

Figure 5: Typical examples generated by the proposed models. For each image we list the ground-truth aesthetic score and category, the ground-truth comments, the model's prediction, and the generated comment:

  • 3.4 (low-aesthetic). Ground truth: “Focus is too soft here. Needs to be sharper. It’s a little blurry.” Prediction: low-aesthetic. Generated: “This would have been better if it was more focus.”
  • 3.7 (low-aesthetic). Ground truth: “Too saturated. Not well focused.” Prediction: low-aesthetic. Generated: “The background is a little distracting.”
  • 4.2 (low-aesthetic). Ground truth: “Would help if the picture was larger.” Prediction: low-aesthetic. Generated: “The image is too small.”
  • 6.1 (high-aesthetic). Ground truth: “Great detail on this fine bird nicely seperated from background. A very good image.” Prediction: high-aesthetic. Generated: “Great detail in the feathers.”
  • 5.5 (high-aesthetic). Ground truth: “This is beautiful. The rim lighting on that plant is perfect.” Prediction: high-aesthetic. Generated: “The simplicity of this shot.”
  • 5.6 (high-aesthetic). Ground truth: “Love the light but the leaves look a bit soft, the composition is also very nice.” Prediction: high-aesthetic. Generated: “The lighting is a bit harsh.”
  • 5.9 (high-aesthetic). Ground truth: “Tricky shot to nail. Fantastic creatures.” Prediction: high-aesthetic. Generated: “Nice capture with the composition and the colors.”
  • 6.08 (high-aesthetic). Ground truth: “Really good expression captured. Beautiful huge eyes and great expression and pose.” Prediction: high-aesthetic. Generated: “Very nice portrait with the composition.”

6 Conclusion

In this paper, we investigate whether computer vision systems can perform image aesthetic prediction as well as generate reviews explaining why the image of interest merits its rating, in the manner of human cognition. To this end, we collect the AVA-Reviews dataset. Specifically, we propose two end-to-end trainable neural models based on the CNN plus RNNs architecture, namely the Neural Aesthetic Image Reviewer. By incorporating shared aesthetically semantic layers and task-specific embedding layers at a high level for multi-task learning, the proposed models improve the performance of both tasks. Indeed, the proposed models can perform image aesthetic classification and generation of natural-language comments simultaneously to aid human-machine cognition.


  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016.
  • [2] X. Chen, H. Fang, T. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325, 2015.
  • [3] Y. Deng, C. C. Loy, and X. Tang. Image aesthetic assessment: An experimental survey. IEEE Signal Processing Magazine, 34(4):80–106, July 2017.
  • [4] J. Donahue, L. A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):677–691, April 2017.
  • [5] C. Gan, Z. Gan, X. He, J. Gao, and L. Deng. Stylenet: Generating attractive visual captions with styles. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [6] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
  • [7] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, Nov. 1997.
  • [8] J. Johnson, A. Karpathy, and L. Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4565–4574, June 2016.
  • [9] L. Kang, P. Ye, Y. Li, and D. Doermann. Convolutional neural networks for no-reference image quality assessment. In Computer Vision and Pattern Recognition, pages 1733–1740, 2014.
  • [10] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, M. S. Bernstein, and L. Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, May 2017.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems, pages 1097–1105, 2012.
  • [12] A. Lavie and A. Agarwal. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 228–231, Stroudsburg, PA, USA, 2007. Association for Computational Linguistics.
  • [13] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In S. S. Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
  • [14] X. Lu, Z. Lin, H. Jin, J. Yang, and J. Z. Wang. Rapid: Rating pictorial aesthetics using deep learning. IEEE Transactions on Multimedia, 17(11):2021–2034, 2015.
  • [15] X. Lu, Z. Lin, X. Shen, R. Mech, and J. Z. Wang. Deep multi-patch aggregation network for image style, aesthetics, and quality estimation. In IEEE International Conference on Computer Vision, pages 990–998, 2015.
  • [16] Y. Luo and X. Tang. Photo and video quality evaluation: Focusing on the subject. In Proceedings of the 10th European Conference on Computer Vision: Part III, ECCV ’08, pages 386–399, Berlin, Heidelberg, 2008. Springer-Verlag.
  • [17] A. Mathews, L. Xie, and X. He. Senticap: Generating image descriptions with sentiments. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 3574–3580. AAAI Press, 2016.
  • [18] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
  • [19] N. Murray, L. Marchesotti, and F. Perronnin. Ava: A large-scale database for aesthetic visual analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2408–2415, June 2012.
  • [20] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics.
  • [21] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., 2014.
  • [22] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, June 2016.
  • [23] R. Vedantam, C. L. Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4566–4575, June 2015.
  • [24] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3156–3164, June 2015.
  • [25] Q. Wu, C. Shen, L. Liu, A. Dick, and A. v. d. Hengel. What value do explicit high level concepts have in vision to language problems? In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 203–212, June 2016.
  • [26] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2048–2057. JMLR Workshop and Conference Proceedings, 2015.
  • [27] T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei. Boosting image captioning with attributes. CoRR, abs/1611.01646, 2016.
  • [28] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4651–4659, June 2016.