Automatic image aesthetic assessment aims to endow computers with the ability to perceive aesthetics as humans do. It plays an important role in many real-world applications, such as image recommendation, photo organization and image enhancement [1, 2, 3, 4]. Early attempts in this area focused on handcrafted features based on known aesthetic principles such as the rule of thirds, simplicity or diagonal rules [5, 6, 7, 8, 9]. However, most photographic rules are descriptive and therefore difficult to model mathematically.
Deep learning methods have shown great success in various computer vision tasks [10, 11, 12, 13], and more and more researchers try to apply them to image aesthetic assessment [14, 15, 16]. But most of these networks ignore fine-grained information, which is quite important in aesthetic prediction. To tackle this problem, a previous study represented the image with one randomly sampled patch from the original high-resolution image. However, the aesthetic attributes in one randomly cropped patch may not represent the fine-grained information of the entire image well. Recently, Lu et al. proposed a multi-patch aggregation network (DMA-Net) to extract local fine-grained features from multiple randomly cropped patches. This method achieves some promising results, but it ignores the global spatial layout information. Considering this, Ma et al. proposed a layout-aware framework in which an attribute graph is added to DMA-Net. However, the nodes of the attribute graph need to be predefined, which limits its practical applicability. Besides, all of the above-mentioned studies treat global and local feature extraction as two distinct tasks, whereas in the human visual system these two kinds of features are highly correlated.
It is universally acknowledged that humans perceive scenes with a mixture of high-acuity foveal vision and coarser peripheral vision [20, 21]. The former has the highest density of cones and is responsible for encoding fine-grained details. The latter contains a significantly lower density of cones and is mainly used for encoding the broad spatial scene and seeing large objects [21, 22]. More importantly, peripheral vision also actively participates in the attentional selection of the visual space to be processed by the fovea. Considering the above observations, we mimic this process and develop Gated Peripheral-Foveal Convolutional Neural Networks (GPF-CNN). It is a dedicated double-subnet neural network. The input of the first subnet is a downsampled low-resolution image. We refer to this image as the peripheral view and denote the first subnet as the peripheral subnet. The peripheral subnet is composed of a bottom-up feed-forward network to encode the global composition and a top-down neural attention feedback process to create a saliency map. We use the saliency map to determine the regions of the peripheral view on which we wish to extract fine-grained details. The input of the second subnet is a high-resolution image, denoted as the foveal view or simply the fovea. We refer to the second subnet as the foveal subnet. Figure 1 shows an example of the attention map, the peripheral view and the foveal view. The model selects a foveal window from the peripheral view with the guidance of top-down neural attention. The corresponding region of the high-resolution image is then cropped for extracting fine-grained details. Finally, features extracted by the foveal subnet are fused with features extracted by the peripheral subnet.
Recent studies show that foveal vision and peripheral vision play different roles in processing different visual stimuli [21, 24]. Categories such as portrait and animal rely more on fine-grained detail information for aesthetic decisions, so they are associated more with foveal representations. Other categories, such as landscape and architecture, rely more on global shape and large-scale integration, so they are associated more with peripheral representations. Motivated by these findings, we propose a gated information fusion network to weight the foveal and peripheral branches adaptively: if one branch is better suited to a given image, the gating layer directs more information through it by increasing the value of the corresponding gating node.
Overall, this paper makes the following contributions.
A biologically inspired structure is proposed. With this structure, networks can automatically focus on the key regions of the top-down neural attention map to extract fine-grained details. By doing so, we not only establish a relationship between the global and local features, but also preserve semantic integrity, as demonstrated in the experimental section.
We have also developed a gated information fusion module which can adaptively weight the contributions of the global layout and local fine-grained features according to the input. By combining the weighted global and local features, the proposed module can greatly boost the performance.
We conduct comprehensive experiments for unified aesthetic prediction tasks: aesthetic classification, aesthetic regression and aesthetic label distribution. For all these tasks, the proposed model achieves superior performance over the state-of-the-art approaches on public datasets.
The remainder of this paper is organized as follows. In Section II, we briefly summarize the related work. In Section III, we introduce the architecture of the GPF-CNN model. In Section IV, we quantitatively evaluate the effectiveness of the proposed model and compare it with state-of-the-art methods. Finally, we wrap up with conclusions and ideas for future work in Section V.
II Related Work
Contemporary image aesthetic assessment research can be roughly outlined by two important components: the extraction of more advanced features and the utilization of more sophisticated learning algorithms. Thus, we summarize previous research from these two perspectives: visual representations and learning algorithms.
II-A Visual Representations
There is a vast literature on the problem of designing effective features for aesthetic assessment, starting with early seminal work and leading to the recent works of [6, 7, 9]. These features are based on human aesthetic perception and photographic rules. For example, Datta et al. extracted features to model photographic techniques such as the rule of thirds, colorfulness, or saturation. Tang et al. modeled the photographic rules (composition, lighting, and color arrangement) by extracting visual features according to the variety of photo content. Nishiyama et al. proposed to use bags of color patterns to model color harmony in aesthetics. Later work by Zhang et al. focused on constructing small-sized connected graphs to encode image composition information. However, the above methods with hand-designed features achieve only limited success because 1) such hand-crafted features cannot be applied to all image categories, since photographic rules vary considerably among different images; and 2) these handcrafted features are heuristic, and some photography rules are difficult to quantify mathematically.
Recently, some researchers have tried to apply deep learning networks to image aesthetic quality assessment. Tian et al. proposed a query-dependent aesthetic model with deep learning for aesthetic quality assessment. Their method suffers from degraded accuracy since they only use the networks as feature extractors. Kao et al. explored deep multi-task networks to leverage semantic information for image aesthetic prediction. Different from the aforementioned methods, [17, 18, 19] focused on the fixed-size input constraint of deep networks when applied to aesthetic prediction. The inputs need to be transformed via scaling, cropping, or padding before being fed into the neural network. Images after these transformations often lose the holistic information and the high-resolution fine-grained details. Lu et al. tried to tackle this problem by proposing a double-column network called RAPID. In particular, they represented the global view via a padded or warped image and the local view via a single randomly cropped patch. In order to capture more high-resolution fine-grained details, Lu et al. extended RAPID to a deep multi-patch aggregation network (DMA-Net). In DMA-Net, the input image was represented with a bag of randomly cropped patches, and two network layers (statistics and sorting) were used to aggregate the multiple patches. However, DMA-Net failed to encode the global layout of the image. Ma et al. tried to address this limitation by adding an object-based attribute graph to DMA-Net. Their method relies on a strong assumption: the number of attribute graph nodes must be given in advance, which is inapplicable in most cases. Our work is also related to fusing global and local features for aesthetic prediction. It not only makes full use of the attention mechanism, but also adaptively weights the global and local features according to the inputs.
II-B Learning Algorithms
Most early studies formulated image aesthetic assessment as a binary classification problem. They classified images into high or low aesthetic quality based on a threshold on the weighted mean of the human rating scores. Other research, such as [15, 16], used regression models to predict the aesthetic score. However, image aesthetic quality assessment is highly subjective: the scores given by different people may differ greatly due to differences in cultural background. Thus a scalar value is insufficient to convey the degree of consensus or diversity of opinion among annotators. Considering this, recent research focuses on directly predicting the label distribution of the scores. Jin et al. proposed a new CJS loss to predict the aesthetic label distribution. Murray et al. used the Huber loss to predict the aesthetic score distribution, but they predicted each discrete probability independently. Talebi et al. treated the score distribution as ordered classes and used a squared EMD loss to predict the score distributions. In this paper, similarly, we optimize our networks by minimizing the EMD loss.
III Gated Peripheral and Foveal Vision Convolutional Neural Networks
The proposed model includes two subnets: the peripheral subnet and the foveal subnet. Given a high-resolution image, the image is first downsampled and then fed into the peripheral subnet, which is responsible for encoding the global composition and providing the key region. Then, a top-down back-propagation pass is performed to calculate the attention map, which is informative about the model's decisions. Based on the neural attention map, the attended region is selected and fed into the foveal subnet. A gated information fusion (GIF) module then effectively weights the features extracted by the two subnets. The overall architecture of the model is shown in Figure 2.
Traditional methods often formulate aesthetic assessment as binary classification, as we have discussed earlier. The binary labels are typically derived from a distribution of scores (e.g., from www.dpchallenge.com and www.photo.net): the mean score of the distribution is computed and thresholded. However, a single binary label removes useful information carried by the ground-truth score distribution, such as the variance and the median, which is helpful for investigating the consensus and diversity of opinions among annotators. Thus, in this paper, we formulate aesthetic assessment as a label distribution prediction problem. Each image in the dataset comes with its ground-truth (user) ratings. Let $p = \{p_{s_1}, p_{s_2}, \dots, p_{s_N}\}$ denote the score distribution of an image, where $s_i$ represents the $i$-th score bucket, $N$ is the total number of score buckets, and $p_{s_i}$ denotes the number of voters that give a discrete score of $s_i$ to the image. For the AVA dataset, $N = 10$, $s_1 = 1$ and $s_N = 10$; for the Photo.net dataset, $N = 7$, $s_1 = 1$ and $s_N = 7$ (a detailed introduction of the AVA and Photo.net datasets can be found in Section IV). The score distributions are $\ell_1$-normalized as a preprocessing step, so that $\sum_{i=1}^{N} p_{s_i} = 1$. When we predict the score distribution, the mean score can be obtained via
$\mu = \sum_{i=1}^{N} s_i \times p_{s_i}$. Then we can perform the classification and regression tasks. The loss function used in our paper is defined as

$$\mathrm{EMD}(p, \hat{p}) = \left( \frac{1}{N} \sum_{k=1}^{N} \left| \mathrm{CDF}_{p}(k) - \mathrm{CDF}_{\hat{p}}(k) \right|^{r} \right)^{1/r}, \qquad (1)$$

where $\mathrm{CDF}_{p}(k) = \sum_{i=1}^{k} p_{s_i}$ is the cumulative distribution function and $r$ is set to 2 to penalize the Euclidean distance between the CDFs. Our proposed GPF-CNN is applicable to a variety of CNN architectures, such as AlexNet, VGGNet and ResNet, as demonstrated in the experimental section. For fair comparison with most aesthetic assessment methods, we select VGG16 as our baseline.
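As an illustration, the EMD loss used in this paper can be sketched in NumPy (a minimal sketch with a function name of our choosing, assuming the standard form in which the difference between the two CDFs is penalized with r = 2; this is not the training implementation):

```python
import numpy as np

def emd_loss(p, p_hat, r=2):
    """Earth Mover's Distance between two l1-normalized score
    distributions: the r-norm of the difference of their CDFs."""
    p = np.asarray(p, dtype=float)
    p_hat = np.asarray(p_hat, dtype=float)
    cdf_diff = np.cumsum(p) - np.cumsum(p_hat)  # CDF_p(k) - CDF_phat(k)
    return float(np.mean(np.abs(cdf_diff) ** r) ** (1.0 / r))

# identical distributions are at zero distance
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
assert emd_loss(p, p) == 0.0
```

In training, the same computation would be applied to the network's softmax outputs and the ground-truth distributions.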
III-A Top-down Neural Attention Feedback
The fine-grained detail information resides in the original high-resolution image, but training deep networks with large-size inputs requires a significantly larger dataset and more hardware memory. In this work, we use top-down neural attention to discover the most important region of an image, and the network then directs the high-resolution “fovea” to extract fine-grained details. This offers a two-fold bonus. First, it helps to reduce the number of parameters: if we estimated the saliency map via a separate saliency network, the number of learnable parameters would be quite large, increasing both the amount of computation and the difficulty of training. Second, extracting local fine-grained features based on the global network's attention establishes a relationship between the global and local features.
Recently, many methods have been proposed to explore where neural networks “look” in an image for evidence for their predictions [35, 36]. Our work is inspired by the excitation backprop method, which generates a top-down neural attention map based on the probabilistic Winner-Take-All (WTA) model. Given a selected output class, the probabilistic WTA scheme uses a stochastic sampling process to generate a soft attention map. The winning (sampling) probability is defined as

$$P(a_i) = \sum_{a_j \in \mathcal{P}_i} P(a_i \mid a_j) P(a_j), \qquad (2)$$

where $a_i \in \mathcal{N}$ ($\mathcal{N}$ is the overall neuron set) and $\mathcal{P}_i$ is the parent node set of $a_i$ (in top-down order). As Eq. 2 indicates, $P(a_i)$ is a function of the winning probabilities of the parent nodes in the preceding layers. Thus, the winner neurons are recursively sampled in a top-down fashion based on a conditional winning probability $P(a_i \mid a_j)$. The conditional winning probability is defined as
$$P(a_i \mid a_j) = \begin{cases} Z_j \, \hat{a}_i \, w_{ij}, & \text{if } w_{ij} \ge 0, \\ 0, & \text{otherwise,} \end{cases} \qquad (3)$$

where $Z_j$ is a normalization factor, $\hat{a}_i$ is the response of $a_i$, and $w_{ij}$ is the connection weight between $a_i$ and $a_j$. By recursively propagating the top-down signal layer by layer based on Eq. 2 and Eq. 3, we can compute the attention map of the predicted class. The computed attention map indicates which pixels are more relevant to the class. Next, we crop the attended region and zoom in to a finer scale with higher resolution to extract fine-grained features.
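A single top-down step of Eq. 2 and Eq. 3 for one fully connected layer can be sketched as follows (a NumPy sketch under our own naming; the actual method applies this recursively through all layers of the network):

```python
import numpy as np

def propagate_attention(P_parent, W, a_hat):
    """One top-down step of excitation backprop.

    P_parent: winning probabilities of the layer above (parents), shape (m,)
    W:        connection weights, W[i, j] from neuron i to parent j, shape (n, m)
    a_hat:    responses of the current layer's neurons, shape (n,)
    Returns winning probabilities of the current layer, shape (n,).
    """
    W_pos = np.where(W >= 0, W, 0.0)      # keep only excitatory connections
    contrib = a_hat[:, None] * W_pos      # a_hat_i * w_ij
    Z = contrib.sum(axis=0)               # per-parent normalization factor
    Z[Z == 0] = 1.0                       # guard against division by zero
    P_cond = contrib / Z                  # P(a_i | a_j): columns sum to 1
    return P_cond @ P_parent              # sum_j P(a_i | a_j) P(a_j)
```

Because each column of the conditional probability matrix sums to one, the winning probabilities remain a valid distribution as they are propagated downward.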
Attention-based automatic image cropping tries to identify the most important region in the image. It aims to search for the smallest region inside which the summed attention is maximized. Suppose $A$ is a non-negative-valued top-down neural attention map, where larger attention values indicate higher visual importance. Without loss of generality, the attended region can be found by optimizing the following problem:

$$R^{*} = \arg\min_{R} \ \mathrm{Area}(R) \quad \text{s.t.} \quad \sum_{(x,y) \in R} A(x,y) \ge \tau \sum_{(x,y)} A(x,y),$$

where $\tau$ is the minimum percentage of total attention to be preserved, $R^{*}$ is the smallest rectangle that contains $\tau$ percent of the total attention, and $\mathrm{Area}(R)$ is the rectangular area of $R$. It should be emphasized that for a given $\tau$, $R^{*}$ may not be unique.¹ In our algorithm, we always choose the $R^{*}$ with the largest summed attention value. (¹We use the search strategy of prior work and follow the default parameter setting in that paper.)
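The constrained search for the smallest attended rectangle can be illustrated with a brute-force NumPy sketch (the paper adopts an existing, faster search strategy; the function name and the example threshold here are ours):

```python
import numpy as np

def attended_region(A, tau=0.5):
    """Find a small rectangle whose summed attention is at least
    tau * total attention. Brute-force search over all windows using
    an integral image; returns (top, left, height, width)."""
    A = np.asarray(A, dtype=float)
    H, W = A.shape
    total = A.sum()
    S = np.zeros((H + 1, W + 1))              # integral image
    S[1:, 1:] = A.cumsum(0).cumsum(1)

    def window_sum(t, l, h, w):
        return S[t + h, l + w] - S[t, l + w] - S[t + h, l] + S[t, l]

    best = (0, 0, H, W)                       # fall back to the full image
    best_area, best_sum = H * W, total
    for h in range(1, H + 1):
        for w in range(1, W + 1):
            if h * w >= best_area and (h, w) != (H, W):
                continue                      # cannot improve on current best
            for t in range(H - h + 1):
                for l in range(W - w + 1):
                    s = window_sum(t, l, h, w)
                    if s >= tau * total:
                        # prefer smaller area; break area ties by larger sum
                        if h * w < best_area or (h * w == best_area and s > best_sum):
                            best, best_area, best_sum = (t, l, h, w), h * w, s
    return best
```

The chosen rectangle in the peripheral view is then mapped back to the high-resolution image to crop the foveal input.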
III-B Gated Information Fusion (GIF) Network
The GIF module aims to balance the global and local features according to the feature maps. The overall structure is shown in Figure 3. A similar gated information fusion mechanism has been proposed for multi-modal learning. In this paper, we generalize this design and focus on weighting the features by modeling the relationships between channels; the same idea has been adopted in SENet. Let $X_p$ and $X_f$ denote the feature maps from the peripheral subnet and the foveal subnet. The GIF module consists of two parts: the weight generation part and the feature fusion part. In the weight generation part, a global pooling layer $F_{sq}(\cdot)$ is applied before concatenating the feature maps $X_p$ and $X_f$; it squeezes the global spatial features into channel descriptors. Then, a bottleneck with two fully connected (FC) layers is applied in parallel to fully capture the channel-wise dependencies, and a sigmoid gating layer modulates the learned weights. Finally, the weighted feature maps are fed into the fully connected layers and the classification layer. Let $\tilde{X}$ denote the features after concatenation. We summarize the operations of the GIF module as

$$g = \sigma\!\left( W_2 \, \delta\!\left( W_1 \, F_{sq}(\tilde{X}) \right) \right), \qquad \tilde{X}_b = g_b \cdot X_b,$$

where $\delta$ denotes the ReLU function, $W_1$ is a dimensionality-reduction layer and $W_2$ is a dimensionality-increasing layer as defined in SENet, $\tilde{X}_b$ refers to the output features of branch $b$, and $X_b$ denotes the input features of the $b$-th branch.
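The gating computation can be sketched in NumPy as follows (a simplified sketch: the weight shapes and names are ours, biases are omitted, and a real implementation would learn these parameters end-to-end within the network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gif_module(X_p, X_f, W1, W2):
    """Gated information fusion sketch: squeeze each branch's feature
    map by global average pooling, pass the concatenated descriptor
    through an FC bottleneck (ReLU then sigmoid), and reweight each
    branch channel-wise. Shapes: X_* is (C, H, W); W1 is (C_r, 2C),
    W2 is (2C, C_r)."""
    z = np.concatenate([X_p.mean(axis=(1, 2)),   # F_sq: channel descriptors
                        X_f.mean(axis=(1, 2))])  # concatenated -> (2C,)
    g = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))    # gates in (0, 1), shape (2C,)
    C = X_p.shape[0]
    g_p, g_f = g[:C], g[C:]
    # channel-wise reweighting of each branch
    return X_p * g_p[:, None, None], X_f * g_f[:, None, None]
```

Because the gates lie in (0, 1), each branch's contribution is attenuated rather than amplified, and the relative weighting adapts to the input feature maps.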
IV Experiments

In this section, we verify the effectiveness of the proposed photo aesthetic prediction approach on different datasets and CNN architectures. First, we perform ablation studies on the AVA dataset. The training networks include AlexNet, VGGNet, ResNet, and InceptionNet; for all these architectures, our proposed scheme learns to perform better than the original networks. Next, we compare the performance of our scheme with state-of-the-art methods on the AVA and Photo.net datasets.
IV-A Datasets

AVA Dataset: The AVA aesthetic dataset is the largest publicly available aesthetics assessment dataset. The images are collected from www.dpchallenge.com, and each image has aesthetic ratings ranging from one to ten. We use the same partition of training and testing data as previous work [5, 19, 18], i.e., the bulk of the images for training and validation and the rest for testing.
Photo.net Dataset: The Photo.net dataset is collected from www.photo.net. Only a subset of its images have aesthetic label distributions. The distributions (counts) of aesthetic ratings are given on a fixed scale. The labeled images are split into training, validation and test sets.
IV-B Implementation Details and Evaluation Criteria
Considering that the peripheral subnet is used for encoding the global composition features, we do not rescale the input to a fixed size but downsample it while keeping its original aspect ratio, with the longest dimension of the input image held fixed. The training process includes two steps. In the first step, we initialize the convolutional layers in the peripheral subnet with the VGG16 weights pre-trained on ImageNet and train the peripheral subnet with a softmax loss to classify images into the high or low category. After training the peripheral subnet, we can obtain the attended regions by feeding back the top-down neural attention. In the second step, we freeze the convolutional layers of the peripheral subnet and train the foveal subnet and the GIF module. Each input image is normalized through mean RGB-channel subtraction. Both steps adopt the SGD optimization algorithm with momentum, sampling minibatches randomly in each iteration. The learning rate is reduced on a step schedule, and training continues until the validation loss reaches a plateau. We unify the hyper-parameters for the first- and second-step training. Our networks are implemented in the open-source PyTorch framework on an NVIDIA Pascal TITAN X GPU.
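The aspect-ratio-preserving downsampling for the peripheral view can be sketched as follows (the default long-side length `max_dim` is a placeholder of ours, since the exact value used in the paper is not restated here):

```python
def peripheral_size(height, width, max_dim=224):
    """Compute the downsampled size for the peripheral view: the
    longest dimension is fixed to max_dim and the aspect ratio is kept."""
    scale = max_dim / max(height, width)
    return max(1, round(height * scale)), max(1, round(width * scale))

# a 1024x768 photo keeps its 4:3 aspect ratio after downsampling
assert peripheral_size(768, 1024) == (168, 224)
```

The foveal subnet, by contrast, crops its input from the original high-resolution image, so no such rescaling is applied there.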
[Table: Network architecture | Accuracy (%) | SRCC (mean) | LCC (mean) | MAE | RMSE | EMD]
Unlike most traditional methods, which are designed to perform binary classification, we evaluate the proposed method on three aesthetic quality tasks: (i) aesthetic score regression, (ii) aesthetic quality classification, and (iii) aesthetic score distribution prediction. For the aesthetic score regression task, we compute the mean score of the label distribution via $\mu = \sum_{i=1}^{N} s_i \times p_{s_i}$. For aesthetic quality classification, we threshold the mean score just as in the work of [5, 6, 19, 27]: images with predicted scores above the threshold are categorized as high quality and vice versa. The evaluation metrics for the three prediction tasks are as follows.
Image aesthetic score regression: We report the Spearman rank-order correlation coefficient (SRCC), Pearson linear correlation coefficient (LCC), root mean square error (RMSE) and mean absolute error (MAE), which are the most widely used criteria for testing the performance of an IQA method. Of these criteria, SRCC measures prediction monotonicity, while LCC provides an evaluation of prediction accuracy. Both SRCC and LCC range from -1 to 1, and a larger value indicates a better result; for MAE and RMSE, a smaller value indicates a better result.
Image aesthetic quality classification: We report the overall accuracy, defined as the proportion of correctly classified images among all test images.
Image aesthetic score distribution: We report the EMD values. The EMD measures the closeness of the predicted and ground-truth rating distributions with $r = 2$ in Eq. 1.
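The regression criteria above can be computed with a plain NumPy sketch (ties are not average-ranked in the Spearman computation here; library implementations such as scipy.stats.spearmanr handle ties properly):

```python
import numpy as np

def _rank(x):
    # simple ranking (ties are not average-ranked here)
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    return ranks

def regression_metrics(y_true, y_pred):
    """Return (SRCC, LCC, MAE, RMSE) between ground-truth and
    predicted mean aesthetic scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    lcc = float(np.corrcoef(y_true, y_pred)[0, 1])                  # Pearson
    srcc = float(np.corrcoef(_rank(y_true), _rank(y_pred))[0, 1])   # Spearman
    mae = float(np.mean(np.abs(y_true - y_pred)))
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return srcc, lcc, mae, rmse
```

Note that a monotone but nonlinear prediction yields SRCC = 1 while LCC stays below 1, which is why both criteria are reported.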
IV-C Ablation Studies
Traditional methods extract local features based on random cropping. The random cropping method is independent of the image content and is therefore unlikely to capture the semantic meaning. An alternative is to extract fine-grained details based on salient object detection, which performs well provided there is a single salient object. When there are multiple objects, it is difficult to choose the most important one; moreover, for most landscape images there is no salient object at all. Extracting fine-grained features based on neural attention can tackle these challenges. Figure 4 shows some examples of patches cropped with neural attention. Figures 4(a), (c) and (e) have only one subject in the image, and the cropped patches capture the important region while preserving semantic integrity. Figures 4(d) and (f) have multiple objects in the image, but the cropped patches capture both of them.
To validate the neural attention module quantitatively, we construct two baseline models: VGG16 and Random-VGG16. The VGG16 baseline is pre-trained on ImageNet and fine-tuned to predict aesthetic quality; its input is obtained by warping the original input image to a fixed size. Random-VGG16 is a double-column deep convolutional neural network: the first column encodes the global view, and the second column uses the random cropping method with a fixed patch size to extract local fine-grained information. The PF-CNN is a simplification of the proposed GPF-CNN obtained by removing the GIF module; it uses the neural attention to extract fine-grained details, and the attended regions for the foveal subnet are resized to a fixed size in training and testing. For a fair comparison, we use the same network architecture and unify the hyper-parameters. The results are shown in Table LABEL:table:attention. Both Random-VGG16 and PF-CNN achieve better performance than VGG16, which indicates that incorporating local fine-grained features improves the prediction results. This is consistent with the results of [19, 18], which used a random cropping strategy to encode fine-grained details. PF-CNN exceeds both VGG16 and Random-VGG16 by a significant margin, which illustrates the importance of using the attention mechanism to encode the fine details.
To see whether the GIF module is effective, we compare the GPF-CNN with PF-CNN; compared with PF-CNN, GPF-CNN has a GIF module to weight the global and local features. The baseline network is still VGG16, and the detailed parameters of the GIF module in VGG16 are given in Table LABEL:table:GIF_parameter. The comparison results are shown in Table LABEL:table:attention: GPF-CNN performs better than PF-CNN. In conclusion, our experimental results confirm the importance of fusing global and local fine details, and emphasize the critical roles of the neural attention and GIF modules in our framework.
IV-D Extension to Other Network Architectures
We next investigate the performance of the GPF-CNN mechanism on several other architectures: AlexNet, ResNet-16, and InceptionNet. The parameters of the GIF module integrated with AlexNet, ResNet-16 and InceptionNet are shown in Table LABEL:table:GIF_parameter. The gating branch is a bottleneck with two fully connected (FC) layers: a dimensionality-reduction layer, a ReLU, and then a dimensionality-increasing layer. The bottleneck dimensions are set separately for AlexNet, VGG16 and InceptionNet (we have tried other parameters but have not seen any improvements). The comparison results are given in Table LABEL:table:other_network. As in the previous experiments, we observe significant performance improvements induced by the GPF-CNN mechanism.
[Table: Network architecture | Accuracy (%) | SRCC (mean) | LCC (mean) | MAE | RMSE | EMD]
IV-E Content-based Photo Aesthetic Analysis
In this section, we demonstrate the effectiveness of the proposed method on various types of images. We select images of eight categories from the test set of the AVA dataset: animal, landscape, cityscape, floral, food-drink, architecture, portrait, and still-life. The image collection is the same as in the previous works of [5, 19, 42]. In each category, we systematically compare the proposed GPF-CNN with VGG16, Random-VGG16, and PF-CNN. The experimental results are given in Table LABEL:table:category. For all eight categories, Random-VGG16, PF-CNN, and GPF-CNN perform better than VGG16. These results indicate that fine-detail information is quite important for image aesthetic prediction. We also find that the proposed GPF-CNN significantly outperforms the baselines in most of the categories. The portrait category shows the most substantial improvement compared with VGG16. This is because fine details in the face, such as lighting and contrast, are quite important in portrait aesthetic assessment, and the proposed GPF-CNN is sensitive to faces since it uses the neural attention to extract the fine-grained details (see Figure 4(a)).
[Table: Category | Network architecture | Accuracy (%) | SRCC (mean) | LCC (mean) | MAE | RMSE | EMD]
IV-F Comparison with the State-of-the-Art on AVA Dataset
We quantitatively compare our GPF-CNN with several state-of-the-art methods on the AVA dataset: NIMA, MTRLCNN, A-Lamp, MNA-CNN, RAPID, and DMA-Net. Note that the methods of [27, 19, 5, 17, 18] are designed to perform binary classification on the aesthetic scores, so only aesthetic quality classification results are reported for them. Table LABEL:table:AVA shows the comparison results. As shown in the table, our GPF-CNN achieves the best performance across the board. RAPID and DMA-Net are based on shallow networks, and the proposed GPF-CNN (AlexNet) clearly improves on both. For the larger VGG16 network, our GPF-CNN (VGG16) performs slightly worse than A-Lamp but outperforms MTRLCNN and MNA-CNN. Note that A-Lamp only performs binary classification, whereas our method provides richer and more precise information. NIMA is most closely related to our work since it uses the EMD loss to optimize the network; GPF-CNN achieves higher SRCC and LCC than NIMA on VGG16. This is, to the best of our knowledge, the state-of-the-art performance on the AVA dataset.
Figure 5 shows the top six and bottom six images randomly selected from the AVA test set, together with plots of the ground-truth and predicted distributions. The model achieves a high degree of accuracy, with almost perfect reconstruction in some cases. Figure 6 shows some failure cases of our model: it performs poorly on images with strongly non-Gaussian score distributions. However, Gaussian functions fit the score distributions of the vast majority of images in the AVA dataset adequately, as reported by Murray et al.
IV-G Evaluating Performance on Photo.net Dataset
We compare our proposed model on the Photo.net dataset with state-of-the-art models, including deep learning models, VGG16, and traditional feature-extraction models. For VGG16, we directly replace the last layer with a fully connected layer followed by soft-max activations, matching the rating scale of the Photo.net dataset. The comparison results are shown in Table LABEL:table:photo.net. Again, GPF-CNN outperforms the baselines by a large margin, achieving a higher accuracy rate than both MTCNN and VGG16.
[Table: Network architecture | Accuracy (%) | SRCC (mean) | LCC (mean) | MAE | RMSE | EMD]
This paper presents a biologically inspired model for photo aesthetic assessment. In the human visual system, the fovea has the highest visual acuity and is responsible for seeing fine details, while peripheral vision has a significantly lower density of cones and is used for perceiving the broad spatial scene. Moreover, foveal and peripheral vision play different roles in processing different visual stimuli. Inspired by these observations, we propose the GPF-CNN architecture. It learns to focus on the important regions of the top-down neural attention map to extract fine-detail features, and its GIF module adaptively fuses the global and local features according to the input feature maps. The experimental results on the large-scale AVA and Photo.net datasets show that our GPF-CNN can significantly improve the state of the art on three tasks: aesthetic quality classification, aesthetic score regression and aesthetic score distribution prediction. In future work, we will further explore the human visual system and design more powerful models for aesthetic prediction tasks.
-  F.-L. Zhang, M. Wang, and S.-M. Hu, “Aesthetic image enhancement by dependence-aware object recomposition,” IEEE Trans. Multimedia, vol. 15, no. 7, pp. 1480–1490, 2013.
-  A. Samii, R. Měch, and Z. Lin, “Data-driven automatic cropping using semantic composition search,” Computer Graphics Forum, vol. 34, no. 1, pp. 141–151, 2015.
-  S. Bhattacharya, R. Sukthankar, and M. Shah, “A framework for photo-quality assessment and enhancement based on visual aesthetics,” in Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp. 271–280.
-  H. Talebi and P. Milanfar, “Learned perceptual image enhancement.” [Online]. Available: https://arxiv.org/abs/1712.02864
-  L. Mai, H. Jin, and F. Liu, “Composition-preserving deep photo aesthetics assessment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 27-30, 2016, pp. 497–506.
-  L. Guo, Y. Xiong, Q. Huang, and X. Li, “Image esthetic assessment using both hand-crafting and semantic features,” Neurocomputing, vol. 143, pp. 14–26, 2014.
-  X. Tang, W. Luo, and X. Wang, “Content-based photo quality assessment,” IEEE Trans. Multimedia, vol. 15, no. 8, pp. 1930–1943, 2013.
-  C. Chamaret and F. Urban, “No-reference harmony-guided quality assessment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, 2013, pp. 961–967.
-  M. Nishiyama, T. Okabe, I. Sato, and Y. Sato, “Aesthetic quality classification of photographs based on color harmony,” in Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2011, pp. 33–40.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” Trans. Pattern Anal. Mach. Intell., vol. 37, no. 9, pp. 1904–1916, 2015.
-  Z. Jiao, X. Gao, Y. Wang, J. Li, and H. Xu, “Deep convolutional neural networks for mental load classification based on EEG data,” Pattern Recognition, vol. 76, pp. 582–595, 2018.
-  X. Zhang, X. Gao, W. Lu, L. He, and Q. Liu, “Dominant vanishing point detection in the wild with application in composition analysis,” Neurocomputing, vol. 311, pp. 260–269, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
-  X. Tian, Z. Dong, K. Yang, and T. Mei, “Query-dependent aesthetic model with deep learning for photo quality assessment,” IEEE Trans. Multimedia, vol. 17, no. 11, pp. 2035–2048, 2015.
-  S. Kong, X. Shen, Z. L. Lin, R. Mech, and C. C. Fowlkes, “Photo aesthetics ranking network with attributes and content adaptation,” in Proceedings of 14th European Conference on Computer Vision, 2016, pp. 662–679.
-  B. Jin, M. V. O. Segovia, and S. Süsstrunk, “Image aesthetic predictors based on weighted CNNs,” in Proceedings of International Conference on Image Processing, September 25-28, 2016, pp. 2291–2295.
-  X. Lu, Z. L. Lin, H. Jin, J. Yang, and J. Z. Wang, “Rating image aesthetics using deep learning,” IEEE Trans. Multimedia, vol. 17, no. 11, pp. 2021–2034, 2015.
-  X. Lu, Z. Lin, X. Shen, R. Mech, and J. Z. Wang, “Deep multi-patch aggregation network for image style, aesthetics, and quality estimation,” in Proceedings of IEEE International Conference on Computer Vision, December 7-13, 2015, pp. 990–998.
-  S. Ma, J. Liu, and C. W. Chen, “A-lamp: Adaptive layout-aware multi-patch deep convolutional neural network for photo aesthetic assessment,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, July 21-26, 2017, pp. 722–731.
-  H. Strasburger, I. Rentschler, and M. Jüttner, “Peripheral vision and pattern recognition: A review,” Journal of Vision, vol. 11, no. 5, pp. 13–13, 2011.
-  P. Wang and G. W. Cottrell, “Central and peripheral vision for scene recognition: a neurocomputational modeling exploration,” Journal of Vision, vol. 17, no. 4, pp. 9–9, 2017.
-  S. Gould, J. Arfvidsson, A. Kaehler, B. Sapp, M. Messner, G. R. Bradski, P. Baumstarck, S. Chung, and A. Y. Ng, “Peripheral-foveal vision for real-time object recognition and tracking in video,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence, January 6-12, 2007, pp. 2115–2121.
-  C. J. Ludwig, J. R. Davies, and M. P. Eckstein, “Foveal analysis and peripheral selection during active visual sampling,” Proceedings of the National Academy of Sciences, vol. 111, no. 2, pp. E291–E299, 2014.
-  A. M. Larson and L. C. Loschky, “The contributions of central versus peripheral vision to scene gist recognition,” Journal of Vision, vol. 9, no. 10, pp. 6–6, 2009.
-  R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Studying aesthetics in photographic images using a computational approach,” in Proceedings of European Conference on Computer Vision, May 7-13, 2006, pp. 288–301.
-  L. Zhang, Y. Gao, R. Zimmermann, Q. Tian, and X. Li, “Fusion of multichannel local and global structural cues for photo aesthetics evaluation,” IEEE Trans. Image Processing, vol. 23, no. 3, pp. 1419–1429, 2014.
-  Y. Kao, R. He, and K. Huang, “Deep aesthetic quality assessment with semantic information,” IEEE Trans. Image Processing, vol. 26, no. 3, pp. 1482–1495, 2017.
-  L. Marchesotti, F. Perronnin, D. Larlus, and G. Csurka, “Assessing the aesthetic quality of photographs using generic image descriptors,” in Proceedings of IEEE International Conference on Computer Vision, November 6-13, 2011, pp. 1784–1791.
-  M. Kucer, A. C. Loui, and D. W. Messinger, “Leveraging expert feature knowledge for predicting image aesthetics,” IEEE Trans. Image Processing, vol. 27, no. 10, pp. 5100–5112, 2018.
-  X. Jin, L. Wu, X. Li, S. Chen, S. Peng, J. Chi, S. Ge, C. Song, and G. Zhao, “Predicting aesthetic score distribution through cumulative Jensen-Shannon divergence,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, February 2-7, 2018, pp. 77–84.
-  N. Murray and A. Gordo, “A deep architecture for unified aesthetic prediction.” [Online]. Available: https://arxiv.org/abs/1708.04890
-  H. Talebi and P. Milanfar, “NIMA: neural image assessment,” IEEE Trans. Image Processing, vol. 27, no. 8, pp. 3998–4011, 2018.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 26th Annual Conference on Neural Information Processing Systems, December 3-6, 2012, pp. 1106–1114.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition.” [Online]. Available: https://arxiv.org/abs/1409.1556
-  B. Zhou, A. Khosla, À. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 27-30, 2016, pp. 2921–2929.
-  J. Zhang, Z. L. Lin, J. Brandt, X. Shen, and S. Sclaroff, “Top-down neural attention by excitation backprop,” in Proceedings of the 14th European Conference on Computer Vision, October 11-14, 2016, pp. 543–559.
-  J. Chen, G. Bai, S. Liang, and Z. Li, “Automatic image cropping: A computational complexity study,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 27-30, 2016, pp. 507–515.
-  J. Kim, J. Koh, Y. Kim, J. Choi, Y. Hwang, and J. W. Choi, “Robust deep multi-modal learning based on gated information fusion network.” [Online]. Available: https://arxiv.org/abs/1807.06233
-  J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 18-22, 2018, pp. 7132–7141.
-  V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning, June 21-24, 2010, pp. 807–814.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
-  N. Murray, L. Marchesotti, and F. Perronnin, “AVA: A large-scale database for aesthetic visual analysis,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 16-21, 2012, pp. 2408–2415.