With the explosive growth of online visual data, the demand for image aesthetic assessment in many multimedia applications has increased dramatically. Typically, this assessment process seeks to evaluate the aesthetic level of each image according to rules commonly agreed upon by human visual perception, ranging from fine-grained local textures and lighting details to high-level semantic layout and composition. These highly subjective and ambiguous perceptual metrics pose formidable challenges to designing intelligent agents that automatically and quantitatively measure image aesthetics, especially with conventional hand-crafted features.
Following recent advances in deep convolutional neural networks, researchers have explored various data-driven learning approaches for aesthetic assessment and have reported impressive results in the past few years [24, 25, 33, 34], benefiting from several image aesthetic benchmarks [27, 16]. However, the inherent shortcomings of these datasets still deter us from continuously scaling up the volume of training data and improving performance: (1) the sizes of existing image aesthetic datasets are far from enough to feed the latest neural networks with very deep architectures, since the labor involved in manual labeling is prohibitively expensive and developing an aesthetics-invariant data augmentation solution remains an open problem; (2) meanwhile, subjective human annotations are often strongly biased towards personal aesthetic preference, and thus require an excessive amount of data to neutralize such inconsistency for reliable training.
Due to the poor scalability and consistency of available aesthetics datasets, it is becoming increasingly attractive to break these bottlenecks of purely supervised training with unsupervised [9, 35], or more specifically, self-supervised features [2, 5, 31, 41]. The main idea of self-supervision is to design pretext tasks that are naturally available, contextually relevant, and capable of providing a proxy loss as the training signal to guide the learning of features consistent with the real target. As a result, we can significantly broaden the scope of training data without introducing any further annotation cost.
In the context of image aesthetic assessment, designing such a proper self-supervised pretext task can be challenging due to the absence of an in-depth understanding of the human perception system. However, although it is difficult to quantitatively measure the aesthetic score, predicting the relative influence of certain controlled image editing operations can be much easier. Therefore, we make two key observations to help design the task (see Fig. 1): (1) some parametric image manipulation operations, such as blurring, pixelation, and patch shuffling, are very likely to have a consistent negative impact on image aesthetics; and (2) the degree of impact due to such degradation increases monotonically with the values of the corresponding operation parameters.
In this work, motivated by the strong correlation between image aesthetic levels and some degradation operations, we propose, to the best of our knowledge, the very first self-supervised learning scheme for image aesthetic assessment. The core idea behind our approach is to extract aesthetics-aware features with two novel self-supervision pretext tasks on distinguishing the type and strength of image degradation operations. To improve the training efficiency, we also introduce an entropy-based weighting strategy to filter out image patches with less useful training signals. The experimental results demonstrate that our self-supervised aesthetics-aware feature learning is able to achieve promising performance on available aesthetics benchmarks such as AVA and AADB, and outperforms a wide range of commonly adopted self-supervision schemes, including context prediction, jigsaw puzzle [18, 41], and rotation recognition.
In summary, our main contributions include:
We propose a simple yet effective self-supervised learning scheme to extract useful features for image aesthetic assessment without using manual annotations.
We present an entropy-based weighting strategy to help strengthen meaningful training signals in manipulated image patches.
On three image aesthetic assessment benchmarks, our approach outperforms other self-supervised counterparts and even works better than models pre-trained on ImageNet or Places datasets using a large number of labels.
Image aesthetic assessment
has been extensively studied in the past two decades. Conventional solutions use hand-crafted feature extractors designed with domain expertise to model the aesthetic aspects of images [1, 15]. More recently, the power of deep neural networks has made it possible to learn feature representations that surpass hand-crafted ones. Typical approaches include distribution-based objective functions [34, 13], multi-level spatially pooled features, attention-based learning schemes, and attribute/semantics-aware models [24, 30]. The advance of learning-based image aesthetic assessment has also inspired a number of practical solutions for various usage scenarios such as clothing and food.
Self-supervised feature learning
can be considered a type of unsupervised learning, which intends to learn useful representations without manual annotations. Many effective pretext tasks have been proposed in this direction, such as context prediction, colorization, split-brain, and RotNet. The representations learned from self-supervised schemes turn out to be useful for many downstream tasks, e.g., tracking, re-identification, and image generation.
Aesthetics-aware image manipulations
can be generally divided into two categories: (1) Rule-based approaches leverage empirical knowledge and domain expertise to enforce well-established photographic heuristics such as gamma correction and histogram equalization. (2) Data-driven approaches focus on learning powerful feature representations from examples to improve aesthetics in certain aspects, such as aesthetics-aware image enhancement [4, 11], image blending, and colorization.
Key Observations
Our self-supervised feature learning approach for image aesthetic assessment is based on two key observations. First, experiments in previous work on image aesthetic assessment benchmarks indicate that inappropriate data augmentation (e.g., brightness/contrast/saturation adjustment, PCA jittering) during the training process results in performance degradation at test time [27, 16]. Second, it is observed that a convolutional neural network (CNN) model trained on manually annotated aesthetic labels can inherently acquire the ability to distinguish fine-grained aesthetic differences caused by various image manipulation methods. In Fig. 2, for instance, some image editing operations (such as Gaussian blur, downsampling, color quantization, and rotation) can increase the prediction confidence of aesthetically negative images, or even turn an aesthetically positive example into a negative one (as shown in the last row of Fig. 2).
Consequently, we argue that these fine-grained perceptual quality cues are closely related to the image aesthetic assessment task, and that meaningful training instances for it can be constructed, without manual annotations, via proper expert-designed image manipulations.
Selected Image Manipulations
According to empirical knowledge, different image editing operations have diverse effects on the manipulated output [23, 40]. Furthermore, some operations require complex parameter settings (e.g., inpainting) and it is often difficult to automatically compare the output to the input in terms of aesthetic level (e.g., grayscale conversion). In our case, however, we need to select image manipulations with easily controllable parameters and predictable perceptual quality.
Specifically, as listed in Table 1, we adopt a variety of image manipulation operations with different parameters to synthesize training instances, including: (1) downsampling by a scaling factor and upsampling back to the original resolution via bilinear interpolation; (2) JPEG compression with a percentage number to control the quality level; (3) Gaussian noise controlled by the variance; (4) Gaussian blur controlled by the standard deviation; (5) color quantization into a small number of levels; (6) brightness change based on a scaling factor; (7) random patch shuffle; (8) pixelation based on a patch size; (9) rotation by a certain degree; and (10) linear blending (mixup) based on a constant alpha value.
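A few of these parametric degradations can be sketched in plain NumPy. This is an illustrative sketch only (function names, parameter conventions, and the `[0, 1]` float image format are our assumptions, not the paper's implementation):

```python
import numpy as np

def gaussian_noise(img, var, rng=None):
    """Add zero-mean Gaussian noise with the given variance to a [0,1] float image."""
    rng = rng or np.random.default_rng(0)
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def brightness(img, factor):
    """Scale pixel intensities by a constant factor, clipped back to [0,1]."""
    return np.clip(img * factor, 0.0, 1.0)

def pixelate(img, k):
    """Replace each k-by-k block with its mean color (pixelation)."""
    h, w = img.shape[:2]
    hh, ww = h - h % k, w - w % k
    blocks = img[:hh, :ww].reshape(hh // k, k, ww // k, k, -1)
    mean = blocks.mean(axis=(1, 3), keepdims=True)
    out = img.copy()
    out[:hh, :ww] = np.broadcast_to(mean, blocks.shape).reshape(hh, ww, -1)
    return out

def patch_shuffle(img, k, rng=None):
    """Randomly permute the k-by-k patches of the image."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    hh, ww = h - h % k, w - w % k
    blocks = img[:hh, :ww].reshape(hh // k, k, ww // k, k, -1)
    blocks = blocks.transpose(0, 2, 1, 3, 4).reshape(-1, k, k, img.shape[-1])
    shuffled = blocks[rng.permutation(len(blocks))]
    shuffled = shuffled.reshape(hh // k, ww // k, k, k, -1)
    out = img.copy()
    out[:hh, :ww] = shuffled.transpose(0, 2, 1, 3, 4).reshape(hh, ww, -1)
    return out
```

Each operation exposes a single scalar parameter (variance, factor, or patch size), which is exactly the property the pretext tasks below exploit: larger parameter values produce stronger, predictable degradation.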
Aesthetics-Aware Pretext Tasks
In this section, we propose our aesthetics-aware pretext tasks in a self-supervised learning scheme from two aspects. On one hand, the ability to categorize different types of image manipulations can be beneficial to learning aesthetics-aware representations. On the other hand, for the same type of editing operation, different control parameters render various aesthetic levels correspondingly, and the tendency of the quality shift is predictable. Take JPEG compression, for instance: decreasing the output image quality parameter always decreases the aesthetic level. Therefore, by constructing images with manipulations that result in predictable degradation behaviors, we can extract meaningful training signals for the fine-grained aesthetics-aware pretext tasks.
Degradation identification loss.
We denote an image patch as $x$, and the manipulation parameter as $\theta_k$ for the $k$-th operation. The loss term of our first pretext task, $\mathcal{L}_{\mathrm{id}}$, reinforces the model to recognize which operation has been applied to $x$:

$$\mathcal{L}_{\mathrm{id}}(x, k) = -\log p_k\big(T(x; \theta_k)\big),$$

where $T(x; \theta_k)$ is the transformed output patch given the image patch $x$ and the parameters $\theta_k$, and $p_k(\cdot)$ is the probability predicted by our model that the patch has undergone a degradation operation of type $k$.
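As a concrete sketch, assuming the standard softmax cross-entropy form of this loss (variable names here are ours, not the paper's):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def identification_loss(logits, op_labels):
    """Cross-entropy that asks the model to name the applied operation.

    logits:    (N, K+1) scores over K operations plus the None class.
    op_labels: (N,) integer index of the operation actually applied.
    """
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(op_labels)), op_labels] + 1e-12))
```

The None class corresponds to an untouched patch, so the classifier cannot succeed by merely detecting that *something* was edited.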
For a comprehensive coverage of image attributes (e.g., resolution, color, spatial structures), we leverage a variety of typical manipulation operations as listed in Table 1. To take better advantage of the synthesized instances, we adopt parameters that always induce aesthetic degradation of observably different patterns. Apart from these distortions, a None operation is also taken into consideration; that is, we require the model to categorize all the editing operations plus the untouched class.
Training with $\mathcal{L}_{\mathrm{id}}$ alone is not enough for the task of aesthetic assessment, since some editing operations may produce low-level artifacts that are easy to detect and will fool the network into learning trivial features of no practical semantics. Our solution to this issue is to encode the information via triplets $(x, \theta_a, \theta_b)$, where $\theta_a$ and $\theta_b$ are two different parameters of a certain operation $k$ in Tab. 1 (except for rotation and exposure). The two parameters are specified to create aesthetic distortions with a predictable relationship, i.e., the patch $x_a$ edited using $\theta_a$ is aesthetically more positive than the patch $x_b$ edited using $\theta_b$. Therefore, in an ideal aesthetics-aware representation, the distance between the original image patch $x$ and $x_a$ should be smaller than the distance between $x$ and $x_b$. In this way, we propose the second task, $\mathcal{L}_{\mathrm{tri}}$:

$$\mathcal{L}_{\mathrm{tri}}(x, k) = \max\big(0,\; \|f(x_a) - f(x)\|_2 - \|f(x_b) - f(x)\|_2 + m\big),$$

where $m$ is a margin and $f(\cdot)$ is the normalized feature extracted from the model given a patch.
It should be noted that we do not apply the triplet loss term from the beginning of the training process, since the representation learned from the identification task in the early stage can oscillate drastically and thus may not be suitable for comparisons of fine-grained features. To combat these training dynamics, we activate the triplet loss after the curve of the identification loss has shown a plateau.
Total loss function.
By putting the degradation identification loss and the triplet loss together, we formulate a new self-supervised pretext task as below:

$$\mathcal{L} = \frac{1}{|\mathcal{B}|} \sum_{x \in \mathcal{B}} \sum_{k \in \mathcal{O}} \big( \mathcal{L}_{\mathrm{id}}(x, k) + \lambda\, \mathcal{L}_{\mathrm{tri}}(x, k) \big),$$

where $\mathcal{B}$ is a mini-batch made up of patches, $\mathcal{O}$ is the set of all the image manipulations, and $\lambda$ is a scaling factor to balance the two terms. The diagram of our proposed self-supervised scheme is shown in Fig. 3.
To improve the training efficiency and reduce noisy signals from misleading instances, we present a simple yet effective entropy-based weighting strategy. Specifically, after warming up for several epochs, we apply an entropy-based weight $w_x$ to each patch $x$:

$$w_x = \max\big(\epsilon,\; 1 - \mathcal{H}(x) / \mathcal{H}_{\max}\big),$$

where $\mathcal{H}(x)$ is the entropy associated with patch $x$, $\mathcal{H}_{\max}$ is its maximum possible value, and $\epsilon$ controls the lower bound of $w_x$ (fixed in our experiments). The rationale behind this strategy is that instances with higher entropy values tend to carry more uncertain visual cues for image aesthetics, and thus should be assigned lower weights in the optimization process.
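A minimal sketch of such a weight, assuming the entropy is measured on a probability distribution associated with the patch (e.g., the model's predicted operation distribution) and normalized by its maximum value; the lower bound `eps` is an assumed constant:

```python
import numpy as np

def entropy_weight(probs, eps=0.1):
    """Down-weight high-entropy (uncertain) patches.

    probs: (..., K) probability distribution per patch.
    eps:   assumed lower bound on the weight.
    Returns max(eps, 1 - H(p) / log K) per patch.
    """
    h = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    h_max = np.log(probs.shape[-1])
    return np.maximum(eps, 1.0 - h / h_max)
```

A uniform (maximally uncertain) distribution is clamped to the lower bound, while a confident one-hot distribution keeps a weight near one.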
Discussions
Our learning scheme is related to prior approaches in which CNNs are trained to discriminate instances from different types of data augmentation methods. Different from these previous methods, we select a set of image manipulation operations and design our loss function carefully so that the learned representation becomes aware of significant patterns of visual aesthetics. Besides, instead of building on selected high-quality photos, we conduct pretext task optimization directly on images from ImageNet. This strategy makes our learning scheme more flexible to use.
One might argue that we ignore some global aesthetic factors, e.g., the rule of thirds. The reasons are twofold. First, global image attributes are more complex to manipulate than the local ones considered in our approach. Second, global factors generally involve semantics, which are not available in the context of self-supervised feature learning. We do not intend to mimic the statistics of real-world visual aesthetics, but to propose pretext tasks that are suitable for visual aesthetic assessment.
Baseline Methods
We compare the performance of our method with five typical self-supervised visual pretext tasks, as listed below:
The context predictor that predicts the relative positions between two square patches cropped from one input image.
The colorization model that predicts plausible color channels from a grayscale version of the input image.
The cross-channel (split-brain) predictor that estimates one subset of the color channels from another within constrained spatial regions.
The primitive counting task that requires the model to generate visual primitives whose number is invariant to transformations including both scaling and tiling.
The rotation predictor (RotNet) that is trained to recognize which of the four 2D rotations (0°, 90°, 180°, and 270°) has been applied to the input image.
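As one concrete example, the instances for the rotation pretext task can be generated trivially; this sketch (ours, not the released code) produces the four rotated copies of an image together with their rotation-class labels:

```python
import numpy as np

def rotation_instances(img):
    """Return the four rotated copies of `img` (0, 90, 180, 270 degrees)
    paired with their class labels 0..3, as used by RotNet-style training."""
    return [(np.rot90(img, k), k) for k in range(4)]
```

Every unlabeled image thus yields four (input, label) pairs for free, which is the general pattern all of these pretext tasks share.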
In our experiments, we use the pre-trained models released by the authors for reliable and fair comparisons: Context (https://github.com/cdoersch/deepcontext), Colorization (https://github.com/richzhang/colorization), Split-brain (https://github.com/richzhang/splitbrainauto), Counting (https://github.com/gitlimlab/Representation-Learning-by-Learning-to-Count), and RotNet (https://github.com/gidariss/FeatureLearningRotNet).
Training Pipeline
Our entire training pipeline (Fig. 4) contains two parts: a self-supervised pre-training stage to learn visual features with unlabeled images, and a task adaptation stage to evaluate how the learned features perform in the task of image aesthetic assessment.
In the pre-training stage (Fig. 4 left), the first few layers share the same structure as AlexNet for fair comparisons. In the task adaptation stage (Fig. 4 right), following the same configurations as prior work, we freeze each model learned in the pre-training and use similar linear classifiers to evaluate the visual features from each convolutional layer for the task of binary aesthetic classification. The channel number of each convolutional layer, together with the dimension of its corresponding fully connected layer, is shown in Fig. 4.
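The linear-probe adaptation can be sketched as follows: features from one frozen layer are fed to a logistic classifier trained on binary aesthetic labels. This is a plain-NumPy illustration under our own assumptions (pre-extracted features, full-batch gradient descent), not the paper's exact configuration:

```python
import numpy as np

def train_linear_probe(feats, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression probe on frozen features.

    feats:  (N, D) feature vectors from one frozen conv layer.
    labels: (N,) binary aesthetic labels in {0, 1}.
    """
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid prediction
        g = p - labels                               # logistic-loss gradient
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

def probe_accuracy(feats, labels, w, b):
    pred = (feats @ w + b) > 0.0
    return float(np.mean(pred == labels))
```

Because the feature extractor stays frozen, the probe's accuracy directly reflects how aesthetics-aware each layer's representation is.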
In the pre-training stage, we first resize the shorter edge of each input image to a fixed size, and then randomly crop one square patch from the resized image. Next, we randomly choose three manipulation operations in Tab. 1 to edit each patch. We apply SGD optimization with Nesterov momentum and weight decay, and drop the initial learning rate by a constant factor at regular epoch intervals. To avoid training oscillation, we activate the triplet loss only after the first several epochs. The following adaptation stage shares the same settings except for a smaller initial learning rate.
Benchmarks for Aesthetic Assessment
Aesthetic Visual Analysis (AVA).
The AVA dataset contains approximately 250,000 images. Each image has about 200 crowdsourced aesthetic ratings in the range of 1 to 10. Following the common practice in [24, 10], we consider images whose average aesthetic scores are no less than 5 as positive instances and adopt the same training/test partition.
Aesthetics with Attributes Database (AADB).
The AADB dataset contains 10,000 images in total, with 8,500, 500, and 1,000 images for training, validation, and testing, respectively. Without loss of generality, we binarize the aesthetic ratings into two classes using a fixed threshold, similar to AVA.
Chinese University of Hong Kong-Photo Quality Dataset (CUHK-PQ).
The CUHK-PQ dataset contains 17,690 images with binary aesthetic labels. Commonly used partitions of this dataset include a random 50/50 split and a five-fold split for cross-validation. We use the former in our experiments.
Results and Discussions
Evaluation of Unsupervised Features
Our experimental results on the three benchmarks are reported in Table 2. In each column, the best numbers are shown in bold font and the second best are highlighted with an underscore. We can make several interesting observations from the table.
The proposed scheme generally works the best on all three benchmarks. It is evident that our approach achieves competitive results consistently, compared with the other baselines on the three datasets, especially for the mid-level layers (e.g., conv3 and conv4). Our self-supervised visual features can even outperform semantics-related features pre-trained with manual labels from ImageNet or Places. Furthermore, from the accuracy perspective, our method is comparable to or works better than existing self-supervised learning schemes in image aesthetic assessment.
Mid-level features achieve the best results in task adaptation. By comparing the performance of different layers, we can see the correlation between image aesthetics and network depth. As shown in all tables, mid-level features (conv3 & conv4) generally outperform high-level features (conv5) and low-level ones (conv1 & conv2) in terms of accuracy. These observations are consistent with [41, 8]. One possible explanation is that, during the back propagation process, conv5 receives stronger training signals for the pretext task than conv4. Consequently, conv5 is more likely to overfit the pretext task, while using conv4 leads to better generalization performance.
In addition to the image attributes used in our pretext tasks, color is another important factor in image aesthetic assessment. Among the other tested pretext tasks, Split-brain and Colorization consistently achieve the better results. This indicates that color and the image attributes in Tab. 1 are the key factors in assessing visual aesthetics. Regarding why RotNet fails to achieve good results on the CUHK-PQ benchmark, we suspect that both the high-quality and low-quality images in this dataset share similar distributions or visual patterns.
|Method|Backbone|Labels in the pre-training|Results (%) with aesthetic labels alone|
|Our pretext task|AlexNet|0|82.0|
|+ non-linear layers|ResNet-18|0|82.8|
Low Data Adaptation
One practical issue that self-supervised learning schemes are able to handle is low data adaptation, where very few labels are available for task adaptation. In our case, we simulate the low data adaptation regime by using only small fractions of the original training data per class in the task adaptation stage. The final results are shown in Fig. 5 and Fig. 6. From these two figures, we can see that our proposed learning scheme generally outperforms baseline models pre-trained with 1000-way labels from ImageNet. As the margin becomes larger when more manual aesthetic labels are used in the task adaptation, our method presents higher data-usage efficiency compared to the vanilla supervised counterpart.
It is also interesting to note that our method and the vanilla supervised method have similar adaptation performance when only a tiny fraction of the training data is available. We believe this is due to the complexity of visual aesthetics, i.e., when aesthetic labels are extremely few, the sampled instances cannot cover the entire distribution faithfully and thus lead to poor assessment results.
Adaptation Using a Non-Linear Classifier
If we use non-linear layers in the task adaptation stage, we can achieve results close to state-of-the-art fully supervised approaches [24, 33, 34, 10] on the AVA benchmark, as shown in Tab. 3. Note that our method does not use the millions of labels from ImageNet during the pre-training, and we only use aesthetic labels in the task adaptation stage. Besides, our accuracies on the AADB benchmark and the CUHK-PQ dataset, using ResNet-18 as the backbone network, are also close to those of the same network pre-trained with millions of labels from ImageNet. Arguably, we manage to achieve similar assessment performance using far fewer manual labels.
Loss terms.
We perform the pre-training with either one of the two pretext tasks and measure the final assessment results. It turns out that joint training using both loss terms generally yields better aesthetic assessment accuracy than using the identification loss or the triplet loss alone, with consistent gains on both the AVA and AADB datasets. We also find that activating the triplet loss from the very beginning of the pre-training is prone to undesirable training dynamics, while the two-stage learning scheme is more stable and consequently leads to better results.
Image editing operations.
We randomly select one operation in Tab. 1 and exclude it from the pre-training stage to analyze the impact on the final assessment performance. We can make several observations from the corresponding results shown in Tab. 4. First, editing operations related to softness, camera shake, and poor lighting are relatively more significant in learning aesthetics-aware features than other operations. Second, image manipulations related to distraction, fuzziness, and noise lead to relatively smaller margins. This fact indicates that such operations may create aesthetically ambiguous instances that do not always provide consistent signals. Interestingly, camera shake has the most significant impact on AVA, while soft/grainy operations are the most important ones for AADB.
Entropy-based weighting.
Without our entropy-based weighting scheme, there is a noticeable drop in performance after task adaptation, and some undesired dynamics occur during the training process. Therefore, by assigning an entropy-based weight to each patch, we can strengthen meaningful training signals from manipulated instances and improve training efficiency.
In this paper, we propose a novel self-supervised learning scheme to investigate the possibility of learning useful aesthetic-aware features without manual annotations. Based on the correlation between negative aesthetic effects and several expert-designed image manipulations, we argue that an aesthetic-aware representation space should distinguish between results yielded by various operations. To this end, we propose two pretext tasks, one to recognize what kind of editing operation has been applied to an image patch, and the other to capture fine-grained aesthetic variations due to different manipulation parameters. Our experimental results on three benchmarks demonstrate that the proposed scheme can learn aesthetics-aware features effectively and generally outperforms existing self-supervised counterparts. Besides, we achieve results comparable to state-of-the-art supervised methods on the AVA dataset without using labels from ImageNet. We hope these findings can help us obtain a better understanding of image visual aesthetics and inspire future research in related areas.
This work was supported in part by the National Natural Science Foundation of China under Nos. 61832016, 61672520, and 61720106006, in part by the Key R&D Program of Jiangxi Province (No. 20171ACH80022), and in part by the CASIA-Tencent Youtu joint research project.
- (2006) Studying aesthetics in photographic images using a computational approach. In ECCV, pp. 288–301.
- (1994) Learning classification with unlabeled data. In NIPS, pp. 112–119.
- (2009) ImageNet: a large-scale hierarchical image database. In CVPR, pp. 248–255.
- (2018) Aesthetic-driven image enhancement by adversarial learning. In ACM MM, pp. 870–878.
- (2015) Unsupervised visual representation learning by context prediction. In ICCV, pp. 1422–1430.
- (2014) Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, pp. 766–774.
- (2018) Unsupervised person re-identification: clustering and fine-tuning. TOMM 14 (4), pp. 83.
- (2018) Unsupervised representation learning by predicting image rotations. In ICLR.
- (2006) Reducing the dimensionality of data with neural networks. Science 313 (5786), pp. 504–507.
- (2019) Effective aesthetics prediction with multi-level spatially pooled features. In CVPR, pp. 9375–9383.
- (2018) Exposure: a white-box photo post-processing framework. TOG 37 (2), pp. 26.
- (2018) Learning to blend photos. In ECCV, pp. 70–86.
- (2018) Predicting aesthetic score distribution through cumulative Jensen-Shannon divergence. In AAAI, pp. 77–84.
- (2017) PatchShuffle regularization. arXiv preprint arXiv:1707.07103.
- (2006) The design of high-level features for photo quality assessment. In CVPR, pp. 419–426.
- (2016) Photo aesthetics ranking network with attributes and content adaptation. In ECCV, pp. 662–679.
- (2012) ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105.
- (2017) Colorization as a proxy task for visual understanding. In CVPR, pp. 6874–6883.
- (2017) RankIQA: learning from rankings for no-reference image quality assessment. In ICCV, pp. 1040–1049.
- (2015) Deep multi-patch aggregation network for image style, aesthetics, and quality estimation. In ICCV, pp. 990–998.
- (2019) High-fidelity image generation with fewer labels. In ICML, pp. 4183–4192.
- (2011) Content-based photo quality assessment. In ICCV, pp. 2206–2213.
- (2018) End-to-end blind image quality assessment using deep neural networks. TIP 27 (3), pp. 1202–1213.
- (2017) A-Lamp: adaptive layout-aware multi-patch deep convolutional neural network for photo aesthetic assessment. In CVPR, pp. 722–731.
- (2016) Composition-preserving deep photo aesthetics assessment. In CVPR, pp. 497–506.
- (2015) Discovering beautiful attributes for aesthetic image analysis. IJCV 113 (3), pp. 246–266.
- (2012) AVA: a large-scale database for aesthetic visual analysis. In CVPR, pp. 2408–2415.
- (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, pp. 69–84.
- (2017) Representation learning by learning to count. In ICCV, pp. 5898–5906.
- (2019) Image aesthetic assessment assisted by attributes through adversarial learning. In AAAI, pp. 679–686.
- (2016) Context encoders: feature learning by inpainting. In CVPR, pp. 2536–2544.
- (2018) Gourmet photography dataset for aesthetic assessment of food images. In SIGGRAPH Asia 2018 Technical Briefs.
- (2018) Attention-based multi-patch aggregation for image aesthetic assessment. In ACM MM, pp. 879–886.
- (2018) NIMA: neural image assessment. TIP 27 (8), pp. 3998–4011.
- (2008) Extracting and composing robust features with denoising autoencoders. In ICML, pp. 1096–1103.
- (2018) Tracking emerges by colorizing videos. In ECCV, pp. 391–408.
- (2017) Deep cropping via attention box prediction and aesthetics assessment. In ICCV, pp. 2186–2194.
- (2018) Aesthetic-based clothing recommendation. In WWW, pp. 649–658.
- (2018) Mixup: beyond empirical risk minimization. In ICLR.
- (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pp. 586–595.
- (2016) Colorful image colorization. In ECCV, pp. 649–666.
- (2017) Split-brain autoencoders: unsupervised learning by cross-channel prediction. In CVPR, pp. 1058–1067.
- (2018) Places: a 10 million image database for scene recognition. TPAMI 40 (6), pp. 1452–1464.