
Automatic Image Cropping for Visual Aesthetic Enhancement Using Deep Neural Networks and Cascaded Regression

12/25/2017
by   Guanjun Guo, et al.
Xiamen University
The University of Adelaide
Academia Sinica

Despite recent progress, computational visual aesthetics remains challenging. Image cropping, which refers to the removal of unwanted scene areas, is an important step in improving the aesthetic quality of an image. However, it is difficult to evaluate whether cropping leads to aesthetically pleasing results because the assessment is typically subjective. In this paper, we propose a novel cascaded cropping regression (CCR) method that performs image cropping by learning from the knowledge of professional photographers. The proposed CCR method improves the convergence speed of the cascaded approach that directly uses random-ferns regressors. In addition, a two-step learning strategy is proposed and used in the CCR method to address the lack of labelled cropping data. Specifically, a deep convolutional neural network (CNN) classifier is first trained on large-scale visual aesthetic datasets. The deep CNN model is then used to extract features from several image cropping datasets, upon which the cropping bounding boxes are predicted by the proposed CCR method. Experimental results on public image cropping datasets demonstrate that the proposed method significantly outperforms several state-of-the-art image cropping methods.



I Introduction

Computational image understanding consists not only of image classification, object detection, object tracking and other popular computer vision tasks, but also of the semantic inference of aesthetics from images. Computational inference of aesthetics has a wide range of applications. Image aesthetics can be used to recommend aesthetically pleasing images in an image repository to users. General consumers or designers can utilize the feedback from an automated aesthetic evaluation system to improve their decisions [1]. However, computational inference of image aesthetics remains a challenging task because image aesthetics is highly subjective and difficult to represent with precise mathematical explanations.

Fig. 1: Illustration of the proposed CCR method to crop aesthetic regions in two images. (a) Input images. (b) The cropped images obtained by the proposed method. The black regions are cut off by the proposed method.

Although aesthetic measurement is subjective, some works attempt to learn the visual properties that make photographs aesthetically beautiful [2, 3, 4, 5, 6, 7]. For example, in [2], color, texture and other low-level visual features are extracted to train a two-class classification tree model that predicts the aesthetic quality of an image. Several recent methods [3, 4, 8] have followed this scheme. Specifically, different handcrafted features inspired by aesthetic experience are first extracted. Machine learning models are then trained to identify the visual properties of most relevance to the aesthetic quality. As deep CNNs [9] are successful on many computer vision tasks, where features are automatically learned from raw images, CNNs have also been used for aesthetic categorization of images [10, 11, 12]. Typically, a CNN requires a large number of training samples to alleviate overfitting. A recently published large-scale dataset, the large-scale aesthetic visual analysis (AVA) dataset [13], makes it feasible to train a CNN-based aesthetic classifier. The task of blind image quality assessment (BIQA) is closely related to that of aesthetic categorization, and although BIQA methods (such as [14, 15]) do not need many training samples, they usually focus on high-quality image regions rather than object regions. Thus, these BIQA methods are usually not appropriate for the task of image cropping.

Image cropping is a key step in generating aesthetically pleasing images. A distinction between a professional photographer and an amateur heavily relies on whether one can remove distracting contents and highlight desired subjects to enhance the visual aesthetics of an image. Some composition rules for obtaining aesthetically pleasing photographs, such as the rule of thirds (i.e., an approximation of the Golden Ratio) and the rule of odds, have been developed. Inspired by these composition rules, several image cropping methods, usually based on handcrafted features, have been proposed; representative works are [16, 4]. However, the composition rules may differ across styles of images. For example, portrait photographers often highlight a person while landscape photographers focus more on the interaction among the elements of an image. In Gestalt psychology [17], the concept of goodness of configuration shows that the elements in an image are not isolated: people prefer patterns that have properties such as symmetry and simplicity. Therefore, image cropping for visual aesthetic enhancement is a complicated, high-level cognitive process.

In this paper, we develop a machine learning method to automatically crop images so that the cropped images are aesthetically pleasing. Specifically, a two-step learning approach is proposed to solve the automatic cropping problem with a small number of training samples that contain cropping information. First, a CNN classifier is trained using a large collection of visual aesthetic datasets. The CNN features are then extracted based on the trained classifier. Using the CNN features as the input, a cascaded cropping regression (CCR) method, which combines a set of weak random-ferns regressors [18] into a primitive regressor, is proposed to fit the image cropping information annotated by professional photographers. A key motivation of this two-step learning approach is that the amount of image cropping data is limited, and directly fitting a cropping model using deep CNNs may easily lead to overfitting. Bounding-box labels are very limited and expensive, especially when they are annotated by professionals. However, a large number of weakly labelled images, which have image-level aesthetic labels, are available from the internet. Therefore, this work presents a two-step learning approach that respectively learns a CNN feature extractor and a set of regressors for the task of image cropping. The advantages of the proposed two-step learning approach are as follows. First, the approach leverages a large number of aesthetic image data and thus can effectively learn image features. Second, the proposed CCR method is effective in selecting features and learning cropping information in a cascaded manner. At each stage of regression, CNN features are extracted from the cropping region obtained at the previous stage. The final cropping result is obtained after several stages of regression. The primitive regressor employed in the CCR method is an ensemble of random-ferns regressors selected by using the gradient boosting algorithm [19]. Each primitive regressor indirectly selects features from the CNN features by aggregating the features selected by the set of random-ferns regressors. In contrast to conventional cascaded methods, which directly use one random-ferns regressor as the primitive regressor, the CCR method converges in a few stages, significantly reducing the computational complexity.

To the best of our knowledge, this work is the first to present a cascaded regression method with CNN features for the task of automatic image cropping. We improve the convergence speed of the cascaded regression method based on random-ferns regressors, which makes the proposed image cropping method quite efficient. In addition, we propose a two-step learning strategy for limited labelled cropping data. The proposed image cropping method is effective and significantly outperforms several state-of-the-art image cropping methods. Fig. 1 illustrates the proposed image cropping method for visual aesthetic enhancement. The proposed method can imitate the manipulations of expert photographers to remove distracting regions (such as watermarks and non-subject regions) and highlight a subject in an image.

II Related Work

Existing image cropping methods can be roughly divided into three categories. Methods of the first category are attention-based, and output a cropping bounding box around an informative object. The informative object can be a salient object obtained by different saliency detection methods (such as [20, 21]). For example, Marchesotti et al. [22] propose a framework for visual saliency detection where one or more thumbnails are extracted from the obtained saliency maps. Thumbnails are usually salient foreground regions while non-informative pixels become part of the background. Fang et al. [23] also utilize a spatial pyramid of saliency maps as a composition feature to force a cropped image to contain a salient object. In addition, faces or other regions of interest are often used as informative regions for image cropping [24, 25, 26].

Fig. 2: The framework of the proposed CCR method. The dashed box denotes the procedure where stages of regression are performed for obtaining the final cropping result. (a) An input image (i.e., the initial cropping region). (b) The cropping-indexed CNN features. (c) The proposed primitive regressor used to predict the values of the cropping region increment based on the features obtained in (b). (d) Updating the cropping region. (e) The final cropping result.

Methods of the second category rely on the aesthetic evaluation of cropping results. Machine learning methods are often used to assess the aesthetics of cropped images. Moreover, these methods also consider the spatial distribution of elements in an image to obtain an optimized arrangement of the image elements. Representative works are [27, 16, 28]. However, aesthetics-based methods evaluate only the attractiveness of a cropped image, and therefore they focus on what remains in the cropped image. To overcome this problem, Yan et al. [4, 29] propose an image cropping method belonging to the third category of image cropping methods, namely an experience-based method. They construct several cropping datasets, annotated by three professional photographers, for image cropping. Various handcrafted features are then extracted for regressing the cropping values annotated by the professional photographers. This method emphasizes professionals’ experience and the change caused by the manipulations of image cropping. A drawback is that handcrafted features are often limited and may lack useful information for high-level computer vision tasks.

Inspired by the success of deep CNNs on various vision tasks, in this paper we propose to learn CNN features for automatic image cropping. To alleviate overfitting, the CNN model is trained on a combination of the large-scale AVA dataset and the CUHKPQ dataset. The learned model is then applied to several cropping datasets to extract the CNN features. With the extracted CNN features, we propose the cascaded cropping regression (CCR) method to fit the cropping information annotated by professional photographers.

III The Proposed CCR Method

This section provides the details of the proposed CCR method for visual aesthetic enhancement. Fig. 2 shows the framework of the proposed CCR method. The size of the initial cropping region is set to be the same as that of the original image. The CNN features are extracted from the cropping region by using a pre-trained CNN model (the training details of the CNN model are presented in Subsection III-A). Then a primitive regressor is used to estimate an improved cropping region from the CNN features at each stage. The final cropping result is obtained after $T$ stages of regression.

A cropping region consists of the coordinates of the top-left and bottom-right corners of a rectangular region. Given an original image $I$, the goal of image cropping is to estimate a cropping region $c$ that is as close as possible to the ground-truth cropping region $c^{*}$ provided by a professional photographer. Over $N$ training images, this can be modeled as the following least-squares regression problem:

$\min_{\mathcal{R}} \sum_{i=1}^{N} \lVert \mathcal{R}(I_i) - c_i^{*} \rVert_2^2$,    (1)

where $\mathcal{R}$ denotes the cropping regressor to be learned. (The least-squares loss function shown in Eq. (1) can be replaced by other loss functions, such as those in [30] and [31].) This least-squares regression problem is solved by the proposed CCR method. Before describing the proposed CCR method in detail, we propose a CNN feature extraction strategy in the next subsection.
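As a toy illustration of the least-squares objective in Eq. (1), the sketch below fits a plain linear map from per-image features to 4-tuple crop coordinates in closed form. The synthetic features and the linear form are illustrative assumptions; the paper uses cropping-indexed CNN features and a cascaded regressor instead.

```python
import numpy as np

# Toy version of Eq. (1): fit a linear map from image features to
# 4-tuple crop coordinates (x1, y1, x2, y2). Data are synthetic.
rng = np.random.default_rng(0)
N, D = 200, 16                       # number of images, feature dimension
X = rng.normal(size=(N, D))          # stand-in for per-image features
W_true = rng.normal(size=(D, 4))
C = X @ W_true                       # ground-truth crop 4-tuples

# Closed-form least-squares solution: W = argmin ||X W - C||^2
W, *_ = np.linalg.lstsq(X, C, rcond=None)
print(np.allclose(X @ W, C, atol=1e-6))
```

Because the synthetic targets are generated by a linear map, the closed-form solution recovers them exactly; with real aesthetic data no such exact fit exists, which is one motivation for the cascaded approach.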

III-A Training an Aesthetic Classifier with CNN for Feature Extraction

Although some handcrafted features (such as the exclusion and compositional features in [4, 29]) have been proposed for image cropping, these features fail to cover all possible situations. Moreover, visual aesthetic assessment is subjective. Therefore, instead of designing handcrafted features, the proposed method directly learns features from large-scale visual aesthetic datasets using a deep convolutional neural network (CNN). The AVA dataset [13] and the CUHKPQ dataset [32] are two annotated large-scale visual aesthetic datasets, where most images are from the photography website Dpchallenge (http://www.dpchallenge.com). The images of the two datasets received a number of votes or aesthetic judgements from the members of the photography community. As high-quality, visually pleasing images usually have a clear topic and an impressive composition, the features learned from these images are relatively more useful for enhancing visual aesthetics during the image cropping process. Note that a cropping regressor cannot be directly trained with a CNN on these two datasets due to the lack of cropping annotation.

Fig. 3: The proposed deep CNN structure, which is trained using the combined AVA and CUHKPQ datasets, is used to extract the CNN features.

We design a deep CNN structure (as shown in Fig. 3) to extract valid features for image cropping. The first convolutional layer filters the input images and outputs 32 feature maps, which have the same size as the input images. Each subsequent convolutional layer also outputs 32 feature maps. Each of the first four convolutional layers is activated by a rectified linear unit (ReLU) [33] and followed by a max-pooling layer. The max-pooling layer partitions the feature maps from the previous layer into a set of non-overlapping neighborhoods and outputs the maximum value of each neighborhood. The last layer is the spatial pyramid pooling (SPP) layer [34], which partitions each input feature map into divisions from a fine level to a coarse level and aggregates the local features in each division. Using the SPP layer alleviates the problem that resizing a visually pleasing image may damage its aesthetics. In total, five convolutional layers, four max-pooling layers and one SPP layer are used in the designed network structure (as shown in Fig. 3). The CNN model corresponding to this structure is trained on the combined AVA and CUHKPQ datasets.
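A minimal numpy sketch of the max-pooling form of an SPP layer may help clarify why it yields a fixed-length feature vector for any input size. The pyramid levels (1x1, 2x2 and 4x4 grids) are an assumption for this sketch; the paper does not restate its exact division sizes here.

```python
import numpy as np

# Sketch of spatial pyramid (max) pooling over a single feature map.
# The pyramid levels below are assumed for illustration.
def spp_max(feature_map, levels=(1, 2, 4)):
    h, w = feature_map.shape
    out = []
    for n in levels:
        # Split the map into an n x n grid and keep the max of each cell.
        for i in range(n):
            for j in range(n):
                cell = feature_map[i * h // n:(i + 1) * h // n,
                                   j * w // n:(j + 1) * w // n]
                out.append(cell.max())
    return np.array(out)

fmap = np.arange(64, dtype=float).reshape(8, 8)
print(spp_max(fmap).shape)   # (21,) = 1 + 4 + 16 cells, for any input size
```

Since the output length depends only on the pyramid levels, images need not be resized before feature extraction, which is the aesthetics-preserving property the text mentions.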

Training datasets. The aesthetic classifier is trained using the combined AVA and CUHKPQ datasets. The AVA dataset contains more than 250,000 images collected from the Dpchallenge website. Each image has about 210 aesthetic ratings ranging from 1 to 10. We divide these images into two categories (i.e., low-quality images and high-quality images) for training a two-class CNN model. Following the same strategy used in [13, 10], a parameter δ is used to discard ambiguous images from the training set. The images with average scores smaller than 5 − δ are referred to as low-quality images. The images with average scores larger than or equal to 5 + δ are considered high-quality images. The images with average scores between 5 − δ and 5 + δ are considered ambiguous and are thus discarded. In the implementation, we set the value of δ to 1. Thus, 49,682 high-quality images and 7,983 low-quality images are selected from the AVA dataset. The CUHKPQ dataset contains about 30,000 images, which are collected from a variety of photography websites. Each image in this dataset has been labelled as either low or high quality. In total, 10,525 high-quality images and 19,167 low-quality images are labelled in the CUHKPQ dataset. The images from the two datasets are combined according to their categories, giving 60,207 high-quality images and 27,150 low-quality images. To alleviate the class imbalance problem, the low-quality images are augmented by flipping each one horizontally, yielding 54,300 low-quality images in total. All the obtained images are split into 894 batches of 128 images each.
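The δ-based label construction can be sketched as follows, assuming the common AVA convention of thresholding average scores around the mid-point score 5; the scores below are made up for illustration.

```python
# Binarize average aesthetic scores with an ambiguity margin delta.
def split_by_score(scores, delta=1.0, mid=5.0):
    low = [s for s in scores if s < mid - delta]                       # low quality
    high = [s for s in scores if s >= mid + delta]                     # high quality
    ambiguous = [s for s in scores if mid - delta <= s < mid + delta]  # discarded
    return low, high, ambiguous

low, high, amb = split_by_score([2.5, 4.1, 5.0, 5.9, 7.3], delta=1.0)
print(len(low), len(high), len(amb))   # 1 1 3
```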

Fig. 4: The trained filters of the first convolutional layer of the proposed CNN model for image aesthetic classification.

Fig. 5: The feature maps obtained by the proposed CNN model on the image shown in the first row in Fig. 1 (a). (a)-(b) The feature maps obtained in the first and second convolutional layers of the CNN model, respectively.

Training the aesthetic classifier. We use the GPU implementation of [35] to train the CNN model. 880 batches are used for training and 14 batches are used for testing. The whole training process takes around one day on a computer equipped with two GTX-1080 GPUs, and the trained CNN achieves 75.1% classification accuracy on the test set. This result is competitive even compared with that obtained by the state-of-the-art aesthetic classification method [10]. Some of the filters in the trained model are shown in Fig. 4. Most filters in the first convolutional layer are related to color. Fig. 5 shows the feature maps obtained by the CNN model on the image shown in the first row of Fig. 1.

The trained CNN model is then applied to each image in the cropping datasets. The feature maps before the classifier layer (i.e., the feature maps in the SPP layer in Fig. 3) are obtained and used for cropping regression. In our implementation, the SPP layer has three levels, and the feature maps are partitioned into divisions at three granularities from fine to coarse, so that the dimension of the obtained features is fixed regardless of the input image size. Let $t$ denote the index of the stage in the proposed CCR method. In the $t$-th stage of regression, the CNN features are extracted from the cropped image obtained from the $(t-1)$-th stage of regression. We term these CNN features the cropping-indexed CNN features. Such features correlate with spatial positions and significantly improve the cascaded regression. However, because the number of training samples for aesthetic image cropping is often limited, existing regressors (such as CNN and random-ferns regressors) cannot be used directly for this task. In the following subsection, we propose a novel and effective regressor to address the problem caused by the limited amount of training data for the task of image cropping.

III-B Learning a Primitive Regressor Using the Gradient Boosting Algorithm

Inspired by the cascaded regression methods used in the tasks of pose regression [18] and face alignment [36], a cascaded regressor is trained to output the cropping values $c_j$, where $j \in \{1, 2, 3, 4\}$ denotes the index within the 4-tuple coordinate vector of a cropping region. For convenience, each regressor $R^{t}$ in the cascade is called a primitive regressor, and $T$ is the number of primitive regressors used for training. Note that the 4-tuple coordinate vector consists of the coordinates of the top-left and bottom-right corners (i.e., $(x_1, y_1, x_2, y_2)$) of a rectangular region. Based on an initial cropping region $c^{0}$ and the obtained cropping-indexed CNN features $F$, each primitive regressor outputs a cropping-region increment (defined as $\delta c = c^{*} - c^{t-1}$), where $c^{*}$ is the 4-tuple coordinate vector for the cropping region annotated by a professional photographer. The estimates of the cascaded primitive regressors for image cropping are computed by accumulating the predicted values obtained by all the primitive regressors onto the coordinates of an initial crop. The above process can be written as:

$c^{T} = c^{0} + \sum_{t=1}^{T} R^{t}(F^{t})$,    (2)

where $F^{t}$ denotes the cropping-indexed CNN features at stage $t$.

Given $N$ training samples $\{(F_i, c_i^{*})\}_{i=1}^{N}$, the primitive regressors are sequentially learned until the test error stops decreasing. Each primitive regressor $R_j^{t}$ (abbreviated as $R$) is learned by minimizing the following objective function:

$R_j^{t} = \arg\min_{R} \sum_{i=1}^{N} \left( c_{i,j}^{*} - c_{i,j}^{t-1} - R(F_i^{t}) \right)^2$,    (3)

where $c_{i,j}^{*}$ denotes the $j$-th value in the 4-tuple coordinate vector of the cropping region annotated by a professional photographer for the $i$-th image, and $c_{i,j}^{t-1}$ is the corresponding value estimated at the previous stage.

The choice of primitive regressor is diverse and mainly depends on the application domain. For example, in [37], a CNN is chosen as the primitive regressor to regress the coordinates of facial points. However, a CNN is difficult to apply to the task of image cropping because of the limited labelled cropping data. In the task of pose regression [18], the random-ferns regressor, which shows better generalization ability than the tree regressor [38], is proposed as the primitive regressor. However, the experimental results in Section IV show that a single random-ferns regressor is too weak for the task of cropping regression. Therefore, in this paper, we propose to learn the primitive regressor from a group of random-ferns regressors to strengthen its regression ability. The gradient boosting algorithm [19] is employed to implement this: a non-parametric primitive regressor is obtained by using the least-squares gradient boosting algorithm, which is given in Algorithm 1.

1:  Input: Data $\{(x_i, y_i)\}_{i=1}^{N}$; the number of iterations $M$; initial value $f_0(x_i) = 0$
2:  for $m = 1, \dots, M$ do
3:      $\tilde{y}_i = y_i - f_{m-1}(x_i)$             // compute the residual error
4:      $g_m = \arg\min_{g_\theta} \sum_i (\tilde{y}_i - g_\theta(x_i))^2$       // select a random-ferns regressor whose parameter is $\theta$
5:      $f_m(x_i) = f_{m-1}(x_i) + \rho_m g_m(x_i)$    // update the predicted value
6:  end for
7:  Output: $f_M = \sum_{m=1}^{M} \rho_m g_m$
Algorithm 1 The Least-Squares Gradient Boosting Algorithm

In Algorithm 1, $f_m(x_i)$ denotes the predicted value for the $i$-th sample in the $m$-th iteration, $g_\theta$ denotes a random-ferns regressor and $\theta$ denotes its parameter. $x_i$ denotes the features of the $i$-th image, and $y_i$ is the given label value for the $i$-th image. $f_M = \sum_{m=1}^{M} \rho_m g_m$ is the obtained primitive regressor, where $M$ is the number of random-ferns regressors, $g_m$ is the random-ferns regressor chosen in the $m$-th iteration, and $\rho_m$ denotes the best gradient-descent step size. In the implementation of the random-ferns regressor, dozens of random-ferns regressors are generated uniformly in the $m$-th iteration, and the one with the best performance is chosen. Thus, the gradient-descent step size is an indicator vector with each element corresponding to one random-ferns regressor: the value of an element is one if the corresponding random-ferns regressor is chosen, and zero otherwise. The gradient boosting algorithm uses a weak regressor to predict the residual error between the predicted value obtained from the $(m-1)$-th iteration and the given label $y_i$. The obtained regressor is also called the master regressor, consisting of $M$ random-ferns regressors. For more details, please refer to [19].
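The boosting loop of Algorithm 1 can be sketched as follows. For brevity the weak learner here is a one-feature decision stump fitted to the residuals rather than a random fern, and the step size is fixed at one; the data are synthetic.

```python
import numpy as np

def fit_stump(X, r):
    """Pick the (feature, threshold) stump that best fits the residual r."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            mask = X[:, f] <= thr
            if mask.all() or not mask.any():
                continue
            lo, hi = r[mask].mean(), r[~mask].mean()
            err = ((r - np.where(mask, lo, hi)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, f, thr, lo, hi)
    _, f, thr, lo, hi = best
    return lambda Z: np.where(Z[:, f] <= thr, lo, hi)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X[:, 0] + 0.5 * X[:, 2]           # synthetic regression target
pred = np.zeros(100)                  # initial value f_0 = 0
for m in range(50):                   # M boosting iterations
    resid = y - pred                  # compute the residual error
    g = fit_stump(X, resid)           # weak regressor fitted to the residuals
    pred += g(X)                      # update the predicted value
print(((y - pred) ** 2).mean())       # training error shrinks as m grows
```

Replacing `fit_stump` with a routine that draws many random ferns and keeps the best one recovers the selection step described above.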

III-C Cascaded Cropping Regression

1:  Input: Data $\{(I_i, c_i^{*})\}_{i=1}^{N}$; the number of stages $T$; initial cropping region $c_i^{0}$
2:  for $t = 1, \dots, T$ do
3:      $F_i^{t} = h(I_i, c_i^{t-1})$           // compute the cropping-indexed CNN features
4:      for $j = 1, \dots, 4$ do
5:          $\tilde{c}_{i,j} = c_{i,j}^{*} - c_{i,j}^{t-1}$         // compute the residual error
6:          Learn the primitive regressor $R_j^{t}$ with $(F^{t}, \tilde{c}_j)$ as the input by using Algorithm 1
7:          $c_{i,j}^{t} = c_{i,j}^{t-1} + \nu R_j^{t}(F_i^{t})$    // update the cropping region
8:      end for
9:  end for
10:  Output: $c^{T}$
Algorithm 2 The Cascaded Cropping Regression Algorithm
Fig. 6: An example of the image cropping dataset provided by [4, 29]. Three cropping regions are respectively annotated by three professional photographers for each image in the dataset. The three cropping regions are respectively shown in three different colors.
Fig. 7: An example of the image cropping dataset provided by [23]. Ten cropping regions are respectively annotated by ten professional photographers for each image in the dataset. The ten cropping regions are respectively shown in ten different colors.

This subsection describes the proposed CCR method in detail, based on a set of random-ferns regressors. A random-ferns regressor is derived from a random-ferns classifier, which was originally proposed for classifying keypoints in [39]. Assume that the number of weak classifiers in a random-ferns classifier is $K$; the random-ferns classifier randomly partitions the $D$-dimensional features into $K$ groups, and the size of each group is $S$, where $S$ is a small integer. Each weak classifier takes an $S$-dimensional feature vector (instead of the original $D$-dimensional feature vector) as the input. Thus, the total number of parameters of each weak classifier is significantly reduced. The random-ferns regressor was proposed in [18] for pose regression. Each random-ferns regressor also takes an $S$-dimensional feature vector as input and yields a real output value. However, as mentioned in Section III-B, $S$-dimensional features are too weak for the task of cropping regression. Thus, we use a group of random-ferns regressors to learn the primitive regressor by using the gradient boosting algorithm. Each random-ferns regressor still randomly takes an $S$-dimensional feature vector as input, but the dimension of the features taken by the obtained primitive regressor is larger than $S$. Each primitive regressor indirectly selects features from the $D$-dimensional cropping-indexed CNN features by aggregating the features selected by the group of random-ferns regressors. Since each random-ferns regressor randomly selects an $S$-dimensional feature vector, some features may be selected repeatedly. As a result, the total number of features selected by the primitive regressor may vary. Based on the obtained primitive regressor, four CCR models are trained to fit the values of the 4-tuple coordinate vectors corresponding to the cropping regions annotated by professional photographers. The whole procedure of the proposed CCR method is summarized in Algorithm 2. In order to reduce the computational complexity, the cropping-indexed CNN features are extracted in the outer loop and kept unchanged in the inner loop.
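A single random-ferns regressor can be sketched as follows: S random (feature, threshold) tests map a sample to one of 2^S bins, and each bin stores the mean target of the training samples falling into it. The binary tests and the mean-per-bin rule follow the usual fern construction; the normally distributed thresholds and the empty-bin fallback are assumptions of this sketch.

```python
import numpy as np

class RandomFern:
    """A fern: S random binary tests index a table of 2**S bin means."""
    def __init__(self, n_features, S=4, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        self.feat = rng.integers(0, n_features, size=S)  # random S-feature choice
        self.thr = rng.normal(size=S)                    # assumed threshold prior
        self.S = S

    def _bins(self, X):
        bits = (X[:, self.feat] > self.thr).astype(int)  # S binary tests
        return bits @ (1 << np.arange(self.S))           # bin index in [0, 2**S)

    def fit(self, X, y):
        idx = self._bins(X)
        self.table = np.zeros(2 ** self.S)
        for b in range(2 ** self.S):
            hit = idx == b
            # Empty bins fall back to the global mean (an assumption here).
            self.table[b] = y[hit].mean() if hit.any() else y.mean()
        return self

    def predict(self, X):
        return self.table[self._bins(X)]

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
y = np.sign(X[:, 0])
fern = RandomFern(8, S=4, rng=rng).fit(X, y)
print(fern.predict(X).shape)   # one real-valued prediction per sample
```

Because each fern only ever looks at its S chosen features, an individual fern is weak, which is why the text aggregates many of them into one primitive regressor via boosting.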

Fig. 8: The IoU curves obtained by the proposed method with different hyper-parameter settings. (a) The IoU curves with different numbers of random-ferns regressors. (b) The comparison of the convergence speed between the proposed CCR method and the single-fern CCR variant. (c) The IoU curves obtained by the proposed method with different initial cropping parameters. (d) The IoU curves obtained by the proposed method with different pre-trained CNN models.

In Algorithm 2, in order to alleviate the problem of scale variations across images, the coordinate values of the initial cropping region $c_i^{0}$ for the $i$-th image are normalized by the maximum size of the image. $h(I_i, c_i^{t-1})$ denotes the procedure of extracting the cropping-indexed CNN features from the cropping region of the $i$-th image at the $t$-th stage. Then the residual error $\tilde{c}_{i,j}$, which is the difference between the $j$-th cropping value of the cropping region obtained at the $(t-1)$-th stage and that of the cropping region annotated by a professional photographer, is computed. With $(F^{t}, \tilde{c}_j)$ as the input, the primitive regressor $R_j^{t}$ is learned by using Algorithm 1. Then, the $j$-th value of the cropping region at the $t$-th stage is updated with the predicted value obtained from $R_j^{t}$. The whole training process of CCR stops at the $T$-th stage. After the training process, the obtained CCR model is used to predict a cropping region in an additive manner (see Eq. (2)).
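The additive update of Algorithm 2 can be seen in miniature below: a toy "regressor" that predicts the current residual exactly, scaled by a shrinkage factor nu, pulls an initial full-image crop toward the ground truth geometrically. All values here are synthetic, and the perfect residual predictor stands in for the learned primitive regressors.

```python
import numpy as np

c_true = np.array([0.1, 0.1, 0.9, 0.8])   # ground-truth crop (x1, y1, x2, y2)
c = np.array([0.0, 0.0, 1.0, 1.0])        # initial crop = the full image
nu = 0.5                                   # shrinkage parameter

for t in range(20):                        # T cascade stages
    # A real cascade would re-extract cropping-indexed CNN features from
    # the current crop here and apply the learned primitive regressor.
    residual = c_true - c
    c = c + nu * residual                  # shrunken additive update
print(np.abs(c - c_true).max())            # residual shrinks by nu per stage
```

With nu = 0.5 the residual halves at every stage, so a few dozen stages suffice in this toy setting; in the real method the regressors are imperfect, which is why the number of stages is a tuning parameter.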

The proof of the convergence of the proposed method can be readily derived from [19, 40, 18]. Let the relative error obtained by the primitive regressor be $\varepsilon = d(R(F), c^{*}) / d(\bar{c}, c^{*})$, where $\bar{c}$ denotes the single uniform prediction and $d(\cdot,\cdot)$ denotes a distance function. The convergence of the proposed CCR method requires that $\varepsilon < 1$, which means that the primitive regressor predicts a better result than the single uniform prediction.

Regularization and shrinkage. To prevent the proposed CCR method from overfitting, three regularization parameters are introduced in our case. As shown in [19], one regularization parameter for gradient boosting models is the number of weak learners or regressors (i.e., the number $M$ of random-ferns regressors in CCR). The best value of $M$ can be estimated by evaluating the performance of the CCR method on an independent validation set. In addition, controlling the number of stages $T$ is equivalent to controlling when the training process stops; thus $T$ is also a regularization parameter. The third regularization parameter is the shrinkage parameter $\nu$, which often yields superior results to those obtained by only restricting the number of regressors. In Algorithm 2, the introduction of the parameter $\nu$ in the update step is a straightforward shrinkage strategy for scaling the update at different stages.

IV Experiments

In this section, we evaluate the performance of the proposed image cropping method for visual aesthetic enhancement. The experiments include two parts. The first part evaluates the performance of the proposed method with different parameter settings. The second part evaluates the proposed method for image cropping and compares it with several state-of-the-art methods on cropping datasets. We use an image dataset containing 950 images collected by Yan et al. [4, 29] from the CUHKPQ dataset for evaluation. The image dataset contains seven classes of images: animal, architecture, human, landscape, night, plant and static. A cropped region is annotated for each image by each of three professional photographers, from which three cropping datasets are formed. Fig. 6 shows an example of the cropping regions annotated by the three professional photographers, where the regions outside of the bounding boxes are removed. Note that the images in the CUHKPQ dataset that also appear in the cropping dataset are removed in the process of training the aesthetic classifier.

We note that [23] contributes a new cropping dataset where each image has ten bounding boxes annotated by ten professional photographers. However, the ten annotated bounding boxes only have small overlaps (see Fig. 7 for an example). It shows that there is little correlation among the ten professional photographers’ knowledge or preference. Therefore, the proposed CCR method is not evaluated on that dataset.

Following the evaluation criteria used in [4, 29], the Intersection over Union (IoU) metric and the Boundary Displacement Error (BDE) metric are adopted. The IoU metric is formulated as $\mathrm{IoU} = \mathrm{area}(W^{g} \cap W^{c}) / \mathrm{area}(W^{g} \cup W^{c})$, where $W^{g}$ and $W^{c}$ respectively denote a cropping window annotated by a professional photographer and a cropping window generated by an evaluated method on a test image. The BDE metric measures the Euclidean distance between the generated cropping boundaries and those annotated by the photographer, written as $\mathrm{BDE} = \frac{1}{4} \sum_{k=1}^{4} \lVert b_k^{g} - b_k^{c} \rVert$, where $b_k^{g}$ and $b_k^{c}$ denote the $k$-th boundary of the annotated and the generated cropping windows, respectively.
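Both metrics are easy to state in code for axis-aligned windows given as (x1, y1, x2, y2). The IoU below is the standard definition; the BDE variant simply averages the absolute displacement of the four boundary coordinates, since the paper's exact normalization is not restated here.

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) windows."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def bde(a, b):
    """Mean displacement of the four boundary coordinates (assumed variant)."""
    return sum(abs(ai - bi) for ai, bi in zip(a, b)) / 4.0

gt, pred = (0, 0, 100, 100), (0, 0, 100, 50)
print(iou(gt, pred), bde(gt, pred))   # 0.5 12.5
```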

Fig. 9: Examples showing that the proposed CCR method can alleviate the problem of boundary cutting. (a)-(f) show the results obtained by the proposed CCR method on three images at the 1st, 5th, 10th, 15th, 20th and 30th stage, respectively.

IV-A Influence of the Parameters

After training the CNN, the obtained model is applied to extract cropping-indexed CNN features for training a CCR model. Several parameters are involved in training the CCR model. In this subsection, we examine how these parameters affect the performance of the CCR method for image cropping.

Influence of the number of random-ferns regressors. Fig. 8 (a) shows the IoU curves obtained by the proposed CCR method with different numbers of random-ferns regressors. The convergence speed of the proposed CCR method is closely related to this number: using more random-ferns regressors to learn a primitive regressor usually leads to faster convergence. However, using too many random-ferns regressors to learn a primitive regressor may cause the CCR model to overfit the training data.
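For intuition, a random-ferns regressor can be sketched as follows: each fern applies a fixed set of binary threshold tests on randomly chosen feature dimensions, and the resulting bit pattern indexes a bin that stores the mean regression target of the training samples falling into it. This is a generic sketch of the fern idea [39], not the authors' implementation; the class name and parameter choices are ours.

```python
import random


class RandomFern:
    """Toy random-ferns regressor: `depth` binary threshold tests on
    randomly chosen feature dimensions map a sample to one of 2**depth
    bins; each bin stores the mean target of the training samples that
    landed in it. Features are assumed normalized to [0, 1]."""

    def __init__(self, depth=4, rng=None):
        self.depth = depth
        self.rng = rng or random.Random(0)
        self.tests = []          # (feature_index, threshold) pairs
        self.bin_means = {}
        self.global_mean = 0.0

    def _bin(self, x):
        idx = 0
        for d, (j, t) in enumerate(self.tests):
            if x[j] > t:
                idx |= 1 << d
        return idx

    def fit(self, X, y):
        n_feat = len(X[0])
        self.tests = [(self.rng.randrange(n_feat), self.rng.random())
                      for _ in range(self.depth)]
        sums, counts = {}, {}
        for x, target in zip(X, y):
            b = self._bin(x)
            sums[b] = sums.get(b, 0.0) + target
            counts[b] = counts.get(b, 0) + 1
        self.global_mean = sum(y) / len(y)
        self.bin_means = {b: sums[b] / counts[b] for b in sums}
        return self

    def predict(self, x):
        # Fall back to the global mean for bins never seen in training.
        return self.bin_means.get(self._bin(x), self.global_mean)
```

Because each bin predicts the mean of its own training targets, the training error of a single fern is never worse than that of predicting the global mean; combining many such ferns per stage is what the CCR primitive regressor builds on.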

Using different primitive regressors. To compare the convergence speed of the proposed CCR method with that of the cascaded method which uses a single random-ferns regressor as the primitive regressor (i.e., the CPR method in [18]), we apply the CPR method to the task of image cropping. In the implementation, the pose-indexed features in the CPR method are replaced by the cropping-indexed CNN features; we refer to this variant as the single-fern variant. The only difference between the proposed CCR method and the single-fern variant lies in the primitive regressor they use. As shown in Fig. 8 (b), the convergence speed of the proposed CCR method is significantly faster than that of the single-fern variant. The proposed method converges in only about 30 stages, meaning that it extracts the cropping-indexed CNN features only around 30 times to obtain the final cropping result. In contrast, the single-fern variant obtains only a small improvement in average IoU values over the first 30 stages. Thus, the proposed CCR method significantly improves the convergence speed by using multiple random-ferns regressors as a primitive regressor.
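The gradient-boosting construction of such a primitive regressor can be sketched as follows, with one-dimensional regression stumps standing in for the random ferns. The point is the structure of the loop (a sum of weak regressors, each fit to the residual left by its predecessors, combined with a shrinkage factor); the function names, the stump weak learner, and the parameter values are illustrative, not the authors' implementation.

```python
def fit_stump(X, residual, feature):
    """Fit a single-threshold stump on one feature, predicting the mean
    residual on each side of the best split (exhaustive search)."""
    best = None
    for t in sorted(set(x[feature] for x in X)):
        left = [r for x, r in zip(X, residual) if x[feature] <= t]
        right = [r for x, r in zip(X, residual) if x[feature] > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x[feature] <= t else rm


def fit_primitive_regressor(X, y, n_weak=8, shrinkage=0.5):
    """Boost a group of weak regressors: each one is fit to the residual
    left by the shrunken sum of its predecessors."""
    base = sum(y) / len(y)
    weak = []
    residual = [t - base for t in y]
    for k in range(n_weak):
        f = fit_stump(X, residual, feature=k % len(X[0]))
        weak.append(f)
        residual = [r - shrinkage * f(x) for x, r in zip(X, residual)]
    return lambda x: base + shrinkage * sum(f(x) for f in weak)
```

Since each weak regressor is fit to the current residual, the training error of the boosted group is never worse than that of a single weak regressor, which mirrors why a group of ferns per stage converges faster than one fern per stage.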

Using different initial cropping values. We also investigate the effect of the coordinates of the initial cropping region on the performance of the proposed CCR method for image cropping. Each initial cropping region is scaled down to 50% and 25% of its original size without changing its centroid (see Fig. 8 (c)), and the experiment is repeated with the other parameters fixed. The experimental results are shown in Fig. 8 (c). As we can see, although the initial cropping parameters differ, the final cropping results are almost the same after several stages of regression. The proposed CCR method obtains the highest IoU values when the size of the initial cropping region is set to be the same as that of the original image. The reason is that the proposed CCR method mainly focuses on cutting out regions from the current window, and more effective features can be selected by the proposed primitive regressor when the initial cropping region is larger than the cropping region to be estimated. In contrast, the proposed CCR method has to infer image content outside of the current cropping box when the initial cropping region is smaller than the cropping region to be estimated, which makes it less effective. In addition, the coordinates of the initial cropping region significantly affect the convergence speed of the proposed CCR method: if the average IoU value at the initial stage is small, the proposed CCR method converges quickly to increase it, so the differences in convergence speed under different initial cropping parameters become smaller in the later stages.
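Shrinking an initial window about its centroid, as in this experiment, amounts to the following (the box format and function name are our own):

```python
def scale_box(box, factor):
    """Scale an (x1, y1, x2, y2) box about its centroid by `factor`."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w = (x2 - x1) * factor / 2.0
    half_h = (y2 - y1) * factor / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

For example, scaling a 100x100 box at the origin by 0.5 yields the centered box (25, 25, 75, 75).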

Boundary cutting problem. If the initial cropping region is scaled down to 50% or 25% of its original size, the cropping boundaries may pass through an object, which causes an unpleasant visual effect. However, the proposed method can alleviate this problem after several stages of regression. As shown in Fig. 9, the boundaries of the tree are cut in the beginning stage due to the small size of the initial cropping region, but the proposed CCR method obtains a better cropping region after several stages of regression. Note that this initialization strategy is very challenging, since the proposed CCR method has to regress the bounding box toward image content outside the cropping region obtained from the previous stage; it gradually crops images from inside to outside to overcome the boundary cutting problem. Nevertheless, this initialization strategy is not optimal: as shown in Fig. 8 (c), the proposed CCR method obtains the highest average IoU values when it crops images from outside to inside (i.e., when the original image is used as the initial cropping region).

Using different pre-trained CNN models to extract the cropping-indexed CNN features. To show the effectiveness of the cropping-indexed CNN features extracted by the CNN model trained on the aesthetic datasets, we use another pre-trained CNN model (AlexNet [9]) to extract cropping-indexed CNN features for image cropping as a comparison. AlexNet is a deep CNN model pre-trained on the ImageNet dataset [41] for image classification, and it includes five convolutional layers and three fully-connected layers. We empirically choose the output of the second fully-connected layer of AlexNet as the cropping-indexed CNN features, since it is the best-performing layer for image cropping in our experiments. The experimental results are shown in Fig. 8 (d). We can see that the average IoU values obtained by the proposed CCR method using the CNN model pre-trained on the aesthetic datasets are higher than those obtained using the AlexNet model. The reason is that the CNN model pre-trained on the aesthetic datasets extracts more effective aesthetic features, which are beneficial to visual aesthetic enhancement.

Fig. 10: The IoU curves obtained by a CNN regressor with different parameter initialization strategies on the training and test datasets. (a) The IoU curves obtained by the CNN regressor whose parameters are randomly initialized. (b) The IoU curves obtained by the CNN regressor, which copies the parameters from a pre-trained CNN classifier, on the aesthetic datasets.

Fig. 11: The IoU curves obtained by the proposed method on the three image cropping datasets with two different feature extraction strategies. (a) The IoU curves obtained by the proposed method with the cropping-indexed CNN features. (b) The IoU curves obtained by the proposed method with the mixed CNN features (i.e., the features extracted from the complete test images and the cropped regions obtained in each stage).

Fig. 12: Qualitative examples showing the cropped images obtained by the five competing methods. (a) The original images. (b) The cropped images annotated by the first professional photographer. (c)-(g) The cropping results obtained by the attention-based [42], aesthetic-based [27], change-based (2013) [4], change-based (2015) [29], and proposed methods, respectively.

Using a naive CNN regressor with different parameter initialization strategies. To further show the effectiveness of the proposed CCR method, we also report the results obtained by using a CNN regressor as a comparison. Two experiments are performed with a naive CNN regressor using two different parameter initialization strategies to directly regress the 4-tuple coordinate vector. The CNN regressor has the same structure as that given in Fig. 3, except that the number of output nodes in the last layer is 4 instead of 2. In the first experiment, the parameters of the CNN regressor are randomly initialized; in the second, they are initialized with the parameters of a CNN classifier pre-trained on the aesthetic datasets. The experimental results are shown in Fig. 10. As can be seen, the results obtained by the CNN regressor in both experiments are worse than those obtained by the proposed CCR method (see Fig. 8). The main reason is that only a limited number of training samples are available to train the CNN regressor, which limits its performance.

IV-B Comparison with State-of-the-Art Methods

In this subsection, we evaluate the performance of the proposed method for image cropping and compare it with several state-of-the-art methods on the cropping datasets. The comparison is performed against four state-of-the-art image cropping methods, which are representative of the three categories of image cropping methods discussed in Section II. The first two are the attention-based method [42] and the aesthetic-based method [27], both of which were extended by Yan et al. [29]; for fairness, the extended versions [29] of the two methods are used to achieve better results. The third is the change-based method [4], which learns the experience of professional photographers by using hand-crafted features; its improved version [29] is also used in this paper. The method in [23] is not evaluated for comparison since its source code is not publicly available.

Dataset         Accuracy
Original        0.2512
Photographer1   0.4370
Photographer2   0.4141
Photographer3   0.3815
TABLE I: The test accuracy obtained by the pre-trained CNN classifier on the original cropping dataset and the three cropping datasets cropped by the proposed CCR method.
Methods                   Photographer1  Photographer2  Photographer3
Attention-based [42]      0.203 (0.254)  0.178 (0.200)  0.199 (0.259)
Aesthetic-based [27]      0.396 (0.178)  0.394 (0.178)  0.386 (0.183)
Change-based (2013) [4]   0.749 (0.067)  0.729 (0.072)  0.732 (0.072)
Change-based (2015) [29]  0.797 (0.053)  0.786 (0.057)  0.772 (0.059)
The proposed method       0.850 (0.032)  0.837 (0.033)  0.828 (0.035)
TABLE II: The IoU (and BDE) results obtained by the four competing state-of-the-art methods and the proposed method for image cropping. The results of the four competing methods are from [29].

The CCR model is trained on the cropping dataset [29] annotated by the first professional photographer, and the images annotated by the other two professional photographers are used for testing. This evaluation strategy is similar to that used in [4, 29]. As shown in Fig. 11(a), the IoU values obtained by the proposed method at the 30th stage on the three cropping datasets are 0.850, 0.837 and 0.828, respectively. We note, however, that the proposed CCR method can also achieve promising results even at the 10th stage, where the IoU values on the three datasets are only slightly lower.

To show the effectiveness of the proposed cropping-indexed CNN features, we evaluate the performance of the proposed method with mixed CNN features, which are simultaneously extracted from both the complete test image and the cropped region, for comparison. In each stage, the pre-trained CNN model is applied to the complete test image and to the cropped region obtained from the previous stage, and the two resulting feature vectors are concatenated into a single vector, which is used as the mixed CNN features. The experimental results are shown in Fig. 11. As can be seen, the IoU values obtained by the proposed method with the mixed features at the 30th stage on the three cropping datasets are 0.838, 0.834 and 0.823, respectively, which are lower than those obtained with only the cropping-indexed CNN features. The reason is that the cropping-indexed CNN features provide aesthetically discriminative information, and the proposed primitive regressor can select the features that are beneficial to image cropping for visual aesthetic enhancement. In contrast, the mixed CNN features include many redundant features, which lead to higher computational cost and overfitting. More specifically, effective features are harder to select from the mixed CNN features because the mixed CNN feature vector is longer than the cropping-indexed CNN feature vector. Thus, the mixed CNN features are less effective than the cropping-indexed CNN features for cropping region regression. Additionally, the pre-trained CNN classifier is used to classify the images from the original cropping dataset and the cropped images obtained by the proposed CCR method, respectively. The classification accuracies are shown in Table I. As can be seen, the classification accuracies on the cropped images are higher than those on the original images by about 13%-18%, which shows the effectiveness of the proposed image cropping method.

The results obtained by the proposed method are compared with those obtained by the four competing methods in Table II. Some examples of the cropping regions annotated by the first professional photographer and the cropping regions obtained by the five competing methods are also shown in Fig. 12. As we can see from Table II and Fig. 12, the proposed CCR method clearly outperforms the other four competing methods, and its cropping regions are closer to those annotated by the professional photographers than those of the other methods. The change-based method [4] and its improved version [29] are similar in spirit to the proposed CCR method, since they also learn the experience of professional photographers. However, the hand-crafted features used by these two methods are limited in expressiveness, so both obtain worse performance than the proposed method. Among the five competing methods, the attention-based method [42] obtains the worst IoU and BDE results on all three cropping datasets, since it only focuses on the salient objects in an image while ignoring the aesthetics of the cropped image. The aesthetic-based method [27] achieves better results than the attention-based method [42], but it performs worse than the change-based method. This is because the aesthetic-based method focuses more on high-quality local cropping regions and may ignore the object regions in an image. The proposed method and the change-based methods overcome this drawback by learning the knowledge of the professional photographers, and thus obtain better results than the aesthetic-based method.

To analyze the failure cases of the proposed method, we also give some failure examples (see Fig. 13). Two kinds of images may cause the proposed method to fail. The first kind are images in which an object is close to the image boundaries; the proposed method may cut off a small part of the object if the regression results are not accurate enough. As shown in the first failure example in Fig. 13(a), part of the reflection of the bird is removed by the proposed method. The second kind are images with little texture in their backgrounds, such as the failure example in Fig. 13 whose background is a solid color; the proposed method may not work well on such images. The total numbers of failure cases obtained by the proposed CCR method on the three cropping datasets are listed in Table III. As can be seen, the proposed CCR method fails only on a small percentage of images (4.3%-6.3% of the test images).

Dataset    Photographer1  Photographer2  Photographer3
#Failure   41             60             56
TABLE III: The numbers of failure cases (denoted by #Failure) obtained by the proposed CCR method on the three cropping datasets.

In terms of running time, the total execution time of the proposed method (implemented in Matlab on a 4 GHz, 32 GB RAM PC) is about 1.7 seconds per image, most of which is spent extracting the cropping-indexed CNN features. In contrast, the running time of the improved change-based method (implemented in C++ on a 2.33 GHz, 4 GB RAM PC) is about 11 seconds per image [29].

Fig. 13: Failure examples obtained by the proposed method. From top to bottom, each row shows the cropping results annotated by the first professional photographer and the cropping results obtained by the proposed method, respectively.

V Conclusion

In this paper, we propose a novel CCR method to perform automatic image cropping for visual aesthetic enhancement based on cropping-indexed CNN features. A two-step learning strategy is proposed to cope with the limited amount of labelled cropping data. In addition, the proposed CCR method learns a primitive regressor at each stage by applying the gradient boosting strategy to a group of random-ferns regressors, so its convergence speed is much faster than that of the cascaded regression method that directly uses single random-ferns regressors. The performance of the proposed CCR method also benefits from the cropping-indexed CNN features, which are extracted by the CNN model trained on large-scale visual aesthetic datasets. Experimental results show that the proposed method is both effective and efficient for image cropping.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants U1605252, 61472334 and 61571379, and by the Natural Science Foundation of Fujian Province of China under Grant 2017J01127.

References

  • [1] D. Joshi, R. Datta, E. Fedorovskaya, Q.-T. Luong, J. Wang, J. Li, and J. Luo, “Aesthetics and emotions in images,” IEEE Signal Processing Magazine, vol. 28, pp. 94–115, 2011.
  • [2] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Studying aesthetics in photographic images using a computational approach,” in Proc. Eur. Comput. Vis. Conf. (ECCV), 2006, pp. 288–301.
  • [3] L. Marchesotti, F. Perronnin, D. Larlus, and G. Csurka, “Assessing the aesthetic quality of photographs using generic image descriptors,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2011, pp. 1784–1791.
  • [4] J. Yan, S. Lin, S. B. Kang, and X. Tang, “Learning the change for automatic image cropping,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2013, pp. 971–978.
  • [5] L. Zhang, M. Song, Y. Yang, Q. Zhao, C. Zhao, and N. Sebe, “Weakly supervised photo cropping,” IEEE Transactions on Multimedia (TMM), vol. 16, no. 1, pp. 94–107, 2014.
  • [6] W. Yin, T. Mei, C. W. Chen, and S. Li, “Socialized mobile photography: Learning to photograph with social context via mobile devices,” IEEE Transactions on Multimedia (TMM), vol. 16, pp. 184–200, 2014.
  • [7] A. Jahanian, S. V. N. Vishwanathan, and J. P. Allebach, “Learning visual balance from large-scale datasets of aesthetically highly rated images,” in IS&T/SPIE Electronic Imaging, 2015, pp. 93 940Y–93 940Y–9.
  • [8] X. Tang, W. Luo, and X. Wang, “Content-based photo quality assessment,” IEEE Transactions on Multimedia (TMM), vol. 15, no. 8, pp. 1930–1943, 2013.
  • [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2012, pp. 1097–1105.
  • [10] X. Lu, Z. Lin, H. Jin, J. Yang, and J. Z. Wang, “Rapid: Rating pictorial aesthetics using deep learning,” in ACM MM, 2014, pp. 457–466.
  • [11] L. Mai, H. Jin, and F. Liu, “Composition-preserving deep photo aesthetics assessment,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 497–506.
  • [12] F. Gao, Y. Wang, P. Li, M. Tan, J. Yu, and Y. Zhu, “Deepsim: Deep similarity for image quality assessment,” Neurocomputing (NC), vol. 257, no. 1, pp. 104 – 114, 2017.
  • [13] N. Murray, L. Marchesotti, and F. Perronnin, “Ava: A large-scale database for aesthetic visual analysis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2012, pp. 2408–2415.
  • [14] F. Gao, D. Tao, X. Gao, and X. Li, “Learning to rank for blind image quality assessment,” IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 26, no. 10, pp. 2275–2290, 2015.
  • [15] A. K. Moorthy and A. C. Bovik, “Blind image quality assessment: From natural scene statistics to perceptual quality,” IEEE Trans. on Image Processing (TIP), vol. 20, no. 12, pp. 3350–3364, 2011.
  • [16] B. Cheng, B. Ni, S. Yan, and Q. Tian, “Learning to photograph,” in Int. Conf. on Multimedia (ICMM), 2010, pp. 291–300.
  • [17] K. Koffka, Principles of Gestalt psychology.   Harcourt, Brace New York, 1935.
  • [18] P. Welinder and P. Perona, “Cascaded pose regression,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2010, pp. 1078–1085.
  • [19] J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Annals of Statistics, vol. 29, pp. 1189–1232, 2001.
  • [20] F. Perazzi, Y. Pritch, and A. Hornung, “Saliency filters: Contrast based filtering for salient region detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2012, pp. 733–740.
  • [21] G. Guo, H. Wang, W. L. Zhao, Y. Yan, and X. Li, “Object discovery via cohesion measurement,” IEEE Trans. on Cybernetics (TCYB), vol. 1, no. 99, pp. 1–14, 2017.
  • [22] L. Marchesotti, C. Cifarelli, and G. Csurka, “A framework for visual saliency detection with applications to image thumbnailing,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2009, pp. 2232–2239.
  • [23] C. Fang, Z. Lin, R. Mech, and X. Shen, “Automatic image cropping using visual composition, boundary simplicity and content preservation models,” in ACM MM, 2014, pp. 1105–1108.
  • [24] G. Ciocca, C. Cusano, F. Gasparini, and R. Schettini, “Self-adaptive image cropping for small displays,” in International Conference on Consumer Electronics, 2007, pp. 1–2.
  • [25] T. N. Vikram, M. Tscherepanow, and B. Wrede, “A saliency map based on sampling an image into random rectangular regions of interest,” Pattern Recognition (PR), vol. 45, no. 9, pp. 3114–3124, 2012.
  • [26] A. Laurentini and A. Bottino, “Computer analysis of face beauty: A survey,” Computer Vision and Image Understanding (CVIU), vol. 125, pp. 184 – 199, 2014.
  • [27] M. Nishiyama, T. Okabe, Y. Sato, and I. Sato, “Sensation-based photo cropping,” in ACM MM, 2009, pp. 669–672.
  • [28] L. Zhang, M. Song, Q. Zhao, X. Liu, J. Bu, and C. Chen, “Probabilistic graphlet transfer for photo cropping,” IEEE Transactions on Image Processing, vol. 22, pp. 802–815, 2013.
  • [29] J. Yan, S. Lin, S. Kang, and X. Tang, “Change-based image cropping with exclusion and compositional features,” International Journal of Computer Vision (IJCV), vol. 114, pp. 1–14, 2015.
  • [30] “Robust non-convex least squares loss function for regression with outliers,” Knowledge-Based Systems, vol. 71, no. 1, pp. 290–302, 2014.
  • [31] J. Yu, X. Yang, F. Gao, and D. Tao, “Deep multimodal distance metric learning using click constraints for image ranking,” IEEE Transactions on Cybernetics (TCYB), vol. 1, no. 99, pp. 1–11, 2017.
  • [32] W. Luo, X. Wang, and X. Tang, “Content-based photo quality assessment,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2011, pp. 2206–2213.
  • [33] G. E. Dahl, T. N. Sainath, and G. E. Hinton, “Improving deep neural networks for LVCSR using rectified linear units and dropout,” in ICASSP, 2013, pp. 8609–8613.
  • [34] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 37, no. 9, pp. 1904–1916, 2015.
  • [35] R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlab-like environment for machine learning,” in Workshop on Proc. Adv. Neural Inf. Process. Syst. (WNIPS), 2011, pp. 1–6.
  • [36] X. Cao, Y. Wei, F. Wen, and J. Sun, “Face alignment by explicit shape regression,” International Journal of Computer Vision (IJCV), vol. 107, pp. 177–190, 2014.
  • [37] Y. Sun, X. Wang, and X. Tang, “Deep convolutional network cascade for facial point detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2013, pp. 3476–3483.
  • [38] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees.   New York: Chapman & Hall, 1984.
  • [39] M. Ozuysal, M. Calonder, V. Lepetit, and P. Fua, “Fast keypoint recognition using random ferns,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 32, pp. 448–461, 2010.
  • [40] N. Duffy and D. Helmbold, “Boosting methods for regression,” Machine Learning, vol. 47, pp. 153–200, 2002.
  • [41] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, pp. 211–252, 2015.
  • [42] F. Stentiford, “Attention based auto image cropping,” ICVS Workshop on Computational Attention & Application, 2007.