Object Recommendation: For a given scene, recommend a sorted list of categories and bounding boxes for insertable objects;
Scene Retrieval: For a given object category, retrieve a sorted list of suitable background scenes and corresponding bounding boxes for insertion.
The motivation for the two tasks stems from the bilateral collaboration between media owners and advertisers in the advertising industry. Some media owners profit by offering paid promotion, while many advertisers pay media owners for product placement. This collaboration pattern reflects a mutual requirement, from which we distill the novel research topic of dual recommendation for object insertion.
Consider a typical collaborative workflow between a media owner and an advertising artist consisting of three phases:
Matching: The media owner determines what kind of products are insertable, while the advertiser determines what kind of background scenes are suitable. Both of them, in this process, also consider where an insertion might potentially happen;
Negotiation: They contact each other and confirm what and where after negotiation;
Insertion: Post-process the media to perform the actual insertion.
In this work, both of the above tasks aim to underpin phases 1 and 2, but neither includes a fully automatic solution for phase 3. Analogously, the key idea here is to automatically make recommendations rather than make decisions for the user. We do not perform automatic segment selection or insertion, because in practice the inserted object will be brand-specific and the final decision depends on the personal opinions of the advertiser. Nonetheless, for illustration purposes only, we use manually selected, yet automatically pasted object segments for the cases presented in this paper, which demonstrates our system's ability to make reasonable recommendations on categories and bounding boxes.
The advantage of our system is three-fold. First, we provide constructive ideas for designers: the object recommendation task can be especially useful for sponsored media platforms, which may profit by making recommendations to media owners. Second, the scene retrieval task provides a specialized search engine that is capable of retrieving images given an object, which goes beyond previous content-based image retrieval systems [6, 7, 8]. Future applications include advertiser-oriented search engines, or matching services for designer websites. Third, the bounding boxes predicted for both tasks further make the recommendations concrete and visualizable. As we will show in our experiments, this not only enables applications such as automatic previews over a gallery of target segments, but also can assist designers by providing a heatmap as a hint.
Specifically, our contributions are:
We are the first, to the best of our knowledge, to propose dual recommendation for object insertion as a new research topic;
We outperform existing baselines on all subtasks under a unified framework, as demonstrated by both quantitative and qualitative results (Sect. V).
II Related Work
Although no prior work directly addresses exactly the same topic, we can still borrow ideas from prior art on related tasks.
Object Recognition. The family of recognition tasks include image classification [10, 11, 12], object detection [13, 14, 15], weakly supervised object detection [16, 17, 18] and semantic segmentation [19, 20, 21].
Generally, the appearance of the target object is given, and the expected output is either the category (image classification), the location (weakly supervised object detection), or both (object detection, semantic segmentation). Our object recommendation task shares a similar output of both category and location, but there are two key differences: (i) the appearance of the target object is unknown in our task, since the object is not even present in the scene; (ii) the expected outputs for both category and location are not unique, since there may be multiple objects suitable for the same scene, with multiple reasonable placements.
In this work, we build our system upon the recently proposed state-of-the-art object detector, Faster R-CNN. The basic idea is to seek evidence from the other objects already present in the scene, which requires object detection as a basic building block. We also extend the expected output from a single category and a single location to ranked lists of each, to allow multiple acceptable results in an information retrieval (IR) fashion.
Image Retrieval. Generally, some attributes (topic, features, color, layout, etc.) of the target image are known, and the expected output is a list of images that satisfy these conditions. Our scene retrieval task is distinct from this family of tasks because our query object is generally not present in the scene, nor is it an attribute possessed by the target image. Nonetheless, we share a similar idea with these retrieval systems in two aspects. First, we adopt a similar expected output, a ranked list, and employ the normalized discounted cumulative gain (nDCG) metric, as is widely used in previous retrieval tasks. Second, similar to content-based image retrieval systems [6, 7, 8], we also utilize the known information of the image, typically the categories and locations of the existing objects.
Image Composition. Our work aims to provide inspiration for object insertion, which is closely related to image composition. Some works focus on interactive editing; for instance, one work builds an interactive library-based editing tool that enables users to draw rough sketches, leading to plausible composite images incorporating retrieved patches. Other works focus on automatic completion, with image inpainting as one of the most notable research topics [24, 25]. These works aim to restore a removed region of an image, typically with neural networks that exploit the context. Our system is unique in two aspects: 1) we neither take the user's sketches as input, nor require a masked region as a location hint; 2) we do not take "plausible" as our final goal, because our motivation is to make recommendations rather than decisions, as explained in Sect. I.
Closest to our work is the automatic person composition task, which establishes a fully automatic pipeline for incorporating a person into a scene. This pipeline consists of two stages: 1) location prediction; 2) segment retrieval. Our system differs from this work in that we do not perform segment retrieval, while that pipeline cannot make recommendations on categories or scenes. We compare our system's performance on bounding box prediction with the first stage of this pipeline, and report both quantitative and qualitative results.
In this section, we first decompose the two tasks into three subtasks with probabilistic formulations, all derived from the same joint probability distribution. We then present an algorithm that models object-level context with a Gaussian mixture model (GMM), leading to an approximation of the joint distribution. Finally, we report implementation details and per-image runtime.
III-A Problem Formulation
Given a set of candidate object categories $\mathcal{C}$, a set of scene images $\mathcal{I}$, and a set of candidate bounding boxes $\mathcal{B}_I$ for each specific image $I$, we further break the two tasks introduced in Sect. I into the following three subtasks:

Object Recommendation: for a given image $I$, rank all candidate categories $c \in \mathcal{C}$ by $p(c \mid I)$;

Scene Retrieval: for a given object category $c$, rank all candidate images $I \in \mathcal{I}$ by $p(I \mid c)$;

Bounding Box Prediction: for a given image $I$ and an object category $c$, rank all candidate bounding boxes $b \in \mathcal{B}_I$ by $p(b \mid c, I)$.

We show that all three subtasks can be solved from the same joint probability distribution $p(c, b \mid I)$. The basic intuition is that the object category and bounding box should be interrelated when judging whether an insertion is appropriate. By adopting Bayes' theorem, we arrive at:

Object Recommendation:
$$p(c \mid I) = \max_{b \in \mathcal{B}_I} p(c, b \mid I),$$
where we perform maximum a posteriori (MAP) estimation over $b$;

Scene Retrieval:
$$p(I \mid c) = \frac{p(c \mid I)\, p(I)}{p(c)} \propto \max_{b \in \mathcal{B}_I} p(c, b \mid I),$$
where we perform the maximum a posteriori (MAP) estimation and assume a uniform prior for $I$;

Bounding Box Prediction:
$$p(b \mid c, I) = \frac{p(c, b \mid I)}{p(c \mid I)} \propto p(c, b \mid I),$$
where we rank all bounding boxes for each given pair of $(c, I)$.

In summary, to achieve our goal (which breaks down into three subtasks), we need an algorithm to estimate $p(c, b \mid I)$, which is discussed in the next subsection.
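All three rankings derive from one joint score. A minimal sketch follows; the dictionary of per-box scores and the max over boxes are illustrative assumptions, not the paper's exact estimator:

```python
# Sketch: derive all three subtask rankings from one joint score table.
# joint[(image, category)] holds one score per candidate box, i.e. an
# (unnormalized) estimate of p(category, box | image).

def best_box_score(joint, image, category):
    # MAP over candidate boxes: max_b p(c, b | I).
    return max(joint[(image, category)])

def recommend_objects(joint, image, categories):
    # Object recommendation: rank candidate categories for one scene.
    return sorted(categories, key=lambda c: best_box_score(joint, image, c),
                  reverse=True)

def retrieve_scenes(joint, category, images):
    # Scene retrieval: rank candidate scenes for one category,
    # assuming a uniform prior over images as in the formulation.
    return sorted(images, key=lambda i: best_box_score(joint, i, category),
                  reverse=True)

def predict_boxes(joint, image, category):
    # Bounding box prediction: rank candidate box indices by score.
    scores = joint[(image, category)]
    return sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
```

All three functions read the same table, mirroring how the three subtasks share one estimate of the joint distribution.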
III-B Modeling the Joint Probability Distribution
III-B1 Model Formulation
For each image $I$, we obtain a set of existing (context) objects $\mathcal{O}_I$ with bounding boxes, which is typically the output of a region proposal network (RPN).

Note that the candidate bounding box $b$ and category $c$ are conditionally independent of $I$ given $\mathcal{O}_I$, because $\mathcal{O}_I$ is derivable from $I$. We then model the joint probability distribution as follows:
$$p(c, b \mid I) = p(c, b \mid \mathcal{O}_I) \approx \frac{1}{|\mathcal{O}_I|} \sum_{o \in \mathcal{O}_I} p(c, b \mid o).$$

We represent each context object with a probability distribution over all possible categories. Denoting the set of all categories considered in the context as $\mathcal{C}'$, we have
$$p(c, b \mid o) = \sum_{c' \in \mathcal{C}'} p(c \mid c', b_o)\, p(b \mid c, c', b_o)\, p(c' \mid o), \quad (5)$$
where the last term on the right-hand side, $p(c' \mid o)$, is the output distribution obtained from an object detector [13, 27]. The first term is decided by the co-occurrence frequency of the inserted object $c$ with a localized existing object $(c', b_o)$. For simplicity, we drop $b_o$ and approximate this term with $p(c \mid c')$; the basic intuition is that $b_o$ does not contribute significantly to the ranking between categories. For instance, compared to a mouse, a cake is more likely to co-occur with a plate, no matter where the plate is. The second term is an object-level context term that will be modeled with a Gaussian mixture model (GMM), as described next.
III-B2 Context Modeling with GMM
We now focus on the context term $p(b \mid c, c', b_o)$ in Eq. 5 that remains unsolved. Consider the case where $c$ = clock and $c'$ = wall. The term answers the question: "Having observed a wall in a certain place, where should we insert a clock?" Given such a question, a human agent would first identify that a clock is likely to be mounted on the wall, then conclude that the clock is likely to appear in the upper region of the wall, and that its size should be much smaller than the wall. Our GMM simulates the above process to judge each candidate bounding box $b$.
Following prior work, we extract a pairwise bounding box feature, which encodes the relative position and scale of the inserted object and a context object:
$$f(b, b_o) = \left(\frac{x - x_o}{w_o},\; \frac{y - y_o}{h_o},\; \log\frac{w}{w_o},\; \log\frac{h}{h_o}\right),$$
where $(x, y)$ and $(x_o, y_o)$ are the bottom-left corners of the two boxes, and $(w, h)$ and $(w_o, h_o)$ are their widths and heights, respectively. We then train a Gaussian mixture model (GMM) for each annotated triple $(c, r, c')$ from the Visual Genome dataset:
$$G_{(c, r, c')}(f) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\big(f \mid \mu_k, \Sigma_k\big),$$
where $G_{(c, r, c')}$ denotes the GMM corresponding to triple $(c, r, c')$, and $K$ is the number of components, the same for each GMM, which we empirically set to 4 in our experiments. $\mathcal{N}$ is the normal distribution; $\pi_k$, $\mu_k$, and $\Sigma_k$ are the prior, mean, and covariance of the $k$-th component of $G_{(c, r, c')}$, which we learn using the EM algorithm implemented by Scikit-learn.
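The feature extraction and per-triple GMM training can be sketched with scikit-learn's `GaussianMixture`; the exact feature normalization, the `features_by_triple` container, and the triple keys are illustrative assumptions, while K = 4 and EM fitting via Scikit-learn are as stated:

```python
import math

import numpy as np
from sklearn.mixture import GaussianMixture


def pairwise_box_feature(box, ctx_box):
    # Boxes as (x, y, w, h) with (x, y) the bottom-left corner.
    # Encodes the relative position and scale of an inserted box with
    # respect to a context box; the normalization here is illustrative.
    x, y, w, h = box
    xo, yo, wo, ho = ctx_box
    return ((x - xo) / wo, (y - yo) / ho,
            math.log(w / wo), math.log(h / ho))


def fit_context_gmms(features_by_triple, n_components=4, seed=0):
    # Fit one GMM per (inserted category, relation, context category)
    # triple over 4-D pairwise features, using EM as implemented by
    # scikit-learn. features_by_triple maps a triple to an
    # (n_samples, 4) array of features (a hypothetical container).
    gmms = {}
    for triple, feats in features_by_triple.items():
        gmm = GaussianMixture(n_components=n_components, random_state=seed)
        gmms[triple] = gmm.fit(np.asarray(feats))
    return gmms


def context_log_likelihood(gmms, triple, feature):
    # Log-density of one pairwise feature under the triple's GMM.
    return gmms[triple].score_samples(np.asarray([feature]))[0]
```

At inference time, a candidate box is scored by evaluating its pairwise feature against each context object under the relevant GMMs.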
III-B3 Final Model
Putting everything together, we have:
$$p(c, b \mid I) \approx \frac{1}{|\mathcal{O}_I|} \sum_{o \in \mathcal{O}_I} \sum_{c' \in \mathcal{C}'} p(c \mid c')\, \Big[\max_{r} G_{(c, r, c')}\big(f(b, b_o)\big)\Big]\, p(c' \mid o).$$
III-C Implementation Details
We adopt the pretrained Faster R-CNN released by prior work as our object detector. We use 10 object categories for insertion (detailed in Sect. IV), and keep the top 20 object categories and top 10 relations from the Visual Genome dataset, sorted by co-occurrence count with the 10 insertable categories. We consider a limited number of existing objects, with a detection threshold of 0.4, for context modeling. For each image, we sample candidate bounding boxes in a sliding-window fashion over multiple window sizes and strides, which generates around 800 candidate boxes per image. We further refine the size of the best-ranked box by searching over a size interval equally discretized into 32 values. A complete, single-thread, pure Python implementation on an Intel i7-5930K 3.50GHz CPU and a single Titan X Pascal GPU takes around 4 seconds per image.
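The sliding-window sampling can be sketched as follows; the square windows and the specific size and stride parameters are illustrative assumptions, since the paper's exact values are not reproduced here:

```python
def sample_candidate_boxes(img_w, img_h, win_sizes, stride):
    # Sample candidate boxes in a sliding-window fashion over an
    # img_w x img_h image. Square windows and the parameter values
    # are hypothetical; the paper generates roughly 800 candidates
    # per image this way.
    boxes = []
    for s in win_sizes:
        y = 0
        while y + s <= img_h:
            x = 0
            while x + s <= img_w:
                boxes.append((x, y, s, s))  # (x, y, w, h)
                x += stride
            y += stride
    return boxes
```

Each sampled box is then scored under the model, and the best-ranked box's size can be refined by a finer one-dimensional search.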
IV-A Scenes and Objects
We establish a test set that consists of fifty scenes from the Visual Genome dataset. The test scenes come from 4 indoor scene types: living room, dining room, kitchen, office. The statistics for each scene type are shown in Table I.
There are ten insertable objects considered in this experiment, as shown in Table II. The same illustrations and specifications are shown to all annotators as a standard, to ensure consistency within each category. We choose these insertable objects based on the following principles:
Environment: Mostly appears indoor;
Frequency: Is within the top 150 frequent categories  in Visual Genome;
Flexibility: Is not generally embedded (e.g. sink) or large and clumsy (e.g. bed), so that it can be flexibly inserted into a scene;
Diversity: Does not have a significant context overlap with other object categories (e.g. bottle is not included because we already have cup).
cup: A cup for drinking water that is medium in size.
cake: A small dessert cake (not a big birthday cake).
laptop: An open laptop.
TV: An LCD TV.
clock: A normal clock at home (not a watch / alarm clock / bracket clock).
book: A closed book that is roughly of B5 size and 200-300 pages.
pillow: A rectangular pillow that is commonly placed on sofas, chairs, etc.
IV-B Annotation Guideline
On average, there are 11 human annotators per scene. Each annotator is asked to generate the following annotations:
IV-B1 Insertable Categories
For each scene, the annotator is encouraged to annotate as many insertable object categories as possible, up to a maximum of 5 (chosen from the categories in Table II).
IV-B2 User Preference
For each annotated object category, the annotator assigns a preference score of 1 or 2. The annotators are shown a wide range of different example scenes in advance, to ensure that they apply consistent criteria for this preference.
Score 2 (very suitable): Indicates “this category is very suitable to be inserted into the scene”;
Score 1 (generally suitable): Indicates “this category can be inserted into this scene, yet not very suitable”.
IV-B3 Bounding Box Size
For each annotated object category, the annotator draws a rectangular bounding box whose longer side equals the longer side of an appropriate bounding box for the object. We only need one degree of freedom for size evaluation, because the aspect ratio of the inserted object is typically fixed.
IV-B4 Insertable Region
For each annotated object category, the annotator draws a region as follows: imagine you are holding the object for insertion and drag it over all the places where it can be inserted; the region that can be covered by the object in this process is defined as the insertable region, which is drawn using a brush tool (Fig. 2).
Note that different annotators may have different opinions about this region. For instance, for the scene in Fig. 2, some annotators may not include the bottom-left corner of the table when drawing the insertable region. Within the bounds of quality control, this subjectivity is explicitly allowed.
Tables IV and V show qualitative results for object recommendation and scene retrieval, both enhanced by bounding box prediction. We further quantitatively evaluate our method against existing baselines on our new test set, using task-specific metrics designed for comprehensive evaluation.
We design experiments for the three subtasks systematically. First, for both the object recommendation and scene retrieval subtasks, we compare our system against a statistical baseline, bag-of-categories (BOC), which is based on category co-occurrence. Second, we separately evaluate the size and location for the bounding box prediction subtask, and compare our results against a recently proposed neural model for person composition. Finally, we report both quantitative and qualitative comparisons, which help interpret what is learned by our algorithm.
V-A Object Recommendation
We adopt the normalized discounted cumulative gain (nDCG), an indicator widely used in information retrieval (IR) for ranking quality, as the metric for object recommendation. Because the desired output for this subtask is a ranked list, and each item is annotated with a gain reflecting user preference, nDCG is a natural choice for evaluation.
V-A1 Metric Formulation
For $N$ images and $M_i$ annotators for the $i$-th image, the averaged nDCG@K is defined as
$$\overline{\mathrm{nDCG}@K} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{M_i} \sum_{j=1}^{M_i} \mathrm{nDCG}@K(i, j),$$
where $\mathrm{nDCG}@K(i, j)$ measures the ranking quality of the top-K recommended object categories for the $i$-th image, with regard to the ground truth user preference scores provided by the $j$-th annotator.
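For reference, one common formulation of the per-list nDCG@K (the log2 discount follows the cited Järvelin and Kekäläinen definition; whether the paper uses this exact gain form, rather than the exponential variant, is an assumption):

```python
import math

def ndcg_at_k(ranked_gains, ideal_gains, k):
    # nDCG@K: DCG of the recommended order divided by the DCG of the
    # ideal (descending-gain) order. Gains are annotator preference
    # scores, e.g. 0 for an unannotated category, else 1 or 2.
    def dcg(gains):
        return sum(g / math.log2(r + 2) for r, g in enumerate(gains[:k]))

    ideal = dcg(sorted(ideal_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0
```

`ranked_gains` lists the gains of the recommended items in recommendation order; a perfect ranking scores 1.0.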
V-A2 Quantitative Results
The baseline method, bag-of-categories (BOC), regards each image as a bag of existing objects, and ranks all candidate objects by the sum of their co-occurrence counts with the existing objects. BOC borrows its idea from the simple yet effective bag-of-words (BOW) model in natural language processing, which ignores structural information and keeps only statistical counts.
The quantitative comparison between our system and BOC is shown in Table III. We evaluate nDCG at the top-1, top-3, and top-5 results, because there are at most 5 annotations per image. As the results demonstrate, our method achieves consistent improvements over BOC.
V-A3 Qualitative Analysis
The largest gain of our method over the baseline is in nDCG@1, i.e. the top result. Fig. 3 shows a qualitative comparison against BOC on the top-1 recommendation. In Fig. 2(d), the baseline wrongly recommends a clock because there are 2 detected walls, whereas our system recognizes that most candidate boxes for a clock lead to unreasonable relative positions with the walls. In Fig. 2(e), the baseline recommends a laptop due to the high co-occurrence of the pair (laptop, table); however, the table in this scene is too small, leaving no plausible bounding box for insertion. In Fig. 2(f), the baseline recommends a book because a small shelf is mis-detected near the edge of the background (the blue box, which is actually a counter); our system largely ignores this detection for the same reason as in Fig. 2(e).
In summary, the key advantage of our algorithm over the baseline is that we consider not only co-occurrence frequency but also the relative locations and relationships between the inserted object and the context objects. This enables our system to bypass candidate categories with high co-occurrence counts yet unreasonable placements, and to be more robust in the face of detection failures.
V-B Scene Retrieval
Similarly, for the scene retrieval subtask, we also adopt nDCG as the metric for the ranked image list.
V-B1 Metric Formulation
For $C$ insertable categories and $N$ candidate images for each category, the averaged nDCG@K is defined as
$$\overline{\mathrm{nDCG}@K} = \frac{1}{C} \sum_{c=1}^{C} \frac{1}{M} \sum_{j=1}^{M} \mathrm{nDCG}@K(c, j),$$
where $\mathrm{nDCG}@K(c, j)$ measures the ranking quality of the top-K retrieved scene images for the $c$-th category, with regard to the ground truth user preference scores provided by the $j$-th annotator.
V-B2 Quantitative Results
The quantitative comparison between our system and BOC is shown in Table VI. We evaluate nDCG at several cutoffs, since there are 50 candidate images in total. Again, we outperform the baseline by a remarkable margin.
V-B3 Qualitative Analysis
Fig. 4 shows a qualitative comparison against BOC on the top-10 retrieved scenes. Intuitively, our system prefers scenes whose supportive objects are visually large, continuous, or close to the viewer, while the baseline is typically biased towards scenes with more relevant objects. This is because only boxes that lead to reasonable relationships contribute significantly to the score, while BOC is agnostic to the spatial structure of the context objects.
V-C Bounding Box Prediction
We evaluate the size and location of the predicted bounding box separately. The baseline for this subtask is a recently proposed neural approach that builds an automatic two-stage pipeline for inserting a person's segment into an image: it first determines the best bounding box using dilated convolutional networks, then retrieves a context-compatible person segment from a database.
Here, we compare our system's performance on bounding box prediction against the first stage of this pipeline. We adopt the same object detector with the same confidence threshold as in our experiments, and the same training settings as reported in the baseline's supplementary material.
For size prediction, we design a single metric measuring the similarity of two lengths. For location prediction, we design two different metrics, for automatic and manual use cases respectively: the automatic use case requires an API that returns the best-ranked bounding box, while the manual use case prefers a heatmap as an intuitive hint. We discuss these three metrics and their use cases in detail below.
V-C1 Metric Formulation — Size
For a bounding box with height $h$ and width $w$, we define its box size as the longer side, $s = \max(h, w)$. We then define a metric that evaluates how close the ground truth box is to the predicted box under this measure. Note that we preserve only 3 degrees of freedom for a box, because the aspect ratio of the inserted object segment should be predetermined.

Given $N$ images and $M_i$ annotators for the $i$-th image, for a specific category $c$, we define the average intersection-over-union (IoU) score for box size as
$$\mathrm{IoU}_{\mathrm{size}}(c) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{M_i} \sum_{j=1}^{M_i} \frac{\min(s_{ij}, \hat{s}_i)}{\max(s_{ij}, \hat{s}_i)},$$
where $s_{ij}$ is the ground truth box size provided by annotator $j$ in image $i$ for category $c$, and $\hat{s}_i$ is the predicted box size in image $i$ for category $c$. Each term has an upper bound of 1.0 (when $s_{ij} = \hat{s}_i$) and approaches 0.0 when $s_{ij}$ and $\hat{s}_i$ are drastically different.
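The per-annotation size score reduces to a min/max ratio of two lengths:

```python
def size_iou(s_gt, s_pred):
    # min/max ratio of two lengths: 1.0 when equal, approaching 0.0
    # when the two sizes differ drastically.
    return min(s_gt, s_pred) / max(s_gt, s_pred)
```

The overall metric averages this ratio over images and annotators for each category.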
V-C2 Metric Formulation — Location, Best Box
The best recommended box is crucial for automatic applications, such as automatic preview software. Hence, this experiment evaluates whether the best recommended box lies in a reasonable location. We consider the location of a bounding box reasonable if it is contained within the insertable region annotated by the user.

A box that only partially overlaps the insertable region (Fig. 4(b)) is not good enough, yet it is still visually better than a box that is an outlier (Fig. 4(c)). For best box evaluation, the difference between accuracy and strict accuracy is that, for cases like Fig. 4(b), the former counts the fraction of the box's area that is included in the insertable region, whereas the latter only counts valid boxes as in Fig. 4(a).
Note that this criterion can be biased towards smaller boxes. We address this drawback in two ways: first, larger boxes that slightly exceed the insertable region may still make a non-zero contribution to this metric; second, unreasonably small boxes will pull down the size prediction score accordingly.
Given $N$ images and $M_i$ annotators for the $i$-th image, for a specific category $c$, we define the average accuracy for the location of the best recommended box as
$$\mathrm{acc}(c) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{M_i} \sum_{j=1}^{M_i} \frac{|\hat{b}_i \cap R_{ij}|}{|\hat{b}_i|}, \quad (14)$$
where $N$ and $M_i$ share the same meanings as before, $R_{ij}$ is the ground truth insertable region drawn by annotator $j$ in image $i$ for category $c$, and $\hat{b}_i$ is the best recommended box in image $i$ for category $c$. Each term has an upper bound of 1.0 (when $\hat{b}_i$ is entirely contained within $R_{ij}$) and a lower bound of 0.0 (when $\hat{b}_i$ is entirely outside $R_{ij}$).
Furthermore, if we regard only bounding boxes that are fully contained by the insertable region as reasonable, we can define a stricter metric by substituting the area ratio in Eq. 14 with a binary indicator function, set to 1 if and only if the best recommended box is fully contained by the insertable region. We denote this metric as the "strict accuracy": it excludes boxes that are only partially contained by the insertable region, counting only boxes that are entirely covered.
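Both location metrics can be sketched together; representing the insertable region as a set of pixel coordinates is an illustrative choice:

```python
def box_location_accuracy(box, region, strict=False):
    # Fraction of the box's area that lies inside the insertable
    # region; with strict=True, score 1.0 only when the box is fully
    # contained ("strict accuracy"). `region` is a set of (x, y)
    # pixel coordinates and `box` is (x, y, w, h); the per-pixel
    # region representation is an illustrative assumption.
    x, y, w, h = box
    pixels = [(i, j) for i in range(x, x + w) for j in range(y, y + h)]
    frac = sum((p in region) for p in pixels) / len(pixels)
    if strict:
        return 1.0 if frac == 1.0 else 0.0
    return frac
```

Averaging this score over images and annotators yields the accuracy (or strict accuracy) for one category.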
V-C3 Metric Formulation — Location, Heatmap
This metric evaluates the score distribution of all sampled boxes, which we further convert into an intuitive pixel-level representation. We denote this representation as a heatmap.
Specifically, we generate a heatmap by adding the score of each sampled box to all of the pixels it contains. The heat value at each pixel hence approximates the probability that it is contained within at least one insertable box.¹

¹For each pixel $p$ in image $I$, contained within candidate boxes $\{b \in \mathcal{B}_I : p \in b\}$, for a specific category $c$, we have $H(p) = \sum_{b \ni p} p(b \mid c, I)$.
Note that the heatmap is not meant for programmatic use; it only aims to provide a clear user hint. We do not adopt the distribution of the bottom-left corner or the stand position, because not all insertable categories are supported from the bottom (e.g. a TV). Hence, a heatmap that dissolves the probability of each box into its inner pixels is more cognitively consistent across different categories.
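Heatmap generation as described above can be sketched as follows (pure-Python accumulation; the row-major nested-list layout is an illustrative choice):

```python
def build_heatmap(img_w, img_h, boxes, scores):
    # Accumulate each candidate box's score into all pixels it
    # contains, yielding a per-pixel insertability hint. Boxes are
    # (x, y, w, h); heat[row][col] indexes the image row-major.
    heat = [[0.0] * img_w for _ in range(img_h)]
    for (x, y, w, h), s in zip(boxes, scores):
        for j in range(y, min(y + h, img_h)):
            for i in range(x, min(x + w, img_w)):
                heat[j][i] += s
    return heat
```

Pixels covered by many high-scoring candidate boxes accumulate the most heat, which is exactly the hint shown to the designer.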
Given $N$ images and $M_i$ annotators for the $i$-th image, for a specific category $c$, we define the average IoU for the heatmap as (illustrated in Fig. 6)
$$\mathrm{IoU}_{\mathrm{heat}}(c) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{IoU}\big(\bar{R}_i, H_i\big), \quad (16)$$
$$\mathrm{IoU}\big(\bar{R}, H\big) = \frac{\sum_{p} \min\big(\bar{R}(p), H(p)\big)}{\sum_{p} \max\big(\bar{R}(p), H(p)\big)}. \quad (17)$$

In Eq. 16, $N$ shares the same meaning as before, $\bar{R}_i = \frac{1}{M_i} \sum_{j=1}^{M_i} R_{ij}$ is the averaged ground truth insertable region drawn by the annotators in image $i$ for category $c$, and $H_i$ is the predicted heatmap for category $c$ in image $i$.

In Eq. 17, $\bar{R}$ denotes the averaged ground truth insertable region, $H$ denotes the predicted heatmap, and $p$ iterates through all pixels of $\bar{R}$ and $H$. We normalize $\bar{R}$ and $H$ such that each sums to 1.0. This definition of IoU has a maximum value of 1.0 when $\bar{R}$ and $H$ are exactly the same, and a minimum value of 0.0 when they are completely disjoint.
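The normalized pixel-wise IoU of Eq. 17 can be sketched as follows (flattened maps as plain lists, an illustrative choice):

```python
def heatmap_iou(region, heatmap):
    # Pixel-wise IoU between the averaged ground-truth insertable
    # region and the predicted heatmap, after normalizing each map
    # to sum to 1.0: sum of per-pixel minima over sum of maxima.
    rs, hs = sum(region), sum(heatmap)
    r = [v / rs for v in region]
    h = [v / hs for v in heatmap]
    inter = sum(min(a, b) for a, b in zip(r, h))
    union = sum(max(a, b) for a, b in zip(r, h))
    return inter / union
```

Identical (normalized) maps score 1.0; disjoint maps score 0.0.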
V-C4 Quantitative Results
We report the average IoU for size, the average accuracy for the best recommended location, and the average IoU for the heatmap, over all insertable categories. To match our heatmap definition, we refine the location heatmap generated by the baseline by adding the heat value at each stand position to the pixels in the corresponding box. As shown in Table VII, we achieve consistent improvements over the baseline on all metrics designed for bounding box prediction.
V-C5 Qualitative Analysis
Table VIII shows the qualitative comparison against the baseline on bounding box prediction. We outperform the baseline significantly, especially on location prediction. Possible reasons include: 1) the baseline employs an inpainting model to generate fake background images that do not contain the target object, which propagates errors through the downstream training process; 2) the Visual Genome dataset is relatively small, and images containing non-human objects (i.e. the insertable categories considered in this paper) are even fewer. We do not use larger datasets such as MS-COCO, because many important context object categories (e.g. desk, counter, wall) are not annotated there. The data-driven nature of neural networks hence limits the baseline's performance.
We propose a novel research topic, dual recommendation for object insertion, and build an unsupervised algorithm that exploits object-level context. We establish a new test dataset and design task-specific metrics for automatic quantitative evaluation. We outperform existing baselines on all subtasks under a unified framework, as evidenced by both quantitative and qualitative results. Future work includes incorporating high-dimensional image features, and larger datasets able to fully drive the training of neural networks.
This work was supported by the National Key R&D Program (No. 2017YFB1002604), the National Natural Science Foundation of China (No. 61772298 and No. 61521002), a Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.
-  F. Ricci, L. Rokach, and B. Shapira, “Recommender systems handbook,” pp. 1–35, Oct 2010.
-  “Recommender system,” https://en.wikipedia.org/wiki/Recommender_system.
-  “Youtube advertisement policy,” https://support.google.com/youtube/answer/154235.
-  “Mirriad in-video advertising,” https://www.mirriad.com/.
-  Z. Galkina, “Graphic designer-client relationships — case study,” https://lauda.ulapland.fi/bitstream/handle/10024/60505/Thesis.pdf?sequence=2&isAllowed=y, 2010.
-  J. Johnson, R. Krishna, M. Stark, L. Li, D. A. Shamma, M. S. Bernstein, and F. Li, “Image retrieval using scene graphs,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3668–3678. [Online]. Available: https://doi.org/10.1109/CVPR.2015.7298990
-  J. Wang, W. Liu, S. Kumar, and S. Chang, “Learning to hash for indexing big data—a survey,” Proceedings of the IEEE, vol. 104, no. 1, pp. 34–57, Jan 2016.
-  L. Zheng, Y. Yang, and Q. Tian, “Sift meets cnn: A decade survey of instance retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 5, pp. 1224–1244, May 2018.
-  A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie, “Objects in context,” in 2007 IEEE 11th International Conference on Computer Vision, Oct 2007, pp. 1–8.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 770–778.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1–9.
-  S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” CoRR, vol. abs/1506.01497, 2015. [Online]. Available: http://arxiv.org/abs/1506.01497
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, June 2014, pp. 580–587.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 21–37.
-  B. Zhou, A. Khosla, À. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” CoRR, vol. abs/1512.04150, 2015. [Online]. Available: http://arxiv.org/abs/1512.04150
-  H. Bilen and A. Vedaldi, “Weakly supervised deep detection networks,” CoRR, vol. abs/1511.02853, 2015. [Online]. Available: http://arxiv.org/abs/1511.02853
-  V. Kantorov, M. Oquab, M. Cho, and I. Laptev, “Contextlocnet: Context-aware deep network models for weakly supervised localization,” CoRR, vol. abs/1609.04331, 2016. [Online]. Available: http://arxiv.org/abs/1609.04331
-  K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, “Mask R-CNN,” CoRR, vol. abs/1703.06870, 2017. [Online]. Available: http://arxiv.org/abs/1703.06870
-  E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, April 2017.
-  W. Liu, A. Rabinovich, and A. C. Berg, “Parsenet: Looking wider to see better,” CoRR, vol. abs/1506.04579, 2015. [Online]. Available: http://arxiv.org/abs/1506.04579
-  W. Zhou, H. Li, and Q. Tian, “Recent advance in content-based image retrieval: A literature survey,” CoRR, vol. abs/1706.06064, 2017. [Online]. Available: http://arxiv.org/abs/1706.06064
-  S.-M. Hu, F.-L. Zhang, M. Wang, R. R. Martin, and J. Wang, “Patchnet: A patch-based image representation for interactive library-driven image editing,” ACM Trans. Graph., vol. 32, no. 6, pp. 196:1–196:12, Nov. 2013. [Online]. Available: http://doi.acm.org/10.1145/2508363.2508381
-  J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” CoRR, vol. abs/1801.07892, 2018. [Online]. Available: http://arxiv.org/abs/1801.07892
-  F. Tan, C. Bernier, B. Cohen, V. Ordonez, and C. Barnes, “Where and who? automatic semantic-aware person composition,” CoRR, vol. abs/1706.01021, 2017. [Online]. Available: http://arxiv.org/abs/1706.01021
-  P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and VQA,” CoRR, vol. abs/1707.07998, 2017. [Online]. Available: http://arxiv.org/abs/1707.07998
-  D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei, “Scene graph generation by iterative message passing,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 3097–3106.
-  R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, M. S. Bernstein, and F. Li, “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” CoRR, vol. abs/1602.07332, 2016. [Online]. Available: http://arxiv.org/abs/1602.07332
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
-  K. Järvelin and J. Kekäläinen, “Cumulated gain-based evaluation of ir techniques,” ACM Trans. Inf. Syst., vol. 20, no. 4, pp. 422–446, Oct. 2002. [Online]. Available: http://doi.acm.org/10.1145/582415.582418
-  “Bag-of-words model,” https://en.wikipedia.org/wiki/Bag-of-words_model.
-  F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” CoRR, vol. abs/1511.07122, 2015. [Online]. Available: http://arxiv.org/abs/1511.07122
-  T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: common objects in context,” CoRR, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312