Proposal Flow: Semantic Correspondences from Object Proposals

by Bumsub Ham, et al.
École Normale Supérieure

Finding image correspondences remains a challenging problem in the presence of intra-class variations and large changes in scene layout. Semantic flow methods are designed to handle images depicting different instances of the same object or scene category. We introduce a novel approach to semantic flow, dubbed proposal flow, that establishes reliable correspondences using object proposals. Unlike prevailing semantic flow approaches that operate on pixels or regularly sampled local regions, proposal flow benefits from the characteristics of modern object proposals, which exhibit high repeatability at multiple scales, and can take advantage of both local and geometric consistency constraints among proposals. We also show that the corresponding sparse proposal flow can effectively be transformed into a conventional dense flow field. We introduce two new challenging datasets that can be used to evaluate both general semantic flow techniques and region-based approaches such as proposal flow. We use these benchmarks to compare different matching algorithms, object proposals, and region features within proposal flow, and to compare proposal flow to the state of the art in semantic flow. This comparison, along with experiments on standard datasets, demonstrates that proposal flow significantly outperforms existing semantic flow methods in various settings.





1 Introduction

Classical approaches to finding correspondences across images are designed to handle scenes that contain the same objects with moderate viewpoint variations in applications such as stereo matching [1, 2], optical flow [3, 4, 5], and wide-baseline matching [6, 7]. Semantic flow methods, such as SIFT Flow [8] for example, on the other hand, are designed to handle a much higher degree of variability in appearance and scene layout, typical of images depicting different instances of the same object or scene category. They have proven useful for many tasks such as object recognition, cosegmentation, image registration, semantic segmentation, and image editing and synthesis [9, 10, 8, 7, 11, 12, 13]. In this context, however, appearance and shape variations may confuse similarity measures for local region matching, and prohibit the use of strong geometric constraints (e.g., epipolar geometry, limited disparity range). Existing approaches to semantic flow are thus easily distracted by scene elements specific to individual objects and image-specific details (e.g., background, texture, occlusion, clutter). This is the motivation for our work, where we use reliable and robust region correspondences to focus on regions containing prominent objects and scene elements rather than clutter and distracting details.

(a) Region-based semantic flow. (b) Dense flow field.
Fig. 1: Proposal flow generates a reliable and robust semantic flow between similar images using local and geometric consistency constraints among object proposals, and it can be transformed into a dense flow field. Using object proposals for semantic flow enables focusing on regions containing prominent objects and scene elements rather than clutter and distracting details. (a) Region-based semantic flow between source (left) and target (right) images. (b) Dense flow field (bottom) and image warping using the flow field (top). (Best viewed in color.)

Concretely, we introduce an approach to pairwise semantic flow computation, called proposal flow, that establishes region correspondences using object proposals and their geometric relations (Fig. 1). Unlike previous semantic flow algorithms [14, 9, 15, 16, 10, 8, 17, 18, 19, 7, 11, 20, 13, 21], which use regular grid structures for local region generation and matching, we leverage a large number of multi-scale object proposals [22, 23, 24, 25, 26], as now widely used to significantly reduce the search space or false alarms, e.g., in object detection [27, 28] and tracking [29].

Using object proposals for semantic flow has the following advantages: First, we can use diverse spatial supports for prominent objects and parts, and focus on these elements rather than clutter and distracting scene components. Second, we can use geometric relations between objects and parts, which prevents confusing objects with visually similar regions or parts but quite different geometric configurations. Third, as in the case of object detection, we can reduce the search space for correspondences, scaling well with the size of the image collection. Accordingly, the proposed approach establishes region correspondences between object proposals by exploiting their visual features and geometric relations in an efficient manner, and generates a region-based semantic flow composed of object proposal matches. We show that this region-based proposal flow can be effectively transformed into a conventional dense flow field. We also introduce new datasets and evaluation metrics that can be used to evaluate both general semantic flow techniques and region-based approaches such as proposal flow. These datasets consist of images containing more clutter and intra-class variation, and are much more challenging than existing ones for semantic flow evaluation. We use these benchmarks to compare different matching algorithms, object proposals, and region features within proposal flow, and to compare proposal flow to the state of the art in semantic flow. This comparison, along with experiments on standard datasets, demonstrates that proposal flow significantly outperforms existing semantic flow methods (including a learning-based approach) in various settings.

Contributions. The main contributions of this paper can be summarized as follows:


  • We introduce the proposal flow approach to establishing robust region correspondences between related, but not identical scenes using object proposals (Section 3).

  • We introduce benchmark datasets and evaluation metrics for semantic flow that can be used to evaluate both general semantic flow algorithms and region matching methods (Section 4).

  • We demonstrate the advantage of proposal flow over state-of-the-art semantic flow methods through extensive experimental evaluations (Section 5).

A preliminary version of this work appeared in [30]. Besides a more detailed presentation and discussion of the most recent related work, this version adds (1) an in-depth presentation of proposal flow; (2) a more challenging benchmark based on the PASCAL 2011 keypoint dataset [31]; (3) a verification of the quality of ground-truth correspondence generation for our datasets; and (4) an extensive experimental evaluation, including a performance analysis with varying numbers of proposals, an analysis of runtime, and a comparison of proposal flow with recently introduced state-of-the-art methods and datasets. To encourage comparison and future work, our datasets and code are available online:

2 Related work

Correspondence problems involve a broad range of topics beyond the scope of this paper. Here we briefly describe the context of our approach, and only review representative works pertinent to ours.

2.1 Semantic flow

Pairwise correspondence.

Classical approaches to stereo matching and optical flow estimate dense correspondences between pairs of nearby images of the same scene [3, 6, 1]. While advances in invariant feature detection and description have revolutionized object recognition and reconstruction in the past 15 years, research on image matching and alignment has long been dominated by instance matching involving the same scene and objects [32]. Unlike these, several recent approaches to semantic flow focus on handling images containing different scenes and objects. Graph-based matching algorithms [33, 12] attempt to find category-level feature matches by leveraging a flexible graph representation of images, but they commonly handle sparsely sampled or detected features due to their computational complexity. Inspired by classic optical flow algorithms, Liu et al. pioneered the idea of dense correspondences across different scenes, and proposed the SIFT Flow [8] algorithm that uses a multi-resolution image pyramid together with a hierarchical optimization technique for efficiency. Kim et al. [10] extend the approach by inducing a multi-scale regularization with a hierarchically connected pyramid of grid graphs. Long et al. [34] investigate the effect of pretrained ConvNet features on the SIFT Flow algorithm, and Bristow et al. [14] propose an exemplar-LDA approach that improves the performance of semantic flow. More recently, Taniai et al. [13] have shown that jointly recovering cosegmentation and dense correspondence outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation. Zhou et al. [21] propose a learning-based method that leverages a 3D model. This approach uses cycle consistency to link the correspondences between real images and rendered views. Choy et al. [20] propose to use a fully convolutional architecture, along with a correspondence contrastive loss, allowing faster training by effective reuse of computations. While achieving state-of-the-art performance, these learning-based approaches require a large number of annotated images [20] or 3D models [21] to train the corresponding deep model, and do not consider geometric consistency among correspondences.

Despite differences in graph construction, optimization, and similarity computation, existing semantic flow approaches share grid-based regular sampling and spatial regularization: The appearance similarity is defined at each region or pixel on (a pyramid of) regular grids, and spatial regularization is imposed between neighboring regions in the pyramid models [10, 8, 34, 13]. In contrast, our work builds on generic object proposals with diverse spatial supports [22, 23, 24, 25, 26], and uses an irregular form of spatial regularization based on co-occurrence and overlap of the proposals. We show that the use of local regularization with object proposals yields substantial gains in generic region matching and semantic flow, in particular when handling images with significant clutter, intra-class variations and scaling changes, establishing a new state of the art on the task.

Multi-image correspondence. Besides these pairwise matching methods, recent works have tried to solve the correspondence problem as a joint image-set alignment. Collection Flow [35] uses an optical flow algorithm that aligns each image to its low-rank projection onto a sub-space capturing the common appearance of the image collection. FlowWeb [11] first builds a fully-connected graph with each image as a node and each edge as a flow field between a pair of images, and then establishes globally-consistent correspondences using cycle consistency among all edges. This approach gives state-of-the-art performance, but requires a large number of images for each object category, and the matching results largely depend on the initialization quality. Zhou et al. [36] also use cycle consistency between sparse features to solve a graph matching problem posed as a low-rank matrix recovery. Carreira et al. [37] leverage keypoint annotations to estimate dense correspondences across images with similar viewpoints, and use these pairwise matching results to align a query image to all the other images to perform single-view 3D reconstruction.

While improving over pairwise correspondence results at the expense of runtime, these multi-image methods all use a pairwise method to find initial matches before refining them (e.g., with cycle consistency [36]). Our correspondence method outperforms current pairwise methods, and its output could be used as a good initialization for multi-image methods.

2.2 Object proposals and object-centric representations

Object proposals [22, 23, 24, 25, 26] were originally developed for object detection, where they are used to reduce the search space as well as false alarms. They are now an important component in many state-of-the-art detection pipelines [27, 28] and other computer vision applications, including object tracking [29], action recognition [38], weakly supervised localization [39], and semantic segmentation [40]. Despite their success in object detection and segmentation, object proposals have seldom been used in matching tasks [41, 42]. In particular, while Cho et al. [41] have shown that object proposals are useful for region matching due to their high repeatability on salient part regions, the use of object proposals has never been thoroughly investigated in semantic flow computation. The approach proposed in this paper is a first step in this direction, and we explore how the choice of object proposals, matching algorithms, and features affects matching robustness and accuracy.

Recently, object-centric representations have been used to estimate optical flow. In [43], potentially moving vehicles are first segmented from the background, and the flow is estimated individually for every object and the background. Similarly, Sevilla-Lara et al. [44] use semantic segmentation to break the image into regions, and compute optical flow differently in different regions, depending on the semantic class label. The main intuition behind these works is that focusing on regions containing prominent elements, e.g., objects, can help estimate the optical flow field effectively. Proposal flow shares a similar idea, but it is designed for semantic flow computation and leverages the geometric relations between objects and parts as well. We show that object proposals are well suited to semantic flow computation, and that further using their geometric relations boosts the matching accuracy.

3 Proposal flow

Proposal flow can use any type of object proposals [22, 23, 24, 45, 25, 26] as candidate regions for matching a pair of images of related scenes. In this section, we introduce a probabilistic model for region matching (Section 3.1), and describe three matching strategies including two baselines and a new one using local regularization (Section 3.2). We then describe our approach to generating a dense flow field from the region matches (Section 3.3).

(a) Input images.
(b) Object proposals [25].
(c) Object proposals near the front wheel.
(d) NAM.
(e) PHM [41].
(f) LOM.
Fig. 8: Top: (a-b) A pair of images and their object proposals [25]. (c) Multi-scale object proposals contain the same object or parts, but they are not perfectly repeatable across different images. Bottom: In contrast to NAM (d), PHM [41] (e) and LOM (f) both exploit geometric consistency, which regularizes proposal flow. In particular, LOM imposes local smoothness on offsets between neighboring regions, avoiding the problem of using a global consensus on the offset in PHM [41]. The matching score is color-coded for each match (red: high, blue: low). The HOG descriptor [46] is used for appearance matching in this example. (Best viewed in color.)

3.1 A Bayesian model for region matching

Let us suppose that two sets of object proposals $\mathcal{R}$ and $\mathcal{R}'$ have been extracted from images $I$ and $I'$ (Fig. 8(a-b)). A proposal $r \in \mathcal{R}$ is an image region with appearance feature $f$ and spatial support $\mathbf{r}$. The appearance feature represents a visual descriptor for the region (e.g., SPM [47], HOG [46], ConvNet [48]), and the spatial support describes the set of all pixel positions in the region (a rectangular box in this work). Given the data $D = (\mathcal{R}, \mathcal{R}')$, we wish to estimate a posterior probability of the event $m = (r, r')$ meaning that proposal $r$ in $\mathcal{R}$ matches proposal $r'$ in $\mathcal{R}'$:

$$p(m \mid D) = p(m_a, m_g \mid D) = p(m_g \mid m_a, D)\, p(m_a),$$

where $m_a$ and $m_g$ denote the appearance and geometric components of the match, i.e., we decouple appearance and geometry, and further assume that appearance matching is independent of the data $D$. In practice, the appearance term $p(m_a)$ is simply computed from a similarity between the feature descriptors $f$ and $f'$, and the geometric consistency term $p(m_g \mid m_a, D)$ is evaluated by comparing the spatial supports $\mathbf{r}$ and $\mathbf{r}'$ in the context of the given data $D$, as described in the next section. We set the posterior probability as a matching score and assign the best match for each proposal $r$ in $\mathcal{R}$:

$$\pi(r) = \operatorname*{argmax}_{r' \in \mathcal{R}'}\; p(m \mid D).$$

Using a slight abuse of notation, if $\pi(r) = r'$, we will write $f_{\pi(r)} = f'$ and $\mathbf{r}_{\pi(r)} = \mathbf{r}'$.
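To make the scoring pipeline concrete, here is a minimal sketch (illustrative only, not the authors' released code; all names are hypothetical) of appearance scoring with a pluggable geometric consistency term. Passing no geometric term reduces it to matching on appearance alone, as in the NAM baseline below.

```python
import numpy as np

def match_proposals(feats_r, feats_s, geom=None):
    """Score every candidate match as appearance * geometry and assign
    each proposal in R its best match in R'.
    feats_r: (n, d) descriptors of the proposals in the source image.
    feats_s: (m, d) descriptors of the proposals in the target image.
    geom:    (n, m) geometric consistency term, or None for uniform."""
    # Appearance term: cosine similarity between feature descriptors.
    fr = feats_r / np.linalg.norm(feats_r, axis=1, keepdims=True)
    fs = feats_s / np.linalg.norm(feats_s, axis=1, keepdims=True)
    app = fr @ fs.T
    score = app if geom is None else app * geom
    # Best match (argmax over target proposals) for each source proposal.
    return score.argmax(axis=1), score
```

Any of the three geometric matching strategies of the next section can be plugged in as the `geom` array.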

3.2 Geometric matching strategies

We now introduce three matching strategies, using different geometric consistency terms $p(m_g \mid m_a, D)$.

3.2.1 Naive appearance matching (NAM)

A straightforward way of matching regions is to use a uniform distribution for the geometric consistency term $p(m_g \mid m_a, D)$, so that

$$p(m \mid D) \propto p(m_a).$$

NAM considers appearance only, and does not reflect any geometric relationship among regions (Fig. 8(d)).

3.2.2 Probabilistic Hough matching (PHM)

The matching algorithm in [41] can be expressed in our model as follows. First, a three-dimensional location vector (position plus scale) is extracted from the spatial support $\mathbf{r}$ of each proposal $r$. We denote it by a function $g(\mathbf{r})$. An offset space $\mathcal{X}$ is defined as a feasible set of offset vectors between $\mathcal{R}$ and $\mathcal{R}'$: $\mathcal{X} = \{\, g(\mathbf{r}) - g(\mathbf{r}') \mid r \in \mathcal{R},\ r' \in \mathcal{R}' \,\}$. The geometric consistency term is then defined as

$$p(m_g \mid m_a, D) = \sum_{\mathbf{x} \in \mathcal{X}} p(m_g \mid \mathbf{x})\, p(\mathbf{x} \mid m_a, D),$$

which assumes that the probability that two boxes $\mathbf{r}$ and $\mathbf{r}'$ match given the offset $\mathbf{x}$ is independent of the rest of the data, and can be modeled by a Gaussian kernel in the three-dimensional offset space. Given this model, PHM replaces $p(\mathbf{x} \mid m_a, D)$ with a generalized Hough transform score:

$$h(\mathbf{x} \mid D) = \sum_{m \in \mathcal{R} \times \mathcal{R}'} p(m_a)\, p(m_g \mid \mathbf{x}),$$

which aggregates individual votes for the offset $\mathbf{x}$ from all possible matches in $\mathcal{R} \times \mathcal{R}'$. Hough voting imposes a spatial regularizer on matching by taking into account a global consensus on the corresponding offset [49, 50]. However, it often suffers from background clutter that distracts the global voting process (Fig. 8(e)).
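The voting scheme can be illustrated with a toy re-implementation (a sketch under simplifying assumptions, not the PHM code of [41]): instead of discretizing the offset space, each candidate match is re-scored by summing Gaussian-weighted votes from all other candidate matches at its own offset.

```python
import numpy as np

def phm_rescore(app, offsets, sigma=1.0):
    """Re-score candidate matches by Hough voting: every match votes for
    its offset, and matches whose offsets agree with the global consensus
    are boosted.
    app:     (M,) appearance scores, one per candidate match.
    offsets: (M, 3) offset vectors (dx, dy, dscale), one per match."""
    scores = np.empty_like(app)
    for i in range(len(app)):
        d2 = ((offsets - offsets[i]) ** 2).sum(axis=1)
        votes = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian offset kernel
        scores[i] = app[i] * (app * votes).sum()  # appearance * Hough score
    return scores
```

A match whose offset agrees with many other appearance-plausible matches receives a large vote mass, while an isolated offset is suppressed.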

3.2.3 Local offset matching (LOM)

Here we propose a new method to overcome this drawback of PHM [41] and obtain more reliable correspondences. Object proposals often contain a large number of distracting outlier regions from background clutter, and are not perfectly repeatable even for corresponding objects or parts across different images (Fig. 8(c)). The global Hough voting in PHM has difficulties with such outlier regions. In contrast, we optimize a translation and scale offset for each proposal by exploiting only neighboring proposals. That is, instead of averaging over all feasible offsets as in PHM, we use one reliable offset optimized for each proposal. This local approach substantially alleviates the effect of outlier regions in matching, as will be demonstrated by our experimental results.

The main issue is how to estimate a reliable offset for each proposal in a robust manner without any information about objects and their locations. One way would be to find the region corresponding to $r$ through a multi-scale sliding window search in $I'$ as in object detection [51], but this is expensive. Instead, we assume that nearby regions have similar offsets. For each region $r$, we first define its neighborhood $\mathcal{N}(r)$ as the set of regions with overlapping spatial support:

$$\mathcal{N}(r) = \{\, s \mid \mathbf{s} \cap \mathbf{r} \neq \emptyset,\ s \in \mathcal{R} \,\}.$$

Using an initial correspondence $\pi^*(s)$, determined by the best match according to appearance, each neighboring region is assigned its own offset, and all of them form a set of neighbor offsets:

$$\mathcal{X}_r = \{\, g(\mathbf{s}) - g(\mathbf{s}_{\pi^*(s)}) \mid s \in \mathcal{N}(r) \,\}.$$

From this set of neighbor offsets, we estimate a local offset $\mathbf{t}_r$ for the region $r$ by the geometric median [52]:

$$\mathbf{t}_r = \operatorname*{argmin}_{\mathbf{x}} \sum_{\mathbf{x}' \in \mathcal{X}_r} \lVert \mathbf{x} - \mathbf{x}' \rVert_2,$$

which can be computed using Weiszfeld's algorithm [54], a form of iteratively re-weighted least squares. (We found that the centroid and mode of the offset vectors in the three-dimensional offset space show worse performance than the geometric median. This is because the neighboring regions may include clutter. Clutter causes incorrect neighbor offsets, but the geometric median is robust to outliers [53], providing a reliable local offset.) In other words, the local offset for the region $r$ is estimated by regression using its local neighboring offsets $\mathcal{X}_r$. Based on the local offset optimized for each region, we define the geometric consistency function:

$$p(m_g \mid m_a, D) \propto \exp\bigl( -\lVert (g(\mathbf{r}) - g(\mathbf{r}')) - \mathbf{t}_r \rVert \bigr),$$

which can be interpreted as the fact that the region $r$ in $I$ is likely to match $r'$ in $I'$ when its offset is close to the local offset $\mathbf{t}_r$, and the region $r$ has many neighboring matches with a high appearance fidelity. By using $\mathbf{t}_r$ as a proxy for the true offset of $r$, LOM imposes local smoothness on offsets between neighboring regions. This geometric consistency function effectively suppresses matches between clutter regions, while favoring matches between regions that contain objects rather than object parts (Fig. 8(f)).
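The geometric median of the neighbor offsets can be computed with a few Weiszfeld iterations. The sketch below is illustrative (hypothetical names, and a small clamp added for numerical stability near data points); it shows the iteratively re-weighted least-squares form.

```python
import numpy as np

def geometric_median(offsets, n_iter=100, eps=1e-6):
    """Weiszfeld's algorithm for the geometric median of a set of 3-D
    offset vectors: iteratively re-weighted least squares, where points
    far from the current estimate are down-weighted. Robust to outlier
    offsets produced by cluttered neighboring regions."""
    y = offsets.mean(axis=0)                       # initialize at centroid
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(offsets - y, axis=1), eps)
        w = 1.0 / d                                # inverse-distance weights
        y_new = (w[:, None] * offsets).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:        # converged
            break
        y = y_new
    return y
```

With three identical offsets and one outlier, the estimate is pulled to the repeated offset rather than the mean, which is exactly the robustness property exploited by LOM.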

(a) Anchor match and pixel correspondence.
(b) Match visualization.
(c) Warped image.
Fig. 12: Flow field generation. (a) For each pixel (yellow point), its anchor match (red boxes) is determined. The correspondence (green point) is computed by the transformed coordinate with respect to the position and size of the anchor match. (b) Based on the flow field, (c) the right image is warped to the left image. The warped object shows visually similar shape to the one in the left image. The LOM method is used for region matching with the object proposals [24] and the HOG descriptor [46]. (Best viewed in color.)
(a) Keypoints and object bounding boxes.
(b) Warping.
(c) Regions near the object bounding box.
(d) Ground-truth correspondence
(e) NAM.
(f) PHM [41].
(g) LOM.
Fig. 20: Top: Generating ground-truth regions and evaluating correct matches. (a) Using keypoint annotations, dense correspondences between images are established using TPS warping [55, 56]. (b) Based on the dense correspondences, all pixels in the left image are warped to the right image, showing that the correspondences align two images well. (c) We assume that true matches exist only between the regions near the object bounding box, and thus an evaluation is done with the regions in this subset of object proposals. (d) For each object proposal (red box in the left image), its ground truth is generated automatically by the dense correspondences: We fit a tight rectangle (red box in the right image) of the region formed by the warped object proposal (yellow box in the right image) and use it as a ground-truth correspondence. Bottom: Examples of correct matches: The numbers of correct matches are 16, 5, and 38 for NAM (e), PHM [41] (f), and LOM (g), respectively. Matches with an IoU score greater than 0.5 are considered as correct in this example. (Best viewed in color.)

3.3 Flow field generation

Proposal flow gives a set of region correspondences between images that can easily be transformed into a conventional dense flow field. Let $\mathbf{p}$ denote a pixel in the image $I$ (yellow point in Fig. 12(a)). For each pixel $\mathbf{p}$, its neighborhood $\mathcal{N}(\mathbf{p})$ is defined as the set of regions in which it lies, i.e., $\mathcal{N}(\mathbf{p}) = \{\, r \mid \mathbf{p} \in \mathbf{r} \,\}$. We define an anchor match as the region correspondence that has the highest matching score among the neighboring regions (red boxes in Fig. 12(a)), where

$$r^*(\mathbf{p}) = \operatorname*{argmax}_{r \in \mathcal{N}(\mathbf{p})} p(m \mid D).$$

Note that the anchor match contains information on translation and scale changes between objects or part regions. Using the geometric relationship between the pixel $\mathbf{p}$ and its anchor match $r^*(\mathbf{p})$, a correspondence $\mathbf{p}'$ in the image $I'$ (green point in Fig. 12(a)) is obtained by linear interpolation, i.e., computed by the transformed coordinate with respect to the position and size of the anchor match.

The matching score for each correspondence $(\mathbf{p}, \mathbf{p}')$ is set to the value of its anchor match. When two pixels in the image $I$ are matched to the same pixel in the image $I'$, we select the match with the highest matching score and delete the other one. Finally, joint image filtering [57] is applied under the guidance of the image $I$ to interpolate the flow field in places without correspondences. Figure 12(b-c) shows examples of the estimated flow field and the corresponding warping result between two images: Using the dense flow field, we warp all pixels in the right image to the left image. Our approach using the anchor match aligns semantic object parts well while handling translation and scale changes between objects.
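The anchor-match interpolation can be sketched as follows (a simplified illustration with hypothetical names; it omits the score-based de-duplication and the joint filtering step): each pixel is mapped through the translation-plus-scale transform between its anchor box and the matched box.

```python
import numpy as np

def anchor_flow(pixels, matches):
    """matches: list of (src_box, tgt_box, score), boxes as (x1, y1, x2, y2).
    For each pixel, pick the highest-scoring source box containing it (the
    anchor match) and map the pixel by the translation + scale transform
    between the two boxes. Returns a dict pixel -> (dx, dy)."""
    flow = {}
    for px, py in pixels:
        anchor, best = None, -np.inf
        for src, tgt, s in matches:
            # Keep the highest-scoring region correspondence covering (px, py).
            if src[0] <= px <= src[2] and src[1] <= py <= src[3] and s > best:
                anchor, best = (src, tgt), s
        if anchor is None:
            continue  # filled in later by guided interpolation
        (x1, y1, x2, y2), (u1, v1, u2, v2) = anchor
        # Linear interpolation w.r.t. the position and size of the anchor.
        qx = u1 + (px - x1) * (u2 - u1) / (x2 - x1)
        qy = v1 + (py - y1) * (v2 - v1) / (y2 - y1)
        flow[(px, py)] = (qx - px, qy - py)
    return flow
```

Because the transform is derived from box geometry, the flow handles both translation and scale changes between the matched regions.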

4 Datasets for semantic flow evaluation

Current research on semantic flow lacks appropriate benchmarks with dense ground-truth correspondences. Conventional optical flow benchmarks (e.g., Middlebury [58] and MPI-Sintel [59]) do not feature within-class variations, and ground truth for generic semantic flow is difficult to capture due to its intrinsically semantic nature, manual annotation being extremely labor intensive and somewhat subjective. Existing approaches are thus usually evaluated only with sparse ground truth or in an indirect manner (e.g., mask transfer accuracy) [14, 10, 8, 17, 18, 11]. Such benchmarks only evaluate a small number of matches that occur at ground-truth keypoints or around mask boundaries in a point-wise manner. To address this issue, we introduce in this section two new datasets for semantic flow, dubbed PF-WILLOW and PF-PASCAL (PF for proposal flow), built using ground-truth object bounding boxes and keypoint annotations (Fig. 20(a)), and propose new evaluation metrics for region-based semantic flow methods. Note that while designed for region-based methods, our benchmark can be used to evaluate any semantic flow technique. As will be seen in our experiments, it provides a reasonable (if approximate) ground truth for dense correspondences across similar scenes without an extremely expensive annotation campaign. Comparative evaluations on this dataset have also proven to be good predictors for performance on other tasks and datasets, further justifying the use of our benchmark.

Taniai et al. have recently introduced a benchmark dataset for semantic flow evaluation [13]. It provides 400 image pairs of 7 object categories, corresponding ground-truth cosegmentation masks, and flow maps that are obtained by natural neighbor interpolation [60] on sparse keypoint matches. In contrast, our datasets contain over 2200 image pairs of up to 20 categories, split into two subsets: The first subset features 900 image pairs of 4 object categories, further split into 10 sub-categories according to viewpoint and background clutter, in order to evaluate the different factors of variation for matching accuracy. The second subset consists of over 1300 image pairs of 20 image categories. In the following, we present our ground-truth generation process in Section 4.1, evaluation criteria in Section 4.2, and datasets in Section 4.3.

4.1 Ground-truth correspondence generation

Let us assume two sets of keypoint annotations at positions $\mathbf{p}_i$ and $\mathbf{p}'_i$ in $I$ and $I'$, respectively, with $i = 1, \ldots, n$. Assuming the objects present in the images and their parts may undergo shape deformation, we use thin plate splines (TPS) [55, 56] to interpolate the sparse keypoints (Fig. 20(b)). Concretely, the ground truth is approximated from sparse correspondences using TPS warping. For each region or proposal, its ground-truth match is generated as follows. We assume that each image has a single object and true matches exist only between a subset of regions, i.e., regions around object bounding boxes (Fig. 20(c)): regions $r$ whose spatial support mostly overlaps an object bounding box $B$ in the image, as measured by the area ratio $A(\mathbf{r} \cap B) / A(\mathbf{r})$, where $A(\mathbf{r})$ indicates the area of the region $\mathbf{r}$. For each region $r$ (e.g., red box in Fig. 20(d) left), the four vertices of the rectangle $\mathbf{r}$ are warped to the corresponding points in the image $I'$ by the TPS mapping function (e.g., yellow box in Fig. 20(d) right). The region formed by the warped points is a correspondence of the region $r$. We fit a tight rectangle to this region and set it as a ground-truth correspondence $r^*$ for the region $r$ (e.g., red box in Fig. 20(d) right).
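The TPS fit and the box-warping step above can be sketched in a few lines of NumPy (a compact implementation of the standard thin plate spline solve for illustration, not the exact generation code used for the datasets; names are hypothetical).

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin plate spline mapping src (n, 2) keypoints onto dst
    (n, 2). Solves the standard TPS system with kernel U(r) = r^2 log r
    plus an affine part; returns a function warping arbitrary (m, 2) points."""
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    K = d ** 2 * np.log(d + 1e-12)          # U(0) = 0 up to rounding
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, b)            # RBF weights + affine coefficients

    def warp(pts):
        r = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
        U = r ** 2 * np.log(r + 1e-12)
        Q = np.hstack([np.ones((len(pts), 1)), pts])
        return U @ coef[:n] + Q @ coef[n:]
    return warp

def warp_box(box, warp):
    """Warp the four vertices of a proposal box and fit a tight rectangle,
    as done for the ground-truth correspondence in Fig. 20(d)."""
    x1, y1, x2, y2 = box
    c = warp(np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], float))
    return c[:, 0].min(), c[:, 1].min(), c[:, 0].max(), c[:, 1].max()
```

Since the system includes an affine part, keypoints related by a pure translation or affine map are reproduced exactly, and general keypoint configurations yield a smooth deformation that interpolates the annotations.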

Note that WarpNet [61] also uses TPS to generate ground-truth correspondences, but it does not consider intra-class variation. In particular, WarpNet constructs a pose graph using a fine-grained dataset (e.g., the CUB-200-2011 dataset [62] of bird categories), computes a set of TPS functions using silhouettes of image pairs that are closest on the graph, and finally transforms each image by sampling from this set of TPS warps. In contrast to this, we directly use TPS to estimate a warping function using ground-truth keypoint annotations.

4.2 Evaluation criteria

We introduce two evaluation metrics for region matching performance in terms of matching precision and match retrieval accuracy. These metrics build on the intersection over union (IoU) score between the region $r$'s correspondence $\pi(r)$ and its ground truth $r^*$:

$$\mathrm{IoU}(r) = \frac{A(\mathbf{r}_{\pi(r)} \cap \mathbf{r}^*)}{A(\mathbf{r}_{\pi(r)} \cup \mathbf{r}^*)}.$$

For region matching precision, we propose the probability of correct region (PCR) metric, where the region $r$ is correctly matched to its ground truth if $\mathrm{IoU}(r) > \tau$ (e.g., Fig. 25(a) top), where $\tau$ is an IoU threshold. Note that this region-based metric is based on a conventional point-based metric, the probability of correct keypoint (PCK) [63]. In the case of pixel-based flow, PCK can be adopted instead. We measure the PCR metric while varying the IoU threshold $\tau$ from 0 to 1. For match retrieval accuracy, we propose the average IoU of the $k$ best matches (dubbed mIoU@$k$) according to the matching score (e.g., Fig. 25(a) bottom). We measure the mIoU@$k$ metric while increasing the number $k$ of top matches. These two metrics exhibit two important characteristics of matching: PCR reveals the accuracy of the overall assignment, and mIoU@$k$ shows the reliability of the matching scores, which is crucial in match selection.
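Both metrics are straightforward to compute from a list of matches; a minimal sketch (hypothetical helper names), with boxes given as (x1, y1, x2, y2):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def pcr(ious, tau):
    """Probability of correct region: fraction of matches with IoU > tau."""
    return float((np.asarray(ious) > tau).mean())

def miou_at_k(ious, scores, k):
    """Mean IoU of the k best matches ranked by matching score."""
    order = np.argsort(scores)[::-1][:k]
    return float(np.asarray(ious)[order].mean())
```

Sweeping `tau` from 0 to 1 yields the PCR curve, and sweeping `k` yields the retrieval curve; the areas under these curves summarize assignment accuracy and score reliability, respectively.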

4.3 Dataset construction

We construct two benchmark datasets for semantic flow evaluation: the PF-WILLOW and PF-PASCAL datasets. The original images and keypoint annotations are taken from existing datasets [31, 64].

PF-WILLOW. To generate the PF-WILLOW dataset, we start from the benchmark for sparse matching of Cho et al. [64], which consists of 5 object classes (Face, Car, Motorbike, Duck, WineBottle) with 10 keypoint annotations for each image. Note that these images contain more clutter and intra-class variation than existing datasets [10, 17, 11] for semantic flow evaluation, which include mainly images with tightly cropped objects or similar backgrounds. We exclude the face class, where the number of generated object proposals is not sufficient to evaluate matching accuracy. The other classes are split into sub-classes according to viewpoint or background clutter: car (S), (G), (M), duck (S), motorbike (S), (G), (M), and wine bottle (w/o C), (w/ C), (M), where (S) and (G) denote side and general viewpoints, respectively, (C) stands for background clutter, and (M) denotes mixed viewpoints (side + general) for the car and motorbike classes and a combination of images (w/o C + w/ C) for the wine bottle class. We obtain a total of 10 sub-classes. Given these images and regions, we generate ground-truth data between all possible image pairs within each sub-class. The dataset has 10 images for each sub-class, thus 100 images and 900 image pairs in total.

PF-PASCAL. For the PF-PASCAL dataset, we use PASCAL 2011 keypoint annotations [31] for 20 object categories. We select meaningful image pairs for each category that contain a single object with similar poses, resulting in 1351 image pairs in total. The number of image pairs in the dataset varies from 6 for the sheep class to 140 for the bus class, and 67 on average, and each image pair contains from 4 to 17 keypoints and 7.95 keypoints on average. This dataset is more challenging than PF-WILLOW and other existing datasets for semantic flow evaluation.

5 Experiments

In this section we present a detailed analysis and evaluation of our proposal flow approach.

5.1 Experimental details

Object proposals. We evaluate four state-of-the-art object proposal methods: EdgeBox (EB) [26], multi-scale combinatorial grouping (MCG) [22], selective search (SS) [25], and randomized Prim (RP) [24]. In addition, we consider three baseline proposals [23]: uniform sampling (US), Gaussian sampling (GS), and sliding windows (SW) (see [23] for a discussion). We use publicly available code for all proposal methods.

For a fair comparison, we use 1,000 proposals for all methods in all experiments, unless otherwise specified. To control the number of proposals, we use the proposal score: although not all methods have explicit control over the number of proposals, EB, MCG, and SS provide proposal scores, so we keep the top-scoring proposals. For RP, which lacks any control over the number of proposals, we randomly select them. For US, GS, and SW, the number of proposals can be controlled explicitly [23].
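As a concrete illustration, the trimming step above can be sketched as follows (a minimal Python sketch; the function name and the fixed random seed are our own choices for illustration, not from the authors' code):

```python
import numpy as np

def select_proposals(boxes, scores, k=1000, scored=True, rng=None):
    """Trim a proposal set to k regions.

    boxes  : (N, 4) array of [x1, y1, x2, y2] proposals.
    scores : (N,) proposal scores (ignored when scored=False).
    scored : True for methods such as EB/MCG/SS that provide scores;
             False for methods such as RP, where we sample at random.
    """
    n = len(boxes)
    if n <= k:
        return boxes
    if scored:
        top = np.argsort(scores)[::-1][:k]   # keep the k highest-scoring boxes
    else:
        rng = rng or np.random.default_rng(0)
        top = rng.choice(n, size=k, replace=False)  # random subset, e.g. for RP
    return boxes[top]
```

For US, GS, and SW, the proposal generator itself is asked for exactly k regions, so no trimming is needed.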

(a) Comparison of object proposals.
(b) Comparison of feature descriptors.
(c) Comparison of matching algorithms.
Fig. 25: PF-PASCAL benchmark evaluation on region matching precision (top, PCR plots) and match retrieval accuracy (bottom, mIoU@k plots): (a) Evaluation for LOM with HOG [46], (b) evaluation for LOM with RP [24], and (c) evaluation for RP with HOG [46]. The AuC is shown in the legend. (Best viewed in color.)

Feature descriptors and similarity. We evaluate four popular feature descriptors: two engineered ones (SPM [47] and HOG [46]) and two learning-based ones (ConvNet [48] and SIAM [65]). For SPM, dense SIFT features [66] are extracted every 4 pixels and each descriptor is quantized into a 1,000-word codebook [67]; for each region, spatial pyramid pooling [47] is used, and the similarity between SPM descriptors is computed with the χ² kernel. HOG features are extracted with 31 orientation channels and then whitened. For ConvNet features, we use each output of the 5 convolutional layers in AlexNet [48], which is pre-trained on the ImageNet dataset [68]. For HOG and ConvNet, the dot product is used as a similarity metric. (We also tried the χ² kernel to compute the similarity between HOG or ConvNet features, and found that using the dot product gives better matching accuracy.) For SIAM, we use the author-provided model trained with a Siamese network on a subset of the Liberty, Yosemite, and Notre Dame images of the multi-view stereo correspondence (MVS) dataset [69]. Following [65], we compute the similarity between SIAM descriptors by the L2 distance.
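The three similarity measures above can be sketched as follows (a minimal Python illustration; the small ε regularizer in the χ² kernel is our own addition to avoid division by zero):

```python
import numpy as np

def dot_similarity(f1, f2):
    # Dot product, used here for (whitened) HOG and ConvNet features.
    return float(np.dot(f1, f2))

def chi2_similarity(h1, h2, eps=1e-10):
    # Chi-squared kernel similarity for histogram features such as SPM;
    # identical (normalized) histograms give a similarity of 1.0.
    return float(1.0 - 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def l2_distance(d1, d2):
    # Euclidean distance for SIAM descriptors (smaller means more similar).
    return float(np.linalg.norm(d1 - d2))
```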

5.2 Proposal flow components

We use the PF benchmarks in this section to compare three variants of proposal flow using different matching algorithms (NAM, PHM, LOM), combined with various object proposals [22, 23, 24, 25, 26], and features [46, 48, 47, 65].

(a) AuCs for PCR (top) and mIoU@k (bottom) curves on the PF-PASCAL.
(b) AuCs for PCR (top) and mIoU@k (bottom) curves on the PF-WILLOW.
Fig. 30: PF benchmark evaluation on AuCs for PCR and mIoU@k plots: (a) PF-PASCAL dataset, and (b) PF-WILLOW dataset. Combining LOM, RP, and HOG performs best in both metrics and on both datasets, and the challenging PF-PASCAL dataset shows slightly lower matching precision and retrieval accuracy than the PF-WILLOW. (Best viewed in color.)
Methods aero bike bird boat bot bus car cat cha cow tab dog hor mbik pers plnt she sofa trai tv Avg.
LOM 0.52 0.56 0.34 0.39 0.47 0.61 0.58 0.34 0.43 0.43 0.27 0.36 0.46 0.48 0.31 0.34 0.35 0.37 0.52 0.50 0.43
Upper bound 0.70 0.72 0.63 0.66 0.71 0.77 0.73 0.63 0.72 0.69 0.57 0.67 0.70 0.72 0.66 0.62 0.53 0.65 0.73 0.78 0.68
TABLE I: AuC performance for PCR plots on the PF-PASCAL dataset (RP w/ HOG).

Qualitative comparison. Figure 20(e-g) shows a qualitative comparison between region matching algorithms on a pair of images and depicts the correct matches found by each variant of proposal flow. In this example, at the given IoU threshold, the numbers of correct matches are 16, 5, and 38 for NAM, PHM [41], and LOM, respectively. This shows that PHM may perform worse than even NAM when there is significant background clutter. In contrast, the local regularization in LOM alleviates the effect of such clutter.

Quantitative comparison on PF-PASCAL. Figure 25 summarizes the average matching and retrieval performance over all object classes for a variety of combinations of object proposals, feature descriptors, and matching algorithms. Figure 25(a) compares different types of object proposals with a fixed matching algorithm and feature descriptor (LOM w/ HOG). RP gives the best matching precision and retrieval accuracy among the object proposals. An upper bound on precision is measured for object proposals around a given object in the source image using the corresponding ground truth in the target image; this is the best matching accuracy we can achieve with each proposal method. To this end, for each region in the source image, we find the region in the target image that has the highest IoU score with the source region's ground-truth correspondence, and use that score as the upper-bound precision. The upper bound (UB) plots show that RP generates more consistent regions than other proposal methods, and is well suited to region matching. RP shows higher matching precision than other proposals, especially when the IoU threshold is low. The evaluation results for different features (LOM w/ RP) are shown in Fig. 25(b). The HOG descriptor gives the best performance in matching and retrieval. The CNN features in our comparison come from AlexNet [48] trained for ImageNet classification. Such CNN features have a task-specific bias toward capturing parts that are discriminative for classification, which may make them less adequate for patch correspondence or retrieval than engineered features such as HOG. Similar conclusions are found in recent papers [34, 70]; see, for example, Table 3 in [70], where SIFT outperforms all AlexNet features (Conv1-5). Among ConvNet features, the fourth and first convolutional layers (Conv4 and Conv1) show the best and worst performance, respectively, while the other layers perform similarly to SPM.
This confirms the finding in [71], which shows that Conv4 gives the best matching performance among ImageNet-trained ConvNet features. The SIAM feature is designed to compute patch similarity, and can thus be used as a replacement for SIFT in any task. This type of feature descriptor, learned with Siamese or triplet networks [71, 72, 65], works well for finding correspondences between images containing the same object with moderate viewpoint changes, e.g., as in stereo matching. However, it is less adequate for semantic flow, i.e., finding correspondences between different scenes and objects, mainly because the training dataset [69] does not feature intra-class variations. We will show that the dense version of our proposal flow also outperforms a learning-based semantic flow method in Section 5.3. Figure 25(c) compares the performance of different matching algorithms (RP w/ HOG), and shows that LOM outperforms the others in matching as well as retrieval.
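The upper-bound computation described above (take the best-overlapping target proposal for each ground-truth correspondence) can be sketched as follows; a simplified Python sketch, with the [x1, y1, x2, y2] box convention and function names being our own choices:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def upper_bound_precision(gt_boxes, target_proposals):
    """For each ground-truth correspondence (a box in the target image),
    take the IoU of the best-overlapping target proposal; averaging these
    scores gives the upper bound on matching precision for the method."""
    best = [max(iou(g, p) for p in target_proposals) for g in gt_boxes]
    return float(np.mean(best))
```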

Figure 30(a) shows the area under curve (AuC) for the PCR (top) and mIoU@k (bottom) plots, averaged over all object classes, for all combinations of object proposals, feature descriptors, and matching algorithms. It shows that combining LOM, RP, and HOG performs best in both metrics. In Table I, we show the AuCs of PCR plots for each class of the PF-PASCAL dataset (RP w/ HOG). Rigid objects (e.g., bus and car) show higher matching precision than deformable ones (e.g., person and bird).

Fig. 33: AuCs for PCR and mIoU@k plots and the fraction of inlier proposals over all proposals on the PF-PASCAL (top) and PF-WILLOW (bottom). Matching precision (left, PCR plots) and retrieval accuracy (center, mIoU@k plots) increase slightly with the number of proposals, except for MCG. MCG is designed to obtain high precision with a small number of proposals, so its inlier fraction (right) decreases as the number of proposals grows. The LOM method is used for region matching with the HOG descriptor. (Best viewed in color.)
Methods car(S) car(G) car(M) duc(S) mot(S) mot(G) mot(M) win(w/o C) win(w/ C) win(M) Avg.
LOM 0.61 0.50 0.45 0.50 0.42 0.40 0.35 0.69 0.30 0.47 0.47
Upper bound 0.75 0.69 0.69 0.72 0.70 0.70 0.67 0.80 0.68 0.73 0.71
TABLE II: AuC performance for PCR plots on the PF-WILLOW dataset (RP w/ HOG).

Quantitative comparison on PF-WILLOW. We perform the same experiments on the PF-WILLOW dataset. The behavior of the average matching and retrieval performance is almost the same as for the PF-PASCAL dataset shown in Fig. 25, so we omit these results; they can be found on our project webpage for completeness. In Fig. 30(b), we show the AuC for PCR (top) and mIoU@k (bottom) plots. We reach the same conclusion as for the PF-PASCAL dataset (Fig. 30(a)), but matching precision and retrieval accuracy are higher than on the more challenging PF-PASCAL dataset. In Table II, we show AuCs of PCR plots for each sub-class. From this table, we can see that 1) higher matching precision is achieved with objects having a similar pose (e.g., mot(S) vs. mot(M)), 2) performance decreases for deformable object matching (e.g., duc(S) vs. car(S)), and 3) matching precision increases drastically when background clutter is eliminated (e.g., win(w/o C) vs. win(w/ C)), which verifies our motivation for using object proposals for semantic flow.

Effect of the number of proposals. In Fig. 33, we show the AuCs of the PCR (left) and mIoU@k (center) plots, on the PF-PASCAL (top) and PF-WILLOW (bottom), as a function of the number of object proposals. We see that 1) the upper bounds on matching precision of all proposals grow continuously, except for MCG, as the number of proposals increases, and 2) the matching precision and retrieval accuracy of proposal flow increase as well, but at a slightly slower rate. On the one hand, as the number of proposals increases, the number of inlier proposals, i.e., regions around object bounding boxes, increases, and thus we can achieve a higher upper bound. On the other hand, the number of outlier proposals increases as well, which hinders finding correct matches. Overall, matching precision and retrieval accuracy increase with the number of proposals (except for MCG), and start to saturate around 1,000 proposals. We hypothesize that the behavior of MCG is related to its fraction of inliers over all proposals, which may decrease. To verify this, we plot this fraction as a function of the number of object proposals (Fig. 33, right). The fraction for MCG decreases drastically as the number of proposals grows, which means that MCG generates more and more outlier proposals corresponding, e.g., to background clutter. The reason is that high recall is the main criterion in the design of most object proposal methods, whereas MCG is designed to achieve high precision with a small number of proposals [22].
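The inlier fraction plotted in Fig. 33 (right) can be sketched as follows; a minimal Python sketch, where the 0.5 overlap threshold deciding what counts as an inlier is our own illustrative choice:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def inlier_fraction(proposals, object_bbox, iou_thresh=0.5):
    """Fraction of proposals counted as inliers, i.e., regions whose
    overlap with the object bounding box reaches iou_thresh."""
    hits = [iou(p, object_bbox) >= iou_thresh for p in proposals]
    return float(np.mean(hits))
```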

5.3 Flow field

To compare our method with state-of-the-art semantic flow methods, we compute a dense flow field from our proposal flows (Section 3.3), and evaluate image alignment between all pairs of images in each subset of the PF-PASCAL and PF-WILLOW datasets. We also compare matching accuracy on existing datasets: Caltech-101 [73], PASCAL parts [11], and Taniai's dataset [13]. In each case, we compare proposal flow to the state of the art. For proposal flow, we use SS proposals and HOG descriptors, unless otherwise specified, and use publicly available code for all compared methods.

(a) Source image.
(b) Target image.
(c) DeepFlow.
(d) SIFT Flow.
(e) DSP.
(f) Zhou et al.
(g) Proposal Flow.
Fig. 46: Examples of dense flow fields. (a-b) Source images are warped to the target images using the dense correspondences estimated by (c) DeepFlow [4], (d) SIFT Flow [8], (e) DSP [10], (f) Zhou et al. [21], and (g) Proposal Flow (LOM w/ RP and HOG). Compared to the existing methods, proposal flow is robust to background clutter and to translation and scale changes between objects. The first two examples are from the PF-WILLOW and the remaining ones are from the PF-PASCAL.
(a) Source image.
(b) Target image.
(c) DeepFlow.
(d) SIFT Flow.
(e) DSP.
(f) Zhou et al.
(g) Proposal Flow.
Fig. 56: Failure examples of (from top to bottom) the sofa, bus, and cat classes on the PF-PASCAL dataset. (a-b) Source images are warped to the target images using the dense correspondences estimated by (c) DeepFlow [4], (d) SIFT Flow [8], (e) DSP [10], (f) Zhou et al. [21], and (g) Proposal Flow (LOM w/ RP and HOG). Proposal flow has difficulty with images containing (from top to bottom) severe occlusion, similarly shaped objects, and deformation.
Methods SW [23] MCG [22] EB [26] SS [25] RP [24]
NAM 0.29/0.44 0.27/0.46 0.37/0.51 0.36/0.52 0.37/0.54
PHM 0.37/0.48 0.35/0.48 0.35/0.45 0.42/0.55 0.42/0.54
LOM 0.35/0.42 0.38/ 0.49 0.37/0.45 0.45/0.56 0.44/0.55
DeepFlow [4] 0.21/0.20
GMK [12] 0.27/0.27
SIFT Flow [8] 0.33/0.38
DSP [10] 0.30/0.37
Zhou et al. [21] 0.30/0.41
TABLE III: PCK comparison for dense flow field on the PF datasets (PF-PASCAL / PF-WILLOW).
Methods aero bike bird boat bot bus car cat cha cow tab dog hor mbik pers plnt she sofa trai tv Avg.
LOM 0.75 0.76 0.34 0.41 0.55 0.71 0.73 0.32 0.41 0.41 0.21 0.27 0.38 0.57 0.29 0.17 0.33 0.34 0.54 0.46 0.45
DeepFlow [4] 0.55 0.31 0.10 0.19 0.24 0.36 0.31 0.12 0.22 0.10 0.23 0.07 0.11 0.32 0.10 0.08 0.07 0.20 0.31 0.17 0.21
GMK [12] 0.61 0.49 0.15 0.21 0.29 0.47 0.52 0.14 0.23 0.23 0.24 0.09 0.13 0.39 0.12 0.16 0.10 0.22 0.33 0.22 0.27
SIFT Flow [8] 0.61 0.56 0.20 0.34 0.32 0.54 0.56 0.26 0.29 0.21 0.33 0.17 0.23 0.43 0.18 0.17 0.17 0.31 0.41 0.34 0.33
DSP [10] 0.64 0.56 0.17 0.27 0.38 0.51 0.55 0.20 0.23 0.24 0.19 0.15 0.23 0.41 0.15 0.11 0.18 0.27 0.35 0.28 0.30
Zhou et al. [21] 0.58 0.35 0.15 0.27 0.36 0.40 0.42 0.23 0.26 0.29 0.22 0.20 0.13 0.33 0.16 0.18 0.48 0.27 0.34 0.28 0.30
TABLE IV: PCK comparison for dense flow field on the PF-PASCAL dataset (SS w/ HOG).
Methods car(S) car(G) car(M) duc(S) mot(S) mot(G) mot(M) win(w/o C) win(w/ C) win(M) Avg.
LOM 0.86 0.60 0.53 0.64 0.49 0.25 0.29 0.91 0.37 0.65 0.56
DeepFlow [4] 0.33 0.13 0.22 0.20 0.20 0.08 0.13 0.46 0.08 0.18 0.20
GMK [12] 0.48 0.25 0.34 0.27 0.31 0.12 0.15 0.41 0.17 0.18 0.27
SIFT Flow [8] 0.54 0.37 0.36 0.32 0.41 0.20 0.23 0.83 0.16 0.33 0.38
DSP [10] 0.46 0.30 0.32 0.25 0.31 0.15 0.14 0.85 0.25 0.64 0.37
Zhou et al. [21] 0.77 0.34 0.52 0.42 0.34 0.19 0.20 0.78 0.19 0.38 0.41
TABLE V: PCK comparison for dense flow field on the PF-WILLOW dataset (SS w/ HOG).
Methods Time (s)
NAM 4.6 ± 1.0
PHM 5.4 ± 1.1
LOM 8.8 ± 1.3
DeepFlow [4] 4.7 ± 0.6
GMK [12] 2.4 ± 0.3
SIFT Flow [8] 4.2 ± 0.8
DSP [10] 4.8 ± 0.8
  • We used author-provided MEX implementations.

TABLE VI: Runtime comparison for dense flow field on the PF-PASCAL dataset (SS w/ HOG).

Matching results on PF datasets. We test five object proposal methods (SW, MCG, EB, SS, RP). As an evaluation metric, we use the PCK between warped keypoints and ground-truth ones [34, 63]: ground-truth keypoints are deemed correctly predicted if they lie within α · max(h, w) pixels of the predicted points for a threshold α, where h and w are the height and width of the object bounding box, respectively. Table III shows the average PCK over all object classes. On our benchmark, all versions of proposal flow significantly outperform SIFT Flow [8], DSP [10], and DeepFlow [4], and proposal flow with PHM and LOM gives better performance than the learning-based method of [21]. LOM with SS or RP outperforms other combinations of matching and proposal methods, which coincides with the results in Section 5.2. Tables IV and V show the average PCK for each object class on the PF-PASCAL and PF-WILLOW, respectively. Proposal flow consistently outperforms other methods for all object classes except the table and sheep classes in both datasets. We can also see that the learning-based method [21] does not generalize to object classes that are not contained in the PASCAL training set (e.g., duc(S)), and is not robust to outliers (e.g., win(w/ C)). Figure 46 gives a qualitative comparison with the state of the art on the PF-WILLOW and PF-PASCAL datasets. The better alignment found by proposal flow is clearly visible: proposal flow is robust to clutter as well as translation and scale changes between objects. Figure 56 shows failure examples of (from top to bottom) the sofa, bus, and cat classes on the PF-PASCAL dataset, where proposal flow does not handle image pairs containing severe occlusion, similarly shaped objects, and large deformation. Our current (un-optimized) MATLAB implementation takes 8.8 seconds on average on a 2.5 GHz CPU to compute a dense flow field using LOM w/ SS and HOG. Table VI shows runtime comparisons.
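The PCK metric described above can be sketched as follows (a minimal Python sketch; the default α = 0.1 is our own illustrative choice, since the metric is defined for a range of thresholds):

```python
import numpy as np

def pck(pred_kps, gt_kps, bbox_hw, alpha=0.1):
    """Percentage of correct keypoints: a warped keypoint counts as
    correct when it lies within alpha * max(h, w) pixels of its
    ground-truth position, with (h, w) the object bounding box size."""
    h, w = bbox_hw
    thresh = alpha * max(h, w)
    dists = np.linalg.norm(np.asarray(pred_kps) - np.asarray(gt_kps), axis=1)
    return float(np.mean(dists <= thresh))
```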

Proposals Methods LT-ACC IoU LOC-ERR
SW [23] LOM 0.78 0.47 0.25
SS [25] NAM 0.68 0.44 0.41
PHM 0.74 0.48 0.32
LOM 0.78 0.50 0.25
RP [24] NAM 0.70 0.44 0.39
PHM 0.75 0.48 0.31
LOM 0.78 0.50 0.26
DeepFlow [4] 0.74 0.40 0.34
GMK [12] 0.77 0.42 0.34
SIFT Flow [8] 0.75 0.48 0.32
DSP [10] 0.77 0.47 0.35
TABLE VII: Matching accuracy on the Caltech-101 dataset (HOG).
Methods FG3DCar JODS PASCAL Avg.
LOM 0.79 0.65 0.53 0.66
DFF [7] 0.50 0.30 0.22 0.31
DSP [10] 0.49 0.47 0.38 0.45
SIFT Flow [8] 0.63 0.51 0.36 0.50
Zhou et al. [21] 0.72 0.51 0.44 0.56
Taniai et al. [13] 0.83 0.60 0.48 0.64
TABLE VIII: Matching accuracy on Taniai's dataset (SS w/ HOG).
Methods IoU PCK
NAM 0.35 0.13
PHM 0.39 0.17
LOM 0.41 0.17
Congealing [74] 0.38 0.11
RASL [75] 0.39 0.16
CollectionFlow [35] 0.38 0.12
DSP [10] 0.39 0.17
FlowWeb [11] 0.43 0.26
TABLE IX: Matching accuracy on the PASCAL parts (SS w/ HOG).

Matching results on Caltech-101. We evaluate our approach on the Caltech-101 dataset [73]. Following the experimental protocol in [10], we randomly select 15 pairs of images for each object class, and evaluate matching accuracy with three metrics: label transfer accuracy (LT-ACC) [76], the IoU metric, and the localization error (LOC-ERR) of corresponding pixel positions. For LT-ACC, we transfer the class label of one image to the other using dense correspondences, and count the number of correctly labeled pixels. Similarly, the IoU score is measured between the transferred label and the ground truth. Table VII quantitatively compares the matching accuracy of proposal flow to the state of the art. It shows that proposal flow using LOM outperforms other approaches, especially for the IoU score and the LOC-ERR of dense correspondences. Note that, compared to LT-ACC, these metrics evaluate the matching quality for the foreground object separately from irrelevant scene clutter. Our results verify that proposal flow focuses on regions containing objects rather than scene clutter and distracting details, enabling image matching that is robust to outliers.
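The label-transfer metrics above can be sketched as follows; a minimal Python sketch operating on per-pixel label maps, with the binary foreground encoding (fg=1) being our own simplification:

```python
import numpy as np

def label_transfer_accuracy(transferred, target):
    """LT-ACC: fraction of pixels whose transferred class label matches
    the target annotation."""
    transferred, target = np.asarray(transferred), np.asarray(target)
    return float(np.mean(transferred == target))

def label_iou(transferred, target, fg=1):
    """IoU between the transferred and ground-truth foreground masks."""
    t = np.asarray(transferred) == fg
    g = np.asarray(target) == fg
    inter = np.logical_and(t, g).sum()
    union = np.logical_or(t, g).sum()
    return float(inter / union) if union > 0 else 0.0
```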

Matching results on Taniai's benchmark. We also evaluate flow accuracy on the dataset provided by [13], which consists of 400 image pairs in three groups: FG3DCar (195 image pairs of vehicles from [77]), JODS (81 image pairs of airplanes, horses, and cars from [78]), and PASCAL (124 image pairs of bicycles, motorbikes, buses, cars, and trains from [79]). Matching accuracy is measured by the percentage of pixels in the ground-truth foreground region whose error measure is below a certain threshold. To this end, we compute the Euclidean distance between estimated and true flow vectors at a normalized scale where the larger dimension of each image is 100 pixels, and use a threshold of 5 pixels following [13]. We summarize average matching accuracy for each group in Table VIII. The method of [21] uses convolutional neural networks (CNNs) to learn dense correspondence. Since no dataset was previously available for training such networks for semantic flow, it leverages 3D models to use known synthetic-to-synthetic matches as ground truth, with a cycle-consistency constraint propagating the correct match information from synthetic to real images. The method of [13] leverages an additional cosegmentation to estimate dense correspondence, an idea similar to ours in that excluding background regions when estimating correspondences improves matching accuracy. On the FG3DCar dataset, this method [13] performs better than ours. Overall, however, our method achieves the best average performance over all groups, and even outperforms the learning-based method of [21].
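The flow-accuracy metric above can be sketched as follows (a minimal Python sketch; the array layout, with the flow's u/v components in the last axis, is our own convention):

```python
import numpy as np

def flow_accuracy(est_flow, gt_flow, fg_mask, img_max_dim, thresh=5.0):
    """Percentage of ground-truth foreground pixels whose flow error,
    measured after rescaling so the larger image dimension is 100 pixels,
    is below `thresh` (5 pixels, following [13]).

    est_flow, gt_flow : (H, W, 2) flow fields.
    fg_mask           : (H, W) boolean foreground mask.
    img_max_dim       : larger dimension of the original image in pixels.
    """
    scale = 100.0 / img_max_dim
    err = np.linalg.norm((est_flow - gt_flow) * scale, axis=-1)
    return float(np.mean(err[fg_mask] < thresh))
```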

Matching results on PASCAL parts. We use the dataset provided by [11], where the images are sampled from the PASCAL part dataset [80]. Following [11], we first measure part matching accuracy using human-annotated part segments. For this experiment, we measure the weighted IoU score between transferred segments and ground truth, with weights determined by the pixel area of each part (Table IX). To evaluate alignment accuracy, we measure the PCK metric using keypoint annotations for the 12 rigid PASCAL classes [81] (Table IX), using the same set of images as in the part matching experiment. Proposal flow does better than existing approaches on images that contain clutter (e.g., background, instance-specific texture, occlusion), but in this dataset [11] such elements are confined to only a small portion of the images (Fig. 68(a-b)), compared to the PF and Caltech-101 [73] datasets. This may explain why, for the PCK metric, our approach gives results similar to other methods. FlowWeb [11] gives better results than ours, but relies on a cyclic constraint across multiple images (at least three; FlowWeb uses 100 images to find correspondences for one pair of images, i.e., a single output of DSP [10] is refined using 9,900 pairs of matches). FlowWeb uses the output of DSP [10] as initial correspondences, and refines them with the cyclic constraint. Since our method clearly outperforms DSP, using FlowWeb as a post-processing step would likely increase performance further. Figure 68 visualizes the part matching results.

For more examples and qualitative results, see our project webpage.

(a) Source image.
(b) Target image.
(c) DSP.
(d) Proposal Flow.
(e) Source image.
(f) Target image.
(g) DSP.
(h) Proposal Flow.
Fig. 68: Examples of dense flow fields on PASCAL parts. (a-b) Source images are warped to the target images using the dense correspondences estimated by (c) DSP [10] and (d) Proposal Flow (LOM w/ SS and HOG). (e-f) Similarly, annotated part segments for the source images are warped to the target images using the dense correspondences computed by (g) DSP and (h) Proposal Flow (LOM w/ SS and HOG). (Best viewed in color.)
Classes car(S) car(G) car(M) duc(S) mot(S) mot(G) mot(M) win(w/o C) win(w/ C) win(M) Avg.
PCK 0.95 0.96 0.99 0.93 0.88 0.89 0.91 1.00 1.00 1.00 0.95
TABLE X: PCK performance for a leave-one-out validation on the PF-WILLOW dataset.
Classes aero bike bird boat bot bus car cat cha cow tab dog hor mbik pers plnt she sofa trai tv Avg.
PCK 0.74 0.89 0.69 0.91 0.92 0.90 0.85 0.83 0.76 0.81 0.73 0.75 0.74 0.75 0.84 0.83 0.73 0.83 0.73 0.86 0.80
TABLE XI: PCK performance for a leave-one-out validation on the PF-PASCAL dataset.
Fig. 69: Verification of ground-truth data using a leave-k-out validation. This shows the average PCK over 10 trials and all object classes. For this experiment, we leave out k randomly selected keypoints per image pair, and then measure PCK scores between the estimated correspondences (using TPS warps) of the left-out keypoints and their ground-truth annotations. (Best viewed in color.)

5.4 Quality of generated ground-truth correspondence

Of course, our “ground truth” for the PF datasets is only approximate, since it is obtained by interpolation. We evaluate its quality using a leave-k-out validation: when generating ground-truth dense correspondences using TPS warping as in Section 4.1, we leave out k randomly selected keypoints per image pair (e.g., among the 10 keypoints in the PF-WILLOW dataset), and then evaluate the PCK between the approximated correspondences (using TPS warps) of the left-out keypoints and their ground-truth annotations. The average PCK over 10 trials and all object classes is shown in Fig. 69. The number in parentheses denotes the number of ground-truth keypoints; for the PF-PASCAL, each image pair has a different number of keypoints. We see that using more keypoint annotations improves the quality of the generated ground truth. Note that a perfect score would be 1.0. In Tables X and XI, we show the PCK results for a leave-one-out validation. The average PCK scores are 0.95 and 0.80 on the PF-WILLOW and PF-PASCAL, respectively. These numbers are quite reasonable, and validate our TPS-based ground-truth data.
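The leave-k-out check above can be sketched as follows; a simplified Python sketch in which scipy's radial basis interpolator with a thin-plate kernel stands in for the paper's TPS warp, and the function name and fixed seed are our own choices:

```python
import numpy as np
from scipy.interpolate import Rbf

def leave_k_out_pck(src_kps, tgt_kps, k, alpha, bbox_hw, rng=None):
    """Fit a thin-plate-spline warp on all but k keypoint pairs, warp the
    held-out source keypoints, and score them with PCK against their
    ground-truth target annotations."""
    rng = rng or np.random.default_rng(0)
    n = len(src_kps)
    held = rng.choice(n, size=k, replace=False)
    keep = np.setdiff1d(np.arange(n), held)
    # One interpolator per output coordinate (x and y of the target points).
    sx, sy = src_kps[keep, 0], src_kps[keep, 1]
    fx = Rbf(sx, sy, tgt_kps[keep, 0], function='thin_plate')
    fy = Rbf(sx, sy, tgt_kps[keep, 1], function='thin_plate')
    pred = np.stack([fx(src_kps[held, 0], src_kps[held, 1]),
                     fy(src_kps[held, 0], src_kps[held, 1])], axis=1)
    h, w = bbox_hw
    dists = np.linalg.norm(pred - tgt_kps[held], axis=1)
    return float(np.mean(dists <= alpha * max(h, w)))
```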

5.5 Object proposals vs. sliding windows

Our experiments show that proposal flow outperforms state-of-the-art methods such as SIFT Flow [8], DSP [10], and DeepFlow [5]. Note that these methods all employ a form of sliding-window strategy for matching (i.e., regular sampling with a fixed stride; in particular, DeepFlow [5] uses a stride of 1). Figures 25(a) and 30 evaluate SW within our approach, where we generate proposals by placing windows on a regular grid across 5 predefined scales and 5 aspect ratios with a uniform stride (following [23]). The PCR and mIoU@k plots show that object proposals clearly outperform SW with the same number of regions. In Table VII, we can see that 1) the proposal flow method with SW already outperforms competing algorithms, and 2) it further benefits from the use of SS, going from 0.47 to 0.50 in terms of the IoU metric. Note that this metric focuses on the foreground matching quality [10], implying that the use of object proposals helps in matching foreground regions. The advantage is even clearer with more cluttered images: for example, LOM with SW and with SS on the PF-WILLOW gives a PCK of 0.42 and 0.56, respectively, as shown in Table III. The superior performance comes from the effective use of geometric contextual information as well as object proposals.
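The SW baseline above can be sketched as follows; a minimal Python sketch in the spirit of [23], where the particular scale, aspect-ratio, and stride values are our own illustrative choices:

```python
import itertools
import numpy as np

def sliding_windows(img_h, img_w, scales=(0.1, 0.2, 0.4, 0.6, 0.8),
                    aspects=(0.5, 0.75, 1.0, 1.5, 2.0), stride_frac=0.1):
    """Place windows [x1, y1, x2, y2] on a regular grid across 5 scales
    and 5 aspect ratios with a uniform stride."""
    boxes = []
    for s, a in itertools.product(scales, aspects):
        # Window size from scale (fraction of image area) and aspect ratio.
        wh = np.sqrt(s * img_h * img_w / a)
        ww = a * wh
        if wh > img_h or ww > img_w:
            continue  # window does not fit in the image
        step_y = max(1, int(stride_frac * img_h))
        step_x = max(1, int(stride_frac * img_w))
        for y in range(0, int(img_h - wh) + 1, step_y):
            for x in range(0, int(img_w - ww) + 1, step_x):
                boxes.append([x, y, x + ww, y + wh])
    return np.array(boxes)
```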

6 Discussion

We have presented a robust region-based semantic flow method, called proposal flow, and shown that it can effectively be mapped to pixel-wise dense correspondences. We have also introduced the PF datasets for semantic flow, and shown that they provide a reasonable benchmark for semantic flow evaluation without extremely expensive manual annotation of full ground truth. Our benchmarks can be used to evaluate region-based semantic flow methods as well as pixel-based ones, and experiments with the PF datasets demonstrate that proposal flow substantially outperforms existing semantic flow methods. Experiments on the Caltech-101, PASCAL parts, and Taniai's datasets further validate these results.


This work was supported by ERC grants VideoWorld and Allegro, and the Institut Universitaire de France.


  • [1] M. Okutomi and T. Kanade, “A multiple-baseline stereo,” TPAMI, vol. 15, no. 4, pp. 353–363, 1993.
  • [2] C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in CVPR, 2011.
  • [3] B. K. Horn and B. G. Schunck, “Determining optical flow: A retrospective,” Artificial Intelligence, vol. 59, no. 1, pp. 81–87, 1993.
  • [4] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid, “DeepMatching: Hierarchical deformable dense matching,” IJCV, pp. 1–24, 2015.
  • [5] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid, “Deepflow: Large displacement optical flow with deep matching,” in ICCV, 2013.
  • [6] J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide-baseline stereo from maximally stable extremal regions,” Image and vision computing, vol. 22, no. 10, pp. 761–767, 2004.
  • [7] H. Yang, W.-Y. Lin, and J. Lu, “Daisy filter flow: A generalized discrete approach to dense correspondences,” in CVPR, 2014.
  • [8] C. Liu, J. Yuen, and A. Torralba, “SIFT flow: Dense correspondence across scenes and its applications,” TPAMI, vol. 33, no. 5, pp. 978–994, 2011.
  • [9] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, “Non-rigid dense correspondence with applications for image enhancement,” ACM TOG, vol. 30, no. 4, p. 70, 2011.
  • [10] J. Kim, C. Liu, F. Sha, and K. Grauman, “Deformable spatial pyramid matching for fast dense correspondences,” in CVPR, 2013.
  • [11] T. Zhou, Y. Jae Lee, S. X. Yu, and A. A. Efros, “FlowWeb: Joint image set alignment by weaving consistent, pixel-wise correspondences,” in CVPR, 2015.
  • [12] O. Duchenne, A. Joulin, and J. Ponce, “A graph-matching kernel for object categorization,” in ICCV, 2011.
  • [13] T. Taniai, S. N. Sinha, and Y. Sato, “Joint recovery of dense correspondence and cosegmentation in two images,” in CVPR, 2016.
  • [14] H. Bristow, J. Valmadre, and S. Lucey, “Dense semantic correspondence where every pixel is a classifier,” in ICCV, 2015.
  • [15] T. Hassner, V. Mayzels, and L. Zelnik-Manor, “On SIFTs and their scales,” in CVPR, 2012.
  • [16] J. Hur, H. Lim, C. Park, and S. C. Ahn, “Generalized deformable spatial pyramid: Geometry-preserving dense correspondence estimation,” in CVPR, 2015.
  • [17] W. Qiu, X. Wang, X. Bai, Z. Tu et al., “Scale-space SIFT flow,” in WACV, 2014.
  • [18] M. Tau and T. Hassner, “Dense correspondences across scenes and scales,” TPAMI, vol. 38, no. 5, pp. 875–888, 2016.
  • [19] E. Trulls, I. Kokkinos, A. Sanfeliu, and F. Moreno-Noguer, “Dense segmentation-aware descriptors,” in CVPR, 2013.
  • [20] C. B. Choy, M. Chandraker, J. Gwak, and S. Savarese, “Universal correspondence network,” in NIPS, 2016.
  • [21] T. Zhou, P. Krähenbühl, M. Aubry, Q. Huang, and A. A. Efros, “Learning dense correspondence via 3D-guided cycle consistency,” in CVPR, 2016.
  • [22] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marques, and J. Malik, “Multiscale combinatorial grouping,” in CVPR, 2014.
  • [23] J. Hosang, R. Benenson, P. Dollár, and B. Schiele, “What makes for effective detection proposals?” TPAMI, 2015.
  • [24] S. Manen, M. Guillaumin, and L. Van Gool, “Prime object proposals with randomized Prim’s algorithm,” in ICCV, 2013.
  • [25] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders, “Selective search for object recognition,” IJCV, vol. 104, no. 2, pp. 154–171, 2013.
  • [26] C. L. Zitnick and P. Dollár, “Edge boxes: Locating object proposals from edges,” in ECCV, 2014.
  • [27] R. Girshick, “Fast R-CNN,” in ICCV, 2015.
  • [28] H. Kaiming, Z. Xiangyu, R. Shaoqing, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in ECCV, 2014.
  • [29] G. Zhu, F. Porikli, and H. Li, “Beyond local search: Tracking objects everywhere with instance-specific proposals,” in CVPR, 2016.
  • [30] B. Ham, M. Cho, C. Schmid, and J. Ponce, “Proposal flow,” in CVPR, 2016.
  • [31] L. Bourdev and J. Malik, “Poselets: Body part detectors trained using 3D human pose annotations,” in ICCV, 2009.
  • [32] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, 2nd ed., 2011.
  • [33] M. Cho and K. M. Lee, “Progressive graph matching: Making a move of graphs via probabilistic voting,” in CVPR, 2012.
  • [34] J. L. Long, N. Zhang, and T. Darrell, “Do convnets learn correspondence?” in NIPS, 2014.
  • [35] I. Kemelmacher-Shlizerman and S. M. Seitz, “Collection flow,” in CVPR, 2012.
  • [36] X. Zhou, M. Zhu, and K. Daniilidis, “Multi-image matching via fast alternating minimization,” in ICCV, 2015.
  • [37] J. Carreira, A. Kar, S. Tulsiani, and J. Malik, “Virtual view networks for object reconstruction,” in CVPR, 2015.
  • [38] G. Gkioxari, R. Girshick, and J. Malik, “Contextual action recognition with r*CNN,” in ICCV, 2015.
  • [39] R. G. Cinbis, J. Verbeek, and C. Schmid, “Multi-fold MIL training for weakly supervised object localization,” in CVPR, 2014.
  • [40] J. Dai, K. He, and J. Sun, “BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation,” in ICCV, 2015.
  • [41] M. Cho, S. Kwak, C. Schmid, and J. Ponce, “Unsupervised object discovery and localization in the wild: Part-based matching using bottom-up region proposals,” in CVPR, 2015.
  • [42] H. Jiang, “Matching bags of regions in RGBD images,” in CVPR, 2015.
  • [43] M. Bai, W. Luo, K. Kundu, and R. Urtasun, “Exploiting semantic information and deep matching for optical flow,” in ECCV, 2016.
  • [44] L. Sevilla-Lara, D. Sun, V. Jampani, and M. J. Black, “Optical flow with semantic segmentation and localized layers,” in CVPR, 2016.
  • [45] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in NIPS, 2015.
  • [46] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in CVPR, 2005.
  • [47] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in CVPR, 2006.
  • [48] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.
  • [49] B. Leibe, A. Leonardis, and B. Schiele, “Robust object detection with interleaved categorization and segmentation,” IJCV, vol. 77, no. 1-3, pp. 259–289, 2008.
  • [50] S. Maji and J. Malik, “Object detection using a max-margin hough transform,” in CVPR, 2009.
  • [51] P. Felzenszwalb, D. McAllester, and D. Ramanan, “A discriminatively trained, multiscale, deformable part model,” in CVPR, 2008.
  • [52] H. P. Lopuhaä and P. J. Rousseeuw, “Breakdown points of affine equivariant estimators of multivariate location and covariance matrices,” The Annals of Statistics, pp. 229–248, 1991.
  • [53] P. T. Fletcher, S. Venkatasubramanian, and S. Joshi, “Robust statistics on Riemannian manifolds via the geometric median,” in CVPR, 2008.
  • [54] R. Chandrasekaran and A. Tamir, “Open questions concerning Weiszfeld’s algorithm for the Fermat-Weber location problem,” Mathematical Programming, vol. 44, no. 1-3, pp. 293–295, 1989.
  • [55] F. L. Bookstein, “Principal warps: Thin-plate splines and the decomposition of deformations,” TPAMI, no. 6, pp. 567–585, 1989.
  • [56] G. Donato and S. Belongie, “Approximate thin plate spline mappings,” in ECCV, 2002.
  • [57] B. Ham, M. Cho, and J. Ponce, “Robust image filtering using joint static and dynamic guidance,” in CVPR, 2015.
  • [58] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. J. Black, and R. Szeliski, “A database and evaluation methodology for optical flow,” IJCV, vol. 92, no. 1, pp. 1–31, 2011.
  • [59] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A naturalistic open source movie for optical flow evaluation,” in ECCV, 2012.
  • [60] R. Sibson et al., “A brief description of natural neighbour interpolation,” Interpreting multivariate data, vol. 21, pp. 21–36, 1981.
  • [61] A. Kanazawa, D. W. Jacobs, and M. Chandraker, “WarpNet: Weakly supervised matching for single-view reconstruction,” in CVPR, 2016.
  • [62] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The Caltech-UCSD Birds-200-2011 dataset,” 2011.
  • [63] Y. Yang and D. Ramanan, “Articulated human detection with flexible mixtures of parts,” TPAMI, vol. 35, no. 12, pp. 2878–2890, 2013.
  • [64] M. Cho, K. Alahari, and J. Ponce, “Learning graphs to match,” in ICCV, 2013.
  • [65] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer, “Discriminative learning of deep convolutional feature point descriptors,” in ICCV, 2015.
  • [66] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” IJCV, vol. 60, no. 2, pp. 91–110, 2004.
  • [67] K. Tang, A. Joulin, L.-J. Li, and L. Fei-Fei, “Co-localization in real-world images,” in CVPR, 2014.
  • [68] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in CVPR, 2009.
  • [69] M. Brown, G. Hua, and S. Winder, “Discriminative learning of local image descriptors,” TPAMI, vol. 33, no. 1, pp. 43–57, 2011.
  • [70] M. Paulin et al., “Local convolutional features with unsupervised training for image retrieval,” in ICCV, 2015.
  • [71] S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in CVPR, 2015.
  • [72] X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg, “MatchNet: Unifying feature and metric learning for patch-based matching,” in CVPR, 2015.
  • [73] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” TPAMI, vol. 28, no. 4, pp. 594–611, 2006.
  • [74] E. G. Learned-Miller, “Data driven image models through continuous joint alignment,” TPAMI, vol. 28, no. 2, pp. 236–250, 2006.
  • [75] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, “RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images,” TPAMI, vol. 34, no. 11, pp. 2233–2246, 2012.
  • [76] C. Liu, J. Yuen, and A. Torralba, “Nonparametric scene parsing via label transfer,” TPAMI, vol. 33, no. 12, pp. 2368–2382, 2011.
  • [77] Y.-L. Lin, V. I. Morariu, W. Hsu, and L. S. Davis, “Jointly optimizing 3D model fitting and fine-grained classification,” in ECCV, 2014.
  • [78] M. Rubinstein, A. Joulin, J. Kopf, and C. Liu, “Unsupervised joint object discovery and segmentation in internet images,” in CVPR, 2013.
  • [79] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik, “Semantic contours from inverse detectors,” in ICCV, 2011.
  • [80] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun et al., “Detect what you can: Detecting and representing objects using holistic models and body parts,” in CVPR, 2014.
  • [81] Y. Xiang, R. Mottaghi, and S. Savarese, “Beyond PASCAL: A benchmark for 3D object detection in the wild,” in WACV, 2014.