1 Introduction
Finding point correspondences in image pairs of a static scene is a classical problem in stereo and structure from motion (SFM). Finding correspondences in wide baseline setups, i.e., when the cameras' focal centers are distant, is particularly challenging: images obtained in such setups are generally subject to significant distortion, and their content may differ substantially, also due to occlusion.
The problem of wide baseline stereo matching has received significant attention in recent years (see a brief review in Section 2). Existing approaches have focused largely on developing better feature descriptors for correspondence and on accurate recovery of epipolar line constraints. However, the problem of finding correspondences once the epipolar geometry has been estimated, although challenging, has not yet received sufficient attention.
In this paper we introduce a novel method for finding correspondences in wide baseline image pairs of a static scene. Noting that matching is often ambiguous even when epipolar constraints are taken into account, we propose to address the problem by using deformation maps to model geometric changes along epipolar lines. Specifically, given two images and an estimated fundamental matrix, our algorithm seeks to compute a geometric map that relates the images and satisfies two requirements: first, it should respect the epipolar constraints, and second, the amount of distortion that the mapping exerts locally should be bounded. We refer to such a map as an epipolar-consistent bounded-distortion (EBD) map. Our core theoretical contribution is in showing that, while the set of maps whose distortion is bounded is non-convex, its intersection with maps that satisfy the epipolar constraints (with an ordering assumption [2]) is convex, allowing us to introduce an efficient matching algorithm.
Bounded distortion (BD) maps are continuous, locally injective transformations whose conformal distortion at every point (defined as the condition number of their Jacobian matrices) is bounded. Intuitively, the conformal distortion measures how different the local map is from a similarity transformation, i.e., how much the local aspect ratio is changed. Bounding the conformal distortion is motivated by the following observation. Suppose two cameras are set so that their image planes are parallel (including rectified setups as a special case). For any fronto-parallel plane it can be readily verified that its projections onto the two image planes are related by a similarity transformation. Therefore such projections undergo no distortion. Bounding the distortion in these setups therefore limits the slant and tilt of the recovered planes.
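As a concrete sanity check, the conformal distortion of a local map can be read off the singular values of its Jacobian. The sketch below (using hypothetical example matrices, not data from this paper) verifies that a similarity has distortion 1, while an axis-aligned stretch has distortion equal to its aspect-ratio change:

```python
import numpy as np

def conformal_distortion(J):
    """Condition number of a 2x2 Jacobian: ratio of its largest
    to smallest singular value. Equals 1 exactly for similarities."""
    s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

# A similarity (rotation + uniform scale) has distortion 1.
theta, scale = 0.7, 2.0
S = scale * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])

# A map stretching x by 3 changes the local aspect ratio: distortion 3.
A = np.diag([3.0, 1.0])

print(conformal_distortion(S))  # ~1.0
print(conformal_distortion(A))  # 3.0
```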
To formulate our solution we define a cost function that seeks an EBD map that maximizes the number of matches. We optimize this robust objective using majorization-minimization. The use of a robust objective allows us to recover when certain portions of the images are distorted beyond the bounds allowed by our algorithm or when the set of initial correspondences includes outliers.
We have tested our method on datasets containing pairs of images with ground truth matches and compared it to several state-of-the-art methods. Our method consistently outperformed these methods.
2 Previous work
The problem of wide baseline stereo matching has been approached by a number of studies. Considerable effort has been put into designing better features and descriptors and into utilizing them to estimate the fundamental matrix. Several studies have used affine invariant features [29, 31]. A wide variety of alternatives to the SIFT descriptor [22] have been proposed, emphasizing speed (e.g., the Daisy descriptor [27]) or invariance to extreme transformations such as scale changes [12]. Other studies have utilized line segments [4] and regional features (e.g., MSER [13] and texture-based descriptors [24]). [23] groups coplanar points by identifying homographies and uses them to estimate epipolar lines. A few of those descriptors were designed to also account for occlusion (e.g., [27, 28]). Finally, a number of studies have approached the problem from a multi-view perspective [25, 9].
Relevant to our work also are generic methods for robust, dense matching, based on a variety of point-feature and regional descriptors, such as SIFT flow [21, 20], PatchMatch [3], NRDC [11], LDOF [7] and, more recently, SPM [14], as well as models of deformation (e.g., [5, 8, 16]), which can potentially be applied in a wide baseline setting. Another recent study [17] proposed an algorithm for mosaic stitching by finding a map that smoothly departs from a global affine transformation. Our experiments include comparison to [16] and [20] modified to seek matches near corresponding epipolar lines. We show that the results of our method are superior to these methods even with these modifications, suggesting that our global model of deformation provides a more suitable model for wide baseline stereo.
Our model of deformation maps is derived from the work of [18], which proposed an approach for optimizing functionals over bounded distortion transformations using sequences of convex optimizations. [19] further used this approach for robust feature matching in general pairs of images (analogous to RANSAC [10], but allowing many degrees of freedom). Our work shows that the set of EBD maps is convex, allowing us to introduce an efficient algorithm that is less sensitive to initialization.
3 Method
In this section we describe our algorithmic approach to the problem of wide baseline image matching. We assume we are given two images $I_1, I_2$, with their fundamental matrix $F$ either supplied as input or computed automatically, e.g., using RANSAC [10]. Our goal is to find a map $\Phi$ from $I_1$ to $I_2$ that relates corresponding points in the two images; i.e., for every pair of corresponding points $(p, q)$, the desired map satisfies $\Phi(p) = q$. We start with a large set of candidate corresponding pairs of points $(p_i, q_i)$, $i = 1, \ldots, n$. Then, we search for a map $\Phi$, from the family $\mathcal{D}_k$ of epipolar bounded distortion mappings (defined below), that matches as many pairs as possible. Specifically, we aim at optimizing
(1a) $\min_{\Phi} \; \sum_{i=1}^{n} \big\| \Phi(p_i) - q_i \big\|_0$
(1b) $\text{s.t.} \quad \Phi \in \mathcal{D}_k$
where for $x \in \mathbb{R}^2$ the zero-norm is defined by $\|x\|_0 = 0$ if $x = 0$, and $\|x\|_0 = 1$ otherwise. The optimization problem (1) strives to maximize the number of matched pairs under the deformation model. This can be seen by noting that the energy (1a) counts how many pairs are not matched by $\Phi$. Similarly to [19], we solve (1) by: 1) computing a set of candidate pairs of correspondences $(p_i, q_i)$; and 2) optimizing (1) using an iterative reweighted least-squares (IRLS) approach. However, differently from previous work, we devise a novel formulation of the Bounded Distortion deformation model that is shown to be convex when matching images under the epipolar constraint. The convex model facilitates the optimization of (1), allows considerably faster optimization times, incorporates epipolar constraints, and does not require any particular initialization or convexification. We explain the deformation model next.
3.1 Convex Epipolar BD Deformations
At the core of our method is a convex characterization of the space $\mathcal{D}_k$ of epipolar BD deformations. In a nutshell, $\mathcal{D}_k$ is a one-parameter family of non-rigid deformations that allow a bounded amount of distortion and respect epipolar constraints. To formulate $\mathcal{D}_k$ we introduce a triangulation $\mathcal{T} = (\mathcal{V}, \mathcal{E}, \mathcal{F})$ on image $I_1$, where $\mathcal{V} = \{v_j\}$ is the vertex set, $\mathcal{E}$ the edge set, and $\mathcal{F}$ the triangles (faces).
A mapping $\Phi$ is represented by prescribing new locations $u_j$ to the vertices of the triangulation in the second image, $I_2$. The mapping is defined as the unique piecewise-linear (PL) mapping satisfying $\Phi(v_j) = u_j$. We denote by $A_t$ the affine map given by the restriction of $\Phi$ to the triangle $t \in \mathcal{F}$.
Using the entire collection of PL mappings defined on a triangulation is too general: every vertex is allowed to move arbitrarily, and in the context of stereo this admits unreasonable scene geometries. Instead, we restrict our attention to a one-parameter family of mapping spaces that translates to a reasonable assumption on the scene's geometry. In particular, in addition to imposing epipolar line constraints, we bound the deviation of the affine maps $A_t$ from similarity transformations using a parameter $k$, as defined below. We next derive this constraint for a single affine transformation and later show how to set the constraints for the entire triangulation to define $\mathcal{D}_k$.
3.1.1 Epipolar Bounded-Distortion affine map
We now focus on a single affine map. A general planar affine map $A(x) = Mx + \delta$ can be written uniquely as

(2) $A(x) = (C + D)\,x + \delta,$

where

$C = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}, \qquad D = \begin{pmatrix} c & d \\ d & -c \end{pmatrix}, \qquad \delta = \begin{pmatrix} \delta_1 \\ \delta_2 \end{pmatrix}$

are a similarity matrix, an anti-similarity matrix (i.e., a reflected similarity), and a translation vector, respectively [18, 19]. The ratio of Frobenius norms of the anti-similarity and similarity parts, $\|D\|_F / \|C\|_F$, provides a natural scale-invariant measure for the deviation of $A$ from a similarity. In fact,

$\frac{1 + \|D\|_F / \|C\|_F}{1 - \|D\|_F / \|C\|_F}$

is the conformal distortion of the affine map, which equals the ratio of the maximal singular value to the minimal singular value (i.e., the condition number) of the linear part $M = C + D$ of the affine map. We therefore set the Bounded Distortion constraint,

(3) $\|D\|_F \le k \, \|C\|_F,$

where $k \in [0, 1)$, as mentioned above, is a parameter of the deformation space. We note that an affine map satisfying (3) is also orientation preserving since $\det M = \big( \|C\|_F^2 - \|D\|_F^2 \big) / 2$ and $k < 1$.
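The similarity/anti-similarity split in (2) has a simple closed form, and the identity between the Frobenius-norm ratio and the condition number can be checked numerically. A minimal sketch (the matrix M below is an arbitrary example):

```python
import numpy as np

def similarity_antisimilarity(M):
    """Split a 2x2 matrix M = C + D into its similarity part C
    (scaled rotation) and anti-similarity part D (scaled reflection)."""
    a = (M[0, 0] + M[1, 1]) / 2
    b = (M[0, 1] - M[1, 0]) / 2
    c = (M[0, 0] - M[1, 1]) / 2
    d = (M[0, 1] + M[1, 0]) / 2
    C = np.array([[a, b], [-b, a]])
    D = np.array([[c, d], [d, -c]])
    return C, D

M = np.array([[2.0, 0.5], [0.1, 1.5]])  # arbitrary linear part
C, D = similarity_antisimilarity(M)
assert np.allclose(C + D, M)            # the split is exact

# Frobenius-norm ratio vs. condition number of M:
r = np.linalg.norm(D) / np.linalg.norm(C)
s = np.linalg.svd(M, compute_uv=False)
assert np.isclose(s[0] / s[1], (1 + r) / (1 - r))  # conformal distortion

# Bounded-distortion constraint (3) with parameter k:
k = 0.5
print(r <= k)  # True for this M
```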
The Bounded-Distortion constraint (3) is not convex and requires some convexification to work with in practice [18]. However, surprisingly, it becomes convex when we intersect this constraint with the epipolar line constraints (assuming epipolar line pairs can be oriented, as we explain below). More generally, when the affine map $A$ is known to map some directed line $\ell$ (e.g., an epipolar line) to another directed line $\ell'$, while preserving the direction, then Eq. (3) can be formulated as a convex constraint in the parameters $(a, b, c, d, \delta)$; see Figure 1 for an illustration. We summarize this in a proposition:
Proposition 1
The collection of Bounded-Distortion planar affine transformations that map a directed line $\ell$ to another directed line $\ell'$ is convex.
We start by proving the proposition for the case that the directed lines $\ell, \ell'$ both coincide with the $x$ axis taken with the positive direction. By assumption, $A$ maps points of the form $(x, 0)^T$ to points of the form $(x', 0)^T$ and preserves the positive direction. This implies that

(4) $d = b, \qquad \delta_2 = 0,$

where, in addition, $a + c > 0$ since the direction is preserved. Plugging this into (3), squaring and rearranging we get

(5) $c^2 + (1 - k^2)\, b^2 \le k^2 a^2.$

If we show that $a \ge 0$ then taking the square root of both sides of (5) leads to a (convex) second-order cone (SOC) constraint,

(6) $\sqrt{c^2 + (1 - k^2)\, b^2} \le k\, a.$

Indeed, since $a + c > 0$ and (5) implies that $|c| \le k |a| < |a|$, we must have $a > 0$. We have therefore shown that any affine map (2) that satisfies the assumption (3) and maps the real axis to itself while preserving the positive direction has to satisfy (4) and (6). In the other direction, any nonzero affine map that satisfies (4) and (6) maps the $x$ axis to itself while preserving the positive direction (since $a + c \ge (1 - k)\, a > 0$) and satisfies (3).
For general directed lines $\ell, \ell'$ we can represent any affine map satisfying the assumptions of Proposition 1 as

(7) $A = S' \circ A_0 \circ S^{-1},$

where $S$, $S'$ are similarities that map the $x$ axis (with the positive direction) to $\ell$, $\ell'$, respectively, and $A_0$ is a Bounded-Distortion map that takes the $x$ axis to itself while preserving the positive direction, as above. Note that this change of coordinates does not change the distortion of the affine map. Therefore, the collection of all affine maps satisfying the assumption of the proposition with general lines $\ell, \ell'$ is convex.
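The convexity claim can be probed numerically: under (4), the SOC constraint (6) is convex in the parameters, so convex combinations of feasible maps remain feasible and keep the conformal distortion below (1 + k)/(1 - k). A small check with hypothetical parameter values:

```python
import numpy as np

k = 0.6
bound = (1 + k) / (1 - k)  # maximal allowed condition number

def make_map(a, b, c):
    # Under (4), d = b; linear part of the affine map (2) is C + D.
    return np.array([[a + c, 2 * b], [0.0, a - c]])

def feasible(a, b, c):
    # SOC constraint (6): sqrt(c^2 + (1 - k^2) b^2) <= k a.
    return np.hypot(c, np.sqrt(1 - k**2) * b) <= k * a

# Two feasible maps fixing the (directed) x axis...
p1 = (1.0, 0.2, 0.3)
p2 = (2.0, -0.5, 0.1)
assert feasible(*p1) and feasible(*p2)

# ...and every convex combination stays feasible and bounded-distortion.
for t in np.linspace(0.0, 1.0, 11):
    p = tuple(t * x + (1 - t) * y for x, y in zip(p1, p2))
    assert feasible(*p)
    s = np.linalg.svd(make_map(*p), compute_uv=False)
    assert s[0] / s[1] <= bound + 1e-9
```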
The consequence of this proposition is that the set of bounded distortion affine transformations that map an epipolar line in one image to an epipolar line in the other image is convex, provided that the pair of epipolar lines can be oriented. Consider a pair of epipolar lines $\ell$ and $\ell'$. It can be readily shown that any planar patch in 3D whose front side is visible to both cameras will project to $\ell$ and $\ell'$ with consistent orientation. We note however that for more general scene structures orientation may not always be preserved. Still, many stereo algorithms assume ordering (dating back to [2]). We therefore conclude with the following corollary:
Corollary 1
The collection of Bounded-Distortion planar affine transformations that map a directed epipolar line to another directed epipolar line is convex.
3.1.2 Mappings of triangulations
We use the results of the previous subsection to formulate our convex mapping space $\mathcal{D}_k$, where each of its members, $\Phi$, is a piecewise linear map whose restriction to a triangle $t = (v_1, v_2, v_3)$ is an affine map $A_t(x) = M_t x + \delta_t$, with $M_t = C_t + D_t$ as in (2). The coefficients of this affine map, $a, b, c, d$ and $\delta$, are all linear functions of the degrees of freedom (i.e., the mapped vertices $u_1, u_2, u_3$) of the mapping, as follows:

(8) $\begin{pmatrix} M_t & \delta_t \end{pmatrix} = \begin{pmatrix} u_1 & u_2 & u_3 \end{pmatrix} \begin{pmatrix} v_1 & v_2 & v_3 \\ 1 & 1 & 1 \end{pmatrix}^{-1},$

where $u_j, v_j$ are viewed as column vectors in the plane. Note that the inverted matrix (the rightmost matrix in (8)) is constant as it only depends on the source triangulation's vertices $v_1, v_2, v_3$. Therefore, if the triangle $t$ has an edge on an epipolar line $\ell$, we can compute the corresponding directed epipolar line $\ell'$ in the second image using the fundamental matrix $F$ and combine (8) with (7), (6) and (4) to constrain $A_t$ to be Bounded Distortion and to map its edge on $\ell$ to $\ell'$. See Figure 1 for an illustration. For the third vertex of $t$ (shown in red) we can impose its epipolar constraint by adding the suitable linear equation. Adding these equations for all triangles (one SOC and a few linear equality constraints per triangle) results in a convex SOCP realization of the space $\mathcal{D}_k$ of PL mappings with a single distortion parameter $k$.

3.1.3 Triangulating the source image
In order to construct $\mathcal{D}_k$ we require a triangulation $\mathcal{T}$ with the property that each triangle has an edge on an epipolar line of image $I_1$. We call such a $\mathcal{T}$ an epipolar triangulation. We construct such a triangulation by placing an equispaced grid over a polar coordinate frame centered at the epipole. For each triangle we enforce its edges to coincide with the appropriate epipolar lines by applying constrained Delaunay triangulation. We only keep triangles whose intersection with the image is non-empty. Figure 2 depicts an example. We further determine the orientations of the epipolar lines. This can be done simply by recovering projective camera matrices from the fundamental matrix and testing the orientation induced by, say, a plane visible to both cameras.
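A minimal sketch of such an epipolar triangulation, assuming the simple polar-grid construction described above (vertex placement and connectivity only; clipping to the image and the constrained Delaunay step are omitted, and the epipole and grid sizes are made-up values):

```python
import numpy as np

def polar_epipolar_grid(epipole, radii, n_angles):
    """Vertices of an equispaced grid in polar coordinates around the
    epipole; every radial grid line lies on an epipolar line."""
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    ex, ey = epipole
    return np.array([[ex + r * np.cos(t), ey + r * np.sin(t)]
                     for t in thetas for r in radii])

def ring_triangles(n_angles, n_radii):
    """Connectivity: each polar quad is split into two triangles, and
    each triangle keeps one edge on a radial (epipolar) line."""
    tris = []
    for i in range(n_angles):
        j = (i + 1) % n_angles          # neighboring ray
        for r in range(n_radii - 1):
            a, b = i * n_radii + r, i * n_radii + r + 1
            c, d = j * n_radii + r, j * n_radii + r + 1
            tris.append((a, b, c))      # edge (a, b) lies on ray i
            tris.append((b, d, c))      # edge (c, d) lies on ray j
    return tris

verts = polar_epipolar_grid((320.0, 240.0), np.arange(1.0, 5.0), n_angles=12)
tris = ring_triangles(n_angles=12, n_radii=4)

# Every triangle has an edge whose endpoints lie on the same ray,
# i.e., on a common epipolar line through the epipole.
same_ray = lambda x, y: x // 4 == y // 4
assert all(same_ray(t[0], t[1]) or same_ray(t[0], t[2]) or same_ray(t[1], t[2])
           for t in tris)
print(len(verts), len(tris))  # 48 72
```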
3.2 Optimization
To optimize (1) we first use a simple modification of SIFT [22] to find candidate pairs of corresponding points that satisfy the epipolar constraint. If the fundamental matrix is not provided we use standard SIFT and RANSAC to first estimate $F$.
Next, we optimize (1) using IRLS combined with the convex epipolar Bounded Distortion constraints. Assuming a fixed list of pairs $(p_i, q_i)$, $i = 1, \ldots, n$, we reformulate (1) as

(9a) $\min_{\Phi, \{z_i\}} \; \sum_{i=1}^{n} \eta_\sigma\big( \|z_i\| \big)$
(9b) $\text{s.t.} \quad z_i = \Phi(p_i) - q_i, \quad i = 1, \ldots, n$
(9c) $\qquad \Phi \in \mathcal{D}_k$
where the $z_i$ are auxiliary variables, and the functions $\eta_\sigma$ will be defined soon. The map $\Phi$ is represented by the images of the vertices of the triangulation $\mathcal{T}$, that is $\Phi(v_j) = u_j$. Namely, each vertex $v_j$ is mapped to a new (unknown) location $u_j$ in the second image $I_2$, and $\Phi$ is the unique piecewise linear interpolation over the triangles $\mathcal{F}$, as described in Section 3.1.2. The unknowns in the optimization problem (9) are therefore the target vertex locations $u_j$. The constraint (9b) is set for every $i$ by finding the triangle containing $p_i$ and encoding $p_i$ in barycentric coordinates of the corners $v_{i_1}, v_{i_2}, v_{i_3}$ of that triangle, namely $p_i = \sum_{j=1}^{3} \lambda_j v_{i_j}$, where the barycentric weights satisfy $\lambda_j \ge 0$ and $\sum_j \lambda_j = 1$. (9b) then becomes

(10) $z_i = \sum_{j=1}^{3} \lambda_j u_{i_j} - q_i.$
The EBD constraint (9c) is set by adding Equations (8), (7), (6) and (4) for every triangle of the triangulation $\mathcal{T}$. Note that (6) is a second order cone, and the rest of the equations are linear equalities and inequalities.
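Both the linear dependence of the per-triangle affine map on the mapped vertices (Eq. (8)) and the barycentric encoding used in (10) can be sketched in a few lines (the triangles below are made-up examples):

```python
import numpy as np

def triangle_affine(v, u):
    """Affine map A(x) = M x + t taking source triangle vertices
    v (3x2) to target vertices u (3x2), via the matrix form of (8)."""
    V = np.vstack([v.T, np.ones(3)])   # columns (v_j; 1), constant per source triangle
    A = u.T @ np.linalg.inv(V)         # 2x3: [M | t], linear in the u_j
    return A[:, :2], A[:, 2]

def barycentric(v, p):
    """Barycentric coordinates of point p in triangle v (3x2)."""
    V = np.vstack([v.T, np.ones(3)])
    return np.linalg.solve(V, np.array([p[0], p[1], 1.0]))

v = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # source triangle
u = np.array([[1.0, 1.0], [3.0, 1.5], [0.5, 2.0]])   # mapped vertices
M, t = triangle_affine(v, u)
assert np.allclose(M @ v[1] + t, u[1])  # vertices map to their images

# A point expressed in barycentric coordinates maps linearly, as in (10).
p = np.array([0.25, 0.25])
lam = barycentric(v, p)
assert np.allclose(M @ p + t, lam @ u)
```

Since the rightmost matrix in (8) depends only on the source triangle, the coefficients of the affine map, and hence all the constraints above, are linear (or SOC) in the unknown vertex positions.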
Lastly, optimizing the energy (9a) w.r.t. $\Phi$ requires coping with the non-convexity and non-smoothness of the energy (1a). The IRLS point of view suggests replacing the zero norm with the smooth approximations

(11) $\eta_\sigma(r) = \frac{r^2}{r^2 + \sigma^2}.$

The functions $\eta_\sigma$ are smooth ($C^\infty$) and converge to $\|\cdot\|_0$ as $\sigma \to 0$. For a fixed $\sigma$, (9a) is optimized iteratively by replacing $\eta_\sigma$ with a convex quadratic functional called a majorizer, $\hat{\eta}_{\sigma, r_0}$, with the properties that $\hat{\eta}_{\sigma, r_0}(r_0) = \eta_\sigma(r_0)$, and $\hat{\eta}_{\sigma, r_0}(r) \ge \eta_\sigma(r)$ for all $r$. These two properties guarantee that the IRLS monotonically reduces the energy in each iteration. The majorizers are similar to those in [6],

(12) $\hat{\eta}_{\sigma, r_0}(r) = \eta_\sigma(r_0) + \frac{\sigma^2}{(r_0^2 + \sigma^2)^2} \big( r^2 - r_0^2 \big).$
Replacing $\eta_\sigma$ in (9a) with $\hat{\eta}_{\sigma, r_i^0}$, where $r_i^0 = \|\Phi^0(p_i) - q_i\|$ and $\Phi^0$ is the map found at the previous iteration, results in the following convex quadratic energy in $\Phi$ (remember that the $r_i^0$ are constants):

(13a) $\min_{\Phi, \{z_i\}} \; \sum_{i=1}^{n} w_i \, \|z_i\|^2 + const$
(13b) $\text{s.t.} \quad z_i = \Phi(p_i) - q_i, \quad i = 1, \ldots, n$
(13c) $\qquad \Phi \in \mathcal{D}_k$

where $w_i = \sigma^2 / \big( (r_i^0)^2 + \sigma^2 \big)^2$ is constant at each iteration. In view of (10) this implies a convex quadratic energy in the unknowns $u_j$. We iteratively solve this problem, updating $\Phi^0$ in each iteration until convergence. Each iteration is a convex Second Order Cone Program (SOCP) and is solved using MOSEK [1].
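The IRLS step rests on two properties of the majorizer: it touches the surrogate at the previous residual and lies above it everywhere, which is what guarantees monotone descent. A sketch using one common (Geman-McClure-style) choice consistent with the properties stated above:

```python
import numpy as np

def eta(r, sigma):
    """Smooth surrogate of the zero-norm: 0 at r = 0, -> 1 as r grows,
    approaching the 0-1 penalty as sigma -> 0."""
    return r**2 / (r**2 + sigma**2)

def majorizer(r, r0, sigma):
    """Quadratic majorizer of eta at r0: equals eta at r0 and lies
    above it elsewhere (eta is concave as a function of r^2)."""
    w = sigma**2 / (r0**2 + sigma**2)**2   # the IRLS weight
    return eta(r0, sigma) + w * (r**2 - r0**2)

sigma, r0 = 1.0, 1.5
rs = np.linspace(0.0, 10.0, 1001)
assert np.all(majorizer(rs, r0, sigma) >= eta(rs, sigma) - 1e-12)  # majorizes
assert np.isclose(majorizer(r0, r0, sigma), eta(r0, sigma))        # touches at r0
```

Large residuals (outliers) receive small weights, so the quadratic problem solved at each iteration effectively ignores pairs the model cannot match; starting with a large sigma and shrinking it tightens the surrogate toward the 0-1 penalty.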
In practice, we fix $k$, initialize $\sigma$ to be the diameter of image $I_1$, and solve the above IRLS. Upon convergence, we decrease $\sigma$ and repeat. We continue this until $\sigma$ reaches the order of one pixel. This heuristic of starting from a large $\sigma$ and decreasing it helps avoid local minima of the energy (1a), as the larger $\sigma$ is, the "more convex" the problem is; for example, for sufficiently large $\sigma$ the global minimum of (9) lies in the convex (quadratic) part of all terms and can be found by a single SOCP. Our algorithm is summarized in Algorithm 1.

4 Experiments
Datasets. We evaluate our method by applying the optimization algorithm presented in Sec. 3 to pairs of images from the dataset of [26]. The dataset contains two multi-view collections of high-resolution images, referred to as "Herzjesu" and "Fountain," provided with ground truth depth maps. The Herzjesu dataset contains 8 images and the Fountain dataset contains 11 images. Therefore, in total there are 83 stereo pairs with varying distances between focal points. We tested each pair twice, seeking a map from the left image to the right one and vice versa, obtaining 166 matching problems.
For evaluation we further process the ground truth depth values to obtain ground truth matches. Specifically, for each dataset we employ ray casting (z-buffering) to the 3D surface, obtaining ground-truth correspondences at sub-pixel accuracy. We further used ray casting to determine an occlusion mask and excluded those pixels (for the left image) from our evaluation. (These masks, of course, are not known to the algorithm and are used only for evaluation.)
Our optimization algorithm can work in reasonable runtimes (roughly 5 minutes) when applied to the high-resolution images. However, in order to compare to state-of-the-art algorithms, which are considerably slower at those resolutions, we use the lower resolution suggested in [27, 28]. We do not rectify the images or apply any other preprocessing.
Epipolar SIFT. Our algorithm takes as input pairs of putative correspondences and builds an EBD map that is consistent with as many of the input matches as it can. For the experiments we used SIFT matches (using the VLFeat software package [30]). Classical SIFT matching seeks putative matches throughout the entire image domain. As we assume that epipolar geometry is known (either exactly or approximately), we modify the matching procedure as follows. Given a SIFT descriptor at location $p$ in the left image, we restrict the search for a putative match, $q$, to the area close to the corresponding epipolar line in the right image. This area is determined by limiting the Sampson distance between $p$ and $q$, i.e.,

(14) $\frac{(q^T F p)^2}{(Fp)_1^2 + (Fp)_2^2 + (F^T q)_1^2 + (F^T q)_2^2} \le \tau^2,$

where $F$ is the fundamental matrix, $p$ and $q$ are written in homogeneous coordinates, and $(\cdot)_j$ denotes the $j$'th entry of a vector. We further accept a match $q$ if its SIFT score is at least twice higher than the score of any other candidate within Sampson distance $\tau$. We set $\tau$ to 5. Fig. 3 shows an example of the putative matches obtained using the classical SIFT methodology, while Fig. 4 demonstrates the putative matches obtained with the described methodology, Epipolar SIFT. In these figures the images are presented side-by-side, with the color of each marker corresponding to the value of one image coordinate of the match and its size to the value of the other coordinate. It is evident that the set of putative matches obtained with Epipolar SIFT is richer than that obtained with the classical method.
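The Sampson gating in (14) is straightforward to implement. The sketch below uses the fundamental matrix of a pure horizontal translation (a made-up example), for which corresponding points must share a row:

```python
import numpy as np

def sampson_distance(F, p, q):
    """First-order (Sampson) approximation of the distance of the
    correspondence (p, q) from the epipolar constraint q^T F p = 0.
    p and q are inhomogeneous 2D points."""
    ph = np.array([p[0], p[1], 1.0])
    qh = np.array([q[0], q[1], 1.0])
    Fp, Ftq = F @ ph, F.T @ qh
    num = (qh @ F @ ph) ** 2
    den = Fp[0]**2 + Fp[1]**2 + Ftq[0]**2 + Ftq[1]**2
    return np.sqrt(num / den)

# Fundamental matrix of a pure horizontal translation: epipolar lines
# are the image rows, so corresponding points share the y coordinate.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

print(sampson_distance(F, (10.0, 5.0), (42.0, 5.0)))  # 0.0 (same row)
tau = 5.0
assert sampson_distance(F, (10.0, 5.0), (42.0, 9.0)) > 2  # off the line
```

In the matching procedure, candidates failing the threshold test against tau are simply discarded before the ratio test.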
Algorithms for evaluation. We compare our method to the following algorithms:

BD: Feature matching by bounded distortion suggested by Lipman et al. [18]. This method serves as a baseline for our method since it seeks correspondences consistent with a bounded distortion transformation, but does not take epipolar constraints into account.

Spectral: The spectral technique of Leordeanu and Hebert [16]. This method uses graph methods to find point matches by minimizing pairwise energies.

SiftFlow: by Liu et al. [20], which finds dense correspondence by minimizing an MRF energy whose unary term measures the match between SIFT descriptors.

Homography: Mapping by the best-fitting homography (computed with RANSAC [10]).

Stereo: by Lee et al. [15], which finds dense correspondence between the images after rectification.
We note that the algorithms of [16] and [20] were not designed specifically for wide baseline stereo. For a fair comparison we therefore tested those algorithms in two settings, first in their original (unrestricted) setting, and secondly in a setting that integrates the knowledge of epipolar geometry into the algorithms. The latter is achieved as follows. For [16] we used a version of the algorithm that allows it to select from a candidate set of matches that were either extracted from the entire image (for the unrestricted setting) or from the epipolar SIFT matches (i.e., the same input given to our algorithm). Furthermore, since this algorithm does not compute a map (it only returns a sparse set of matches) we further applied cubic interpolation to extend the matches to the entire image. For [20] we modified the code to allow only maps on or close to corresponding epipolar lines (we set the Sampson distance to 2, which gave the best result). Finally, for homography we used putative matches obtained with the epipolar SIFT, and for the stereo algorithm we used ground truth matches to perform the rectification.
Results. Figures 5 and 6 show an example of the results obtained with our method. The figures show respectively the set of correspondences and the map returned by our optimization. To further evaluate the map $\Phi$ computed with our algorithm for the entire dataset, we checked each tested pair of images over all pixels in the left image after masking it with the ground truth occlusion map. For each non-occluded pixel $p$ we measured the Euclidean distance $\|\Phi(p) - p^{gt}\|$, where $p^{gt}$ is the ground truth point corresponding to $p$. We then produced a cumulative histogram depicting the fraction of non-occluded points against their displacement error from the ground truth target position. In Figures 7 and 8 we report for each error value the median number of points that achieved this error or less over all pairs of images. Table 1 further shows the median fraction of non-occluded pixels that were mapped to a 1 pixel accuracy by our map $\Phi$. We show our results both with an exact fundamental matrix (obtained from ground truth) and with an approximated one (computed with RANSAC [10] using classical SIFT). Our results are further compared to Spectral [16], SiftFlow [21] (both with and without epipolar constraints), to homography estimation and to classical stereo estimation. (To simplify the table we only include results for the epipolar-enhanced algorithms.) As can be seen from the figures and the table, our method outperformed all the tested methods on both datasets with both an exact and an approximate fundamental matrix. We note further that for all algorithms there was no marked difference between the use of exact and approximate fundamental matrices (solid lines vs. dashed), and all methods benefited from incorporating epipolar constraints (compare to the dotted lines for the non-restricted versions).
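The accuracy measure used in Table 1, the fraction of non-occluded pixels mapped within 1 pixel of ground truth, can be computed as follows (with toy predictions and ground truth):

```python
import numpy as np

def accuracy_fraction(pred, gt, occluded, thresh=1.0):
    """Fraction of non-occluded pixels whose predicted match lies
    within `thresh` pixels of the ground-truth correspondence."""
    err = np.linalg.norm(pred - gt, axis=-1)   # per-pixel displacement error
    valid = ~occluded                          # evaluate only visible pixels
    return np.mean(err[valid] <= thresh)

# Toy example: four pixels, one occluded; two of the three visible
# pixels land within 1 pixel of their ground-truth targets.
gt = np.array([[10.0, 5.0], [20.0, 6.0], [30.0, 7.0], [40.0, 8.0]])
pred = gt + np.array([[0.3, 0.0], [0.0, 0.9], [5.0, 0.0], [0.0, 0.0]])
occluded = np.array([False, False, False, True])

print(accuracy_fraction(pred, gt, occluded))  # 2/3 of non-occluded pixels
```

The cumulative histograms in the figures are obtained by sweeping `thresh` over a range of error values.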
Figures 9 and 10 further show a breakdown according to the length of the baseline. For this figure we considered, in each of the two datasets, all pairs whose view indices differ by a fixed offset, for each offset value (between 1 and 7 for Herzjesu and between 1 and 10 for Fountain). For each such set of pairs we counted the number of pixels mapped by our computed map with error of at most 1 pixel and plotted the median of these numbers. As expected, the closer together pairs are, the better our method performs. Compared to the other methods, our method seems to achieve superior accuracy in almost all conditions.
For a pair of images in this dataset our algorithm runs in 100 seconds on a 3.50 GHz Intel Core i7, compared to 400 seconds required for the non-convex BD of [18]. In general, running the non-convex BD with features restricted to epipolar lines is significantly slower and achieves slightly inferior results.
Table 1. Median percentage of non-occluded pixels mapped to 1-pixel accuracy.

Algorithm              | Fountain | Herzjesu
EBD (ours), exact F    | 54.77    | 69.11
EBD (ours), approx. F  | 51.65    | 68.28
Spectral, exact F      | 47.70    | 56.13
Spectral, approx. F    | 44.40    | 56.70
SiftFlow, exact F      | 32.44    | 47.45
SiftFlow, approx. F    | 32.19    | 47.97
Homography, exact F    | 27.40    | 39.95
Stereo, exact F        | 26.84    | 34.89
References

[1] E. D. Andersen and K. D. Andersen. The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm, pages 197–232. Kluwer Academic Publishers, 1999.
[2] H. H. Baker and T. Binford. Depth from edge and intensity based stereo. In Proc. Int. Joint Conf. on Artificial Intelligence, pages 631–636, 1981.
[3] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3), Aug. 2009.
[4] H. Bay, V. Ferrari, and L. V. Gool. Wide-baseline stereo matching with line segments. In CVPR, 2005.
[5] A. C. Berg, T. L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In CVPR, pages 26–33, 2005.
[6] N. Bissantz, L. Dumbgen, A. Munk, and B. Stratmann. Convergence analysis of generalized iteratively reweighted least squares algorithms on convex function spaces. SIAM J. on Optimization, 19(4):1828–1845, 2009.
[7] T. Brox, C. Bregler, and J. Malik. Large displacement optical flow. In CVPR, pages 41–48, 2009.
[8] O. Duchenne, F. Bach, I.-S. Kweon, and J. Ponce. A tensor-based algorithm for high-order graph matching. PAMI, 33(12):2383–2395, 2011.
[9] V. Ferrari, T. Tuytelaars, and L. V. Gool. Wide-baseline multiple-view correspondences. In CVPR, 2003.
[10] M. Fischler and R. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24(6):381–395, 1981.
[11] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski. Non-rigid dense correspondence with applications for image enhancement. ACM Trans. Graph., 30(4):70:1–70:9, 2011.
[12] T. Hassner, V. Mayzels, and L. Zelnik-Manor. On SIFTs and their scales. In CVPR, pages 1522–1528, 2012.
[13] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004.
[14] J. Kim, C. Liu, F. Sha, and K. Grauman. Deformable spatial pyramid matching for fast dense correspondences. In CVPR, pages 2307–2314, 2013.
[15] S. Lee, J. H. Lee, J. Lim, and I. H. Suh. Robust stereo matching using adaptive random walk with restart algorithm. Image and Vision Computing, 37:1–11, 2015.
[16] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In ICCV, volume 2, pages 1482–1489, 2005.
[17] W.-Y. Lin, S. Liu, Y. Matsushita, T.-T. Ng, and L.-F. Cheong. Smoothly varying affine stitching. In CVPR, pages 345–352, 2011.
[18] Y. Lipman. Bounded distortion mapping spaces for triangular meshes. ACM Trans. Graph., 31(4):108:1–108:13, 2012.
[19] Y. Lipman, S. Yagev, R. Poranne, D. W. Jacobs, and R. Basri. Feature matching with bounded distortion. ACM Trans. Graph., 33(3):26:1–26:14, 2014.
[20] C. Liu, J. Yuen, and A. Torralba. SIFT flow: dense correspondence across scenes and its applications. PAMI, 33(5):978–994, 2011.
[21] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. Freeman. SIFT flow: dense correspondence across different scenes. In ECCV, pages 28–42, 2008. people.csail.mit.edu/celiu/ECCV2008/.
[22] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[23] P. Pritchett and A. Zisserman. Wide baseline stereo matching. In ICCV, 1998.
[24] F. Schaffalitzky and A. Zisserman. Viewpoint invariant texture matching and wide baseline stereo. In ICCV, 2001.
[25] C. Strecha, T. Tuytelaars, and L. V. Gool. Dense matching of multiple wide-baseline views. In ICCV, 2003.
[26] C. Strecha, W. von Hansen, L. V. Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In CVPR, pages 1–8, 2008.
[27] E. Tola, V. Lepetit, and P. Fua. Daisy: an efficient dense descriptor applied to wide-baseline stereo. PAMI, 32(5):815–830, 2010.
[28] E. Trulls, I. Kokkinos, A. Sanfeliu, and F. Moreno-Noguer. Dense segmentation-aware descriptors. In CVPR, 2013.
[29] T. Tuytelaars and L. J. V. Gool. Wide baseline stereo matching based on local, affinely invariant regions. In BMVC, 2000.
[30] A. Vedaldi and B. Fulkerson. VLFeat: vision software. www.vlfeat.org.
[31] J. Xiao and M. Shah. Two-frame wide baseline matching. In ICCV, 2003.