ANSAC: Adaptive Non-minimal Sample and Consensus

09/27/2017, by Victor Fragoso, et al.

While RANSAC-based methods are robust to incorrect image correspondences (outliers), their hypothesis generators are not robust to correct image correspondences (inliers) with positional error (noise). This slows down their convergence because hypotheses drawn from a minimal set of noisy inliers can deviate significantly from the optimal model. This work addresses this problem by introducing ANSAC, a RANSAC-based estimator that accounts for noise by adaptively using more than the minimal number of correspondences required to generate a hypothesis. ANSAC estimates the inlier ratio (the fraction of correct correspondences) of several ranked subsets of candidate correspondences and generates hypotheses from them. Its hypothesis-generation mechanism prioritizes the use of subsets with high inlier ratio to generate high-quality hypotheses. ANSAC uses an early termination criterion that keeps track of the inlier ratio history and terminates when it has not changed significantly for a period of time. The experiments show that ANSAC finds good homography and fundamental matrix estimates in a few iterations, consistently outperforming state-of-the-art methods.


1 Introduction

The robust estimation of geometric models from pairs of images (e.g., homography, essential, and fundamental matrices) is critical to several computer vision applications such as structure from motion, image-based localization, and panorama stitching [Agarwal et al.(2011)Agarwal, Furukawa, Snavely, Simon, Curless, Seitz, and Szeliski, Brown et al.(2005)Brown, Szeliski, and Winder, Crandall et al.(2013)Crandall, Owens, Snavely, and Huttenlocher, Frahm et al.(2010)Frahm, Fite-Georgel, Gallup, Johnson, Raguram, Wu, Jen, Dunn, Clipp, Lazebnik, et al., Li et al.(2012)Li, Snavely, Huttenlocher, and Fua, Lou et al.(2012)Lou, Snavely, and Gehrke, Sattler et al.(2011)Sattler, Leibe, and Kobbelt, Zhang and Kosecka(2006)]. Several applications use robust estimators to compute these transformations from candidate image correspondences that contain outliers (i.e., incorrect image correspondences). While RANSAC-based estimators are robust to outliers, they lack a hypothesis generator that can produce an accurate model from a minimal set of noisy inliers (i.e., correct image correspondences with positional error). Since existing hypothesis generators are sensitive to the noise in the data, the generated hypotheses can deviate significantly from the optimal model, delaying the estimator's convergence.

Although there exist hypothesis-generation improvements [Brahmachari and Sarkar(2013), Chum and Matas(2005), Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk, Fragoso and Turk(2013), Goshen and Shimshoni(2008), Tordoff and Murray(2005)] that accelerate the convergence of RANSAC-based estimation, no previous work addresses the noise in the correspondences at the hypothesis-generation stage. The effect of noisy inliers on a hypothesis generated from a minimal sample (a set with the minimum number of correspondences needed to generate a hypothesis) can be reduced by using larger (non-minimal) samples. This is because more inliers provide additional constraints that help the estimator produce a more accurate model. However, using a non-minimal sample to generate a hypothesis increases the chances of including an outlier, which can lead to wrong hypotheses. For this reason, existing methods [Chum et al.(2003)Chum, Matas, and Kittler, Lebeda et al.(2012)Lebeda, Matas, and Chum, Raguram et al.(2008)Raguram, Frahm, and Pollefeys] that tackle the effect of noisy inliers operate as a refinement stage rather than operating directly in the hypothesis-generation phase. While these methods reduce the effect of noise on hypotheses, they can increase the computational overhead, since they run an additional inner refinement stage in the hypothesis-and-test loop of a RANSAC-based estimator.

To reduce the noise effect on hypotheses produced in the hypothesis-generation phase, this work presents a novel Adaptive Non-Minimal Sample and Consensus (ANSAC) robust estimator. Unlike previous work [Brahmachari and Sarkar(2013), Chum et al.(2003)Chum, Matas, and Kittler, Chum and Matas(2005), Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk, Fragoso and Turk(2013), Goshen and Shimshoni(2008), Lebeda et al.(2012)Lebeda, Matas, and Chum, Raguram et al.(2008)Raguram, Frahm, and Pollefeys, Tordoff and Murray(2005)], ANSAC produces hypotheses from non-minimal sample sets in the hypothesis generation stage of a RANSAC-based estimator. The homography and fundamental matrix estimation experiments show that ANSAC returns a good estimate consistently and quickly.

1.1 Related Work

The key element of existing methods that accelerate the estimation process is their ability to assess the correctness of every candidate correspondence. Different approaches compute this correctness by means of probabilities, rankings, and heuristics. The probability-based methods (e.g., Guided-MLESAC [Tordoff and Murray(2005)], BEEM [Goshen and Shimshoni(2008)], BLOGS [Brahmachari and Sarkar(2013)], and EVSAC [Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk]) aim to calculate the probability of correctness of each candidate correspondence using the matching scores. They then use the computed probabilities to form a discrete distribution over the candidate correspondences, from which they sample to generate hypotheses.

Instead of randomly sampling candidate correspondences from a probability distribution, PROSAC [Chum and Matas(2005)] creates an initial subset with candidate correspondences ranked by quality. PROSAC generates a hypothesis by randomly sampling a minimal number of candidate correspondences from this subset. PROSAC iteratively expands the subset by including lower-ranked candidate correspondences and keeps sampling a minimal number of candidate correspondences from it until convergence.

While the previous methods accelerate the estimation process, they still use minimal samples, which are sensitive to noise. To alleviate this issue, Chum et al. [Chum et al.(2003)Chum, Matas, and Kittler, Lebeda et al.(2012)Lebeda, Matas, and Chum] proposed LO-RANSAC, an estimator that adds a refinement stage after finding a good hypothesis. This refinement stage is another RANSAC process in which non-minimal samples are drawn from the inliers supporting the new best hypothesis.

In contrast to the previous methods, ANSAC uses subsets with a high predicted inlier ratio to draw non-minimal samples and generate hypotheses that account for noise at the hypothesis-generation phase. Thus, ANSAC does not require an inner-RANSAC refinement process as in LO-RANSAC, which can be expensive [Lebeda et al.(2012)Lebeda, Matas, and Chum]. Consequently, it avoids the extra computational overhead that an inner refinement process incurs. The key element in ANSAC is to predict the inlier ratio of the subsets to determine the size of a non-minimal sample.

2 ANSAC

The goal of ANSAC is to increase the likelihood of finding an accurate hypothesis in the early iterations by generating hypotheses from outlier-free non-minimal samples. To this end, ANSAC prioritizes the use of subsets with high estimated inlier ratios to generate hypotheses. The inlier ratio can be considered the probability of selecting a correct correspondence from a set at random, since it is the fraction of correct correspondences in the set. As such, ANSAC uses the inlier ratio to assess the odds of producing an outlier-free non-minimal sample, since it measures the purity of a set. The estimation pipeline of ANSAC (illustrated in Fig. 1) has two main stages: initialization and a hypothesis-and-test loop.

Figure 1:

Overview of the ANSAC pipeline. First, ANSAC ranks the correspondences by a correctness quality value. It then builds subsets and estimates an inlier-ratio prior for every subset from these quality values. The hypothesis-and-test loop draws hypotheses from the subsets and tests them to produce a final estimate. ANSAC uses a Kalman filter to refine the inlier-ratio estimate of the current subset from which it generates hypotheses; the filter combines the inlier-ratio prior with the ratio calculated by testing hypotheses. The non-minimal sampler uses the refined inlier ratio to draw a non-minimal or minimal sample. ANSAC stops when a termination criterion is met.

2.1 Initialization

The main goal of the initialization is to compute subsets of correspondences and their estimated inlier-ratio priors. This stage has three main components: correspondence ranking, subset computation, and inlier-ratio-prior calculation. Given the candidate image correspondences, the initialization stage outputs the ranked subsets $\{S_j\}_{j=m}^{N}$ and their initial inlier-ratio estimates/priors $\{\varepsilon'_j\}_{j=m}^{N}$, where $N$ is the total number of candidate correspondences and $m$ is the minimal number of correspondences needed to generate a hypothesis.

2.1.1 Ranking Correspondences and Subset Generation

The main goal of this stage is to rank the correspondences that are likely to be correct at the top. This is because ANSAC aims to generate hypotheses from outlier-free samples as early as possible. Thus, ANSAC first calculates a correctness or quality value for every correspondence (e.g., using the SIFT ratio [Lowe(2004)] or Meta-Recognition Rayleigh (MRR) [Fragoso and Turk(2013)]) and ranks the correspondences by it. Then, ANSAC builds an initial subset with the minimal number of best-ranked correspondences needed to draw a hypothesis. This initial subset is the base for generating the remaining subsets: ANSAC generates each subsequent subset by adding the next-ranked correspondence to the previously generated subset, growing the most recently generated subset until all the correspondences are used. The initialization stage thus computes the $j$-th subset efficiently by including all the top-$j$ ranked correspondences. Mathematically, the $j$-th subset is $S_j = \{c_1, \ldots, c_j\}$, where $c_i$ is the $i$-th ranked candidate correspondence.
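The ranking and nested-subset construction described above can be sketched in a few lines. This is a minimal Python illustration (the paper's implementation is in C++); the function and variable names are ours, not the paper's:

```python
def build_subsets(correspondences, quality, m):
    """Rank correspondences by a correctness quality value (descending)
    and return the nested subsets S_m, S_{m+1}, ..., S_N.

    Each subset S_j contains the top-j ranked correspondences, so each
    subset is simply a prefix of the ranked list.
    """
    ranked = sorted(correspondences, key=quality, reverse=True)
    return [ranked[:j] for j in range(m, len(ranked) + 1)]
```

Because every subset is a prefix of the ranked list, an implementation only needs to store the ranked list and the current subset size, not the subsets themselves.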

2.1.2 Computing the Inlier Ratio Prior

The last step in the initialization stage is the calculation of the inlier ratio prior for every subset. These inlier ratio priors are key to initialize a Kalman filter that will refine the inlier ratio estimate of every subset over time; see Sec. 2.2.1.

The inlier ratio $\varepsilon_j$ of $S_j$ is defined as the probability of drawing a correct correspondence from it, i.e., $\varepsilon_j = p(y = 1)$, where $y \in \{0, 1\}$ is a random variable indicating an incorrect or correct correspondence, respectively. This probability can be computed as follows: $\varepsilon_j = \frac{1}{|S_j|} \sum_{c \in S_j} \mathbb{1}\left[c \text{ is correct}\right]$, where $\mathbb{1}\left[\cdot\right]$ is the indicator function, which returns 1 when its argument is true and 0 otherwise. The ideal indicator function is an oracle that identifies correct correspondences, which is challenging to obtain. Nevertheless, it can be approximated by mapping the correctness or ranking quality values to the $[0, 1]$ range, for instance, using MRR [Fragoso and Turk(2013)] probabilities or mapping Lowe's ratio to the $[0, 1]$ range with a radial basis function. The initialization stage computes the inlier ratio prior $\varepsilon'_j$ for every subset $S_j$ using this approximation to the indicator function.
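With quality values standing in for the indicator function, the prior of each nested subset is the running mean of the qualities over the ranked prefix. A minimal Python sketch, assuming the quality values have already been mapped to $[0, 1]$:

```python
def inlier_ratio_priors(qualities):
    """Inlier-ratio prior for each nested subset: the running mean of the
    quality values, used as a soft stand-in for the indicator function."""
    priors, total = [], 0.0
    for j, q in enumerate(qualities, start=1):
        total += q
        priors.append(total / j)  # mean quality of the top-j correspondences
    return priors
```

Since the correspondences are ranked, the priors typically decrease as the subsets grow to include lower-quality correspondences.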

2.2 Hypothesis-and-test Loop

The goal of this stage is to robustly estimate a model from the correspondences as quickly as possible. To this end, this stage iteratively draws non-minimal samples, when possible, from the ranked subsets and estimates their inlier ratios. Similar to PROSAC [Chum and Matas(2005)], ANSAC progressively utilizes all the subsets according to their rank to generate hypotheses. However, unlike PROSAC, which generates hypotheses from minimal samples, ANSAC estimates the subsets' inlier ratios to decide the size of the non-minimal sample for hypothesis generation. Since ANSAC assumes that the subsets at the top of the ranking are less contaminated, it will likely use most of the correspondences in those subsets and generate hypotheses from non-minimal samples when possible. As ANSAC moves through the lower-ranked subsets, it will likely use minimal samples to generate hypotheses. Similar to PROSAC, ANSAC moves to a new subset once it has reached the maximum number of hypotheses $M_k$ it can generate from the current subset. However, unlike PROSAC, which uses a recursive formula to determine the maximum number of hypotheses to draw from a given subset, ANSAC uses the estimated inlier ratio $\varepsilon_k$ of the current subset to calculate $M_k$.

0:  Input: the candidate correspondences C
0:  Output: estimate H*
1:  {S_j}, {ε′_j} ← Initialize(C)   // See Section 2.1.
2:  Initialize: k ← 1, ε ← 0, ε_obs ← 0, ε_k ← ε′_1, H* ← ∅   // See Section 2.1.
3:  for iteration = 1 to T do
4:      sample ← GenerateSample(S_k, ε_k)   // Each subset generates at most M_k hypotheses according to Eq. (8).
5:      H ← GenerateHypothesis(sample)
6:      ε̂, ε̂_obs ← TestHypothesis(H, S_k)
7:      if ε̂ > ε then   // The new hypothesis has more inliers than the current best hypothesis.
8:          H* ← H, ε ← ε̂, ε_obs ← ε̂_obs
9:          M_k ← UpdateMaxIterations(ε_k)   // See Sec. 2.2.4.
10:      end if
11:      k ← UpdateSubset(iteration, M_k)   // k ← k + 1 when M_k hypotheses were generated, else k is unchanged.
12:      ε_k ← RefineInlierRatio(ε_k, ε_obs)
13:      if Terminate(ε) then
14:          terminate loop
15:      end if
16:  end for
Algorithm 1 ANSAC

Specifically, ANSAC operates as detailed in Alg. 1. Given the input subsets $\{S_j\}$ and their inlier-ratio priors $\{\varepsilon'_j\}$, ANSAC sets its current-subset index $k$ pointing to the first subset, i.e., $k = 1$. Subsequently, it generates a sample from the current subset in step 4. However, unlike existing estimators that always draw minimal samples, ANSAC uses the current inlier-ratio estimate $\varepsilon_k$ to determine the size of the sample; see Sec. 2.2.2. Given the (possibly non-minimal) sample, ANSAC generates a hypothesis in step 5. In step 6, the hypothesis tester identifies two sets of correspondences: 1) those among all the input correspondences that support the generated hypothesis, which yield its overall inlier ratio; and 2) those in the current subset, which yield the subset's observed inlier ratio. When the overall inlier ratio of the new hypothesis is larger than that of the current best hypothesis, ANSAC updates several variables (steps 7-10): the current best hypothesis, the overall inlier ratio $\varepsilon$, the "observed" inlier ratio $\varepsilon_{obs}$ (the fraction of correspondences from the current subset supporting the new best hypothesis), and the maximum number of hypotheses $M_k$ to draw from $S_k$. In step 11, ANSAC moves to a new subset by increasing $k$ if it has generated $M_k$ hypotheses from the current subset. Then, a Kalman filter refines the inlier-ratio estimate $\varepsilon_k$ of the current subset in step 12; see Sec. 2.2.1. Lastly, ANSAC checks whether the current best hypothesis satisfies a termination criterion and stops if it does (steps 13-15).

2.2.1 Estimating the Inlier Ratio with a Kalman Filter

The Kalman filter [Fox et al.(2003)Fox, Hightower, Liao, Schulz, and Borriello, Thrun et al.(2005)Thrun, Burgard, and Fox] iteratively refines the estimate of the inlier ratio $\varepsilon_k$ of the current subset $S_k$. It does so by combining the inlier-ratio prior $\varepsilon'_k$ and the observed inlier ratio $\varepsilon_{obs}$. This is a crucial component, since the inlier ratio determines the size of the sample and the progressive use of the subsets. To use this filter, ANSAC assumes that the inlier ratio varies linearly as a function of the ranked subsets and that it has Gaussian noise.

The prediction step of the Kalman filter estimates the inlier ratio at time $t$ given the inlier ratio at the previous time $t-1$ and the inlier ratio prior $\varepsilon'_k$. The filter uses the following linear inlier ratio "motion" model:

$\bar{\varepsilon}_t = w_1\, \varepsilon_{t-1} + w_2\, \varepsilon'_k$   (1)
$\bar{\sigma}_t = \sigma_{t-1} + \sigma_p$   (2)

where $\bar{\varepsilon}_t$ is the predicted inlier ratio, $\bar{\sigma}_t$ is the standard deviation of the predicted inlier ratio, $w_1$ and $w_2$ are weights that control the contribution of their respective inlier ratio terms, $\sigma_{t-1}$ is the standard deviation of the previous estimate $\varepsilon_{t-1}$, and $\sigma_p$ is a standard-deviation parameter that enforces stochastic diffusion. The weights $w_1$ and $w_2$ must satisfy $w_1 + w_2 = 1$ since the inlier ratios range between $[0, 1]$.

To initialize the filter, ANSAC sets $\varepsilon_0 = \varepsilon'_1$ in the first iteration. However, when ANSAC moves to a new subset, it sets $\varepsilon_{t-1}$ to the estimate of the previous subset. This is a good initial value since consecutive subsets differ by only one correspondence. ANSAC sets $w_2 = \alpha\,\delta(t)$, where $\delta(t)$ returns $1$ when a new current subset is used and $0$ otherwise, and the parameter $\alpha$ satisfies $0 < \alpha < 1$. This function forces the filter to use the inlier-ratio prior only when ANSAC uses a new subset. This is because the inlier ratio of a subset can be refined over multiple iterations, and the prior must contribute only once to the refinement.

The update step aims at refining the output of the prediction step. To this end, it incorporates the observed inlier ratio $\varepsilon_{obs}$, which is obtained by testing the current best hypothesis on the current subset. The update step performs the following computations:

$K_t = \dfrac{\bar{\sigma}_t}{\bar{\sigma}_t + \sigma_u}$   (3)
$\varepsilon_t = \bar{\varepsilon}_t + K_t\,(\varepsilon_{obs} - \bar{\varepsilon}_t)$   (4)
$\sigma_t = (1 - K_t)\,\bar{\sigma}_t$   (5)

where $\sigma_u$ is a parameter enforcing stochastic diffusion, $K_t$ is the Kalman gain, $\varepsilon_t$ is the refined inlier ratio of the current subset at time $t$, and $\sigma_t$ is its standard deviation. At the end of this stage, ANSAC updates the inlier ratio of the current subset: $\varepsilon_k \leftarrow \varepsilon_t$.
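The prediction and update steps above can be sketched as a pair of functions. This is a minimal Python sketch (the paper's implementation is in C++) with illustrative parameter names; the concrete equation forms follow the standard scalar Kalman filter matching the prose:

```python
def kalman_predict(eps_prev, sigma_prev, prior, w2, sigma_p):
    """Prediction step: blend the previous inlier-ratio estimate with the
    subset's prior (w1 + w2 = 1) and diffuse the uncertainty."""
    w1 = 1.0 - w2
    eps_pred = w1 * eps_prev + w2 * prior
    sigma_pred = sigma_prev + sigma_p  # stochastic diffusion
    return eps_pred, sigma_pred

def kalman_update(eps_pred, sigma_pred, eps_obs, sigma_u):
    """Update step: fold in the inlier ratio observed by testing the
    current best hypothesis on the current subset."""
    gain = sigma_pred / (sigma_pred + sigma_u)   # Kalman gain
    eps = eps_pred + gain * (eps_obs - eps_pred)
    sigma = (1.0 - gain) * sigma_pred
    return eps, sigma
```

A large gain pulls the estimate toward the observation; as the uncertainty shrinks, the filter increasingly trusts its own estimate.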

2.2.2 Adaptive Non-minimal Sampling

Unlike many RANSAC variants (e.g., [Brahmachari and Sarkar(2013), Chum and Matas(2005), Fischler and Bolles(1981), Fragoso and Turk(2013), Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk, Goshen and Shimshoni(2008), Raguram et al.(2008)Raguram, Frahm, and Pollefeys, Raguram and Frahm(2011)]) which use minimal samples to generate hypotheses, ANSAC computes the size $s$ of a sample as follows:

$s = m + \left\lceil g(\varepsilon_k)\,(M - m) \right\rceil$   (6)

where $m$ is the minimum sample size needed to generate a hypothesis; $M$ is the maximum sample size, which is upper-bounded by the size of the current subset $S_k$; and

$g(\varepsilon_k) = \dfrac{1}{1 + \exp\left(-k_s\,(\varepsilon_k - \varepsilon_0)\right)}$   (7)

is a logistic curve with parameters $k_s$ (steepness) and $\varepsilon_0$ (inflection point). The value constraints for these parameters are $k_s > 0$ and $0 < \varepsilon_0 < 1$.

Eq. (6) selects a sample size by linearly interpolating between $m$ and $M$. The sample size depends directly on the likelihood of not including outliers, which is measured by $g$ (see Eq. (7)). When the likelihood is high ($g(\varepsilon_k) \approx 1$), ANSAC produces a non-minimal sample; otherwise, it produces minimal samples. ANSAC evaluates this function at every iteration, since $\varepsilon_k$ evolves over time.
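The adaptive sample-size rule can be sketched as follows; a minimal Python illustration of the logistic interpolation described above, with illustrative parameter names (the rounding convention is ours):

```python
import math

def logistic(eps, steepness, inflection):
    # The curve g(eps): steepness > 0 and inflection in (0, 1).
    return 1.0 / (1.0 + math.exp(-steepness * (eps - inflection)))

def sample_size(eps, m, M, steepness, inflection):
    # Interpolate between the minimal size m and the maximum size M,
    # weighted by the likelihood g(eps) of drawing an outlier-free sample.
    g = logistic(eps, steepness, inflection)
    return m + int(round(g * (M - m)))
```

With a steep curve, the rule behaves almost like a switch: subsets estimated to be nearly pure yield samples close to $M$, while contaminated subsets fall back to the minimal size $m$.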

2.2.3 Maximum Number of Hypotheses per Subset

ANSAC generates a maximum number of hypotheses $M_k$ from the current subset $S_k$. ANSAC considers the worst-case scenario, in which the inlier ratio of the current subset is low and thus the sample size is minimal. Thus, it calculates $M_k$ using the following formula [Fischler and Bolles(1981), Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm]:

$M_k = \dfrac{\log(1 - p)}{\log\left(1 - \varepsilon_k^m\right)}$   (8)

where $p$ is the probability of producing a good hypothesis, and $\varepsilon_k^m$ is the probability that a minimal sample is outlier-free. ANSAC adapts $M_k$ as a function of $\varepsilon_k$ at every iteration. Eq. (8) provides the least number of hypotheses drawn from minimal samples that an estimator needs to test to achieve a confidence $p$ [Fischler and Bolles(1981), Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm]. The progressive use of the subsets in ANSAC depends directly on the estimated inlier ratio of each subset, unlike PROSAC [Chum and Matas(2005)], which uses a growth function that depends on the number of samples drawn from each subset.
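Eq. (8) is the standard RANSAC iteration count; a small Python helper makes its behavior concrete (the edge-case handling for degenerate inlier ratios is ours):

```python
import math

def max_hypotheses(eps, m, confidence=0.99):
    """Eq. (8): the number of minimal-sample hypotheses needed so that at
    least one of them is outlier-free with the given confidence."""
    good = eps ** m  # probability that a minimal sample of size m is outlier-free
    if good <= 0.0:
        return float('inf')  # no inliers: no finite budget suffices
    if good >= 1.0:
        return 1             # all inliers: one sample is enough
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - good))
```

Because the count grows rapidly as the inlier ratio drops, high-ranked (purer) subsets receive small budgets and are exhausted quickly, which drives the progressive move to larger subsets.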

2.2.4 Early Termination

The classical termination criterion of a RANSAC-based estimator calculates the maximum number of iterations adaptively using Eq. (8), since one iteration generates and tests a hypothesis. Even if an estimator finds a good hypothesis in the early iterations, it still iterates until it reaches the maximum number of iterations calculated with Eq. (8). In this case, the iterations after finding a good hypothesis are unnecessary.

To alleviate this problem, many approaches aim to detect whether a hypothesis is likely to be bad [Capel(2005), Chum and Matas(2002), Matas and Chum(2005)] in order to skip the iteration and avoid testing the hypothesis. Unlike these methods, ANSAC aims to detect whether the estimator is unlikely to find a hypothesis significantly better than the current best one, and terminates the loop. To this end, ANSAC uses the history of the overall inlier ratio throughout the iterations. Since ANSAC uses ranked correspondences to generate hypotheses, the overall inlier ratio is expected to increase rapidly in the early iterations and slowly, or not at all, in the later iterations. The proposed termination criterion detects when the overall inlier ratio has reached this plateau.

The proposed early termination criterion analyzes the latest $\Gamma$ increments $\delta_i = \varepsilon^{(i)} - \varepsilon^{(i-1)}$ of the overall inlier ratio, where $\varepsilon^{(i)}$ is the overall inlier ratio at the $i$-th iteration. ANSAC stops iterating if the average of these increments falls below a threshold $\gamma$. The value of $\Gamma$ is bounded by the overall maximum number of iterations $T$, which is calculated using $\varepsilon$ and Eq. (8). Thus, $\Gamma$ has to change adaptively as well. For this reason, ANSAC sets $\Gamma = \lceil \lambda T \rceil$, where $0 < \lambda < 1$.
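The plateau test above can be sketched as a small predicate; a minimal Python illustration with hypothetical names, assuming the per-iteration overall inlier ratios are collected in a list:

```python
def should_terminate(eps_history, window, threshold):
    """Stop when the average increment of the overall inlier ratio over the
    last `window` iterations falls below `threshold`."""
    if len(eps_history) < window + 1:
        return False  # not enough history to judge a plateau yet
    recent = eps_history[-(window + 1):]
    deltas = [recent[i + 1] - recent[i] for i in range(window)]
    return sum(deltas) / window < threshold
```

The window length plays the role of $\Gamma$ and would be recomputed whenever the maximum iteration count $T$ changes.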

3 Experiments

This section presents a series of experiments, each demonstrating the effectiveness and efficiency of the proposed components in ANSAC. We implemented ANSAC in C++ using the robust estimation API from the TheiaSfM library [Sweeney et al.(2015)Sweeney, Hollerer, and Turk] (source code available at http://vfragoso.com/). We used the implementations of Root-SIFT [Arandjelović and Zisserman(2012)] and CasHash [Cheng et al.(2014)Cheng, Leng, Wu, Cui, and Lu] in TheiaSfM to match features and compute symmetric correspondences: common image correspondences obtained by matching image one as a reference and image two as a query, and vice versa.

Datasets.

We used the Oxford affine-covariant-regions dataset [Mikolajczyk et al.(2005)Mikolajczyk, Tuytelaars, Schmid, Zisserman, Matas, Schaffalitzky, Kadir, and Van Gool] and the USAC homography image pairs [Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm] for homography estimation. In addition, we used the USAC fundamental-matrix image pairs and the Strecha dataset [Strecha et al.(2004)Strecha, Fransens, and Van Gool, Strecha et al.(2006)Strecha, Fransens, and Van Gool] for fundamental matrix estimation. The Oxford and Strecha datasets provide the information needed to calculate ground truth. To get ground truth for the USAC dataset, we used the same procedure as Raguram et al. [Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm], which estimates the models by running RANSAC for a large number of iterations, followed by a visual inspection of the image correspondences to ensure the models did not include outliers. See the supplemental material for a visualization of the image pairs from these datasets.

3.1 Kalman Filter for Inlier Ratio Estimation

Figure 2: (a) Accuracy of the subset-inlier-ratio estimation for Homography (top row) and Fundamental matrix (bottom row) models. The filter tends to provide a close estimation (gray curve) to the ground truth (black curve). (b) Accuracy of the estimation using a bad inlier-ratio prior. The filter is robust and tends to provide a close estimation in this scenario.

The goal of this experiment is twofold: 1) measure the subset-inlier-ratio estimation accuracy of the Kalman filter; and 2) evaluate the robustness of the filter given a poor inlier-ratio prior estimate. We used Lowe's "radius" as the correspondence quality, computed from the two Lowe's ratios obtained by matching image one to image two and vice versa. We fixed the stochastic-diffusion parameters and the prediction-step weights (see Eq. (1)), and removed the early termination criterion for this experiment.

Fig. 2 (a) shows the estimation accuracy results for homography and fundamental matrix estimation. It presents the ground truth inlier-ratio (black curve), the estimated inlier-ratio (gray curve), and the inlier ratio prior (dashed) computed from correspondence qualities (see Sec. 2.1.2) as a function of the sorted subsets. The estimate curve tends to be close to the ground truth even when the inlier-ratio priors are not accurate. This experiment confirms the efficacy of the filter to estimate the inlier-ratio of the subsets in ANSAC. To test the robustness of the filter to bad inlier-ratio priors, we replaced the inlier-ratio priors from the correctness quality values with synthetic estimates that deviate significantly from the ground truth. The results of this experiment are shown in Fig. 2 (b). The filter is able to “track” the ground truth for both models, even though it used priors that deviated significantly from the ground truth. See supplemental material for larger figures.

Figure 3: Convergence analysis for homography (top row) and fundamental matrix (bottom row) estimation. ANSAC tends to converge in the earliest iterations, while the competing methods (EVSAC, PROSAC, and RANSAC) require more iterations.

3.2 Convergence Analysis

The goal of this experiment is to analyze the benefits that the adaptive non-minimal sampler brings to the convergence of ANSAC. We did not use an early termination criterion for this experiment. The experiment was run with fixed Kalman-filter parameters, fixed logistic-curve parameters (see Eq. (7)), and a fixed maximum sample size (see Eq. (6)). The experiment considered the following competing estimators: RANSAC [Fischler and Bolles(1981)] (the baseline), EVSAC [Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk], and PROSAC [Chum and Matas(2005)]. We used the TheiaSfM implementations of the aforementioned estimators. The experiment used ANSAC with two different correspondence correctness quality measures: 1) Lowe's radius (ANSAC+RAD), introduced above; and 2) the widely used Lowe's ratio (ANSAC+LWR). Also, the experiment used correspondences ranked by Lowe's ratio for PROSAC, and calculated the correctness probabilities from descriptor distances for EVSAC. A trial in this experiment measured the estimated overall inlier ratio of every estimator as a function of the iterations in its hypothesis-and-test loop. The experiment performed multiple trials and fitted a curve to all the trials of every estimator in order to get a smooth summary.

The top and bottom rows in Fig. 3 show the results for homography and fundamental matrix estimation, respectively. The plots are sorted from left to right according to their ground-truth overall inlier ratio. The plots show that ANSAC tends to converge faster than the competing methods. ANSAC converges fastest when the inlier ratio is high; when the inlier ratio is low, ANSAC tends to converge faster than or comparably to PROSAC. This is because when the overall inlier ratio of an image pair is high, ANSAC uses non-minimal samples, which yields a faster convergence than that of the competing methods. On the other hand, when the overall inlier ratio is low, ANSAC tends to use minimal samples more often, which yields a performance comparable to that of PROSAC. See the supplemental material for more results on convergence.

3.3 Estimation of Homography and Fundamental Matrix

Figure 4: Box plots measuring (from left to right) wall-clock time, overall inlier ratio, iterations, and success rate of methods estimating homography (top two rows) and fundamental matrix (bottom two rows) models. The plots show that ANSAC and USAC are the fastest, in most cases taking less than 20 msec. Unlike USAC, ANSAC achieves an inlier ratio and success rate similar to those of RANSAC, PROSAC, and EVSAC for both model estimations.

The goal of this last experiment is to measure the speed (wall-clock time) with which an estimator converges, the number of iterations, and the success rate and overall inlier ratio of the estimated model. The experiment consisted of multiple trials of each estimator. We measured success rate as the ratio between the number of estimates that are close enough to the ground truth and the total number of trials. This experiment added USAC [Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm] to the competing methods. We used USAC's publicly available C++ source, and enabled PROSAC+SPRT [Matas and Chum(2005)]+LO-RANSAC [Chum et al.(2003)Chum, Matas, and Kittler]. We used ANSAC with Lowe's radius and enabled the termination criterion.

The top and bottom two rows in Fig. 4 present the results for homography and fundamental matrix estimation, respectively. The plots show that ANSAC and USAC were the fastest estimators. ANSAC consistently required less than 20 msec, while USAC presented more time variation in homography estimation (see the first row in Fig. 4). ANSAC computed estimates whose inlier ratios and success rates are comparable to those of RANSAC, EVSAC, and PROSAC. On the other hand, estimates computed with USAC presented a larger inlier-ratio variation. These experiments confirm that ANSAC can accelerate the robust estimation of models using an adaptive non-minimal sampler, while still achieving performance comparable to or better than that of the state-of-the-art estimators. See the supplemental material for larger plots and extended results on all the experiments.

4 Conclusion

We have presented ANSAC, an adaptive, non-minimal sample and consensus estimator. Unlike existing estimators, ANSAC adaptively determines the size of a non-minimal sample for generating a hypothesis based on the inlier ratio estimated with a Kalman filter. In contrast to LO-RANSAC methods, which use non-minimal samples in a refinement process, ANSAC uses them in the hypothesis-generation phase, avoiding the computational cost of an additional refinement phase. The homography and fundamental matrix estimation experiments demonstrate that ANSAC can converge in the early iterations and performs consistently better than or comparably to the state-of-the-art.

Acknowledgments.

This work was supported in part by NSF grants IIS-1657179, IIS-1321168, IIS-1619376, and IIS-1423676.

References

  • [Agarwal et al.(2011)Agarwal, Furukawa, Snavely, Simon, Curless, Seitz, and Szeliski] Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M Seitz, and Richard Szeliski. Building Rome in a day. Comm. of the ACM, 54(10):105–112, 2011.
  • [Arandjelović and Zisserman(2012)] Relja Arandjelović and Andrew Zisserman. Three things everyone should know to improve object retrieval. In Proc. of the IEEE CVPR, 2012.
  • [Brahmachari and Sarkar(2013)] Aveek Shankar Brahmachari and Santonu Sarkar. Hop-diffusion Monte Carlo for epipolar geometry estimation between very wide-baseline images. IEEE TPAMI, 35(3):755–762, 2013.
  • [Brown et al.(2005)Brown, Szeliski, and Winder] Matthew Brown, Richard Szeliski, and Simon Winder. Multi-image matching using multi-scale oriented patches. In Proc. of the IEEE CVPR, 2005.
  • [Capel(2005)] David P Capel. An Effective Bail-out Test for RANSAC Consensus Scoring. In Proc. of the BMVC, 2005.
  • [Cheng et al.(2014)Cheng, Leng, Wu, Cui, and Lu] Jian Cheng, Cong Leng, Jiaxiang Wu, Hainan Cui, and Hanqing Lu. Fast and accurate image matching with cascade hashing for 3D reconstruction. In Proc. of the IEEE CVPR, 2014.
  • [Chum and Matas(2002)] Ondřej Chum and Jiří Matas. Randomized RANSAC with T(d,d) test. In Proc. of the BMVC, 2002.
  • [Chum and Matas(2005)] Ondřej Chum and Jiří Matas. Matching with PROSAC-progressive sample consensus. In Proc. of the IEEE CVPR, 2005.
  • [Chum et al.(2003)Chum, Matas, and Kittler] Ondřej Chum, Jiří Matas, and Josef Kittler. Locally optimized RANSAC. In Pattern Recognition, pages 236–243. Springer, 2003.
  • [Crandall et al.(2013)Crandall, Owens, Snavely, and Huttenlocher] David J Crandall, Andrew Owens, Noah Snavely, and Daniel P Huttenlocher. SfM with MRFs: Discrete-continuous optimization for large-scale structure from motion. IEEE TPAMI, 35(12):2841–2853, 2013.
  • [Fischler and Bolles(1981)] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
  • [Fox et al.(2003)Fox, Hightower, Liao, Schulz, and Borriello] Dieter Fox, Jeffrey Hightower, Lin Liao, Dirk Schulz, and Gaetano Borriello. Bayesian filtering for location estimation. IEEE pervasive computing, (3):24–33, 2003.
  • [Fragoso and Turk(2013)] Victor Fragoso and Matthew Turk. SWIGS: A Swift Guided Sampling Method. In Proc. of the IEEE CVPR, June 2013.
  • [Fragoso et al.(2013)Fragoso, Sen, Rodriguez, and Turk] Victor Fragoso, Pradeep Sen, Sergio Rodriguez, and Matthew Turk. EVSAC: Accelerating Hypotheses Generation by Modeling Matching Scores with Extreme Value Theory. In Proc. of the IEEE ICCV, 2013.
  • [Frahm et al.(2010)Frahm, Fite-Georgel, Gallup, Johnson, Raguram, Wu, Jen, Dunn, Clipp, Lazebnik, et al.] Jan-Michael Frahm, Pierre Fite-Georgel, David Gallup, Tim Johnson, Rahul Raguram, Changchang Wu, Yi-Hung Jen, Enrique Dunn, Brian Clipp, Svetlana Lazebnik, et al. Building Rome on a cloudless day. In Proc. of the ECCV. Springer, 2010.
  • [Goshen and Shimshoni(2008)] Liran Goshen and Ilan Shimshoni. Balanced exploration and exploitation model search for efficient epipolar geometry estimation. IEEE TPAMI, 30(7):1230–1242, 2008.
  • [Lebeda et al.(2012)Lebeda, Matas, and Chum] Karel Lebeda, Jiri Matas, and Ondrej Chum. Fixing the locally optimized RANSAC. In Proc. of the BMVC, 2012.
  • [Li et al.(2012)Li, Snavely, Huttenlocher, and Fua] Yunpeng Li, Noah Snavely, Dan Huttenlocher, and Pascal Fua. Worldwide pose estimation using 3D point clouds. In Proc. of the ECCV. Springer, 2012.
  • [Lou et al.(2012)Lou, Snavely, and Gehrke] Yin Lou, Noah Snavely, and Johannes Gehrke. Matchminer: Efficient spanning structure mining in large image collections. In Proc. of the ECCV. Springer, 2012.
  • [Lowe(2004)] David G Lowe. Distinctive image features from scale-invariant keypoints. Intl. Journal of Computer Vision (IJCV), 60(2):91–110, 2004.
  • [Matas and Chum(2005)] Jiří Matas and Ondřej Chum. Randomized RANSAC with sequential probability ratio test. In Proc. of the IEEE ICCV, 2005.
  • [Mikolajczyk et al.(2005)Mikolajczyk, Tuytelaars, Schmid, Zisserman, Matas, Schaffalitzky, Kadir, and Van Gool] Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman, Jiří Matas, Frederik Schaffalitzky, Timor Kadir, and Luc Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 65(1-2):43–72, 2005.
  • [Raguram and Frahm(2011)] Rahul Raguram and Jan-Michael Frahm. RECON: Scale-adaptive robust estimation via residual consensus. In Proc. of the IEEE ICCV, 2011.
  • [Raguram et al.(2008)Raguram, Frahm, and Pollefeys] Rahul Raguram, Jan-Michael Frahm, and Marc Pollefeys. A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In Proc. of the ECCV. Springer, 2008.
  • [Raguram et al.(2013)Raguram, Chum, Pollefeys, Matas, and Frahm] Rahul Raguram, Ondřej Chum, Marc Pollefeys, Jiří Matas, and Jan-Michael Frahm. USAC: A universal framework for random sample consensus. IEEE TPAMI, 35(8):2022–2038, 2013.
  • [Sattler et al.(2011)Sattler, Leibe, and Kobbelt] Torsten Sattler, Bastian Leibe, and Leif Kobbelt. Fast image-based localization using direct 2D-to-3D matching. In Proc. of the IEEE ICCV, 2011.
  • [Strecha et al.(2004)Strecha, Fransens, and Van Gool] Christoph Strecha, Rik Fransens, and Luc Van Gool. Wide-baseline stereo from multiple views: a probabilistic account. In Proc. of the IEEE CVPR, 2004.
  • [Strecha et al.(2006)Strecha, Fransens, and Van Gool] Christoph Strecha, Rik Fransens, and Luc Van Gool. Combined depth and outlier estimation in multi-view stereo. In Proc. of the IEEE CVPR, 2006.
  • [Sweeney et al.(2015)Sweeney, Höllerer, and Turk] Christopher Sweeney, Tobias Höllerer, and Matthew Turk. Theia: A Fast and Scalable Structure-from-Motion Library. In Proc. of the ACM Conference on Multimedia, 2015.
  • [Thrun et al.(2005)Thrun, Burgard, and Fox] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
  • [Tordoff and Murray(2005)] Ben J Tordoff and David W Murray. Guided-MLESAC: Faster image transform estimation by using matching priors. IEEE TPAMI, 27(10):1523–1535, 2005.
  • [Zhang and Kosecka(2006)] Wei Zhang and Jana Kosecka. Image based localization in urban environments. In Proc. of the IEEE International Symposium on 3D Data Processing, Visualization, and Transmission, 2006.