## Declarations

Funding: This research was supported in part by an unrestricted start-up fund from the Herff College of Engineering and a graduate assistantship from the Department of Electrical and Computer Engineering, The University of Memphis.

Conflicts of interest: None.

Availability of data and material: Public benchmark datasets

Code availability: Available upon request

## 1 Introduction

The 3D structure of an object or a scene can be estimated from two or more views by determining dense correspondences among those views. A disparity map, comprised of dense pixel-level correspondences between the images in a stereo pair, can be estimated using stereo matching algorithms. Stereo disparity estimation has wide-ranging applications such as robot navigation [desouza2002vision], aerial data analysis [svensk2017evaluation], image sequence analysis [huang2008binocular], and 3D surface reconstruction [remondino2008turning]. The presence of large untextured regions (homogeneous intensity), occluding objects, and uneven intensity distributions in a stereo image pair introduces significant challenges in estimating disparity maps.

Stereo matching methods fall into two broad categories: window-based and energy-based algorithms [scharstein2002taxonomy]. Energy-based algorithms [gu2008local] are global methods with a cost function defined over the entire image extent [yang2008stereo]. Window-based, or local, methods use a finite support window to define the cost function and are suitable for real-time applications. However, regions with homogeneous texture and occlusions reduce the accuracy of disparities estimated using local methods.

In general, stereo disparity estimation procedures include an initial cost calculation step, a cost aggregation step, an optimal disparity estimation step, and a disparity refinement step. Commonly used disparity cost measures include normalized cross-correlation (NCC) [roma2002comparative], the sum of squared differences (SSD) [marghany20113d], the sum of absolute differences (SAD) [parvathysurvey], gradient-based measures [de2011stereo], feature-based measures [wang2014feature], the rank transform (RT) [gac2009high], and the census transform (CT) [mei2011building]. Cost aggregation minimizes matching uncertainties and improves the accuracy of the estimates [fang2012accelerating]. During cost aggregation, the initial disparity costs within the support region of each pixel are aggregated [hamid2020stereo], for example using a low-pass filter with a fixed kernel size or a variable support window (VSW) [tombari2008classification], adaptive support weights (ASW) [yoon2006adaptive], or a cross-based support window [zhang2009cross]. Using edge-preserving filters such as the bilateral filter (BF) [yoon2006adaptive; zhu2015edge] or the guided image filter (GIF) [zhu2016edge] for cost aggregation provides a significant improvement of the final disparity estimates over the initial ones. An optimal disparity map is then obtained from the aggregated cost volume using local optimization such as winner-take-all (WTA) [chang2018real] or global optimization procedures such as dynamic programming (DP) [arranz2012multiresolution], graph cuts (GC) [wang2013effective], intrinsic curves [tomasi1998stereo], and belief propagation [liang2011hardware].
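As a minimal illustration of the first and third stages above, the sketch below computes a SAD cost volume and extracts disparities with WTA. The function names and the synthetic example are ours, not the paper's; cost aggregation and refinement are omitted for brevity.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Initial cost step: absolute-difference cost of assigning each disparity
    label d, with the left image as reference (pixel (y, x) in the left image
    is compared against (y, x - d) in the right image)."""
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf)   # inf marks invalid (out-of-image) matches
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    return cost

def wta_disparity(cost):
    """Optimal-disparity step: winner-take-all picks the minimal-cost label per pixel."""
    return np.argmin(cost, axis=2)
```

On a synthetic pair where the left image is the right image shifted by two columns, WTA recovers a constant disparity of 2 in the valid region.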

Establishing dense correspondences among the geometric coordinates of multiple views is an ill-posed problem due to scene occlusion. In addition, large scene regions with smooth texture and illumination differences between the views are sources of difficulty in disparity estimation. A successful strategy for improving the accuracy of disparity estimates is to exploit the spatial dependencies of the scene characteristics as well as of the disparity estimates themselves. Among the probabilistic inference formulations for estimating dense geometric correspondences [sun2003stereo], Markov random field (MRF) approaches have been successful in modeling spatial geometric dependencies [kim2015error]. In brief, the unknown true disparities among multiple views of a scene are modeled as random variables on the pixel lattice (a random field), and the dependence among the random variables is assumed to follow the Markov property. An MRF thus represents the joint distribution of the random field using conditional distributions of each of the random variables. In learning-based MRF models, the parameters of the MRF potential functions are learned either separately from training data [lan2006efficient; roth2009fields] or along with the unknown states of the random variables [zhang2005parameter]. One limitation of MRF models is that the neighborhood system used for enforcing spatial dependencies needs to be maximal. Further, the chosen dependency structure is enforced uniformly for all random variables; therefore, pairwise or small cliques are most commonly used in MRF models. While learning methods are available for optimizing MRF parameters and neighborhood structure, they are generally limited to specific tasks.

In this paper, we present a new factor graph-based probabilistic graphical model (the FGS algorithm) for disparity estimation that addresses the aforementioned MRF limitations. Specifically, our model allows a larger neighborhood system and a spatially variable neighborhood structure dependent on the local scene characteristics. The proposed factor graph framework can also be used for solving general-purpose optimization problems from its posterior distributions. Further, we present strategies for reducing the computational cost and accelerating the convergence of factor graph messages, namely using a priori disparity distributions with smaller support and factor node potential functions that significantly reduce the marginalization calculations. The a priori disparity probabilities were estimated using a previously developed disparity cost calculation framework [Shabanian2021Hybrid]. We demonstrate the performance of the proposed probabilistic factor graph model through extensive experiments on the Middlebury benchmark stereo datasets [scharstein2003high; scharstein2007learning; hirschmuller2007evaluation; scharstein2014high] and compare it with other state-of-the-art disparity estimation algorithms using the Middlebury evaluation dataset version 3.0 [scharstein2002taxonomy].

## 2 Probabilistic Factor Graph Model for Disparity Estimation

For notational convenience, the pixel coordinates of images of size $M \times N$ are referenced using a linear index $i$. Rectified stereo image pairs were corrected for illumination differences using a homomorphic filter [oppenheim2004frequency]. In brief, each image is modeled as an interaction between an illumination component and a reflectance component, $I(x, y) = L(x, y) \cdot R(x, y)$. Assuming that the illumination varies gradually over the imaging area, the illumination variation is subtracted from each image in the log domain using a high-pass filter. For each rectified and illumination-corrected stereo pair, disparity was calculated with respect to the left image (reference image).
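A rough sketch of this homomorphic correction follows. The Gaussian low-pass as the illumination estimator and the `sigma` value are our choices for illustration; the paper derives its high-pass filter from an averaging filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_correct(img, sigma=8.0):
    """Homomorphic illumination correction: model img = illumination * reflectance,
    take logs so the product becomes a sum, estimate the slowly varying
    illumination with a low-pass (Gaussian) filter, and subtract it, i.e. apply
    a high-pass filter in the log domain."""
    log_img = np.log(img + 1e-8)               # offset avoids log(0)
    illumination = gaussian_filter(log_img, sigma)
    return np.exp(log_img - illumination)      # reflectance estimate
```

Applied to a textured image multiplied by a left-to-right illumination ramp, the corrected image's column means are markedly flatter than the input's.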

### 2.1 Graph Structure and Message Passing for Approximate Inference

Figure 1 shows a schematic of the factor graph model designed for optimal estimation of disparities at each image pixel location. The bipartite graph is comprised of a set of variable nodes $V$ and a set of factor nodes $F$. The variable nodes represent the disparity labels assigned to each pixel. Evidence factor nodes provide the prior degree of belief, or evidence, for assigning possible disparity labels at each pixel location. Dependency factor nodes model the spatial dependencies among the disparity labels assigned to neighboring pixels.

A random variable $x_i$ is assigned to each variable node to represent the disparity label at the $i$th pixel location. Each $i$th evidence factor node is connected one-to-one with the corresponding $i$th variable node to incorporate prior belief, or evidence, in determining the disparity label $x_i$. To represent the influence of each pixel location on its neighboring pixels, each variable node is connected to one or more dependency factor nodes chosen using localized intensity characteristics in the reference image, as described in Section 2.5.

Let $\mathcal{N}(a)$ represent the set of neighboring nodes connected with any given node $a$; let $\mathcal{N}(a) \setminus b$ represent the set of all neighboring nodes of $a$ excluding node $b$; let $X_f$ be the collection of random variables associated with a factor node $f$; and let $X_f \setminus x_i$ be the collection of random variables in $X_f$ except $x_i$. An evidence potential function $\phi_i(x_i)$ associated with each evidence factor node is defined as a function of the random variable of its neighboring variable node. Similarly, the potential function $\psi_f(X_f)$ at the $f$th dependency factor node is defined as a function of the random variables associated with its neighboring nodes. Therefore, the factor graph represents the joint distribution of the disparity labels assigned to each of the pixel locations as

$$p(x_1, \ldots, x_n) = \frac{1}{Z} \prod_{i} \phi_i(x_i) \prod_{f} \psi_f(X_f) \qquad (1)$$

where $Z$ is the partition function. The probability of assigning the various disparity labels to each pixel $i$ can be obtained by marginalizing equation (1) with respect to all variables except $x_i$:

$$p(x_i) = \sum_{X \setminus x_i} p(x_1, \ldots, x_n) \qquad (2)$$

This provides a sum-product formulation [pearl1982reverend; kschischang2001factor] for determining likely disparity labels for each of the pixel locations based on a priori disparity information and the spatial dependency characteristics of disparities in a stereo image pair.

For an approximate and efficient computation of the marginal beliefs, or probabilities, in equation (2) using loopy belief propagation, the local information available at each node is shared with its neighboring nodes as variable-to-dependency-factor messages and factor-to-variable messages until convergence [barber2012bayesian; murphy2013loopy]. Each outgoing message from a node is defined as a function of the incoming messages at that node as follows [barber2012bayesian]:

$$\mu_{x_i \to f}(x_i) = \prod_{g \in \mathcal{N}(x_i) \setminus f} \mu_{g \to x_i}(x_i) \qquad (3)$$

$$\mu_{f \to x_i}(x_i) = \sum_{X_f \setminus x_i} \psi_f(X_f) \prod_{x_j \in \mathcal{N}(f) \setminus x_i} \mu_{x_j \to f}(x_j) \qquad (4)$$
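As a concrete check of these message definitions, the sketch below runs sum-product on a minimal two-variable graph (one evidence factor per variable, one pairwise dependency factor). The potentials and label count are illustrative values of ours; on this loop-free graph the resulting beliefs are exact marginals.

```python
import numpy as np

# Two disparity variables with K = 3 labels, one evidence factor on each
# variable, and a single pairwise dependency factor psi(x1, x2).
K = 3
phi1 = np.array([0.7, 0.2, 0.1])   # evidence factor for x1 (illustrative)
phi2 = np.array([0.3, 0.4, 0.3])   # evidence factor for x2
labels = np.arange(K)
psi = np.exp(-np.abs(np.subtract.outer(labels, labels)))  # smoothness potential

# Eq. (3): variable-to-factor message = product of the other incoming factor
# messages; here only the evidence message remains.
m_x1_to_psi = phi1
m_x2_to_psi = phi2

# Eq. (4): factor-to-variable message = marginalize the potential times the
# incoming messages from the factor's other neighbors.
m_psi_to_x1 = psi @ m_x2_to_psi
m_psi_to_x2 = psi.T @ m_x1_to_psi

# Eq. (2): marginal belief = normalized product of all incoming factor messages.
b1 = phi1 * m_psi_to_x1
b1 /= b1.sum()
b2 = phi2 * m_psi_to_x2
b2 /= b2.sum()
```

On larger graphs with loops, the same updates are simply iterated until convergence (loopy belief propagation), at the cost of exactness.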

### 2.2 Probabilistic Model

For approximate inference on disparity label assignment, the message structures for loopy belief propagation in equations (3) and (4) and the relevant potential functions were defined based on the posterior probability of disparity label assignment,

$$p(x_1, \ldots, x_n \mid F) \propto p(x_1, \ldots, x_n) \prod_{f} p(f \mid X_f)$$

where each likelihood term $p(f \mid X_f)$ is a function of the states of all variable nodes neighboring the factor node $f$. In our proposed method, the most relevant and compact spatial dependencies are defined independently at each pixel using local image characteristics, as described in Section 2.5. Therefore, the joint likelihood term can be factored across the dependency factor nodes associated with each pixel, and the posterior probability of each disparity label can be updated as

$$p(x_i \mid F) \propto p(x_i) \prod_{f \in \mathcal{N}(x_i)} p(f \mid x_i) \qquad (5)$$

It can be observed that the posterior probability of $x_i$ without conditioning on a dependency factor node $f$,

$$p(x_i \mid F \setminus f) \propto p(x_i) \prod_{g \in \mathcal{N}(x_i) \setminus f} p(g \mid x_i) \qquad (6)$$

resembles the structure of the variable-to-dependency-factor message in equation (3). This further suggests that the individual likelihood terms $p(f \mid x_i)$ form the dependency-factor-to-variable message structure in equation (4).

Considering the individual likelihood terms in equation (6), and assuming that the random variables associated with a dependency factor node are independent, as in Moon & Gunther [moon2006multiple],

$$p(f \mid x_i) = \sum_{X_f \setminus x_i} p(f \mid X_f) \prod_{x_j \in \mathcal{N}(f) \setminus x_i} p(x_j) \qquad (7)$$

In the absence of any loops between two variable nodes $x_i$ and $x_j$ (i.e., when there is at most one common dependency factor node between any two variable nodes), the probability $p(x_j)$ can be interpreted as the posterior probability of $x_j$ conditioned on all dependency factor functions associated with the $j$th variable node. By excluding the factor function associated with the factor node $f$ from $p(x_j)$, the individual likelihood expression in equation (7) resembles the dependency-factor-to-variable message structure in equation (4).

### 2.3 Message Passing Implementation

Based on the message-passing structures in our factor graph model, the variable-to-dependency-factor messages in equation (3) were approximated as the posterior probability of $x_i$ conditioned on (satisfying) all of its neighboring factor nodes except the factor node to which the message is sent, as in equation (6). Similarly, the factor-to-variable messages in equation (4) were approximated as the likelihood of satisfying the spatial dependency among the states of all variable nodes associated with the factor node except $x_i$.

It can be observed that, for evidence factor nodes, the factor-to-variable messages in equation (4) simplify to the prior $p(x_i)$, since each evidence factor node is connected to a single variable node. This supplies fixed prior information about the state of the $i$th variable node while restricting messages from variable nodes to the dependency factor nodes only, as in equation (3). We estimated the a priori distribution from a disparity cost volume containing the cost of assigning each possible disparity label at each pixel location. Details of building the disparity cost volume are presented in Section 2.4. The approximate inference obtained using our factor graph model therefore provides updated disparities (an optimal surface within the cost volume) based on their posterior probabilities.

The potential function in equation (4) and its probabilistic representation in equation (7) at each spatial dependency factor node was assigned a value of 1.0 when the states of all neighboring variable nodes are identical, and 0.0 otherwise. This enforces spatial dependencies among neighboring pixels in the final disparity map and further reduces the number of marginalization operations in equations (4) and (7).
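The computational saving from this indicator potential can be made explicit: the sum over joint neighbor states collapses to an elementwise product of the incoming messages, since only all-equal configurations contribute. The sketch below (function name and normalization ours) shows the shortcut and can be checked against brute-force marginalization.

```python
import numpy as np

def equality_factor_to_var(incoming, exclude):
    """Factor-to-variable message (eq. 4) for an indicator potential that is
    1.0 when all neighboring variables share the same label and 0.0 otherwise.
    Only all-equal joint states survive the marginalization sum, so the
    message reduces to an elementwise product of the other incoming messages;
    no explicit loop over joint states is needed."""
    msg = np.ones_like(incoming[0])
    for j, m in enumerate(incoming):
        if j != exclude:
            msg = msg * m
    return msg / msg.sum()
```

For a factor with $k$ neighbors and $K$ labels this replaces an $O(K^k)$ marginalization with an $O(kK)$ product.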

For each stereo image pair, the evidence factor nodes were initialized with the a priori probabilities of their variable nodes as the evidence factor-to-variable messages. The initial messages from all other variable and dependency factor nodes were set to uniform probability vectors representing equally likely states. After the initial message passing from the evidence factor nodes, message exchange continues among all graph nodes until convergence. We utilized an $L_2$ measure of the change in messages between successive iterations,

$$\delta^{(t)} = \sum_{i, f} \left\| \mu^{(t)}_{f \to x_i} - \mu^{(t-1)}_{f \to x_i} \right\|_2 \qquad (8)$$

for assessing message convergence at message-passing iteration $t$.

### 2.4 Disparity Cost Volume and a Priori Disparity Distribution

For any given stereo image pair, let $C(i, d)$ be a cost volume representing the cost of assigning a disparity label $d$ to pixel location $i$. The final disparity map for a given stereo pair is thus an optimal surface within the cost volume $C$. Fig. 2 shows a schematic representation of the algorithmic steps used for disparity cost volume calculation.

For computationally efficient disparity estimation, the reference image was segmented using an unsupervised texture segmentation method [jain1991unsupervised]. In brief, sharp image segment boundaries were derived using a Gabor filter bank, and the boundary responses were aggregated using $k$-means clustering to generate an image segmentation map. Within each segmented region, highly confident disparity estimates at several candidate locations were obtained using an eigen-based feature matching method [shi1994good]. Using these zonal/regional disparity distributions, the disparity cost at each location $i$ was calculated using a normalized cross-correlation measure in the frequency domain. Within each segmented region, only disparity labels spanning the candidate disparities observed within that region were considered, based on the distribution of the disparity labels within the region, as shown in Figure 2. This results in a sparse disparity cost volume and thus facilitates faster inference due to the reduced marginalization limits in equation (2). A detailed description of the initial cost volume computation, as part of a hybrid cross-correlation and scene segmentation (HCS) algorithm, is available elsewhere [Shabanian2021Hybrid].

The a priori probability of assigning a disparity label $d$ to the $i$th pixel location in the reference image was estimated from the cost volume as

$$p(x_i = d) = \frac{\exp\left(-C(i, d)\right)}{\sum_{d'} \exp\left(-C(i, d')\right)} \qquad (9)$$
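A sketch of this cost-to-prior conversion follows; the softmax of negative cost is one plausible reading of the normalization in equation (9), and the use of `np.inf` to mark labels outside a segment's disparity range is our convention for the sparse cost volume.

```python
import numpy as np

def prior_from_cost(cost):
    """A priori disparity distribution per pixel from a sparse cost volume
    (last axis = disparity labels): lower cost maps to higher probability via
    a softmax of the negative cost. Labels outside the segment's disparity
    range are marked np.inf and receive zero probability, which shrinks the
    support of the prior and the marginalization limits."""
    shifted = cost - np.min(cost, axis=-1, keepdims=True)   # numerical stability
    w = np.exp(-shifted)
    w[~np.isfinite(cost)] = 0.0                             # excluded labels
    return w / w.sum(axis=-1, keepdims=True)
```

The zeroed labels then drop out of every downstream message product and sum, which is the source of the speed-up described above.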

### 2.5 Determining Variable Nodes Associated with Each Dependency Factor Node

We utilized edge-preserving filter kernels, namely the guided image filter (GIF) [he2010guided] and the bilateral filter (BF) [tomasi1998bilateral], to determine the neighboring variable nodes with the highest influence on the true disparity at each pixel location $i$. The neighborhood dependency information from these kernels was used to define a non-symmetric, irregular, and higher-order neighborhood system, based on the fact that objects at various depths in the scene may exhibit a disparity boundary along the object boundary. Let $\mathbf{p}_i$ represent the 2D coordinates of the $i$th pixel. Highly dependent neighbors of location $i$ were selected using a percentile cut-off on the kernel coefficients. The coordinates of these highly dependent neighbors were used to identify the neighboring variable nodes of each dependency factor node.

In brief, guided image filtering (GIF) is an edge-preserving smoothing algorithm. At each pixel location in the reference image $I$, a guided filter kernel with smoothness parameter $\epsilon$ was estimated as

$$W_{ij}(I) = \frac{1}{|\omega|^2} \sum_{k : (i, j) \in \omega_k} \left( 1 + \frac{(I_i - \mu_k)(I_j - \mu_k)}{\sigma_k^2 + \epsilon} \right) \qquad (10)$$

where $\omega_k$ is a window of size $|\omega|$ used for estimating the local illumination characteristics, namely the mean illumination $\mu_k$ and variance $\sigma_k^2$ at location $k$.

The bilateral filter (BF) [tomasi1998bilateral] is an edge-preserving non-linear Gaussian filter with coefficients defined as a function of spatial and intensity similarities, estimated respectively using a localized domain kernel and a range kernel. The bilateral filter kernel coefficient at a location $j$ within a window centered at pixel $i$ is given as

$$W_{ij} = \exp\left( -\frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2}{2 \sigma_d^2} \right) \exp\left( -\frac{(I_i - I_j)^2}{2 \sigma_r^2} \right) \qquad (11)$$

where the first exponential term represents the domain kernel as a function of pixel distance with respect to the center pixel, and the second term represents the range kernel as a function of regional image intensity with respect to that of the center pixel. The parameters $\sigma_d$ and $\sigma_r$ control the extent of influence neighboring pixels have on the domain and range kernels, respectively.
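A minimal sketch of the bilateral-kernel neighbor selection follows. The function names, boundary assumption, and the step-edge test values are ours; the paper's kernel size, $\sigma$ values, and percentile are given in Section 3.1.

```python
import numpy as np

def bilateral_kernel(img, cy, cx, radius, sigma_d, sigma_r):
    """Bilateral filter coefficients (eq. 11) in a (2*radius+1)^2 window
    centered at (cy, cx): a spatial (domain) Gaussian times an intensity
    (range) Gaussian. Assumes the window lies fully inside the image."""
    patch = img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    domain = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma_d ** 2))
    rng = np.exp(-((patch - img[cy, cx]) ** 2) / (2 * sigma_r ** 2))
    k = domain * rng
    return k / k.sum()

def dependent_neighbors(kernel, pct):
    """Window coordinates whose coefficients reach the pct-th percentile;
    these pixels become the variable nodes attached to this pixel's
    dependency factor node."""
    return np.argwhere(kernel >= np.percentile(kernel, pct))
```

Because the range kernel suppresses pixels across an intensity edge, the selected neighbors stay on the same side of an object boundary, which is exactly the disparity-boundary behavior the neighborhood system is meant to capture.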

### 2.6 Disparity Estimates

Upon message-passing convergence, an approximate estimate of the posterior probability of each disparity label assignment is available at each variable node, as given in equation (5). A maximum a posteriori disparity estimate was determined at each pixel location by selecting the disparity label with the maximum posterior probability.

### 2.7 Post-processing the Disparity Maps

Occluded pixels were identified based on a lack of consistency between the disparity maps estimated using the left-right versus the right-left ordering of each stereo pair [cochran19923]. Each occluded pixel was assigned the disparity estimate of its nearest non-occluded pixel within the same scanline (row). Further, a weighted median filter was used to minimize spurious disparity assignments in the occluded regions [brownrigg1984weighted].
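A simplified sketch of this left-right consistency check and scanline fill is shown below; the `tol` parameter, the integer-disparity assumption, and the function names are ours, and the weighted median step is omitted.

```python
import numpy as np

def lr_occlusions(disp_l, disp_r, tol=1):
    """Mark pixel (y, x) occluded when the left-to-right disparity does not
    agree (within tol) with the right-to-left disparity at the matched pixel,
    or when its match falls outside the image."""
    h, w = disp_l.shape
    occ = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(disp_l[y, x])          # matched column in the right image
            if 0 <= xr < w and abs(int(disp_l[y, x]) - int(disp_r[y, xr])) <= tol:
                occ[y, x] = False
    return occ

def fill_occlusions(disp, occ):
    """Give each occluded pixel the disparity of its nearest non-occluded
    pixel on the same scanline (row)."""
    out = disp.copy()
    for y in range(disp.shape[0]):
        valid = np.flatnonzero(~occ[y])
        if valid.size == 0:
            continue
        for x in np.flatnonzero(occ[y]):
            out[y, x] = disp[y, valid[np.argmin(np.abs(valid - x))]]
    return out
```
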

## 3 Experimental Results and Discussion

Algorithmic steps of the proposed factor graph method are presented in Algorithm 1. We evaluated the proposed method using stereo images from the Middlebury benchmark stereo datasets [scharstein2003high; scharstein2007learning; hirschmuller2007evaluation; scharstein2014high]. We present a detailed evaluation of the proposed FGS algorithm, followed by a comprehensive comparison of its performance with other state-of-the-art disparity estimation algorithms using the Middlebury evaluation dataset version 3.0 [scharstein2002taxonomy].

### 3.1 FGS Parameters and FGS Implementation

The FGS algorithm was implemented in MATLAB 2018b, and its performance was evaluated on an Intel Xeon E3-1271 v3 (3.6 GHz) workstation.

For illumination correction, high-pass filters were obtained from a low-pass averaging filter. For unsupervised texture segmentation of the reference image, Gabor filters were designed [jain1991unsupervised] with orientations spanning the full angular range in fixed steps and wavelengths increasing up to the magnitude of the hypotenuse of the input image. The $k$-means clustering algorithm was initialized with multiple replicates and run for a fixed maximum number of iterations. For the cost volume calculations, fixed-size templates were used. For identifying the variable nodes connected to each dependency factor node, a bilateral filter kernel with fixed domain and range kernel parameters and a coefficient percentile cut-off was used. We observed that the smallest kernel sizes, along with a higher percentile cut-off, identified fewer but highly dependent neighboring nodes. Therefore, the computational cost of message passing was significantly reduced, with fewer but highly reliable neighboring variable nodes connected to each dependency factor node.

### 3.2 Performance Metrics

For performance evaluation, we used common metrics for assessing the accuracy of estimated disparity maps, namely disparity error maps, the peak signal-to-noise ratio (PSNR), and the average absolute error (Avg. err, in pixels). Disparity error maps were computed as the location-wise difference between the estimated disparity $\hat{d}(i)$ and its ground truth $d_{gt}(i)$, as $e(i) = |\hat{d}(i) - d_{gt}(i)|$. PSNR provides a measure of similarity between an estimated disparity map of $M \times N$ pixels and the ground-truth disparity map as follows:

$$\mathrm{PSNR} = 10 \log_{10} \left( \frac{d_{max}^2}{\frac{1}{MN} \sum_{i} \left( \hat{d}(i) - d_{gt}(i) \right)^2} \right) \qquad (12)$$

where $d_{max}$ is the peak disparity value.

A thresholded average disparity error metric with a disparity threshold of $t$ pixels was defined as

$$\mathrm{Bad}(t) = \frac{100}{MN} \sum_{i} \mathbb{1}\left[ \left| \hat{d}(i) - d_{gt}(i) \right| > t \right] \qquad (13)$$

Average disparity errors were assessed at a disparity threshold of 2.0 pixels (Bad2.0) and as the unthresholded average absolute error (Avg. err).
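The three metrics can be sketched as follows; the choice of `peak = 255` for PSNR is our assumption and may differ from the paper's peak disparity value.

```python
import numpy as np

def disparity_metrics(est, gt, t=2.0, peak=255.0):
    """Avg. absolute error, PSNR (eq. 12), and thresholded bad-pixel rate
    (eq. 13) between an estimated and a ground-truth disparity map."""
    err = np.abs(est - gt)
    mse = np.mean((est - gt) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    bad = 100.0 * np.mean(err > t)      # % of pixels with error above t pixels
    return err.mean(), psnr, bad
```
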

In the majority of the stereo pairs tested, the FGS algorithm converged between 25 and 30 iterations based on the convergence measure in equation (8).

### 3.3 Filter Selection for Identifying FGS Variable Node Neighbors

The edge-preserving filters (Sec 2.5) with the highest accuracy and computational speed were selected for identifying neighboring variable nodes for each of the dependency factor nodes in the FGS algorithm. Figures 2(c) and 2(e) show the disparity maps for the “Teddy” stereo pair estimated using guided image filters and bilateral filters respectively.

Quantitative performance measures for the edge-preserving filters are presented in Table 1. The accuracy of the FGS algorithm with guided filters (based on the Avg. err, PSNR, and Bad2.0 metrics) was slightly better than with bilateral filters. However, because the guided filters resulted in a larger number of neighboring variable nodes, the run time of the FGS algorithm was higher with guided filters than with bilateral filters. Therefore, for further evaluation of the FGS algorithm, bilateral filtering was chosen for identifying neighboring variable nodes in the FGS factor graph.

| Algorithm | Avg. err | PSNR (dB) | Bad 2.0 (%) |
|---|---|---|---|
| Initial disparity using HCS | 5.18 | 26.33 | 24.21 |
| FGS with guided filtering | 2.59 | 32.04 | 14.12 |
| FGS with bilateral filtering | 2.60 | 32.02 | 14.17 |

### 3.4 Detailed Assessment of the FGS Algorithm using Selected Stereo Pairs

For a detailed quantitative and qualitative assessment of the FGS algorithm, we utilized stereo image pairs with differing texture, illumination, and exposure characteristics from the Middlebury 2003 [scharstein2003high], 2005 [scharstein2007learning], 2006 [hirschmuller2007evaluation], and 2014 [scharstein2014high] stereo datasets. The stereo pairs selected for assessment were the Teddy and Cones pairs (2003), the Dolls pair (2005), the Rocks1 pair (2006), and the Motorcycle pair (2014). Assessment results based on the estimated disparity maps with and without post-processing are presented in the following sections.

#### 3.4.1 Assessment Results Without Post-processing

For the selected stereo pairs, disparity maps estimated by the HCS algorithm without cost aggregation and post-processing, disparity maps estimated by the FGS algorithm without any post-processing and the disparity errors for the FGS algorithm are shown in Fig. 4. A summary of the assessment metrics without post-processing the disparity maps is presented in Table 2. It can be observed that the FGS algorithm significantly improved over the initial disparity estimates generated by the HCS algorithm without cost aggregation. In addition, the FGS algorithm is able to identify disparities in occluded regions.

| Images | Initial: Avg.err | Initial: PSNR (dB) | Initial: Bad2.0 (%) | FGS: Avg.err | FGS: PSNR (dB) | FGS: Bad2.0 (%) |
|---|---|---|---|---|---|---|
| Teddy | 5.18 | 26.33 | 24.21 | 2.60 | 32.02 | 14.17 |
| Cones | 5.88 | 25.20 | 25.69 | 2.78 | 32.35 | 17.60 |
| Dolls | 7.51 | 23.51 | 29.90 | 3.20 | 30.75 | 22.47 |
| Rocks1 | 6.35 | 24.42 | 21.97 | 3.29 | 30.15 | 13.58 |
| Motorcycle | 5.96 | 26.07 | 33.42 | 3.81 | 29.45 | 20.04 |

#### 3.4.2 Assessment Results with Post-processing

Disparity estimates with post-processing and disparity error maps for selected stereo pairs are shown in Fig. 5. A summary of the assessment metrics after post-processing the disparity estimates is presented in Table 3.

| Images | FGS-based: Avg.err | FGS-based: PSNR (dB) | FGS-based: Bad2.0 (%) | Final: Avg.err | Final: PSNR (dB) | Final: Bad2.0 (%) |
|---|---|---|---|---|---|---|
| Teddy | 2.60 | 32.02 | 14.17 | 1.90 | 33.25 | 9.55 |
| Cones | 2.78 | 32.35 | 17.60 | 2.32 | 33.33 | 15.11 |
| Dolls | 3.20 | 30.75 | 22.47 | 2.04 | 32.52 | 18.98 |
| Rocks1 | 3.29 | 30.15 | 13.58 | 2.78 | 31.10 | 12.06 |
| Motorcycle | 3.81 | 29.45 | 20.04 | 3.36 | 30.02 | 18.87 |

### 3.5 Performance of the FGS Algorithm vs State-of-the-art Algorithms

To the best of our knowledge, the proposed FGS algorithm is the first disparity estimation technique based on factor graphs. The performance of the non-learning-based FGS algorithm was compared with recent state-of-the-art learning-based and non-learning-based disparity estimation algorithms. All disparity estimation algorithms were evaluated using stereo pairs from the current Middlebury evaluation dataset version 3.0. In addition to the performance metrics of Avg. err, PSNR and Bad 2.0 %, a weighted average measure was calculated for each performance metric based on the level of difficulty of estimating the disparity map for each stereo pair in the Middlebury evaluation dataset version 3.0. A summary of all the assessment metrics and a weighted performance measure for the FGS algorithm is presented in Table 4.

| Images (Weight) | Avg.err | PSNR (dB) | Bad2.0 (%) |
|---|---|---|---|
| Adiron (8) | 3.63 | 30.10 | 17.96 |
| ArtL (8) | 4.81 | 30.70 | 44.81 |
| Jadepl (8) | 26.72 | 18.33 | 47.01 |
| Motor (8) | 3.36 | 30.02 | 18.87 |
| MotorE (8) | 5.90 | 27.11 | 28.48 |
| Piano (8) | 4.23 | 30.30 | 24.12 |
| PianoL (4) | 7.10 | 27.11 | 35.21 |
| Pipes (8) | 7.71 | 25.71 | 39.25 |
| Playrm (4) | 5.30 | 27.10 | 30.11 |
| Playt (4) | 3.19 | 31.98 | 28.10 |
| PlaytP (8) | 3.05 | 31.95 | 24.36 |
| Recyc (8) | 3.71 | 31.59 | 32.54 |
| Shelvs (4) | 5.30 | 27.81 | 34.07 |
| Teddy (8) | 1.90 | 33.25 | 9.55 |
| Vintage (4) | 10.22 | 29.11 | 54.10 |
| Weighted Average | 6.45 | 28.85 | 30.22 |

### 3.6 FGS vs Non-learning based Disparity Estimation Methods

The performance of the FGS algorithm was compared with 12 recently developed non-learning-based disparity estimation procedures, including 7 local methods, 3 global methods, and one fusion method using both local and global approaches. The local methods were: the weighted adaptive cross-region-based guided image filtering method (ACR-GIF-OW) [kong2021local], a real-time stereo matching algorithm with an FPGA architecture (MANE) [vazquez2021real], an adaptive support-weight approach in a pyramid structure (DAWA-F) [navarro2019semi], the encoding-based approach PPEP-GF [fu2021pixel], absolute difference (AD) and census transform-based stereo matching with guided image filtering (ADSR-GIF) [kong-ADSR-GIF], sum of absolute differences (SAD) based stereo matching aggregated with an adaptive weighted bilateral filter (SM-AWP) [razak2019effect], and statistical maximum a posteriori estimation of MRF disparity labels (SRM) [okae2020robust]. The global disparity estimation procedures chosen for comparison were: a binocular narrow-baseline stereo matching procedure using a max-tree data structure (MTS) [brandt2020efficient] and its improvement (MTS-2) [brandt2020mtstereo], and an accelerated multi-block matching (MBM) algorithm on GPU [chang2018real]. The FASW approach [wu2019stereo] uses both local and global strategies and is based on a census transform with adaptive support weights.

| Images (Weight) | FGS | HCS [Shabanian2021Hybrid] | ACR-GIF-OW [kong2021local] | MANE [vazquez2021real] | SRM [okae2020robust] | MTS [brandt2020efficient] | DAWA-F [navarro2019semi] | PPEP-GF [fu2021pixel] | MTS-2 [brandt2020mtstereo] | ADSR-GIF [kong-ADSR-GIF] | FASW [wu2019stereo] | SM-AWP [razak2019effect] | MBM [chang2018real] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Adiron (8) | 3.63 | 3.98 | 4.53 | 11.60 | 2.88 | 19.00 | 4.37 | 8.12 | 21.50 | 6.40 | 2.86 | 10.5 | 4.39 |
| ArtL (8) | 4.81 | 4.31 | 8.41 | 22.90 | 5.96 | 22.50 | 13.00 | 14.80 | 22.40 | 9.00 | 8.03 | 19.9 | 8.80 |
| Jadepl (8) | 26.72 | 27.22 | 22.10 | 45.90 | 24.70 | 123.00 | 44.40 | 46.90 | 108.00 | 26.10 | 34.70 | 62.7 | 37.60 |
| Motor (8) | 3.36 | 3.91 | 7.93 | 12.40 | 4.46 | 17.50 | 7.29 | 7.99 | 15.30 | 8.11 | 5.44 | 11.00 | 5.76 |
| MotorE (8) | 5.90 | 6.12 | 7.88 | 12.30 | 4.43 | 20.70 | 7.04 | 7.62 | 30.60 | 11.40 | 4.43 | 12.5 | 5.56 |
| Piano (8) | 4.23 | 5.23 | 6.36 | 15.10 | 5.73 | 13.00 | 3.27 | 9.76 | 10.00 | 6.15 | 5.54 | 9.08 | 6.67 |
| PianoL (4) | 7.10 | 7.44 | 27.70 | 24.70 | 7.99 | 32.00 | 21.70 | 18.80 | 26.20 | 34.00 | 10.80 | 29.7 | 12.40 |
| Pipes (8) | 7.71 | 7.52 | 11.00 | 22.30 | 6.96 | 29.40 | 15.90 | 17.30 | 24.60 | 14.90 | 10.80 | 21.11 | 11.80 |
| Playrm (4) | 5.30 | 6.16 | 8.51 | 31.10 | 10.70 | 26.90 | 8.86 | 19.30 | 23.30 | 10.50 | 7.31 | 20.7 | 12.90 |
| Playt (4) | 3.19 | 3.57 | 16.10 | 39.90 | 4.48 | 27.40 | 6.39 | 45.50 | 12.70 | 16.70 | 14.50 | 9.5 | 12.00 |
| PlaytP (8) | 3.05 | 3.17 | 6.60 | 17.30 | 3.32 | 12.00 | 3.34 | 24.50 | 9.29 | 10.00 | 3.32 | 9.75 | 6.37 |
| Recyc (8) | 3.71 | 3.62 | 4.26 | 9.67 | 2.92 | 17.50 | 3.89 | 7.64 | 11.00 | 4.20 | 2.84 | 7.18 | 3.67 |
| Shelvs (4) | 5.30 | 5.51 | 13.10 | 22.50 | 7.41 | 12.10 | 11.10 | 17.20 | 11.80 | 9.97 | 8.70 | 11.4 | 11.80 |
| Teddy (8) | 1.90 | 2.10 | 2.86 | 12.50 | 1.92 | 8.11 | 3.39 | 7.11 | 6.67 | 3.35 | 2.83 | 9.44 | 3.74 |
| Vintage (4) | 10.22 | 10.62 | 7.77 | 51.00 | 15.80 | 27.20 | 6.48 | 23.40 | 33.80 | 10.90 | 6.79 | 16.8 | 14.10 |
| Weighted Average | 6.45 | 6.71 | 9.48 | 21.33 | 6.92 | 27.64 | 10.65 | 17.11 | 25.06 | 11.25 | 8.39 | 17.38 | 10.08 |

The Avg. err metric and its weighted average for the FGS algorithm and the non-learning-based disparity estimation procedures are presented in Table 5. The image weight in Table 5 represents the level of difficulty in estimating disparity for a given stereo pair. In general, the proposed FGS algorithm provided accuracy comparable to or better than the other non-learning-based methods across the stereo pairs. The FGS method provided the lowest weighted average of Avg. err (6.45 pixels) among all the non-learning-based methods. Further, the FGS method provided the lowest estimation error (Avg. err) for 3 out of the 10 difficult stereo pairs (image weight = 8) and for 4 out of the 5 moderately difficult stereo pairs (image weight = 4).

### 3.7 FGS vs Learning-based Disparity Estimation Methods

The performance of the FGS algorithm was also compared with the following recently developed learning-based disparity estimation procedures: a method based on a fusion of convolutional neural networks (CNN) and conditional random fields (LBPS) [knobelreiter2020belief], a fully convolutional densely connected neural network (FC-DCNN) [hirner2021fc], a deep-learning-assisted method that produces an initial estimate for the Semi-Global Block Matching method (SGBMP) [hu2020deep], a deep learning-based self-guided cost aggregation method (DSGCA) [park2018deep], a stereo matching algorithm with a pretrained network and global energy minimization (SIGMRF) [nahar2017learned], a multi-dimensional convolutional neural network (MSMD-ROB) [lu2018cascaded], and a CNN-based network using ResNeXt (CBMBNet) [chen2018crop].
| Images (Weight) | FGS | FC-DCNN [hirner2021fc] | LBPS [knobelreiter2020belief] | SGBMP [hu2020deep] | DSGCA [park2018deep] | SIGMRF [nahar2017learned] | MSMD-ROB [lu2018cascaded] | CBMBNet [chen2018crop] |
|---|---|---|---|---|---|---|---|---|
| Adiron (8) | 3.63 | 2.87 | 1.92 | 6.50 | 7.68 | 3.07 | 2.85 | 1.63 |
| ArtL (8) | 4.81 | 6.30 | 7.02 | 9.33 | 21.70 | 7.83 | 8.58 | 8.89 |
| Jadepl (8) | 26.72 | 32.70 | 24.9 | 56.80 | 45.00 | 32.80 | 45.10 | 27.70 |
| Motor (8) | 3.36 | 4.65 | 4.12 | 4.04 | 10.60 | 5.83 | 5.12 | 4.19 |
| MotorE (8) | 5.90 | 4.58 | 4.09 | 5.43 | 10.40 | 5.92 | 4.99 | 4.12 |
| Piano (8) | 4.23 | 4.45 | 3.02 | 4.77 | 11.50 | 5.38 | 3.75 | 3.22 |
| PianoL (4) | 7.10 | 9.25 | 3.63 | 14.80 | 24.50 | 8.13 | 7.18 | 5.40 |
| Pipes (8) | 7.71 | 10.00 | 7.37 | 7.85 | 19.90 | 11.30 | 11.00 | 8.03 |
| Playrm (4) | 5.30 | 6.15 | 4.83 | 7.62 | 24.60 | 5.66 | 6.86 | 5.96 |
| Playt (4) | 3.19 | 9.60 | 3.20 | 10.60 | 34.50 | 13.40 | 9.74 | 5.69 |
| PlaytP (8) | 3.05 | 3.26 | 3.39 | 3.78 | 14.80 | 4.26 | 9.32 | 3.89 |
| Recyc (8) | 3.71 | 2.67 | 1.71 | 3.19 | 7.56 | 3.07 | 2.74 | 1.7 |
| Shelvs (4) | 5.30 | 10.00 | 3.19 | 5.00 | 17.30 | 8.57 | 3.56 | 7.70 |
| Teddy (8) | 1.90 | 2.17 | 2.33 | 3.35 | 12.20 | 2.76 | 3.02 | 4.55 |
| Vintage (4) | 10.22 | 9.34 | 3.18 | 30.00 | 43.80 | 15.50 | 9.59 | 5.71 |
| Weighted Average | 6.45 | 7.67 | 5.51 | 10.38 | 18.70 | 8.63 | 9.19 | 6.65 |

The Avg. err metric for the FGS algorithm and the learning-based disparity estimation procedures is presented in Table 6. When compared to the learning-based procedures, the FGS method provided the second-lowest weighted average of Avg. err (6.45 pixels), the lowest Avg. err for 4 out of the 10 difficult stereo pairs, and the lowest Avg. err for 1 out of the 5 moderately difficult stereo pairs.

## 4 Conclusions

We have presented a new probabilistic factor-graph-based disparity estimation algorithm that improves the accuracy of disparity estimates in stereo image pairs with varying texture and illumination characteristics by enforcing spatial dependencies among scene characteristics as well as among disparity estimates. In contrast to MRF models, our factor graph formulation allows a larger as well as a spatially variable neighborhood system dependent only on the local scene characteristics. Our factor graph formulation can be used for obtaining maximum a posteriori estimates from models or optimization problems with complex dependency structures among hidden variables. The strategies of using a priori distributions with smaller support and localized spatial dependencies are useful for improving the speed of message convergence in factor graph-based inference problems. We rigorously evaluated the performance of the new factor-graph-based disparity estimation algorithm using the Middlebury benchmark stereo datasets [scharstein2003high; scharstein2007learning; hirschmuller2007evaluation; scharstein2014high]. Our experimental results indicate that the factor-graph algorithm provides disparity estimates with higher accuracy than recent non-learning-based, and competitive with learning-based, disparity estimation algorithms on the Middlebury evaluation dataset version 3.0 [scharstein2002taxonomy]. The factor-graph algorithm may also be useful for other dense estimation problems such as optical flow estimation.
