## 1 Introduction

Precise surface measurement is increasingly a matter of fringe projection profilometry. However, the requirements concerning spatial resolution, accuracy, and especially measurement speed keep increasing. Whereas faster and better hardware components fulfill these requirements on the one hand, improved algorithms may enhance measurement systems without considerably higher costs.

High-speed, precise, contactless surface measurements that do not interrupt the production process are of special interest in industrial quality inspection and control. Measurement systems based on the fringe projection technique are usually limited: the projection and observation rate achievable with common hardware components is typically not higher than , and long projection sequences decrease the 3D measurement frequency.

Typically, the projection sequence consists of a number of phase-shifted sinusoidal images and an additional (e.g. Gray code [9]) sequence in order to ensure uniqueness of the phase values, the so-called phase unwrapping [13]. An overview of existing fringe projection methods is given by Zhang [14].

However, in order to realize real-time processing, additional code for phase unwrapping should be omitted. Methods such as the multi-frequency technique [6] are therefore not considered, because they require additional code. Two alternatives remain: either continuous phase tracking, as proposed e.g. by Herráez [4], or geometric phase unwrapping using additional geometric information, e.g. concerning the measurement volume (see [2]).

Herráez’ algorithm, however, has two main disadvantages. First, it has relatively low robustness for objects with sharp edges and hidden parts, which lead to phase jumps. Second, its calculation effort is quite high. Other interesting approaches to phase unwrapping using multiple projector or camera positions are presented by Ishiyama et al. [5] and Young et al. [12].

In this paper, a new method is described that realizes 3D point determination without additional code. This is achieved by phase unwrapping using double triangulation and evaluation of multiple 3D point candidates.

## 2 Measurement Principle

Using structured light for finding point correspondences leads to the principle of phasogrammetry, the combination of photogrammetry and fringe projection in closed mathematical form (see [10]). A projection unit projects a well-defined sequence of fringe images onto the measurement object, which is observed by one or more cameras. One sequence of projected fringe images usually consists of a Gray code sequence (see e.g. [9]), which realizes the uniqueness of the fringes, and a sequence of sinusoidal fringe patterns.

The observed fringe image sequences are processed into phase images for each measurement position and camera. After rotation of the fringe pattern by , the sequence may be projected and observed again, resulting in a second phase image for each camera. The phase values correspond to image coordinates in the projector image plane. The resulting 3D points are obtained by triangulation between the coordinates of a camera and the projector (CP-mode) or between corresponding points of two cameras (CC-mode). Triangulation can be regarded as a standard procedure in photogrammetry (see [7]).

Epipolar geometry is a well-known principle that is often used in photogrammetry when stereo systems are present, as described for example in [7]. It is characterized by an arrangement of two cameras observing almost the same object scene. A measurement object point, together with the projection centres of the two cameras, defines a plane in 3D space (see also Fig. 1). The images of this plane are corresponding epipolar lines. When the image point of an object point is selected in one camera image, the corresponding point in the other camera image must lie on the corresponding epipolar line. This restricts the search area in the task of finding corresponding points.
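The epipolar restriction can be illustrated with a small sketch. This is a generic illustration, not code from the paper: the fundamental matrix `F` would come from the sensor calibration; here a toy rectified-stereo `F` (corresponding points share the same image row) is assumed.

```python
import numpy as np

def epipolar_line(F, x1):
    """Epipolar line l' = F @ x1 in the second image for a point x1
    (homogeneous pixel coordinates) in the first image."""
    l = F @ x1
    return l / np.hypot(l[0], l[1])  # normalize so point-line distance is in pixels

def epipolar_distance(F, x1, x2):
    """Distance of a candidate point x2 (second image) from the epipolar
    line of x1; small values indicate a possible correspondence."""
    return abs(epipolar_line(F, x1) @ x2)

# Toy rectified stereo setup: F encodes y2 - y1 = 0.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x1 = np.array([100.0, 50.0, 1.0])
x2 = np.array([140.0, 50.0, 1.0])    # same row -> lies on the epipolar line
print(epipolar_distance(F, x1, x2))  # 0.0
```

In the real sensor the search is further restricted to a segment of this line by the measurement volume depth, as described in Sec. 3.1.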

## 3 The New Approach

### 3.1 Finding Correspondences and 3D Point Calculation

First, the concept of the new method is briefly outlined. It is based on the double application of the triangulation procedure in order to calculate the resulting 3D points. The projected and observed sequence of sinusoidal fringe images is used to calculate one image of wrapped phase values for each camera. The principle of epipolar geometry is applied in order to support finding phase correspondences.

The idea is the following. For each pixel of the first camera, all potential sub-pixel-exact corresponding candidates in the image of the second camera are found and converted to potential 3D point candidates by triangulation (CC-mode). The obtained list is compared to the candidates found by application of the CP-mode, using one camera and the projector for triangulation (see Fig. 2).

Let a fringe projection based stereo sensor be given. The kind of measurement objects, the arrangement of the optical components, and the adjustment of the lenses (range of sharp observation) restrict the measurement volume (MV) in its depth. This restriction reduces the search area for the corresponding phase value to a segment of the epipolar line in the image of the second camera when using the CC-mode. All positions on this segment with a matching phase value are candidates for point correspondence. The number of such candidates depends on the measurement volume width and depth, the number of projected fringes (or, equivalently, the fringe period length), and the main triangulation angle (or, equivalently, the length of the baseline between the projection centers of the two cameras and the minimal object distance). More details are given in [2]. For illustration see Fig. 2.

The relationships between the parameters can be expressed by formula (1). The measurement volume width corresponds to the length of the area illuminated by the projected fringes at the minimal object distance. Because the number of candidates must be a natural number, the value obtained by (1) must be rounded up.
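The original formula (1) is not reproduced here, so the following sketch only illustrates the rounding-up step under a simplified geometric assumption: the lateral shift induced by the depth range is approximated as depth times the tangent of the triangulation angle, and divided by one fringe period. All names and numeric values are hypothetical.

```python
import math

def num_candidates(depth_range, triangulation_angle_deg, fringe_period):
    """Rough candidate count: lateral shift induced by the measurement
    volume depth, divided by one fringe period, rounded up to a natural
    number as required for formula (1)."""
    shift = depth_range * math.tan(math.radians(triangulation_angle_deg))
    return math.ceil(shift / fringe_period)

# Hypothetical values: 100 mm depth range, 20 deg angle, 8 mm fringe period.
print(num_candidates(depth_range=100.0, triangulation_angle_deg=20.0, fringe_period=8.0))  # 5
```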

The same holds when applying the CP-mode. However, the number of candidates will be only about half as large if the projector is arranged symmetrically between the cameras (see Fig. 2). For each candidate, the resulting 3D point candidate is generated by triangulation for both candidate lists (CC-mode and CP-mode), and all 3D points of the CC list are compared to those obtained in CP-mode.
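The comparison of the two candidate lists can be sketched as follows. The candidate coordinates are hypothetical and the triangulation producing them is omitted; the distance threshold anticipates the rejection criterion of Sec. 3.2.

```python
import numpy as np

def match_candidates(cc_points, cp_points, thr):
    """Pair each CC-mode 3D candidate with its nearest CP-mode candidate;
    keep the pair only if the 3D distance is below thr."""
    matches = []
    for p in cc_points:
        d = np.linalg.norm(cp_points - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < thr:
            matches.append((p, cp_points[j], float(d[j])))
    return matches

# Hypothetical candidates for one camera pixel (units: mm).
cc = np.array([[0.0, 0.0, 500.0],    # true surface point
               [3.0, 0.0, 650.0]])   # ambiguity from a wrong fringe period
cp = np.array([[0.05, 0.0, 500.1],   # agrees with the first CC candidate
               [9.0, 2.0, 710.0]])
print(len(match_candidates(cc, cp, thr=1.0)))  # 1
```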

### 3.2 Rejection of False Candidates

All pairs of 3D points with a 3D distance bigger than a given threshold are rejected. This threshold can be found by analysis of the typical phase error or can be determined by experiments. The ratio between false positive 3D points and correctly determined points should remain below a certain level, which makes it easy to identify the false positives as outliers in the resulting 3D point cloud and to remove them manually or automatically.

### 3.3 Outlier Removal

There are two possible strategies for outlier removal, depending on how the correct candidate is selected. The first strategy (mode M1) selects exactly one candidate, namely the one with the minimal 3D distance between the CC- and the CP-candidate. In this case, any false candidate must subsequently be removed from the 3D point cloud. The second strategy (mode M2) selects all candidates with distances below the threshold, and only candidates with a bigger distance are removed from the 3D point cloud.
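The two selection strategies can be sketched per camera pixel as follows; the candidate distances are hypothetical values, not measurement data.

```python
import numpy as np

def select_m1(distances):
    """Mode M1: keep exactly the candidate with the minimal CC/CP distance."""
    return [int(np.argmin(distances))]

def select_m2(distances, thr):
    """Mode M2: keep every candidate whose CC/CP distance is below thr."""
    return [i for i, d in enumerate(distances) if d < thr]

# Hypothetical CC/CP distances (mm) of four 3D point candidates at one pixel.
d = np.array([0.12, 0.95, 7.3, 42.0])
print(select_m1(d))           # [0]
print(select_m2(d, thr=1.0))  # [0, 1]
```

M1 yields at most one point per pixel (possibly a false one that must be removed later), whereas M2 may keep several candidates and thus trades completeness against more false positives, as the experiments in Sec. 4 show.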

Which strategy should be selected can be decided after initial experiments. In the current state of our algorithm, outlier removal was performed manually in the 3D point cloud (see next section). Alternatively, automatic removal using point clustering and probabilistic properties of the produced clusters may be realized (see [3]).

### 3.4 Calibration Improvement by Precise Projector Distortion Calculation

A careful calibration of fringe projection based 3D stereo sensors typically provides sufficient measurement accuracy for the given requirements. When the CC-mode is used for triangulation, the calibration parameters of the projection unit are not necessary for the measurement process. Consequently, the projection unit is often not calibrated, or only insufficiently. However, an accurate projector calibration is necessary for our new code-minimized method.

The distortion, especially, is usually not sufficiently determined. This is often due to the fact that projector distortion cannot be well described by a distortion function, because the principal point lies outside the projector chip. Hence, a very careful determination of the projector distortion must be performed. One possibility is the new three-step method presented in the following.

The precondition for the new method of projector distortion determination is an accurate calibration of the two cameras, including camera distortion. There are several ways to evaluate the quality of the current calibration. One possibility is to check the measured surface of a well-known object concerning lengths and flatness of plane surface parts (as proposed e.g. in [11]) and to analyze the epipolar line error and the scaling error (see [1]).

A second precondition is an initial calibration of the projection unit.

In a first step, we improved the intrinsic and extrinsic parameters of the two cameras of the sensor using the method proposed in [1], by minimization of the epipolar line error and the scaling error. In a second step, the intrinsic and extrinsic calibration parameters of the projector were improved using the same method.

In the third step, the distortion matrix of the projector was improved in the following way. A plane surface was placed into the measurement volume at three distances: at the front, centrally, and at the back. Measurements using CC-mode were performed, and a set of 3D points, assumed to be correct, was obtained for each measurement. Then a CP-triangulation was performed, obtaining three sets of corresponding 3D points, whose elements are assumed to be distorted. Every pair of corresponding 3D points now determines a unique correction vector, obtained by back-projection of the points into the projector image plane; these vectors describe the residuals there. In the final step, the vectors are averaged to certain predefined grid points in the projector image plane. The distortion values for all other points that are not grid points are interpolated using the grid points. Thus, a so-called distortion matrix of the projector is obtained (see Fig. 3).

## 4 Experiments and Results

In order to evaluate the described methodology, the following experiments were performed. The sensor "kolibri Cordless" ([8], see Fig. 4) was used, a mobile, lightweight, hand-held sensor. It covers a measurement volume of about . It has a main triangulation angle between the cameras of about and between the cameras and the projector of about .

Sequences of phase-shifted sinusoidal fringe images with four, eight, and 16 images, corresponding to a phase shift of , , and , were applied, respectively. The projected width of one fringe period was 8, 16, 32, or 64 projector pixels. Additionally, a Gray code sequence was added to each sequence in order to make statements concerning measurement accuracy and completeness in relation to a common reference measurement (GC).
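The wrapped phase underlying such sequences can be recovered per pixel with the standard N-step phase-shift formula. A minimal sketch with synthetic intensities (not the sensor's actual data), assuming the common model I_k = A + B·cos(phi + 2·pi·k/N):

```python
import math

def wrapped_phase(intensities):
    """Standard N-step phase-shift evaluation: recover the wrapped phase
    from N equally shifted sinusoidal intensity samples at one pixel."""
    n = len(intensities)
    s = sum(i_k * math.sin(2.0 * math.pi * k / n) for k, i_k in enumerate(intensities))
    c = sum(i_k * math.cos(2.0 * math.pi * k / n) for k, i_k in enumerate(intensities))
    return math.atan2(-s, c)  # wrapped to (-pi, pi]

# Synthetic 4-step sequence for a pixel with true phase 1.0 rad:
true_phi = 1.0
samples = [128 + 100 * math.cos(true_phi + 2.0 * math.pi * k / 4) for k in range(4)]
print(round(wrapped_phase(samples), 6))  # 1.0
```

The result is only unique modulo one fringe period, which is exactly the ambiguity that the Gray code reference (or the double triangulation proposed here) resolves.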

The number of 3D points found using the Gray code sequence was chosen as the reference number of "correct" points. However, note that even with Gray code phase unwrapping, "false" 3D points may be generated, especially at sharp object edges or partly hidden parts of the object surface. Consequently, a completeness level of more than may already include all true 3D points in certain cases.

Two characteristic measurement objects were selected for the experiments concerning completeness of the measurements. The first was a pyramid stump with a volume of and the second a Schiller bust with a size of approximately (see Fig. 4).

### 4.1 Measurement Parameters for Evaluation of the New Method

The following parameters were defined in order to evaluate the new method. Completeness is the percentage of true 3D points in relation to the points determined using the Gray code measurement. 3D points that are not true but are recognized as "true" points are denoted as false positives, also given as a percentage in relation to the Gray code reference.

Measurements were performed before (BCI) and after (modes M1 and M2) calibration improvement by projector distortion correction. The quality of the calibration is evaluated by the average length difference of the corresponding 3D points obtained by CC-mode and CP-mode application.
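The two evaluation parameters can be sketched as follows. The point counts are hypothetical, and the choice of the Gray-code reference count as the common denominator is an assumption made for illustration.

```python
def completeness_and_false_positives(n_true, n_false, n_reference):
    """Completeness: true points relative to the Gray-code reference count.
    False positives: falsely accepted points relative to the same count."""
    completeness = 100.0 * n_true / n_reference
    false_pos = 100.0 * n_false / n_reference
    return completeness, false_pos

# Hypothetical counts for one measurement:
c, f = completeness_and_false_positives(n_true=984, n_false=26, n_reference=1000)
print(c, f)  # 98.4 2.6
```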

### 4.2 Results

Table 1 shows the results for a 16-phase algorithm and a fringe width of 16 pixels for the Schiller bust, before (BCI) and after calibration improvement, using modes M1 and M2 with different values of the threshold thr. Table 2 shows the corresponding results for the pyramid stump. The measurement volume depth was restricted by (bust) and (stump), respectively.

Before calibration improvement, the average length difference was about , and after calibration improvement approximately . The epipolar line error was smaller than 0.1 pixel.

| thr [rad] | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Meas. | compl. [%] | compl. [%] | compl. [%] | compl. [%] | false pos. [%] | false pos. [%] | false pos. [%] | false pos. [%] |
| BCI | 66.3 | 38.4 | 22.4 | 10.1 | 7.5 | 3.0 | 1.8 | 1.0 |
| M1 | 98.4 | 95.7 | 91.0 | 73.8 | 2.6 | 2.3 | 2.0 | 1.4 |
| M2 | 99.1 | 96.3 | 91.5 | 74.2 | 12.7 | 5.5 | 2.7 | 1.5 |
| GC | 100 | | | | 0.2 | | | |

| thr [rad] | | | | | | | | |
|---|---|---|---|---|---|---|---|---|
| Meas. | compl. [%] | compl. [%] | compl. [%] | compl. [%] | false pos. [%] | false pos. [%] | false pos. [%] | false pos. [%] |
| BCI | 51.5 | 23.3 | 11.3 | 5.9 | 0.0 | 0.0 | 0.0 | 0.0 |
| M1 | 99.5 | 98.9 | 98.1 | 86.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| M2 | 99.5 | 98.9 | 98.1 | 86.1 | 0.0 | 0.0 | 0.0 | 0.0 |
| GC | 100 | | | | 0.0 | | | |

As can be seen, modes M1 and M2 lead to identical results for the pyramid stump measurement. Regarding the bust measurement, mode M1 leads to fewer false positives but a slightly lower completeness than mode M2. Which mode should be applied therefore depends on the user's priority. Without calibration improvement (BCI), the results are not sufficient.

The new method has no influence on the measurement accuracy, which only depends on the fringe width and the number of sinusoidal images in the sequence. In all measurements, the standard deviation of the measured 3D surface points was about for the bust and about for the stump. See Fig. 5 for a typical measurement result of the Schiller bust. Modes M1 and M2 are compared with and without outlier elimination. Mode M1 leads to some missing points on the forehead but produces only few false positives. Mode M2 leads to a complete result with a relatively high number of false positive 3D points. However, because of the large distances to the true 3D points, it is easy to eliminate the false positives manually.

## 5 Summary and Outlook

We presented a new method of geometric phase unwrapping for fringe projection based 3D stereo sensors with reduced projection code. The novelty is the double triangulation for 3D point calculation, which leads to almost unique 3D results after a considerable calibration improvement of the projector.

Future work should include automatic outlier elimination by 3D point clustering, application of the new method to more scanners, and an improved evaluation of the results. We expect further improvement of the projector calibration from the consideration of 3D distortion effects. If those can be detected and removed, the decision threshold can be reduced, leading to fewer false positive 3D points.

## References

- [1] Christian Bräuer-Burchardt, Peter Kühmstedt, and Gunther Notni. Error compensation by sensor re-calibration in fringe projection based optical 3D stereo scanners. Proc. ICIAP, pages 363–373, 2011.
- [2] Christian Bräuer-Burchardt, Peter Kühmstedt, and Gunther Notni. Phase unwrapping using geometric constraints for high-speed fringe projection based 3D measurements. Proc. SPIE, 8789:87890 1–11, 2013.
- [3] Christian Bräuer-Burchardt, Christoph Munkelt, Matthias Heinze, Peter Kühmstedt, and Gunther Notni. Using Geometric Constraints to Solve the Point Correspondence Problem in Fringe Projection Based 3D Measuring Systems. Proc. ICIAP, pages 265–274, 2011.
- [4] Miguel Arevalillo Herráez, David R. Burton, Michael J. Lalor, and Munther A. Gdeisat. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Applied Optics, 41:7437–7444, 2002.
- [5] Rui Ishiyama, Shizuo Sakamoto, Johji Tajima, Takayuki Okatani, and Koichiro Deguchi. Absolute phase measurements using geometric constraints between multiple cameras and projectors. Applied Optics, 46(17):3528–3538, 2007.
- [6] E.B. Li, X. Peng, J.F. Chicaro, J.Q. Yao, and D.W. Zhang. Multi-frequency and multiple phase-shift sinusoidal fringe projection for 3D profilometry. Optics Express, 13:1561–1569, 2005.
- [7] Thomas Luhmann, Stuart Robson, Stephen Kyle, and Ian Harley. Close range photogrammetry. Whittles Publishing, 2006.
- [8] Christoph Munkelt, Christian Bräuer-Burchardt, Peter Kühmstedt, Ingo Schmidt, and Gunther Notni. Cordless hand-held optical 3D sensor. Proc. SPIE, 6618:66180D 1–8, 2007.
- [9] Giovanna Sansoni, Matteo Carocci, and Roberto Rodella. Three-dimensional vision based on a combination of Gray-code and phase-shift light projection: Analysis and compensation of the systematic errors. Applied Optics, 38(31):6565–6573, 1999.
- [10] Wolfgang Schreiber and Gunther Notni. Theory and arrangements of self-calibrating whole-body three-dimensional measurement systems using fringe projection technique. Optical Engineering, 39:159–169, 2000.
- [11] VDI / VDE 2634. Optical 3D-measuring systems. VDI/VDE guidelines, Parts 1-3, 2008.
- [12] Mark Young, Erik Beeson, James Davis, Szymon Rusinkiewicz, and Ravi Ramamoorthi. Viewpoint-coded structured light. CVPR, pages 1–8, 2007.
- [13] Hong Zhang, Michael J. Lalor, and David R. Burton. Spatiotemporal phase unwrapping for the measurement of discontinuous objects in dynamic fringe-projection phase-shifting profilometry. Applied Optics, 38(31):3534–3541, 1999.
- [14] Song Zhang. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Optics and Lasers in Eng., pages 149–159, 2010.
