Camera calibration is a crucial step in many 3D vision applications, such as robotic navigation, depth map estimation, and 3D reconstruction. Camera calibration establishes the geometric relation between the 3D world coordinate system (WCS) and the 2D image plane of the camera by finding the extrinsic and intrinsic camera parameters. The extrinsic parameters, which define the translation and orientation of the camera with respect to the world frame, transform the 3D WCS into the 3D camera coordinate system (CCS). On the other hand, the intrinsic parameters, including the principal point, focal length, and skewness factor, transform the 3D CCS to the 2D image plane of the camera.
Camera calibration can be roughly classified into two categories: photogrammetric calibration [5, 15] and self-calibration [8, 2]. The former methods perform calibration based on sufficient measurements of 3D points with known correspondences in the scene and assume that calibration objects/templates are available. However, in a large-scale camera network, it is hard to acquire this kind of measurement or information for each camera. Therefore, many methods have been proposed to self-calibrate cameras automatically based on certain assumptions about the online camera scenes [9, 14]. In addition, both categories can also accomplish camera calibration through vanishing-point-based methods [3, 16] or pure rotation approaches.
Zhang’s method  is considered the most widely used photogrammetric calibration method due to its low cost and flexibility: it only requires a printed checkerboard pattern pasted on a flat surface and captured by the camera in at least two different orientations. However, two main issues are associated with such an approach: (i) the checkerboard patterns are usually placed randomly and used all together, without a systematic procedure to screen out ill-posed patterns, and (ii) all intrinsic parameters are assumed to be fixed throughout the pattern capturing process. Regarding Issue (i), inconsistent or unreasonable calibration results may be generated from different sets of checkerboard patterns for the same camera, as the two apparently independent sets of (intrinsic and extrinsic) parameters are calculated simultaneously via purely algebraic formulations. Moreover, the complexity of such formulations, which are established not for the original intrinsic parameters but for their nonlinear transformations, also greatly decreases the feasibility of developing more general formulations to resolve Issue (ii), which may occur quite often in practice.
Tan et al.  partly address Issue (i) by first replacing physical checkerboard patterns with virtual ones displayed on a screen to minimize the localization error of point features (the corner points) resulting from blurry images due to hand motion. Then, by conveniently using different sets of virtual patterns in their experiments, appropriate poses of these virtual patterns are suggested: the selected point features should be distributed uniformly across the image captured by the camera. Nonetheless, such a conclusion may not be decisive, as different suggestions for appropriate poses of the calibration object/pattern also exist [11, 10].
To partly address Issue (ii), only the principal point is assumed fixed and estimated in  and , with the skewness factor ignored and the focal length not assumed to be fixed. The estimation is based on the establishment of a coordinate system of the image plane (IPCS), which has a special geometric relationship to a corresponding WCS, with the X-Y plane (the calibration plane) of the latter containing the calibration pattern. Specifically, their relationship can be described by the following rules of geometry, as depicted in Figure 1, wherein  and  are the image plane and the calibration plane, respectively, and the projection center is collinear with the line connecting the origins of the IPCS and WCS:
R1. is parallel to intersection of and .
R2. Image of  (collinear with ) is perpendicular to .
For R1, it is not hard to see that only lines parallel to , e.g.,  and , will have their images parallel to . On the other hand, the line containing  in R2, e.g.,  in Figure 1, will pass through the vanishing point of the images of lines perpendicular to , e.g., ,  and , and corresponds to their axis of symmetry. Moreover, it is shown in  and  that such an image line feature, called the principal line in this paper, will also pass through the principal point , i.e., the intersection of the optical axis and the image plane. Therefore, the camera principal point can be identified as the intersection of the principal lines obtained for a set of calibration planes with different poses.
In this paper, a new technique of camera calibration is proposed. The analytically tractable technique is based on R1 and R2 described in Figure 1 and has the following desirable features:
F1. Efficiency —While the principal line of each calibration plane is obtained empirically in  and  by analyzing a sequence of planar image patterns before R1 and R2 are achieved, the equation of the principal line is obtained in closed form in this paper using only a single calibration pattern.
F2. Completeness —By assuming circular symmetry of the imaging system (such an assumption is quite reasonable nowadays for a variety of cameras, as one can see later in Sec. 3), the proposed approach can derive all intrinsic parameters, while the extrinsic ones can be found readily for each calibration plane with respect to the special WCS-IPCS pair shown in Figure 1, but with their origins located on the camera optical axis, as well as for any WCS-IPCS pair if needed.
F3. Robustness/accuracy —Based on the geometry associated with each corresponding principal line, an effective way of identifying ill-posed calibration planes is developed, so that the robustness and accuracy of the calibration can be greatly improved by discarding such outliers, resolving Issue (i) of Zhang’s method .
F4. Flexibility —Without assuming a fixed camera focal length (FL), the proposed approach can find different values of FL adopted in the image capturing process for calibration patterns of different poses, and successfully address Issue (ii) of Zhang’s method.
The rest of this paper is organized as follows. In the next section, a closed-form solution of the principal line is first derived from the homography matrix, which is obtained from corresponding point features on a calibration plane and the image plane. A set of such line features can then be used to determine the principal point and, subsequently, the remaining camera parameters. In Section 3, experimental results on both synthetic and real data are presented to demonstrate the superiority of the proposed approach in robustness, accuracy, and flexibility, with elaborations of some guidelines for the selection of appropriate poses of calibration planes. Finally, some concluding remarks are given in Section 4.
2 Derivation of Closed-Form Solutions of Camera Parameters
In this section, camera parameters are derived analytically via the establishment of the special geometric relation between the IPCS and WCS described in F2. In Section 2.1, the derivation of the principal line from a single image of the calibration pattern is elaborated, which includes the establishment of  and  of R1 via rotation, followed by finding the principal line (and  of R2) using the vanishing point. In Section 2.2, closed-form solutions of the intrinsic parameters are derived, which include the derivation of the principal point from a set of principal lines, followed by the derivation of the camera focal length by using the principal point to establish the special WCS-IPCS pair described in F2. Finally, the extrinsic parameters can be obtained easily for such a pair of coordinate systems.
2.1 Deriving Closed-Form Solution of the Principal Line
In this section, in order to simplify the derivation of the principal line on the image plane, the orientation of a unit square on the calibration plane is considered. Specifically, the rotation of the square that results in a trapezoidal shape of its image is derived in closed form. Subsequently, the direction of the two bases of the trapezoid is identified as the direction of  and  of R1, while the principal line is identified as the line orthogonal to the bases and passing through the intersection of the two legs of the trapezoid.
2.1.1 Finding the Direction of and of R1
Assume a square  in the WCS, as shown in Figure 2(a), is captured by a camera, with  being its image in the IPCS. Since  and  are planar surfaces, a homography matrix  can be used to represent their relationship. The goal of this subsection is to derive the angle  in Figure 2(b) such that  is parallel to , as shown in Figure 2(c).
Assume is the rotation matrix associated with angle , and by rotating rectangle with , , and using the rotation matrix with respect to point , i.e.,
followed by multiplying with H to transform the rotated rectangle from WCS to IPCS, we have
in homogeneous coordinates. By defining  and ,  and  can be represented as
Since is parallel to , we have
Thus, we can obtain the desired rotation angle,
2.1.2 Finding the Principal Line (and of R2)
After the rotation shown in Figure 2(b), the intersection between lines () and () of the square , denoted by , can be calculated by using the cross product in homogeneous coordinates,
Then, the corresponding intersection point in the IPCS, denoted by , which is the vanishing point of these lines in the image, can be obtained by
A line that is perpendicular to line can then be expressed by
where , and  is a constant. The value of  can be solved by plugging (6) into (8), so that (8) is not only perpendicular to  but also passes through vanishing point . The algebraic solution of  is listed in the supplementary material.
Thus, the principal line is derived using the homography matrix alone. By repeating the above procedure for all images containing the calibration pattern, multiple principal lines can be obtained and used to find the principal point, as discussed next.
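The steps of Sections 2.1.1 and 2.1.2 can be sketched numerically. The following NumPy sketch (function and variable names are chosen here, not taken from the paper) finds the rotation that makes the square's bases parallel in the image by sending the vanishing point of that in-plane direction to infinity, then builds the principal line through the vanishing point of the perpendicular in-plane direction:

```python
import numpy as np

def principal_line(H):
    # H maps calibration-plane points (x, y, 1) to image points, up to scale.
    # Rotation angle making the square's bases parallel in the image:
    # the direction (cos t, sin t) whose vanishing point H @ [cos t, sin t, 0]
    # lies at infinity (third component zero).
    h31, h32 = H[2, 0], H[2, 1]
    theta = np.arctan2(-h31, h32)  # assumes plane not parallel to image plane
    c, s = np.cos(theta), np.sin(theta)
    # image direction of the (now parallel) bases
    d = H[:2, :2] @ np.array([c, s])
    d /= np.linalg.norm(d)
    # vanishing point of in-plane lines perpendicular to the bases
    v = H @ np.array([-s, c, 0.0])
    v = v[:2] / v[2]
    # principal line: through v, perpendicular (in the image) to the bases,
    # i.e. its normal vector is the bases' image direction
    a, b = d
    return np.array([a, b, -(a * v[0] + b * v[1])])  # a*u + b*v + c = 0
```

For a homography generated by a camera with zero skew and square pixels, the returned line passes through the principal point, consistent with R2.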
2.2 Deriving Closed-Form Solution of Intrinsic Parameters
2.2.1 Derivation of the Principal Point
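As described in Section 1, the principal point is identified as the common intersection of the principal lines; since noisy lines do not meet at a single point, a least-squares solution is used. A minimal sketch of this step (the function name and the use of NumPy are illustrative, not the paper's code):

```python
import numpy as np

def principal_point(lines):
    # lines: (N, 3) array of principal lines a*u + b*v + c = 0.
    # The least-squares intersection solves [a_i b_i] [u v]^T = -c_i
    # over all lines simultaneously.
    A, rhs = lines[:, :2], -lines[:, 2]
    pp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pp
```

With at least two non-parallel principal lines the system is well posed; more lines with diverse azimuth angles improve the conditioning, in line with guideline (ii) of Sec. 3.1.2.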
2.2.2 Derivation of the Focal Length
Considering the original IPCS and WCS, the relationship between a point  on the calibration plane () and its image  can be expressed as
or, with 2D coordinates,
where is the homography matrix in (1).
In this section, the formulation of the focal length estimation is greatly simplified by transforming the IPCS and WCS into the special geometry depicted in Figure 1, with the optical axis passing through their origins. Note that the principal point  in the IPCS is derived in Section 2.2.1, whereas its corresponding point in the WCS can be obtained using (2.2.2). Thus, the coordinate transformation is performed by translating these origins accordingly, followed by rotating both the IPCS and WCS axes according to the principal line (8) and its counterpart in the WCS, respectively. Therefore, the following formulation can be established from (2.2.2):
where and are points of the transformed new IPCS and WCS, respectively, with
and and are similar to and , but with , , and replaced by , , and respectively. Note that and are coefficients associated with (8) and its counterpart in WCS, respectively, with .
On the other hand, for the new IPCS and WCS, it is easy to see that we will have rotation matrix
or , for their relative orientation, and
for their relative location. By comparing (13) and (14) with (2.2.2), we have, up to a scaling factor (equivalently, but less directly, a homography matrix similar to that in (15) can be obtained by finding coordinates of point features for the new IPCS-WCS pair before such a matrix is estimated),
2.3 Derivation of Extrinsic Parameters
In the previous subsection, analytic expressions for two extrinsic parameters are derived in (16) and (17). As mentioned before, we have  for the other four parameters for the new WCS-IPCS pair. In fact, only five parameters (15) are needed to completely specify the relative position and orientation of the two planes. In particular, the relative position of the new WCS-IPCS pair can be represented by the distance () between the two origins, with , while their relative orientation can be represented by (a) the azimuth angle of the principal line, i.e., , and (b) the elevation angle () between the two planes. Such a concise formulation of the extrinsic parameters is more intuitive and useful. For example, (b) can be used to screen out calibration patterns with bad poses, while a set of good but redundant patterns can be identified using (a), as will be illustrated in the experimental results presented in the next section.
On the other hand, the set of extrinsic parameters similar to those found in Zhang’s method for the original IPCS and WCS can also be obtained if needed. Following the notation in (2.2.2) and plugging the solutions of the principal point (9) and the focal length (18) into ,  can be solved as
being a column vector. By using the property of the rotation matrix,  can be calculated as . Thus, the extrinsic parameters for the original IPCS-WCS pair can be solved.
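This recovery of the extrinsic parameters from the homography, once the intrinsic matrix has been assembled from the estimated principal point and focal length, can be sketched as follows (names are illustrative; the SVD re-orthonormalization is a common practical safeguard against noise, not a step stated in the paper):

```python
import numpy as np

def extrinsics_from_H(H, K):
    # Columns of K^-1 H are proportional to [r1, r2, t]; the scale is fixed
    # by requiring ||r1|| = 1, and r3 = r1 x r2 completes the rotation.
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack((r1, r2, np.cross(r1, r2)))
    # project onto the nearest rotation matrix (useful when H is noisy)
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

For a noise-free homography the decomposition is exact; with noisy point features the SVD step restores orthonormality of the estimated rotation.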
3 Experimental Results and Discussions
In this section, two sets of experimental results are provided. First, synthetic data are used to (i) validate the correctness of the analytic expressions derived in Section 2, and (ii) compare the accuracy of the estimated parameters with Zhang’s method, with and without noise added to the point features in the image plane, as ground truth (GT) is available. Then, calibration results for real data are provided for a more comprehensive demonstration of the proposed method. Possibilities of improving the calibration results by screening out inappropriate calibration patterns are also illustrated for both cases.
3.1 Performance Evaluation Using Synthetic Data
In this subsection, the correctness of the analytic expressions derived in Sec. 2.2.1 for the principal point is first verified using synthetic data described in the following. It is assumed for simplicity that the camera optical axis passes through the origin of the WCS, whose X-Y plane, e.g., the calibration plane shown in Figure 1, has a fixed rotation angle  with respect to , with additional calibration planes obtained by rotating the plane, each time by , with respect to . Figure 3 shows a set of images obtained for  with the principal point located exactly at the image center, wherein the four corners of a square calibration pattern are used to derive the elements of  in (1) for each (virtual) 640×480 image.
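The homography of each synthetic view can be estimated from the four corner correspondences. A basic direct linear transform (DLT) sketch is shown below (not the paper's code; it assumes exact, non-degenerate correspondences and no normalization):

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct linear transform: each correspondence (x, y) -> (u, v)
    # contributes two rows of the homogeneous system A h = 0.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)   # null-space vector = stacked H entries
    return H / H[2, 2]         # fix the arbitrary scale (assumes H[2,2] != 0)
```

With exactly four corners the system has an eight-dimensional row space and the null-space vector gives H exactly; with more (noisy) points the same code returns the least-squares estimate.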
Multiple sets of images under different extrinsic parameters (, , , and ), perturbations (with/without noise), and different focal lengths (FL) are then collected for evaluating the performance of the proposed method and Zhang’s method. The evaluation metrics for , , rotation , and translation  are based on the difference between the estimates and the ground truth, and can be defined as
3.1.1 Synthetic Data without Noise
Under the noise-free setting, Figure 3(b) shows the resultant eight (perfect) principal lines obtained in closed form using (8), which intersect exactly at the principal point. Due to the symmetry in their orientations, only four (pairs of) principal lines can be distinguished in the illustration. Perfect estimation results for the focal length (FL) and the extrinsic parameters are obtained with both the proposed method and Zhang’s method for this simple setting. Note that real numbers are used to represent all numerical values during the associated computations, while the input images and the final results shown in Figure 3 are illustrated with virtual 640×480 images.
3.1.2 Synthetic Data with Noise and Variation of Poses
To investigate the robustness of the proposed approach, noise is added to the point features used to find  in (1), as these features are the only source of interference that may affect the correctness of each principal line. Figure 3(c) illustrates calibration results similar to those shown in Figure 3(b), with noise uniformly distributed between  pixels added to the  and  coordinates of the corners shown in Figure 3(a) to simulate point feature localization errors resulting from image digitization.
For a more systematic error analysis, which also takes into account the influence of different poses of the calibration plane, similar simulations are performed for two different noise levels, with  ranging from  to  (with  and ), and repeated 20 times for each pose of the calibration plane. Owing to the simplicity of the proposed approach, a total of 17×20 = 340 principal points are estimated in 4.5 seconds for each noise level, including the analysis of the images to generate the same number of principal lines.
Figures 4(a) and (b) show the means and standard deviations of the estimation error (in image pixels), respectively, for the above simulation. It is readily observable that larger noise results in less accurate calibration results, which are also less robust. Aside from the statistical comparison, we found that better results are generated for poses of the calibration plane away from the two degenerate conditions in Figure 1, i.e.,  and . Moreover, according to the results in Figure 4, it is reasonable to suggest that (i) the best values of  are around . Based on such an observation,  is selected to be approximately  for the following experiments. Furthermore, as the principal point is derived via the least-squares solution (9) from all the principal lines, it is also suggested that (ii) it is better to distribute  uniformly between  and .
3.1.3 Full Camera Calibration for Fixed Focal Length
For a more complete error analysis covering the estimation of all parameters of a camera with fixed focal length, more general IPCS-WCS configurations (also with ) are considered, as listed in Table 1 for the first two datasets, with additive noise of  pixels. One can see that smaller estimation errors are achieved for all parameters by the proposed approach, possibly due to the simplicity of the formulations established in Section 2 compared with those given in . (Note that  is computed from the average of the FLs, each obtained from a single input image, for the proposed approach, and from the mean value of  and  for Zhang’s method.)
As for the robustness of camera calibration, the proposed geometry-based approach can improve the parameter estimation by screening out calibration patterns with bad poses. For example, the 3rd dataset in Table 1 is obtained by replacing four input patterns of the 2nd dataset with four unfavorable ones (), resulting in less accurate estimates for most parameters. Nonetheless, by removing such patterns using (16), as shown in Table 1 for the last (4th) dataset, the calibration is improved for all parameters.
|,||FL||, ,||(for 8 calibration patterns)|
|5||40, 10||400 & 440||0, 0, 35||0.0||400.0, 400.0, 400.0, 400.0, 440.0, 440.0, 440.0, 440.0||0.0||0.0||4.9||320.4||1.61||3.85|
|6||✓||40, 10||400 & 440||0, 0, 35||5.2||402.5, 383.4, 395.1, 409.2, 430.7, 449.1, 422.9, 442.3||0.89||0.84||14.0||399.4||1.63||3.02|
3.1.4 Full Camera Calibration for Varied Focal Length
As one of the key features (F4) mentioned in Section 1, the proposed method can calibrate cameras with a non-fixed focal length (FL), while Zhang’s method is not designed to cope with such a situation. Table 2 shows the calibration results obtained with the proposed approach, as well as those from Zhang’s method. It is readily observable that, even under the noise-free condition, significant estimation errors already appear for the latter, whereas perfect estimates are achieved with the proposed approach. As for the noisy case, while our results exhibit some expected estimation errors, Zhang’s method may generate extraneous errors for some, if not all, parameters. In either case, the major difference in calibration performance lies in the estimation of the focal length, which is greatly constrained by the ability to cope with a varied focal length during the image acquisition process.
3.2 Performance Evaluation Using Real Data
For the performance evaluation of the proposed calibration method under realistic conditions, 78 checkerboard images are employed as calibration patterns. Similar to the experiments considered in Section 3.1, we demonstrate the flexibility of the proposed method over Zhang’s method by comparing their performance under two different experimental setups: (a) using a fixed focal length and (b) using different focal lengths throughout the image acquisition process. For setup (a), another approach to outlier removal, based on the RMSE of the principal point estimate in (9), is demonstrated to filter out ill-posed calibration planes and further improve the stability and robustness of the results.
3.2.1 Real Data with Fixed Focal Length
For the camera calibration considered in this section, the focal length of the camera is fixed while the calibration patterns are captured. As suggested in Section 3.1.2, a good set of images should have  and should be nearly uniformly distributed between  and . Figure 5(a) shows four of the eight images thus obtained with a Logitech webcam with an image resolution of 640×480, while Figure 5(b) shows a total of 8 principal lines, with (313.5, 246.3) being the estimated location of the principal point. The estimated principal point, which is near the center of the image, along with the estimated focal length (FL), are shown as the first set (Set 7) of data in Table 3. Under these near-ideal circumstances, both our method and Zhang’s method produce similar results. Note that rotation and translation results are not reported due to limited space and can be found in the supplementary material.
To evaluate the sensitivity of the calibration to unfavorable (ill-posed) calibration patterns, results for three more datasets (Sets 8-10) are also included in Table 3, each obtained by replacing some good patterns in Set 7 with unfavorable ones. The adverse influence of such replacements is readily observed from the dramatic increase of RMSE/STD for the proposed approach, e.g., for the principal lines shown in Figures 6(a) and (b), and from the estimation errors in PP/FL for Zhang’s method. Nonetheless, the proposed approach appears more robust, as the average values of PP and FL are not affected as much.
|Set||PP (proposed)/RMSE||FL (proposed)/STD||PP (Zhang)||FL (Zhang)|
|7||(313.5, 246.3)/2.77||617.1/7.8||(317.6, 248.9)||616.8|
|8||(310.8, 240.2)/59.8||610.2/29.7||(342.2, 235.8)||631.8|
|9||(313.2, 237.4)/56.4||611.5/26.3||(350.1, 230.9)||634.8|
|10||(307.1, 235.7)/30.7||619.6/30.7||(335.1, 237.0)||628.2|
|Set||FL setting||PP (proposed)||Per-image FL estimates (proposed)||PP (Zhang)||FL (Zhang)|
|14||39||(1917.6, 1270.5)||3851.7, 3787.6, 3817.9, 3836.3, 3837.9, 3835.6||(1923.0, 1274.5)||3822.3|
|15||50||(1920.6, 1263.8)||4770.0, 4741.9, 4727.2, 4669.1, 4721.7, 4707.1||(1919.1, 1275.6)||4712.5|
|16||Mixed||(1916.8, 1266.0)||3791.5, 3823.2, 3841.7, 4734.7, 4710.2, 4766.1||(1897.0, 1416.8)||3953.0|
On the other hand, it is possible for the proposed approach to remove the above problematic calibration patterns, similar to what is performed in Sec. 3.1.3 for synthetic data. Specifically, Sets 11, 12, and 13 in Table 4 are obtained by simply screening out possible outliers in Sets 8, 9, and 10, respectively, whose RMSE values are greater than 15. One can see that most estimates are improved, with both RMSE and STD reduced. Figures 6(c) and (d) show such outlier removal results for their counterparts shown in Figures 6(a) and (b), respectively.
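The screening step can be sketched as follows: fit the principal point, measure each principal line's pixel distance to it, drop lines above a threshold (15 is used for the datasets above), and refit. The names and the one-shot (non-iterative) strategy are illustrative, not taken from the paper:

```python
import numpy as np

def fit_pp(lines):
    # least-squares principal point from lines a*u + b*v + c = 0
    pp, *_ = np.linalg.lstsq(lines[:, :2], -lines[:, 2], rcond=None)
    return pp

def screen_outliers(lines, thresh):
    # normalize so (a, b) is a unit normal: residuals become pixel distances
    lines = lines / np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    resid = np.abs(lines[:, :2] @ fit_pp(lines) + lines[:, 2])
    keep = resid <= thresh
    return fit_pp(lines[keep]), keep
```

A stricter variant would iterate the fit-and-drop cycle until no residual exceeds the threshold, since a strong outlier also biases the initial fit.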
Besides ill-posed calibration patterns, a set of good patterns that violates guideline (ii) mentioned in Sec. 3.1.2 may also give unreliable estimation results, as shown in Figures 7(a) and (b) for the calibration patterns and the corresponding principal lines, respectively. However, such a condition can be easily detected by using the azimuth angle  of the principal lines in (4), and new calibration images can be retaken to generate more reliable calibration results.
3.2.2 Camera Calibration with Varied Focal Length
In this subsection, the calibration of cameras with varied focal length (FL) is considered, which is more challenging but corresponds to a fairly common situation in real-world scenarios. Table 5 shows the calibration results for three sets of calibration patterns captured by a Canon 5D camera with an image resolution of 3840×2560. (Due to limited space, images of calibration patterns similar to those shown in Figure 5 are provided in the supplementary material.) As the guidelines mentioned in Sec. 3.1.2 for selecting good poses of the calibration pattern are closely followed, satisfactory calibration results are obtained with both methods for the first two datasets (Sets 14 and 15), each established for a fixed (but different) FL.
On the other hand, consider the last dataset (Set 16) in Table 5, which corresponds to a mixture of calibration patterns from Set 14 and Set 15, with half of them obtained from the 1st half of the former and the other half from the 2nd half of the latter. It is readily observable that our approach still performs satisfactorily and generates results similar to those for Sets 14 and 15. However, unacceptable results are obtained with Zhang’s method, which assumes a fixed FL. In particular, the estimated FL (3953.0) is quite different from one of the two FLs obtained for the fixed cases (3822.3) and very different from the other (4712.5). Moreover, the estimation of the principal point is also seriously impaired under such a situation, i.e., with a deviation of more than 20 pixels (140 pixels) in the horizontal (vertical) direction. (Similar problems can be expected for the estimation of the extrinsic parameters as well.)
4 Conclusion
In this paper, we have made a major attempt to establish a new camera calibration procedure based on a geometric perspective. The proposed approach resolves two main issues associated with the widely used Zhang’s method: the lack of clear hints on appropriate pattern poses and the limitation on applicability imposed by the assumption of a fixed focal length. The main contribution of this work is a closed-form solution to the calibration of extrinsic and intrinsic parameters based on the analytically tractable principal lines, with the intersection of such lines being the principal point, while each line conveniently represents the relative 3D orientation and position (up to one degree of freedom for both) between the image plane and a calibration plane for a corresponding IPCS-WCS pair. Consequently, the computations associated with the calibration can be greatly simplified, while useful guidelines for avoiding outliers in the computation can be established intuitively. Experimental results for both synthetic and real data clearly validate the correctness and robustness of the proposed approach, which compares favorably with Zhang’s method, especially in terms of the possibility of screening out problematic calibration patterns and the ability to cope with varied focal lengths. More applications of this new technique of camera calibration are currently under investigation.
References
- (2016-07) Camera principal point estimation from vanishing points. In 2016 IEEE National Aerospace and Electronics Conference (NAECON) and Ohio Innovation Summit (OIS), pp. 307–313.
- (1998-01) From projective to Euclidean space under any practical situation, a criticism of self-calibration. In Sixth International Conference on Computer Vision, pp. 790–796.
- (1990-03) Using vanishing points for camera calibration. International Journal of Computer Vision 4 (2), pp. 127–139.
- (2016-10) Calibration of a dynamic camera cluster for multi-camera visual SLAM. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4637–4642.
- (1993-01) Three-dimensional computer vision: a geometric viewpoint. MIT Press.
- (2005) Accurate and efficient stereo processing by semi-global matching and mutual information. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, pp. 807–814.
- (2018-10) Fully automatic camera calibration for principal point using flat monitors. pp. 3154–3158.
- (1992-08) A theory of self-calibration of a moving camera. International Journal of Computer Vision 8 (2), pp. 123–151.
- (2010-09) Capabilities and limitations of mono-camera pedestrian-based autocalibration. In 2010 IEEE International Conference on Image Processing, pp. 4705–4708.
- (2011-09) Optimal conditions for camera calibration using a planar template. In 2011 18th IEEE International Conference on Image Processing, pp. 853–856.
- (2017-10) [POSTER] Efficient pose selection for interactive camera calibration. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 182–183.
- (2010-02) A fast 3D hand model reconstruction by stereo vision system. In 2010 The 2nd International Conference on Computer and Automation Engineering (ICCAE), Vol. 5, pp. 545–549.
- (2017) Automatic camera calibration using active displays of a virtual pattern. In Sensors.
- (2016-12) Camera self-calibration from tracking of moving persons. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 265–270.
- (1987-08) A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal on Robotics and Automation 3 (4), pp. 323–344.
- (1991-04) Camera calibration by vanishing lines for 3-D computer vision. IEEE Trans. Pattern Anal. Mach. Intell. 13 (4), pp. 370–376.
- (2016) Quantum theory, groups and representations: an introduction. Columbia University.
- (2000-11) A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (11), pp. 1330–1334.