A Novel Illumination-Invariant Loss for Monocular 3D Pose Estimation

11/28/2013 ∙ by Srimal Jayawardena, et al. ∙ Australian National University

The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods. It does not require prior training, knowledge of the camera parameters, explicit point correspondences or matching features between the image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object. It works on a single static image from a given view under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between the 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection in real photographs are presented.


1 Introduction

(a) Initial rough pose
(b) Final pose
Figure 1: Recovered pose of a Mazda Astina using a scanned 3D model of the car. 1(a) shows the ‘Initial rough pose’ (from the wheel-match method) used to initialize the optimization. 1(b) shows the resulting ‘Final pose’ (a perfect match) obtained by optimizing the novel loss function. The pose is shown in yellow by an outline of the projected 3D model. The images have been cropped for visual clarity. Note the large amount of reflection on the front of the car, which makes pose recovery very challenging with conventional methods.

Pose estimation is a fundamental problem in computer vision and has applications in robotic vision and intelligent image analysis. In general, pose estimation refers to the process of obtaining the location and orientation of an object and its parts. We restrict our work to non-articulated objects where there is no relative movement between object parts. The accuracy and nature of the pose estimate required varies from application to application. Certain applications require the estimation of the full 3D pose of an object, while other applications require only a subset of the pose parameters.

Motivation. The 2D-3D registration problem in particular is concerned with estimating the pose parameters that describe a 3D object model within a given 2D scene. An image/photograph of a known object can be analyzed in greater detail if a 3D CAD model of the object can be registered over it (as in Figure 1(b)) to be used as a ground truth. A target application is automatic damage detection in vehicles using photos taken by a non-expert. The photos will be taken in an uncontrolled environment (where the orientation of the vehicle and camera parameters are unknown) and delivered to a server for analysis. We restrict ourselves to cases where the vehicle is not completely destroyed. The focus of this work is to develop a method to estimate the pose of a known 3D object model in a given 2D image, with an emphasis on estimating the pose of vehicles. We have the following objectives in mind.

  • Use only a single, static image limited to a single view

  • Work with any unknown camera (without prior camera calibration)

  • Avoid user interaction

  • Avoid prior training / learning

  • Work under varying and unknown lighting conditions

  • Estimate the full 3D pose of the object (not a partial pose as in an overhead view of traffic or machine parts along a conveyor belt)

  • Work in an uncontrolled environment

A 3D pose estimation method with these properties would also be useful in remote sensing, automated scene recognition and computer graphics, as it allows for additional information to be extracted without the need for human involvement.

Existing pose estimation methods include point correspondence based [6, 25], implicit shape model based [1] and image gradient based [16, 37] methods. However, these methods do not fully satisfy the objectives listed above, hence the need for our novel method. A detailed review of existing pose estimation methods spanning the past 30 years is presented in Section 2.

Main contribution. This paper presents a method which registers a known 3D model onto a given 2D photo containing the modeled object while satisfying the objectives outlined above. It does this by measuring the closeness of the projected 3D model to the 2D photo on a pixel (rather than feature) basis. Background and unknown lighting conditions of the photo are major complications, which prevent using a naive image difference like the absolute or square loss as a measure of fit.

The major contribution of this paper is the novel “distance” measure in Section 3, which depends neither on the lighting of the real scene in the photo nor on choosing an appropriate lighting for the rendering of the 3D model, and hence does not require knowledge of the lighting. Technically, in Section 4 we derive a loss function for vector-valued pixel attributes (of different modality) that is invariant under linear transformations of the attributes.

The loss function is analyzed using synthetic and real photographs in Section 5. We show that the loss function is well behaved and can be optimized using a standard optimization method to find an accurate pose. Optimizing the loss function is described in Section 6. The sensitivity of the final recovered pose to the initialization, and results on real photographs, are presented in Section 7. Implementation details are discussed in Section 8.

2 Related Work

Model based object recognition has received considerable attention in the computer vision community. A survey by Chin and Dyer [5] shows that model based object recognition algorithms generally fall into three categories based on the type of object representation used: 2D representations, 2.5D representations and 3D representations.

2D representations store the information of a particular 2D view of an object (a characteristic view) as a model and use this information to identify the object from a 2D image. Global feature methods have been used by Gleason and Agin [11] to identify objects like spanners and nuts on a conveyor belt. Such methods use features such as the area, perimeter, number of holes visible and other global features to model the object. Structural features like boundary segments have been used by Perkins [30] to detect machine parts using 2D models. A relational graph method has been used by Yachida and Tsuji [42] to match objects to a 2D model using graph matching techniques. These 2D representation based algorithms require prior training of the system using a ‘show by example’ method.

2.5D approaches are also viewer centered, where the object is known to occur in a particular view. They differ from the 2D approach in that the model stores additional information such as intrinsic image parameters and surface-orientation maps. The work done by Poje and Delp [32] explains the use of intrinsic scene parameters in the form of range (depth) maps and needle (local surface orientation) maps. Shape from shading [12] and photometric stereo [40] are some other examples of the 2.5D approach used for the recognition of industrial parts. A range of techniques for such 2D/2.5D representations is described by Forsyth and Ponce [9], who pose the object recognition problem as a correspondence problem. These methods obtain a hypothesis based on the correspondences of a few matching points in the image and the model. The hypothesis is validated against the remaining known points.

3D approaches are utilized in situations where the object of interest can appear in a scene from multiple viewing angles. Common 3D representation approaches use either an ‘exact representation’ or a ‘multi-view feature representation’. The latter uses a composite model consisting of 2D/2.5D models for a limited set of views. Multi-view feature representation is used along with the concept of generalized cylinders by Brooks and Binford [3] to detect different types of industrial motors in the so-called ACRONYM system. The models used in the exact representation method, on the contrary, contain an exact representation of the complete 3D object, so a 2D projection of the object can be created for any desired view. Unfortunately, this method is often considered too costly in terms of processing time.

Limitations. The 2D and 2.5D representations are insufficient for general purpose applications. For example, in the case of vehicle damage detection, a vehicle may be photographed from an arbitrary view in order to indicate the damaged parts. Similarly, the 3D multi-view feature representation is unsuitable as it restricts the pose of the object to a limited set of views. Therefore, an exact 3D representation is preferred. Little work has been done to date on identifying the pose of an exact 3D model from a single 2D image. Huttenlocher and Ullman [13] use a 3D model that contains the locations of edges. The edges/contours identified in the 2D image are matched against the edges in the 3D model to calculate the pose of the object. The method has been implemented for simple 3D objects. However, it is unclear if this method will work well on objects with rounded surfaces without clearly identifiable edges.

Image gradients. Gray-scale image gradients have been used to estimate the 3D pose in traffic video footage from a stationary camera by Kollnig and Nagel [16]. The method compares image gradients instead of simple edge segments, for better performance. Image gradients from projected polyhedral models are compared against image gradients in video images. The pose is formulated using 3 degrees of freedom: 2 for position and 1 for angular orientation. Tan and Baker [37] use image gradients and a Hough transform based algorithm for estimating vehicle pose in traffic scenes, once more describing the pose via 3 degrees of freedom. Pose estimation using 3 degrees of freedom is adequate for traffic image sequences, where the camera position remains fixed with respect to the ground plane. However, this approach does not provide the full pose estimate required for a general purpose application.

Implicit shape models. Recent work by Arie-Nachimson and Basri [1] makes use of ‘implicit shape models’ to recognize 3D objects from 2D images. The model consists of a set of learned features, their 3D locations and the views in which they are visible. The learning process is further refined using factorization methods. Pose estimation consists of evaluating the transformations of the features that give the best match. A typical model requires around 65 images to train. Many different vehicle models exist and new ones are manufactured frequently. Hence, methods that require training on vehicle models are too laborious and time consuming for our work.

Feature-based methods [6, 25] attempt to simultaneously solve the pose and point correspondence problems. The success of these methods is affected by the quality of the features extracted from the object. Objects like vehicles have large homogeneous regions which yield very sparse features. Also, the highly reflective surfaces of vehicles generate many false positives. Our method, on the contrary, does not depend on feature extraction.

Distance metrics can be used to represent a distance between two data sets, and hence give a measure of their similarity. Therefore, distance metrics can be used to measure similarity between different 2D images, as well as between 2D images and 2D projections of a 3D model. A basic distance metric would be the Euclidean distance or 2-norm ‖x − y‖₂. However, this has the disadvantage of being dependent on the scale of measurement. We use the Mahalanobis distance [20] in our work, which is a scale-invariant distance measure. It is used by Xing et al. [41] for clustering, and by Deriche and Faugeras [7] to match line segments in a sequence of time-varying images.

3 Matching 3D Models and 2D Photos using the Invariant Loss

We describe our approach of matching 3D models to 2D photos in this section using a novel illumination-invariant loss function. A detailed derivation of the loss is provided in Section 4.

The problem. Assume we want to match a 3D model M to a 2D photo F, or vice versa. More precisely, we have a 3D model (e.g. as a triangulated textured surface) and we want to find the pose parameters θ of a projection for which the rendered 2D image M(θ) has the same perspective as the 2D photo F. As long as we do not know the lighting conditions under which F was taken, we cannot expect M(θ) to be close to F, even for the correct θ. Indeed, if the light in F came from the right, but the light shines on M from the left, M(θ) may be close to the negative of F.

Setup. Formally, let P be the set of (integer) pixel coordinates, and p ∈ P be a pixel coordinate. Let y_p ∈ R^n be a photo with real pixel attributes, and x_p(θ) ∈ R^m be a projection of a 3D object using pose parameters θ to a 2D image with real pixel attributes. Possible attributes include colours, local texture features or surface normals. In the following we consider the case of gray-level photos (n = 1), and for reasons that will become clear, use surface normals and brightness of the (projected) 3D model (m = 4).

Lambertian reflection model. A simple Lambertian reflection model [8] is not realistic enough to result in a zero loss on real photos, even at the correct pose. Nevertheless, we believe, and experimentally confirm, that it results in a minimum at the correct pose, which is sufficient for matching purposes.

We use Phong shading without specular reflection for this purpose [8]. Let i_a and i_d be the global ambient and diffuse light intensities of the 3D scene. Let L̃ be the (global) unit vector pointing towards the light source (or their weighted sum in case of multiple sources). For reasons to become clear later, we introduce an extra illumination offset i_o (which is 0 in the Phong model). For each surface point p, let k_ap and k_dp be the ambient and diffuse reflection constants (intrinsic surface brightness) and Ñ_p be the unit (interpolated) surface “normal” vector. Then the apparent intensity I_p of the corresponding point in the projection is

    I_p = i_a k_ap + i_d k_dp (L̃ · Ñ_p) + i_o = w · x_p + i_o    (1)

The last expression is the same as the first, just written in a more convenient form: x_p := (k_ap, k_dp Ñ_p) ∈ R⁴ are the known surface (θ-dependent) parameters, and w := (i_a, i_d L̃) ∈ R⁴ are the four (unknown) global illumination constants, and i_o the offset. Since I_p is linear in w and i_o, any rendering I is a simple global linear function of x. This model remains exact even for multiple light sources and can easily be generalized to color models and color photos.
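The linearity of (1) is easy to mirror in code. The following is a minimal sketch (ours, not the paper's implementation; all names are hypothetical) that builds the w-independent pixel attributes x_p = (k_ap, k_dp Ñ_p) and shows that any rendering is just a global linear function of them:

```python
import numpy as np

def pixel_features(ka, kd, normals):
    """Per-pixel attributes x_p := (k_a, k_d * N_p) in R^4, cf. eq. (1).

    ka, kd: (N,) ambient/diffuse reflection constants from the 3D model;
    normals: (N, 3) unit surface normals of the projected model.
    """
    return np.column_stack([ka, kd[:, None] * normals])  # shape (N, 4)

def render_intensity(x, w, i_o):
    """Apparent intensities I_p = w . x_p + i_o with w = (i_a, i_d * L).

    Changing the (unknown) lighting (w, i_o) only applies a global linear
    map to the fixed attributes x -- the fact the invariant loss exploits.
    """
    return x @ w + i_o
```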

Illumination invariant loss. We measure the closeness of the projected 3D model M(θ) to the 2D photo F by some distance D(F, M(θ)), e.g. square or absolute or Mahalanobis. We do not want to assume any extra knowledge like the lighting conditions (w, i_o) under which the photo has been taken, which rules out a direct use of the rendered intensities (1). Ideally we want a “distance” between F and M(θ) that is independent of the lighting and is zero if and only if there exists a lighting condition under which F and M(θ) coincide.

Indeed, this is possible if (rather than defining x_p as some w-dependent rendered projection of M) we use the w-independent brightness and normals x_p = (k_ap, k_dp Ñ_p) as pixel features as defined above, and define a linearly invariant distance as follows. Let

    ȳ := E[y] = (1/|P|) Σ_{p∈P} y_p   and   x̄ := E[x]    (2)

be the average attribute values of photo and projection, and

    C_xy := E[(x − x̄)(y − ȳ)^T]    (3)

be the cross-covariance matrix between x and y, and similarly C_xx and C_yy the covariance matrices of x and y. Consider the following distance or loss function between F and M(θ), which is obtained from (21) derived in Section 4, when y_p = F_p, x_p = (k_ap, k_dp Ñ_p), and n = 1:

    D(F, M(θ)) := 1 − tr[C_xx⁻¹ C_xy C_yy⁻¹ C_yx]    (4)

Obviously this expression is independent of the lighting (w, i_o). In the next section we show that it is invariant under regular linear transformations of the pixel/attribute values of F and M(θ), and zero if and only if there is a perfect linear transformation of the pixel values from x to y. This makes it unnecessary to know the exact surface reflection constants of the object (k_ap and k_dp). We will actually derive

    D(F, M(θ)) = min_{w, i_o} E[(F_p − w · x_p − i_o)²] / Var[F]    (5)

This implies that D is zero if and only if there is a lighting under which F and M(θ) coincide, as desired.
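For concreteness, here is a minimal numpy sketch of the loss (4)/(21) under our reading of the derivation; `invariant_loss` is our name, and the small regularisation term is our addition for numerical stability:

```python
import numpy as np

def invariant_loss(x, y, eps=1e-9):
    """D = 1 - tr(Cxx^-1 Cxy Cyy^-1 Cyx) / min(m, n), cf. eqs. (2)-(4), (21).

    x: (m, N) projected-model attributes (the features x_p, stacked columns);
    y: (n, N) photo attributes (n = 1 for gray-level photos).
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    N = x.shape[1]
    xc = x - x.mean(axis=1, keepdims=True)        # centring, eq. (2)
    yc = y - y.mean(axis=1, keepdims=True)
    Cxx, Cyy, Cxy = xc @ xc.T / N, yc @ yc.T / N, xc @ yc.T / N   # eq. (3)
    m, n = x.shape[0], y.shape[0]
    tr = np.trace(np.linalg.solve(Cxx + eps * np.eye(m), Cxy) @
                  np.linalg.solve(Cyy + eps * np.eye(n), Cxy.T))
    return 1.0 - tr / min(m, n)
```

By construction the value does not change if y is replaced by a·y + b (a ≠ 0), i.e. the measure is blind to global lighting changes of the photo.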

4 Derivation of the Invariant Loss Function

A detailed derivation of the loss function is given in this section.

Notation. Using the notation of the previous section, we measure the similarity of photo y and projected 3D model x (returning to general n and m) by some loss

    Loss(x, y) := (1/|P|) Σ_{p∈P} d(x_p, y_p)    (6)

where d is a distance measure between corresponding pixels of the two images, to be determined below. A very simple, but as discussed in Section 2 for our purpose unsuitable, choice in case of n = m would be the square loss d(x_p, y_p) = ‖x_p − y_p‖².

It is convenient to introduce the following probability notation: let the pixel p be uniformly distributed¹ in P, i.e. Prob(p) = 1/|P|. Define the vector random variables x := x_p and y := y_p. The expectation of a function g of x and y then is

    E[g(x, y)] = (1/|P|) Σ_{p∈P} g(x_p, y_p)    (7)

With this notation, (6) can be written as

    Loss(x, y) = E[d(x, y)]    (8)

¹With a non-uniform distribution one can easily weight different pixels differently.

Noisy (un)known relation. Let us now assume that there is some (noisy) relation f between (the pixels of) the model and the photo, i.e. between x and y:

    y ≈ f(x)    (9)

If f is known and the noise is Gaussian, then

    d(x, y) = ‖y − f(x)‖²    (10)

is an appropriate distance measure for many purposes. In case x and y are from the same source (same pixel attributes, lighting conditions, etc.), choosing f as the identity function results in a standard square loss. In many practical applications, f is not the identity and is furthermore unknown (e.g. mapping gray models to real color photos of unknown lighting condition). Let us assume f belongs to some set of functions F. F could be the set of all functions, or just contain the identity, or anything in between these two extremes. Then the “true/best” f may be estimated by minimizing the loss and substituting the minimizer into (10):

    f̂ := argmin_{f∈F} E‖y − f(x)‖²    (11)

    Loss(x, y) := min_{f∈F} E‖y − f(x)‖²    (12)

Given x and y, Loss(x, y) can in principle be computed and measures the similarity between x and y for unknown f. Furthermore, Loss is invariant under any transformation g of x for which F ∘ g = F.

Linear relation. In the following we will consider the set of linear relations

    F_lin := {f : f(x) = Wx + b, W ∈ R^{n×m}, b ∈ R^n}    (13)

For instance, a linear model is appropriate for mapping color to gray images (same lighting), or positives to negatives. For linear f, the loss (12) becomes

    Loss(x, y) = min_{W,b} E‖y − Wx − b‖²    (14)

and this distance is invariant under all regular linear re-parametrizations of x, i.e. Loss(Ax + c, y) = Loss(x, y) for all vectors c and all non-singular matrices A. Unfortunately, Loss is not symmetric in x and y, and in particular not invariant under linear transformations of y. Assume that the components of y are very different from each other (y₁ = color, y₂ = angle, y₃ = texture). Then the 2-norm ‖y‖² = y₁² + y₂² + y₃² does not take these differences into account. A standard solution is to normalize by the variance, i.e. use ‖y‖²_σ := Σᵢ yᵢ²/σᵢ², where σᵢ² := Var[yᵢ], but this norm is (only) invariant under component scaling.

Linearly invariant distance. To get invariance under general linear transformations, we have to scale by the covariance matrix

    C_yy := Cov(y) = E[(y − ȳ)(y − ȳ)^T]    (15)

The Mahalanobis norm (cf. Section 2)

    ‖y‖²_{C_yy⁻¹} := y^T C_yy⁻¹ y    (16)

is invariant under linear homogeneous transformations y → By, as can be seen from

    ‖By‖²_{Cov(By)⁻¹} = y^T B^T (B C_yy B^T)⁻¹ B y = y^T C_yy⁻¹ y    (17)

where we have used Cov(By) = B Cov(y) B^T.
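The invariance (17) can be checked numerically. This small self-contained test (our code, not from the paper) draws random attribute vectors and verifies that the Mahalanobis norm is unchanged under a random non-singular map B:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(3, 1000))            # samples of a 3-vector attribute
B = rng.normal(size=(3, 3))               # (almost surely) non-singular map

Cyy = np.cov(y)                           # covariance of y, eq. (15)
CBy = np.cov(B @ y)                       # equals B Cyy B^T exactly

v = y[:, 0]                               # one attribute vector
d1 = v @ np.linalg.solve(Cyy, v)          # ||v||^2 under Cyy^-1, eq. (16)
d2 = (B @ v) @ np.linalg.solve(CBy, B @ v)
print(np.isclose(d1, d2))                 # True, as eq. (17) asserts
```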

The following distance is hence invariant under any non-singular linear transformation of x and any non-singular (incl. non-homogeneous) linear transformation of y:

    D(x, y) := min_{W,b} E[ ‖y − Wx − b‖²_{C_yy⁻¹} ]    (18)

Explicit expression. Since (18) is quadratic in W and b, the minimization can be performed explicitly, yielding after some linear algebra

    W = C_yx C_xx⁻¹   and   b = ȳ − W x̄    (19)

where C_yx := E[(y − ȳ)(x − x̄)^T] and similarly C_xx and C_xy. Inserting (19) back into (18) and rearranging terms gives

    D(x, y) = n − tr[C_xx⁻¹ C_xy C_yy⁻¹ C_yx]    (20)

This explicit expression shows that D is symmetric in x and y if not for the n term. For comparisons, e.g. for minimizing D w.r.t. θ, the constant n does not matter. Since the trace can assume all and only values in the interval [0, min(m, n)], it is natural to symmetrize D to obtain

    D(x, y) := 1 − tr[C_xx⁻¹ C_xy C_yy⁻¹ C_yx] / min(m, n)    (21)

Returning to the original notation, this expression coincides with the loss (4). It is hard to visualize this loss even in low dimensions, but the special case n = m = 1 is instructive, for which the expression reduces to D = 1 − ρ², where ρ := C_xy/√(C_xx C_yy) is the correlation between x and y. The larger the (positive or negative) correlation, the more similar the images and the smaller the loss. For instance, a photo has maximal correlation with its negative.
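The n = m = 1 case is easy to verify numerically; the toy check below (ours) confirms that a signal and its negative have correlation −1 and hence zero loss:

```python
import numpy as np

rng = np.random.default_rng(1)
photo = rng.random(10_000)               # gray-level pixel values
negative = 1.0 - photo                   # the photo's negative

rho = np.corrcoef(photo, negative)[0, 1]
print(rho, 1.0 - rho ** 2)               # approx. -1.0 and 0.0: zero loss
```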

5 Practical Behaviour of the Loss Function

In this section, we explore the nature of the loss function derived in Section 4 for real and synthetic photographs.

Representation of the pose. The pose of a generic object may be represented by translations along the X, Y and Z axes and a suitable rotation representation. A quaternion or exponential map [17] based rotation can be used to avoid gimbal-lock problems that may occur with Euler angle or roll/yaw/pitch based rotations. Careful selection of pose parameters can aid the optimizer when finding the best pose. Since we work with vehicles, the following pose representation was used, temporarily neglecting the effects of perspective projection. It is consistent with the rough pose estimation method described in [14].

    θ := (c_x, c_y, w_x, w_y, u_x, u_y)    (22)

c = (c_x, c_y) is the visible rear wheel center of the car in the 2D projection. w = (w_x, w_y) is the vector between corresponding rear and front wheel centers of the car in the 2D projection. The 2D image is a projection of the 3D model onto the XY plane. u is a unit vector in the direction of the rear wheel axle of the 3D car model; its component u_z follows from ‖u‖ = 1 and therefore, like the depth along Z, need not be explicitly included in the pose representation θ. This representation is illustrated in Figure 2. This pose is converted to OpenGL translation, scale and rotation as per [14] to transform and project the 3D model. As we directly optimize (4) w.r.t. θ (Section 6), explicit knowledge of intrinsic camera parameters etc. is not required.

Figure 2: The pose representation used for 3D car models. We use the rear wheel center c, the vector w between the wheel centers and the unit vector u in the direction of the rear wheel axle.

Perspective projection. The pose estimation was extended to handle perspective projection as follows. The 3D model was rendered using the OpenGL perspective projection model. The degree of perspective distortion was changed by varying the near-plane distance z_n (Figure 3(a)) of the OpenGL frustum, and z_n was included as a pose parameter during the optimization. The 3D model is sometimes clipped by the projection plane when positioned too close to it. We avoid this by shifting and scaling the 3D model by a constant factor (Figure 3(b)), thus obtaining the same projected image without clipping.

(a) Perspective projection model
(b) Handling object clipping
Figure 3: Rendering with perspective projection. 3(a) shows the perspective projection model used. 3(b) illustrates clipping when the 3D model is located too close to the projection plane and how this is prevented.

Thus the parallel projection based pose in (22) becomes

    θ := (c_x, c_y, w_x, w_y, u_x, u_y, z_n)    (23)

Loss landscape for synthetic photographs. To understand the behavior of the loss function, we have generated loss landscapes for synthetic images of 3D models. To produce these landscapes, a synthetic photograph was generated by projecting the 3D model at a known pose θ₀ with Phong shading. We then vary the pose parameters, two at a time, about θ₀ and find the value of the loss function between this altered projection and the “photograph” taken at θ₀. These loss values are recorded, allowing us to visualize the behavior of the loss function by observing surface and contour plots of these values. The unaltered pose values should project an image identical to the input photograph, giving a loss of zero according to the loss function derived in Section 4, with a higher loss exhibited at other poses. The variation of the loss with respect to a pair of pose parameters is shown in Figure 4(a). It can be seen from these loss landscapes that the loss has a clear minimum at the initial pose θ₀, and that the loss values increase as the pose parameters deviate from θ₀ throughout the examined range. From this data, we are able to see that the minimum corresponding to θ₀ can be considered a global minimum for all practical purposes. The shape of the surface plots was similar for all other parameter pairs, indicating that the complete landscape of the loss function should similarly have a global minimum at the initial pose, allowing us to find this point using standard optimization techniques, as demonstrated in Section 6.
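A landscape such as Figure 4 can be produced with a simple double loop. In this sketch (our code), `render` is a hypothetical helper returning the (m, N) projected-model attributes at a pose, and `invariant_loss` is as sketched in Section 3:

```python
import numpy as np

def loss_landscape(render, photo_attrs, theta0, i, j, span=0.2, steps=21):
    """Loss over a grid of perturbations of pose parameters i and j.

    render(theta) -> (m, N) model attributes at pose theta (hypothetical);
    photo_attrs: (n, N) photo attributes; theta0: the reference pose.
    """
    deltas = np.linspace(-span, span, steps)
    L = np.empty((steps, steps))
    for r, da in enumerate(deltas):
        for c, db in enumerate(deltas):
            theta = np.array(theta0, dtype=float)
            theta[i] += da                      # perturb parameter i
            theta[j] += db                      # perturb parameter j
            L[r, c] = invariant_loss(render(theta), photo_attrs)
    return L   # inspect with a surface or contour plot
```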

Loss landscape for real photographs. The landscape of the loss function was analyzed for real photographs by varying the pose parameters of the model about a pose obtained by manually matching the 3D car model to a real photograph. The variation was plotted by taking a pair of pose parameters at a time over the entire set of pose parameters. A loss landscape obtained by varying one such pair for a real photograph is shown in Figure 4(b). The variation of the loss function for the other pose parameter pairs was found to be similar. Although a global minimum exists at the best pose of the real photograph, the nature of the loss function surface makes it more difficult to optimize when compared to synthetic photos (Figure 4(a)). In particular, one can observe local minima, and the landscape in higher dimensions is considerably more complex.

(a) Synthetic photo
(b) Real photo
Figure 4: Loss landscapes for synthetic and real photos. The six dimensional loss function was visualized by plotting its variation with a pair of pose parameters at a time. Based on our pose representation this results in fifteen plots. The variation of the loss function with one pair of pose parameters is shown for a synthetic photograph and a real photograph. The nature of the loss function for real photographs makes it more difficult to find the global minimum (hence the correct pose) than for synthetic photographs.

6 Optimizing the Loss Function for Pose Estimation

As explained in Section 4, the correct pose parameters θ* give the lowest loss value. The loss function landscape, as discussed in Section 5, shows that θ* corresponds to the global minimum of the loss function. Therefore, the loss function (4) was minimised w.r.t. the pose θ to obtain θ*. The optimization strategy is described in this section.

The optimizer. To immunise the optimisation against pixel quantisation artefacts and noise in the images, direct search methods that do not calculate the derivative of the loss function were considered. The optimization was performed using the well known Downhill Simplex method (DS) [27, 33, 22], owing to its efficiency and robustness. When optimizing an n-dimensional function with the DS method, a so-called simplex consisting of n + 1 points is used to traverse the n-dimensional search space and find the optimum.

The reliability of the optimization is adversely affected by the existence of local minima. Fortunately, the Downhill Simplex method has a useful property. In most cases, if the simplex is reinitialized at the pose parameters of the local minimum and the optimization is performed again, the solution converges to the global minimum. Proper parameterization is important for the optimizer to give good results. We have used a normalized pose parameterization as follows.
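A reinitialisation loop of this kind might look as follows with SciPy's Nelder-Mead implementation (a sketch under our assumptions; the restart count, tolerances and function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def optimise_pose(loss, theta0, max_restarts=5, tol=1e-8):
    """Downhill Simplex with reinitialisation at each found minimum.

    loss: maps a (normalised) pose vector to the loss value (4);
    theta0: initial rough pose, e.g. from the wheel-match method [14].
    """
    theta, best = np.asarray(theta0, dtype=float), np.inf
    for _ in range(max_restarts):
        res = minimize(loss, theta, method='Nelder-Mead',
                       options={'xatol': 1e-6, 'fatol': 1e-8})
        theta = res.x
        if best - res.fun < tol:          # no real improvement: stop
            break
        best = res.fun                    # else restart the simplex here
    return theta, best
```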

Normalised pose parameters. Normalization gives each pose parameter a comparable range during optimization. The normalized pose θ̂ was obtained by normalizing the pose w.r.t. the dimensions of the photograph as follows:

    θ̂ := (c_x/W, c_y/H, w_x/W, w_y/H, u_x, u_y, z_n)    (24)

W and H are the width and height of the photograph (2D image). u is a unit vector and does not require normalization.

Initialisation. The Downhill Simplex method, like all optimization techniques, requires a reasonable starting position. There are many methods for selecting a starting point, from repeated random initialization to structured partitioning of the optimization volume. A disadvantage of these methods is that they require a number of optimization runs to locate the optimal point, which can take significant time. Depending on the application, it may be possible to develop a coarse location method which provides an estimate of the initial pose. Possible methods for obtaining a coarse initial pose include the work done in [29], [36] and [1]. We have used the wheel-match method described in [14] to obtain an initial pose for vehicle photos where the wheels are visible. The wheels need not be visible with the other methods mentioned above. Since the wheel-match method gives the pose for an orthogonal projection, the perspective parameter z_n was initialized to a large value so as to get negligible perspective distortion in the initial rough pose used for the optimization.

Background removal. As background clutter in the photo adds considerable noise to the loss function landscape, we use an adaptation of grabcut [34] to remove a considerable number of the background pixels from the photo. The initial rough pose estimate is used as a prior to generate the background and foreground grabcut masks.² The masks are obtained by scaling the model projection obtained from the initial pose by a margin.

²We use the cv::grabCut() method provided in OpenCV [2], version 2.1.
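A rough reconstruction of this step with OpenCV's Python binding might look as follows (our sketch: the erode/dilate margin is our interpretation of “scaling the projection by a margin”, and all names are ours):

```python
import cv2
import numpy as np

def remove_background(photo_bgr, silhouette, margin=10):
    """Grabcut-based background removal seeded by the initial-pose silhouette.

    photo_bgr: (H, W, 3) uint8 photo; silhouette: (H, W) bool mask of the
    3D model projected at the initial rough pose; margin: band width (px).
    """
    kernel = np.ones((margin, margin), np.uint8)
    sil = silhouette.astype(np.uint8)
    sure_fg = cv2.erode(sil, kernel)            # well inside the projection
    maybe_fg = cv2.dilate(sil, kernel)          # projection plus a margin

    mask = np.full(sil.shape, cv2.GC_BGD, np.uint8)
    mask[maybe_fg > 0] = cv2.GC_PR_FGD
    mask[sure_fg > 0] = cv2.GC_FGD

    bgd = np.zeros((1, 65), np.float64)         # grabcut's internal models
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(photo_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    keep = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return photo_bgr * keep[..., None]          # background pixels zeroed
```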

7 Experimental Results

Experimental results on real photos of different vehicle types and colours (using corresponding 3D Models) are shown in Figures 1 and 6. The photos have realistic conditions like cast shadows and surface specularities. We have used a laser scanned 3D CAD model of a Mazda Astina (with more than 2 million polygons), a Mazda 3 3D model, a Jeep Cherokee 3D model and a Hyundai Getz 3D model (Figure 5). The latter models were obtained from the Internet and have less than 500,000 polygons. The optimization was done using perspective projection. A perfect 3D pose is recovered with the scanned 3D model (Figure 1). The 3D models obtained from the Internet do not match the proportions and details of the real vehicles exactly. However, we see good results even when the 3D models do not perfectly represent the object in the scene (Figure 6).

(a) Hyundai Getz
(b) Jeep Cherokee
(c) Mazda 3
(d) Mazda Astina (scanned)
Figure 5: Some of the 3D CAD models used for the experiments are shown. 5(d) is a laser scanned 3D model of a real Mazda Astina car and matches the proportions and detail of the real car in Figure 1. Additionally it has a very high number of polygons.
(a) Initial
(b) Final
(c) Initial
(d) Final
(e) Initial
(f) Final
(g) Initial
(h) Final
Figure 6: Experimental results. The ‘Initial rough pose’ (column 1) used to initialize the optimization and the resulting ‘Final pose’ (column 2) obtained by optimizing the novel loss function are shown for different real photos (row-wise). The pose is shown in yellow by an outline of the projected 3D model. Unlike the scanned 3D model in Figure 1, these 3D models do not perfectly match the proportions and detail of the real vehicle in the photo. However, the proposed method produces good results even with approximate 3D models. The images have been cropped for visual clarity.

8 Implementation Details

In this section we describe some of the technical aspects of the proposed work. The initial code was implemented in MATLAB [22]; however, components were gradually ported to C in order to improve performance.

3D rendering. In order to calculate the loss values described in Section 4, it was necessary to render the surface normals and brightness of a 3D model at a given pose. Initially, the rendering was done using model3D [24], a BSD licensed MATLAB [22] class. As this rendering was not fast enough for our application, a separate module was written in C to render the model off-screen using the OpenGL [28] pbuffer extension and GLX. This C module was used from the MATLAB code via the MEX gateway. Initially, only the rendering was done in C; the rendered 2D intensity and surface normal matrices were returned to MATLAB through the MEX gateway. This seemed to exhaust memory during the reliability tests described in Section 6. Therefore, the rendering and the loss calculation were both implemented in C, with only the loss value returned to MATLAB for use in optimization.

Approach  | Loss calc. | Render
MATLAB    | 0.16 s     | 2.28 s
C/OpenGL  | 0.04 s     | 0.17 s
Table 1: Rendering and loss calculation times.

This second approach improved performance in terms of both speed and memory usage. A summary of the time taken to render the image and to calculate the loss using these approaches is presented in Table 1.

Running times. A typical Downhill Simplex minimization required on the order of 100–200 loss function evaluations. Using the C based loss calculation and OpenGL rendering, pose estimation in synthetic images took around 1 minute for models with more than 30,000 nodes. Recent work in [25] on pose estimation using point correspondences takes around 200 seconds (over 3 minutes) for a synthetic image of a model with only 80 points. Hence, despite being a pixel based method, the performance of our approach is very encouraging. Further improvements in speed may be obtained by using graphics hardware (GPUs) to compute the loss function.

9 Discussion

A method to register a known 3D model on a given 2D image is presented in this paper. A novel distance measure (Section 4) between attributes in the 2D image and the projected 3D model is optimized to recover the 3D pose of the object in the given image. Pose estimation results on real photos of different vehicles are shown, with the optimization initialized from a rough pose obtained using wheel locations [14]. The method differs from existing 2D-3D registration methods found in the literature. The proposed method requires only a single view of the object: it does not require a motion sequence and works on a static image from a given view. Also, the method does not require the camera parameters to be known a priori. Explicit point correspondences or matched features (which are hard to obtain when comparing 3D models and image modalities) need not be known beforehand. The method can recover the full 3D pose of an object. It does not require prior training or learning. As the method can handle 3D models of high complexity and detail, it could be used for applications that require detailed analysis of 2D images. It is particularly useful in situations where a known 3D model is used as a ground truth for analyzing a 2D photograph. The method has currently been tested on real and synthetic photographs of cars, with promising results.

Outlook. A planned application of the method is to analyze images of damaged cars. A known 3D model of the damaged car will be registered on the image to be analyzed, using the proposed registration method. This will be used as a ground truth. The method could be extended further to simultaneously identify the type of the car while estimating its pose, by optimizing the loss function for a number of 3D models and selecting the model with the lowest loss value. More sophisticated optimization methods may be used to improve results further.

Conclusion. We conclude from our results that the linearly invariant loss function derived in Section 4 can be used to estimate the pose of cars from real photographs. We also demonstrate that the Downhill Simplex method can be effectively used to optimize the loss function in order to obtain the correct pose. Allowing simplex re-initializations makes the method more robust against local minima. Despite being a direct pixel based method (as opposed to a feature/point based method), the performance of our method is very encouraging in comparison with other recent approaches, as discussed in Section 8.

Acknowledgment. This work was supported by ControlExpert. The authors wish to thank Stephen Gould and Hongdong Li for their valuable feedback and advice.

References

  • [1] Arie-Nachimson, M., Basri, R.: Constructing implicit 3d shape models for pose estimation. In: ICCV (2009). URL http://www.wisdom.weizmann.ac.il/~vision/ism3D
  • [2] Bradski, G.: The OpenCV Library. Dr. Dobb’s Journal of Software Tools (2000)
  • [3] Brooks, R.A., Binford, T.O.: Geometric modelling in vision for manufacturing. In: Proceedings of the Society of Photo-Optical Instrumentation Engineers Conference on Robot Vision, vol. 281, pp. 141–159. Washington, DC, USA (1981)
  • [4] Chia, A., Leung, M., Eng, H.L., Rahardja, S.: Ellipse detection with hough transform in one dimensional parametric space. IEEE International Conf. on Image Processing (ICIP’07) 5, 333–336 (2007). DOI 10.1109/ICIP.2007.4379833
  • [5] Chin, R.T., Dyer, C.R.: Model-based recognition in robot vision. ACM Comput. Surv. 18(1), 67–108 (1986). DOI http://doi.acm.org/10.1145/6462.6464
  • [6] David, P., DeMenthon, D., Duraiswami, R., Samet, H.: SoftPOSIT: Simultaneous pose and correspondence determination. International Journal of Computer Vision 59(3), 259–284 (2004)
  • [7] Deriche, R., Faugeras, O.: Tracking line segments. Image and Vision Computing 8(4), 261–270 (1990)
  • [8] Foley, J.: Computer graphics: principles and practice. Addison-Wesley Professional (1996)
  • [9] Forsyth, D.A., Ponce, J.: Computer Vision: A Modern Approach. Prentice Hall Professional Technical Reference (2002)
  • [10] Forsyth, D.A., Ponce, J.: Computer Vision: A Modern Approach. Prentice Hall (2002)
  • [11] Gleason, G.J., Agin, G.J.: A modular system for sensor-controlled manipulation and inspection. In: Proceedings of the 9th International Symposium on Industrial Robots, pp. 57–70. Society of Manufacturing Engineers, Washington, DC, USA (1979)
  • [12] Horn, B.: Obtaining shape from shading information. In: The Psychology of Computer Vision, pp. 115–155 (1975)
  • [13] Huttenlocher, D., Ullman, S.: Recognizing solid objects by alignment with an image. International Journal of Computer Vision 5(2), 195–212 (1990). URL http://www.springerlink.com/index/X204134510478U64.pdf
  • [14] Hutter, M., Brewer, N.: Matching 2-D ellipses to 3-D circles with application to vehicle pose identification. In: Proc. 24th International Conference on Image and Vision Computing New Zealand (IVCNZ’09), pp. 153–158 (2009)
  • [15] Kanatani, K., Ohta, N.: Automatic detection of circular objects by ellipse growing. International Journal of Image and Graphics 4(1) (2004)
  • [16] Kollnig, H., Nagel, H.H.: 3d pose estimation by directly matching polyhedral models to gray value gradients. Int. J. Comput. Vision 23(3), 283–302 (1997). DOI http://dx.doi.org/10.1023/A:1007927317325
  • [17] Lepetit, V., Fua, P.: Monocular model-based 3d tracking of rigid objects: A survey. In: Foundations and Trends in Computer Graphics and Vision, pp. 1–89 (2005)
  • [18] Linefor: Free 3d models 3d people 3d objects (2008). http://www.linefour.com/
  • [19] Lowe, D.G.: Object recognition from local scale-invariant features. In: Proc. International Conference on Computer Vision (ICCV’99), pp. 1150–1157 (1999)
  • [20] Mahalanobis, P.: On the generalized distance in statistics. In: Proceedings of the National Institute of Science, Calcutta, vol. 12, p. 49 (1936)
  • [21] Mai, F., Hung, Y.S., Zhong, H., Sze, W.F.: A hierarchical approach for fast and robust ellipse extraction. Pattern Recogn. 41(8), 2512–2524 (2008). DOI http://dx.doi.org/10.1016/j.patcog.2008.01.027
  • [22] The MathWorks: MATLAB R2009b. URL http://www.mathworks.com
  • [23] Mclaughlin, R.A.: Randomized hough transform: improved ellipse detection with comparison. Pattern Recogn. Lett. 19(3-4), 299–305 (1998). DOI http://dx.doi.org/10.1016/S0167-8655(98)00010-5
  • [24] Michael, S.: Model3d. URL http://www.mathworks.com/matlabcentral/fileexchange/7940-model3d
  • [25] Moreno-Noguer, F., Lepetit, V., Fua, P.: Pose priors for simultaneously solving alignment and correspondence. Computer Vision–ECCV 2008, pp. 405–418 (2008)
  • [26] Moutarde, F., Stanciulescu, B., Breheret, A.: Real-time visual detection of vehicles and pedestrians with new efficient adaboost features. 2nd Workshop on Planning, Perception and Navigation for Intelligent Vehicles (2008)
  • [27] Nelder, J., Mead, R.: A simplex method for function minimization. The computer journal 7(4), 308 (1965)
  • [28] OpenGL: URL http://www.opengl.org
  • [29] Ozuysal, M., Lepetit, V., Fua, P.: Pose estimation for category specific multiview object localization. In: Conference on Computer Vision and Pattern Recognition. Miami, FL (2009)

  • [30] Perkins, W.A.: A model-based vision system for industrial parts. IEEE Trans. Comput. 27(2), 126–143 (1978). DOI http://dx.doi.org/10.1109/TC.1978.1675046
  • [31] Pettersson, N., Petersson, L., Andersson, L.: The histogram feature – a resource-efficient weak classifier. In: IEEE Intelligent Vehicles Symposium (IV2008) (2008)
  • [32] Poje, J.F., Delp, E.J.: A review of techniques for obtaining depth information with applications to machine vision. Tech. rep., Center for Robotics and Integrated Manufacturing, Univ. of Michigan, Ann Arbor (1982)
  • [33] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical recipes in C (3rd ed.): the art of scientific computing. Cambridge University Press, New York, NY, USA (2007)
  • [34] Rother, C., Kolmogorov, V., Blake, A.: Grabcut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG) 23(3), 309–314 (2004)
  • [35] Safaee-Rad, R., Smith, K., Benhabib, B., Tchoukanov, I.: Application of moment and Fourier descriptors to the accurate estimation of elliptical shape parameters. In: Proc. International Conf. on Acoustics, Speech, and Signal Processing (ICASSP’91), vol. 4, pp. 2465–2468 (1991). DOI 10.1109/ICASSP.1991.150900
  • [36] Sun, M., Xu, B.X., Bradski, G., Savarese, S.: Depth-encoded hough voting for joint object detection and shape recovery. In: ECCV. Crete, Greece (2010). URL http://www.ics.forth.gr/eccv2010/program.php
  • [37] Tan, T., Baker, K.: Efficient image gradient based vehicle localization. IEEE Transactions on Image Processing 9(8), 1343–1356 (2000). URL http://nlpr-web.ia.ac.cn/english/irds/papers/tnt/Efficient%20Image%20Gradient-Based%20Vehicle%20Localisation.pdf
  • [38] Tsuji, S., Matsumoto, F.: Detection of ellipses by a modified hough transformation. IEEE Trans. Comput. 27(8), 777–781 (1978). DOI http://dx.doi.org/10.1109/TC.1978.1675191
  • [39] Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vision 57(2), 137–154 (2004). DOI http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb
  • [40] Woodham, R.: Photometric stereo: A reflectance map technique for determining surface orientation from image intensity. In: Proc. SPIE, vol. 155, pp. 136–143 (1978)
  • [41] Xing, E., Ng, A., Jordan, M., Russell, S.: Distance metric learning with application to clustering with side-information. Advances in neural information processing systems pp. 521–528 (2003)
  • [42] Yachida, M., Tsuji, S.: A versatile machine vision system for complex industrial parts. IEEE Trans. Comput. 26(9), 882–894 (1977). DOI http://dx.doi.org/10.1109/TC.1977.1674936