From 2D to 3D Geodesic-based Garment Matching

09/21/2018 · by Meysam Madadi, et al.

A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured by using the texture of the source garment. We divide the problem into garment boundary matching based on Gaussian mixture models and then interpolate inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. We evaluated and compared our system quantitatively by mean square error (MSE) and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset.


1 Introduction

As shopping for garments is increasingly moving to a digital domain, the next step after just seeing the desired clothes is to virtually try them on. The focus of this paper is an application for garment retexturing where the images of the person are captured with a Kinect-2 RGB-D camera. There are several steps between taking an RGB-D picture and displaying the final result with a retextured garment. These steps involve segmentation of the garment, garment matching and surface retexturing. The novelty of this paper lies in the retexturing part, which involves several challenges. First, a coordinate map must be created between the image of the new texture and the image that is being retextured. This problem is especially difficult in the case of non-rigid and easily transformable surfaces like clothes. Another challenge is to shade the new texture correctly. It is possible to use the colour information of the original image, but the lighting, intensity and the original colour of the surface are usually not previously known and must be estimated.

There exist several standard methods for projecting textured surfaces on screen. The simplest shading methods use surface normals independently, without considering the overall surface, to estimate the brightness of the surface given known viewer and light source directions. Examples of this kind of method are Gouraud, Phong, and Blinn-Phong shading [30]. However, these methods do not support shadows.

Figure 1: Overview of the proposed retexturing method.

After the segmentation stage, the proposed automatic retexturing method uses a point set registration method [21] to find correspondences between the outer 2D contours of the person and the target garment. After contour matching, the surface topology of the flat 2D garment is approximated using geodesic distance in a global closed-form solution based on thin plate splines (TPS) [6], and the final result is superimposed onto the segmented area.

The rest of the paper is organised as follows: Section 2 discusses related work in the field of object retexturing, Section 3 gives a detailed description of the proposed retexturing method and Section 4 presents and compares the results obtained using two different mapping methods. Finally, Section 5 concludes the paper.

2 Literature review

Since an actual try-on of clothes is time-consuming, a virtual alternative has always been desired, and many researchers have been engaged in developing novel strategies and systems to perform such a task [32, 47, 17, 8, 29]. The task requires scanning, classification of the body based on gender and size, 3D modelling [44, 16, 12] and visualisation. Constrained texture mapping and parametrisation of triangular meshes are popular examples, although they suffer from deficiencies such as finding the parameter values and requiring manual adjustments [26, 24]. Many researchers have also suggested methodologies for visually fitting garments onto the human body based on dense point clouds [18, 3].

The matching stage can be defined as a correspondence problem that incorporates pair-wise constraints. Hence, it is often solved with a graph matching approach [19, 46, 11], which is especially suitable for deformable object matching. Furthermore, additional constraints can be added to the framework in order to reduce the computation time (e.g., each garment type is constrained to the body part on which it is worn) or to take problem-specific aspects into account.

There exist various techniques for conducting a mapping from 2D image texture space to a 3D surface. Some examples are intermediate 3D shape [5], direct drawing onto the object [15], or using an exponential fast marching method by applying geodesic distance [37, 34]. Many researchers have devoted special attention trying to enhance the realism of virtual garment representation during the last decade [7]. One of the most frequently used texture fitting methods was proposed by Turquin et al. [35], which allows the users to sketch garment contours directly onto a 2D view of a mannequin. The initial algorithm has been further enhanced by many other researchers [40, 43].

Another popular way of mapping a 2D texture onto a 3D surface is by using a single image [45]. As proposed by [46], an estimation of the 3D pose and shape of the mannequin is followed by constructing an oriented facet for each bone of the mannequin according to the angles of the pose, and projecting the 2D garment outlines into the corresponding facets. Eckstein et al. [10] proposed a constrained texture mapping algorithm, which can be used for 2D and 3D modelling, multi-resolution texture mapping and texture deformation, but it may produce a Steiner vertex effect when a simple solution does not exist. Kraevoy et al. [22] introduced a method based on iterative optimisation of a constrained texture mapping, which requires specifying corresponding constraint points on the mesh model and the texture image. Later, Yanwen et al. [39] reported a constrained texture mapping method based on harmonic mapping with interactive constraint selection by the user; the method achieves high efficiency, real-time optimisation, and adjustment of the mapping results. Block-based constrained texture mapping methods have also been used in order to achieve higher speed and lower computational cost [25].

3 Retexturing approach

In this paper, we propose a new automatic retexturing method covering the stages of segmentation, 2D to 3D garment matching and rendering. We use a Kinect 2 device to capture scene information. As preprocessing, we use the RGB, depth and infrared images of the Kinect and segment the garment out of the background. The segmented depth image is used to compute retexturing from a source 2D flat garment image. We reduce the problem of surface point matching to an interpolation problem by means of garment contour matching. The interpolation process takes surface topology into account using geodesic distance in a global closed-form solution based on thin plate splines (TPS) [6]. To this end, the 2D garment contours are matched beforehand by applying point registration based on Gaussian mixture models [21]. Finally, the resulting mapped source image is sampled, and the segmented area is superimposed with these colours. As a result, realistic rendering is achieved, showing both qualitative and quantitative advantages of thin plate splines with geodesic interpolation over state-of-the-art alternatives. The proposed retexturing method is visualised in Fig. 1.

(a) (b) (c)
Figure 2: Short and long sleeve examples of contour correspondences obtained using point set registration; the red and blue contours are the two matched point sets.

3.1 Segmentation

In order to make accurate measurements in real world units, we standardise the coordinate system of body and garment models according to real world coordinates. Moreover, rich visualisation includes aligned image data (RGB and depth images), so as to provide animations as close as possible to the real scenario [19].

The first step of the proposed retexturing method is segmentation of the garments from the background. It is necessary to extract a set of points from the image corresponding to the area being retextured. The proposed method works under the following assumptions: the area to be retextured is a shirt (or some other initially known garment) worn by a person, and the person is assumed to be standing in front of the camera without occluding the area of interest with his/her hands. The segmentation is done by first extracting the pixels and the skeleton of the body using the Kinect SDK. Skeleton joint locations, along with some artificial joints, are used to initialise the GrabCut algorithm [28] and select areas with the desired joints. In the case of the reference 2D image, the GrabCut algorithm is also applied, initialising the background colour with the pixels on the borders of the image. The output of the GrabCut algorithm is a binary mask, and the set of points comprises the outline of this mask. The initial point density depends on the image resolution and the area occupied by the garment; typically the outline consists of a few thousand points. This simple automatic segmentation approach worked accurately on our dataset. In other, non-controlled scenarios, any other automatic or semi-automatic segmentation approach could be used, such as deep learning based garment segmentation [23] or pose-guided garment boundaries. For cases where segmentation failed, we manually guided GrabCut to obtain the binary mask of the garment.
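Once a binary mask is available, extracting the outline point set is a simple array operation: keep every foreground pixel that has at least one background 4-neighbour. The sketch below (a NumPy illustration, not the authors' exact implementation) shows the idea:

```python
import numpy as np

def mask_outline(mask):
    """Return (row, col) coordinates of foreground pixels that touch
    the background, i.e. the outline of a binary mask."""
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    # a pixel is interior if it and all four 4-neighbours are foreground
    interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                & m[1:-1, :-2] & m[1:-1, 2:])
    boundary = mask.astype(bool) & ~interior
    return np.argwhere(boundary)
```

For a typical garment mask this yields the few-thousand-point outline mentioned above, ordered afterwards by contour tracing.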

Figure 3: Comparing Euclidean to geodesic distance in TPS. TPS finds a mapping between two point sets based on known correspondences. In this figure we consider such a mapping between two 2D example lines whose end points are matched. The marked interior point has equal Euclidean distance to both matched end points, so a Euclidean mapping does not take the line topology into account, causing a wrong interpolation in which points on one side of it become much denser than points on the other. This problem is solved by using geodesic distance in the mapping.

3.2 Outer contour matching

Contour matching can be viewed as a point set registration problem, where a correspondence must be found between a scene and a model. Among the best-known methods for point set registration are iterative closest point [4], robust point matching [31, 14] and coherent point drift [27]. For our purposes, a correspondence must be found between highly deformed shapes. Among the available algorithms, we chose non-rigid point set registration using Gaussian mixture models (GMM) [21] because of its accurate fitting under different conditions and fast execution time. Additionally, Gaussian mixtures provide robust results even if the shapes have different features, such as different necklines, hand positions and folds.

Let us define one contour model as that of the garment worn by a real person and the other as that of the flat garment. The aim is to create a correspondence between these two contour models.

In the GMM point matching algorithm, the point sets are represented as Gaussian mixture models. Instead of assuming a one-to-one correspondence based on the nearest-neighbour criterion, one-to-many relaxations are used to allow fuzzy correspondences, also known as soft assignment. The idea is to assume that each model point corresponds to a weighted sum of the scene points, instead of the closest scene point alone; the weights are proportional to a Gaussian function of the pairwise distances between the moving model and the fixed scene. The method works by drawing a statistical sample from a continuous probability distribution of random point locations. Afterwards, the point set registration problem is viewed as an optimisation problem: a dissimilarity measure between the Gaussian mixtures constructed from the transformed model set and the fixed scene set is minimised, based on the L2 distance between the mixtures [21].
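The soft-assignment idea can be illustrated in a few lines. The following sketch (our own, not the implementation of [21]) computes, for each model point, normalised Gaussian weights over all scene points:

```python
import numpy as np

def soft_correspondence(model, scene, sigma=0.1):
    """For each model point, return normalised Gaussian weights over
    all scene points: the one-to-many 'fuzzy' correspondence."""
    # pairwise squared distances between model and scene points
    d2 = ((model[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)  # each row sums to 1
```

With a small sigma, each row concentrates on the nearest scene point, recovering the hard nearest-neighbour assignment as a limit case.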

Before finding the corresponding points between the shapes, the contour point sets are reduced to 400 points. Afterwards, the x and y coordinates of the point sets are normalised to the range [0, 1]. Essentially, the method provides information about how one contour has to be transformed to match the other. Outer contour matching examples are shown in Fig. 2.
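This preprocessing can be sketched as follows (a minimal version, assuming contours arrive as ordered (x, y) arrays):

```python
import numpy as np

def prepare_contour(points, n=400):
    """Uniformly subsample an ordered contour to n points and
    normalise each coordinate to the range [0, 1]."""
    idx = np.linspace(0, len(points) - 1, n).astype(int)
    p = np.asarray(points, dtype=float)[idx]
    p -= p.min(axis=0)
    p /= p.max(axis=0)  # assumes the contour is not degenerate
    return p
```

Normalising both contours to the unit square before registration removes scale and translation differences between the put-on and flat garments.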

3.3 Inner contour matching

Inner contour matching refers to the process of finding correspondence points between the body surface and the 2D flat garment in order to assign to each body point a colour from the garment. This is a difficult task due to, first, the lack of depth information for the 2D flat garment and the lack of texture for the depth image and, second, the dissimilar textures of the source 2D flat garment and the target put-on garment. Feature-based matching is therefore not applicable, and conformal approaches like [36, 42] fail due to the different topologies of the surfaces.

In order to solve this problem efficiently, we first generate a triangulated 3D mesh based on the depth image of the segmented area. To obtain a smooth shape at the garment boundaries, we apply morphological opening with a disk-shaped structuring element of size 5 to the binary mask. A solution could be obtained by finding an affine deformation matrix for each face triangle to bring the source and target surfaces into alignment according to the matched points of the outer contours. However, such a solution cannot guarantee a perfect matching for near-contour points, due to the different surface topologies and depth camera noise at the contours. Instead, we propose to use thin plate splines (TPS) [6] as a closed-form solution based on a radial basis kernel. Let P be the set of all points belonging to the segmented and discretised body surface. Then, a mapping f from P to the source image is computed through

f(p) = Σ_{i=1}^{n} ω_i φ(d(p, c_i)),   (1)

where the ω_i are coefficients trained from the matched contours, φ is a radial basis kernel, the c_i are the contour points, d is a distance function and n is the number of contour points. This basic formulation uses the Euclidean distance between points, which is not suitable for our problem: the contour points do not cover the whole surface and, moreover, Euclidean distance does not describe the surface topology. Instead, we propose a geodesic-based distance to include the surface topology. We illustrate this idea in Fig. 3.

Since we operate on the discretised body surface, the Dijkstra algorithm could be used to compute the shortest distance from each contour point to each surface point. However, this yields a stairstep-like shortest path which introduces some amount of error in the distance, no matter how much we refine the mesh. Instead, we follow the fast marching algorithm of [9] to compute a fast and accurate approximation of the geodesic distance. Fast marching is closely related to Dijkstra's algorithm, with the difference that it satisfies the Eikonal equation |∇T(p)| F(p) = 1 to update the graph, where ∇T is the gradient of the action map and F(p) is a positive outward speed function at point p. T(p) is the arrival time of the evolving front at point p, describing the evolution of the surface with respect to F and the surface gradient. We assume the surface is differentiable at all points. Starting from the source point, at each iteration the algorithm sweeps outwards one grid point with respect to T to locate the proper grid point to update. The geodesic distance d_g(v_i, v_j) between two vertices v_i and v_j is then computed from the back-traced shortest path as

d_g(v_i, v_j) = Σ_k ||p_{k+1} − p_k||,   (2)

where the p_k are the consecutive points along that path.

To compute the geodesic distances efficiently, we set a flag for cell (i, j) of the distance table once vertices v_i and v_j already lie on a larger optimal path, avoiding recomputing the optimal path for them.
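As a simple baseline for this step, a Dijkstra shortest path over mesh edge lengths together with a per-source distance table can be sketched as follows (illustrative only; the paper uses the more accurate fast marching of [9], and the graph helpers here are our own):

```python
import heapq
import numpy as np

def edge_graph(verts, edges):
    """Adjacency list weighted by Euclidean edge length."""
    adj = {i: [] for i in range(len(verts))}
    for i, j in edges:
        w = float(np.linalg.norm(np.asarray(verts[i]) - np.asarray(verts[j])))
        adj[i].append((j, w))
        adj[j].append((i, w))
    return adj

def geodesic_table(verts, edges, sources):
    """Run Dijkstra from each source vertex; the resulting table caches
    distances so paths already solved are never recomputed."""
    adj = edge_graph(verts, edges)
    table = {}
    for s in sources:
        dist = {s: 0.0}
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        table[s] = dist
    return table
```

The stairstep error mentioned above comes from restricting paths to mesh edges; fast marching avoids it by letting the front cross triangle interiors.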

Then we rewrite the TPS formulation to compute the coefficient matrix W as

W = (Φ + λI)^{-1} Y,   (3)

where Φ is the kernel matrix with entries Φ_{ij} = φ(d_g(c_i, c_j)) built on geodesic distances, Y contains the matched contour points in the source image, I is the identity matrix, and λ is a regularisation term added to the kernel. Values of λ close to zero make the kernel sensitive to wrong correspondences, while values far from zero tend towards an affine transformation. We set λ to -1000; by doing so, the visualisation becomes more realistic and less noisy.

Afterwards, a solution can be achieved by applying the trained coefficients as

P̂ = Ψ W,   (4)

where Ψ is the kernel matrix with entries Ψ_{ij} = φ(d_g(p_i, c_j)) between surface points and contour points. The matrix P̂ contains the surface points warped into the 2D shirt image. We assign each point the colour of its corresponding pixel from the shirt image.
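A minimal TPS fit-and-map in this spirit looks as follows. For brevity this sketch uses Euclidean distances and the classic Bookstein system with an affine part (the paper substitutes geodesic distances, and all symbol names here are our own):

```python
import numpy as np

def tps_kernel(r):
    """TPS radial basis phi(r) = r^2 log r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(ctrl, targets):
    """Solve the standard TPS system (kernel + affine part) so that
    every control point is mapped exactly onto its target."""
    n = len(ctrl)
    K = tps_kernel(np.linalg.norm(ctrl[:, None] - ctrl[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), ctrl])          # affine basis [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([targets, np.zeros((3, 2))])
    return np.linalg.solve(A, b)                     # (n+3, 2) coefficients

def tps_map(coef, ctrl, pts):
    """Warp arbitrary points with the trained coefficients."""
    U = tps_kernel(np.linalg.norm(pts[:, None] - ctrl[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return np.hstack([U, P]) @ coef
```

Swapping the Euclidean norms inside `tps_fit`/`tps_map` for precomputed geodesic distance tables gives the geodesic variant described by equations (3) and (4).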

Figure 4: Sample images used in our dataset. a) Garments used in the first data set, b) Garments with landmarks used in the second dataset, c) People who participated in creating the first data set.

3.4 Shading

The shading effect of the garment is achieved using an adaptation of the method of [2], an automatic technique for garment retexturing and shading in which the shading information is acquired from the Kinect 2 infrared data and superimposed on the inner-shape results. It is worth noting that shadow mapping on the garment is not the main contribution of this paper, and thus its usage and coverage are limited to the extent needed to visualise the results and illustrate the effectiveness of the proposed mapping method.

The general procedure for obtaining the final visualisation is as follows. The point cloud corresponding to the area of interest provided by the Microsoft Kinect 2 camera is triangulated and rendered as described in the previous section. The image created as a result of mapping in the previous steps is used as a texture image, such that each vertex corresponds to a point on the image. Afterwards, the rendered image is modified by the corresponding infrared values for each pixel. Finally, the segmented area in the Kinect frame is replaced by the colour information from the previous step.

In order to enhance the quality of the representation, the point cloud is preprocessed before rendering, since it is usually noisy. Specifically, the depth image is smoothed with a Gaussian filter, which, according to our experiments, significantly improves the results.
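A separable Gaussian smoothing of the depth image can be sketched as follows (a generic version; the kernel size and sigma here are illustrative, not the paper's values):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalised 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth_depth(depth, sigma=1.0, radius=2):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel1d(sigma, radius)
    rows = np.apply_along_axis(np.convolve, 1, depth.astype(float), k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
```

Separability keeps the cost linear in kernel size, which matters when the filter runs per frame on full-resolution depth images.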

4 Experimental Results and Discussion

In order to present the results, we first describe the setup of the experiments in terms of data, methods, parameters and evaluation metrics.

4.1 Setup

The proposed retexturing method was tested on an image database captured with the Kinect 2 RGB-D camera. According to [38], the Kinect 2 can capture frames starting from 0.5 meters and has a depth accuracy error smaller than 2 mm in the center part of the frame. The error increases towards the edges of the frame, and it also increases with greater measurement distances. The best distance for scanning objects is the 0.5 to 2 m range. To achieve the best depth resolution, the people were scanned at a distance of 1.5 to 2 meters, where the error in the horizontal and vertical plane is smallest. Each image contains a person facing the camera in a pose that does not significantly occlude the worn garment. The garments segmented from the original database were retextured using another database consisting of images of flat shirts. The flat shirt database was captured with various cameras providing decent quality images, as depth was not required.

Figure 5: a) Landmarks used in the second dataset on the flat garment (right) and landmark locations after putting the garment on as ground truth. Landmarks are shown by indices for comparison purposes. b) Retextured garment and estimated landmarks (left) and displacement arrows to ground truth landmarks (right) for computing error.
Method T-shirt Votes T-shirt Percentage Long sleeve Votes Long sleeve Percentage
NRICP [1] 77 2.68% 32 3.69%
CPD [27] 485 16.88% 245 28.23%
GM-TPS 2311 80.44% 591 68.09%
Table 1: Mean Opinion Score (MOS) comparison

The first dataset contained 91 retextured images with 14 people (11 males and 3 females) and used 13 flat garments (4 long sleeve garments and 9 t-shirts). The second dataset contained 39 retextured images with 5 people (4 males and 1 female) and used 8 flat garments (4 long sleeve garments and 4 t-shirts). We physically attached 16 landmarks to the garments in the second dataset. The locations of the landmarks were chosen empirically with the aim of visually demonstrating the texture shifts for different parts of the garment; matching the landmarks was a manual process, so we limited ourselves to 16 landmarks. This comparison was done in order to determine the retexturing precision by retexturing the same garment onto itself: in the ideal case the retextured image should be identical to the original image. Fig. 5 shows a sample of a real put-on image and the landmarked garment itself. Samples of both datasets are shown in Fig. 4.

We used two metrics to evaluate our method: a qualitative comparison using the mean opinion score (MOS) and a quantitative comparison using the mean square error (MSE). The MOS was measured by showing 91 sets of images from the first dataset to 41 people. The data was presented in an online survey in which the images were twice the size of those shown in Fig. 6, with the exception of column (c), which was not shown to the participants. It should also be pointed out that most of the participants did not have an educational background in image processing or related fields. In the survey, each person was asked which of the images in each set looks visually more realistic. The MSE was measured on the second dataset by retexturing the flat version of the shirt and computing the average distance from the retextured landmarks to the ground-truth landmarks; Fig. 5 shows the process of computing the MSE. Unfortunately, some retexturing or garment fitting papers report results only as a few qualitative images [47, 17, 43]. Since our contribution concerns garment point matching, we compare against point set registration methods using the introduced evaluation metrics: non-rigid iterative closest point (NRICP) [1] and coherent point drift (CPD) [27].
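The landmark-based error is straightforward to compute; a sketch (our own helper, with the averaging convention assumed) is:

```python
import numpy as np

def landmark_mse(pred, gt):
    """Mean squared Euclidean distance (in pixels) between predicted
    and ground-truth landmark positions."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(((pred - gt) ** 2).sum(axis=1).mean())
```

Applied to the 16 retextured landmarks versus their ground-truth positions, this gives one per-image error, which is then averaged over the dataset.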

All compared results were produced with the same set of parameters that were determined empirically. The setup parameters for matching the contours needed for the point registration algorithm [21] are set as follows (see original paper for the definition of parameters): sigma, which is the scale parameter of Gaussian mixtures, is set to 0.2 and 0.1, and the maximum number of function evaluations at each level is set to 50, 500, 100, 100 and 100. The point registration algorithm uses contours with 400 points. After the transformation and point correspondence are found, the contour is further down-sampled to 120 points and used for the inner point matching. A larger number would have resulted in a long computation time, whereas a smaller number of points resulted in some undersampled parts and produced inferior mappings. 120 points were chosen as a compromise between the execution time and the resulting mapping quality.

(a) (b) (c) (d) (e) (f)
Figure 6: Images created by the proposed retexturing method, (a) is the original image, (b) is the image of a shirt, (c) shows the shape correspondence, (d) is the retextured image based on the geodesic mapping, (e) is mapping using the Coherent Point Drift (CPD) algorithm and (f) is mapping using the non-rigid Iterative Closest Point (ICP) algorithm.
(a) (b) (c) (d)
Figure 7: Images created by the proposed retexturing method, (a) is the original image, (b) is the retextured image based on the geodesic mapping, (c) is the image of pants and (d) shows the shape correspondence.
Figure 8: Retexturing effects for different necklines

4.2 Evaluation

We separated long and short sleeve images in the results in order to analyse them separately. We show the MOS percentages in Table 1. The results illustrate that our method outperforms state-of-the-art methods by a large margin in terms of realistic appearance. This can also be seen qualitatively in Fig. 6. We added correspondences between the flat garment and body contours in the third column of Fig. 6 to show the effect of outer contour matching on the retexturing results. It can be seen that the final retextured image retains a realistic appearance even with small misalignments in the outer correspondences, although a small misalignment can have a local impact, mainly visible in the long sleeves. As an additional qualitative example, Fig. 7 shows the proposed approach successfully retexturing pants; given appropriate input data, the same can be done for other garment types. It has to be noted that the appearance of shadows is highly dependent on the material of the garment a person is wearing. The adjustment of the Kinect IR values uses the same parameters for all generated results; therefore the shadow effects can appear different for different garment types. The use of Kinect IR data for shadow generation is the same as that presented in [33, 2].

If the source and target garments have different features, for example if a collar is present in the put-on image but not in the flat image, some unnatural effects may be seen; the same goes for different necklines, as shown in Fig. 8. The NRICP algorithm has the worst visual results due to the different topologies of the flat garment and body surfaces, and the CPD algorithm has difficulties aligning the surfaces in the boundary regions.

MSE values are shown in Table 2. As with the visual results, in most cases our method is more accurate than the state-of-the-art methods in terms of marker distances to the ground truth. Our method generates a lower error for short sleeves than for long sleeves, although the difference is not significant according to the MSE results. Our method performs better than the other methods for almost every marker in Fig. 9, where samples represent different garments and the landmarks are the white circles placed on the garment, as shown in Fig. 5. Our method is also more stable across different persons and markers than the state-of-the-art methods. However, the long sleeve error in Fig. 9 fluctuates among different persons due to the higher variation in hand position. Markers 8 and 16, which were placed at the ends of the sleeves, have the highest error for both long and short sleeve garments; this happens due to slight point misalignment in the outer contour matching.

Method T-shirts Long sleeves
NRICP [1] px px
CPD [27] px px
GM-TPS px px
Table 2: MSE for marker mapping error on the second dataset

4.3 Time complexity

We analyse the time complexity of the retexturing part, the main contribution of this work. Computing geodesic distances is the most time-consuming part of retexturing. The basic fast marching algorithm has O(N log N) time complexity for the Eikonal solver, where N is the number of nodes in the mesh. Yatziv et al. [41] reduced the complexity to O(N) via an untidy priority queue. Although the basic formulation does not allow parallel computing, near-optimal iterative Eikonal solvers have appeared for fast parallel computing [20]. Fu et al. [13] reported a computation time of 459 ms for a Stanford dragon with 631,187 vertices, speeding up the basic fast marching algorithm by a factor of 14. Note that a garment in our setup has 15K vertices on average. Without loss of generality, one can resize the depth image by a factor of 0.5 and reduce the number of vertices to fewer than 4K, meaning a geodesic computation in less than 3 ms for a single node. However, we need to compute geodesic distances for all vertices, which makes the overall time complexity polynomial. Fortunately, keeping a table of shortest distances among all vertices, updated iteratively, allows us to reduce this polynomial complexity, assuring real-time performance.

Figure 9: Method comparison for different garments. Graph (a) and (b) show average landmark error (in pixels) for each sample for T and long sleeve shirts. Graphs (c) and (d) show per landmark error averaged over all samples for T and long sleeve shirts.

5 Conclusion

We proposed a retexturing method based on robust point registration and thin plate spline interpolation. The proposed method can be used to segment out the garment worn by a person and retexture it with another similar piece of garment, e.g. a garment lying on a table or some other flat surface. In this fashion, the outer boundaries of the segmented put-on garment are matched to the boundaries of the flat source garment. Afterwards, the whole surfaces are matched based on a geodesic thin plate spline to assign each point on the target garment a colour from the source garment. We compared our approach to state-of-the-art methods and achieved the best results in both visual and numerical evaluations.

Our current approach is limited to a relaxed pose without occlusions on the garment. However, it generalises as long as boundary correspondences are given. In future work, we will consider fitting a 3D human model to cope with the current limitations.

Acknowledgment

This work has been partially supported by Estonian Research Council Grant PUT638, Fits.Me (Rakutan) through the Research and Development Project LLTTI16056, the Spanish Projects TIN2015-65464-R and TIN2016-74946-P (MINECO/FEDER, UE), CERCA Programme / Generalitat de Catalunya, the Scientific and Technological Research Council of Turkey (TÜBİTAK) 1001 Project (116E097), the COST Action IC1307 iV&L Net (European Network on Integrating Vision and Language) supported by COST (European Cooperation in Science and Technology), and the Estonian Centre of Excellence in IT (EXCITE) funded by the European Regional Development Fund.

References

  • [1] B. Amberg, S. Romdhani, and T. Vetter. Optimal step nonrigid ICP algorithms for surface registration. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2007.
  • [2] E. Avots, M. Daneshmand, A. Traumann, S. Escalera, and G. Anbarjafari. Automatic garment retexturing based on infrared information. Computers & Graphics, 59:28–38, 2016.
  • [3] S. Barone, A. Paoli, and A. V. Razionale. Three-dimensional point cloud alignment detecting fiducial markers by structured light stereo imaging. Machine Vision and Applications, 23(2):217–229, 2012.
  • [4] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992.
  • [5] E. Bier and K. Sloan. Two-part texture mappings. IEEE Computer Graphics and Applications, 6(9):40–53, 1986.
  • [6] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, 1989.
  • [7] W. Chang and W. Chang. Real-time 3d rendering based on multiple cameras and point cloud. In 7th International Conference on Ubi-Media Computing and Workshops, pages 121–126.
  • [8] M. Daneshmand, A. Aabloo, C. Ozcinar, and G. Anbarjafari. Real-time, automatic shape-changing robot adjustment and gender classification. Signal, Image and Video Processing, pages 1–8, 2015.
  • [9] T. Deschamps and L. D. Cohen. Fast extraction of minimal paths in 3d images and applications to virtual endoscopy, 2001.
  • [10] I. Eckstein, V. Surazhsky, and C. Gotsman. Texture mapping with hard constraints. Computer Graphics Forum, 20(3):95–104, 2001.
  • [11] H. Fan, Y. Cong, and Y. Tang. Object detection based on scale-invariant partial shape matching. Machine Vision and Applications, 26(6):711–721, 2015.
  • [12] S. A. Fezza and M.-C. Larabi. Color Calibration of Multi-View Video Plus Depth for Advanced 3D Video. Signal, Image and Video Processing, pages 1–15, 2015.
  • [13] Z. Fu, W.-K. Jeong, Y. Pan, R. M. Kirby, and R. T. Whitaker. A fast iterative method for solving the eikonal equation on triangulated surfaces. SIAM Journal on Scientific Computing, 33(5):2468–2488, 2011.
  • [14] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding, 89(2–3):114–141, 2003.
  • [15] P. Hanrahan and P. Haeberli. Direct wysiwyg painting and texturing on 3d shapes. ACM SIGGRAPH computer graphics, 24(4):215–223, 1990.
  • [16] J. Harvent, B. Coudrin, L. Brèthes, J.-J. Orteu, and M. Devy. Multi-view dense 3d modelling of untextured objects from a moving projector-cameras system. Machine Vision and Applications, 24(8):1645–1659, 2013.
  • [17] S. Hauswiesner, M. Straka, and G. Reitmayr. Free Viewpoint Virtual Try-on with Commodity Depth Cameras. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, pages 23–30. ACM, 2011.
  • [18] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments. In 12th International Symposium on Experimental Robotics (ISER). Citeseer, 2010.
  • [19] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. Rgb-d mapping: Using kinect-style depth cameras for dense 3d modeling of indoor environments. International Journal of Robotics Research, 31(5):647–663, 2012.
  • [20] S. Hong and W.-K. Jeong. A multi-gpu fast iterative method for eikonal equations using on-the-fly adaptive domain decomposition. Procedia Computer Science, 80:190–200, 2016.
  • [21] B. Jian and B. C. Vemuri. Robust point set registration using Gaussian mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1633–1645, 2010.
  • [22] V. Kraevoy, A. Sheffer, and C. Gotsman. Matchmaker: Constructing constrained texture maps. ACM Transactions on Graphics, 22(3), 2003.
  • [23] X. Liang, C. Xu, X. Shen, J. Yang, S. Liu, J. Tang, L. Lin, and S. Yan. Human parsing with contextualized convolutional neural network. In Proceedings of the IEEE International Conference on Computer Vision, pages 1386–1394, 2015.
  • [24] L. Liu, L. Zhang, Y. Xu, C. Gotsman, and S. J. Gortler. A Local/global Approach to Mesh Parameterization. In Computer Graphics Forum, volume 27, pages 1495–1504. Wiley Online Library, 2008.
  • [25] L. M. Lui, K. C. Lam, T. W. Wong, and X. Gu. Texture map and video compression using beltrami representation. SIAM Journal on Imaging Sciences, 6(4):1880–1902, 2013.
  • [26] Y. Ma, J. Zheng, and J. Xie. Foldover-Free Mesh Warping for Constrained Texture Mapping. IEEE Transactions on Visualization and Computer Graphics, 21(3):375–388, 2015.
  • [27] A. Myronenko and X. Song. Point set registration: Coherent point drift. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2262–2275, 2010.
  • [28] C. Rother, V. Kolmogorov, and A. Blake. Grabcut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 23(3):309–314, 2004.
  • [29] S. Sengupta and P. Chaudhuri. Virtual Garment Simulation. In Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), pages 1–4. IEEE, 2013.
  • [30] P. Shirley, M. Ashikhmin, and S. Marschner. Fundamentals of computer graphics. CRC Press, 2009.
  • [31] S. Gold, A. Rangarajan, C.-P. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2d and 3d point matching: pose estimation and correspondence. Pattern Recognition, 31(8):1019–1031, 1998.
  • [32] J. Tong, J. Zhou, L. Liu, Z. Pan, and H. Yan. Scanning 3D Full Human Bodies Using Kinects. IEEE Transactions on Visualization and Computer Graphics, 18(4):643–650, 2012.
  • [33] A. Traumann, G. Anbarjafari, and S. Escalera. A new retexturing method for virtual fitting room using kinect 2 camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 75–79, 2015.
  • [34] A. Traumann, M. Daneshmand, S. Escalera, and G. Anbarjafari. Accurate 3d measurement using optical depth information. Electronics Letters, 51(18):1420–1422, 2015.
  • [35] E. Turquin, M. P. Cani, and J. F. Hughes. Sketching garments for virtual characters. In ACM SIGGRAPH, 2007.
  • [36] T. Windheuser, U. Schlickewei, F. Schmidt, and D. Cremers. Geometrically consistent elastic matching of 3d shapes: A linear programming solution. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2134–2141, 2011.
  • [37] S. Xu and J. Keyser. Texture mapping for 3d painting using geodesic distance. In 18th meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2014.
  • [38] L. Yang, L. Zhang, H. Dong, A. Alelaiwi, and A. E. Saddik. Evaluating and improving the depth accuracy of Kinect for Windows v2. IEEE Sensors Journal, 15(8):4275–4285, 2015.
  • [39] Y. Guo, Y. Pan, X. Cui, and Q. Peng. Harmonic maps based constrained texture mapping method. Journal of Computer Aided Design and Computer Graphics, 7:1457–1462, 2005.
  • [40] Z. Yasseen, A. Nasri, W. Boukaram, P. Volino, and N. Magnenat-Thalmann. Sketch-based garment design with quad meshes. Computer-Aided Design, 45(2):562–567, 2013.
  • [41] L. Yatziv, A. Bartesaghi, and G. Sapiro. O (n) implementation of the fast marching algorithm. Journal of computational physics, 212(2):393–399, 2006.
  • [42] W. Zeng, Y. Zeng, Y. Wang, X. Yin, X. Gu, and D. Samaras. 3d non-rigid surface matching and registration based on holomorphic differentials. ECCV, pages 1–14, 2008.
  • [43] M. Zhang, L. Lin, Z. Pan, and N. Xiang. Topology-independent 3d garment fitting for virtual clothing. Multimedia Tools and Applications, 2013.
  • [44] Y. Zhang, Z. Sun, K. Liu, and Y. Zhang. A Method of 3D Garment Model Generation Using Sketchy Contours. In Sixth International Conference on Computer Graphics, Imaging and Visualization, pages 205–210. IEEE, 2009.
  • [45] B. Zhou, X. Chen, Q. Fu, K. Guo, and P. Tan. Garment modeling from a single image. Computer Graphics Forum, 32(7):85–91, 2013.
  • [46] F. Zhou and F. De la Torre. Factorized graph matching. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 127–134. IEEE, 2012.
  • [47] Z. Zhou, B. Shu, S. Zhuo, X. Deng, P. Tan, and S. Lin. Image-based Clothes Animation for Virtual Fitting. In SIGGRAPH Asia 2012 Technical Briefs, page 33. ACM, 2012.