
Interactive 3D Character Modeling from 2D Orthogonal Drawings with Annotations

We propose an interactive 3D character modeling approach from orthographic drawings (e.g., front and side views) based on 2D-space annotations. First, the system builds partial correspondences between the input drawings and generates a base mesh with sweeping splines according to edge information in the 2D images. Next, users annotate the desired parts of the input drawings (e.g., the eyes and mouth) using two types of strokes, called addition and erosion, and the system re-optimizes the shape of the base mesh. By repeating these 2D-space operations (i.e., revising and modifying the annotations), users can design a desired character model. To validate the efficiency and output quality of our system, we compared the generated results with those of state-of-the-art methods.




1 Introduction

In the animation and game industries, when modeling new 3D characters (or objects), artists first draw orthographic views of them. However, manually converting these 2D drawings into 3D models is cumbersome and time-consuming because 3D modeling with specialized tools (e.g., Maya and 3DMAX) requires professional knowledge, and their user interfaces are not as intuitive as 2D drawing.

Although several sketch-based modeling methods have been proposed for 3D content creation [3], it remains challenging to represent the characteristics of character drawings: there is a gap between the generated results and professional 3D models. To address this issue, we propose a user interface for easily and efficiently designing such characteristics on 3D shapes with the help of 2D annotations. Leveraging orthogonal views, our system can faithfully reconstruct 3D models from drawings. The main contribution of this paper is a novel, user-friendly workflow for designing 3D models from 2D drawings with sketch-like annotations, which eliminates the need for complex 3D operations.

Figure 1: Overview of the proposed system.

2 Related Work

Sketching is a form of artistic expression that is highly abstracted from the real world, and it has been used in various graphics applications, such as normal map editing [5], flow design [6], and portrait drawing [7]. As input for 3D modeling, however, free-form 2D sketches are ambiguous. To address this issue, Teddy [8] was proposed as one of the earliest free-form sketch-modeling user interfaces. Once 3D viewpoints are determined by the user, smooth surfaces are generated by interpolating the curves extracted from the user's sketches. Several interpolation functions for sketch modeling [10, 9, 2] were later proposed to improve the results. However, these approaches tended to produce over-smoothed surfaces. Some approaches, such as BendSketch [11], offer a solution to this issue.

Another popular approach to sketch-based modeling is data-driven learning. By analyzing a massive number of sketch-model pairs, methods of this type can generate an accurate 3D model from a user's simple sketch. Smirnov et al. [14] applied Coons patches to learn shape surfaces, but their method is limited to generating smooth shapes. Sketch2CAD [12] allows users to create objects incrementally with sketches, which are translated into CAD instructions by convolutional neural networks. SimpModeling provides a sketching system for animalmorphic-head modeling that can generate the details of a head from sketches via pixel-aligned implicit learning.

SketchModeling [13] is the work most relevant to ours: it also attempts to reconstruct character models from multi-view sketch images, although it is a fully automatic approach. With an encoder-decoder U-Net architecture, SketchModeling predicts depth maps and corresponding normal maps from the input sketches, optimizes a point cloud by merging these views, and obtains a complete 3D model. Inspired by structured annotations [4], a generalized-cylinder-based modeling method for a single view, we propose a user interface for easily and efficiently designing characteristics on 3D shapes with fewer types of annotations by leveraging the orthogonal views.

3 User Interface

In this section, we describe how users interact with the proposed two-stage user interface (see Fig. 2) to model a character with annotations. The tool bar consists of four parts: (1) annotation modes, including, from left to right, local mode (alignment-annotation addition and edge/background marking for the corresponding alignment annotation), addition-annotation boundary addition (B), and erosion-annotation addition (E); (2) view modes, including the 2D front view (V1), the 2D side view (V2), the 3D view (V3D), and a selected-annotation-only mode; (3) rendering modes for annotations, including drawing as segments, drawing as a curve, and drawing as the generated cylinder; and (4) other options, including relocating a selected annotation from V1 to V2 (or V2 to V1), and lock/unlock buttons that indicate whether the epipolar constraint from the other view is adopted as a reference when relocating.

3.1 Annotation Tool

Since the input 2D orthogonal drawings often do not provide complete information for 3D modeling, our system allows the user to freely draw annotations that are not limited by edge information. The user can draw brief strokes in either the front view or the side view by inserting key points on the canvas with mouse clicks, and each stroke can be labeled as alignment, addition, or erosion. The system then automatically generates corresponding strokes with the same label in the other view. The user can edit strokes in editing mode so that correct 3D coordinates of the strokes are computed for the 3D view. With the eraser tool, the user clicks on a stroke and the system deletes it. Moreover, the undo shortcut ("Z" key) deletes the last stroke from the stroke list. Note that our system can also load or export the user-drawn annotations via the "Load" ("L") or "Save" ("S") key.
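The stroke bookkeeping described above (labeled strokes, an eraser, and a last-stroke undo) can be sketched as follows. This is a minimal illustration, not the paper's implementation; all names (`Label`, `Stroke`, `StrokeList`) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Label(Enum):
    ALIGNMENT = "alignment"
    ADDITION = "addition"
    EROSION = "erosion"

@dataclass
class Stroke:
    label: Label
    points: list          # ordered 2D key points from mouse clicks

@dataclass
class StrokeList:
    strokes: list = field(default_factory=list)

    def add(self, stroke: Stroke) -> None:
        self.strokes.append(stroke)

    def erase(self, stroke: Stroke) -> None:
        # eraser tool: remove the clicked stroke
        self.strokes.remove(stroke)

    def undo(self) -> None:
        # "Z" shortcut: drop the most recent stroke, if any
        if self.strokes:
            self.strokes.pop()
```

A paired stroke in the other view would carry the same `Label`, mirroring the automatic stroke generation described above.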

Figure 2: User interface in the proposed system. A user models each part of the character by annotating on both views and viewing the results in 3D view.

3.2 Editing Tool

To facilitate repeated modification and to find the appropriate correspondence between annotations in the two views, the system provides an editing function. In a 2D-view mode (V1 or V2), the user can select any visible curve and modify any vertex position of the corresponding annotation. In editing mode, the system also lets the user generate epipolar constraints from the corresponding annotation in the other view, which reduces 2D vertex editing to 1D editing.

4 Overview

Figure 1 shows an overview of our sketch-based character modeling system. After a character design is input, users draw corresponding annotations one by one in both the front and side views under the epipolar constraint. Users can also add extra edge information that is invisible in the original images by sketching each part of the model with the help of our region-based boundary extraction. Once the user's annotations are completed, the corresponding lines from alignment annotations (blue strokes) are extracted as hard constraints, and the lines marked with the other two annotation types are excluded, in order to revise the relationships between the two views of the sketches and to calculate more precise 3D point coordinates for the base mesh. After the base mesh is determined, point clouds are sampled from the edge information according to the addition annotations (orange strokes) and erosion annotations (green strokes) as constraints, and optimization-based surface fitting is conducted to generate a smooth surface.

4.1 Alignment-Based Global Modeling (Base-Mesh Generation)

Figure 3: Generalized cylinders. Candidate boundaries that would be projected into the 2D views are colorized in the right column. The three types of boundaries are mutually orthogonal.

In the first step, generalized cylinders are generated as a base mesh according to the corresponding alignment annotations in the two views. Each part of the character is modeled separately. Each annotation is stored as a series of ordered key points belonging to a single part and is represented as a Hermite curve. In this system, alignment annotations are mainly used to represent the skeleton or center of gravity of the modeled parts and, in some cases, to correct the 3D position of a specific curve.
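As an illustration of how an annotation stored as ordered key points can be evaluated as a smooth curve, the following sketch interpolates the points with cubic Hermite segments. Using Catmull-Rom-style tangents is our assumption; the paper does not specify how tangents are chosen:

```python
def hermite(p0, p1, m0, m1, t):
    """Evaluate one cubic Hermite segment from p0 to p1 with tangents m0, m1."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h01 * b + h10 * c + h11 * d
                 for a, b, c, d in zip(p0, p1, m0, m1))

def sample_curve(keypoints, samples_per_segment=8):
    """Sample a C1 curve through ordered 2D key points (Catmull-Rom tangents)."""
    pts = keypoints
    out = []
    for i in range(len(pts) - 1):
        p0, p1 = pts[i], pts[i + 1]
        prev = pts[i - 1] if i > 0 else p0
        nxt = pts[i + 2] if i + 2 < len(pts) else p1
        # central-difference tangents; clamped at the curve ends
        m0 = tuple(0.5 * (b - a) for a, b in zip(prev, p1))
        m1 = tuple(0.5 * (b - a) for a, b in zip(p0, nxt))
        for s in range(samples_per_segment):
            out.append(hermite(p0, p1, m0, m1, s / samples_per_segment))
    out.append(pts[-1])
    return out
```

The sampled polyline interpolates every key point, which matches the requirement that annotations pass through the user's clicked positions.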

Primitives. A typical generalized cylinder is shown in Fig. 3, consisting of a skeleton curve $c(t)$ and a cross-section radial distance function $r(t, \theta)$. Here, the different types of candidate boundaries that may be projected onto the two views are marked with different colors.

Camera model and epipolar constraint. In 3D space, the extrinsic parameters of a camera for 3D reconstruction can be denoted by a translation vector $T$ and a rotation matrix $R$; $f$ denotes the focal length of the camera. Given a set of 3D points $\{P_i\}$, the set of points projected onto the front view is $\{p_i^f\}$, and the set of points projected onto the side view is $\{p_i^s\}$. If $p_i^f$ and $p_i^s$ correspond to each other and the camera parameters of the two views are $(R^f, T^f)$ and $(R^s, T^s)$, respectively, then $p_i^f$ and $p_i^s$ can be expressed with the following equations:

$$p_i^f = \pi(R^f P_i + T^f), \qquad p_i^s = \pi(R^s P_i + T^s), \qquad (1)$$

where $\pi([x, y, z]^\top) = \frac{f}{z}[x, y]^\top$ is the perspective projection.
Since the two views are rotated 90° about the y-axis relative to each other, the two cameras lie in orthogonal planes. We also have $R^f = I$, $R^s = R_y(90°)$, and $T^f = T^s = [0, 0, t_z]^\top$. Then, the epipolar constraint derived from Equation (1) can be simplified as:

$$y_i^f = y_i^s. \qquad (2)$$
Thus, the corresponding 3D positions can be calculated directly. For instance, suppose a point has coordinates $(x^f, y^f)$ in the front view and $(x^s, y^s)$ in the side view. By the epipolar-line constraint $y^f = y^s$, its 3D coordinates are $(x^f, y^f, x^s)$. Although we use only this special case, the epipolar constraints can be naturally extended to reduce the workload of multi-view alignment.
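The simplified two-view reconstruction can be illustrated as below, assuming the front view's image axes map to world (x, y) and the side view's horizontal axis maps to world z; the function name is hypothetical:

```python
def lift_to_3d(front_pt, side_pt, tol=1e-6):
    """Recover a 3D point from corresponding front- and side-view points.

    Front view: image (u, v) -> world (x, y).
    Side view:  image (u, v) -> world (z, y).
    The epipolar constraint for orthogonal views requires equal heights.
    """
    xf, yf = front_pt
    xs, ys = side_pt
    if abs(yf - ys) > tol:
        raise ValueError("epipolar constraint violated: y_f != y_s")
    return (xf, yf, xs)
```

In the interface, enforcing this check while the user relocates an annotation is what reduces 2D vertex editing to 1D editing along the epipolar line.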

Base mesh generation. Our system uses the edge information of the image and the relative positions of the alignment annotations to automatically calculate the preliminary boundary of the 2D generalized cylinder in each view. For any point $c(t_i)$ on the skeleton curve of an alignment annotation, with $t_i \in [0, 1]$, the nearest intersection points, in the two opposite directions, between the line perpendicular to the skeleton at $c(t_i)$ and the image edges are taken as the initial boundary of that cross-section. If $\mathcal{E}$ denotes the set of world coordinates converted from edge pixels in the input images and $d(p, q)$ denotes the distance between points $p$ and $q$, the boundary points can be described as follows:

$$b_i^{\pm} = \operatorname*{arg\,min}_{e \in \mathcal{E},\; (e - c(t_i)) \cdot c'(t_i) = 0,\; \pm (e - c(t_i)) \cdot n(t_i) > 0} d(c(t_i), e), \qquad (3)$$

where $n(t_i)$ is the 2D normal of the skeleton at $c(t_i)$.
Note that the mesh generated by this method tends to fail for edges that are nearly parallel to the perpendicular search direction. The main reason is that the mesh must be discretized when it is generated, so the parameter $t$ is sampled at a fixed interval. This weakness is overcome in the following local refinement step.
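A minimal sketch of the boundary search in Equation (3): for one skeleton sample, it keeps the nearest edge sample on each side of the line perpendicular to the skeleton. Using a tolerance to test whether a discrete edge pixel "lies on the perpendicular" is our assumption:

```python
def cross_section_radii(skel_pt, tangent, edge_pts, tol=0.5):
    """Find the nearest edge point on each side of the perpendicular line.

    skel_pt: skeleton sample c(t_i); tangent: unit tangent there;
    edge_pts: 2D samples converted from edge pixels.
    Returns signed radii (positive side, negative side), or None per side.
    Edge samples farther than `tol` from the perpendicular line are
    ignored -- the discretization that makes nearly parallel edges fail.
    """
    nx, ny = -tangent[1], tangent[0]              # normal direction
    best = {+1: None, -1: None}
    for ex, ey in edge_pts:
        dx, dy = ex - skel_pt[0], ey - skel_pt[1]
        along = dx * tangent[0] + dy * tangent[1]  # offset along tangent
        if abs(along) > tol:                       # not on the perpendicular
            continue
        r = dx * nx + dy * ny                      # signed radius
        side = 1 if r > 0 else -1
        if best[side] is None or abs(r) < abs(best[side]):
            best[side] = r
    return best[+1], best[-1]
```

Repeating this at each sampled $t_i$ yields the preliminary cross-section boundaries swept along the skeleton.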

4.2 Local Refinement with Annotation Constraints in Two Orthogonal Views

The next step refines the base mesh with optimization. Here, we introduce addition annotations and erosion annotations, as shown in Fig. 1, to realize this objective: addition annotations define boundaries for cross-sections along a skeleton curve, while erosion annotations modify the shapes of the end-caps of the generalized cylinders. Note that both annotation types take effect only when they are attached to an alignment annotation specified by the user.

These annotations are converted to boundary functions $B_k$, where $k \in K$ indicates the boundary type. $K$ is the set of boundary types; in our case $k = 0, 1, 2$ denotes the blue curve (cross-section contour), the orange curve, and the green curve in the right column of Fig. 3, respectively. Every reconstructed part of the character should minimize the error between its visible contours and the annotation constraints when projected back to the 2D views. The objective function in this step can be summarized as:

$$B^* = \operatorname*{arg\,min}_{B} \sum_{k \in K} \sum_{i} \min_{q \in \mathcal{A}_k \cup \mathcal{E}} d\big(B_k(t_i), q\big), \qquad (4)$$

where $\mathcal{A}_k$ is the set of boundaries extracted from annotations of type $k$; the edge set $\mathcal{E}$ and the function $d$ are the same as in Equation (3). Once $B^*$ is determined, the base mesh from the global step is refined as follows:

Cross-section modeling perpendicular to the skeleton direction of the generalized cylinder. Each cross-section has at most four constraint points. In this case, the system fits an ellipse by regarding these constraint points as its poles. In particular, a single constraint point means that the cross-section is a circle.

Cross-sections between these addition-annotation boundaries are calculated with cubic B-spline interpolation. Once the cross-sections along the skeleton direction of the generalized cylinder have been calculated, the side surface of the generalized cylinder, matching the input and user-defined boundaries, can be generated.
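The cross-section fitting described above can be sketched as follows. Representing the pole constraints as signed offsets along the two principal directions of the cross-section plane is our assumption, and the B-spline interpolation between sections is omitted:

```python
import math

def fit_cross_section(constraints):
    """Fit a cross-section from at most four pole constraints.

    `constraints` are ('u', r) or ('v', r) pairs: signed offsets of
    constraint points along the two principal directions of the
    cross-section plane. One constraint -> circle of radius |r|;
    otherwise an ellipse whose semi-axes come from the poles.
    Returns the semi-axes (a, b).
    """
    if len(constraints) == 1:
        r = abs(constraints[0][1])
        return r, r                                 # circle
    a = max((abs(r) for ax, r in constraints if ax == 'u'), default=None)
    b = max((abs(r) for ax, r in constraints if ax == 'v'), default=None)
    if a is None:
        a = b
    if b is None:
        b = a
    return a, b

def sample_ellipse(a, b, n=16):
    """Sample n boundary points of the ellipse x^2/a^2 + y^2/b^2 = 1."""
    return [(a * math.cos(2 * math.pi * k / n),
             b * math.sin(2 * math.pi * k / n)) for k in range(n)]
```

Stacking the sampled rings along the skeleton and connecting consecutive rings gives the side surface of the generalized cylinder.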

End-caps of the generalized cylinder. If there is an erosion annotation for a generalized cylinder, the end-cap surfaces are deformed with Laplacian-based editing [1] according to the annotation's shape. Otherwise, the surfaces at the ends of the generalized cylinder are planes.

The main idea of this step is to use the two kinds of annotations to obtain constraints for each generalized cylinder. Therefore, this step not only performs boundary refinement but also improves the effectiveness of boundary classification, which in turn improves modeling efficiency and accuracy.
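As a concrete reading of the refinement objective in Equation (4), the following sketch evaluates, for a set of boundary samples, the sum of distances to the nearest constraint point (annotation-derived boundaries plus image edges); the optimizer drives this quantity down. The function name is hypothetical:

```python
import math

def refinement_energy(boundary_samples, constraint_pts):
    """Sum over boundary samples of the distance to the nearest
    constraint point -- the per-part quantity minimized in Equation (4)."""
    def nearest(p):
        return min(math.dist(p, q) for q in constraint_pts)
    return sum(nearest(p) for p in boundary_samples)
```

A boundary that already lies on the constraints has zero energy, so the refinement leaves well-matched parts of the base mesh untouched.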

5 Result and Discussion

Figure 4: Models created in the comparison study. (a) Input images; (b) SketchModeling [13] (251 min and 221 min); (c) ours (8 parts in 6 min; 6 parts in 8 min).

In our implementation, the system was programmed in C as a real-time drawing application on the Windows 10 platform. A workstation with an Intel Core i5-8400 (2.80 GHz), an NVIDIA RTX 2070 GPU, and 16 GB of RAM was used as the testing environment. Figure 4 shows 3D modeling results obtained with our system compared with SketchModeling [13]. For the first-row character, neither the final point cloud nor the mesh matches the input sketch well even after hours of computation, which means this learning-based method failed to predict the position and depth map of some parts of the character; in contrast, by modeling several independent parts, our system generated models that were more faithful to the original design drawings in less time.

Unlike template-based methods, the proposed system enables users to make character models without careful parameter tuning. By comparing our results with those of state-of-the-art methods, we verified that the proposed system can improve the quality of 3D character models with simpler yet more intuitive operations. The current system focuses mainly on the 3D shape reconstruction process, so texture mapping and complex structure modeling are good topics for future research.

This research was supported by the Kayamori Foundation of Informational Science Advancement, JSPS KAKENHI JP20K19845, and JP19K20316.


  • [1] O. K. Au, C. Tai, L. Liu, and H. Fu (2006) Dual laplacian editing for meshes. IEEE Transactions on Visualization and Computer Graphics 12 (3), pp. 386–395. External Links: Document Cited by: §4.2.
  • [2] A. Bernhardt, A. Pihuit, M. Cani, and L. Barthe (2008) Matisse: painting 2d regions for modeling free-form shapes. In Proceedings of the Fifth Eurographics Conference on Sketch-Based Interfaces and Modeling (SBM), pp. 57–64. Cited by: §2.
  • [3] S. Bhattacharjee and P. Chaudhuri (2020) A survey on sketch based content creation: from the desktop to virtual and augmented reality. Computer Graphics Forum 39 (2), pp. 757–780. External Links: Document Cited by: §1.
  • [4] Y. I. Gingold, T. Igarashi, and D. Zorin (2009) Structured annotations for 2d-to-3d modeling. ACM Trans. Graph. 28 (5), pp. 148. External Links: Link, Document Cited by: §2.
  • [5] Y. He, H. Xie, C. Zhang, X. Yang, and K. Miyata (2021) Sketch-based normal map generation with geometric sampling. In International Workshop on Advanced Imaging Technology (IWAIT) 2021, Vol. 11766, pp. 261 – 266. External Links: Document, Link Cited by: §2.
  • [6] Z. Hu, H. Xie, T. Fukusato, T. Sato, and T. Igarashi (2019)

    Sketch2VF: sketch-based flow design with conditional generative adversarial network

    Computer Animation and Virtual Worlds 30 (3-4), pp. e1889. Note: e1889 cav.1889 External Links: Document, Link, Cited by: §2.
  • [7] Z. Huang, Y. Peng, T. Hibino, C. Zhao, H. Xie, T. Fukusato, and K. Miyata (2022) DualFace: two-stage drawing guidance for freehand portrait sketching. Computational Visual Media 8, pp. 63–77. External Links: Link Cited by: §2.
  • [8] T. Igarashi, S. Matsuoka, and H. Tanaka (1999) Teddy: A sketching interface for 3d freeform design. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 409–416. External Links: Document Cited by: §2.
  • [9] P. Joshi and N. A. Carr (2008) Repoussé: automatic inflation of 2d artwork. In Proceedings of Eurographics Workshop on Sketch-Based Interfaces and Modeling (SBIM), pp. 49–55. External Links: Document Cited by: §2.
  • [10] O. A. Karpenko and J. F. Hughes (2006) Implementation details of smoothsketch: 3d free-form shapes from complex sketches. In ACM SIGGRAPH 2006 Sketches, pp. 51–es. Cited by: §2.
  • [11] C. Li, H. Pan, Y. Liu, X. Tong, A. Sheffer, and W. Wang (2017) BendSketch: modeling freeform surfaces through 2d sketching. ACM Trans. Graph. 36 (4), pp. 125:1–125:14. External Links: Link, Document Cited by: §2.
  • [12] C. Li, H. Pan, A. Bousseau, and N. J. Mitra (2020) Sketch2CAD: sequential CAD modeling by sketching in context. ACM Transactions on Graphics (TOG) 39 (6), pp. 164:1–164:14. External Links: Document Cited by: §2.
  • [13] Z. Lun, M. Gadelha, E. Kalogerakis, S. Maji, and R. Wang (2017) 3D shape reconstruction from sketches via multi-view convolutional networks. In Proceedings of International Conference on 3D Vision (3DV), pp. 67–77. External Links: Document Cited by: §2, Figure 4, §5.
  • [14] D. Smirnov, M. Bessmeltsev, and J. Solomon (2021) Learning manifold patch-based representations of man-made shapes. In International Conference on Learning Representations (ICLR), pp. 1–24. External Links: Link Cited by: §2.