 # Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey

This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models.


## 1 Introduction

The application of illustrative visualization has increased in recent years. The principal goal behind the concept of illustrative visualization is a meaningful, expressive, and simplified depiction of a problem, a scene, or a situation. As an example, running people are represented as running stickmen, as seen at the Olympic Games, and other objects become simplified line drawings, see Figure 1. More complex examples can be found in medical atlases. Most anatomical structures are painted and illustrated with pencils and pens. Gray’s Anatomy is one of the most famous textbooks for medical teaching. Most other textbooks in this area are also oriented towards depicting anatomy with artistic drawings.

Figure 1: Visual abstraction of four Olympic disciplines: archery, basketball, football, and handball in the style of the pictograms of the Olympic Games 2012 in London.

Illustrative visualization, however, is not restricted to simplified representations. Illustrative techniques are also essential for focus-and-context visualizations. Consider a scene with several anatomical structures and one specific (important) structure. The specific structure may be strongly related to the surrounding objects; therefore, hiding the other objects is not a viable option. In contrast, depicting all structures in full detail leads to visual clutter and optical distraction from the most important structure. Focus-and-context visualization is characterized by a few local regions that are displayed in detail and with emphasis techniques, such as a saturated color. Surrounding contextual objects are displayed in a less prominent manner to avoid distraction from the focal regions. Medical examples are vessels with interior blood flow; livers with inner structures, including vascular trees and possible tumors; and proteins with a surface representation and an interior ribbon visualization. Focus-and-context visualization is not restricted to medical data, however. An example is a vehicle body and its interior devices: the user or engineer needs the opportunity to illustrate all devices in the same context.

There are numerous methods for different illustration techniques. This survey focuses on a specific illustrative visualization category: feature lines. Feature lines are a special group of line drawing techniques. Another class of line drawing methods is hatching. Hatching tries to convey the shape of a surface by drawing a large number of lines, which further improves the spatial impression of the surface. Several methods exist to hatch a surface mesh, see Girshick2000 ; Hertzmann2000 ; Lawonn2013CGF ; Praun2001 ; Webb2002 ; Zander2004 . In contrast, feature lines try to generate lines at salient regions only. Beyond illustrative visualization, feature lines can also be used for rigid registration of anatomical surfaces Stylianou2004 or for image and data analysis in medical applications Eberly1996 . The goal of this survey is to introduce the reader to the different feature line methods and to offer a tutorial with all the knowledge needed to implement each of the methods.

#### Organization.

We first give an overview of the mathematical background. In Section 2, we introduce the necessary fundamentals of differential geometry. Afterwards, we adapt these fundamentals to triangulated surface meshes in Section 3. Section 4 discusses general aspects and requirements of feature lines. Next, we present different feature line methods in Section 5 and compare them in Section 6. Finally, Section 7 concludes this survey.

Figure 2: The basic elements of differential geometry. A parametric surface is given, and the partial derivatives create the tangent space.

## 2 Differential Geometry Background

This section presents the fundamentals of differential geometry for feature line generation, which will be crucial for the further sections. We present the basic terms and properties. This section is inspired by differential geometry books doCarmo1976 ; doCarmo1992 ; Kuehnel2006 .

### 2.1 Basic Prerequisites

A surface is called a parametric surface if it is given by an immersion $f$. An immersion means that the differential of $f$ is injective at each point, i.e., the partial derivatives of $f$ are linearly independent. The further calculations are mostly based on the tangent space of a surface. The tangent space $T_pf$ of $f$ at $p = f(u)$ is defined as the set of all linear combinations of the partial derivatives of $f$:

$$T_pf \coloneqq \mathrm{span}\left\{ \left.\frac{\partial f}{\partial x_1}\right|_{x=u},\; \left.\frac{\partial f}{\partial x_2}\right|_{x=u} \right\}.$$

Here, $\mathrm{span}$ denotes the space of all linear combinations, formally $\mathrm{span}\{v_1, v_2\} \coloneqq \{\alpha v_1 + \beta v_2 \mid \alpha, \beta \in \mathbb{R}\}$. With the tangent space, we can define a normalized normal vector $n$. The (normalized) normal vector $n$ at $p$ is defined such that for all elements $v \in T_pf$ the equation $\langle v, n \rangle = 0$ holds, where $\langle \cdot, \cdot \rangle$ denotes the canonical Euclidean dot product. Therefore, $n$ is defined as:

$$n = \frac{\frac{\partial f}{\partial x_1} \times \frac{\partial f}{\partial x_2}}{\left\| \frac{\partial f}{\partial x_1} \times \frac{\partial f}{\partial x_2} \right\|}.$$

This map is also called the Gauss map. Figure 2 depicts the domain of a parametric surface as well as the tangent space and the normal $n$.

### 2.2 Curvature

The curvature is a fundamental property for identifying salient regions of a surface that should be conveyed by feature lines. Colloquially speaking, it is a measure of how strongly the surface bends at a certain point. If we imagine standing on a sphere at a specific point, it does not matter in which direction we walk: the bending will be the same. If we instead stand on a plane, we can walk in any direction and there will be no bending at all. Without knowing any measure of curvature, we can already state that a plane has zero curvature and that a sphere with a small radius has a higher curvature than a sphere with a larger radius. This is due to the fact that a sphere with increasing radius locally becomes more and more like a plane. Intuitively, the curvature also depends on the direction in which we decide to walk. On a cylinder, we have bending in one direction but not in the other. Painting the trace of a walk on the surface and viewing it in 3D space, we can treat this trace as a 3D curve. The definition of the curvature of a curve may thus be adapted to the curvature of a surface, as explained in the following. Let $c: I \to \mathbb{R}^3$ be a 3D parametric curve with $\|c'(t)\| = 1$. The property of constant length of the derivative is called arc length or natural parametrization. One can show that such a parametrization exists for every continuously differentiable curve that is an immersion. If we want to measure the amount of bending, we can use the norm of the second derivative of the curve. Therefore, the (absolute) curvature $\kappa$ at a time point $t$ is defined as:

$$\kappa(t) = \|c''(t)\|.$$

Figure 3: The curve’s second derivative is decomposed into the tangential and normal part.

To determine the curvature on a certain point of the surface in a specific direction, we can employ a curve and calculate its curvature. This approach is imperfect because curves that lie in a plane can have non-vanishing curvature, e.g., a circle, whereas we claimed to have zero curvature on a planar surface. Therefore, we have to distinguish which part of the second derivative of the curve contributes to the tangent space and which contributes to the normal part of the surface. Decomposing the second derivative of the curve into tangential and normal part of the surface yields:

$$c''(t) = \underbrace{\mathrm{proj}_{T_pf}\, c''(t)}_{\text{tangential part}} + \underbrace{\langle c''(t), n \rangle\, n}_{\text{normal part}},$$

where $\mathrm{proj}_{T_pf}\, c''(t)$ means the projection of $c''(t)$ onto the tangent space $T_pf$, see Figure 3. The curvature $\kappa_c(p)$ of the surface at $p = c(t)$ along the curve $c$ is defined as the coefficient of the normal part:

$$\kappa_c(p) = \langle c''(t), n \rangle. \qquad (1)$$

Hence, we know that $c'(t) \in T_pf$ and $\langle c'(t), n \rangle = 0$. Deriving the last equation yields:

$$\frac{d}{dt}\langle c'(t), n \rangle = 0, \qquad \frac{d}{dt}\langle c'(t), n \rangle = \langle c''(t), n \rangle + \left\langle c'(t), \frac{\partial n}{\partial t} \right\rangle.$$

We obtain

$$\langle c''(t), n \rangle = -\left\langle c'(t), \frac{\partial n}{\partial t} \right\rangle.$$

Combining this equation with Equation 1 yields

$$\kappa_{c'(t)}(p) = -\left\langle c'(t), \frac{\partial n}{\partial t} \right\rangle. \qquad (2)$$

Thus, the curvature of a surface at a specific point in a certain direction can be calculated by a theorem of Meusnier. We call the vectors $v, w$ at $p$ the maximal/minimal principal curvature directions of the maximal and minimal curvature $\kappa_v(p), \kappa_w(p)$, if $\kappa_w(p) \le \kappa_u(p) \le \kappa_v(p)$ holds for all directions $u$. If such a minimum and maximum exist, then $v$ and $w$ are perpendicular, see Section 2.5 for a proof. If we want to determine the curvature in an arbitrary direction $u$, we first need to normalize $u$ and can then determine $\kappa_u(p)$ by:

$$\kappa_u(p) = \langle u, v \rangle^2 \kappa_v(p) + \langle u, w \rangle^2 \kappa_w(p). \qquad (3)$$

The coefficients $\langle u, v \rangle$ and $\langle u, w \rangle$ are the decomposition of the vector $u$ into the principal curvature directions.
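As a quick numerical check of Equation 3, the decomposition can be sketched in a few lines of Python (the function name and the cylinder example are ours, not from the paper):

```python
import numpy as np

def normal_curvature(u, v, w, kappa_v, kappa_w):
    """Curvature in direction u from the principal curvature directions
    v, w and principal curvatures kappa_v, kappa_w (Equation 3).
    All vectors are assumed to lie in the tangent plane."""
    u = u / np.linalg.norm(u)  # Equation 3 requires a normalized direction
    return np.dot(u, v) ** 2 * kappa_v + np.dot(u, w) ** 2 * kappa_w

# Cylinder of radius 1: curvature 1 around the axis, 0 along it.
v, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kappa_45 = normal_curvature(np.array([1.0, 1.0]), v, w, 1.0, 0.0)
```

At 45° between the principal directions of the cylinder, the resulting curvature is the mean of the two principal curvatures.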

### 2.3 Covariant Derivative

The essence of feature line generation is the analysis of local variations in a specific direction, i.e., the covariant derivative. Therefore, the covariant derivative is a crucial concept for feature line methods. We consider a scalar field $\varphi$ on a parametric surface. One can imagine this scalar field as a heat or pressure (or curvature) distribution. The directional derivative of $\varphi$ in direction $v$ can be written as $D_v\varphi$ and is defined by:

$$D_v\varphi(x) = \lim_{h \to 0} \frac{\varphi(x + hv) - \varphi(x)}{h}.$$

If $\varphi$ is differentiable at $x$, the directional derivative can be simplified:

$$D_v\varphi(x) = \langle \nabla\varphi(x), v \rangle,$$

where $\nabla\varphi$ denotes the gradient. The gradient is an operator that is applied to a scalar field and results in a vector field. When we want to extend the definition of the derivative to an arbitrary surface, we first need to define the gradient on surfaces. In the following, we make use of the covariant derivative. The standard directional derivative results in a vector which may lie anywhere in 3D space, whereas the covariant derivative is restricted to the tangent space of the surface. The gradient in the domain is a two-dimensional vector, but we actually need a three-dimensional vector in the tangent space of the surface. A naive approach is to use the components of the domain gradient as coefficients of the tangential basis. Unfortunately, this leads to wrong results because of the distortions of the basis of the tangent space, see Figure 4. The basis is not necessarily an orthonormal basis as in the domain space and can therefore distort the gradient on the surface.

Figure 4: Given a scalar field in the domain, determining the gradient and using it as coefficients for the basis tangent vectors leads to the wrong result (grey). Balancing the distortion with the inverse of the metric tensor yields the correct gradient on the surface (black).

One way to calculate this vector is to define the scalar field directly on the surface, obtain its gradient in three-dimensional space, and project it onto the tangent space. Instead, however, we want to use the gradient of $\varphi$ in the domain of the parametric surface and compensate for the length distortion such that we can use it as coordinates with respect to the basis of the tangent space. One important fact is that multiplying the gradient with the $i$-th basis vector yields the partial derivative of $\varphi$ with respect to $x_i$. Furthermore, we know that the three-dimensional gradient lies in the tangent space. Therefore, it can be represented as a linear combination of the basis vectors $\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}$ with coefficients $\alpha, \beta$:

$$\nabla\varphi = \alpha \cdot \frac{\partial f}{\partial x_1} + \beta \cdot \frac{\partial f}{\partial x_2}.$$

Multiplying both sides with the basis vectors and using the relation $\langle \nabla\varphi, \frac{\partial f}{\partial x_i} \rangle = \frac{\partial \varphi}{\partial x_i}$, we obtain an equation system:

$$\begin{pmatrix} \frac{\partial \varphi}{\partial x_1} \\[2pt] \frac{\partial \varphi}{\partial x_2} \end{pmatrix} = \begin{pmatrix} \alpha \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_1} \rangle + \beta \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \rangle \\[2pt] \alpha \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \rangle + \beta \langle \frac{\partial f}{\partial x_2}, \frac{\partial f}{\partial x_2} \rangle \end{pmatrix} = \underbrace{\begin{pmatrix} \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_1} \rangle & \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \rangle \\[2pt] \langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} \rangle & \langle \frac{\partial f}{\partial x_2}, \frac{\partial f}{\partial x_2} \rangle \end{pmatrix}}_{\eqqcolon\, g} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.$$

The matrix $g$ is called the metric tensor. This tensor describes the length and area distortion from the domain to the surface. The last equation yields the coefficients $\alpha, \beta$ when multiplied with the inverse of $g$:

$$\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = g^{-1} \begin{pmatrix} \frac{\partial \varphi}{\partial x_1} \\[2pt] \frac{\partial \varphi}{\partial x_2} \end{pmatrix}.$$

This leads to a general expression of the gradient for a scalar field $\varphi$:

$$\nabla\varphi = \sum_{i,j=1}^{n} \left( g^{ij} \frac{\partial \varphi}{\partial x_j} \right) \frac{\partial}{\partial x_i}, \qquad (4)$$

where $g^{ij}$ denotes the $(i,j)$-th entry of the inverse of $g$ and $\frac{\partial}{\partial x_i}$ denotes the basis. Now we are able to determine the covariant derivative of a scalar field $\varphi$ in direction $w$ by first determining its gradient and afterwards applying the dot product:

$$D_w\varphi = \langle \nabla\varphi, w \rangle.$$
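The chain from Equation 4 to the covariant derivative can be sketched numerically. The sketch below assumes the two tangent vectors and the domain gradient are given; all names are ours:

```python
import numpy as np

def surface_gradient(f_x1, f_x2, dphi):
    """Gradient of a scalar field on a parametric surface (Equation 4).
    f_x1, f_x2: partial derivatives of f spanning the tangent space.
    dphi: the gradient (dphi/dx1, dphi/dx2) of phi in the domain."""
    J = np.column_stack([f_x1, f_x2])   # 3x2 matrix of tangent vectors
    g = J.T @ J                         # metric tensor
    coeffs = np.linalg.solve(g, dphi)   # (alpha, beta) = g^{-1} * dphi
    return J @ coeffs                   # alpha*f_x1 + beta*f_x2

# With a stretched (non-orthonormal) basis, the metric tensor
# compensates the distortion, as illustrated in Figure 4:
grad = surface_gradient(np.array([2.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0]),
                        np.array([2.0, 3.0]))
```

The result satisfies $\langle \nabla\varphi, \frac{\partial f}{\partial x_i} \rangle = \frac{\partial \varphi}{\partial x_i}$, which the naive use of the domain gradient as coefficients would violate.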

### 2.4 Laplace-Beltrami Operator

The Laplace-Beltrami operator is needed for a specific feature line method and will therefore be introduced. The Laplace operator is defined as the composition of the gradient and the divergence. When interpreting a vector field as a flow field, the divergence is a measure of how much more flow leaves a specific region than enters it. In Euclidean space, the divergence of a vector field $\Phi$ is the sum of the partial derivatives of its components $\Phi_i$:

$$\mathrm{div}\,\Phi = \sum_{i=1}^{n} \frac{\partial}{\partial x_i} \Phi_i.$$

The computation of the divergence for a vector field in Euclidean space is straightforward. However, for computing the divergence on an arbitrary surface, we have to be aware of the length and area distortions. Without giving a full derivation, the components of the vector field have to be weighted by the square root of the determinant of the metric tensor before taking the derivative. The square root of the determinant of $g$ describes the distortion change from Euclidean space to the surface. Formally, the divergence of a vector field $\Phi$ with a given metric tensor $g$ is given by:

$$\mathrm{div}\,\Phi = \frac{1}{\sqrt{|g|}} \sum_{i=1}^{n} \frac{\partial}{\partial x_i} \left( \sqrt{|g|}\, \Phi_i \right). \qquad (5)$$

Given the definition of the gradient and the divergence, we can compose both operators to obtain the Laplace-Beltrami operator of a scalar field on surfaces:

$$\Delta\varphi = \mathrm{div}\,\nabla\varphi = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^{n} \frac{\partial}{\partial x_i} \left( \sqrt{|g|}\, g^{ij}\, \frac{\partial \varphi}{\partial x_j} \right). \qquad (6)$$

### 2.5 Shape Operator

In Section 2.2, we noticed that the curvature of a parametric surface at a specific point in a certain direction can be determined by Equation 2:

$$\kappa_{c'(t)}(p) = -\left\langle c'(t), \frac{\partial n}{\partial t} \right\rangle.$$

Actually, this means that the curvature in a direction is also a measure of how much the normal changes in this direction. Given is a direction $v \in T_pf$ with $\|v\| = 1$ and $p = f(u)$. Then, we determine the coefficients $\alpha, \beta$ of $v$ with respect to the basis $\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}$:

$$\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = g^{-1} \begin{pmatrix} \langle v, \frac{\partial f}{\partial x_1} \rangle \\[2pt] \langle v, \frac{\partial f}{\partial x_2} \rangle \end{pmatrix}.$$

We use $\alpha, \beta$ to determine the derivative of the normal $n$ along $v$ by using the two-dimensional curve $\tilde{c}(t) \coloneqq u + t \cdot (\alpha, \beta)^T$ and calculate:

$$D_v n \coloneqq \frac{d}{dt}\, n(\tilde{c}(t)).$$

We define $S(v) \coloneqq -D_v n$. This linear operator is called the shape operator (also Weingarten map or second fundamental tensor). One can see that $S(c'(t)) = -\frac{\partial n}{\partial t}$ holds. Note that this operator can operate directly in 3D space with a three-dimensional vector in the tangent space, as well as in 2D space with the coefficients of the basis. Therefore, it can be represented by a $2 \times 2$ matrix. Recalling Equation 2, we substitute $c'(t)$ with $v$ and $\frac{\partial n}{\partial t}$ with $-S(v)$:

$$\kappa_v(p) = \langle v, Sv \rangle.$$

We want to show that the principal curvature directions are the eigenvectors of $S$. Assume $v_1, v_2$ are the normalized eigenvectors with the eigenvalues $\lambda_1, \lambda_2$. Every normalized vector $w$ can be written as a linear combination of $v_1, v_2$: $w = \alpha v_1 + \beta v_2$ with $\alpha^2 + \beta^2 = 1$. Therefore, we obtain:

$$\kappa_w(p) = \langle w, Sw \rangle = \frac{1}{2}\left[ (\alpha^2 - \beta^2)(\lambda_1 - \lambda_2) + \lambda_1 + \lambda_2 \right]. \qquad (7)$$

One can see from Equation 7 that $\kappa_w(p)$ reaches a maximum for $\alpha = \pm 1$, $\beta = 0$, and a minimum for $\alpha = 0$, $\beta = \pm 1$ (assuming $\lambda_1 > \lambda_2$). If the eigenvalues (curvatures) are not equal, we can show that the principal curvature directions are perpendicular. For this, we need to show that $S$ is a self-adjoint operator, i.e., that the equation $\langle Sv, w \rangle = \langle v, Sw \rangle$ holds. We show this by using the property $\langle n, \frac{\partial f}{\partial x_i} \rangle = 0$ and deriving it with respect to $x_j$:

$$\left\langle \frac{\partial n}{\partial x_j}, \frac{\partial f}{\partial x_i} \right\rangle + \left\langle n, \frac{\partial^2 f}{\partial x_i \partial x_j} \right\rangle = 0.$$

We demonstrate that $S$ is a self-adjoint operator using the basis $\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}$:

$$\left\langle S\Big(\frac{\partial f}{\partial x_i}\Big), \frac{\partial f}{\partial x_j} \right\rangle = \left\langle -\frac{\partial n}{\partial x_i}, \frac{\partial f}{\partial x_j} \right\rangle = \left\langle n, \frac{\partial^2 f}{\partial x_i \partial x_j} \right\rangle = \left\langle -\frac{\partial n}{\partial x_j}, \frac{\partial f}{\partial x_i} \right\rangle = \left\langle S\Big(\frac{\partial f}{\partial x_j}\Big), \frac{\partial f}{\partial x_i} \right\rangle.$$

Now, we show that the eigenvectors (principal curvature directions) are perpendicular if the eigenvalues (curvatures) are different:

$$\lambda_1 \langle v_1, v_2 \rangle = \langle S v_1, v_2 \rangle = \langle v_1, S v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle.$$

Since $\lambda_1 \ne \lambda_2$ holds, this equation is only true if $v_1, v_2$ are perpendicular.

## 3 Discrete Differential Geometry

This section adapts the concepts of continuous differential geometry to discrete differential geometry, the area of polygonal meshes that approximate continuous geometries. The following notation is used in the remainder of this paper. Let $M$ be a triangulated surface mesh. The mesh consists of vertices $i \in V$ with associated positions $p_i \in \mathbb{R}^3$, edges $E \subset V \times V$, and triangles $T \subset V \times V \times V$. We write $n_i$ for the normalized normal vector at vertex $i$; if nothing else is mentioned, we refer to normal vectors at vertices. Furthermore, $N(i)$ denotes the neighbors of $i$, i.e., for every $j \in N(i)$, $(i, j) \in E$ holds. If we use a triangle for a calculation, we always use the following notation: given a triangle with vertices $p_i, p_j, p_k$, the edges are defined cyclically as $e_1 \coloneqq p_j - p_i$, $e_2 \coloneqq p_k - p_j$, and $e_3 \coloneqq p_i - p_k$.

### 3.1 Voronoi Area

We need to introduce the term Voronoi area, as it is important for the determination of the curvature. Given are points in a 2D space. Every point spreads out at equal speed. When two fronts collide, they stop spreading at this region. After all fronts have stopped, every point lies in a region that is surrounded by a front. This region is called a Voronoi region. Formally, given distinct points $x_1, \ldots, x_n$ in the plane, the Voronoi region $V(x_k)$ of the point $x_k$ is defined as the set of points

$$V(x_k) = \left\{ x \in \mathbb{R}^2 :\; \|x - x_k\| \le \|x - x_j\| \text{ for all } j \ne k \right\}.$$

See Figure 5(a) for an example of a Voronoi diagram. To obtain the Voronoi area of a vertex on a surface mesh, the Voronoi areas of all incident triangles are accumulated. The Voronoi area calculation is based on the method by Meyer et al. Meyer2002 . In the case of a non-obtuse triangle, the Voronoi area at $p_i$ is determined by the perpendicular bisectors of the edges incident to $p_i$. The points of intersection, the midpoints of the incident edges, and the point $p_i$ itself define the corners of the Voronoi area. The Voronoi area of the triangle at $p_i$ equals:

$$A_\triangle(p_i) = \frac{1}{8} \left( \|e_1\|^2 \cdot \cot(\angle(e_2, e_3)) + \|e_3\|^2 \cdot \cot(\angle(e_1, e_2)) \right).$$

In the case of an obtuse triangle, the Voronoi area is equal to half of the triangle area if the angle at $p_i$ is obtuse. Otherwise, it is a quarter of the triangle area, see Figure 5(b).
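This case distinction can be sketched as a per-corner version of the mixed area of Meyer et al.; the function and variable names are ours:

```python
import numpy as np

def corner_voronoi_area(p_i, p_j, p_k):
    """Mixed Voronoi area of triangle (p_i, p_j, p_k) assigned to p_i
    (Section 3.1). Non-obtuse triangles use the cotangent formula;
    obtuse triangles fall back to half or a quarter of the area."""
    def angle(a, b, c):
        # interior angle at vertex a
        u, v = b - a, c - a
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    area = 0.5 * np.linalg.norm(np.cross(p_j - p_i, p_k - p_i))
    a_i = angle(p_i, p_j, p_k)
    a_j = angle(p_j, p_k, p_i)
    a_k = angle(p_k, p_i, p_j)
    if max(a_i, a_j, a_k) <= np.pi / 2:   # non-obtuse: true Voronoi area
        return (np.dot(p_k - p_i, p_k - p_i) / np.tan(a_j)
                + np.dot(p_j - p_i, p_j - p_i) / np.tan(a_k)) / 8.0
    return area / 2.0 if a_i > np.pi / 2 else area / 4.0
```

Summing the three corner areas of a triangle always recovers the full triangle area, which makes the per-vertex accumulation a partition of the surface.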

### 3.2 Discrete Curvature

The calculation of the curvatures as well as the principal curvature directions is important for a number of feature line techniques. Several approaches exist to approximate the curvatures. Some methods fit an analytic surface (higher-order polynomials) to the mesh and determine the curvatures analytically Cazals2003 ; Goldfeather2004 . Another approach first estimates the normal curvature along edges and then estimates the shape operator Chen1992 ; Hameiri2003 ; Meyer2002 ; Page2001 ; Taubin1995_curvature . Other approaches are based on the calculation of the shape operator $S$ Alliez2003 ; Cohen-Steiner2003 ; Rusinkiewicz2004 . We use the curvature estimation according to Rusinkiewicz2004 . After $S$ is determined on a per-triangle basis, it is adapted to the vertices. We already defined that $S(v)$ yields the negative change of the normal in the direction of $v$:

$$S(v) = -D_v n.$$

This property is used to assess $S$ for each triangle. When applying $S$ to an edge, it should result in the (negative) difference of the normals at the edge's endpoints, because of the change of the normals along the edge. We need a basis of the tangent space of the triangle:

$$\tilde{e}_1 = \frac{e_1}{\|e_1\|}, \qquad \tilde{e}_2 = \frac{e_2}{\|e_2\|}.$$

Afterwards, we build the orthogonal normalized basis vectors by:

$$x_\triangle \coloneqq \tilde{e}_1, \qquad y_\triangle \coloneqq \frac{x_\triangle \times (\tilde{e}_2 \times x_\triangle)}{\|x_\triangle \times (\tilde{e}_2 \times x_\triangle)\|}. \qquad (8)$$

Applying the aforementioned property of the shape operator to all edges according to the basis leads to the following equation system:

$$S \begin{pmatrix} \langle e_i, x_\triangle \rangle \\ \langle e_i, y_\triangle \rangle \end{pmatrix} = -\begin{pmatrix} \langle \Delta n_i, x_\triangle \rangle \\ \langle \Delta n_i, y_\triangle \rangle \end{pmatrix}, \quad i \in \{1, 2, 3\}, \qquad (9)$$

where $\Delta n_i$ denotes the difference of the vertex normals along edge $e_i$; see Figure 6 for an illustration.

Figure 6: The shape operator estimation is based on a local coordinate system, the edges and the normals.

Here, we have three unknowns (the entries of the symmetric matrix $S$) and six linear equations. Thus, a least-squares method can be applied to fit the shape operator to approximate the curvature of each triangle. Next, we need to calculate $S$ for each vertex of the mesh. As the triangle basis normally differs from the tangent space basis of each vertex, we need to transform the shape operator to the new coordinate system. First, we assume that the normal of the face is equal to the normal of the incident vertex. Hence, the basis of the triangle is coplanar to the basis $(x_i, y_i)$ of the incident vertex. Assuming we have the shape operator given in the vertex basis, its entries can be determined by:

$$e_p = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} e_p & f_p \\ f_p & g_p \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = x_i^T S x_i, \qquad f_p = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} e_p & f_p \\ f_p & g_p \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = x_i^T S y_i, \qquad g_p = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} e_p & f_p \\ f_p & g_p \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = y_i^T S y_i.$$

As we have determined the shape operator in the basis $(x_\triangle, y_\triangle)$, we can express the basis vectors of the vertex in terms of the triangle basis:

$$\alpha = \langle x_i, x_\triangle \rangle, \qquad \beta = \langle x_i, y_\triangle \rangle.$$

The entry $e_p$ can then be determined by:

$$e_p = \begin{pmatrix} \alpha & \beta \end{pmatrix} S \begin{pmatrix} \alpha \\ \beta \end{pmatrix}. \qquad (10)$$

The other entries can be calculated analogously. In the second case, where the face and vertex normals differ, we rotate the coordinate system of the triangle around the cross product of the two normals such that the bases of the vertex and the triangle are coplanar. Finally, we use this to determine the shape operator at the vertices. We determine the shape operators of all incident triangles of a vertex. Afterwards, we rotate the coordinate systems of the triangles to be coplanar with the basis of the vertex and re-express the shape operators in terms of the vertex basis. Then, we weight each shape operator according to the Voronoi area of its triangle and accumulate the tensors. Finally, we divide the accumulated shape operator by the sum of the weights. The eigenvalues provide the principal curvatures, and the eigenvectors give the principal curvature directions with respect to the vertex basis. Pseudo-code 1 summarizes the algorithm.
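The per-triangle least-squares fit (Equations 8 and 9) can be sketched as follows. This is a condensed per-triangle version only; the names are ours, and we use the sign convention $S(v) = -D_v n$ from Section 2.5:

```python
import numpy as np

def triangle_shape_operator(p, n):
    """Least-squares fit of the symmetric 2x2 shape operator on one
    triangle from vertex positions p[0..2] and vertex normals n[0..2]
    (Section 3.2). Uses the triangle basis of Equation 8."""
    e = [p[2] - p[1], p[0] - p[2], p[1] - p[0]]   # edges
    dn = [n[2] - n[1], n[0] - n[2], n[1] - n[0]]  # normal differences
    x = e[0] / np.linalg.norm(e[0])               # basis (Equation 8)
    y = np.cross(x, np.cross(e[1], x))
    y /= np.linalg.norm(y)
    A, b = [], []
    for ei, dni in zip(e, dn):                    # six equations (Eq. 9)
        u, v = np.dot(ei, x), np.dot(ei, y)
        A += [[u, v, 0.0], [0.0, u, v]]
        b += [-np.dot(dni, x), -np.dot(dni, y)]   # S e = -(change of n)
    (ep, fp, gp), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.array([[ep, fp], [fp, gp]])
```

For a sphere the fit is exact and both eigenvalues equal $-1/r$ for outward normals; the per-vertex accumulation described above then averages such per-triangle estimates.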

Please note that this algorithm can be generalized to obtain higher-order derivatives. It can be used to determine the derivative of the curvature, which is important for a specific feature line method. Formally, the derivative of the shape operator has the form:

$$C = \begin{pmatrix} D_v S & D_w S \end{pmatrix} = \begin{pmatrix} \begin{pmatrix} a & b \\ b & c \end{pmatrix} & \begin{pmatrix} b & c \\ c & d \end{pmatrix} \end{pmatrix}. \qquad (11)$$

To determine the change of the curvature in a direction $u$, the tensor has to be applied to $u$ multiple times:

$$D_u \kappa_u(p) = C(u, u, u). \qquad (12)$$

### 3.3 Discrete Covariant Derivative

First, we consider a linear 2D scalar field $\varphi$ and its gradient:

$$\nabla\varphi = \begin{pmatrix} \frac{\partial}{\partial x_1}\varphi \\[2pt] \frac{\partial}{\partial x_2}\varphi \end{pmatrix} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}. \qquad (13)$$

To determine the gradient of a triangle with scalar values $\varphi_i, \varphi_j, \varphi_k$ at its vertices $p_i, p_j, p_k$, we build a basis according to Equation 8 and transform the points to 2D coordinates $p'_i, p'_j, p'_k$ by:

$$p'_l = \begin{pmatrix} \langle p_l - p_i, x_\triangle \rangle \\ \langle p_l - p_i, y_\triangle \rangle \end{pmatrix}, \quad l \in \{i, j, k\}.$$

This transformation describes an isometric and conformal map. The next step is a linearization of the scalar values $\varphi_i, \varphi_j, \varphi_k$. We want to determine a scalar field $\varphi'$ such that

$$\varphi'(p'_i) = \varphi_i, \qquad \varphi'(p'_j) = \varphi_j, \qquad \varphi'(p'_k) = \varphi_k$$

holds. These conditions yield the following equation system:

$$\begin{pmatrix} \alpha & \beta \end{pmatrix} \begin{pmatrix} p'_i & p'_j & p'_k \end{pmatrix} + \begin{pmatrix} \gamma & \gamma & \gamma \end{pmatrix} = \begin{pmatrix} \varphi_i & \varphi_j & \varphi_k \end{pmatrix}.$$

With $p'_i = (0, 0)^T$ we obtain the following solution:

$$\gamma = \varphi_i, \qquad \begin{pmatrix} \alpha & \beta \end{pmatrix} = \begin{pmatrix} \varphi_j - \varphi_i & \varphi_k - \varphi_i \end{pmatrix} \begin{pmatrix} p'_j & p'_k \end{pmatrix}^{-1}.$$

According to Equation 13, the gradient of $\varphi'$ is determined by $(\alpha, \beta)^T$.

The basis $(x_\triangle, y_\triangle)$ yields the gradient in 3D:

$$\nabla\varphi = \alpha \cdot x_\triangle + \beta \cdot y_\triangle.$$

Figure 7 illustrates the gradient of a triangle. To determine the gradient per vertex, we use the same procedure as for the shape operator estimation: we transform the basis and weight the triangle gradients according to their proportion of the Voronoi area.
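The construction above can be condensed into a short sketch (names are ours):

```python
import numpy as np

def triangle_gradient(p, phi):
    """Gradient of the linearized scalar field on one triangle
    (Section 3.3). p[0..2]: vertex positions; phi[0..2]: scalar values.
    Returns the 3D gradient lying in the triangle plane."""
    e1, e2 = p[1] - p[0], p[2] - p[0]
    x = e1 / np.linalg.norm(e1)       # basis as in Equation 8
    y = np.cross(x, np.cross(e2, x))
    y /= np.linalg.norm(y)
    # 2D coordinates of p_j, p_k as matrix columns, with p_i at the origin
    P = np.array([[np.dot(e1, x), np.dot(e2, x)],
                  [np.dot(e1, y), np.dot(e2, y)]])
    alpha, beta = np.array([phi[1] - phi[0], phi[2] - phi[0]]) @ np.linalg.inv(P)
    return alpha * x + beta * y
```

For a triangle lying in a coordinate plane, the result coincides with the ordinary 2D gradient of the linear interpolant.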

### 3.4 Discrete Laplace-Beltrami Operator

Several methods exist to discretize the Laplace-Beltrami operator on surface meshes. For an overview, we recommend the state-of-the-art report by Sorkine Sorkine2005 . The operator can be represented by the generalized formula:

$$\Delta\varphi(p_i) = \sum_j w_{ij}\, (\varphi(p_j) - \varphi(p_i)).$$

Different weights $w_{ij}$ give different discrete Laplace-Beltrami operators. Ideally, such an operator fulfills the following properties, which are motivated by the smooth Laplace-Beltrami operator:

• (Sym) The weights should be symmetric: $w_{ij} = w_{ji}$.

• (Loc) If $(i, j) \notin E$, then $w_{ij} = 0$.

• (Pos) All weights should be non-negative: $w_{ij} \ge 0$.

• (Lin) If $p_i$ is contained in a plane and $\varphi$ is linear, then $\Delta\varphi(p_i) = 0$ should hold.

In the following, we introduce different discrete Laplace-Beltrami operators.

Combinatorial: For the combinatorial Laplace-Beltrami operator we have:

$$w_{ij} = \begin{cases} 1, & \text{if } (i, j) \in E \\ 0, & \text{otherwise.} \end{cases}$$

This version may result in non-zero values on planar surfaces for linear scalar fields. Therefore, it violates (Lin).

Uniform: Taubin Taubin1995 suggested the uniform Laplace-Beltrami operator. The weights are determined by the number of neighbors of $p_i$:

$$w_{ij} = \begin{cases} \frac{1}{|N(i)|}, & \text{if } (i, j) \in E \\ 0, & \text{otherwise.} \end{cases}$$

These weights also violate (Lin).

Figure 8: This figure illustrates the triangles with the angles for the weight calculation.

Floater’s mean value: Floater Floater2003 proposed the mean value weights, using the tangent of the corresponding angles:

$$w_{ij} = \begin{cases} \frac{\tan(\delta_{ij}/2) + \tan(\gamma_{ij}/2)}{\|p_i - p_j\|}, & \text{if } (i, j) \in E \\ 0, & \text{otherwise.} \end{cases}$$

See Figure 8 for the angles. These weights violate (Sym).

Cotangent weights: MacNeal Macneal1949 suggested the cotangent weights:

$$w_{ij} = \begin{cases} \cot(\alpha_{ij}) + \cot(\beta_{ij}), & \text{if } (i, j) \in E \\ 0, & \text{otherwise.} \end{cases}$$

See Figure 8 for the angles. On general meshes the weights can violate (Pos).

Belkin weights: Belkin Belkin2008 suggested determining the weights over the whole surface:

$$\Delta\varphi(p_i) = \frac{1}{4\pi h^2(p_i)} \sum_{\triangle_k} \frac{A(\triangle_k)}{3} \sum_{j \in \triangle_k} e^{-\frac{\|p_i - p_j\|^2}{4h(p_i)}}\, \left( \varphi(p_j) - \varphi(p_i) \right),$$

where $A(\triangle_k)$ denotes the area of the triangle $\triangle_k$ and $h$ corresponds intuitively to the size of the neighborhood. This operator violates the (Loc) property.

Results: The discussion leads to the question of whether there is any discrete Laplace-Beltrami operator that fulfills all required properties for an arbitrary surface mesh. Wardetzky et al. Wardetzky2007 showed that no such operator exists. The proof is based on a Schönhardt polytope, which demonstrates that no discrete Laplace-Beltrami operator can satisfy all of these conditions simultaneously.
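As a small sketch, the generalized formula with the cotangent weights can be implemented as follows (no area normalization is applied; names are ours):

```python
import numpy as np

def cotangent_laplacian(pos, triangles, phi):
    """Discrete Laplace-Beltrami with cotangent weights (Section 3.4):
    delta phi(p_i) = sum_j w_ij (phi_j - phi_i), with
    w_ij = cot(alpha_ij) + cot(beta_ij)."""
    n = len(pos)
    W = np.zeros((n, n))
    for i, j, k in triangles:
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            u, v = pos[a] - pos[c], pos[b] - pos[c]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            W[a, b] += cot   # the angle at c is opposite the edge (a, b)
            W[b, a] += cot
    return np.array([W[i] @ (phi - phi[i]) for i in range(n)])
```

On a planar mesh with a linear scalar field, the result at interior vertices is zero, which is exactly the (Lin) property of the cotangent weights.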

### 3.5 Isolines on Discrete Surfaces

For feature line methods, it is essential not to restrict the lines to the mesh edges, as the mesh edges themselves should not be perceived in the result. Given a surface mesh and a scalar field, we want to depict the zero crossing of the scalar field. Therefore, we linearize the scalar values for each triangle according to the values at the incident vertices. Afterwards, we look for points on the edges where the linearized scalar values are equal to zero. Having found two such points on two edges of a triangle, we connect them. Suppose we have a triangle with vertices $p_i, p_j, p_k$ and scalar values $\varphi_i, \varphi_j, \varphi_k$, where the sign of $\varphi_k$ differs from the signs of $\varphi_i$ and $\varphi_j$. Thus, we know that somewhere on the edges $(p_i, p_k)$ and $(p_j, p_k)$ there is a zero crossing. We determine $t = \frac{\varphi_i}{\varphi_i - \varphi_k}$ and multiply $t$ with the edge $p_k - p_i$; adding this vector to $p_i$ yields the position of the zero crossing on the edge. The position on the other edge is determined analogously. Afterwards, both points are connected, see Figure 9.

Figure 9: In (a) the position of the zero crossing is determined and the points are connected. In (b) the isoline through a mesh is depicted.
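The per-triangle construction can be sketched as follows (the function name is ours):

```python
import numpy as np

def isoline_segment(p, phi):
    """Zero-level isoline inside one triangle (Section 3.5).
    p[0..2]: vertex positions, phi[0..2]: scalar values. Returns the
    two crossing points, or None if the field does not change sign."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        if phi[a] * phi[b] < 0:              # sign change on edge (a, b)
            t = phi[a] / (phi[a] - phi[b])   # zero of the linearized field
            pts.append(p[a] + t * (p[b] - p[a]))
    return tuple(pts) if len(pts) == 2 else None
```

Connecting the returned segments over all triangles yields the isoline, as depicted in Figure 9.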

## 4 General Requirements of Feature Lines

The generation of feature lines leads to several requirements, which have to be considered for acquiring appropriate results.

Smoothing: Most feature line methods use higher-order derivatives. Therefore, the methods assume sufficiently smooth input data. For data acquired with laser scanners or industrial measurement processes, smoothness cannot be expected. Discontinuities represent high frequencies in the surface mesh and lead to the generation of distracting (and erroneous) lines. Several algorithms exist that smooth the surface while keeping relevant features. Depending on the feature line method, different smoothing algorithms can be applied. If the method only uses the surface normals and the view direction, it is sufficient to smooth the surface normals. Geometry-based approaches, however, require smoothing the mesh completely. For methods operating only on scalar values, an algorithm that smoothes the scalar field around a certain region may be applied as well.

Frame coherence: For feature line approaches, and for non-photorealistic rendering in general, it is crucial to provide methods that are frame-coherent. This means that during interaction, the user should not be distracted by features that suddenly pop in or out. A consistent and continuous depiction of features should be provided in consecutive frames of an animation.

Filtering: Feature line algorithms may generate lines on salient regions as well as lines resulting from small local irregularities, which may not be necessary to convey the surface shape or may even be annoying and distracting. Filtering feature lines to set apart relevant lines from distracting ones is therefore a crucial part of feature line generation. User-defined thresholds may control the rate of tolerance for line generation. Some algorithms use an underlying scalar field for thresholding: lines are only drawn if the corresponding scalar value exceeds the user-defined threshold. Other methods integrate along a feature line, determine the integral value, and decide whether to draw the whole line instead of filtering some parts. We will mention the filtering method of each presented feature line generation method.

## 5 Feature Lines

Line drawings have been used extensively for medical visualization tasks, such as displaying tissue boundaries in volume data Burns2005 ; Treavett2000 , vascular structures Ritter2006 , neck anatomy Krueger2005 and brain data Jainek2008 ; Svetachov2010 . Furthermore, some higher-order feature lines were qualitatively evaluated on medical surface data Lawonn2013BVM . The importance of feature lines in medical visualization is discussed in Preim2014 . Feature line methods can be divided into image-based and object-based methods. Image-based approaches are not in the focus of this survey. These methods take an image as input; all calculations are performed on the image, with the pixels containing, for instance, RGB or grey values. Usually, the image is convolved with different kernels to obtain the feature lines. The resulting feature lines are represented by pixels in image space. These lines are mostly not frame-coherent. Comprehensive overviews of different feature line methods in image space are given by Muthukrishnan2011 ; Nadernejad2008 ; Senthilkumaran2009 . This section presents selected object-based feature line methods. We will explain the methods and their limitations. Further information on line drawings can be found in Preim2014 ; Rusinkiewicz2008 .

### 5.1 Contours

We refer to a silhouette as a depiction of the outline of an object, as this is the original definition by Étienne de Silhouette. The contour is defined as the loci of points where the normal vector and the view vector are mutually perpendicular:

$$\langle n, v \rangle = 0,$$

where $n$ is the normal vector and $v$ is the view vector pointing towards the camera. For the discrete case, we highlight an edge as a contour whenever the sign of the dot product of the view vector with the normals of the two incident triangles changes. The contour yields a first impression of the surface mesh. On the other hand, it is not sufficient to depict the surface well: the contour alone is not appropriate to gain a spatial impression of the object. Furthermore, it cannot depict salient regions, for instance strong edges.
Summary: The contour is necessary for gaining a first impression of the shape of the object. Unfortunately, spatial cues, such as strong edges, are not depicted.
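The discrete criterion can be sketched as follows; the data layout (per-edge pairs of incident face indices, a constant view vector as an orthographic simplification) is our own assumption:

```python
import numpy as np

def contour_edges(face_normals, edge_faces, view):
    """Contour detection (Section 5.1): an edge is a contour edge if
    <n, v> changes sign between its two incident triangles."""
    result = []
    for e, (fa, fb) in enumerate(edge_faces):
        if np.dot(face_normals[fa], view) * np.dot(face_normals[fb], view) < 0:
            result.append(e)
    return result
```

For a perspective camera, the constant view vector would be replaced by per-face vectors towards the camera position.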

### 5.2 Crease Lines

Crease lines are a set of edges at which the incident triangles change strongly. The dihedral angle, i.e., the angle between the normals of the two incident triangles, is calculated along each edge. The edge belongs to a crease line if the dihedral angle exceeds a user-defined threshold $\tau$. As the change of the normals is an indicator of the magnitude of the curvature, one can state that an edge contributes to a feature line if:

$$\arccos\left( \langle n_i, n_j \rangle \right) \ge \tau$$

for adjacent triangles with corresponding normals $n_i, n_j$. Afterwards, all adjacent vertices which fulfill this property are connected. These feature lines need to be computed only once, since they are not view-dependent. Furthermore, these lines are only drawn along edges.
Summary: Crease lines display edges where the dihedral angle is large. Strong edges are appropriately depicted, but if the object has small features, this method is not able to depict only the important edges. This is caused by the local determination of the dihedral angle without considering a neighborhood. Even smoothing the surface mesh would not deliver proper line drawings. Furthermore, this method can only detect features on edges.
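A minimal sketch of the crease test (threshold in radians; the per-edge data layout is our own assumption):

```python
import numpy as np

def crease_edges(face_normals, edge_faces, tau):
    """Crease line detection (Section 5.2): keep edges whose dihedral
    angle, measured between the incident triangle normals, exceeds
    the user-defined threshold tau."""
    result = []
    for e, (fa, fb) in enumerate(edge_faces):
        na, nb = face_normals[fa], face_normals[fb]
        c = np.dot(na, nb) / (np.linalg.norm(na) * np.linalg.norm(nb))
        if np.arccos(np.clip(c, -1.0, 1.0)) >= tau:
            result.append(e)
    return result
```

Since the test is purely local and view-independent, it can be precomputed once per mesh and threshold.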

### 5.3 Ridges and Valleys

Figure 12: The brain model with ridges and valleys, and contours.

Ridges and valleys were proposed by Interrante et al. Interrante1995 and adapted to triangulated surface meshes by Ohtake et al. Ohtake2004 . These feature lines are curvature-based and not view-dependent. The computation is based on the principal curvatures $\kappa_1, \kappa_2$ with $\kappa_1 \ge \kappa_2$ as well as the associated principal curvature directions $t_1, t_2$. Formally, ridges and valleys are defined as the loci of points at which the principal curvatures assume an extremum in the principal direction:

$$D_{t_1} \kappa_1 = 0, \qquad D_{t_2} \kappa_2 = 0.$$

According to two constraints, the sets of points are called

$$D_{t_1} D_{t_1} \kappa_1 \begin{cases} < 0, \text{ and } \kappa_1 > 0: & \text{ridges} \\ > 0, \text{ and } \kappa_1 < 0: & \text{valleys.} \end{cases} \quad (14)$$

To determine the ridge and valley lines, we first need to compute the principal curvatures and their associated principal curvature directions, recall Section 3.2. Afterwards, we determine the gradient of $\kappa_1$ for each vertex, see Section 3.3. Finally, we compute the dot product of the gradient and the associated principal curvature direction $t_1$. This yields the scalar value $e_1 = \langle \nabla \kappa_1, t_1 \rangle$ for each vertex. Next, we distinguish between ridges and valleys and determine $D_{t_1} D_{t_1} \kappa_1$ for each vertex. Here, we again need the gradient at each vertex, now of the value $e_1$, and determine the dot product of the result with $t_1$. Hence, we gain two scalar values per vertex: $e_1$ and $\langle \nabla e_1, t_1 \rangle$. Afterwards, we assess the zero-crossings of the first scalar value, recall Section 3.5. We connect the zero-crossings in every triangle for which one condition of Equation 14 holds. The filtering of the lines is again performed by employing a user-defined threshold. The integral along each ridge and valley line is determined according to the underlying curvature. If the magnitude of the integral exceeds the threshold for ridges or valleys, the line is drawn.
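The per-edge zero-crossing step reduces to linear interpolation; a small sketch (the helper name and inputs are our own illustration):

```python
def edge_zero_crossing(e_a, e_b, p_a, p_b):
    """Locate the zero crossing of a scalar field (e.g. D_t1 kappa_1)
    along an edge (p_a, p_b) by linear interpolation; returns None
    if the values at the endpoints have the same sign."""
    if e_a * e_b >= 0:
        return None
    t = e_a / (e_a - e_b)  # parameter of the sign change in [0, 1]
    return tuple(a + t * (b - a) for a, b in zip(p_a, p_b))

# The scalar value changes sign from +2 at p_a to -1 at p_b:
print(edge_zero_crossing(2.0, -1.0, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)))
# crossing at t = 2/3, i.e. near (2.0, 0.0, 0.0)
print(edge_zero_crossing(1.0, 2.0, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)))  # None
```

Connecting such per-edge crossings inside each triangle yields the line segments that form the ridge and valley curves.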
Summary: The calculation is solely based on the curvature and therefore view-independent. This method is able to detect small features. The filtering depends on the underlying curvature and the length of the curve. Therefore, a long line with small curvature has the same chance to be drawn as a short line with high curvature. This strategy also emphasizes long feature lines. Ridge and valley lines are very susceptible to noise, since this is a third-order method: small discontinuities on the surface mesh lead to erroneous derivatives, and this error propagates with each further derivative. A crucial task for this method is to guarantee a smoothed mesh to obtain reasonable results. From an artist's point of view, some features may be highlighted more than others from different points of view. This is caused by the different perception of an object and by various light positions. For this task, ridge and valley lines are not appropriate due to the restriction to view-independent results.

### 5.4 Suggestive Contours

Suggestive contours are view-dependent feature lines introduced by DeCarlo et al. DeCarlo2003 . They extend the definition of the contour. These lines are defined as the set of minima of $\langle n, v \rangle$ in the direction of $w$, where $n$ is the surface normal, $v$ is the view vector which points towards the camera, and $w$ is the projection of the view vector onto the tangent plane. Formally:

$$D_w \langle n, v \rangle = 0, \qquad D_w D_w \langle n, v \rangle > 0.$$

Another, equivalent definition of the suggestive contours is given by the radial curvature $\kappa_r$. It is defined as the curvature in the direction of $w$. As seen in Equation 3, this curvature can be determined by knowing the principal curvature directions as well as the corresponding curvatures. Therefore, the definition of the suggestive contours is equivalent to the set of points at which the radial curvature $\kappa_r$ equals zero and the directional derivative of $\kappa_r$ in direction $w$ is positive:

$$\kappa_r = 0, \qquad D_w \kappa_r > 0.$$
The filtering strategy is to apply a small threshold to eliminate suggestive contour points where the radial curvature in direction of the projected view vector is very low. Additionally, a hysteresis threshold is applied to increase granularity.
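The evaluation of the radial curvature from the principal curvatures follows Euler's theorem; a minimal sketch, where the function name and the toy tangent-plane vectors are our own assumptions:

```python
import numpy as np

def radial_curvature(k1, k2, t1, w):
    """Normal curvature in direction w via Euler's theorem:
    kappa_r = k1*cos^2(theta) + k2*sin^2(theta), where theta is the
    angle between w and the maximal principal direction t1."""
    w = w / np.linalg.norm(w)
    cos2 = np.dot(t1, w) ** 2  # t1 is assumed to be unit length
    return k1 * cos2 + k2 * (1.0 - cos2)

t1 = np.array([1.0, 0.0])  # principal directions span the tangent plane
print(radial_curvature(2.0, -1.0, t1, np.array([1.0, 0.0])))  # 2.0  (w = t1)
print(radial_curvature(2.0, -1.0, t1, np.array([0.0, 1.0])))  # -1.0 (w = t2)
```

With $\kappa_1 > 0 > \kappa_2$ as in this example, $\kappa_r$ changes sign as $w$ rotates in the tangent plane, which is exactly the zero-crossing the suggestive contour definition picks up.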
Summary: Suggestive contours extend the usual definition of the contour. This method depicts the zero-crossings of the diffuse light in view direction, which can be seen as inflection points on the surface. The method is only of second order and thus less susceptible to noise. Unfortunately, suggestive contours are not able to depict some sorts of sharp edges which are in fact noticeable features. For instance, a rounded cube has no suggestive contours.

### 5.5 Apparent Ridges

Apparent ridges were proposed by Judd et al. Judd2007 . These feature lines extend the definition of ridges by a view-dependent curvature term. Therefore, a projection operator $P$ is used to map the vertices onto a screen plane $V$. The orthonormal basis of the screen plane is given by $(v_1, v_2)$. Assume we have a parametrized surface $f: \mathbb{R}^2 \to \mathbb{R}^3$. Then the projection of $f$ onto $V$ is given by:

$$P(x) = \begin{pmatrix} \langle v_1, f(x) \rangle \\ \langle v_2, f(x) \rangle \end{pmatrix}.$$

The Jacobian of $P$ can be expressed as:

$$J_P = \begin{pmatrix} \langle v_1, \frac{\partial f}{\partial x_1} \rangle & \langle v_1, \frac{\partial f}{\partial x_2} \rangle \\ \langle v_2, \frac{\partial f}{\partial x_1} \rangle & \langle v_2, \frac{\partial f}{\partial x_2} \rangle \end{pmatrix}.$$

In the discrete case with surface meshes, the Jacobian can be expressed by a basis $(e_1, e_2)$ of the tangent plane:

$$J_P = \begin{pmatrix} \langle v_1, e_1 \rangle & \langle v_1, e_2 \rangle \\ \langle v_2, e_1 \rangle & \langle v_2, e_2 \rangle \end{pmatrix}.$$

If a point on the screen plane is not a contour point, there exists a small neighborhood where the inverse of $P$ exists. Normal vectors at a point $w'$ on the screen plane are defined as $n'(w') = n(P^{-1}(w'))$. The main idea is to build a view-dependent shape operator $S'$ at a point on the screen as

$$S'(w') = D_{w'} n',$$

where $w'$ is a vector in the screen plane. The view-dependent shape operator is therefore defined as:

$$S' = S \, J_P^{-1}.$$

Here, the basis of the tangent space expressing $S$ and $J_P$ must be the same. In contrast to the shape operator, the view-dependent shape operator is not a self-adjoint operator, recall Section 2.5. Therefore, it is not guaranteed that $S'$ has two eigenvalues, but it has a maximum singular value $\kappa'_1$:

$$\kappa'_1 = \max_{\|w'\|=1} \| S'(w') \|.$$

This is equivalent to finding the maximum eigenvalue of $S'^T S'$ and taking the square root. The corresponding singular vector $t'_1$ is called the maximum view-dependent principal direction. The rest of the method is similar to the ridge and valley method. Formally, apparent ridges are defined as the loci of points at which the view-dependent principal curvature assumes an extremum in the view-dependent principal direction:

$$D_{t'_1} \kappa'_1 = 0.$$

The sign of $\kappa'_1$ is always positive. To distinguish between ridge lines and valley lines, we may compare the sign of the object-space curvature:

$$\kappa_1 \begin{cases} < 0, & \text{ridges} \\ > 0, & \text{valleys.} \end{cases}$$

The calculation of the directional derivative differs from the other methods: it is performed with finite differences. Therefore, we transform the singular vector $t'_1$ to object space using the corresponding tangent-plane basis of the associated vertex $v$. Furthermore, we need the edges opposite to the vertex and determine two points $p_1$, $p_2$ on these edges such that the segments from $v$ to $p_1$, $p_2$ and the edges are orthogonal, i.e., $p_1$, $p_2$ are the feet of the perpendiculars dropped from $v$ onto the corresponding edges. The directional derivatives are determined by averaging the finite differences of the curvatures between $v$ and $p_1$, $p_2$. The curvature at $p_i$ is assessed by linear interpolation of the curvatures at the endpoints of the associated edge. Having the principal view-dependent curvature direction $t'_1$, we need to make it consistent over the mesh because its sign is not well-defined. Therefore, $t'_1$ is flipped to the opposite direction whenever it does not point in the direction where the view-dependent curvature is increasing. The zero-crossings are determined by checking whether the principal view-dependent curvature directions of the two vertices of an edge point in the same direction; only in this case there is no zero-crossing. Pointing in different directions means that the enclosed angle is greater than 90 degrees. The zero-crossing is determined by interpolating the values of the derivatives. To locate only maxima, a perpendicular is dropped from each vertex to the zero-crossing line. If the perpendiculars of the vertices of an edge make an acute angle with their principal view-dependent curvature directions, the zero-crossing is a maximum; otherwise, it is a minimum. To eliminate unimportant lines, a threshold based on the view-dependent curvature is used.
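The core quantity, the view-dependent curvature as the largest singular value of $S J_P^{-1}$, can be sketched in a few lines; the 2×2 matrices below are toy inputs of our own choosing:

```python
import numpy as np

def view_dependent_curvature(S, JP):
    """Maximum view-dependent principal curvature: the largest singular
    value of the view-dependent shape operator S' = S @ inv(J_P)."""
    S_prime = S @ np.linalg.inv(JP)
    return float(np.linalg.svd(S_prime, compute_uv=False)[0])

S = np.array([[2.0, 0.0], [0.0, 0.5]])  # shape operator in a tangent basis

# Head-on view: the screen plane coincides with the tangent plane.
print(view_dependent_curvature(S, np.eye(2)))  # 2.0, the object-space maximum

# Foreshortened view: the projection compresses the first direction.
JP_tilted = np.array([[0.5, 0.0], [0.0, 1.0]])
print(view_dependent_curvature(S, JP_tilted))  # 4.0, curvature appears amplified
```

The second case illustrates why apparent ridges appear near contours: foreshortening makes $J_P$ nearly singular, so the view-dependent curvature grows.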
Summary: Apparent ridges incorporate the advantages of ridge and valley lines as well as view dependency. They extend the ridge and valley definition by introducing view-dependent curvatures. This method is able to depict salient regions such as sharp edges. Unfortunately, the third-order computation leads to low frame rates and to visual clutter if the surface mesh is not sufficiently smoothed.

### 5.6 Photic Extremum Lines

Photic extremum lines (PELs) were introduced by Xie et al. Xi2007 . These feature lines depict regions of the surface mesh with significant variations of illumination. The method is based on the magnitude of the light gradient. Formally, these lines are defined as the set of points where the variation of illumination along its gradient direction reaches a local maximum:

$$D_w \|\nabla f\| = 0, \qquad D_w D_w \|\nabla f\| < 0,$$

with $w = \frac{\nabla f}{\|\nabla f\|}$. Normally, $f$ is used as the headlight illumination: $f = \langle n, v \rangle$ with $n$ as the normal vector and $v$ as the view vector. PELs have more degrees of freedom to influence the result by adding further light sources. In this case, the scalar value $f$ changes by adding the light values contributed at the vertices by the other lights. Noisy photic extremum lines are filtered by a threshold which is based on the integral over single connected lines. The strength $T$ of a line with points $x_0, \dots, x_n$ is determined by:

$$T = \int \|\nabla f\| = \sum_{i=0}^{n-1} \frac{\|\nabla f(x_i)\| + \|\nabla f(x_{i+1})\|}{2} \, \|x_i - x_{i+1}\|.$$

If $T$ is less than a user-defined threshold, the line is discarded.
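The line-strength integral amounts to trapezoidal integration of $\|\nabla f\|$ along the polyline; a small sketch with made-up sample values:

```python
import math

def line_strength(points, grad_norms):
    """Trapezoidal integral of ||grad f|| along a connected polyline:
    the filtering strength T of a photic extremum line."""
    T = 0.0
    for i in range(len(points) - 1):
        avg = 0.5 * (grad_norms[i] + grad_norms[i + 1])
        seg = math.dist(points[i], points[i + 1])
        T += avg * seg
    return T

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
grads = [1.0, 3.0, 1.0]  # ||grad f|| sampled at the line points
print(line_strength(pts, grads))  # 0.5*(1+3)*1 + 0.5*(3+1)*1 = 4.0
```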
Summary: Photic extremum lines are strongly inspired by edge detection in image processing and by the human perception of changes in luminance; they use the variation of illumination. The result may be improved by adding lights. Besides the filtering strategy of integrating the magnitude of the gradient over the lines, noise can also be reduced by adding a spotlight directed at certain regions. Nevertheless, smoothing is necessary to gain reasonable results. Here, smoothing the normals is sufficient, as the computation is mainly based on them. However, the computation has high performance costs. The original work was improved by Zhang et al. Zhang2010 to significantly reduce the runtime.

### 5.7 Demarcating Curves

Demarcating curves were proposed by Kolomenkin et al. Kolomenkin2008 . These feature lines are defined as the transition of a ridge to a valley line. To determine these lines, the derivative of the shape operator has to be calculated, recall Equation 11:

$$C = \begin{pmatrix} D_v S & D_w S \end{pmatrix}.$$

The demarcating curves are defined as the set of points where the curvature derivative is maximal:

$$\langle w, S w \rangle = 0, \qquad w = \operatorname*{arg\,max}_{\|w\|=1} C(w, w, w).$$

The values for $w$ can be found analytically as the roots of a third-order polynomial. This is obtained by setting the derivative of $C(w, w, w)$ to zero and combining this with Equation 12. A user-defined threshold eliminates demarcating curves if it exceeds the value of $C(w, w, w)$.
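The maximizing direction can also be found by brute force over tangent angles, which is a useful sanity check against the analytic cubic-root solution; the tensor below is a toy example of our own:

```python
import numpy as np

def max_derivative_direction(C, samples=720):
    """Numerically search the unit tangent direction w maximizing
    |C(w, w, w)| for a rank-3 curvature-derivative tensor C (2x2x2).
    The paper solves this analytically via a cubic polynomial; this
    brute-force sketch only illustrates the objective."""
    best_w, best_val = None, -1.0
    for theta in np.linspace(0.0, np.pi, samples, endpoint=False):
        w = np.array([np.cos(theta), np.sin(theta)])
        val = abs(np.einsum('ijk,i,j,k->', C, w, w, w))
        if val > best_val:
            best_w, best_val = w, val
    return best_w, best_val

# All of the curvature change happens along the first tangent direction:
C = np.zeros((2, 2, 2))
C[0, 0, 0] = 1.0
w, val = max_derivative_direction(C)
print(np.round(w, 3), round(val, 3))  # direction close to (1, 0)
```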
Summary: Demarcating curves are view-independent feature lines displaying regions where the change of the curvature is maximal. Therefore, higher-order derivatives are used and a rank-3 tensor is determined. This method can be used to illustrate bumps by surrounding curves. The advantage of the method is that it enhances small features. Especially in combination with shading, this approach has its strength in illustrating archaeological objects where specific details are important, e.g., old scripts. For this application, view-dependent illustration techniques are not recommended because details need to be displayed from every camera position. On the other hand, due to the higher-order derivatives, the method is sensitive to noise and is not well suited for illustrative visualization.

### 5.8 Laplacian Lines

Laplacian lines were proposed by Zhang et al. Zhang2011 . The introduction of these lines was inspired by the Laplacian-of-Gaussian (LoG) edge detector in image processing and aims at a similar effect for surface meshes. The idea of the LoG method is to determine the Laplacian of the Gaussian function and to use this kernel as a convolution kernel for the image. Laplacian lines calculate the Laplacian of an illumination function $f$ and determine the zero-crossings as feature lines. To remove noisy lines, the lines are only drawn if the magnitude of the illumination gradient exceeds a user-defined threshold $\tau$:

$$\Delta f(p) = 0, \qquad \|\nabla f(p)\| \ge \tau,$$

where $\Delta$ is the discrete Laplace-Beltrami operator on the surface mesh and $f$ is the illumination with $f(p) = \langle n(p), v \rangle$. Here, the discrete Laplace-Beltrami operator with the Belkin weights is used, as introduced in Section 3.4. The advantage of this method is the simplified representation of the Laplacian of the illumination:

$$\Delta f(p) = \Delta \langle n, v \rangle = \langle \Delta n, v \rangle.$$

Here, the $\Delta$ applied to $n$ is the vector Laplace operator in Euclidean space.
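As a sketch of this simplification, one can apply a discrete Laplace operator to the vertex normals first and then dot the result with the view vector. For brevity we use uniform umbrella weights here instead of the Belkin weights the method prescribes, and the tiny mesh fan is a toy example:

```python
import numpy as np

def laplacian_of_illumination(normals, neighbors, v):
    """Compute Delta f(p) = <Delta n, v> per vertex: a discrete Laplace
    operator (uniform umbrella weights, NOT the Belkin weights of the
    original method) applied to the normals, then projected onto the
    view vector v."""
    lap_f = np.empty(len(normals))
    for i, nbrs in enumerate(neighbors):
        lap_n = np.mean([normals[j] for j in nbrs], axis=0) - normals[i]
        lap_f[i] = np.dot(lap_n, v)
    return lap_f

# Tiny fan: vertex 0 is surrounded by vertices 1..3.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.6, 0.0, 0.8],
                    [-0.6, 0.0, 0.8],
                    [0.0, 0.6, 0.8]])
neighbors = [[1, 2, 3], [0], [0], [0]]
v = np.array([0.0, 0.0, 1.0])
print(laplacian_of_illumination(normals, neighbors, v)[0])  # approx. -0.2
```

The zero-crossings of this per-vertex scalar field, filtered by the gradient-magnitude threshold, are the Laplacian lines.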