Analyzing structural characteristics of object category representations from their semantic-part distributions

09/15/2015 ∙ by Ravi Kiran Sarvadevabhatla, et al.

Studies from neuroscience show that part-mapping computations are employed by the human visual system in the process of object recognition. In this work, we present an approach for analyzing semantic-part characteristics of object category representations. For our experiments, we use category-epitome, a recently proposed sketch-based spatial representation for objects. To enable part-importance analysis, we first obtain semantic-part annotations of the hand-drawn sketches originally used to construct the corresponding epitomes. We then examine the extent to which the semantic-parts are present in the epitomes of a category and visualize the relative importance of parts as a word cloud. Finally, we show how such word cloud visualizations provide an intuitive understanding of category-level structural trends that exist in the category-epitome object representations.


Code repository: sketch-object-part-analysis — code and data related to the analysis of object sketches at a semantic part-level.

1 Introduction

Studies from neuroscience show that structural part-mapping computations are employed by the human visual system in the process of recognition [5]. Put another way, the presence of certain parts seems to be anticipated by the visual system when it attempts to recognize an object. The knowledge of what these parts are and their relative importance for the overall task of recognition can lead to insights regarding the neuro-visual representation of objects.

In a recent work, Sarvadevabhatla et al. [7] describe the construction of sketch-based spatial representations for object categories termed category-epitomes. The epitomes are constructed to be as sparse as possible while still being machine-recognizable (see Figure 2). To study these epitomes, one possibility would be to visually examine them for structural similarities on a per-category basis. However, if the number of such epitomes is large, visual examination can be ineffective. An alternate approach would be to examine the distribution of semantic-parts in the epitomes of each category (e.g. spokes, seat, wheel and handle are semantic parts of a bicycle; we use the term semantic-parts to distinguish them from the common interpretation of an object part as a certain spatial, unnamed portion of an object). As we show in this work, such an approach can lead to an intuitive understanding of the category-specific “signature” structural elements (parts) which persist in category-epitomes (see Figure 1). Moreover, the category-epitomes we study have been obtained using human-drawn sketches as a starting point. Therefore, our approach also creates the possibility of analyzing the underlying human neuro-visual representations.

Figure 1: Sparsified yet recognizable freehand sketch representations of the object category bicycle (left). Visually examining such a large number of instances for structural similarities can be ineffective. Instead, the approach we propose captures the structurally significant parts as a semantic-part word cloud (right). The size of a part’s name in the word cloud reflects its importance across the set of sparsified representations of the category.
Figure 2: Original sketches (top row) and corresponding category-epitomes (bottom row) for various object categories. Figure has been taken from [7].

2 Related Work

Determining the relative importance of part-level structural primitives for object category understanding has been explored only to a limited extent. Guo et al. [3] present an importance measure for shape parts based on their ability to reconstruct the whole shape of 2-D object silhouettes. However, the authors interpret parts to mean segments on the contour of the object. Ma et al. [6] propose a perception-based method to segment a sketch into semantically meaningful parts. Interestingly, they demonstrate the effectiveness of utilizing semantic parts rather than just considering parts as unnamed “regions” of the object. To the best of our knowledge, however, the relative importance of semantic parts has not been studied.

3 Construction of a part-annotated sketch database

As the first step towards a semantic-part based understanding of category-epitomes, we manually annotated hand-drawn sketches from 13 categories (airplane, bicycle, bus, car (sedan), cat, cow, dog, flying bird, horse, person walking, potted plant, sheep, train) of the sketch database of Eitz et al. [2]. A direction of research we intend to pursue in future involves the simultaneous analysis of image-based and sketch-based categories whose part-level segmentations have been provided. With this in mind, the categories we examine were chosen to overlap with PASCAL-parts [1], an image dataset containing part-level segmentations of object categories. From the categories in PASCAL-parts, we retained only those containing at least two dominant labeled parts. For example, the category tv had only one dominant labeled part (screen) and therefore was not admissible. Within the sketches of a category, we considered only the correctly classified sketches, since category-epitomes, by definition, cannot be constructed for misclassified sketches. Please refer to [7] for details regarding category-epitome construction.

The annotation of part contours in the sketches was performed by annotators using an annotation tool developed in-house (see Figure 3). In the figure, the sketch to be annotated is on the left. As an annotation guide, a prototypical image of the category, labeled with parts, was provided alongside the sketch. The annotators were given basic guidelines to ensure reasonably compact boundary contours enclosing each semantic-part. At the end, we obtained semantic-part contour annotations for the correctly classified sketches across all 13 object categories. A sample annotation can be viewed in Figure 4. For each annotated sketch, the corresponding sparsified representation (category-epitome) was obtained from the epitome data provided by [7] at http://val.serc.iisc.ernet.in/eotd/epitome_images/.

Figure 3: Screenshot of our annotation system. The sketch to be annotated and the list of parts can be seen towards the left side. The reference image to guide the annotators for part names and locations is on the right.
Figure 4: User annotated sketch from category bicycle

4 Our approach

An overview of our approach can be seen in Figure 5. In the figure, locations with numbers circled in orange correspond to important processing stages which we shall refer to in the discussion that follows. We utilize an example from the category bicycle for the purpose of illustration. Let I be the original full-sketch image from category c, epi_I its category-epitome, and U the set of 2-D contour points which correspond to the user-annotated boundaries of semantic-parts in I. Since the category-epitome is constructed from its full-sketch counterpart, we have strokes(epi_I) ⊆ strokes(I), i.e. the set of sketch strokes in the epitome is a subset of the strokes in the corresponding full sketch. Let us also suppose that the cardinality of U is N and that the number of semantic-parts in category c is P.
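For concreteness, a minimal Python rendering of this setup under our own assumed data representation (every name here is ours, not from the authors' code): stroke masks as boolean arrays, part annotations as (name, contour) pairs.

    import numpy as np

    I = np.zeros((256, 256), dtype=bool)      # full-sketch stroke mask
    epi_I = np.zeros((256, 256), dtype=bool)  # category-epitome stroke mask

    # By construction, epitome strokes are a subset of full-sketch strokes:
    assert not (epi_I & ~I).any()

    # Hypothetical annotations: (part name, (x, y) contour vertices)
    U = [
        ("wheel", np.array([[10.0, 10.0], [60.0, 10.0], [60.0, 60.0], [10.0, 60.0]])),
        ("frame", np.array([[80.0, 20.0], [140.0, 20.0], [140.0, 70.0], [80.0, 70.0]])),
    ]
    N = len(U)                        # number of annotated part contours
    P = len({name for name, _ in U})  # number of distinct semantic-parts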

Figure 5: Determining part-importances given the full sketch I, the corresponding category-epitome epi_I, and the set of user annotations U from the category bicycle. The output is a vector of semantic-part importances. Locations with numbered orange circles indicate key aspects of the pipeline. The sketches and annotation data have been color-coded for visualization purposes. This figure is best viewed in color.

4.1 Obtaining candidate part contours

In some instances, part contours may enclose an insignificant number of pixels from the epitome epi_I. As the first step, we filter out such contours. For each part contour, we compute n_epi, the number of stroke pixels in epi_I that lie within the part’s boundary. We also compute n_I, the number of stroke pixels in I that lie within the part’s boundary. If the ratio n_epi / n_I is larger than a threshold, the contour is added to the candidate contour list. In Figure 5, the candidate contours are shown in bold (region labeled ①). Note that multiple occurrences of the same semantic-part type (e.g. the spokes of the bicycle) which satisfy the threshold criterion are counted independently. To avoid giving undue importance to multiple occurrences of the same part type, we normalize the candidate counts by the corresponding part contour counts in the full sketch I to obtain a ‘coarse’ part-importance factor for each semantic-part (region ② in Figure 5). We term it a ‘coarse’ importance factor since it implicitly takes raw pixel counts into account for determining part importance. Shortly, we shall see how the spatial structure of the strokes is also utilized in determining the final semantic-part importance.
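A minimal Python sketch of this filtering step, under the representation assumed earlier; the point-in-polygon test via matplotlib and the threshold value are our choices, not the authors’:

    import numpy as np
    from matplotlib.path import Path

    def stroke_pixels(img):
        """Return (x, y) coordinates of stroke pixels in a binary sketch image."""
        ys, xs = np.nonzero(img)
        return np.column_stack([xs, ys])

    def coarse_weight(full_img, epitome_img, contour, eps=1e-9):
        """Fraction of a part's full-sketch stroke pixels that survive in the epitome."""
        region = Path(contour)
        n_I = region.contains_points(stroke_pixels(full_img)).sum()
        n_epi = region.contains_points(stroke_pixels(epitome_img)).sum()
        return n_epi / (n_I + eps)

    def candidate_contours(full_img, epitome_img, contours, thresh=0.1):
        """Keep part contours whose epitome-to-full pixel ratio exceeds a threshold."""
        return [k for k, c in enumerate(contours)
                if coarse_weight(full_img, epitome_img, c) > thresh]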

Figure 6: Importance of semantic structural parts for the 13 object categories (airplane, bicycle, bus, car, cat, cow, dog, flying bird, horse, person walking, potted plant, sheep, train): each panel shows a word cloud of parts for the epitomes of one category. The size of a part name indicates its relative importance across epitomes of the category. The depictions above are for the Length stroke sequence ordering.
Figure 7: Importance of semantic structural parts for the same 13 object categories: each panel shows a word cloud of parts for the epitomes of one category. The size of a part name indicates its relative importance across epitomes of the category. The depictions above are for the Temporal stroke sequence ordering.
Figure 8: Importance of semantic structural parts for the same 13 object categories: each panel shows a word cloud of parts for the epitomes of one category. The size of a part name indicates its relative importance across epitomes of the category. The depictions above are for the Alternate stroke sequence ordering.

4.2 Obtaining ‘fine-grained’ part-importance weights

The process of annotating a part typically results in a 2-D closed contour. Points on the contour tend to lie in close proximity to the boundary stroke pixels of the object. We exploit this observation to obtain a ‘fine-grained’ part-importance factor for each part enclosed by a candidate contour.

Let U = {u_1, …, u_N} be the sets of points comprising the user-annotated boundary contours. Examples of such contours can be seen in the annotation image in Figure 5 (region ③). For a given part contour u_k, let F_k be the ‘full-sketch point’ set of 2-D locations of stroke pixels enclosed by the part contour in the full sketch I. Similarly, let E_k be the ‘epitome point’ set of 2-D locations of stroke pixels enclosed by the part contour in the epitome epi_I.

For each member f of the ‘full-sketch point’ set F_k, we find the closest matching point f′ on the contour u_k, i.e. f′ = argmin_{u ∈ u_k} ‖f − u‖. We retain only those matches whose distance ‖f − f′‖ is less than a threshold. Intuitively, this procedure enables us to identify stroke pixels of the full sketch which “hug” the candidate contour’s boundary. Let m_I be the number of such stroke pixels. Valid matches found using the above procedure are shown in red for a candidate part boundary in Figure 5 (region ④). A similar procedure gives us m_epi, the number of stroke pixels in the epitome which “hug” the candidate contour’s boundary (region ⑤ in Figure 5). The ratio r_k = m_epi / m_I provides a fine-grained importance for the part enclosed by the boundary: the higher the value of r_k, the larger the number of pixels from both the full sketch and the epitome which commonly “hug” the annotated part boundary. The part-importance is depicted in Figure 5 by the height of the cylinders adjoining the numbered regions ⑦ and ⑧. As we can see, the ‘wheel’ is more prominently present in the epitome than the ‘bicycle frame’, thereby according the former a larger part-importance (r_k) value.
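A sketch of this matching step in Python, assuming the point sets F_k and E_k have already been restricted to pixels enclosed by the contour (e.g. via the Section 4.1 filter); the KD-tree nearest-neighbour query and the threshold tau are our choices, as the paper does not publish its values:

    import numpy as np
    from scipy.spatial import cKDTree

    def count_hugging_pixels(stroke_pts, contour_pts, tau=3.0):
        """Count stroke pixels lying within tau pixels of the annotated contour."""
        if len(stroke_pts) == 0:
            return 0
        dists, _ = cKDTree(contour_pts).query(stroke_pts)
        return int((dists < tau).sum())

    def fine_weight(full_pts, epi_pts, contour_pts, tau=3.0):
        """r_k = m_epi / m_I: fraction of boundary-hugging pixels that survive."""
        m_I = count_hugging_pixels(full_pts, contour_pts, tau)
        m_epi = count_hugging_pixels(epi_pts, contour_pts, tau)
        return m_epi / max(m_I, 1)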

4.3 Obtaining category-wise part weights

The above procedure for obtaining the fine-grained importance weight is repeated for each part (indexed by k) enclosed by a candidate contour boundary, utilizing the pixel location sets F_k and E_k. These weights are combined with the coarse importance weights obtained in Section 4.1 to obtain the image-level part-wise importance weights (region labeled ⑥ in Figure 5). The part-wise aggregation of these weights over all the ‘full sketch’-epitome pairs of a category is normalized to obtain a probability distribution of part-level importance. This distribution can then be visualized as a semantic-part word cloud (see Figure 1) to determine the “signature” structural elements (semantic-parts) which persist in the epitomes of a category.
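A sketch of this aggregation and visualization step; the third-party wordcloud package and the toy numbers are our choices, for illustration only:

    from collections import defaultdict
    from wordcloud import WordCloud  # third-party package; one way to render the cloud

    def category_part_distribution(per_sketch_weights):
        """per_sketch_weights: one {part_name: importance} dict per full-sketch/
        epitome pair. Returns a normalized part-importance distribution."""
        totals = defaultdict(float)
        for weights in per_sketch_weights:
            for part, value in weights.items():
                totals[part] += value
        z = sum(totals.values()) or 1.0
        return {part: value / z for part, value in totals.items()}

    # Toy input (invented numbers):
    dist = category_part_distribution([
        {"wheel": 0.9, "frame": 0.4}, {"wheel": 0.8, "spoke": 0.7},
    ])
    cloud = WordCloud(width=400, height=300, background_color="white")
    cloud.generate_from_frequencies(dist)
    cloud.to_file("bicycle_parts.png")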

The pseudo-code for the procedure described in this section can be viewed in Appendix A. In the pseudo-code, sub-procedures whose details are provided in separate listings (e.g. GetPartStats, GetCandidateContours, CtValidMatch) appear as named calls.

5 Analyzing semantic-part word clouds

Before we proceed, it is important to point out that the category-epitome of a full sketch implicitly depends upon the order in which the individual strokes of the full sketch are considered. For example, the epitome can be constructed by considering the temporal order in which the strokes were originally drawn. Yet another epitome can be constructed if we consider strokes in decreasing order of stroke length. Essentially, for each full sketch, there can be as many category-epitomes as there are stroke orderings. In the analysis that follows, we keep the stroke ordering fixed over the set of categories. Details on the stroke orderings and their effect on the resulting category-epitomes can be found in [7].
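As an illustration of the three orderings, a small Python sketch; the stroke representation and, in particular, the interleaving used for Alternate are our reading of the description in [7], not the authors’ implementation:

    def order_strokes(strokes, scheme):
        """Reorder strokes; each stroke is a dict with 't' (drawing timestamp)
        and 'length' (arc length)."""
        if scheme == "Temporal":
            return sorted(strokes, key=lambda s: s["t"])
        if scheme == "Length":
            return sorted(strokes, key=lambda s: -s["length"])
        if scheme == "Alternate":
            # Interleave longest strokes with temporally-last (decorative) strokes.
            by_len = sorted(strokes, key=lambda s: -s["length"])
            by_rev_t = sorted(strokes, key=lambda s: -s["t"])
            out, seen = [], set()
            for pair in zip(by_len, by_rev_t):
                for s in pair:
                    if id(s) not in seen:
                        seen.add(id(s))
                        out.append(s)
            return out
        raise ValueError(f"unknown scheme: {scheme}")

    strokes = [{"t": 0, "length": 120}, {"t": 1, "length": 15}, {"t": 2, "length": 80}]
    print([s["t"] for s in order_strokes(strokes, "Length")])  # [0, 2, 1]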

The procedure described in Section 4 is used to generate semantic-part word clouds for the object categories we have chosen. Figure 6 shows the word clouds for the Length-based stroke ordering. Recall that the size of a semantic-part’s name indicates its dominance in the sparsified representations (epitomes). Categories which exhibit one or two dominant parts (e.g. horse, dog, potted plant) indicate that such parts are consistently present in most of the epitomes. This, in turn, suggests a consistency in the way sketches of the category are drawn. Word clouds of categories with more variety in depictions (e.g. airplane, person walking) tend to contain many parts whose names are similar in size. Another interesting trend exists across semantically related categories. For instance, ‘leg’ is the common defining signature part for the animal categories (cow, dog, horse, person walking, sheep). Similarly, for the vehicular categories (car, bus, bicycle, train), ‘wheel’ is a dominant part, and for the flying categories (airplane, bird), ‘wing’ is a dominant part.

The trends mentioned above can also be seen for the Temporal stroke ordering (see Figure 7). We observe that the part-importance trends are largely the same for each category across the stroke orderings. The epitomes created under the Temporal stroke ordering scheme tend to contain the strokes drawn towards the beginning of the sketch. Since the part-importance trends for Temporal are not very dissimilar from those of the Length-based ordering, this suggests, somewhat counter-intuitively, that people do not necessarily draw the “signature” parts of a sketch first. The Alternate stroke ordering consists of an alternating combination of the longest strokes and the decorative strokes (taken in temporally reversed order). However, even in this case, the essential dominance of the “signature” parts remains more or less unchanged across the categories (Figure 8). These results across the stroke ordering schemes suggest that the “signature” semantic parts live up to their name: they capture the discriminative structural elements of the category and are invariant to the manner in which sketch strokes are considered in the process of epitome construction.

A more traditional, tabular version of the part word clouds with numerical values for various stroke orderings can be viewed in Tables 1, 2 and 3.

6 Discussion and Future Work

In this paper, we have presented a novel framework for analyzing the structural characteristics of category-epitomes. We have shown that semantic-part annotations of sketches can be utilized to gain an intuitive understanding of category-level and stroke-ordering-level structural trends in category-epitomes. The database of part-annotated sketches of 13 object categories is another significant contribution of our work, since it now allows relationships with photographic image counterparts to be analyzed at a semantic-part level. Finally, the word cloud based analysis we have presented is quite general and can be applied to any spatial visual object representation for which part labelings have been provided.

At present, we have confined our analysis to the sketch database of Eitz et al. [2]. To examine the generalizability of our approach and results, it would be interesting to apply it to the part-segmented sketch database of Huang et al. [4]. Another possible extension would be to apply the sketch-part segmentation method suggested by the aforementioned authors to the entire set of categories (instead of the 13 we have chosen) from the database of Eitz et al. [2].

Appendix A Pseudo-code

procedure GetEpiPW(c, o)
    ▷ c — name of the category (e.g. bicycle)
    ▷ o — stroke sequence ordering (Temporal, Length, Alternate)

    ▷ Get the list of structural parts for the category
    parts ← GetListOfParts(c)
    ▷ Get part annotations for sketches in the category
    U ← GetUserAnnotations(c)
    ▷ Initialize the accumulator for the weight of each part across epitomes of the category
    W[p] ← 0 for each p ∈ parts

    ▷ For each correctly classified test image in the category
    for i ← 1 to NumSketches(c) do
        (I, epi_I) ← GetImages(c, o, i)
        ▷ I = full sketch, epi_I = corresponding category-epitome
        ▷ Get weights of parts as present in category-epitome epi_I
        w ← GetPartStats(U[i], I, epi_I, parts)
        ▷ Update the accumulator with the contribution from the current epitome
        W[p] ← W[p] + w[p] for each p ∈ parts
    end for

    ▷ Normalize the part weights
    W[p] ← W[p] / Σ_q W[q] for each p ∈ parts
    ▷ Sort the part weights in decreasing order
    (parts, W) ← Sort(parts, W)
    ▷ Return the list of parts and their weights in sorted order
    return (parts, W)
end procedure
Algorithm 1: Determines the contribution of structural part strokes to category-epitomes
procedure GetPartStats(U, I, epi_I, parts)
    ▷ U — part annotations for the sketch
    ▷ I — full sketch image
    ▷ epi_I — category-epitome image
    ▷ parts — list of structural parts for the category

    ▷ Get the frequency count of each part in the full sketch
    full_count ← GetPartCount(U, parts)
    ▷ Get the candidate contours
    cand_contour_ids ← GetCandidateContours(I, U, epi_I, parts)
    ▷ Get the frequency count of parts w.r.t. the candidate contour list
    cand_count ← GetPartCount(U, parts, cand_contour_ids)
    ▷ Normalize for multiple occurrences of the same part
    norm[p] ← cand_count[p] / full_count[p] for each p ∈ parts
    w[p] ← 0 for each p ∈ parts
    for each j ∈ cand_contour_ids do
        ▷ Count stroke pixels of the epitome “hugging” the candidate contour’s boundary
        m_epi ← CtValidMatch(epi_I, U, j)
        ▷ Count stroke pixels of the full sketch “hugging” the candidate contour’s boundary
        m_I ← CtValidMatch(I, U, j)
        ▷ Get the importance of the corresponding part
        r ← m_epi / m_I
        for each p ∈ parts do
            if PartName(U, j) = p then
                w[p] ← w[p] + norm[p] · r
                break
            end if
        end for
    end for
    return w
end procedure
Algorithm 2: Analyzes the stroke information of an epitome using the part annotations of the corresponding original sketch. Obtains a listing of the object parts which are prominent in the epitome and their relative importance.
procedure GetCandidateContours(I, U, epi_I, parts)
    ▷ I — full sketch image
    ▷ U — part annotation array for the full sketch image
    ▷ epi_I — category-epitome image
    ▷ parts — list of structural parts for the category

    for each annotated part instance p ∈ U do
        ▷ Get the 2-D part contour from the user annotation
        contour ← GetPartContour(U, p)
        ▷ Count stroke pixels within the part contour in the full sketch image
        n_I ← CountPixels(I, contour)
        ▷ Count stroke pixels within the part contour in the epitome
        n_epi ← CountPixels(epi_I, contour)
        ▷ If the fraction of epitome stroke pixels within the part contour is greater than a threshold, add the part as a candidate
        if n_epi / n_I > threshold then
            cand_contour_ids.Insert(p)
        end if
    end for
    return cand_contour_ids
end procedure
Algorithm 3: Gets the list of candidate contours
procedure AnalyzeParts(j, U, I, epi_I, parts, norm)
    ▷ j — candidate part (contour index)
    ▷ U — part annotations for the sketch
    ▷ I — full sketch image
    ▷ epi_I — category-epitome image
    ▷ parts — list of structural parts for the category
    ▷ norm — part-wise importance factor

    ▷ Count stroke pixels of the epitome which “hug” the candidate part’s boundary
    m_epi ← CtValidMatch(epi_I, U, j)
    ▷ Count stroke pixels of the full sketch which “hug” the candidate part’s boundary
    m_I ← CtValidMatch(I, U, j)
    ▷ Get the importance of this candidate part
    r ← m_epi / m_I
    w[p] ← 0 for each p ∈ parts
    ▷ Account for multiple occurrences of the same part in the epitome
    for each p ∈ parts do
        if PartName(U, j) = p then
            w[p] ← w[p] + norm[p] · r
        end if
    end for
    return w
end procedure
Algorithm 4: Gets the weights of candidate parts (which potentially contribute) in the epitome
procedure CtValidMatch(img, U, j)
    ▷ img — sketch image (full sketch or epitome)
    ▷ U — array of part annotations for sketches in the category
    ▷ j — index into U

    ▷ Get the 2-D part contour from the user annotation
    contour ← GetPartContour(U, j)
    ▷ Get stroke pixels from the image which lie inside the candidate part’s mask
    S ← GetStrokePixels(img, contour)
    ▷ For each stroke pixel p ∈ S, find the nearest pixel p′ on the candidate part boundary
    matches ← GetNearest(S, contour)
    ▷ Retain the matches whose distance is less than a threshold
    valid ← FilterList(matches, threshold)
    m ← Length(valid)
    return m
end procedure
Algorithm 5: Counts the number of stroke pixels of an image that lie “close” to the candidate part’s contour
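To tie the listings together, a compact Python rendering of Algorithm 1’s overall pipeline, reusing the helper sketches from Sections 4.1 and 4.2 (stroke_pixels, candidate_contours, fine_weight); the structure mirrors the pseudo-code, but the function names and data layout are our assumptions:

    from collections import defaultdict
    from matplotlib.path import Path

    def get_epi_pw(sketch_pairs, annotations, thresh=0.1):
        """sketch_pairs: list of (I, epi_I) boolean stroke masks; annotations:
        one [(part_name, contour), ...] list per pair, aligned with sketch_pairs."""
        totals = defaultdict(float)
        for (I, epi_I), U in zip(sketch_pairs, annotations):
            full_count = defaultdict(int)
            for name, _ in U:
                full_count[name] += 1      # part frequency in the full sketch
            cand = candidate_contours(I, epi_I, [c for _, c in U], thresh)  # Alg. 3
            cand_count = defaultdict(int)
            for k in cand:
                cand_count[U[k][0]] += 1   # part frequency among candidates
            for k in cand:
                name, contour = U[k]
                norm = cand_count[name] / full_count[name]   # coarse factor (Sec. 4.1)
                region = Path(contour)
                pts_I, pts_epi = stroke_pixels(I), stroke_pixels(epi_I)
                r = fine_weight(pts_I[region.contains_points(pts_I)],
                                pts_epi[region.contains_points(pts_epi)],
                                contour)                     # fine factor (Sec. 4.2)
                totals[name] += norm * r
        z = sum(totals.values()) or 1.0    # normalize to a distribution
        return sorted(((n, v / z) for n, v in totals.items()), key=lambda t: -t[1])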
Category | Epitome part-list and weights (temporal)
airplane: window (1.000), wing (0.373), fuselage (0.190), vertical stabilizer (0.183), wind shield (0.159), horizontal stabilizer (0.151), engine (0.095), door (0.048), nose (0.008)
bicycle: spoke (1.000), frame (0.441), wheel (0.304), handlebars (0.147), seat (0.127), pedal (0.093), chain (0.088)
bus: window (1.000), wheel (0.421), body (0.220), windshield (0.101), headlight (0.094), door (0.088), steering (0.044), roof (0.038)
car (sedan): wheel (1.000), window (0.963), frame (0.481), door (0.315), headlight (0.259), windshield (0.148), bumper (0.111), bonnet (0.074), seat (0.056), steering (0.037), radiator grille (0.037)
cat: whiskers (1.000), paw (0.531), eye (0.449), ear (0.449), leg (0.245), nose (0.224), tail (0.204), mouth (0.143)
cow: leg (1.000), ear (0.481), eye (0.462), patch (0.327), horn (0.308), tail (0.308), udder (0.269), mouth (0.231), nose (0.173)
dog: leg (1.000), eye (0.405), ear (0.333), nose (0.286), body (0.286), head (0.286), tail (0.262), mouth (0.190)
flying bird: wing (1.000), beak (0.500), head (0.500), body (0.500), tail (0.500), eye (0.455), leg (0.091)
horse: leg (1.000), hoofs (0.310), eye (0.264), head (0.264), tail (0.264), mane (0.230), mouth (0.138), nose (0.138), body (0.069)
person walking: leg (1.000), hand (0.940), foot (0.860), head (0.520), eye (0.480), mouth (0.240), chest (0.240), hair (0.140), nose (0.100)
potted plant: leaf (1.000), stem (0.382), pot (0.224), flower (0.127)
sheep: leg (1.000), eye (0.359), ear (0.321), mouth (0.269), body (0.269), tail (0.167), nose (0.000)
train: wheel (1.000), window (0.578), coach (0.311), engine (0.156), chimney (0.139), smoke (0.128), coupler (0.122), track (0.117), front (0.061), door (0.028)
Table 1: Category-wise part-importances for the Temporal stroke ordering. Part-importances are listed in decreasing order, relative to the most dominant part (shown with weight 1).
Category | Epitome part-list and weights (length)
airplane: wing (1.000), window (0.692), fuselage (0.522), vertical stabilizer (0.423), wind shield (0.340), horizontal stabilizer (0.223), engine (0.121), door (0.067), nose (0.023)
bicycle: frame (1.000), wheel (0.936), spoke (0.727), seat (0.239), chain (0.189), handlebars (0.149), pedal (0.124)
bus: window (1.000), wheel (0.629), body (0.346), windshield (0.150), door (0.132), roof (0.063), steering (0.045), headlight (0.044)
car (sedan): wheel (1.000), window (0.767), frame (0.619), door (0.254), headlight (0.162), windshield (0.131), bonnet (0.106), radiator grille (0.040), bumper (0.032), seat (0.027), steering (0.024)
cat: whiskers (1.000), paw (0.867), ear (0.717), leg (0.394), tail (0.334), eye (0.317), nose (0.232), mouth (0.066)
cow: leg (1.000), ear (0.327), patch (0.299), tail (0.261), horn (0.258), udder (0.220), mouth (0.158), nose (0.114), eye (0.110)
dog: leg (1.000), head (0.306), body (0.297), ear (0.290), tail (0.287), nose (0.173), mouth (0.166), eye (0.104)
flying bird: wing (1.000), tail (0.549), body (0.479), head (0.314), beak (0.225), leg (0.011), eye (0.000)
horse: leg (1.000), tail (0.295), mane (0.237), hoofs (0.236), head (0.232), eye (0.097), body (0.072), nose (0.059), mouth (0.058)
person walking: leg (1.000), foot (0.751), hand (0.638), head (0.507), chest (0.233), hair (0.094), mouth (0.037), nose (0.030), eye (0.005)
potted plant: leaf (1.000), stem (0.466), pot (0.359), flower (0.171)
sheep: leg (1.000), body (0.401), mouth (0.272), ear (0.259), tail (0.171), eye (0.035), nose (0.000)
train: wheel (1.000), coach (0.399), window (0.318), engine (0.195), track (0.141), chimney (0.140), coupler (0.119), smoke (0.095), front (0.078), door (0.013)
Table 2: Category-wise part-importances for the Length stroke ordering. Part-importances are listed in decreasing order, relative to the most dominant part (shown with weight 1).
Category | Epitome part-list and weights (alternate)
airplane: wing (1.000), window (0.928), fuselage (0.599), vertical stabilizer (0.471), wind shield (0.334), engine (0.256), horizontal stabilizer (0.193), door (0.054), nose (0.027)
bicycle: wheel (1.000), spoke (0.960), frame (0.933), seat (0.263), handlebars (0.260), chain (0.207), pedal (0.097)
bus: window (1.000), wheel (0.567), body (0.316), windshield (0.130), door (0.128), headlight (0.065), roof (0.056), steering (0.054)
car (sedan): wheel (1.000), window (0.935), frame (0.651), headlight (0.324), door (0.281), windshield (0.142), bonnet (0.115), radiator grille (0.059), bumper (0.057), steering (0.022), seat (0.010)
cat: whiskers (1.000), paw (0.992), ear (0.817), leg (0.447), tail (0.389), eye (0.336), nose (0.229), mouth (0.183)
cow: leg (1.000), ear (0.373), horn (0.308), patch (0.304), tail (0.263), eye (0.257), udder (0.236), mouth (0.174), nose (0.123)
dog: leg (1.000), head (0.328), body (0.324), tail (0.304), eye (0.295), ear (0.294), nose (0.273), mouth (0.193)
flying bird: wing (1.000), tail (0.551), body (0.514), head (0.387), beak (0.295), eye (0.102), leg (0.054)
horse: leg (1.000), tail (0.283), hoofs (0.259), head (0.242), mane (0.211), eye (0.196), mouth (0.112), nose (0.096), body (0.079)
person walking: leg (1.000), foot (0.839), hand (0.821), head (0.517), chest (0.247), mouth (0.136), eye (0.125), hair (0.118), nose (0.072)
potted plant: leaf (1.000), stem (0.388), pot (0.345), flower (0.145)
sheep: leg (1.000), body (0.440), eye (0.260), mouth (0.251), ear (0.214), tail (0.195), nose (0.000)
train: wheel (1.000), window (0.388), coach (0.369), engine (0.176), track (0.153), chimney (0.123), coupler (0.121), smoke (0.074), front (0.062), door (0.011)
Table 3: Category-wise part-importances for the Alternate stroke ordering. Part-importances are listed in decreasing order, relative to the most dominant part (shown with weight 1).

References

  • [1] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, 2014.
  • [2] M. Eitz, J. Hays, and M. Alexa. How do humans sketch objects? SIGGRAPH, 2012.
  • [3] G. Guo, Y. Wang, T. Jiang, A. L. Yuille, F. Fang, and W. Gao. A shape reconstructability measure of object part importance with applications to object detection and localization. IJCV, 2014.
  • [4] Z. Huang, H. Fu, and R. W. H. Lau. Data-driven segmentation and labeling of freehand sketches. SIGGRAPH Asia, 2014.
  • [5] A. Lovett, D. Gentner, K. Forbus, and E. Sagi. Using analogical mapping to simulate time-course phenomena in perceptual similarity. Cognitive Systems Research, 2009.
  • [6] C. Ma, Z. Dong, T. Jiang, Y. Wang, and W. Gao. A method of perceptual-based shape decomposition. In ICCV, 2013.
  • [7] R. K. Sarvadevabhatla and V. B. Radhakrishnan. Eye of the dragon: Exploring discriminatively minimalist sketch-based abstractions for object categories (URL: http://val.serc.iisc.ernet.in/eotd-preprint.pdf). In ACMMM, 2015.