Zero-Shot Recognition with Unreliable Attributes

09/15/2014 ∙ by Dinesh Jayaraman, et al. ∙ The University of Texas at Austin

In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes. For example, with classifiers for generic attributes like striped and four-legged, one can construct a classifier for the zebra category by enumerating which properties it possesses---even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute's error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly.

1 Introduction

Visual recognition research has achieved major successes in recent years using large datasets and discriminative learning algorithms. The typical scenario assumes a multi-class task where one has ample labeled training images for each class (object, scene, etc.) of interest. However, many real-world settings do not meet these assumptions. Rather than fix the system to a closed set of thoroughly trained object detectors, one would like to acquire models for new categories with minimal effort and training examples. Doing so is essential not only to cope with the “long-tailed” distribution of objects in the world, but also to support applications where new categories emerge dynamically—for example, when a scientist defines a new phenomenon of interest to be detected in her visual data.

Zero-shot learning offers a compelling solution. In zero-shot learning, a novel class is trained via description—not labeled training examples [9, 16, 7]. In general, this requires the learner to have access to some mid-level semantic representation, such that a human teacher can define a novel unseen class by specifying a configuration of those semantic properties. In visual recognition, the semantic properties are attributes shared among categories, like black, has ears, or rugged. Supposing the system can predict the presence of any such attribute in novel images, then adding a new category model amounts to defining its attribute “signature” [7, 3, 16, 22, 17]. For example, even without labeling any images of zebras, one could build a zebra classifier by instructing the system that zebras are striped, black and white, etc.

Interestingly, computational models for attribute-based recognition are supported by the cognitive science literature, where researchers explore how humans conceive of objects as bundles of attributes [23, 15, 5]. Natural categories appear to be convex regions in a conceptual space with axes corresponding to (attribute-like) “psychological quality dimensions” [5]. Furthermore, new category systems evolve to provide maximum information with least cognitive effort by mapping categories to attribute structures [23], and novel human judgments can be extrapolated based on how people associate predicates (biological attributes) with object names (mammals) [15].

So, in principle, if we could perfectly predict attribute presence (and had an attribute vocabulary rich enough to form distinct signatures for each category of interest), zero-shot learning would offer an elegant solution to generating novel classifiers on the fly. The problem, however, is that we can’t assume perfect attribute predictions. Visual attributes are in practice quite difficult to learn accurately—often even more so than object categories themselves. This is because many attributes are correlated with one another (given only images of furry brown bears, how do we learn furry and brown separately?), and abstract linguistic properties can have very diverse visual instantiations (compare a bumpy road to a bumpy rash). Thus, attribute-based zero-shot recognition remains in the “proof of concept” realm, in practice falling short of alternate transfer methods [21].

We propose an approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. Whereas existing methods take attribute predictions at face value, our method during training acknowledges the known biases of the mid-level attribute models. Specifically, we develop a random forest algorithm that, given attribute signatures for each category, exploits the attribute classifiers’ receiver operating characteristics to select discriminative and predictable decision nodes. We further generalize the idea to account for unreliable class-attribute associations. Finally, we extend the solution to the “few-shot” setting, where a small number of category-labeled images are also available for training.

We demonstrate the idea on three large datasets of object and scene categories, and show its clear advantages over status quo models. Our results suggest the valuable role attributes can play for low-cost object category learning, in spite of the inherent difficulty in learning them reliably.

2 Related Work

Most existing zero-shot models take a two-stage classification approach: given a novel image, first its attributes are predicted, then its class label is predicted as a function of those attributes. For example, in [3, 16, 27], each unseen object class is described by a binary indicator vector (“signature”) over its attributes; a new image is mapped to the unseen class with the signature most similar to its attribute predictions. The probabilistic Direct Attribute Prediction (DAP) method [7] takes a similar form, but adds priors for the classes and attributes and computes a MAP prediction of the unseen class label. A topic model variant is explored in [28]. The DAP model has gained traction and is often used in other work [21, 17, 26]. In all of the above methods, as in ours, training an unseen class amounts to specifying its attribute signature. In contrast to our approach, none of the existing methods account for attribute unreliability when learning an unseen category. As we will see in the results, this has a dramatic impact on generalization.

We stress that attribute unreliability is distinct from attribute strength. The former (our focus) pertains to how reliable the mid-level classifier is, whereas the latter pertains to how strongly an image exhibits an attribute (e.g., as modeled by relative [17] or probabilistic [7] attributes). PAC bounds on the tolerable error for mid-level classifiers are given in [16], but that work does not propose a solution to mitigate the influence of their uncertainty.

While the above two-stage attribute-based formulation is most common, an alternative zero-shot strategy is to exploit external knowledge about class relationships to adapt classifiers to an unseen class. For example, an unseen object’s classifier can be estimated by combining the nearest existing classifiers (trained with images) in the ImageNet hierarchy [21, 12], or by combining classifiers based on label co-occurrences [11]. In a similar spirit, label embeddings [1] or feature embeddings [4] can exploit semantic information for zero-shot predictions. Unlike these models, we focus on defining new categories through language-based description (with attributes). This has the advantage of giving a human supervisor direct control on the unseen class’s definition, even if its attribute signature is unlike that observed in any existing trained model.

Some zero-shot models generalize to the “few-shot” case where a small number of labels are available [28, 24, 12, 1]. We show how the proposed random forest model can learn simultaneously from signatures and labeled images, enabling few-shot learning with unreliable attribute predictions.

Acknowledging that attribute classifiers are often unreliable, recent work abandons purely semantic attributes in favor of discovering mid-level features that are both detectable and discriminative for a set of class labels [10, 20, 24, 13, 27, 1]. However, there is no guarantee that the discovered features will align with semantic properties, particularly “nameable” ones. This typically makes them inapplicable to zero-shot learning, since a human supervisor can no longer define the unseen class with concise semantic terms. Nonetheless, one can attempt to assign semantics post-hoc (e.g.,  [27]). We demonstrate that our method can benefit zero-shot learning with such discovered (pseudo)-attributes as well.

Our idea for handling unreliable attributes in random forests is related to fractional tuples for handling missing values in decision trees [19]. In that approach, points with missing values are distributed down the tree in proportion to the observed values in all other data. Similar concepts are explored in [25] to handle features represented as discrete distributions and in [14] to propagate instances with soft node memberships. Our approach also entails propagating training instances in proportion to uncertainty. However, our zero-shot scenario is distinct, and, accordingly, the training and testing domains differ in important ways. At training time, rather than build a decision tree from labeled data points, we construct each tree using the unseen classes’ attribute signatures. Then, at test time, the inputs are attribute classifier predictions. Furthermore, we show how to propagate both signatures and data points through the tree simultaneously, which makes it possible to account for inter-dependencies among the input dimensions and also enables a few-shot extension.

3 Approach

Given a vocabulary of M visual attributes, each unseen class k is described in terms of its attribute signature A_k, an M-dimensional vector whose entry a_km gives the association of attribute m with class k. (We use “class” and “category” to refer to an object or scene, e.g., zebra or beach, and “attribute” to refer to a property, e.g., striped or sunny; “unseen” means we have no training images for that class.) Typically the association values would be binary—meaning that the attribute is always present/absent in the class—but they may also be real-valued when such fine-grained data is available. We model each unseen class with a single signature (e.g., whales are big and gray). However, it is straightforward to handle the case where a class has a multi-modal definition (e.g., whales are big and gray OR whales are big and black), by learning a zero-shot model per “mode”. Whether the attribute vocabulary is hand-designed [7, 3, 17, 26, 21] or discovered [27, 10, 20], our approach assumes it is expressive enough to discriminate between the categories.

Suppose there are K unseen classes of interest, for which we have no training images. Our zero-shot method takes as input the K attribute signatures A_1, …, A_K and a dataset of images labeled with attributes, and produces a classifier for each unseen class as output. At test time, the goal is to predict which unseen class appears in a novel image.

In the following, we first describe the initial stage of building the attribute classifiers (Sec. 3.1). Then we introduce a zero-shot random forest trained with attribute signatures (Sec. 3.2). Next we explain how to augment that training procedure to account for attribute unreliability (Sec. 3.2.2) and signature uncertainty (Sec. 3.2.3). Finally, we present an extension to few-shot learning (Sec. 3.3).

3.1 Learning the attribute vocabulary

As in any attribute-based zero-shot method [3, 7, 16, 21, 17, 6, 26], we must first train classifiers to predict the presence or absence of each of the M attributes in novel images. Importantly, the images used to train the attribute classifiers may come from a variety of objects/scenes and need not contain any instances of the unseen categories. The fact that attributes are shared across category boundaries is precisely what allows zero-shot learning.

Let T = {(x_i, y_i)} denote the attribute training set comprised of N_t images. Each x_i is a descriptor (e.g., HOG, SIFT bag of words, etc.) for image i, and each y_i is a binary M-dimensional label vector specifying which attributes are present in that image, i.e., y_i^(m) = 1 indicates attribute m is present in image i. (For simplicity we assume all images are labeled for all attributes, but there could easily be a separate training set for each attribute m. The validation data must have all attributes labeled for each image.) To learn a mapping from these descriptors to attribute presence scores, we train support vector machines (SVMs), one per attribute. Let â_m(x) denote the probabilistic output from the m-th such SVM, as computed with Platt scaling.
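
For concreteness, here is a minimal sketch of this stage in Python with scikit-learn. The linear kernel, function names, and array layout are illustrative assumptions; the paper itself uses combined χ²-kernel SVMs (Sec. 4).

```python
import numpy as np
from sklearn.svm import SVC

def train_attribute_classifiers(X, Y):
    """Train one probabilistic SVM per attribute.

    X : (N, D) array of image descriptors.
    Y : (N, M) binary array; Y[i, m] = 1 if attribute m is present in image i.
    Returns a list of M fitted classifiers whose predict_proba output is the
    Platt-scaled attribute presence score.
    """
    classifiers = []
    for m in range(Y.shape[1]):
        # probability=True fits a Platt-scaling sigmoid on top of the SVM scores
        clf = SVC(kernel="linear", probability=True)
        clf.fit(X, Y[:, m])
        classifiers.append(clf)
    return classifiers

def predict_attribute_scores(classifiers, X):
    """Return an (N, M) matrix of predicted attribute probabilities."""
    return np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])
```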

In addition, during random forest training (Sec. 3.2.2), we will use a disjoint validation set V, consisting of attribute-labeled images, to gauge the error tendencies of each attribute classifier. This entails evaluating the validation data’s receiver operating characteristic (ROC) values at a given operating point (threshold) t. For example, the false positive rate for attribute m at threshold t is determined by the count of validation instances for which â_m(x_i) > t and y_i^(m) = 0.
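
A small sketch of how those operating-point statistics could be computed (the function name is an assumption; inputs are NumPy arrays):

```python
def roc_rates_at_threshold(scores, labels, t):
    """Estimate TPR/FPR of one attribute classifier at operating point t.

    scores : (N,) predicted probabilities on validation images.
    labels : (N,) binary ground-truth labels for attribute m.
    Returns (tpr, fpr) at threshold t.
    """
    fires = scores > t
    pos = labels == 1
    tpr = fires[pos].mean() if pos.any() else 0.0      # P(score > t | attribute present)
    fpr = fires[~pos].mean() if (~pos).any() else 0.0  # P(score > t | attribute absent)
    return tpr, fpr
```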

3.2 Zero-shot random forests

Next we introduce our key contribution: a random forest model for zero-shot learning.

3.2.1 Basic formulation: Signature random forest

First we define a basic random forest training algorithm for the zero-shot setting. The main idea is to train an ensemble of decision trees using attribute signatures—not image descriptors or vectors of attribute predictions. In the zero-shot setting, this is all the training information available. Later, at test time, we will have an image in hand, and we will apply the trained random forest to estimate its class posteriors.

Recall that the k-th unseen class is defined by its attribute signature A_k. We treat each such signature as the lone positive “exemplar” for its class, and discriminatively train random forests to distinguish all the signatures, A_1, …, A_K. We take a one-versus-all approach, training one forest for each unseen class. So, when training class k, all other class signatures are the negatives.

For each class, we build an ensemble of decision trees in a breadth-first manner. Each tree is learned by recursively splitting the signatures into subsets at each node, starting at the root. Let z_n denote an indicator vector of length K that records which signatures appear at node n. For the root node, all signatures are present, so every entry of z_root is 1. Following the typical random forest protocol [2], the training instances are recursively split according to a randomized test; it compares one dimension m of the signature against a threshold t, then propagates each one to the left child or right child depending on the outcome, yielding indicator vectors z_{n_l} and z_{n_r}. Specifically, we have z_{n_l}(c) = z_n(c)·[a_cm > t] and z_{n_r}(c) = z_n(c)·[a_cm ≤ t] for each class c, where [·] denotes the indicator function.

Thus, during training we must choose two things at each node: the query attribute m and the threshold t, represented jointly as the split s = (m, t). We sample a limited number of (m, t) combinations and choose the one that maximizes the expected information gain I(s):

(1)   $s^{*} = \operatorname*{argmax}_{s}\; I(s)$
(2)   $I(s) = H(P_n) \;-\; \frac{\lVert z_{n_l} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_l}) \;-\; \frac{\lVert z_{n_r} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_r})$

where H(P) denotes the entropy of a distribution P. The 1-norm on the indicator vectors sums up the occurrences of each signature, which for now are binary. Since we are training a zero-shot forest to discriminate class k from the rest, the distribution over class labels at node n is a length-2 vector:

(3)   $P_n = \Big[\, \frac{z_n(k)}{\lVert z_n \rVert_1}, \;\; \frac{\sum_{c \neq k} z_n(c)}{\lVert z_n \rVert_1} \,\Big]$

We grow each tree in the forest to a fixed maximum depth, terminating a branch prematurely if fewer than 5% of training samples have reached a node on it. We learn T trees per forest.
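
A compact sketch of this basic split selection in Python (binary signatures; NumPy; the helper names and the sampling scheme for candidate tests are illustrative assumptions):

```python
import numpy as np

def entropy(p):
    """Entropy of a discrete distribution, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def class_distribution(z, k):
    """Length-2 distribution [class k, rest] from occurrence vector z (Eqn. 3)."""
    total = z.sum()
    return np.array([z[k], total - z[k]]) / total

def choose_split(signatures, z, k, n_candidates=20, rng=np.random):
    """Pick the (attribute, threshold) test maximizing information gain (Eqns. 1-2).

    signatures : (K, M) binary class-attribute signatures.
    z          : (K,) occurrence vector of the signatures at this node.
    k          : index of the class this one-vs-all forest is being trained for.
    """
    best_gain, best_split = -1.0, None
    h_parent = entropy(class_distribution(z, k))
    for _ in range(n_candidates):
        m = rng.randint(signatures.shape[1])   # query attribute
        t = rng.rand()                         # threshold in (0, 1)
        go_left = signatures[:, m] > t
        z_l, z_r = z * go_left, z * ~go_left
        if z_l.sum() == 0 or z_r.sum() == 0:
            continue                           # degenerate split
        gain = h_parent - (z_l.sum() * entropy(class_distribution(z_l, k))
                           + z_r.sum() * entropy(class_distribution(z_r, k))) / z.sum()
        if gain > best_gain:
            best_gain, best_split = gain, (m, t)
    return best_split, best_gain
```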

Given a novel test image x, we compute its predicted attribute signature â(x) = [â_1(x), …, â_M(x)] by applying the attribute SVMs. Then, to predict the posterior for class k, we use â(x) to traverse to a leaf node in each tree of class k’s forest. Let p_τ(k | x) denote the fraction of positive training instances at the leaf node reached in tree τ of the forest for class k. (For this basic formulation, note that each leaf will return 0 or 1.) The forest posterior P(k | x) is then the average of the leaf posteriors across the ensemble.
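
A sketch of this test-time prediction (the nested-dict tree representation is an assumption made purely for illustration):

```python
def tree_posterior(node, a_hat):
    """Walk one decision tree using the predicted attribute scores a_hat.

    Internal nodes are assumed to look like
    {"attr": m, "thresh": t, "left": ..., "right": ...};
    leaves store {"posterior": fraction_of_positive_mass}.
    """
    while "posterior" not in node:
        node = node["left"] if a_hat[node["attr"]] > node["thresh"] else node["right"]
    return node["posterior"]

def forest_posterior(trees, a_hat):
    """Average the per-tree posteriors for one class's forest."""
    return sum(tree_posterior(t, a_hat) for t in trees) / len(trees)

# The predicted label is the class whose forest posterior is largest, e.g.:
# predicted = max(range(K), key=lambda k: forest_posterior(forests[k], a_hat))
```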

If we somehow had perfect attribute classifiers, this basic zero-shot random forest (in fact, one such tree alone) would be sufficient. Next, we show how to adapt the training procedure defined so far to account for their unreliability.

3.2.2 Accounting for attribute prediction unreliability

While our training “exemplars” are the true attribute signatures for each unseen class, the test images will have only approximate estimates of the attributes they contain. We therefore augment the zero-shot random forest to account for this unreliability during training. The main idea is to generalize the recursive splitting procedure above such that a given signature can pursue multiple paths down the tree. Critically, those paths will be determined by the false positive/true positive rates of the individual attribute predictors. In this way, we expand each idealized training signature into a distribution in the predicted attribute space. Essentially, this preemptively builds in the appropriate “cushion” of expected errors when choosing discriminative splits.

Implementing this idea requires two primary extensions to the formulation in Sec. 3.2.1: (i) we inject the validation data and its associated receiver operating characteristics into the tree formation process, and (ii) we redefine the information gain to account for the partial propagation of training signatures. We explain each of these components in turn next.

Now, in addition to signatures, at each node we maintain a subset of the validation data V (see Sec. 3.1). The data is recursively propagated down the tree following the splits, as they are chosen. Let V_n denote the set of validation data inherited at node n. At the root, V_root = V.

For any node n, let z_n now be a real-valued indicator vector, such that z_n(c) records the fractional occurrence of the training signature for class c at node n. At the root, every entry of z_root is 1. For a split s = (m, t) at node n, a signature splits into the left and right child nodes according to its receiver operating characteristic (ROC) for attribute m at the operating point specified by t. In particular, we have:

(4)   $z_{n_l}(c) = P\big(\hat{a}_m(x) > t \mid a_{cm}\big)\, z_n(c), \qquad z_{n_r}(c) = P\big(\hat{a}_m(x) \le t \mid a_{cm}\big)\, z_n(c)$

where the probabilities are estimated on the validation data V_n at node n, using each validation image’s ground-truth label for attribute m in place of a_cm. When a_cm = 1, these are the true positive and false negative rates at threshold t, respectively; when a_cm = 0, they are the false positive and true negative rates. To illustrate what this equation means, consider a class “elephant” known to have the attribute “gray”. If the “gray” attribute classifier at the chosen threshold fires on only 60% of the “gray” samples in the validation set, i.e., TPR = 0.6, then only a 0.6 fraction of the “elephant” signature is passed on to the positive (left) node. This process repeats through more levels until fractions of the single “elephant” signature have reached all leaf nodes. Thus, a single class signature emulates the estimated statistics of a full training set of class-labeled instances with attribute predictions.
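
A sketch of this fractional propagation for one candidate split, with the ROC rates estimated on the node-specific validation subset (array names are assumptions):

```python
import numpy as np

def propagate_fractional(z, signatures, m, t, val_scores, val_labels):
    """Split fractional signature mass z into left/right children (Eqn. 4).

    z          : (K,) fractional occurrences of each class signature at node n.
    signatures : (K, M) binary class-attribute signatures.
    val_scores : (Nv, M) predicted attribute probabilities on the validation subset V_n.
    val_labels : (Nv, M) binary attribute ground truth on V_n.
    """
    fires = val_scores[:, m] > t
    pos = val_labels[:, m] == 1
    tpr = fires[pos].mean() if pos.any() else 0.0
    fpr = fires[~pos].mean() if (~pos).any() else 0.0
    # classes whose signature says attribute m is present go left with prob. TPR;
    # classes whose signature says it is absent still leak left with prob. FPR
    p_left = np.where(signatures[:, m] == 1, tpr, fpr)
    return p_left * z, (1.0 - p_left) * z

def split_validation(val_scores, val_labels, m, t):
    """Send each validation image down the branch its predicted score selects."""
    left = val_scores[:, m] > t
    return (val_scores[left], val_labels[left]), (val_scores[~left], val_labels[~left])
```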

We stress two things about the validation data propagation. First, the data in V_n is labeled by attributes only; it has no unseen class labels and never features in the information gain computation. Its only role is to estimate the ROC values. Second, the recursive sub-selection of the validation data is important to capture the dependency of TPR/FPRs at higher-level splits. For example, if we were to select the split s = (m, t) at the root, then the validation images pushed to the left child must all have â_m(x) > t, meaning that for a candidate split (m, t′) at the left child, where t′ < t, the correct TPR and FPR are both 1. This is accounted for when we use V_{n_l} to compute the ROC, but would not have been had we just used V. Thus, our formulation properly accounts for dependencies between attributes when selecting discriminative thresholds, an issue not addressed by existing methods for missing [19] or uncertain features [25].

When building a zero-shot tree conscious of attribute unreliability, we choose the split maximizing the expected information gain according to the fractionally propagated signatures:

(5)   $I(s) = H(P_n) \;-\; \frac{\lVert z_{n_l} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_l}) \;-\; \frac{\lVert z_{n_r} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_r}), \quad \text{with } z_{n_l}, z_{n_r} \text{ as in Eqn. (4)}$

The distribution P_n is computed as in Eqn. (3), now over the real-valued occurrence vector z_n.

The discriminative splits under this criterion will be those that not only distinguish the unseen classes but also persevere (at test time) as a strong signal in spite of the attribute classifiers’ error tendencies. This means the trees will prefer both reliable attributes that are discriminative among the classes, as well as less reliable attributes coupled with intelligently selected operating points that remain distinctive. Furthermore, they will omit splits that, though highly discriminative in terms of idealized signatures, were found to be “unlearnable” among the validation data. For example, in the extreme case, if an attribute classifier cannot distinguish positives and negatives, meaning that TPR = FPR, then the signatures of all classes are equally likely to propagate to the left or right, i.e., z_{n_l}(c) = TPR·z_n(c) and z_{n_r}(c) = (1 − TPR)·z_n(c) for all c, which yields an information gain of 0 in Eqn. (5) (see Sec. 6.1). Thus, our method, while explicitly making the best of imperfect attribute classification, inherently prefers more learnable attributes.
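
As a quick numeric check of this claim, reusing the entropy and class_distribution helpers from the sketch in Sec. 3.2.1 (the values below are illustrative):

```python
import numpy as np

# Two classes, one attribute; class 0 has the attribute, class 1 does not.
z = np.array([1.0, 1.0])
# Unlearnable attribute: the classifier fires at the same rate on positives and negatives.
tpr = fpr = 0.4
z_l = np.array([tpr * z[0], fpr * z[1]])   # = 0.4 * z
z_r = z - z_l                              # = 0.6 * z
gain = (entropy(class_distribution(z, 0))
        - (z_l.sum() * entropy(class_distribution(z_l, 0))
           + z_r.sum() * entropy(class_distribution(z_r, 0))) / z.sum())
print(round(gain, 10))   # 0.0 -- such a split is never selected
```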

The proposed approach produces classifiers for unseen categories with zero category-labeled images. The attribute-labeled validation data plays an important role in our solution’s robustness. Were that data to perfectly represent the true attribute errors on images from the unseen classes (which we cannot access, of course, because images from those classes appear only at test time), then our training procedure would be equivalent to building a random forest on the test samples’ attribute classifier outputs.

3.2.3 Accounting for class signature uncertainty

Beyond attribute classifier unreliability, our framework can also deal with another source of zero-shot uncertainty: instances of a class often deviate from the class-level attribute signature. To tackle this, we redefine the soft indicators in Eqn. (4), appending a term to account for annotation noise (see Sec. 6.2). Equivalently, in implementation, we add perturbed copies of each class c’s “exemplar” attribute signature (e.g., for binary signatures, with some fraction of bits flipped in each copy) to the forest training data. Please see Sec. 6.2 for details.
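
A minimal sketch of this equivalent implementation, flipping a fraction of only the positive bits as described in Sec. 6.2 (the flip fraction, copy count, and function name are illustrative assumptions):

```python
import numpy as np

def perturbed_signatures(signature, n_copies=10, flip_frac=0.15, rng=np.random):
    """Generate noisy copies of one binary class-attribute signature.

    Only positive bits are flipped, reflecting that class-level attributes may be
    missing from a particular instance (e.g., occluded parts), per Sec. 6.2.
    """
    pos_idx = np.flatnonzero(signature == 1)
    n_flip = int(round(flip_frac * len(pos_idx)))
    copies = []
    for _ in range(n_copies):
        copy = signature.copy()
        flip = rng.choice(pos_idx, size=n_flip, replace=False)
        copy[flip] = 0
        copies.append(copy)
    return np.stack(copies)
```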

3.3 Extending to few-shot random forests

Our approach admits a natural extension to few-shot training. In this case, we are given not only the attribute signatures, but also a dataset I consisting of a small number of images with their class labels. We essentially use the signatures as a prior for selecting good tree splits that also satisfy the traditional training examples. The information gain on the signatures, I_sig(s), is as defined in Sec. 3.2.2, while the information gain on the images, I_I(s), is as defined in Sec. 3.2.1. The latter reflects the fact that the training images, like the validation data, are represented by actual attribute classifier outputs and thus require no uncertainty model during propagation. Using some notational shortcuts, for few-shot training we recursively select the split that maximizes the combined information gain:

(6)   $I_{\text{few}}(s) = (1 - \lambda)\, I_{\mathcal{I}}(s) \;+\; \lambda\, I_{\text{sig}}(s)$

where λ ∈ [0, 1] controls the role of the signature-based prior. Intuitively, we can expect lower values of λ to suffice as the size of I increases, since with more training examples we can more precisely learn the class’s appearance. This few-shot extension can be interpreted as a new way to learn random forests with descriptive priors.
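
A sketch of how the two gains could be combined when scoring a candidate split; signature_gain and image_gain are assumed helpers implementing the gains of Secs. 3.2.2 and 3.2.1, and the weighting shown matches the reconstruction of Eqn. (6) above:

```python
def few_shot_gain(split, signatures, z, val_data, images, image_labels, lam):
    """Combined information gain for few-shot training.

    lam = 0 uses only the labeled images; lam = 1 reduces to zero-shot learning.
    """
    i_sig = signature_gain(split, signatures, z, val_data)   # fractional signatures (Sec. 3.2.2)
    i_img = image_gain(split, images, image_labels)          # labeled images' attribute predictions (Sec. 3.2.1)
    return (1.0 - lam) * i_img + lam * i_sig
```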

4 Experiments

Datasets and setup

We use three datasets: (1) Animals with Attributes (AwA) [7] (85 attributes, 10 unseen classes, 30,475 total images), (2) aPascal/aYahoo objects (aPY) [3] (64 attributes, 12 unseen classes, 15,339 images), and (3) SUN scene attributes (SUN) [18] (102 attributes, 10 unseen classes, 14,340 images). These datasets capture a wide array of categories (animals, indoor and outdoor scenes, household objects, etc.) and attributes (parts, affordances, habitats, shapes, materials, etc.). The attribute-labeled images originate from 40, 20, and 707 “seen” classes in each dataset, respectively; we use the class labels solely to map to attribute annotations. We use the unseen class splits specified in [8] for AwA and aPY, and randomly select the 10 unseen classes for SUN (see Sec. 6.5). For all three, we use the features provided with the datasets, which include color histograms, SIFT, PHOG, and others (see [8, 3, 18] for details).

Following [7], we train attribute SVMs with combined χ²-kernels, one kernel per feature channel. Our method reserves 20% of the attribute-labeled images as ROC validation data, then pools it with the remaining 80% to train the final attribute classifiers. Our method and all baselines have access to exactly the same amount of attribute-labeled data.

We report all random forest results as the mean and standard error measured over 20 random trials. We build random forests with T trees per forest. Based on cross-validation, we use tree depths of 9 (AwA), 6 (aPY), and 8 (SUN), and sample candidate tests per node using (number of attributes, number of thresholds) pairs of (10, 7) for AwA, (8, 2) for aPY, and (4, 5) for SUN. When too few validation data points (positive or negative) reach a node n, we revert to computing the ROC statistics over the full validation set V rather than V_n.

Baselines

In addition to several state-of-the-art published results and ablated variants of our method, we also compare to two baselines: (1) signature-rf: random forests trained on class-attribute signatures as described in Sec. 3.2.1, without an attribute unreliability model, and (2) dap: Direct Attribute Prediction [7, 8], which is a leading attribute-based zero-shot object recognition method widely used in the literature [3, 16, 27, 21, 17, 26]. (We use the authors’ code: http://attributes.kyb.tuebingen.mpg.de/)

4.1 Zero-shot object and scene recognition

Controlled noise experiments

Our approach is designed to overcome the unreliability of attribute classifiers. To glean insight into how it works, we first test it with controlled noise in the test images’ attribute predictions. We start with hypothetical perfect attribute classifier scores, â_m(x) = a_km for every image x in class k, then progressively add noise to represent increasing errors in the predictions. We examine two scenarios: (1) where all attribute classifiers are equally noisy, and (2) where the average noise level varies per attribute. See Sec. 6.4 for details on the noise model.

Figure 1 shows the results using AwA. By definition, all methods are perfectly accurate with zero noise. Once the attributes are unreliable (i.e., the noise level exceeds zero), however, our approach is consistently better. Furthermore, our gains are notably larger in the second scenario where noise levels vary per attribute (right plot), illustrating how our approach properly favors more learnable attributes as discussed in Sec. 3.2.2. In contrast, signature-rf is liable to break down with even minor imperfections in attribute prediction. These results affirm that our method benefits from both (1) estimating and accounting for classifier noisiness and (2) avoiding uninformative attribute classifiers.

Figure 1: Zero-shot accuracy on AwA as a function of attribute uncertainty, for controlled noise scenarios.
Method/Dataset AwA aPY SUN
dap 40.50 18.12 52.50
signature-rf 36.65 ± 0.16 12.70 ± 0.38 13.20 ± 0.34
ours w/o roc prop, sig uncertainty 39.97 ± 0.09 24.25 ± 0.18 47.46 ± 0.29
ours w/o sig uncertainty 41.88 ± 0.08 24.79 ± 0.11 56.18 ± 0.27
ours 43.01 ± 0.07 26.02 ± 0.05 56.18 ± 0.27
ours+true roc 54.22 ± 0.03 33.54 ± 0.07 66.65 ± 0.31
Table 1: Zero-shot learning accuracy on all three datasets. Accuracy is the percentage of correct category predictions on unseen class images, ± standard error.
(a) Few-shot. Stars denote selected λ.
Method Accuracy
Lampert et al. [7] 40.5
Yu and Aloimonos [28] 40.0
Rohrbach et al. [22] 35.7
Kankuekul et al. [6] 32.7
Yu et al. [27] 48.3
ours (named attributes) 43.0 ± 0.07
ours (discovered attributes) 48.7 ± 0.09
(b) Zero-shot vs. state of the art
Figure 2: (a) Few-shot results. (b) Zero-shot results on AwA compared to the state of the art.
Real unreliable attributes experiments

Next we present the key zero-shot results for our method applied to three challenging datasets using over 250 real attribute classifiers. Table 1 shows the results. Our method significantly outperforms the existing dap method [8]. This is an important result: dap is today the most commonly used model for zero-shot object recognition, whether using this exact dap formulation [7, 21, 17, 26] or very similar non-probabilistic variants [3, 27]. Furthermore, this demonstrates that modeling only the confidence of an attribute’s presence in a test image (which dap does) is inadequate; our idea of characterizing their error tendencies during training is valuable. Our substantial improvements over signature-rf also confirm it is imperative to model attribute classifier unreliability. Our gains over dap are especially large on SUN and aPY, which have fewer positive training samples per attribute, leading to less reliable attribute classifiers—exactly where our method is needed most. If we repeat the same experiment on AwA with the attribute training set reduced to 500 randomly chosen images, our gain over dap widens to 8 points (28.0 ± 0.9 vs. 20.42).

Table 1 also helps isolate the impact of two components of our method: the model of signature uncertainty (see ours w/o sig uncertainty), and the recursive propagation of validation data (see ours w/o roc prop, sig uncertainty). For the latter, we further compute TPR/FPRs globally on the full validation dataset rather than for the node-specific subsets V_n. We see both aspects contribute to our full method’s best performance (see ours). Finally, ours+true roc provides an “upper bound” on the accuracy achievable with our method for these datasets; this is the result attainable were we to use the unseen class images as validation data V. This also points to an interesting direction for future work: to better model expected error rates on images with unseen attribute combinations. Our initial attempts in this regard included focusing validation data on seen class images with signatures most like those of the unseen classes, but the impact was negligible.

Figure 2(b) compares our method against all published results on AwA, using both named and discovered attributes. When using the standard AwA named attributes, our method comfortably outperforms all prior methods. Further, when we use the discovered attributes from [27], we perform comparably to their attribute decoding method, achieving the state of the art for this well-studied zero-shot benchmark. This result was obtained using a simple generalization of our method to handle the continuous attribute strength signatures of [27], quantizing each dimension into 6 bins.
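
A small sketch of how such continuous attribute-strength signatures could be discretized before training the forests (the bin edges assume strengths scaled to [0, 1]; this is an illustrative assumption, not the exact procedure in [27]):

```python
import numpy as np

def quantize_signatures(signatures, n_bins=6):
    """Quantize real-valued class-attribute strengths into n_bins discrete levels.

    signatures : (K, M) array of attribute strengths, assumed scaled to [0, 1].
    Each value is replaced by its bin index so that the forest's thresholded
    tests behave just as they do with binary signatures.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]   # interior bin edges
    return np.digitize(signatures, edges)
```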

4.2 Few-shot object and scene recognition

Finally, we demonstrate our few-shot extension. Figure 2(a) shows the results, as a function of both the number of labeled training images and the prior-weighting parameter λ (cf. Sec. 3.3). (These results are for AwA; see Sec. 6.3 for similar results on the other two datasets.) When λ = 0, we rely solely on the training images I; when λ = 1, we rely solely on the attribute signatures, i.e., zero-shot learning. As a baseline, we compare to a method that uses solely the few training images to learn the unseen classes (dotted lines). We see the clear advantage of our attribute signature prior for few-shot random forest training. Furthermore, we see that, as expected, the optimal λ shifts towards 0 as more samples are added; yet even with 200 training images in I, the prior plays a role (the best λ on the blue curve remains nonzero). The star per curve indicates the λ value our method selects automatically with cross-validation.

5 Conclusion

We introduced a zero-shot training approach that models unreliable attributes—both due to classifier predictions and uncertainty in their association with unseen classes. Our results on three challenging datasets indicate the method’s promise, and suggest that the elegance of zero-shot learning need not be abandoned in spite of the fact that visual attributes remain very difficult to predict reliably. In future work, we plan to develop extensions to accommodate inter-attribute correlations in the random forest tests and multi-label random forests to improve scalability for many unseen classes.

6 Supplementary material

This section contains details from the supplementary material for our NIPS 2014 paper “Zero-shot Recognition with Unreliable Attributes” that were omitted from the paper to meet length constraints.

Sec 6.1 shows how unlearnable attributes are avoided by our method. Sec 6.2 discusses the details of the signature uncertainty model introduced in Sec 3.2.3. Sec 6.3 shows more few-shot results, as a continuation of Sec 4.2 above. Sec 6.4 gives additional details for our controlled noise experiments (Sec 4.1). Sec 6.5 lists the 10 SUN database test classes chosen at random.

6.1 Unlearnable attributes

As a sanity check, we show how accounting for classifier unreliability as detailed in Sec. 3.2.2 also inherently avoids unlearnable attributes. In the extreme case of a completely unlearnable attribute, the classifier cannot tell positives from negatives, so that TPR = FPR (regardless of threshold). If a candidate split tested at any node involves such an attribute m, then the signatures of all classes are equally likely to propagate to the left or right, i.e., z_{n_l}(c) = TPR·z_n(c) and z_{n_r}(c) = (1 − TPR)·z_n(c) for all c. In other words, z_{n_l} and z_{n_r} are multiples of z_n. Plugging into Eqn. (3), we see that this means P_{n_l} = P_{n_r} = P_n.

Further plugging this into Eqn. (5), we see:

(7)    $I(s) = H(P_n) - \frac{\lVert z_{n_l} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_l}) - \frac{\lVert z_{n_r} \rVert_1}{\lVert z_n \rVert_1}\, H(P_{n_r})$
(8)    $\phantom{I(s)} = H(P_n) - \frac{\lVert z_{n_l} \rVert_1 + \lVert z_{n_r} \rVert_1}{\lVert z_n \rVert_1}\, H(P_n)$
(9)    $\phantom{I(s)} = H(P_n) - H(P_n)$
(10)   $\phantom{I(s)} = 0$

Since the information gain is constrained to be non-negative, this split will never be chosen by our method.

6.2 Class signature uncertainty

In Sec. 3.2.3, we summarize a method to deal with uncertainty in class-attribute signatures. This is achieved by appropriately modifying the soft indicator vectors, which, in implementation terms, amounts to adding perturbed copies of each “exemplar” signature to the training set. We now describe the former in detail, and show how it is equivalent to the latter.

Repeating Eqn 4, when we assume perfect class signatures, we set:

(11)   $z_{n_l}(c) = P\big(\hat{a}_m(x) > t \mid a_{cm}\big)\, z_n(c), \qquad z_{n_r}(c) = P\big(\hat{a}_m(x) \le t \mid a_{cm}\big)\, z_n(c)$

where the probabilities are simply the TPR and FNR (for a_cm = 1), respectively, computed on the validation data subset V_n at node n. Now, to account for class signature uncertainty, we expand out the probabilities in terms of the TPR/FNR and a new term reflecting the signature uncertainty. Specifically, denote by a*_cm the true attribute value for an instance of class c, as opposed to the annotated class-level value a_cm. Then,

(12)   $P\big(\hat{a}_m(x) > t \mid a_{cm}\big) = \sum_{b} P\big(\hat{a}_m(x) > t \mid a^{*}_{cm} = b\big)\, P\big(a^{*}_{cm} = b \mid a_{cm}\big)$, and analogously for $P\big(\hat{a}_m(x) \le t \mid a_{cm}\big)$,

where b runs over all possible values of the true attribute value a*_cm. The first terms on the RHS in the above equations represent the familiar TPR and FNR, respectively (computed from the validation data), while the second term captures the non-trivial dependency between the true attribute value and the annotation. These changes in the computation of the probabilities exactly model the effect of expanding the training data by adding an infinite number of perturbed variants of the attribute signatures, perturbed as per P(a*_cm | a_cm).

In expanding the probabilities thus, we have implicitly assumed the following structure for the dependencies among â_m(x), a*_cm, and a_cm:

(13)   $P\big(\hat{a}_m(x) \mid a^{*}_{cm}, a_{cm}\big) = P\big(\hat{a}_m(x) \mid a^{*}_{cm}\big)$

With other dependency assumptions, it is possible to derive variants of this method with differences in the probability expansions of Eqn 12.

In implementing this attribute uncertainty model, we observed that it was generally very common for instances labeled positive for a given attribute to actually be negative (due to occlusions, etc.), whereas the reverse was uncommon. This is understandable because we use class-level associations: images of the class “person” may not “have hands” for any number of reasons, while the class “person” does “have hands” as per its class-level attribute signature. For this reason, we restricted ourselves to flipping (a fraction of) only the positive bits in the attribute signatures. Based on cross-validation, we flipped 0.15, 0.3, and 0.0 fractions of the bits on AwA, aPY, and SUN, respectively. On SUN alone, zero-shot recognition does not benefit from modeling uncertainty in attribute annotations. We believe this is because there is little scope for in-class variation in attribute signatures among the SUN scenes, since the attributes are of four types: “functional affordances”, “materials”, “surface properties”, and “spatial envelope” [18]. All these types of attributes are closely related to the scene labels themselves and are unlikely to be missing from class instances for reasons such as occlusion, since they are not usually localized to specific parts of instance images. For example, images belonging to the “mountain” category in SUN are nearly always marked with the affordance “climbing”; it is very unlikely that the “climbing” affordance would be taken away because of the way a mountain is pictured. In contrast, a “person” (category in aPY) may have images without “hands” (attribute in aPY), as discussed above, simply because of occlusions.

6.3 Few Shot results

Figure 3: Few-shot results for (left) aPY and (right) SUN: Overall trends are similar to those obtained for AwA

Fig. 3 shows the few-shot results for aPY and SUN with 50 and 100 shots, analogous to the AwA few-shot results in Figure 2(a). Interestingly, on SUN, our zero-shot learning approach beats 100-shot learning based on attribute predictions alone (on the 10 test classes). Overall trends remain similar to those on AwA, discussed above.

6.4 Noise model

The synthetic attribute classifier scores used in Sec. 4.1 are constructed by corrupting a hypothetical perfect attribute classifier’s scores with progressively increasing noise. Specifically, for noise setting 0, the synthetic attribute classifier scores are â_m(x) = a_km for every image x in class k. For noise level σ, we (1) decrease the scores on positive samples and (2) increase the scores on negative samples by adding noise: the score becomes a_km − ε for positive samples and a_km + ε for negative samples, where ε is drawn from an exponential distribution with mean σ. A final truncation keeps all scores in [0, 1].

For the two scenarios shown in Figure 1, we did the following. For scenario 1 (equally noisy classifiers), all classifiers were corrupted with noise drawn from an exponential distribution with mean σ (as above). For scenario 2 (attribute-specific noise levels), we draw the mean noise level of each attribute classifier itself from a new exponential distribution, whose mean is the value plotted along the x-axis in Figure 1 (right).
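
A sketch of this corruption process (assuming NumPy; the helper below mirrors the description rather than any exact implementation):

```python
import numpy as np

def corrupt_scores(true_signature, sigma, rng=np.random):
    """Corrupt perfect attribute scores for one image (Sec. 6.4 noise model).

    true_signature : (M,) binary vector a_k for the image's class.
    sigma          : mean of the exponential noise; sigma = 0 keeps perfect scores.
    Positive attributes have their scores decreased, negative ones increased,
    and all scores are clipped back to [0, 1].
    """
    if sigma == 0:
        return true_signature.astype(float)
    eps = rng.exponential(scale=sigma, size=true_signature.shape)
    noisy = np.where(true_signature == 1, 1.0 - eps, eps)
    return np.clip(noisy, 0.0, 1.0)

# Scenario 1: every attribute classifier shares the same sigma.
# Scenario 2: each attribute's sigma is itself drawn from an exponential
# distribution, e.g., per_attr_sigma = np.random.exponential(scale=mu, size=M).
```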

6.5 SUN test classes

The ten SUN test classes picked at random were: “inn/indoor”, “flea market/indoor”, “lab classroom”, “outhouse/outdoor”, “chemical plant”, “mineshaft”, “lake/natural”, “shoe shop”, “art school”, and “archive”.

Acknowledgements: We thank Christoph Lampert ([7]) and Felix Yu ([27]) for graciously sharing their code and data for comparison and reuse.

References

  • [1] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-Embedding for Attribute-Based Classification. In CVPR, 2013.
  • [2] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  • [3] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
  • [4] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, 2013.
  • [5] P. Gärdenfors. Conceptual Spaces: The Geometry of Thought, volume 106. 2000.
  • [6] P. Kankuekul, A. Kawewong, S. Tangruamsub, and O. Hasegawa. Online incremental attribute-based zero-shot learning. In CVPR, 2012.
  • [7] C. Lampert, H. Nickisch, and S. Harmeling. Learning to Detect Unseen Object Classes by Between-class Attribute Transfer. In CVPR, 2009.
  • [8] C. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. TPAMI, 2014.
  • [9] H. Larochelle, D. Erhan, and Y. Bengio. Zero-data learning of new tasks. In AAAI, 2008.
  • [10] D. Mahajan, S. Sellamanickam, and V. Nair. A joint learning framework for attribute models and object descriptions. In ICCV, 2011.
  • [11] T. Mensink, E. Gavves, and C. Snoek. COSTA: Co-occurrence statistics for zero-shot classification. In CVPR, 2014.
  • [12] T. Mensink and J. Verbeek. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV, 2012.
  • [13] R. Mittelman, H. Lee, B. Kuipers, and S. Savarese. Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines. In CVPR, 2013.
  • [14] C. Olaru and L. Wehenkel. A complete fuzzy decision tree technique. Fuzzy Sets and Systems, 138(2):221–254, Sept 2003.
  • [15] D. Osherson, E. Smith, T. Myers, E. Shafir, and M. Stob. Extrapolating human probability judgment. Theory and Decision, 36:103–129, 1994.
  • [16] M. Palatucci, D. Pomerleau, G. Hinton, and T. Mitchell. Zero-shot Learning with Semantic Output Codes. In NIPS, 2009.
  • [17] D. Parikh and K. Grauman. Relative attributes. In ICCV, 2011.
  • [18] G. Patterson and J. Hays. SUN Attribute Database: Discovering, Annotating, and Recognizing Scene Attributes. In CVPR, 2012.
  • [19] J. Quinlan. Induction of decision trees. Machine learning, pages 81–106, 1986.
  • [20] M. Rastegari, A. Farhadi, and D. Forsyth. Attribute discovery via predictable discriminative binary codes. In ECCV, 2012.
  • [21] M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011.
  • [22] M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where – and why? semantic relatedness for knowledge transfer. In CVPR, 2010.
  • [23] E. Rosch and B. Lloyd. Cognition and categorization. 1978.
  • [24] V. Sharmanska, N. Quadrianto, and C. Lampert. Augmented attribute representations. In ECCV, 2012.
  • [25] S. Tsang, B. Kao, K. Yip, W.-S. Ho, and S. Lee. Decision Trees for Uncertain Data. IEEE Transactions on Knowledge and Data Engineering, 23(1):64–78, January 2011.
  • [26] N. Turakhia and D. Parikh. Attribute dominance: what pops out? In ICCV, 2013.
  • [27] F. Yu, L. Cao, R. Feris, J. Smith, and S.-F. Chang. Designing Category-Level Attributes for Discriminative Visual Recognition. In CVPR, 2013.
  • [28] X. Yu and Y. Aloimonos. Attribute-based transfer learning for object categorization with zero/one training example. In ECCV, 2010.