A Distributed Approach towards Discriminative Distance Metric Learning

05/11/2019 ∙ by Jun Li, et al. ∙ University of Technology Sydney ∙ IBM ∙ Microsoft

Distance metric learning is successful in discovering intrinsic relations in data. However, most algorithms are computationally demanding when the problem size becomes large. In this paper, we propose a discriminative metric learning algorithm, and develop a distributed scheme learning metrics on moderate-sized subsets of data, and aggregating the results into a global solution. The technique leverages the power of parallel computation. The algorithm of the aggregated distance metric learning (ADML) scales well with the data size and can be controlled by the partition. We theoretically analyse and provide bounds for the error induced by the distributed treatment. We have conducted experimental evaluation of ADML, both on specially designed tests and on practical image annotation tasks. Those tests have shown that ADML achieves the state-of-the-art performance at only a fraction of the cost incurred by most existing methods.


I Introduction

Comparing objects of interest is a ubiquitous activity and a defining characteristic of learning-based systems. The comparison can be explicit, as in the nearest neighbour rule, or encoded in a learned model, as in neural networks. In all cases, for past experience to be of any help in making decisions about unseen objects, one must compare those objects to objects with known information. A natural measure is the Euclidean distance. Despite its wide application, the Euclidean distance may not suit all problems. For example, it is directly affected by the scaling of individual features in the representation of the data. Features of large magnitude strongly influence the measure of similarity regardless of their relevance to the task. Without accounting for relevance, the Euclidean distance is particularly problematic when the data is of high dimension, and the informative structures of the data population are difficult to distinguish from meaningless fluctuations.

The problem of the Euclidean distance suggests adapting the metric in the learning process, so that the distance measure is conducive to the subsequent recognition tasks. Along this line of research, a family of distance metric learning techniques has been developed [Xin03][She02][Hoi06][Gol05][Wei06][Fouad13][Shen10b], and proven useful in a number of practical applications [Hoi00][Kan06]. However, a major difficulty of metric learning arises from the time and space cost of those algorithms. To construct an effective distance metric, the relevance of the raw features should be evaluated with respect to each other, instead of individually. Given d raw features, this means O(d^2) covariance terms to be dealt with. In cases where d is large but the number of samples, n, is moderate, an alternative formulation of the problem allows learning the metric from pair-wise correlations between samples, which entails a complexity of O(n^2). However, the problem becomes irreducibly complex when both d and n are large, which is common in practical problems. In fact, due to technical necessities such as iterations in optimisation, a realistic metric learning algorithm generally involves a complexity of cubic order, such as O(d^3) or O(n^3), rather than a squared one, which further limits the scalability of metric learning.

The focus of this paper is to address the seemingly inexorable complexity of distance metric learning. We first develop a discriminative metric learning method, where categorical information is utilised to direct the construction of the distance metric. More importantly, the metric learning algorithm embraces the “divide-and-conquer” strategy to deal with large volumes of high-dimensional data. We derive a distributed and parallel metric learning scheme, which can be implemented consistently with the MapReduce [mapredu] computational framework. In particular, we split a large sample into multiple smaller subsets. The basic discriminative metric learning is applied on each of the subsets. The separately learned metrics are summarised into the final result via an aggregation procedure, and the scheme is accordingly named aggregated distance metric learning (ADML).

Corresponding to the Map steps in the MapReduce paradigm, the computations on the subsets are independent of each other, and can be conducted as parallel tasks. The data can be stored in a distributed system, and no individual subroutine needs access to the whole data at once. So the technique is less affected by the physical storage when dealing with large volumes of data. The aggregation step represents the Reduce step in MapReduce, where the input of the aggregation algorithm is the metrics learned on the subsets. Aggregation takes the sum of those learned metrics, where the operation collapses the inputs arriving in arbitrary order, taking only moderate space given the subspace representation of the metrics.
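The map/reduce structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `learn_subset_metric` is a hypothetical stand-in for the per-subset learner (here it merely returns a subset scatter matrix), and threads stand in for distributed workers. The reduce step is a sum, which is associative and commutative, so partial results may arrive in any order.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def learn_subset_metric(subset):
    # Hypothetical map step: summarise one subset into a matrix.
    # A placeholder (the subset scatter matrix) stands in for the
    # per-subset discriminative metric learning described in the text.
    centred = subset - subset.mean(axis=0)
    return centred.T @ centred

def aggregate(metrics):
    # Reduce step: summation collapses inputs arriving in any order.
    return sum(metrics)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
subsets = np.array_split(data, 4)          # moderate-sized subsets
with ThreadPoolExecutor() as pool:         # independent map tasks
    partial = list(pool.map(learn_subset_metric, subsets))
global_metric = aggregate(partial)
```

No worker ever needs the whole dataset in memory, which is the property the text emphasises.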

The whole scheme scales well with the dimension and sample size of the data. The basic discriminative metric learning algorithm adopts a subspace representation of the learned metric. If the sizes of the individual subsets and the subspace dimension are fixed, the space and time taken by the metric learning on the subsets grow linearly with respect to the input dimension d. The aggregation time grows linearly with d as well. Thus in theory, the ideal implementation of ADML has desirably benign time and space costs. The time cost is linear with respect to the input dimension d and independent of the total sample size n. (The first step of aggregation sums up the metrics learned on the subsets. The summation could be parallelised in theory, which renders the cost of the entire scheme independent of the sample size n. However, in practice, the parallel tasks are limited by the physical computing power and communication, and the time cost of summation is negligible compared to the rest of the learning. Thus a practical implementation of the aggregation uses serial summation, whose time cost is linear w.r.t. n.) The space cost is O(nd), the size of the data, in terms of storage, and O(n_s d + dr) in terms of the volatile memory for on-the-fly computation, where n_s is the subset size, dr represents the size of the learned metric, and n and d are large compared to n_s and r. For practical computational facilities, where parallel tasks are limited and incur communication overheads, the learning and aggregation of metrics on the subsets can only be partially parallelised. Thus the time cost of ADML grows linearly with, instead of being independent of, the sample size n. This practically achievable time complexity still scales better with the problem size than that of wholistic methods.

We provide a theoretical guarantee for the distributed computation scheme. In particular, an upper bound is provided on the deviation by which the metric obtained by ADML can differ from the one that would result if the discriminative metric learning were applied on the whole data. The bound on the deviation supports the usage of distributed computation in metric learning, and caps the price we have to pay for the gain in efficiency.

The effectiveness of ADML has also been demonstrated by an empirical study. We test the method on both synthetic data and a practical image annotation task. The empirical results corroborate our motivation for ADML: the distributed learning achieves far superior efficiency with little or no sacrifice of accuracy compared to state-of-the-art metric learning methods.

II Background

The need for an appropriate similarity measure is rooted in fundamental statistical learning theory, in particular in the interplay between complexity and generalisation. In general, a learning algorithm produces a predictor by matching a set of hypotheses against observed data. The theory tells us that the confidence about the predictor’s performance on unseen data is affected by the complexity of the hypothesis set [Vap98], while the complexity of a hypothesis (function) set is gauged by its possible variations on the domain of input. Therefore, the way of measuring similarity between the data samples essentially characterises the hypothesis set and affects how a learned predictor generalises to unseen data.

It has been noticed that the intuitive Euclidean metric becomes unreliable for high dimensional data, i.e. when the number of raw features is large [Liu06]. Therefore, much attention has recently been paid to adapting the distance metrics to the data and analysis tasks. In [Xin03], Xing et al. proposed to adaptively construct a distance measure from the data, and formulated the task as a convex optimisation problem. The technique was successfully applied to clustering tasks. An effective way of learning a metric and simultaneously discovering a parsimonious representation of the data is to construct a mapping, which projects the data into a subspace in which the Euclidean distance reproduces the desired metric. The link between metric learning and data representation connects the proposed method to a broad context. In [Yeung07], metric learning is formulated as a kernel learning problem, and solved via convex optimisation. Relevance component analysis (RCA) [She02] learns a transformation accounting for the equivalence constraints present in the data, i.e. pairs of samples known to be of the same category and thus preferably measured as close to each other by the objective metric. In [Hoi06], Hoi et al. proposed discriminant component analysis (DCA), which equips RCA with the extra capability of dealing with negative constraints, sample pairs to be scattered far apart by a desirable metric. Both RCA and DCA can be solved by eigenvalue decomposition, which is practically faster than the convex optimisation in [Xin03]. When the ultimate goal of learning a distance metric is to preserve or enhance discriminative information, research has shown that the local geometry in the neighbourhoods of individual samples is effective. The raw features can be assessed [Dom02, Has96] by their contribution to the posterior probabilities at individual samples. In [Gol05], a learning scheme has been proposed to optimise classification based on the nearest neighbour rule. Large margin objectives have also been proven effective in deriving supervised metric learning approaches [Wei06, Shen10b]. In [Fouad13], a scheme has been developed to incorporate auxiliary knowledge to assist metric learning. A more comprehensive overview of related research can be found in [Liu06]. As we have discussed in the last section, a major concern with most existing metric learning approaches is scalability. For example, the constrained convex programming in [Xin03, She02, Hoi06, Gol05, Wei06] limits their applicability in large-scale practical problems.

Online processing is another strategy for solving large scale problems [Grippo00], which is complementary to parallel computing and often of practical interest when dealing with data from the web. Online distance metric learning algorithms have been proposed [Dav07][Sha04]. These online metric learning techniques are based on serial steps, where the metric is updated after new observations arrive. Serial online schemes are able to deal with large problems, but are slow to arrive at the final solution. To improve efficiency, parallel implementation is a natural strategy. Many basic operations have fast parallelised implementations, such as Intel’s Math Kernel Library; and some learning techniques are parallelised at the algorithm level [Cat08][Rai09][Chu07]. In contrast, ADML is a novel metric learning method supporting a fully distributed realisation, where subset metrics can be computed on different subsets simultaneously and with reduced computational cost.

III Discriminative distance metric learning

We are concerned with constructing a Mahalanobis distance metric in the data feature space, which incorporates the discriminative information, i.e. the class membership of individual samples. In particular, the Mahalanobis distance between two samples x_i and x_j is defined by a positive semi-definite matrix M, d_M(x_i, x_j) = sqrt((x_i − x_j)^T M (x_i − x_j)). The goal of the proposed discriminative distance metric learning (DDML) is to find a matrix M, so that d_M between samples of the same class tends to be small and that between samples of different classes tends to be large. Such a distance metric will benefit subsequent tasks such as classification or recognition.
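As a concrete illustration of this definition (the names `mahalanobis`, `W`, `M` are ours, not the paper's), note that when M factors as W W^T, the Mahalanobis distance equals the Euclidean distance between the projected points W^T x — the fact the subspace representation below relies on:

```python
import numpy as np

def mahalanobis(x_i, x_j, M):
    """Distance defined by a positive semi-definite matrix M."""
    diff = x_i - x_j
    return float(np.sqrt(diff @ M @ diff))

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 2))       # subspace projection; M = W W^T
M = W @ W.T
x_i, x_j = rng.normal(size=5), rng.normal(size=5)

# Mahalanobis distance under M = W W^T equals the Euclidean distance
# between the projected points W^T x.
d_metric = mahalanobis(x_i, x_j, M)
d_projected = np.linalg.norm(W.T @ x_i - W.T @ x_j)
```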

However, a universally satisfactory Mahalanobis distance metric is usually difficult to obtain. Practical data are often complex objects consisting of many features, where the number of features, d, is much greater than the actual degrees of freedom. Thus the variations in the data span nonlinear low-dimensional manifolds, precluding a global Mahalanobis distance metric. To preserve the nonlinear structures in the data, we set up the learning objective locally and let a local metric encode both the discriminative information and the local geometric structure of the data. The overall distance metric is computed by unifying all the local metrics in an optimisation framework.

In particular, the cost of a distance metric is defined on data patches associated with individual samples. Given a sample x_i, the corresponding patch consists of a small number of samples that are close to x_i and carry discriminative information. In particular, we select the k1 nearest neighbours from the same class as x_i, and select the k2 nearest neighbours from different classes. The patch, X_i, is the joint of x_i and its selected neighbours.
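A minimal sketch of this patch construction, assuming Euclidean nearest neighbours and using k1, k2 as named above (the function name is illustrative):

```python
import numpy as np

def build_patch(i, X, y, k1, k2):
    """Return indices of the patch for sample i: the sample itself,
    its k1 nearest same-class neighbours and its k2 nearest
    neighbours from other classes (Euclidean distances)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                              # exclude the sample itself
    same = np.where(y == y[i])[0]
    diff = np.where(y != y[i])[0]
    same = same[np.argsort(d[same])][:k1]
    diff = diff[np.argsort(d[diff])][:k2]
    return np.concatenate(([i], same, diff))

X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0]])
y = np.array([0, 0, 0, 1, 1])
patch = build_patch(0, X, y, k1=2, k2=1)   # sample 0, then 1 and 2 (same class), then 3
```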

The local criterion of metric learning on X_i is motivated by encoding the discriminative information into the geometry induced by the objective metric. A local metric specifies the within- and between-class distances on X_i as

d_w(X_i) = Σ_{j=1..k1} d_M(x_i, x_{i,j})^2,   d_b(X_i) = Σ_{p=1..k2} d_M(x_i, x_{i,p})^2,   (1)

where x_{i,j} and x_{i,p} denote the within-class and between-class neighbours of x_i, respectively.

Furthermore, to account for the locally linear geometry of the data manifold, d_M is considered as being induced by a subspace projection W, where M = W W^T and W is a d × r matrix with r < d. Note that such a subspace representation of the Mahalanobis distance implies a concise parameterisation of M.

Therefore, the principle of metric learning becomes an optimisation over the local transformation W_i,

min_{W_i} d_w(X_i) − γ d_b(X_i) = min_{W_i} tr(W_i^T X_i L_i X_i^T W_i),   (2)

where γ is a trade-off between a small d_w and a large d_b; and L_i is obtained by organising the coefficients of the quadratic form,

L_i = [ Σ_t ω_t    −ω
        −ω^T      diag(ω) ],

where ω = [1, …, 1, −γ, …, −γ] is a row vector concatenating k1 1’s and k2 −γ’s, and diag(·) represents an operator generating a (k1 + k2) × (k1 + k2) diagonal matrix from a (k1 + k2)-dimensional vector.

Unifying the optimisations on the individual local patches, we reach the objective function for global distance metric learning,

min_W tr(W^T X L X^T W),  s.t. W^T W = I,   (3)

where L = Σ_i S_i L_i S_i^T and S_i is an n × (k1 + k2 + 1) binary matrix selecting the samples in a local patch from the data set, so that X_i = X S_i.

The DDML objective (3) is readily solved by eigenvalue decomposition, where the columns of W are the eigenvectors associated with the r smallest eigenvalues of X L X^T. Generally, the eigenvalue decomposition involved in solving (3) will be more efficient than methods based on iterative convex optimisation, e.g. NCA [Gol05] and LMNN [Wei06].
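Putting the pieces together, a compact whole-data solver can be sketched as follows (a reconstruction under the assumptions above, not the authors' code; the name `ddml` and the Euclidean neighbour search are ours). The selection matrices S_i are implemented by index accumulation rather than explicit matrix products:

```python
import numpy as np

def ddml(X, y, k1, k2, gamma, r):
    """Sketch of the whole-data learner: X is d x n (columns are
    samples), y holds class labels. Returns W (d x r) whose columns
    are the eigenvectors of X L X^T with the r smallest eigenvalues."""
    d, n = X.shape
    omega = np.concatenate((np.ones(k1), -gamma * np.ones(k2)))
    Li = np.zeros((1 + k1 + k2, 1 + k1 + k2))
    Li[0, 0] = omega.sum()
    Li[0, 1:] = -omega
    Li[1:, 0] = -omega
    Li[1:, 1:] = np.diag(omega)
    L = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X.T - X[:, i], axis=1)
        dist[i] = np.inf
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        same = same[np.argsort(dist[same])][:k1]
        diff = diff[np.argsort(dist[diff])][:k2]
        idx = np.concatenate(([i], same, diff))
        L[np.ix_(idx, idx)] += Li          # accumulate S_i L_i S_i^T
    vals, vecs = np.linalg.eigh(X @ L @ X.T)
    return vecs[:, :r]                     # r smallest eigenvalues
```

The returned W automatically satisfies the orthogonality constraint W^T W = I, since `eigh` produces orthonormal eigenvectors.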

For most practical applications, DDML still faces several difficulties, because both the volume and the dimension of practical data are usually high, and also because in real-life applications, the data is often organised in distributed databases and acquired as streams. In the following section, we will address these problems by applying the idea of “divide and conquer” [Lin11] to DDML and propose a scalable aggregated distance metric learning algorithm.

IV Aggregated Distance Metric Learning

In this section, we first introduce the distributed computation of DDML, aggregated distance metric learning (ADML), and then prove that the metric learned by ADML is consistent with the result of performing DDML directly on the entire dataset. We further improve the efficiency of ADML by introducing an alternative aggregation technique, and show its consistency attributes. The time and space complexities are discussed for the proposed algorithms, with the important observation that the discriminative metric learning can be computed in time independent of the data size and depending only linearly on the data dimension.

IV-A Aggregated Distance Metric Learning

The idea of ADML can be sketched as follows: divide the whole data into several subsets, perform DDML on each of the subsets and finally aggregate the results into a consolidated solution. It is important not to confuse the split of data in ADML with the use of local patches in DDML. We recall that the objective function in DDML is defined on a patch-by-patch basis to account for the local geometry of the data manifold, where each patch is constructed with regard to the entire dataset. In ADML, by contrast, the subsets are generally not (and preferably not) associated with local geometry, and only one of the subsets is visible to a DDML algorithm.

We randomly split a dataset X of n samples into m subsets X^(1), …, X^(m). This reduces the cost of constructing patches in metric learning: in ADML, the patch of a sample is constructed by finding nearest neighbours within the subset to which the sample belongs.

Let the objective function of learning W in (3) be F(W). Using a similar form to F, we can derive a local objective function with respect to a subset X^(j),

F_j(W) = tr(W^T X^(j) L^(j) (X^(j))^T W),   (4)

where L^(j) is defined similarly to L in (3), but is confined within X^(j). The metric on subset X^(j) is then characterised by W_j W_j^T, where we denote the local solution to (4) by W_j.

The next task is to consolidate the local solutions into a global metric. A straightforward linear combination would be the most intuitive approach. However, the W_j are solutions to locally defined optimisation problems, and linear interpolation may damage their optimality and yield invalid solutions. Therefore, we design the aggregation based on the optimality conditions of the objective functions. A local solution W_j is a stationary point of the corresponding F_j, where the gradient vanishes,

∇F_j(W_j) = 2 M_j W_j = 0,  where M_j = X^(j) L^(j) (X^(j))^T.   (5)

Recall that F_j is defined in (4). For a global solution W̄, it would be ideal to fulfil condition (5) in all local subsets, which is generally impossible. Thus, secondarily, we want the violations to cancel out among all the subsets and the sum of the local gradients to vanish,

Σ_j ∇F_j(W̄) = 2 Σ_j M_j W̄ = 0.   (6)

Directly summing up (5) over the subsets, we reach

2 Σ_j M_j W_j = 0.   (7)

Comparing (7) and (6) gives an aggregation rule: the overall solution W̄ satisfies

(Σ_j M_j) W̄ = Σ_j M_j W_j,   (8)
W̄ = (Σ_j M_j)^{−1} Σ_j M_j W_j.   (9)

The computation of (9) is intuitively plausible: the aggregation has the form of an interpolation of the local W_j with the “weights” being M_j. We will show that the aggregation also enjoys desirable theoretical attributes in the following discussions.
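Numerically, an aggregation of the form W̄ = (Σ_j M_j)^{−1} Σ_j M_j W_j (the form assumed throughout this sketch) is best realised as a linear solve rather than an explicit inversion. The function name is illustrative, and the M_j here are random symmetric positive definite matrices standing in for the subset matrices:

```python
import numpy as np

def aggregate_metrics(Ms, Ws):
    """Solve (sum_j M_j) W = sum_j M_j W_j for the aggregated W.
    Ms: list of d x d symmetric matrices; Ws: list of d x r locals."""
    M_sum = sum(Ms)
    rhs = sum(M @ W for M, W in zip(Ms, Ws))
    return np.linalg.solve(M_sum, rhs)      # avoids forming the inverse

rng = np.random.default_rng(3)
d, r, m = 6, 2, 4
Ms = []
for _ in range(m):
    A = rng.normal(size=(d, d))
    Ms.append(A @ A.T + 0.1 * np.eye(d))    # symmetric positive definite
Ws = [rng.normal(size=(d, r)) for _ in range(m)]
W_bar = aggregate_metrics(Ms, Ws)
```

A quick sanity property of the interpolation: if all local solutions agree, the aggregation returns that common solution unchanged.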

There are two further remarks regarding the derivation and computation of the aggregation. First, directly solving for W_j following (4) involves the eigenvalue decomposition of a d × d matrix, which is expensive when d is large. The cost can be alleviated by exploiting the fact that M_j is known to be decomposed as X^(j) L^(j) (X^(j))^T. In the scenario of a large d, a subset generally has n_s ≪ d samples, and hence at most n_s independent directions in the span of X^(j). Thus W_j can be represented as W_j = X^(j) A_j. Solving for W_j becomes finding A_j such that

X^(j) L^(j) (X^(j))^T X^(j) A_j = X^(j) A_j Λ,

where Λ is a diagonal matrix. It is sufficient to find an A_j where L^(j) (X^(j))^T X^(j) A_j = A_j Λ, which translates to an eigenvalue decomposition of the n_s × n_s matrix L^(j) (X^(j))^T X^(j).
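The reduction in this remark rests on the fact that, for matrices X and L of compatible shapes, the non-zero spectrum of the large d × d product X L X^T coincides with that of the small n_s × n_s product L X^T X (the eigenvalues of AB and BA agree up to extra zeros). A numerical check, with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(4)
d, ns = 50, 8                       # high dimension, small subset
Xj = rng.normal(size=(d, ns))       # subset data, columns are samples
B = rng.normal(size=(ns, ns))
Lj = B + B.T                        # a symmetric coefficient matrix

# Large d x d eigenproblem vs. small ns x ns eigenproblem:
big = np.linalg.eigvals(Xj @ Lj @ Xj.T)
small = np.linalg.eigvals(Lj @ (Xj.T @ Xj))

# The big problem has d - ns extra (near-)zero eigenvalues; its ns
# largest-magnitude eigenvalues match the small problem's spectrum.
order = np.argsort(-np.abs(big))
big_top = np.sort(big.real[order[:ns]])
small_top = np.sort(small.real)
```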

The second remark is on the relation between the two characteristics of W_j, the solution of local DDML: (i) W_j is obtained by eigenvalue decomposition and minimises (4), and (ii) W_j satisfies the stationary point condition (5). We do not include the orthogonality constraint on W_j in (5), because at the aggregation stage, we have already got the local solutions complying with the constraint. In fact, if we consider one column w of W_j and formulate the orthogonality constraint using a Lagrange multiplier, λ(1 − w^T w), then the stationary condition with respect to w leads to the solution of the eigenvalue decomposition.

IV-B Consistency of Aggregation

In this section, we prove that the aggregated distance metric is consistent with the one we would achieve if we performed the learning using a wholistic approach. The essential idea in brief is as follows.

The law of large numbers dictates that the empirical solution of (3), say W, approaches an “ideal solution” W*, the solution to (3) if we could have constructed the problem using the entire data population. We first notice that the same argument applies to the local solutions: the solution W_j of (4) on a subset approaches W* as the subset starts to include infinitely many random samples, which is the case because of the random split of the data. Thus, to show that the aggregated solution W̄ approaches the wholistic solution W, we can show that both of them approach W*. We prove that W̄ approaches W* through the local solutions W_j. In fact, our key contribution is to establish that, given some target, the distance between the aggregated solution and the target is bounded in terms of the distances between the local solutions and the target (by a constant factor). The result is formally specified by the following theorem.

Theorem 1.

Let W_1, …, W_m be the local solutions to (4) on the subsets. If Σ_j M_j in (9) is invertible and W̄ is the aggregation of W_1, …, W_m, we have ||W̄ − W*||_2 ≤ Σ_j c_j ||W_j − W*||_2 for some target W*, where

  • ||·||_2 represents the spectral norm of a matrix, and λmax(·) and λmin(·) stand for the maximum and minimum eigenvalues of a matrix;

  • c_j = λmax(M_j) / λmin(Σ_j M_j), and m is the number of subsets.

Moreover, if each M_j is positive definite, we have c_j ≤ c̃_j, where c̃_j = λmax(M_j) / Σ_j λmin(M_j).

We sketch the proof of the theorem as follows, and provide the technical details in the Appendix. The first part, i.e. the bound with the constants c_j, is achieved by applying the matrix triangle inequality [Hor90] and then manipulating the max/min eigenvalues for each term in the summation. For the second part, the proof is similar to the first part, with, additionally, a bound of λmin(Σ_j M_j) being derived in terms of the λmin of the individual M_j.
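The flavour of a bound of this kind can be checked numerically. The sketch below assumes the aggregation W̄ = (Σ_j M_j)^{−1} Σ_j M_j W_j with symmetric positive definite M_j, and the triangle-inequality constants c_j = λmax(M_j)/λmin(Σ_j M_j) used here; it is an illustration of the proof idea, not the paper's proof:

```python
import numpy as np

rng = np.random.default_rng(5)
d, r, m = 8, 3, 5
Ms = []
for _ in range(m):
    A = rng.normal(size=(d, d))
    Ms.append(A @ A.T + 0.1 * np.eye(d))      # positive definite M_j
W_star = rng.normal(size=(d, r))              # an arbitrary target
Ws = [W_star + 0.01 * rng.normal(size=(d, r)) for _ in range(m)]

# Aggregate and compare both sides of the bound (spectral norms).
M_sum = sum(Ms)
W_bar = np.linalg.solve(M_sum, sum(M @ W for M, W in zip(Ms, Ws)))

lhs = np.linalg.norm(W_bar - W_star, 2)
lam_min = np.linalg.eigvalsh(M_sum)[0]        # smallest eigenvalue of sum
rhs = sum(np.linalg.eigvalsh(M)[-1] / lam_min * np.linalg.norm(W - W_star, 2)
          for M, W in zip(Ms, Ws))
```

Since W̄ − W* = (Σ_j M_j)^{−1} Σ_j M_j (W_j − W*), sub-multiplicativity and the triangle inequality give lhs ≤ rhs for any such target.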

By replacing the target matrix W* with the wholistic empirical solution W and the ideal solution, Theorem 1 straightforwardly shows the consistency of ADML.

IV-C Orthogonalised Aggregated DML

We have introduced the ADML algorithm and shown that its divide-and-conquer scheme is valid in theory. However, for a practical implementation, inverting Σ_j M_j for the aggregation is a computationally demanding step if the dimension of the data is high. In this section, we will introduce an efficient operation to replace the matrix inversion, and show that this alternative implementation of the aggregation enjoys consistency attributes similar to what we have established above.

We recall our earlier discussion that Σ_j M_j in (9) serves as a normalising factor in a matrix interpolation. To avoid computing the inversion, we employ the singular value decomposition (SVD) for the mixing of the mapping matrices in the weighted sum Σ_j M_j W_j. The SVD is relatively inexpensive compared to the inversion, because in general we let W be a mapping to a subspace of much lower dimension than the raw data space, and Σ_j M_j W_j has far fewer columns than rows. Formally, we define the new aggregation rule by the following SVD,

Σ_j M_j W_j = W̄ S V^T,   (10)

where S is a diagonal matrix and W̄ has orthonormal columns. We summarise the steps for a practical implementation of ADML in Algorithm 1. The consistency attributes are established by the following theorem.
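A sketch of this orthogonalised aggregation, under the reconstruction used here (W̄ taken as the left singular vectors of the d × r weighted sum; the function name is illustrative and identity weights are used purely for the demo):

```python
import numpy as np

def aggregate_svd(Ms, Ws):
    """SVD-based aggregation: take the left singular vectors of the
    weighted sum  sum_j M_j W_j  instead of inverting sum_j M_j.
    The summed matrix is d x r (tall and thin), so the SVD is cheap."""
    T = sum(M @ W for M, W in zip(Ms, Ws))
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    return U                                   # d x r, orthonormal columns

rng = np.random.default_rng(6)
d, r, m = 6, 2, 4
Ms = [np.eye(d) for _ in range(m)]             # trivial weights for the demo
Ws = [rng.normal(size=(d, r)) for _ in range(m)]
W_bar = aggregate_svd(Ms, Ws)
```

Unlike rule (9), the output here is orthonormal by construction, which matches the constraint W^T W = I of the original objective.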

Theorem 2.

Given the locally learned W_1, …, W_m and W̄ aggregated by the rule in (10), we have ||W̄ − W*||_2 ≤ Σ_j c′_j ||W_j − W*||_2 for some target W*, where c′_j = λmax(M_j) / λmin(S).

A proof of Theorem 2 can be reached following a route similar to that of proving Theorem 1. The details are provided in the Appendix. The consistency of the alternative aggregation can then be established.

IV-D Temporal and Spatial Complexity

ADML-I represents the aggregation by the rule of (9); ADML-II represents the aggregation by the rule of (10). n: total number of samples; n_s: number of samples in an individual subset; d: data dimension; m: number of subsets; r: number of columns in W.

Table I: Step-wise Time Complexity

One of the main motivations of ADML is to scale metric learning up for practical applications. Table I lists the time complexity of the steps of the learning algorithms discussed in this section. Note that the complexities of the ADML algorithms are for serial processing of the m subsets, each of size n_s. However, ADML can be parallelised. Independent sub-routines of subset metric learning can be carried out simultaneously. The metric learning on the subsets can thus be accelerated by a factor of up to m. The spatial complexity of the aggregated algorithms is also smaller than that of the straightforward implementation of DDML. In a parallel implementation, each computational node needs only to store one subset in its memory, and the computation takes less memory as well.

V Experiments

In this section, we conduct experimental studies on synthetic and real data to validate the proposed metric learning technique. In all the tests, the aggregation rule discussed in Sec. IV-C is employed (ADML-II, referred to as ADML hereafter), because it has superior temporal and spatial efficiency. We assess the methods by how the learned metric aids data representation, as well as their empirical costs of computational resources.

V-A Experiments on synthetic data

Figure 1: A set of data points drawn from the synthetic distribution. The green and red points belong to two classes, being slightly disturbed from two coiled surfaces. Note that the range of the data distribution along the Z-axis (the one that is mostly perpendicular to the canvas) is greater than those along the X- and Y-axes.
Figure 2: How distance metric learning and subspace mapping affect classification. (a) comparison between three classification tests: nearest neighbour rule and SVM in the raw 3D data space, and SVM in projected 2D space. (b) number of support vectors the SVMs used in 3D and 2D spaces.

The algorithm is first tested on synthetic data consisting of two classes of 3D points distributed on two curved surfaces. The surfaces are coiled together in the 3D space, and the data points deviate away from the surfaces by small disturbances. The experiment has been repeated 20 times using randomly sampled datasets, each containing approximately 17,000 samples. Fig. 1 illustrates one set of samples drawn from the population. In this experiment, we will first verify the principal motivation that an appropriate distance metric helps analysis, before turning to the distributed implementation of ADML.

The first test uses the basic method developed in Sec. III, which allows us to examine the effect of DML without complications caused by any divide-aggregate scheme. The metric is represented by a 3 × 2 matrix W, because the variance relevant to classification spans two dimensions (the X- and Y-axes). Although DML naturally lends itself to classification by the nearest neighbour rule, which will be our classifier of choice in the following experiments, for this basic verification of the idea we use an SVM as the classifier. This is to confirm that DML and the subsequent subspace mapping represent the data in a form that benefits generic analysis. The basic DML algorithm has been applied to a subset of about 3,500 samples of each of the 20 random datasets, where the sample size is reduced due to the high computational cost of the straightforward optimisation of (3). Of each subset, 80% of the samples are used for training and validation and 20% for testing. Validation on the first subset shows that DML is insensitive to the algorithm parameters, so the chosen parameter set (k1, k2 and γ) is used throughout this experiment (including the following experiments of ADML on the full datasets). Note that using DML without model selection, as in most (19 out of 20) tests, has a subtle beneficial effect on the generality of the results: our primary goal is to examine whether DML helps analysis in a generic scenario; and thus the choice of SVM as the classifier in the test stage should remain unknown to the DML algorithm in the training stage as the data transformer, in order to prevent the DML being “overfit” to SVM-oriented metrics. The parameters of the SVM have been determined by cross-validation in all tests. Fig. 2 (a) shows the classification performance of the first nearest neighbour rule, the SVM in the 3D data space and the SVM in the DML-mapped 2D space. From the figure, one can see a meaningful improvement in classification performance achieved by providing the SVM with the DML-mapped data over applying the SVM to the raw data. The results clearly show that the DML-induced subspace mapping does not only preserve useful information, but also makes the information more salient by suppressing irrelevant variance through dimension reduction. Furthermore, in the DML-mapped space, the superior classification performance has been achieved at a lower computational cost. Fig. 2 (b) displays the statistics of the number of support vectors used by the SVMs in the raw 3D space and the DML-mapped 2D space. In the reduced space, the SVM can do the same job with a fraction of the support vectors that it has to use in the raw data space.

In the second part of this experiment, we will examine our main idea of the divide-aggregate implementation of DML, i.e. the ADML algorithm. For the remaining tests in this subsection, each full dataset is split into 50-50% training and test subsets. The main focus is how the division of the training data affects the learned distance metrics.

Figure 3: Statistics of pair-wise distances of within- and between-class point pairs. Each subplot contains two histograms. The upper (blue) histogram shows distances between 10,000 same-class point pairs, normalised into [0,1]. Similarly, the lower (red) histogram corresponds to cross-class pairs. Subplots (a–f) display distance measured by ADML with different subdivisions of the data (subset size ranging from 200 to 1,200). Subplot (g) displays the histograms of Euclidean distance (EUC) and (h) shows histograms of LDA-induced distance (LDA).

Fig. 3 shows a close and explicit inspection of how the learned distance metrics are connected to the class membership of the data. We randomly sample 10,000 pairs of points, compute the distance between each pair according to the learned metrics and normalise the distances into [0, 1]. (Note that each DML method has been run on 20 randomly generated datasets, so 10,000 point pairs are drawn from each of those datasets and the distances are computed w.r.t. the respectively learned metrics.) Then two histograms are generated from the set of normalised distances, one for the pairs in which both points are from the same class, and another for the pairs in which the points are from different classes. The two histograms are shown in the subplots in Fig. 3, where the upper (blue) part represents the same-class pairs and the lower (red) part represents the cross-class pairs. A good metric should make the cross-class distance greater than the same-class distance, especially for the smallest distances, which correspond to nearest neighbours. The subplots in the figure compare such statistical distinctions for several metrics. As shown by the figure, DML leads to a more informative distance metric than the original Euclidean metric in the 3D space, which is consistent with what we have observed in the earlier part of this experiment. For comparison, the widely used linear discriminant analysis (LDA) has also been used to project the data to 2D. Fig. 3 shows that the LDA-induced distance metric makes distinctions between the same- and cross-class point pairs, but the difference is smaller than that produced by the ADML-learned metric. A possible explanation is that the same-class covariance of these datasets is close to the cross-class covariance. Thus LDA, relying on these two covariance statistics, suffers from the confusion between the two groups of covariance.
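The pair-distance statistic underlying these histograms can be sketched as follows (an illustration of the evaluation protocol, with illustrative names; the projection W and dataset here are stand-ins):

```python
import numpy as np

def pair_distance_stats(X, y, W, n_pairs, seed=0):
    """Sample random point pairs and compare within-class and
    between-class distances under the metric induced by W."""
    rng = np.random.default_rng(seed)
    n = len(y)
    P = X @ W                                   # n x r projected data
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    dist = np.linalg.norm(P[i] - P[j], axis=1)
    dist = dist / dist.max()                    # normalise into [0, 1]
    same = y[i] == y[j]
    return dist[same].mean(), dist[~same].mean()
```

A good metric should report a clearly smaller within-class mean than between-class mean, mirroring the separation of the blue and red histograms in Fig. 3.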

More importantly, the results show that ADML is reasonably stable over aggressive sub-divisions of the data. In particular, because ADML trades accuracy for speed, ideally the sacrifice of performance should be mild with regard to the partition of the data into smaller subsets (lower costs). The results show that in these tests, the resultant distance metric begins to help classification when the subset size is as small as 200, and becomes distinctive between same- and cross-class pairs when the subset size is greater than 400. Therefore, the parallel computation approximates the discrimination objective satisfactorily.

Figure 4: 2D projections induced by metric learning. The figure shows 2D projections obtained by ADML (first three rows, with subset sizes 200, 600 and 1200) and by LDA (last row) for six randomly generated datasets (columns). Projections that distinguish the samples of the two classes (green and red; best viewed in colour on screen) are preferred.

If a learned metric is represented by $M = L^\top L$ and the data points are $x$, then $Lx$ represents a low-dimensional projection. The data points in the projection space reflect the geometry induced by the learned metric. Fig. 4 shows the projections of six random datasets, where the projections are produced by ADML using different data partitions, as well as by LDA. It is interesting to see that, given only 200 samples per local learner, ADML starts producing projections that distinguish the two classes. With larger sample sizes, the results are more reliable and the projections become more discriminative. On the other hand, with all samples available for training, LDA performed unstably on these datasets, possibly due to the large irrelevant component in the data covariance, as discussed above.
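A minimal sketch of this projection view, assuming the common factorisation of a learned metric as M = LᵀL; the random `L` and data here are illustrative stand-ins for a learned projection:

```python
import numpy as np

rng = np.random.default_rng(1)

# A metric M factorised as M = L^T L induces the projection x -> L x:
# Euclidean geometry in the projection space matches the M-distance.
X = rng.normal(size=(500, 3))    # data points, one per row
L = rng.normal(size=(2, 3))      # stand-in for a learned 2 x 3 projection
M = L.T @ L                      # the induced metric

Z = X @ L.T                      # 2-D projections of the data points

# Check: the Euclidean distance between two projected points equals
# the M-distance between the original points.
d_proj = np.linalg.norm(Z[0] - Z[1])
diff = X[0] - X[1]
d_metric = np.sqrt(diff @ M @ diff)
assert np.isclose(d_proj, d_metric)
```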

V-B Automatic Image Annotation

For a practical test, we apply metric learning algorithms to find a helpful distance measure for automatic annotation. Given a list of concepts of interest, the task of annotating an image is to determine whether each concept is present in the image. Essentially, annotation establishes a connection between the content of an image and its appearance, which is known as the problem of the semantic gap [Sme00]. A natural solution is to compare the image in question with a set of reference images that have been labelled with tags for the concepts of interest. Therefore, given the features of visual appearance, successful annotation relies on finding a useful geometry in the appearance space that reflects the connections between images in the semantic space, and annotation will benefit from effective metric learning methods. In the following, we discuss the details of applying metrics learned by different techniques to annotate a large-scale image dataset.

V-B1 Task specification and evaluation criterion

The NUS-WIDE dataset (http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm) [Chu09] has been used for the annotation test. The dataset has been compiled by gathering publicly available images from Flickr. To facilitate comparison, the visual appearance of each image is represented by six sets of standard features: colour histogram (64 bins), colour correlogram (144 bins), edge direction histogram (73 bins), wavelet texture (128 coefficients), block-wise colour moments (225 elements), and a bag-of-words feature based on SIFT descriptors (500 bags). In total there are 1134 visual features per image. The raw data are centred at zero, and each feature is normalised to unit variance. The task is to choose from 81 tags for each image. The tags are determined independently, so each image can have more than one tag.
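The preprocessing described above amounts to concatenating the six feature blocks and standardising each dimension. A minimal sketch with synthetic stand-in features (the gamma-distributed values and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the six NUS-WIDE feature blocks
# (64 + 144 + 73 + 128 + 225 + 500 = 1134 dimensions in total).
blocks = [rng.gamma(2.0, 1.0, (300, d)) for d in (64, 144, 73, 128, 225, 500)]
X = np.hstack(blocks)

# Centre at zero and scale each feature to unit variance, as in the paper.
X = (X - X.mean(axis=0)) / X.std(axis=0)
```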

The annotation procedure is based on the nearest neighbour rule. To decide the contents of an image, up to 15 of its nearest neighbours are retrieved according to the tested distance metric. We compare the level of presence of the tags among the nearest neighbours of an image with their background levels in the reference dataset, and predict the tags of the test image accordingly. If a tag is assigned to $p$ percent of all images in the reference dataset, and to $q$ percent of images among the nearest neighbours, the tag is predicted to be present in the test image if $q > p$. The number of nearest neighbours used for predicting the tags (from 1 to 15) has been determined by cross validation in our tests. The nearest neighbour rule has been shown effective for the annotation task [Jeo03]. More importantly, compared to other (multi-label) classification methods, the nearest neighbour rule provides the most direct assessment of the distance metrics learned by the different techniques: the annotation reflects how the semantic information in the reference set is preserved by the learned distance metric in the appearance feature space.
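The prediction rule above can be sketched in a few lines; the function name and the toy tag matrix are ours, not from the paper:

```python
import numpy as np

def predict_tags(tag_matrix, neighbour_idx):
    """Nearest-neighbour tag prediction (sketch of the rule above).

    tag_matrix:    (n_images, n_tags) binary array over the reference set.
    neighbour_idx: indices of a test image's nearest reference neighbours.

    A tag is predicted present when its frequency q among the neighbours
    exceeds its background frequency p over the whole reference set.
    """
    p = tag_matrix.mean(axis=0)                 # background tag frequencies
    q = tag_matrix[neighbour_idx].mean(axis=0)  # neighbourhood frequencies
    return q > p

# Toy usage with 3 tags and 6 reference images (illustrative data).
tags = np.array([[1, 0, 0],
                 [1, 0, 0],
                 [0, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1],
                 [0, 0, 0]])
pred = predict_tags(tags, neighbour_idx=[0, 1, 2])
# Only the first tag is over-represented among these neighbours.
```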

The F1-score is measured for the prediction performance of each tag among the test images, and the average over all tags is reported as a quantitative criterion. Having predicted the tags for a set of test images, the F1-score is computed as suggested in [Rij79], combining precision and recall as

F1 = 2 × precision × recall / (precision + recall),    (11)

where precision is the ratio of correct positive predictions to all positive predictions, and recall is the ratio of correct positive predictions to all actual positives. The F1-score ranges from 0 to 1, with higher scores indicating better results.
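A direct implementation of (11) from true positive, false positive and false negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 as in (11): harmonic mean of precision and recall.

    precision = tp / (tp + fp)  -- correct positives over all positive predictions
    recall    = tp / (tp + fn)  -- correct positives over all actual positives
    """
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 8 correct positives, 2 false positives, 4 missed positives
# gives precision 0.8 and recall 2/3, hence F1 = 8/11.
score = f1_score(tp=8, fp=2, fn=4)
```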

In the experiment, ADML and state-of-the-art distance metric learning methods are tested, as well as several baseline metrics particularly designed for image annotation. These distance metrics are:

  • Baseline, Euclidean (Base-EUC): the basic baseline metric is the Euclidean distance in the space of raw features (normalised to zero-mean and unit-variance). This metric does not require training.

  • Baseline, Joint Equal Contribution (Base-JEC): suggested by [Makadia08], this baseline attributes equal weights to the six different types of image features. This metric does not require training.

  • Baseline, ℓ1-penalised Logistic Regression (Base-LR): also suggested by [Makadia08], this baseline attributes ℓ1-penalised weights to the six types of image features to maximise dissimilarity and minimise similarity according to the given tags. This metric requires adaptation of the weights. However, after preprocessing, the penalised regression problem is basic and does not alter the Euclidean metric of the individual feature sets, so we do not count its training cost.

  • Tag Propagation (TagProp): a specialised image annotation method suggested by [tagprop]. This technique weighs the six feature sets by considering the nearest neighbours of individual images.

  • Distance metric learning methods discussed in Section II including Xing et al.’s method (Xing) [Xin03], Relevance Component Analysis (RCA) [She02], Discriminative Component Analysis (DCA) [Hoi06], Large Margin Nearest Neighbour Classifier (LMNN) [Wei06] and Information-Theoretic Metric Learning (ITML) [Dav07].

For Xing, RCA, DCA, ITML and ADML, the supervision is in the form of similar/dissimilar constraints. In the tests, 100,000 image pairs are randomly sampled from the dataset to measure the “background” number of shared concepts (common tags) between images. A pair of images is then considered similar if they share more concepts than the background number, and dissimilar otherwise.
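The pair-constraint construction can be sketched as follows; the function name and the synthetic tag matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_constraints(tag_matrix, n_pairs):
    """Build similar/dissimilar pair constraints (sketch of the rule above).

    A pair is labelled 'similar' if the two images share more tags than
    the background (average) number of shared tags over sampled pairs.
    """
    n = tag_matrix.shape[0]
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    shared = (tag_matrix[i] * tag_matrix[j]).sum(axis=1)
    background = shared.mean()          # background number of shared tags
    return list(zip(i, j, shared > background))

# Toy usage: 100 images, 10 binary tags (illustrative data).
tags = (rng.random((100, 10)) < 0.3).astype(int)
pairs = pair_constraints(tags, n_pairs=1000)
```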

In all the tests, algorithm parameters are chosen by cross validation. For ADML, the important settings are the number of columns of the projection matrix (the subspace dimension), the within-class and between-class neighbourhood sizes, the weight coefficient, and the subset size; each is selected from a range of candidate values (subset sizes from 200 to 1,000 were tried).

V-B2 Test on medium-sized subsets

We first assess the learned and baseline metrics with respect to different data sizes and different levels of complexity of the annotation task. Four medium-sized subsets of the NUS-WIDE dataset are compiled, each consisting of images with a small number (2-5) of labels. In particular, we take the two most popular concepts in the dataset and let Subset I contain the images tagged with at least one of those two concepts. Subset II subsumes Subset I and includes extra images of the third most popular concept. The construction carries on until Subset IV, consisting of images with the five most popular concepts. Of all the samples in each subset, 60% are used for training and 40% for testing. Among the training samples, about 15% are withheld for cross validation to fix algorithm parameters.
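The nested construction of Subsets I-IV can be sketched as follows; the synthetic tag matrix and popularity ranking are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Rank concepts by popularity; Subset k then contains all images carrying
# at least one of the (k + 1) most popular concepts, k = 1..4.
tags = (rng.random((2000, 10)) < 0.2).astype(int)   # images x concepts
order = np.argsort(-tags.sum(axis=0))                # concepts by popularity

subsets = []
for k in range(2, 6):                                # 2..5 most popular concepts
    top = order[:k]
    idx = np.where(tags[:, top].any(axis=1))[0]
    subsets.append(idx)

# Each subset subsumes the previous one by construction.
assert all(set(subsets[i]) <= set(subsets[i + 1]) for i in range(3))
```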

Each column represents a measure of distance. Each subset corresponds to a pair of rows, where “trn/tst” gives the numbers of training (including validation) and test samples, “F1” the average F1-score of predicting the labels as defined in (11), and “T” the wall time of the training stage. A detailed explanation of the results and further analysis are provided in Subsection V-B2.

Table II: Annotation Performances and Time Costs on Subsets I-IV of the NUS-WIDE Dataset.

The annotation and evaluation processes follow the discussion above. The average F1-scores achieved by each metric on each subset are listed in Table II. The results show that the accuracy of predicting the concepts is affected by the distance metric. As the task becomes more complex (more concepts to predict), the learned metrics become increasingly more helpful than the baseline metrics derived from the standard Euclidean distance. More specifically, in these tests the more effective learning techniques are those considering the local discriminative geometry of individual samples, such as ITML, LMNN and ADML, rather than those working on the global data distribution, such as DCA and RCA. In all the tests, ADML has provided a distance metric achieving superior or close-to-the-best annotation performance.

More relevant to our main motivation, the efficiency of metric learning, Table II lists the time costs of the tested techniques. We report the wall time elapsed during the training stage of the algorithms. In addition to the theoretical computational complexity, wall time provides a practical guide to the time cost. This is important because the parallel processing of ADML needs a comprehensive assessment including communication time and other technical overheads, which is difficult to gauge using only the CPU time of the core computations. The discussion in [Blelloch96] provides a more comprehensive view of complexity in the light of parallel computation.

The reported time has been recorded on a computer with the following configuration. The hardware settings are 2.9GHz Intel Xeon E5-2690 (8 Cores), with 20MB L3 Cache 8GT/s QPI (Max Turbo Freq. 3.8GHz, Min 3.3GHz) and 32GB 1600MHz ECC DDR3-RAM (Quad Channel). We have used the Matlab implementation of the metric learning algorithms (Xing, RCA, DCA, ITML and LMNN) provided by their authors. To facilitate comparison, we implement ADML in Matlab as well, with the help of Parallel Computation Toolbox. The algorithms are run on Matlab 2012b with six workers.

In the tests, ADML has achieved good performance with high efficiency. Moreover, at the algorithmic level, the relative advantage of ADML could be greater than what is shown by the practical results in Table II. First, all algorithms are tested in the same Matlab environment, which in the background invokes the Intel Math Kernel Library (MKL) for the fundamental mathematical subroutines. MKL accelerates all algorithms; however, it competes with ADML’s parallel local metric learners for the limited cores of one machine. ADML can therefore be expected to benefit more than the rival algorithms from an increasing number of processors. The same applies if advanced computational hardware is utilised, such as GPU- or FPGA-based implementations [fpga, Rai09]. Second, multiple subset sizes (from 200 to 1,000) have been explored for ADML, and the reported performance is that of the size which performed best on the validation set. However, as shown in the tests on the synthetic datasets in Subsection V-A, the size of the subsets has limited effect on the final metric. We discuss this issue in more detail in the next experiment on the full dataset.

V-B3 Test on the full dataset

Table III: Annotation of NUS-WIDE Dataset

The annotation test has been conducted on all the images in the NUS-WIDE dataset that are labelled with at least one concept. As above, 60% of the images are used for training and 40% for testing. Of the training images, 15% are used for cross validation to choose algorithm parameters. There are 81 concepts to be labelled in this dataset.

Table III shows the annotation performances of the tested metrics. The trends are similar to those obtained on the four subsets in the last experiment. With the complexity of predicting 81 concepts, the advantage of the learned metrics over the baseline metrics is more significant. ADML yields superior metrics using less time than the rival methods.

Figure 5: Annotation Performance and Time Cost w.r.t. Subset Sizes

In the experiments above, we chose the parameters for ADML by cross validation, configuring the algorithm for optimal annotation performance. On the other hand, the primary motivation for ADML is to save the time cost of metric learning, and the subset size is a major factor in the overall time cost. It is therefore useful to study how ADML behaves under varying subset sizes. Fig. 5 shows the F1-scores obtained with metrics learned from different subset sizes, and also compares the wall time of the training processes. The experiment demonstrates that ADML is stable w.r.t. different data partitions; the algorithm can thus achieve desirable time efficiency with little sacrifice in the quality of the learned distance metrics. Note that this conclusion corroborates the result on the synthetic data shown in Fig. 3.

Figure 6: Annotation Performance w.r.t. Within- and Between-class Neighbourhoods
Figure 7: Annotation Performance w.r.t. Weight Coefficient and Subspace Dimension

In our experiments, the performance of ADML is relatively stable with respect to the other algorithm settings, including the within- and between-class neighbourhood sizes, the weight coefficient and the subspace dimension (refer to Section III for detailed explanations of the parameters). Fig. 6 and Fig. 7 show the annotation performance of ADML as these parameters vary. Note that the parameters are adjusted in two pairs (the two neighbourhood sizes; the weight and the subspace dimension), so that we can report more of ADML’s behaviour in less space.

Vi Conclusion

In this paper, we have proposed a distance metric learning approach that emphasises encoding the categorical information of data. We further derive a part-based solution: metrics are learned on subsets divided from the entire dataset, and those partial solutions are aggregated into a final distance metric. The aggregated distance metric learning (ADML) technique takes advantage of distributed and parallel computation, making metric learning efficient in terms of both time and space usage.

To justify the gain in efficiency, we provide support for the learning quality of ADML in both theoretical and practical aspects. Theoretically, error bounds are proved showing that the divide-and-conquer technique gives results close to those that would be obtained by performing the discriminative learning method on the entire dataset. Empirically, the metric learned by ADML has been shown to reflect the intrinsic relations in the data and to help in a practical image annotation task. The success of partition-based processing may also be explained by the theory of the bias-variance trade-off [Has03]: learning on a subset, as opposed to the entire dataset, may introduce extra bias; on the other hand, ADML can be seen as a weighted combination of the results from the subsets, which decreases variance. Thus the overall performance may not be seriously affected. Theoretical exploration in this direction makes an interesting research subject.

It is worth noting that in our empirical study, ADML is implemented using Matlab and the Parallel Computation Toolbox. This implementation facilitates the comparison with other metric learning techniques, but as discussed above, the parallel computation in ADML overlaps with that inherited from Intel’s MKL subroutines (utilised by Matlab’s low-level operations). ADML can be implemented on a full-fledged MapReduce architecture, which enables the algorithm to scale to very large problems and to deal with data spread across multiple nodes of high-performance clusters. A sketch of a distributed implementation is given in Algorithm 1. The algorithm has been tested on a real cluster: via OpenMPI, we distributed 32 processes across 8 nodes of Intel Xeon E5 series CPUs, each having 8 cores of 2.9 or 3.4GHz. (To avoid irrelevant variables in a complex multi-user environment, we selected machines with light load at the time of testing (<1%) and utilised only 50% of the available cores; the master process communicated with 31 workers via an intranet.) The data are sampled from the distribution discussed in Subsection V-A. The master process in Algorithm 1 learned metrics from more than 188M samples in 855s, aggregating 200k locally learned metrics from the 31 worker processes. In contrast, a sequential implementation of the basic algorithm in Section III took 1718s to learn from 1.88M samples, 1% of the data tackled by the parallel Algorithm 1.

The map-steps, “adml_map()”, compute subset metrics according to (4) (see Section IV), and “adml_reduce()” performs the aggregation according to (10). The pseudo code assumes a Message Passing Interface (MPI) as the MapReduce architecture; it is easily convertible to other major programming languages and MapReduce realisations such as Java-Hadoop. Note that for a completely distributed implementation, the procedure can be further decentralised by adjusting the data access in Line 10: the master process may send instructions for retrieving locally stored data, instead of loading the data and sending them to the worker processes.

Algorithm 1 Pseudo code of implementing ADML in a MapReduce framework.
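As an illustration of the map/reduce structure described above, the following Python sketch partitions the data, learns a local metric on each subset, and aggregates the results. The local learner (an inverse-covariance stand-in) and the size-weighted average are simplifying assumptions, not the paper's actual rules (4) and (10):

```python
import numpy as np

rng = np.random.default_rng(5)

def adml_map(subset):
    """Learn a local metric on one subset (stand-in: regularised inverse covariance)."""
    cov = np.cov(subset, rowvar=False) + 1e-3 * np.eye(subset.shape[1])
    return np.linalg.inv(cov), len(subset)

def adml_reduce(local_results):
    """Aggregate local metrics, weighting each by its subset size (assumption)."""
    total = sum(n for _, n in local_results)
    return sum(n * M for M, n in local_results) / total

# Toy run: partition 3000 samples into 10 subsets and aggregate.
# In the MPI version, the map calls run on workers and the reduce on the master.
X = rng.normal(size=(3000, 5))
partitions = np.array_split(X, 10)
M = adml_reduce([adml_map(p) for p in partitions])
```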

Theorem 1

Proof:

Part I: If is invertible, we are allowed to write . Subtracting from the aggregation rule (9) gives

The matrix triangle inequality [Hor90] states that , thus

(12)

Since for a symmetric matrix , ,

(13)

Combining (12) and (13), we have the first part of Theorem 1. ∎

For the second part, we will need the following lemma.

Lemma 3.

If , and are positive definite matrices, then .

Proof:

As the sum of positive definite matrices, is a positive definite matrix itself. For a positive definite matrix,

Thus it is sufficient to show there exists some such that

Since for any , , we have

Lemma 3 leads to the second part of Theorem 1: since , we can combine this with (13) and arrive at the desired conclusion.

Theorem 2

Proof:

The aggregation rule (10) indicates

(14)

Subtracting from both sides of (14) gives

(15)
(16)
(17)

where (15) and (16) follow from the matrix triangle inequality. The relation (17) translates directly to Theorem 2.∎

References