The notion of pairwise metric—used throughout this survey as a generic term for distance, similarity or dissimilarity function—between data points plays an important role in many machine learning, pattern recognition and data mining techniques. (Metric-based learning methods were the focus of the recent SIMBAD European project, ICT 2008-FET 2008-2011; website: http://simbad-fp7.eu/) For instance, in classification, the $k$-Nearest Neighbor classifier (Cover and Hart, 1967) uses a metric to identify the nearest neighbors; many clustering algorithms, such as the prominent $k$-Means (Lloyd, 1982), rely on distance measurements between data points; in information retrieval, documents are often ranked according to their relevance to a given query based on similarity scores. Clearly, the performance of these methods depends on the quality of the metric: as in the saying “birds of a feather flock together”, we hope that it identifies as similar (resp. dissimilar) the pairs of instances that are indeed semantically close (resp. different). General-purpose metrics exist (e.g., the Euclidean distance and the cosine similarity for feature vectors, or the Levenshtein distance for strings) but they often fail to capture the idiosyncrasies of the data of interest. Improved results are expected when the metric is designed specifically for the task at hand. Since manual tuning is difficult and tedious, a lot of effort has gone into metric learning, the research topic devoted to automatically learning metrics from data.
1.1 Metric Learning in a Nutshell
Although its origins can be traced back to some earlier work (e.g., Short and Fukunaga, 1981; Fukunaga, 1990; Friedman, 1994; Hastie and Tibshirani, 1996; Baxter and Bartlett, 1997), metric learning really emerged in 2002 with the pioneering work of Xing et al. (2002) that formulates it as a convex optimization problem. It has since been a hot research topic, being the subject of tutorials at ICML 2010 (http://www.icml2010.org/tutorials.html) and ECCV 2010 (http://www.ics.forth.gr/eccv2010/tutorials.php), and workshops at ICCV 2011 (http://www.iccv2011.org/authors/workshops/), NIPS 2011 (http://nips.cc/Conferences/2011/Program/schedule.php?Session=Workshops) and ICML 2013 (http://icml.cc/2013/?page_id=41).
The goal of metric learning is to adapt some pairwise real-valued metric function, say the Mahalanobis distance $d_M(x, x') = \sqrt{(x - x')^T M (x - x')}$, to the problem of interest using the information brought by training examples. Most methods learn the metric (here, the positive semi-definite matrix $M$ in $d_M$) in a weakly-supervised way from pair or triplet-based constraints of the following form:
Must-link / cannot-link constraints (sometimes called positive / negative pairs):
$\mathcal{S} = \{(x_i, x_j) : x_i \text{ and } x_j \text{ should be similar}\}$,
$\mathcal{D} = \{(x_i, x_j) : x_i \text{ and } x_j \text{ should be dissimilar}\}$.
Relative constraints (sometimes called training triplets):
$\mathcal{R} = \{(x_i, x_j, x_k) : x_i \text{ should be more similar to } x_j \text{ than to } x_k\}$.
A metric learning algorithm basically aims at finding the parameters of the metric such that it best agrees with these constraints (see Figure 1 for an illustration), in an effort to approximate the underlying semantic metric. This is typically formulated as an optimization problem that has the following general form:

$\min_M \; \ell(M, \mathcal{S}, \mathcal{D}, \mathcal{R}) + \lambda R(M),$

where $\ell(M, \mathcal{S}, \mathcal{D}, \mathcal{R})$ is a loss function that incurs a penalty when training constraints are violated, $R(M)$ is some regularizer on the parameters $M$ of the learned metric and $\lambda \geq 0$ is the regularization parameter. As we will see in this survey, state-of-the-art metric learning formulations essentially differ by their choice of metric, constraints, loss function and regularizer.
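To make this generic form concrete, here is a minimal sketch (Python/NumPy, with hypothetical toy data and parameter values) of one possible instantiation: a hinge loss on must-link/cannot-link pairs combined with a Frobenius-norm regularizer, optimized by plain subgradient descent on $M$. For simplicity it ignores the PSD constraint on $M$, which Section 3 discusses at length.

```python
import numpy as np

def squared_mahalanobis(M, x, xp):
    d = x - xp
    return d @ M @ d

def loss_and_grad(M, S, D, lam=0.1, margin=1.0):
    """Hinge penalty: similar pairs should be closer than `margin`,
    dissimilar pairs farther than `margin`; plus Frobenius regularization."""
    grad = 2 * lam * M              # gradient of lam * ||M||_F^2
    total = lam * np.sum(M ** 2)
    for (x, xp) in S:
        viol = squared_mahalanobis(M, x, xp) - margin
        if viol > 0:                # similar pair too far apart
            total += viol
            grad += np.outer(x - xp, x - xp)
    for (x, xp) in D:
        viol = margin - squared_mahalanobis(M, x, xp)
        if viol > 0:                # dissimilar pair too close
            total += viol
            grad -= np.outer(x - xp, x - xp)
    return total, grad

# Toy usage: one similar pair, one dissimilar pair, 20 subgradient steps.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
S, D = [(X[0], X[1])], [(X[2], X[3])]
M = np.eye(3)
for _ in range(20):
    _, g = loss_and_grad(M, S, D)
    M -= 0.05 * g
```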
After the metric learning phase, the resulting function is used to improve the performance of a metric-based algorithm, which is most often $k$-Nearest Neighbors ($k$-NN), but may also be a clustering algorithm such as $k$-Means, a ranking algorithm, etc. The common process in metric learning is summarized in Figure 2.
1.2 Applications
Metric learning can potentially be beneficial whenever the notion of metric between instances plays an important role. Recently, it has been applied to problems as diverse as link prediction in networks (Shaw et al., 2011), state representation in reinforcement learning (Taylor et al., 2011), music recommendation (McFee et al., 2012), partitioning problems (Lajugie et al., 2014), identity verification (Ben et al., 2012), webpage archiving (Law et al., 2012), cartoon synthesis (Yu et al., 2012) and even assessing the efficacy of acupuncture (Liang et al., 2012), to name a few. In the following, we list three large fields of application where metric learning has been shown to be very useful.
Information retrieval
The objective of many information retrieval systems, such as search engines, is to provide the user with the most relevant documents according to his/her query. This ranking is often achieved by using a metric between two documents or between a document and a query. Applications of metric learning to these settings include the work of Lebanon (2006); Lee et al. (2008); McFee and Lanckriet (2010); Lim et al. (2013).
Bioinformatics
Many problems in bioinformatics involve comparing sequences such as DNA or protein sequences, or time series. These comparisons are based on structured metrics such as edit distance measures (or related string alignment scores) for strings, or the Dynamic Time Warping distance for time series. Learning these metrics to adapt them to the task of interest can greatly improve the results. Examples include the work of Xiong and Chen (2006); Saigo et al. (2006); Kato and Nagano (2010); Wang et al. (2012a).
1.3 Related Topics
We mention here three research topics that are related to metric learning but outside the scope of this survey.
Kernel learning
While metric learning is parametric (one learns the parameters of a given form of metric, such as a Mahalanobis distance), kernel learning is usually nonparametric: one learns the kernel matrix without any assumption on the form of the kernel that implicitly generated it. These approaches are thus very powerful but limited to the transductive setting and can hardly be applied to new data. The interested reader may refer to the recent survey on kernel learning by Abbasnejad et al. (2012).
Multiple kernel learning
Unlike kernel learning, Multiple Kernel Learning (MKL) is parametric: it learns a combination of predefined base kernels. In this regard, it can be seen as more restrictive than metric or kernel learning, but as opposed to kernel learning, MKL has very efficient formulations and can be applied in the inductive setting. The interested reader may refer to the recent survey on MKL by Gönen and Alpaydin (2011).
Dimensionality reduction
Supervised dimensionality reduction aims at finding a low-dimensional representation that maximizes the separation of labeled data and in this respect has connections with metric learning (some metric learning methods can be seen as finding a new feature space, and a few of them actually have the additional goal of making this feature space low-dimensional), although the primary objective is quite different. Unsupervised dimensionality reduction, or manifold learning, usually assumes that the (unlabeled) data lie on an embedded low-dimensional manifold within the higher-dimensional space and aims at “unfolding” it. These methods aim at capturing or preserving some properties of the original data (such as the variance or local distance measurements) in the low-dimensional representation. (These approaches are sometimes referred to as “unsupervised metric learning”, which is somewhat misleading because they do not optimize a notion of metric.) The interested reader may refer to the surveys by Fodor (2002) and van der Maaten et al. (2009).
1.4 Why this Survey?
As pointed out above, metric learning has been a hot topic of research in machine learning for a few years and has now reached a considerable level of maturity both practically and theoretically. The early review due to Yang and Jin (2006) is now largely outdated as it misses out on important recent advances: more than 75% of the work referenced in the present survey is post 2006. A more recent survey, written independently and in parallel to our work, is due to Kulis (2012). Despite some overlap, it should be noted that both surveys have their own strengths and complement each other well. Indeed, the survey of Kulis takes a more general approach, attempting to provide a unified view of a few core metric learning methods. It also goes into depth about topics that are only briefly reviewed here, such as kernelization, optimization methods and applications. On the other hand, the present survey is a detailed and comprehensive review of the existing literature, covering more than 50 approaches (including many recent works that are missing from Kulis’ paper) with their relative merits and drawbacks. Furthermore, we give particular attention to topics that are not covered by Kulis, such as metric learning for structured data and the derivation of generalization guarantees.
We think that the present survey may foster novel research in metric learning and be useful to a variety of audiences, in particular: (i) machine learners wanting to get introduced to or update their knowledge of metric learning will be able to quickly grasp the pros and cons of each method as well as the current strengths and limitations of the research area as a whole, and (ii) machine learning practitioners interested in applying metric learning to their own problem will find information to help them choose the methods most appropriate to their needs, along with links to source codes whenever available.
Note that we focus on general-purpose methods, i.e., methods that are applicable to a wide range of application domains. The abundant literature on metric learning designed specifically for computer vision is not addressed because the understanding of these approaches requires a significant amount of background in that area. For this reason, we think that they deserve a separate survey, targeted at the computer vision audience.
This survey is almost self-contained and has few prerequisites. For metric learning from feature vectors, we assume that the reader has some basic knowledge of linear algebra and convex optimization (if needed, see Boyd and Vandenberghe, 2004, for a brush-up). For metric learning from structured data, we assume that the reader has some familiarity with basic probability theory, statistics and likelihood maximization. The notations used throughout this survey are summarized in Table 1.
| $\mathbb{R}$ | Set of real numbers |
| $\mathbb{R}^d$ | Set of $d$-dimensional real-valued vectors |
| $\mathbb{R}^{c \times d}$ | Set of $c \times d$ real-valued matrices |
| $\mathbb{S}^d_+$ | Cone of symmetric PSD $d \times d$ real-valued matrices |
| $\mathcal{X}$ | Input (instance) space |
| $\mathcal{Y}$ | Output (label) space |
| $\mathcal{S}$ | Set of must-link constraints |
| $\mathcal{D}$ | Set of cannot-link constraints |
| $\mathcal{R}$ | Set of relative constraints |
| $z = (x, y)$ | An arbitrary labeled instance |
| $x$ | An arbitrary vector |
| $M$ | An arbitrary matrix |
| $\text{tr}(M)$ | Trace of matrix $M$ |
| $[\cdot]_+ = \max(0, \cdot)$ | Hinge loss function |
| $\mathrm{x}$ | String of finite size |
The rest of this paper is organized as follows. We first assume that data consist of vectors lying in some feature space $\mathcal{X} \subseteq \mathbb{R}^d$. Section 2 describes key properties that we will use to provide a taxonomy of metric learning algorithms. In Section 3, we review the large body of work dealing with supervised Mahalanobis distance learning. Section 4 deals with recent advances and trends in the field, such as linear similarity learning, nonlinear and local methods, histogram distance learning, the derivation of generalization guarantees and semi-supervised metric learning methods. We cover metric learning for structured data in Section 5, with a focus on edit distance learning. Lastly, we conclude this survey in Section 6 with a discussion on the current limitations of the existing literature and promising directions for future research.
2 Key Properties of Metric Learning Algorithms
Except for a few early methods, most metric learning algorithms are essentially “competitive” in the sense that they are able to achieve state-of-the-art performance on some problems. However, each algorithm has its intrinsic properties (e.g., type of metric, ability to leverage unsupervised data, good scalability with dimensionality, generalization guarantees, etc.) and emphasis should be placed on those when deciding which method to apply to a given problem. In this section, we identify and describe five key properties of metric learning algorithms, summarized in Figure 3. We use them to provide a taxonomy of the existing literature: the main features of each method are given in the summary table. (Whenever possible, we use the acronyms provided by the authors of the studied methods. When there is no known acronym, we take the liberty of choosing one.)
Learning Paradigm
We will consider three learning paradigms:
Fully supervised: the metric learning algorithm has access to a set of labeled training instances $\{z_i = (x_i, y_i)\}_{i=1}^n$, where each training example is composed of an instance $x_i \in \mathcal{X}$ and a label (or class) $y_i \in \mathcal{Y}$. $\mathcal{Y}$ is a discrete and finite set of labels (unless stated otherwise). In practice, the label information is often used to generate specific sets of pair/triplet constraints $\mathcal{S}$, $\mathcal{D}$, $\mathcal{R}$, for instance based on a notion of neighborhood (see the sketch after this list). (These constraints are usually derived from the labels prior to learning the metric and never challenged. Note that Wang et al. (2012b) propose a more refined (but costly) approach to the problem of building the constraints from labels: their method alternates between selecting the most relevant constraints given the current metric and learning a new metric based on the current constraints.)
Weakly supervised: the algorithm has no access to the labels of individual training instances: it is only provided with side information in the form of sets of constraints $\mathcal{S}$, $\mathcal{D}$, $\mathcal{R}$. This is a meaningful setting in a variety of applications where labeled data is costly to obtain while such side information is cheap: examples include users’ implicit feedback (e.g., clicks on search engine results), citations among articles or links in a network. This can be seen as having label information only at the pair/triplet level.
Semi-supervised: besides the (full or weak) supervision, the algorithm has access to a (typically large) sample of unlabeled instances for which no side information is available. This is useful to avoid overfitting when the labeled data or side information are scarce.
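As an illustration of how the fully supervised setting reduces to the weakly supervised one, here is a minimal sketch (a hypothetical helper, not taken from any of the surveyed methods) that derives the sets $\mathcal{S}$, $\mathcal{D}$ and $\mathcal{R}$ from class labels using a Euclidean $k$-neighborhood:

```python
import numpy as np

def constraints_from_labels(X, y, k=3):
    """Build must-link pairs S, cannot-link pairs D and relative
    triplets R from labels, using Euclidean k-nearest neighbors."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    S, D, R = [], [], []
    for i in range(n):
        order = np.argsort(dist[i])
        neighbors = [j for j in order if j != i][:k]
        for j in neighbors:
            if y[i] == y[j]:
                S.append((i, j))        # must-link
            else:
                D.append((i, j))        # cannot-link
        same = [j for j in neighbors if y[j] == y[i]]
        diff = [j for j in range(n) if y[j] != y[i]]
        for j in same:
            for l in diff:
                R.append((i, j, l))     # x_i closer to x_j than to x_l
    return S, D, R
```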
Form of Metric
Clearly, the form of the learned metric is a key choice. One may identify three main families of metrics:
Linear metrics, such as the Mahalanobis distance. Their expressive power is limited but they are easier to optimize (they usually lead to convex formulations, and thus global optimality of the solution) and less prone to overfitting.
Nonlinear metrics, such as the $\chi^2$ histogram distance. They often give rise to nonconvex formulations (subject to local optimality) and may overfit, but they can capture nonlinear variations in the data.
Local metrics, where multiple (linear or nonlinear) local metrics are learned (typically simultaneously) to better deal with complex problems, such as heterogeneous data. They are however more prone to overfitting than global methods since the number of parameters they learn can be very large.
Scalability
With the amount of available data growing fast, the problem of scalability arises in all areas of machine learning. First, it is desirable for a metric learning algorithm to scale well with the number of training examples (or constraints). As we will see, learning the metric in an online way is one of the solutions. Second, metric learning methods should also scale reasonably well with the dimensionality $d$ of the data. However, since metric learning is often phrased as learning a $d \times d$ matrix, designing algorithms that scale reasonably well with this quantity is a considerable challenge.
Optimality of the Solution
This property refers to the ability of the algorithm to find the parameters of the metric that satisfy best the criterion of interest. Ideally, the solution is guaranteed to be the global optimum—this is essentially the case for convex formulations of metric learning. On the contrary, for nonconvex formulations, the solution may only be a local optimum.
Dimensionality Reduction
As noted earlier, metric learning is sometimes formulated as finding a projection of the data into a new feature space. An interesting byproduct in this case is to look for a low-dimensional projected space, allowing faster computations as well as more compact representations. This is typically achieved by forcing or regularizing the learned metric matrix to be low-rank.
3 Supervised Mahalanobis Distance Learning
This section deals with (fully or weakly) supervised Mahalanobis distance learning (sometimes simply referred to as distance metric learning), which has attracted a lot of interest due to its simplicity and nice interpretation in terms of a linear projection. We start by presenting the Mahalanobis distance and two important challenges associated with learning this form of metric.
The Mahalanobis distance
This term comes from Mahalanobis (1936) and originally refers to a distance measure that incorporates the correlation between features:

$d(x, x') = \sqrt{(x - x')^T \Sigma^{-1} (x - x')},$

where $x$ and $x'$ are random vectors from the same distribution with covariance matrix $\Sigma$. By an abuse of terminology common in the metric learning literature, we will in fact use the term Mahalanobis distance to refer to generalized quadratic distances, defined as

$d_M(x, x') = \sqrt{(x - x')^T M (x - x')}$

and parameterized by $M \in \mathbb{S}^d_+$, where $\mathbb{S}^d_+$ is the cone of symmetric positive semi-definite (PSD) $d \times d$ real-valued matrices (see Figure 4). (Note that in practice, to get rid of the square root, the Mahalanobis distance is learned in its more convenient squared form $d^2_M$.) $M \in \mathbb{S}^d_+$ ensures that $d_M$ satisfies the properties of a pseudo-distance: for all $x, x', x''$, $d_M(x, x') \geq 0$, $d_M(x, x) = 0$, $d_M(x, x') = d_M(x', x)$, and $d_M(x, x'') \leq d_M(x, x') + d_M(x', x'')$.
Note that when $M$ is the identity matrix $I$, we recover the Euclidean distance. Otherwise, one can express $M$ as $L^T L$, where $L \in \mathbb{R}^{k \times d}$ and $k$ is the rank of $M$. We can then rewrite $d_M(x, x')$ as follows:

$d_M(x, x') = \sqrt{(x - x')^T L^T L (x - x')} = \sqrt{(Lx - Lx')^T (Lx - Lx')} = \|Lx - Lx'\|_2.$
Thus, a Mahalanobis distance implicitly corresponds to computing the Euclidean distance after the linear projection of the data defined by the transformation matrix $L$. Note that if $M$ is low-rank, i.e., $k = \text{rank}(M) < d$, then it induces a linear projection of the data into a space of lower dimension $k$. It thus allows a more compact representation of the data and cheaper distance computations, especially when the original feature space is high-dimensional. These nice properties explain why learning Mahalanobis distances has attracted a lot of interest and is a major component of metric learning.
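This equivalence between a Mahalanobis distance and the Euclidean distance after a linear projection is easy to check numerically; a minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 2
L = rng.normal(size=(k, d))   # rectangular L induces a rank-k projection
M = L.T @ L                   # M is PSD by construction, rank <= k
x, xp = rng.normal(size=d), rng.normal(size=d)

d_mahalanobis = np.sqrt((x - xp) @ M @ (x - xp))
d_projected = np.linalg.norm(L @ x - L @ xp)
assert np.isclose(d_mahalanobis, d_projected)
```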
This leads us to two important challenges associated with learning Mahalanobis distances. The first one is to maintain $M \in \mathbb{S}^d_+$ in an efficient way during the optimization process. A simple way to do this is to use the projected gradient method, which consists in alternating between a gradient step and a projection step onto the PSD cone by setting the negative eigenvalues to zero. (Note that Qian et al. (2013) have proposed some heuristics to avoid doing this projection at each iteration.) However, this is expensive for high-dimensional problems as eigenvalue decomposition scales in $O(d^3)$. The second challenge is to learn a low-rank matrix (which implies a low-dimensional projection space, as noted earlier) instead of a full-rank one. Unfortunately, optimizing $M$ subject to a rank constraint or regularization is NP-hard and thus cannot be carried out efficiently.
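A minimal sketch of the projection step onto the PSD cone mentioned above (the eigenvalue decomposition is the $O(d^3)$ bottleneck):

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by zeroing out
    its negative eigenvalues (Frobenius-norm projection)."""
    M = (M + M.T) / 2                      # enforce symmetry first
    eigvals, eigvecs = np.linalg.eigh(M)   # O(d^3)
    eigvals = np.maximum(eigvals, 0.0)
    return (eigvecs * eigvals) @ eigvecs.T
```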
The rest of this section is a comprehensive review of the supervised Mahalanobis distance learning methods of the literature. We first present two early approaches (Section 3.1). We then discuss methods that are specific to $k$-nearest neighbors (Section 3.2), inspired from information theory (Section 3.3), online learning approaches (Section 3.4), multi-task learning (Section 3.5) and a few more that do not fit any of the previous categories (Section 3.6).
3.1 Early Approaches
The approaches in this section deal with the PSD constraint in a rudimentary way.
MMC (Xing et al.)
The seminal work of Xing et al. (2002) is the first Mahalanobis distance learning method (source code available at: http://www.cs.cmu.edu/~epxing/papers/). It relies on a convex formulation with no regularization, which aims at maximizing the sum of distances between dissimilar points while keeping the sum of distances between similar examples small:

$\max_{M \in \mathbb{S}^d_+} \sum_{(x_i, x_j) \in \mathcal{D}} d_M(x_i, x_j) \quad \text{s.t.} \quad \sum_{(x_i, x_j) \in \mathcal{S}} d^2_M(x_i, x_j) \leq 1. \qquad (1)$

The algorithm used to solve (1) is a simple projected gradient approach requiring the full eigenvalue decomposition of $M$ at each iteration. This is typically intractable for medium and high-dimensional problems.
S&J (Schultz & Joachims)
The method proposed by Schultz and Joachims (2003) relies on the parameterization $M = A^T W A$, where $A$ is fixed and known and $W$ is diagonal. We get:

$d^2_M(x, x') = (Ax - Ax')^T W (Ax - Ax').$

By definition, $M$ is PSD and thus one can optimize over the diagonal matrix $W$ and avoid the need for costly projections onto the PSD cone. They propose a formulation based on triplet constraints:

$\min_W \|M\|^2_{\mathcal{F}} + c \sum_{i,j,k} \xi_{ijk} \quad \text{s.t.} \quad d^2_M(x_i, x_k) - d^2_M(x_i, x_j) \geq 1 - \xi_{ijk} \;\; \forall (x_i, x_j, x_k) \in \mathcal{R}, \qquad (2)$

where $\|M\|^2_{\mathcal{F}}$ is the squared Frobenius norm of $M$, the $\xi_{ijk}$’s are “slack” variables to allow soft constraints (this is a classic trick used for instance in soft-margin SVM, see Cortes and Vapnik, 1995; throughout this survey, we will consistently use the symbol $\xi$ to denote slack variables) and $c$ is the trade-off parameter between regularization and constraint satisfaction. Problem (2) is convex and can be solved efficiently. The main drawback of this approach is that it is less general than full Mahalanobis distance learning: one only learns a weighting $W$ of the features. Furthermore, $A$ must be chosen manually.
3.2 Approaches Driven by Nearest Neighbors
The objective functions of the methods presented in this section are related to a nearest neighbor prediction rule.
NCA (Goldberger et al.)
The idea of Neighbourhood Component Analysis (NCA), introduced by Goldberger et al. (2004), is to optimize the expected leave-one-out error of a stochastic nearest neighbor classifier in the projection space induced by $M$ (source code available at: http://www.ics.uci.edu/~fowlkes/software/nca/). They use the decomposition $M = L^T L$ and they define the probability that $x_j$ is the neighbor of $x_i$ by

$p_{ij} = \frac{\exp(-\|Lx_i - Lx_j\|^2_2)}{\sum_{l \neq i} \exp(-\|Lx_i - Lx_l\|^2_2)}, \quad p_{ii} = 0.$

Then, the probability that $x_i$ is correctly classified is:

$p_i = \sum_{j : y_j = y_i} p_{ij}.$

They learn the distance by solving:

$\max_L \sum_i p_i. \qquad (3)$

Note that the matrix $L$ can be chosen to be rectangular, inducing a low-rank $M$. The main limitation of (3) is that it is nonconvex and thus subject to local maxima. Hong et al. (2011) later proposed to learn a mixture of NCA metrics, while Tarlow et al. (2013) generalize NCA to $k$-NN with $k > 1$.
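A minimal sketch of the NCA objective (3), assuming the softmax form of $p_{ij}$ given above (maximizing it with respect to $L$, e.g. by gradient ascent, is left out):

```python
import numpy as np

def nca_objective(L, X, y):
    """Expected number of correctly classified points under the
    stochastic nearest-neighbor rule in the space projected by L."""
    Z = X @ L.T                                     # project the data
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(sq, np.inf)                    # ensures p_ii = 0
    P = np.exp(-sq)
    P /= P.sum(axis=1, keepdims=True)               # rows give p_ij
    same_class = (y[:, None] == y[None, :])
    return (P * same_class).sum()                   # sum_i p_i
```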
MCML (Globerson & Roweis)
Shortly after Goldberger et al., Globerson and Roweis (2005) proposed MCML (Maximally Collapsing Metric Learning), an alternative convex formulation based on minimizing a KL divergence between $p_{ij}$ and an ideal distribution, which can be seen as attempting to collapse each class to a single point. (An implementation is available within the Matlab Toolbox for Dimensionality Reduction: http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html) Unlike NCA, the optimization is done with respect to the matrix $M$ and the problem is thus convex. However, like MMC, MCML requires costly projections onto the PSD cone.
LMNN (Weinberger et al.)
Large Margin Nearest Neighbors (LMNN), introduced by Weinberger et al. (2005; 2008; 2009), is one of the most widely-used Mahalanobis distance learning methods and has been the subject of many extensions (described in later sections). Source code is available at: http://www.cse.wustl.edu/~kilian/code/code.html. One of the reasons for its popularity is that the constraints are defined in a local way: the $k$ nearest neighbors (the “target neighbors”) of any training instance should belong to the correct class while keeping away instances of other classes (the “impostors”). The Euclidean distance is used to determine the target neighbors. Formally, the constraints are defined in the following way:

$\mathcal{S} = \{(x_i, x_j) : y_i = y_j \text{ and } x_j \text{ belongs to the } k\text{-neighborhood of } x_i\},$
$\mathcal{R} = \{(x_i, x_j, x_k) : (x_i, x_j) \in \mathcal{S}, y_i \neq y_k\}.$

The distance is learned using the following convex program:

$\min_{M \in \mathbb{S}^d_+} (1 - \mu) \sum_{(x_i, x_j) \in \mathcal{S}} d^2_M(x_i, x_j) + \mu \sum_{i,j,k} \xi_{ijk}$
$\text{s.t.} \quad d^2_M(x_i, x_k) - d^2_M(x_i, x_j) \geq 1 - \xi_{ijk} \quad \forall (x_i, x_j, x_k) \in \mathcal{R}, \qquad (4)$

where $\mu \in [0, 1]$ controls the “pull/push” trade-off. The authors developed a special-purpose solver—based on subgradient descent and careful book-keeping—that is able to deal with billions of constraints. Alternative ways of solving the problem have been proposed (Torresani and Lee, 2006; Nguyen and Guo, 2008; Park et al., 2011; Der and Saul, 2012). LMNN generally performs very well in practice, although it is sometimes prone to overfitting due to the absence of regularization, especially in high dimension. It is also very sensitive to the ability of the Euclidean distance to select relevant target neighbors. Note that Do et al. (2012) highlighted a relation between LMNN and Support Vector Machines.
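A minimal sketch of the LMNN objective, with the pull term over target-neighbor pairs and the hinge-based push term over triplets (the special-purpose solver is beyond the scope of this sketch):

```python
import numpy as np

def lmnn_loss(M, X, targets, triplets, mu=0.5):
    """Pull term over target-neighbor pairs (i, j); push term over
    triplets (i, j, k): impostor x_k must be at least 1 farther than x_j."""
    def sqdist(i, j):
        d = X[i] - X[j]
        return d @ M @ d
    pull = sum(sqdist(i, j) for (i, j) in targets)
    push = sum(max(0.0, 1.0 + sqdist(i, j) - sqdist(i, k))
               for (i, j, k) in triplets)
    return (1 - mu) * pull + mu * push
```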
3.3 Information-Theoretic Approaches
The methods presented in this section frame metric learning as an optimization problem involving an information measure.
RCA (Bar-Hillel et al.)
Relevant Component Analysis (Shental et al., 2002; Bar-Hillel et al., 2003, 2005) makes use of positive pairs only and is based on subsets of the training examples called “chunklets” (source code available at: http://www.scharp.org/thertz/code.html). These are obtained from the set of positive pairs by applying a transitive closure: for instance, if $(x_1, x_2) \in \mathcal{S}$ and $(x_2, x_3) \in \mathcal{S}$, then $x_1$, $x_2$ and $x_3$ belong to the same chunklet. Points in a chunklet are believed to share the same label. Assuming a total of $n$ points in $k$ chunklets, the algorithm is very efficient since it simply amounts to computing the following matrix:

$C = \frac{1}{n} \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ji} - \hat{m}_j)(x_{ji} - \hat{m}_j)^T,$

where chunklet $j$ consists of $\{x_{ji}\}_{i=1}^{n_j}$ and $\hat{m}_j$ is its mean. Thus, RCA essentially reduces the within-chunklet variability in an effort to identify features that are irrelevant to the task. The inverse of $C$ is used in a Mahalanobis distance. The authors have shown that (i) it is the optimal solution to an information-theoretic criterion involving a mutual information measure, and (ii) it is also the optimal solution to the optimization problem consisting in minimizing the within-class distances. An obvious limitation of RCA is that it cannot make use of the discriminative information brought by negative pairs, which explains why it is not very competitive in practice. RCA was later extended to handle negative pairs, at the cost of a more expensive algorithm (Hoi et al., 2006; Yeung and Chang, 2006).
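A minimal sketch of the RCA computation described above (assuming chunklets are given as index lists and that the within-chunklet covariance is invertible):

```python
import numpy as np

def rca(X, chunklets):
    """X: (n, d) data; chunklets: list of index arrays sharing a label.
    Returns the Mahalanobis matrix M = C^{-1} of the RCA distance."""
    n, d = X.shape
    C = np.zeros((d, d))
    for idx in chunklets:
        centered = X[idx] - X[idx].mean(axis=0)  # subtract chunklet mean
        C += centered.T @ centered
    C /= n                                       # within-chunklet covariance
    return np.linalg.inv(C)                      # assumes C is invertible
```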
ITML (Davis et al.)
Information-Theoretic Metric Learning (ITML), proposed by Davis et al. (2007), is an important work because it introduces LogDet divergence regularization that will later be used in several other Mahalanobis distance learning methods (e.g., Jain et al., 2008; Qi et al., 2009). Source code is available at: http://www.cs.utexas.edu/~pjain/itml/. This Bregman divergence on positive definite matrices is defined as:

$D_{ld}(M, M_0) = \text{tr}(M M_0^{-1}) - \log\det(M M_0^{-1}) - d,$

where $d$ is the dimension of the input space and $M_0$ is some positive definite matrix we want to remain close to. In practice, $M_0$ is often set to $I$ (the identity matrix) and thus the regularization aims at keeping the learned distance close to the Euclidean distance. The key feature of the LogDet divergence is that it is finite if and only if $M$ is positive definite. Therefore, minimizing $D_{ld}(M, M_0)$ provides an automatic and cheap way of preserving the positive semi-definiteness of $M$. ITML is formulated as follows:

$\min_{M \in \mathbb{S}^d_+} D_{ld}(M, M_0) + \gamma \sum_{i,j} \xi_{ij}$
$\text{s.t.} \quad d^2_M(x_i, x_j) \leq u + \xi_{ij} \quad \forall (x_i, x_j) \in \mathcal{S},$
$\quad\quad\;\; d^2_M(x_i, x_j) \geq v - \xi_{ij} \quad \forall (x_i, x_j) \in \mathcal{D}, \qquad (5)$

where $u, v \in \mathbb{R}$ are threshold parameters and $\gamma$ is the trade-off parameter. ITML thus aims at satisfying the similarity and dissimilarity constraints while staying as close as possible to the Euclidean distance (if $M_0 = I$). More precisely, the information-theoretic interpretation behind minimizing $D_{ld}(M, M_0)$ is that it is equivalent to minimizing the KL divergence between two multivariate Gaussian distributions parameterized by $M$ and $M_0$. The algorithm proposed to solve (5) is efficient, converges to the global minimum and the resulting distance performs well in practice. A limitation of ITML is that $M_0$, which must be picked by hand, can have an important influence on the quality of the learned distance. Note that Kulis et al. (2009) have shown how hashing can be used together with ITML to achieve fast similarity search.
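A minimal sketch of the LogDet divergence $D_{ld}$ used by ITML (the positive-definiteness test via the sign of the determinant is a simplification; a strict check would inspect the eigenvalues):

```python
import numpy as np

def logdet_div(M, M0):
    """Bregman divergence D_ld(M, M0) = tr(M M0^-1) - logdet(M M0^-1) - d,
    finite iff M is positive definite (given M0 positive definite)."""
    d = M.shape[0]
    A = M @ np.linalg.inv(M0)
    sign, logdet = np.linalg.slogdet(A)
    if sign <= 0:
        return np.inf      # det <= 0: M cannot be positive definite
    return np.trace(A) - logdet - d
```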
SDML (Qi et al.)
With Sparse Distance Metric Learning (SDML), Qi et al. (2009) specifically deal with the case of high-dimensional data together with few training samples, i.e., $n \ll d$. To avoid overfitting, they use a double regularization: the LogDet divergence (using $M_0 = I$ or $M_0 = \Omega^{-1}$, where $\Omega$ is the covariance matrix) and $L_1$-regularization on the off-diagonal elements of $M$. The justification for using this $L_1$-regularization is two-fold: (i) a practical one is that in high-dimensional spaces, the off-diagonal elements of $M$ are often very small, and (ii) a theoretical one suggested by a consistency result from a previous work in covariance matrix estimation (Ravikumar et al., 2011) that applies to SDML. They use a fast algorithm based on block-coordinate descent (the optimization is done over each row of $M$) and obtain very good performance for the specific case $n \ll d$.
3.4 Online Approaches
In online learning (Littlestone, 1988), the algorithm receives training instances one at a time and updates at each step the current hypothesis. Although the performance of online algorithms is typically inferior to batch algorithms, they are very useful to tackle large-scale problems that batch methods fail to address due to time and space complexity issues. Online learning methods often come with regret bounds, stating that the accumulated loss suffered along the way is not much worse than that of the best hypothesis chosen in hindsight. (A regret bound has the following general form: $\sum_{t=1}^{T} \ell_t(h_t) - \sum_{t=1}^{T} \ell_t(h^*) \leq \Delta(T)$, where $\Delta(T)$ is sublinear in the number of steps $T$, $h_t$ is the hypothesis at time $t$ and $h^*$ is the best batch hypothesis.)
POLA (Shalev-Shwartz et al.)
POLA (Shalev-Shwartz et al., 2004), for Pseudo-metric Online Learning Algorithm, is the first online Mahalanobis distance learning approach and learns the matrix $M$ as well as a threshold $b \geq 1$. At each step $t$, POLA receives a pair $(x_i, x_j, y_{ij})$, where $y_{ij} = 1$ if $(x_i, x_j) \in \mathcal{S}$ and $y_{ij} = -1$ if $(x_i, x_j) \in \mathcal{D}$, and performs two successive orthogonal projections:
Projection of the current solution $(M, b)$ onto the set of parameters that achieve zero hinge loss on the current pair, which is done efficiently (closed-form solution). The constraint basically requires that the distance between two instances of same (resp. different) labels be below (resp. above) the threshold $b$ with a margin 1. We get an intermediate solution that satisfies this constraint while staying as close as possible to the previous solution.
Projection of this intermediate solution onto the PSD cone, which is done rather efficiently (in the worst case, one only needs to compute the minimal eigenvalue of the intermediate matrix). This projects the matrix back onto the PSD cone. We thus get a new solution that yields a valid Mahalanobis distance.
A regret bound for the algorithm is provided.
LEGO (Jain et al.)
LEGO (Logdet Exact Gradient Online), developed by Jain et al. (2008), is an improved version of POLA based on LogDet divergence regularization. It features tighter regret bounds, more efficient updates and better practical performance.
RDML (Jin et al.)
RDML (Jin et al., 2009) is similar to POLA in spirit but is more flexible. At each step $t$, instead of forcing the margin constraint to be satisfied, it performs a gradient descent step of the following form (assuming Frobenius regularization):

$M_{t+1} = \pi_{\mathbb{S}^d_+}\left(M_t - \lambda \, y_{ij} (x_i - x_j)(x_i - x_j)^T\right),$

where $\pi_{\mathbb{S}^d_+}(\cdot)$ denotes the projection onto the PSD cone. The parameter $\lambda$ implements a trade-off between satisfying the pairwise constraint and staying close to the previous matrix $M_t$. Using some linear algebra, the authors show that this update can be performed by solving a convex quadratic program instead of resorting to eigenvalue computation like POLA. RDML is evaluated on several benchmark datasets and is shown to perform comparably to LMNN and ITML.
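A minimal sketch of this kind of online update (using an explicit eigendecomposition for the PSD projection rather than the authors' cheaper quadratic-program trick; the unit margin threshold is an assumption):

```python
import numpy as np

def online_pair_step(M, x, xp, y, lam=0.1):
    """One online update: y = +1 for a similar pair, -1 for dissimilar.
    Gradient step on the pair loss, then projection onto the PSD cone."""
    d = x - xp
    if y * (d @ M @ d - 1.0) > 0:          # hinge loss incurred
        M = M - lam * y * np.outer(d, d)
        eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
        M = (eigvecs * np.maximum(eigvals, 0)) @ eigvecs.T
    return M
```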
MDML (Kunapuli & Shavlik)
MDML (Kunapuli and Shavlik, 2012), for Mirror Descent Metric Learning, is an attempt to propose a general framework for online Mahalanobis distance learning. It is based on composite mirror descent (Duchi et al., 2010), which allows online optimization of many regularized problems. It can accommodate a large class of loss functions and regularizers for which efficient updates are derived, and the algorithm comes with a regret bound. Their study focuses on regularization with the nuclear norm (also called trace norm) introduced by Fazel et al. (2001) and defined as $\|M\|_* = \sum_i \sigma_i$, where the $\sigma_i$’s are the singular values of $M$. (Note that when $M \in \mathbb{S}^d_+$, $\|M\|_* = \text{tr}(M)$, which is much cheaper to compute.) It is known to be the best convex relaxation of the rank of the matrix and thus nuclear norm regularization tends to induce low-rank matrices. In practice, MDML has performance comparable to LMNN and ITML, is fast and sometimes induces low-rank solutions, but surprisingly the algorithm was not evaluated on large-scale datasets.
3.5 Multi-Task Metric Learning
This section covers Mahalanobis distance learning for the multi-task setting (Caruana, 1997), where given a set of related tasks, one learns a metric for each in a coupled fashion in order to improve the performance on all tasks.
mt-LMNN (Parameswaran & Weinberger)
Multi-Task LMNN (Parameswaran and Weinberger, 2010) is a straightforward adaptation of the ideas of Multi-Task SVM (Evgeniou and Pontil, 2004) to metric learning. Source code is available at: http://www.cse.wustl.edu/~kilian/code/code.html. Given $T$ related tasks, they model the problem as learning a shared Mahalanobis metric $d_{M_0}$ as well as task-specific metrics $d_{M_1}, \dots, d_{M_T}$ and define the metric for task $t$ as

$d_t(x, x') = \sqrt{(x - x')^T (M_0 + M_t)(x - x')}.$

Note that $M_0 + M_t \in \mathbb{S}^d_+$, hence $d_t$ is a valid pseudo-metric. The LMNN formulation is easily generalized to this multi-task setting so as to learn the metrics jointly, with a specific regularization term defined as follows:

$\gamma_0 \|M_0 - I\|^2_{\mathcal{F}} + \sum_{t=1}^{T} \gamma_t \|M_t\|^2_{\mathcal{F}},$

where $\gamma_t$ controls the regularization of $M_t$. When $\gamma_0 \to \infty$, the shared metric is simply the Euclidean distance, and the formulation reduces to $T$ independent LMNN formulations. On the other hand, when $\gamma_{t>0} \to \infty$, the task-specific matrices are simply zero matrices and the formulation reduces to LMNN on the union of all data. In-between these extreme cases, these parameters can be used to adjust the relative importance of each metric: $\gamma_0$ to set the overall level of shared information, and $\gamma_t$ to set the importance of $M_t$ with respect to the shared metric. The formulation remains convex and can be solved using the same efficient solver as LMNN. In the multi-task setting, mt-LMNN clearly outperforms single-task metric learning methods and other multi-task classification techniques such as mt-SVM.
MLCS (Yang et al.)
MLCS (Yang et al., 2011) is a different approach to the problem of multi-task metric learning. For each task $t$, the authors consider learning a Mahalanobis metric parameterized by the transformation matrix $L_t \in \mathbb{R}^{r \times d}$. They show that $L_t$ can be decomposed into a “subspace” part $L_0 \in \mathbb{R}^{r \times d}$ and a “low-dimensional metric” part $R_t \in \mathbb{R}^{r \times r}$ such that $L_t = R_t L_0$. The main assumption of MLCS is that all tasks share a common subspace, i.e., the same $L_0$ is used for all tasks. This parameterization can be used to extend most metric learning methods to the multi-task setting, although it breaks the convexity of the formulation and is thus subject to local optima. However, as opposed to mt-LMNN, it can be made low-rank by setting $r < d$ and thus has far fewer parameters to learn. In their work, MLCS is applied to the version of LMNN solved with respect to the transformation matrix (Torresani and Lee, 2006). The resulting method is evaluated on problems with very scarce training data, studying the performance for different values of $r$. It is shown to outperform mt-LMNN, but the setup is a bit unfair to mt-LMNN since it is forced to be low-rank by eigenvalue thresholding.
GPML (Yang et al.)
The work of Yang et al. (2012) identifies two drawbacks of previous multi-task metric learning approaches: (i) MLCS’s assumption of a common subspace is sometimes too strict and leads to a nonconvex formulation, and (ii) the Frobenius regularization of mt-LMNN does not preserve geometry. This property is defined as being the ability to propagate side-information: the task-specific metrics should be regularized so as to preserve the relative distance between training pairs. They introduce the following formulation, which extends any metric learning algorithm to the multi-task setting:

$\min_{M_0, M_1, \dots, M_T} \; \gamma_0 D_\phi(M_0, \bar{M}) + \sum_{t=1}^{T} \left( \ell_t(M_t) + \gamma_t D_\phi(M_t, M_0) \right), \qquad (6)$

where $\ell_t(M_t)$ is the loss function for task $t$ based on the training pairs/triplets (depending on the chosen algorithm), $D_\phi$ is a Bregman matrix divergence (Dhillon and Tropp, 2007) and $\bar{M}$ is a predefined metric (e.g., the identity matrix $I$). mt-LMNN can essentially be recovered from (6) by setting $D_\phi$ to the squared Frobenius distance and adding PSD constraints. The authors focus on the von Neumann divergence:

$D_{VN}(M, M_0) = \text{tr}(M \log M - M \log M_0 - M + M_0),$

where $\log M$ is the matrix logarithm of $M$. Like the LogDet divergence mentioned earlier in this survey (Section 3.3), the von Neumann divergence is known to be rank-preserving and to provide automatic enforcement of positive-semidefiniteness. The authors further show that minimizing this divergence encourages geometry preservation between the learned metrics. Problem (6) remains convex as long as the original algorithm used for solving each task is convex, and can be solved efficiently using gradient descent methods. In the experiments, the method is adapted to LMNN and outperforms single-task LMNN as well as mt-LMNN, especially when training data is very scarce.
TML (Zhang & Yeung)
Zhang and Yeung (2010) propose a transfer metric learning (TML) approach (source code available at: http://www.cse.ust.hk/~dyyeung/). They assume that we are given $m$ independent source tasks with enough labeled data and that a Mahalanobis distance $M_t$ has been learned for each source task $t$. The goal is to leverage the information of the source metrics to learn a distance $M$ for a target task, for which we only have a scarce amount of labeled data. No assumption is made about the relation between the source tasks and the target task: they may be positively/negatively correlated or uncorrelated. The problem (7) is formulated as the minimization of three terms: a hinge loss on all possible pairs of the target task, a Frobenius regularizer on $M$, and a third term modeling the relation between tasks based on a positive definite covariance matrix $\Omega$. Assuming that the source tasks are independent and of equal importance, $\Omega$ can be expressed in terms of $\omega_m$, the task covariances between the target task and the source tasks, and $\omega$, the variance of the target task. Problem (7) is convex and is solved using an alternating procedure that is guaranteed to converge to the global optimum: (i) fixing $\Omega$ and solving for $M$, which is done online with an algorithm similar to RDML, and (ii) fixing $M$ and solving for $\Omega$, leading to a second-order cone program whose number of variables and constraints is linear in the number of tasks. In practice, TML consistently outperforms metric learning methods without transfer when training data is scarce.
3.6 Other Approaches
In this section, we describe a few approaches that are outside the scope of the previous categories. The first two (LPML and SML) fall into the category of sparse metric learning methods. BoostMetric is inspired from the theory of boosting. DML-p revisits the original metric learning formulation of Xing et al. RML deals with the presence of noisy constraints. Finally, MLR learns a metric for solving a ranking task.
LPML (Rosales & Fung)
The method of Rosales and Fung (2006) aims at learning matrices with entire columns/rows set to zero, thus making $M$ low-rank. For this purpose, they use $L_1$ norm regularization and, restricting their framework to diagonally dominant matrices, they are able to formulate the problem as a linear program that can be solved efficiently. However, $L_1$ norm regularization favors sparsity at the entry level only, not specifically at the row/column level, even though in practice the learned matrix is sometimes low-rank. Furthermore, the approach is less general than Mahalanobis distances due to the restriction to diagonally dominant matrices.
SML (Ying et al.)
SML (Ying et al., 2009), for Sparse Metric Learning, is a distance learning approach that regularizes $M$ with the mixed $L_{2,1}$ norm defined as

$\|M\|_{2,1} = \sum_{i=1}^{d} \|M_i\|_2,$

where $M_i$ denotes the $i$-th row of $M$. (Source code is not available but is indicated as “coming soon” by the authors; check: http://www.enm.bris.ac.uk/staff/xyy/software.html) This norm tends to zero out entire rows of $M$ (as opposed to the $L_1$ norm used in LPML), and therefore performs feature selection. More precisely, they set $M = U^T W U$, where $U \in \mathcal{O}^d$ (the set of $d \times d$ orthonormal matrices) and $W \in \mathbb{S}^d_+$, and solve the following problem:

$\min_{U \in \mathcal{O}^d, W \in \mathbb{S}^d_+} \|W\|_{2,1} + \gamma \sum_{(x_i, x_j, x_k) \in \mathcal{R}} \xi_{ijk} \quad \text{s.t.} \quad d^2_M(x_i, x_k) - d^2_M(x_i, x_j) \geq 1 - \xi_{ijk}, \qquad (8)$

where $\gamma$ is the trade-off parameter. Unfortunately, $L_{2,1}$-regularized problems are typically difficult to optimize. Problem (8) is reformulated as a min-max problem and solved using smooth optimization (Nesterov, 2005). Overall, the algorithm has a fast convergence rate but each iteration has an $O(d^3)$ complexity. The method performs well in practice while achieving better dimensionality reduction than full-rank methods such as Rosales and Fung (2006). However, it cannot be applied to high-dimensional problems due to the complexity of the algorithm. Note that the same authors proposed a unified framework for sparse metric learning (Huang et al., 2009, 2011).
BoostMetric (Shen et al.)
BoostMetric (Shen et al., 2009, 2012) adapts to Mahalanobis distance learning the ideas of boosting, where a good hypothesis is obtained through a weighted combination of so-called “weak learners” (see the recent book on this matter by Schapire and Freund, 2012). Source code is available at: http://code.google.com/p/boosting/. The method is based on the property that any PSD matrix can be decomposed into a positive linear combination of trace-one rank-one matrices. Such matrices are thus used as weak learners and the authors adapt the popular boosting algorithm AdaBoost (Freund and Schapire, 1995) to this setting. The resulting algorithm is quite efficient since it does not require full eigenvalue decomposition but only the computation of the largest eigenvalue. In practice, BoostMetric achieves competitive performance but typically requires a very large number of iterations for high-dimensional datasets. Bi et al. (2011) further improve the scalability of the approach, while Liu and Vemuri (2012) introduce regularization on the weights as well as a term to reduce redundancy among the weak learners.
DML-p (Ying et al., Cao et al.)
DML-p generalizes the MMC formulation of Xing et al. by replacing the sum of distances between dissimilar points with a $p$-norm aggregate:

$\max_{M \in \mathbb{S}^d_+} \left( \sum_{(x_i, x_j) \in \mathcal{D}} d^p_M(x_i, x_j) \right)^{1/p} \quad \text{s.t.} \quad \sum_{(x_i, x_j) \in \mathcal{S}} d^2_M(x_i, x_j) \leq 1. \qquad (9)$

Note that by setting $p = 1$ we recover MMC. The authors show that (9) is convex for an appropriate range of values of $p$ and can be cast as a well-known eigenvalue optimization problem called “minimizing the maximal eigenvalue of a symmetric matrix”. They further show that it can be solved efficiently using a first-order algorithm that only requires the computation of the largest eigenvalue at each iteration (instead of the costly full eigen-decomposition used by Xing et al.). Experiments show competitive results and low computational complexity. A general drawback of DML-p is that it is not clear how to accommodate a regularizer (e.g., sparse or low-rank).
RML (Huang et al.)
Robust Metric Learning (Huang et al., 2010) is a method that can successfully deal with the presence of noisy/incorrect training constraints, a situation that can arise when they are not derived from class labels but from side information such as users’ implicit feedback. The approach is based on robust optimization (Ben-Tal et al., 2009): assuming that only a proportion $\eta$ of the training constraints (say triplets) are correct, it minimizes the worst-case loss over any fraction $\eta$ of the triplets:

$\min_{M \in \mathbb{S}^d_+} \max_{q \in Q(\eta)} \sum_{t=1}^{|\mathcal{R}|} q_t \, \ell\left(d^2_M(x_i^t, x_k^t) - d^2_M(x_i^t, x_j^t)\right), \qquad (10)$

where $\ell$ is taken to be the hinge loss and $Q(\eta)$ is the set of binary indicator vectors selecting a fraction $\eta$ of the triplets:

$Q(\eta) = \left\{ q \in \{0, 1\}^{|\mathcal{R}|} : \textstyle\sum_t q_t = \lceil \eta |\mathcal{R}| \rceil \right\}.$

In other words, Problem (10) minimizes the worst-case violation over all possible sets of correct constraints. $Q(\eta)$ can be replaced by its convex hull, leading to a semi-definite program with an infinite number of constraints. This can be further simplified into a convex minimization problem that can be solved either using subgradient descent or smooth optimization (Nesterov, 2005). However, both of these require a projection onto the PSD cone. Experiments on standard datasets show good robustness for up to 30% of incorrect triplets, while the performance of other methods such as LMNN is greatly damaged.
MLR (McFee & Lanckriet)
The idea of MLR (McFee and Lanckriet, 2010), for Metric Learning to Rank, is to learn a metric for a ranking task, where given a query instance, one aims at producing a ranked list of examples where relevant ones are ranked higher than irrelevant ones. Source code is available at: http://www-cse.ucsd.edu/~bmcfee/code/mlr. Let $\mathcal{Y}$ be the set of all permutations (i.e., possible rankings) over the training set. Given a Mahalanobis distance $d_M$ and a query $q$, the predicted ranking consists in sorting the instances by ascending $d_M(q, \cdot)$. The metric learning is based on Structural SVM (Tsochantaridis et al., 2005):

$\min_{M \in \mathbb{S}^d_+} \|M\|_* + C \sum_i \xi_i \quad \text{s.t.} \quad \langle M, \psi(q_i, y_i) - \psi(q_i, y) \rangle_{\mathcal{F}} \geq \Delta(y_i, y) - \xi_i \quad \forall i, \forall y \in \mathcal{Y}, \qquad (11)$

where $\|\cdot\|_*$ is the nuclear norm, $C$ the trade-off parameter, $\langle \cdot, \cdot \rangle_{\mathcal{F}}$ the Frobenius inner product, $\psi(q, y)$ the feature encoding of an input-output pair $(q, y)$ (the feature map $\psi$ is designed such that the ranking $y$ which maximizes $\langle M, \psi(q, y) \rangle_{\mathcal{F}}$ is the one given by ascending $d_M(q, \cdot)$), and $\Delta(y_i, y)$ the “margin” representing the loss of predicting ranking $y$ instead of the true ranking $y_i$. In other words, $\Delta(y_i, y)$ assesses the quality of ranking $y$ with respect to the best ranking and can be evaluated using several measures, such as the Area Under the ROC Curve (AUC), Precision-at-$k$ or Mean Average Precision (MAP). Since the number of constraints is super-exponential in the number of training instances, the authors solve (11) using a 1-slack cutting-plane approach (Joachims et al., 2009) which essentially iteratively optimizes over a small set of active constraints (adding the most violated ones at each step) using subgradient descent. However, the algorithm requires a full eigendecomposition of $M$ at each iteration, thus MLR does not scale well with the dimensionality of the data. In practice, it is competitive with other metric learning algorithms for $k$-NN classification and a structural SVM algorithm for ranking, and can induce low-rank solutions due to the nuclear norm. Lim et al. (2013) propose R-MLR, an extension of MLR to deal with the presence of noisy features (as opposed to noisy side information, which was investigated by the method RML presented earlier in this section) using the mixed $L_{2,1}$ norm as in SML (Ying et al., 2009). R-MLR is shown to be able to ignore most of the irrelevant features and outperforms MLR in this situation.
4 Other Advances in Metric Learning
So far, we focused on (linear) Mahalanobis metric learning which has inspired a large amount of work during the past ten years. In this section, we cover other advances and trends in metric learning for feature vectors. Most of the section is devoted to (fully and weakly) supervised methods. In Section 4.1, we address linear similarity learning. Section 4.2 deals with nonlinear metric learning (including the kernelization of linear methods), Section 4.3 with local metric learning and Section 4.4 with metric learning for histogram data. Section 4.5 presents the recently-developed frameworks for deriving generalization guarantees for supervised metric learning. We conclude this section with a review of semi-supervised metric learning (Section 4.6).
4.1 Linear Similarity Learning
Although most of the work in linear metric learning has focused on the Mahalanobis distance, other linear measures, in the form of similarity functions, have recently attracted some interest. These approaches are often motivated by the perspective of more scalable algorithms due to the absence of PSD constraint.
SiLA (Qamar et al.)
SiLA (Qamar et al., 2008) is an approach for learning similarity functions of the following form:

$K_A(x, x') = \frac{x^T A x'}{N(x, x')},$

where $A$ is not required to be PSD nor symmetric, and $N(x, x')$ is a normalization term which depends on $x$ and $x'$. This similarity function can be seen as a generalization of the cosine similarity, widely used in text and image retrieval (see for instance Baeza-Yates and Ribeiro-Neto, 1999; Sivic and Zisserman, 2009). The authors build on the same idea of “target neighbors” that was introduced in LMNN, but optimize the similarity in an online manner with an algorithm based on voted perceptron. At each step, the algorithm goes through the training set, updating the matrix when an example does not satisfy a criterion of separation. The authors present theoretical results that follow from the voted perceptron theory in the form of regret bounds for the separable and inseparable cases. In subsequent work, Qamar and Gaussier (2012) study the relationship between SiLA and RELIEF, an online feature reweighting algorithm.
gCosLA (Qamar & Gaussier)
gCosLA (Qamar and Gaussier, 2009) learns generalized cosine similarities of the form

$K_A(x, x') = \frac{x^T A x'}{\sqrt{x^T A x}\sqrt{x'^T A x'}},$

where $A \in \mathbb{S}^d_+$. It corresponds to a cosine similarity in the projection space implied by $A$. The algorithm itself, an online procedure, is very similar to that of POLA (presented in Section 3.4). Indeed, they essentially use the same loss function and also have a two-step approach: a projection onto the set of arbitrary matrices that achieve zero loss on the current example pair, followed by a projection back onto the PSD cone. The first projection is different from POLA’s (since the generalized cosine has a normalization factor that depends on $A$) but the authors manage to derive a closed-form solution. The second projection is based on a full eigenvalue decomposition of $A$, making the approach costly as dimensionality grows. A regret bound for the algorithm is provided and it is shown experimentally that gCosLA converges in fewer iterations than SiLA and is generally more accurate. Its performance is competitive with LMNN and ITML. Note that Nguyen and Bai (2010) optimize the same form of similarity based on a nonconvex formulation.
OASIS (Chechik et al.)
OASIS (Chechik et al., 2009, 2010) learns a bilinear similarity with a focus on large-scale problems (source code available at: http://ai.stanford.edu/~gal/Research/OASIS/). The bilinear similarity has been used for instance in image retrieval (Deng et al., 2011) and has the following simple form:

$S_M(x, x') = x^T M x',$

where $M$ is not required to be PSD nor symmetric. In other words, it is related to the (generalized) cosine similarity but does not include normalization nor PSD constraint. Note that when $M$ is the identity matrix, $S_M$ amounts to an unnormalized cosine similarity. The bilinear similarity has two advantages. First, it is efficiently computable for sparse inputs: if $x$ and $x'$ have $k_1$ and $k_2$ nonzero features, $S_M(x, x')$ can be computed in $O(k_1 k_2)$ time. Second, unlike the Mahalanobis distance, it can define a similarity measure between instances of different dimension (for example, a document and a query) if a rectangular matrix $M$ is used. Since $M$ is not required to be PSD, Chechik et al. are able to optimize $S_M$ in an online manner using a simple and efficient algorithm, which belongs to the family of Passive-Aggressive algorithms (Crammer et al., 2006). The initialization is $M_0 = I$, then at each step $t$, the algorithm draws a triplet $(x_i, x_j, x_k) \in \mathcal{R}$ and solves the following convex problem:

$M_t = \arg\min_M \frac{1}{2}\|M - M_{t-1}\|^2_{\mathcal{F}} + C\xi \quad \text{s.t.} \quad 1 - S_M(x_i, x_j) + S_M(x_i, x_k) \leq \xi, \; \xi \geq 0, \qquad (12)$

where $C$ is the trade-off parameter between minimizing the loss and staying close to the matrix obtained at the previous step. Clearly, if the triplet incurs no loss under $M_{t-1}$, then $M_t = M_{t-1}$ is the solution of (12). Otherwise, the solution is obtained from a simple closed-form update. In practice, OASIS achieves competitive results on medium-scale problems and unlike most other methods, is scalable to problems with millions of training instances. However, it cannot incorporate complex regularizers. Note that the same authors derived two more algorithms for learning bilinear similarities as applications of more general frameworks. The first one is based on online learning in the manifold of low-rank matrices (Shalit et al., 2010, 2012) and the second on adaptive regularization of weight matrices (Crammer and Chechik, 2012).
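A minimal sketch of the resulting passive-aggressive update (the closed form follows the PA-I recipe of Crammer et al., 2006; the small epsilon guard is an implementation detail, not part of the original algorithm):

```python
import numpy as np

def oasis_step(M, x, xp, xm, C=0.1):
    """One OASIS update on triplet (x, x+, x-): enforce
    S_M(x, x+) >= S_M(x, x-) + 1 with bilinear S_M(a, b) = a^T M b."""
    loss = max(0.0, 1.0 - x @ M @ xp + x @ M @ xm)
    if loss == 0.0:
        return M                          # passive step: constraint satisfied
    V = np.outer(x, xp - xm)              # gradient of the hinge w.r.t. M
    tau = min(C, loss / max((V ** 2).sum(), 1e-12))  # PA-I step size
    return M + tau * V
```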
SLLC (Bellet et al.)
Similarity Learning for Linear Classification (Bellet et al., 2012b) takes an original angle by focusing on metric learning for linear classification. As opposed to the pair and triplet-based constraints used in other approaches, the metric is optimized to be $(\epsilon, \gamma, \tau)$-good (Balcan et al., 2008a), a property based on an average over some points which has a deep connection with the performance of a sparse linear classifier built from such a similarity. SLLC learns a bilinear similarity $K_M(x, x') = x^T M x'$ and is formulated as an efficient unconstrained quadratic program:

$\min_M \frac{1}{n} \sum_{i=1}^{n} \left[ 1 - \frac{y_i}{\gamma |R|} \sum_{x_j \in R} y_j K_M(x_i, x_j) \right]_+ + \beta \|M\|^2_{\mathcal{F}}, \qquad (13)$

where $R$ is a set of reference points randomly selected from the training sample, $\gamma$ is the margin parameter, $[\cdot]_+$ is the hinge loss and $\beta$ the regularization parameter. Problem (13) essentially learns $M$ such that training examples are more similar on average to reference points of the same class than to reference points of the opposite class by a margin $\gamma$. In practice, SLLC is competitive with traditional metric learning methods, with the additional advantage of inducing extremely sparse classifiers. A drawback of the approach is that linear classifiers (unlike $k$-NN) cannot naturally deal with the multi-class setting, and thus one-vs-all or one-vs-one strategies must be used.
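To illustrate what the learned similarity is used for, here is a minimal sketch (toy data, identity matrix as a stand-in for the learned $M$) of the average-similarity voting rule underlying the goodness framework of Balcan et al.: examples are mapped to their similarities to the reference points, where a sparse linear classifier can then be built.

```python
import numpy as np

def similarity_map(X, refs, M):
    """phi(x) = (S_M(x, r_1), ..., S_M(x, r_K)): similarities to references."""
    return X @ M @ refs.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.where(X[:, 0] > 0, 1.0, -1.0)      # hypothetical toy labels
refs, y_refs = X[:10], y[:10]              # random reference sample
M = np.eye(5)                              # stand-in for the learned matrix
phi = similarity_map(X, refs, M)
pred = np.sign(phi @ y_refs / len(refs))   # average-similarity voting rule
acc = (pred == y).mean()
```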
RSL (Cheng)
Like OASIS and SLLC, Cheng (2013) also proposes to learn a bilinear similarity, but focuses on the setting of pair matching (predicting whether two pairs are similar). Pairs are of the form $(x, x')$, where $x \in \mathbb{R}^{d_1}$ and $x' \in \mathbb{R}^{d_2}$ potentially have different dimensionality, thus one has to learn a rectangular matrix $M \in \mathbb{R}^{d_1 \times d_2}$. This is a relevant setting for matching instances from different domains, such as images with different resolutions, or queries and documents. The matrix $M$ is set to have fixed rank $r$. RSL (Riemannian Similarity Learning) is formulated as follows:

$\min_{M : \text{rank}(M) = r} \sum_{(x_i, x'_j)} \ell\left(y_{ij} \, x_i^T M x'_j\right), \qquad (14)$

where $\ell$ is some differentiable loss function (such as the log loss or the squared hinge loss). The optimization is carried out efficiently using recent advances in optimization over Riemannian manifolds (Absil et al., 2008) and is based on the low-rank factorization of $M$. At each iteration, the procedure finds a descent direction in the tangent space of the current solution, followed by a retraction step to project the obtained matrix back onto the low-rank manifold. It outputs a local minimum of (14). Experiments are conducted on pair-matching problems where RSL achieves state-of-the-art results using a small rank matrix.
4.2 Nonlinear Methods
As we have seen, work in supervised metric learning has focused on linear metrics because they are more convenient to optimize (in particular, it is easier to derive convex formulations with the guarantee of finding the global optimum) and less prone to overfitting. In some cases, however, there is nonlinear structure in the data that linear metrics are unable to capture. The kernelization of linear methods can be seen as a satisfactory solution to this problem. This strategy is explained in Section 4.2.1. The few approaches consisting in directly learning nonlinear forms of metrics are addressed in Section 4.2.2.
4.2.1 Kernelization of Linear Methods
The idea of kernelization is to learn a linear metric in the nonlinear feature space induced by a kernel function and thereby combine the best of both worlds, in the spirit of what is done in SVM. Some metric learning approaches have been shown to be kernelizable (see for instance Schultz and Joachims, 2003; Shalev-Shwartz et al., 2004; Hoi et al., 2006; Torresani and Lee, 2006; Davis et al., 2007) using specific arguments, but in general kernelizing a particular metric algorithm is not trivial: a new formulation of the problem has to be derived, where interface to the data is limited to inner products, and sometimes a different implementation is necessary. Moreover, when kernelization is possible, one must learn an $n \times n$ matrix, where $n$ is the number of training examples. As the number of training examples gets large, the problem becomes intractable.
To address these limitations, Chatpatanasiri et al. (2010) have proposed general kernelization methods based on Kernel Principal Component Analysis (Schölkopf et al., 1998), a nonlinear extension of PCA (Pearson, 1901). In short, KPCA implicitly projects the data into the nonlinear (potentially infinite-dimensional) feature space induced by a kernel and performs dimensionality reduction in that space. The (unchanged) metric learning algorithm can then be used to learn a metric in that nonlinear space—this is referred to as the “KPCA trick”. They showed that the KPCA trick is theoretically sound for unconstrained metric learning algorithms (they prove representer theorems). Another trick (similar in spirit in the sense that it involves some nonlinear preprocessing of the feature space) is based on kernel density estimation and allows one to deal with both numerical and categorical attributes (He et al., 2013). General kernelization results can also be obtained from the equivalence between Mahalanobis distance learning in kernel space and linear transformation kernel learning (Jain et al., 2010, 2012), but are restricted to spectral regularizers. Lastly, Wang et al. (2011) address the problem of choosing an appropriate kernel function by proposing a multiple kernel framework for metric learning.
Note that kernelizing a metric learning algorithm may drastically improve the quality of the learned metric on highly nonlinear problems, but may also favor overfitting (because pair or triplet-based constraints become much easier to satisfy in a nonlinear, high-dimensional kernel space) and thereby lead to poor generalization performance.
4.2.2 Learning Nonlinear Forms of Metrics
A few approaches have tackled the direct optimization of nonlinear forms of metrics. These approaches are subject to local optima and more inclined to overfit the data, but have the potential to significantly outperform linear methods on some problems.
LSMD (Chopra et al.)
Chopra et al. (2005) pioneered the nonlinear metric learning literature. They learn a nonlinear projection $G_W(x)$ parameterized by a vector $W$ such that the distance $d_W(x, x') = \|G_W(x) - G_W(x')\|$ in the low-dimensional target space is small for positive pairs and large for negative pairs. No assumption is made about the nature of $G_W$: the parameter $W$ corresponds to the weights in a convolutional neural network, so $G_W$ can be an arbitrarily complex nonlinear mapping. These weights are learned through back-propagation and stochastic gradient descent so as to minimize a loss function designed to make the distance for positive pairs smaller than the distance for negative pairs by a given margin. Due to the use of neural networks, the approach suffers from local optimality and requires careful tuning of many hyperparameters as well as a significant amount of validation data to avoid overfitting, which leads to a high computational cost. Nevertheless, the authors demonstrate the usefulness of LSMD on face verification tasks.
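To illustrate the pair-based training signal, below is a minimal PyTorch sketch of a margin-based contrastive loss in this spirit; it is the common squared-hinge variant rather than the exact energy-based loss of Chopra et al., and `z1`, `z2` stand for the embeddings produced by the shared network $G_W$.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, y, margin=1.0):
    """Margin-based pair loss: y = 1 for positive (must-link) pairs, 0 for negative.

    Pulls positive pairs together; pushes negative pairs at least
    `margin` apart in the embedding space."""
    d = F.pairwise_distance(z1, z2)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()

# z1 and z2 would be the outputs of the same (siamese) network G_W applied to
# each element of a pair; the loss is minimized by SGD / back-propagation.
```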
NNCA (Salakhutdinov & Hinton)
Nonlinear NCA (Salakhutdinov and Hinton, 2007) is another distance learning approach based on deep learning. NNCA first learns a nonlinear, low-dimensional representation of the data using a deep belief network (stacked Restricted Boltzmann Machines) that is pretrained layer-by-layer in an unsupervised way. In a second step, the parameters of the last layer are fine-tuned by optimizing the NCA objective (Section 3.2). Additional unlabeled data can be used as a regularizer by minimizing their reconstruction error. Although it suffers from the same limitations as LSMD due to its deep structure, NNCA is shown to perform well when enough data is available. For instance, on a digit recognition dataset, NNCA based on a 30-dimensional nonlinear representation significantly outperforms $k$-NN in the original pixel space as well as NCA based on a linear representation of the same dimension.
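For reference, the NCA criterion maximized during fine-tuning can be written directly as a function of the top-layer embeddings; here is a minimal NumPy sketch (notation is ours).

```python
import numpy as np

def nca_objective(Z, y):
    """NCA criterion on embeddings Z (n, d) with labels y (n,):
    expected number of correctly classified points when each point
    stochastically picks a neighbor with softmax probability."""
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)  # pairwise sq. distances
    np.fill_diagonal(D, np.inf)                              # a point never picks itself
    P = np.exp(-D)
    P /= P.sum(axis=1, keepdims=True)                        # p_ij: prob. that i picks j
    return (P * (y[:, None] == y[None, :])).sum()
```

NNCA back-propagates the gradient of this objective through the pretrained network to adjust its weights.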
SVML (Xu et al.)
Xu et al. (2012) observe that learning a Mahalanobis distance with an existing algorithm and plugging it into an RBF kernel does not significantly improve SVM classification performance. They instead propose Support Vector Metric Learning (SVML), an algorithm that alternates between (i) learning the SVM model with respect to the current Mahalanobis distance and (ii) learning a Mahalanobis distance that minimizes a surrogate of the validation error of the current SVM model. Since the latter step is nonconvex anyway (due to the nonconvex loss function), the authors optimize the distance based on the decomposition $M = L^\top L$; there is thus no PSD constraint and the approach can be made low-rank. Frobenius regularization on $L$ may be used to avoid overfitting. The optimization is carried out by gradient descent and is rather efficient, although subject to local minima. Nevertheless, SVML significantly improves over standard SVM results.
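A deliberately naive sketch of the alternating scheme follows, assuming scikit-learn; a crude finite-difference update stands in for the gradient of the smooth surrogate actually used by Xu et al., and all names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def svml_sketch(X_tr, y_tr, X_val, y_val, d_out, n_iter=10, eps=1e-2, lr=0.5, lam=1e-3):
    rng = np.random.default_rng(0)
    L = rng.normal(scale=0.1, size=(d_out, X_tr.shape[1]))  # M = L^T L (low-rank if d_out < d)

    def objective(L):
        # step (i): fit the SVM under the current metric (RBF kernel on the mapped data Lx)
        svm = SVC(kernel="rbf").fit(X_tr @ L.T, y_tr)
        err = np.mean(svm.predict(X_val @ L.T) != y_val)
        return err + lam * np.sum(L ** 2)                   # Frobenius regularization on L

    for _ in range(n_iter):
        # step (ii): move L to decrease the (regularized) validation error;
        # finite differences replace the surrogate gradient of Xu et al.
        base, G = objective(L), np.zeros_like(L)
        for idx in np.ndindex(*L.shape):
            Lp = L.copy()
            Lp[idx] += eps
            G[idx] = (objective(Lp) - base) / eps
        L -= lr * G
    return L
```

This sketch refits the SVM inside every gradient evaluation, so it is only practical for tiny problems; the point is the structure of the alternation, not efficiency.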
GB-LMNN (Kedem et al.)
Kedem et al. (2012)
propose Gradient-Boosted LMNN, a nonlinear method that generalizes the Euclidean distance with a nonlinear transformation $\phi$ as follows:
$$d_\phi(x, x') = \|\phi(x) - \phi(x')\|_2.$$
This nonlinear mapping takes the form of an additive function $\phi = \phi_0 + \alpha \sum_{t=1}^{T} h_t$, where the $h_t$'s are gradient boosted regression trees (Friedman, 2001) of limited depth and $\phi_0$ corresponds to the mapping learned by linear LMNN. They once again use the same objective function as LMNN and are able to carry out the optimization efficiently, building on gradient boosting. On an intuitive level, the tree selected by gradient descent at each iteration divides the space into regions, and instances falling in the same region are translated by the same vector; examples in different regions are thus translated in different directions. Dimensionality reduction can be achieved by learning trees with low-dimensional outputs. In practice, GB-LMNN seems quite robust to overfitting and performs well, often achieving comparable or better performance than LMNN and ITML.
HDML (Norouzi et al.)
Hamming Distance Metric Learning (Norouzi et al., 2012a) proposes to learn mappings from real-valued vectors to binary codes on which the Hamming distance performs well (source code available at https://github.com/norouzi/hdml). Recall that the Hamming distance between two binary codes of the same length is simply the number of bits on which they disagree. A great advantage of working with binary codes is their small storage cost and the fact that exact neighbor search can be done in sublinear time (Norouzi et al., 2012b). The goal here is to optimize a mapping that projects a $d$-dimensional real-valued input $x$ onto a $q$-dimensional binary code. The mapping takes the general form:
$$b(x; w) = \operatorname{sign}\big(f(x; w)\big),$$
where $f(x; w)$ can be any function differentiable in $w$, $\operatorname{sign}(\cdot)$ is the element-wise sign function and $w$ is a real-valued vector representing the parameters to be learned. For instance, $f$ can be a nonlinear transform obtained with a multilayer neural network. Given a relative constraint $(x_i, x_j, x_k)$, meaning that $x_i$ should be more similar to $x_j$ than to $x_k$, denote by $h_i$, $h_j$ and $h_k$ the corresponding binary codes given by $b(\cdot; w)$. The loss is then given by
$$\ell(h_i, h_j, h_k) = \big[\,d_H(h_i, h_j) - d_H(h_i, h_k) + 1\,\big]_+,$$
where $d_H(\cdot, \cdot)$ denotes the Hamming distance and $[\cdot]_+ = \max(0, \cdot)$.
In other words, the loss is zero when the Hamming distance between $h_i$ and $h_j$ is at least one bit smaller than the Hamming distance between $h_i$ and $h_k$. HDML is formalized as a loss minimization problem with $\ell_2$ norm regularization on $w$. This objective function is nonconvex and discontinuous, but the authors propose to optimize a continuous upper bound on the loss which can be computed in $O(q^2)$ time, and is thus efficient as long as the code length $q$ remains small. In practice, the objective is optimized using a stochastic gradient descent approach. Experiments show that relatively short codes obtained by nonlinear mapping are sufficient to achieve few constraint violations, and that a $k$-NN classifier based on these codes can achieve competitive performance with state-of-the-art classifiers. Neyshabur et al. (2013) later showed that using asymmetric codes can lead to shorter encodings while maintaining similar performance.
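A small NumPy sketch of this triplet loss on $\pm 1$-valued codes (helper names are ours):

```python
import numpy as np

def hamming(h1, h2):
    # number of bits on which two binary codes disagree
    return np.sum(h1 != h2, axis=-1)

def hdml_loss(h_i, h_j, h_k):
    # hinge loss: zero iff d_H(h_i, h_j) is at least one bit
    # smaller than d_H(h_i, h_k)
    return np.maximum(0, hamming(h_i, h_j) - hamming(h_i, h_k) + 1)

codes = np.sign(np.random.randn(3, 16))   # b(x; w) = sign(f(x; w)) for 3 points
print(hdml_loss(codes[0], codes[1], codes[2]))
```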
4.3 Local Metric Learning
The methods studied so far learn a global (linear or nonlinear) metric. However, if the data is heterogeneous, a single metric may not capture well the complexity of the task, and it might be beneficial to use multiple local metrics that vary across the space (e.g., one for each class or for each instance). The work of Frome et al. (2007) is one of the first to propose learning multiple local metrics; however, their approach is specific to computer vision, so we do not review it here.
This can often be seen as approximating the geodesic distance defined by a metric tensor (see Ramanan and Baker, 2011, for a review on this matter). It is typically crucial that the local metrics be learned simultaneously in order to make them meaningfully comparable and to alleviate overfitting. Local metric learning has been shown to significantly outperform global methods on some problems, but it typically comes at the expense of higher time and memory requirements. Furthermore, local metrics usually do not give rise to a consistent global metric, although some recent work partially addresses this issue (Zhan et al., 2009; Hauberg et al., 2012).
M-LMNN (Weinberger & Saul)
Multiple Metrics LMNN (Weinberger and Saul, 2008, 2009; source code available at http://www.cse.wustl.edu/~kilian/code/code.html) learns several Mahalanobis distances in different parts of the space. As a preprocessing step, the training data is partitioned into $C$ clusters, obtained either in a supervised way (using class labels) or without supervision (e.g., using $k$-Means). Then, $C$ metrics (one per cluster) are learned in a coupled fashion through a generalization of the LMNN objective, where the distance to a target neighbor or an impostor $x$ is measured under the local metric associated with the cluster to which $x$ belongs. In practice, M-LMNN can yield significant improvements over standard LMNN (especially with supervised clustering), but this comes at the expense of a higher computational cost and important overfitting (since each local metric can be overly specific to its region) unless a large validation set is used (Wang et al., 2012c).
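To convey the flavor of the approach, here is a deliberately simplified sketch that partitions the data with $k$-Means and fits one metric per cluster independently, using NCA as a stand-in learner; the actual M-LMNN instead couples the $C$ metrics through a single joint LMNN objective, which this sketch does not attempt to reproduce.

```python
from sklearn.cluster import KMeans
from sklearn.neighbors import NeighborhoodComponentsAnalysis

def local_metrics_sketch(X, y, n_clusters=3):
    # unsupervised partitioning of the space (M-LMNN also supports
    # supervised partitioning based on class labels)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    metrics = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        # one linear metric per cluster (assumes each cluster contains
        # at least two classes); unlike M-LMNN, these are fit
        # independently rather than in a coupled fashion
        metrics[c] = NeighborhoodComponentsAnalysis().fit(X[mask], y[mask])
    return km, metrics
```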
GLML (Noh et al.)
The work of Noh et al. (2010), Generative Local Metric Learning, aims at leveraging the power of generative models (known to outperform purely discriminative models when the training set is small) in the context of metric learning. They focus on nearest neighbor classification and express the expected error of a 1-NN classifier as the sum of two terms: the asymptotic probability of misclassification and a metric-dependent term representing the bias due to finite sampling. They show that this bias can be minimized locally by learning a Mahalanobis distance at each training point. This is done by solving, for each training instance, an independent semidefinite program that has an analytical solution. Each matrix is further regularized towards a diagonal matrix in order to alleviate overfitting. Since each local metric is computed independently, GLML can be very scalable. Its performance is competitive on some datasets (where the Gaussian assumption used to model the data distribution is reasonable) but it can perform very poorly on more complex problems (Wang et al., 2012c). Note that GLML does not straightforwardly extend to the $k$-NN setting for $k > 1$. Shi et al. (2011) use GLML metrics as base kernels to learn a global kernel in a discriminative manner.
Bk-means (Wu et al.)
Wu et al. (2009, 2012) propose to learn Bregman distances (or Bregman divergences), a family of metrics that do not necessarily satisfy the triangle inequality or symmetry (Bregman, 1967). Given a strictly convex and twice differentiable function $\varphi$, the Bregman distance is defined as:
$$D_\varphi(x, x') = \varphi(x) - \varphi(x') - (x - x')^\top \nabla\varphi(x').$$
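As a quick sanity check of this definition, here is a small NumPy sketch (helper names are ours) verifying that a quadratic $\varphi$ recovers the squared Mahalanobis distance:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi(x) = x^T M x  =>  D_phi(x, y) = (x - y)^T M (x - y)
M = np.diag([1.0, 2.0, 3.0])
phi = lambda x: x @ M @ x
grad_phi = lambda x: 2 * M @ x

x, y = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
d = bregman(phi, grad_phi, x, y)
assert np.isclose(d, (x - y) @ M @ (x - y))
```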
The Bregman distance generalizes many widely-used measures: the Mahalanobis distance is recovered by setting $\varphi(x) = x^\top M x$, and the KL divergence (Kullback and Leibler, 1951) by choosing $\varphi(x) = \sum_i x_i \log x_i$ (here, $x$ is a discrete probability distribution), etc. Wu et al. consider the following symmetrized version:
$$d_\varphi(x, x') = \frac{1}{2}\big(D_\varphi(x, x') + D_\varphi(x', x)\big) = \frac{1}{2}(x - x')^\top \big(\nabla\varphi(x) - \nabla\varphi(x')\big) = \frac{1}{2}(x - x')^\top \nabla^2\varphi(u)\,(x - x'),$$
where $u$ is a point on the line segment between $x$ and $x'$. Therefore, $d_\varphi$ amounts to a Mahalanobis distance parameterized by the Hessian matrix of $\varphi$, which depends on the location of $x$ and $x'$. In this respect, learning $\varphi$ can be seen as learning an infinite number of local Mahalanobis distances. The authors take a nonparametric approach by assuming $\varphi$ to belong to a Reproducing Kernel Hilbert Space associated with a kernel function $K(x, x') = h(x^\top x')$, where $h$ is a strictly convex function (a specific choice of $h$ is used in their experiments). This allows the derivation of a representer theorem: writing $\varphi$ as a kernel expansion over the training instances leads to the following formulation based on classic positive/negative pairs: