Given data pairs labeled as either “similar” or “dissimilar”, distance metric learning (DML) [58, 51, 11] learns a distance measure under which similar examples are placed close to each other while dissimilar ones are separated far apart. The learned distance metrics are important to many downstream tasks, such as retrieval, classification, and clustering. One commonly used distance metric between two examples x and y is the squared Euclidean distance after a linear projection, ||Ax - Ay||^2 [51, 55, 9], which is parameterized by the projection vectors (the rows of the projection matrix A).
Many works [50, 55, 48, 42, 9] have proposed orthogonality-promoting DML to learn distance metrics that are (1) balanced: performing equally well on data instances belonging to frequent and infrequent classes; (2) compact: using a small number of projection vectors to achieve a “good” metric, i.e., one capturing well the relative distances of the data pairs; and (3) generalizable: reducing overfitting to the training data. Regarding balancedness, under many circumstances the frequency of classes, defined as the number of examples belonging to each class, can be highly imbalanced. Classic DML methods are sensitive to this skewness: they perform favorably on frequent classes but less well on infrequent ones, a phenomenon also confirmed in our experiments in Section 7. However, infrequent classes are of crucial importance in many applications and should not be ignored. For example, in a clinical setting, many diseases occur infrequently but are life-threatening. Regarding compactness, the number m of projection vectors entails a tradeoff between performance and computational complexity [16, 55, 42]. On one hand, more projection vectors bring more expressiveness in measuring distance. On the other hand, a larger m incurs higher computational overhead since the number of weight parameters in the projection matrix A grows linearly with m. It is therefore desirable to keep m small without substantially hurting DML performance. Regarding generalization performance, in the case where the sample size is small but A is large, overfitting can easily happen.
To address these three issues, many studies [58, 51, 11, 20, 61, 25, 63] propose to regularize the projection vectors to be close to orthogonal. For balancedness, they argue that, without orthogonality-promoting regularization (OPR), the majority of projection vectors learn latent features for frequent classes since these classes have dominant signals in the dataset; through OPR, the projection vectors uniformly “spread out”, giving both infrequent and frequent classes a fair treatment and thus leading to a more balanced distance metric. For compactness, they claim that “diversified” projection vectors bear less redundancy and are mutually complementary; as a result, a small number of such vectors is sufficient to achieve a “good” distance metric. For generalization performance, they posit that OPR imposes a structured constraint on the function class of DML and hence reduces model complexity.
While these orthogonality-promoting DML methods have shown promising results, they have three problems. First, they involve solving non-convex optimization problems where the global solution is extremely difficult, if not impossible, to obtain. Second, no formal analysis has been conducted regarding why OPR can promote balancedness. Third, while the generalization error (GE) of OPR has been analyzed in prior work, the analysis is incomplete: it first shows that the upper bound of the GE is a function of cosine similarity (CS), then shows that CS and the regularizer are somewhat aligned in shape, but it does not establish a direct relationship between the GE bound and the regularizer.
In this paper, we aim at addressing these problems by making the following contributions:
We relax the non-convex, orthogonality-promoting DML problems into convex problems and develop efficient proximal gradient descent algorithms. The algorithms run only once with a single initialization, and hence are much more efficient than existing non-convex methods.
We perform a theoretical analysis that formally reveals the relationship between OPR and balancedness: stronger OPR leads to more balancedness.
We perform generalization error (GE) analysis which shows that reducing the convex orthogonality-promoting regularizers can reduce the upper bound of GE.
We apply the learned distance metrics to information retrieval on healthcare, text, image, and sensory data. Compared with non-convex baseline methods, our approaches achieve higher computational efficiency and are more capable of improving balancedness, compactness, and generalizability.
2 Related Works
2.1 Distance Metric Learning
Many studies [58, 51, 11, 20, 61, 25, 63] have investigated DML; please refer to [28, 49] for a detailed review. Xing et al. learn a Mahalanobis distance by minimizing the sum of distances over all similar data pairs, subject to the constraint that the sum over all dissimilar pairs is no less than 1. Weinberger et al. propose large-margin metric learning for k-nearest-neighbor classification: for each data example, they first obtain its nearest neighbors under the Euclidean distance; among these neighbors, some share the example's class label and others do not; a projection matrix is then learned so that, in the projected space, each example is closer to its same-class neighbors than to its different-class neighbors by a margin. Davis et al. learn a Mahalanobis distance such that the distance between similar pairs is no more than one threshold and the distance between dissimilar pairs is no less than another threshold. Guillaumin et al. define the probability of the similarity/dissimilarity label conditioned on the Mahalanobis distance via a sigmoid function, where a binary variable equals 1 if the two examples have the same class label; the Mahalanobis matrix is learned by maximizing the conditional likelihood of the training data. Köstinger et al. learn a Mahalanobis distance metric from equivalence constraints based on a likelihood ratio test; the Mahalanobis matrix is computed in one shot, without an iterative optimization procedure. Ying and Li formulate DML as an eigenvalue optimization problem. Zadeh et al. propose a geometric mean metric learning approach based on the Riemannian geometry of positive definite matrices; similarly, its Mahalanobis matrix has a closed-form solution without iterative optimization.
To avoid overfitting in DML, various regularization approaches have been explored. Davis et al. regularize the Mahalanobis matrix to be close to another matrix that encodes prior information, where closeness is measured using the log-determinant divergence. Qi et al. use sparsity-inducing regularization to learn sparse distance metrics for high-dimensional, small-sample problems. Ying et al. use a mixed norm to simultaneously encourage low-rankness and sparsity. The trace norm is leveraged to encourage low-rankness in [37, 32]. Qian et al. apply dropout to DML. Many works [50, 16, 55, 59, 42, 9] study diversity-promoting regularization in DML or hashing; they define regularizers based on the squared Frobenius norm [50, 13, 16, 9] or angles [55, 59] to encourage the projection vectors to be close to orthogonal. Several works [31, 52, 18, 22, 48] impose a strict orthogonality constraint on the projection vectors; as observed in previous works [50, 13] and in our experiments, strict orthogonality hurts performance. Isotropic hashing [24, 15] encourages the variances of different projected dimensions to be equal to achieve balance. Carreira-Perpinán and Raziperchikolaei propose a diversity hashing method which first trains hash functions independently and then introduces diversity among them based on classifier ensembles.
2.2 Orthogonality-Promoting Regularization
Orthogonality-promoting regularization has been studied in other problems as well, including ensemble learning, latent variable modeling, classification, and multitask learning. In ensemble learning, many studies [30, 2, 39, 62] promote orthogonality among the coefficient vectors of base classifiers or regressors, with the aim of improving generalization performance and reducing computational complexity. Recently, several works [65, 3, 10, 56] have studied orthogonality-promoting regularization of latent variable models (LVMs), which encourages the components in LVMs to be mutually orthogonal, for the sake of capturing infrequent patterns and reducing the number of components without sacrificing modeling power. In these works, various orthogonality-promoting regularizers have been proposed, based on the determinantal point process [27, 65] and cosine similarity [62, 3, 56]. In multi-way classification, Malkin and Bilmes propose to use the determinant of a covariance matrix to encourage orthogonality among classifiers. Jalali et al. propose a class of variational Gram functions (VGFs) to promote pairwise orthogonality among vectors. While these VGFs are convex, they can only be applied to non-convex DML formulations; as a result, the overall regularized DML problem is non-convex and is not amenable to convex relaxation.
In the sequel, we review two families of orthogonality-promoting regularizers.
Determinantal Point Process
The determinantal point process (DPP) has been employed as a prior to induce orthogonality in latent variable models [27, 65]. A DPP over a set of vectors A = {a_1, ..., a_m} assigns probability P(A) proportional to det(L), where L is a kernel matrix with L_ij = k(a_i, a_j) for a kernel function k, and det(·) denotes the determinant of a matrix. A configuration of vectors with larger probability is deemed more orthogonal. The underlying intuition is that det(L) represents the (squared) volume of the parallelepiped formed by the vectors in the kernel-induced feature space: if the vectors are closer to being orthogonal, the volume is larger, which results in a larger P(A). The shortcoming of DPP is that it is sensitive to vector scaling: enlarging the magnitudes of the vectors results in a larger volume but does not essentially affect the orthogonality of the vectors.
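As a small numerical illustration of both the intuition and the scaling shortcoming, the sketch below uses a linear kernel, so the DPP score is the determinant of the Gram matrix of the vectors (the function name is ours, not from the cited works):

```python
import numpy as np

# Sketch (linear kernel): the DPP score is det(L) with L_ij = a_i . a_j,
# i.e., the squared volume of the parallelepiped spanned by the rows of A.
# More orthogonal configurations get a higher score, but so do rescaled ones.
def dpp_score(A):
    """Determinant of the linear-kernel Gram matrix of the rows of A."""
    return np.linalg.det(A @ A.T)

orthogonal = np.eye(2)
# Unit-norm but correlated pair of vectors.
correlated = np.array([[1.0, 0.0],
                       [0.9, np.sqrt(1.0 - 0.81)]])

# Orthogonal unit vectors score higher than correlated ones...
assert dpp_score(orthogonal) > dpp_score(correlated)
# ...but simply rescaling inflates the score without changing any angle,
# which is the sensitivity-to-scaling issue noted above.
assert dpp_score(2 * orthogonal) > dpp_score(orthogonal)
```

The last assertion is exactly the shortcoming discussed in the text: doubling every vector multiplies the score even though the configuration is no more orthogonal.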
Pairwise Cosine Similarity
Several works define orthogonality-promoting regularizers based on the pairwise cosine similarity among component vectors: if the cosine similarity scores are close to zero, the components are close to being orthogonal. Given the component vectors, the cosine similarity between each pair of components a_i and a_j is computed as their inner product divided by the product of their norms. These pairwise scores are then aggregated into a single score; the aggregation schemes differ across works, e.g., one scheme defines the aggregated score as the mean of the pairwise scores minus their variance.
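The following sketch computes the pairwise cosine scores and the mean-minus-variance aggregation mentioned above (helper names are ours; other works aggregate the same pairwise scores differently):

```python
import numpy as np

# Sketch of a pairwise-cosine-similarity regularizer: compute the cosine
# similarity of every pair of component vectors (rows of A), then aggregate
# via "mean minus variance" as in one of the schemes described above.
def pairwise_cosine(A):
    """Cosine similarity for every pair of rows of A (each pair once)."""
    unit = A / np.linalg.norm(A, axis=1, keepdims=True)
    sim = unit @ unit.T
    iu = np.triu_indices(len(A), k=1)   # strictly upper triangle
    return sim[iu]

def mean_minus_variance(scores):
    return scores.mean() - scores.var()

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
scores = pairwise_cosine(A)             # [0, 1/sqrt(2), 1/sqrt(2)]
reg = mean_minus_variance(scores)
```

For a perfectly orthogonal set, every pairwise score is zero and the aggregate vanishes.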
Distance Metric Learning
Given data pairs labeled either as “similar” (a set S) or “dissimilar” (a set D), DML [58, 51, 11] aims to learn a distance metric under which similar examples are close to each other and dissimilar ones are separated far apart. There are many ways to define a distance metric; here we present two popular choices. One is based on linear projection [51, 55, 9]. Given two examples x and y, a linear projection matrix A with m rows can be utilized to map them into an m-dimensional latent space. The distance metric is then defined as their squared Euclidean distance in the latent space: ||Ax - Ay||^2. A can be learned by minimizing the sum of ||Ax - Ay||^2 over similar pairs in S plus the sum of hinge losses max(0, t - ||Ax - Ay||^2) over dissimilar pairs in D, which makes the distances between similar examples as small as possible while separating dissimilar examples with a margin t. We refer to this formulation as projection matrix-based DML (PDML). PDML is a non-convex problem where the global optimum is difficult to achieve. Moreover, one needs to manually tune the number of projection vectors, typically via cross-validation, which incurs substantial computational overhead.
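A minimal sketch of this objective is shown below, assuming a margin t and an illustrative tradeoff weight lam (the weight is our own addition for generality, not part of the formulation above):

```python
import numpy as np

# Minimal sketch of the PDML objective: pull similar pairs together, push
# dissimilar pairs at least a margin t apart in the projected space via a
# hinge loss. lam is an assumed tradeoff weight between the two terms.
def pdml_objective(A, similar, dissimilar, t=1.0, lam=1.0):
    def proj_dist(pairs):
        return np.array([np.sum((A @ x - A @ y) ** 2) for x, y in pairs])
    pull = proj_dist(similar).sum()                          # shrink similar pairs
    push = np.maximum(0.0, t - proj_dist(dissimilar)).sum()  # hinge on dissimilar pairs
    return pull + lam * push

S = [(np.zeros(2), np.array([0.1, 0.0]))]   # a close similar pair
D = [(np.zeros(2), np.array([2.0, 0.0]))]   # a dissimilar pair beyond the margin
obj = pdml_objective(np.eye(2), S, D)       # only the pull term contributes
```

A dissimilar pair already farther apart than the margin contributes zero loss, which is the behavior the hinge is meant to provide.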
The other popular choice of distance metric is (x - y)^T M (x - y), obtained from the projection-based metric by replacing A^T A with a positive semidefinite (PSD) matrix M; this is known as the Mahalanobis distance. Correspondingly, the PDML formulation can be transformed into a Mahalanobis distance-based DML (MDML) problem, which is convex, so the global solution is guaranteed to be achievable. It also avoids tuning the number of projection vectors. However, the drawback of this approach is that, in order to satisfy the PSD constraint, one needs to perform an eigen-decomposition of M in each iteration, which incurs O(d^3) complexity for d-dimensional features.
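The PSD projection step can be sketched as follows: eigen-decompose the symmetric iterate and clip negative eigenvalues to zero (this is the standard Euclidean projection onto the PSD cone, and the O(d^3) per-iteration cost discussed above):

```python
import numpy as np

# Sketch of the PSD projection used inside MDML solvers: symmetrize,
# eigen-decompose, and clip negative eigenvalues to zero. The eigh call
# is the O(d^3) per-iteration overhead discussed in the text.
def project_psd(M):
    M = (M + M.T) / 2.0                  # symmetrize against numerical drift
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

M = np.array([[2.0, 0.0],
              [0.0, -3.0]])
P = project_psd(M)                       # negative eigenvalue clipped to 0
```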
To encourage orthogonality between two vectors a_i and a_j, one can make their inner product a_i · a_j close to zero and their norms ||a_i||, ||a_j|| close to one. For a set of vectors {a_i} (the rows of A), near-orthogonality can be achieved by computing the Gram matrix G = A A^T, where G_ij = a_i · a_j, and then encouraging G to be close to an identity matrix I. Off the diagonal, the entries of G and I are a_i · a_j and zero, respectively; on the diagonal, they are ||a_i||^2 and one, respectively. Making G close to I therefore encourages the inner products to be close to zero and the norms close to one, which encourages the vectors to be close to orthogonal.
Bregman matrix divergences (BMDs) can be used to measure the “closeness” between two matrices. Let X and Y denote real symmetric matrices. Given a strictly convex, differentiable function φ, a BMD is defined as D_φ(X, Y) = φ(X) - φ(Y) - tr((∇φ(Y))^T (X - Y)), where tr(·) denotes the trace of a matrix. Different choices of φ lead to different divergences. When φ(X) = ||X||_F^2, the BMD specializes to the squared Frobenius norm (SFN) distance ||X - Y||_F^2. If φ(X) = tr(X log X - X), where log X denotes the matrix logarithm of X, the divergence becomes tr(X log X - X log Y - X + Y), which is referred to as the von Neumann divergence (VND). If φ(X) = -log det X, where det X denotes the determinant of X, we get the log-determinant divergence (LDD): tr(X Y^{-1}) - log det(X Y^{-1}) - d, where d is the dimension of the matrices.
In PDML, to encourage orthogonality among the projection vectors (the row vectors of A), Xie et al. define a family of regularizers Ω(A) which encourage the BMD between the Gram matrix G = A A^T and an identity matrix to be small. Ω(A) can be specialized to different instances based on the choice of φ. Under SFN, Ω(A) becomes ||A A^T - I||_F^2, which is used in [50, 13, 16, 9] to promote orthogonality. Under VND, Ω(A) becomes tr(G log G - G) + m, and under LDD it becomes tr(G) - log det G - m, where m is the number of projection vectors.
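Since G is symmetric PSD, all three regularizers can be evaluated from its eigenvalues alone. The sketch below does exactly that (the eps guard against log(0) is our own numerical safeguard, not part of the definitions):

```python
import numpy as np

# Sketch of the three BMD regularizers on G = A A^T, evaluated through the
# eigenvalues of G: SFN = sum (lam-1)^2, VND = sum(lam log lam - lam) + m,
# LDD = sum lam - sum log lam - m. eps guards log(0) for singular G.
def bmd_regularizers(A, eps=1e-12):
    lam = np.linalg.eigvalsh(A @ A.T)
    m = len(lam)
    sfn = np.sum((lam - 1.0) ** 2)                      # ||G - I||_F^2
    vnd = np.sum(lam * np.log(lam + eps) - lam) + m     # tr(G log G - G) + m
    ldd = np.sum(lam) - np.sum(np.log(lam + eps)) - m   # tr(G) - log det G - m
    return sfn, vnd, ldd

# All three vanish exactly when the rows of A are orthonormal (G = I).
sfn, vnd, ldd = bmd_regularizers(np.eye(3))
```

All three scores are zero for an orthonormal A and strictly positive otherwise, consistent with their role as orthogonality-promoting penalties.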
4 Convex Relaxation
The PDML-BMD problem is non-convex, and the globally optimal solution of A is very difficult to achieve. We seek a convex relaxation and solve the relaxed problem instead. The basic idea is to transform PDML into MDML and approximate the BMD regularizers with convex functions.
4.1 Convex Approximations of the BMD Regularizers
The approximations are based on properties of eigenvalues. Given a full-rank matrix A with m rows and d columns (m < d), A A^T is a full-rank m-by-m matrix with m positive eigenvalues, and A^T A is a rank-deficient d-by-d matrix with d - m zero eigenvalues and m positive eigenvalues that equal those of A A^T. For a general positive definite matrix with eigenvalues λ_1, ..., λ_m, the trace is the sum of the eigenvalues, the log-determinant is the sum of their logarithms, and the squared Frobenius norm is the sum of their squares. Next, we leverage these facts to seek convex relaxations of the BMD regularizers.
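These eigenvalue facts are easy to check numerically; the sketch below verifies them on a random full-row-rank matrix:

```python
import numpy as np

# Numerical check of the eigenvalue facts used above: for full-row-rank A
# (m < d), A A^T and A^T A share their m positive eigenvalues, and the
# trace / determinant / squared Frobenius norm of a PD matrix are the
# sum / product / sum of squares of its eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))                      # m = 3, d = 5

lam_small = np.linalg.eigvalsh(A @ A.T)              # 3 positive eigenvalues
lam_big = np.linalg.eigvalsh(A.T @ A)                # 2 (near-)zeros + the same 3

assert np.allclose(np.sort(lam_big)[2:], np.sort(lam_small))
assert np.allclose(lam_big[:2], 0.0, atol=1e-10)

G = A @ A.T                                          # positive definite
assert np.isclose(np.trace(G), lam_small.sum())
assert np.isclose(np.linalg.det(G), np.prod(lam_small))
assert np.isclose(np.linalg.norm(G, 'fro') ** 2, np.sum(lam_small ** 2))
```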
A convex SFN regularizer
The eigenvalues of A A^T - I are λ_i - 1 (i = 1, ..., m), and those of A^T A - I are λ_i - 1 together with d - m eigenvalues equal to -1. Then ||A^T A - I||_F^2 = ||A A^T - I||_F^2 + (d - m). Therefore, the SFN regularizer equals ||M - I||_F^2 + rank(M) - d, where M = A^T A is a Mahalanobis matrix and rank(M) = m. It is well known that the trace norm of a matrix is a convex envelope of its rank; for a PSD matrix M, the trace norm equals tr(M). We use tr(M) to approximate rank(M) and get ||M - I||_F^2 + tr(M) - d, where the right-hand side is a convex function of M. Dropping the constant, we obtain the convex SFN (CSFN) regularizer defined over M: ||M - I||_F^2 + tr(M).
A convex VND regularizer
Given the eigen-decomposition A A^T = U Λ U^T, where the i-th eigenvalue is λ_i, the matrix logarithm satisfies log(A A^T) = U log(Λ) U^T, where log(Λ) takes the logarithm of each eigenvalue. The VND regularizer can thus be evaluated through the eigenvalues of A A^T. Since A^T A has zero eigenvalues, its matrix logarithm is undefined; we therefore consider the matrix A^T A + εI, where ε > 0 is a small scalar, whose eigenvalues are λ_i + ε together with d - m copies of ε. Performing a similar calculation and certain algebra (see Appendix A), the VND regularizer can be expressed through A^T A + εI up to an error controlled by ε. Replacing A^T A with M, approximating rank(M) with tr(M), and dropping constants, we obtain the convex VND (CVND) regularizer, whose convexity is shown in the appendix.
A convex LDD regularizer
We have log det(A A^T) = Σ_i log λ_i and tr(A A^T) = Σ_i λ_i. Certain algebra shows that the LDD regularizer can likewise be expressed through A^T A + εI. After replacing A^T A with M, approximating rank(M) with tr(M), and discarding constants, we obtain the convex LDD (CLDD) regularizer, whose convexity is proved in the appendix. Note that in [11, 40], an information-theoretic regularizer based on the log-determinant divergence is applied to encourage the Mahalanobis matrix to be close to the identity matrix. That regularizer requires M to be full rank; in contrast, by associating a large weight with the trace norm tr(M), our CLDD regularizer encourages M to be low-rank. Since rank(M) = rank(A), reducing the rank of M reduces the number of projection vectors in A.
We now discuss the errors of the convex approximation, which come from two sources: one is the approximation of M by M + εI, where the error is controlled by ε and can be made arbitrarily small (by setting ε to be very small); the other is the approximation of the matrix rank by the trace norm. Though the error of the second approximation can be large, it has been demonstrated both empirically and theoretically that decreasing the trace norm can effectively reduce the rank. We empirically verify that decreasing the convexified CSFN, CVND, and CLDD regularizers decreases their original non-convex counterparts SFN, VND, and LDD (see Appendix D.3). A rigorous analysis is left for future study.
4.2 DML with a Convex BMD Regularization
Given these convex BMD (CBMD) regularizers, we relax the non-convex PDML-BMD problems into convex MDML-CBMD formulations by replacing A^T A with M and replacing the non-convex BMD regularizers with their convex counterparts:
We use a stochastic proximal subgradient descent algorithm to solve the MDML-CBMD problems. The algorithm iteratively performs the following steps until convergence: (1) randomly sample a mini-batch of data pairs, compute the subgradient of the data-dependent loss (the first and second terms in the objective function) on the mini-batch, and perform a subgradient descent update of M with a small stepsize; and (2) apply the proximal operators associated with the regularizers to M. The gradient of the CVND regularizer involves the matrix logarithm log(M + εI). To compute it, we first perform an eigen-decomposition M + εI = U Λ U^T, then take the log of every eigenvalue in Λ, which gives a new diagonal matrix Λ', and finally compute log(M + εI) as U Λ' U^T. In the CLDD regularizer, the gradient of log det(M + εI) is (M + εI)^{-1}, which can also be computed via eigen-decomposition. Next, we present the proximal operators.
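The eigen-decomposition route to these matrix functions can be sketched as follows (the helper name is ours; it applies any scalar function to the eigenvalues of a symmetric matrix, which covers both the matrix logarithm and the matrix inverse needed by the gradients):

```python
import numpy as np

# Sketch of the eigen-decomposition route to the matrix functions needed by
# the gradients: log(M + eps*I) for CVND and (M + eps*I)^{-1} for CLDD.
# Any scalar function f is applied eigenvalue-wise.
def sym_matfunc(M, f, eps=1e-3):
    vals, vecs = np.linalg.eigh(M + eps * np.eye(len(M)))
    return (vecs * f(vals)) @ vecs.T

M = np.diag([1.0, 4.0])
L = sym_matfunc(M, np.log, eps=0.0)              # matrix logarithm
Inv = sym_matfunc(M, lambda v: 1.0 / v, eps=0.0) # matrix inverse
```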
5.1 Proximal Operators
Given a regularizer, the associated proximal operator is defined as the minimizer over X of (1/2)||X - M||_F^2 plus the (scaled) regularizer evaluated at X, subject to X being PSD. Because the CBMD regularizers depend on X only through its eigenvalues, the minimizer shares its eigenvectors with M. Letting λ_i denote the eigenvalues of X and σ_i those of M, the problem can be equivalently written as a sum of scalar problems, where f is a regularizer-specific scalar function. The problem thus decomposes into d independent subproblems: (P) minimize (1/2)(λ_i - σ_i)^2 + f(λ_i) subject to λ_i ≥ 0, for i = 1, ..., d, which can be solved individually.
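The spectral structure of these proximal operators can be sketched as follows. For illustration we take a quadratic scalar penalty f(λ) = (λ - 1)^2 + λ (the SFN-style case; this specific f is our assumption for the sketch). The VND and LDD cases differ only in the scalar solver applied per eigenvalue:

```python
import numpy as np

# Sketch of the spectral proximal operator: eigen-decompose the iterate,
# solve one scalar problem per eigenvalue, and reassemble. With the assumed
# quadratic penalty f(lam) = (lam - 1)^2 + lam, weighted by c, the scalar
# problem  argmin_{lam >= 0} 0.5*(lam - sig)^2 + c*f(lam)  is a 1-D QP whose
# solution is the clipped root of the first-order condition.
def prox_quadratic(M, c):
    sig, U = np.linalg.eigh((M + M.T) / 2.0)
    lam = np.maximum(0.0, (sig + c) / (1.0 + 2.0 * c))  # clipped stationary point
    return (U * lam) @ U.T

X = prox_quadratic(np.diag([3.0, -1.0]), c=0.5)
```

Because the scalar objective is a convex quadratic, clipping its unconstrained minimizer at zero gives the constrained optimum, and the output is automatically PSD.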
For SFN, f is quadratic, so problem (P) is simply a one-dimensional quadratic program; the optimal λ_i is obtained in closed form by solving the first-order condition and clipping the root at zero.
For VND, taking the derivative of the objective function in problem (P) with respect to λ_i and setting it to zero yields an equation involving both λ_i and log λ_i, whose root can be expressed via the Wright omega function. If this root is negative, the optimal λ_i is 0; if it is positive, the optimum is either this root or 0, and we pick the one yielding the lower objective value.
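The Wright omega step can be sketched as follows. We assume the stationarity condition has the form log(λ) + a·λ = b with a > 0 (the exact a and b come from σ_i, the stepsize, and the regularization weight, which we leave abstract here); substituting u = a·λ turns it into log(u) + u = b + log(a), whose solution is the Wright omega function ω:

```python
import numpy as np
from scipy.special import wrightomega

# Sketch: solve log(lam) + a*lam = b via the Wright omega function, which
# satisfies omega(z) + log(omega(z)) = z. With u = a*lam the equation
# becomes log(u) + u = b + log(a), so u = omega(b + log a) and lam = u / a.
def solve_log_linear(a, b):
    u = np.real(wrightomega(b + np.log(a)))
    return u / a

lam = solve_log_linear(a=2.0, b=1.0)
# The returned root satisfies the original equation.
assert abs(np.log(lam) + 2.0 * lam - 1.0) < 1e-8
```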
For LDD, taking the derivative of the objective in problem (P) with respect to λ_i and setting it to zero yields a quadratic equation. The optimum is achieved either at a positive root of this equation (if any) or at 0, and we pick the one yielding the lower objective value.
In this algorithm, the major computational workload is the eigen-decomposition of d-by-d matrices, with O(d^3) complexity. In our experiments, since d is no more than 1000, this is not a major bottleneck. Besides, these matrices are symmetric, a structure that can be leveraged to speed up eigen-decomposition. In our implementation, we use the MAGMA library (http://icl.cs.utk.edu/magma/), which supports efficient eigen-decomposition of symmetric matrices on GPUs. Note that the unregularized MDML also requires an eigen-decomposition (for the PSD projection), so adding the CBMD regularizers does not substantially increase the computational cost.
6 Theoretical Analysis
In this section, we present theoretical analysis of balancedness and generalization error.
6.1 Analysis of Balancedness
In this section, we analyze how the non-convex BMD regularizers that promote orthogonality affect the balancedness of the distance metrics learned by PDML-BMD. (The analysis of the convex BMD regularizers in MDML-CBMD is left for future work.) The analysis focuses on the learned projection matrix A. We assume there are c classes, where class k has a distribution whose expectation is μ_k, and each data sample in the similar and dissimilar pair sets is drawn from the distribution of one specific class. Further, we assume A A^T has full rank m (the number of projection vectors), and let U Λ U^T denote its eigen-decomposition, with eigenvalues λ_1 ≥ ... ≥ λ_m.
We define an imbalance factor (IF) to measure (im)balancedness. Each class k is characterized by its expectation μ_k, and the Mahalanobis distance between two classes j and k is defined as (μ_j - μ_k)^T A^T A (μ_j - μ_k). We define the IF among all classes as the ratio between the largest and the smallest between-class distance.
The motivation for such a definition is as follows: for two frequent classes, which have more training examples and hence contribute more to learning A, DML tends to make the between-class distance large; for two infrequent classes, which contribute less to learning A (and DML is constrained by similar pairs, which need to have small distances), the between-class distance may end up small. Consequently, if classes are imbalanced, some between-class distances can be large while others are small, resulting in a large IF. The following theorem gives upper bounds on the IF.
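The imbalance factor can be computed directly from class means and the learned matrix; the sketch below instantiates it as the largest-to-smallest ratio of between-class Mahalanobis distances, one natural reading consistent with the motivation above:

```python
import numpy as np

# Sketch of the imbalance factor: ratio of the largest to the smallest
# between-class Mahalanobis distance, computed from class means and the
# learned matrix M = A^T A. A larger IF indicates a less balanced metric.
def imbalance_factor(means, M):
    dists = []
    for j in range(len(means)):
        for k in range(j + 1, len(means)):
            diff = means[j] - means[k]
            dists.append(diff @ M @ diff)
    return max(dists) / min(dists)

means = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 3.0]])
IF = imbalance_factor(means, np.eye(2))   # pairwise distances 1, 9, 10
```

With the identity metric, the three between-class distances are 1, 9, and 10, so the IF is 10; a perfectly balanced metric would drive the ratio toward 1.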
Under conditions specified in Appendix B.1 (in particular, the regularization parameter and the distance margin being sufficiently large, with thresholds that depend on the data), we have the following bounds for the IF; please refer to Appendix B.1 for the detailed definitions and the proof.
For the VND regularizer, the upper bound on the IF is an increasing function of the regularizer's value; this function is defined through the inverse of an auxiliary function that is strictly increasing on one interval and strictly decreasing on another (see Appendix B.1 for the precise definition).
For the LDD regularizer, an analogous bound holds, again increasing in the regularizer's value.
As can be seen, the bounds are increasing functions of the VND and LDD regularizers. Decreasing these regularizers reduces the upper bounds on the imbalance factor, hence leading to more balancedness. For SFN, such a bound cannot be derived.
6.2 Analysis of Generalization Error
In this section, we analyze how the convex BMD regularizers affect the generalization error of the MDML-CBMD problems. Following prior work, we use a distance-based error to measure the quality of a Mahalanobis matrix M. Given n training data pairs, the empirical error is the average loss on these pairs, and the expected error is its expectation over the data distribution. Let M̂ be the matrix learned by minimizing the empirical error. We are interested in how well M̂ performs on unseen data, measured by the generalization error: the difference between the expected and empirical errors. To incorporate the impact of a CBMD regularizer, we define the hypothesis class of M as the set of PSD matrices whose regularizer value is at most an upper bound C. A smaller C entails stronger promotion of orthogonality; C is controlled by the regularization parameter in Eq. (4), and increasing that parameter reduces C. For the different CBMD regularizers, we have the following generalization error bounds.
Then, with probability at least 1 - δ, we have:
For the CVND regularizer,
For the CLDD regularizer,
For the CSFN regularizer,
These generalization error bounds (GEBs) have two major implications. First, the CBMD regularizers effectively control the GEBs: increasing the strength of CBMD regularization (by enlarging the regularization parameter) reduces C, which decreases the GEBs since they are all increasing functions of C. Second, the GEBs converge at rate O(1/sqrt(n)), where n is the number of training data pairs; this rate matches those in [5, 47].
On the three imbalanced datasets (MIMIC, EICU, Reuters), we show the mean AUC (averaged over 5 random train/test splits) on all classes (A-All) and on infrequent classes (A-IF), together with the balance score. On the remaining four balanced datasets, A-All is shown. The AUCs on frequent classes and the standard errors are given in Appendix D.3.
We used 7 datasets in the experiments: two electronic health record datasets, MIMIC (version III) and EICU (version 1.1); two text datasets, Reuters (http://www.daviddlewis.com/resources/testcollections/reuters21578/) and 20-Newsgroups (News, http://qwone.com/~jason/20Newsgroups/); two image datasets, Stanford-Cars (Cars) and Caltech-UCSD-Birds (Birds); and one sensory dataset, 6-Activities (Act). The MIMIC-III dataset contains 58K hospital admissions of 47K patients who stayed in intensive care units (ICUs). Each admission has a primary diagnosis (a disease), which acts as the class label of the admission; there are 2833 unique diseases. We extract 7207-dimensional features from demographics, clinical notes, and lab tests. The EICU dataset contains 92K ICU admissions diagnosed with 2175 unique diseases; 3743-dimensional features are extracted from demographics, lab tests, vital signs, and past medical history. For the Reuters dataset, after removing documents that have more than one label and classes that have fewer than 3 documents, we are left with 5931 documents and 48 classes. Documents in Reuters and News are represented with tf-idf vectors over a vocabulary of size 5000. For the two image datasets, Birds and Cars, we use the VGG16 convolutional neural network trained on ImageNet to extract features, namely the 4096-dimensional outputs of the second fully connected layer. The 6-Activities dataset contains sensory recordings of 30 subjects performing 6 activities (the class labels); the features are 561-dimensional sensory signals. For the first six datasets, the features are normalized using min-max normalization along each dimension and the feature dimension is reduced to 1000 using PCA. Since there is no standard training/test split, we perform five random splits and average the results over the five runs. Dataset statistics are summarized in Table 1. More details of the datasets and feature extraction are deferred to Appendix D.1.
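The preprocessing pipeline (per-dimension min-max normalization followed by PCA) can be sketched as follows; the function name is ours, and PCA is performed here via an SVD of the centered data (k = 1000 in our experiments, small in this toy example):

```python
import numpy as np

# Sketch of the preprocessing described above: min-max normalize each
# feature dimension, then reduce to k dimensions with PCA (via SVD of
# the centered data). Constant dimensions are left unscaled.
def minmax_pca(X, k):
    lo, hi = X.min(axis=0), X.max(axis=0)
    X = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # project onto the top-k principal directions

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
Z = minmax_pca(X, k=3)
assert Z.shape == (50, 3)
```

Since singular values are returned in descending order, the projected dimensions come out sorted by explained variance.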
Two examples are considered similar if they belong to the same class and dissimilar otherwise. The learned distance metrics are applied to retrieval (using each test example to query the rest of the test examples), whose performance is evaluated using the Area Under the precision-recall Curve (AUC); higher is better. Note that the learned distance metrics can also be applied to other tasks such as clustering and classification; due to space limits, we focus on retrieval. We apply the proposed convex regularizers CSFN, CVND, and CLDD to MDML and compare them with two sets of baseline regularizers. The first set promotes orthogonality and is based on the determinant of the covariance matrix (DC), cosine similarity (CS), the determinantal point process (DPP) [27, 65], InCoherence (IC), variational Gram functions (VGF) [64, 21], decorrelation (DeC), mutual angles (MA), the squared Frobenius norm (SFN) [50, 13, 16, 9], the von Neumann divergence (VND), the log-determinant divergence (LDD), and the orthogonality constraint (OC) [31, 48]; all of these are applied to PDML. The second set of regularizers is not designed specifically for promoting orthogonality but is commonly used, including vector-norm regularizers, the trace norm (Tr), the information-theoretic (IT) regularizer, and Dropout (Drop); all of these are applied to MDML. One common way of dealing with class imbalance is over-sampling (OS), which repeatedly draws samples from the empirical distributions of infrequent classes until all classes have the same number of samples; we apply this technique to both PDML and MDML.
In addition, we compare with the vanilla Euclidean distance (EUC) and other distance learning methods, including large-margin nearest neighbor (LMNN) metric learning, information-theoretic metric learning (ITML), logistic discriminant metric learning (LDML), metric learning from equivalence constraints (MLEC), geometric mean metric learning (GMML), and independent Laplacian hashing with diversity (ILHD). The PDML-based methods, except PDML-OC, are solved with stochastic subgradient descent (SSD); PDML-OC is solved using the algorithm proposed in its original work. The MDML-based methods are solved with proximal SSD. The learning rate is set to 0.001 and the mini-batch size to 100 (50 similar pairs and 50 dissimilar pairs). We use 5-fold cross-validation to tune the regularization parameter and the number of projection vectors (for the PDML methods). In CVND and CLDD, ε is set to a small value. The margin is set to 1. In the MDML-based methods, after the Mahalanobis matrix M (of rank r) is learned, we factorize it as M = A^T A, where A has r rows (see Appendix D.2), and then perform retrieval based on A, which is more efficient than retrieval based on M. Each method is implemented on GPUs using the MAGMA library. The experiments are conducted on a GPU cluster with 40 machines.
The training time taken by different methods to reach convergence is shown in Table 7. For the non-convex, PDML-based methods, we report the total time taken by the following computation: tuning the regularization parameter (4 choices) and the number of projection vectors (NPVs, 6 choices) on a two-dimensional grid via 3-fold cross-validation (72 experiments in total); for each of the 72 experiments, the algorithm restarts 5 times, each with a different initialization, and picks the run yielding the lowest objective value, giving 360 runs in total. (Our experiments show that for non-convex methods, multiple restarts are of great necessity: for example, for PDML-VND on MIMIC with 100 projection vectors, the AUC is non-decreasing with the number of restarts, namely 0.651, 0.651, 0.658, 0.667, 0.667.) For the MDML-based methods, there is no need to restart multiple times or to tune the NPVs, so the total number of runs is 12. As can be seen from the table, the proposed convex methods are much faster than the non-convex ones, owing to the greatly reduced number of experimental runs, even though each single run of the convex methods is less efficient than that of the non-convex methods due to the overhead of eigen-decomposition. The unregularized MDML takes the least training time since it has no parameters to tune and runs only once. On average, the time of a single run of MDML-(CSFN, CVND, CLDD) is close to that of unregularized MDML, since an eigen-decomposition is required regardless of the presence of the regularizers.
Next, we verify whether CSFN, CVND, and CLDD are able to learn more balanced distance metrics. On the three datasets with imbalanced classes (MIMIC, EICU, and Reuters), we consider a class “frequent” if it contains more than 1000 examples and “infrequent” otherwise. We measure AUCs on all classes (A-All), infrequent classes (A-IF), and frequent classes (A-F), and define a balance score (BS) based on the discrepancy between A-F and A-IF; a smaller BS indicates more balancedness. As shown in Table 2, MDML-(CSFN, CVND, CLDD) achieve the highest A-All on 6 datasets and the highest A-IF on all 3 imbalanced datasets. In terms of BS, our convex methods outperform all baseline DML methods. These results demonstrate that our methods learn more balanced metrics. By encouraging the projection vectors to be close to orthogonal, our methods reduce redundancy among the vectors; mutually complementary vectors achieve broader coverage of the latent features, including those associated with infrequent classes, and hence improve performance on infrequent classes and balancedness. Thanks to their convexity, our methods attain the global optimum and outperform the non-convex methods, which reach only local, hence possibly sub-optimal, solutions. Comparing (PDML, MDML)-OS with the unregularized PDML/MDML, we see that over-sampling indeed improves balancedness, but the improvement is less significant than that achieved by our methods. In general, the orthogonality-promoting (OP) regularizers outperform the non-OP regularizers, suggesting the effectiveness of promoting orthogonality. The orthogonality constraint (OC) [31, 48] imposes strict orthogonality, which may be too restrictive and hurt performance. ILHD learns binary hash codes, which makes retrieval more efficient, but it achieves lower AUCs due to quantization errors.
MDML-(CSFN,CVND,CLDD) outperform popular DML approaches including LMNN, LDML, MLEC and GMML, demonstrating their competitive standing in the DML literature.
Next, we verify whether the distance metrics learned by MDML-(CSFN, CVND, CLDD) are compact. Table 4 shows the numbers of projection vectors (NPVs) that achieve the AUCs in Table 2. For the MDML-based methods, the NPV equals the rank of the Mahalanobis matrix, since M is factorized as M = A^T A, where A has rank(M) rows. We define a compactness score (CS) as the ratio between A-All (given in Table 2) and the NPV; a higher CS indicates achieving a higher AUC with fewer projection vectors. From Table 4, we can see that on 5 datasets MDML-(CSFN, CVND, CLDD) achieve larger CSs than the baseline methods, demonstrating their better capability in learning compact distance metrics. Similar to the observations in Table 2, CSFN, CVND, and CLDD perform better than the non-convex regularizers, and CVND and CLDD perform better than CSFN. Reducing the NPV improves retrieval efficiency, since the computational complexity grows linearly with this number. Together, these results demonstrate that MDML-(CSFN, CVND, CLDD) outperform the other methods in learning distance metrics that are both compact and balanced.
As can be seen from Table 2, our methods MDML-(CVND, CLDD) achieve the best A-All. Table 10 (Appendix D.3) further shows that MDML-(CVND, CLDD) have the smallest gap between training and test AUC, indicating that our methods are better at reducing overfitting and improving generalization performance.
In this paper, we have addressed three issues of existing orthogonality-promoting DML methods: computational inefficiency, and the lack of theoretical analysis of balancedness and of generalization. To address the computational issue, we perform a convex relaxation of the orthogonality-promoting regularizers and develop a proximal gradient descent algorithm to solve the convex problems. To address the analysis issues, we define an imbalance factor (IF) to measure (im)balancedness and prove that decreasing the Bregman matrix divergence regularizers (which promote orthogonality) reduces an upper bound on the IF, hence leading to more balancedness; we also provide a generalization error (GE) analysis showing that decreasing the convex regularizers reduces a GE upper bound. Experiments on datasets from different domains demonstrate that our methods are computationally more efficient and more capable of learning balanced, compact, and generalizable distance metrics than other approaches.
Appendix A Convex Approximations of BMD Regularizers
Approximation of VND regularizer
Given the eigen-decomposition A A^T = U Λ U^T, according to the property of the matrix logarithm, log(A A^T) = U log(Λ) U^T, where log(Λ) takes the logarithm of each eigenvalue λ_i. The eigenvalues of A^T A are λ_1, ..., λ_m together with d - m zeros, so its matrix logarithm is undefined. We therefore consider the matrix A^T A + εI, where ε > 0 is a small scalar. The eigenvalues of this matrix are λ_i + ε (for i = 1, ..., m) and ε (with multiplicity d - m).