AAAI2021_MDR
Official Code of AAAI 2021 Paper "Multilevel Distance Regularization for Deep Metric Learning"
We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR). MDR explicitly disturbs the learning procedure by regularizing pairwise distances between embedding vectors into multiple levels, each of which represents a degree of similarity between a pair. In the training stage, the model is trained with both MDR and an existing deep metric learning loss simultaneously; the two losses interfere with each other's objective, which makes the learning process difficult. Moreover, MDR prevents some examples from being ignored or overly influential in the learning process. Together, these effects allow the parameters of the embedding network to settle on a local optimum with better generalization. Without bells and whistles, MDR with a simple Triplet loss achieves state-of-the-art performance on various benchmark datasets: CUB-200-2011, Cars-196, Stanford Online Products, and In-Shop Clothes Retrieval. We perform extensive ablation studies on its behaviors to show the effectiveness of MDR. By easily adopting our MDR, previous approaches can be improved in performance and generalization ability.
Deep Metric Learning (DML) aims to learn an appropriate metric that measures the semantic difference between a pair of images as a distance between embedding vectors. Many research areas such as image retrieval sohn2016improved; yuan2017hard; oh2017deep; duan2018deep; ge2018deep and face recognition NormFace; SphereFace; CosFace; ArcFace are based on DML to seek appropriate metrics among instances. Those studies focus on devising a better loss function for DML. Most previous loss functions sohn2016improved; bromley1994signature; hadsell2006dimensionality; yideep2014; hoffer2015deep; schroff2015facenet use binary supervision that indicates whether a given pair is positive or negative. Their common objective is to minimize the distance between a positive pair and maximize the distance between a negative pair (Figure 1(a)). However, without any constraints, a model trained with such an objective is prone to overfitting on the training set, because positive pairs can be aligned too closely while negative pairs can be aligned too far apart in the embedding space. Therefore, several loss functions employ additional terms to prevent positive pairs from being too close and negative pairs from being too far, e.g., the margin in Triplet loss schroff2015facenet and Contrastive loss hadsell2006dimensionality. Despite these attempts, they can still suffer from overfitting due to the lack of explicit regularization of the distances.
Our insight is that the learning procedure of DML can be enhanced by explicitly regularizing the distances between pairs to disturb the DML loss function from optimizing the embedding network; one easy way to constrain a distance is to pull its value toward a predefined level. Conventional DML loss functions adjust a distance according to its label; explicit distance-based regularization, on the other hand, prevents the distance from deviating from the predefined level. The two interfere with each other's objective, which makes the learning process difficult and the embedding network more robust for generalization. Additionally, we regularize distances with multiple levels with disjoint intervals, not a single level, because the degree of inter-class similarity or intra-class variation can differ across classes or instances.
We propose a novel method called Multi-level Distance Regularization (MDR) that makes conventional DML loss functions hard to converge by holding each distance so that it does not deviate from its belonging level. First, MDR normalizes the pairwise distances among the embedding vectors of a mini-batch with their mean and standard deviation, to obtain an objective degree of similarity between a pair by considering the overall distribution. MDR then defines multiple levels that represent various degrees of similarity for pairwise distances, and the levels and their belonging distances are trained to approach each other (Figure 1(b)). A conventional DML loss function struggles to optimize the model by overcoming the disturbance from the proposed regularization; as a result, the learning process yields a model with better generalization ability. We summarize our contributions:
We introduce MDR, a novel regularization method for DML. The method disturbs the optimization of pairwise distances by preventing them from deviating from their belonging levels, for better generalization.
MDR achieves state-of-the-art performance on various benchmark datasets of DML CUB200; Cars196; Song2016DeepML; InShop. Moreover, our extensive ablation studies show that MDR can be combined with any backbone network and any distance-based loss function to improve the performance of a model.
Loss Function. Improving the loss function is one of the key objectives in recent DML studies. One family of loss functions sohn2016improved; bromley1994signature; schroff2015facenet; Song2016DeepML; wang2019multi; wu2017sampling focuses on optimizing pairwise distances between instances. The common objective of these functions is to minimize the distance between positive pairs and to maximize the distance between negative pairs in the embedding space. Contrastive loss bromley1994signature samples pairs of two instances, whereas Triplet loss schroff2015facenet samples triplets of anchor, positive, and negative instances; both losses then optimize the distances between the sampled instances. Global Loss kumar2016learning minimizes the mean and variance of all pairwise distances between positive examples and maximizes the mean of pairwise distances between all negative examples; it thus helps to optimize examples that are not selected by the example mining of DML. Histogram Loss ustinova2016learning minimizes the probability that a randomly sampled positive pair has a smaller similarity than randomly sampled negative pairs. To extend the number of relations explored at once, N-pair loss sohn2016improved samples a positive and all negative instances for each example in a given mini-batch; similar loss functions Song2016DeepML; wang2019multi also sample a large number of instances to fully explore the pairwise relations in the mini-batch. On the other hand, some loss functions cakir2019deep; revaud2019learning focus on learning to rank according to the similarity between pairs. The performance of loss functions that optimize pairwise distances can change with the sampling method; thus, several studies focus on pair sampling suh2019stochastic; schroff2015facenet; wu2017sampling for stable learning and better accuracy. A recent work wang2020cross even samples pairs across mini-batches to collect a sufficient number of negative examples. Instead of designing a sampling method manually, another work roth2020pads employs reinforcement learning to learn the sampling policy. As a regularizer, MDR can be combined with these loss functions to improve the generalization ability of a model.
Generalization Ability. Another goal of DML is to improve the generalization ability of a given model. An ensemble of multiple heads that share a backbone network opitz_2018_pami; Kim_2018_ECCV; JACOB_2019_ICCV; sanakoyeu2019divide has the key objective of diversifying each head to achieve reliable embedding. Boosting can be used to reweight the importance of instances differently for each head opitz_2018_pami; sanakoyeu2019divide, or a spatial attention module can be used to differentiate the spatial region on which each head focuses Kim_2018_ECCV. HORDE JACOB_2019_ICCV makes each head approximate a different higher-order moment. These methods focus on changing the architecture of a model; our MDR, as a regularizer, instead focuses on making the learning procedure harder to improve generalization ability. Without adding any extra computational cost or changing the architecture of the model, MDR can be easily integrated with these DML methods by simply adding our loss function.
In this section, we introduce a new regularization method called Multi-level Distance Regularization (MDR), which makes the learning procedure difficult by preventing each pairwise distance from deviating from its corresponding level, in order to learn a robust feature representation.
We describe the detailed procedure of MDR to regulate pairwise distances in three steps (Figure 2).
(1) Distance Normalization. This step obtains an objective degree of distance by considering the overall distribution, for stable regularization. An embedding network f maps an image x into an embedding vector of a certain dimensionality D: f(x) ∈ R^D. A distance is defined as the Euclidean distance between two embedding vectors, d_ij = ||f(x_i) − f(x_j)||_2. We normalize the distance as:

d̂_ij = (d_ij − μ) / σ    (1)

where μ is the mean and σ is the standard deviation of the distances over P, the set of all pairs of instances in a mini-batch. To consider the overall dataset more widely, we employ momentum updates:

μ_t = m · μ_{t−1} + (1 − m) · μ,    σ_t = m · σ_{t−1} + (1 − m) · σ    (2)

where μ_t and σ_t are, respectively, the momentum-updated mean and standard deviation at iteration t, and m is the momentum. With the momentum-updated statistics, the normalized distance is rewritten as:

d̂_ij = (d_ij − μ_t) / σ_t    (3)
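As an illustration, the normalization step can be sketched as follows; the class name, the default momentum value, and the epsilon term are our own choices for the sketch, not the authors' implementation:

```python
import numpy as np

class DistanceNormalizer:
    """Sketch of MDR's distance normalization with momentum statistics.

    Keeps momentum-updated running statistics (mu_t, sigma_t) of the
    pairwise distances and returns the normalized distances. The class
    name, momentum value, and epsilon are illustrative choices, not
    taken from the paper's official code.
    """

    def __init__(self, momentum=0.9):
        self.m = momentum   # weight kept on the running statistics
        self.mu = None      # running mean of pairwise distances
        self.sigma = None   # running standard deviation

    def __call__(self, embeddings):
        # Pairwise Euclidean distances d_ij over the mini-batch.
        diff = embeddings[:, None, :] - embeddings[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        i, j = np.triu_indices(len(embeddings), k=1)
        d = dist[i, j]                    # distances of all distinct pairs

        mu, sigma = d.mean(), d.std()
        if self.mu is None:               # first batch: plain statistics
            self.mu, self.sigma = mu, sigma
        else:                             # momentum update of the statistics
            self.mu = self.m * self.mu + (1 - self.m) * mu
            self.sigma = self.m * self.sigma + (1 - self.m) * sigma

        # Normalize with the running statistics.
        return (d - self.mu) / (self.sigma + 1e-12)
```

On the first mini-batch the running statistics equal the batch statistics, so the output has zero mean and unit standard deviation; later batches are normalized by the smoothed statistics instead.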
(2) Level Assignment. MDR designates a level that acts as a regularization goal for each normalized distance. We define a set of levels L = {l_1, …, l_K}, initialized with predefined values; each level is interpreted as a multiplier of the standard deviation of the normalized distances. An assignment function δ(d̂_ij, l) outputs whether the given level is the closest one to the given distance:

δ(d̂_ij, l) = 1 if l = argmin_{l′ ∈ L} |d̂_ij − l′|, and 0 otherwise.    (4)
By adopting the assignment function, MDR selects valid regularization levels for each distance with the consideration of various degrees of similarities.
(3) Regularization. Finally, this step prevents each pairwise distance from deviating from its belonging level. MDR minimizes the difference between each normalized pairwise distance and its assigned level:

L_MDR = (1 / |P|) Σ_{(i,j) ∈ P} Σ_{l ∈ L} δ(d̂_ij, l) · |d̂_ij − l|    (5)
The levels are learnable parameters and are updated to optimally regularize the pairwise distances. Each normalized distance is trained to become closer to its assigned level; the assigned level is likewise trained to become closer to its corresponding distances. As iterations pass, the levels are trained to properly divide the normalized distances into multiple intervals; each level is a representative value of a certain interval of normalized distance. We describe the initial configuration of the levels in Section 4.3.
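A minimal sketch of the assignment-and-regularization computation, with plain arrays standing in for the learnable levels (the function name and any concrete values below are ours, not the authors'):

```python
import numpy as np

def mdr_regularizer(d_norm, levels):
    """Sketch of MDR's level assignment and regularization term.

    `d_norm` holds normalized pairwise distances and `levels` the level
    values; both are plain NumPy arrays here for illustration, whereas
    the paper treats the levels as trainable parameters updated jointly
    with the network.
    """
    # Assignment: each distance picks its nearest level.
    gaps = np.abs(d_norm[:, None] - levels[None, :])  # |d^ - l| per pair/level
    nearest = gaps.argmin(axis=1)                     # index of the closest level
    # Regularization: mean gap between each distance and its assigned level.
    return gaps[np.arange(len(d_norm)), nearest].mean()
```

For example, with hypothetical levels [-1, 0, 1], the distances [-0.9, 0.1, 1.2] would be assigned to -1, 0, and 1 respectively, and the term averages the three residual gaps.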
In conclusion, MDR has two functional effects as a regularizer: (1) the multiple levels of MDR disturb the optimization of the pairwise distances among examples, and (2) the outermost levels of MDR prevent positive pairs from getting too close and negative pairs from getting too far apart. By the former effect, the learning process does not easily suffer from overfitting. By the latter effect, the loss neither diminishes on easy examples nor becomes overly biased toward certain examples such as hard examples. Therefore, MDR stabilizes the learning procedure to achieve better generalization on a test dataset.


Loss Function. The proposed MDR can be applied to any loss function such as Contrastive loss bromley1994signature, Triplet loss schroff2015facenet, and Margin loss wu2017sampling. We mostly adopt Triplet loss as the baseline for our experiments:

L_triplet = (1 / |T|) Σ_{(a,p,n) ∈ T} [ d_ap − d_an + α ]_+    (6)

where T is a set of triplets of an anchor a, a positive p, and a negative n sampled from a mini-batch, and α is a margin. The final loss function is defined as the sum of L_triplet and L_MDR with a multiplier λ that balances the two losses:

L = L_triplet + λ · L_MDR    (7)
L_triplet optimizes the model by minimizing the distances of positive pairs and maximizing the distances of negative pairs. L_MDR regularizes the pairwise distances by constraining them with the multiple levels. The embedding network is trained with these different objectives simultaneously.
Embedding Normalization Trick for MDR. In our learning procedure, L2 Normalization (L2 Norm) is not adopted because it can disturb the proper regularization effect of MDR. However, the lack of L2 Norm can cause difficulty in finding appropriate hyperparameters of the loss function, such as the margin in Triplet loss, because no prior knowledge of the scale of the embedding vectors is given. To overcome this difficulty, we normalize the scale by dividing the embedding vectors by μ_t during the training stage, so that the expected pairwise distance is about one. We adopt this trick for several loss functions such as Contrastive loss hadsell2006dimensionality, Margin loss wu2017sampling, and Triplet loss in our experiments.
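Putting the pieces together, a toy version of the combined objective with the scale-normalization trick might look like the following; the function signature, the pair set, and the default values are illustrative assumptions, not the paper's implementation (which regularizes all pairs of the mini-batch and uses distance-weighted triplet sampling):

```python
import numpy as np

def total_loss(embeddings, triplets, mu_t, levels, margin=0.2, lam=1.0):
    """Toy combined objective: Triplet loss plus the MDR term.

    `triplets` is a list of (anchor, positive, negative) index tuples,
    `mu_t` the running mean pairwise distance used for the scale
    normalization trick, and `lam` the balancing multiplier. All names
    and defaults here are illustrative.
    """
    z = embeddings / mu_t                  # scale trick instead of L2 Norm

    def dist(i, j):
        return np.linalg.norm(z[i] - z[j])

    # Triplet term: hinge over (d_ap - d_an + margin) for each triplet.
    l_triplet = np.mean([max(dist(a, p) - dist(a, n) + margin, 0.0)
                         for a, p, n in triplets])

    # MDR term on the distances of the pairs touched by the triplets
    # (the paper regularizes all pairs of the mini-batch instead).
    pairs = sorted({(a, p) for a, p, _ in triplets} |
                   {(a, n) for a, _, n in triplets})
    d = np.array([dist(i, j) for i, j in pairs])
    d_norm = (d - d.mean()) / (d.std() + 1e-12)
    l_mdr = np.abs(d_norm[:, None] - levels[None, :]).min(axis=1).mean()

    return l_triplet + lam * l_mdr
```

Because the two terms pull the distances toward different targets, minimizing this sum forces the network to satisfy the metric objective without letting any pairwise distance drift far from its assigned level.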
To show the effectiveness of MDR and its behaviors, we perform extensive ablation studies and experiments. We follow the standard evaluation protocol and data splits proposed in Song2016DeepML. For an unbiased evaluation, we conduct 5 independent runs for each experiment and report their mean and standard deviation.
Datasets. We employ four standard datasets of deep metric learning for evaluation: CUB-200-2011 CUB200 (CUB200), Cars-196 Cars196, Stanford Online Products Song2016DeepML (SOP), and In-Shop Clothes Retrieval InShop (InShop). CUB200 has 5,864 images of the first 100 classes for training and 5,924 images of the remaining classes for evaluation. Cars196 has 8,054 images of the first 98 classes for training and 8,131 images of the remaining classes for evaluation. SOP has 59,551 images of 11,318 classes for training and 60,502 images of the remaining classes for evaluation. InShop has 25,882 images of 3,997 classes for training, and the remaining 7,970 classes with 26,830 images are partitioned into two subsets (a query set and a gallery set) for evaluation.
Embedding Network. All the compared methods and our method use the Inception architecture with Batch Normalization (IBN) ioffe2015batch as a backbone network. IBN is pretrained on the ImageNet ILSVRC 2012 dataset deng2009imagenet and then fine-tuned on the target dataset. We attach a fully-connected layer, whose output activation is used as the embedding vector, after the last pooling layer of IBN. For models trained with MDR, L2 Norm is not applied to the embedding vectors because it disturbs the effect of the regularization. For a fair comparison with the conventional implementation of Triplet loss schroff2015facenet that is used as a baseline, we apply L2 Norm to those models.

Learning. We employ the Adam kingma2014adam optimizer with a weight decay of . For CUB200 and Cars196, the learning rate and mini-batch size are set to and 128. For SOP and InShop, they are set to and 256. We mainly apply our method to Triplet loss schroff2015facenet. As a triplet sampling method, we employ distance-weighted sampling wu2017sampling. The margin of Triplet loss is set to 0.2. We summarize the hyperparameters of MDR: the configuration of the levels is initialized to three levels of , and the momentum is set to . λ is set differently for each dataset: for CUB200, for Cars196, and for SOP and InShop. For most of the datasets, a small λ is enough to improve a given model; on CUB200, a stronger regularization is more effective because it is a small dataset with only 5,864 training images, where a model may easily suffer from overfitting. These hyperparameters are not very sensitive to tune, and we explain the effects of each hyperparameter in the ablation studies in Section 4.3.
Image Setting. During training, we follow the standard image augmentation process Song2016DeepML; wang2019multi in the following order: resizing, random cropping, random horizontal flipping, and resizing. For evaluation, images are center-cropped.
We compare MDR with recent state-of-the-art methods (Table 1). All compared methods use embedding vectors of 512 dimensions. Our baseline model is trained with Triplet loss without L2 Norm (Triplet), and we also report the conventional Triplet loss with L2 Norm (Triplet+L2 Norm). The lack of the constraint that L2 Norm places on the embedding space results in poor generalization, and Triplet loss is known to be effective when L2 Norm is applied schroff2015facenet. Nevertheless, the models with MDR outperform the Triplet+L2 Norm models on all the datasets. These results demonstrate the effectiveness of the proposed distance-based regularization.

Experimental Results. MDR improves performance on all the datasets, and the improvements are especially large on the small-sized datasets. For CUB200, MDR improves Recall@1 by 3.7 percentage points over the conventional Triplet+L2 Norm, and by 11.5 percentage points over the Triplet baseline. For Cars196, MDR improves Recall@1 by 8.7 percentage points over Triplet+L2 Norm, and by 12.3 percentage points over Triplet. MDR also improves recall over the baselines on SOP and InShop. Moreover, our method significantly outperforms the other state-of-the-art methods on all recall criteria for all datasets.



[Table 3: Recall on CUB200 for Triplet and Triplet+MDR, with and without L2 Norm at inference.]
We extensively perform ablation studies on the behaviors of the proposed MDR.
Backbone Network. MDR is widely applicable to different backbone networks (Table 2(a)). We apply MDR to IBN ioffe2015batch, ResNet18 (R18), and ResNet50 he2016deep (R50), and achieve significant improvements for all backbone networks. Notably, a lightweight backbone, R18, with MDR even outperforms the baseline models with heavier backbones such as R50 and IBN on both datasets.
Loss Function. MDR is also widely applicable to any distance-based loss function (Table 2(b)). We apply MDR to Contrastive loss hadsell2006dimensionality, Margin loss wu2017sampling, and Triplet loss, and achieve significant improvements for all loss functions.
Level Configuration. Even though the levels are learnable, the number of levels and their initial values should be set properly. We perform experiments on various initial configurations of the levels and validate the importance of their learnability (Table 2(c)). From the experiments, we find that a sufficiently spaced configuration is better than a tightly spaced one, and that a configuration of three levels is sufficient.
Effectiveness in Small Dimensionality. We perform experiments with various dimensionalities of the embedding vector. MDR significantly improves the Recall@1 of the models, especially at small dimensionalities. In this experiment, MDR with only 64 dimensions is similar to or surpasses the performance of other methods with 512 dimensions (Figure 3(a)). This result indicates that MDR constructs a highly efficient embedding space of compact dimensionality. Moreover, the improvements over Triplet+L2 Norm are larger for all dimensionalities.
Prevention of Overfitting as a Regularizer. We investigate the learning curves of three models: Triplet, Triplet+L2 Norm, and Triplet+MDR (Figure 3(b)). There are two crucial observations: (1) on the training set, Triplet+MDR is less overfitted than the other two methods, yet it shows the highest performance on the test set; (2) the recall of Triplet+MDR does not drop until the end of learning, unlike the other methods, which suffer from severe overfitting. These observations indicate that MDR is an effective regularizer for DML.
Equalizing the Two-Norm of Embedding Vectors. We find that the embedding vectors of a model trained with MDR have almost the same two-norm (Figure 4(a) and (b)). This shows that the embedding vectors lie almost on a hypersphere, even though the model is trained without L2 Norm. Accordingly, a model trained with MDR achieves similar performance even if L2 Norm is applied at inference time (Table 3). This observation implies that MDR has a similar effect to L2 Norm at the end of training, even though MDR is a distance-based regularization while L2 Norm is a norm-based one.
Discriminative Representation. To show the effectiveness of our method, we visualize how MDR constructs an embedding space. In the embedding spaces of Triplet and Triplet+L2 Norm, the class centers are often aligned closely to each other (Figure 5(a) and (b)). In the embedding space of Triplet+MDR, however, the class centers are evenly spaced with a large margin (Figure 5(c)). This result indicates that MDR constructs a more discriminative representation than the conventional methods.
Qualitative Analysis of Level Assignment. In the level assignment step, a lower level indicates that a pair is closely aligned in the embedding space, and vice versa. Most positive pairs belong between levels 1 and 2, and most negative pairs belong between levels 2 and 3. However, hard-positive pairs may belong to level 3, while hard-negative pairs may belong to level 1 (Figure 6). Levels are therefore assigned to each pair regardless of the given binary supervision. The learning procedure tries to overcome the disturbance that pulls the distances toward their belonging levels by considering the various degrees of distance; this multi-level disturbance leads to the improvement in generalization ability.
We introduce a new distance-based regularization method that elaborately adjusts pairwise distances into multiple levels for better generalization. We demonstrate the effectiveness of MDR by showing improvements that greatly exceed existing methods and by performing extensive ablation studies of its behaviors. By applying our MDR, many methods can be significantly improved without any extra burden at inference time.
We would like to thank AI R&D team of Kakao Enterprise for the helpful discussion.
In particular, we would like to thank Yunmo Park who designed the visual materials.
Potential Ethical Impact
Due to the gap between a training dataset and real-world data, it is important to build a reliable model with better generalization ability on unseen data, e.g., a test set, for practicality. Our MDR is a regularization method that improves the generalization ability of a deep neural network on the task of deep metric learning. On the positive side, our method can be applied to many practical applications such as image retrieval and item recommendation; these applications serve everyday conveniences, and MDR can make their performance more reliable. We believe that our method has no particular negative aspects, because it is a fundamental method that assists conventional approaches in improving reliability on unseen datasets.