Generalization in Metric Learning: Should the Embedding Layer be the Embedding Layer?

03/08/2018 · by Nam Vo, et al.

Many recent works advancing deep learning tend to focus on large-scale settings, with the goal of more effective training and better fitting. This goal may be less applicable to small- to medium-scale settings. Studying deep metric learning in such a setting, we reason that better generalization could be a major contributing factor to the improvements of previous works, as well as the goal for further improvement. We investigate using other layers of a deep metric learning system (besides the embedding layer) for feature extraction, and analyze how well they perform on training data and generalize to testing data. From this study, we suggest a new regularization practice and demonstrate state-of-the-art performance on three fine-grained image retrieval benchmarks: Cars-196, CUB-200-2011, and Stanford Online Products.
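The core idea of comparing layers can be sketched with a toy example. The snippet below is a minimal illustration, not the paper's actual method: it assumes a hypothetical two-layer network (random stand-in weights; in practice these would come from a trained metric learning model) and extracts L2-normalized features either from the final embedding layer or from the layer just before it, then uses cosine similarity for retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in weights for a tiny network:
# backbone features (128-d) -> pre-embedding layer (64-d) -> embedding (32-d).
W_pre = rng.standard_normal((128, 64))    # pre-embedding ("pool") layer
W_embed = rng.standard_normal((64, 32))   # final embedding layer

def extract(x, layer="embedding"):
    """Return L2-normalized features from the chosen layer."""
    h = np.maximum(x @ W_pre, 0.0)        # intermediate (pre-embedding) features
    feat = h if layer == "pre" else h @ W_embed
    return feat / (np.linalg.norm(feat, axis=1, keepdims=True) + 1e-12)

# Retrieval: rank gallery items by cosine similarity to the query,
# using either the embedding layer or the pre-embedding layer.
query = rng.standard_normal((1, 128))
gallery = rng.standard_normal((10, 128))

for layer in ("embedding", "pre"):
    q, g = extract(query, layer), extract(gallery, layer)
    ranking = np.argsort(-(g @ q.T).ravel())
    print(layer, "top-3:", ranking[:3])
```

Since both feature spaces are L2-normalized, the two layers can be compared on equal footing; the paper's study asks which one generalizes better to unseen test classes.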



Code Repositories

generalization-dml