# Algorithms for metric learning via contrastive embeddings

We study the problem of supervised metric learning under discriminative constraints. Given a universe X and sets S, D ⊂ X² of similar and dissimilar pairs, we seek a mapping f : X → Y into some target metric space M = (Y, ρ), such that similar objects are mapped to points at distance at most u, and dissimilar objects are mapped to points at distance at least ℓ. More generally, the goal is to find a mapping of maximum accuracy, that is, maximizing the fraction of correctly classified pairs. We propose approximation algorithms for several versions of this problem, for the cases of Euclidean and tree metric target spaces. For both of these target spaces, we obtain fully polynomial-time approximation schemes (FPTAS) for the case of perfect information. In the presence of imperfect information we present quasi-polynomial-time approximation schemes (QPTAS). Our algorithms use a combination of tools from metric embeddings and graph partitioning that could be of independent interest.
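To make the objective concrete, here is a minimal sketch of the accuracy measure described above: given an embedding f (here a plain dictionary mapping objects to points), similar pairs must land within distance u and dissimilar pairs at distance at least ℓ, and accuracy is the fraction of pairs whose constraint is satisfied. All names and the toy instance are illustrative, not taken from the paper.

```python
import math

def accuracy(embedding, similar, dissimilar, u, l):
    """Fraction of correctly classified pairs.

    embedding: dict mapping each object to a point (tuple of floats),
               playing the role of the mapping f : X -> Y.
    similar, dissimilar: lists of object pairs (the sets S and D).
    u, l: distance thresholds for similar and dissimilar pairs.
    """
    def dist(a, b):
        # Euclidean distance rho in the target space
        return math.dist(embedding[a], embedding[b])

    correct = sum(dist(a, b) <= u for a, b in similar)
    correct += sum(dist(a, b) >= l for a, b in dissimilar)
    return correct / (len(similar) + len(dissimilar))

# Toy instance: embed four objects on the real line
emb = {"a": (0.0,), "b": (0.5,), "c": (3.0,), "d": (3.4,)}
S = [("a", "b"), ("c", "d")]   # similar pairs: want distance <= u
D = [("a", "c"), ("b", "d")]   # dissimilar pairs: want distance >= l
print(accuracy(emb, S, D, u=1.0, l=2.0))  # 1.0: all four constraints hold
```

The algorithmic problem the paper studies is the converse: the embedding is not given but must be found so as to maximize this accuracy.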
