Learning Type-Aware Embeddings for Fashion Compatibility

03/25/2018
by   Mariya I. Vasileva, et al.

Outfits in online fashion data are composed of items of many different types (e.g. top, bottom, shoes, accessories) that share some stylistic relationship with each other. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and demonstrates that doing so leads to such a representation. Jointly learning item similarity and compatibility in an end-to-end fashion enables our model to support a range of novel queries that exploit item type, which prior work has been ill-equipped to handle. For example, one can look for a set of tops that (a) substitute for a particular top in an outfit and (b) vary widely (thereby encouraging browsing). To evaluate the learned representation, we collect a new dataset containing some 68,306 outfits created by users on the Polyvore website. Our approach obtains a 3-5% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset as well as an established smaller dataset. Our extensive qualitative and quantitative evaluation provides strong evidence that the proposed type-respecting embedding strategy outperforms existing methods while also allowing for a variety of useful queries.
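To make the idea of a type-respecting embedding concrete, below is a minimal sketch of one plausible formulation: a shared image embedding that is projected into a subspace specific to each pair of item types before a compatibility (triplet) loss is applied. This is an illustration under stated assumptions, not the authors' released code; the module names, dimensions, and masking scheme here are hypothetical.

```python
# Minimal sketch of a type-conditioned embedding with a compatibility triplet
# loss. Names and architecture details are illustrative assumptions, not the
# paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TypeAwareEmbedding(nn.Module):
    def __init__(self, backbone_dim=512, embed_dim=64, num_types=5):
        super().__init__()
        # Shared projection from backbone image features to a general embedding.
        self.general = nn.Linear(backbone_dim, embed_dim)
        # One learned mask per ordered pair of item types; each maps the general
        # embedding into a subspace where compatibility for that pair is measured.
        self.pair_mask = nn.Parameter(torch.randn(num_types, num_types, embed_dim))

    def forward(self, feats, type_a, type_b):
        # feats:  (batch, backbone_dim) image features
        # type_a: (batch,) type of the item itself
        # type_b: (batch,) type of the item it is being compared against
        g = F.normalize(self.general(feats), dim=-1)
        mask = torch.sigmoid(self.pair_mask[type_a, type_b])  # (batch, embed_dim)
        return F.normalize(g * mask, dim=-1)

def compatibility_triplet_loss(anchor, positive, negative, margin=0.2):
    # A compatible item should lie closer to the anchor than an incompatible
    # item of the same type, by at least `margin`.
    d_pos = (anchor - positive).pow(2).sum(dim=-1)
    d_neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

if __name__ == "__main__":
    model = TypeAwareEmbedding()
    feats = torch.randn(8, 512)
    top_type = torch.zeros(8, dtype=torch.long)        # hypothetical type id for tops
    shoe_type = torch.full((8,), 3, dtype=torch.long)  # hypothetical type id for shoes
    anchor = model(feats, top_type, shoe_type)
    positive = model(torch.randn(8, 512), shoe_type, top_type)
    negative = model(torch.randn(8, 512), shoe_type, top_type)
    print(compatibility_triplet_loss(anchor, positive, negative).item())
```

In this sketch, similarity could be trained in the general embedding space while compatibility is trained in the type-pair subspaces, which is one way to keep the two notions from collapsing into a single metric.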
