Neural Compatibility Modeling with Attentive Knowledge Distillation

04/17/2018
by Xuemeng Song, et al.

Recently, the booming fashion sector and its huge potential benefits have attracted tremendous attention from many research communities. In particular, increasing research effort has been dedicated to complementary clothing matching, as assembling a suitable outfit is a daily challenge for many people, especially those without a strong sense of aesthetics. Thanks to the remarkable success of neural networks in applications such as image classification and speech recognition, researchers can now adopt data-driven learning methods to analyze fashion items. Nevertheless, existing studies overlook the rich and valuable knowledge (rules) accumulated in the fashion domain, especially rules regarding clothing matching. Towards this end, in this work, we shed light on complementary clothing matching by integrating advanced deep neural networks with rich fashion domain knowledge. Considering that such rules can be fuzzy, and that different rules may hold for different samples with different confidence levels, we present a neural compatibility modeling scheme with attentive knowledge distillation, built on the teacher-student network paradigm. Extensive experiments on a real-world dataset show the superiority of our model over several state-of-the-art baselines. Based on these comparisons, we derive certain fashion insights that add value to the study of clothing matching. As a byproduct, we have released the code and the parameters involved to benefit other researchers.
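The abstract only sketches the approach, so as an illustration, one way an attention-weighted teacher-student distillation objective of this kind could be set up is shown in the minimal NumPy sketch below. All names here (`attentive_distillation_loss`, `rule_labels`, `rule_scores`) are hypothetical placeholders, not the authors' released code: a set of fuzzy rules acts as the teacher, per-sample attention weights over the rules reflect their differing confidence, and the student is trained on a blend of ground-truth cross-entropy and distillation toward the attention-weighted teacher distribution.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attentive_distillation_loss(student_logits, rule_labels, rule_scores,
                                true_labels, alpha=0.5, temperature=2.0):
    """Hypothetical attention-weighted knowledge-distillation objective.

    student_logits : (N, C) raw student outputs
    rule_labels    : (R, N, C) soft label distributions from R fuzzy rules
    rule_scores    : (N, R) unnormalized per-sample rule confidences
    true_labels    : (N,) ground-truth class indices
    """
    # Per-sample attention over rules; each row sums to 1, so rules with
    # higher confidence for a given sample dominate its teacher signal.
    attn = softmax(rule_scores)                       # (N, R)
    teacher = np.einsum('nr,rnc->nc', attn, rule_labels)  # (N, C)

    # Distillation term: cross-entropy from the softened student
    # distribution to the attention-weighted teacher distribution.
    p_soft = softmax(student_logits, temperature)
    kd = -np.mean(np.sum(teacher * np.log(p_soft + 1e-12), axis=1))

    # Supervised term: standard cross-entropy on the hard labels.
    p_hard = softmax(student_logits)
    n = np.arange(len(true_labels))
    ce = -np.mean(np.log(p_hard[n, true_labels] + 1e-12))

    # Blend the two objectives; alpha trades supervision vs. distillation.
    return alpha * ce + (1.0 - alpha) * kd
```

In a real model the rule confidences would themselves be learned (e.g., as an attention module over rule and sample embeddings); here they are plain inputs so the weighting mechanism stays visible.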


