Semantic-Guided Feature Distillation for Multimodal Recommendation

08/06/2023 · by Fan Liu, et al.

Multimodal recommendation exploits the rich multimodal information associated with users or items to enhance representation learning for better performance. In these methods, end-to-end feature extractors (e.g., shallow/deep neural networks) are often adopted to tailor the generic multimodal features, extracted from raw data by pre-trained models, for recommendation. However, compact extractors, such as shallow neural networks, may struggle to extract effective information from complex, high-dimensional generic modality features, while DNN-based extractors may suffer from the data sparsity problem in recommendation. To address these problems, we propose a novel model-agnostic approach called Semantic-guided Feature Distillation (SGFD), which employs a teacher-student framework to extract features for multimodal recommendation. The teacher model first extracts rich modality features from the generic modality features by considering both the semantic information of items and the complementary information across modalities. SGFD then utilizes response-based and feature-based distillation losses to effectively transfer the knowledge encoded in the teacher model to the student model. To evaluate the effectiveness of SGFD, we integrate it into three backbone multimodal recommendation models. Extensive experiments on three public real-world datasets demonstrate that SGFD-enhanced models achieve substantial improvements over their counterparts.
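The abstract names two transfer signals (a response-based and a feature-based distillation loss) without spelling them out. The sketch below shows one common way such a pair of losses is combined in PyTorch: a softened KL divergence over predictions plus an MSE match on intermediate features. The class name, the projection layer, and the temperature/alpha weighting are illustrative assumptions for a generic teacher-student setup, not the authors' actual SGFD implementation.

```python
# Minimal sketch of a combined response-based + feature-based
# distillation loss, assuming a generic teacher-student setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistillationLoss(nn.Module):
    """Response-based loss (soft-label KL over predictions) plus
    feature-based loss (matching intermediate modality features)."""

    def __init__(self, student_dim: int, teacher_dim: int,
                 temperature: float = 4.0, alpha: float = 0.5):
        super().__init__()
        self.temperature = temperature  # softens teacher/student logits
        self.alpha = alpha              # balances the two loss terms
        # Project student features into the teacher's feature space so the
        # feature-based loss compares representations of equal dimension.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_logits, teacher_logits,
                student_feat, teacher_feat):
        # Response-based distillation: match softened prediction
        # distributions, scaled by T^2 as in standard soft-label KD.
        t = self.temperature
        kd = F.kl_div(
            F.log_softmax(student_logits / t, dim=-1),
            F.softmax(teacher_logits / t, dim=-1),
            reduction="batchmean",
        ) * (t * t)
        # Feature-based distillation: pull projected student features
        # toward the (frozen) teacher features.
        feat = F.mse_loss(self.proj(student_feat), teacher_feat.detach())
        return self.alpha * kd + (1.0 - self.alpha) * feat


# Usage with dummy tensors: a batch of 8 items, 100 candidates,
# 64-d student features distilled toward 256-d teacher features.
if __name__ == "__main__":
    loss_fn = DistillationLoss(student_dim=64, teacher_dim=256)
    s_logits, t_logits = torch.randn(8, 100), torch.randn(8, 100)
    s_feat, t_feat = torch.randn(8, 64), torch.randn(8, 256)
    print(loss_fn(s_logits, t_logits, s_feat, t_feat).item())
```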


Related research

01/06/2021 · Modality-specific Distillation
Large neural networks are impractical to deploy on mobile devices due to...

03/26/2021 · Multimodal Knowledge Expansion
The popularity of multimodal sensors and the accessibility of the Intern...

09/06/2018 · RDPD: Rich Data Helps Poor Data via Imitation
In many situations, we have both rich- and poor- data environments: in a...

12/21/2021 · Multi-Modality Distillation via Learning the Teacher's Modality-Level Gram Matrix
In the context of multi-modality knowledge distillation research, the ex...

03/02/2023 · Learning From Yourself: A Self-Distillation Method for Fake Speech Detection
In this paper, we propose a novel self-distillation method for fake spee...

10/13/2021 · False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation
Personalized outfit recommendation has recently been in the spotlight wi...

05/12/2023 · Knowledge Soft Integration for Multimodal Recommendation
One of the main challenges in modern recommendation systems is how to ef...
