Collaborative Recommendation Model Based on Multi-modal Multi-view Attention Network: Movie and literature cases

05/24/2023
by Zheng Hu, et al.

Existing collaborative recommendation models that use multi-modal information emphasize the representation of users' preferences but tend to ignore the representation of users' dislikes. Nevertheless, modelling users' dislikes helps characterize user profiles more comprehensively. Thus, the representation of users' dislikes should be integrated into user modelling when constructing a collaborative recommendation model. In this paper, we propose a novel Collaborative Recommendation Model based on Multi-modal multi-view Attention Network (CRMMAN), in which users are represented from both the preference and dislike views. Specifically, a user's historical interactions are divided into positive and negative interactions, which are used to model the user's preference and dislike views, respectively. Furthermore, the semantic and structural information extracted from the scene is employed to enrich the item representation. We validate CRMMAN with comparative experiments on two benchmark datasets, MovieLens-1M (about one million ratings) and Book-Crossing (about 300,000 ratings). Compared with state-of-the-art knowledge-graph-based and multi-modal recommendation methods, the AUC, NDCG@5 and NDCG@10 are improved (e.g., AUC by 2.08% on average across the two datasets). We also conduct controlled experiments to explore the effects of the multi-modal information and the multi-view mechanism; the results show that both enhance the model's performance.
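To make the two-view idea concrete, the sketch below shows one way a user could be encoded from separate preference and dislike views, each obtained by attention pooling over the embeddings of the user's positive and negative interactions, before scoring a candidate item. This is a minimal PyTorch illustration, not the authors' CRMMAN implementation: the module names, dimensions, fusion layer and scoring function are assumptions, and the paper's multi-modal item content and knowledge-graph features are omitted.

```python
# Minimal sketch (assumed design, not the authors' code): a user is modelled
# from a preference view (positive interactions) and a dislike view (negative
# interactions), each pooled with attention, then fused to score a candidate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPool(nn.Module):
    """Attention-weighted pooling over a set of interacted-item embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, items: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # items: (batch, num_items, dim); mask: (batch, num_items), 1 for real items
        logits = self.score(items).squeeze(-1)              # (batch, num_items)
        logits = logits.masked_fill(mask == 0, -1e9)        # ignore padding slots
        weights = F.softmax(logits, dim=-1)                 # attention weights
        return torch.bmm(weights.unsqueeze(1), items).squeeze(1)  # (batch, dim)


class TwoViewUserEncoder(nn.Module):
    """Represents a user from preference and dislike views and scores a candidate."""

    def __init__(self, dim: int):
        super().__init__()
        self.pos_pool = AttentionPool(dim)    # preference view
        self.neg_pool = AttentionPool(dim)    # dislike view
        self.fuse = nn.Linear(2 * dim, dim)   # assumed fusion of the two views

    def forward(self, pos_items, pos_mask, neg_items, neg_mask, candidate):
        pref = self.pos_pool(pos_items, pos_mask)   # what the user likes
        disl = self.neg_pool(neg_items, neg_mask)   # what the user dislikes
        user = self.fuse(torch.cat([pref, disl], dim=-1))
        # predicted interaction probability for the candidate item
        return torch.sigmoid((user * candidate).sum(-1))


if __name__ == "__main__":
    dim, batch, n_pos, n_neg = 32, 4, 6, 3
    model = TwoViewUserEncoder(dim)
    score = model(
        torch.randn(batch, n_pos, dim), torch.ones(batch, n_pos),
        torch.randn(batch, n_neg, dim), torch.ones(batch, n_neg),
        torch.randn(batch, dim),
    )
    print(score.shape)  # torch.Size([4])
```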

Related research
08/14/2023

MM-GEF: Multi-modal representation meet collaborative filtering

In modern e-commerce, item content features in various modalities offer ...
08/08/2023

Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation

Multi-modal recommendation systems, which integrate diverse types of inf...
08/01/2023

Relation-Aware Distribution Representation Network for Person Clustering with Multiple Modalities

Person clustering with multi-modal clues, including faces, bodies, and v...
07/17/2020

A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation

Multi-modal neural machine translation (NMT) aims to translate source se...
12/16/2022

On Safe and Usable Chatbots for Promoting Voter Participation

Chatbots, or bots for short, are multi-modal collaborative assistants th...
05/31/2018

Collaborative Multi-modal deep learning for the personalized product retrieval in Facebook Marketplace

Facebook Marketplace is quickly gaining momentum among consumers as a fa...
05/19/2022

Detect Professional Malicious User with Metric Learning in Recommender Systems

In e-commerce, online retailers are usually suffering from professional ...
