MMRec: Simplifying Multimodal Recommendation

02/02/2023
by Xin Zhou, et al.

This paper presents MMRec, an open-source toolbox for multimodal recommendation. MMRec simplifies and canonicalizes the process of implementing and comparing multimodal recommendation models. Its objective is to provide a unified and configurable arena that minimizes the effort of implementing and testing such models. It supports multimodal models ranging from traditional matrix factorization to modern graph-based algorithms, all capable of fusing information from multiple modalities simultaneously. Our documentation, examples, and source code are available at <https://github.com/enoche/MMRec>.
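As an illustration of the config-driven workflow the abstract describes, the sketch below shows how one might launch a comparison between a graph-based model and a matrix-factorization baseline in such a toolbox. The `quick_start` entry point, argument names, and configuration keys are assumptions modeled on RecBole-style frameworks, not a verbatim transcription of MMRec's API; consult the repository documentation for the actual interface.

```python
# Hypothetical usage sketch for a config-driven multimodal
# recommendation toolbox such as MMRec. The quick_start helper and
# its signature are assumed, not confirmed from the MMRec source.

from utils.quick_start import quick_start  # assumed entry point inside the repo

# Compare a modern graph-based model against a classic multimodal
# matrix-factorization baseline under one shared configuration.
for model in ["BM3", "VBPR"]:          # illustrative model names
    quick_start(
        model=model,                   # which recommender to instantiate
        dataset="baby",                # an example benchmark dataset name
        config_dict={"gpu_id": 0},     # per-run overrides of YAML defaults
        save_model=True,               # persist the trained checkpoint
    )
```

The design choice implied by the abstract is that model, dataset, and training settings are all declared in configuration rather than code, so swapping algorithms for a fair comparison is a one-line change.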

