DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation

06/22/2022
by Zhu Sun, et al.

Recently, a critical issue has loomed large in the field of recommender systems: the lack of effective benchmarks for rigorous evaluation, which leads to unreproducible results and unfair comparisons. We therefore conduct both theoretical and experimental studies aimed at benchmarking recommendation for rigorous evaluation. On the theoretical side, we systematically summarize and analyze the hyper-factors that affect recommendation performance throughout the whole evaluation chain, via an exhaustive review of 141 papers published at eight top-tier conferences between 2017 and 2020. We then classify these hyper-factors as model-independent or model-dependent, and accordingly define and discuss different modes of rigorous evaluation in depth. On the experimental side, we release the DaisyRec 2.0 library, which integrates these hyper-factors to support rigorous evaluation, and use it to conduct a holistic empirical study of how the different hyper-factors impact recommendation performance. Building on the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and reporting the performance of ten state-of-the-art methods across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, offers potential solutions for rigorous evaluation, and lays a foundation for further investigation.
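Since the abstract stays at a high level, a minimal sketch may help make "model-independent hyper-factors" concrete. The Python below is not DaisyRec's actual API; EvalConfig, MostPopular, rank_items, and evaluate are hypothetical names introduced only for illustration. It shows one such factor set (split ratio, number of sampled test negatives, top-k cutoff, RNG seed) frozen in a shared configuration so that any two models are scored under identical conditions:

```python
# Hedged sketch (NOT DaisyRec's API): pinning model-independent hyper-factors
# in one shared config so every model is evaluated under the same protocol.
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalConfig:
    """Model-independent hyper-factors, held fixed across all compared models."""
    split_ratio: float = 0.8       # proportion of interactions used for training
    num_test_negatives: int = 99   # negatives sampled per held-out positive
    top_k: int = 10                # cutoff for ranking metrics (e.g., HR@k)
    seed: int = 42                 # RNG seed, so every model sees the same candidates

class MostPopular:
    """Toy baseline: rank candidate items by global training popularity."""
    def __init__(self, train_interactions):
        self.counts = {}
        for _user, item in train_interactions:
            self.counts[item] = self.counts.get(item, 0) + 1

    def rank_items(self, user, candidates):
        return sorted(candidates, key=lambda i: self.counts.get(i, 0), reverse=True)

def evaluate(model, test_set, all_items, config):
    """HR@k over seeded candidate sets: each held-out item is ranked against
    the same sampled negatives for every model being compared."""
    rng = random.Random(config.seed)
    hits = 0
    for user, test_item in test_set:
        pool = [i for i in all_items if i != test_item]
        negatives = rng.sample(pool, min(config.num_test_negatives, len(pool)))
        ranked = model.rank_items(user, negatives + [test_item])
        hits += int(test_item in ranked[:config.top_k])
    return hits / len(test_set)

# Toy usage: (user, item) pairs; the config, not the model, fixes the protocol.
train = [(0, "a"), (1, "a"), (1, "b"), (2, "c"), (2, "a")]
test = [(0, "b"), (1, "c")]
cfg = EvalConfig(num_test_negatives=2, top_k=1)
print(f"HR@{cfg.top_k} = {evaluate(MostPopular(train), test, list('abcd'), cfg):.2f}")
```

The point of the sketch is that silently varying any of these settings, such as the candidate-sampling seed, the number of test negatives, or the cutoff k, changes the metric being reported, which is exactly the kind of hidden degree of freedom the paper argues must be standardized for comparisons to be fair.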

Related research

08/29/2022 | Understanding Diversity in Session-Based Recommendation
Current session-based recommender systems (SBRSs) mainly focus on maximi...

03/03/2021 | Elliot: a Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Recommender Systems have shown to be an effective way to alleviate the o...

11/28/2022 | Towards Reliable Item Sampling for Recommendation Evaluation
Since Rendle and Krichene argued that commonly used sampling-based evalu...

10/06/2022 | A Theory of Dynamic Benchmarks
Dynamic benchmarks interweave model fitting and data collection in an at...

09/07/2022 | A Systematical Evaluation for Next-Basket Recommendation Algorithms
Next basket recommender systems (NBRs) aim to recommend a user's next (s...

02/07/2023 | On the Theories Behind Hard Negative Sampling for Recommendation
Negative sampling has been heavily used to train recommender models on l...

01/28/2022 | Hyper-Class Representation of Data
Data representation is often of the natural form with their attribute va...
