
Revenge of MLP in Sequential Recommendation

by   Yiheng Jiang, et al.

Sequential recommendation models sequences of historical user-item interaction behaviors (also referred to as tokens) to better infer dynamic preferences. Fueled by improved neural network architectures such as RNNs, CNNs, and Transformers, this field has enjoyed rapid performance gains in recent years. Recent progress on all-MLP models points to an efficient, less computation-intensive method, token-mixing MLP, for learning the transformation patterns among historical behaviors. However, due to the inherent fully-connected design, which allows unrestricted cross-token communication and ignores chronological order, we find that directly applying token-mixing MLP to sequential recommendation leads to subpar performance. In this paper, we present TriMLP, a purely MLP-based sequential recommendation architecture with a novel Triangular Mixer in which the modified MLP endows tokens with ordered interactions. Since cross-token interaction in an MLP is simply matrix multiplication, the Triangular Mixer drops the lower-triangle neurons in the weight matrix and thus blocks connections from future tokens, which prevents information leakage and improves prediction capability under the standard auto-regressive training scheme. To further model long- and short-term preferences at a fine-grained level, the mixer adopts a dual-branch structure built on the modified MLP above, namely global and local mixing, to separately capture long-range sequential dependencies and local patterns. An empirical study on 9 datasets of different scales (containing 50K~20M behaviors) from various benchmarks, including MovieLens, Amazon, and Tenrec, demonstrates that TriMLP attains a promising and stable accuracy/efficiency trade-off: on average, it surpasses several state-of-the-art baselines by 5.32% and reduces inference time cost by 8.44%.
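The core idea, zeroing one triangle of the token-mixing weight matrix so that each position can only read from itself and earlier positions, can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the paper's actual code; the function name, shapes, and the choice of which triangle to keep (here the lower triangle under the convention y_i = Σ_j W[i, j]·x_j) are assumptions.

```python
import numpy as np

def triangular_token_mix(X, W):
    """Causal token-mixing MLP layer (hypothetical sketch).

    X: (seq_len, dim) token embeddings, in chronological order.
    W: (seq_len, seq_len) token-mixing weight matrix.

    Masking W to be lower-triangular zeroes every connection from a
    future token j > i, so output token i depends only on tokens <= i.
    """
    seq_len = X.shape[0]
    causal_W = W * np.tril(np.ones((seq_len, seq_len)))
    return causal_W @ X
```

Blocking the future-to-past connections this way is what makes auto-regressive training sound: perturbing a later token cannot change the mixed representation of any earlier token, so no label information leaks backward through the mixer.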


Learning from History and Present: Next-item Recommendation via Discriminatively Exploiting User Behaviors

In the modern e-commerce, the behaviors of customers contain rich inform...

Token Transformer: Can class token help window-based transformer build better long-range interactions?

Compared with the vanilla transformer, the window-based transformer offe...

Multi-Interactive Attention Network for Fine-grained Feature Learning in CTR Prediction

In the Click-Through Rate (CTR) prediction scenario, user's sequential b...

PoNet: Pooling Network for Efficient Token Mixing in Long Sequences

Transformer-based models have achieved great success in various NLP, vis...

Modeling the Past and Future Contexts for Session-based Recommendation

Long session-based recommender systems have attracted much attention rece...

Disentangling Past-Future Modeling in Sequential Recommendation via Dual Networks

Sequential recommendation (SR) plays an important role in personalized r...