Learning Large-scale Universal User Representation with Sparse Mixture of Experts

07/11/2022
by Caigao Jiang, et al.

Learning user sequence behaviour embeddings is challenging due to complicated feature interactions over time and the high dimensionality of user features. Recently emerging foundation models, e.g., BERT and its variants, have encouraged a large body of researchers to investigate this field. However, unlike natural language processing (NLP) tasks, the parameters of a user behaviour model come mostly from the user embedding layer, which makes most existing works fail to train a universal user embedding at large scale. Furthermore, user representations are learned from multiple downstream tasks, and past research does not address the resulting seesaw phenomenon. In this paper, we propose SUPERMOE, a generic framework to obtain high-quality user representations from multiple tasks. Specifically, the user behaviour sequences are encoded by an MoE transformer, which allows us to increase the model capacity to billions of parameters, or even trillions of parameters. To deal with the seesaw phenomenon when learning across multiple tasks, we design a new loss function with task indicators. We perform extensive offline experiments on public datasets and online experiments on private real-world business scenarios. Our approach achieves the best performance over state-of-the-art models, and the results demonstrate the effectiveness of our framework.
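The abstract names two ingredients: a sparse Mixture-of-Experts transformer encoder for user behaviour sequences, and a multi-task loss that uses task indicators to mitigate the seesaw effect. The sketch below illustrates one plausible reading of both ideas in PyTorch; the module names, sizes, top-k routing, and the per-task masked loss are illustrative assumptions, not the paper's published implementation.

```python
# Minimal sketch (assumed design, not the authors' code):
# (1) a sparse MoE feed-forward block with top-k gating that could replace the
#     dense FFN inside a transformer encoder layer;
# (2) a multi-task loss that uses a per-sample task indicator to route each
#     example to its own task head and averages losses over tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEFFN(nn.Module):
    """Top-k gated mixture of expert feed-forward networks."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); flatten tokens so each is routed independently.
        b, t, d = x.shape
        flat = x.reshape(-1, d)
        scores = self.gate(flat)                         # (tokens, n_experts)
        topk_val, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_val, dim=-1)            # renormalise over chosen experts
        out = torch.zeros_like(flat)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(flat[mask])
        return out.reshape(b, t, d)


def multitask_loss(user_repr: torch.Tensor,
                   task_ids: torch.Tensor,
                   labels: torch.Tensor,
                   heads: nn.ModuleList) -> torch.Tensor:
    """Use task_ids as indicators to select the samples each task head sees.

    user_repr: (batch, d_model) pooled user representations.
    task_ids:  (batch,) integer indicator of each sample's downstream task.
    labels:    (batch,) binary labels for that task.
    """
    losses = []
    for t, head in enumerate(heads):
        mask = task_ids == t
        if mask.any():
            logits = head(user_repr[mask]).squeeze(-1)
            losses.append(F.binary_cross_entropy_with_logits(logits, labels[mask].float()))
    # Averaging over tasks rather than samples keeps small tasks from being
    # drowned out, one simple way to soften the seesaw effect across tasks.
    return torch.stack(losses).mean()
```

Because only k of the experts are evaluated per token, the parameter count can grow with the number of experts while the per-token compute stays roughly constant, which is what makes billion- or trillion-parameter capacity practical in this setting.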


Related research

02/16/2020 - SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models
Sentence embedding is an important research topic in natural language pr...

09/27/2019 - Reweighted Proximal Pruning for Large-Scale Language Representation
Recently, pre-trained language representation flourishes as the mainstay...

07/02/2022 - GUIM – General User and Item Embedding with Mixture of Representation in E-commerce
Our goal is to build general representation (embedding) for each user an...

10/20/2021 - Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling
Universal user representation is an important research topic in industry...

10/05/2021 - MoEfication: Conditional Computation of Transformer Models for Efficient Inference
Transformer-based pre-trained language models can achieve superior perfo...

04/17/2021 - Robust Embeddings Via Distributions
Despite recent monumental advances in the field, many Natural Language P...

10/08/2020 - Detect All Abuse! Toward Universal Abusive Language Detection Models
Online abusive language detection (ALD) has become a societal issue of i...
