MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive Model Selection

01/22/2020 ∙ by Mi Luo et al. ∙ HUAWEI Technologies Co., Ltd. ∙ National University of Singapore ∙ Microsoft

Recommender systems often face heterogeneous datasets containing highly personalized historical data of users, where no single model could give the best recommendation for every user. We observe this ubiquitous phenomenon on both public and private datasets and address the model selection problem in pursuit of optimizing the quality of recommendation for each user. We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems. In this framework, a collection of recommenders is trained with data from all users, on top of which a model selector is trained via meta-learning to select the best single model for each user with the user-specific historical data. We conduct extensive experiments on two public datasets and a real-world production dataset, demonstrating that our proposed framework achieves improvements over single model baselines and sample-level model selector in terms of AUC and LogLoss. In particular, the improvements may lead to huge profit gain when deployed in online recommender systems.


1. Introduction

In recommender systems, deep learning has played an increasingly important role in discovering useful behavior patterns from huge amounts of user data and providing precise, personalized recommendations in various scenarios (Wang et al., 2015; Cheng et al., 2016; He and Chua, 2017; Wang et al., 2019). Data from a single user may be sparse and insufficient to support effective model training. In practice, deep neural networks are therefore trained collaboratively on data from a large number of users, and it is important to distinguish specific users in order to make personalized recommendations. Certain user identification processes are thus often performed in alignment with the model training procedure, such as encoding a unique ID or user history information for each user (Zhou et al., 2018), or fine-tuning the recommender on user-local data before making recommendations (Chen et al., 2018).

Although certain recommendation models achieve better overall performance than others, it is unlikely that a single model performs best for every user (Ekstrand et al., 2015; Ekstrand and Riedl, 2012). In other words, the best performance on different users may be achieved by different recommendation models. We observed this phenomenon on both private production and public datasets. For instance, in an online advertising system, multiple CTR prediction models are deployed simultaneously (Zhou et al., 2018). We found that no single model performs best on all users. Moreover, even in terms of performance averaged over users, no single model is best at all times. This implies that the performance of recommendation models is sensitive to user-specific data. Consequently, user-level model design in deep recommender systems is of both research interest and practical value.

In this work, we address the problem of user-level model selection to improve personalized recommendation quality. Given a collection of deep models, the goal is to select the best model among them for each individual user, or to combine these models to maximize their strengths. We introduce a model selector on top of specific recommendation models to decide which model to use for a user. Considering the fast adaptation ability of the recently revived meta-learning paradigm, we formulate the model selection problem under the meta-learning setting and propose MetaSelector, which trains the model selector and the recommendation models via the meta-learning methodology (Schmidhuber, 1987; Thrun and Pratt, 2012; Andrychowicz et al., 2016; Vinyals et al., 2016; Ravi and Larochelle, 2017; Finn et al., 2017; Huang et al., 2019).

Meta-learning algorithms learn to efficiently solve new tasks by extracting prior information from a number of related tasks. Of particular interest are optimization-based approaches, such as the popular Model-Agnostic Meta-Learning (MAML) algorithm (Finn et al., 2017), which apply to a wide range of models whose parameters are updated by stochastic gradient descent (SGD), with little requirement on the model structure. MAML involves a bi-level meta-learning process. The outer loop operates on the task level, where the algorithm maintains an initialization for the parameters. The objective is to optimize the initialization such that, when applied to a new task, it leads to optimal performance on the test set after one or a few gradient updates on the training set. The inner loop operates on the sample level and is executed within tasks. Receiving the initialization maintained in the outer loop, the algorithm adapts the parameters on the support (training) set and evaluates the model on the query (test) set. The evaluation on the query set returns a loss signal to the outer loop. After meta-training, in the meta-testing or deployment phase, the learned initialization enables fast adaptation on new tasks.
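
To make the bi-level structure concrete, below is a minimal MAML-style sketch on a toy family of linear-regression tasks, assuming one inner gradient step per task; the model, synthetic tasks and hyper-parameters are illustrative and not those used in this paper.

```python
import torch

# Minimal MAML sketch: a linear model, squared loss, one inner step per task.
def loss_fn(w, x, y):
    return ((x @ w - y) ** 2).mean()

w = torch.zeros(5, 1, requires_grad=True)        # meta-initialization
meta_opt = torch.optim.SGD([w], lr=1e-2)         # outer-loop optimizer
inner_lr = 0.1

for episode in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                           # a batch of sampled tasks
        w_task = torch.randn(5, 1)               # toy task-specific target
        x_s, x_q = torch.randn(10, 5), torch.randn(10, 5)
        y_s, y_q = x_s @ w_task, x_q @ w_task

        # Inner loop: adapt on the support set; create_graph keeps the
        # second-order terms so the outer update sees the adaptation path.
        g = torch.autograd.grad(loss_fn(w, x_s, y_s), w, create_graph=True)[0]
        w_adapted = w - inner_lr * g

        # Outer loop: query loss of the adapted parameters, accumulated
        # into w.grad across the task batch.
        loss_fn(w_adapted, x_q, y_q).backward()
    meta_opt.step()
```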

Meta-learning is well-suited for model selection if we regard each task as learning to predict user preferences for selecting models. As shown in Figure 1, our method uses optimization-based meta-learning to construct MetaSelector, which learns to perform model selection from a number of tasks, where a task consists of data from one user. Given a recommendation request as input, the model selector outputs a probability distribution over the recommendation models. In the meta-training phase, an initialization for the model selector is optimized through episodic learning (Finn et al., 2017). In each episode, a batch of tasks is sampled, each with a support set and a query set. On the support set of each task, a soft model selection is made based on the output of the model selector. The parameters of the model selector are updated using the training loss obtained by comparing the final prediction with the ground truth. The adapted model selector is then evaluated on the query set, and the test loss is similarly computed to update the initialization in the outer loop. The recommendation models are updated together in the outer loop, and can optionally be pre-trained before the meta-training process. In the deployment phase, with the learned initialization, MetaSelector adapts to individual users using personalized historical data (support sets), and aggregates the results of the recommendation models for new queries.

Figure 1. The MetaSelector framework.

We experimentally demonstrate the effectiveness of our proposed method on two public datasets and a production dataset. In all experiments, MetaSelector significantly improves over baseline models in terms of AUC and LogLoss, indicating that it can effectively weight towards better models at the user level. We also observe that pre-training the recommendation models is crucial to realizing the full power of MetaSelector.

Contributions. To summarize, our contributions are three-fold. Firstly, we address the problem of model selection for recommender systems, motivated by the observation that different models perform differently across users on public and production datasets. Secondly, we propose MetaSelector, a novel framework that introduces meta-learning to formulate a user-level model selection module in a hybrid recommender system combining two or more recommendation models. The framework can be trained end-to-end and requires no manual definition of meta-features. To the best of our knowledge, this is the first work to study the recommendation model selection problem from the optimization-based meta-learning perspective. Thirdly, we run extensive experiments on both public and private production datasets to provide insight into the level at which to optimize model selection. The results indicate that MetaSelector can improve performance over single-model baselines and a sample-level selector, showing its potential in real-world recommender systems.

2. Related Work

Since we study how to apply meta-learning to model selection in a hybrid recommender system, we first survey relevant work on meta-learning and model selection. Besides, since we initially observed the varying performance of recommendation models in a real-world industrial CTR prediction problem, we also review some classic CTR prediction models.

2.1. Optimization-Based Meta-Learning

In meta-learning, or “learning to learn”, the goal is to learn a model on a collection of tasks such that it can achieve fast adaptation to new tasks (Chen et al., 2019). One research direction is metric-based meta-learning, which aims to learn the similarity between samples within tasks. Representative works include Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017). Another promising direction is optimization-based meta-learning, which has recently demonstrated effectiveness on few-shot classification problems by “learning to fine-tune”. Among the various methods, some focus on learning an optimizer, such as the LSTM-based meta-learner (Ravi and Larochelle, 2017) and Meta Networks with an external memory (Munkhdalai and Yu, 2017). Another research branch aims to learn a good model initialization (Finn et al., 2017; Li et al., 2017; Nichol and Schulman, 2018), such that the model performs well on a new task with limited samples after a small number of gradient updates. In our work, we consider MAML (Finn et al., 2017) and Meta-SGD (Li et al., 2017), which are model- and task-agnostic. These optimization-based meta-learning algorithms promise to extract and propagate transferable representations of prior tasks. As a result, if we regard each task as learning to predict user preferences for selecting recommendation models, each user will not only receive personalized model selection suggestions but also benefit from the choices of other users with similar latent features.

2.2. Model Selection for Recommender Systems

In recommender systems, no single best model gives optimal results for every user, due to the heterogeneous data distributions among users. As a result, recommendation quality varies largely between different users (Ekstrand et al., 2014), and some users may receive unsatisfactory recommendations. One way to solve this problem is to give users the right to choose or switch recommenders, so that explicit feedback can be collected from a subset of users to generate initial states for new users (Ekstrand et al., 2015; Dooms, 2013; Resnick et al., 1994). Another solution is a hybrid recommender system (Burke, 2002), which combines multiple models to form a complete recommender and can thus blend the strengths of different recommendation models. There are two types of methods to hybridize recommenders. One is to make a soft selection, i.e., to compute a linear combination of the individual scoring functions of different recommenders. A well-known example is feature-weighted linear stacking (FWLS) (Sill et al., 2009), which learns the coefficients of model predictions with linear regression. The other line of research makes a hard decision to select the best individual model for the entire dataset (Cunha et al., 2016, 2018), for each user (Ekstrand and Riedl, 2012), or for each sample (Collins et al., 2018). However, most of the works mentioned above are limited to collaborative filtering algorithms and require manually defined meta-features, which are time-consuming to engineer. Besides, despite considerable performance improvements, methods like FWLS mainly focus on sample-level optimization, which lacks interpretability about why some models work well for particular users but not for others. In contrast, our proposed MetaSelector can be trained end-to-end without extra meta-features. To our knowledge, our framework is the first to explore the model selection problem for CTR prediction rather than collaborative filtering. We also provide insight into the level at which to optimize model selection by conducting extensive experiments on sample-level and user-level model selection.

Figure 2. The eCPM of four models over one day in the online A/B test.

2.3. CTR Prediction

Click-through rate (CTR) prediction is an important task in cost-per-click (CPC) advertising systems. Model architectures for CTR prediction have evolved from shallow to deep. As a simple but effective model, logistic regression (LR) has been widely used in the advertising industry (Chapelle et al., 2015; McMahan et al., 2013). To model feature conjunctions, Rendle proposed Factorization Machines (FM), which learn the weight of a feature conjunction by factorizing it into a product of two latent vectors (Rendle, 2010). As a variant of FM, Field-aware Factorization Machines (FFM) have proven effective in several CTR prediction competitions (Juan et al., 2016, 2017). To capture higher-order feature interactions, model architectures based on deep networks have subsequently been developed. Examples include Deep Crossing (Shan et al., 2016), Wide & Deep (Cheng et al., 2016), PNN (Qu et al., 2016), DeepFM (Guo et al., 2017) and DIN (Zhou et al., 2018).

3. Performance Analysis

In this section, we first present our observations about the varying online performance of recommendation models in a real industrial advertising system. We then conduct pilot experiments to quantify this phenomenon on two public datasets.

3.1. Model Performance in Online Test

To compare the performance of different models, we implement four state-of-the-art CTR prediction models, including both shallow and deep models. We then deploy these models in a large-scale advertising system and verify their varying performance through an online A/B test.

Experimental Setting.

Users are split into four groups, each containing at least one million users. Each user group receives recommendations from one of the four models. Our advertising system uses a first-price ranking approach: candidate ads are ranked by bid × pCTR and displayed in descending order, where the bid is offered by the advertiser and the pCTR is generated by our CTR prediction model. The effective cost per mille (eCPM) is used as the evaluation metric:

(1)   eCPM = bid × pCTR × 1000
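
As a purely illustrative example of the ranking rule and metric above (the ad entries and numbers below are made up):

```python
# Toy illustration of first-price ranking (bid * pCTR) and the eCPM metric.
ads = [
    {"ad_id": "a", "bid": 1.2, "pCTR": 0.031},
    {"ad_id": "b", "bid": 0.8, "pCTR": 0.055},
    {"ad_id": "c", "bid": 2.0, "pCTR": 0.012},
]
# Rank candidates by bid * pCTR in descending order, as in the serving system.
ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["pCTR"], reverse=True)
for ad in ranked:
    ecpm = ad["bid"] * ad["pCTR"] * 1000   # expected revenue per 1000 impressions
    print(ad["ad_id"], round(ecpm, 2))
```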

Observations in Online Experiments. Figure 2 presents the trends of the eCPM values of the four models over 24 hours. For commercial confidentiality, the absolute eCPM values are hidden. During the online A/B test, no single model achieves the best performance at all times. For example, Model I and Model III generally perform poorly during the day, yet they achieve the leading performance from 7 a.m. to 8 a.m. and from 5 p.m. to 6 p.m., respectively. We also notice that although Model IV performs best on average, its eCPM is lower than that of other models in particular time periods.

Dataset              LR      FM      FFM     DeepFM
Movielens-1m         21.37   18.49   20.11   40.03
Amazon-Electronics   13.73   13.61   20.08   52.58
Table 1. Proportion of users (%) for which each model performs best.

3.2. Model Performance on Public Datasets

We conducted pilot experiments on the MovieLens (Harper and Konstan, 2015) and Amazon Review (He and McAuley, 2016a) datasets to quantify the varying performance of models over different users. We consider four models: LR, FM (Rendle, 2010), FFM (Juan et al., 2016) and DeepFM (Guo et al., 2017). For each user, we select the best model by comparing LogLoss.

As shown in Table 1, DeepFM generally performs better than the other models: it is the best model for nearly 40% of users in MovieLens and for more than 52% of users in Amazon. Although FM is the least frequently selected model on both datasets, it is still the best model for 18.49% of users in MovieLens and 13.61% of users in Amazon.
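
A sketch of how such per-user statistics can be computed, assuming a table with one row per user and one per-user LogLoss column per model (the numbers below are toy values, not the datasets'):

```python
import pandas as pd

# Toy per-user LogLoss values for three users under the four models.
per_user_logloss = pd.DataFrame({
    "LR":     [0.52, 0.61, 0.49],
    "FM":     [0.55, 0.58, 0.51],
    "FFM":    [0.50, 0.60, 0.47],
    "DeepFM": [0.47, 0.63, 0.45],
})
best_model = per_user_logloss.idxmin(axis=1)                 # best model per user
proportions = best_model.value_counts(normalize=True) * 100  # % of users per model
print(proportions)
```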

4. Methodology

In this section, we elaborate the technical details of our proposed model selection framework MetaSelector. Suppose there is a set U of users, where each user u ∈ U has a dataset D_u available for model training. A data point (x, y) ∈ D_u consists of a feature vector x and a label y. Note that our proposed framework provides a general training protocol for recommendation models, and is independent of the specific model structure and data format.

4.1. The Framework

The framework consists of two major modules: the base models module and the model selection module. Next we describe the details of the workflow.

Base models module. A base model is a parameterized recommendation model, such as LR or DeepFM. A model with parameters θ is denoted by f_θ, such that given a feature vector x, the model outputs f_θ(x) as the prediction for the ground-truth label y. Suppose the base models module contains M models f_{θ_1}, ..., f_{θ_M}, where f_{θ_k} is parameterized by θ_k. Note that the f_{θ_k}'s could have different structures, and hence contain distinct parameters θ_k. In general the module allows different input features for different base models, while in what follows we assume all models take the same input form for ease of exposition.

Model selection module. This module contains a model selector h that operates on top of the base models module. The model selector takes as input the data feature x and the outputs f_{θ_1}(x), ..., f_{θ_M}(x) of the base models, and produces a distribution over the base models. Suppose h is parameterized by φ; the selection result is thus η = h_φ(x, f_{θ_1}(x), ..., f_{θ_M}(x)). In practice, h can be a multilayer perceptron (MLP) that takes x only as input (without the base model outputs) and generates a distribution η over the base models, and the final prediction is the corresponding weighted average ŷ(x) = Σ_{k=1}^{M} η_k f_{θ_k}(x).
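
For concreteness, below is a minimal sketch of such a selector and the weighted-average prediction, assuming an MLP that takes only the raw feature vector as input and base models that map a feature batch to CTR estimates; the class names, layer widths and interfaces are illustrative.

```python
import torch
import torch.nn as nn

class ModelSelector(nn.Module):
    """MLP that maps a feature vector x to a softmax distribution over M base models."""
    def __init__(self, feature_dim: int, num_base_models: int, hidden: int = 200):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_base_models),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # eta: per-sample distribution over the base models, shape (batch, M)
        return torch.softmax(self.mlp(x), dim=-1)

def combined_prediction(x, base_models, selector):
    # Each base model returns a (batch,) tensor of CTR estimates (assumption).
    preds = torch.stack([m(x) for m in base_models], dim=-1)  # (batch, M)
    eta = selector(x)                                         # (batch, M)
    return (eta * preds).sum(dim=-1)                          # weighted-average ŷ(x)
```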

4.2. Meta-training

The key ingredient that differentiates MetaSelector from previous model selection approaches is that we use meta-learning to learn the model selector h_φ, as shown in Algorithm 1. Our algorithm extends MAML to the MetaSelector framework: the original MAML is applied to a single prediction model, whereas in our case MAML is used to jointly learn the model selector and the base models.

Data: Training set D_u for each user u ∈ U
1   Initialize θ_k for f_{θ_k}, k = 1, ..., M, and φ for h;
2   Denote θ = (θ_1, ..., θ_M) and Θ = (θ, φ);
3   (Optional) Pretrain θ using ∪_{u ∈ U} D_u;
4   foreach episode do
5       Sample a set of users U_t from U;
6       foreach user u ∈ U_t do
7           Sample support set S_u and query set Q_u from D_u;
8           foreach (x, y) ∈ S_u do
9               η(x) = h_φ(x);
10              ŷ(x) = Σ_{k=1}^{M} η_k(x) f_{θ_k}(x);
11          end foreach
12          L_{S_u}(Θ) = (1 / |S_u|) Σ_{(x, y) ∈ S_u} ℓ(ŷ(x), y);
13          Θ_u = (θ_u, φ_u) = Θ − α ∇_Θ L_{S_u}(Θ);
14          foreach (x, y) ∈ Q_u do
15              η(x) = h_{φ_u}(x);
16              ŷ(x) = Σ_{k=1}^{M} η_k(x) f_{θ_{u,k}}(x);
17          end foreach
18          L_{Q_u}(Θ_u) = (1 / |Q_u|) Σ_{(x, y) ∈ Q_u} ℓ(ŷ(x), y);
19      end foreach
20      Θ ← Θ − β ∇_Θ Σ_{u ∈ U_t} L_{Q_u}(Θ_u);
21  end foreach
Algorithm 1. Meta-training of MetaSelector.

Episodic Meta-training. The meta-training process proceeds in an episodic manner. In each episode, a batch of users is sampled as tasks from a large training population U (line 5). For each user u, a support set S_u and a query set Q_u are sampled from D_u, which serve as the “training” and “test” sets of the task corresponding to user u, respectively (line 7). We adopt the common practice in the meta-learning literature of ensuring no intersection between S_u and Q_u to improve generalization. After an in-task adaptation procedure is performed for each task (lines 8–18), at the end of the episode the initialization φ for the model selector and θ for the base models are updated according to the loss signal received from in-task adaptation (line 20). This initialization is maintained across episodes and will be adapted to new users at deployment. Next we describe the in-task adaptation procedure.
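
A minimal sketch of the episode construction (Algorithm 1, lines 5–7), assuming user data is held in a dict keyed by user id; the 10-user batch and 75%/25% split mirror the settings in Section 5.3:

```python
import random

def sample_episode(user_data, num_users=10, support_frac=0.75):
    """Sample a batch of users (tasks) and split each user's records
    into disjoint support and query sets."""
    users = random.sample(sorted(user_data), num_users)
    episode = []
    for u in users:
        records = list(user_data[u])
        random.shuffle(records)
        cut = int(support_frac * len(records))
        episode.append((records[:cut], records[cut:]))   # (S_u, Q_u), disjoint
    return episode
```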

In-task Adaptation. Given the currently maintained parameters θ and φ, the model selector first iterates over the support set S_u to generate a per-item distribution η(x) over the base models (line 9), and then produces a final prediction ŷ(x), which is a convex combination of the base model outputs (line 10). The training loss L_{S_u} is computed by averaging over the data points in S_u (line 12), where ℓ is a pre-defined loss function. In this work we focus on CTR prediction problems and use LogLoss as the loss function:

(2)   ℓ(ŷ, y) = −y log ŷ − (1 − y) log(1 − ŷ)

where y ∈ {0, 1} indicates whether the data point is a positive sample. A gradient update step is then performed on the parameters of the base models and the model selector, leading to a new set of parameters θ_u and φ_u adapted to the specific task (line 13). The test loss is then computed on the query set Q_u in the same way as the training loss, using the adapted parameters of the base models and model selector (lines 14–18). Note that by keeping the path of in-task adaptation (from Θ to Θ_u), the test loss can be expressed as a function of θ and φ, which is passed to the outer loop for updating θ and φ using gradient descent methods such as SGD or Adam.
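
The following sketch illustrates one in-task adaptation step, assuming PyTorch ≥ 2.0 (for torch.func.functional_call) and an nn.Module `hybrid` that bundles the base models and the selector and returns the combined prediction; it is a simplified rendering of Algorithm 1, not a production implementation.

```python
import torch
from torch.func import functional_call

def logloss(y_hat, y, eps=1e-7):
    """Per-batch LogLoss as in Eq. (2), averaged over the batch."""
    y_hat = y_hat.clamp(eps, 1 - eps)
    return -(y * y_hat.log() + (1 - y) * (1 - y_hat).log()).mean()

def in_task_adaptation(hybrid, support, query, inner_lr):
    (x_s, y_s), (x_q, y_q) = support, query
    params = dict(hybrid.named_parameters())              # current Θ = (θ, φ)

    # Training loss on the support set (lines 8-12).
    train_loss = logloss(functional_call(hybrid, params, (x_s,)), y_s)

    # One inner gradient step (line 13); create_graph keeps second-order
    # terms so the outer update can differentiate through the adaptation.
    grads = torch.autograd.grad(train_loss, list(params.values()),
                                create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Test loss of the adapted parameters on the query set (lines 14-18).
    return logloss(functional_call(hybrid, adapted, (x_q,)), y_q)

# Outer loop (line 20): sum the query losses over the sampled users, call
# backward(), and take an optimizer step on hybrid.parameters().
```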

Jointly Meta-Training Base Models and Model Selector. We further note that θ and φ, which serve as the initializations for the base models and the model selector respectively, are updated together in the outer loop (line 20), while the adapted parameters are obtained per user in the inner loop (line 13). This step is crucial for MetaSelector to operate at the user level, i.e., to execute user-level model selection with base models and a model selector adapted to specific users. The episodic meta-learning procedure plays an important role in obtaining a learnable initialization that enables fast adaptation to users. The objective of meta-training can be formulated as follows:

(3)   min_Θ Σ_{u ∈ U} L_{Q_u}(Θ − α ∇_Θ L_{S_u}(Θ))

Learning the Inner Learning Rate α. The inner learning rate α, which is often a hyper-parameter in normal model training protocols, can also be learned in meta-learning approaches by treating the test loss as a function of α as well. Li et al. (Li et al., 2017) showed that learning a per-parameter inner learning rate (a vector α of the same length as Θ) achieves consistent improvements over MAML for regression and image classification. Algorithm 1 can be slightly modified accordingly: in line 13, the inner update step becomes

(4)   Θ_u = Θ − α ∘ ∇_Θ L_{S_u}(Θ)

where ∘ denotes the Hadamard product. Treating L_{Q_u} as a function of both Θ and α, the outer update step in line 20 becomes

(5)   (Θ, α) ← (Θ, α) − β ∇_{(Θ, α)} Σ_{u ∈ U_t} L_{Q_u}(Θ_u)

where gradients flow to Θ and α through Θ_u. The objective function can accordingly be written as:

(6)   min_{Θ, α} Σ_{u ∈ U} L_{Q_u}(Θ − α ∘ ∇_Θ L_{S_u}(Θ))

In practice, we find that learning a vector α can significantly boost the performance of MetaSelector on recommendation tasks.
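
A sketch of how the per-parameter learning rate could be represented and applied in the inner step, assuming it is stored as one learnable tensor per meta-parameter tensor; names and storage details are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def make_inner_lr(hybrid, init=1e-3):
    """One learnable tensor of the same shape as each meta-parameter;
    keys are sanitized so they can live in an nn.ParameterDict."""
    return nn.ParameterDict({
        name.replace(".", "_"): nn.Parameter(torch.full_like(p, init))
        for name, p in hybrid.named_parameters()
    })

def meta_sgd_inner_step(params, grads, alpha):
    # Θ_u = Θ − α ∘ ∇_Θ L_{S_u}(Θ): elementwise (Hadamard) product.
    return {name: p - alpha[name.replace(".", "_")] * g
            for (name, p), g in zip(params.items(), grads)}
```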

Meta-testing/Deployment. Meta-testing on new tasks follows the same in-task adaptation procedure as in meta-training (lines 7–17), after which evaluation metrics such as AUC and LogLoss are computed. A separate group of meta-testing users (disjoint from the meta-training users) may be used to assess the generalization of meta-learning to new tasks.

Simplifying MetaSelector. We propose a simplified version of meta-training for MetaSelector, where no in-task adaptation for the base models is required. The base models are pre-trained before meta-training and then fixed, and the model selector is trained episodically. We note that this procedure still falls within the meta-learning paradigm, since φ is updated using user-wise mini-batches, where for each user the distribution η is generated using a support set S_u and evaluated by computing the test loss on a separate query set Q_u. This enables the model selector to learn at the user level and generalize to new users efficiently. At the meta-testing phase, the base models as well as the model selector are fixed, and the training set is simply used by the model selector to generate a distribution over the base models. The simplified MetaSelector may be of particular interest in practical recommender systems where in-task adaptation is restricted due to computation and time costs, such as news recommendation for mobile users using on-device models.
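
A loose sketch of the simplified variant, reusing `logloss`, `base_models`, `selector` and the per-user batches from the earlier sketches, and assuming, purely for illustration, that the per-user distribution is obtained by averaging the selector's outputs over the support set and then applied to the query set:

```python
import torch

# Base models are pre-trained and frozen; only the selector is trained.
for m in base_models:
    m.requires_grad_(False)

opt = torch.optim.Adam(selector.parameters(), lr=1e-3)
for x_s, y_s, x_q, y_q in user_batches:          # one user's support/query data
    opt.zero_grad()
    eta_u = selector(x_s).mean(dim=0)                             # per-user η from S_u, shape (M,)
    preds_q = torch.stack([m(x_q) for m in base_models], dim=-1)  # (batch, M)
    loss = logloss((preds_q * eta_u).sum(dim=-1), y_q)            # test loss on Q_u
    loss.backward()
    opt.step()
```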

5. Experiments

In this section, we evaluate the empirical performance of the proposed method, focusing on CTR prediction tasks where prediction quality plays a very important role and has a direct impact on business revenue. We experiment with two public datasets and a real-world production dataset; the statistics of the selected datasets are summarized in Table 2. We raise and address two major research questions:

  • RQ1: Can model selection help CTR Prediction?

  • RQ2: What benefits could MetaSelector bring to personalized model selection?

Dataset               Users     Items    Samples      Features
Movielens-1m          6,040     3,952    1,000,209    14,025
Amazon-Electronics    192,403   63,001   1,689,188    319,687
Production Dataset    7,684     2,420    3,333,246    11,860
Table 2. Statistics of the selected datasets.

5.1. Datasets

Movielens-1m. Movielens-1m (Harper and Konstan, 2015) contains 1 million movie ratings from 6,040 users, each of whom has at least 20 ratings. We regard 5-star and 4-star ratings as positive feedback and label them with 1, and label the rest with 0. We select the following features: user_id, age, gender, occupation, user_history_genre, user_history_movie, movie_id, movie_genre, day of week and season.
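
For illustration, the labeling rule above can be applied as follows, assuming the standard ml-1m ratings file format (UserID::MovieID::Rating::Timestamp):

```python
import pandas as pd

ratings = pd.read_csv("ml-1m/ratings.dat", sep="::", engine="python",
                      names=["user_id", "movie_id", "rating", "timestamp"])
ratings["label"] = (ratings["rating"] >= 4).astype(int)   # 4/5 stars -> positive
```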

Amazon-Electronics. The Amazon Review dataset (He and McAuley, 2016a) contains user reviews and metadata from Amazon and has been widely used for product recommendation. We select the Amazon-Electronics subset from the collection and shape it into a binary classification problem as with Movielens-1m. Following (He and McAuley, 2016b), we use the 5-core setting to retain users with at least 5 ratings. The selected features include user_id, item_id, item_category, season, user_history_item (the 5 most recently rated products), and user_history_categories.

Production Dataset. To demonstrate the effectiveness of our proposed methods on a real-world application with a natural data distribution over users, we also evaluate our methods on a large production dataset from an industrial recommendation task. The goal is to predict the probability that a user will click on the recommended mobile services based on their historical behavior. In this dataset, each user has at least 203 history records.

5.2. Baselines

We compare the proposed methods with two kinds of competitors: single models and hybrid recommenders with model selectors.

Model                           Movielens-1m                  Amazon-Electronics            Production
                                AUC      LogLoss   RelaImpr   AUC      LogLoss   RelaImpr   AUC      LogLoss   RelaImpr
LR                              0.7914   0.55112   -1.45%     0.6981   0.46374   -6.29%     0.7813   0.54011   -29.83%
FM                              0.7928   0.54917   -0.98%     0.6953   0.46242   -7.62%     0.8821   0.42618   -4.69%
FFM                             0.7936   0.54826   -0.71%     0.7114   0.45216   0.00%      0.8850   0.42469   -3.97%
DeepFM                          0.7957   0.54672   0.00%      0.7101   0.45696   -0.61%     0.9009   0.39215   0.00%
Perfect Sample-level Selector   0.9008   0.41079   35.54%     0.8411   0.37088   61.35%     0.9710   0.26043   17.49%
Perfect User-level Selector     0.8187   0.51829   7.78%      0.8135   0.38999   48.30%     0.9051   0.37835   1.05%
Sample-level Selector           0.7963   0.54482   0.20%      0.7121   0.45152   0.33%      0.9011   0.39137   0.05%
User-level Selector             0.7999   0.54058   1.42%      0.7124   0.45117   0.47%      0.9013   0.39109   0.10%
MetaSelector-Simplified         0.8036   0.53550   2.67%      0.7134   0.45044   0.95%      0.9022   0.39095   0.32%
MetaSelector                    0.8047   0.53531   3.04%      0.7141   0.44996   1.28%      0.9023   0.39036   0.35%
Table 3. AUC and LogLoss results. RelaImpr is computed relative to the best single model on each dataset.

Single Models. We consider three types of model architectures: linear (LR), low-rank (FM (Rendle, 2010) and FFM (Juan et al., 2016)) and deep (DeepFM (Guo et al., 2017)). The latent dimension of FM and FFM is set to 10. The numbers of fields of FFM for Movielens, Amazon and Production are 22, 18 and 8, respectively. For DeepFM, the dropout setting is 0.9, the network structures for the Movielens, Amazon and Production datasets are 256-256-256, 400-400-400 and 400-400-400, respectively, and we use ReLU as the activation function.

Sample-level Selector and User-level Selector. These two methods serve as model selection competitors; they predict the model probability distribution for each sample and for each user, respectively. 80% of each user's local data is used for training and the rest for testing; the local data of all users is then pooled to form the overall training and testing sets. During training, 75% of the training data is first used to train the four CTR prediction models in a mini-batch fashion (Li et al., 2014) with a batch size of 1000. The pretrained recommenders then predict the CTR values and LogLoss for the remaining training data. For the two baselines, we assign sample-level and user-level labels from 0 to 3 by comparing LogLoss, respectively. As additional meta-features for training a 400-400-400 MLP classifier, we use the CTR prediction values of the four recommenders. At test time, the final prediction for each sample or each user is the weighted average of the individual models' predictions.

5.3. Settings and Evaluation Metrics

For MetaSelector, the division of user local data is the same as for the sample-level MLP selector. During the meta-training process, the training data of each user is further divided into a 75% support set and a 25% query set. During the meta-testing phase, the model selector and base models are first fine-tuned before being evaluated on the testing data. The performance metrics used in our experiments are AUC (area under the ROC curve), LogLoss and RelaImpr. RelaImpr is calculated as follows:

(7)   RelaImpr = ((AUC(measured model) − 0.5) / (AUC(base model) − 0.5) − 1) × 100%
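
As a sanity check of this definition against Table 3 (Movielens column), comparing MetaSelector's AUC of 0.8047 with the best single model DeepFM at 0.7957:

```python
def rela_impr(auc, auc_base):
    # RelaImpr as in Eq. (7), returned as a fraction.
    return (auc - 0.5) / (auc_base - 0.5) - 1

print(f"{rela_impr(0.8047, 0.7957):.2%}")   # -> 3.04%, matching Table 3
```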

For pre-training the CTR models, we use the FTRL optimizer (McMahan and Streeter, 2010) for LR and the Adam optimizer (Kingma and Ba, 2014) for FM, FFM and DeepFM, with a mini-batch size of 1000. For MetaSelector, we use Meta-SGD (Li et al., 2017) to adaptively learn the inner learning rate α. The initial value of the inner learning rate for the Movielens, Amazon and Production datasets is 0.001, 0.0001 and 0.001, respectively. The outer learning rate is set to a fraction of α. In each episode of meta-training, the number of sampled users is 10. We use a 200-200-200 MLP as the model selector.

5.4. Performance of Model Selection

RQ1: Overall Performance Comparison. To investigate RQ1, we study the performance of the baselines and MetaSelector on three datasets; the results are summarized in Table 3. To explore the potential and the limits of model selection approaches, we also compute upper bounds via two perfect model selectors: (1) a perfect sample-level selector which chooses the best model for each sample; (2) a perfect user-level selector which chooses the best model for each user. First, comparing the single-model baselines with the hybrid recommenders with model selection, we see that all model selection methods achieve a considerable improvement in terms of AUC and LogLoss. This result is highly encouraging and indicates the effectiveness of model selection. Second, comparing sample-level with user-level selectors, we find that the perfect sample-level selector would achieve greater improvements than the perfect user-level selector. However, the last four rows of Table 3 show the performance of the actual (learned) selectors, where the user-level selectors achieve higher AUC and lower LogLoss than the sample-level selector. This implies that the differences between samples may be too subtle for a selector to fit well. In contrast, the latent characteristics of different users vary widely, which makes MetaSelector work well. Finally, comparing MetaSelector with MetaSelector-Simplified, we find that the performance of the simplified version drops slightly. This supports our argument in Section 4 that in-task adaptation makes model selection more user-specific.

RQ2: Performance Distribution Analysis. Beyond the overall improvement, it is also worth studying RQ2: in what ways does MetaSelector help model selection? To this end, we further investigate the test loss distribution over all users on the MovieLens-1m dataset. Figure 3 shows the kernel density estimation for MetaSelector and for DeepFM, a strong single-model baseline. We observe that MetaSelector not only leads to a lower mean LogLoss but also achieves a more concentrated loss distribution with lower variance. This shows that MetaSelector encourages a fairer loss distribution across users and is powerful in modeling heterogeneous users. These observations verify the effectiveness of our proposed methods in terms of personalized model selection.

Figure 3. Kernel density estimation of per-user test LogLoss on Movielens-1m.

6. Conclusions

In this work, we addressed the problem of model selection for recommender systems, motivated by the observation that different models perform differently across users on public and private datasets. We initiated the study of user-level model selection for recommendation from the meta-learning perspective, and proposed MetaSelector, a new framework that formulates a user-level model selection module. We also ran extensive experiments on both public and private production datasets, showing that MetaSelector can improve performance over single-model baselines and a sample-level selector. This demonstrates the potential of MetaSelector in real-world recommender systems.

References

  • M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas (2016) Learning to learn by gradient descent by gradient descent. In NIPS, Cited by: §1.
  • R. Burke (2002) Hybrid recommender systems: survey and experiments. User Modeling and User-adapted Interaction 12 (4), pp. 331–370. Cited by: §2.2.
  • O. Chapelle, E. Manavoglu, and R. Rosales (2015) Simple and scalable response prediction for display advertising. ACM Transactions on Intelligent Systems and Technology (TIST) 5 (4), pp. 61. Cited by: §2.3.
  • F. Chen, Z. Dong, Z. Li, and X. He (2018) Federated meta-learning for recommendation. arXiv preprint. Cited by: §1.
  • W. Chen, Y. Liu, Z. Kira, Y. F. Wang, and J. Huang (2019) A closer look at few-shot classification. arXiv preprint arXiv:1904.04232. Cited by: §2.1.
  • H. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, et al. (2016) Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, pp. 7–10. Cited by: §1, §2.3.
  • A. Collins, D. Tkaczyk, and J. Beel (2018) One-at-a-time: a meta-learning recommender-system for recommendation-algorithm selection on micro level. arXiv preprint arXiv:1805.12118. Cited by: §2.2.
  • T. Cunha, C. Soares, and A. De Carvalho (2018) Metalearning and recommender systems: a literature review and empirical study on the algorithm selection problem for collaborative filtering. Information Sciences 423, pp. 128–144. Cited by: §2.2.
  • T. Cunha, C. Soares, and A. C. de Carvalho (2016) Selecting collaborative filtering algorithms using metalearning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 393–409. Cited by: §2.2.
  • S. Dooms (2013) Dynamic generation of personalized hybrid recommender systems. In Proceedings of the 7th ACM conference on Recommender systems, pp. 443–446. Cited by: §2.2.
  • M. D. Ekstrand, F. M. Harper, M. C. Willemsen, and J. A. Konstan (2014) User perception of differences in recommender algorithms. In Proceedings of the 8th ACM Conference on Recommender systems, pp. 161–168. Cited by: §2.2.
  • M. D. Ekstrand, D. Kluver, F. M. Harper, and J. A. Konstan (2015) Letting users choose recommender algorithms: an experimental study. In Proceedings of the 9th ACM Conference on Recommender Systems, pp. 11–18. Cited by: §1, §2.2.
  • M. Ekstrand and J. Riedl (2012) When recommenders fail: predicting recommender failure for algorithm selection and combination. In Proceedings of the sixth ACM conference on Recommender systems, pp. 233–236. Cited by: §1, §2.2.
  • C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135. Cited by: §1, §1, §1, §2.1.
  • H. Guo, R. Tang, Y. Ye, Z. Li, and X. He (2017) DeepFM: a factorization-machine based neural network for ctr prediction. arXiv preprint arXiv:1703.04247. Cited by: §2.3, §3.2, §5.2.
  • F. M. Harper and J. A. Konstan (2015) The movielens datasets: history and context. Cited by: §3.2, §5.1.
  • R. He and J. McAuley (2016a) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pp. 507–517. Cited by: §3.2, §5.1.
  • R. He and J. McAuley (2016b) VBPR: visual bayesian personalized ranking from implicit feedback. In Thirtieth AAAI Conference on Artificial Intelligence. Cited by: §5.1.
  • X. He and T. Chua (2017) Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 355–364. Cited by: §1.
  • Y. Huang, W. Huang, L. Li, and Z. Li (2019) Meta-learning pac-bayes priors in model averaging. arXiv preprint arXiv:1912.11252. Cited by: §1.
  • Y. Juan, D. Lefortier, and O. Chapelle (2017) Field-aware factorization machines in a real-world online advertising system. In Proceedings of the 26th International Conference on World Wide Web Companion, pp. 680–688. Cited by: §2.3.
  • Y. Juan, Y. Zhuang, W. Chin, and C. Lin (2016) Field-aware factorization machines for ctr prediction. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 43–50. Cited by: §2.3, §3.2, §5.2.
  • D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.3.
  • M. Li, T. Zhang, Y. Chen, and A. J. Smola (2014) Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 661–670. Cited by: §5.2.
  • Z. Li, F. Zhou, F. Chen, and H. Li (2017) Meta-sgd: learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835. Cited by: §2.1, §4.2, §5.3.
  • H. B. McMahan, G. Holt, D. Sculley, M. Young, D. Ebner, J. Grady, L. Nie, T. Phillips, E. Davydov, D. Golovin, et al. (2013) Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1222–1230. Cited by: §2.3.
  • H. B. McMahan and M. Streeter (2010) Adaptive bound optimization for online convex optimization. arXiv preprint arXiv:1002.4908. Cited by: §5.3.
  • T. Munkhdalai and H. Yu (2017) Meta networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2554–2563. Cited by: §2.1.
  • A. Nichol and J. Schulman (2018) Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999. Cited by: §2.1.
  • Y. Qu, H. Cai, K. Ren, W. Zhang, Y. Yu, Y. Wen, and J. Wang (2016) Product-based neural networks for user response prediction. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 1149–1154. Cited by: §2.3.
  • S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In ICLR, Cited by: §1, §2.1.
  • S. Rendle (2010) Factorization machines. In 2010 IEEE International Conference on Data Mining, pp. 995–1000. Cited by: §2.3, §3.2, §5.2.
  • P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl (1994) GroupLens: an open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM conference on Computer supported cooperative work, pp. 175–186. Cited by: §2.2.
  • J. Schmidhuber (1987) Evolutionary principles in self-referential learning (On learning how to learn: the meta-meta-... hook). Diploma thesis, Institut f. Informatik, Tech. Univ. Munich. Cited by: §1.
  • Y. Shan, T. R. Hoens, J. Jiao, H. Wang, D. Yu, and J. Mao (2016) Deep crossing: web-scale modeling without manually crafted combinatorial features. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 255–262. Cited by: §2.3.
  • J. Sill, G. Takács, L. Mackey, and D. Lin (2009) Feature-weighted linear stacking. arXiv preprint arXiv:0911.0460. Cited by: §2.2.
  • J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087. Cited by: §2.1.
  • S. Thrun and L. Pratt (2012) Learning to learn. Springer Science & Business Media. Cited by: §1.
  • O. Vinyals, C. Blundell, T. Lillicrap, and D. Wierstra (2016) Matching networks for one shot learning. In NIPS, Cited by: §1, §2.1.
  • H. Wang, N. Wang, and D. Yeung (2015) Collaborative deep learning for recommender systems. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1235–1244. Cited by: §1.
  • X. Wang, X. He, M. Wang, F. Feng, and T. Chua (2019) Neural graph collaborative filtering. arXiv preprint arXiv:1905.08108. Cited by: §1.
  • G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai (2018) Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1059–1068. Cited by: §1, §1, §2.3.