Data Optimisation for a Deep Learning Recommender System

by Gustav Hertz, et al.

This paper advocates privacy-preserving requirements on the collection of user data for recommender systems. The purpose of our study is twofold. First, we ask if restrictions on data collection will hurt the test quality of RNN-based recommendations. We study how validation performance depends on the available amount of training data. We use a combination of top-K accuracy, catalog coverage and novelty for this purpose, since good recommendations for the user are not necessarily captured by a traditional accuracy metric. Second, we ask if we can improve the quality under minimal data by using secondary data sources. We propose knowledge transfer for this purpose and construct a representation to measure similarities between purchase behaviour in data, so that we can make qualified judgements of which source domain will contribute the most. Our results show that (i) there is a saturation in test performance when training size is increased above a critical point. We also discuss the interplay between different performance metrics, and properties of data. Moreover, we demonstrate that (ii) our representation is meaningful for measuring purchase behaviour. In particular, results show that we can leverage secondary data to improve validation performance if we select a relevant source domain according to our similarity measure.




1 Introduction

In the last few years, considerations of user privacy and data security have gained attention in the AI community along with principles such as algorithmic fairness and bias (see e.g. [2, 24]). This is relevant for learning algorithms that make decisions based on data from users: movie recommendations, product suggestions, loan applications and match-making sites are common applications, to mention just a few. Indeed, the 2012 White House report on privacy and consumer data [17], and the EU General Data Protection Regulation of 2018, deem considerations of privacy and security unavoidable for the deployer of user-centric systems.

On the other side of the coin is the fact that progress in deep learning is partially enabled by large training sets of representative data. Neural networks have the potential and a proven record of modelling input-to-output mappings with unlimited complexity and flexibility ([11, 33, 36] are some examples), but they are data greedy, especially if they are to generalise well to unseen test data. “More is more” is thus the reigning principle in the development phase of deep learning systems, typically on a centralised repository of collected data. In many applications, such as image classification and machine translation, this is not an issue, while interaction with users requires care in both the collection and storage of their data.

This paper considers data optimisation for a deep-learning recommender system. First, we study how the recommender system’s performance on validation data depends on the size of the training data. To this end, we use a set of performance metrics that are designed to measure the quality of a recommendation, since a ‘good’ recommendation for the user is not necessarily represented by an observable output variable. We do not have a ground truth as our primary target (in contrast to e.g. image classification, where we have access to a true output label (‘cat’, ‘dog’, etc.) associated with the input image), but rather heuristic measures for the success of a recommendation. From experiments, we conclude that there is an optimal amount of training data beyond which the performance of the model either saturates or decreases. We discuss this in terms of properties of both the metrics and the generating data distribution. Second, we study how we can improve the performance under a minimal data requirement. We use knowledge transfer for this purpose under the assumption that we can leverage data from secondary sources, which is possible since our study is based on a multi-market setting. We propose a representation of purchase behaviour for a similarity measure to judge which secondary data distribution is most suitable for the task at hand. We study the effectiveness of the knowledge transfer and show that we can achieve significant performance gains in the target domain by leveraging a source domain selected according to our similarity measure.

In the next section, we discuss the motivation and background of our work along with related literature. We describe our setup in Section 3, including purchase data, the recommender algorithm, performance metrics and the construction behind the representation for our similarity measure. Section 4 describes the experiments and discusses their results. Section 5 summarises and concludes our work.

2 Background and Related Work

The main motivation behind our work is to put data requirements along with user privacy at the heart of the development and deployment of AI-based systems for recommendations.

Minimal necessary data

In a recent paper [19], the authors advocate a set of best-practice principles of minimal necessary data and training-data requirements analysis. We follow their motivation, which builds on the assumption that data-greed is also a (bad) habit in the development of recommender systems. The authors emphasise that data from users is a liability; respect for user privacy should be considered during data collection, and the data itself protected and securely stored. At the same time, they state that there is a clear notion of performance saturation in the square-error metric when the amount of training data reaches a certain point. Hence, analysing requirements on training data is not only desirable in view of user privacy, but also highly sensible: collecting more data than necessary should always be discouraged, and the trade-off between marginal performance and additional data should act as a guiding principle.

A similar motivation is central in [4]. The authors suggest a differential data analysis for understanding which data contributes to performance in recommender systems, and propose that less useful data should be discarded based on the analysis. While we fully share their motivation and their view that performance saturates with data size (as empirically confirmed in [19]), we would like to highlight the post-hoc nature of their analysis. The choice of which particular data should be collected and eventually discarded is made after the data has been analysed. In particular, it is in the hands of the system owner. If the control of data collection is shifted to the users themselves, such tools for removal of data are less powerful.

While we put user privacy and ex-ante minimal collection of data as our primary motivator, there are also studies on how algorithms perform when the amount of training data is naturally limited, see e.g. [7]. For recommender systems, this is the cold-start problem, and [5] analyse how algorithms compare in this situation. Another approach to circumvent limitations on data is to use contextual information about items as a basis for the recommender system in place of sensitive user information. [35] propose an approach where the system’s interaction with user data can be minimal, at least when there is strong a-priori knowledge about how items relate in a relevant way to recommendations.

Differential privacy and federated learning

As in [4], the notion of privacy is commonly associated with data as a tangible asset, with focus on secure storage, distribution and also location of user data. Decentralisation [22] is a recent method that circumvents the liabilities of centralising data, by distributing the algorithm to use data only locally on-device. Similarly, the idea of differential privacy is incorporated in deep learning in [1] to protect users from adversarial attacks that could lead to retrieval of sensitive data. While this stream of research is equally important, we highlight privacy concerns already at the level of usage of data. If, for privacy considerations, control over usage is given to users, there is probably much less data to collect in the first place.

Meaningful recommendations

In the context of recommendations, it is not clear that common metrics of accuracy represent performance in a way that is meaningful for the user, see e.g. [23]. First, there is no obvious target that is directly representative of the performance of an algorithm. Second, since recommender systems are feedback systems, they should ultimately be validated in an online manner, see [37, 8, 21]. However, combining accuracy with offline diversity metrics like coverage and novelty can help the process of optimising recommender systems for online performance [21]. We therefore use metrics that are heuristically constructed to measure performance in recommendations in terms of quality for the user: top-K accuracy, novelty and coverage [9, 3].

Knowledge transfer

For recommendations, it is common practice to use transfer learning in collaborative algorithms, see e.g. [27, 28, 20] and the survey [29]. While these approaches typically address data sparsity and missing ratings for collaborative recommendations, we focus on improving performance under limited amounts of training data in the target domain by inductive instance-based learning (see [26]). We consider a multi-market setup, and hypothesise that we can make an informed selection of source data for our knowledge transfer by measuring similarities across source and target distributions.

3 Experimental Setup

In this section, we describe the data and recommendation algorithm that are at the base of our study. We describe a set of performance metrics that we use to evaluate user experience. We then discuss the construction behind our similarity measure that we use to transfer knowledge between different markets.

3.1 Data

We model the recommender system on real data with historical product purchases. We do experiments on two datasets from different sources. Our primary dataset is made available to us by a retailer of durable consumer goods and is collected from their online store. The second set is a publicly available dataset with purchase data.

A purchase is a sequence of item IDs (representing products) bought by a user, with the order in which the items were added to the shopping cart preserved. We also refer to a purchase as a session. We remove purchases with a single item from the data, and cut sequences to contain at most 64 items. Note that there is no user profiling: the same user might generate several sequences if the purchases are from different sessions, and there is no identifier or other information about the user attached to the sequences.

The online store operates in a number of countries and the purchase data for our study is gathered from twelve geographical markets. The datasets have different sizes (total number of sequences) and the time period for collection varies. There are also slight variations in the product range available in each market. Data from one market generally has 300,000–1,600,000 purchases collected during a period no longer than two years, while between 10,000 and 20,000 unique items are available in each market. See also Table 1 for the markets used in the experiments in Section 4.1.

As secondary data, we use the publicly available yoochoose dataset from the RecSys 2015 challenge [31]. The purchase data in this dataset is comparable to a medium-sized market from the online store, such as Canada. It contains a similar amount of purchases (510,000 in yoochoose and 490,000 in the Canadian dataset) and number of unique products (14,000 in yoochoose and 12,000 in the Canadian dataset), while it has a slightly less concentrated popularity distribution over how frequently a product is purchased, see Figure 1.

The popularity of an item i in a particular market is defined by Equation (1), such that pop(i) = 1 for the most popular (most purchased) item in that market while pop(i) = 0 for item(s) not purchased at all:

    pop(i) = n_i / n_{i*},    (1)

where n_i is the number of times item i occurs in the dataset D of all items purchased in a market, and i* is the most popular product in that market. We use C to denote the product catalogue, i.e. the set of unique items available in a market.
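A minimal sketch of this popularity computation over session data (function and variable names are our own), normalising item counts by the count of the most purchased item:

```python
from collections import Counter

def popularity(sessions):
    """Relative popularity: 1.0 for the most purchased item,
    smaller values for less frequently purchased items."""
    counts = Counter(item for session in sessions for item in session)
    top = max(counts.values())
    return {item: n / top for item, n in counts.items()}
```

Items that never appear in the purchase data are simply absent from the map, corresponding to a popularity of zero.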

Figure 1: A comparison of the relative popularity distribution between the yoochoose dataset and a selection of the datasets from the online store. The distribution of Spain is hidden by the (solid blue) line of the yoochoose distribution.

We partition the dataset of each market into a validation set and a training set by random assignment, i.e. with no regard to the chronology of when sessions were recorded (note that the set of sequences is randomised, not individual items). This avoids seasonality effects in our results. We further split the training set into 10 equally-sized buckets, and construct 10 new partitions by taking the first bucket as the first set (10% of the training data), the first and second buckets as the second set (20%), etc., see Figure 2. This will be used in the experiments to assess how validation performance depends on the size of the training data. Note that the validation set is kept the same for all partitions of the training set.
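The partitioning scheme can be sketched as follows; the held-out validation fraction `val_frac` is an assumption, as the exact split is not stated here:

```python
import random

def make_partitions(sessions, n_buckets=10, val_frac=0.1, seed=7):
    """Hold out a random validation set, then build nested training sets:
    bucket 1, buckets 1-2, ..., buckets 1-10 (10% .. 100% of the training data)."""
    rng = random.Random(seed)
    shuffled = sessions[:]
    rng.shuffle(shuffled)                      # ignore chronology
    n_val = int(len(shuffled) * val_frac)
    val, train = shuffled[:n_val], shuffled[n_val:]
    size = len(train) // n_buckets
    buckets = [train[k * size:(k + 1) * size] for k in range(n_buckets)]
    nested = [sum(buckets[:k + 1], []) for k in range(n_buckets)]
    return val, nested
```

The validation set is computed once and shared by all ten nested training sets, matching the setup above.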

Figure 2: Partition of data. For each market, a fixed fraction of the total data is held out as the validation set; the remainder is the training data. In the experiments, 10% to 100% of the training data is used for training on different sized datasets.

3.2 Recommender algorithm

We use a recurrent neural network to model the sequential purchase data. This type of network has been successfully applied to recommender systems, and it is particularly well suited to session-based recommendations with no user profiling.


There are several variations and extensions of this model. For example, [15] propose to augment the input space with feature information, [34] use data augmentation to improve performance, while a session-aware model with a hierarchical recurrent network is proposed in [30].

The high-level architecture of our network is shown in Figure 3. We use a long short-term memory (LSTM) model as the recurrent layer [16]. A key mechanism in the LSTM is self-loops that create stable connections between units through time, while the scale of integration is dynamically changed by the input sequence. This enables interactions over multiple time-scales and also prevents gradients from vanishing during training, see e.g. [10].

The (one-step) input to the network is an item x_t, and the (one-step) output is a probability vector ŷ_t for predicting the next item x_{t+1} that will be added to cart. We use one-hot encoding for the items: x_t (and the target y_{t+1}) is a column vector with length equal to the number of unique items |C|, where the element corresponding to the active item is 1 while the remaining elements are zero.

Training takes a session as an example, where each item is used to predict the subsequent item in the sequence. We use categorical cross entropy

    ℓ = − Σ_{t=1}^{63} y_{t+1} · log(ŷ_t),    (2)

such that ℓ is the training loss of one example with 64 items. (The output vector ŷ_t has the same dimension as the one-hot encoded output target, and · denotes the dot product between two such vectors.) With N training examples collected from a market, the total cost

    L = Σ_{n=1}^{N} ℓ_n    (3)

is used as the optimisation objective during training of the parameters in the network.

The first hidden layer of the network is an embedding that maps each item to a real-valued d-dimensional column vector

    e_t = W_e x_t,

where W_e is a d × |C| weight matrix. If the j:th element in x_t is active, this is a simple look-up operation where e_t is a copy of the j:th column of W_e. The layer thus outputs a continuous vector representation of each categorical input: it embeds each (one-hot encoded) item ID into the geometrical space R^d. Importantly, the embedding is learned from data such that distances in R^d have a meaning in the context of our learning problem. Items that are commonly co-purchased will be close to each other in the embedding space. We will further exploit this property in Section 3.4.
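The look-up property can be illustrated in a few lines; the matrix name `W_e` and the helper function are our own notation:

```python
import numpy as np

def embed(W_e, active_index, n_items):
    """Multiplying an embedding matrix by a one-hot vector simply
    selects the corresponding column of the matrix."""
    x = np.zeros(n_items)
    x[active_index] = 1.0
    return W_e @ x
```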

The second hidden layer is an LSTM cell that maps the current input e_t and its previous output h_{t−1} (this is the recurrent feature) to an updated output:

    h_t = o_t ⊙ tanh(c_t),

where h_t is an n-dimensional vector. The key component is an internal state c_t that integrates information over time with a self-loop. A set of gates f_t, i_t and o_t respectively control the flow of the internal state, external inputs and new output of the layer:

    c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W e_t + U h_{t−1} + b).

W, U and b are input weights, recurrent weights and biases; σ is the sigmoid activation and ⊙ denotes an element-wise product. All gates have the same construction. For the forget gate

    f_t = σ(W^f e_t + U^f h_{t−1} + b^f),

where the superscript f indicates that weights and biases are associated with the forget gate. Corresponding superscripts i and o are used for the input and output gate respectively. Due to the sigmoid, the gates can take a value between 0 and 1 to continuously turn off/on the interaction of their variable, and the gating is also controlled by the inputs to the layer. We use W to collect all weight matrices of the layer, and b for the biases. The W-matrices are n × d, while the U-matrices are n × n since they operate on the recurrent state.
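One forward step of this LSTM cell can be sketched in NumPy; the gate names and the dictionary layout of the weights are our own, and a real system would use an optimised library instead:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(e_t, h_prev, c_prev, W, U, b):
    """One LSTM forward step. W, U, b are dicts keyed by gate:
    'f' (forget), 'i' (input), 'o' (output), 'g' (candidate state)."""
    f = sigmoid(W['f'] @ e_t + U['f'] @ h_prev + b['f'])   # forget gate
    i = sigmoid(W['i'] @ e_t + U['i'] @ h_prev + b['i'])   # input gate
    o = sigmoid(W['o'] @ e_t + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ e_t + U['g'] @ h_prev + b['g'])   # candidate
    c_t = f * c_prev + i * g        # self-loop on the internal state
    h_t = o * np.tanh(c_t)          # gated output
    return h_t, c_t
```

Since the output is a product of a sigmoid gate and a tanh, every element of h_t stays within (−1, 1).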

Figure 3: The network implementation with input and output dimensions of each layer. The input is a sequence of 64 items, each of which is a one-hot encoded vector of length |C|, the number of available items in the product catalog. We use a dimension of 20 for the embedding and 50 output units in the LSTM cell. The output of the network is a probability for each of the |C| items.

The last hidden layer is a densely connected layer with a softmax activation to output a probability vector:

    ŷ_t = softmax(W_y h_t + b_y).    (4)

This is a probability distribution over all |C| available products: ŷ_t has the dimension |C| (the same as the target y_{t+1}), and its j:th element is the probability that the next item is the j:th product according to the one-hot encoding.

The parameters of all layers in the network are collected in θ. We learn θ from a training set of sessions by minimising the total cost (3) with the Adam optimiser [18].

Recommendations by the trained network are based on the next-step prediction ŷ from a user’s session up to the most recent item that was added to cart. For recommending a single item, we use the maximum probability in ŷ. Similarly, for a ranked list of top-K recommendations, we take the K items that have the largest probabilities. We denote this list with R_K, and use R_K(y) when we want to emphasise that the target for the recommendation is the (true) item y.
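Extracting the ranked top-K list from the predicted probability vector is a simple argsort; a sketch with illustrative names:

```python
import numpy as np

def top_k(probs, k=4):
    """Indices of the k items with the largest predicted probability,
    highest first."""
    order = np.argsort(probs)[::-1]
    return order[:k].tolist()
```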

Learning-to-rank loss

While categorical cross entropy (2) is the default loss for multiclass classification with neural networks, it is not the only choice. For recommender systems, there are several alternatives, and learning-to-rank criteria are popular since the recommender’s problem is ultimately ranking a list of items. A challenge with cross entropy for recommendations is also the sparse nature of the classification problem. Processing a training example will update parameters associated with only one node of the output, the prediction probability for the active, ‘positive’ item. Predictions for all the other ‘negative’ items will not be improved by the training example. Moreover, since the distribution of how often items are active is typically very skewed in data (see Figure 1), the model is mostly trained at predicting (the small set of) popular items with many positive observations.

To this end, Bayesian personalised ranking (BPR) is a successful criterion for recommendations that optimises predictions also for negative items [32]. It considers pairwise preferences for a positive and a negative item, such that the model score is maximised for the former and minimised for the latter. In the experiments, as a complement to cross entropy, we use a BPR loss adapted to recurrent neural networks by [14]:

    ℓ_BPR = − (1/|N_S|) Σ_{j∈N_S} log σ(r_p − r_j).    (5)

N_S is a set of uniformly sampled indices of negative items, r_j denotes the j:th element of the model score r, and r_p is the positive element, i.e. corresponding to the active element of y. The model score r is the pre-activation of the output layer (4) and σ is the logistic sigmoid.
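A sketch of a BPR loss on pre-activation scores, with `pos_idx` the active item and `neg_idx` the uniformly sampled negatives (function and argument names are our own):

```python
import numpy as np

def bpr_loss(scores, pos_idx, neg_idx):
    """BPR-style loss: -mean over sampled negatives of
    log sigmoid(r_pos - r_neg), computed on pre-activation scores."""
    diff = scores[pos_idx] - scores[neg_idx]
    return float(np.mean(-np.log(1.0 / (1.0 + np.exp(-diff)))))
```

The loss is always positive and shrinks as the margin between the positive and negative scores grows.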

Graph based recommender algorithm

For comparison, we also use a graph-based algorithm where purchase sessions are modelled by a Markov chain with a discrete state space. Each unique item in the product catalog is a state in the Markov chain, and the transition probability between a pair of products is estimated by maximum likelihood, i.e. from how frequently the two products occur consecutively in purchases in the training dataset.

To generate a list of recommendations based on the items that have been added to the shopping cart, we use a random walk according to the Markov chain. We start the walk at each item in the cart and take two random steps (two transitions). This is then repeated 500 times. We select the products that occur most frequently across all random walks as recommendations for the user.
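The random-walk recommender can be sketched as follows; the function names and the counts-based representation of the transition probabilities are our own:

```python
import random
from collections import Counter, defaultdict

def fit_transitions(sessions):
    """Maximum-likelihood transition counts from consecutive item pairs."""
    counts = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: (list(c.keys()), list(c.values())) for a, c in counts.items()}

def recommend(trans, cart, k=4, n_walks=500, n_steps=2, seed=0):
    """From each cart item, take n_steps random transitions, repeated
    n_walks times; return the k most visited items."""
    rng = random.Random(seed)
    visits = Counter()
    for start in cart:
        for _ in range(n_walks):
            state = start
            for _ in range(n_steps):
                if state not in trans:
                    break
                items, weights = trans[state]
                state = rng.choices(items, weights=weights)[0]
                visits[state] += 1
    return [item for item, _ in visits.most_common(k)]
```

Sampling with the raw counts as weights is equivalent to sampling from the maximum-likelihood transition probabilities.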

3.3 Performance metrics

Performance of multiclass classification algorithms, such as recommender systems, is commonly evaluated with metrics of accuracy, recall or F-measures. However, these do not necessarily give a good representation of how satisfied the user is with item suggestions, see e.g. [37, 8]. Therefore, to better quantify performance in terms of user experience, we use a combination of three metrics for evaluation: top-K accuracy, catalog coverage and novelty.

Top-K accuracy is a proxy measure of how often suggestions by the recommender system align with the actual item selected by the user, see e.g. [6]. The algorithm predicts a list of K items for the next item of the user’s purchase. If any of these is the true next item, the recommendation is considered successful. Top-K accuracy is then the ratio of successes on a validation dataset V:

    acc_K = (1/|V|) Σ_{y∈V} 1[y ∈ R_K(y)],

where 1[y ∈ R_K(y)] is one if the K recommendations include the true item y, zero otherwise, and |V| is the cardinality of the validation set.
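A small self-contained sketch of this metric (names are ours), over paired lists of recommendations and true next items:

```python
def top_k_accuracy(recommendations, truths):
    """Fraction of validation predictions whose top-K list
    contains the true next item."""
    hits = sum(1 for recs, true in zip(recommendations, truths) if true in recs)
    return hits / len(truths)
```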

Catalog coverage, as defined in Equation (6), is the proportion of items in the product catalog that are actively being suggested by the recommender algorithm on a validation set, see e.g. [9]. A high coverage provides the user with a more detailed and nuanced picture of the product catalog:

    cov = |∪_{y∈V} R_K(y)| / |C|,    (6)

where the cardinality in the numerator is taken over the set of all unique items in the union of recommendations.

Novelty is a metric that aspires to capture how different a recommendation is compared to items previously seen by the user; [3]. Since our data does not contain user identifiers, it is not possible to say which items have been viewed by the user before the purchase. Instead, we use the popularity of an item (1) as a proxy and define the metric as

    nov = (1/|V|) Σ_{y∈V} (1/K) Σ_{j=1}^{K} (1 − pop_j),    (7)

where pop_j is the relative popularity (between 0 and 1) of the j:th item in R_K(y), the list of items recommended for y. Less popular items are therefore rewarded in this metric.
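Both diversity metrics are straightforward to compute offline; a minimal sketch under our own naming, with `pop` a map from item to relative popularity in [0, 1]:

```python
def catalog_coverage(rec_lists, catalog):
    """Unique recommended items as a fraction of the catalogue size."""
    recommended = set().union(*map(set, rec_lists))
    return len(recommended) / len(catalog)

def novelty(rec_lists, pop):
    """Mean of (1 - popularity) over all recommended items;
    unseen items default to popularity 0 (maximally novel)."""
    scores = [1.0 - pop.get(item, 0.0) for recs in rec_lists for item in recs]
    return sum(scores) / len(scores)
```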

3.4 Similarity measure

As for the online store, it is common that a retailer operates in several related markets. In this case, there is good reason to use knowledge transfer to leverage data across markets. We propose a similarity measure for this purpose, to better judge which ‘foreign’ market data (source domain) is suitable for developing a recommendation system in the ‘domestic’ market (target domain).

One possible approach to measure data compatibility between domains is by using embeddings. These are continuous representations of the data that are inferred from the learning problem. In particular, the learned representation has meaning such that geometry in the embedding space corresponds to contextual relationships. A striking example is the embedding technique of the word2vec model, which successfully captures semantic similarities by distances in the embedding space [25].

We use a model for the recommender system with an embedding layer that captures purchase behaviour in a market (the first hidden layer in the neural network, see Section 3.2). Distances in these embeddings of different markets are at the base of our similarity measure.

To construct the measure, we first train the network on a dataset with data from all markets, to obtain a global representation with no regional dependencies. We then remove all layers except the initial embedding layer from the model. The input to this layer is an (encoded) item ID and the output is a vector of length d that represents the item in the embedding space. For a purchase with 64 items, we concatenate the corresponding 64 embedding vectors to obtain a vector of length 64d. We take this as a (global) representation of a particular purchase.

We use these vectors to measure similarities between purchase behaviour in different markets. We take purchase vectors from each market and compute a centroid from Equation (8). We let this centroid represent the (local) purchase behaviour in that market:

    c_m = (1/N_m) Σ_{n=1}^{N_m} v_{m,n},    (8)

where c_m is the computed centroid for market m, v_{m,n} is the representation for purchase n in that market, and N_m is the number of such embedding vectors used to compute the centroid.

Finally, we compute the similarity between two markets m and m′ by cosine similarity:

    sim(m, m′) = (c_m · c_{m′}) / (‖c_m‖ ‖c_{m′}‖).
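A sketch of the centroid and cosine-similarity computation (function names are illustrative):

```python
import numpy as np

def market_centroid(purchase_vectors):
    """Mean of the concatenated embedding vectors representing
    individual purchases in one market."""
    return np.mean(np.asarray(purchase_vectors), axis=0)

def market_similarity(c_m, c_k):
    """Cosine similarity between two market centroids."""
    return float(c_m @ c_k / (np.linalg.norm(c_m) * np.linalg.norm(c_k)))
```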

Figure 4: Validation performance in top-4 accuracy (a), catalog coverage (b) and novelty (c) as a function of training data size for six different markets of the online store. Figure (d) shows catalog coverage validated on the yoochoose dataset.

4 Experimental Results

We conduct two sets of experiments in this section. In the first, we investigate how the amount of training data affects the recommendation system’s performance. In the second, we investigate whether training data from secondary sources can improve performance, and whether our similarity measure can be useful in the selection of such data.

4.1 Performance versus size of training data

In this experiment, we analyse how performance on the validation data varies as we include more and more examples in the training set used for learning the network parameters. The training data is partitioned as explained in Section 3.1. Note that we use the same hold-out validation set for each training set; it is just the amount of training data that is changed (increased in size). For the network, we use the architecture in Figure 3, with an embedding dimension of 20 and 50 hidden units in the LSTM cell. In the learning process, we also keep a constant setting for each training set: we optimise with mini-batch gradient descent, with a batch size of 64 examples, and run for 25 epochs. When we increase the size of the training set, we cold-start the optimisation with uniform Xavier initialisation.

We repeat the experiment on data from the online store for six different markets to further nuance the results (with six separate networks), as well as on the yoochoose dataset; see Table 1.

Market Data (#purchases) Catalog (#items)
Canada 490,000 12,000
France 676,000 15,000
Germany 1,636,000 18,000
Spain 346,000 11,000
Sweden 386,000 11,000
Poland 368,000 10,000
Yoochoose 510,000 14,000
Table 1: Description of data.

We report on all three metrics to evaluate performance, and use a list of K = 4 items for top-K accuracy. Results are shown in Figures 4(a)-4(c) for the primary data, and in Figure 4(d) for the secondary data (we only include a plot of the catalog coverage metric).

From Figure 4(a) we observe a clear tendency that as the amount of training data increases, top-4 accuracy on validation data (quickly) increases up to a threshold before it levels off. This logarithmic behaviour of the increase suggests a law of diminishing returns due to the irreducible generalisation error that cannot be surpassed with larger amounts of training data [12]. We see this for all studied markets. This is on a par with the saturation effect reported in [19]. However, while they conclude a decline of accuracy with the squared-error metric on training data, we look at validation performance, with metrics purposely designed for measuring the quality of recommendations.

The overall difference in performance between markets is most likely due to how much the purchasing patterns vary within a market, i.e. the degree of variability (entropy) in the generating data distribution. If purchasing patterns vary more within a market, predicting them will be harder. Accuracy can also be connected to the popularity distribution of a market, shown in Figure 1. For example, Germany achieves high levels of top-4 accuracy, around 0.27, while its popularity distribution is concentrated over relatively few products. The network can then achieve high accuracy by concentrating its recommendations on a smaller active set of popular items. Similarly, Spain has a ‘flatter’ popularity distribution compared to other markets, such that the network has to learn to recommend a larger set. It has a top-4 accuracy that levels off around 0.2.

For the yoochoose data, a similar levelling-off of validation performance is observed, while the network reaches a higher level of top-4 accuracy, around 0.43. This indicates that purchases in this dataset are generally much easier to predict: since its popularity distribution is very similar to Spain’s (see Figure 1), it has a relatively large active set of popular items. Still, the network learns to recommend this set with high overall top-4 accuracy.

Figure 4(b) shows an opposite relationship between catalog coverage and the amount of training data. Catalog coverage on the validation set is low for the smallest training sets. When reaching a sufficient amount of data, the network learns to ‘cover’ the catalog, and the metric peaks. After this peak, catalog coverage decreases as more training data is added. This is observed for all markets. However, we observe a different pattern on the yoochoose dataset, see Figure 4(d). On this dataset we see a law of diminishing returns, similar to what we observed for top-K accuracy in Figure 4(a). An explanation for this behaviour could be that the data distribution has less variability, such that the yoochoose dataset is more predictable, which the high level of top-K accuracy indicates. If the dataset is easier to predict, the recommender system can more accurately predict items from the active set while the network remains regular enough to cover the full product catalog. In contrast, since the datasets from the online store seem to be less predictable, the recommender system learns to recommend the most popular items when it is trained on more data. The network is less regular, such that it leaves the less popular items out, which thereby lowers catalog coverage.

Again, validation performance in terms of catalog coverage makes a good case for preferring minimal necessary data: if catalog coverage is an important metric for the recommender, there is indeed a trade-off, and even a decrease in performance, when the amount of training data is increased.

Figure 4(c) shows novelty on the validation set as a function of the amount of training data. The impact of training size on novelty is less clear than for top-K accuracy and catalog coverage: for all markets except Germany, novelty decreases when the amount of training data increases. On the yoochoose data, novelty is rather constant, with no clear effect from the size of the training set. Recommendations are generally more novel in the French and German markets. These are also the two markets with more concentrated popularity distributions, such that less popularity is assigned to a larger portion of the recommended items in the metric. This is probably a contributing factor to their high levels of validation novelty.

Performance in terms of novelty is quite robust for a couple of markets and for yoochoose, while we see a general decline in novelty on the validation set for the other markets. This indicates, yet again, a trade-off between performance and training-data size.
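The novelty behaviour discussed above can be sketched as the mean self-information of recommended items under the training popularity distribution, a common formulation in the spirit of [3]. This is a sketch under that assumption, with a hypothetical toy popularity distribution; the paper's exact definition is in Section 3.3.

```python
import math

def novelty(recommendations, popularity):
    """Mean self-information -log2(p(i)) of recommended items, where
    p(i) is item i's share of interactions in the training data."""
    scores = [-math.log2(popularity[i])
              for rec_list in recommendations for i in rec_list]
    return sum(scores) / len(scores)

# Hypothetical popularity shares over three items (sum to 1).
pop = {0: 0.7, 1: 0.2, 2: 0.1}
print(novelty([[0, 1], [2, 0]], pop))
```

A concentrated popularity distribution assigns very low probability to the long tail, so recommending tail items yields high self-information, consistent with the higher novelty seen for the French and German markets.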

In all, we see a strong case and good reasons for a view towards minimal necessary training data when developing and deploying a recommender system. This holds across all considered metrics, with the effect of training size intertwined with the underlying data distribution and the typical purchase behaviour of users. For top-K accuracy, there is a saturation and thus diminishing marginal performance when we increase the amount of training data. For catalog coverage there is a decline in performance on the market data from the online store, and saturation on the more predictable yoochoose data. Similarly, we see declining performance in novelty for markets with a larger active set of popular items, while the metric is relatively constant in the other markets and on the yoochoose data.
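For completeness, the top-K accuracy referred to throughout can be sketched as a simple hit rate: the share of events where the true next item appears among the top K recommendations. This is a sketch of the usual formulation with hypothetical toy data; the paper's exact definition is in Section 3.3.

```python
def top_k_accuracy(ranked_predictions, targets, k=4):
    """Share of events where the true next item is among the top-k
    recommended items (k=4 matches the paper's top-4 accuracy)."""
    hits = sum(target in preds[:k]
               for preds, target in zip(ranked_predictions, targets))
    return hits / len(targets)

# Toy example: three events, each with a ranked list of five candidate items.
preds = [[3, 1, 7, 2, 9], [5, 4, 8, 1, 0], [2, 6, 3, 9, 4]]
true_next = [7, 0, 6]
print(top_k_accuracy(preds, true_next))  # hits in 2 of 3 events
```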

Ranking loss.

To complement the analysis, we repeat the experiment and use the Bayesian personalised ranking loss (5) for training on the Swedish market. We keep the same model parameters and architecture (except for the softmax activation of the output layer), and use a sample of negative items. For the resulting validation performance shown in Figure 5, the trends from the above analysis continue to hold. As for cross entropy, there is a clear saturation in top-4 accuracy for BPR. As promised by the ranking criterion, the overall accuracy is higher at the same amount of data, but notably, the return on additional data is the same for the two losses: there is a positive shift in accuracy when training with BPR. In return, there is a negative shift in catalog coverage. We still see a small increase followed by a decline and saturation, but the general level of coverage is 15–20 percentage points lower. This is the trade-off for achieving the higher accuracy.
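The BPR criterion can be sketched as follows: for each observed (positive) item, the loss pushes its score above the scores of a sample of negative items via -log σ(s_pos - s_neg). This is a minimal stand-alone sketch of the loss from [32], not the paper's training code; the scores below are hypothetical.

```python
import math

def bpr_loss(pos_score, neg_scores):
    """Bayesian personalised ranking loss for one positive item and a
    sample of negatives: -mean over negatives of log sigmoid(s_pos - s_neg)."""
    def log_sigmoid(x):
        # Numerically stable log(sigmoid(x)).
        return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))
    return -sum(log_sigmoid(pos_score - s) for s in neg_scores) / len(neg_scores)

# The loss shrinks as the positive item's score rises above the negatives'.
print(bpr_loss(2.0, [0.1, -0.3]))  # positive ranked well: small loss
print(bpr_loss(0.0, [1.0, 0.5]))   # negatives score higher: larger loss
```

Because the loss only cares about ranking the positive above the sampled negatives, it tends to sharpen accuracy on frequent items, which is consistent with the higher top-4 accuracy and lower catalog coverage reported above.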

Figure 5: Validation performance when training with Bayesian personalised ranking loss on Swedish data: top-4 accuracy (left) and catalog coverage (right).

Alternative methods

To further complement the analysis, we use the same experimental setup with the Markov chain model, this time on data from the French market. Here the results are less clear: top-4 accuracy on validation data is 0.15 when transitions are estimated on 10% of the training data, and stays around 0.16 ± 0.005 when 20%–100% of the training data is used. Similarly, a rather constant trend around 0.25 is observed for catalog coverage. This is probably due to the fact that only one state, a single item, is used to predict the next item in the sequence (this is the Markov property) according to transition probabilities. Their estimation, from frequencies of pairwise consecutive items in training data, is relatively stable from the point of using 10%–20% of the data. Adding more does not have a large effect on the estimated parameters, nor does it help the model to predict more complex sequential patterns.
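The estimation step just described can be sketched directly: transition probabilities are relative frequencies of consecutive item pairs, and recommendations are the most probable successors of the current item. A minimal sketch with hypothetical toy sessions:

```python
from collections import Counter, defaultdict

def fit_transitions(sessions):
    """Estimate first-order transition probabilities from consecutive
    item pairs (the Markov property: only the current item matters)."""
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def top_k(transitions, item, k=4):
    """Recommend the k most probable successors of `item`."""
    probs = transitions.get(item, {})
    return [i for i, _ in sorted(probs.items(), key=lambda kv: -kv[1])[:k]]

sessions = [[1, 2, 3], [1, 2, 4], [2, 3, 1], [1, 2, 3]]
model = fit_transitions(sessions)
print(top_k(model, 2))  # → [3, 4]
```

Since each probability is estimated from simple pair counts, the estimates stabilise quickly with data, which explains why accuracy plateaus after only 10%–20% of the training set.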

Figure 6: Similarities in purchase behaviour between different markets according to the measure based on an embedding learned by the neural network.

4.2 Cross-market knowledge transfer

In this experiment, we first use the similarity measure of Section 3.4 to compare the purchase behaviours of 12 markets from data of the online store. We then test whether a model trained on data from a similar market performs better than a model trained on data from a dissimilar market.

Figure 6 contains the computed similarities between the markets. We note a clear correlation between geographical closeness and the similarity measure. For example, the most similar countries to Germany in terms of purchase behaviour are Austria and the Netherlands, while the most similar markets to China are South Korea and Japan. It is also possible to see that European markets are similar to each other while dissimilar to Asian markets.
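As an illustration only: the measure of Section 3.4 is built from an embedding learned by the network, and a natural way to compare markets is then a cosine-style similarity between per-market representation vectors. The vectors and market codes below are hypothetical, chosen merely to mimic the pattern in Figure 6.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical market vectors, e.g. aggregated from the learned item
# embeddings of each market's purchase sequences.
markets = {
    "DE": [0.9, 0.1, 0.2],
    "AT": [0.8, 0.2, 0.1],
    "CN": [0.1, 0.9, 0.7],
}
print(cosine_similarity(markets["DE"], markets["AT"]))  # high: similar markets
print(cosine_similarity(markets["DE"], markets["CN"]))  # low: dissimilar markets
```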

Figure 7: Top-4 accuracy plotted as a function of dataset size when testing how well models trained on data from a secondary market perform on a local validation dataset.

Our hypothesis is that similarities in purchase behaviour can be used to judge whether data from one market can be leveraged for training a recommender system in another market. We conduct an empirical study for this purpose. We construct a test to imitate a situation where one small market (here denoted A) lacks enough data to reach sufficient performance levels. To address the shortage, data from another, similar market is added to the training dataset, enabling training with sufficient data to reach desirable performance levels on a validation set. We use top-K accuracy to evaluate this experiment, as it is the metric that indicates how well the recommender system predicts purchase behaviour in a market.

To test if our similarity measure is suitable for selection, we train three models on data from three different markets, denoted A, B and C. We then evaluate all three models using a validation dataset from market A. The markets are selected so that A and B have high similarity according to our measure, while A and C are dissimilar. If the similarity measure performs as expected, the model trained on data from market B should outperform the model trained on data from market C when validated on market A. We conduct the test for three different triplets of markets. The results are presented in Figure 7.

As a baseline, the model is trained and validated on data from the same market. This is the same setting as in Section 4.1, with results shown in Figure 3(a). We use the baseline to judge how well models trained on data from one market and validated on another perform, compared to using data from the market on which they are validated.

In Figures 7(b) and 7(c), the model trained on data from a similar market according to the suggested measure, B, achieves higher top-K accuracy than the model trained on data from the dissimilar market C. However, in Figure 7(a) the model trained on data from the dissimilar market, C, achieves slightly higher top-K accuracy than the model trained on data from the similar market.

It is worth noting that, for all experiments reported in Figure 7, there is a significant difference in achieved top-K accuracy between the baseline and the models trained on data from other markets. The exception is the case where data from the Netherlands is used to train a model validated on data from Germany, where the top-K accuracy levels are very similar.

In all three examples in Figure 7 there is a significant difference in top-K accuracy between the two secondary markets, confirming that the choice of which secondary market's data to use is indeed important.

5 Summary and Conclusion

Performance as a function of dataset size

In this paper, we show that a recommender system experiences evident saturation effects in validation performance as the size of the training dataset grows. We opted to use a combination of three performance metrics to better estimate the quality of the user experience: top-K accuracy, catalog coverage and novelty.

The saturating effect is especially evident in top-K accuracy, which approaches a maximum level for the specific problem. Furthermore, we observe different behaviours for catalog coverage depending on the dataset. On the datasets from the online store, we observe a decrease in coverage when the amount of training data increases, while on the yoochoose dataset we observe a saturating behaviour similar to that of top-K accuracy.

All of our experiments, across all metrics, indicate saturating effects on validation data when the amount of training data is increased. Our results thus further confirm the results in [19] and complement their study in two aspects: evaluating on a validation set to capture the system’s generalisation performance, and evaluating with both accuracy and diversity metrics that are purposefully designed to capture good recommendations for the user.

We find that there is an apparent trade-off between accuracy-focused metrics such as top-K accuracy, and diversity-focused metrics such as catalog coverage and novelty, as indicated in [38]. Hence, an optimal amount of training data can be determined if a notion of optimality is clearly defined: depending on what is prioritised in the system design, the optimal amount of data varies. If accuracy is of high importance, more data could lead to improvements. But if catalog coverage and novelty are more important, less data could be used without sacrificing too much accuracy.

From these results, we conclude that sufficient validation performance can be achieved without striving to gather as much data as possible. These results also leave room for greater consideration of user privacy: corporations could be more compliant towards users who are hesitant to share their personal data over privacy concerns, without risking significant performance losses in their recommender systems.

Similarity measure and knowledge transfer

We propose a method for constructing a similarity measure to quantify the compatibility between datasets in terms of purchase behaviour. First, our tests confirm that it is possible to use data from a secondary market to complement a smaller dataset. Second, we show that validation performance depends on which market the data is taken from, and that performance varies significantly. In none of our experiments did a model trained on data from another market outperform the model trained on data from the original market. However, when we trained on a ‘similar’ market, the performance was generally better than when training on a ‘dissimilar’ market.

Our proposed metric shows some promise in that it captures how geographical closeness correlates with purchase behaviour. For instance, it predicts that Germany, the Netherlands, Switzerland and Austria have similar behaviour. The metric also correctly predicted which data would be most successful for knowledge transfer in two out of the three tested cases. We find these results interesting and promising for future research.


Similar to [19], we conclude that it is not sensible to gather ever-increasing amounts of data for training recommender systems. When giving users a choice of sharing their personal data for the purpose of personalisation, some users will inevitably opt out. Our results show that this decrease in collected data does not necessarily have to be a disadvantage. First, we have shown that, due to saturation effects in performance, there is an amount of data that is enough to reach sufficient performance levels. Second, we have shown that if there is not enough data available to reach sufficient performance levels, it is possible to use data from other, similar domains to complement small datasets.

Finally, we propose a metric to judge which domains are similar and where datasets are compatible to a higher extent. The results are promising and point towards further research on how to efficiently exploit such secondary data with transfer-learning methods.

With these results, we believe that there is a strong case for working towards minimal necessary data within recommender systems. This has considerable privacy benefits without necessarily having unfavourable effects on performance.


Acknowledgements

We thank Kim Falk for helpful discussions.


  • [1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang (2016) Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. Cited by: §2.
  • [2] M. Al-Rubaie and J. M. Chang (2019) Privacy-preserving machine learning: threats and solutions. IEEE Security & Privacy 17 (2), pp. 49–58. Cited by: §1.
  • [3] P. Castells, S. Vargas, and J. Wang (2011-01) Novelty and diversity metrics for recommender systems: choice, discovery and relevance. Proceedings of International Workshop on Diversity in Document Retrieval (DDR), pp. . Cited by: §2, §3.3.
  • [4] R. Chow, H. Jin, B. Knijnenburg, and G. Saldamli (2013) Differential data analysis for recommender systems. In Proceedings of the 7th ACM conference on Recommender systems, pp. 323–326. Cited by: §2, §2.
  • [5] P. Cremonesi and R. Turrin (2009) Analysis of cold-start recommendations in iptv systems. In Proceedings of the third ACM conference on Recommender systems, pp. 233–236. Cited by: §2.
  • [6] K. Falk (2019) Practical recommender systems. Manning Publications. External Links: ISBN 9781617292705, LCCN 2018287423, Link Cited by: §3.3.
  • [7] G. Forman and I. Cohen (2004) Learning from little: comparison of classifiers given little training. In European Conference on Principles of Data Mining and Knowledge Discovery, pp. 161–172. Cited by: §2.
  • [8] F. Garcin, B. Faltings, O. Donatsch, A. Alazzawi, C. Bruttin, and A. Huber (2014) Offline and online evaluation of news recommender systems at swissinfo.ch. In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys ’14, New York, NY, USA, pp. 169–176. External Links: ISBN 9781450326681, Link, Document Cited by: §2, §3.3.
  • [9] M. Ge, C. Delgado, and D. Jannach (2010-01) Beyond accuracy: evaluating recommender systems by coverage and serendipity. RecSys’10 - Proceedings of the 4th ACM Conference on Recommender Systems, pp. 257–260. External Links: Document Cited by: §2, §3.3.
  • [10] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio (2016) Deep learning. Vol. 1, MIT Press, Cambridge. Cited by: §3.2.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034. Cited by: §1.
  • [12] J. Hestness, S. Narang, N. Ardalani, G. F. Diamos, H. Jun, H. Kianinejad, Md. M. A. Patwary, Y. Yang, and Y. Zhou (2017) Deep learning scaling is predictable, empirically. CoRR abs/1712.00409. External Links: Link, 1712.00409 Cited by: §4.1.
  • [13] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk (2015) Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Cited by: §3.2.
  • [14] B. Hidasi and A. Karatzoglou (2018) Recurrent neural networks with top-k gains for session-based recommendations. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 843–852. Cited by: §3.2.
  • [15] B. Hidasi, M. Quadrana, A. Karatzoglou, and D. Tikk (2016) Parallel recurrent neural network architectures for feature-rich session-based recommendations. In Proceedings of the 10th ACM conference on recommender systems, pp. 241–248. Cited by: §3.2.
  • [16] S. Hochreiter and J. Schmidhuber (1997-12) Long short-term memory. Neural computation 9, pp. 1735–80. External Links: Document Cited by: §3.2.
  • [17] W. House (2012) Consumer data privacy in a networked world: a framework for protecting privacy and promoting innovation in the global digital economy. White House, Washington, DC, pp. 1–62. Cited by: §1.
  • [18] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.2.
  • [19] M. Larson, A. Zito, B. Loni, and P. Cremonesi (2017) Towards minimal necessary data: the case for analyzing training data requirements of recommender algorithms. In FATREC Workshop on Responsible Recommendation Proceedings, Cited by: §2, §2, §4.1, §5, §5.
  • [20] B. Li, Q. Yang, and X. Xue (2009) Can movies and books collaborate? cross-domain collaborative filtering for sparsity reduction. In Twenty-First International Joint Conference on Artificial Intelligence. Cited by: §2.
  • [21] A. Maksai, F. Garcin, and B. Faltings (2015) Predicting online performance of news recommender systems through richer evaluation metrics. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys ’15, New York, NY, USA, pp. 179–186. External Links: ISBN 9781450336925, Link, Document Cited by: §2.
  • [22] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. Cited by: §2.
  • [23] S. M. McNee, J. Riedl, and J. A. Konstan (2006) Being accurate is not enough: how accuracy metrics have hurt recommender systems. In CHI’06 extended abstracts on Human factors in computing systems, pp. 1097–1101. Cited by: §2.
  • [24] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan (2019) A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635. Cited by: §1.
  • [25] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cited by: §3.4.
  • [26] S.J. Pan and Q. Yang (2010) A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359. Cited by: §2.
  • [27] W. Pan, N. Liu, E. Xiang, and Q. Yang (2011) Transfer learning to predict missing ratings via heterogeneous user feedbacks. In IJCAI, Cited by: §2.
  • [28] W. Pan, E. Xiang, N. Liu, and Q. Yang (2010) Transfer learning in collaborative filtering for sparsity reduction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 24. Cited by: §2.
  • [29] W. Pan (2016) A survey of transfer learning for collaborative recommendation with auxiliary data. Neurocomputing 177, pp. 447–453. External Links: Link, Document Cited by: §2.
  • [30] M. Quadrana, A. Karatzoglou, B. Hidasi, and P. Cremonesi (2017) Personalizing session-based recommendations with hierarchical recurrent neural networks. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pp. 130–137. Cited by: §3.2.
  • [31] RecSys 2015 challenge. Note: accessed on 2020-09-09 Cited by: §3.1.
  • [32] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme (2012) BPR: bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618. Cited by: §3.2.
  • [33] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of go with deep neural networks and tree search. nature 529 (7587), pp. 484–489. Cited by: §1.
  • [34] Y. K. Tan, X. Xu, and Y. Liu (2016) Improved recurrent neural networks for session-based recommendations. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pp. 17–22. Cited by: §3.2.
  • [35] M. Tegnér (2020) Online learning for distributed and personal recommendations—a fair approach. In ICML 2020, 2nd Workshop on Human in the Loop Learning, Cited by: §2.
  • [36] O. Vinyals, Ł. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton (2015) Grammar as a foreign language. In Advances in neural information processing systems, pp. 2773–2781. Cited by: §1.
  • [37] J. Yi, Y. Chen, J. Li, S. Sett, and T. W. Yan (2013) Predictive model performance: offline and online evaluations. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, New York, NY, USA, pp. 1294–1302. External Links: ISBN 9781450321747, Link, Document Cited by: §2, §3.3.
  • [38] T. Zhou, Z. Kuscsik, J. Liu, M. Medo, J. R. Wakeling, and Y. Zhang (2010) Solving the apparent diversity-accuracy dilemma of recommender systems. Proceedings of the National Academy of Sciences 107 (10), pp. 4511–4515. Cited by: §5.