JSCN: Joint Spectral Convolutional Network for Cross Domain Recommendation

10/18/2019 ∙ by Zhiwei Liu, et al.

Cross-domain recommendation can alleviate the data sparsity problem in recommender systems. To transfer knowledge from one domain to another, one can either utilize the neighborhood information or learn a direct mapping function. However, existing methods ignore the high-order connectivity information in cross-domain recommendation and suffer from the domain-incompatibility problem. In this paper, we propose a Joint Spectral Convolutional Network (JSCN) for cross-domain recommendation. JSCN simultaneously operates multi-layer spectral convolutions on different graphs and jointly learns a domain-invariant user representation with a domain-adaptive user mapping module. As a result, the high-order comprehensive connectivity information can be extracted by the spectral convolutions, and the information can be transferred across domains with the domain-invariant user mapping. The domain-adaptive user mapping module further enables knowledge transfer between incompatible domains. Extensive experiments on 24 Amazon rating datasets show the effectiveness of JSCN in cross-domain recommendation, with a 9.2% improvement on recall and a 36.4% improvement on MAP compared with state-of-the-art methods. Our code is available online.




I Introduction

Recommending users with a set of preferred items is still an open problem [18, 39, 9, 24, 4, 5], especially when the dataset is very sparse. To remedy the data sparsity issue, broad-learning based models [36] and cross-domain recommender systems [14, 24] have been proposed, in which information from other source domains is transferred to the target domain. To transfer knowledge from one domain to another, one can use the overlapping users [14, 5, 24, 12] in two ways: (1) the neighborhood information of common users stores the structure information of different domains, with which we can do cross-domain recommendation [33, 5]; or (2) we can learn a mapping function [24, 14] to project latent vectors learned in one domain into another, and thus transfer the knowledge.

Fig. 1: A toy example of high-order connectivity information in cross-domain recommendation. The upper/green part is the target domain and the below/blue part is the source domain.

However, all existing methods ignore the high-order connectivity information [30]. High-order connectivity information consists of all the neighborhood information, the neighbors of all the neighbors, and so on, obtained by following the linkage information in the graph. The high-order connectivity information is illustrated in Figure 1, where in the middle part user A and user C are the overlapping users, the upper/green part is the target domain, and the lower/blue part is the source domain. For example, in the target domain (only the upper part), user D has a connection with item 4. With merely the neighbor-based information [9, 14, 35], item 1 and item 2 should be ranked similarly, since the neighbor (i.e., user C) of user D has no direct connections with them. However, with the high-order connectivity information, we argue that user D should prefer item 2 over item 1, as there is a path from item 2 to user D (item 2–user B–item 3–user C–item 4–user D), while item 1 is only connected with user A and apart from the others. Moreover, the preference ranking may be different if we take the source domain into account (considering both the upper and lower graphs). We can find two paths (item 1–user A–item 5–user C–item 4–user D and item 1–user A–item 6–user C–item 4–user D) from item 1 to user D, compared with the single path from item 2 to user D. Hence user D may prefer item 1 over item 2 if the high-order connectivity information across domains is included. However, the high-order connectivity problem is not well studied yet in cross-domain recommendation.
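To make the notion of high-order connectivity concrete, here is a minimal numerical sketch (using a tiny hypothetical bipartite graph, not the exact graph of Figure 1) showing that powers of the adjacency matrix count multi-hop walks between users and items:

```python
import numpy as np

# Hypothetical bipartite graph: 2 users (rows) x 3 items (columns).
R = np.array([[1, 1, 0],
              [0, 1, 1]])

# Full adjacency matrix over the user+item node set.
n_u, n_i = R.shape
A = np.zeros((n_u + n_i, n_u + n_i), dtype=int)
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T

# (A^k)[i, j] counts walks of length k between nodes i and j;
# higher powers expose exactly the high-order connectivity discussed above.
walks3 = np.linalg.matrix_power(A, 3)
print(walks3[0, n_u + 2])  # → 1: one 3-hop walk from user 0 to item 2
```

More walks between a user and an item (larger entries in higher powers of A) correspond to the stronger high-order preference signal the paper argues for.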

To capture the connectivity information in a graph, one can transform the graph into the frequency domain by applying spectral graph theory [17, 3, 30]. In spectral theory [30, 2], the spectrum of a graph extracts the comprehensive connectivity information of the graph with the graph Fourier transform, in terms of the eigenvectors of the graph Laplacian [30]. Based on this, we can design a spectral convolutional network [39, 1] whose convolutions are linear operators diagonalized by the Fourier basis. With the spectral convolutional network, nodes in a graph are represented as spectral vectors [17, 39]. When it comes to bipartite graphs, we can learn the spectral representations of users and items to capture the connectivity information. The spectral representation models the high-order non-linear interactions among users and items with multi-layer spectral convolutions. Hence, recalling the problem discussed in Figure 1, in the spectral domain item 1 will be closer to user D than item 2, as there exist more connections from item 1 to user D than from item 2.

Fig. 2: Mapping users to different domains. Each user has domain invariant user representation on the left, which is projected as different domain-specific latent vectors over a specific domain on the right. Different colors on the right represent different domains.

However, different domains may be incompatible with each other, which is also called the domain-incompatibility problem [29] in cross-domain recommendation. For instance, if the target domain is a Movie domain where users are connected with movie items, and the source domain is a Clothing domain where users are connected with clothing items, they will be incompatible with each other, since the behavior of users varies a lot across the two domains. The information from the source domain cannot be directly utilized in the target domain. Thus we need mapping methods [24, 15, 20] as a bridge for the information transfer.

In this paper, unlike previous direct mapping methods [24, 15, 33], we view the latent vector of a user in a specific domain as an interest projection from a domain-invariant representation. We show an illustration of mapping the domain-invariant user representation to domain-specific user latent vectors in Fig. 2. To learn transferable representations, we jointly learn the domain-invariant representation of users across different domains. The joint convolution can capture the high-order connectivity information across different domains and learn domain-invariant representations by keeping the spectral similarity of the overlapping users. Based on this, we design a Joint Spectral Convolutional Network (JSCN) to fuse the information from multiple domains. JSCN simultaneously operates a multi-layer spectral convolution on the graph from each domain. The extracted spectral features can then be shared across different graphs through the domain-invariant representations. Since JSCN jointly learns the spectral representations on different graphs, the high-order comprehensive connectivity information can be shared across domains, and because of the domain-invariant representations of users, JSCN alleviates the domain-incompatibility problem. We summarize our main contributions as follows:


  • Transferable spectral representation: To the best of our knowledge, it is the first work to study how to transfer the spectral representation of bipartite graphs, which captures the high-order non-linear interactions of user-item both within domain and across domains.

  • Joint spectral convolution on graphs: In this paper, we design a joint spectral convolutional network for learning the representations of multiple graphs concurrently. The high-order comprehensive connectivity information can be shared across different graphs.

  • Domain adaptive module: To deal with the domain incompatibility problem, we apply a novel domain adaptive module to jointly learn the domain-invariant spectral representations of users, with which we can implement the joint convolution on graphs and share information across different domains.

The rest of the paper is organized as follows. In Sec. II, we review some previous works related to this paper. Then in Sec. III, we introduce the definitions of the notations and concepts, as well as the problem. In Sec. IV, we present the proposed model and the formulation of the model. Finally, in Sec. V we discuss the experiment before we draw a conclusion in Sec. VI.

II Related Work

In this section, we give a brief review of two closely related areas: (1) deep learning based recommender systems; and (2) cross-domain recommendation.

II-A Deep learning based recommender system

Since [28] introduced deep learning into recommender systems (RS), deep neural network based RS have been proposed [41, 9, 10] to learn from either explicit or implicit data. To counter the sparsity problem, some scholars propose to utilize deep learning techniques to build hybrid recommender systems. [32] and [34] introduce Convolutional Neural Networks (CNN) and Deep Belief Networks (DBN) to assist representation learning for music data. These approaches pre-train embeddings of users and items with matrix factorization and utilize deep models to fine-tune the learned item features based on item content. In [4], a multi-view deep model is built to utilize item information from more than one domain. [16] integrates a CNN with PMF to analyze documents associated with items to predict users' future explicit ratings. [40] leverages two parallel neural networks to jointly model latent factors of users and items. To incorporate visual signals into RS, [8, 22, 25, 7] propose CNN-based models that make use of visual features extracted from product images with deep networks to enhance the performance of RS. [35, 38] investigate how to leverage multi-view information to improve the quality of recommender systems. Due to limited space, readers can refer to [37] for more works on deep recommender systems.

II-B Cross-domain recommendation and broad learning

Broad learning [36] is a way to transfer information across different domains, focusing on fusing and mining multiple information sources of large volumes and diverse varieties. To solve the cold-start problem in item recommendation, cross-domain recommendation has been proposed, either learning shallow embeddings with factorization machines [14, 31, 23, 33] or learning deep embeddings with neural networks [24, 26, 13, 12, 21]. Among the shallow-embedding methods, CMF [31] jointly factorizes the user-item interaction matrices from different domains. To model the domain information explicitly, CDTF [14] and CDCF [23] are designed, where the former factorizes the user-item-domain triadic relation and the latter models the source domain information as context information of users. Among the deep-embedding methods, CSN [26] was first introduced in the multi-task learning scenario, where a convolutional network with cross-stitch units shares parameters across different domains. This idea was later extended by CoNet [12] with cross connections across different networks, where shared mapping matrices are introduced to transfer the knowledge. Additionally, EMCDR [24] transfers knowledge across source and target domains with a multi-layer perceptron. Our proposed JSCN model also jointly learns a deep embedding for both in-domain and cross-domain information.

III Preliminaries and Definitions

In this section, the preliminaries and definitions are presented. First, we formally define the user-item bipartite graph and the corresponding connectivity matrices. Then we define the bipartite graph domain, as well as the source and target domains, before we formulate our problem. The important notations used in this paper are summarized in Table I.

Definition 1

(Bipartite Graph). A bipartite user-item graph with vertices and edges for recommendation is defined as G = (U ∪ I, E), where U and I are two disjoint vertex sets, i.e., the user set and the item set, respectively. Every edge e ∈ E has the form (u, v), denoting the interaction of a user u ∈ U with an item v ∈ I, e.g., an item being viewed/purchased/liked by a user.

A bipartite graph describes the interactions among users and items; thus we can define an implicit feedback matrix R [27, 9] for a given bipartite graph as

R_{ij} = 1 if (u_i, v_j) ∈ E, and R_{ij} = 0 otherwise,

where u_i and v_j are the i-th user in the user set U and the j-th item in the item set I, respectively.

Given the implicit feedback matrix R of a bipartite graph, the corresponding adjacency matrix A can be defined as

A = [[0, R], [R^T, 0]],

where the adjacency matrix A is an N × N matrix and N is the number of nodes in the bipartite graph, i.e., N = |U| + |I|.

With the adjacency matrix A of a bipartite graph, a Laplacian matrix L of the bipartite graph can be calculated as

L = I - D^{-1} A,

where I is the identity matrix and D is a diagonal degree matrix in which each entry on the diagonal is the sum of all the elements in the corresponding row of the adjacency matrix, i.e., D_{ii} = Σ_j A_{ij}.
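The construction above can be sketched numerically. The toy feedback matrix below is hypothetical, and we assume the normalized Laplacian form L = I - D^{-1}A:

```python
import numpy as np

# Implicit feedback matrix R (users x items): R[i, j] = 1 iff user i
# interacted with item j.  A toy example with 2 users and 3 items.
R = np.array([[1., 1., 0.],
              [0., 1., 1.]])

n_u, n_i = R.shape
N = n_u + n_i

# Adjacency matrix of the bipartite graph: A = [[0, R], [R^T, 0]].
A = np.block([[np.zeros((n_u, n_u)), R],
              [R.T, np.zeros((n_i, n_i))]])

# Degree matrix and normalized Laplacian L = I - D^{-1} A.
D = np.diag(A.sum(axis=1))
L = np.eye(N) - np.linalg.inv(D) @ A

print(L.shape)  # (5, 5)
```

Note that each row of this normalized Laplacian sums to zero, a standard sanity check on the construction.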

In this paper, we focus on cross-domain recommendation. Thus we combine the information from a set of bipartite graphs and then recommend items to users. In each domain, we have a categorical mapping function which projects the items into a specific category, e.g., Movies, describing the type of the items in the domain. We assume all the items in a graph belong to one domain, which leads to the following definition of a graph domain.

Definition 2

(Bipartite graph domain) A bipartite graph domain is defined on a categorical mapping function of items. Two bipartite graphs are in different domains if and only if their items are mapped to different categories.

The source domain bipartite graph is the interaction bipartite graph of users and items that provides auxiliary information for the target domain bipartite graph, in which we recommend items to users. We integrate the information across the source and target domains and make recommendations in the target domain.

Definition 3

(Problem Definition). Given a set of source domain bipartite graphs and a target domain graph, we aim at recommending to each user in the target domain a ranked list of items that have no existing interaction with that user in the target graph. The source domains share a set of common users with each other, so the shared users between pairwise source domains form a common-user set. Meanwhile, the target domain also shares a set of common users with each of the source domains.

Notation Description
bipartite graph, source graph, target graph
set of users
set of items
user, item
set of common users
common user

eigenvectors, diag-matrix of eigenvalues

input dimension of feature vector
spectral convolution parameter in each layer
, user, item latent vectors
, source, target invariant user representation
dimension of spectral latent vectors
dimension of domain-invariant representation
domain related user mapping function
categorical mapping function of items
TABLE I: Important notations

IV Proposed Model

In this section, we first explain the spectral convolution network for collaborative filtering [39] before we introduce the domain-invariant user representation. After that, we present our proposed Joint Spectral Convolutional Network (JSCN) for cross-domain recommendation. Finally, we formulate the adaptive user mapping mechanism. The overall framework of our proposed model is given in Fig. 3. We use triangles and squares to denote users and items, respectively. Different colors for users and items denote different domains, and the same numbers represent common users in different domains.

Fig. 3: The framework of training the joint spectral convolutional network (JSCN) model. First, we randomly initialize the users (lower part) and items (upper part) in the input graphs. Second, we learn spectral latent vectors of users and items with a multi-layer spectral convolution network (SP). Then we map the spectral latent vectors of users to domain-invariant user representations with the user mapping function. Finally, we minimize the distance between common users in the domain-invariant user representation space.

IV-A Spectral Convolution on Graph

Given a bipartite graph, we would like to learn an embedding for each of the nodes, i.e., users and items, as illustrated in the first step in Fig. 3. First, users and items are represented as feature vectors of a fixed input dimension, and all the user and item latent vectors can be grouped together into a user matrix and an item matrix, respectively. With the graph structure information, the spectral convolutional operator is defined [30, 39, 3] based on the eigenvectors and the corresponding eigenvalues of the graph Laplacian as


In Eq. (4), the term with the eigenvectors preserves the structure information of the bipartite graph, the convolutional filter extracts the spectral features, and the non-linearity is the logistic sigmoid function. This is the SP layer in the second step in Fig. 3.

With multiple spectral convolutional operators on the original feature vectors, we construct a multi-layer spectral convolutional network on the bipartite graph as shown in Eq. (5), with which we can learn the spectral representations of the nodes in the graph,

After the multi-layer spectral convolutional operations, we represent the users and items as latent vectors, by either concatenating the extracted spectral feature vectors at each layer or using the spectral feature vectors at the last layer. This corresponds to the third step in Fig. 3.
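A minimal sketch of one such spectral convolution layer, under two illustrative assumptions: the Laplacian is the symmetric normalized variant (so its eigendecomposition is real), and the spectral filter is g(λ) = 1 + λ; the paper's exact filter may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spectral_conv_layer(L_sym, X, Theta):
    """One spectral convolution layer, sketched as
    X' = sigmoid(U g(Lambda) U^T X Theta),
    where (Lambda, U) is the eigendecomposition of the graph Laplacian.
    The filter g(lambda) = 1 + lambda is an illustrative assumption."""
    lam, U = np.linalg.eigh(L_sym)            # Fourier basis of the graph
    filtered = U @ np.diag(1.0 + lam) @ U.T @ X
    return sigmoid(filtered @ Theta)

# Toy symmetric normalized Laplacian for a 2-user / 3-item bipartite graph.
R = np.array([[1., 1., 0.],
              [0., 1., 1.]])
A = np.block([[np.zeros((2, 2)), R],
              [R.T, np.zeros((3, 3))]])
d = A.sum(axis=1)
L_sym = np.eye(5) - np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((5, 4))              # random initial node features
Theta = rng.standard_normal((4, 4))           # trainable filter parameters
X1 = spectral_conv_layer(L_sym, X0, Theta)
print(X1.shape)  # (5, 4)
```

Stacking several such layers (feeding X1 into the next layer with its own Theta) yields the multi-layer network of Eq. (5).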

In terms of the loss function, we apply the BPR loss as suggested in [27, 39] to compute the in-domain loss, which models the in-domain user-item interactions,

where the triples (u, i, j) are sampled from the user-item interaction records: u denotes the index of a user, i denotes the index of an item with which the user has an interaction, and j denotes the index of an item with which the user has no interaction. The preference score is the dot product of the user vector and the item vector. Unlike a point-wise learning process [19], the BPR loss maximizes the difference between the scores of the observed item i and the unobserved item j, under the assumption that users prefer observed items over unobserved ones.
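A sketch of the BPR loss described above; the embeddings and triples here are illustrative toy values:

```python
import numpy as np

def bpr_loss(user_emb, item_emb, triples):
    """BPR loss over (u, i, j) triples: minimize
    -log sigmoid(<p_u, q_i> - <p_u, q_j>), pushing each observed item i
    above the unobserved item j for user u."""
    loss = 0.0
    for u, i, j in triples:
        x_ui = user_emb[u] @ item_emb[i]   # score of observed item
        x_uj = user_emb[u] @ item_emb[j]   # score of unobserved item
        loss += -np.log(1.0 / (1.0 + np.exp(-(x_ui - x_uj))))
    return loss / len(triples)

# One user embedded at [1, 0], one observed and one unobserved item.
P = np.array([[1.0, 0.0]])
Q = np.array([[1.0, 0.0],    # observed item, aligned with the user
              [0.0, 1.0]])   # unobserved item, orthogonal
print(round(bpr_loss(P, Q, [(0, 0, 1)]), 4))  # -log sigmoid(1) ≈ 0.3133
```

The larger the margin between the observed and unobserved scores, the smaller the loss, which is exactly the pairwise ranking objective.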

IV-B Domain Invariant User Representation

With the in-domain loss, we can learn both the user and item latent vectors from the multi-layer spectral convolutional network. Recalling the problem definition in Def. 3, we have a set of source domain bipartite graphs and one target domain bipartite graph, and every domain has a set of overlapping users with each other domain.

A user expresses different interests in different domains, which leads to different user latent vectors; however, we prefer an invariant user representation across domains. Hence we define the domain-invariant user representation, from which the domain-specific latent vector is generated by a corresponding domain-related user mapping function as

For example, a source domain has a set of common users with the target domain. With the in-domain loss, we learn the domain-specific user latent vectors individually for the source and target domains. Each is generated from the domain-invariant user representation by the corresponding domain-related user mapping function. With the inverse of the user mapping function, we can recover the domain-invariant user representation from the domain-specific user latent vector, which is the fourth step in Fig. 3.

Since we have the domain-invariant user representations, each common user should have the same representation in both the source and target domains. To make this constraint trainable, we construct the cross-domain loss as the distance between the domain-invariant user representations:


where the sum runs over the common users between pairs of domains as defined in Def. 3, and each term is the distance between the domain-invariant representations of the same anchor user recovered from the corresponding domains.

IV-C Joint Spectral Convolutional Network

The cross-domain loss combines the information across different domains through the domain-invariant user representations of the common users. Even if a common user only exists in a subset of the domains, the information can still be shared across all domains, as an effect of collaborative filtering. However, we cannot directly learn the domain-invariant representation; instead, we learn the user and item latent vectors with the in-domain loss and then apply the inverse of the user mapping function to obtain the domain-invariant user representations. The cross-domain loss can thus be written as:


where the terms denote the latent vectors of the common user w.r.t. the corresponding domain-specific user latent vectors. We present this in the fifth step in Fig. 3. Hence the joint spectral convolution model has the loss function:


where is the in-domain loss of the source domain , is the in-domain loss of the target domain , and is the regularization term defined as:


where is the regularization hyper-parameter.

IV-D Adaptive User Mapping Module

As described in Sec. IV-B, we can use the inverse function of the domain-related user mapping function to generate the domain-invariant user representation from the spectral user latent vector. We define this inverse function as the adaptive user mapping function, which can either be a linear mapping function or a neural network based non-linear function [9]. For simplicity, here we only present the linear mapping function, which leads to


where the domain adaptive matrix is specific to each graph domain. This mapping function is a kind of structural regularization [42] across different domains. It turns out that the mapping can transfer the spectral information during the joint learning process.

With this adaptive user mapping matrix, we can rewrite the cross-domain loss as:


where the two adaptive user mapping matrices correspond to the two domains of the common user, respectively.
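A sketch of this cross-domain loss with linear adaptive mapping matrices; the matrix names and toy values are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def cross_domain_loss(U_src, U_tgt, W_src, W_tgt, common_pairs):
    """Cross-domain loss with linear adaptive user mapping matrices.

    Each common user's domain-specific latent vector is mapped into the
    domain-invariant space (modeled here as a plain linear map) and the
    squared distance between the two invariant representations is
    penalized."""
    loss = 0.0
    for u_s, u_t in common_pairs:          # same user's index in each domain
        p_s = W_src @ U_src[u_s]           # invariant rep. from source domain
        p_t = W_tgt @ U_tgt[u_t]           # invariant rep. from target domain
        loss += np.sum((p_s - p_t) ** 2)
    return loss / len(common_pairs)

# With identity mappings and identical latent vectors the loss vanishes.
U_s = np.array([[1.0, 2.0]])
U_t = np.array([[1.0, 2.0]])
I2 = np.eye(2)
print(cross_domain_loss(U_s, U_t, I2, I2, [(0, 0)]))  # 0.0
```

During training the mapping matrices are learned jointly with the spectral embeddings, so incompatible domains can still be aligned in the invariant space.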

IV-E Optimization and Prediction

We follow the optimization approach in [11, 39] to learn the spectral latent vectors and the domain-invariant user mapping with RMSprop. RMSprop is an adaptive variant of gradient descent which controls the step size with respect to the magnitude of the gradient: the update of each weight is scaled by a running average of its squared gradient norm.
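A sketch of a single RMSprop update as described; the hyper-parameter values are standard defaults, not necessarily the paper's:

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop update: divide the step by a running root-mean-square
    of past gradients, so large-gradient weights take smaller steps."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

w = np.array([1.0])
cache = np.zeros(1)
w, cache = rmsprop_step(w, np.array([10.0]), cache)
print(w, cache)
```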

For the prediction, we focus on improving the performance on the target domain. We use the spectral representations of users and items in the target domain to make recommendations. For a specific user, we predict the user's preference over an item as the dot product of their latent vectors, and then sort the preferences to produce the ranking list for recommendation.
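The prediction step can be sketched as follows; the embeddings are illustrative toy values:

```python
import numpy as np

def recommend_top_k(user_emb, item_emb, user, interacted, k):
    """Rank items by the dot-product preference <p_u, q_v> and return
    the top-k items the user has not interacted with yet."""
    scores = item_emb @ user_emb[user]
    ranked = np.argsort(-scores)                     # descending by score
    return [int(v) for v in ranked if v not in interacted][:k]

P = np.array([[1.0, 0.0]])                           # one user
Q = np.array([[0.9, 0.0],
              [0.5, 0.0],
              [0.99, 0.0]])                          # three items
print(recommend_top_k(P, Q, user=0, interacted={2}, k=2))  # [0, 1]
```

Items the user has already interacted with are filtered out, matching the problem definition of recommending only unseen items.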

V Experiment

In this section, we first introduce the dataset. After that, we discuss the baselines compared in this paper. Then we give the experimental settings, such as the evaluation metrics. Finally, we present the experiments in detail. Through the experiments, we answer the following research questions:


  • RQ1: Does the source domain information help to improve the recommendation performance in target domain?

  • RQ2: Are spectral features better at improving the cross-domain recommendation performance?

  • RQ3: Can the adaptive user mapping help to transfer the information across different domains?

V-A Dataset

In this paper, we use the Amazon rating dataset [7], where we find the interactions of users and items. The rating data, in which a user rates an item on a scale from 1 to 5, spans May 1996 to July 2014. The dataset consists of 24 different domains; we present part of the statistics in Table II. Since the original dataset contains explicit ratings, we follow the convention in [39, 9] and transform the data into implicit interactions.
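This conversion can be sketched in a couple of lines; the toy rating matrix is hypothetical:

```python
import numpy as np

# Hypothetical explicit rating matrix (0 means "no rating"); following
# the convention of treating every observed rating as a positive
# implicit interaction, regardless of its score.
ratings = np.array([[5, 0, 3],
                    [0, 1, 0]])
implicit = (ratings > 0).astype(int)
print(implicit.tolist())  # [[1, 0, 1], [0, 1, 0]]
```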

Domain Name # User # Item # Rating
Movies and TV  k  k  k
Clothing, Shoes and Jewelry  k  k  k
Apps for Android  k  k  k
Amazon Instant Video *  k  k  k
TABLE II: Dataset statistics

Each domain shares a set of common users with other domains. In the experiment, we use the Amazon Instant Video dataset as the target domain and the other domains as the source domains.


  • Target Domain: Amazon Instant Video originally consists of ratings among users and videos. Following the convention [39, 9], we ignore users with too few interactions; the statistics of the final domain (users, items, ratings, and sparsity) are given in Table II.

  • Source Domain: We use the other datasets as source domains; part of their statistics are shown in Table II and Table III. In the experiments, we compare different source domains and illustrate their contributions to the target domain.

V-B Baseline

To answer the research questions above, we compare our proposed model with several state-of-the-art methods. The major task is defined in Def. 3, which focuses on improving the recommendation performance in the target domain. We categorize the baseline methods into two groups: (1) single-domain based methods. To answer RQ1, we compare our model with non-cross-domain models, e.g., BPR [27], NCF [9], and SpectralCF [39]. (2) Cross-domain based methods. For RQ2, we investigate the capability of spectral features in transferring information across different domains, e.g., CMF [31], CDCF [23], CoNet [12], and our proposed model JSCN. For RQ3, we compare different versions of our proposed model to study the function of the adaptive user mapping. We introduce these methods as follows:


  • BPR [27]: BPR is a Bayesian Personalized Ranking based Matrix Factorization method, which introduces a pair-wise loss into the Matrix Factorization to be optimized for ranking [6].

  • NCF [9]: Neural Collaborative Filtering applies neural architecture replacing the inner product of latent factors. Thus it can model the non-linear interaction of items and users.

  • SpectralCF [39]: Spectral Collaborative Filtering is the SOTA work to learn the spectral feature of users and items, which is based on the BPR pair-wise loss.

  • CMF [31]: Collective Matrix Factorization is a matrix factorization based cross-domain rating prediction model. In this paper, we change the ratings to 0/1 according to the implicit interactions of users and items.

  • CDCF [23]: The Cross-Domain Collaborative Filtering method models the user-item interactions as context features for a factorization machine. With arbitrary source domains, CDCF treats them as input features of users and learns latent vectors for both users and items.

  • CoNet [12]: It is the SOTA deep learning method that learns a shared cross-domain mapping matrix such that information can be transferred. CoNet enables dual knowledge transfer across domains by introducing cross connections from one base network to another and vice versa. We implement the model with the code published by the authors (http://home.cse.ust.hk/~ghuac/conet-code_only-hu-cikm18-20181115.zip).

  • JSCN-: Joint Spectral Convolutional Network is our proposed cross-domain recommender model. It is based on a graph convolutional network that transfers the spectral features of users across different domains. This variant is a simplified version without the adaptive user mapping, which only enforces the spectral vectors in different domains to be similar.

  • JSCN-: It is the complete version of our proposed model, which includes adaptive user mapping.

V-C Experimental Setting

Different from the rating score prediction task, the interaction prediction models in this paper should place the items a user interacts with at the top of the ranking list. Thus, in the experiments, we utilize Recall@K and MAP@K to evaluate the performance of the models. Since a given domain usually contains thousands of valid items, we report the performance at several values of K.
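For reference, per-user Recall@K and one common formulation of average precision at K (MAP@K averages the latter over users; the paper's exact normalization may differ) can be sketched as:

```python
def recall_at_k(ranked, relevant, k):
    """Fraction of the user's relevant items found in the top-k."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ap_at_k(ranked, relevant, k):
    """Average precision at k for one user: precision accumulated at
    each rank where a relevant item appears."""
    score, hits = 0.0, 0
    for rank, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k)

print(recall_at_k([1, 2, 3], [1, 3], 3))        # 1.0
print(round(ap_at_k([1, 2, 3], [1, 3], 3), 4))  # (1/1 + 2/3) / 2 ≈ 0.8333
```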

For the baseline methods, we select the dimension of latent vectors for BPR and SpectralCF by validation, and we follow the suggestions in the original paper for NCF to train a 3-layer MLP. We implement the CMF model using the 0/1 interaction matrix. For CDCF, the dimension is set to the same value as for all the cross-domain based models. For our proposed model, there are some hyper-parameters requiring tuning. To reduce the complexity of the proposed model, we let the dimension of the invariant user representation equal the dimension of the spectral latent vector, and we fix the convolutional dimension parameter. The number of filters is important to the performance of the model; with validation on different source domain datasets, we find the setting for which JSCN performs best on most of the source domains. We present the validation of JSCN- with the source domain Apps for Android in Figure 4. We use the linear mapping for the domain adaptive part as suggested in Sec. V-G. For the training process, we fix the learning rate and the regularization weight.

Fig. 4: Validation performance of JSCN- for the hyper-parameter the number of convolutional layers w.r.t. MAP@20 and Recall@20 on target domain.

V-D Cross-domain Comparison

To answer RQ1, in this experiment we compare the single-domain based methods with the cross-domain based models on the same domain. The target domain is the Amazon Instant Video dataset. To answer RQ2, we use the same source domain to compare different cross-domain based methods. To answer RQ3, we compare the performance of the different versions of JSCN, i.e., JSCN- and JSCN-. In this section, we use three different source domain datasets to improve the recommendation performance: Movies and TV; Clothing, Shoes and Jewelry; and Apps for Android. We analyze the results in detail.

(a) Movies and TV
(b) Clothing, Shoes and Jewelry
(c) Apps for Android
Fig. 5: Performance comparison w.r.t. Recall@K on the target domain Amazon Instant Video, with source domains Movies and TV; Clothing, Shoes and Jewelry; and Apps for Android, respectively.
(a) Movies and TV
(b) Clothing, Shoes and Jewelry
(c) Apps for Android
Fig. 6: Performance comparison w.r.t. MAP@K on the target domain Amazon Instant Video, with source domains Movies and TV; Clothing, Shoes and Jewelry; and Apps for Android, respectively.

In Fig. 5, we present the performance of different models on the target domain w.r.t. Recall@K, and in Fig. 6, we show the performance w.r.t. MAP@K. Among the cross-domain based models, JSCN- performs the best compared to all the other methods. JSCN- improves the performance of SpectralCF on recall and on MAP on average, which confirms that cross-domain information can improve the performance. CMF cannot achieve a good performance compared to the other cross-domain based models. Among all the single-domain based models, according to the results in [39] and our results, SpectralCF is the best model compared to NCF and BPR, as it can not only model the positive and negative interactions of user-item pairs but also, with the graph convolution, model the interactions in a high-order non-linear way. The results also show that cross-domain based models cannot always surpass the single-domain based models.

CDCF, CoNet, JSCN-, and JSCN- can all transfer information across different domains. But since CDCF and CoNet have no spectral convolutional architecture, they cannot capture the high-order interactions of user-item pairs. In our results, SpectralCF achieves comparable performance with CDCF and CoNet even without source domain information. This suggests that we should apply spectral convolution to transfer the information across different domains. CoNet can transfer the information learned from the neural networks and shared across different networks, but it cannot capture the high-order information across domains. JSCN- beats the performance of CoNet on recall and on MAP on average, which confirms that the spectral representation generated by JSCN can improve the performance in cross-domain recommendation.

The users in the source domain Movies and TV should prefer similar aspects of items as the users in the target domain Amazon Instant Video, since the items are similar. Thus it is straightforward to transfer information across these two compatible domains. The result is illustrated in Fig. 5(a) and Fig. 6(a): the performance of JSCN- and JSCN- is relatively close. However, the source domain Clothing, Shoes and Jewelry is incompatible with the target domain. From the results in Fig. 5(b) and Fig. 6(b), we find that neither JSCN- nor CDCF can improve the performance compared to SpectralCF. But JSCN- learns the domain-invariant user representation, which can transfer information even when the domains are incompatible. As a result, the adaptive user mapping in JSCN- is important for transferring information across different domains even if the domains are incompatible. JSCN- beats JSCN- by 9.2% on recall on average and 36.4% on MAP on average, which confirms that the adaptive user mapping can solve the domain-incompatibility problem and thus improve the performance in cross-domain recommendation.

V-E Comparison with Different Source Domains

Fig. 7: Performance of JSCN- w.r.t. MAP@20 of the cross-domain recommendation on the target domain.

In this section, we report the cross-domain recommendation results of JSCN- on the target domain with different source domains w.r.t. MAP@20 in Fig. 7. Since the recall performance varies little across different source domains, and due to the space limitation of the paper, we do not show the recall results.

The best result comes from the source domain Apps for Android. We can find that even if some of the source domains are incompatible with the target domain Amazon Instant Video, e.g., Clothing, Shoes and Jewelry, the cross-domain recommendation still performs well. Even the source domains that perform relatively poorly compared with the others, e.g., Home and Kitchen, Health and Personal Care, and Office Products, still improve the performance of SpectralCF, which suggests the benefits of source-domain information and the effectiveness of our proposed model.

V-F Multi-Source JSCN

label Domain Name # Users # Items # Ratings
1 Home and Kitchen  k  k  k
2 Health and Personal Care  k  k  k
3 Office Products  k  k  k
TABLE III: Dataset Statistics 2

From the results in Sec. V-E, we notice that our model performs differently given different source domains. Some source domains cannot provide enough information, and hence their cross-domain recommendation results are not as good as those of the other source domains. The JSCN model can combine the information from multiple source domains and share it to improve the performance on the target domain. In this section, we conduct experiments on training JSCN models on multiple source domains.

We select three source domains: Home and Kitchen, Health and Personal Care, and Office Products, which perform worst compared with the other source domains. The domain statistics are summarized in Table III. We conduct the experiment by choosing two out of the three source domains to jointly learn the JSCN model, which gives three two-domain combinations. The comparison result is presented in Fig. 8.

Fig. 8: Performance of JSCN- w.r.t. MAP@K with different source domains and their combinations. The domain labels are given in Table III.

From the result, we can find that multiple source domains can improve the performance compared with a single source domain. In particular, the combination of the Home and Kitchen and Health and Personal Care source domains improves the performance by 37.2% on average compared with using either of the two domains alone. This experiment shows that JSCN can jointly learn information from multiple source domains. We also find that one combination performs a little worse than the combination of source domain 1 and source domain 2 (but still better than the other two combinations), which suggests that the choice of source domains also requires tuning. One possible reason is that one of its source domains has a smaller density value than the other two domains, which can introduce more disturbance to the model.
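The density referred to above is the standard fill ratio of a user-item rating matrix; as a quick reference (the function name is ours):

```python
def density(n_users, n_items, n_ratings):
    """Density of a rating matrix: the fraction of user-item pairs
    that actually have an observed rating."""
    return n_ratings / (n_users * n_items)

# e.g. a hypothetical domain with 10,000 users, 5,000 items, 100,000 ratings:
# density(10_000, 5_000, 100_000) == 0.002, i.e. 99.8% of entries are missing.
```

A sparser (lower-density) source domain provides fewer observed interactions per user, which is consistent with it contributing noisier signals to the joint model.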

V-G Domain Adaptive Module

In this part, we compare the performance of JSCN- and JSCN- with different mapping functions. Recall that JSCN- is the simple version of the joint spectral convolutional network, which enforces the common-user latent vectors to be similar without the domain adaptive module. For the domain adaptive module of JSCN-, we use either a linear mapping or a non-linear multi-layer perceptron (MLP) mapping. We use four source domains, i.e., Books, Movies and TV (MT), Clothing, Shoes and Jewelry (CSJ), and Apps for Android (AfA).

Source Domain  Books    MT       CSJ      AfA
JSCN-          0.02374  0.02291  0.02076  0.02103
JSCN--MLP      0.02678  0.02375  0.02654  0.02537
JSCN-          0.02769  0.02364  0.02877  0.03043
TABLE IV: The MAP@100 result of variants of JSCN

Source Domain  Books   MT      CSJ     AfA
JSCN-          0.2011  0.2021  0.2050  0.2032
JSCN--MLP      0.2107  0.2165  0.2112  0.2097
JSCN-          0.2187  0.2179  0.2155  0.2217
TABLE V: The Recall@100 result of variants of JSCN

From the results in Table IV and Table V, we can find that JSCN- performs much better than JSCN-, which shows the effectiveness of the domain adaptive user mapping module. One interesting observation is that the linear mapping beats the non-linear mapping. Since the non-linear mapping requires tuning many hyper-parameters, such as the activation function and the dimension of the hidden layer, we suggest using the linear mapping function for learning the invariant user vectors. One possible explanation for this observation is that, since the spectral vectors are already low-dimensional, the MLP can easily find a mapping that forces the invariant user vectors in different domains to be the same, hence over-fitting the user vectors. As over-fitting harms the structural regularization [42] of the domain adaptive user mapping, the information cannot be transferred as well as with the linear mapping.
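As a sketch of the linear variant, the mapping aligns each overlapping user's source-domain vector with the same user's target-domain vector (function names are hypothetical; in JSCN the mapping is learned jointly with the spectral layers by gradient descent, not fitted in closed form as below):

```python
import numpy as np

def mapping_loss(u_src, u_tgt, W):
    """Alignment loss on overlapping users: after the linear map W, a user's
    source-domain vector should land near that user's target-domain vector."""
    diff = u_src @ W - u_tgt
    return float(np.mean(np.sum(diff ** 2, axis=1)))

def fit_linear_mapping(u_src, u_tgt):
    """Least-squares fit of W, shown only to illustrate what the
    mapping module optimizes."""
    W, *_ = np.linalg.lstsq(u_src, u_tgt, rcond=None)
    return W
```

A single matrix W has far fewer degrees of freedom than an MLP, which matches the observation above: the restricted linear map acts as a structural regularizer and is harder to over-fit.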

VI Conclusion

In this paper, we design a Joint Spectral Convolutional Network (JSCN) to solve the cross-domain recommendation problem. First, JSCN operates multi-layer spectral convolutions on different graphs simultaneously. Second, JSCN maps the learned spectral latent vectors to a domain-invariant user representation with an adaptive user mapping module. Finally, JSCN minimizes both the in-domain loss in the spectral latent vector space and the cross-domain loss in the domain-invariant user representation space to learn the parameters. From the experiments, we can draw three conclusions: 1) JSCN can use source-domain information to improve the recommendation performance; 2) the spectral convolutions in JSCN can capture comprehensive connectivity information to improve the performance in cross-domain recommendation; 3) the adaptive user mapping that learns the domain-invariant representation can help transfer knowledge across different domains.

VII Acknowledgement

This work is supported in part by NSF under grants III-1526499, III-1763325, III-1909323, CNS-1930941, and CNS-1626432. This work is also partially supported by NSF through grant IIS-1763365 and by FSU.


  • [1] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun (2013) Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Cited by: §I.
  • [2] F. R. Chung and F. C. Graham (1997) Spectral graph theory. American Mathematical Soc.. Cited by: §I.
  • [3] M. Defferrard, X. Bresson, and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pp. 3844–3852. Cited by: §I, §IV-A.
  • [4] A. M. Elkahky, Y. Song, and X. He (2015) A multi-view deep learning approach for cross domain user modeling in recommendation systems. In WWW, pp. 278–288. Cited by: §I, §II-A.
  • [5] A. Farseev, I. Samborskii, A. Filchenkov, and T. Chua (2017) Cross-domain recommendation via clustering on multi-layer graphs. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 195–204. Cited by: §I.
  • [6] Z. Gantner, S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme (2011) MyMediaLite: a free recommender system library. In Proceedings of the fifth ACM conference on Recommender systems, pp. 305–308. Cited by: 1st item.
  • [7] R. He and J. McAuley (2016) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pp. 507–517. Cited by: §II-A, §V-A.
  • [8] R. He and J. McAuley (2016) VBPR: visual bayesian personalized ranking from implicit feedback.. In AAAI, pp. 144–150. Cited by: §II-A.
  • [9] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. Chua (2017) Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp. 173–182. Cited by: §I, §I, §II-A, §III, §IV-D, 1st item, 2nd item, §V-A, §V-B.
  • [10] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk (2015)

    Session-based recommendations with recurrent neural networks

    arXiv preprint arXiv:1511.06939. Cited by: §II-A.
  • [11] G. Hinton, N. Srivastava, and K. Swersky. Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent. Cited by: §IV-E.
  • [12] G. Hu, Y. Zhang, and Q. Yang (2018) Conet: collaborative cross networks for cross-domain recommendation. In CIKM, pp. 667–676. Cited by: §I, §II-B, 6th item, §V-B.
  • [13] G. Hu, Y. Zhang, and Q. Yang (2018) MTNet: a neural approach for cross-domain recommendation with unstructured text. Cited by: §II-B.
  • [14] L. Hu, J. Cao, G. Xu, L. Cao, Z. Gu, and C. Zhu (2013) Personalized recommendation via cross-domain triadic factorization. In Proceedings of the 22nd international conference on World Wide Web, pp. 595–606. Cited by: §I, §I, §II-B.
  • [15] M. Kazama and I. Varga (2016) Cross domain recommendation using vector space transfer learning. In RecSys Posters. Cited by: §I, §I.
  • [16] D. Kim, C. Park, J. Oh, S. Lee, and H. Yu (2016) Convolutional matrix factorization for document context-aware recommendation. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 233–240. Cited by: §II-A.
  • [17] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §I.
  • [18] Y. Koren, R. Bell, and C. Volinsky (2009) Matrix factorization techniques for recommender systems. Computer (8), pp. 30–37. Cited by: §I.
  • [19] Y. Koren (2008) Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 426–434. Cited by: §IV-A.
  • [20] C. Li and S. Lin (2014) Matching users and items across domains to improve the recommendation quality. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 801–810. Cited by: §I.
  • [21] J. Liu, P. Zhao, Y. Liu, V. S. Sheng, F. Zhuang, J. Xu, X. Zhou, and H. Xiong (2019) Deep cross networks with aesthetic preference for cross-domain recommendation. arXiv preprint arXiv:1905.13030. Cited by: §II-B.
  • [22] Q. Liu, S. Wu, and L. Wang (2017) DeepStyle: learning user preferences for visual recommendation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 841–844. Cited by: §II-A.
  • [23] B. Loni, Y. Shi, M. Larson, and A. Hanjalic (2014) Cross-domain collaborative filtering with factorization machines. In European conference on information retrieval, pp. 656–661. Cited by: §II-B, 5th item, §V-B.
  • [24] T. Man, H. Shen, X. Jin, and X. Cheng (2017) Cross-domain recommendation: an embedding and mapping approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 2464–2470. Cited by: §I, §I, §I, §II-B.
  • [25] J. McAuley, C. Targett, Q. Shi, and A. van den Hengel (2015) Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 43–52. Cited by: §II-A.
  • [26] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert (2016) Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3994–4003. Cited by: §II-B.
  • [27] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme (2009) BPR: bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence, pp. 452–461. Cited by: §III, §IV-A, 1st item, §V-B.
  • [28] R. Salakhutdinov, A. Mnih, and G. Hinton (2007) Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pp. 791–798. Cited by: §II-A.
  • [29] Y. Shi, Q. Zhu, F. Guo, C. Zhang, and J. Han (2018) Easing embedding learning by comprehensive transcription of heterogeneous information networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2190–2199. Cited by: §I.
  • [30] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst (2013-05) The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine 30 (3), pp. 83–98. External Links: Document, ISSN 1053-5888. Cited by: §I, §I, §IV-A.
  • [31] A. P. Singh and G. J. Gordon (2008) Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 650–658. Cited by: §II-B, 4th item, §V-B.
  • [32] A. Van den Oord, S. Dieleman, and B. Schrauwen (2013) Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651. Cited by: §II-A.
  • [33] X. Wang, Z. Peng, S. Wang, S. Y. Philip, W. Fu, and X. Hong (2018) Cross-domain recommendation for cold-start users via neighborhood based feature mapping. In International Conference on Database Systems for Advanced Applications, pp. 158–165. Cited by: §I, §I, §II-B.
  • [34] X. Wang and Y. Wang (2014) Improving content-based and hybrid music recommendation using deep learning. In Proceedings of the ACM International Conference on Multimedia, pp. 627–636. Cited by: §II-A.
  • [35] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W. Ma (2016) Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 353–362. Cited by: §I, §II-A.
  • [36] J. Zhang and P. S. Yu (2019) Broad learning through fusions - an application on social networks. Springer. External Links: Link, Document, ISBN 978-3-030-12527-1 Cited by: §I, §II-B.
  • [37] S. Zhang, L. Yao, and A. Sun (2017) Deep learning based recommender system: a survey and new perspectives. arXiv preprint arXiv:1707.07435. Cited by: §II-A.
  • [38] Y. Zhang, Q. Ai, X. Chen, and W. Croft (2017) Joint representation learning for top-n recommendation with heterogeneous information sources. CIKM. ACM. Cited by: §II-A.
  • [39] L. Zheng, C. Lu, F. Jiang, J. Zhang, and P. S. Yu (2018) Spectral collaborative filtering. In Proceedings of the 12th ACM Conference on Recommender Systems, pp. 311–319. Cited by: §I, §I, §IV-A, §IV-A, §IV-E, §IV, 1st item, 3rd item, §V-A, §V-B, §V-D.
  • [40] L. Zheng, V. Noroozi, and P. S. Yu (2017) Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 425–434. Cited by: §II-A.
  • [41] Y. Zheng, B. Tang, W. Ding, and H. Zhou (2016) A neural autoregressive approach to collaborative filtering. arXiv preprint arXiv:1605.09477. Cited by: §II-A.
  • [42] J. Zhou, J. Chen, and J. Ye (2011) Malsar: multi-task learning via structural regularization. Arizona State University 21. Cited by: §IV-D, §V-G.