1. Introduction
Users interact sequentially with items in many domains, such as e-commerce (e.g., a customer purchasing an item) (bobadilla2013recommender; zhang2017deep), education (a student enrolling in a MOOC course) (liyanagunawardena2013moocs), healthcare (a patient exhibiting a disease) (johnson2016mimic), social networking (a user posting in a group on Reddit) (buntain2014identifying), and collaborative platforms (an editor editing a Wikipedia article) (iba2010analyzing). The same user may interact with different items over a period of time, and these interactions change dynamically over time (DBLP:journals/debu/HamiltonYL17; DBLP:conf/recsys/PalovicsBKKF14; zhang2017deep; agrawal2014big; DBLP:conf/asunam/ArnouxTL17; raghavan2014modeling; DBLP:journals/corr/abs171110967). These interactions create a dynamic interaction network between users and items. Accurate real-time recommendation of items and prediction of changes in user state over time are fundamental problems in these domains (DBLP:conf/wsdm/QiuDMLWT18; DBLP:conf/asunam/ArnouxTL17; DBLP:conf/sdm/LiDLLGZ14; DBLP:journals/corr/abs180401465; DBLP:conf/cosn/SedhainSXKTC13; walker2015complex; DBLP:conf/icwsm/Junuthula0D18). For instance, predicting when a student is likely to drop out of a MOOC course is important for developing early intervention measures for their continued education (kloft2014predicting; yang2013turn; chaturvedi2014predicting), and predicting when a user is likely to turn malicious on platforms like Reddit and Wikipedia is useful for ensuring platform integrity (kumar2015vews; cheng2017anyone; ferraz2015rsc).
Learning embeddings from dynamic user-item interaction networks poses three fundamental challenges. We illustrate these using an example interaction network between three users and four items, shown in Figure 1 (left). First, as users interact with items, their properties evolve over time. For example, a user's interest may gradually shift from purchasing books to movies to clothes. Similarly, the properties of items change as different users interact with them; for instance, a book that is popular among older users may eventually become popular among a younger audience. Second, a user's properties are influenced by the properties of the item it interacts with and, conversely, an item's properties are influenced by the interacting user's properties. For instance, a user purchasing a book after it has won a Pulitzer Prize reflects a different behavior than purchasing it before the prize. Third, interactions with common items create complex user-to-user dependencies: two users who interact with the same item influence each other's properties. Traditional methods of batching data during training treat all users independently and therefore cannot be applied to learn such embeddings. As a result, existing embedding methods have to process the interactions one-at-a-time, and are therefore not scalable to a large number of interactions. Modeling the jointly evolving embeddings of users and items in a scalable way is thus crucial for making accurate predictions.
Representation learning, i.e., learning low-dimensional embeddings of entities, is a powerful approach to represent the dynamic evolution of user and item properties (DBLP:journals/kbs/GoyalF18; zhang2017deep; dai2016deep; DBLP:conf/nips/FarajtabarWGLZS15; beutel2018latent; zhou2018dynamic). However, existing representation learning algorithms, including random-walk-based methods (nguyen2018continuous; perozzi2014deepwalk; grover2016node2vec), dynamic network embedding methods (zhou2018dynamic; zhang2017learning), and recurrent neural network-based algorithms (wu2017recurrent; beutel2018latent; zhu2017next), either generate static embeddings from dynamic interactions, learn embeddings of users only, treat users and items independently, or are not scalable to a large number of interactions.
Present work. Here we address the following problem: given a sequence of temporal interactions between users and items, each with an associated timestamp and interaction feature vector, generate dynamic embeddings for users and items at any time, such that they allow us to solve two prediction tasks: future interaction prediction and user state change prediction.
Present work (JODIE model): Here we present an algorithm, JODIE (Joint Dynamic User-Item Embeddings), which learns dynamic embeddings of users and items from temporal user-item interactions. Each interaction has an associated timestamp and a feature vector representing the properties of the interaction (e.g., the purchase amount or the number of items purchased). The resulting user and item embeddings for the example network are illustrated in Figure 1 (right). JODIE updates the user and item embeddings after every interaction, thus producing a dynamic embedding trajectory for each user and item. JODIE overcomes the shortcomings of the existing algorithms, as shown in Table 1.
In JODIE, each user and item has two embeddings: a static embedding and a dynamic embedding. The static embedding represents the entity's long-term stationary properties, while the dynamic embedding represents its evolving properties and is learned by the JODIE algorithm. This enables JODIE to make predictions from both the stationary and the temporary properties of the user.
The JODIE model consists of three major components: an update function, a projection function, and a prediction function.
The update function of JODIE has two Recurrent Neural Networks (RNNs) to generate the dynamic user and item embeddings. Crucially, the two RNNs are coupled to explicitly incorporate the interdependency between the users and the items. After each interaction, the user RNN updates the user embedding by using the embedding of the interacting item. Similarly, the item RNN uses the user embedding to update the item embedding. It should be noted that JODIE is easily extendable to multiple types of entities, by training one RNN for each entity type. In this work, we apply JODIE to the case of bipartite interactions between users and items.
A major innovation of JODIE is that it learns a projection function to forecast the embedding of a user at any future time. Intuitively, the embedding of a user changes only slightly a short time after its previous interaction (with any item), but can change significantly after a long time elapses. Accurate real-time predictions therefore require estimating the user's embedding at the prediction time. To solve this challenge, JODIE learns a projection function that estimates the embedding of a user after some time has elapsed since its previous interaction. This function makes JODIE truly dynamic, as it can generate user embeddings at any time.

Finally, the third component of JODIE is the prediction function that predicts the future interaction of a user. An important design choice here is that the function directly outputs the embedding of the item that a user is most likely to interact with, instead of a probability score of interaction between the user and each item. As a result, our model needs to do the expensive neural network forward pass only once to generate the predicted item embedding, and then find the item whose embedding is closest to the predicted one. In contrast, existing models need to do the expensive forward pass once per candidate item and select the one with the highest score, which hampers their scalability.

Present work (t-Batch algorithm): Training models that learn from a sequence of interactions is challenging for two reasons: (i) interactions with common items result in complex user-to-user dependencies, and (ii) the interactions must be processed in increasing order of their time. The naive solution is to process each interaction sequentially, which does not scale to a large number of interactions; this approach is used in DeepCoevolve (dai2016deep) and Zhang et al. (zhang2017learning). Therefore, we propose a novel batching algorithm, called t-Batch, that creates batches such that the interactions in each batch can be processed in parallel while still maintaining all user-to-user dependencies. Each user and item appears at most once in every batch, and the temporally-sorted interactions of each user (and item) appear in monotonically increasing batches. Batching in this way enables massive parallelization. t-Batch is a general algorithm that is applicable to any model that learns from a sequence of interactions. We experimentally validate that t-Batch leads to an 8.5× and 7.4× speedup in the training time of JODIE and DeepCoevolve (dai2016deep), respectively.
Present work (experiments): We conduct six experiments to evaluate the performance of JODIE on two tasks: predicting the next interaction of a user and predicting changes in user state (when a user will be banned from a social platform, and when a student will drop out of a MOOC course). We use four datasets from Reddit, Wikipedia, LastFM, and a MOOC course activity log for our experiments. We compare JODIE with six state-of-the-art algorithms from three categories: recurrent recommender algorithms (zhu2017next; beutel2018latent; wu2017recurrent), a dynamic node embedding algorithm (nguyen2018continuous), and a coevolutionary algorithm (dai2016deep). JODIE outperforms the best baseline algorithms by up to 22.4% on the interaction prediction task and by up to 4.5% on user state change prediction. We further show that JODIE outperforms existing algorithms irrespective of the fraction of training data and the size of the embeddings. As an additional experiment, we show that JODIE can predict which students will drop out of a MOOC course as early as five interactions in advance.

Overall, in this paper, we make the following contributions:

Embedding algorithm: We propose a coupled recurrent neural network model called JODIE to learn dynamic embeddings of users and items from a sequence of temporal interactions. A major contribution of JODIE is that it learns a function to project user embeddings to any future time.

Batching algorithm: We propose a novel t-Batch algorithm to create batches that can be run in parallel without losing user-to-user dependencies. This batching technique leads to an 8.5× speedup in JODIE and a 7.4× speedup in DeepCoevolve.

Effectiveness: JODIE outperforms six state-of-the-art algorithms in predicting future interactions and user state changes, performing up to 22.4% better than the best baseline.
2. Related Work
Here we discuss the research closest to our problem setting, spanning three broad areas; Table 1 compares their differences. Any algorithm that learns from a sequence of interactions should have the following properties: it should learn dynamic embeddings, for both users and items, in such a way that they are interdependent, and it should be scalable. The proposed model, JODIE, satisfies all these desirable properties.
Deep recommender systems. Several recent models employ recurrent neural networks (RNNs) and their variants (LSTMs and GRUs) to build recommender systems. RRN (wu2017recurrent) uses RNNs to generate dynamic user and item embeddings from rating networks. More recent methods, such as Time-LSTM (zhu2017next) and LatentCross (beutel2018latent), incorporate features directly into their models. However, these methods suffer from two major shortcomings. First, they take a one-hot vector of the item as input to update the user embedding. This incorporates only the item id and ignores the item's current state. Second, models such as Time-LSTM and LatentCross generate embeddings only for users, not for items.
JODIE overcomes these shortcomings by learning dynamic embeddings for both users and items in a mutually-recursive manner. In doing so, JODIE outperforms the best baseline algorithm by up to 22.4%.
Dynamic co-evolution models. Methods that jointly learn representations of users and items have recently been developed using point-process modeling (wang2016coevolutionary; trivedi2017know) and RNN-based modeling (dai2016deep). The basic idea behind these models is similar to JODIE: user and item embeddings influence each other whenever they interact. However, the major difference is that these models update an entity's embedding only when it is involved in an interaction, while the projection function in JODIE enables us to generate an embedding of the user at any time. As a result, we observe that JODIE outperforms DeepCoevolve by up to 57.7% on both prediction tasks (next interaction prediction and state change prediction).
In addition, these models are not scalable, as traditional methods of data batching during training cannot be applied due to the complex user-to-user dependencies. JODIE overcomes this limitation with a novel batching algorithm, t-Batch, which makes JODIE 9.2× faster than DeepCoevolve.
Temporal network embedding models. Several models have recently been developed to generate embeddings for the nodes (users and items) in temporal networks. CTDNE (nguyen2018continuous) is a state-of-the-art algorithm that generates embeddings using temporally-increasing random walks, but it produces one final static embedding per node instead of dynamic embeddings. Similarly, IGE (zhang2017learning) generates one final embedding of users and items from interaction graphs. Therefore, both methods (CTDNE and IGE) need to be rerun for every new edge to create dynamic embeddings. Another recent algorithm, DynamicTriad (zhou2018dynamic), learns dynamic embeddings but does not work on interaction networks, as it requires the presence of triads. Other recent algorithms, such as DDNE (DBLP:journals/access/LiZYZY18), DANE (DBLP:conf/cikm/LiDHTCL17), DynGem (goyal2018dyngem), Zhu et al. (zhu2016scalable), and Rahman et al. (DBLP:journals/corr/abs180405755), learn embeddings from a sequence of graph snapshots, which is not applicable to our setting of continuous interaction data. Recent models such as NP-GLM (DBLP:journals/corr/abs171000818), DGNN (DBLP:journals/corr/abs181010627), and DyRep (trivedi2018representation) learn embeddings from persistent links between nodes, which do not exist in interaction networks, as the edges represent instantaneous interactions.
Our proposed model, JODIE, overcomes these shortcomings by generating dynamic embeddings for both users and items. In doing so, JODIE also learns a projection function to predict the user embedding at a future time point. Moreover, for scalability during training, we propose an efficient training data batching algorithm that enables learning from large-scale interaction data.
Table 1: Comparison of the proposed model (JODIE) with recurrent models (LSTM, Time-LSTM (zhu2017next), RRN (wu2017recurrent), LatentCross (beutel2018latent)), temporal network embedding models (CTDNE (nguyen2018continuous), IGE (zhang2017learning)), and the co-evolutionary model DeepCoevolve (dai2016deep).

Property | LSTM | Time-LSTM | RRN | LatentCross | CTDNE | IGE | DeepCoevolve | JODIE
Dynamic embeddings | ✔ | ✔ | ✔ | ✔ | | | ✔ | ✔
Embeddings for users and items | | | ✔ | | ✔ | ✔ | ✔ | ✔
Learns joint embeddings | | | | | | ✔ | ✔ | ✔
Parallelizable/Scalable | ✔ | ✔ | ✔ | ✔ | ✔ | | | ✔
3. JODIE: Joint Dynamic User-Item Embedding Model
In this section, we propose JODIE, a method to learn dynamic representations of users and items from a temporally ordered sequence of user-item interactions. Each interaction occurs between a user u and an item i at time t and has an associated feature vector f. The desired output is dynamic embeddings u(t) for user u and i(t) for item i at any time t. Table 2 lists the symbols used.
Our proposed model, JODIE, is a dynamic embedding method reminiscent of the popular Kalman filtering algorithm (julier1997new). (Kalman filtering measures the state of a system by combining system observations with state estimates given by the laws of the system.) Like the Kalman filter, JODIE uses the interactions (i.e., observations) to update the state of the interacting entities (users and items) via a trained update function. A major innovation in JODIE is that between two observations of a user, its state is estimated by a trained projection function that uses the previously observed state and the elapsed time to generate a projected embedding. When the entity's next interaction is observed, its state is updated again.

The JODIE model is trained to accurately predict future interactions between users and items. Instead of predicting a probability score of interaction between a user and an item, JODIE trains a prediction function to directly output the embedding of the item that a user will interact with. The advantage is that JODIE only needs to do one forward pass during inference to generate the predicted item embedding, as opposed to one pass per candidate item. We illustrate the three major operations of JODIE in Figure 2.
Static and Dynamic Embeddings. In JODIE, each user and item is assigned two types of embeddings: a static embedding and a dynamic embedding.
Static embeddings, denoted ū for user u and ī for item i, do not change over time. They express stationary properties, such as the long-term interests of users. We use one-hot vectors as the static embeddings of all users and items, as advised in Time-LSTM (zhu2017next) and Time-Aware LSTM (baytas2017patient).
On the other hand, each user u and item i is also assigned a dynamic embedding, denoted u(t) and i(t) at time t, respectively. These embeddings change over time to model evolving behavior, and are updated whenever the user or item is involved in an interaction.
JODIE uses both the static and the dynamic embeddings to predict user-item interactions, thereby leveraging both long-term and dynamic properties.
3.1. Learning dynamic embeddings with JODIE
Here we propose a mutually-recursive recurrent neural network-based model that learns dynamic embeddings of both users and items jointly. We explain the three major components of the algorithm: update, project, and predict. Algorithm 1 shows the procedure for each epoch.
3.1.1. Update operation using a coupled recurrent model.
In the update operation, an interaction between a user and an item is used to update both of their dynamic embeddings. Our model uses two separate recurrent neural networks for the updates: a user RNN, RNN_U, shared across all users, and an item RNN, RNN_I, shared across all items. The hidden states of RNN_U and RNN_I represent the user and item embeddings, respectively.
When user u interacts with item i, RNN_U updates the embedding u(t) by using the dynamic embedding of item i as an input. This is in stark contrast to the popular use of items' one-hot vectors to update user embeddings (beutel2018latent; wu2017recurrent; zhu2017next), which scales only to a small number of items due to its space complexity. Instead, we use the dynamic embedding of an item, as it contains more information than just the item's id, including its current state and its recent interactions with (any) users; it therefore yields more meaningful dynamic user embeddings. For the same reason, RNN_I uses the dynamic user embedding to update the dynamic embedding of item i. This results in a mutually recursive dependency between the embeddings. Figure 2 shows this in the "update function" block.
Formally, with σ an element-wise non-linearity and the W's trainable weight matrices:

u(t) = σ( W_1^u u(t⁻) + W_2^u i(t⁻) + W_3^u f + W_4^u Δ_u )
i(t) = σ( W_1^i i(t⁻) + W_2^i u(t⁻) + W_3^i f + W_4^i Δ_i )

where u(t⁻) and i(t⁻) represent the user and item embeddings before the interaction (i.e., those obtained after their previous interaction updates). Δ_u and Δ_i represent the time elapsed since u's previous interaction (with any item) and i's previous interaction (with any user), respectively, and are used as inputs to account for the frequency of interaction. Incorporating time has been shown to be useful in prior work (dai2016deep; wu2017recurrent; beutel2018latent; zhang2017deep). The interaction feature vector f is also used as an input. The input vectors are all concatenated and fed into the RNNs. Variants of RNNs, such as LSTMs and GRUs, gave empirically similar performance in our experiments, so we use a plain RNN to reduce the number of trainable parameters.
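The coupled update step can be sketched in a few lines of numpy. This is a minimal illustration, not the trained model: the weight values, dimensions, and the tanh non-linearity are illustrative assumptions standing in for the learned RNN parameters, and only the user side is shown (the item RNN is symmetric, with the roles of u and i swapped).

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 8, 4  # embedding and feature dimensions (illustrative values)

# Hypothetical weight matrices of the user RNN.
W1 = rng.normal(scale=0.1, size=(D, D))
W2 = rng.normal(scale=0.1, size=(D, D))
W3 = rng.normal(scale=0.1, size=(D, F))
w4 = rng.normal(scale=0.1, size=D)

def update_user(u_prev, i_prev, feat, delta_u):
    # The user embedding is updated from the *dynamic* item embedding
    # (not a one-hot item id), the interaction features, and the time
    # elapsed since the user's previous interaction.
    return np.tanh(W1 @ u_prev + W2 @ i_prev + W3 @ feat + w4 * delta_u)

u_prev = rng.normal(size=D)
i_prev = rng.normal(size=D)
u_new = update_user(u_prev, i_prev, rng.normal(size=F), delta_u=0.5)
```

Because the same call with u and i swapped updates the item embedding, each interaction advances both trajectories at once, which is the mutual recursion described above.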
Symbol | Meaning
u(t), i(t) | Dynamic embedding of user u and item i at time t
u(t⁻), i(t⁻) | Dynamic embedding of user u and item i just before time t
ū, ī | Static embedding of user u and item i
[ū, u(t)], [ī, i(t)] | Complete embedding of user u and item i at time t
RNN_U, RNN_I | User RNN and item RNN used to update embeddings
project | Embedding projection function
predict | Prediction function to output the predicted item embedding
û(t + Δ), î(t + Δ) | Projected embedding of user u and item i at time t + Δ
j̃(t) | Predicted item embedding
f | Feature vector of the interaction
3.1.2. Embedding projection operation.
Between two interactions of a user, its embedding may become stale as more time elapses. Using stale embeddings leads to suboptimal predictions, so it is crucial to estimate the embeddings in real time. To address this, we create a novel projection operation that estimates the embedding of a user after some time has elapsed since its previous interaction.
In practice, consider the scenario when a recommendation needs to be made to a user when it logs into a system. For example, on an ecommerce website, if a user returns 5 minutes after a previous purchase, then its projected embedding would be close to its previous embedding. On the other hand, the projected embedding would drift farther if the user returned 10 days later. The use of projected embedding enables JODIE to make different recommendations to the same user at different points in time. Therefore, the instantaneous projected embedding of a user can be utilized to make efficient realtime recommendations. The projection operation is one of the major innovations of JODIE.
The projection function projects the embedding of a user u after a time Δ has elapsed since its previous interaction at time t. We denote the projected user embedding at time t + Δ as û(t + Δ).

The two inputs to the projection operation are u's previous embedding u(t) and the elapsed time Δ. We follow the method suggested in LatentCross (beutel2018latent) to incorporate time into the embedding: we first convert Δ to a time-context vector w = W_t Δ using a linear layer W_t, initialized from a 0-mean Gaussian. The projected embedding is then obtained as an element-wise product:

û(t + Δ) = (1 + w) ⊙ u(t)

The time-context vector w essentially acts as an attention vector that scales the past user embedding to the current state. The context linear layer is learned during training.
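The projection operation is small enough to sketch directly; in this minimal numpy illustration the linear-layer weights are random stand-ins for the trained time-context layer.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
# Time-context linear layer: maps the scalar elapsed time to a D-dimensional
# vector w. Following the text, it is initialised from a 0-mean Gaussian.
w_t = rng.normal(scale=0.1, size=D)

def project(u_prev, delta):
    w = w_t * delta            # time-context vector for elapsed time delta
    return (1.0 + w) * u_prev  # element-wise product with the past embedding

u = rng.normal(size=D)
near = project(u, delta=0.01)  # short gap: stays close to u
far = project(u, delta=10.0)   # long gap: drifts farther from u
```

Note that project(u, 0) returns u unchanged, and the drift from u grows with the elapsed time, matching the intuition in the text.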
In Figure 3, we show the projected embedding of a user in our example network for different values of the elapsed time Δ. For smaller Δ, the projected embedding stays close to the previous embedding u(t), and it drifts farther as Δ increases, reflecting the change in the user's state.
3.1.3. Predicting useritem interaction.
The JODIE model is trained to correctly predict future user-item interactions. To predict user u's interaction at time t + Δ, we introduce a prediction function. This function takes the projected user embedding û(t + Δ) along with the user's static embedding as input. Additionally, we use the static and dynamic embeddings of the item from u's last interaction (at time t). As that item may interact with other users between t and t + Δ, its dynamic embedding right before time t + Δ reflects more recent information. Using both the static and dynamic embeddings lets the prediction exploit both the long-term and the temporary properties of the user and items. The function is trained to output the complete (static and dynamic) predicted item embedding.

In practice, we use a fully connected layer as the prediction function.
Note that instead of predicting a probability score of interaction between a user and each candidate item, JODIE directly predicts an item embedding. This is advantageous at inference time (i.e., when making real-time predictions), as we only need to do the expensive neural network forward pass of the prediction layer once and then select the item with the closest embedding using k-nearest-neighbor search. Standard approaches (dai2016deep; wu2017recurrent; beutel2018latent; DBLP:conf/kdd/DuDTUGS16) that generate a probability score instead need one forward pass per candidate item to find the item with the highest score.
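The predict-then-search idea can be sketched as follows. This is an assumption-laden illustration: a single random weight matrix stands in for the trained fully connected layer, the function and variable names are our own, and the 1-nearest-neighbor search is done by brute force over a random item table.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 8, 100  # embedding dimension and number of candidate items

# Hypothetical fully connected prediction layer: one weight matrix mapping
# the concatenated inputs to a predicted item embedding.
W = rng.normal(scale=0.1, size=(D, 4 * D))

def predict_item_embedding(u_proj, u_static, i_dyn, i_static):
    x = np.concatenate([u_proj, u_static, i_dyn, i_static])
    return W @ x  # one forward pass, independent of the number of items

items = rng.normal(size=(N, D))  # embeddings of all candidate items
j_pred = predict_item_embedding(*(rng.normal(size=D) for _ in range(4)))

# Recommendation = item whose embedding is nearest to the prediction,
# instead of N separate scoring passes through the network.
recommended = int(np.argmin(np.linalg.norm(items - j_pred, axis=1)))
```

In a real system the brute-force search would typically be replaced by an approximate nearest-neighbor index, but the key point is that the network itself runs only once per query.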
3.1.4. Loss function for training.
The entire JODIE model is trained to minimize the distance between the predicted item embedding and the ground-truth item's embedding at every interaction. Let user u interact with item i at time t, and let j̃(t) denote the predicted item embedding. We calculate the loss as follows:

Loss = Σ_{(u, i, t, f)} || j̃(t) − [ī, i(t⁻)] ||² + λ_U || u(t) − u(t⁻) ||² + λ_I || i(t) − i(t⁻) ||²
The first loss term minimizes the predicted embedding error. The last two terms regularize the loss, preventing the consecutive dynamic embeddings of a user and of an item, respectively, from varying too much. λ_U and λ_I are scaling parameters that keep the loss terms in the same range. Note that we do not use negative sampling during training, as JODIE directly outputs the embedding of the predicted item.
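For one interaction, the loss above reduces to a squared prediction error plus the two smoothness regularizers; a minimal sketch (function name and argument order are our own):

```python
import numpy as np

def jodie_loss(j_pred, j_true, u_new, u_prev, i_new, i_prev,
               lambda_u=1.0, lambda_i=1.0):
    # Squared error between the predicted and ground-truth item embeddings,
    # plus two regularizers that keep consecutive dynamic embeddings of the
    # same user and the same item from varying too much.
    err = np.sum((j_pred - j_true) ** 2)
    reg_u = lambda_u * np.sum((u_new - u_prev) ** 2)
    reg_i = lambda_i * np.sum((i_new - i_prev) ** 2)
    return err + reg_u + reg_i
```

Because the target is an embedding rather than a score over all items, no negative samples enter this expression.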
Overall, Algorithm 1 describes the process in each epoch.
3.1.5. Extending the loss term for user state change prediction.
In certain prediction tasks, such as user state change prediction, additional training labels may be available for supervision. In those cases, we train another prediction function to predict the label from the dynamic embedding of the user after an interaction. We compute a cross-entropy loss for the categorical labels and add it to the loss function above with another scaling parameter. We explicitly do not train to minimize only the cross-entropy loss, in order to avoid overfitting.
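The extra supervised term can be sketched as a softmax cross-entropy over a linear head; this is a hypothetical stand-in for the trained label-prediction function (the name `state_change_loss` and the plain linear head are our assumptions).

```python
import numpy as np

def state_change_loss(u_dyn, W_label, true_label):
    # Hypothetical linear head over the user's dynamic embedding, followed
    # by a softmax; cross-entropy against the categorical state label.
    # This term is added to the main loss with its own scaling parameter.
    logits = W_label @ u_dyn
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return -np.log(probs[true_label])
```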
3.1.6. Differences between JODIE and DeepCoevolve
DeepCoevolve is the closest state-of-the-art algorithm, as it also trains two coupled RNNs to generate dynamic embeddings. The key differences between JODIE and DeepCoevolve are the following: (i) JODIE uses a novel projection function to estimate the embedding of a user or item at any time, while DeepCoevolve maintains a constant embedding between two consecutive interactions of the same user/item. This makes JODIE truly dynamic. (ii) JODIE predicts the embedding of the next item that a user will interact with. In contrast, DeepCoevolve trains a function to predict the likelihood of interaction between a user and an item, which requires one forward pass through the inference layer per candidate item to select the item with the highest score. JODIE, on the other hand, is scalable at inference time, as it requires only one forward pass through the inference layer (see Section 3.1.3 for details).
Overall, in this section, we presented the JODIE algorithm to generate dynamic embeddings for users and items from a sequence of useritem interactions.
4. tBatch: TimeConsistent Batching Algorithm
Here we explain a general batching algorithm to parallelize the training of models that learn from a sequence of user-item interactions.
Training two mutually-recursive recurrent networks, as in DeepCoevolve (dai2016deep) and JODIE, introduces new challenges, as it is fundamentally different from training a single RNN. Standard RNN models are trained through the Back-Propagation Through Time (BPTT) mechanism. Methods such as RRN (wu2017recurrent) and Time-LSTM (zhu2017next) that use single RNNs can split users into small batches because each user's interaction sequence is treated independently (using one-hot vector representations of items). This enables parallelism but ignores the user-to-user interdependencies. In JODIE and DeepCoevolve, by contrast, the two RNNs are mutually recursive in order to incorporate user-to-user dependencies, so users' interaction sequences are not independent and the two RNNs cannot be trained independently. This new challenge requires new methods for efficient training.
First we propose two necessary conditions for any batching algorithm to work with coupled recurrent networks.
Condition 1 (Co-batching conditions).
A batching algorithm for coupled recurrent networks should satisfy the following two conditions:

(1) Every user and every item appears at most once in a batch. This ensures that all interactions in a batch are completely independent, so the batch can be parallelized.

(2) Consecutive interactions of the same user (or item) must be assigned to batches with strictly increasing indices. Since batches are processed sequentially, the embedding updated in one interaction can then be used in the next interaction.
The naive solution that satisfies both conditions is to process the interactions one-at-a-time. However, this is very slow and cannot scale to a large number of interactions. This approach is used in existing methods such as Dai et al. (dai2016deep) and Zhang et al. (zhang2017learning).
Therefore, we propose a novel batching algorithm, called t-Batch, that creates large batches that can be parallelized for faster training; it is shown in Algorithm 2. t-Batch takes as input the temporally-sorted sequence of interactions and outputs a set of batches, processing one interaction at a time in increasing order of time. The key idea of t-Batch is that each interaction (say, between user u and item i) is assigned to the batch whose index is one more than the largest batch index so far containing any interaction of u or i. As a result, each batch contains each user and item at most once, and the batches can be processed in increasing order of their index, satisfying both co-batching conditions.
Algorithmically, two lookup tables store the index of the last batch in which each user and each item appears, respectively. A number of empty batches equal to the number of interactions, the maximum possible number of batches, is initialized, and a count of the non-empty batches is maintained. All interactions are then processed sequentially, with each interaction added to the batch immediately after the last one in which either its user or its item appears. All non-empty batches are returned at the end.
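The batch-assignment rule above can be sketched in a few lines. This is an illustrative re-implementation, not Algorithm 2 verbatim: the function name and the bare (user, item) pair representation are our own, and timestamps and features are omitted since only the temporal ordering of the input matters here.

```python
def t_batch(interactions):
    # interactions: list of (user, item) pairs, already sorted by time.
    # Each interaction goes into the batch immediately after the largest
    # batch that already contains an interaction of its user or its item.
    last_user, last_item = {}, {}  # last batch index of each user / item
    batches = []
    for u, i in interactions:
        b = max(last_user.get(u, -1), last_item.get(i, -1)) + 1
        if b == len(batches):
            batches.append([])
        batches[b].append((u, i))
        last_user[u] = last_item[i] = b
    return batches

# Interactions within a batch share no user or item, so each batch can be
# processed in parallel; the batches themselves are processed in order.
batches = t_batch([("u1", "i1"), ("u2", "i2"), ("u1", "i2"), ("u3", "i1")])
```

Here the first two interactions share no user or item and land in batch 0, while the last two must wait for batch 1 because u1 and i1 already appeared, which is exactly condition 2.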
Example 4.1 (Running example).
In the example interaction network shown in Figure 1, t-Batch produces a total of 5 batches, a 45% decrease compared to the 9 batches of the naive one-interaction-per-batch approach. Note that in each batch, users and items appear at most once, and for the same user (and item), earlier interactions are assigned to earlier batches.
Theorem 4.2.
The t-Batch algorithm satisfies the co-batching conditions.

Proof.
By construction, an interaction is placed in a batch strictly after the last batch containing its user or its item, so every batch contains each user and each item at most once. This satisfies condition 1.

The interactions are added to batches in increasing order of time. Consider two consecutive interactions of a user. If the first is added to batch B_k, the user's last-batch index is set to k, so that when the second interaction is processed, it is assigned to a batch with index at least k + 1. This satisfies condition 2 and completes the proof. ∎
Theorem 4.3 (Complexity).
The complexity of t-Batch in creating the batches is O(N), i.e., linear in the number of interactions N, as each interaction is processed exactly once.
Overall, in this section, we presented the t-Batch algorithm, which creates batches from the training data such that each batch can be parallelized. This leads to faster training on large-scale interaction data. In Section 5.3, we experimentally validate that t-Batch yields a speedup of 7.4–8.5× for JODIE and DeepCoevolve.
5. Experiments
In this section, we experimentally validate the effectiveness of JODIE on two tasks: next interaction prediction and user state change prediction. We conduct experiments on three datasets each and compare with six strong baselines to show the following:

JODIE outperforms the best-performing baseline by up to 22.4% in predicting the next interaction and up to 4.5% in predicting user state changes.

We show that t-Batch results in over a 7.4× speedup in the running time of both JODIE and DeepCoevolve.

JODIE is robust in performance to the availability of training data.

We show that the performance of JODIE is stable with respect to the dimensionality of the dynamic embedding.

Finally, we show the usefulness of JODIE as an earlywarning system for label change.
We first explain the experimental setting and the baseline methods, and then illustrate the experimental results.
Experimental setting. We train all models by splitting the data by time, instead of splitting by user, which would cause temporal inconsistency between training and test data. Therefore, we train all models on the first fraction of interactions (ordered by time), validate on the next fraction, and test on the following fraction; the exact proportions are given in each experiment.
For a fair comparison, we use 128 as the dimensionality of the dynamic embedding for all algorithms and one-hot vectors for static embeddings. All algorithms are run for 50 epochs, and all reported numbers for all models are on the test data at the epoch with the best validation performance.
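The chronological split described above can be sketched as follows (the fractions shown are the 80/10/10 split used for interaction prediction; the state change experiments later use 60/20/20):

```python
def temporal_split(interactions, train_frac=0.8, val_frac=0.1):
    """Split time-ordered interactions chronologically into train/val/test.

    Splitting by time, rather than by user, avoids training on interactions
    that occur after the ones being predicted.
    """
    n = len(interactions)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = interactions[:n_train]
    val = interactions[n_train:n_train + n_val]
    test = interactions[n_train + n_val:]
    return train, val, test
```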
Baselines. We compare JODIE with six state-of-the-art algorithms spanning three algorithmic categories:

Recurrent neural network algorithms: in this category, we compare with RRN (wu2017recurrent), LatentCross (beutel2018latent), Time-LSTM (zhu2017next), and a standard LSTM. These algorithms are state-of-the-art in recommender systems and generate dynamic user embeddings. We use the TimeLSTM-3 cell for Time-LSTM, as it performs the best in the original paper (zhu2017next), and LSTM cells in the RRN and LatentCross models. As is standard, we use one-hot vectors of items as inputs to these models.

Temporal network embedding algorithms: we compare JODIE with CTDNE (nguyen2018continuous), the state-of-the-art in generating embeddings from temporal networks. As it generates static embeddings, we regenerate embeddings after each edge is added. We use uniform neighborhood sampling, as it performs the best in the original paper (nguyen2018continuous).

Co-evolutionary recurrent algorithms: here we compare with the state-of-the-art algorithm, DeepCoevolve (dai2016deep), which has been shown to outperform other co-evolutionary point-process algorithms (trivedi2017know). We use 10 negative samples per interaction for computational tractability.
Method  Reddit  Wikipedia  LastFM
MRR  Rec@10  MRR  Rec@10  MRR  Rec@10
LSTM (zhu2017next, )  0.355  0.551  0.329  0.455  0.062  0.119 
TimeLSTM (zhu2017next, )  0.387  0.573  0.247  0.342  0.068  0.137 
RRN (wu2017recurrent, )  0.603  0.747  0.522  0.617  0.089  0.182 
LatentCross (beutel2018latent, )  0.421  0.588  0.424  0.481  0.148  0.227 
CTDNE (nguyen2018continuous, )  0.165  0.257  0.035  0.056  0.01  0.01 
DeepCoevolve (dai2016deep, )  0.171  0.275  0.515  0.563  0.019  0.039 
JODIE (proposed)  0.726  0.852  0.746  0.822  0.195  0.307 
% Improvement  12.3%  10.5%  22.4%  20.5%  4.7%  8.0% 
5.1. Experiment 1: Future interaction prediction
In this experiment, the task is to predict future interactions. The prediction task is: given all interactions until a certain time, and the user involved in the next interaction, which item will that user interact with (out of all items)?
We use three datasets in the experiments related to future interaction prediction:
Reddit post dataset: this dataset consists of one month of posts made by users on subreddits (pushshift). We selected the 1,000 most active subreddits as items and the 10,000 most active users. This results in 672,447 interactions. We convert the text of each post into a feature vector representing its LIWC categories (pennebaker2001linguistic).
Wikipedia edits: this dataset is one month of edits made by editors on Wikipedia pages (wikidump). We selected the 1,000 most edited pages as items and editors who made at least 5 edits as users (a total of 8,227 users). This generates 157,474 interactions. As with the Reddit dataset, we convert the edit text into an LIWC feature vector.
LastFM song listens: this dataset has one month of who-listens-to-which-song information (lastfm). We selected all 1,000 users and the 1,000 most listened-to songs, resulting in 1,293,103 interactions. In this dataset, interactions do not have features.
We select these datasets such that they vary in terms of users' repetitive behavior: in Wikipedia and Reddit, a user interacts with the same item consecutively in 79% and 61% of interactions, respectively, while in LastFM, this happens in only 8.6% of interactions.
Experimentation setting. We use the first 80% of the data to train, the next 10% to validate, and the final 10% to test. We measure the performance of the algorithms in terms of the mean reciprocal rank (MRR) and recall@10—MRR is the average of the reciprocal ranks of the ground-truth items, and recall@10 is the fraction of interactions in which the ground-truth item is ranked in the top 10. Higher values are better for both. For every interaction, the rank of the ground-truth item is calculated with respect to all items in the dataset.
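The two ranking metrics are standard; a minimal self-contained computation (names are ours) looks like this:

```python
def rank_of_true_item(scores, true_item):
    """Rank of the ground-truth item among all items.

    `scores` maps each item to its predicted score; the rank is 1 plus the
    number of items scored strictly higher than the ground-truth item.
    """
    true_score = scores[true_item]
    return 1 + sum(1 for item, s in scores.items()
                   if item != true_item and s > true_score)

def mrr_and_recall_at_k(predictions, k=10):
    """predictions: list of (scores_dict, true_item) pairs, one per test
    interaction. Returns (MRR, recall@k)."""
    ranks = [rank_of_true_item(scores, true) for scores, true in predictions]
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    recall = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, recall
```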
Results. Table 3 compares the results of JODIE with the six baseline methods. We observe that JODIE significantly outperforms all baselines on all three datasets across both metrics (improvements between 4.7% and 22.4%). Interestingly, our model performs well irrespective of how repetitive users are—it achieves up to 22.4% improvement on Wikipedia and Reddit (high repetition) and up to 8% improvement on LastFM (low repetition). This means JODIE is able to balance users' personal preferences with their non-repetitive interaction behavior. Moreover, among the baselines there is no clear winner: RRN performs better on Reddit and Wikipedia, while LatentCross performs better on LastFM. As CTDNE generates static embeddings, its performance is low.
Overall, JODIE outperforms these baselines by learning efficient update, project, and predict functions.
Method  Reddit (AUC)  Wikipedia (AUC)  MOOC (AUC)
LSTM (zhu2017next, )  0.523  0.575  0.686 
TimeLSTM (zhu2017next, )  0.556  0.671  0.711 
RRN (wu2017recurrent, )  0.586  0.804  0.558 
LatentCross (beutel2018latent, )  0.574  0.628  0.686 
DeepCoevolve (dai2016deep, )  0.577  0.663  0.671 
JODIE (proposed method)  0.599  0.831  0.756 
Improvement over best baseline  1.3%  2.7%  4.5% 
5.2. Experiment 2: User state change prediction
In this experiment, the task is to predict if an interaction will lead to a change in user state, particularly in two use cases: predicting the banning of users and predicting if a student will drop out of a course. Until a user is banned or drops out, the label of the user is ‘0’, and their last interaction has the label ‘1’. For users that are not banned or do not drop out, the label is always ‘0’. This is a highly challenging task because of the very high imbalance in labels.
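The labeling scheme above is simple to state in code (a sketch of the scheme as described; the function name is ours):

```python
def state_change_labels(user_interactions, changed):
    """Label a user's time-ordered interactions for state change prediction.

    Every interaction is labeled 0, except that for users whose state
    changes (banned / dropped out), the final interaction is labeled 1.
    """
    labels = [0] * len(user_interactions)
    if changed and labels:
        labels[-1] = 1
    return labels
```

Since at most one interaction per user is positive, the resulting label distribution is extremely imbalanced, which is why AUC is used as the evaluation metric below.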
We use three datasets for this task:
Reddit bans: we augment the Reddit post dataset (from Section 5.1) with ground-truth labels of banned users from Reddit. This gives 366 true labels among 672,447 interactions (= 0.05%).
Wikipedia bans: we augment the Wikipedia edit data (from Section 5.1) with ground truth labels of banned users (wikidump, ). This results in 217 positive labels among 157,474 interactions (= 0.14%).
MOOC student dropout: this dataset consists of actions, e.g., viewing a video, submitting an answer, etc., done by students in a MOOC online course (kddcup). It consists of 7,047 users interacting with 98 items (videos, answers, etc.), resulting in 411,749 interactions. There are 4,066 dropout events (= 0.98%).
Experimentation setting. Due to the sparsity of positive labels, in this experiment we train the models on the first 60% of interactions, validate on the next 20%, and test on the last 20%. We evaluate the models using the area under the ROC curve (AUC), a standard metric for tasks with highly imbalanced labels.
For the baselines, we train a logistic regression classifier on the training data using the dynamic user embedding as input. As always, for all models, we report the test AUC for the epoch with the highest validation AUC.
Results. Table 4 compares the performance of JODIE on the three datasets with the baseline models. We see that JODIE outperforms the baselines by up to 2.7% in the ban prediction task and by 4.5% in the dropout prediction task. As before, there is no clear winner among baselines—RRN performs the second best in predicting bans on Reddit and Wikipedia, while TimeLSTM is the second best in predicting dropouts.
Thus, JODIE is highly efficient in both link prediction and label change prediction.
Method  Without t-Batch  With t-Batch
DeepCoevolve  47.21  6.35 
JODIE  43.53  5.13 
5.3. Experiment 3: Effectiveness of t-Batch
Here we empirically show the advantage of the t-Batch algorithm on co-evolving recurrent models, namely our proposed JODIE and DeepCoevolve. Figure 4 shows the running time (in minutes) of one epoch on the Reddit dataset. (We ran the experiment on one NVIDIA Titan X Pascal GPU with 12 GB of RAM at 10 Gbps.)
We make three crucial observations. First, we observe that JODIE is slightly faster than DeepCoevolve. We attribute this to the fact that JODIE does not use negative sampling during training, because it directly generates the embedding of the predicted item; in contrast, DeepCoevolve requires training with negative sampling. Second, the proposed JODIE + t-Batch combination is 9.2× faster than the DeepCoevolve algorithm. Third, t-Batch speeds up the running time of JODIE and DeepCoevolve by 8.5× and 7.4×, respectively.
Altogether, these experiments show that the proposed t-Batch is very effective in creating parallelizable batches despite the complex temporal dependencies that exist in user–item interactions. Moreover, this also shows that t-Batch is general and applicable to algorithms that learn from sequences of interactions.
5.4. Experiment 4: Robustness to training data
In this experiment, we check the robustness of JODIE by varying the percentage of training data and comparing the performance of the algorithms in both the tasks of interaction prediction and user state change prediction.
For interaction prediction, we vary the training data percentage from 10% to 80%. In each case, we take the 10% of interactions after the training data for validation and the following 10% for testing. This is done to compare performance on the same test-set size. Figure 5(a–c) shows the change in mean reciprocal rank (MRR) of all algorithms on the three datasets as the training data size increases. We note that the performance of JODIE is stable, as it does not vary much across training sizes. Moreover, JODIE consistently outperforms the baseline models by a significant margin (by a maximum of 33.1%).
The case is similar for user state change prediction. Here, we vary the training data percentage over 20%, 40%, and 60%, and in each case take the following 20% of interactions for validation and the next 20% for testing. Figure 5(d) shows the AUC of all algorithms on the Wikipedia dataset. We omit the other datasets, which show similar results, due to space constraints. Again, we observe that JODIE is stable and consistently performs the best (better by up to 3.1%), irrespective of the training data size.
This shows the robustness of JODIE to the amount of available training data.
5.5. Experiment 5: Robustness to embedding size
Finally, we check the effect of the dynamic embedding size on the predictions. To do this, we vary the dynamic embedding dimension from 32 to 256 and calculate the mean reciprocal rank for interaction prediction on the LastFM dataset. The effect on the other datasets is similar and omitted due to space constraints. The result is shown in Figure 6. We find that the embedding dimension has little effect on the performance of JODIE, and JODIE performs the best overall.
5.6. Experiment 6: Jodie as an earlywarning system
In user state change prediction tasks, such as predicting student dropout from courses and finding malicious users online, it is crucial to make predictions early in order to develop effective intervention strategies. For instance, if it can be predicted well in advance that a student is likely to drop a course (an ‘at-risk’ student), then teachers can take steps to ensure the student's continued education. Therefore, here we show that JODIE is effective in making early predictions for at-risk students.
To measure this, we calculate the dropout probability predicted by JODIE as a function of the number of interactions remaining until the student drops out. We plot the ratio of the predicted probability score for students who eventually drop out to the predicted score for students who do not (i.e., the expected score). A ratio of one means that the algorithm gives equal scores to dropping-out and non-dropping-out students, while a ratio greater than one means that the algorithm gives a higher score to students that drop out than to students that do not. The average ratio is shown in Figure 7, with 95% confidence intervals. Here we only consider interactions that occur in the test set, to prevent direct training on the dropout interactions.
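The ratio plotted in Figure 7 can be sketched as follows, with a normal-approximation confidence interval on the dropout-group mean (an illustrative sketch; names and the exact CI construction are ours):

```python
import statistics

def dropout_score_ratio(drop_scores, stay_scores):
    """Ratio of mean predicted dropout score for students who drop out to
    the mean score for students who do not, with a 95% normal-approximation
    confidence interval on the ratio.

    `drop_scores` needs at least two values for the stdev-based interval.
    """
    baseline = statistics.mean(stay_scores)          # expected score
    mean_drop = statistics.mean(drop_scores)
    se = statistics.stdev(drop_scores) / len(drop_scores) ** 0.5
    ratio = mean_drop / baseline
    interval = ((mean_drop - 1.96 * se) / baseline,
                (mean_drop + 1.96 * se) / baseline)
    return ratio, interval
```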
First, we observe from Figure 7 that the ratio is higher than one as early as five interactions before the student drops out. Second, as the student approaches their ‘final’ dropout interaction, the score predicted by JODIE increases steadily and spikes strongly at the final interaction. Together, these observations show that JODIE identifies early signs of dropping out and predicts higher scores for these at-risk students.
We make a similar observation for users before they are banned on Wikipedia and Reddit, but the ratio in both cases is close to 1, indicating low early predictability of when a user will be banned.
Overall, in this section, we showed the effectiveness and robustness of JODIE and t-Batch on two tasks, in comparison to six state-of-the-art algorithms. Moreover, we showed the usefulness of JODIE as an early-warning system that identifies student dropouts as early as five interactions before they drop out.
6. Conclusions
We proposed a coupled recurrent neural network model called JODIE that learns dynamic embeddings of users and items from a sequence of temporal interactions. A key innovation of JODIE is the novel project function, inspired by Kalman filters, which estimates the user embedding at any point in time and leads to JODIE's strong performance. We also proposed the t-Batch algorithm, which creates parallelizable batches of training data and results in massive speedups in running time.
There are several directions open for future work, such as learning embeddings of groups of users and items in temporal interactions and learning hierarchical embeddings of users and items. We will explore these directions in future work.
References
 [1] Kdd cup 2015. https://biendata.com/competition/kddcup2015/data/. Accessed: 20181105.
 [2] Reddit data dump. http://files.pushshift.io/reddit/. Accessed: 20181105.
 [3] Wikipedia edit history dump. https://meta.wikimedia.org/wiki/Data_dumps. Accessed: 20181105.
 [4] D. Agrawal, C. Budak, A. El Abbadi, T. Georgiou, and X. Yan. Big data in online social networks: user interaction analysis to model user behavior in social networks. In International Workshop on Databases in Networked Information Systems, pages 1–16. Springer, 2014.
 [5] T. Arnoux, L. Tabourier, and M. Latapy. Combining structural and dynamic information to predict activity in link streams. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, Sydney, Australia, July 31  August 03, 2017, pages 935–942, 2017.
 [6] T. Arnoux, L. Tabourier, and M. Latapy. Predicting interactions between individuals with structural and dynamical information. CoRR, abs/1804.01465, 2018.
 [7] I. M. Baytas, C. Xiao, X. Zhang, F. Wang, A. K. Jain, and J. Zhou. Patient subtyping via timeaware lstm networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 65–74. ACM, 2017.
 [8] A. Beutel, P. Covington, S. Jain, C. Xu, J. Li, V. Gatto, and E. H. Chi. Latent cross: Making use of context in recurrent recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 46–54. ACM, 2018.
 [9] J. Bobadilla, F. Ortega, A. Hernando, and A. Gutiérrez. Recommender systems survey. Knowledgebased systems, 46:109–132, 2013.
 [10] C. Buntain and J. Golbeck. Identifying social roles in reddit using network structure. In Proceedings of the 23rd International Conference on World Wide Web, pages 615–620. ACM, 2014.
 [11] S. Chaturvedi, D. Goldwasser, and H. Daumé III. Predicting instructor’s intervention in mooc forums. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1501–1511, 2014.
 [12] J. Cheng, M. Bernstein, C. DanescuNiculescuMizil, and J. Leskovec. Anyone can become a troll: Causes of trolling behavior in online discussions. In CSCW: proceedings of the Conference on ComputerSupported Cooperative Work. Conference on ComputerSupported Cooperative Work, volume 2017, page 1217. NIH Public Access, 2017.
 [13] H. Dai, Y. Wang, R. Trivedi, and L. Song. Deep coevolutionary network: Embedding user and item features for recommendation. arXiv preprint arXiv:1609.03675, 2016.
 [14] N. Du, H. Dai, R. Trivedi, U. Upadhyay, M. GomezRodriguez, and L. Song. Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 1317, 2016, pages 1555–1564, 2016.
 [15] M. Farajtabar, Y. Wang, M. GomezRodriguez, S. Li, H. Zha, and L. Song. COEVOLVE: A joint point process model for information diffusion and network coevolution. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 712, 2015, Montreal, Quebec, Canada, pages 1954–1962, 2015.
 [16] A. Ferraz Costa, Y. Yamaguchi, A. Juci Machado Traina, C. Traina Jr, and C. Faloutsos. Rsc: Mining and modeling temporal activity in social media. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 269–278. ACM, 2015.
 [17] P. Goyal and E. Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowl.Based Syst., 151:78–94, 2018.
 [18] P. Goyal, N. Kamra, X. He, and Y. Liu. Dyngem: Deep embedding method for dynamic graphs. arXiv preprint arXiv:1805.11273, 2018.
 [19] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
 [20] W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull., 40(3):52–74, 2017.

 [21] B. Hidasi and D. Tikk. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 67–82. Springer, 2012.
 [22] T. Iba, K. Nemoto, B. Peters, and P. A. Gloor. Analyzing the creative editing behavior of wikipedia editors: Through dynamic social network analysis. Procedia-Social and Behavioral Sciences, 2(4):6441–6456, 2010.
 [23] A. E. Johnson, T. J. Pollard, L. Shen, H. L. Liwei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark. Mimiciii, a freely accessible critical care database. Scientific data, 3:160035, 2016.
 [24] S. J. Julier and J. K. Uhlmann. New extension of the kalman filter to nonlinear systems. In Signal processing, sensor fusion, and target recognition VI, volume 3068, pages 182–194. International Society for Optics and Photonics, 1997.
 [25] R. R. Junuthula, M. Haghdan, K. S. Xu, and V. K. Devabhaktuni. The block point process model for continuoustime eventbased dynamic networks. CoRR, abs/1711.10967, 2017.
 [26] R. R. Junuthula, K. S. Xu, and V. K. Devabhaktuni. Leveraging friendship networks for dynamic link prediction in social interaction networks. In Proceedings of the Twelfth International Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 2528, 2018., pages 628–631, 2018.
 [27] M. Kloft, F. Stiehler, Z. Zheng, and N. Pinkwart. Predicting mooc dropout over weeks using machine learning methods. In Proceedings of the EMNLP 2014 Workshop on Analysis of Large Scale Social Interaction in MOOCs, pages 60–65, 2014.
 [28] S. Kumar, F. Spezzano, and V. Subrahmanian. Vews: A wikipedia vandal early warning system. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 607–616. ACM, 2015.
 [29] J. Li, H. Dani, X. Hu, J. Tang, Y. Chang, and H. Liu. Attributed network embedding for learning in a dynamic environment. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06  10, 2017, pages 387–396, 2017.
 [30] T. Li, J. Zhang, P. S. Yu, Y. Zhang, and Y. Yan. Deep dynamic network embedding for link prediction. IEEE Access, 6:29219–29230, 2018.

 [31] X. Li, N. Du, H. Li, K. Li, J. Gao, and A. Zhang. A deep learning approach to link prediction in dynamic networks. In Proceedings of the 2014 SIAM International Conference on Data Mining, Philadelphia, Pennsylvania, USA, April 24–26, 2014, pages 289–297, 2014.
 [32] T. R. Liyanagunawardena, A. A. Adams, and S. A. Williams. Moocs: A systematic study of the published literature 2008–2012. The International Review of Research in Open and Distributed Learning, 14(3):202–227, 2013.
 [33] Y. Ma, Z. Guo, Z. Ren, Y. E. Zhao, J. Tang, and D. Yin. Dynamic graph neural networks. CoRR, abs/1810.10627, 2018.
 [34] G. H. Nguyen, J. B. Lee, R. A. Rossi, N. K. Ahmed, E. Koh, and S. Kim. Continuoustime dynamic network embeddings. In 3rd International Workshop on Learning Representations for Big Networks (WWW BigNet), 2018.
 [35] R. Pálovics, A. A. Benczúr, L. Kocsis, T. Kiss, and E. Frigó. Exploiting temporal influence in online recommendation. In Eighth ACM Conference on Recommender Systems, RecSys ’14, Foster City, Silicon Valley, CA, USA  October 06  10, 2014, pages 273–280, 2014.
 [36] J. W. Pennebaker, M. E. Francis, and R. J. Booth. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001, 2001.
 [37] B. Perozzi, R. AlRfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
 [38] J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM 2018, Marina Del Rey, CA, USA, February 59, 2018, pages 459–467, 2018.
 [39] V. Raghavan, G. Ver Steeg, A. Galstyan, and A. G. Tartakovsky. Modeling temporal activity patterns in dynamic social networks. IEEE Transactions on Computational Social Systems, 1(1):89–107, 2014.
 [40] M. Rahman, T. K. Saha, M. A. Hasan, K. S. Xu, and C. K. Reddy. Dylink2vec: Effective feature representation for link prediction in dynamic networks. CoRR, abs/1804.05755, 2018.
 [41] S. Sajadmanesh, J. Zhang, and H. R. Rabiee. Continuoustime relationship prediction in dynamic heterogeneous information networks. CoRR, abs/1710.00818, 2017.
 [42] S. Sedhain, S. Sanner, L. Xie, R. Kidd, K. Tran, and P. Christen. Social affinity filtering: recommendation through finegrained analysis of user interactions and activities. In Conference on Online Social Networks, COSN’13, Boston, MA, USA, October 78, 2013, pages 51–62, 2013.

 [43] R. Trivedi, H. Dai, Y. Wang, and L. Song. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In International Conference on Machine Learning, pages 3462–3471, 2017.
 [44] R. Trivedi, M. Farajtbar, P. Biswal, and H. Zha. Representation learning over dynamic graphs. arXiv preprint arXiv:1803.04051, 2018.
 [45] P. B. Walker, S. G. Fooshee, and I. Davidson. Complex interactions in social and event network analysis. In International Conference on Social Computing, BehavioralCultural Modeling, and Prediction, pages 440–445. Springer, 2015.
 [46] Y. Wang, N. Du, R. Trivedi, and L. Song. Coevolutionary latent feature processes for continuoustime useritem interactions. In Advances in Neural Information Processing Systems, pages 4547–4555, 2016.
 [47] C.Y. Wu, A. Ahmed, A. Beutel, A. J. Smola, and H. Jing. Recurrent recommender networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 495–503. ACM, 2017.
 [48] D. Yang, T. Sinha, D. Adamson, and C. P. Rosé. Turn on, tune in, drop out: Anticipating student dropouts in massive open online courses. In Proceedings of the 2013 NIPS Datadriven education workshop, volume 11, page 14, 2013.
 [49] S. Zhang, L. Yao, and A. Sun. Deep learning based recommender system: A survey and new perspectives. arXiv preprint arXiv:1707.07435, 2017.
 [50] Y. Zhang, Y. Xiong, X. Kong, and Y. Zhu. Learning node embeddings in interaction graphs. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 397–406. ACM, 2017.
 [51] L.k. Zhou, Y. Yang, X. Ren, F. Wu, and Y. Zhuang. Dynamic network embedding by modeling triadic closure process. In AAAI, 2018.
 [52] L. Zhu, D. Guo, J. Yin, G. Ver Steeg, and A. Galstyan. Scalable temporal latent space inference for link prediction in dynamic social networks. IEEE Transactions on Knowledge and Data Engineering, 28(10):2765–2777, 2016.

 [53] Y. Zhu, H. Li, Y. Liao, B. Wang, Z. Guan, H. Liu, and D. Cai. What to do next: modeling user behaviors by time-lstm. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3602–3608, 2017.