I. Introduction
When dealing with sparse vectors, deep-learning-based recommendation models suffer from an enormous number of parameters, and in engineering practice the corresponding storage cost is extremely expensive. These issues make it very inconvenient for deep learning to process sparse vectors directly, so an embedding method that can encode objects as low-dimensional vectors while still preserving their properties is very important for deep-learning-based recommendation models.
In deep learning recommendation systems, embedding methods have three potential application scenarios:

As the embedding layer in a deep learning network, the embedding method can convert high-dimensional sparse feature vectors into low-dimensional dense feature vectors;

As part of preprocessing, the embedding method can generate embedding feature vectors, which are concatenated with other features and then used as input to the deep learning network;

As one of the recall layers or recall methods of the recommendation system, it can compute the similarity between user and item embedding vectors to generate a recommendation list.
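As an illustration of the third scenario, a minimal recall step can rank items by the cosine similarity between a user embedding and the item embeddings. The vectors and dimensions below are purely illustrative, not taken from the paper:

```python
import numpy as np

def recall_top_k(user_vec, item_vecs, k=3):
    """Rank items by cosine similarity to a user embedding."""
    user = user_vec / np.linalg.norm(user_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    scores = items @ user                # cosine similarity per item
    return np.argsort(-scores)[:k]      # indices of the k most similar items

# toy example: 4 items in a 2-d embedding space
user_vec = np.array([1.0, 0.0])
item_vecs = np.array([[1.0, 0.1], [0.0, 1.0], [0.9, -0.2], [-1.0, 0.0]])
top = recall_top_k(user_vec, item_vecs, k=2)
```

The top-k indices can then be mapped back to item IDs to form the recommendation list.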
Google published three Word2Vec papers [15, 16, 23] on embedding technology in 2013. Shortly afterwards, Microsoft extended Word2Vec to Item2Vec, carrying it from the field of natural language processing to any field that can generate sequences, such as recommendation, advertising and search [2]. Coincidentally, Airbnb subsequently proposed its own Item2Vec-style model [9]. However, the relationships between entities such as users and items are becoming more and more complex; they are no longer purely sequential but increasingly take the form of graph data [12]. Therefore, applying graph embedding methods in recommendation systems has attracted more and more attention from academia. DeepWalk [20] is a preliminary model that uses random walks to convert graph relationships into sequential relationships. Although DeepWalk can be applied to very large-scale networks, it is only suitable for unweighted graphs, not weighted ones. Compared with DeepWalk's purely random walk sequence generation, LINE [26] introduces first-order and second-order proximity relationships into the objective function, which makes the final distribution of item embedding vectors more balanced and smoother. Node2Vec [10] then improves DeepWalk by combining depth-first and breadth-first search, so that the final embeddings can express both the global and local structure of complex graphs. Compared with Node2Vec's improvement of the walk mode, the SDNE [27] model addresses the local and global structure of graph embedding mainly through the design of the objective function. Compared with LINE's practice of learning the local and global structure separately, SDNE performs the overall optimization jointly, which is more conducive to obtaining globally optimal embedding vectors. However, all current approaches are proposed for a single environment and cannot serve a real static environment and a complex dynamic environment at the same time.
In this paper, we propose the Dual-modal Graph Embedding Method (DGEM). DGEM works in different modes according to different application scenarios: static mode (SDGEM) in a static recommendation environment and dynamic mode (DDGEM) in a dynamic recommendation environment. In this way, a single method can serve both environments. In the static recommendation environment, SDGEM extracts the graph structure by establishing a directed weighted item graph and uses random walks with unequal probability to capture the vertex attributes of the item graph. Based on the generated item sequence data, we construct the item graph embedding vectors with the Word2Vec method and feed them to a deep neural network for recommendation. In the dynamic recommendation environment, DDGEM introduces the time state to track updates of the item graph and improves the unequal-probability random walk strategy of the static environment to capture vertex attributes in the dynamic item graph. We also add auxiliary information to enhance the representation of special solitary points. In this way, the temporal dependence between items can be better utilized and the recommendation performance in a dynamic environment can be improved. To improve the user experience, we further introduce application users' interests as a feature for deep learning.
To demonstrate the performance of DGEM, we conduct comprehensive experiments on the Amazon electronic product data subset. The experimental results show that 1) DGEM can use random walks to mine higher-order neighbor relationships, making up for sparse purchasing behavior and enhancing the expressive ability of the model; and 2) by introducing the time state, DGEM alleviates the inherent scalability and cold start problems of recommendation systems.
II. Background and Related Work
With the continuous advancement of communication technology and network platforms, information data has shown an exponentially explosive growth trend, which has brought about serious information overload problems [24, 4]. To solve this problem, the recommendation system has become an essential part of the applications developed by service providers. In real-world e-commerce applications [28], graph data such as social networks between users, product networks between items, and interaction networks between users and items are everywhere. Through the analysis of graph data, we can deeply understand the user's social structure, the relevance of items, and the interactions between users and items. To this end, industry and academia have proposed many analysis methods. Among them, graph embedding technology, which represents graph vertices in a vector space, has received more and more attention from researchers in recent years.
Graph embedding technology can be traced back to the early 2000s, when its main application scenario was dimensionality reduction. The basic idea of dimensionality-reduction graph embedding techniques is to first construct a neighborhood-based graph from a set of D-dimensional data points, and then embed the vertices into a d-dimensional (d ≪ D) vector space. Laplacian Eigenmaps (LE) [3] and Locally Linear Embedding (LLE) [25] are typical dimensionality-reduction graph embedding techniques. The time complexity of these algorithms grows with the square of the number of vertices, so they are suitable only for small-scale graph data and scale poorly.
Since 2010, the research direction of graph embedding technology has shifted from dimensionality reduction to scalable graph embedding algorithms. Scalable graph embedding algorithms can be divided into two categories: those based on factorization methods and those based on random walks. Graph embedding algorithms based on factorization methods (e.g., Graph Factorization [1], GraRep [5], HOPE [18]) have poor interpretability and low parallelism, and it is difficult to perform online incremental calculation with them.
Random walks have been used to approximate many properties of graphs, including vertex centrality [17] and similarity [8]. Among the graph embedding methods based on random walks [20, 10, 6, 21, 30, 13, 19, 29], DeepWalk is the first model to introduce deep learning. The DeepWalk algorithm consists of two parts: random walk sequence generation and parameter update. DeepWalk can learn in parallel and can mine the local structure of the graph, but it only supports unweighted graphs, which limits its generality. The introduction of LINE removes this restriction on the type of graph. LINE also proposes an edge sampling algorithm to overcome the limitations of classic stochastic gradient descent, improving sampling efficiency and effect. LINE explicitly defines a loss function to capture first-order and second-order proximity, which represent first-order and second-order local relations, respectively. However, it cannot be extended to higher-order proximity. DeepWalk and LINE are essentially algorithms based on neighborhood relations: different sampling strategies yield different neighborhood relations, and hence different vertex representations are learned. To solve the problem of sampling flexibility, the Node2Vec model was proposed. Compared with the rigid search methods of DeepWalk and LINE, Node2Vec can control the search space by adjusting hyperparameters, producing a more flexible algorithm. The hyperparameters have an intuitive explanation and determine different search strategies.
All the proposed graph embedding methods based on random walks focus on mining vertex relationships in a static application environment. DeepWalk is a purely random walk whose mined vertex relationships are unbiased; LINE focuses on first-order and second-order neighbor relations; Node2Vec uses different strategies to mine different relations between vertices. Although all these methods can be extended, their extensions are essentially static. To provide a scalable graph embedding method with both static and dynamic modes for recommendation systems, we propose DGEM.
III. Framework
In this section, we introduce the design of DGEM in detail, mainly covering SDGEM in static mode and DDGEM in dynamic mode. DGEM can be roughly divided into five modules: the item graph construction module, the random walk module, the solitary point processing module, the graph embedding module and the deep neural network module.
III-A. Recommendation target for graph embedding
If the participants in a recommendation system remain relatively stable for a period of time, we call it a static recommendation system (SRS). If the objects involved in the recommendation system change anytime and anywhere, we call it a dynamic recommendation system (DRS).
In a real recommendation scenario, the user's interest usually shows a diversified trend, and the number of items a user interacts with is a small proportion of the total number of items, which makes it difficult to train an accurate model. Traditional recommendation methods suffer from insufficient mining of interaction information, homogenization of recommended items and data sparseness; in deep-learning-based recommendation methods, the crucial embedding operations are designed for sequence data and are no longer applicable to the graph data of real environments. In other words, neither type of recommendation method suits the increasingly complicated and networked recommendation environment.
Therefore, in a real recommendation scenario, there are the following requirements:

The user's historical behavior record contains a large number of implicit user interest feedback behaviors such as clicks, browses, favorites, and purchases. Compared with explicit feedback, implicit feedback is larger in quantity and more reflective of user interest preferences. Therefore, the designed recommendation method is required to deeply mine the user's implicit interest feedback behavior to find the user's interest preference information.

When the user uses the application, each historical behavior of the user has a timestamp, that is, the user's historical behavior has a strong order. Therefore, the designed recommendation method is required to preserve the sequence of user behavior.

The recommendation system in a dynamic environment changes from moment to moment, and its change causes the corresponding directed weighted item graph to change, which brings huge challenges to the graph embedding work. Therefore, the proposed recommendation algorithm has to seek an intermediate state to track and retain changes in the item graph structure.

Some items have little or no interaction with users, and it is difficult for deep neural networks to mine their information. Therefore, the designed recommendation algorithm is required to train accurate models for these items with little interaction.
III-B. The design of SDGEM
III-B1. The definition of related issues in SDGEM
Definition 1 (Graph): A graph consists of a finite non-empty set of vertices and a set of edges between vertices, usually expressed as G = (V, E), where G represents the graph, V is the set of vertices in graph G, and E is the set of edges in graph G.
If the edge between the vertices v_i and v_j has no direction, the edge is called an undirected edge; otherwise it is called a directed edge. If all edges in the graph are undirected, we call the graph an undirected graph. Similarly, if all edges in the graph are directed, we call the graph a directed graph.
Some edges of the graph may have numbers associated with them; we generally call such a number the weight of the edge. These weights can represent the distance or cost from one vertex to another, and a graph with such weights is usually called a weighted graph. In a weighted graph, we record the weight of the edge between the vertices v_i and v_j as w_ij. The value of the weight is usually non-negative: if the edge exists, w_ij > 0, otherwise w_ij = 0. In general, we record the weighted graph as G = (V, E, W).
Definition 2 (Graph Embedding): Given a graph G = (V, E), the essence of graph embedding is a mapping f: V → R^d, d ≪ |V|, where the function f retains some proximity measure defined on the graph G. That is, graph embedding maps each vertex to a low-dimensional feature vector space and attempts to preserve the connection strength relationships between vertices.
Consider two pairs of vertices (v_i, v_j) and (v_i, v_k) whose proximity is related to the connection strength, and suppose w_ij > w_ik. Then, after mapping into the embedding space, the distance between f(v_i) and f(v_j) is closer than the distance between f(v_i) and f(v_k).
Notation  Description
G         The graphical representation of data
V         The set of vertices in graph G
E         The set of edges in graph G
W         The set of weights of edges in graph G
|V|       The number of vertices in graph G
v_i       One of the vertices in graph G
e_ij      The edge between the vertices v_i and v_j
w_ij      The weight of the edge e_ij
d         The dimension of the embedding space
III-B2. The construction of the directed weighted item graph
After obtaining the standard user historical behavior sequences, we can proceed to build a directed weighted item graph. In the standard historical behavior sequence of the same user, we record two consecutive items v_i and v_j as the item pair (v_i, v_j). For each item pair (v_i, v_j), if there is no edge between the two items, we add the directed edge e_ij, whose direction points from the item with the earlier timestamp to the item with the later timestamp, and record its weight w_ij as 1; if an edge already exists between v_i and v_j, no new edge is added and the original weight of the edge is increased by 1. The edge weight operation for the item pair (v_i, v_j) is organized into mathematical form as follows:
    w_ij = 1,          if e_ij is not yet in E
    w_ij = w_ij + 1,   if e_ij is already in E        (1)
Specifically, the weight of the final directed weighted item graph is equal to the number of occurrences of the related item pair in the historical purchase behavior of all users; that is, the weight w_ij is equal to the frequency with which v_i is converted to v_j in the purchase histories of all users. The directed weighted item graph constructed in this way can preserve the context of the items in the users' purchase histories and the similarities between different items.
Fig. 1 shows the historical purchase behavior sequences of 4 users. For the first user, there is initially no edge between the items of her consecutive purchase pairs, so we add a directed edge for each pair and set its weight to 1. By analogy, directed edges are added and weights updated for the purchase pairs of the other three users. Fig. 2 is the directed weighted item graph constructed according to the historical purchase behavior sequences of the 4 users shown in Fig. 1. It is worth noting that one item pair appears twice in the historical purchase behavior of all users, so the weight of its directed edge is 2. Another interesting point is that two opposite directed edges between a pair of items form a closed loop.
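The construction procedure above can be sketched as follows. The item names and sequences are illustrative stand-ins, not the actual data of Fig. 1:

```python
from collections import defaultdict

def build_item_graph(user_sequences):
    """Build a directed weighted item graph: the weight of edge (a, b) is how
    often item a is immediately followed by item b across all users'
    time-ordered behavior sequences."""
    weights = defaultdict(int)
    for seq in user_sequences:
        for a, b in zip(seq, seq[1:]):      # consecutive item pairs
            weights[(a, b)] += 1
    return dict(weights)

# toy sequences for 4 hypothetical users
sequences = [["D", "A", "B"], ["B", "E"], ["D", "E", "C"], ["D", "A"]]
graph = build_item_graph(sequences)
```

Here the pair ("D", "A") occurs twice, so its edge weight is 2, mirroring the repeated-pair case described above.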
III-B3. The design of random walk with unequal probability
In the abstract conceptual model of random walk, it may be difficult to predict the occurrence of a single random event, but we can confirm the distribution of a large number of random events. That is to say, for a single random event we may only predict that outcomes differ, but over a large number of random events we can predict the overall feature similarity. Therefore, random walks can be used to capture the topology of directed weighted item graphs. As the name implies, a random walk selects a vertex in the graph as the first step, and then moves randomly along the edges. Truncated random walks define the maximum length of all walk sequences.
In the static recommendation environment, after constructing the directed weighted item graph, we randomly sort all vertices in the vertex set V several times. For each randomly sorted vertex sequence, we take each vertex in turn as the starting vertex of a random walk and move to an adjacent vertex according to the transition probability until the random walk reaches the required length. The transition probability of the random walk with unequal probability is the proportion of the weights of the adjacent outgoing edges; that is to say, the transition probability along an edge with a larger weight will be higher. The mathematical expression of the transition probability of the random walk with unequal probability is:
    P(v_j | v_i) = w_ij / Σ_{e_ik ∈ N+(v_i)} w_ik,   if e_ij ∈ N+(v_i);   0 otherwise        (2)
Among them, N+(v_i) represents the set of all directed edges that go out from the vertex v_i; a hyperparameter additionally controls whether the walk stays at the current vertex. Fig. 3 is generated by random walk with unequal probability according to the directed weighted item graph shown in Fig. 2.
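A minimal sketch of the weight-proportional transition rule (omitting the stay hyperparameter); the graph and seed are illustrative:

```python
import random

def weighted_walk(graph, start, length, rng=random.Random(0)):
    """Truncated random walk: the transition probability to each out-neighbor
    is proportional to the edge weight, as in Eq. (2)."""
    walk = [start]
    while len(walk) < length:
        out = [(dst, w) for (src, dst), w in graph.items() if src == walk[-1]]
        if not out:                     # dead end: stop the walk early
            break
        nodes, weights = zip(*out)
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

# toy directed weighted item graph: (src, dst) -> weight
graph = {("D", "A"): 2, ("A", "B"): 1, ("B", "E"): 1, ("D", "E"): 1, ("E", "C"): 1}
walk = weighted_walk(graph, "D", length=4)
```

From vertex "D", the walk is twice as likely to move to "A" as to "E", since w(D,A) = 2 and w(D,E) = 1.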
III-B4. The generation of graph embedding
The Word2Vec model is suitable for generating embedding vectors of sequential data. Therefore, after obtaining the item sequences generated by random walks, we can naturally use the Word2Vec model to obtain the graph embedding vectors of items. Here, we use the Skip-Gram model to learn the graph embedding vectors of items, as shown in Fig. 4. Its goal is to maximize the co-occurrence probability of two vertices in the obtained sequences; that is, the mathematical expression of our optimization goal is:
    maximize (1/T) Σ_{t=1}^{T} Σ_{-w ≤ j ≤ w, j ≠ 0} log p(v_{t+j} | v_t)        (3)
Where w is the window size of the context in the item sequences generated by random walks. If it is assumed that the item vertices are independent of each other, then we can get the following equation:
    p(v_{t-w}, ..., v_{t+w} | v_t) = Π_{-w ≤ j ≤ w, j ≠ 0} p(v_{t+j} | v_t)        (4)
Because the original Skip-Gram model iterates too slowly, we introduce negative sampling to accelerate the training of the item graph embeddings; the mathematical expression of our optimization goal can then be written in the following form:
    log σ(u_{v_j}ᵀ u_{v_t}) + Σ_{k=1}^{K} E_{v_k ~ P_n(v)} [ log σ(−u_{v_k}ᵀ u_{v_t}) ]        (5)
Among them, v_k is a negative sample for v_t, and σ is the sigmoid function:
    σ(x) = 1 / (1 + e^{−x})        (6)
From experience, the larger the number of negative samples K, the better the effect.
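A minimal sketch of Skip-Gram training with negative sampling over walk sequences, in the spirit of Eqs. (3)-(6). The dimension, window, learning rate and epoch count are illustrative choices, not the authors' exact setup:

```python
import numpy as np

def train_skipgram_ns(walks, dim=8, window=2, neg=3, epochs=30, lr=0.05, seed=0):
    """Skip-Gram with negative sampling over item walk sequences (a sketch)."""
    rng = np.random.default_rng(seed)
    vocab = sorted({v for w in walks for v in w})
    idx = {v: i for i, v in enumerate(vocab)}
    n = len(vocab)
    W_in = (rng.random((n, dim)) - 0.5) / dim   # center-vertex embeddings
    W_out = np.zeros((n, dim))                  # context-vertex embeddings
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for walk in walks:
            for t, center in enumerate(walk):
                c = idx[center]
                lo, hi = max(0, t - window), min(len(walk), t + window + 1)
                for j in range(lo, hi):
                    if j == t:
                        continue
                    # one positive context plus `neg` uniform negative samples
                    targets = [(idx[walk[j]], 1.0)] + [
                        (int(rng.integers(n)), 0.0) for _ in range(neg)]
                    for o, label in targets:
                        g = (sigmoid(W_in[c] @ W_out[o]) - label) * lr
                        out_old = W_out[o].copy()
                        W_out[o] -= g * W_in[c]
                        W_in[c] -= g * out_old
    return {v: W_in[idx[v]] for v in vocab}

# toy walks standing in for the random-walk item sequences
emb = train_skipgram_ns([["A", "B", "C"], ["A", "B", "D"]])
```

In practice a mature implementation (e.g., a Word2Vec library) with a frequency-based negative-sampling distribution would be used instead of this uniform sketch.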
III-C. The design of DDGEM
III-C1. The definition of related issues in DDGEM
Definition 3 (Dynamic time graph): A dynamic time graph is given as G = (V, E_T, T), where V is the set of vertices in graph G, E_T is the set of edges with time labels in graph G, and T is a function that maps the time label attached to an edge to a timestamp. For convenience, unless otherwise specified, we take T as the function that directly converts between UnixTime and real time.
Definition 4 (Temporal walk): Given a dynamic time graph G = (V, E_T, T), we record a temporal walk from vertex v_1 to vertex v_k as a sequence of vertices ⟨v_1, v_2, ..., v_k⟩, where the timestamps of the traversed edges always satisfy the timing relationship T(v_i, v_{i+1}) ≤ T(v_{i+1}, v_{i+2}). For two arbitrary vertices v_i and v_j in the vertex set V, if there is a temporal walk from v_i to v_j, then we consider v_i and v_j to be temporally connected.
The definition of temporal walk echoes the standard definition of random walk on the directed weighted graph of the static recommendation method, except for one more constraint: the walk must follow the timing relationship, that is, the time labels of the traversed edges must be non-decreasing.
Definition 5 (Dynamic graph embedding): Given a dynamic time graph G = (V, E_T, T), the essence of dynamic graph embedding is a mapping f: V → R^d, d ≪ |V|, where the mapping f takes the vertices of the graph to d-dimensional representation vectors suitable for downstream machine learning tasks.
Notation  Description
G         The graphical representation of data with time labels
V         The set of vertices in graph G
E_T       The set of edges with time labels in graph G
T         The function mapping a time label to a timestamp
|V|       The number of vertices in graph G
v_i       One of the vertices in graph G
e_ij      The edge between the vertices v_i and v_j
w_ij      The frequency weight of the edge e_ij
t_ij      The time weight of the edge e_ij
d         The dimension of the embedding space
III-C2. The enhancement of the directed weighted item graph
After obtaining the standard user historical behavior sequences, we can proceed to build a dynamic directed weighted item graph. In the standard historical behavior sequence of the same user, we record two consecutive items v_i and v_j as the item pair (v_i, v_j). For each item pair (v_i, v_j), if there is no edge between the two items, we add the directed edge e_ij, whose direction points from the item with the earlier timestamp to the item with the later timestamp; the frequency weight w_ij of e_ij is assigned the value 1, and the later timestamp is added to the time weight list t_ij. If an edge already exists between v_i and v_j, no new edge is added, the original frequency weight of e_ij is increased by 1, and the later timestamp is appended to the time weight list t_ij. For the edge weight operation of the item pair (v_i, v_j), the frequency weight is organized into mathematical form as follows:
    w_ij = 1,          if e_ij is not yet in E_T
    w_ij = w_ij + 1,   if e_ij is already in E_T        (7)
The time weight is organized into mathematical form as follows:
    t_ij = {t},          if e_ij is not yet in E_T
    t_ij = t_ij ∪ {t},   if e_ij is already in E_T        (8)
where t is the later timestamp of the item pair.
Specifically, the frequency weight of the dynamic directed weighted item graph is equal to the number of occurrences of the related item pair in the historical purchase behavior of all users; that is, the frequency weight w_ij of edge e_ij is equal to the frequency with which v_i is converted to v_j in the purchase histories of all users, and the time weight t_ij is the set of timestamps at which v_i is converted to v_j in all users' purchase histories.
Fig. 1 shows the historical purchase behavior sequences of 4 users. For the first user, there is initially no edge between the items of her consecutive purchase pairs, so we add a directed edge for each pair, set its frequency weight to 1, and add the later timestamp to the time weight list of the edge. By analogy, directed edges are added, frequency weights modified and time weights appended for the purchase pairs of the other three users. Fig. 5 is the dynamic directed weighted item graph constructed according to the historical purchase behavior sequences of the 4 users shown in Fig. 1. It is worth noting that one item pair appears twice in the historical purchase behavior of all users, so the frequency weight of its directed edge is 2 and its time weight list contains two timestamps. Another interesting point is that two opposite directed edges between a pair of items form a closed loop.
III-C3. The design of the temporal random walk strategy
In the static recommendation environment, random walk is an algorithm that ignores time labels; its main purpose is to perform a fixed-length random walk from each vertex in the vertex set in order to collect a sufficient number of item sequences. In the dynamic recommendation environment, for the dynamic time graph G, the temporal walk from vertex v_1 to v_k is a sequence of vertices in which the timing relationship on edge timestamps always holds, which shows that a temporal walk requires not only a start vertex v_1 but also a start time t_1.
In the dynamic time graph, each edge is associated with a timestamp, so the selection of the starting vertex can also be regarded as the selection of the starting edge. We can draw a timestamp according to a uniform or weighted distribution, and then take the edge closest to that timestamp as our starting edge. There are two types of edge selection: unbiased and biased. The selection of the start edge is expressed in mathematical form as:
    Pr(e) = φ(T(e)) / Σ_{e' ∈ E_T} φ(T(e'))        (9)
where φ is a constant function for the uniform (unbiased) strategy, or an exponential or linear function of the timestamp for the biased strategies.
The uniform distribution is an unbiased start edge selection strategy; its essence is to select an edge from the edge set with equal probability. The exponential and linear distributions are both biased start edge selection strategies.
Selecting the start edge of the temporal walk in this way is very advantageous, because it provides a means of time-biasing the temporal walk. Therefore, when performing a downstream time series regression or classification task, the time-biased temporal walk can improve prediction performance. Not only can the selection of the start edge be unbiased or biased; the transition probability along edges during temporal walks can also be divided into the same two categories. The transition probability of edges is written in mathematical form as:
    Pr(e' | e) = φ(T(e')) / Σ_{e'' ∈ Γ(e)} φ(T(e'')),   e' ∈ Γ(e)        (10)
Among them, Γ(e) denotes the set of all directed edges going out of the head vertex of edge e. The uniform distribution is an unbiased selection strategy for adjacent edges; its essence is to select an edge with equal probability from the set of adjacent edges. The exponential and linear distributions are both biased adjacent edge selection strategies. If the function φ in the exponential distribution is monotonically increasing, the exponential distribution tends to select edges that appear later; if φ is monotonically decreasing, it tends to select edges that appear earlier. The linear distribution is defined by a linear function φ and likewise biases the selection of adjacent edges.

Given a dynamic time graph G, let W be the set of all possible random walks on G and W_T the set of all possible temporal walks on G. It is easy to see that the temporal walk set W_T is included in the random walk set W and occupies only a small part of it. The temporal walk we propose can be regarded as sampling, from the random walk set W, the walks that strictly follow the timing relationship. In some cases, a sampled random walk order may be invalid if it does not observe time dependence. For example, suppose each edge represents an interaction event between two people (for example, a purchase after sharing a shopping link); then a temporal walk can be regarded as a valid time-respecting path.
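A minimal temporal-walk sketch with the unbiased (uniform) adjacent-edge strategy; the edge list, timestamps and seed are illustrative:

```python
import random

def temporal_walk(edges, start, length, start_time=0, rng=random.Random(0)):
    """Temporal walk: at each step, only edges whose timestamp is not earlier
    than the timestamp of the previously traversed edge may be chosen."""
    walk, t = [start], start_time
    while len(walk) < length:
        valid = [(dst, ts) for (src, dst, ts) in edges
                 if src == walk[-1] and ts >= t]
        if not valid:                   # no time-respecting edge: stop early
            break
        dst, t = rng.choice(valid)      # unbiased choice among valid edges
        walk.append(dst)
    return walk

# (src, dst, timestamp) triples; names and times are illustrative
edges = [("D", "A", 1), ("A", "B", 2), ("A", "C", 1), ("B", "E", 3)]
walk = temporal_walk(edges, "D", length=4)
```

Replacing `rng.choice` with a draw weighted by an increasing or decreasing function of the timestamps would give the biased (exponential or linear) variants.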
III-C4. The processing of solitary points
A cold start item, i.e., an item without user interaction, appears in the directed weighted graph as a solitary point. Learning accurate embedding vectors for cold start items is still a challenge. To solve the cold start problem, we use the auxiliary information attached to a cold start item to enhance the item's graph embedding. In general, items with similar auxiliary information should be closer in the embedding space. Based on this assumption, we propose a method for embedding auxiliary information. We use W to denote the embedding matrix of items or auxiliary information. Specifically, W_v^0 represents the embedding vector of item v, and W_v^s represents the embedding of the s-th type of auxiliary information attached to v. Then, for an item with n kinds of auxiliary information, we have n+1 vectors W_v^0, ..., W_v^n ∈ R^d, where d is the embedding dimension. Based on experience, we set the dimensions of the item embedding vector and the auxiliary information embedding vectors to be the same. To merge the auxiliary information, we concatenate these embedding vectors of v and add a layer with an average pooling operation to aggregate all embeddings related to v:
    H_v = (1 / (n + 1)) Σ_{s=0}^{n} W_v^s        (11)
Where H_v is the aggregated embedding of item v. In this way, we merge the auxiliary information so that items with similar auxiliary information are closer in the embedding space.
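The average pooling of Eq. (11) can be sketched as follows; the vectors are toy values, not trained embeddings:

```python
import numpy as np

def aggregate_embedding(item_vec, side_info_vecs):
    """Average-pool the item embedding with its n auxiliary-information
    embeddings; all vectors share the same dimension d."""
    stacked = np.vstack([item_vec] + list(side_info_vecs))
    return stacked.mean(axis=0)

item = np.array([1.0, 0.0])                       # W_v^0
side = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]  # W_v^1, W_v^2
agg = aggregate_embedding(item, side)
```

For a solitary point, `item` may be randomly initialized while `side` carries the informative signal, pulling items with similar auxiliary information together.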
III-D. The architecture of the deep neural network
Static mode and dynamic mode share the same deep neural network module. The recommendation model we adopt is a basic framework based on embeddings and multilayer perceptrons.
The graph embedding module we designed is used as part of preprocessing: its function is to pre-train the embedding feature vector of each item and concatenate it with the other feature vectors before using it as the input of the deep learning network. After obtaining the dense overall representation vector, we use fully connected layers to automatically learn combined features.
Among them, we add an attention mechanism. A deep neural network with an attention mechanism values the relevant behavior history and can even ignore irrelevant history. It is intuitive to reflect such ideas in the model. Under the previous approach, all behavior records are considered to have the same effect; in the model this corresponds to using an average pooling layer to average the embedding vectors of all products the user has interacted with to form the user's vector. Alternatively, timestamps can be used to give the latest behavior a greater impact, which corresponds to adjusting the weights by time during average pooling. In a traditional attention mechanism, given two item embedding vectors u and v, one usually computes the dot product u·v directly, or u^T W v, where W is a d × d weight matrix. We make a further improvement, shown in Fig. 6: the two embedding vectors and their elementwise difference vector are concatenated as input and fed to a fully connected layer, which finally outputs the attention weights.
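A sketch of this attention variant: the candidate embedding, a history embedding and their elementwise difference are concatenated and fed through a small fully connected layer to produce a scalar weight. The layer shapes, the ReLU activation and the softmax over history items are assumptions for illustration:

```python
import numpy as np

def attention_weight(q, k, W1, b1, w2, b2):
    """Score one history item k against candidate q via a small MLP over
    the concatenation [q, k, q - k]."""
    x = np.concatenate([q, k, q - k])
    h = np.maximum(0.0, W1 @ x + b1)    # ReLU hidden layer
    return float(w2 @ h + b2)           # scalar attention score

rng = np.random.default_rng(0)
d = 4
q = rng.random(d)                       # candidate item embedding
hist = [rng.random(d) for _ in range(3)]  # user's history item embeddings
W1, b1 = rng.random((8, 3 * d)), np.zeros(8)
w2, b2 = rng.random(8), 0.0

scores = np.array([attention_weight(q, k, W1, b1, w2, b2) for k in hist])
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the history
user_vec = sum(w * k for w, k in zip(weights, hist))
```

The weighted sum `user_vec` replaces plain average pooling, so relevant history items contribute more to the user representation.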
In the training process, the objective function we use is the negative log-likelihood function, which is defined as:
    L = −(1/N) Σ_{(x, y) ∈ S} ( y log p(x) + (1 − y) log(1 − p(x)) )        (12)
Among them, S is the training set of size N, x is the input of the deep neural network, y ∈ {0, 1} is the label, and p(x) is the output of the deep neural network after the softmax layer. The trained deep neural network is used to predict the user's next item of interest.
IV. Experiment
IV-A. Data Set
In order to verify the performance of DGEM, we adopt the Amazon public dataset as the benchmark dataset. The dataset contains a product review dataset and a product information dataset; the product review dataset includes a total of 142.8 million reviews posted on the Amazon website from May 1996 to July 2014. We finally selected the electronics subset of Amazon's public data for our experiments. The Amazon electronics subset likewise contains a product review dataset (reviewsElectronics) and a product information dataset (metaElectronics). The product review dataset contains information such as reviewer ID, product ID, review usefulness rating, review text and review time; the product information dataset contains product ID, product name, product pictures, product category lists and product descriptions.
In the static recommendation environment, for the product review dataset in the Amazon electronics subset, we only retain three types of information: reviewer ID (reviewerID), product ID (asin) and Unix review time (unixReviewTime). As for the product information dataset, we only keep two types of information: product ID (asin) and product category (categories). Considering that some product IDs appear in the product information dataset but not in the product review dataset, we preprocess the product information dataset: only product IDs that appear in the product review dataset are retained, and duplicates are removed. After preprocessing, the Amazon electronics subset contains 192,403 users, 63,001 products, 801 categories and 1,689,188 samples. Each reviewer has published at least 5 reviews, and each product has at least 5 reviews.
In the dynamic recommendation environment, the preprocessing differs from the static case. Product IDs that appear in the product information data set but not in the product review data set are extracted separately as solitary points, while product IDs that appear in both data sets are retained with duplicates removed. We also remove roughly a third of the edges between items, namely those with the latest timestamps, to simulate a dynamic recommendation environment.
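The dynamic preprocessing can be sketched the same way. The edge representation and function name below are illustrative assumptions; the point is simply that solitary points are set aside and the latest third of the item-graph edges is withheld to replay later as "dynamic" updates.

```python
# Dynamic-environment preprocessing sketch (illustrative structures):
# product IDs present only in the product information set become
# solitary points, and roughly the latest third of the item-item
# edges (by timestamp) is withheld to simulate graph growth.

def preprocess_dynamic(review_asins, meta_asins, timestamped_edges):
    """review_asins / meta_asins: sets of product IDs.
    timestamped_edges: list of (src, dst, unix_time) item-graph edges.
    Returns (solitary_points, initial_edges, future_edges)."""
    solitary_points = meta_asins - review_asins
    edges = sorted(timestamped_edges, key=lambda e: e[2])
    cut = len(edges) - len(edges) // 3   # keep the earliest two thirds
    return solitary_points, edges[:cut], edges[cut:]
```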
IV-B Comparison methods
We compare DGEM with the following algorithms:

LR [14]. Logistic regression is an online algorithm for generalized linear models and was the standard baseline model before the rise of deep learning.

BaseModel. BaseModel follows the Embedding&MLP architecture and is the basis of most subsequent deep networks for click-through rate modeling; it serves as the reference point for our model comparison.

Wide&Deep [7]. The Wide&Deep model jointly trains a shallow model and a deep model: the shallow part is a basic linear model used mainly to capture cross features, while the deep part is a feed-forward neural network used mainly for feature generalization.

PNN [22]. The PNN model introduces a product layer that extracts linear and nonlinear features together: the linear features come from the embedding layer, while the nonlinear features come from inner and outer products.

DeepFM [11]. In the DeepFM model, the factorization machine part extracts the first-order features and the second-order features formed by their pairwise combinations, while the multilayer perceptron fully connects the input features to extract high-order features. In other words, DeepFM combines the advantages of deep and wide models: by having the factorization machine and the multilayer perceptron share the feature embedding vectors in joint training, it learns low-order and high-order feature combinations at the same time. This end-to-end model eases the pressure of feature engineering.

DeepWalk [20]. DeepWalk uses random walks to transform networked relationships into sequential relationships for subsequent processing.

LINE [26]. By introducing the first-order and second-order proximity relations into the objective function, LINE makes the final distribution of the learned item embeddings more balanced and smoother.

Node2Vec [10]. Node2Vec improves DeepWalk by combining depth-first and breadth-first search, so that the final embedding can express both the global and the local structure of the network.
IV-C Evaluation indices
In the field of binary classification, AUC is a widely used indicator. In rank-statistic form it is computed as

(13)   \mathrm{AUC} = \frac{\sum_{i \in P} \mathrm{rank}_i - \frac{M(M+1)}{2}}{M \times N}

where P is the set of positive samples, M and N are the numbers of positive and negative samples, and \mathrm{rank}_i is the rank of sample i when all samples are sorted by predicted score in ascending order.
In order to make better use of the AUC index for our model, we introduce the user-weighted AUC variant, which judges the model by averaging the AUC of each user, weighted by the user's number of samples. Its mathematical expression is:

(14)   \mathrm{GAUC} = \frac{\sum_{i=1}^{n} w_i \cdot \mathrm{AUC}_i}{\sum_{i=1}^{n} w_i}

where n is the number of users, \mathrm{AUC}_i is the AUC computed on the samples of user i, and w_i is the number of samples of user i.
In order to better evaluate the relative improvement between models, we also introduce the RelaImpr indicator, in which 0.5 is the AUC of a random model. Its mathematical expression is:

(15)   \mathrm{RelaImpr} = \left( \frac{\mathrm{AUC}(\text{measured model}) - 0.5}{\mathrm{AUC}(\text{base model}) - 0.5} - 1 \right) \times 100\%
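The three indices above are straightforward to compute; here is a minimal pure-Python sketch (function names are illustrative, and score ties are not handled specially):

```python
# Evaluation-index sketch: AUC in rank-statistic form, GAUC as the
# per-user AUC weighted by each user's sample count, and RelaImpr
# relative to a base model, with 0.5 as the AUC of a random model.

def auc(labels, scores):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank_sum = sum(r + 1 for r, i in enumerate(order) if labels[i] == 1)
    m = sum(labels)          # number of positive samples
    n = len(labels) - m      # number of negative samples
    return (rank_sum - m * (m + 1) / 2) / (m * n)

def gauc(per_user):
    """per_user: list of (labels, scores) pairs, one pair per user."""
    num = sum(len(labels) * auc(labels, scores) for labels, scores in per_user)
    return num / sum(len(labels) for labels, _ in per_user)

def rela_impr(model_auc, base_auc):
    return ((model_auc - 0.5) / (base_auc - 0.5) - 1) * 100  # in percent
```

As a sanity check, `rela_impr(0.8704, 0.8635)` reproduces the roughly 1.898% improvement reported for DeepFM over BaseModel in Table III.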
IV-D Experimental setup
The main part of DGEM is deployed on the e-commerce website, so that both the static and the dynamic recommendation environment can be obtained at the same time. The whole scheme consists of an item graph construction module, a random walk module, a solitary point processing module, a graph embedding module and a deep neural network module. To verify the performance of the proposed scheme we implemented all of these modules. All modules are implemented in Python 2.7, and performance is verified on a server with a GPU with more than 10 GB of memory, running TensorFlow 1.4.0.
In the item graph construction module we define two modes, static and dynamic, which share the same storage structure. Because the directed weighted item graph is a large sparse graph, we store it as an adjacency list: each vertex has a singly-linked list whose nodes describe the vertices reached by directed edges from that vertex. Each node records the connected vertex, a time weight and a frequency weight. In static mode the connected vertex and the frequency weight of each node are active; in dynamic mode the connected vertex and the time weight are active.
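The shared storage structure can be sketched as follows. This is a simplified illustration, not the deployed implementation: Python lists stand in for the singly-linked lists, and the class and field names are assumptions.

```python
# Sketch of the shared adjacency-list storage: each vertex keeps a
# list of edge records, and every record carries the connected
# vertex plus both a time weight and a frequency weight. Which
# weight is exposed depends on the mode.

from collections import defaultdict

class ItemGraph:
    def __init__(self, mode="static"):
        assert mode in ("static", "dynamic")
        self.mode = mode
        self.adj = defaultdict(list)   # vertex -> list of edge records

    def add_edge(self, src, dst, time_weight=0.0, freq_weight=1.0):
        for rec in self.adj[src]:
            if rec["to"] == dst:       # repeated transition: bump frequency
                rec["freq"] += freq_weight
                rec["time"] = max(rec["time"], time_weight)
                return
        self.adj[src].append({"to": dst, "time": time_weight, "freq": freq_weight})

    def neighbors(self, src):
        # static mode activates frequency weights, dynamic mode time weights
        key = "freq" if self.mode == "static" else "time"
        return [(rec["to"], rec[key]) for rec in self.adj[src]]
```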
In the random walk module we also define static and dynamic modes. In static mode we set the walk length to 12 and the number of walks per vertex to 20. By sampling 20 random walk sequences for each vertex of the item graph, we obtain a set of item sequences with a maximum length of 12 that implies the high-order proximity relationships between items. In dynamic mode the walk length is also 12, and the start edge and start time are selected in an unbiased manner. By introducing a time state, we can track the growth of the item graph as the timestamp advances and thereby capture its dynamic changes. A random walk sequence sampled in the dynamic environment not only implies high-order proximity between items; it also preserves the strict temporal ordering, and additional temporal walks can be performed as the graph changes to meet the system's scalability requirements.
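A minimal sketch of the static-mode sampler follows. It assumes the adjacency structure is available as a plain dict of weighted out-edges; the unequal transition probabilities come from drawing the next vertex in proportion to the frequency weight.

```python
# Static-mode walk sketch: `num_walks` walks per vertex, each of
# length at most `walk_length`, with the next step drawn with
# probability proportional to the out-edge frequency weight.

import random

def static_walks(neighbors, vertices, num_walks=20, walk_length=12, seed=0):
    """neighbors: dict vertex -> list of (next_vertex, frequency_weight)."""
    rng = random.Random(seed)
    walks = []
    for v in vertices:
        for _ in range(num_walks):
            walk = [v]
            while len(walk) < walk_length:
                outs = neighbors.get(walk[-1], [])
                if not outs:              # dead end: stop this walk early
                    break
                nodes, weights = zip(*outs)
                walk.append(rng.choices(nodes, weights=weights, k=1)[0])
            walks.append(walk)
    return walks
```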
Static and dynamic mode share the same graph embedding module, which is essentially the shallow neural network of Word2Vec, specifically a Skip-Gram model. We set the embedding dimension to 180 and the context window size to 20; the input is an item sequence and the output is the item embedding vectors. In essence, the entire graph embedding module is a mapping function that guarantees that items with high-order proximity, whether embedded from the static or the dynamic graph, lie close together in the vector space.
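For concreteness, here is a tiny pure-Python Skip-Gram with negative sampling standing in for the graph embedding module. It is a sketch only: the paper uses dimension 180 and window 20, whereas tiny defaults are used here, and all names are illustrative.

```python
# Minimal Skip-Gram with negative sampling over item sequences.
# Each (center, context) pair from the window is a positive example;
# random vocabulary items serve as negatives.

import math, random

def skipgram_embeddings(sequences, vocab, dim=8, window=2,
                        negatives=2, epochs=10, lr=0.05, seed=0):
    rng = random.Random(seed)
    idx = {item: i for i, item in enumerate(vocab)}
    w_in = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in vocab]
    w_out = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in vocab]

    def sgd_pair(c, t, label):
        # logistic-loss gradient step on one (center, target) pair
        dot = sum(a * b for a, b in zip(w_in[c], w_out[t]))
        grad = (1.0 / (1.0 + math.exp(-dot)) - label) * lr
        for d in range(dim):
            w_in[c][d], w_out[t][d] = (w_in[c][d] - grad * w_out[t][d],
                                       w_out[t][d] - grad * w_in[c][d])

    for _ in range(epochs):
        for seq in sequences:
            ids = [idx[t] for t in seq]
            for p, center in enumerate(ids):
                lo, hi = max(0, p - window), min(len(ids), p + window + 1)
                for ctx in ids[lo:p] + ids[p + 1:hi]:
                    sgd_pair(center, ctx, 1.0)            # positive pair
                    for _ in range(negatives):            # sampled negatives
                        sgd_pair(center, rng.randrange(len(vocab)), 0.0)
    return w_in, idx
```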
IV-E Experimental performance comparison
We repeated every comparison method 8 times and take the average of 5 of the results as the reported value. Table III shows the experimental results of the various models on the Amazon electronic product data subset.
In the static recommendation environment, all deep networks greatly outperform the logistic regression model, which confirms the effectiveness of deep learning for the recommendation problem. Among the deep learning recommenders, PNN and DeepFM, with their specially designed structures, are better than Wide&Deep. However, our proposed static graph embedding recommendation method, SDGEM, performs best among all compared methods. We attribute this to the graph embedding design for the static environment: it discovers high-order proximity between items through random walks with unequal probability, and the captured high-order proximity is weighted more effectively with the support of the attention mechanism. Through this mechanism, SDGEM obtains an adaptively changing representation of user interest, which greatly improves the expressive power of the model compared with the other deep networks.
TABLE III: Experimental results of the various models on the Amazon electronic product data subset. RelaImpr is computed against BaseModel for the static group and against DeepWalk for the dynamic group.

Model       AUC      RelaImpr    GAUC     RelaImpr
BaseModel   0.8635    0.0000%    0.8615    0.0000%
LR          0.7759  -24.0990%    0.7741  -24.1770%
Wide&Deep   0.8645    0.2751%    0.8622    0.1936%
PNN         0.8658    0.6327%    0.8639    0.6639%
DeepFM      0.8704    1.8982%    0.8684    1.9087%
SDGEM       0.8891    7.0426%    0.8869    7.0263%
DeepWalk    0.8456    0.0000%    0.8319    0.0000%
LINE        0.7366  -31.5394%    0.7202  -33.6547%
Node2Vec    0.8607    4.3692%    0.8489    5.1220%
DDGEM       0.8571    3.3275%    0.8552    7.0202%
In the dynamic recommendation environment, note that temporal walks are time-biased: the random walk sampling favors edges that appear later in time. Here DDGEM always performs better than DeepWalk, Node2Vec and LINE, which means that ignoring time information loses important information. Somewhat surprisingly, LINE shows no advantage on this problem; we estimate that this is because LINE does not take high-order proximity into account and therefore introduces a larger error. Compared with the static recommendation system, the AUC and GAUC of the dynamic system are lower, but given that it introduces a time dimension the static system lacks and achieves system scalability, these performance losses are within an acceptable range. In other words, DDGEM makes better use of the temporal dependencies between items and improves recommendation performance in a dynamic environment. This also shows, from another angle, that including time dependencies in the graph is very important for learning an appropriate and meaningful network representation.
DGEM is a graph embedding model, and the graph embedding is generally trained with the Word2Vec model. Word2Vec can be divided into Skip-Gram and CBOW, and its optimization methods into negative sampling and hierarchical softmax. There are therefore four methods for training the graph embedding: Skip-Gram with negative sampling (SGN), Skip-Gram with hierarchical softmax (SGH), CBOW with negative sampling (CBOW-N) and CBOW with hierarchical softmax (CBOW-H).
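The four variants correspond to standard Word2Vec hyper-parameter settings. The mapping below uses gensim-style parameter names (`sg`, `hs`, `negative`), which is an assumption about the implementation rather than something the paper states: `sg` selects Skip-Gram (1) versus CBOW (0), and `hs`/`negative` select hierarchical softmax versus negative sampling.

```python
# Hypothetical gensim-style hyper-parameter mapping for the four
# graph-embedding training variants (names are assumptions).
TRAIN_VARIANTS = {
    "SGN":    dict(sg=1, hs=0, negative=5),   # Skip-Gram, negative sampling
    "SGH":    dict(sg=1, hs=1, negative=0),   # Skip-Gram, hierarchical softmax
    "CBOW-N": dict(sg=0, hs=0, negative=5),   # CBOW, negative sampling
    "CBOW-H": dict(sg=0, hs=1, negative=0),   # CBOW, hierarchical softmax
}
```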
Fig. 7 shows the AUC and GAUC of the different graph embedding training methods on the Amazon electronic product data subset. We find that the choice of training method has little effect on the model, with Skip-Gram with negative sampling performing best.
We also conducted experiments on the proposed attention mechanism, comparing the pure static graph embedding method with the static graph embedding method with attention on the Amazon electronic product data subset. The results are shown in Fig. 8: the static graph embedding method with the attention mechanism obtains more competitive AUC and GAUC results. In summary, for our model, the attention-based static graph embedding recommendation method enhances the model's expressive ability and improves the recommendation effect.
For the static graph embedding method we mainly analyze the impact of different dropout probabilities on GAUC; for the dynamic graph embedding recommendation method we mainly analyze the impact of different random walk lengths on GAUC. Fig. 9 shows the results of the static recommendation models under different dropout probabilities on the Amazon electronic product data subset: for all of them, a dropout probability of 0.5 or 0.6 gives the best recommendation effect.
Fig. 10 shows the results of the dynamic recommendation models under different random walk lengths on the Amazon electronic product data subset. When the length is 2, none of the models performs well; as the length increases, the recommendation effect of every model except LINE grows steadily, finally stabilizing at a length of 10 or 12. The LINE curve stays flat because LINE only considers the first-order and second-order proximity between items, so longer walks do not help it.
V Conclusion
In order to overcome the convergence problems caused by training the embedding layer together with the entire neural network, and the fact that current sequence embedding methods no longer fit the actual situation, we propose a new solution, DGEM. In the static recommendation environment, SDGEM extracts the graph structure by building a directed weighted item graph, captures the vertex attributes of the item graph with unequal-probability random walks to generate item sequence data, produces the item graph embedding vectors with the Word2Vec method, and finally feeds them to a deep neural network for recommendation. The experimental results show that SDGEM can mine the high-order proximity between items and enhance the expressive ability of the model. In the dynamic recommendation environment, DDGEM introduces a time state to track updates of the item graph, improves the unequal-probability random walk strategy of the static environment to capture vertex attributes in the dynamic item graph, and adds auxiliary information to strengthen the representation of special solitary points. In this way the temporal dependencies between items are better exploited and the recommendation performance in a dynamic environment is improved.
Undeniably, the scale that recommendation systems must handle keeps growing, and the numbers of users and items can easily reach tens of millions. If the sparsity of a recommendation system is measured as the proportion of observed interactions among all possible item interactions, the sparsity of the Amazon electronic product data set used in this article is only 0.0042%. Data sparsity is a problem inherent to recommendation systems and cannot be completely overcome in essence. In future work we hope to introduce more associations between items to better cope with data sparsity. We also hope to design an adaptive algorithm that uses better labels to track graph structure updates without accumulating errors, achieving true scalability.
References
 [1] (2013) Distributed large-scale natural graph factorization. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13, pp. 37–48.
 [2] (2016) Item2Vec: neural item embedding for collaborative filtering. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6.
 [3] (2001) Laplacian eigenmaps and spectral techniques for embedding and clustering. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, NIPS'01, pp. 585–591.
 [4] (1998) Ganging up on information overload. Computer 31 (4), pp. 106–108.
 [5] (2015) GraRep: learning graph representations with global structural information. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM '15, pp. 891–900.
 [6] (2017) HARP: hierarchical representation learning for networks.
 [7] (2016) Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, DLRS 2016, pp. 7–10.
 [8] (2007) Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering 19 (3), pp. 355–369.
 [9] (2018) Real-time personalization using embeddings for search ranking at Airbnb. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pp. 311–320.
 [10] (2016) Node2vec: scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pp. 855–864.
 [11] (2017) DeepFM: a factorization-machine based neural network for CTR prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pp. 1725–1731.
 [12] (2007) Graph evolution: densification and shrinking diameters. ACM Trans. Knowl. Discov. Data 1 (1), pp. 2–es.
 [13] (2016) Discriminative deep random walk for network classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 1004–1013.
 [14] (2013) Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD).
 [15] (2013) Efficient estimation of word representations in vector space. In Proceedings of the 1st International Conference on Learning Representations (ICLR).
 [16] (2013) Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pp. 3111–3119.
 [17] (2005) A measure of betweenness centrality based on random walks. Social Networks 27 (1), pp. 39–54.
 [18] (2016) Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pp. 1105–1114.
 [19] (2016) Tri-party deep network representation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pp. 1895–1901.
 [20] (2014) DeepWalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710.
 [21] (2016) Walklets: multiscale graph embeddings for interpretable network classification. CoRR abs/1605.02115.
 [22] (2018) Product-based neural networks for user response prediction over multi-field categorical data. ACM Trans. Inf. Syst. 37 (1).
 [23] (2014) Word2vec parameter learning explained. arXiv preprint arXiv:1411.2738.
 [24] (2003) Early modern information overload. Journal of the History of Ideas 64, pp. 1–9.
 [25] (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290 (5500), pp. 2323–2326.
 [26] (2015) LINE: large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, pp. 1067–1077.
 [27] (2016) Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pp. 1225–1234.
 [28] (2007) A survey of e-commerce recommender systems. In 2007 International Conference on Service Systems and Service Management, pp. 1–5.
 [29] (2016) Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pp. 40–48.
 [30] (2016) Multi-modal bayesian embeddings for learning social knowledge graphs. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pp. 2287–2293.