An Intelligent Group Event Recommendation System in Social Networks

06/16/2020
by   Guoqiong Liao, et al.

The importance of contexts has been widely recognized in recommender systems for individuals. However, most existing group recommendation models in Event-Based Social Networks (EBSNs) focus on how to aggregate group members' preferences to form group preferences. In these models, the influence of contexts on groups is considered but defined simply in a manual way, which cannot model the complex and deep interactions between contexts and groups. In this paper, we propose an Attention-based Context-aware Group Event Recommendation model (ACGER) for EBSNs. ACGER models the deep, non-linear influence of contexts on users, groups, and events through multi-layer neural networks. In particular, a novel attention mechanism is designed to enable the influence weights of contexts on users/groups to change dynamically with the events concerned. Considering that groups may have completely different behavior patterns from their members, we propose that the preference of a group needs to be obtained from indirect and direct perspectives (called indirect preference and direct preference, respectively). To obtain the indirect preference, we propose a method of aggregating member preferences based on an attention mechanism. Compared with existing predefined strategies, this method can flexibly adapt the strategy according to the events concerned by the group. To obtain the direct preference, we employ neural networks to learn it directly from group-event interactions. Furthermore, to make full use of the rich user-event interactions in EBSNs, we integrate the context-aware individual recommendation task into ACGER, which enhances the accuracy of learning user embeddings and event embeddings. Extensive experiments on two real datasets from Meetup show that ACGER significantly outperforms state-of-the-art models.


I Introduction

Event-Based Social Network (EBSN) applications, such as Meetup.com, Douban.com, and Plancast.com, have become increasingly popular in recent years. EBSNs provide online platforms for users to create, distribute, organize, and register for all kinds of social events, which promotes the success of offline interactions among users. The events here could be academic meetings, business exhibitions, dining out, movie nights, etc. In order to alleviate the problem of information overload brought by massive numbers of events, many models for recommending events to individuals have been proposed [43, 36, 21, 27].

Since people often participate in offline events in groups in real life, recommending events for groups of people has become a research focus in recent years [9, 32, 37, 33]. Traditional group recommendation methods focus on how to aggregate member preferences to form the group preference, and pay less attention to the influence of contexts (such as time, location, and social relationship) on group preferences. In fact, contexts may have important impacts on group behaviors. For example, when a group decides which restaurant is suitable for dining out, besides the type of food, contextual factors such as the restaurant’s location, the parking lot’s capacity, and the members’ free time will all influence the group’s decision.

The impacts of contexts have been widely studied in event recommender systems for individuals [43, 27, 22, 41]. These works study the influence mechanism of contexts on individuals by manually defining linear or non-linear functions. However, manually defined functions are not sufficient to capture the deep, highly non-linear interactions among entities such as users, events, and contexts. Moreover, these works focus on the impacts of contexts on individuals rather than groups, so they cannot be directly applied to event recommendation for groups.

Recently, Du et al. [12] studied the group event recommendation problem considering the influences of contexts including event content, time, location, and social relationship, and proposed a group recommendation method based on learning-to-rank technology. This method incorporates the influences of various contexts, but the influence functions are defined manually. Moreover, the method does not differentiate the influence weights of the various contexts. There is also a recent work modeling contextual influences in the field of context-aware recommendation for individuals [24]. It proposes a neural attention mechanism to model the influence weights of contexts on users and events. However, its measurement of the influence weights of contexts on users only considers the interaction between contexts and users, neglecting the impact of events, which is not consistent with reality. For example, the influence weight of the season context on users changes with the type of event: when a user is faced with winter skiing events, the influence weight of the season is large, but when the user is faced with movie events, the influence weight of the season becomes smaller.

How to acquire group preferences accurately is the core problem of group recommendation. Existing group recommendation algorithms focus on obtaining group preferences by aggregating members’ preferences, and various group aggregation strategies have been proposed, such as strategies based on social choice theory (like approval voting, least misery, average, etc. [33, 23]) and strategies considering users’ special needs or users’ expertise [2, 7, 28]. However, these works have the following two limitations: (1) Predefined group aggregation strategies lack flexibility. When the type of events concerned changes, the group may adopt a different decision-making strategy, and a predefined strategy cannot adapt to it. For example, a tour group often goes hiking in various scenic spots, and the group usually adopts the average strategy to decide the next spot (i.e., each member has equal weight in the final group decision). When the group is faced with a different type of tour project (such as surfing or skiing), members with experience in such projects may have greater weight in the final decision, and the previous average strategy is no longer applicable. Therefore, we need to study a more flexible and adaptive group aggregation strategy. (2) A group preference is not completely determined by the historical preferences of group members. For example, in real life, an individual user could choose jogging or walking as his/her leisure sport. When he/she is in a group, the group may still choose jogging or walking as its leisure sport, but may also choose playing football, basketball, or other sports that require cooperation. We analyzed the events attended by 50 groups in New York City in 2016 on the Meetup website. Fig. 1 shows the characteristics of these group events. It can be seen that a large proportion of group events in most groups are similar to the historical events of members, but there is also a proportion of group events that are not similar to the historical events of any member. This suggests that group behaviors may be completely different from individual behaviors. For the convenience of discussion, we refer to the group preference obtained from the historical preferences of group members as the group indirect preference, and to the group preference that is completely different from the historical preferences of members as the group direct preference. Therefore, a reasonable group recommendation model should model both the indirect preference and the direct preference of a group.

Fig. 1: Characteristics of group events attended by 50 groups in New York City in 2016 on the Meetup website.

To address the above problems, we propose an Attention-based Context-aware Group Event Recommendation model (ACGER) for EBSNs. The details are as follows.

In order to characterize the complex influence of contexts on users, groups, and events, we propose a deep model to learn the representations of users, groups, and events under the influence of contexts. This is motivated by the successful development of deep learning in recent years, which has shown strong representation ability in image, text, and voice data processing [20, 35, 17]. The multi-layer architecture with non-linear functions designed in our model can capture the complex and non-linear impacts of contexts on groups, users, and events.

In order to model the situation in which the influence weights of contexts may change with the type of events, inspired by the neural attention mechanism [3, 40], we design a neural attention network that learns the influence weights of contexts on users/groups from the interactions among users/groups, contexts, and events, instead of the interactions between users/groups and contexts alone.

In order to capture the group preference more accurately, we propose that the calculation of a group preference should include two aspects: the group indirect preference and the group direct preference. The former can be obtained by aggregating members’ preferences via an attention mechanism, which learns the group aggregation strategy adaptively from data, while the latter can be learned from the historical interactions between groups and events with neural networks.

In summary, the contributions of this paper are listed as follows:

1) We propose an Attention-based Context-aware Group Event Recommendation model (ACGER) for EBSNs. The model can effectively capture the complex and non-linear influence of contexts on users, groups, and events. As far as we know, this is the first work that addresses context-aware group event recommendation from the perspective of neural representation learning.

2) We design a novel neural attention mechanism, which not only models the interaction between users/groups and contexts, but also incorporates the impacts of events, so that the dynamic change of contextual weights with different events can be captured in time.

3) We propose that the calculation of a group preference should consider not only the indirect preference obtained from group members, but also the direct preference, which can be completely different from the preference of each member. To aggregate member preferences into the indirect preference, an adaptive group aggregation strategy based on a neural attention mechanism is proposed, and the group direct preference is learned from group-event interaction data by neural networks.

4) Extensive experiments on two real datasets from Meetup show that the proposed model ACGER can achieve better recommendation performance compared with the state-of-the-art models.

The rest of this paper is organized as follows. Section II reviews the related works. Section III formulates our problem and presents the framework of our proposed model ACGER. In Section IV, we elaborate on the ACGER scheme, which includes three main modules. The experiments on two real-world datasets and the performance analysis are presented in Section V. Section VI concludes this paper and points out our future work.

II Related Work

In this section, we review some works related to our problem in the literature, including conventional context-aware recommendation methods, context-aware event recommendation for individuals, and context-aware event recommendation for groups.

II-A Conventional Context-Aware Recommendation Methods

Context in the recommender system domain refers to any information that can be used to characterize the situation of an entity. An entity is a person, a place, or an object that is considered relevant to the interaction between a user and a recommender system [11]. Context-aware recommendation algorithms can be classified into three main algorithmic paradigms according to the phase in which contextual information is incorporated: contextual pre-filtering, contextual post-filtering, and contextual modeling [32].

In the contextual pre-filtering paradigm, contextual information is used for data selection or data construction. Then, ratings can be predicted using any traditional Two-Dimensional (2D) User-Item recommender system on the selected data. One early work is [1]. It proposes a reduction-based approach, which reduces the problem of multidimensional (MD) contextual recommendations to the standard 2D recommendation space. In this work, the authors also tried to combine several contextual pre-filters into one model at the same time, which provides significant performance improvements over single pre-filter approaches. [4] proposes the User Splitting technique, which splits the user profile into several sub-profiles, each representing the user in a particular context. [6] proposes the Item Splitting technique, which splits each item into several fictitious items based on the contexts. [44] splits both users and items in the data set to boost context-aware recommendations.

In the contextual post-filtering paradigm, contextual information is initially ignored, and any traditional 2D recommender system can be used on the entire data to predict the ratings. Then, the recommendation result is adjusted using the contextual information. [25] introduces two contextual post-filtering methods: Weight and Filter. The Weight method adjusts the recommendation list by reordering the recommended items according to their probability of relevance in the specific context, and the Filter method filters out recommended items that have a low probability of relevance in the specific context. One important benefit of both contextual pre-filtering and post-filtering approaches is that all previous research on 2D recommender systems can be directly applied. However, these approaches require manual supervision and fine-tuning in the recommendation process.

In the contextual modeling paradigm, contextual information is incorporated directly into the recommendation model as an explicit predictor of a user’s rating for an item. Some studies contextualize the Matrix Factorization (MF) approach. [5] presents Context-Aware Matrix Factorization (CAMF), which extends MF by considering the influence of contexts on items. [45] extends the matrix factorization method SLIM (Sparse Linear Method) to Contextual SLIM (CSLIM), incorporating contextual conditions for the top-N recommendation task. However, these approaches cannot handle the ternary relational nature of the data. Tensor Factorization (TF) is an extension of MF techniques that incorporates different contexts as multifaceted user-item interactions in the recommendation process. One classical method is Multiverse Recommendation [19], which relies on Tucker decomposition and can work with any categorical context. To address implicit feedback, [34] proposed a ranking-based TF model that directly maximizes Mean Average Precision. Another significant work is proposed by [30]. It applies Factorization Machines (FM) to model the interactions between each pair of entities in terms of their latent factors, such as user-user, user-item, and user-context interactions.

As far as recommendation methods in EBSNs are concerned, the existing works mostly adopt the paradigm of contextual modeling, which directly integrates context information into models to characterize the contextual influences.

II-B Context-Aware Event Recommendation for Individuals

Many models have been proposed in the field of context-aware event recommendation for individuals. Qiao et al. [27] proposed a latent factor model to model online and offline social relations, geographical features of events, and implicit feedback of users in event-based social networks, so as to recommend offline events for users. Macedo et al. [22] extracted social, content, temporal, and geographical features, and then used learning-to-rank technology to combine these contextual signals to generate event recommendations. Zhang et al. [43] formulated the cold-start event recommendation problem, using Bayesian Poisson factorization as the basic unit to model different contextual factors, and further combined those units into a unified model through collective matrix factorization. Xu et al. [41] proposed a semantic-enhanced and context-aware hybrid collaborative filtering method, which combines semantic content analysis and contextual event influence for user neighborhood selection. Cao et al. [10] combined multiple topological, temporal, spatial, and semantic features to model user preferences, which alleviates the problem of data sparseness in EBSNs. With the successful application of deep learning and representation learning in the fields of image, speech, and natural language processing in recent years [20, 35, 17], some researchers have also applied these techniques to context-aware event recommendation. Wang et al. [38] considered the temporal and spatial effects of events, and mapped events, locations, and time into a low-dimensional space based on event sequence data through representation learning. Wang et al. [39] utilized a convolutional neural network with word embeddings to extract high-level features from the contextual information of a user's interested events and built a user latent model for each user; they then incorporated the user latent models into a probabilistic matrix factorization model to obtain more accurate recommendations. However, none of these methods considers the influence of contexts on group preferences or the characteristics of the group recommendation task (e.g., how to aggregate different preferences of members into a consistent group preference). Therefore, they cannot be directly applied to group recommendation in EBSNs.

II-C Context-Aware Event Recommendation for Groups

Group recommendation in EBSNs has attracted more and more attention in recent years. Yuan et al. [42] proposed a probabilistic model, COM, to simulate the generative process of group activities and perform group recommendation. Purushotham et al. [26] proposed a collaborative filtering algorithm based on a Bayesian model to recommend events for groups considering the latent topics of groups. Ji et al. [18] proposed a topic-based probabilistic model for group recommendation, in which the group preference considers not only the interests of members, but also the interests of subgroups. Du et al. [13] proposed a probabilistic generative model to jointly learn groups’ content preferences and venue preferences, and discovered a strong correlation between organizers and textual contents. The above methods focus on modeling the generative process of group preferences by utilizing the interactions among group members, lacking a deep investigation into the influence of contexts on group behaviors, which results in suboptimal group recommendation performance. Recently, a method named GERF, which comprehensively considers the influence of various contexts on group recommendation, has been proposed [12]. It first models the influences of contexts including time, place, event content, and social relations on users’ preferences, then merges the preferences of the users in a group to form the group preference. Finally, by using a learning-to-rank algorithm to learn a ranking function for each group, it produces the event recommendation lists for groups. This method relies on manually defined functions to characterize the influences of contexts on users and events, which is insufficient to model the complex and highly non-linear influences of contexts. In addition, the influence weights of different contexts are not differentiated in this work. There are a few recent works that could be used to model the influence of different contexts on users and events. Among them, [16] proposes Neural Factorization Machines (NFM), which enhances FM by modeling non-linear feature interactions through neural networks. [40] proposes Attentional Factorization Machines (AFM), which improves FM by differentiating the importance of different feature interactions via a neural attention network. Similar to FM, these two models can be applied to the task of context-aware recommendation by specifying the input data. However, NFM fails to differentiate the importance of different context influences. AFM can automatically differentiate the importance of feature interactions, but it models the feature interactions in a linear way. [24] proposes a novel neural model named AIN to adaptively capture the interactions between contexts and users/items, employing a neural attention mechanism to model the influence weights of contexts on users/items. However, the neural attention mechanism of AIN neglects that the influence weight of a context on a user may change when the type of the item concerned changes. In this paper, we employ deep neural network and representation learning techniques to model the complex and non-linear interactions between contexts and entities including users, events, and groups, and propose a novel neural attention mechanism to weigh the influence of different contexts more accurately.

How to aggregate different members’ preferences into a consistent group preference, i.e., the group aggregation strategy, has always been the focus of group recommendation research. In early works, Masthoff [23] proposed 10 aggregation strategies based on social choice theory, such as approval voting, Borda counting, least misery, and average. Ardissono et al. [2] assigned greater weight to people with special needs (such as children or disabled people). Berkovsky et al. [7] judged a user’s activeness based on the number of items he/she has rated, and assigned greater weight to more active users. Seo et al. [33] considered the deviation of group members’ opinions in combination with the average and approval voting strategies. [28] measured the influence weight of a member by the number of times his/her preference was consistent with the group’s preference. However, the aggregation strategies in these methods are all predefined, which is data-independent and lacks flexibility. When the decision-making strategy of a group changes, the predefined aggregation strategy cannot adapt to it. To overcome the above limitation, we propose an adaptive group aggregation strategy based on the neural attention mechanism, which can learn a member’s weight from the data. In addition, the acquisition of a group preference in our method considers not only the preferences of members, but also the preference that is quite different from each member’s, which further improves the performance of group recommendation.

III Problem Formulation and Model Framework

III-A Notations and Problem Formulation

We use bold capital letters (e.g., $\mathbf{X}$) and bold lowercase letters (e.g., $\mathbf{x}$) to represent matrices and vectors, respectively. We employ non-bold letters (e.g., $x$) to denote scalars, and calligraphic letters (e.g., $\mathcal{X}$) to denote sets; $|\mathcal{X}|$ denotes the cardinality of a set. If not otherwise clarified, all vectors are in column form.

Given a context variable consisting of $m$ contextual factors $\{c_1, \dots, c_m\}$, we suppose each contextual factor $c_i$ has a number of possible values, where $c_i^j$ is the $j$th value of the $i$th contextual factor, and $\mathcal{C}$ denotes the set of tuples of contextual values. Suppose we have a set of users $\mathcal{U}$, a set of events $\mathcal{E}$, and a set of groups $\mathcal{G}$, where each group consists of a certain number of users, and we observe the group-event interaction matrix and the user-event interaction matrix. Given a target group $g \in \mathcal{G}$ and a tuple of contextual values $c \in \mathcal{C}$, our task is defined as recommending a list of events that group $g$ may be most interested in, i.e., top-$N$ event recommendation for group $g$.

Fig. 2: The framework of ACGER.

III-B Model Framework

The ACGER model proposed in this paper is composed of three main components:

(1) Context-aware embedding learning module: given a group-event interaction record, the one-hot feature vectors of the related entities, including the group, the event, the group members, and the contextual factors, are taken as the initial inputs of the model and mapped into low-dimensional, dense vectors through an embedding layer. Then, a Multi-Layer Perceptron (MLP) is used to capture the effects of interactions between contexts and users/events/groups, and the enhanced embeddings of users/events/groups under the comprehensive influences of contexts are obtained through the neural attention mechanism. The enhanced group embedding encodes the group’s direct preference.

(2) Group preference acquisition module: we aggregate the group members’ enhanced embeddings through a neural attention network to get the group’s indirect preference. The indirect preference embedding and the direct preference embedding are combined to get the final group preference embedding.

(3) Score prediction module: the Factorization Machines (FM) model is used as the group score prediction layer, and a pairwise ranking loss is used for the model optimization.

Note that in order to employ the abundant user-event interaction data in EBSNs to improve the accuracy of embedding learning, i.e., the learning of the user embeddings and the event embeddings, we integrate the context-aware individual recommendation task into ACGER. Specifically, given a user-event interaction record, the one-hot feature vectors of the user, the event, and the contextual factors are fed into the model. Through an embedding layer, a multi-layer perceptron, and a neural attention network in turn, we get the enhanced embeddings of the user/event under the comprehensive influences of the contextual factors. The FM model is still used to predict the score of a user for the target event. The overall framework of ACGER is shown in Fig. 2.

IV Our Proposed ACGER Scheme

IV-A Context-aware Embedding Learning Module

Given a set of contextual factors, the goal of this section is to obtain the feature representations of members, events and groups under the contextual influences. The module can be divided into three components and the detailed description is as follows:

1) Obtain the low-dimensional representations. Given contextual factors and the tuple of their current value , the low-dimensional embedding representation of member is calculated as follows:

(1)

where denotes the user embedding matrix, denotes the dimension of the embedding vector, denotes the matrix transpose operation. And is a one-hot vector, where the location of element 1 representing which row in the user matrix the user corresponds to.

Similarly, we obtain the event ’s embedding from the event embedding matrix , and get the group ’s embedding from the group embedding matrix . As for the value of the contextual factor , its embedding could be obtained from the th context embedding matrix .
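The following is a minimal sketch of the embedding lookup described for Eq. (1), assuming PyTorch; the entity counts and dimensions are placeholders, not values from the paper, and the exact symbols of the original equation were lost in extraction.

```python
import torch
import torch.nn as nn

# hypothetical sizes for illustration only
num_users, num_events, num_groups, emb_dim = 1000, 5000, 300, 32

user_emb = nn.Embedding(num_users, emb_dim)    # user embedding matrix
event_emb = nn.Embedding(num_events, emb_dim)  # event embedding matrix
group_emb = nn.Embedding(num_groups, emb_dim)  # group embedding matrix

# An index lookup is equivalent to multiplying the transposed embedding matrix
# by a one-hot vector, which is what the description of Eq. (1) states.
u_idx = torch.tensor([42])
p_u = user_emb(u_idx)   # shape: (1, emb_dim), the user's low-dimensional embedding
```

Lookups for events, groups, and each contextual factor follow the same pattern with their own embedding tables.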

2) Obtain the embedding representation under the influence of each contextual factor. In order to capture the complex and non-linear impact of each contextual factor on members/events/groups, we use an MLP to map the input data into a deep, non-linear hidden space.

Specifically, we first concatenate the user embedding with the context embedding , then we pass the concatenation through a stack of fully connected layers and finally get ’s influenced representation in the context of . The formulation is as follows:

(2)

where denotes the concatenation of two vectors, is the Rectifier (ReLU) activation function, and are the parameter matrices, denotes the bias vector, and denotes the output vector of the th hidden layer. The superscript indicates the marked model parameters are related to Eq. (2).

As for event and group , we can similarly get their contextual influenced representations and by passing the concatenations and through their respective MLP.
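Below is a hedged sketch of the computation described by Eq. (2): a stack of fully connected ReLU layers applied to the concatenation of an entity embedding and a context embedding. The hidden sizes (48, 40) follow the experimental settings reported in Section V-A4; the class and variable names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class ContextInfluence(nn.Module):
    """Maps (entity embedding, context embedding) to a context-influenced representation."""
    def __init__(self, emb_dim=32, hidden=(48, 40)):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )

    def forward(self, entity_vec, context_vec):
        # concatenate, then pass through the stacked fully connected layers
        return self.mlp(torch.cat([entity_vec, context_vec], dim=-1))

# usage: contextual representation of a user under one contextual factor
layer = ContextInfluence()
u_under_c = layer(torch.randn(1, 32), torch.randn(1, 32))   # shape: (1, 40)
```

Separate MLPs of the same form would produce the context-influenced representations of events and groups.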

3) Obtain the unique embedding representation influenced by all contextual factors. Different contexts have different influence weights on members/events/groups, and how to measure these weights accurately is a key problem. Inspired by the neural attention mechanism [3, 40], which can learn the importance of different components in a model from data, we use attention to learn the weights of the various contextual factors.

In order to obtain the user’s unique embedding representation under the influence of all contextual factors, we aggregate the user’s embedding representations influenced by each context. To this end, it is necessary to calculate the influence weight of each context on the user. Different from the existing method [24], the influence weight of each context in our model not only considers the interaction between the user and the context, but also considers the event that the user is currently concerned about. Our idea is that we measure how much the user ’s contextual representation matches the event ’s contextual representation . The more they match, the more the user prefers the context, and accordingly the context is given more weight. Specifically, in order to calculate the attention score between context and user when is faced with event , we design a neural attention network as follows:

(3)

where and are weight matrices of the attention network, is the bias vector, and is a weight vector which projects the output of the ReLU activation function to a score value. The superscript indicates the marked model parameters are related to Eq. (3).

We normalize the value of with a softmax function, and obtain the influence weight of context on user when he/she is faced with .

(4)

Finally we get ’s enhanced embedding influenced by all contextual factors, which is calculated as follows:

(5)
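A minimal sketch of the pattern described by Eqs. (3)-(5) follows: an attention network scores how well the user's context-influenced representation matches the event's, normalizes the scores with softmax over the contextual factors, and returns the weighted sum as the enhanced embedding. The dimensions and layer names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAttention(nn.Module):
    def __init__(self, rep_dim=40, att_dim=32):
        super().__init__()
        self.w_user = nn.Linear(rep_dim, att_dim, bias=False)   # weight matrix for the user side
        self.w_event = nn.Linear(rep_dim, att_dim, bias=True)   # weight matrix + bias for the event side
        self.h = nn.Linear(att_dim, 1, bias=False)              # projects the ReLU output to a scalar score

    def forward(self, user_ctx_reps, event_ctx_reps):
        # user_ctx_reps, event_ctx_reps: (num_contexts, rep_dim),
        # one row per contextual factor
        scores = self.h(F.relu(self.w_user(user_ctx_reps) +
                               self.w_event(event_ctx_reps))).squeeze(-1)   # attention scores, Eq. (3)
        weights = F.softmax(scores, dim=0)                                   # normalized weights, Eq. (4)
        return (weights.unsqueeze(-1) * user_ctx_reps).sum(dim=0)            # enhanced embedding, Eq. (5)

att = ContextAttention()
enhanced_user = att(torch.randn(4, 40), torch.randn(4, 40))   # 4 contextual factors
```

The group and event counterparts in Eqs. (6)-(8) and (9)-(11) use the same structure with their own parameters.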

Similarly, the attention score and the influence weight of context on group when is faced with event are calculated respectively as follows:

(6)
(7)

where , , , are model parameters. The superscript indicates the marked model parameters are related to Eq. (6).

Then, we get group ’s enhanced embedding influenced by all contextual factors, which is calculated as follows:

(8)

where encodes the group ’s direct preference.

At last, the attention score and the influence weight of context on event are calculated as follows:

(9)
(10)

where , , , and are model parameters. The superscript indicates the marked model parameters are related to Eq. (9).

The event ’s enhanced embedding influenced by all contextual factors is calculated as follows:

(11)

IV-B Group Preference Acquiring Module

The goal of this section is to obtain an embedding vector for each group. In this paper, group preference is defined as the combination of indirect preference and direct preference. The group’s direct preference is encoded in the group’s enhanced embedding obtained by Eq. (8), and the group’s indirect preference is obtained by aggregating the members’ embeddings, in which the key problem is how to measure the influence weights of group members. We use the neural attention mechanism to learn them from data. Next, we elaborate on the process of obtaining group indirect preference.

Suppose group is making a decision on event , denotes the influence weight of member on the group ’s decision in the contexts , denotes ’s enhanced embedding influenced by all contextual factors, and embedding denotes the property of event influenced by all contextual factors. Then is defined as the output of a neural attention network with embeddings and as the inputs:

(12)
(13)

where is the attention score between member and event , , are the parameter matrices of the attention network, is the bias vector, is a parameter vector projecting the value of ReLU function into a score. The softmax function normalizes the score value to get the final weight . The superscript indicates the marked model parameters are related to Eq. (12).

With the attention mechanism defined above, we can adaptively learn the aggregation strategy, which a group may change for different events, from the interactions among contexts, groups, and events. Next, we use the weights to aggregate the members’ embeddings to form the group’s indirect preference. In order to obtain the group’s preference embedding, we combine the indirect preference with the direct preference using an addition operation, which is utilized to combine different signals in the embedding space in [40]. Specifically, when considering the event , group ’s embedding, denoted as , is calculated as follows:

(14)

where and denotes the indirect preference and the direct preference of group respectively, is the set of members of group .
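The sketch below illustrates Eqs. (12)-(14) under the same assumptions as the earlier attention sketch: member weights come from an attention network over (member embedding, event embedding) pairs, the weighted sum gives the indirect preference, and it is added to the group's direct preference. Names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemberAttention(nn.Module):
    def __init__(self, emb_dim=32, att_dim=32):
        super().__init__()
        self.w_member = nn.Linear(emb_dim, att_dim, bias=False)
        self.w_event = nn.Linear(emb_dim, att_dim, bias=True)
        self.h = nn.Linear(att_dim, 1, bias=False)

    def forward(self, member_embs, event_emb, direct_pref):
        # member_embs: (num_members, emb_dim); event_emb, direct_pref: (emb_dim,)
        scores = self.h(F.relu(self.w_member(member_embs) +
                               self.w_event(event_emb))).squeeze(-1)   # member attention scores, Eq. (12)
        weights = F.softmax(scores, dim=0)                              # member weights, Eq. (13)
        indirect_pref = (weights.unsqueeze(-1) * member_embs).sum(dim=0)
        return indirect_pref + direct_pref                              # combined group preference, Eq. (14)

agg = MemberAttention()
group_pref = agg(torch.randn(3, 32), torch.randn(32), torch.randn(32))   # a 3-member group
```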

IV-C Rating Prediction Module

In order to predict the group rating, we select the FM [31] model. This is because the interaction data in EBSNs is very sparse, and FM can model the high-order interactions between features more effectively than other methods on sparse datasets [15]. Specifically, we feed the concatenation of the group embedding and the event embedding into FM, and then the predicted rating of on the target event is calculated as follows:

(15)

where is the global bias, is the parameter vector, is the th column vector of parameter matrix , the hyper-parameter denotes the dimension of factorized parameters. The superscript indicates the marked model parameters are related to Eq. (15).
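A compact second-order Factorization Machine of the kind used for the score prediction layer in Eq. (15) is sketched below. The input is the concatenation of the group and event embeddings, and k is the factorization dimension (set to 10 in the experiments); parameter names are illustrative.

```python
import torch
import torch.nn as nn

class FMScorer(nn.Module):
    def __init__(self, in_dim=64, k=10):
        super().__init__()
        self.w0 = nn.Parameter(torch.zeros(1))                 # global bias
        self.w = nn.Linear(in_dim, 1, bias=False)              # first-order weights
        self.v = nn.Parameter(torch.randn(in_dim, k) * 0.1)    # factorized pairwise weights

    def forward(self, x):                                      # x: (batch, in_dim)
        linear = self.w0 + self.w(x).squeeze(-1)
        # standard FM identity for the pairwise interactions
        inter = 0.5 * ((x @ self.v) ** 2 - (x ** 2) @ (self.v ** 2)).sum(dim=-1)
        return linear + inter                                  # predicted score

scorer = FMScorer()
score = scorer(torch.cat([torch.randn(1, 32), torch.randn(1, 32)], dim=-1))
```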

We rank the candidate events according to their predicted scores, and finally select the top-N events to form the event list recommended for the group.

In addition to the group-event interaction data, there are also rich user-event interaction data in EBSNs. In order to reinforce the task of group event recommendation, we integrate the task of context-aware event recommendation for individuals into our model. Specifically, given a user-event interaction pair and the current values of the contextual factors, the one-hot feature vectors of the user, the event, and the contextual values are taken as the initial input data, and are passed through the embedding layer, the MLP, and the attention network in turn to get the enhanced user embedding and the enhanced event embedding. Then, the concatenation of these two embeddings is fed into FM to get the user's prediction score on the target event. Since the two recommendation tasks share the user embeddings, the event embeddings, and part of the network weight parameters, the learning of the group recommendation task is reinforced.

IV-D Model Optimization

We treat the group event recommendation task as a ranking task, and select the commonly used pairwise learning method BPR (Bayesian Personalized Ranking) [29] to optimize the model parameters.

The pairwise learning method assumes that the observed interaction events should have a higher recommended ranking than the unobserved interaction events. The optimization objective function of the group recommendation task is as follows:

(16)

where denotes the logistic function, is the parameters to be learned in the neural network, is the regularization hyper-parameter, is the training set in which denotes that group interacted with event and did not interact with event .

Similarly, the objective function of individual recommendation task is as follows:

(17)

where is the set of parameters to be learned in the neural network, is the training set in which denotes that user interacted with event and did not interact with .
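The pairwise BPR objective of Eqs. (16)-(17) can be sketched as follows: for each (positive event, negative event) pair, the positive score should exceed the negative one, and L2 regularization can be handled via the optimizer's weight decay. This is a hedged illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # -ln sigma(score_pos - score_neg), averaged over the batch
    return -F.logsigmoid(pos_scores - neg_scores).mean()

# usage with any scoring model, e.g. the FM scorer sketched above
# (model, group, ctx, pos_event, neg_event are hypothetical names):
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-5)
# loss = bpr_loss(model(group, ctx, pos_event), model(group, ctx, neg_event))
# loss.backward(); optimizer.step()
```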

Stochastic gradient descent is used to minimize the above objective functions. The optimization algorithm for the group recommendation task is summarized in Algorithm 1, and the recommendation algorithm for the ACGER model is presented in Algorithm 2.

Input: the training set, the learning rate, the regularization hyper-parameter, and the FM hyper-parameter.
Output: the updated model parameters.
Initialize the model parameters;
repeat
      Draw a training instance (group, positive event, negative event, contextual values) from the training set;
      Compute the low-dimensional embeddings of the entities by equations similar to Eq. (1);
      Compute the embeddings under the influence of each contextual factor by equations similar to Eq. (2);
      Compute the enhanced member embeddings by Eq. (3)-(5);
      Compute the enhanced group embedding by Eq. (6)-(8);
      Compute the enhanced event embedding by Eq. (9)-(11);
      Compute the group's preference embedding by Eq. (12)-(14);
      Compute the predicted ratings by Eq. (15);
      for each parameter in the model do
            Update the parameter by a gradient step on the objective in Eq. (16);
      end for
until convergence;
return the updated model parameters.
Algorithm 1 Optimization algorithm for group recommendation task in ACGER
Input: the user, event, group, and context sets, the group-event interaction matrix, the user-event interaction matrix, the target group, the candidate event set, and the given contextual values.
Output: the recommended event list for the target group.
Build the ACGER model;
Initialize the model parameters;
repeat
      Train the model on user-event interactions based on Eq. (1)-(5), (9)-(11), (15), and (17);
      Train the model on group-event interactions as in Algorithm 1;
      Evaluate the performance of individual recommendation;
      Evaluate the performance of group recommendation;
until convergence;
for each event in the candidate event set do
      Compute its predicted score based on Eq. (1)-(15);
end for
Select the top-N events with the greatest predicted scores under the given contexts to form the recommended list;
return the recommended event list.
Algorithm 2 Recommendation Algorithm for ACGER
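A hedged sketch of the joint training loop of Algorithm 2 is given below: each epoch alternates between the individual task (user-event triples, Eq. (17)) and the group task (group-event triples, Eq. (16)), so the shared user and event embeddings reinforce each other. It reuses the bpr_loss helper sketched in Section IV-D; the model interface (score_user, score_group) and the triple format are assumptions for illustration.

```python
import torch

def train_acger(model, user_triples, group_triples, epochs=50, lr=0.01, l2=1e-5):
    # user_triples / group_triples: iterables of (id, context, positive event, negative event)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=l2)
    for _ in range(epochs):
        for u, ctx, pos, neg in user_triples:      # individual recommendation task, Eq. (17)
            loss = bpr_loss(model.score_user(u, ctx, pos),
                            model.score_user(u, ctx, neg))
            optimizer.zero_grad(); loss.backward(); optimizer.step()
        for g, ctx, pos, neg in group_triples:     # group recommendation task, Eq. (16)
            loss = bpr_loss(model.score_group(g, ctx, pos),
                            model.score_group(g, ctx, neg))
            optimizer.zero_grad(); loss.backward(); optimizer.step()
    return model
```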

V Performance Analysis

In this section, we conduct extensive experiments on real datasets to answer the following research questions:

(1) How does our proposed model ACGER perform compared with state-of-the-art group recommendation models?

(2) How effective is our designed attention network at learning the contextual influence weights?

(3) How effective is our designed attention network at learning the group aggregation strategy?

(4) How do the three components of the model (attentive context-aware embedding learning, group embedding learning, and the individual recommendation task) contribute to the performance of ACGER?

V-A Experimental Settings

V-A1 Datasets

The datasets used in this paper come from Meetup.com (http://www.meetup.com), a popular EBSN platform. Through this platform, users can create events online, indicate whether they will participate in events, join various online social groups, and attend events offline. We use the API provided by Meetup to obtain the relevant experimental data covering the 12 months of 2016. We choose to recommend events within the city scope, and select New York and San Diego in the USA for the experiments since they had the largest numbers of events published in 2016. We generate groups consisting of 2 to 6 users who often participate in events together, and collect the events attended by each group and by each user. In order to eliminate noisy data and ensure the reliability of the experimental results, the selected users and groups are required to have participated in at least 10 events. In consideration of the universality of groups with a size of 2 users, the selected events are required to have at least two participants. After data pre-processing, the statistics of the two city datasets are shown in Table I, where #U-E denotes the number of user-event interactions and #G-E denotes the number of group-event interactions.

City        #Users   #Events   #Groups   #U-E      #G-E
New York    2,849    10,024    2,727     288,447   101,141
San Diego   2,419    10,685    1,992     287,469   70,239
TABLE I: Statistics of the Meetup datasets.

Four contextual factors were considered in our experiments: organizer, venue, time, and event content. The values of organizer and venue can be converted into one-hot vectors according to their integer IDs. Since the value of time is continuous and the value of content is textual, we need to pre-process these two factors to get their initial vector representations.

For the event content, we regard each content text as a document, and all event content documents constitute a corpus. We use the natural language processing technique CBOW (Continuous Bag-of-Words) [14] to map each word in the corpus into a low-dimensional word vector. The vector of the content of an event is calculated as follows:

(18)

where denotes the set of words in the content text of event , denotes the vector of word . The content embeddings of all events are combined to form a pre-trained embedding matrix, which is used to initialize the content embedding matrix in ACGER.
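Below is a minimal sketch of the kind of computation Eq. (18) describes: combining the CBOW vectors of the words in an event's content text, here by simple averaging, which is a common choice; the exact combination in the paper's equation was lost in extraction. It assumes gensim's Word2Vec (sg=0 trains CBOW), and the corpus and dimensions are placeholders.

```python
import numpy as np
from gensim.models import Word2Vec

# tokenized event content texts forming the corpus (toy example)
corpus = [["live", "jazz", "night"], ["weekend", "hiking", "trip"]]
cbow = Word2Vec(sentences=corpus, vector_size=32, window=5, min_count=1, sg=0)

def content_embedding(words, model):
    """Combine the word vectors of an event's content text (here: mean)."""
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

e_content = content_embedding(["weekend", "hiking", "trip"], cbow)   # pre-trained content embedding
```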

The context of time represents the start time of an event. In order to map a continuous timestamp into a discrete time slot, we adopt a weekday-hour pattern, such as “2 (day of the week), 16:00-17:00 (hour of the day)”. Then, we get at most 7 × 24 = 168 discrete time slots for the context of time. Next, the time of each event is mapped to a 168-dimensional one-hot vector according to its time slot.
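A small illustration of this weekday-hour discretization follows: a timestamp is mapped to one of the 168 slots, whose index can then be one-hot encoded. The helper name is illustrative.

```python
from datetime import datetime

def time_slot(ts: datetime) -> int:
    # weekday() is 0 (Monday) .. 6 (Sunday); hour is 0 .. 23
    return ts.weekday() * 24 + ts.hour

slot = time_slot(datetime(2016, 5, 10, 16, 30))   # a Tuesday, 16:00-17:00 -> slot 40
```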

V-A2 Evaluation Metrics

For each dataset, we rank the group-event/user-event interactions according to the start time of the events. Then, we take the first portion of the ranked interactions as the training set, the next portion as the validation set, and the last portion as the test set. The validation set is used for tuning the hyper-parameters. In the test set, the interacted events of each group/user are regarded as the events of real interest and are used to evaluate the recommendation performance of the algorithms. Since we use a pairwise loss as our objective function, positive samples are selected from the events that the group/user has interacted with, and negative samples are selected from the events that the group/user has not interacted with.

In order to evaluate the performance of top-N recommendation methods, we adopt three widely used evaluation metrics: precision (P@N), recall (R@N), and NDCG (Normalized Discounted Cumulative Gain, NDCG@N) [42]. Among them, NDCG measures the ability of a method to rank the events of true interest higher in the recommendation list. For each metric, the higher the value, the better the recommendation performance.
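For reference, a hedged sketch of these three metrics for a single group is given below, given the top-N recommended events and the set of ground-truth events in the test set; the function name is illustrative.

```python
import math

def precision_recall_ndcg_at_n(recommended, relevant, n):
    top_n = recommended[:n]
    hits = [1 if e in relevant else 0 for e in top_n]
    precision = sum(hits) / n
    recall = sum(hits) / len(relevant) if relevant else 0.0
    # binary-relevance DCG with log2 discount, normalized by the ideal DCG
    dcg = sum(h / math.log2(i + 2) for i, h in enumerate(hits))
    idcg = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), n)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg

p, r, ndcg = precision_recall_ndcg_at_n(["e3", "e7", "e1", "e9", "e2"], {"e1", "e2", "e8"}, 5)
```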

V-A3 Baselines

To justify the effectiveness of our method, we compared it with the following methods:

(1) GERF [12]: This is a method of context-aware event recommendation for groups. In this method, context influences are defined manually, the group feature vector is obtained by concatenating the feature vectors of group members, and a simple linear model is adopted for predicting the group’s rating for an event. This method does not differentiate the influences of different contextual factors and treats group members as equally important.

(2) UL [33]: This is a traditional context-unaware group recommendation method. The group aggregation strategy in this method is manually predefined, which combines the deviation of group members’ opinions with average and approval voting strategies.

(3) AIN_ACGER2 [24]: This method uses AIN (Attention Interaction Network) [24] to model the influence of contextual factors on users/groups and events. The difference between this method and ACGER is that it does not consider the impact of events when calculating the influence weights of contextual factors on groups/users.

(4) ACGER1_LinerBpr: This is a variant of ACGER, which firstly obtains the embedding of users, events and groups under the influence of all contextual factors by using the same attention networks as that in ACGER. Then, the members’ embeddings are concatenated to form the group embedding and a linear model is employed to predict the ratings (just the same as GERF). This method is used to compare with GERF. The difference between this method and GERF is that this method uses attention networks to model contextual influences rather than defining influences manually.

V-A4 Experimental Settings

We implemented the neural network-based methods, i.e., AIN_ACGER2, ACGER1_LinerBpr, and our ACGER, in PyTorch. The other methods, GERF and UL, are implemented in Python. For the methods based on neural networks, the Adam algorithm is used for optimization. The mini-batch size and learning rate are tuned in the ranges [128, 256, 512, 1024] and [0.001, 0.005, 0.01, 0.05, 0.1], respectively. For the embedding layer and hidden layers, we randomly initialize the parameters from a Gaussian distribution with a mean of 0 and a standard deviation of 0.1. In the neural attention networks, the embedding dimensions of users, groups, events, and contexts are all empirically set to 32. Each MLP is deployed with two hidden layers whose dimensions are set to 48 and 40, respectively. The factorization dimension in FM is set to 10. The weight parameters of the three elements in the calculation of the predicted group rating in the UL method are determined by grid search. We repeat each setting 5 times and report the average results.

V-B Overall Performance Comparison (RQ1)

Fig. 3: Top-N recommendation performance comparison between ACGER and baselines (panels (a), (c): New York; (b), (d): San Diego).

Fig. 3 shows the top-N (N = 5, 10) recommendation performance of our ACGER and the comparative methods on the New York and San Diego datasets. We can see that ACGER achieves the best performance on both datasets with respect to all three metrics, obtaining improvements over the best baseline AIN_ACGER2 in P@5, R@5, and NDCG@5 on both the New York and San Diego datasets. This proves the validity of ACGER. Specifically, we can make the following observations: (1) The context-aware methods (GERF, AIN_ACGER2, ACGER1_LinerBpr, and ACGER) perform better than the context-unaware method (UL). This confirms the positive effect of context information on improving recommendation. (2) Among the context-aware methods, the performance of the neural network-based methods (AIN_ACGER2, ACGER1_LinerBpr, and ACGER) is better than that of GERF, which does not employ neural networks. This demonstrates the superiority of neural networks, especially their ability to model high-order interactions among different entities. (3) Our model ACGER outperforms AIN_ACGER2 on both datasets in all three metrics. This is due to the fact that the influence of a contextual factor on a user/group may change when the type of event concerned changes; the performance can therefore be further improved by taking into account the impact of events when measuring the weights of contextual factors on users/groups. (4) The performance gap between ACGER1_LinerBpr and ACGER shows the effectiveness of our model ACGER in modeling the group preference.

V-C Effect of Attention for Context Influence (RQ2)

To demonstrate the effectiveness of attention mechanism of ACGER in distinguishing the influences of different contextual factors, we compare ACGER with its following variants:

1) Avg_ACGER2: In this method, the influence weights of different contextual factors on the entity (i.e., group, user, and event) are equal.

2) AIN_ACGER2: In this method, the influence weight of a contextual factor on users/groups is calculated by AIN method, in which the influence on users/groups only considers the interaction between users/groups and context, without capturing the dynamic change of the influence weight of the contextual factor on users/groups when the type of events changes.

3) SingleU_ACGER2: This method considers the influence of contextual factors on users and groups, neglecting their influence on events.

4) SingleE_ACGER2: This method considers the influences of contextual factors on events, neglecting their influence on users and groups.

Fig. 4 shows the top-N (N = 5, 10) recommendation performance comparison of ACGER and its variants on the two datasets. We have the following observations: (1) The performance of Avg_ACGER2 on both datasets is relatively low, only slightly better than that of SingleE_ACGER2, which performs the worst on both datasets. This is because Avg_ACGER2 does not distinguish the influence weights of different contextual factors. (2) The performance of AIN_ACGER2 is not as good as that of ACGER, which indicates that modeling the dynamic change of the influence weights of contextual factors on users/groups with the type of events helps to improve recommendation performance. (3) ACGER consistently outperforms SingleU_ACGER2 and SingleE_ACGER2 on both datasets with respect to all three metrics. This may be due to the fact that contextual factors characterize the situation in which users/groups interact with events, and thus influence users/groups and events at the same time. Therefore, considering the effects of contexts on both users/groups and events leads to better performance. (4) SingleU_ACGER2 slightly outperforms SingleE_ACGER2 on both datasets, which indicates that considering the contextual influences on users/groups is more effective than considering the contextual influences on events for improving recommendation performance. It shows that the interests of users/groups are more sensitive to the contextual factors than the properties of events are.

Fig. 4: Top-N recommendation performance of ACGER and its variants in measuring the influence of contextual factors (panels (a), (c): New York; (b), (d): San Diego).

V-D Effect of Attention for Group Aggregation (RQ3)

To demonstrate the effectiveness of the attention-based group aggregation strategy, we replace the strategy in ACGER with other predefined strategies, and obtain the following variants:

1) ACGER1_Avg [7]: The group aggregation strategy in this method adopts the average strategy.

2) ACGER1_BC [23]: The group aggregation strategy in this method adopts the “Borda Count (BC)” strategy, which scores ratings based on ranking results. Specifically, for each group member, the events are first ranked according to their ratings, and each event is then scored according to its ranking position (for example, the lowest-ranked event scores 0 and higher-ranked events receive higher scores). Finally, the scores of all members for each event are summed, and events are ranked according to the summed scores to get the recommendation list for the group.

3) ACGER1_Exp [28]: The group aggregation strategy in this method adopts a weighted sum, where the weight of a member is decided by the number of events he/she has attended. Generally speaking, the more events a user has participated in, the more expert he/she may be.

4) ACGER1_MP [8]: The group aggregation strategy in this method adopts “Most Pleasure (MP)” strategy, which takes the highest value in members’ ratings as the group’s rating for a candidate event.

The experimental results are shown in Fig. 5. As can be seen, no predefined strategy always wins. For example, when N = 5 on the New York dataset, ACGER1_MP outperforms ACGER1_BC (as shown in Fig. 5 (a)), but underperforms it when N = 10 (as shown in Fig. 5 (b)). A similar situation occurs on the San Diego dataset: when N = 5, ACGER1_Exp outperforms ACGER1_BC (as shown in Fig. 5 (c)), but underperforms it when N = 10 (as shown in Fig. 5 (d)). ACGER shows great flexibility and superiority because it can learn the group aggregation strategy from data.

Fig. 5: Top-N recommendation performance of ACGER and its variants comparing group aggregation strategies (panels (a), (b): New York; (c), (d): San Diego).

V-E Contribution Analysis of Components (RQ4)

Fig. 6: Contribution analysis of different components of ACGER (panels (a), (c): New York; (b), (d): San Diego).

To evaluate the contribution of the main components of ACGER to the group recommendation performance, we conduct ablation experiments. We compare ACGER with the following variants:

1) Avg_ACGER2, ACGER1_Avg: Avg_ACGER2 is the variant of ACGER that treats the contextual factors as equally important, and ACGER1_Avg is the variant that adopts the average strategy as the group aggregation strategy. The aim of these two variants is to study the contribution of the attention networks.

2) ACGER_U, ACGER_G: ACGER_U is the variant with the group indirect preference only, and ACGER_G is the variant with the group direct preference only. Our purpose is to study the contribution of the two different types of group preference to the recommendation performance.

3) ACGER_Grp: ACGER_Grp is the variant without the individual recommendation task. The purpose of this variant is to study the contribution of the individual recommendation task to the group recommendation performance.

4) ACGER2: This variant denotes ACGER without considering the influences of contextual factors. Our purpose is to study the contribution of modeling contextual influences to the recommendation performance.

The experimental results are shown in Fig. 6. We have the following observations: (1) Compared with ACGER, the performance of Avg_ACGER2 and ACGER1_Avg decreases on both datasets with respect to all three metrics, which demonstrates that our attention-based weight calculation methods for contextual factors and group members are effective. (2) ACGER_U and ACGER_G underperform ACGER on both datasets. This indicates that the group embedding is affected by both the group indirect preference and the group direct preference. The performance of ACGER_U is superior to that of ACGER_G, which reveals that the group indirect preference has a larger impact on learning the group preference on both datasets. (3) The performance of ACGER_Grp is inferior to that of ACGER on both datasets. For example, compared with ACGER, the performance of ACGER_Grp decreases in P@5, R@5, and NDCG@5 on the New York dataset, and a similar phenomenon can be observed on the San Diego dataset. This indicates that the individual recommendation task can effectively reinforce the group recommendation task. (4) The performance of ACGER2 is significantly inferior to that of ACGER. This indicates that context information has a great impact on the performance of our model, because it can greatly improve the accuracy of group preference learning.

V-F Industrial Applications

In the market environment, participating in product exhibitions is one of the effective marketing methods for enterprises. Through an exhibition, an enterprise can show its products, strength, and brand image to industry peers and live audiences, quickly grasp the status and trends of domestic and international industries, and obtain information on new products. By participating in exhibitions, an enterprise can find potential customers and new cooperative partners at a lower cost than through general marketing channels. Therefore, enterprises have an inherent need to participate in product exhibitions. An EBSN application provides a platform for organizers to release exhibition information and for enterprise users to sign up for exhibitions. However, with the growing popularity of the platform, more and more information is published on it, and it becomes increasingly difficult for enterprise users, especially enterprise groups who often attend exhibitions together, to find the exhibitions they are interested in. Therefore, it is urgent for the platform to provide an exhibition recommendation service (i.e., an event recommendation service) to help them find interesting exhibitions efficiently.

In addition to exhibitions, other types of events, such as technical seminars and professional summit forums, are also of interest to enterprises. An EBSN event recommender system can also meet such needs.

VI Conclusion and Future Work

In this paper, we propose an Attention-based Context-aware Group Event Recommendation model (ACGER) for EBSNs. ACGER employs neural networks with attention mechanisms to model the complex and highly non-linear impacts of contexts on users, groups, and events. In order to model the situation where the influence weight of a contextual factor on users/groups may change with the type of events concerned, ACGER leverages a novel attention network, which considers not only the interaction between users/groups and contexts, but also the impact of events. To overcome the lack of flexibility of the predefined group aggregation strategies in existing group recommendation methods, ACGER uses the neural attention mechanism to learn an adaptive group aggregation strategy from the data. Such a mechanism enables a group to automatically adjust its decision strategy according to the events currently concerned. Considering that a group often has different behavior patterns from its members, ACGER considers not only the indirect preference aggregated from members’ preferences, but also the direct preference specific to the group itself, which makes the captured group preferences more accurate. In addition, in order to make full use of the user-event interaction data, we integrate the individual recommendation task into ACGER to reinforce the group recommendation task. Extensive experiments on two real datasets show that ACGER achieves higher recommendation performance than state-of-the-art methods.

In the future, since an event sequence can be regarded as a special contextual factor, how to characterize the influence of event sequences on group recommendation deserves further study. We also plan to explore the influence of trust relationships on user preferences using trust network embedding techniques. In addition, an EBSN is a special heterogeneous network combining online and offline networks; inspired by recent progress on graph neural networks, employing this technique to model the EBSN network structure for better recommendation is another interesting direction.

References

  • [1] G. Adomavicius, R. Sankaranarayanan, S. Sen, and A. Tuzhilin (2005) Incorporating contextual information in recommender systems using a multidimensional approach. ACM Transactions on Information Systems 23 (1). Cited by: §II-A.
  • [2] L. Ardissono, A. Goy, G. Petrone, M. Segnan, and P. Torasso (2003) Intrigue: personalized recommendation of tourist attractions for desktop and hand held devices. Applied Artificial Intelligence 17 (8), pp. 687–714. Cited by: §I, §II-C.
  • [3] D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations. Cited by: §I, §IV-A.
  • [4] L. Baltrunas and X. Amatriain (2009) Towards time-dependant recommendation based on implicit feedback. In Proceedings of the 3rd ACM Conference on Recommender Systems In Workshop on context-aware recommender systems, pp. 423–424. Cited by: §II-A.
  • [5] L. Baltrunas, B. Ludwig, and F. Ricci (2011) Matrix factorization techniques for context aware recommendation. In Proceedings of the 5th ACM Conference on Recommender Systems, pp. 301–304. Cited by: §II-A.
  • [6] L. Baltrunas and F. Ricci (2014) Experimental evaluation of context-dependent collaborative filtering using item splitting. User Modeling & User Adapted Interaction 24 (1-2), pp. 7–34. Cited by: §II-A.
  • [7] S. Berkovsky and J. Freyne (2010) Group-based recipe recommendations: analysis of data aggregation strategies. In Proceedings of the 4th ACM Conference on Recommender Systems, pp. 111–118. Cited by: §I, §II-C, §V-D.
  • [8] L. Boratto, S. Carta, and G. Fenu (2016) Discovery and representation of the preferences of automatically detected groups: exploiting the link between group modeling and clustering. Future Generation Computer Systems 64, pp. 165–174. Cited by: §V-D.
  • [9] L. Boratto and S. Carta (2015) ART: group recommendation approaches for automatically detected groups. International Journal of Machine Learning & Cybernetics 6 (6), pp. 953–980. Cited by: §I.
  • [10] J. Cao, Z. Zhu, L. Shi, B. Liu, and Z. Ma (2018) Multi-feature based event recommendation in event-based social network. International Journal of Computational Intelligence Systems 11 (1), pp. 618–633. Cited by: §II-B.
  • [11] A. K. Dey (2001) Understanding and using context. Personal Ubiquitous Computing 5 (1), pp. 4–7. External Links: ISSN 1617-4909 Cited by: §II-A.
  • [12] Y. Du, X. Meng, Y. Zhang, and P. Lv (2020) GERF: a group event recommendation framework based on learning-to-rank. IEEE Transactions on Knowledge and Data Engineering 32 (4), pp. 674–687. Cited by: §I, §II-C, §V-A3.
  • [13] Y. Du, X. Meng, and Y. Zhang (2019) CVTM: a content-venue-aware topic model for group event recommendation. IEEE Transactions on Knowledge and Data Engineering 32 (7), pp. 1290–1303. Cited by: §II-C.
  • [14] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. Journal of Machine Learning Research 9, pp. 249–256. Cited by: §V-A1.
  • [15] H. Guo, R. Tang, Y. Ye, Z. Li, and X. He (2017) DeepFM: a factorization-machine based neural network for ctr prediction. In Proc. 26th International Joint Conference on Artificial Intelligence, pp. 1725–1731. Cited by: §IV-C.
  • [16] X. He and T. Chua (2017) Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 355–364. Cited by: §II-C.
  • [17] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97. Cited by: §I, §II-B.
  • [18] K. Ji, Z. Chen, R. Sun, K. Ma, Z. Yuan, and G. Xu (2018) GIST: a generative model with individual and subgroup-based topics for group recommendation. Expert Systems with Applications 94, pp. 81–93. External Links: ISSN 0957-4174 Cited by: §II-C.
  • [19] A. Karatzoglou, X. Amatriain, L. Baltrunas, and N. Oliver (2010) Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the 4th ACM Conference on Recommender Systems, pp. 79–86. Cited by: §II-A.
  • [20] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.), pp. 1097–1105. Cited by: §I, §II-B.
  • [21] G. Liao and H. Zhiwei (2018) A global recommendation strategy considering constraints in event-based social networks. Journal of Nanjing University, Natural Science 54 (1), pp. 11–12. Cited by: §I.
  • [22] A. Q. Macedo, L. B. Marinho, and R. L. T. Santos (2015) Context-aware event recommendation in event-based social networks. In Proceedings of the 9th ACM Conference on Recommender Systems, pp. 123–130. Cited by: §I, §II-B.
  • [23] J. Masthoff (2004) Group modeling: selecting a sequence of television items to suit a group of viewers. User Modeling and User-Adapted Interaction 14 (1), pp. 37–85. Cited by: §I, §II-C, §V-D.
  • [24] L. Mei, P. Ren, Z. Chen, L. Nie, J. Ma, and J. Nie (2018) An attentive interaction network for context-aware recommendations. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 157–166. Cited by: §I, §II-C, §IV-A, §V-A3.
  • [25] U. Panniello, A. Tuzhilin, M. Gorgoglione, C. Palmisano, and A. Pedone (2009) Experimental comparison of pre- vs. post-filtering approaches in context-aware recommender systems. In Proceedings of the 3rd ACM Conference on Recommender Systems, pp. 265–268. Cited by: §II-A.
  • [26] S. Purushotham and C.-C. J. Kuo (2016) Personalized group recommender systems for location- and event-based social networks. ACM Transactions on Spatial Algorithms and Systems 2 (4), pp. 16:1–16:29. Cited by: §II-C.
  • [27] Z. Qiao, P. Zhang, C. Zhou, Y. Cao, L. Guo, and Y. Zhang (2014) Event recommendation in event-based social networks. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pp. 3130–3131. Cited by: §I, §I, §II-B.
  • [28] E. Quintarelli, E. Rabosio, and L. Tanca (2016) Recommending new items to ephemeral groups using contextual user influence. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 285–292. Cited by: §I, §II-C, §V-D.
  • [29] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme (2009) BPR: bayesian personalized ranking from implicit feedback. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pp. 452–461. Cited by: §IV-D.
  • [30] S. Rendle, Z. Gantner, C. Freudenthaler, and L. Schmidt-Thieme (2011) Fast context-aware recommendations with factorization machines. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 635–644. Cited by: §II-A.
  • [31] S. Rendle (2010) Factorization machines. In Proceedings of the 2010 IEEE International Conference on Data Mining, pp. 995–1000. Cited by: §IV-C.
  • [32] F. Ricci, L. Rokach, and B. Shapira (Eds.) (2015) Recommender systems handbook. 2nd edition, Springer, Boston, MA, USA. External Links: ISBN 978-1-4899-7637-6 Cited by: §I, §II-A.
  • [33] Y. Seo, Y. Kim, E. Lee, K. Seol, and D. Baik (2018) An enhanced aggregation method considering deviations for a group recommendation. Expert Syst. Appl. 93, pp. 299–312. Cited by: §I, §I, §II-C, §V-A3.
  • [34] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, A. Hanjalic, and N. Oliver (2012) TFMAP: optimizing map for top-n context-aware recommendation. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 155–164. Cited by: §II-A.
  • [35] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631–1642. Cited by: §I, §II-B.
  • [36] H. Wang, H. Jhou, and Y. Tsai (2018) Adapting topic map and social influence to the personalized hybrid recommender system. Information Sciences 60 (11–14), pp. 1684–1706. Cited by: §I.
  • [37] W. Wang, G. Zhang, and J. Lu (2016) Member contribution-based group recommender system. Decision Support Systems 87, pp. 80–93. Cited by: §I.
  • [38] Y. Wang and J. Tang (2019) Event2Vec: learning event representations using spatial-temporal information for recommendation. In Proceedings of Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 314–326. Cited by: §II-B.
  • [39] Z. Wang, Y. Zhang, H. Chen, Z. Li, and F. Xia (2018) Deep user modeling for content-based event recommendation in event-based social networks. In Proceedings of 2018 IEEE Conference on Computer Communications, pp. 1304–1312. Cited by: §II-B.
  • [40] J. Xiao, H. Ye, X. He, H. Zhang, F. Wu, and T. Chua (2017) Attentional factorization machines: learning the weight of feature interactions via attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 3119–3125. Cited by: §I, §II-C, §IV-A, §IV-B.
  • [41] M. Xu and S. Liu (2019) Semantic-enhanced and context-aware hybrid collaborative filtering for event recommendation in event-based social networks. IEEE Access 7, pp. 17493–17502. Cited by: §I, §II-B.
  • [42] Q. Yuan, G. Cong, and C. Lin (2014) COM: a generative model for group recommendation. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 163–172. Cited by: §II-C, §V-A2.
  • [43] W. Zhang and J. Wang (2015) A collective bayesian poisson factorization model for cold-start local event recommendation. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1455–1464. Cited by: §I, §I, §II-B.
  • [44] Y. Zheng, R. Burke, and B. Mobasher (2014) Splitting approaches for context-aware recommendation: an empirical study. In Proceedings of the 29th Annual ACM Symposium on Applied Computing, pp. 274–279. Cited by: §II-A.
  • [45] Y. Zheng, B. Mobasher, and R. Burke (2014) CSLIM: contextual slim recommendation algorithms. In Proceedings of the 8th ACM Conference on Recommender Systems, pp. 301–304. Cited by: §II-A.