Multiplayer games are popular among players, with recent statistics showing that frequent gamers play with others online for 6 hours a week on average. Social aspects of gaming are among the main motivators to play online multiplayer games [21, 38, 46, 47], which leads to many social interactions between players. Social interactions in online games have been linked with many positive outcomes, including the emergence of social capital [44, 101], forming and extending physical world relationships [96, 109, 119], and positive effects on psychological well-being; however, harmful or toxic interactions can also occur, which have negative ramifications for player experience [34, 59] and the overall health of game communities. Game developers spend time and effort addressing toxic game environments and communities [54, 89]; examples such as tribunals in League of Legends highlight the difficulties of assessing the quality of social interactions that occur between players and ensuring positive game communities.
To create and include game mechanics and community features that promote positive social interactions between players, developers must first be able to evaluate the quality of social interactions in their game; however, methods to do so are limited. Self-report measures assessing players’ subjective experience of social interactions exist [51, 60, 91], but are non-automated and hindered by guessing behaviour, retrospective bias, social desirability, and disruptiveness when administered during gameplay. Generally, questionnaires and interviews are impractical to administer after a game’s release. Using behavioural traces to predict self-reported experiences allows continuous, real-time, and unobtrusive assessment. Many such methods are directed at important experiences in single-player settings (e.g., frustration, boredom, fun) [19, 35, 68, 69], but few are suited to evaluating the social aspects of play that are important in multiplayer settings. As such, researchers need to develop methods for evaluating the quality of social interactions in multiplayer games in a way that is practically applicable—that is, using behaviour (in contrast to disruptive self-report measures) and unobtrusive sensors that do not interfere with the game experience or the social interactions between players.
In this paper, we build computational models of the quality of social interactions by mining players’ behavioural traces for cooperative dyads, as a first step toward such an assessment in multiplayer games. A variety of criteria are relevant for evaluating how players experience social interactions, such as the perception of cooperation and interdependence [24, 42], trust [24, 25], and supportiveness. We assess these qualities using the umbrella term affiliation and measure it with an 11-item scale used in previous work on social closeness. By predicting questionnaire responses that cover multiple criteria, we use player behaviour to differentiate between good and bad interactions. In addition to our main multimodal approach, prediction using single feature categories such as game performance could be beneficial in cases where data sources are limited, e.g., when players do not share their video for privacy reasons. As such, this paper aims to answer the following research questions:
RQ1: Is it possible to predict a player’s affiliation toward a co-player in an online multiplayer game setting from unobtrusively gathered behavioural data?
RQ2: How do models using only features from a single category (e.g., game performance, facial expressions) perform?
RQ3: Which behavioural traces are important features for predicting affiliation during play?
We conducted an online study with 46 participants, in which strangers were matched up in pairs and played a cooperative digital game together. During the game, participants were connected via audio chat for communication. Audio recordings, video recordings of the participants’ webcams while playing, in-game log data, and self-reported data were collected. As we are interested in a method that is practical for use in real gameplay settings, we focused on data that can be collected unobtrusively during gameplay, and avoided categories of features that required specialized hardware (e.g., physiological sensors based on skin contact) or explicit player input required in the moment of play (e.g., state-based self report measures). After playing, participants reported their affiliation toward each other. We employed a supervised machine learning approach, training models with the aim of predicting the participants’ self-reported level of affiliation.
Our results demonstrate that predicting affiliation using behavioural traces is possible, with up to 79.1% accuracy for binary classification and 20.1% explained variance for continuous measures. Further, an analysis of models using category-based feature subsets (e.g., in-game performance) shows that models based on verbal communication features (chronemics and communication content) perform best, demonstrating their high value for prediction. Finally, an analysis of feature importances gives first insights into the connection between player behaviour and social interaction quality, which can inform future hypotheses for controlled experiments studying causal relationships.
These findings can help researchers and practitioners who want to evaluate social interaction quality in online multiplayer games. Applications include game evaluation, mitigating toxic behaviour in published games, and improving matchmaking. This research is timely because the gaming industry increasingly trends toward games as a service, in which publisher revenues and player experiences both depend on healthy, ongoing communities. Our approach can help developers who require community health monitoring tools to identify shifts in their communities, evaluate new features, and tweak and optimize existing ones.
Our work is related to assessment approaches in gaming and to research on the relationship between social interactions and human behaviour, which informs the behavioural traces that we use as features.
2.1 Using Behaviour to Assess Social Interaction Quality
In this paper, we evaluate the feasibility of measuring affiliation in dyads, for which we consider interpersonal trust important. Trust has been used to characterize social relationships in computer-mediated communication [48, 51, 60, 91, 94], game settings, and social closeness in multiplayer games specifically. While there are questionnaires that measure self-reported trust [51, 60, 91], there is little previous work on automated and unobtrusive methods for affiliation assessment.
The detection of affiliation is, however, closely related to emotion recognition [19, 35], as it can be considered a method of measuring a user’s psychological state based on implicit signals. In a gaming context, the detection of emotion can be used to evaluate the quality of experiences or to adapt game features based on players’ emotional states [13, 14, 63, 82, 97, 107, 108]. Previous work has shown a relationship between emotion and trust, especially toward unfamiliar people, suggesting the value of features used in emotion recognition for the evaluation of affiliation in multiplayer games. While emotion recognition methods can inform our feature selection, they are rarely used to evaluate multiplayer settings, hinting at a lack of guidance on the assessment of social experiences.
On the other hand, the analysis of text messages to detect toxic behaviour in games [72, 106] is related to the negative consequences of harmful in-game social interactions. While these methods can be used to evaluate social interactions occurring in multiplayer settings, and thus inform our use of content-based conversational features, they are not helpful for assessing the positive outcomes of beneficial social interactions, and they further rely on objective criteria for what is considered toxic. Different players, however, can experience the same interactions or messages very differently.
In summary, we find that there is a lack of guidance on methods for the assessment of social interaction quality. Approaches to modeling affiliation between players should consider how a player experiences an interaction, not just the observable characteristics of the interaction itself. Therefore, in this paper, we examine whether players’ behaviour can be used to detect affiliation as experienced by players.
2.2 Potential Indicators of Affiliation
We rely on previous work studying the relationship between human behaviour and affiliation to inform which behavioural traces might be useful for the assessment of social closeness. Affiliation is important for understanding player behaviour, as it is a central motive for human behaviour according to Motive Disposition Theory. However, we aim to predict how players perceive social interactions comprising a broad spectrum of qualities that we have summarized under the umbrella term affiliation. To inform potentially relevant features, we build on literature from other fields, such as work and organizational psychology. In particular, we are interested in traces that can be collected unobtrusively with low-fidelity sensors in a natural gaming setting. We use features that are related to affiliation as well as features that can be used for emotion recognition, due to the established relationship between emotion and social closeness [28, 120].
Depping and Mandryk found that games requiring interdependence between players can build trust, an effect that was fully explained by a greater number of conversational turns during interdependent play. This suggests that features related to the timing of conversation should be explored as predictors of social interaction quality among players. Chronemic conversational analysis (i.e., analyzing the timing of conversations) in a game context may require different considerations than in productive or serious contexts. For example, past work has shown that the action of playing a game itself can be considered a conversational act; a gap in game-related conversation may therefore reflect increased communication through in-game moves, and a speaker who receives no verbal response may see the answer reflected in the other player’s in-game actions. Occurrences that might be considered negative in serious or productive contexts may be a normal part of communication between players in a game.
A major part of a social interaction is the communication between humans. In earlier work, Gilbert et al. used the content of communication in models to predict social ties between social network users. As we expect a similar connection between communication content and the quality of social interactions in a game context, we use it as a feature for our model.
Previous research showed that eye blink rate (EBR) is a non-invasive indicator of central dopamine function. Research suggests a link of dopamine to emotions [2, 83] and to reward function through social interactions [113, 114]. Therefore, we consider EBR a relevant feature for our model. Due to the importance of emotions, we include facial expression features based on previous work showing that facial action units can be used to detect emotional state [3, 77, 99].
Many games use challenges to elicit fun [64, 66], suggesting that player performance and experience are connected. Players’ in-game performance was used in previous research on gameplay adaptation and on emotion recognition in games, hinting at a potential importance for affiliation between players. We suspect that the utility of in-game measures might be high in multiplayer settings, in which players’ performance does not depend solely on themselves. In addition, we consider players’ in-game actions relevant, as they are potentially linked to their experience. As such, they might be informative for the assessment of affiliation and are therefore used in our model.
Besides features derived from players’ behaviour, we use a small set of relatively stable traits that can be collected outside of a game session through one-time self-reports. In particular, we added features based on previous work suggesting that perception of the other, e.g., trust, is affected by age, gender, the gender combination of people involved in an interaction [6, 95, 103, 104], and personality traits like agreeableness or propensity to trust [33, 80]. Finally, we use features based on identification as a gamer and preferred gaming style because of previous work suggesting a link between gaming frequency and social interaction.
3 Data Collection
We collected behavioural and self-report data in an online study of pairs of strangers playing a networked, cooperative, and interdependent game to create predictive models of affiliation.
3.1 Game: Labyrinth
We used Labyrinth (see Figure 1), a digital online multiplayer game based on a similarly named board game, which has been used in previous research studying social closeness in games [24, 25]. While the game has multiple modes, we used the cooperative and interdependent version of the game, as both of these mechanics are important for social closeness between players and we assume that a minimum level of affiliation must be built between players in order to model it computationally. In the game, players have to cooperate to gather collectibles (gems) by rearranging a maze, operating under a fixed time limit. The players have different roles: only the collector can pick up the gems to increase the shared score, and only the pusher can push walls to rearrange the maze and create paths for the collector. These game mechanics lead to cooperative and interdependent gameplay through shared goals and tightly-coupled complementary roles. Players communicate via audio chat during the game.
Labyrinth was developed in Unity and presented online using WebGL. Players were connected to a game server that managed synchronized game logic using the Unity Multiplayer HLAPI and WebSockets. The audio chat used WebRTC, a peer-to-peer communication standard for web browsers. In our setup, a server running Kurento Media Server was used instead of a peer-to-peer connection. This increased reliability when connecting users and allowed recording the participants’ audio streams.
3.3 Study Setting
The study was conducted in an online setting that balanced the need for a natural gaming environment with the need for experimental control. By conducting the study online, we were able to evaluate the social interactions between two strangers, both playing at home, who were matched by a matchmaking system—a natural scenario that happens in many commercial online games. Although we considered gathering data in the laboratory, the idea that local participants might know of each other, move in the same social circles, or interact under the assumption that they may meet again led to our decision to conduct the study online, avoiding uncontrollable effects of these factors on their interactions.
The study was conducted using Amazon’s Mechanical Turk (MTurk), a human-intelligence task market that has been shown to provide reliable data for HCI studies when special care is taken to verify data quality [10, 55, 73, 85]. Participants connected to a web server hosting the study. They were instructed on the procedure of the study and were required to provide informed consent. They completed an initial questionnaire measuring demographics, propensity to trust, and personality. Next, they watched a tutorial video explaining the study and how to play Labyrinth. They were then prompted to allow webcam and microphone access. A set of guidelines was shown reminding participants to ensure a good video (e.g., look at the screen), along with a live preview of their video to provide feedback. Note that participants did not see each other’s videos—we gathered them for our own data analyses, but video previews were not displayed during gameplay to their partner or to themselves, due to potential effects on social interaction [79, 110] and because neither is common in multiplayer games. They were then matched randomly with another participant, connected to each other via audio chat, and redirected to the Labyrinth game page. The game roles were assigned, and players were instructed via text on their role and how to start the game (i.e., by indicating that they were ready once both players had loaded the game). They then played Labyrinth for five minutes, received feedback on their performance, changed roles, and played a second round for an additional five minutes. Subsequently, they were disconnected and completed a concluding questionnaire about their experience.
Before the game, we measured the participants’ propensity to trust as a trait using the General Trust Scale, their Big 5 personality traits using the Ten-Item Personality Inventory, and their age and gender. In addition, they reported their self-identification as a gamer on a 100-point scale, enjoyment of game genres using a checklist, and BrainHex player type by choosing their preferred play style. Behavioural traces were based on data recorded during gameplay. We recorded the communication of participants through the audio channel on the server as synchronized raw audio files from start to end of the audio connection for both participants. Additionally, the participants’ webcam video, in-game events, and in-game performance (i.e., scored points) were recorded. After the game, we measured players’ affiliation using an 11-item scale based on items from other scales [51, 60, 91]. This scale was used to measure trust in games [24, 25], but covers other aspects such as honesty, fairness, and reliability, which we summarize as affiliation. We tested the scale on our data and found it to have excellent internal consistency (α = .952).
Participants were recruited using MTurk. Due to the nature of the task (matching and recording participants), it was easy to detect bots or participants who did not diligently complete the task, allowing for a simple assessment of data validity. All data (audio, video, game, questionnaire responses) were manually inspected, and pairs of participants were removed if the inspection suggested that they did not actually complete the full study or play with each other as intended. In particular, both participants had to have completed all questionnaires and scored at least one point, indicating attempted gameplay (as scoring a point was only possible if both participants played). They had to be connected via audio chat, which was verified by the presence of both audio files; these files were created when the audio connection closed and were thus only present if participants had successfully connected. In the end, we had valid data for 46 participants (female = 16, male = 30) aged 21 to 58 (M = 34.35, SD = 9.45). There were 23 pairs in the study (female–female = 2, male–male = 9, female–male = 12).
4 Prediction Models
We used the study data to generate prediction models.
As there has been little previous work guiding the implementation of a behaviour-based assessment of affiliation between players, we follow an exploratory approach. We collected features that we considered potentially related to players’ perception of the interaction, trained machine learning models, and evaluated their performance in predicting self-reported affiliation—a supervised learning task similar to earlier work predicting human ratings [37, 102]. Depending on the form of the outcome variable, i.e., what is being predicted, supervised learning tasks are tackled with classification (prediction of classes) or regression (prediction of continuous values) approaches.
In this paper, we operationalize the quality of social interactions through a self-reported measure of affiliation; we calculate a single score from the 11-item scale following previous work [24, 25], yielding a continuous affiliation score (range: 1–7), which makes prediction on this scale a regression task. Predicting binary classes (i.e., whether a player experiences low or high affiliation) is potentially easier than predicting exact scores, but can still suffice in practical applications. Thus, we also evaluate binary classification, with the goal of predicting low or high affiliation. However, there is no general objective criterion dividing the scale into population norms for low and high states of affiliation. As such, we used a median split on our distribution to generate low and high categories of affiliation and evaluated binary predictions on these classes (low: N = 24, high: N = 22).
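The median-split label construction can be sketched as follows (a minimal example with toy scores, not the study’s actual code; how scores lying exactly on the median were assigned is not stated in the paper, so this sketch sends them to the low class):

```python
import numpy as np

def median_split_labels(scores):
    """Binarize continuous affiliation scores at the sample median.

    Scores strictly above the median become high (1), the rest low (0).
    Illustrative sketch only; tie handling at the median is an assumption.
    """
    scores = np.asarray(scores, dtype=float)
    return (scores > np.median(scores)).astype(int)

labels = median_split_labels([2.1, 4.5, 5.0, 6.2, 3.3, 5.8])
```

A split derived this way is data-dependent: the threshold reflects the sample at hand rather than a population norm, which is exactly the limitation noted above.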
As we collected self-reported affiliation once, we only have a single label for each gameplay phase. Thus, we consider the data of a single player as a single sample, and calculate a variety of higher-order features describing the whole gameplay phase. A sample then consists of a feature vector for the whole gameplay phase and a corresponding affiliation label. Generally, we have a unique sample for each participant, but use a small set of features from data of both participants (e.g., communication). We take this approach because we assume that players’ experience of an interaction can diverge and we want to predict affiliation for each individual player. We tackle the potential correlation of affiliation scores within dyads in our cross-validation approach (described later). This approach allows us to use features from various sources as a combined input. While explicit time information of features is discarded, time information can still be used in features, e.g., for time of silence. Even though this approach is not necessarily optimal for separate data streams, it allows us to combine the different types of data streams into a single model, which makes it easy to evaluate the performance of different combinations of features, making it appropriate for addressing our research questions.
4.2 Data Preprocessing
Because our study was conducted online in participants’ homes, some aspects of the collected data were missing or unusable for some participants. As previously reported, we removed all pairs who did not actually play the game, whose questionnaire responses were invalid, or who did not successfully establish an audio connection. However, this does not imply a minimum amount of communication between the players (in the extreme case a participant could mute their audio despite the instruction to enable sound). We inspected the participants’ audio and video files to identify issues that would make them unusable, for example, poor framing or lighting, or the players not talking. Audio files were filtered to remove background noise, such as other people talking in the background, and synchronization was verified and adjusted if necessary. The framerates of videos were tested and standardized.
4.3 Feature Generation
Although all participants for which we generated features were considered valid, single features could be invalid due to a single unusable data stream (e.g., the video was unusable or the players did not talk). We retained the sample, marked single features as missing, and imputed them based on existing data at a later step to avoid losing too much valid data. In total, we generated 75 features that can be associated with seven categories (see Table 1). All features were generated in multiple steps: the source data was preprocessed and then basic features were extracted. Features were selected based on previous work hinting at their importance for social interaction quality (see Section 2 Background).
Chronemics: Previous work has found that the number of conversational turns predicts trust formation in games; however, conversational turns alone are a coarse reflection of the balance and timing in a conversation. To capture more detail about the timing of the conversation, we included a total of 12 chronemic features that were processed using custom software written in Python. Using the cleaned audio files, we started by splitting each participant’s audio into segments of speaking and pausing using amplitude-based thresholding. This yielded measures of total time speaking, count of speech segments, average length of a speech segment, average length of a pause segment, and SD of speech segment length. Next, we analyzed each pair’s audio files together and calculated conversational turns by merging pausing and speaking segments that were uninterrupted by a speaking segment from the other participant. We also analyzed silence in the conversation by comparing the pausing segments of the two participants. We calculated the total amount of silence in the conversation, the average length of each period of silence, the fraction of the conversation spent in silence, and the length of the first silence of the call (before either person spoke). Finally, we included two boolean features for each participant indicating whether they were the dominant speaker in the conversation, one generated from conversational turns and the second from speech time.
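The amplitude-based segmentation step can be sketched as follows (a minimal example; the frame length and threshold values are assumptions for illustration, as the paper does not report its exact parameters):

```python
import numpy as np

def chronemic_features(samples, rate, frame_ms=50, threshold=0.02):
    """Split a mono audio signal into speaking/pausing frames by amplitude
    and derive a few chronemic summary features. Illustrative sketch only."""
    frame_len = int(rate * frame_ms / 1000)
    n = len(samples) // frame_len
    frames = np.asarray(samples, dtype=float)[:n * frame_len].reshape(n, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))   # loudness per frame
    speaking = rms > threshold
    # a speech segment is a contiguous run of frames above the threshold
    changes = np.diff(speaking.astype(int))
    n_segments = int(np.sum(changes == 1)) + int(speaking[0])
    return {
        "TimeSpeaking": float(speaking.sum() * frame_ms / 1000),   # seconds
        "CountSpeechSegments": n_segments,
        "FractionTimeSilence": float(1.0 - speaking.mean()),
    }

# toy signal at 1 kHz: speech burst, pause, second speech burst
signal = np.concatenate([np.full(200, 0.5), np.zeros(200), np.full(100, 0.5)])
feats = chronemic_features(signal, rate=1000)
```

Pairwise features such as conversational turns and shared silence would then be computed by comparing the two participants’ speaking masks frame by frame.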
Communication Content: We generated 17 features relating to the content of players’ communication on a semantic level, based on word counts. Audio files were transcribed and then semantically analyzed using the Linguistic Inquiry Word Count (LIWC) tool. We used summary dimensions (Total Word Count, Analytic, Clout, Authentic, Tone), personal pronouns that could indicate players seeing themselves as single players or as a team (I, You, We), general dimensions related to social closeness (Social, Affiliation), dimensions that could be related to gameplay and scoring (Motion, Space, Time, Number), and affect dimensions (Affect, Positive Emotions, Negative Emotions).
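The underlying mechanism is dictionary-based word counting. A minimal sketch of the idea follows; the real LIWC dictionary is proprietary, so the word lists here are illustrative stand-ins, not LIWC categories:

```python
import re

# Stand-in vocabularies (NOT the LIWC dictionary) for a few categories.
CATEGORIES = {
    "PronounI": {"i", "me", "my", "mine"},
    "PronounWe": {"we", "us", "our", "ours"},
    "PosEmo": {"nice", "good", "great", "fun"},
}

def category_counts(transcript):
    """Count words per category in a transcript, LIWC-style (sketch)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = {name: 0 for name in CATEGORIES}
    for w in words:
        for name, vocab in CATEGORIES.items():
            if w in vocab:
                counts[name] += 1
    counts["TotalWordCount"] = len(words)
    return counts

c = category_counts("Nice move! We should push my row next, great work")
```

LIWC additionally normalizes most dimensions to percentages of total words, which makes transcripts of different lengths comparable.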
Eye Blink Rate: We generated two features based on eye blink rate, using Keras to count blinks. Frames of the videos were analyzed by extracting images of the eyes and using a convolutional neural network to predict whether the eyes were open or closed. This network was trained on the Closed Eyes in the Wild database, consisting of labelled images of faces with open and closed eyes from 2423 subjects. We validated the recognition using 3 short test videos and manually labelled frames. The algorithm achieved an F1 score of .989 on those videos, suggesting good performance even though our participants’ videos were noisier than the test data. Participant videos were manually inspected, and videos that were problematic for the blink detection (e.g., participants with glasses) were marked invalid. As a result, the EBR features of 12 people were discarded. For the remaining participants, blinks per second were calculated; to reduce noise, a moving average window (five seconds) was used. Blinks per minute were also calculated, and the average EBR (as well as its SD) was computed over the duration of the video. Data were marked invalid if the face could not be detected in more than 10% of frames or there was no phase of continuous detection over 5 minutes, which is considered a suitable time span for EBR in experimental designs.
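Given per-frame open/closed predictions from such a network, the EBR features can be derived as sketched below (a minimal example, not the authors’ code; the blink-onset definition is an assumption):

```python
import numpy as np

def blink_features(eye_closed, fps):
    """Derive EBR features from per-frame closed-eye predictions.

    `eye_closed` is a boolean array, one entry per video frame (True = eyes
    predicted closed). A blink is counted at each open->closed transition;
    a five-second moving average smooths the per-frame rate. Sketch only.
    """
    eye_closed = np.asarray(eye_closed, dtype=bool)
    onsets = np.flatnonzero(np.diff(eye_closed.astype(int)) == 1) + 1
    n_blinks = len(onsets) + int(eye_closed[0])
    duration_s = len(eye_closed) / fps
    indicator = np.zeros(len(eye_closed))
    indicator[onsets] = 1.0
    window = int(5 * fps)                      # five-second moving average
    smoothed = np.convolve(indicator, np.ones(window) / window, "same") * fps
    return {
        "BlinksPerMinute": n_blinks / duration_s * 60,
        "MeanBlinkRate": float(smoothed.mean()),   # blinks per second
        "StdBlinkRate": float(smoothed.std()),
    }

closed = np.zeros(100, dtype=bool)
closed[20:23] = True    # one blink lasting three frames
closed[60:62] = True    # a second blink
ebr = blink_features(closed, fps=10)
```

Only the mean and SD of the smoothed rate enter the final feature set (MeanBlinkRate, StdBlinkRate in Table 1).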
Facial Expressions: We added 16 features related to the players’ emotions, based on their facial expressions. We used the AFFDEX SDK to predict emotional state based on facial expressions. The SDK generates confidence scores between 0 and 100 in each frame for engagement, contempt, surprise, anger, sadness, disgust, fear, and joy, representing the strength of each emotion reflected in the player’s face for that frame. For each emotion, we calculated two features over the whole gameplay duration: a general measure of overall strength using the average predicted strength over all frames, and a count of strength peaks, defined as local maxima over a threshold of 50, to better reflect facial expressions of short duration. As the SDK only generates predictions when the face is visible, we calculated the ratio of frames in which the SDK recognized a face to the overall number of frames and considered all facial expression features for a sample valid only if this ratio was over 80%.
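The mean-and-peaks summary of one emotion channel can be sketched as follows (a minimal example on toy confidence values; the strict local-maximum definition is an assumption):

```python
import numpy as np

def expression_features(confidence, threshold=50):
    """Summarize one AFFDEX-style emotion channel (per-frame confidence,
    0-100) into the two features described above: mean strength over all
    frames and the count of local maxima above the threshold. Sketch only."""
    confidence = np.asarray(confidence, dtype=float)
    peaks = 0
    for i in range(1, len(confidence) - 1):
        if (confidence[i] > threshold
                and confidence[i] > confidence[i - 1]
                and confidence[i] > confidence[i + 1]):
            peaks += 1
    return {"Mean": float(confidence.mean()), "Peaks": peaks}

# toy per-frame joy confidences with two short expression peaks
joy = expression_features([0, 10, 80, 10, 0, 60, 90, 55, 0, 20])
```

Counting peaks in addition to the mean matters because a brief, strong smile barely moves the average over a ten-minute session but is clearly visible as a local maximum.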
In-Game Performance: We calculated 12 features relating to in-game performance based on score, as we expected that a player’s impression of their co-player might vary if they perform better or worse. We added measures of performance using overall, average, minimum, and maximum score. In addition, we added scores for each round, absolute and relative score difference between rounds as measures of the development of performance over time, score for each role, and absolute and relative score difference between roles as a measure of role-based performance.
In-Game Behaviour: We used two features as a measure of the amount of in-game actions that players performed: number of horizontal and vertical pushes of game board tiles.
Table 1: Generated features by category.

| Category | Features |
| --- | --- |
| chronemics | TimeSpeaking, CountSpeechSegments, CountConversationalTurns, AvgSpeechSegmentLength, AvgPauseSegmentLength, SDSpeechSegmentLength, IsDominantSpeakTime, IsDominantConvTurns, TimeSilence, FractionTimeSilence, AverageSilenceLength, FirstSilenceLength |
| comm. content | CountTotalWords, CountWordsAnalytic, CountWordsClout, CountWordsAuthentic, CountWordsTone, CountWordsPronounI, CountWordsPronounWe, CountWordsPronounYou, CountWordsNumber, CountWordsAffect, CountWordsPosEmo, CountWordsNegEmo, CountWordsSocial, CountWordsAffilitation, CountWordsMotion, CountWordsSpace, CountWordsTime |
| eye blink | MeanBlinkRate, StdBlinkRate |
| in-game behaviour | CountVerticalPushes, CountHorizontalPushes |
| fac. expr. | EngagementMean, EngagementPeaks, ContemptMean, ContemptPeaks, SurpriseMean, SurprisePeaks, AngerMean, AngerPeaks, SadnessMean, SadnessPeaks, DisgustMean, DisgustPeaks, FearMean, FearPeaks, JoyMean, JoyPeaks |
| performance | ScoreRound1, ScoreRound2, ScoreCollector, ScorePusher, ScoreDiffRounds, ScoreAbsDiffRounds, ScoreDiffRole, ScoreAbsDiffRole, ScoreOverall, ScoreMean, ScoreMin, ScoreMax |
| self-report | Age, GamerIdentification, GenrePuzzles, GenreCasual, SameGenderCoPlayer, Gender, GenderCoPlayer, Extraversion, Agreeableness, Conscientiousness, EmotionalStability, Openness, PropensityToTrust, BrainhexSocializer |
Self-Report Traits: We added 14 features based on trait-based self-report measures. We used five features representing the Big 5 personality traits and a single feature for propensity to trust, as they are important for the perception of social interactions. Gaming preference features were self-identification as a gamer, boolean features for enjoyment of casual games and puzzle games (i.e., Labyrinth’s genres), and the BrainHex Socializer class. Finally, we added features for player age and gender, co-player gender, and gender pairing (boolean same/mixed).
After feature generation, the dataset consisted of 46 samples, each representing a single participant with a vector of 75 features. We trained classifiers and regression models (regressors) for the prediction of binary affiliation and continuous level of affiliation, respectively. To select suitable models, we trained and tested models using the approach outlined below. We tested Naive Bayes, Logistic Regression, Random Forests (RF), and Support Vector Machines (SVMs). RF [8, 57] and SVM performed best, thus we chose them as the models to evaluate our research questions.
Before training, invalid data were treated as missing and estimated using multivariate feature imputation based on MICE, with ridge regression models predicting missing data from the other features. Hyperparameters for all models were determined using repeated 10-fold cross-validated grid search (see Table 2 for the parameter grid); the best-performing parameters were subsequently used to train the models. The RF models used 128 fully expanded trees, as suggested in previous work. The SVM models used a radial basis function as kernel. We employed Scikit-learn 0.20 in our implementation.
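The MICE-style imputation step can be sketched with scikit-learn’s `IterativeImputer` (shown with the current API, which is experimental and may differ from what was available in version 0.20; toy data for illustration):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Ridge

# Each feature with missing values is modelled from the other features by a
# ridge regressor, iterating until the imputations stabilize (MICE-style).
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, np.nan],
              [3.0, 6.0, 9.0],
              [4.0, np.nan, 12.0]])
imputer = IterativeImputer(estimator=Ridge(alpha=1.0), random_state=0)
X_full = imputer.fit_transform(X)  # missing cells replaced by predictions
```

In a cross-validation setting, the imputer should be fit on the training split only and then applied to the test split, to avoid leaking test information into the imputed values.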
Training was performed using leave-2-groups-out cross-validation (CV). We separated 4 samples from 2 dyads from the data set as a test set and trained a model on the remaining 42 samples—repeated for all possible combinations of selecting 2 dyads as the test set. We employed this approach as it covers the whole data set and provides a good trade-off between bias and variance due to the split size, close to that of k-fold CV. We preferred it over traditional k-fold CV because it keeps the samples of a dyad on the same side of the training/test split, and over leave-one-out, which has increased variance. The high computational cost was not an issue due to our comparably small number of samples. We repeated the CV 10 times to reduce the variance of performance estimates, which can be a problem with small sample sizes. With this CV approach, we trained a completely new model for each training set and tested it on data that is unknown to the model—in contrast to approaches that refine a single model with train and test data and thus require a separate holdout set. As such, our CV approach allows an assessment of out-of-sample prediction, i.e., how well a model using the same features could predict affiliation on similar data. Therefore, if predictions are better than random chance with our cross-validation approach, predictions are likely to be similarly accurate on comparable data not present in our data set.
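The dyad-aware splitting can be sketched with scikit-learn’s `LeavePGroupsOut`, where the group id encodes dyad membership (toy data with 5 dyads; the study itself had 23):

```python
import numpy as np
from sklearn.model_selection import LeavePGroupsOut

# Both members of a dyad share a group id, so the held-out samples never
# share a dyad with any training sample. Sketch only, not the study's code.
n_dyads = 5
groups = np.repeat(np.arange(n_dyads), 2)   # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
X = np.random.rand(len(groups), 3)
y = np.random.rand(len(groups))

splitter = LeavePGroupsOut(n_groups=2)
splits = list(splitter.split(X, y, groups))
test_sizes = [len(test_idx) for _, test_idx in splits]
# number of splits = C(n_dyads, 2); each test set holds 2 dyads x 2 players
```

With 23 dyads this yields C(23, 2) = 253 train/test splits per repetition, which is the "whole data set" coverage mentioned above.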
(Table 2: hyperparameter grid covering min samples leaf and min samples split for the Random Forests, and the Support Vector Machines parameters.)
To gain insights into the relevance of features, we trained RF regressors on the whole data set with recursive feature elimination using the same cross-validation approach (cf. [41, 98]). This method recursively trains models with smaller feature subsets until an optimal solution is reached; here, the optimum was reached with 9 features (see Table 4). We used a model containing these features (the best features model) alongside the other models as a candidate best model. In addition, we trained models with subsets of features for each feature category to test whether a single category suffices, e.g., when there is no access to video data.
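The cross-validated recursive feature elimination can be sketched with scikit-learn's RFECV; this is a hedged sketch on a smaller synthetic data set (10 dyads, 6 features, one deliberately informative), not the paper's actual features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import LeavePGroupsOut

rng = np.random.RandomState(0)
X = rng.rand(20, 6)                       # 10 dyads x 2 samples, 6 features
y = X[:, 0] + 0.1 * rng.rand(20)          # feature 0 is clearly informative
dyads = np.repeat(np.arange(10), 2)

# Recursively eliminate features, scoring each subset with the same
# leave-2-groups-out CV (materialized as a list so it can be reused)
cv_splits = list(LeavePGroupsOut(n_groups=2).split(X, y, groups=dyads))
selector = RFECV(
    RandomForestRegressor(n_estimators=16, random_state=0),
    step=1,
    cv=cv_splits,
)
selector.fit(X, y)
print(selector.n_features_)  # size of the optimal feature subset
print(selector.support_)     # mask of the selected features
```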
RQ1 and RQ2 concern model performance. In particular, we are interested in whether affiliation can be predicted with a model using our features in general (RQ1) and with models using features from single categories (RQ2). In both cases, we compare performance to baselines. To the best of our knowledge, no previous work has tried to predict self-reported affiliation based on behavioural traces of players. Thus, we evaluated model performance in comparison to baselines that do not use our feature set. If a model performs better than its baseline, the combination of features has value for the prediction of affiliation. For binary classification, the baseline was random guessing (F1 = .50). For the regression models, we used prediction of the data set mean  (R² = 0.00) as baseline, as it outperformed median prediction and random guessing is inappropriate for continuous data. Figure 2 shows performance measures for RF and SVM models for the prediction of binary (F1) and continuous (R²) affiliation, respectively.
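The two baselines can be reproduced with scikit-learn's dummy estimators on placeholder data: predicting the training mean yields R² = 0 on that data by construction, and uniform random guessing on balanced binary labels lands near F1 = 0.5.

```python
import numpy as np
from sklearn.dummy import DummyClassifier, DummyRegressor
from sklearn.metrics import f1_score, r2_score

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                 # placeholder features
y_cont = rng.rand(100)               # continuous affiliation (placeholder)
y_bin = np.tile([0, 1], 50)          # balanced binary affiliation

# Mean prediction: R^2 is exactly 0 on the data the mean was computed from
mean_model = DummyRegressor(strategy="mean").fit(X, y_cont)
r2 = r2_score(y_cont, mean_model.predict(X))

# Uniform random guessing: F1 hovers around 0.5 for balanced classes
guess = DummyClassifier(strategy="uniform", random_state=0).fit(X, y_bin)
f1 = f1_score(y_bin, guess.predict(X))

print(round(r2, 4), round(f1, 2))
```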
To account for variance in model performance, we used statistical tests comparing performance to the baseline. However, frequentist t-tests and ANOVAs are not appropriate for this comparison, because performance measures for a model are not independent of one another when gathered with repeated CV (cf. [7, 22, 26, 112]).
5.1 Bayesian Analysis: A Primer and Description
To avoid the potential issues of using frequentist hypothesis tests for comparing classifier performance, we followed the recent recommendation of Benavoli et al. , who proposed using Bayesian analysis. We compared models using Bayes factors, a popular method of Bayesian hypothesis testing. A Bayes factor is the ratio of the likelihoods of observing some specific data under two statistical models ; in other words, it measures how likely the data are under a null hypothesis H0 relative to an alternative hypothesis H1. The evaluation of whether a model is better than a baseline can be formulated as a hypothesis H1, that its accuracy measures are higher than the baseline score, which can then be tested with a Bayesian t-test. For example, for a hypothetical Bayes factor BF10 = 6.33, the data are 6.33 times more likely under H1 than under H0 . For Bayesian analyses, we used JASP , a graphical tool providing Bayesian equivalents to one-sample t-tests using an implementation of the JZS t-test as described by Rouder et al. . In our analyses, we used objective, default Cauchy priors centered around 0 and evaluated robustness using different sensible prior widths [12, 90]. For all Bayes factors, we report the raw BF10. We use the interpretation of JASP , which is based on earlier interpretations [50, 61], to provide context. For example, a BF10 between 10 and 30 provides strong evidence for H1 over H0. In addition, we report posterior estimates for the effect sizes as median Cohen's d.
5.2 RQ1: Recognition of Affiliation
In RQ1, we ask: Is it possible to predict a player's interpersonal affiliation toward a co-player in an online multiplayer game setting from unobtrusively gathered behavioural data? The results suggest that performance varies considerably with the selected features and model. For RQ1, we are interested in models with features from multiple categories, i.e., the all and best features models. Among these, the SVM models performed better than the RF models for classification, while the RF regressors outperformed the SVM regressors. Unsurprisingly, the best features models were better than models using all features, as they disregard potentially uninformative features. Setting single-category models aside for now, the best general models were the SVM best features model for classification and the RF best features model for regression. We compared these to the baseline for RQ1. Table 3 shows results of Bayesian one-sample t-tests comparing all models to their respective baselines.
A Bayesian one-sample t-test tested H1, that F1 scores of the SVM best features classifier were higher than the random guess baseline (F1 = 0.5). The results suggest very strong (BF10 = 63.23) evidence of the data for H1 over H0 (median d = 1.22). In fact, the other models with best features and even all features might be more suitable due to lower variance. This shows that models using behavioural traces beat the random guess baseline—with 67.7% accuracy for the best features model—suggesting that predicting binary affiliation is possible with these features.
Similarly, a Bayesian one-sample t-test tested H1, that R² scores of the RF best features regressors were higher than baseline regression performance (R² = 0.0). The results provided extreme (BF10 = 815.38) evidence of the data for H1, i.e., that R² scores were higher than baseline. This model showed the highest regression performance, with 19.6% explained variance (not considering single-category models). This shows that predicting continuous affiliation with our features is better than simply predicting the mean affiliation score, which suggests that predicting continuous affiliation is possible as well.
In summary, the data suggest that our models can predict binary and continuous affiliation better than chance, indicating that an evaluation of social interaction quality using behavioural traces is possible. This means that a game can generate features for a gaming session and use them to predict how players experienced the interaction with other players. This works without any additional input from humans, allowing extensive insights into social player experience, while also allowing researchers to use this information in automated systems, such as for improved matchmaking.
Table 3. Bayes factors (BF10) and posterior median effect sizes (Cohen's d) for each model compared to its baseline: F1 higher than 0.5 (classification) and R² higher than 0.0 (regression).

| Model | BF10 (F1 > 0.5) | median d | BF10 (R² > 0.0) | median d |
|---|---|---|---|---|
| best features (RF) | 193.90 | 1.51 | 815.38 | 1.94 |
| best features (SVM) | 63.23 | 1.22 | 49.17 | 1.18 |
| comm. content (RF) | 2546.07 | 2.37 | 36.61 | 1.09 |
| comm. content (SVM) | 130617.05 | 4.29 | 5.22 | 0.68 |
| eye blink (RF) | 2.47 | 0.54 | 0.01 | 0.08 |
| eye blink (SVM) | 49321.29 | 3.70 | 0.10 | 0.12 |
| facial expr. (RF) | 0.13 | 0.09 | 0.03 | 0.09 |
| facial expr. (SVM) | 55007.78 | 3.80 | 0.14 | 0.11 |
| in-game behaviour (RF) | 438.08 | 1.80 | 1958.32 | 2.19 |
| in-game behaviour (SVM) | 0.19 | 0.13 | 0.02 | 0.11 |
5.3 RQ2: Using Single Category Models
RQ2 asks: How do models using only features from a single category perform? This is interesting because a single-category model would allow the evaluation of social interactions even if researchers have access only to specific data streams, such as players’ voice chat or only in-game data. Such a model could be desirable because not all data sources are available in every game context, and some might not be accessible at all due to restrictions related to privacy or ethics (see Discussion). We tested models using only features from each category to investigate the performance of single-category models. Table 3 shows the results of the Bayesian t-tests comparing performance to the baselines.
Regarding classification, RF models showed promise for models using in-game data (in-game behaviour & performance), whereas SVM classifiers outperformed RF classifiers for the features gathered from video data (eye blink & facial expression). Overall, the results suggest that for each category there is a model with acceptable accuracy, suggesting that single-category models might be useful to varying degrees. The best results were achieved by the models based on verbal communication (communication content & chronemics), where the t-tests strongly suggested better-than-baseline performance. Performance measures for the SVM communication content model suggested the highest likelihood of being better than the baseline, indicating the best classification performance and outperforming the best features model.
Results were more varied for the regression models. Bayesian one-sample t-tests suggested that 7 models performed better than the baseline. For 11 models, the tests suggested evidence for H0, i.e., that performance was not better than baseline; these included all video-based feature sets (eye blink & facial expression), self-report features, most in-game data feature sets, and even the SVM all features set. The worse performance in comparison to the classification task can be explained by the higher difficulty of predicting continuous affiliation and the stronger baseline method (predicting the mean vs. random guessing). R² scores below zero occur when a model's predictions on the test set are worse than simply predicting the mean. This suggests that in these cases the models overfit the training data and a general relationship is unlikely, leading to unsuitable predictions on new data. On the other hand, the models with best features and communication content, as well as the RF regressors using events and chronemics, performed better than the baseline. Results suggest the best performance for the RF regressors using communication content and chronemics (the latter with lower variance).
5.4 RQ3: Feature Importance
RQ3 asks: Which behavioural traces are important features for predicting affiliation during play? To gather first insights into the relationship between player behaviour and affiliation, we evaluated feature importances. We consider features that are important for prediction as potential indicators of affiliation. Table 4 shows the 9 features of the optimal feature set as determined by the cross-validated recursive feature elimination with RF regressors.
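One way to obtain a first, uncontrolled reading of feature relevance is the RF's own impurity-based importances; here is a minimal sketch on synthetic data with hypothetical feature names (not the paper's actual features):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
names = ["pause_time", "turns", "pronoun_I", "analytic"]   # hypothetical
X = rng.rand(60, 4)
y = 2 * X[:, 1] - X[:, 0] + 0.1 * rng.rand(60)             # turns dominate

rf = RandomForestRegressor(n_estimators=128, random_state=0).fit(X, y)
ranking = sorted(zip(rf.feature_importances_, names), reverse=True)
for importance, name in ranking:
    print(f"{name}: {importance:.3f}")
```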
Higher affiliation coincided with more communication in the form of lower overall conversational pause time and a higher number of conversational turns. Four communication content features were also in the set of important features. Higher affiliation was reflected in fewer words related to analytic thinking and numbers, as well as greater usage of the pronoun “I”—including variations like “me”—and words related to time. Further analysis is needed to characterize the relationship between communication content and affiliation; however, we speculate that analytic language and number words, such as those used to discuss scores, reflect players explicitly voicing their motivation to perform well rather than relying on their partner, engendering less affiliation. This stands in contrast, however, to the positive relationship between words related to time and affiliation, considering that time limits were an important part of gameplay. The positive correlation of affiliation and “I”-related words might be related to players communicating their actions, leading to a greater feeling of being a team. It could even suggest that players revealed personal information about themselves to the other player, in line with research on privacy suggesting a link between trust and self-disclosure . Interestingly, the number of horizontal pushes was included in the optimal feature set, whereas scores were only slightly correlated. This suggests that features of the process of playing can be valuable for prediction when used in combination with other features. Finally, in line with previous work on personality traits and trust  and trustworthiness , higher affiliation coincided with higher propensity to trust and conscientiousness. Keep in mind that the reported correlation scores are not corrected for dyads and are therefore likely overestimated.
Due to the exploratory nature of this work, we did not conduct analyses controlling for the relationships amongst features, as this would lead to unreliable estimates of effects and significance that could be misinterpreted. We report these feature importances to give an overview of the direction of relationships and to inform future work with controlled experiments; our results do not support deeper claims about the connection between features and affiliation.
We discuss findings, generalizability, and application.
6.1 Summary of Findings
We summarize our findings as follows: (1) Affiliation can be predicted from player behaviour. Our results show that the prediction of both binary and continuous affiliation is possible with up to 79.1% accuracy and 20.1% explained variance. (2) The best models strongly outperformed the baseline models, suggesting that reliable recognition of social interaction quality based on behavioural traces is possible and feasible. (3) Binary affiliation can be predicted with accuracy better than chance from various sets of features (2 models > 70% accuracy, 14 better than baseline, 2 not useful). (4) Predicting continuous affiliation is possible but more challenging (4 models > 15% explained variance, 3 better than baseline, 11 not useful). (5) Models using only communication content or chronemics performed best for both classification and regression, indicating the value of features based on verbal communication.
6.2 Generalizability of Findings
Our approach applies machine learning methods to gameplay data, i.e., human behaviour and self-reported appraisal of interpersonal interaction. Because this deviates from traditional experimental studies and their analyses, we provide context on our findings and their generalizability.
First, with respect to RQ1, we demonstrated validity by showing that binary and continuous predictions are possible with up to 79.1% accuracy and 20.1% explained variance on data unknown to the models. Due to potential bias in selection, we did not use a dedicated holdout set. Thus, our assessment of generalizability is limited to the cross-validated performance, which is an estimate of the out-of-sample performance that could be expected for players in similar scenarios. While we cannot provide a conclusive assessment of generalizability, this paper suggests the validity of our proposed novel approach to unobtrusively assess social interaction quality in games. With our cross-validation, we found that some models likely overfit, as is common with a high number of features relative to the number of samples. We suspect that these might not generalize well beyond our sample: based on cross-validation, they should perform well for similarly behaving players, but further studies are required to confirm generalizability to other players. The analysis of models with fewer features (e.g., chronemics), where overfitting is less likely, reinforces the potential generalized performance of this approach.
Second, our intention was not to study whether and how a specific behavioural trait is related to affiliation. As such, the analysis of feature importances does not provide generalizable insights into the relationship between behaviour and affiliation. Based on the analysis of the feature importances, we provide a set of features that in combination is important for predicting affiliation. Correlation measures give potential insights into the relationship of the variables, but with our approach we cannot meaningfully control for interaction effects or correlations amongst these variables without overestimating effects. While we cannot draw conclusions on the general relationship between our variables, our results can be used to inform hypotheses in future controlled experiments that allow for causal inference.
Third, some readers might wonder whether better-than-chance prediction rates are good enough for real-world application. Decades of research on emotion recognition have shown that assessing complex psychological states is challenging and that whether performance is "good enough" depends on the context. Our paper demonstrates that predicting affiliation based on human behaviour is possible with acceptable performance, suggesting the validity of the approach. Its real value then depends on the use case. Predictions are likely better for players known to the models and are therefore useful in published games where a state is predicted repeatedly, e.g., for each match. Further, models can be valuable without high overall accuracy if they are specifically trained to detect particularly bad or good experiences (i.e., low/high affiliation), because such states are more relevant for assessing and evaluating game features, e.g., whether a new chat feature leads to many negatively perceived interactions.
6.3 Detecting Social Interaction Quality in Games
People may experience the same social interaction differently depending on their context, personality, or previous experience. Whereas approaches such as the detection of toxic behaviour [72, 106] try to assess if a message is toxic or offensive, they mostly assume that there is a generally agreed-upon definition of what they aim to predict, i.e., a negative interaction. We argue there is no general truth of the social interaction—what one person (or algorithm) considers harmless might deeply offend another player, while a third may think it is a hilarious and integral part of the in-game interaction between friends. Further, one player may think that a comment is funny one day and hurtful the next, depending on their mood and the circumstances. An assessment of social interactions must be grounded in the appraisal of the experience of this interaction. More generally, the way in which people interact with each other is highly complex, and to capture the full degree of how each player experiences an interaction, a system needs to examine a variety of very subtle cues. For example, a player blushing after another player helped them out might be a good indicator for a positive social interaction that can be interpreted easily by a human but is difficult to assess for an algorithm. We propose that our work provides a first step toward the goal of evaluating the experience of social interaction in games by showing that it is possible to predict self-reported affiliation. While many essential questions remain to be answered, this paper establishes the potential of this approach.
6.3.1 Application of Affiliation Recognition
The assessment of social experiences is useful for game developers to evaluate multiplayer games. The design and development of multiplayer games is very expensive and thus carries substantial financial risk; our approach can help inform low-fidelity and unobtrusive measurement tools that would be valuable for games-as-a-service to assess the quality of social experiences. While we used it for a single assessment, this approach can provide continuous measurement, allowing an evaluation of how social interaction quality progresses over time and attribution of the experience of a social interaction to specific micro-events by creating windows of gameplay phases. This can help game developers assess an essential aspect of multiplayer experiences for which there are surprisingly few automated, unobtrusive, and continuous measurement tools.
In our analysis, we used an offline feature generation pipeline, but all steps could be applied online in real time. In a real application, the prediction models can be pre-trained and improved over time; the same feature generation pipeline produces the feature vectors that the model takes as input, and predictions are computationally cheap. Implementing such a pipeline allows an automated, continuous, real-time evaluation of player states.
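A hedged sketch of such an online setup: a model pre-trained offline answers per-window prediction requests. The window callback, feature count, and data below are hypothetical stand-ins, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)

# Offline phase: pre-train on historical sessions (synthetic stand-in data)
X_hist = rng.rand(200, 9)                 # e.g., the 9 best features
y_hist = X_hist @ rng.rand(9)             # placeholder affiliation scores
model = RandomForestRegressor(n_estimators=128, random_state=0)
model.fit(X_hist, y_hist)

def on_window_end(feature_vector):
    """Called when a gameplay window closes; prediction is cheap."""
    return float(model.predict(feature_vector.reshape(1, -1))[0])

# Online phase: the same feature pipeline feeds vectors to the model
for _ in range(3):
    score = on_window_end(rng.rand(9))
    print(f"predicted affiliation: {score:.2f}")
```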
In this paper, we propose the behaviour-based assessment of social interaction quality, but not how to use this knowledge in a game to improve the interaction between players. Improving problematic interpersonal interactions is a difficult problem for which an easy solution might not exist. Developers of multiplayer games (like those of other platforms on which humans interact) need to address hostile, toxic, and otherwise negative interpersonal interactions, which is a challenging and perpetual task. Our work demonstrates that affiliation can effectively be predicted based on behaviour, and further provides guidance on feature subsets that are particularly salient. We hope that game developers can use our findings and that our work contributes to a shared effort of industry practitioners and academic researchers to create healthier, more positive environments for players, in which the risk of negative and toxic interactions is minimized.
In addition to measuring the quality of social interactions to inform design and development of games and game communities, our findings have interesting applications in adaptive gaming. In many multiplayer games, players are assigned to teams with strangers by matchmaking systems. Games that are able to detect the quality of social interactions as experienced by players can directly leverage this information without requiring additional explicit input from players. A system that can automatically detect the state of a player can react accordingly, or better yet, can preventatively act when it predicts that a player could experience a state before it even happens. Adaptation and preventative action are major benefits of using an automated assessment of how players experience a social interaction. A system could prevent toxicity in a game community by giving appropriate feedback to players whose behavior is experienced as hostile or by protecting victims of negative behavior, e.g., by censoring hurtful messages with an approach that is aware of how players perceive these messages. As such, a computational model in combination with an adaptive system that knows how to handle varying quality of social interactions can not only assess experiences but provide user-specific solutions like directed feedback or matchmaking that finds suitable partners, while acknowledging that people differ and sometimes interpret and experience the same social interaction from very different perspectives.
6.3.2 Group Dynamics
Our overall goal is to recognize the quality of social interactions in multiplayer games. In this paper, we avoided group dynamics and operationalized social interaction quality as affiliation between dyads in a cooperative interdependent game. Commercial multiplayer games involve a variety of other contexts and considerations. While we suspect that it is still possible to assess an individual's—as well as the group's—overall experience of the social interaction in settings with multiple co-players, this requires future research. In particular, it seems challenging to account for group dynamics and attribute a player's experience to specific co-players. Similarly, we suspect that our findings are in part limited to the cooperative setting that we employed, and a generalization to the quality of social interactions with opponents in competitive settings requires further research. We think that recognizing the quality of social interactions in group settings is challenging, but could benefit games that struggle with negative social experiences such as toxicity.
In our approach, we predict unidirectional affiliation and consider the behavioural traces of a player as indicative of their affiliation for an initial investigation of feasibility, while mostly disregarding the other player. We suspect that performance can be improved by leveraging the fact that a player's appraisal of the quality of a social interaction probably depends on the behaviour of all involved players. A player's affiliation toward a teammate is affected by what that teammate says, e.g., whether they use supportive or hurtful language. As the strong performance of the communication features suggests, models that use features from all involved players in an interaction are likely better suited to evaluate its quality. As such, we think that models using information from all players to predict affiliation for each of them are a promising direction for future work.
6.3.3 Privacy and Ethical Considerations
Our method relies on collecting behavioural data, which can affect players' privacy. First, it is important that such methods are used only sparingly and with the informed consent of users. Different types of data affect privacy differently. Features based on audio and video streams rely on the analysis of player behaviour in the physical world, which encompasses unrelated activities during gameplay (e.g., eating), the surrounding environment (e.g., letters on a desk), or other people (e.g., family members talking in the background). While it is important that all data is treated ethically, there is a higher danger of mismanagement with such data than with less critical data, such as in-game behaviour and performance. As there are trade-offs between functionality and privacy , affecting players' privacy is not always worth it. While this privacy intrusion might be worthwhile to prevent hurtful messages, collecting video data for better matchmaking is potentially unnecessarily invasive. As such, it is important that developers consider these trade-offs and, where possible, use less invasive features such as in-game data. Generally, raw data should not be stored on centralized servers. Instead, feature generation and models can be implemented client-side in anonymized form, which poses fewer privacy problems. However, a game that predicts affiliation from player behaviour must be certain to provide a benefit to players and may only use such data with their explicit informed consent.
This paper provides evidence that behavioural traces can be used to reliably predict players’ affiliation toward a co-player in a dyadic cooperative online game. Our results suggest this is possible to varying degrees with many different types of features, i.e., with models using only chronemics, communication content, in-game events, or in-game performance features. This work can assist game developers by building toward a powerful, automated, and continuous method of evaluating the quality of social interactions as they are experienced by players.
We thank NSERC and SWaGUR for funding, members of our labs for feedback, and our participants.
-  Rajendra D Badgaiyan. 2010. Dopamine is released in the striatum during human emotional processing. Neuroreport 21, 18 (2010), 1172. DOI:http://dx.doi.org/10.1097/wnr.0b013e3283410955
-  Marian Stewart Bartlett, Gianluca Donato, Javier R Movellan, Joseph C Hager, Paul Ekman, and Terrence J Sejnowski. 1999. Face image analysis for expression measurement and detection of deceit. In Proceedings of the sixth joint symposium on neural computation. 8–15.
-  Claudia Beleites, Richard Baumgartner, Christopher Bowman, Ray Somorjai, Gerald Steiner, Reiner Salzer, and Michael G. Sowa. 2005. Variance reduction in estimating classification error using sparse datasets. Chemometrics and Intelligent Laboratory Systems 79, 1 (2005), 91 – 100. DOI:http://dx.doi.org/10.1016/j.chemolab.2005.04.008
-  Alessio Benavoli, Giorgio Corani, Janez Demšar, and Marco Zaffalon. 2017. Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis. The Journal of Machine Learning Research 18, 1 (2017), 2653–2688.
-  Dianne Bevelander and Michael John Page. 2011. Ms. Trust: Gender, Networks and Trust–Implications for Management and Education. Academy of Management Learning & Education 10, 4 (2011), 623–642. DOI:http://dx.doi.org/10.5465/amle.2009.0138
-  Remco R Bouckaert. 2003. Choosing between two learning algorithms based on calibrated tests. In ICML, Vol. 3. 51–58.
-  Leo Breiman. 2001. Random forests. Machine Learning 45, 1 (2001), 5–32.
-  Jason Brownlee. 2014. How To Get Baseline Results And Why They Matter. https://machinelearningmastery.com/how-to-get-baseline-results-and-why-they-matter/. Accessed: 2019-09-11. (2014).
-  Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science 6, 1 (2011), 3–5. DOI:http://dx.doi.org/10.1177/1745691610393980 PMID: 26162106.
-  Stef van Buuren and Karin Groothuis-Oudshoorn. 2010. mice: Multivariate imputation by chained equations in R. Journal of statistical software (2010), 1–68. DOI:http://dx.doi.org/10.18637/jss.v045.i03
-  Rickard Carlsson, Ulrich Schimmack, Donald R. Williams, and Paul-Christian Bürkner. 2017. Bayes Factors From Pooled Data Are No Substitute for Bayesian Meta-Analysis: Commentary on Scheibehenne, Jamil, and Wagenmakers (2016). Psychological Science 28, 11 (2017), 1694–1697. DOI:http://dx.doi.org/10.1177/0956797616684682 PMID: 28910202.
-  Guillaume Chanel, Cyril Rebetez, Mireille Bétrancourt, and Thierry Pun. 2008. Boredom, Engagement and Anxiety as Indicators for Adaptation to Difficulty in Games. In Proceedings of the 12th international conference on Entertainment and media in the ubiquitous era. ACM, 13–17. DOI:http://dx.doi.org/10.1145/1457199.1457203
-  Guillaume Chanel, Cyril Rebetez, Mireille Bétrancourt, and Thierry Pun. 2011. Emotion assessment from physiological signals for adaptation of game difficulty. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 41, 6 (2011), 1052–1063. DOI:http://dx.doi.org/10.1109/tsmca.2011.2116000
-  François Chollet and others. 2015. Keras. https://keras.io. (2015).
-  April K. Clark and Marie A. Eisenstein. 2013. Interpersonal trust: An age-period-cohort analysis revisited. Social Science Research 42, 2 (2013), 361 – 375. DOI:http://dx.doi.org/10.1016/j.ssresearch.2012.09.006
-  Adam Cook. 2018. How games as a service are changing the way we play. https://www.redbull.com/ie-en/games-as-a-service-changing-gaming-forever. Accessed: 2019-09-18. (2018).
-  Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning 20, 3 (1995), 273–297.
-  Roddy Cowie, Ellen Douglas-Cowie, Nicolas Tsapatsoulis, George Votsis, Stefanos Kollias, Winfried Fellenz, and John G Taylor. 2001. Emotion recognition in human-computer interaction. IEEE Signal processing magazine 18, 1 (2001), 32–80.
-  Rachel Croson and Uri Gneezy. 2009. Gender differences in preferences. Journal of Economic literature 47, 2 (2009), 448–74.
-  Frederik De Grove, Johannes Breuer, Vivian Hsueh Hua Chen, Thorsten Quandt, Rabindra Ratan, and Jan Van Looy. 2017. Validating the digital games motivation scale for comparative research between countries. Communication Research Reports 34, 1 (2017), 37–47. DOI:http://dx.doi.org/10.1080/08824096.2016.1250070
-  Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7, Jan (2006), 1–30.
-  Ansgar E Depping, Colby Johanson, and Regan L Mandryk. 2018. Designing for Friendship: Modeling Properties of Play, In-Game Social Capital, and Psychological Well-being. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. ACM, 87–100.
-  Ansgar E. Depping and Regan L. Mandryk. 2017. Cooperation and Interdependence: How Multiplayer Games Increase Social Closeness. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play - CHI PLAY '17. ACM Press. DOI:http://dx.doi.org/10.1145/3116595.3116639
-  Ansgar E. Depping, Regan L. Mandryk, Colby Johanson, Jason T. Bowey, and Shelby C. Thomson. 2016. Trust Me: Social Games Are Better Than Social Icebreakers at Building Trust. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’16). ACM, New York, NY, USA, 116–129. DOI:http://dx.doi.org/10.1145/2967934.2968097
-  Thomas G Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation 10, 7 (1998), 1895–1923. DOI:http://dx.doi.org/10.1162/089976698300017197
-  Josep Domingo-Ferrer. 2009. The functionality-security-privacy game. In International Conference on Modeling Decisions for Artificial Intelligence. Springer, 92–101.
-  Jennifer R Dunn and Maurice E Schweitzer. 2005. Feeling and believing: the influence of emotion on trust. Journal of Personality and Social Psychology 88, 5 (2005), 736. DOI:http://dx.doi.org/10.1037/e617892011-029
-  Mark G Ehrhart, Karen Holcombe Ehrhart, Scott C Roesch, Beth G Chung-Herrera, Kristy Nadler, and Kelsey Bradshaw. 2009. Testing the latent factor structure and construct validity of the Ten-Item Personality Inventory. Personality and Individual Differences 47, 8 (2009), 900–905. DOI:http://dx.doi.org/10.1016/j.paid.2009.07.012
-  Paul Ekman. 1992. An argument for basic emotions. Cognition & Emotion 6, 3-4 (1992), 169–200.
-  Paul Ekman and Wallace Friesen. 1978. Facial action coding system: a technique for the measurement of facial movement. Consulting Psychologists, San Francisco (1978).
-  Entertainment Software Association. 2017. Essential Facts About the Computer and Video Game Industry. (2017).
-  Anthony M Evans and William Revelle. 2008. Survey and behavioral measurements of interpersonal trust. Journal of Research in Personality 42, 6 (2008), 1585–1593.
-  Chek Yang Foo and Elina M. I. Koivisto. 2004. Defining Grief Play in MMORPGs: Player and Developer Perceptions. In Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE ’04). ACM, New York, NY, USA, 245–250. DOI:http://dx.doi.org/10.1145/1067343.1067375
-  Nikos Fragopanagos and John G Taylor. 2005. Emotion recognition in human–computer interaction. Neural Networks 18, 4 (2005), 389–405. DOI:http://dx.doi.org/10.1016/j.neunet.2005.03.006
-  Julian Frommel, Katja Rogers, Julia Brich, Daniel Besserer, Leonard Bradatsch, Isabel Ortinau, Ramona Schabenberger, Valentin Riemer, Claudia Schrader, and Michael Weber. 2015. Integrated questionnaires: maintaining presence in game environments for self-reported data acquisition. In Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play. ACM, 359–368. DOI:http://dx.doi.org/10.1145/2793107.2793130
-  Julian Frommel, Claudia Schrader, and Michael Weber. 2018. Towards Emotion-based Adaptive Games: Emotion Recognition Via Input and Performance Features. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’18). ACM, New York, NY, USA, 173–185. DOI:http://dx.doi.org/10.1145/3242671.3242672
-  Maria Frostling-Henningsson. 2009. First-person shooter games as a way of connecting to people: “Brothers in blood”. CyberPsychology & Behavior 12, 5 (2009), 557–562. DOI:http://dx.doi.org/10.1089/cpb.2008.0345
-  Darren George and Paul Mallery. 1999. SPSS for Windows Step by Step: A simple guide and reference. Allyn & Bacon.
-  Eric Gilbert and Karrie Karahalios. 2009. Predicting tie strength with social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 211–220.
-  Isabelle Guyon, Jason Weston, Stephen Barnhill, and Vladimir Vapnik. 2002. Gene selection for cancer classification using support vector machines. Machine Learning 46, 1-3 (2002), 389–422.
-  John Harris and Mark Hancock. 2019. To Asymmetry and Beyond!: Improving Social Connectedness by Increasing Designed Interdependence in Cooperative Play. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Article 9, 12 pages. DOI:http://dx.doi.org/10.1145/3290605.3300239
-  Robin Hunicke. 2005. The case for dynamic difficulty adjustment in games. In Proceedings of the 2005 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology. ACM, 429–433. DOI:http://dx.doi.org/10.1145/1178477.1178573
-  Isto Huvila, Kim Holmberg, Stefan Ek, and Gunilla Widén-Wulff. 2010. Social capital in second life. Online Information Review 34, 2 (2010), 295–316.
-  Itseez. 2015. Open Source Computer Vision Library. https://github.com/itseez/opencv. (2015).
-  Jeroen Jansz and Lonneke Martens. 2005. Gaming at a LAN event: the social context of playing video games. New Media & Society 7, 3 (2005), 333–355. DOI:http://dx.doi.org/10.1177/1461444805052280
-  Jeroen Jansz and Martin Tanis. 2007. Appeal of playing online first person shooter games. Cyberpsychology & Behavior 10, 1 (2007), 133–136. DOI:http://dx.doi.org/10.1089/cpb.2006.9981
-  Sirkka L Jarvenpaa and Dorothy E Leidner. 1999. Communication and trust in global virtual teams. Organization science 10, 6 (1999), 791–815. DOI:http://dx.doi.org/10.1111/j.1083-6101.1998.tb00080.x
-  JASP Team. 2018. JASP (Version 0.9) [Computer software]. (2018). https://jasp-stats.org/
-  Harold Jeffreys. 1961. Theory of Probability (3rd ed.). Oxford University Press.
-  Cynthia Johnson-George and Walter C Swap. 1982. Measurement of specific interpersonal trust: Construction and validation of a scale to assess trust in a specific other. Journal of Personality and Social Psychology 43, 6 (1982), 1306. DOI:http://dx.doi.org/10.1037/0022-3514.43.6.1306
-  Adam N Joinson, Ulf-Dietrich Reips, Tom Buchanan, and Carina B Paine Schofield. 2010. Privacy, trust, and self-disclosure online. Human–Computer Interaction 25, 1 (2010), 1–24.
-  Bryant J Jongkees and Lorenza S Colzato. 2016. Spontaneous eye blink rate as predictor of dopamine-related cognitive function–A review. Neuroscience & Biobehavioral Reviews 71 (2016), 58–82. DOI:http://dx.doi.org/10.1016/j.neubiorev.2016.08.020
-  Jeff Kaplan. 2017. Developer Update | Play Nice, Play Fair | Overwatch (Online Video). (2017). https://www.youtube.com/watch?v=rnfzzz8pIBE
-  Aniket Kittur, Ed H Chi, and Bongwon Suh. 2008. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 453–456. DOI:http://dx.doi.org/10.1145/1357054.1357127
-  Ron Kohavi. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In International Joint Conference on Artificial Intelligence, Vol. 14. Montreal, Canada, 1137–1145.
-  Igor Kononenko and Matjaž Kukar. 2007. Machine learning and data mining: introduction to principles and algorithms. Horwood Publishing.
-  Yubo Kou and Xinning Gui. 2014. Playing with Strangers: Understanding Temporary Teams in League of Legends. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-human Interaction in Play (CHI PLAY ’14). ACM, New York, NY, USA, 161–169. DOI:http://dx.doi.org/10.1145/2658537.2658538
-  Haewoon Kwak, Jeremy Blackburn, and Seungyeop Han. 2015. Exploring cyberbullying and other toxic behavior in team competition online games. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 3739–3748. DOI:http://dx.doi.org/10.1145/2702123.2702529
-  Robert E. Larzelere and Ted L. Huston. 1980. The Dyadic Trust Scale: Toward Understanding Interpersonal Trust in Close Relationships. Journal of Marriage and Family 42, 3 (1980), 595–604. http://www.jstor.org/stable/351903
-  Michael D Lee and Eric-Jan Wagenmakers. 2013. Bayesian cognitive modeling: A practical course. Cambridge University Press.
-  Yannick LeJacq. 2015. League Of Legends Is Bringing Back An Old System To Deal With Jerks. https://kotaku.com/league-of-legends-is-bringing-back-an-old-system-to-dea-1702108970. (2015). Accessed: 2018-09-12.
-  Changchun Liu, Pramila Agrawal, Nilanjan Sarkar, and Shuo Chen. 2009. Dynamic difficulty adjustment in computer games through real-time anxiety-based affective feedback. International Journal of Human-Computer Interaction 25, 6 (2009), 506–529. DOI:http://dx.doi.org/10.1080/10447310902963944
-  Ricardo Lopes and Rafael Bidarra. 2011. Adaptivity challenges in games and simulations: a survey. IEEE Transactions on Computational Intelligence and AI in Games 3, 2 (2011), 85–99. DOI:http://dx.doi.org/10.1109/tciaig.2011.2152841
-  Luis López, Miguel París, Santiago Carot, Boni García, Micael Gallego, Francisco Gortázar, Raul Benítez, Jose A. Santos, David Fernández, Radu Tom Vlad, Iván Gracia, and Francisco Javier López. 2016. Kurento: The WebRTC Modular Media Server. In Proceedings of the 2016 ACM on Multimedia Conference (MM ’16). ACM, New York, NY, USA, 1187–1191. DOI:http://dx.doi.org/10.1145/2964284.2973798
-  Thomas W Malone. 1980. What makes things fun to learn? Heuristics for designing instructional computer games. In Proceedings of the 3rd ACM SIGSMALL Symposium and the First SIGPC Symposium on Small Systems. ACM, 162–169. DOI:http://dx.doi.org/10.1145/800088.802839
-  Regan L Mandryk. 2005. Modeling user emotion in interactive play environments: A fuzzy physiological approach. Ph.D. Dissertation. School of Computing Science, Simon Fraser University.
-  Regan L. Mandryk and M. Stella Atkins. 2007. A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies. International Journal of Human-Computer Studies 65, 4 (2007), 329–347. DOI:http://dx.doi.org/10.1016/j.ijhcs.2006.11.011
-  Regan L. Mandryk, M. Stella Atkins, and Kori M. Inkpen. 2006. A Continuous and Objective Evaluation of Emotional Experience with Interactive Play Environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06). ACM, New York, NY, USA, 1027–1036. DOI:http://dx.doi.org/10.1145/1124772.1124926
-  Regan L Mandryk and Max V Birk. 2017. Toward game-based digital mental health interventions: player habits and preferences. Journal of Medical Internet Research 19, 4 (2017). DOI:http://dx.doi.org/10.2196/jmir.6906
-  Regan L Mandryk and Kori M Inkpen. 2004. Physiological indicators for the evaluation of co-located collaborative play. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work. ACM, 102–111. DOI:http://dx.doi.org/10.1145/1031607.1031625
-  Marcus Märtens, Siqi Shen, Alexandru Iosup, and Fernando Kuipers. 2015. Toxicity detection in multiplayer online games. In 2015 International Workshop on Network and Systems Support for Games (NetGames). IEEE, 1–6. DOI:http://dx.doi.org/10.1109/netgames.2015.7382991
-  Winter Mason and Siddharth Suri. 2012. Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods 44, 1 (2012), 1–23. DOI:http://dx.doi.org/10.3758/s13428-011-0124-6
-  Iris B. Mauss and Michael D. Robinson. 2009. Measures of emotion: A review. Cognition and Emotion 23, 2 (2009), 209–237. DOI:http://dx.doi.org/10.1080/02699930802204677 PMID: 19809584.
-  Max Kobbert and Ravensburger. 1986. Labyrinth. Game [Boardgame]. (1986). Ravensburger, Ravensburg, Germany.
-  David C McClelland. 1985. How motives, skills, and values determine what people do. American Psychologist 40, 7 (1985), 812.
-  Daniel McDuff, Abdelrahman Mahmoud, Mohammad Mavadati, May Amr, Jay Turcot, and Rana el Kaliouby. 2016. AFFDEX SDK: a cross-platform real-time multi-face expression recognition toolkit. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 3723–3726. DOI:http://dx.doi.org/10.1145/2851581.2890247
-  Gregor McEwan and Carl Gutwin. 2016. Chess as a Conversation: Artefact-Based Communication in Online Competitive Board Games. In Proceedings of the 19th International Conference on Supporting Group Work (GROUP ’16). ACM, 21–30. DOI:http://dx.doi.org/10.1145/2957276.2957314
-  Matthew K. Miller, Regan L Mandryk, Max V. Birk, Ansgar E. Depping, and Tushita Patel. 2017. Through the Looking Glass: The Effects of Feedback on Self-Awareness and Conversational Behaviour During Video Chat. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 5271–5283. DOI:http://dx.doi.org/10.1145/3025453.3025548
-  Todd Mooradian, Birgit Renzl, and Kurt Matzler. 2006. Who trusts? Personality, trust and knowledge sharing. Management Learning 37, 4 (2006), 523–540. DOI:http://dx.doi.org/10.1177/1350507606073424
-  Lennart E Nacke, Chris Bateman, and Regan L Mandryk. 2014. BrainHex: A neurobiological gamer typology survey. Entertainment computing 5, 1 (2014), 55–62. DOI:http://dx.doi.org/10.1016/j.entcom.2013.06.002
-  Faham Negini, Regan L Mandryk, and Kevin G Stanley. 2014. Using affective state to adapt characters, NPCs, and the environment in a first-person shooter game. In Games Media Entertainment (GEM), 2014 IEEE. IEEE, 1–8. DOI:http://dx.doi.org/10.1109/gem.2014.7048094
-  André Nieoullon and Antoine Coquerel. 2003. Dopamine: a key regulator to adapt action, emotion, motivation and cognition. Current Opinion in Neurology 16 (2003), S3–S9. DOI:http://dx.doi.org/10.1097/00019052-200312002-00002
-  Thais Mayumi Oshiro, Pedro Santoro Perez, and José Augusto Baranauskas. 2012. How many trees in a random forest?. In International Workshop on Machine Learning and Data Mining in Pattern Recognition. Springer, 154–168. DOI:http://dx.doi.org/10.1007/978-3-642-31537-4_13
-  Gabriele Paolacci, Jesse Chandler, Panagiotis G Ipeirotis, and others. 2010. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making 5, 5 (2010), 411–419.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
-  James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. UT Faculty/Researcher Works (2015). DOI:http://dx.doi.org/10.15781/T29G6Z
-  Susanne Poeller, Max V. Birk, Nicola Baumann, and Regan L. Mandryk. 2018. Let Me Be Implicit: Using Motive Disposition Theory to Predict and Explain Behaviour in Digital Games. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 190, 15 pages. DOI:http://dx.doi.org/10.1145/3173574.3173764
-  Shaun Prescott. 2017. Overwatch’s Jeff Kaplan on toxic behavior: ’the community needs to take a deep look inwards’. https://www.pcgamer.com/overwatchs-jeff-kaplan-on-toxic-behavior-the-community-needs-to-take-a-deep-look-inwards/. (2017). Accessed: 2018-09-17.
-  Daniel S Quintana and Donald R Williams. 2018. Bayesian alternatives for common null-hypothesis significance tests in psychiatry: a non-technical guide using JASP. BMC Psychiatry 18, 1 (2018), 178. DOI:http://dx.doi.org/10.1186/s12888-018-1761-4
-  John K Rempel, John G Holmes, and Mark P Zanna. 1985. Trust in close relationships. Journal of Personality and Social Psychology 49, 1 (1985), 95. DOI:http://dx.doi.org/10.1037/0022-3514.49.1.95
-  Guido van Rossum. 1995. Python Reference Manual. Technical Report. Amsterdam, The Netherlands.
-  Jeffrey N Rouder, Paul L Speckman, Dongchu Sun, Richard D Morey, and Geoffrey Iverson. 2009. Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review 16, 2 (2009), 225–237. DOI:http://dx.doi.org/10.3758/pbr.16.2.225
-  Ellen Rusman, Jan Van Bruggen, Peter Sloep, and Rob Koper. 2010. Fostering trust in virtual project teams: Towards a design framework grounded in a TrustWorthiness ANtecedents (TWAN) schema. International Journal of Human-Computer Studies 68, 11 (2010), 834–850. DOI:http://dx.doi.org/10.1016/j.ijhcs.2010.07.003
-  Jörn P.W. Scharlemann, Catherine C Eckel, Alex Kacelnik, and Rick K Wilson. 2001. The value of a smile: Game theory with a human face. Journal of Economic Psychology 22, 5 (2001), 617–640. DOI:http://dx.doi.org/10.1016/S0167-4870(01)00059-9
-  Diane J. Schiano, Bonnie Nardi, Thomas Debeauvais, Nicolas Ducheneaut, and Nicholas Yee. 2014. The “lonely gamer” revisited. Entertainment Computing 5, 1 (2014), 65–70. DOI:http://dx.doi.org/10.1016/j.entcom.2013.08.002
-  Claudia Schrader, Julia Brich, Julian Frommel, Valentin Riemer, and Katja Rogers. 2017. Rising to the Challenge: An Emotion-Driven Approach Toward Adaptive Serious Games. In Serious Games and Edutainment Applications. Springer International Publishing, 3–28. DOI:http://dx.doi.org/10.1007/978-3-319-51645-5_1
-  scikit-learn developers. 2018. Feature Selection — Recursive feature elimination. https://scikit-learn.org/stable/modules/feature_selection.html#rfe. (2018). Accessed: 2019-02-19.
-  Thibaud Sénéchal, Jay Turcot, and Rana El Kaliouby. 2013. Smile or smirk? Automatic detection of spontaneous asymmetric smiles to understand viewer experience. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 1–8. DOI:http://dx.doi.org/10.1109/fg.2013.6553776
-  Fengyi Song, Xiaoyang Tan, Xue Liu, and Songcan Chen. 2014. Eyes closeness detection from still images with multi-scale histograms of principal oriented gradients. Pattern Recognition 47, 9 (2014), 2825–2838.
-  Constance A Steinkuehler and Dmitri Williams. 2006. Where everybody knows your (screen) name: Online games as “third places”. Journal of Computer-Mediated Communication 11, 4 (2006), 885–909. DOI:http://dx.doi.org/10.1111/j.1083-6101.2006.00300.x
-  Adam Summerville, Julian R. H. Mariño, Sam Snodgrass, Santiago Ontañón, and Levi H. S. Lelis. 2017. Understanding Mario: An Evaluation of Design Metrics for Platformers. In Proceedings of the 12th International Conference on the Foundations of Digital Games (FDG ’17). ACM, New York, NY, USA, Article 8, 10 pages. DOI:http://dx.doi.org/10.1145/3102071.3102080
-  Xiaoning Sun, Susan Wiedenbeck, Thippaya Chintakovid, and Qiping Zhang. 2007a. The effect of gender on trust perception and performance in computer-mediated virtual environments. Proceedings of the American Society for Information Science and Technology 44, 1 (2007), 1–14. DOI:http://dx.doi.org/10.1002/meet.1450440211
-  Xiaoning Sun, Susan Wiedenbeck, Thippaya Chintakovid, and Qiping Zhang. 2007b. Gender talk: differences in interaction style in CMC. In IFIP Conference on Human-Computer Interaction. Springer, 215–218. DOI:http://dx.doi.org/10.1007/978-3-540-74800-7_17
-  Unity Technologies. 2018. The Multiplayer High Level API. https://docs.unity3d.com/Manual/UNetUsingHLAPI.html. (2018). Accessed: 2018-09-12.
-  Joseph J Thompson, Betty HM Leung, Mark R Blair, and Maite Taboada. 2017. Sentiment analysis of player chat messaging in the video game StarCraft 2: Extending a lexicon-based model. Knowledge-Based Systems 137 (2017), 149–162. DOI:http://dx.doi.org/10.1016/j.knosys.2017.09.022
-  Tim Tijs, Dirk Brokken, and Wijnand IJsselsteijn. 2008a. Creating an emotionally adaptive game. In International Conference on Entertainment Computing. Springer, 122–133. DOI:http://dx.doi.org/10.1007/978-3-540-89222-9_14
-  Tim Tijs, Dirk Brokken, and Wijnand A IJsselsteijn. 2008b. Dynamic game balancing by recognizing affect. In Fun and games. Springer, 88–93. DOI:http://dx.doi.org/10.1007/978-3-540-88322-7_9
-  Sabine Trepte, Leonard Reinecke, and Keno Juechems. 2012. The social side of gaming: How playing online computer games creates online and offline social support. Computers in Human Behavior 28, 3 (2012), 832–839.
-  Michael Tscholl, John McCarthy, and Jeremiah Scholl. 2005. The effect of video-augmented chat on collaborative learning with cases. In Proceedings of the 2005 Conference on Computer Support for Collaborative Learning: Learning 2005: The Next 10 Years! International Society of the Learning Sciences, 682–686. DOI:http://dx.doi.org/10.3115/1149293.1149383
-  Unity Technologies. 2018. Unity (Version 2017.3.1) [Computer software]. (2018). https://unity3d.com
-  Gitte Vanwinckelen and Hendrik Blockeel. 2012. On estimating model accuracy with repeated cross-validation. In BeneLearn 2012: Proceedings of the 21st Belgian-Dutch Conference on Machine Learning. 39–44.
-  Pascal Vrticka. 2012. Interpersonal Closeness and Social Reward Processing. Journal of Neuroscience 32, 37 (2012), 12649–12650.
-  Pascal Vrticka and Patrik Vuilleumier. 2012. Neuroscience of human social interactions and adult attachment style. Frontiers in Human Neuroscience 6 (July 2012), 212–212. DOI:http://dx.doi.org/10.3389/fnhum.2012.00212
-  Eric-Jan Wagenmakers, Jonathon Love, Maarten Marsman, Tahira Jamil, Alexander Ly, Josine Verhagen, Ravi Selker, Quentin F. Gronau, Damian Dropmann, Bruno Boutin, Frans Meerhoff, Patrick Knight, Akash Raj, Erik-Jan van Kesteren, Johnny van Doorn, Martin Šmíra, Sacha Epskamp, Alexander Etz, Dora Matzke, Tim de Jong, Don van den Bergh, Alexandra Sarafoglou, Helen Steingroever, Koen Derks, Jeffrey N. Rouder, and Richard D. Morey. 2018a. Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review 25, 1 (01 Feb 2018), 58–76. DOI:http://dx.doi.org/10.3758/s13423-017-1323-7
-  Eric-Jan Wagenmakers, Maarten Marsman, Tahira Jamil, Alexander Ly, Josine Verhagen, Jonathon Love, Ravi Selker, Quentin F. Gronau, Martin Šmíra, Sacha Epskamp, Dora Matzke, Jeffrey N. Rouder, and Richard D. Morey. 2018b. Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychonomic Bulletin & Review 25, 1 (01 Feb 2018), 35–57. DOI:http://dx.doi.org/10.3758/s13423-017-1343-3
-  Wim Westera. 2016. Performance assessment in serious games: Compensating for the effects of randomness. Education and Information Technologies 21, 3 (01 May 2016), 681–697. DOI:http://dx.doi.org/10.1007/s10639-014-9347-3
-  Dmitri Williams. 2006. Groups and goblins: The social and civic impact of an online game. Journal of Broadcasting & Electronic Media 50, 4 (2006), 651–670. DOI:http://dx.doi.org/10.1207/s15506878jobem5004_5
-  Dmitri Williams, Nicolas Ducheneaut, Li Xiong, Yuanyuan Zhang, Nick Yee, and Eric Nickell. 2006. From tree house to barracks: The social life of guilds in World of Warcraft. Games and Culture 1, 4 (2006), 338–361. DOI:http://dx.doi.org/10.1177/1555412006292616
-  Michele Williams. 2001. In whom we trust: Group membership as an affective context for trust development. Academy of Management Review 26, 3 (2001), 377–396. DOI:http://dx.doi.org/10.2307/259183
-  Toshio Yamagishi and Midori Yamagishi. 1994. Trust and commitment in the United States and Japan. Motivation and Emotion 18, 2 (01 Jun 1994), 129–166. DOI:http://dx.doi.org/10.1007/BF02249397
-  Georgios N Yannakakis and Julian Togelius. 2011. Experience-driven procedural content generation. IEEE Transactions on Affective Computing 2, 3 (2011), 147–161. DOI:http://dx.doi.org/10.1109/t-affc.2011.6