Angry Birds Flock Together: Aggression Propagation on Social Media

02/24/2020 ∙ by Chrysoula Terizi, et al. ∙ Fedora Summer University ∙ Telefonica ∙ Information Technologies Institute (ITI)

Cyberaggression has been found in various contexts and online social platforms, and modeled on different data using state-of-the-art machine and deep learning algorithms to enable automatic detection and blocking of this behavior. Users can be influenced to act aggressively or even bully others because of elevated toxicity and aggression in their own (online) social circle. In effect, this behavior can propagate from one user and neighborhood to another, and therefore, spread in the network. Interestingly, to our knowledge, no work has modeled the network dynamics of aggressive behavior. In this paper, we take a first step towards this direction, by studying propagation of aggression on social media. We look into various opinion dynamics models widely used to model how opinions propagate through a network. We propose ways to enhance these classical models to accommodate how aggression may propagate from one user to another, depending on how each user is connected to other aggressive or regular users. Through extensive simulations on Twitter data, we study how aggressive behavior could propagate in the network, and validate our models with ground truth from crawled data and crowdsourced annotations. We discuss the results and implications of our work.


1 Introduction

Online aggression has spiked in the last few years, with many reports of such behavior across different contexts [13, 33]. Indeed, cyberaggression can potentially manifest on any type of platform, regardless of the target audience and the utility or purpose envisioned for the platform. In fact, such behavior has been observed on different online social media platforms such as Twitter [8, 9], Instagram [29], YouTube [10], Yahoo Finance [18], Yahoo Answers [32], 4chan [28], and across various demographics (e.g., teenagers vs. adults, men vs. women, etc. [41, 13, 7]). Interestingly, it is difficult to find a generally accepted definition of cyberaggression across disciplines. As argued in corcoran2015cyberbullying, there are different ways to define online aggression, depending on the frequency or severity of the behavior, the power difference between victim and aggressor, etc.

As found in Henneberger2017, users can be influenced to act aggressively and even bully others because of elevated toxicity and aggression in their own social circle. This behavior can manifest in a similar fashion in the online world as well, and aggression can propagate from one user and neighborhood to another, and thus spread in the network. In fact, some early works in sociology and psychology already proposed models of computer abuse based on the theories of social learning, social bonds, and planned behavior [34].

However, to our knowledge, no work has modeled the network dynamics of aggressive behavior, or studied how online users' connections and interactions may affect the propagation of aggression through an online social network. This paper takes the first, but crucial, steps to investigate pending fundamental questions such as: How can aggressive behavior propagate from one user or neighborhood in the network to another? What model and parameters could best represent aggression propagation and its intensity? In particular, it studies classical opinion dynamics models widely used to model how opinions propagate through a network, and proposes ways to alter or enhance them to accommodate how aggression may propagate from one user to another. We opt for simple models that consider important factors such as how much one user is exposed to the aggressive behavior of another user or his neighborhood, the popularity of the users, etc. We validate the models' performance on real Twitter data to measure their ability to model this behavior.

The contributions of this work are the following:

  • We formally present the problem of aggression propagation in a social network, and the necessary assumptions to study it in an algorithmic fashion on a network of users.

  • We propose network algorithms to model aggression propagation, based on opinion dynamics methods, and informed by properties of aggression found in literature on psychology, sociology, and computational social science.

  • We implement these methods into a framework that simulates aggression propagation in a network, while controlling for various experimental factors such as: 1) social network used, 2) propagation model applied, 3) selection and ordering of users or edges affected by the propagation. This framework can be applied in different social networks, given appropriate data to bootstrap the models.

  • We present extensive experimentation with the simulation framework and Twitter data, and show how model performance depends on the various factors controlled. We find that methods that consider direct interactions between users, and users' internal aggression state, better model aggression and how it could evolve on Twitter. We discuss the implications of our findings for curbing cyberaggression on Twitter and other networks with similar structure.

2 Modeling Opinion Propagation

In social networks, users may interact with others in their immediate ego-network on a given topic, and consequently may adapt their opinion, or even adopt their friends' personal opinion altogether. Opinion formation is a complex process and many researchers have studied it under different settings. Thus, several mathematical models have been proposed to simulate the propagation of opinion in a network (for a review, see 2016arXiv160506326S). In this section, we first cover background concepts around opinion propagation, and then basic methods proposed to model opinion dynamics.

2.1 General Problem of Opinion Modeling

Background. Numerous studies have been conducted around opinion spreading [42]. Most published opinion models simulate how an individual's opinion could evolve under the influence he receives from his immediate environment. In [37], a simple stochastic model (the Voter model) was presented, where an individual is absolutely vulnerable to the opinion of any neighbor he interacts with, which he assimilates. To this class of binary-state dynamics models belongs the Sznajd model [43], which states that "if two people share the same opinion, their neighbors will start to agree with them (social validation), and if a block of adjacent persons disagree, their neighbors start to argue with them (discord destroys)". In doi:10.1142/S0219525900000078, a model of opinion dynamics showed that agents adjust continuous opinions on the occasion of random binary encounters whenever their difference in opinion is below a given threshold. A classical model of consensus formation was presented in hk, along with the variant of this model due to fj, a time-dependent version, and a nonlinear version with bounded confidence of the agents. There are also models that combine and compare the aforementioned basic models [4, 5, 21, 35]. Also, a few studies have been published that verify opinion prediction in real settings [42]: for example, real data were used by behera2003, psychological data in deffuant2004modelling, and information from Italian and German elections in Bernardes2002, CARUSO_2005, and Fortunato2007.

Problem Definition. The opinion propagation problem can be formally presented as follows [30, 31, 43]. There is a population of M individuals connected in a network over weighted edges. The weight w_ij can signify the intensity of interaction or closeness between two individuals i and j. Each user i, at time t, has an initial interior opinion x_i(t) on a topic. After an interaction between two users i and j who are neighbors in the network, each of the individuals is led to a new state on the topic.

In general, the interaction between users during opinion propagation can be assumed to happen in a pairwise or group fashion. In effect, one user may be influenced by another user, or by multiple users in his neighborhood. In pairwise models, there are two connected individuals, i and j, and each one can have his personal opinion on a topic. User i's opinion can only be affected by the influence of the opinion of user j, to whom he is connected. In group-fashion models, there is a user i and his neighborhood N_i of users, whose opinions can influence i's opinion. Next, we characterize each model based on this distinction and its fundamentals.

Overall, this kind of procedure can be considered as a mechanism for making a decision in a closed community. The general problem of opinion modeling admits a plurality of models that differ in the final state of the individual and the manner in which it is formed. For ease of reading, we use the following general notations in the text:

x_i(t): the opinion value of user i at moment t
w_ij: the weight of edge (i, j) between users i and j
N_i: the set of friends (neighborhood) of user i

2.2 Classic Opinion Propagation Models

Voter Model. One of the simplest and best-known pairwise models in opinion dynamics is the Voter model [11]. In this model, there are only two discrete opinion types: {0, +1}. At each time step, an edge (i, j) is selected from the network, and user i adopts the opinion that his neighbor j had in the previous time step:

x_i(t+1) = x_j(t)   (1)

For undirected networks, the model reaches consensus on one of the possible initial opinions. For directed networks, the model was modified [46] into one that fixes the users' out-degree, which induces early fragmentation.
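The Voter update can be sketched in a few lines of Python. This is a toy illustration, not taken from the paper; the three-node graph and random seed are arbitrary choices:

```python
import random

def voter_step(opinions, edges, rng):
    """One Voter-model step: pick a random edge (i, j); user i adopts the
    opinion his neighbor j held in the previous time step."""
    i, j = rng.choice(edges)
    opinions = dict(opinions)  # keep the previous step's opinions intact
    opinions[i] = opinions[j]
    return opinions

rng = random.Random(0)
opinions = {0: 0, 1: 1, 2: 1}
edges = [(0, 1), (1, 2), (0, 2)]
for _ in range(50):
    opinions = voter_step(opinions, edges, rng)
# Opinions stay binary; on a connected undirected toy graph like this one,
# the dynamics typically settle on a single shared opinion.
```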

Deffuant Model. Another pairwise model, suited for interaction within large populations, is the Deffuant model [15], which captures confirmation bias, i.e., people's tendency to accept opinions that agree with their own. In this model, users adjust continuous opinions whenever their difference in opinion is below a given threshold. At each time step, an edge (i, j) between users is selected, and user i takes into account the opinion of his neighbor j when the absolute value of their opinions' difference is less than a threshold d:

x_i(t+1) = x_i(t) + μ (x_j(t) − x_i(t))  and  x_j(t+1) = x_j(t) + μ (x_i(t) − x_j(t)),  if |x_i(t) − x_j(t)| < d   (2)

where μ is the convergence parameter, with 0 < μ ≤ 0.5. High threshold values d lead to convergence of opinions, whereas low values lead to several opinion clusters.
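The bounded-confidence rule above can be illustrated with a minimal sketch (toy values for d and μ, not from the paper):

```python
def deffuant_update(xi, xj, d=0.5, mu=0.3):
    """Pairwise Deffuant update: both opinions move toward each other by a
    fraction mu, but only if their difference is below the threshold d."""
    if abs(xi - xj) < d:
        return xi + mu * (xj - xi), xj + mu * (xi - xj)
    return xi, xj  # too far apart: no influence (confirmation bias)

close = deffuant_update(0.2, 0.6)  # difference 0.4 < 0.5 -> both move
far = deffuant_update(0.1, 0.9)    # difference 0.8 >= 0.5 -> unchanged
```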

DeGroot Model. Another model in the literature, which allows a user to consider all or some of his neighbors' scores, is the DeGroot model [16]. Here, there is an undirected network, and at each time t all users change their opinion by taking the average of their own opinion and the opinions of their neighbors. After a number of iterations (i.e., opinion changes or time steps), the network reaches consensus and each user in the network holds the same opinion.

x_i(t+1) = (x_i(t) + Σ_{j ∈ N_i} x_j(t)) / (1 + |N_i|)   (3)
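A minimal sketch of one synchronous DeGroot step (the three-node complete graph is a toy example of ours, not from the paper):

```python
def degroot_step(x, neighbors):
    """DeGroot step: every user simultaneously averages his own opinion
    with the opinions of all his neighbors."""
    return {i: (x[i] + sum(x[j] for j in nbrs)) / (1 + len(nbrs))
            for i, nbrs in neighbors.items()}

x = {0: 0.0, 1: 1.0, 2: 0.5}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
x = degroot_step(x, neighbors)
# On this complete graph everyone averages all three opinions,
# so consensus at 0.5 is reached after a single step.
```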

FJ Model. A variation of the DeGroot model was proposed by Friedkin and Johnsen [23]. The main difference between the two models is that in the FJ model each user has an intrinsic initial opinion that remains the same, and an expressed opinion that changes over time. User i's new opinion is estimated as:

x_i(t+1) = (s_i + Σ_{j ∈ N_i} w_ij x_j(t)) / (1 + Σ_{j ∈ N_i} w_ij),  with s_i = x_i(0)   (4)

Network consensus is not reached every time, but only in specific cases. Also, the calculation of the opinion's convergence can be modeled as a random walk on the graph: if an absorbing node is attached to a node, it keeps that node's opinion stable [25].

HK Model. In the HK model [hk], opinions take values in a continuous interval, where a bounded confidence limits the interaction of a user i holding opinion x_i to neighbors with opinions in [x_i − ε, x_i + ε], where ε is the uncertainty. Also, a user interacts with all of his friends:

x_i(t+1) = (1 / |N_i^ε(t)|) Σ_{j ∈ N_i^ε(t)} x_j(t),  where N_i^ε(t) = {j : |x_i(t) − x_j(t)| ≤ ε}   (5)

The model has been proven to converge in polynomial time; it leads to consensus when ε is large, whereas users or groups of users become polarized when ε is small.
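The clustering behavior under a small confidence bound can be seen in a short sketch (toy population and ε of our choosing):

```python
def hk_step(x, eps):
    """Hegselmann-Krause step: each agent averages over every agent whose
    opinion lies within the confidence bound eps of his own."""
    out = []
    for xi in x:
        close = [xj for xj in x if abs(xj - xi) <= eps]
        out.append(sum(close) / len(close))
    return out

x = [0.0, 0.1, 0.9, 1.0]
for _ in range(10):
    x = hk_step(x, eps=0.2)
# Small eps: the population splits into two stable clusters
# (around 0.05 and 0.95), i.e., polarization rather than consensus.
```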

3 Modeling Aggression Propagation

Previously, we outlined some of the most classical models for opinion propagation in a network. Here, we build on these methods to model how aggression could propagate in the network, as an opinion would. Next, we discuss insights extracted from literature that attempt to model aggression in different ways. Building on this background, we formally propose the Aggression Propagation problem, in a way that can be aligned with opinion propagation. Finally, we present how the literature insights can be used to inform existing opinion models into modeling aggression propagation.

3.1 Aggression Modeling: Literature Insights

Aggression has been well studied in the past, in online and offline contexts, by sociologists and psychologists [20, 3, 45, 41, 40, 1, 12], as well as by computational social and computer scientists (Chatzakou et al. meanBirds, meanBirdsTweb2019; [14, 22, 34, 44, 26, 17, 10, 38]).

1. Influence from strong social relationships. Aggressive behavior is reactionary and impulsive, often results in breaking household rules or the law, and can even be violent and unpredictable [19]. Interestingly, aggressive acts, while reflecting the influence of various mental and physical disorders, in most instances represent behaviors learned from other individuals [24]. In fact, some earlier works proposed that online abusive behavior could be explained using sociology- and psychology-based theories such as social learning, social bonds, and planned behavior [34]. Furthermore, Cheng_2017 observed that a person's negative mood increases the likelihood of adopting negative behavior, which is easily transmitted from person to person. These works lead us to the first insight on aggressive behavior: due to strong social bonds, users can be influenced by others and learn such aggressive behavior from them.

2. Influence from social groups. Aggressive adolescents may be unpopular in the larger social community of peers and adults, yet they can be accepted by, and closely linked to, particular subgroups of peers [6]. Furthermore, as investigated in GAM_games, the personal responsibility exhibited by individuals or groups can be captured by the General Aggression Model (GAM) [1]. The authors established that when individuals or a group of individuals come to believe either that they are not responsible or that they will not be held accountable by others, the stage is set for the occurrence of violent evil and aggressiveness [2]. In addition, in recent studies on Twitter by Chatzakou et al. meanBirds, meanBirdsTweb2019, cyberbullies and aggressive users were found to be less embedded in the network, with fewer friends and smaller clustering coefficients. Further, Kramer8788 found that exposing a person to the negative or positive behaviors of those around him leads him to experience the same emotions. These results lead us to the second insight: aggressive users may be embedded in small social groups, which can have a high impact on their aggression.

3. Influence due to power difference. Studies have also looked at the emotional and behavioral state of victims of bullying and aggression, and how it connects to the aggressor's or victim's network status. In [12], it was observed that a high difference in the network status of the two individuals can be a significant property of the bully-victim relationship. The authors in [40] noted that the emotional state of a victim depends on the power of the victim's bullies; for example, more negative emotional experiences were observed when more popular cyberbullies conducted the attack. These observations lead us to the third insight: the power difference that one user may have over another (e.g., due to popularity) can be a decisive factor in the exerted aggression.

4. Influence due to internal state and external input. GAM is an integrative approach to understanding aggression that incorporates the best aspects of many domain-specific theories of aggression and takes into account a wide range of factors affecting aggression. It is separated into two layers, representing distal causes and proximate causes. The distal processes express how biological (e.g., hormone imbalances, low serotonin and testosterone) and persistent environmental (e.g., difficult life conditions, victimization, and diffusion of responsibility) factors work together to influence a person's personality and increase the likelihood of developing an aggressive personality. The proximate processes have three stages: (i) inputs, (ii) routes, and (iii) outcomes, which can affect the person's level of aggression and possible reactions to the input. The reaction that is selected then influences the encounter, which in turn influences the person and situation factors, beginning a new cycle of influence. The findings from this important study lead us to a fourth insight: users can be influenced by external inputs, but they also try to consolidate them with their internal state of arousal, cognition, and affect, before moving to a new state.

5. Influence can appear in cycles. The overall process outlined by the GAM, as well as the previously extracted insights (1-4), can be captured in an aggression propagation model. The user (who could be an aggressor or a victim/normal user) is assumed to: (1) have an internal aggression state of his own, (2) interact with his neighbors and close friends, and receive and/or exert aggression influence, (3) assess whether he will change his stance on aggression, i.e., become more (or less) aggressive after his interactions, (4) act by changing (or not) his stance, and (5) repeat these steps in the next cycle (or time step). This insight allows us to build on existing, but adapted, opinion models that work in simulated rounds or cycles, to solve the Aggression Propagation problem, presented next.
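The round-based cycle of insight 5 can be sketched as a generic propagation loop. This is our own illustration: the `update` callable stands for any pairwise rule (its signature is a hypothetical simplification), and the pairs and rounds are toy values:

```python
def propagate(aggression, pairs, update, rounds=5):
    """Round-based sketch of the cycle in insight 5: in every round each
    selected pair (i, j) interacts, and i's internal aggression state is
    revised by the pairwise rule `update` (hypothetical signature)."""
    for _ in range(rounds):
        for i, j in pairs:
            aggression[i] = update(aggression[i], aggression[j])
    return aggression

# e.g. a rule where i moves halfway toward j's aggression each interaction
scores = propagate({0: 0.0, 1: 1.0}, [(0, 1)], lambda ai, aj: (ai + aj) / 2)
```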

3.2 Aggression Propagation Problem

Online users may be “friends”, connected in an online social network. In this setup, user i, at time t, has his own aggression score A_i(t) that represents his internal, continuous state, with A_i(t) ∈ [0, 1]. While he interacts with his followers or followings, he may be influenced to be more or less aggressive, thus changing his internal aggression state at every time instance. The impact that others (i.e., his direct friends or neighborhood) have on his aggression state can be a function of their strong social relationships with him (w_ij), his power score P_i (e.g., degree centrality), the size of the user's neighborhood (|N_i|), etc. The change of aggression state is continuous, i.e., at every time instance users are influencing each other's aggression state, partially or totally. Therefore, the problem of aggression propagation is to model how aggression among users will diffuse or propagate in a network over some time window T. Obviously, this problem has clear similarities with the opinion propagation problem, and the techniques to model opinion dynamics presented earlier can be adapted to model how users influence each other to change aggression state. As this is the first investigation of this problem, we opt to establish a solid baseline of solutions and propose simple, parameter-free models that are generalizable and applicable to different social networks.

The following lists the additional notations used in the text:

A_i(t): aggression score of user i at time t, A_i(t) ∈ [0, 1]
w_ij: weight of the edge between users i and j, defined as the Jaccard overlap of their neighbor sets: w_ij = |N_i ∩ N_j| / |N_i ∪ N_j|
P_i: power score of user i, the ratio of in-degree over out-degree
f: selector for applying a factor out of the options 1, w_ij, P_j, and w_ij · P_j, for the neighbor j of user i

Next, we take the five insights identified and embed them in the mentioned opinion models, to construct our proposals for modeling aggression propagation through the network.

Voter & Deffuant models & variants. First, we propose four pairwise models based on the Voter model. We assume that after an interaction between two users i and j, the aggression score of i changes, because he was influenced (positively or negatively). The formulation of the first set of proposed models is the following:

A_i(t+1) = f · A_j(t)   (6)

The model names depend on the factor f: Voter (f = 1), Voter_W (f = w_ij), Voter_P (f = P_j), and Voter_WP (f = w_ij · P_j).

These models take into account the strong relationship (1st insight) between users i and j. User i's aggression score does not consider his own state, but only the aggression score of the neighbor j. The four versions reflect different variations of the Voter model, where the user assumes the aggression of his neighbor: 1) all of it (i.e., the neighbor completely affects the user), 2) weighted by their edge weight (i.e., the neighbor has an influence, but only depending on the strength of their relationship), 3) weighted by the Power score of the neighbor (i.e., to capture the concept of power difference that aggressors take advantage of), and 4) weighted by the combination of Power score and edge weight.
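The factor selector shared by these variants can be sketched as follows. The mapping from model suffix (_W, _P, _WP) to factor is our reading of the naming scheme, and the clipping to [0, 1] is an assumption consistent with the aggression-score range:

```python
A_MIN, A_MAX = 0.0, 1.0

def factor(name, w_ij, p_j):
    """Selector f for the variants: '1' (plain), 'W' (edge weight w_ij),
    'P' (power score of the neighbor), 'WP' (their product)."""
    return {"1": 1.0, "W": w_ij, "P": p_j, "WP": w_ij * p_j}[name]

def voter_variant(a_j, f):
    """Voter-style update: user i assumes the neighbor's aggression score,
    scaled by the selected factor f and clipped to [0, 1]."""
    return min(A_MAX, max(A_MIN, f * a_j))

a = voter_variant(0.8, factor("W", w_ij=0.5, p_j=2.0))  # 0.5 * 0.8 = 0.4
```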

Based on the Voter and Deffuant models, we propose a 2nd set of models, in which user i at time t not only takes into account the aggression state of his neighbor j (1st insight), but also includes his personal state before making any changes to his aggression (4th insight). Consequently, this set of models can be formalized as follows:

(7)

The model names depend on the factor f: Deffuant_W, Deffuant_P and Deffuant_WP. If the factor f is equal to 1, it is not taken into account. To keep the aggression score within the closed interval [0, 1], we normalize the final aggression score of user i using the maximum aggression score among all neighbors of i at time t.

Deffuant & HK models & variants. Another set of pairwise models we propose relies on the combination of the Deffuant and HK models, as follows:

(8)

The model names depend on the factor f: HK_d_W, HK_d_P and HK_d_WP; the case of f equal to 1 is not included. This set of pairwise models uses the bounded-confidence condition from the HK model (3rd insight), and updates the aggression score accordingly. The proposed model is affected by the strong relationship with the neighbor (1st insight) and by the internal personal state (4th insight) at the previous moment. We normalize the final aggression score using the maximum aggression score among user i's neighbors for which the condition holds at time t.

DeGroot model & variants. The next set of proposed models takes into account the neighborhood of user i when deciding what aggression score to assign to the user (2nd insight). The aggression of a user can be influenced by all of the user's neighbors and by his internal behavior (4th insight). As a result, this set of models comprises variants of the DeGroot model, which considers an average effect across the whole neighborhood of the user, and is calculated as follows:

(9)

The model names depend on the factor f: DeGroot, DeGroot_W, DeGroot_P and DeGroot_WP; f = 1 corresponds to the original DeGroot model.

FJ model & variants. We also propose variants of the FJ model, integrating the initial aggression state of an individual in the network (4th insight), along with the user’s neighborhood (2nd insight):

(10)

The model names depend on the factor f: FJ_W, FJ_P, and FJ_WP.

Averaging DeGroot & FJ models & variants. We propose the following set of models based on DeGroot and FJ, where the aggression score of each user is inspired by the 2nd, 3rd and 4th insights. The models are modified by taking the average power score and average aggression score over all of the user's neighbors, individually:

(11)

The model names depend on the factor f: avg DeGroot_W, avg DeGroot_P, and avg DeGroot_WP.

The final set of proposed models is similar to the FJ models, but here we consider the initial aggression state of each user (2nd-4th insights). Thus, the models are as follows:

(12)

The model names depend on the factor f: avg FJ_W, avg FJ_P, and avg FJ_WP.

Next, we explain how all presented models are implemented into a simulator for exhaustive experimentation with different parameter settings using real Twitter data.

4 Simulation Methodology

In this section, we outline the methodology followed to simulate propagation of aggression in a social network, given each one of the models proposed earlier. First, some users in the network are assumed to be aggressive, and the rest normal, formalizing the network's initial state. As time passes, users interact with, and may affect, each other to become more or less aggressive, thus changing the overall state of aggression of the network through time. Different models can be used to describe these user interactions and aggression changes. To identify which model is better for this task, we compare each model's imposed aggression changes with real (ground truth) data of aggression propagation. Each model performs differently through the simulation, and may match best with the ground truth data at different points in the simulated time. Therefore, at regular time intervals during each simulation we capture snapshots of the network's aggression state and compare each model with the validation data.

Next, we address in the simulator design the important factors that can impact the exploration of this complex problem:
1. Online social network
2. Aggressive and normal users
3. Users (edges) to perform propagation
4. Ordering of users to perform propagation
5. Propagation model applied to modify users' scores
6. Metrics used to capture (change of) state of aggression
7. Metrics to compare state of aggression in simulated and validation networks
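The factors above can be grouped into a single simulation configuration. This is a sketch of ours, not the paper's implementation; all field names, and the defaults other than the 10% edge sample and 10 snapshots stated in the text, are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SimConfig:
    """Experimental factors controlled by the simulator (names hypothetical)."""
    network: str = "twitter_scc"  # social network used (its strongly connected component)
    model: str = "Deffuant_P"     # propagation model applied
    edge_sample: float = 0.10     # fraction of random edges to propagate over
    ordering: str = "random"      # random | most_popular | least_popular | neighborhood
    snapshots: int = 10           # regular snapshots compared to ground truth
    threshold: float = 0.5        # binarization threshold for aggression scores

cfg = SimConfig()
```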

4.1 Online social network

We use an unlabeled, directed Twitter network [36]. This dataset has users (nodes) and directed edges between them. We focus on the network's strongly connected component. Following past work on network analysis [47], we apply weights on edges based on the Jaccard overlap of the social circles of two users i and j: w_ij = |N_i ∩ N_j| / |N_i ∪ N_j|.
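The Jaccard edge weight can be computed as follows (a minimal sketch with toy neighbor sets):

```python
def jaccard_weight(neighbors_i, neighbors_j):
    """Edge weight w_ij as the Jaccard overlap of two users' social circles:
    |N_i intersect N_j| / |N_i union N_j|."""
    union = len(neighbors_i | neighbors_j)
    if union == 0:
        return 0.0
    return len(neighbors_i & neighbors_j) / union

w = jaccard_weight({1, 2, 3}, {2, 3, 4})  # overlap {2, 3} of {1, 2, 3, 4} -> 0.5
```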

4.2 Which users should be aggressive?

In this network, we have no labels of aggression: we do not know which users exhibit aggression and which are normal. To identify users who should be labeled as aggressive:

  1. We use a small, labeled Twitter network [8], with users labeled as aggressive or not, to train a classifier on users' network features (the user's followers and followees and their ratio, the user's clustering coefficient, hub score, authority score, and eigenvector score) to infer the likelihood that a user will be aggressive. The classifier was trained with accuracy, precision, and recall.

  2. We extract the same network properties from the large network, and apply the classifier on it to label its users as aggressive or not, based on some threshold.

By applying the classifier on the large network, we obtained the set of users labeled as aggressive. We verified that the users selected as aggressive (or normal) had network-property distributions similar to those of the annotated users in the small network. Each aggressive (normal) user was given a score of 1 (0). Interacting users can modify each other's aggression state, leading to users with scores between 0 and 1.
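This bootstrapping step can be sketched as a simple thresholding of the classifier's per-user likelihoods (the threshold value and function name are hypothetical):

```python
def label_users(likelihoods, threshold=0.5):
    """Binarize classifier likelihoods: at or above the threshold a user
    starts as aggressive (score 1.0), otherwise normal (score 0.0).
    Interactions during the simulation later move scores within [0, 1]."""
    return {u: 1.0 if p >= threshold else 0.0 for u, p in likelihoods.items()}

labels = label_users({"a": 0.9, "b": 0.2, "c": 0.5})
```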

4.3 Users to perform propagation

We select a set of random users for executing the propagation model. A large set can cover a larger portion of the network, but can be extremely costly to simulate. We opt for 10% of random edges, covering a portion of the total users.

4.4 Propagation changes applied

Users selected for propagation may interact with each other in different ways:

  1. Randomly, i.e., the selected users are randomly shuffled before their aggression is propagated.

  2. Based on the most popular (or least popular) user (e.g., using their degree centrality).

  3. Based on the neighborhood involved (i.e., group users based on neighborhood and propagate between them).

We measure how each ordering impacts the aggression change of each model during simulation.

4.5 Propagation models used

We test all models explained earlier. They are parameter-less, making them simpler and more generalizable on different networks and setups.

4.6 Metrics used to measure aggression change

We measure the state of aggression of users and network, and how it changes through simulated time using different metrics, as explained next:

  • n: portion of normal users in the network

  • a: portion of aggressive users in the network

  • N-N: portion of edges (i, j) where both users i and j are normal

  • N-A: portion of edges where user i is normal and user j is aggressive

  • A-N: portion of edges where user i is aggressive and user j is normal

  • A-A: portion of edges where both i and j are aggressive users

  • n → {n, a}: portion of normal users in the initial state who remain normal or become aggressive, respectively

  • a → {n, a}: portion of aggressive users in the initial state who become normal or remain aggressive, respectively

  • N-N → {N-N, N-A, A-N, A-A}: portion of edges where users i and j were both normal at the initial state and remain normal, or one, or both users become aggressive, respectively

  • N-A → {N-N, N-A, A-N, A-A}: same as above for edges where j is aggressive at the initial state

  • A-N → {N-N, N-A, A-N, A-A}: same as above for edges where i is aggressive at the initial state

  • A-A → {N-N, N-A, A-N, A-A}: same as above for edges where both i and j are aggressive at the initial state

These elements capture the state of the network with respect to users and edges and their labels at time t, and how these change through the simulated time between t and t+1. Each of these metrics can be computed at regular snapshots, by comparing the network state at a given snapshot with the initial state.
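The edge-state portions above can be computed from a snapshot of binary labels as in this sketch (toy labels and edges of our choosing):

```python
def edge_states(labels, edges):
    """Portions of N-N, N-A, A-N, A-A edges given per-user binary labels
    ('N' for normal, 'A' for aggressive) at one snapshot."""
    counts = {"N-N": 0, "N-A": 0, "A-N": 0, "A-A": 0}
    for i, j in edges:
        counts[f"{labels[i]}-{labels[j]}"] += 1
    total = len(edges)
    if total == 0:
        return counts
    return {k: v / total for k, v in counts.items()}

states = edge_states({0: "N", 1: "A", 2: "N"}, [(0, 1), (1, 2), (0, 2)])
# (0,1) is N-A, (1,2) is A-N, (0,2) is N-N; no A-A edges
```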

4.7 Measuring ground truth metrics

We compute these metrics in the ground truth data as follows. First, we used the snapshot from 2016, with users labeled as normal, aggressive, bully, or spammers. We removed the spam class and merged all aggressors and bullies under the aggressive class. Note: in this set of labeled users, their friends or followers are not labeled, so they are considered normal. Then, during 04-05/2019, we re-crawled these users' ego-networks. When this crawl was completed, some users were found to be active, some were found suspended, and some were found to have deleted their accounts. We make the assumption that users active in 2019 can be considered "normal", and suspended users "aggressive", since at some point in the past they violated Twitter rules. We ignore users who deleted their profiles, as that is a user-decided action. Using the above two crawls (2016 and 2019), we computed the ground truth, or validation vector, for the above-mentioned metrics, which capture the change of aggression of users and types of edges (A-A, A-N, etc.).

4.8 Comparing simulation and ground truth data

The above set of metrics is computed for all models and for 10 time snapshots per simulation. Using a pre-selected threshold for each user's aggression score, we binarize their final state and thus compute the overall aggression change in nodes and edges. Then, we compare with the validation vector from the ground truth data. This comparison is executed using standard similarity metrics: cosine similarity, Pearson correlation, Spearman rank correlation, and Euclidean distance. It establishes how closely a model changes the state of aggression of the network (in both nodes and edges) to match the ground truth.

5 Analysis of Simulation Results

In this section, we show the results from the extensive simulations performed under the different experimental settings used: 26 propagation models, 10 aggression thresholds, comparisons over 10 time snapshots, 4 metrics for comparing ground truth with model performance in each snapshot, 5 types of orderings of users to propagate aggression, and 10% of random edges (and their users).

5.1 Which models are stable and perform best?

The first step in analyzing the simulation results is to compare the proposed models with respect to their performance. Cosine similarity is used for the comparison, with a fixed threshold above which a user is characterized as aggressive. Figure 1 plots the cosine similarity of all considered models against the validation vector of real data. We observe that Deffuant_P achieves the best performance. Also, Deffuant_W and the HK_*_W models are among the top models.

We note that when the edge weight is considered, performance is in some cases adversely affected; for example, the Voter_W model reaches markedly lower similarity with the ground truth. Mixed results are observed when the power score is used: e.g., in the Deffuant and DeGroot models performance increases, indicating that a user’s influence on a neighbor matters more when it is not constrained to the connecting edge alone, but when the user’s whole neighborhood is taken into account. The DeGroot and FJ models perform similarly, indicating that the state of a user’s neighborhood has no significant influence on overall performance. Finally, the Averaging variants perform the worst, regardless of whether the edge weight or power score is considered, separately or in combination.
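As a rough illustration of the model family that performs best here, the following is a hypothetical sketch of a Deffuant-style pairwise update; the convergence rate `mu`, the confidence bound `eps`, and the multiplicative use of a degree-based power weight are our illustrative assumptions, not the paper’s exact formulation.

```python
def deffuant_step(a_i, a_j, mu=0.5, eps=0.5, power_j=1.0):
    """Pull user i's aggression toward neighbor j's, if they are close enough.

    mu: convergence rate; eps: bounded-confidence threshold;
    power_j: illustrative power score of j (e.g., normalized degree centrality).
    """
    if abs(a_i - a_j) <= eps:     # bounded confidence: interact only if close
        a_i += mu * power_j * (a_j - a_i)
    return a_i

weak   = deffuant_step(0.2, 0.6, power_j=0.5)  # a less "powerful" neighbor pulls less
strong = deffuant_step(0.2, 0.6, power_j=1.0)  # a more "powerful" neighbor pulls more
far    = deffuant_step(0.0, 0.9)               # too far apart: no change
```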

Figure 1: Cosine similarity of all proposed models with the validation vector, for 10% of selected edges and a fixed aggression threshold. The different time snapshots of the simulation are colored from light to dark blue. We group the models into sets based on their commonalities.

Different Aggression Thresholds. We further evaluate the proposed models over a wider range of thresholds (results omitted due to space limits). Overall, some models show stable (low or high) performance independent of the selected threshold, while others depend highly on it. Specifically, Deffuant_P, Voter, and HK_*_P achieve high performance (similarity above 0.8). On the contrary, Voter_W, Voter_WP, and Averaging_*_* show lower performance (similarity below 0.7) regardless of the threshold. Finally, there are models whose performance highly depends on the threshold: Deffuant_W* and HK_*_W* fluctuate from average to high performance (similarity between 0.7 and 0.85), and DeGroot* and FJ_* vary widely (similarity between 0.45 and 0.85).

Figure 2: Similarity of the top 4 performing models with ground truth, through 10 time snapshots: (a) Voter, (b) Deffuant_P, (c) HK_0.5_P, and (d) HK_1.0_P. We show 4 similarity metrics: cosine similarity, Pearson correlation, Spearman correlation, and Euclidean distance. Color variation for a metric illustrates the performance of the given model and metric for different thresholds.

Takeaways. From the top performing models, i.e., Deffuant_P, Voter, and HK_*_P, two main observations can be made: (i) a user’s internal aggression state is highly dependent on the aggression state of their peers (i.e., users with whom there is a direct interaction or relationship), and (ii) the internal aggression state of a user is an important factor in aggression propagation. This aligns well with the 4th insight in Sec. 3.1. In situations with various options for reaction, the inner state (in our case, the aggression state) of individuals, as well as of those with whom they are directly connected, are key factors in the individuals’ subsequent state [27]. This is also reflected in the top models: on the one hand, they are all pairwise models, and on the other, apart from Voter, they consider a user’s internal aggression state before making aggression changes. Overall, based on the best models, we observe that online aggression (at least on Twitter) propagates from one user to another; users are not strongly influenced by their neighborhood as a whole. Aggressive users have been shown to be less popular (i.e., to have fewer followers and friends) than normal users [9], which could explain why they are affected more by direct relationships than by the aggression state of their neighborhood.

5.2 How do models perform over time?

Next, we examine how model performance is affected when different time snapshots of the network’s aggression state are considered. This analysis is done to: (i) compare each model’s state with ground truth at progressing simulation times, and (ii) detect which snapshot best fits the real data. We remind the reader that there is no point in simulating propagation until the models converge to a steady state, since the time this takes may not match the time at which the ground truth data were captured. For our analysis, we focus on the top four performing models (regardless of threshold), i.e., Deffuant_P, Voter, HK_0.5_P, and HK_1.0_P.

Figure 2 shows that, for all models, performance is lower within the first snapshots and gradually increases, stabilizing in the last snapshots. The similarity metrics follow a similar pattern across models, indicating that any of them can be used for the performance analysis, and that the comparison results are stable whether the compared vectors are analyzed by ranks or by absolute values.

Takeaways. These models successfully capture how a network’s aggression status changes over time. The notion of snapshots is a valid way to represent aggression propagation, since it captures how aggression is (or is expected to be) distributed in the network in real time. By focusing on user interactions (Voter) or on users’ internal aggression state (Deffuant_P, HK_0.5_P, and HK_1.0_P), the aforementioned models can be used to track how aggression propagates in networks with similar properties.

5.3 Is the order of changes important?

Figure 3: Final average aggression score for aggressive (top part), normal (bottom part), and all (middle part) users, based on 5 different types of user ordering, through the simulation time, for (a) Deffuant_P and (b) HK_0.5_P.

Figure 3 shows the aggression evolution of three sets of users (normal, aggressive, and all users) in relation to how they could interact (randomly, based on popularity (most or least popular first), by involved neighborhood, and by network id), for the top two models (results are similar for all top models, omitted due to space). If aggressive users were to interact randomly, aggression would decline faster than under the other ordering methods. In contrast, aggressive users show greater resistance to reducing their aggression if they interact in order of popularity (highest to lowest). Interestingly, assuming the least popular users interact (act on their aggression) first leads to slower propagation. If aggression propagated from one neighborhood to the next, it would also propagate slowly, at times approaching the least-popular-first ordering. For normal users, the aggression status is not significantly affected by the way they interact; the difference between their initial and final aggression scores is subtle. Instead, normal users affect aggressors more than the inverse. For all considered models (apart from Voter), after a sufficient number of interactions the aggression score converges, indicating that, whether changes are faster or slower, the network state stabilizes.

Takeaways. Overall, the way users could interact and “exchange” or propagate aggression impacts the overall network state. User popularity can be a strong predictor of how aggression will move through the network. This could be attributed to the fact that more popular users have a stronger impact within the network when they are aggressive (or not), due to their high degree centrality: they can affect many users at the same time, leading to a high rate of aggression propagation. This also aligns with phenomena already evident in the wild (3rd insight: influence due to power difference, Sec. 3.1). At the same time, normal users are more resistant to aggression, due to their expectedly higher power status and larger, mostly non-aggressive neighborhoods.
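The orderings compared above can be sketched as follows; the degree dictionary is a hypothetical popularity proxy (e.g., follower counts), and the “involved neighborhood” ordering (a traversal outward from seed aggressors) is omitted for brevity.

```python
import random

def interaction_orderings(users, degree, seed=42):
    """Build the user orderings compared in Figure 3 (minus neighborhood order)."""
    rng = random.Random(seed)
    by_popularity = sorted(users, key=lambda u: degree[u], reverse=True)
    return {
        "random": rng.sample(users, len(users)),
        "most_popular_first": by_popularity,
        "least_popular_first": list(reversed(by_popularity)),
        "network_id": sorted(users),   # fixed crawl/id order
    }

users = [3, 1, 2]
degree = {1: 10, 2: 5, 3: 20}          # hypothetical follower counts
orderings = interaction_orderings(users, degree)
```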

5.4 What is users’ final aggression state?

Figure 4: CDFs of the final aggression scores of (a) normal and (b) aggressive users who participated in the aggression propagation.

Figure 4 shows how users’ aggression has settled by the end of the simulation (10th snapshot) across the top four performing models; we consider the more realistic random ordering of changes. Figure 4(a) shows that, regardless of model, most normal users remain unaffected in the end (i.e., their aggression score stays zero), while the rest gain some aggression at varying levels. For instance, under Deffuant_P, some users end up with the maximum aggression score, with the remaining users varying between the lower and upper limits. In Voter, because of its binary formulation, the final aggression score is either 0 or 1, and a share of normal users is affected. Finally, Figure 4(b) shows that the majority of aggressive users maintain their high aggression score (the exact share depends on the model), with only a small fraction turning normal (Deffuant_P and HK_*_P). Contrary to normal users, under Voter a high portion of aggressive users is positively affected by turning normal. Overall, consistently with Figure 3, normal users are more resistant to adopting aggression than aggressive users are to abandoning it.
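The CDFs in Figure 4 can be built from the final per-user scores as sketched below; the score list is a made-up example illustrating the mass of normal users remaining at zero.

```python
def ecdf(scores):
    """Return (sorted values, cumulative fractions) for plotting an empirical CDF."""
    xs = sorted(scores)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

# Hypothetical final scores of five normal users after the 10th snapshot.
final_scores = [0.0, 0.0, 0.0, 0.2, 1.0]
xs, ys = ecdf(final_scores)
frac_unaffected = sum(1 for s in final_scores if s == 0.0) / len(final_scores)
```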

6 Discussion

Despite the consequences that abusive behavior has on individuals (e.g., embarrassment, depression, isolation from other community members), important cases of aggression still stay under the radar of social networks, e.g., [39]. In fact, how such behavior propagates within networks has not been studied extensively. To address this gap, we are the first to propose a pipeline to evaluate various aggression dynamics models and to identify those that best emulate aggression propagation in social networks. To simulate how such behavior could spread in the network, we build on top of popular opinion dynamics models, and test and validate our models’ performance on real Twitter data. We find that our proposed models based on the Deffuant and Hegselmann & Krause opinion models perform best in modeling aggression propagation in a network such as Twitter, regardless of the parameters or thresholds used. Important insights embedded in these models are: (1) online aggression tends to propagate from one user to another; (2) the power score of a user (e.g., degree centrality) and (3) the user’s internal aggression state are both top factors to consider in aggression propagation modeling; and (4) influence from a user’s neighborhood is of lesser importance.

Overall, we believe this work makes a significant first step towards understanding and modeling the dynamics of aggression. Our results highlight the suitability of the top performing models for simulating the propagation of aggression in a network such as Twitter, and suggest how a campaign to monitor and even curb aggression on Twitter could work. That is, if aggressive users are monitored in their interactions with others (e.g., posting of aggressive messages), and simultaneously normal users are shielded from this aggression by dropping such communication, the overall aggression in the network will drop significantly. In fact, if the campaign targets highly popular aggressive users, who are encouraged to reduce their aggression via educational tutorials and other interventions, the overall aggression in the network can drop faster than if users were selected by other criteria (e.g., at random). An interesting extension of this work would be to model aggression propagation on a dynamic network, in which links are added or removed over time, since evolving networks are the most realistic but notoriously difficult to model. It would also be worthwhile to investigate the effectiveness of the proposed models in predicting aggression on other platforms.

Acknowledgements

This research has been partially funded by the European Union’s Horizon 2020 Research and Innovation program under the Marie Skłodowska-Curie ENCASE project (Grant Agreement No. 691025) and the CONCORDIA project (Grant Agreement No. 830927).

References

  • [1] J. J. Allen, C. A. Anderson, and B. J. Bushman (2018) The general aggression model. Current Opinion in Psychology 19, pp. 75 – 80. Note: Aggression and violence External Links: ISSN 2352-250X, Document, Link Cited by: §3.1, §3.1.
  • [2] C. A. Anderson and N. L. Carnagey (2004) Violent evil and the general aggression model. The Social Psychology of Good and Evil, pp. 168 – 192. Cited by: §3.1.
  • [3] A. Bandura, D. Ross, and S. A. Ross (1961) Transmission of aggression through imitation of aggressive models.. The Journal of Abnormal and Social Psychology 63 (3), pp. 575–582. Cited by: §3.1.
  • [4] L. Behera and F. Schweitzer (2003-06) On spatial consensus formation: is the sznajd model different from a voter model?. International Journal of Modern Physics C 14, pp. 1331–1354. External Links: Document Cited by: §2.1.
  • [5] A. Bernardes, D. Stauffer, and J. Kertész (2002-01) Election results and the sznajd model on barabasi network. Physics of Condensed Matter 25, pp. 123–127. External Links: Document Cited by: §2.1.
  • [6] R. B. Cairns, B. D. Cairns, H. J. Neckerman, S. D. Gest, and J. Gariepy (1988) Social networks and aggressive behavior: peer support or peer rejection?. Developmental Psychology 24 (6), pp. 815–823. External Links: Link Cited by: §3.1.
  • [7] S. Campbell (1987) Models of anger and aggression in the social talk of women and men. Journal for Theory of Social Behaviour 17 (4). Cited by: §1.
  • [8] D. Chatzakou, N. Kourtellis, J. Blackburn, E. De Cristofaro, G. Stringhini, and A. Vakali (2017) Mean birds: detecting aggression and bullying on twitter. In WebSci, New York, NY, USA, pp. 13–22. External Links: ISBN 978-1-4503-4896-6, Link, Document Cited by: §1, item 1.
  • [9] D. Chatzakou, I. Leontiadis, J. Blackburn, E. De Cristofaro, G. Stringhini, A. Vakali, and N. Kourtellis (2019) Detecting cyberbullying and cyberaggression in social media. Transactions on the Web (). Cited by: §1, §5.1.
  • [10] Y. Chen, Y. Zhou, S. Zhu, and H. Xu (2012) Detecting Offensive Language in Social Media to Protect Adolescent Online Safety. In PASSAT and SocialCom, Cited by: §1, §3.1.
  • [11] P. Clifford and A. Sudbury (1973-12) A model for spatial conflict. Biometrika 60 (3), pp. 581–588. External Links: ISSN 0006-3444, Document, Link Cited by: §2.2.
  • [12] L. Corcoran, C. Guckin, and G. Prentice (2015) Cyberbullying or cyber aggression?: a review of existing definitions of cyber-based peer-to-peer aggression. Societies 5 (2), pp. 245–255. Cited by: §3.1, §3.1.
  • [13] Cyberbullying Research Center (2019) Note: https://cyberbullying.org/facts Cited by: §1.
  • [14] T. Davidson, D. Warmsley, M. Macy, and I. Weber (2017) Automated hate speech detection and the problem of offensive language. In ICWSM, Cited by: §3.1.
  • [15] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch (2000) Mixing beliefs among interacting agents. Advances in Complex Systems 03 (01n04), pp. 87–98. External Links: Document, Link Cited by: §2.2.
  • [16] M. H. Degroot (1974) Reaching a consensus. Journal of the American Statistical Association 69 (345), pp. 118–121. External Links: Document Cited by: §2.2.
  • [17] K. Dinakar, R. Reichart, and H. Lieberman (2011) Modeling the detection of textual cyberbullying.. The Social Mobile Web 11 (02). Cited by: §3.1.
  • [18] N. Djuric, J. Zhou, R. Morris, M. Grbovic, V. Radosavljevic, and N. Bhamidipati (2015) Hate Speech Detection with Comment Embeddings. In WWW, Cited by: §1.
  • [19] A. E. Gabbey and T. Jewell (2016) Aggressive behavior. Note: http://bit.ly/2QNuFSR Cited by: §3.1.
  • [20] J.A.M. Farver (1996) Aggressive behavior in preschoolers’ social networks: do birds of a feather flock together?. Early Childhood Research Quarterly 11 (3), pp. 333–350. External Links: ISSN 0885-2006, Document, Link Cited by: §3.1.
  • [21] S. Fortunato (2005-09) Monte carlo simulations of opinion dynamics. Complexity, Metastability and Nonextensivity. External Links: ISBN 9789812701558, Link, Document Cited by: §2.1.
  • [22] A. Founta, C. Djouvas, D. Chatzakou, I. Leontiadis, J. Blackburn, G. Stringhini, A. Vakali, M. Sirivianos, and N. Kourtellis (2018) Large scale crowdsourcing and characterization of twitter abusive behavior. In ICWSM, Cited by: §3.1.
  • [23] N. E. Friedkin and E. C. Johnsen (1990) Social influence and opinions. The Journal of Mathematical Sociology 15 (3-4), pp. 193–206. External Links: Document, Link, https://doi.org/10.1080/0022250X.1990.9990069 Cited by: §2.2.
  • [24] W. I. Gardner and C. W. Moffatt (1990) Aggressive behaviour: definition, assessment, treatment. International Review of Psychiatry 2 (1), pp. 91–100. External Links: Document, Link, https://doi.org/10.3109/09540269009028275 Cited by: §3.1.
  • [25] A. Gionis, E. Terzi, and P. Tsaparas (2013) Opinion maximization in social networks. In SDM, pp. 387–395. Cited by: §2.2.
  • [26] Hariani and I. Riadi (2017) Detection of cyberbullying on social media using data mining techniques. IJCSIS 15 (3), pp. 244. Cited by: §3.1.
  • [27] E. Hatfield, J. T. Cacioppo, and R. L. Rapson (1993) Emotional contagion. Current directions in psychological science 2 (3), pp. 96–100. Cited by: §5.1.
  • [28] G. E. Hine, J. Onaolapo, E. De Cristofaro, N. Kourtellis, I. Leontiadis, R. Samaras, G. Stringhini, and J. Blackburn (2017) Kek, Cucks, and God Emperor Trump: A Measurement Study of 4chan’s Politically Incorrect Forum and Its Effects on the Web. In ICWSM, Cited by: §1.
  • [29] H. Hosseinmardi, S. A. Mattson, R. I. Rafiq, R. Han, Q. Lv, and S. Mishra (2015) Analyzing Labeled Cyberbullying Incidents on the Instagram Social Network. In SocInfo, Cited by: §1.
  • [30] E. Ising (1925-02-01) Beitrag zur theorie des ferromagnetismus. Zeitschrift für Physik A Hadrons and Nuclei 31 (1), pp. 253–258. Cited by: §2.1.
  • [31] R. J. Glauber (1963) Time-dependent statistics of the Ising model. Journal of Mathematical Physics 4 (2), pp. 294–307. External Links: Document, Link Cited by: §2.1.
  • [32] I. Kayes, N. Kourtellis, D. Quercia, and F. Iamnitchi (2015) The Social World of Content Abusers in Community Question Answering. In WWW, Cited by: §1.
  • [33] S. W. L.S.W. (2018-05-07) Confronting passive aggressive behavior on social media. Note: http://bit.ly/2QP1yi9 Cited by: §1.
  • [34] J. Lee and Y. Lee (2002) A holistic model of computer abuse within organizations. Information management & computer security 10 (2), pp. 57–63. Cited by: §1, §3.1, §3.1.
  • [35] J. Lorenz (2007-12) Continuous opinion dynamics under bounded confidence: a survey. International Journal of Modern Physics C 18 (12), pp. 1819–1838. External Links: ISSN 1793-6586, Link, Document Cited by: §2.1.
  • [36] J. McAuley and J. Leskovec (2012) Learning to discover social circles in ego networks. In NIPS - Volume 1, USA, pp. 539–547. External Links: Link Cited by: §4.1.
  • [37] B. I. Newman and J. N. Sheth (1985-09) A Model of Primary Voter Behavior. Journal of Consumer Research 12 (2), pp. 178–187. External Links: ISSN 0093-5301, Document, Link Cited by: §2.1.
  • [38] C. Nobata, J. Tetreault, A. Thomas, Y. Mehdad, and Y. Chang (2016) Abusive language detection in online user content. In 25th ACM WWW Companion, Cited by: §3.1.
  • [39] D. O’Sullivan (2018-10-27) Bomb suspect threatened people on twitter, and twitter didn’t act. Note: https://cnn.it/2t8F1na Cited by: §6.
  • [40] S. Pieschl, T. Porsch, T. Kahl, and R. Klockenbusch (2013) Relevant dimensions of cyberbullying - Results from two experimental studies . Journal of Applied Developmental Psychology 34 (5). Cited by: §3.1, §3.1.
  • [41] P.K. Smith, J. Mahdavi, M. Carvalho, S. Fisher, S. Russell, and N. Tippett (2008) Cyberbullying: Its nature and impact in secondary school pupils. In Child Psychology and Psychiatry, Cited by: §1, §3.1.
  • [42] P. Sobkowicz (2009) Modeling opinion formation with physics tools: call for closer link with reality. JASSS 12 (1), pp. 11. External Links: ISSN 1460-7425, Link Cited by: §2.1.
  • [43] K. Sznajd-Weron and J. Sznajd (2000-09) Opinion evolution in closed community. International Journal of Modern Physics C 11 (06), pp. 1157–1165. External Links: ISSN 1793-6586, Link, Document Cited by: §2.1, §2.1.
  • [44] Z. Waseem and D. Hovy (2016) Hateful symbols or hateful people? predictive features for hate speech detection on twitter.. In SRW@ HLT-NAACL, Cited by: §3.1.
  • [45] H. Xie, R. B. Cairns, and B. D. Cairns (1999) Social networks and configurations in inner-city schools: aggression, popularity, and implications for students with ebd. JEBD 7 (3), pp. 147–155. External Links: Document, Link, https://doi.org/10.1177/106342669900700303 Cited by: §3.1.
  • [46] G. Zschaler, G. A. Böhme, M. Seißinger, C. Huepe, and T. Gross (2012-04) Early fragmentation in the adaptive voter model on directed networks. Phys. Rev. E 85, pp. 046107. External Links: Document, Link Cited by: §2.2.
  • [47] X. Zuo, J. Blackburn, N. Kourtellis, J. Skvoretz, and A. Iamnitchi (2016) The power of indirect ties. Computer Communications 73, pp. 188–199. Cited by: §4.1.