Introduction
Given the success of traditional machine learning, interest in geometric learning has grown in recent years. Geometric learning seeks to extend machine learning models beyond Euclidean data to include objects such as graphs, point clouds, and manifolds. Non-Euclidean data structures are information-rich, as they can describe data that traditional structures cannot. For example, a traditional data structure may contain attributes of a group of individuals, while a graph or network can also encode the relationships between the individuals. Thus, new algorithms that leverage this type of information can bring new insights. For graphs in particular, there are three main problems: node classification, link prediction, and graph classification. Here, we focus on the last task, graph classification. In this problem, each input sample is a graph with a corresponding category or label. The goal is to create a model that takes an entire graph as input and assigns it to the correct class.
Graph classification is gaining interest in part due to the variety of domains it may be applied to. The same models that can classify proteins based on their structure may also be used to classify social media conversations. We identify a new application which is highly relevant in today’s sociopolitical landscape: bot classification on social media. Automated accounts called bots are increasingly used in online information operations to manipulate both networks (virtual social links) and the narratives that transit these networks. Since bots operate through networks, their network structure can be used to identify them.
While many problems regarding social media data can be posed under a graph classification framework, few prior models focus on this domain. Much of the prior work in graph classification focuses on benchmark data that does not reflect the typical structure of social media data. Specifically, social media graphs or networks tend to have large node sets and low density, while benchmark datasets tend to have fewer than 100 nodes and are quite dense.
In this work, we develop a new graph classification architecture inspired by classical network analysis. When analyzing large networks, it is common practice to calculate local (node-level) features and study their distribution. Here, we use an end-to-end graph-convolutional architecture to extract local latent features and classify the graph based on the distribution of these features. Due to the high dimensionality of the feature space, we instead use 1D cross sections of the distribution in the form of a multi-channel histogram. Since this procedure classifies graphs based on histograms of latent features, it has been named GraphHist.
In the following sections, we review prior work in graph classification, explain our architecture, demonstrate GraphHist's ability to achieve state-of-the-art results on social media benchmarks, and finally demonstrate a real-world application of our model through a case study of bot classification on Twitter data.
Prior Work
The field of graph learning has expanded rapidly since the notion of Graph Neural Networks (GNNs) was first introduced by Gori et al. (Gori, Monfardini, and Scarselli (2005)). The work from then to 2018 is well summarized by Wu et al. (Wu et al. (2019)).
GNNs for graph classification are typically based on a type of graph convolution. Traditional convolutional networks have proved extremely successful at learning shape features in the Euclidean domain, such as in image classification; however, translating this operation to the graph domain is difficult due to irregularities in graph structure (Krizhevsky, Sutskever, and Hinton (2012)). Graph convolutions usually fall into one of two approaches: spectral or spatial. Spectral methods stem from efforts to extend traditional signal processing techniques to graph signals, or Graph Signal Processing (Shuman et al. (2013)). Spectral approaches typically use the symmetric normalized Laplacian, shown in Equation 3. Many spectral-based approaches like ChebNet relied on eigenvalue calculations, making them computationally costly (Defferrard, Bresson, and Vandergheynst (2016)). Spatial methods, on the other hand, operate on the local structure of the graph. In spatial methods such as GraphSage, nodes aggregate information from their neighbors (Hamilton, Ying, and Leskovec (2017)). Kipf and Welling introduced a model that bridged the gap between the two: it is an approximation of a spectral convolution, but it is localized in space (Kipf and Welling (2016)). This model uses the propagation rule shown in Equation 1, where trainable weights, W^(l), are multiplied into learned node features, H^(l), and then into the normalized Laplacian, Â. The process starts by using the given node features, X, as the input: H^(0) = X.
(1)  H^(l+1) = f( Â H^(l) W^(l) )
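As a concrete illustration, the propagation rule of Equation 1 can be sketched in a few lines of NumPy. This is only an illustrative sketch using the standard Kipf and Welling notation (the paper's actual implementation is in PyTorch), and the toy graph and dimensions here are hypothetical:

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One propagation step of Eq. 1: H' = f(D^{-1/2} A D^{-1/2} H W).

    A: adjacency matrix (n x n); self-loops are added before normalization.
    H: node features (n x f); W: trainable weights (f x f').
    """
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees including self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return activation(A_hat @ H @ W)

# Toy 3-node path graph with 2-dimensional input features and 4 output features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4): one 4-dimensional embedding per node
```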
Many architectures for graph classification now build upon this convolutional structure, including two works we draw from here: Sortpool and Capsule Graph Networks (Zhang et al. (2018); Xinyi and Chen (2019)).
Zhang et al. replace the normalized Laplacian with the random-walk Laplacian and draw parallels to the Weisfeiler-Lehman subtree kernel (Shervashidze et al. (2011)). This effectively gives node embeddings, which they then sort and either truncate or pad to a fixed size, hence the name SortPool. While the sorting procedure gives some spatial relationship to the nodes, the truncating/padding procedure either drops important information or adds erroneous data when datasets have high variance in graph size, which is often the case in social media datasets. Xinyi and Chen have also used this GNN structure, but applied attention to handle the differences in graph size (Xinyi and Chen (2019)). Tixier et al. take a different approach (Tixier et al. (2017)). They first assume that node embeddings are given. Node embeddings can be obtained in a number of unsupervised ways, most of which attempt to preserve the network-based distance between nodes in the embedded space through operations like random walks (Perozzi, Al-Rfou, and Skiena (2014)). They then compress this high-dimensional embedding into a multi-channel image by looking at cross sections of consecutive principal components from principal component analysis (PCA). Finally, they use a standard image classifier architecture to classify the graphs. This approach achieved good results, but has two shortcomings. First, the lack of an end-to-end architecture results in embeddings that may work well for spatial preservation, but poorly for graph classification. Second, their pairing of PCA dimensions is somewhat arbitrary.
Here, we effectively combine the two: we apply a powerful CNN architecture like that used by Tixier et al. to the expressive node embeddings from GNNs, as in Kipf, Zhang, and Xinyi. The previously missing piece for attaching these two methods is a differentiable operation that converts node embeddings into a format that a CNN can leverage. To define such an operation, we draw from classical network science. Networks are typically analyzed by studying the distribution of their local features (Wasserman, Faust, and others (1994)). The most common such analysis is performed on the degree distribution, which has been used to classify networks of different types, such as scale-free or small-world networks. This individual analysis of feature histograms inspired the binning mechanism introduced here. Our binning operation approximates the full node embedding distribution as a multi-channel histogram, which is easily input to a standard CNN.
GraphHist
In graph classification, a set of labeled graphs is given as D = {(G_1, y_1), ..., (G_N, y_N)}. There are C potential labels, so y_i ∈ {1, ..., C} for all i. Our task is to find a mapping from a graph to its label, that is, a mapping h such that h(G_i) = y_i. In practice, we train a model to minimize the cross-entropy loss for this classification: L = -(1/N) Σ_i Σ_c 1[y_i = c] log p_{i,c}, where 1[·] is an indicator function outputting 1 if given a true statement and 0 otherwise, and where p_{i,c} is the probability the model assigns training example i to belong to class c.
For each graph, we operate on the adjacency matrix, A. Elements in A indicate the links in G such that A_{ij} = w_{ij} if nodes i and j are linked with weight w_{ij}, and A_{ij} = 0 otherwise. We also assume the graph has self-loops, A_{ii} = 1. Additionally, each node in the graph has f features, encoded in X. In most example datasets, no node features are given. In these cases we use the node degree and an identity feature. The degree of node i is given by d_i = Σ_j A_{ij}. From there, the degree matrix is constructed as D = diag(d_1, ..., d_n).
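The degree matrix and default node features described above can be sketched directly. This is an illustrative NumPy sketch (the function name and the toy graph are hypothetical, and the notation follows the text):

```python
import numpy as np

def degree_and_default_features(A):
    """Build the degree matrix D = diag(d_1, ..., d_n) and, when a dataset
    provides no node attributes, the default features described in the text:
    the node degree plus a constant identity feature."""
    n = A.shape[0]
    A = A + np.eye(n) * (np.diag(A) == 0)       # ensure self-loops, A_ii = 1
    d = A.sum(axis=1)                           # d_i = sum_j A_ij
    D = np.diag(d)                              # degree matrix
    X = np.stack([d, np.ones_like(d)], axis=1)  # (n, 2): [degree, identity]
    return D, X

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
D, X = degree_and_default_features(A)
print(np.diag(D))  # [3. 2. 2.] once self-loops are included
```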
Given this setting, a diagram of our proposed architecture is given in Figure 1.
Graph Convolution
Our variant of graph convolution is given in Equation 2, where W_l and b_l are trainable weight and bias parameters, respectively, and where f is a nonlinear activation function. This GCN relies on the normalized Laplacian matrix, Â (shown in Equation 3), making it similar to spectral GCNs. Spectral GCNs were popularized by Kipf and Welling when they provided a local approximation to ChebNet, greatly reducing the computational cost (Kipf and Welling (2016)).
(2)  H_l = f( Â^l X W_l + b_l )
(3)  Â = D^{-1/2} A D^{-1/2}
These spectral GCNs serve as a local approximation to the more general convolutional framework that was originally proposed by Bruna et al. and have strong underpinnings in graph signal processing (Bruna et al. (2013); Shuman et al. (2013)). Another possibility would have been the random-walk Laplacian, D^{-1} A, which encodes the probability of transitioning from one node to another. The random-walk Laplacian has been used in a variety of works, both for graph classification and node embedding (Scarselli et al. (2009); Zhang et al. (2018); Hamilton, Ying, and Leskovec (2017)). However, given the underpinnings in graph signal processing and the success of recent spectral models, we move forward with the symmetric normalized definition.
Previous graph classification architectures like SortPool and Capsule Graph Networks stack GCNs such that embeddings from the first GCN are the new input features to the second GCN, which then outputs features for the third, and so forth. This form of stacking allows features to be passed beyond direct neighbors: an l-level stacking allows each node to aggregate features from within its neighborhood of radius l.
Additionally, Zhang et al. show that stacked GCNs provide a continuous analog to the graph coloring problem and the Weisfeiler-Lehman subtree kernel (Zhang et al. (2018); Shervashidze et al. (2011)).
However, GCN stacking makes feature aggregation from a node's extended neighborhood reliant on aggregation closer to the source node. In our framework, aggregation at each scale is obtained independently, through powers of the Laplacian matrix. While previous works do not explicitly use powers of the Laplacian, they do so implicitly through the stacking of GCN modules. Experiments with our approach show slight improvements in performance while allowing for parallelization if memory allows. Additionally, the independence of the GCN modules allows for the use of a bias term, which is not natural in stacked-GCN architectures, since the biases would be multiplied back into the Laplacian at the next level. Lastly, we include the zeroth power in our GCN step, which reduces the Laplacian to the identity matrix, thus providing a standard fully connected submodule on the node features.
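The independent-module scheme just described can be sketched as follows. Writing each module as H_l = f(Â^l X W_l + b_l) is our reading of Equation 2; the function name and module sizes are hypothetical, and this NumPy sketch stands in for the paper's PyTorch implementation:

```python
import numpy as np

def independent_gcn_modules(A_hat, X, weights, biases, f=np.tanh):
    """Each module l computes H_l = f(A_hat^l X W_l + b_l) directly from the
    input features, rather than from the previous module's output as in
    stacked-GCN architectures. l = 0 uses A_hat^0 = I, giving a plain fully
    connected submodule, and the independence makes the bias term natural."""
    P = np.eye(A_hat.shape[0])          # A_hat^0 = I
    outputs = []
    for W, b in zip(weights, biases):
        outputs.append(f(P @ X @ W + b))
        P = A_hat @ P                   # raise the Laplacian power for the next module
    return outputs

# Toy graph with self-loops, symmetrically normalized as in Eq. 3.
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))     # D^{-1/2} A D^{-1/2}
X = np.random.randn(3, 2)
Ws = [np.random.randn(2, 4) for _ in range(3)]
bs = [np.zeros(4) for _ in range(3)]
Hs = independent_gcn_modules(A_hat, X, Ws, bs)
print(len(Hs), Hs[0].shape)  # 3 (3, 4)
```

The outputs are then concatenated, as described in the next section.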
Fully Connected Combination Layers
As in SortPool, Capsule Graph Networks, and others, the node embeddings obtained from the GCN modules are concatenated into a single embedding matrix. In the succeeding step, the inter-dimension properties are lost (more details on why in the following section).
To minimize this information loss, the embeddings are first passed through two fully connected layers, which can capture these nonlinear inter-dimension properties. For simplicity, we have chosen layers that have the same dimension as the initial embeddings. This step is similar to the initial convolution applied to the final embeddings in SortPool, with differences being in the size of the output and the activation function.
Node Binning
One of the fundamental challenges in graph classification is that each input graph can have a different number of nodes, n, in its node set. Thus, graph classification algorithms that rely on node embeddings must find some way of transitioning from this variable size to a fixed size.
SortPool, for example, reshapes the data by setting a threshold k. After sorting, the top k nodes are selected, and the rest are dropped. If the graph has fewer than k nodes, zeros are appended to the node set until it is of size k. As previously discussed, this is suboptimal from a data preservation point of view.
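For comparison, the SortPool-style reshaping just described can be sketched as follows. This is a hedged sketch, not the SortPool authors' implementation; sorting by the last embedding channel mirrors their use of the final layer's output as a sort key:

```python
import numpy as np

def sortpool_resize(Z, k):
    """Sort node embeddings by the last channel (descending), keep the top k
    rows, and zero-pad when the graph has fewer than k nodes. Whenever
    n != k, information is either dropped or padded in."""
    order = np.argsort(-Z[:, -1])                 # descending by last channel
    Z = Z[order][:k]
    if Z.shape[0] < k:
        pad = np.zeros((k - Z.shape[0], Z.shape[1]))
        Z = np.vstack([Z, pad])                   # zero-pad to size k
    return Z

Z = np.random.randn(5, 3)                         # a 5-node graph
print(sortpool_resize(Z, 8).shape, sortpool_resize(Z, 2).shape)  # (8, 3) (2, 3)
```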
We solve this problem through a binning procedure. The input space is discretized, and the number of nodes falling into each discrete bin is counted. Then, a standard convolutional architecture can be applied to the obtained density function. However, effective node embeddings are typically high in dimension, making discretization over the full space intractable. The 2D-CNN approach taken by Tixier et al. approximates the distribution using principal components (Tixier et al. (2017)). They take 2D cross sections of the ascending principal components, stacking them as channels of an image. Given the image output, a standard image classifier can then be applied. While this achieved good results, it relies on preprocessing (using given node embeddings and calculating the principal components) and cannot update the node embeddings to improve performance.
Here we provide a differentiable alternative that does not rely on a dimension-pairing scheme. Instead, we bin the data along one-dimensional cross sections of each dimension, resulting in a histogram with one channel per embedding dimension. The number of bins is a tunable parameter, u.
The derivative of the loss function can then be propagated through the binning layer by a weighted average of the bin gradients, as shown in Equation 5. First, the distances from each node to the bin centers are calculated, as in Equation 4. Then, a weighted average of the bin gradients is taken, allowing bins closer to a node to have more pull than bins farther away. Thus, each bin pulls nodes toward it if its gradient is positive, and pushes nodes away if its gradient is negative. The amount of pull is proportional to the bin's distance from the node and is controlled by a scaling parameter. While the activation function preceding the binning does not necessarily have to be the hyperbolic tangent, it must be bounded: without a bounded activation function, bin boundaries could not be predefined to capture all of the node placements. With tanh, for example, all output values will be between -1 and 1, so if u evenly spaced non-overlapping bins are used, each will be of length 2/u, and together they will capture all potential node placements.
(4)
(5) 
Again, this process does lose the covariance relationship between the dimensions of the distribution. However, the combination layers overcome this simplification since we are using an endtoend architecture. Nodes will be pushed along these 1D cross sections during backpropagation such that a classification can be made. The effectiveness of this approach is demonstrated on standard benchmark data in the Experiments section and in a case study for a new application, bot classification, in the Case Study section.
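The forward pass of the binning layer can be sketched as follows. This is an illustrative NumPy sketch under the assumptions stated in the text (tanh-bounded inputs, u evenly spaced bins over [-1, 1]); it omits the custom backward pass of Equations 4 and 5:

```python
import numpy as np

def multi_channel_histogram(Z, u):
    """Discretize each of the c embedding dimensions independently into u
    evenly spaced bins over [-1, 1] (valid because tanh bounds the
    activations), producing a (c, u) multi-channel histogram."""
    n, c = Z.shape
    edges = np.linspace(-1.0, 1.0, u + 1)   # u non-overlapping bins of length 2/u
    H = np.empty((c, u))
    for j in range(c):
        H[j], _ = np.histogram(Z[:, j], bins=edges)
    return H

Z = np.tanh(np.random.randn(100, 3))        # 100 node embeddings, 3 channels
H = multi_channel_histogram(Z, 8)
print(H.shape, int(H[0].sum()))  # (3, 8) 100: every node lands in some bin
```

In training, the hard counts would be paired with the distance-weighted gradient rule described above so that nodes can be pushed along each 1D cross section during backpropagation.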
Histogram Classification
Finally, the multi-channel histogram can be classified using a traditional convolutional architecture. Tixier et al. used a variant of LeNet-5 to classify graphs based on 2D cross sections of predefined node embeddings (Tixier et al. (2017)); the original CNN achieved 99.45% accuracy on the MNIST handwritten digit classification task. We slightly modify the architecture to suit the 1-dimensional data obtained from the previous steps.
As in Tixier et al. (2017), the histogram is passed to 4 submodules, each with a different filter size. A submodule is performed as follows. The input data is convolved with the submodule's filter size and a stride of 1, to 64 output channels. Then, max pooling is performed with size and stride 2. The convolution is performed again, but with 96 output channels. Simultaneously, the histogram is passed to a convolution spanning its full length, thus capturing the entire histogram with 96 output channels. The submodule outputs and the full-histogram convolution are concatenated and connected to a fully connected layer of size 256. Lastly, the 256-unit layer is connected to a softmax output that classifies the graph. Dropout layers were placed before all fully connected layers in the histogram classifier. The activation function used was ReLU. The three changes from the original classifier are: the whole-histogram convolution is added, the 128-hidden-unit layer was changed to 256 units, and the model was adapted to its 1-dimensional analog.
The entire model, then, can be trained in an endtoend manner.
Experiments
Benchmark Datasets and Methods
There are many potential benchmark datasets for graph classification; however, few of them are social networks, and even fewer resemble the type of networks seen in real-world social media data. Real-world social media networks are typically large and sparse (Onnela et al. (2007)).
However, most benchmark datasets are relatively dense and have node sets with fewer than 100 nodes. With this in mind, we have selected 6 popular benchmark datasets, displayed in Table 1. The datasets were obtained from Kersting et al.'s collection, but were created by Yanardag and Vishwanathan (Kersting et al. (2016); Yanardag and Vishwanathan (2015)).
Dataset  IMDB-B  IMDB-M  COLLAB  REDDIT-B  REDDIT-5K  REDDIT-12K  Bots

Graphs  1000  1500  5000  2000  4999  11929  14962
Classes  2  3  3  2  5  11  2
Avg. Nodes  19.77  13.00  74.49  429.63  508.52  391.41  7294
Avg. Edges  96.53  65.94  2457.78  497.75  594.87  456.89  11034
The IMDB datasets are movie collaboration datasets. Nodes are actors/actresses and links represent co-appearance in a movie. The graphs are ego networks, and the task is to classify the genre that an ego network belongs to. This dataset is somewhat challenging because movies may belong to more than one genre, but may only be given one label.
COLLAB was derived from scientific collaboration data in three fields: High Energy Physics, Condensed Matter Physics, and Astrophysics. Each graph is an author's ego network, and the task is to identify which field they work in.
All three of the Reddit datasets were scraped from the social media platform Reddit using its API. Nodes in the graph are Reddit users, and links are created by direct replies in the discussion. In the binary dataset, the graphs come either from question-and-answer subreddits or from discussion-based subreddits. The task is to identify which type of subreddit the conversational graph comes from. In the 5K and 12K datasets, the task is to identify the specific subreddit that the graph belongs to.
We place greater emphasis on the Reddit datasets, as they are the only social media classification tasks. Table 1 illustrates the importance of this distinction: the graphs in the Reddit datasets tend to be an order of magnitude larger in node set size, and two orders of magnitude lower in density.
We have selected 5 different methods to compare our results against, namely Anonymous Walk Embeddings (AWE), SortPool, DiffPool, CapsGNN, and 2D CNN (Ivanov and Burnaev (2018); Zhang et al. (2018); Ying et al. (2018); Xinyi and Chen (2019); Tixier et al. (2017)). These methods were selected to reflect state-of-the-art classification results and to compare our results against the methods we have built upon. To the best of our knowledge, the current state-of-the-art performances are shown for every dataset. The accuracies and standard deviations reported in Table 3 are based on the values reported in the initial publications. Because of this, not every dataset has a value for every method. Fey and Lenssen have introduced PyTorch Geometric, a library with implementations of many geometric learning algorithms (Fey and Lenssen (2019)). Some gaps are filled using values reported from their implementations. Anonymous Walk Embeddings is the only kernel approach compared against, so it is separated in Table 3.
Experimental Setting
The general architecture used for all experiments is illustrated in Figure 1. GraphHist was implemented in PyTorch. The hyperbolic tangent was used for all activation functions leading up to the LeNet-style classifier. We used the ReduceLROnPlateau scheduler with an initial learning rate of , a factor of 0.5, a patience of 2, a cooldown of 0, and a minimum learning rate of . We used stochastic gradient descent with a minibatch size of 32. We terminated training after 9 consecutive epochs without progress in the testing loss.
We then tuned the parameters k, h, u, and d for each dataset via grid search. Parameters were selected based on their performance on the test set. The final parameters for each dataset are given in Table 2.
Finally, we performed 10-fold cross-validation on each of the datasets using the parameters in Table 2. The mean accuracy and its standard deviation are reported for each dataset in Table 3. GraphHist advances state-of-the-art classification on all 3 of the social media benchmarks. It also beats state-of-the-art results for IMDB-B, and obtains second-place results for the remaining two datasets.
We recognize that there are many more hyperparameters that could be tuned, like the batch size, and that even the sizes of the intermediate transformations could be tuned. Exploring these possibilities is left for future work, but could yield even better results than those demonstrated here.

Dataset  k  h  u  d

IMDB-B  50  2  128  0.8
IMDB-M  25  4  128  0.8
COLLAB  25  2  256  0.2
REDDIT-B  25  6  64  0.8
REDDIT-5K  25  8  64  0.8
REDDIT-12K  25  2  64  0.8
Twitter Bots  25  2  8  0.5 
Dataset  IMDB-B  IMDB-M  COLLAB  REDDIT-B  REDDIT-5K  REDDIT-12K

AWE  
Sortpool      
DiffPool      47.1  
CapsGNN    
2D CNN    
GraphHist 
Model  F1  Precision  Recall 

Botometer  0.524  0.858  0.377 
Debot  0.012  1.00  0.006 
bothunter Tier1  0.656  0.821  0.546 
bothunter Tier2  0.687  0.691  0.683 
bothunter Tier3  0.599  0.837  0.466 
GraphHist  0.683  0.807 
Bot detection F1, Precision, and Recall scores. All models except Botometer were trained on Debot data. The top-2 F1 scores are bolded; the state-of-the-art score is marked with an asterisk.
Case Study
Automated accounts called bots are increasingly used in online information operations to manipulate both networks (virtual social links) and the narratives that transit these networks. In doing so, state and non-state actors can artificially manipulate the online marketplace of belief and ideas. To battle this rise in bots, researchers in industry, government, and academia have developed increasingly sophisticated algorithms to detect these nefarious accounts. These research efforts have led to a “cat and mouse” cycle in which increasingly sophisticated algorithms are required to detect increasingly sophisticated automated accounts. Early detection models identified telltale indicators of automated activity such as stolen identities, lack of normal human circadian rhythms, anonymous attributes (lack of profile picture, random string screen name, etc.), and a low follower/followee ratio. These features, however, are relatively easy for a bot “puppetmaster” to manipulate in order to remain undetected.
It is much harder for these same bot “puppetmasters” to change the artificial features of the social and communication networks that they inhabit. These social and communication networks (following, retweeting, mentioning, replying) lack the overlapping social integration of human social and communication links. Thus, we exploit the structure of these communication networks directly using GraphHist. We find that this approach generalizes to new datasets better than current alternative approaches.
Building Networks
We built the conversational network that a Twitter account inhabits in the same manner as Beskow and Carley (2018a). This approach combines the timelines of the target account and its followers to build the larger conversation. This method was selected because it creates a comprehensive ego network while overcoming API rate-limiting constraints and expediting data collection (target collection is 5 minutes per account). While 5 minutes per account seems long, this is trivial compared to the hours or days it would take to build a single ego network based on friend/follower connections. The properties of these networks are summarized in Table 1.
Again, the differences between social media networks and standard benchmarks are pronounced. The Twitter networks are 2 orders of magnitude larger than the non-social-media benchmarks in terms of node set size. The Twitter network densities are also 3 orders of magnitude smaller than those of the standard benchmarks.
Previous Work in Bot Detection
For the past decade, increasing numbers of researchers have worked on developing algorithms to detect increasingly sophisticated bots. These models can be broadly separated into supervised machine learning models, unsupervised models, and graph-based models. These efforts have in turn produced several prominent tools used in social cybersecurity workflows, including the Botometer (Davis et al. (2016)), Bothunter (Beskow and Carley (2018b)), Debot (Chavoshi, Hamooni, and Mueen (2016)), and BotWalk (Minnich et al. (2017)) algorithms.
Most of the graph and community detection methods have been conducted on Facebook, where these bots are at times called Sybils. These include random walk approaches like SybilGuard (Yu et al. (2006)), SybilResist (Ma et al. (2014)) and SybilRank (Cao et al. (2012)). Other models relax some of the assumptions and use trust propagation approaches such as the SybilFence method (Cao and Yang (2013)).
Supervised models include traditional machine learning with SVM (Lee and Kim (2014)), Naïve Bayes (Chen, Guan, and Su (2014)), and Random Forest (Ferrara et al. (2016)) models trained on features extracted from Twitter tweet objects and user objects. Other methods have attempted to classify accounts based only on their text (Kudugunta and Ferrara (2018)) or their screen name (Beskow and Carley (2018c)). Several of the available models, like Botometer (Davis et al. (2016)) and Bothunter (Beskow and Carley (2018b)), are classic supervised machine learning models.
Several unsupervised methods have also emerged, largely focused on identifying underlying patterns produced by certain types of bots. These include clustering algorithms (Benigni, Joseph, and Carley (2017)) and anomaly detection algorithms like the BotWalk algorithm (Minnich et al. (2017)).
Most of these models leverage account data and account history, while graph-based models focus on finding patterns in the conversation and connections. Few models consider the larger conversational ego network surrounding the account. Only one supervised machine learning model has attempted to bring network science metrics (centrality, Simmelian ties, triadic census, etc.) from these ego networks into its feature space (Beskow and Carley (2018a)). Rather than using network metrics as proxies for the network itself, we approach this same problem with geometric learning over the entire graph.
Bot Classification Results
For the case study, we built training data of bot accounts that had been labeled by the Debot unsupervised algorithm. The Debot algorithm uses warped correlation to identify “correlated” Twitter bots (bots that post the same content at roughly the same time) (Chavoshi, Hamooni, and Mueen (2016)). Debot has demonstrated high precision in identifying this special class of bots and has been used to train classic supervised bot detection models with strong results, and thus was used to label bot data for training. Non-bot “human” data was randomly sampled from the Twitter 1% Stream. Our training data consisted of 8,842 bots and 6,120 human accounts and their associated conversational networks.
We developed a separate test dataset to compare against other state-of-the-art algorithms as well as to measure generalizability. The final test data was created by manually annotating 337 bot accounts focused on propaganda and other manipulation. Care was taken to ensure this test data did not overlap with any training data used by the models tested. The test dataset was balanced with 337 bot accounts and 337 human accounts.
For an evaluation metric we used the F1 score, defined as the harmonic mean of precision and recall. Many early bot-detection models had relatively high precision but low recall, inflating accuracy metrics. With low recall, these models underestimate the scale of the bot infestation, and of the disinformation problem in general. We found the F1 score to be an adequate balance, emphasizing both precision and recall. F1 scores for all models are provided in Table 4.
From the results, we see that the Botometer model has the highest precision of all comparison models, but lower recall and therefore a lower relative F1 score. The Debot algorithm identified only two of the bots: it has perfect precision, but very low recall and F1 score. The bothunter algorithms improve recall at a slight cost in precision, resulting in slightly higher F1 scores than the other models.
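The interplay between precision and recall above is easy to verify numerically. A minimal sketch of the F1 score as the harmonic mean, reproducing two rows of Table 4 from their reported precision and recall:

```python
def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall. High precision alone
    is not enough: near-perfect precision with very low recall still yields
    an F1 near zero, as with Debot."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.858, 0.377), 3))  # 0.524, matching Botometer's reported F1
print(round(f1(1.00, 0.006), 3))   # 0.012, matching Debot's reported F1
```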
The benchmark classification datasets were balanced, while the bot training data was not. To account for this, random oversampling of the data was performed during training. GraphHist was hand-tuned after the grid search used on the benchmark datasets, resulting in the final configuration given in Table 2. The training environment was the same as that used for the benchmark dataset experiments, except that the stopping criterion was given by the F1 score on the validation set. GraphHist has higher recall than all other models and precision slightly below bothunter's, resulting in the highest F1 score of all models tested.
Conclusions
In this paper, we have proposed a neural network architecture, GraphHist, for graph classification. GraphHist creates expressive node embeddings from GCNs in a similar manner to previous successful models, and uses a powerful CNN architecture to classify these embeddings in an end-to-end manner. While individual components of the model build on prior literature, the most significant innovation here is the binning module, which allows node embedding distributions to be approximated in a differentiable manner, such that convolutional architectures become applicable. The binning procedure was inspired by the analysis of large social networks, and as such has been applied to social network classification tasks. GraphHist advances the state-of-the-art performance on 4 of the 6 tested benchmark datasets, including all 3 of the social media benchmarks.
Lastly, GraphHist was applied to a new graph classification domain: bot detection. GraphHist demonstrated better generalization in this task than the current leading bot detection models. Graph classification methods have another huge advantage over classic approaches when it comes to bot detection: they are hard to guard against. These models are highly nonlinear, so it is not obvious what types of graphs a “puppetmaster” should try to construct to avoid detection. Even if an inconspicuous structure were known, the communication graph is far more challenging to manipulate than simple features like tweet frequency. While communication networks are more costly to collect, the popularization of graph classification approaches to bot detection should slow down the “cat and mouse” cycle we are currently experiencing.
Future extensions of this work may involve attaching binning modules to different embedding schemes, or classifying the resulting histograms with new methods. They could also include improvements in the bot domain, specifically by classifying other types of entities, such as trolls, which may have communication graphs differing from both normal actors and bots. More generally, future work could advocate for increased attention to social media datasets through the release of new social media benchmark datasets that reflect the scale and sparsity of networks seen in the wild.
Acknowledgments
This work was supported in part by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative Award N000140811186 and Award N000141812108, and the Center for Computational Analysis of Social and Organization Systems (CASOS). Thomas Magelinski was also supported by an ARCS Foundation scholarship. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR, ARL, DTRA, or the U.S. government.