Image as Data: Automated Visual Content Analysis for Political Science

10/03/2018 ∙ by Jungseock Joo, et al.

Image data provide unique information about political events, actors, and their interactions that is difficult to measure in, or simply absent from, text data. This article introduces a new class of automated methods, based on computer vision and deep learning, that can automatically analyze visual content data. Scholars have already recognized the importance of visual data, and a variety of large visual datasets have become available. The lack of scalable analytic methods, however, has prevented scholars from incorporating large scale image data in political analysis. This article aims to offer an in-depth overview of automated methods for visual content analysis and explains their usage and implementation. We further elaborate on how these methods and results can be validated and interpreted. We then discuss how these methods can contribute to the study of political communication, identity and politics, development, and conflict, by enabling a new set of research questions at scale.


1 Introduction: From Text to Image

In 1976, photographs of President Gerald Ford failing to husk a tamale may have cost him the presidential election. In 1988, Democratic front runner Gary Hart was felled by a photo of him with a mistress; the man who became the nominee, Michael Dukakis, by an awkward photo riding an M1 Abrams tank. In 2004, candidate John Kerry was photographed wind surfing, cementing his reputation as an effete elite. In 2010, video of a self-immolated fruit vendor spread throughout Tunisia, sparking the Arab Spring. Visual communication is a powerful component of politics, and new methods from computer vision and deep learning are enabling political scientists to better understand its power.

Political scientists have developed and applied advanced computational techniques to large scale text corpora, such as party manifestos (Laver, Benoit and Garry, 2003; Mikhaylov, Laver and Benoit, 2011), Congressional press releases (Grimmer, 2010; Grimmer and Stewart, 2013), news articles (Hopkins and King, 2010), and survey responses (Hobbs, 2017). These methods have advanced in response to the growing availability of textual data in quantities that overwhelm manual analysis. Automated content analysis methods have therefore been widely used by political scientists in the past decade, ranging from simple keyword-based methods to topic modeling, sentiment analysis, and opinion mining.

This article argues that we are at a similar juncture with visual data. While political scientists have long understood the importance of imagery in politics, analysis has consisted of the manual collection and annotation of data. The labor intensiveness of the collection process has limited the external validity of studies and prevented answering certain kinds of questions. Moreover, the rise of the internet, and social media in particular, provides political scientists with vastly more visual data. There is now more data than a single team can analyze manually, requiring the adoption of new methodologies. This paper introduces political scientists to these methodologies.

Visual data are characterized by several key features, distinct from text data, as summarized in Table 1. Due to these features, it is often challenging or impossible to apply machine learning techniques designed for text data to visual data. The most critical distinction between them is that an image is a two-dimensional array of pixels, and each individual pixel carries no semantic meaning, whereas the atomic elements of text data are words. A single word can provide a great deal of semantic information, e.g., “Trump” or “election,” and a simple string comparison operation suffices to access that information. In contrast, visual analysis must process a huge number of individually meaningless pixels to detect and identify people, objects, and events just to reach the starting point of text analysis. Recognizing elementary content, i.e., visual “words,” in an image is, however, extremely difficult. This technical difficulty has been the main obstacle to research inquiries involving quantitative analysis of visual data on a large scale.

Text                                        Image
• One dimensional: a sequence of words      • Two dimensional: an array of pixels
• Low uncertainty at word level             • High uncertainty at any level
• Small size; easy to transfer and store    • Bigger size
• Known dictionary                          • Unknown dictionary
• Language specific                         • Universal
• Elaborative                               • Intuitive and immediate
• More logical                              • More emotional

Table 1: Distinct Characteristics between Text and Image Data

This paper introduces recent breakthroughs in computer vision and machine learning and demonstrates how they can be applied to political science research. The new approach, colloquially known as deep learning, represents significant improvements in learning from big data, with the help of increased hardware capabilities, especially the prevalence of graphical processing units (GPUs). In addition to explaining how deep learning works, tasks for which it is well-suited, and training and validation, this paper suggests substantive research areas in which these techniques will prove useful. Finally, an analysis of protests in South Korea and Hong Kong using social media images is presented as a demonstration of this promising methodology.

2 Computer Vision and Deep Learning

2.1 Goals

Computer vision is an interdisciplinary branch of study crossing computer science, statistics, cognitive science, and psychology. The primary goal of computer vision is automated understanding of visual content, i.e., to replicate human vision abilities such as face recognition or object detection with computational models.

The human vision system is versatile, complicated, and not fully understood, and computer vision systems cannot simply reconstruct its mechanisms. The literature has therefore mostly focused on statistical inference and machine learning approaches that deal with noisy inputs and discover meaningful patterns. In practice, this pipeline usually consists of collecting a large amount of visual data, manually labeling the data, and training a model (estimating its learnable parameters) that best explains the observed data.

The insufficient reliability and accuracy of computer vision based methods was the primary factor limiting practical applications – including political analysis of visual content – until the field made a dramatic leap forward with advances in deep learning based approaches, which are elaborated on in the following sections.

2.2 Deep Learning and Hierarchical Representations

Deep learning refers to a class of machine learning methods that utilize hierarchical, multi-layered models.[1] In contrast to shallow or flat models, such as linear regression, in which output variables are computed directly from input variables, “deep” models employ repetitive structures with multiple layers[2] such that the final outputs of the model are obtained through a sequence of operations applied to the input data and intermediate results.

[1] Computer vision tries to solve visual problems with any kind of method, and deep learning offers efficient methods for any kind of data.

[2] A layer is a separate operation in a network; it will be further elaborated on shortly.

In machine learning, hierarchical model structures are commonly used, e.g., topic models such as LDA (Blei, Ng and Jordan, 2003) or PLSA (Hofmann, 1999). These models incorporate different levels of representations which capture structured and global information (e.g., topic) as well as local information (e.g., words) from input data.

Deep learning-based methods profit from the same hierarchical structure, but they employ a larger number of consecutive layers. These extra layers add the “deep” to the learning. Indeed, the success of deep learning is related to the depth of the models, as additional layers can achieve a higher expressiveness and capture more complex data distributions than what shallower models can (Delalleau and Bengio, 2011, Eldan and Shamir, 2016, Poggio et al., 2017). Furthermore, such complex internal structures and interactions are directly learned from data rather than manually defined. This is also advantageous when applied to complex data such as images.

2.3 Artificial Neural Network

While different models have been proposed in the deep learning literature, artificial deep neural networks (DNN) are the most popular branch of deep models and have been used in a number of areas including computer vision, audio processing, natural language processing, robotics, bioengineering, and medicine. This subsection describes a general neural network, and Section 2.4 discusses its variant, the convolutional neural network (CNN), which is commonly used in computer vision applications with two-dimensional inputs.

Artificial neural networks represent complex concepts, like the probability that an image contains a human face, as a system of connections between elementary nodes; the collection of nodes and connections is the neural network. Each node, also called a neuron or a unit, performs only simple computations and interacts with a few other nodes. Nevertheless, a network of a large number of nodes enables complex data modeling through their interactions.

Figure 1: An example computation in a node and its connected nodes.

Figure 1 shows an example configuration of a node and its connected nodes. Each node takes values from its input nodes and evaluates a weighted sum z = Σ_i w_i x_i, using the weights w_i associated with the incoming edges. Typically this value is transformed by a non-linear activation function, e.g., the sigmoid, and then passed to the output node.
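The node computation just described can be sketched in a few lines of Python; the function names and example values below are illustrative, not from the article.

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def node_output(inputs, weights):
    # A node evaluates a weighted sum of its inputs, then applies
    # a non-linear activation before passing the value onward.
    z = np.dot(weights, inputs)
    return sigmoid(z)

x = np.array([1.0, 0.5, -0.5])   # values arriving from the input nodes
w = np.array([0.2, -0.4, 0.1])   # weights on the incoming edges
print(node_output(x, w))         # a single activation in (0, 1)
```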

Figure 2: An example architecture of a neural network with an input layer, an output layer, and two hidden layers.

Figure 2 shows an example architecture of a neural network with several layers. Neural networks with multiple hidden layers are considered “deep.” A layer in a neural network is a set of nodes that takes inputs from the nodes in the previous layer and delivers outputs to the nodes in the next layer. When a network is visualized as in Figure 2, a column of nodes is a layer, and the number of columns is the number of layers. Inputs to the whole network therefore undergo several steps of transformations through layers until they reach the output layer of the network. The output layer is the network’s final layer, and in classification it contains one node per desired label.

Hidden layers are the intermediate layers between the input and output layers of a network; their true values are not observed during training. They play a critical role in modeling complex concepts by giving deeper networks their expressive power. Studies have shown, both experimentally and theoretically, that the more layers a neural network has, the better performance it can achieve (Eldan and Shamir, 2016; Poggio et al., 2017). A drawback of having too many layers is that such a model is more difficult to train, due to vanishing gradients (Bengio, Simard and Frasconi, 1994).[3]

[3] Networks are trained by a gradient descent method with backpropagation, and the gradients become smaller as they propagate back through more layers, which makes it very hard to update the parameters of the early layers.

Training a neural network is the task of finding optimal values for the edge weights (i.e., the weights in Figure 1). In most cases, the objective functions of neural networks are non-convex, and training is conducted by a gradient descent method with the backpropagation algorithm (LeCun et al., 1989), alternating between forward and backward passes. In the “forward” pass, given an input value, the network evaluates the output and computes the loss function based on the ground-truth output value. In the “backward” pass, the gradient of the loss function is propagated backward by the chain rule, and the model weights are updated accordingly.

2.4 Convolutional Neural Network

Figure 3 illustrates an example configuration of a typical convolutional neural network (CNN) for classification. LeCun et al. (1989) first proposed the CNN structure together with an efficient learning algorithm based on backpropagation. Since Krizhevsky, Sutskever and Hinton (2012) demonstrated its impressive performance, it has become the de facto standard method for image classification. CNNs have a repetitive structure with several important layers: the convolutional layer, the nonlinear layer (ReLU in Figure 3), the pooling layer, and the fully connected layer. This subsection describes each in turn.

Figure 3: An example of a convolutional neural network architecture.

2.4.1 Convolutional layer

A convolutional layer in a CNN performs a convolution operation on the input to the layer, either raw image data or the output of the previous layer. Convolution is widely used in signal processing for transforming or comparing sequence data. For example, one can reduce noise in a signal by convolving it with a Gaussian filter, which will smooth out the original signal by blending the original value at time t with the values at adjacent time points around t. Convolution is also used for template matching, where one wants to detect the specific part of a signal that matches a pre-defined template: this is the main purpose of convolutional layers in CNNs.

Formally, the convolution of two functions, f and g, is another function (f ∗ g) defined by

(f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ.   (1)

The second function, g, is called a kernel. Note that the kernel is flipped (g(t − τ)) by the definition of convolution. In the discrete case, this measures the sum of element-wise multiplications between the two functions, with one function being shifted over time:

(f ∗ g)(t) = Σ_τ f(τ) g(t − τ).   (2)
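NumPy’s built-in routines make the kernel flip in Eq. (2) concrete; the signal and kernel values below are illustrative. An asymmetric kernel is chosen so that the difference between convolution (flipped) and cross-correlation (unflipped) is visible.

```python
import numpy as np

signal = np.array([0.0, 1.0, 2.0, 3.0, 0.0])
kernel = np.array([1.0, 0.0, -1.0])  # asymmetric, so flipping matters

# np.convolve implements Eq. (2): the kernel is flipped before sliding.
conv = np.convolve(signal, kernel, mode="valid")

# np.correlate slides the kernel without flipping it.
corr = np.correlate(signal, kernel, mode="valid")

print(conv)  # identical to correlating with the flipped kernel
print(corr)
```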
Figure 4: Illustration of computations in a convolutional layer.

Each convolutional layer in a CNN uses a convolution operation to compare the input data with the kernels stored in the model. The kernel is also called a filter in the deep learning literature. In practice, most implementations do not flip the kernel, as flipping is unnecessary for the purposes of a CNN.[4] This gives a slightly modified definition of the convolution of a two-dimensional input I and a two-dimensional kernel K in a CNN:

S(i, j) = (I ∗ K)(i, j) = Σ_m Σ_n I(i + m, j + n) K(m, n),   (3)

where I(i, j) denotes the element in the i-th row and j-th column of the matrix I, and likewise for K. k_h and k_w denote the height and width of the kernel K, and CNNs typically use square kernels (k_h = k_w). The result of the convolution is another 2-D array, S, called a feature map; this is the output of the convolutional layer. The computation is performed at every location of the input map, and the result is stored in the same location in the output feature map (see Figure 4).

[4] The parameters will be learned in the same way irrespective of the flipping direction.
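A naive implementation of the unflipped 2-D operation described above fits in a short function; the function name and example arrays are mine. The example kernel responds to vertical edges, so the output map peaks at the boundary between the dark and bright halves of the toy image.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" unflipped convolution as used in CNNs: slide the kernel
    # over every location and sum the element-wise products.
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])   # responds to a dark-to-bright transition
print(conv2d(image, edge))       # the feature map S
```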

Most images are three-dimensional data, with two spatial dimensions and an additional color-channel dimension (e.g., RGB). Likewise, the intermediate feature maps in each layer are three-dimensional, as each feature map (or channel) corresponds to the response of a specific kernel (filter).[5] The entire set of weight parameters of each convolutional layer is therefore represented by a four-dimensional array of size (k_h × k_w × C_in × C_out), where C_in is the number of channels of the input (the number of channels in the previous convolutional layer) and C_out is the number of channels in the current layer. The feature map for each output channel c is obtained as follows:

S_c(i, j) = Σ_d Σ_m Σ_n I_d(i + m, j + n) K_{d,c}(m, n).   (4)

[5] For example, a layer may contain 10 filters describing the shapes of the digits 0–9. The resulting output (10 × W × H) will then have 10 feature maps (or 10 channels) of the same size, each recording the response for one digit.

Convolutional layers enable the following two key properties of convolutional neural networks.

Weight sharing. In Equation 4, the kernel K is invariant to the location (i, j) of each input node. The same kernel therefore applies to every location of the input map, and the connections between the two layers share the same weights (thus “weight sharing”). Weight sharing is efficient because an object may appear in any part of an image and its appearance is invariant to its placement. It also reduces the number of free parameters in the network and makes the network easier to train.
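A back-of-envelope count shows how much weight sharing saves; the input size and kernel size below are illustrative, not taken from the article.

```python
# Connecting a 224 x 224 single-channel input to an output map
# of the same spatial size.
h, w = 224, 224

# Dense (fully connected) alternative: every input node connects
# to every output node.
fc_params = (h * w) * (h * w)   # ~2.5 billion weights

# Convolutional layer with one shared 3x3 kernel: the same 9 weights
# are reused at every spatial location.
conv_params = 3 * 3

print(fc_params, conv_params)
```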

Local and sparse connectivity. Convolutional layers in a CNN achieve sparse connectivity by using a kernel much smaller than the input map (e.g., 3 × 3). Each node in a convolutional layer is connected only to a small number of nodes in the previous layer, i.e., a local region. A small kernel suffices because adjacent pixels and subregions of an image are more highly correlated than distant regions.

2.4.2 Nonlinear Layer

Each convolutional layer is typically followed by a nonlinear activation function applied to each element of the feature map. One of the most common activation functions is the rectified linear unit (ReLU):

f(x) = max(0, x).   (5)

This function simply replaces negative feature-map values with 0 and keeps positive values unchanged. Other functions, such as the sigmoid or the hyperbolic tangent, can also be used; the main advantage of the ReLU is that it is much faster to compute.
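The element-wise rectification in Eq. (5) is a one-liner in NumPy; the example feature map is made up.

```python
import numpy as np

def relu(x):
    # Keep positive values, zero out negatives, element by element.
    return np.maximum(0.0, x)

feature_map = np.array([[-1.5, 0.3],
                        [ 2.0, -0.7]])
print(relu(feature_map))  # negatives become 0, positives pass through
```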

Nonlinearity is important in visual models, as it allows them to capture complex data distributions. Nonlinear layers are essential in deep networks in particular, because consecutive layers of linear operations would simply collapse into a single linear layer; without nonlinear functions, there would be no benefit to adding more layers to the network.

2.4.3 Pooling layer

Pooling is another important operation in convolutional neural networks since it reduces computational complexity. A pooling layer takes an input feature map from the previous layer and generates a transformed map whose size differs from its input size. Most images and feature maps in a CNN are spatially correlated: values in closer pixels or nodes tend to be more similar because most scenes are continuous. Instead of keeping similar values redundantly from adjacent locations, one can simply choose the maximum response (or the average value) in each spatial neighborhood (pooling window) to represent the area.

Figure 5: Illustration of a max-pooling operation with a 2 × 2 window. For each window, only the maximum value is retained.

Specifically, a max-pooling layer compares the values in each sub-window (e.g., a window of 2 × 2 pixels) of the input feature map and chooses the maximum value (see Figure 5). Only these maximum values are stored in the output map; the other values are disregarded. Removing non-maximum values also means that the resulting feature map is smaller than the input map. For example, an input image of size 256 × 256 will be downsized to 16 × 16 after applying four max-pooling layers of size 2 × 2.

Pooling not only reduces the number of free trainable parameters but also helps the network achieve translation invariance, an important property for computer vision systems. One main difficulty in visual learning is the high geometric variation of objects and parts arising from part movements and viewpoint changes. A robust computer vision system needs to handle such geometric variation, and pooling operations help by disregarding small spatial perturbations within the pooling window.
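Non-overlapping max pooling can be written compactly with a reshape trick; this is a minimal sketch that assumes the input height and width are divisible by the window size.

```python
import numpy as np

def max_pool(feature_map, size=2):
    # Split the map into non-overlapping size x size blocks,
    # then keep only the maximum of each block.
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1.0, 3.0, 0.0, 2.0],
                 [4.0, 2.0, 1.0, 0.0],
                 [0.0, 1.0, 5.0, 6.0],
                 [2.0, 0.0, 7.0, 1.0]])
print(max_pool(fmap))  # 4x4 input -> 2x2 output
```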

2.4.4 Fully connected layer

CNN architectures used for classification include one or a few fully connected layers at the last stage. By definition, a fully connected layer densely connects all the nodes from the previous layer to all the nodes in the current layer. A convolutional layer encodes local information tied to specific image subregions, distributed over a two-dimensional map, through sparse connectivity (i.e., nodes are selectively connected in a convolutional layer). A fully connected layer collects the local evidence from all the subregions, captured in the prior convolutional layers, and outputs the overall likelihood of the visual concept which the whole network attempts to model.

In the case of classification, the fully connected layers in a CNN are usually followed by a softmax function, which normalizes the final classification scores over categories. See Section 3.1 for details.

2.5 Why Does It Work Well?

Artificial neural networks have a long history in machine learning and computer vision and became extremely popular after Krizhevsky, Sutskever and Hinton (2012) demonstrated impressive image classification performance on a benchmark dataset, ImageNet (Deng et al., 2009).

Figure 6: Comparing Deep Learning to Previous Computer Vision Methods

Traditional machine learning methods use a two-step process, as shown in Figure 6. Given raw input data (e.g., images), these methods first extract features (e.g., an edge histogram) using a fixed, handcrafted feature extractor.[6] These features should capture the most important cues in the raw data, which are then exploited by an off-the-shelf classifier in the second step.

[6] “Handcrafting” means that a researcher manually designs and defines the feature extraction function based on their insight.

In contrast, deep learning based methods learn their representations directly from data, without separate, handcrafted feature extraction. These methods take a data-driven approach to feature learning and train an integrated model that automatically learns and captures both low- and high-level representations of the data (LeCun et al., 1998). This approach is advantageous because the learning algorithm can discover many subtle features specific to the given task. In other words, the features in deep learning are optimized for the task during training, whereas the handcrafted features of traditional methods are fixed and can become bottlenecks. It is also very common for all trainable parameters in a neural network to be optimized jointly, which is called end-to-end training.

In addition, deep models contain a large number of layers, using the representations in lower layers as building blocks for the representations in higher layers. This idea of compositional and hierarchical modeling of visual data has long been a key principle in computer vision and pattern recognition (Bienenstock, Geman and Potter, 1997; Felzenszwalb et al., 2010). Nevertheless, deep learning dramatically extends the number of layers possible through an effective learning algorithm (LeCun et al., 1989).

3 Tasks in Computer Vision

This section describes common tasks that computer vision and deep learning techniques can solve. Following this section, Section 4 explains how to build and evaluate a CNN for these tasks and Section 5 discusses how these methods and tasks can be applied to political analysis.

3.1 Image Classification

One of the most well-known research problems in computer vision is image classification. Given an input image x, the goal of image classification is to assign a label y from a predefined label set Y, based on the image content:

y* = argmax_{y ∈ Y} P(y | x).   (6)

In the case of binary classification, Y = {positive, negative}. In general, Y may contain any number of possible labels, e.g., generic object categories. The posterior probability of each category is computed from a given input image, and the classifier chooses the category with the highest output score.

Figure 7: Example results of image classification with the confidence scores computed from a CNN. Red color indicates the correct category and blue color indicates the incorrect categories.

One famous example of image classification is the ImageNet Challenge (Deng et al., 2009), which is a public competition among diverse classification systems which are trained on and tested by identical image data and annotations. In this data, each image has been manually labeled with one category out of 1,000 categories. The categorization is object-centric and the correct label corresponds to the main object in each image. These categories include animals (e.g., bear, horse), vehicles (e.g., school bus, scooters), musical instruments (e.g., piano, acoustic guitar), and so on.

The CNN architecture depicted in Figure 3 can be used for image classification. The softmax function is commonly used in multiclass classification to normalize the output scores over multiple categories such that the final scores sum to 1, forming a probability distribution. Suppose that the last fully connected layer outputs a vector z = (z_1, …, z_K), where z_i is the raw output score, before normalization, for the i-th class out of K classes. The final score for class i is obtained as follows:

p_i = exp(z_i) / Σ_{j=1}^{K} exp(z_j).   (7)
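The softmax normalization of Eq. (7) is straightforward to sketch; the raw scores below are invented, and subtracting the maximum score is a standard numerical-stability trick rather than part of the definition.

```python
import numpy as np

def softmax(z):
    # Exponentiate the raw scores and normalize so they sum to 1.
    # Subtracting max(z) avoids overflow without changing the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw outputs of the last layer
probs = softmax(scores)
print(probs, probs.sum())           # a valid probability distribution
```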

In the case of multilabel classification, each class is classified separately, and an image may be assigned more than one positive class. For instance, an image may contain both a horse and a dog; that image will be labeled with both classes in multilabel classification. In this case, the softmax function is replaced with a sigmoid function applied to each output node separately, because there is no normalization over classes.

3.2 Object Detection

The goal of object detection is to localize object instances and classify their categories in a given input image. The output of object detection is a set of detected object instances including their locations and categories. Figure 8 shows example results of object detection with detection scores.

Object detection is considered a more complex problem than image classification because the model must determine both the categories of object instances and their locations. In practice, many object detection systems use a two-stage procedure. First, the system generates a number of generic object “proposals” from an input image (Uijlings et al., 2013). These proposals are image subregions which the system believes are likely to capture an object instance, regardless of its category. Second, an image classification step is applied to each object proposal to determine its category or reject it as a background region. In more recent methods (Ren et al., 2017), these steps are integrated within one model, yielding better accuracy and computational efficiency.

An object location is represented by a rectangular bounding box, (x, y, w, h), indicating the coordinates and the size of the box. The bounding box is the rectangular area of minimum size that covers all the pixels the object occupies in the image.

Figure 8: Example results of object detection.

3.3 Face and Person

The human face has received enormous research attention as a special domain in computer vision since the 1970s, for two main reasons. First, it has many useful applications, e.g., security systems or survey tools. Second, face images are relatively easy to handle compared to other objects, because the appearance of the human face is consistent across individuals yet distinct from other objects. These properties motivated early innovative explorations of the topic, such as automated feature extraction (Kanade, 1977), feature learning with neural networks (Fleming and Cottrell, 1990), and classification based on statistical analysis of data (Belhumeur, Hespanha and Kriegman, 1997). Existing work on this topic can be categorized into three areas: face detection, face recognition, and face attribute classification.

Face Detection. Face detection refers to finding the location of every face in an input image. It can be posed as a binary classification problem in which the classifier must determine whether each image sub-region contains a human face. As in the general object detection task, there are mainly two approaches: the sliding window and the object-proposal-based search. The sliding window is an exhaustive search algorithm that simply examines every possible “window,” i.e., every 2-D rectangular sub-region of the image, extracting a feature from each window and classifying its label based on that feature description. It achieves the best recall but is inefficient because it must examine every possible window. In contrast, proposal-based search is a selective procedure which first selects a subset of windows by rejecting a large number of easy negatives, e.g., black background, and performs the subsequent classification only for the selected windows, i.e., the object proposals.

Viola and Jones (2004) proposed the most famous face detection algorithm, commonly known as the Viola–Jones detector. This system employs “Haar-like” features, which measure local contrast in image intensity, and evaluates a weighted sum of many local feature values. Adaptive Boosting (AdaBoost) is used to select the best features from a larger feature pool and to determine the optimal weights: it combines a number of “weak” classifiers into a strong one by iteratively adding the best weak classifier at each round and adaptively adjusting the weight of each sample in the training data.

Face Recognition and Verification. Face recognition is the task of classifying the identity of a person from a facial image; face verification is the task of comparing two facial images and judging whether they show the same person. Both are usually based on the same face model, which takes an input facial image and computes a facial feature from it; face recognition can build on face verification by comparing one input facial image against every face in a database of people with known identities.
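The relationship between verification and recognition can be sketched with embedding vectors. This is a toy illustration: the embeddings, identities, and similarity threshold are all invented, and in a real system the embeddings would come from a trained face CNN.

```python
import numpy as np

def cosine_similarity(a, b):
    # Compare two face embeddings by the angle between them.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def verify(emb1, emb2, threshold=0.8):
    # Verification: same person if the embeddings are similar enough.
    # The threshold here is purely illustrative.
    return cosine_similarity(emb1, emb2) >= threshold

def recognize(query, database):
    # Recognition as repeated verification: return the identity whose
    # stored embedding is most similar to the query.
    return max(database, key=lambda name: cosine_similarity(query, database[name]))

db = {"alice": np.array([1.0, 0.0, 0.2]),
      "bob":   np.array([0.0, 1.0, 0.1])}
query = np.array([0.9, 0.1, 0.2])
print(recognize(query, db))
```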

Face recognition models are either part-based or holistic. In part based approaches, different facial regions, such as forehead or mouth, are detected and modeled separately, and the local features from multiple regions are combined for final classification (Kumar et al., 2011). In holistic approaches, the appearance of a whole facial region is directly modeled (Belhumeur, Hespanha and Kriegman, 1997).

Most recent approaches to face recognition are based on convolutional neural networks, in which local parts are not explicitly defined but are implicitly captured in the model hierarchy. A recent study by Facebook (Taigman et al., 2014) reported that their CNN-based model is as accurate as human annotators at face verification after being trained on 4.4 million labeled face images obtained from their users.

Human Attribute Classification. A face provides clues for recognizing demographic variables (e.g., gender, race, age), emotional states, expressions, and actions, commonly referred to as human attributes in computer vision. Figure 9 shows two example results of face recognition and gender and race classification from facial appearance. Large scale datasets of facial images and attribute annotations are also available (Liu et al., 2015) and enable training a deep CNN with a similar structure to an image classification model.

Figure 9: Example results of face detection, recognition, and attribute classification.

4 Training and Validation

This section explains how one develops an image classification system using CNN, focusing on model training and validation. We will discuss practical issues in training a model and introduce tools to diagnose the model performance.

4.1 Training

As in other machine learning methods, training means estimating, from the training data, the optimal values of the model parameters, i.e., those that minimize a target loss function such as an error function or the learning objective of the model. In CNNs, these parameters include the kernel weights in the convolutional layers (Section 2.4.1) and the weights in the fully connected layers (Section 2.4.4).

There exist many loss functions widely used in training a CNN. One can select a specific loss function or a combination of multiple loss functions depending on the task and the output dimension. In image classification, for example, the most popular loss function is cross-entropy loss, also called log loss. In a binary classification task, the binary cross-entropy loss is:

ℓ(y, ŷ) = −[ y log ŷ + (1 − y) log(1 − ŷ) ]    (8)

where y ∈ {0, 1} is the true label for the example and ŷ ∈ (0, 1) is the output value computed from the model. In training, all the model parameters (model weights) are optimized to minimize this loss function across the entire training set.
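The binary cross-entropy loss in Equation 8 can be sketched in a few lines of Python (a minimal, loop-based illustration, not the batched, vectorized form used in practice):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy averaged over a batch.
    y_true: 0/1 labels; y_pred: model outputs in (0, 1)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

Confident predictions on the correct class yield a loss near zero, while confident mistakes are penalized heavily, which is what drives the gradient updates described below.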

This optimization is performed by stochastic gradient descent with the backpropagation algorithm (LeCun et al., 1989), alternating between forward and backward passes. In the forward pass, given an input (an image), the network evaluates the model outputs (classification results) and computes the loss function based on the ground truth output labels. In the backward pass, the gradient of the loss function is propagated backward by the chain rule and the model weights are updated accordingly.

Training a CNN with many layers can take several weeks to several months, even with a GPU. Stochastic gradient descent is an iterative procedure and updates the model parameters incrementally through many iterations. Training deeper models requires a larger training set and more iterations than shallower models.
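The forward/backward alternation can be made concrete with a minimal SGD loop for a single sigmoid unit, a one-layer stand-in for a CNN (the function name and data layout are illustrative, not from the original paper):

```python
import math
import random

def sgd_logistic(data, lr=0.5, epochs=200, seed=0):
    """Train a single sigmoid unit on (x, y) pairs with SGD,
    minimizing binary cross-entropy. Each iteration does a
    forward pass (prediction) and a backward pass (gradient)."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            # forward pass: sigmoid(w·x + b)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            # backward pass: for sigmoid + cross-entropy, dL/dz = p - y
            g = p - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b
```

A real CNN repeats the same two passes, only with millions of parameters and the chain rule applied through many layers.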

Fine Tuning. Fine tuning is a popular technique that can accelerate the training of neural networks. When training a network from scratch, all the weight values are typically initialized to small random values and then gradually updated. Instead of random values, one can initialize the weights from an existing model that was already fully trained on another dataset and begin a new training process from there. This procedure is called fine tuning, as an existing model is tuned to a new task; the existing model is called a pre-trained model. For example, one may use a pre-trained face detection model to initialize the weight values of a new model for person detection.

The underlying idea is that CNNs, especially in their lower layers, capture features that transfer and generalize to related tasks because these features can be shared across tasks, i.e., transfer learning. In visual learning, these sharable representations include elementary features such as edges, color, or simple textures (Figure 10). Since these features commonly apply to many visual tasks, one can simply reuse what has already been learned from a large amount of training data and refine the model on the new data.

Fine tuning is also helpful when the training data is insufficient to train a deep network. Training a deep model requires a huge amount of training data when starting from scratch, but such a large dataset may not be available. In the case of fine tuning, the pre-trained model was already trained on a large dataset and can provide a robust starting point for the new task.
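The fine-tuning initialization step can be sketched as follows. The dictionary layout here is hypothetical, used only to illustrate the idea; in practice one would load a pre-trained model through a framework such as PyTorch or TensorFlow:

```python
import random

def fine_tune_init(pretrained, n_new_outputs, seed=0):
    """Initialize a new classifier from a pre-trained one (sketch).
    Lower, transferable layers are copied; the task-specific
    output layer is re-initialized for the new label set."""
    rng = random.Random(seed)
    return {
        # reuse generic features (edges, colors, textures)
        "conv_layers": [list(layer) for layer in pretrained["conv_layers"]],
        # fresh, randomly initialized output layer for the new task
        "output": [[rng.uniform(-0.01, 0.01)
                    for _ in range(pretrained["feature_dim"])]
                   for _ in range(n_new_outputs)],
        "feature_dim": pretrained["feature_dim"],
    }
```

Training then resumes from these weights, which typically converges far faster than starting from random values.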

4.2 Validation and Interpretation

Deep neural networks, despite their remarkable performance, often receive criticism due to the lack of interpretability of their results and internal mechanisms compared to simple models with a handful of explanatory variables (linear classifiers, for example). A deep model typically comprises millions of weight parameters (edges between nodes), and it is impossible to identify their meanings or roles from the output.

Once a model is trained, one can measure its generalization error using a validation dataset that does not overlap with the training set. As in other classification problems, the performance (e.g., accuracy) of a CNN-based classifier can be measured by several metrics, including raw accuracy, precision and recall, average precision, and many others. These measures, however, do not explain how the model achieves its accuracy.
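Precision and recall can be computed directly from predicted and true binary labels; a minimal sketch:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = positive class).
    Precision: of the predicted positives, how many are correct.
    Recall: of the true positives, how many were found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Reporting both matters for political data, where positive classes (e.g., images containing police) are often rare and raw accuracy alone is misleading.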

To make the result of deep models more explainable and interpretable, several methods have been proposed.

4.2.1 Language-based Interpretation

As humans use language to explain concepts, one can develop a joint model that incorporates visual and textual data such that the text explains its visual counterpart. For example, image captioning generates a sentence describing the visual content of an input image (Kiros, Salakhutdinov and Zemel, 2014), and related models generate text-based justifications explaining why the model produced its outputs (Hendricks et al., 2016).

Another line of research on text-based interpretation of visual learning utilizes questioning and answering (Antol et al., 2015). Such methods take both an image and a text question as input and output a text-based answer to the input question. This allows a more flexible interface between a user and a model than a traditional classification task, which essentially asks a fixed question to the model.

The key limitation of these methods is that they do not generalize: they are unable to deal with novel content or questions. The models are trained on image-text pairs and simply reproduce the mapping learned from the training data. When the model is given a novel question which was not given during training, it may not understand the meaning of the question.

4.2.2 Visualization

Another way of understanding how a deep network produces its output is through visualizations. Since convolutional neural networks are largely used for visual learning from images, it is especially effective to visually illustrate their mechanism so that the user can understand it better. We introduce the two most popular approaches: feature-based and region-based.

Figure 10: Visualization of feature activations at different layers in a CNN by a deconvolutional network (Zeiler and Fergus, 2014). For each layer, the left panel shows groups of similar image patches which produce high activation values for a specific node in the layer. The right panel shows corresponding feature visualizations.

Figure 10 provides examples of the feature-based approach. This approach uses a “deconvolutional” network (Zeiler and Fergus, 2014), which is akin to a reverse CNN. Unlike a CNN, which collects feature activations at multiple layers to make a final output, a deconvolutional network redistributes the contributions of each feature and projects their importance back to the input pixel space. Figure 10 shows that visually similar image patches containing the same image feature (left sub-panel) trigger high activation scores in the same node of the network that captures that feature. The image feature can be visually identified from the feature activation maps (right sub-panel). Moreover, this visualization confirms that the lower layers in a network respond to low-level visual similarity such as color or texture, while the higher layers tend to capture semantic similarity at the object category level.

Figure 11: Visualization of region importance to visual concept classification by Grad-CAM (Selvaraju et al., 2017). Examples are taken from a recent protest image analysis (Won, Steinert-Threlkeld and Joo, 2017). Important regions are marked by red color.

Figure 11 shows the region-based approach, which highlights the regions that contribute most to the model output (Selvaraju et al., 2017). This example visualizes region importance for a CNN trained for protest image classification (Won, Steinert-Threlkeld and Joo, 2017). The network classifies whether an image contains protesters, police, or fire and estimates the level of perceived violence; the Grad-CAM method (Selvaraju et al., 2017) evaluates a weighted sum of feature response maps from the convolutional layers to highlight important regions by color. For example, Figure 11 shows that abstract concepts, such as protest and violence, are classified from less abstract concepts such as individuals holding signs or the presence of smoke. Visualization helps users understand how complex classification is internally performed.

5 Applications in Political Science Research

This section describes existing research in political science relying on visual data. Since automated visual analysis is still very new, few works in political science have adopted these methods. We therefore also discuss existing manual analyses of visual data and explain the potential utility of automated methods in different domains.

5.1 Political Behavior

Visual data can identify demographic information about a person, such as gender, race, age group, or other features, based on facial appearance. A few recent studies have used the profile images of individual social media users to infer such demographic information. Wang, Li and Luo (2016) analyzed the demographic composition of the followers of Donald Trump and Hillary Clinton on Twitter during the 2016 U.S. presidential election, using profile images. A similar approach was used to analyze promoter demographics in various social and political campaigns on Twitter (Chakraborty et al., 2017). Both studies used the same commercial software to classify demographic attributes of people from photographs (Face++, 2018).

Using deep learning on images holds much promise in extending our knowledge of the relationship between the demographics of protesters and policy change, an area that has received little direct testing (Fisher et al., 2005). For example, a 1986 study of 1964 Freedom Summer participants uses survey data of college participants collected two decades after the events (McAdam, 1986). A study of participants in the 1989 East Germany revolution asks respondents their age, gender, marital status, number of children, and education (Opp and Gern, 1993). Others surveyed protesters in Egypt’s Tahrir Square to measure how social media use varied by gender, age, and education (Tufekci and Wilson, 2012). A study of participants in Ukraine’s 2004 Orange Revolution finds that participants came from diverse ideological backgrounds, with different demographic categories having opposing effects on the probability of protesting (Beissinger, 2013).

These studies each focus on one protest event, making it difficult to generalize about how gender, age, and race affect protest size. For example, Beissinger (2013) and Opp and Gern (1993) find that older individuals are less likely to protest, but Tufekci and Wilson (2012) find the opposite. Opp and Gern (1993) find men less likely to participate than women, again the opposite of Tufekci and Wilson (2012) and Beissinger (2013). While these conflicting findings could be due to various factors (survey design, political regime effects, and differing economic opportunities, to name a few), another possibility is that scholars have not been able to pool demographic correlates across protests in their models. Because of the difficulty of measuring demographics, research focuses on one or two protest waves, uses surveys, and usually gathers data after the fact. The ability to generate demographic data about protesters using deep learning and computer vision may permit a more stable identification of which individual features correlate with protest participation.

5.2 Political Communication

Political communication studies the interaction and communication among politicians, media, and the public across speeches, public debates, newspaper articles, and television broadcasts. Recent advances in technology, especially the rise of social media, have prompted the creation and transmission of an enormous amount of political communication data. A substantial portion of such data is accessible to researchers, and scholars have developed automated machine learning-based techniques for analysis of political text data (Grimmer and Stewart, 2013). These techniques allow researchers to discover latent topic structures from an unstructured document set (Grimmer, 2010, Roberts et al., 2014) or measure opinions or sentiments of authors from text (Tumasjan et al., 2010, O’Connor et al., 2010).

Political scientists have also paid close attention to the visual dimension of political communication (Gilliam Jr and Iyengar, 2000, Barrett and Barrington, 2005, Rosenberg et al., 1986, Grabe and Bucy, 2009, Barnhurst and Steele, 1997, Hansen, 2015, Schill, 2012). Most people access news through multimodal media; even newspapers devote significant space to photographs. Presidential debates, for instance, may seem to be events mainly about verbal exchanges between candidates. They are, however, televised, with many visual cues that communicate the candidates’ emotions and tensions to viewers (Shah et al., 2016). Indeed, several studies argue that the nonverbal cues and visual exposure of politicians in media may encode their emotions and invoke voter reactions (Sullivan and Masters, 1988, Grabe and Bucy, 2009, McHugo et al., 1985).

The difficulty of automatically accessing, collecting, processing, and analyzing visual data has been the main bottleneck preventing existing manual studies from scaling. To overcome this difficulty, recent studies use computer vision models to analyze characteristics pertaining to the political dimensions of actors or events portrayed in large collections of images. Joo et al. (2014) demonstrated that an automated method can infer hidden communicative intents from photographs of politicians, which highlight certain personal traits to visually persuade the audience. The study detects scene components such as facial expressions and surrounding objects and measures the visual favorability of politicians, showing that visual favorability automatically estimated from newspaper photographs positively correlates with public opinion of the politician.

Computer vision methods have also shown the potential effects of politicians’ facial appearance on voters’ trait judgments and election outcomes. Personality inference from facial appearance is a well-studied topic in psychology (Zebrowitz and Montepare, 2008), and political scientists have attempted to explain public responses and election outcomes based on the physical appearance of political leaders, such as their visually inferred competence (Todorov et al., 2005). Automated models have been used to extract visual features from facial images to predict subjective trait judgments on dimensions such as intelligence or trustworthiness (Rojas et al., 2011, Vernon et al., 2014). It has also been shown that automatically inferred facial traits predict actual election outcomes (Joo, Steen and Zhu, 2015).

Computational approaches offer several important advantages over manual investigations. First, a manual study requires a large pool of participants for reliable coding, which makes experiments expensive and time consuming. Second, since the study depends on participants’ subjective evaluation, it often yields inconsistent, conflicting results. Third, manual studies cannot exclude participants’ prior exposure and knowledge about politicians. In contrast, computational approaches are inexpensive to execute, entirely reproducible, and transferable from ordinary, unknown people to prominent politicians.

5.3 Development

Computer vision techniques also hold promise in broadening and refining measures of development by using remote sensing data. “Remote sensing” refers to the passive gathering of auditory or visual data about a place using tools the researcher does not control directly. Common examples are detecting animals using movement sensors or measuring forest destruction using spectral (image) data. Remote sensing data is of use to many research questions in political science, especially those that involve socioeconomic indicators.

Spectral data can measure different features of cities, such as the distribution of building types, as well as land use in rural areas (Jensen and Cowen, 1999). Imagery with a resolution of one meter or smaller can provide data on socioeconomic characteristics as they vary by neighborhood, allowing for frequent census-like data creation, an ability especially useful in countries with no, or irregular, census (Tapiador et al., 2011). For agricultural areas, it can measure changes in rainfall and crop growth, proximate measures of income for many countries (Toté et al., 2015). Since income shocks are a precursor to civil conflict, data that accurately measure subnational changes in income could act as an early warning system (Hsiang, Burke and Miguel, 2013).

A canonical example of remote sensing is using light emissions to measure wealth, which works even for small geographic units in poor countries (Weidmann and Schutte, 2017). Convolutional neural networks have recently been applied to publicly available satellite imagery to measure household consumption and wealth, verified using household survey data (Jean et al., 2016). The same tools and data can further help measure various socioeconomic characteristics at the household, neighborhood, or city level.

Image data also provide access to temporal changes in local regions. For example, a model that accurately recovers built features of towns and cities could provide insight into how institutions affect recovery from natural disasters. If images exist of the same area immediately before and after a natural disaster, the physical and geographic extent of damage as well as the speed and amount of recovery may be measurable. These dependent variables may then be related to various institutional independent ones. Recovery may occur more quickly in democracies than non-democracies or in countries with free media, for example. In democracies, subnational variation could depend on whether a disaster strikes a powerful politician’s district or if there is an impending election.

More broadly, it should be possible to measure socioeconomic variables using photographs of places taken by people. Manual analysis of Google Street View (GSV) imagery shows that photographs of cities taken at random times as Google’s vehicles map them recovers public health data in the United States (Odgers et al., 2012, Wilson et al., 2012). A model trained on GSV images recovers income by block in New York City (Glaeser et al., 2018), and a deep learning model of cars in GSV images can measure income, race, and education at the precinct level (Gebru et al., 2017).

5.4 Subnational Conflict

A persistent debate in the civil war and protest literatures concerns the extent to which participation is driven by economic (“greed”) or political (“grievance”) motivations (Collier and Hoeffler, 2004, Kern, 2011). These two concepts are notoriously difficult to operationalize, and researchers rely on measures such as the availability of natural resources (greed) or aggregate economic statistics such as gross domestic product (economic grievance). Because studies rely on third parties’ reports of these variables, and because generating those measures ranges from difficult to almost impossible, variables are aggregated geographically to the state or country level and temporally to the yearly level.

Using computational approaches, greed and grievance can be measured with more geographic and temporal precision. For example, greed is measurable using the precise outlines of diamond mines, virgin forests, or oil deposits, which can be observed from satellite data or resource maps (Hunziker and Cederman, 2017). Grievance should be reflected in city-level variation in economic activity measurable using light emissions (Weidmann and Schutte, 2017). While developing deep learning models on a large corpus of images from cities or countries with very different built and natural environments is not trivial, doing so is possible with public data. Moreover, it might be the only feasible way of acquiring these data in countries without detailed socioeconomic statistics. This approach is likely to be even more useful for studying the antecedents of civil wars, since they occur largely in poor countries (Fearon and Laitin, 2003). Images can also be used to measure state capacity. Humans-as-sensors can photograph specific objects, such as produce in a market, road conditions, or school conditions, using smart phones (Premise Data, 2017). These images can provide disaggregated information about a state’s ability to repress intranational conflict, as well as the ability of rebels to attack the state. Maps are also images, and digitizing them can provide historical data on state capacity, especially power projection, that current measures, such as GDP, may not capture (Hunziker, Müller-Crepon and Cederman, 2018).

Applying deep learning models to images would be especially useful to scholars researching state repression and protest. For example, research into the repression-dissent puzzle consistently finds inconsistent results. Repression may have no effect on dissent (Ritter and Conrad, 2016), increase it (Francisco, 2004, Steinert-Threlkeld, 2017), decrease it (Ferrara, 2003), or have time-varying effects (Opp and Roehl, 1990, Rasler, 1996). Data on the severity of violence, the size of crowds, and the initiator of violence that vary by city and day could provide a more definitive answer to these dynamics. For an example of what these data would look like, see Won, Steinert-Threlkeld and Joo (2017) and Steinert-Threlkeld, Won and Joo (2018). For examples of work that automatically codes protest data from images, see also Torres (2018) and Zhang and Pan (2018). (Cowart, Saunders and Blackstone (2016) manually evaluates images of Black Lives Matter protests that newspapers and broadcast media tweeted. As of this writing, Won, Steinert-Threlkeld and Joo (2017) is the only one of these that has been published.)

Finally, image data can generate data useful for interstate conflict research as well. For example, convolutional neural networks can detect fixed surface-to-air missile launchers eighty times faster than humans with the same accuracy (Marcum et al., 2017).

6 Demonstration: Protest Analysis with Images

As a demonstration of the methods discussed in this paper, we present an example analysis showing the potential of images as data to study details of protests that have eluded text-based datasets. It focuses on measuring crowd size, violence, and the demographic composition of protesters, which is fundamentally not possible with text-generated event data.

Specifically, we focus on measuring various features of two protests. The first is the 2016-2017 protests in South Korea against President Park Geun-hye. Revelations in October 2016 that President Park had received counsel from a Rasputin-like figure triggered large protests, and those protests persisted through her removal from office on March 10, 2017. The second is the 2014 Hong Kong protests against changes to Hong Kong’s electoral system seen as contradicting the “One Country, Two Systems” relationship with China.

We have developed a pipeline that identifies faces in a photo and estimates each face’s sex (male or female) and race (white, black, or Asian; because the majority of the population in South Korea and Hong Kong is Asian, we omit race from the analysis). It also identifies whether an image contains a child’s face. We can also measure whether a protest image contains police or fire, whether protesters are holding signs, and the amount of violence in a protest image. Section 6.1 shows that computer vision and deep learning applied to geolocated protest images shared on Twitter accurately recover protest size in South Korea and the United States. (We do not analyze the United States in more detail because its protests did not last more than one day; we do not analyze Hong Kong’s size because we could find no other source of size information.) Section 6.2 shows that this approach can measure daily changes in the composition of protesters and in violence at protests; these daily measures can then be used as variables in a regression.
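The aggregation step from per-face classifier outputs to daily protest measures can be sketched as follows. The record schema here is hypothetical, invented for illustration; the actual pipeline's data structures are not described in the paper:

```python
from collections import defaultdict

def daily_protest_measures(records):
    """Aggregate per-face classifier outputs into daily measures:
    protest size (face count) and percent female.
    `records` is a list of dicts like
    {"date": "2016-11-05", "sex": "female"} (hypothetical schema)."""
    counts = defaultdict(int)
    female = defaultdict(int)
    for r in records:
        counts[r["date"]] += 1
        if r["sex"] == "female":
            female[r["date"]] += 1
    return {d: {"size": counts[d], "pct_female": female[d] / counts[d]}
            for d in counts}
```

Equivalent group-by operations produce the daily series plotted in Figures 14 and 15.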

6.1 Measuring Crowd Size

Reliably measuring crowd size is an open problem, and estimates of protest crowd size are not consistently available. Estimates that do exist come from state authorities or protest organizers, and systematic academic datasets record those estimates only when newspaper articles provide them. The only academic dataset that provides crowd size estimates for a protest in our study is the Crowd Counting Consortium (Chenoweth and Pressman, 2017), and it documents only United States protests. The other major dataset that provides crowd size estimates is ACLED, but it focuses on Africa, the Middle East, and South Asia and is updated with a delay.

We generate crowd size estimates by counting all faces in protest photos per day for a given location. Other studies have found that activity on Twitter correlates with verified estimates of crowd size for airports, stadiums, and protests (Botta, Moat and Preis, 2015). Figure 12 shows that our measure of protest size correlates very well with external protest estimates (.76 when logged) for the 2017 United States Women’s March. Figure 13 does the same for South Korea’s 2016-2017 protests against President Park Geun-hye. Using crowd size estimates provided by police and activists, as reported on Wikipedia, Figure 13 shows that the same procedure works in South Korea, though the police appear to provide more accurate estimates than activists. (The police provided fewer size estimates than activists, while the figure shows the correlation for all sizes each group reports; restricting the analysis to only those events for which police report a size does not change the results.)
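The logged correlation reported above is a Pearson correlation on log10-transformed counts; a self-contained sketch (the counts in the test are hypothetical, not the paper's data):

```python
import math

def logged_correlation(face_counts, reported_sizes):
    """Pearson correlation between log10 daily face counts and
    log10 reported crowd sizes."""
    xs = [math.log10(c) for c in face_counts]
    ys = [math.log10(s) for s in reported_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Logging both series before correlating prevents a few very large protest days from dominating the fit.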


Figure 12: Summing the Number of Faces Accurately Measures Protest Size in the United States (logged correlation).

Figure 13: Summing the Number of Faces Accurately Measures Protest Size in South Korea (panels: (a) raw correlation, (b) logged correlation).

Figures 12 and 13 suggest that computer vision can provide reliable estimates of protest size.

6.2 South Korea and Hong Kong Protests in More Detail

Figure 14 shows daily variation in the size of the South Korean and Hong Kong protests. In South Korea, there are clear spikes on Saturdays (the dotted lines), and these spikes correspond to known protest events. There is some discrepancy between the largest protest as reported by police and activists and as recorded via Twitter, but the overall patterns match others’ estimates (see Figure 13).

(a) South Korea (b) Hong Kong
Figure 14: Change in Protest Size

Figure 15 shows variation in the percent of protesters who are female at each protest. The vertical dashed lines indicate Saturdays.

(a) South Korea (b) Hong Kong
Figure 15: Percent of Female Faces

Figure 16 shows variation in the percent of images which contain a child. Our classifier was not trained to recognize individual children, so we cannot count the number of children. We are not aware of any event dataset that permits the study of protester demographics.

(a) South Korea (b) Hong Kong
Figure 16: Percent of Photos with a Child

Figure 17 shows the average perceived state violence during the two protests. To generate these estimates, we presented pairs drawn from 11,659 protest images to Amazon Mechanical Turk workers and asked each worker which image in a pair was more violent, creating 58,295 ratings. From these ratings, the Bradley-Terry model generates a score in [0, 1], where a higher score indicates more violence (Bradley and Terry, 1952). Since violence is subjective, we call this outcome “perceived violence”. An advantage of measuring violence from images, compared to the Goldstein scale (Goldstein, 1992), is that the measure is continuously valued and does not require an image to document an interaction between two actors, as the Goldstein scale does.
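A minimal Bradley-Terry fit from pairwise judgments can be written with the standard minorization-maximization update; this is a sketch of the general estimator, and the paper's exact estimation procedure may differ:

```python
def bradley_terry(wins, n_items, iters=100):
    """Estimate Bradley-Terry strengths from pairwise outcomes.
    wins[i][j] = number of times item i was judged more violent
    than item j. Returns strengths normalized to sum to 1, using
    the classic MM update p_i <- W_i / sum_j n_ij / (p_i + p_j)."""
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins[i])  # total wins for item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]
    return p
```

Under the model, image i is judged more violent than image j with probability p_i / (p_i + p_j), so the fitted strengths serve as a continuous violence score.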

(a) South Korea (b) Hong Kong
Figure 17: Perceived State Violence

Figures 14–17 suggest descriptive differences between the protests that match how the two protest waves were covered in newspapers. The Korean protests were large and peaceful, whereas students dominated the Hong Kong protests, especially because police responded to protesters with tear gas and violence. A t-test comparing perceived state violence during the two waves confirms that Hong Kong’s protests had more state violence (.029 versus .007) and smaller crowds (6.24 versus 28.24). Hong Kong protest photos also show fewer women and children, though that result is not statistically significant. These results hold when restricting the South Korea sample to only Saturdays.

Finally, we can use these data to model the correlations between various protest features and protest size. To that end, we model the size of the protests as a function of perceived violence (general, protester, and state), the presence of fire, the average number of faces in a photo, gender diversity, whether a protest image contains a child, the number of tweets with protest images, and the size of the most recent protest. All variables are lagged by one day. The dependent variable is the logarithm (base 10) of the number of faces in protest images; using the raw count does not change the results. All independent variables are standardized, so each coefficient represents the change in logged protest size associated with a one standard deviation change in the independent variable. The data for Hong Kong and South Korea are pooled, though only South Korea’s Saturdays are kept.

Figure 18 shows the results of this model, with error bars representing 95% confidence intervals. The model finds autocorrelation in protest size. More tweets with protest images also correlate with larger subsequent protests. On the other hand, photos with more faces and photos of children correlate with smaller protests. While the results for state violence and gender diversity do not reach conventional levels of statistical significance, they may with more observations. It is not surprising that state violence could decrease protest size, but it is surprising that gender and age diversity would. Though more investigation is necessary, we suspect these variables are trailing indicators: diversity increases as protests grow, not vice versa.

Figure 19 shows the results when modeling each country separately. With fewer data, only two variables, Hong Kong’s children and tweets, are statistically significant.


Figure 18: Regression Results, South Korea and Hong Kong Pooled. DV is logged protest size (log10 of the number of faces).

Figure 19: Regression Results by Country (panels: (a) South Korea, (b) Hong Kong). DV is logged protest size (log10 of the number of faces).

7 Conclusion

If a picture is worth 1,000 words, then at roughly five bytes per word it would require five kilobytes of storage. In fact, images from consumer cell phones and digital cameras require at least three megabytes of storage, and usually more. Even images shared on social media platforms, which are compressed from their original size, require hundreds of kilobytes. A picture, in other words, is worth anywhere from 20,000 words (100 kilobytes) to 1,000,000 words (5 megabytes). In digital terms, it is more accurate to say that a picture is worth a book.

The extra information stored in images is both an opportunity and a challenge. It is an opportunity because one image can document many more variables than newspaper articles, speeches, or legislative documents, including variables not measurable from text. The opportunity has remained underexploited because of the technical difficulty of identifying the objects and concepts encoded in an image, which has required researchers to rely on human coders. Because human coders are slow, expensive, and interpret the same raw data (an image) differently, studies using images have historically been small.

Advances in machine learning algorithms, specifically the rise of convolutional neural networks, have removed these barriers. Along with increased hardware capabilities, especially the use of GPUs, these algorithms have expanded the frontier of computer capabilities. For social media platforms, these advances mean automatically recognizing faces in uploaded images. For governments, these advances mean increased biometric security as well as policing capabilities. For researchers, these advances mean the ability to measure existing concepts better, operationalize measures previously only available in theoretical models, and do both with greater geographic and temporal resolution than previous efforts.

This paper has introduced computer vision and machine learning to political scientists. It showed how convolutional neural networks process images, how to validate their classification output, and how these capabilities can contribute to various literatures. There are certainly more applications for which space does not permit a discussion; the applicability of these methods is limited largely by the imagination and resources of researchers. As the third communication revolution continues to alter domestic and international politics (Steele and Stein, 2002), the ability to analyze the data it produces will only grow in importance.

Finally, the pipeline for generating data from images is the same as for text. The researcher acquires a corpus of documents, sets aside some for training and testing, labels the training and test sets into categories that fit the research question, and develops a model. The model is then applied to the rest of the corpus, generating categories for every document, text or image, the researcher possesses. These labels form the raw material of subsequent analysis. In other words, research using text and research using images are more similar than different. Aside from hardware requirements, the main difference is the types of questions that can be answered.
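The pipeline described above can be sketched in a few lines. This is a toy example with synthetic 8×8 "images" and a nearest-centroid classifier standing in for a convolutional network; every name and parameter here is illustrative, not taken from the paper's replication materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: "acquire" a corpus of tiny synthetic images (a hypothetical
# stand-in for, e.g., protest vs. non-protest photos).
def make_image(label):
    base = 0.8 if label == 1 else 0.2   # class 1 images are brighter
    return np.clip(base + 0.1 * rng.standard_normal((8, 8)), 0, 1)

labels = rng.integers(0, 2, size=200)
images = np.stack([make_image(y) for y in labels])

# Step 2: set aside labeled training and test sets.
train_X, test_X = images[:150], images[150:]
train_y, test_y = labels[:150], labels[150:]

# Step 3: develop a model -- a nearest-centroid classifier on raw pixels
# plays the role a convolutional neural network would in practice.
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(batch):
    flat = batch.reshape(len(batch), -1)
    dists = np.stack([((flat - c.ravel()) ** 2).sum(axis=1) for c in centroids])
    return dists.argmin(axis=0)

# Step 4: validate on the held-out test set, then label the full corpus.
accuracy = (classify(test_X) == test_y).mean()
corpus_labels = classify(images)   # raw material for downstream analysis
print(f"test accuracy: {accuracy:.2f}")
```

The resulting `corpus_labels` vector is the image analogue of document categories from a text pipeline: a variable ready for regression or other downstream analysis.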

References

  • Antol et al. (2015) Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision. pp. 2425–2433.
  • Barnhurst and Steele (1997) Barnhurst, Kevin G and Catherine A Steele. 1997. “Image-bite news: The visual coverage of elections on US television, 1968-1992.” Harvard International Journal of Press/Politics 2(1):40–58.
  • Barrett and Barrington (2005) Barrett, Andrew W and Lowell W Barrington. 2005. “Is a picture worth a thousand words? Newspaper photographs and voter evaluations of political candidates.” Harvard International Journal of Press/Politics 10(4):98–113.
  • Beissinger (2013) Beissinger, Mark R. 2013. “The Semblance of Democratic Revolution: Coalitions in Ukraine’s Orange Revolution.” American Political Science Review 107(03):574–592.
  • Belhumeur, Hespanha and Kriegman (1997) Belhumeur, Peter N., João P Hespanha and David J. Kriegman. 1997. “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection.” IEEE Transactions on pattern analysis and machine intelligence 19(7):711–720.
  • Bengio, Simard and Frasconi (1994) Bengio, Yoshua, Patrice Simard and Paolo Frasconi. 1994. “Learning long-term dependencies with gradient descent is difficult.” IEEE transactions on neural networks 5(2):157–166.
  • Bienenstock, Geman and Potter (1997) Bienenstock, Elie, Stuart Geman and Daniel Potter. 1997. Compositionality, MDL priors, and object recognition. In Advances in neural information processing systems. pp. 838–844.
  • Blei, Ng and Jordan (2003) Blei, David M, Andrew Y Ng and Michael I Jordan. 2003. “Latent dirichlet allocation.” Journal of machine Learning research 3(Jan):993–1022.
  • Botta, Moat and Preis (2015) Botta, Federico, Helen Susannah Moat and Tobias Preis. 2015. “Quantifying crowd size with mobile phone and Twitter data.” Royal Society Open Science 2:150162.
  • Bradley and Terry (1952) Bradley, Ralph Allan and Milton E Terry. 1952. “Rank analysis of incomplete block designs: I. The method of paired comparisons.” Biometrika 39(3/4):324–345.
  • Chakraborty et al. (2017) Chakraborty, Abhijnan, Johnnatan Messias, Fabricio Benevenuto, Saptarshi Ghosh, Niloy Ganguly and Krishna P Gummadi. 2017. “Who makes trends? understanding demographic biases in crowdsourced recommendations.” arXiv preprint arXiv:1704.00139 .
  • Chenoweth and Pressman (2017) Chenoweth, Erica and Jeremy Pressman. 2017. “Crowd Counting Consortium.”.
    https://sites.google.com/view/crowdcountingconsortium/home
  • Collier and Hoeffler (2004) Collier, Paul and Anke Hoeffler. 2004. “Greed and grievance in civil war.” Oxford Economic Papers 56:563–595.
  • Cowart, Saunders and Blackstone (2016) Cowart, Holly S., Lynsey M. Saunders and Ginger E. Blackstone. 2016. “Picture a Protest: Analyzing Media Images Tweeted From Ferguson.” Social Media and Society 2(4):1–9.
  • Delalleau and Bengio (2011) Delalleau, Olivier and Yoshua Bengio. 2011. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems. pp. 666–674.
  • Deng et al. (2009) Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition. IEEE pp. 248–255.
  • Eldan and Shamir (2016) Eldan, Ronen and Ohad Shamir. 2016. The power of depth for feedforward neural networks. In Conference on Learning Theory. pp. 907–940.
  • Face++ (2018) Face++. 2018. Face++.
    http://www.faceplusplus.com
  • Fearon and Laitin (2003) Fearon, James D. and David D. Laitin. 2003. “Ethnicity, Insurgency, and Civil War.” The American Political Science Review 97(1):75–90.
  • Felzenszwalb et al. (2010) Felzenszwalb, Pedro F, Ross B Girshick, David McAllester and Deva Ramanan. 2010. “Object detection with discriminatively trained part-based models.” IEEE transactions on pattern analysis and machine intelligence 32(9):1627–1645.
  • Ferrara (2003) Ferrara, Federico. 2003. “Why Regimes Create Disorder: Hobbes’s Dilemma During a Rangoon Summer.” Journal of Conflict Resolution 47(3):302–325.
  • Fisher et al. (2005) Fisher, Dana R., Kevin Stanley, David Berman and Gina Neff. 2005. “How do organizations matter? Mobilization and support for participants at five globalization protests.” Social Problems 52(1):102–121.
  • Fleming and Cottrell (1990) Fleming, Michael K and Garrison W Cottrell. 1990. Categorization of faces using unsupervised feature extraction. In IJCNN. IEEE pp. 65–70.
  • Francisco (2004) Francisco, Ronald A. 2004. “After the Massacre: Mobilization in the Wake of Harsh Repression.” Mobilization: An International Journal 9(2):107–126.
  • Gebru et al. (2017) Gebru, Timnit, Jonathan Krause, Yilun Wang, Duyun Chen, Jia Deng, Erez Lieberman Aiden and Li Fei-Fei. 2017. “Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States.” Proceedings of the National Academy of Sciences 114(50):13108–13113.
  • Gilliam Jr and Iyengar (2000) Gilliam Jr, Franklin D and Shanto Iyengar. 2000. “Prime suspects: The influence of local television news on the viewing public.” American Journal of Political Science pp. 560–573.
  • Glaeser et al. (2018) Glaeser, Edward L, Scott Duke Kominers, Michael Luca and Nikhil Naik. 2018. “Big data and big cities: The promises and limitations of improved measures of urban life.” Economic Inquiry 56(1):114–137.
  • Goldstein (1992) Goldstein, Joshua S. 1992. “A Conflict-Cooperation Scale for WEIS Events Data.” Journal of Conflict Resolution 36(2):369–385.
  • Grabe and Bucy (2009) Grabe, Maria Elizabeth and Erik Page Bucy. 2009. Image bite politics: News and the visual framing of elections. Oxford University Press.
  • Grimmer (2010) Grimmer, Justin. 2010. “A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in Senate press releases.” Political Analysis 18(1):1–35.
  • Grimmer and Stewart (2013) Grimmer, Justin and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis 21(3):267–297.
  • Hansen (2015) Hansen, Lene. 2015. “How images make world politics: International icons and the case of Abu Ghraib.” Review of International Studies 41(2):263–288.
  • Hendricks et al. (2016) Hendricks, Lisa Anne, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele and Trevor Darrell. 2016. Generating visual explanations. In European Conference on Computer Vision. Springer pp. 3–19.
  • Hobbs (2017) Hobbs, William. 2017. “Pivoted Text Scaling for Open-Ended Survey Responses.”.
    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044864
  • Hofmann (1999) Hofmann, Thomas. 1999. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc. pp. 289–296.
  • Hopkins and King (2010) Hopkins, Daniel J. and Gary King. 2010. “A Method of Automated Nonparametric Content Analysis for Social Science.” American Journal of Political Science 54(1):229–247.
  • Hsiang, Burke and Miguel (2013) Hsiang, Solomon M, Marshall Burke and Edward Miguel. 2013. “Quantifying the influence of climate on human conflict.” Science 341(6151):1235367.
  • Hunziker, Müller-Crepon and Cederman (2018) Hunziker, Philipp, Carl Müller-Crepon and Lars-Erik Cederman. 2018. “Roads to Rule, Roads to Rebel: Relational State Capacity and Conflict in Africa.”.
  • Hunziker and Cederman (2017) Hunziker, Philipp and Lars-Erik Cederman. 2017. “No Extraction without Representation: The Ethno-Regional Oil Curse and Secessionist Conflict.” Journal of Peace Research 54(3):365–381.
  • Jean et al. (2016) Jean, Neal, Marshall Burke, Michael Xie, W Matthew Davis, David B Lobell and Stefano Ermon. 2016. “Combining satellite imagery and machine learning to predict poverty.” Science 353(6301):790–794.
  • Jensen and Cowen (1999) Jensen, John R. and Dave C. Cowen. 1999. “Remote sensing of urban/suburban infrastructure and socio-economic attributes.” Photogrammetric Engineering and Remote Sensing 65(5):611–622.
  • Joo, Steen and Zhu (2015) Joo, Jungseock, Francis F Steen and Song-Chun Zhu. 2015. Automated facial trait judgment and election outcome prediction: Social dimensions of face. In International Conference on Computer Vision. pp. 3712–3720.
  • Joo et al. (2014) Joo, Jungseock, Weixin Li, Francis Steen and Song-Chun Zhu. 2014. Visual Persuasion : Inferring Communicative Intents of Images. In Computer Vision and Pattern Recognition. pp. 216–223.
  • Kanade (1977) Kanade, Takeo. 1977. “Computer Recognition of Human Faces.” Interdisciplinary Systems Research 47:1–47.
  • Kern (2011) Kern, Holger Lutz. 2011. “Foreign Media and Protest Diffusion in Authoritarian Regimes: The Case of the 1989 East German Revolution.” Comparative Political Studies 44(9):1179–1205.
  • Kiros, Salakhutdinov and Zemel (2014) Kiros, Ryan, Ruslan Salakhutdinov and Rich Zemel. 2014. Multimodal neural language models. In International Conference on Machine Learning. pp. 595–603.
  • Krizhevsky, Sutskever and Hinton (2012) Krizhevsky, Alex, Ilya Sutskever and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. pp. 1097–1105.
  • Kumar et al. (2011) Kumar, Neeraj, Alexander Berg, Peter N Belhumeur and Shree Nayar. 2011. “Describable visual attributes for face verification and image search.” IEEE Transactions on Pattern Analysis and Machine Intelligence 33(10):1962–1977.
  • Laver, Benoit and Garry (2003) Laver, Michael, Kenneth Benoit and John Garry. 2003. “Extracting Policy Positions from Political Texts Using Words as Data.” American Political Science Review 97(2):311–331.
  • LeCun et al. (1989) LeCun, Yann, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard and Lawrence D Jackel. 1989. “Backpropagation applied to handwritten zip code recognition.” Neural computation 1(4):541–551.
  • LeCun et al. (1998) LeCun, Yann, Léon Bottou, Yoshua Bengio and Patrick Haffner. 1998. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86(11):2278–2324.
  • Liu et al. (2015) Liu, Ziwei, Ping Luo, Xiaogang Wang and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In International Conference on Computer Vision. pp. 3730–3738.
  • Marcum et al. (2017) Marcum, Richard A., Curt H. Davis, Grant J. Scott and Tyler W. Nivin. 2017. “Rapid broad area search and detection of Chinese surface-to-air missile sites using deep convolutional neural networks.” Journal of Applied Remote Sensing 11(04):1.
  • McAdam (1986) McAdam, Doug. 1986. “Recruitment to High-Risk Activism: The Case of Freedom Summer.” American Journal of Sociology 92(1):64–90.
  • McHugo et al. (1985) McHugo, Gregory J, John T Lanzetta, Denis G Sullivan, Roger D Masters and Basil G Englis. 1985. “Emotional reactions to a political leader’s expressive displays.” Journal of personality and social psychology 49(6):1513.
  • Mikhaylov, Laver and Benoit (2011) Mikhaylov, S., M. Laver and K. R. Benoit. 2011. “Coder Reliability and Misclassification in the Human Coding of Party Manifestos.” Political Analysis 20(1):78–91.
  • O’Connor et al. (2010) O’Connor, Brendan, Ramnath Balasubramanyan, Bryan R Routledge, Noah A Smith et al. 2010. “From tweets to polls: Linking text sentiment to public opinion time series.” ICWSM 11(122-129):1–2.
  • Odgers et al. (2012) Odgers, Candice L., Avshalom Caspi, Christopher J. Bates, Robert J. Sampson and Terrie E. Moffitt. 2012. “Systematic social observation of children’s neighborhoods using Google Street View: a reliable and cost-effective method.” The Journal of Child Psychology and Psychiatry 53(10):1009–1017.
  • Opp and Gern (1993) Opp, Karl-Dieter and Christiane Gern. 1993. “Dissident Groups, Personal Networks, and Spontaneous Cooperation: The East German Revolution of 1989.” American Sociological Review 58(5):659–680.
  • Opp and Roehl (1990) Opp, Karl-Dieter and Wolfgang Roehl. 1990. “Repression, Micromobilization, and Political Protest.” Social Forces 69(2):521–547.
  • Poggio et al. (2017) Poggio, Tomaso, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda and Qianli Liao. 2017. “Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review.” International Journal of Automation and Computing 14(5):503–519.
  • Premise Data (2017) Premise Data. 2017. “Premise Data.”.
    www.premise.com
  • Rasler (1996) Rasler, Karen. 1996. “Concessions, Repression, and Political Protest in the Iranian Revolution.” American Sociological Review 61(1):132–152.
  • Ren et al. (2017) Ren, Shaoqing, Kaiming He, Ross Girshick and Jian Sun. 2017. “Faster R-CNN: towards real-time object detection with region proposal networks.” IEEE transactions on pattern analysis and machine intelligence 39(6):1137–1149.
  • Ritter and Conrad (2016) Ritter, Emily Hencken and Courtenay R. Conrad. 2016. “Preventing and Responding to Dissent: The Observational Challenges of Explaining Strategic Repression.” American Political Science Review 110(1):85–99.
  • Roberts et al. (2014) Roberts, Margaret E, Brandon M Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson and David G Rand. 2014. “Structural topic models for open-ended survey responses.” American Journal of Political Science 58(4):1064–1082.
  • Rojas et al. (2011) Rojas, Mario, David Masip, Alexander Todorov and Jordi Vitria. 2011. “Automatic prediction of facial trait judgments: Appearance vs. structural models.” PloS one 6(8):e23323.
  • Rosenberg et al. (1986) Rosenberg, Shawn W, Lisa Bohan, Patrick McCafferty and Kevin Harris. 1986. “The image and the vote: The effect of candidate presentation on voter preference.” American Journal of Political Science pp. 108–127.
  • Schill (2012) Schill, Dan. 2012. “The visual image and the political image: A review of visual communication research in the field of political communication.” Review of Communication 12(2):118–142.
  • Selvaraju et al. (2017) Selvaraju, Ramprasaath R, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh and Dhruv Batra. 2017. Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. In Computer Vision and Pattern Recognition. pp. 618–626.
  • Shah et al. (2016) Shah, Dhavan V, Alex Hanna, Erik P Bucy, David S Lassen, Jack Van Thomme, Kristen Bialik, JungHwan Yang and Jon CW Pevehouse. 2016. “Dual screening during presidential debates: Political nonverbals and the volume and valence of online expression.” American Behavioral Scientist 60(14):1816–1843.
  • Steele and Stein (2002) Steele, Cherie and Arthur Stein. 2002. Communications Revolutions and International Relations. In Technology, Development, and Democracy: International Conflict and Cooperation in the Information Age, ed. Juliann Emmons Allison. State University of New York Press pp. 25–53.
  • Steinert-Threlkeld (2017) Steinert-Threlkeld, Zachary C. 2017. “Spontaneous Collective Action: Peripheral Mobilization During the Arab Spring.” American Political Science Review 111(02):379–403.
  • Steinert-Threlkeld, Won and Joo (2018) Steinert-Threlkeld, Zachary C, Donghyeon Won and Jungseock Joo. 2018. “Mechanisms of Collective Action: Measuring the Role of Violence, Identity, and Free-riding Using Geolocated Images.”.
  • Sullivan and Masters (1988) Sullivan, Denis G and Roger D Masters. 1988. “‘Happy Warriors’: Leaders’ Facial Displays, Viewers’ Emotions, and Political Support.” American Journal of Political Science pp. 345–368.
  • Taigman et al. (2014) Taigman, Yaniv, Ming Yang, Marc’Aurelio Ranzato and Lior Wolf. 2014. Deepface: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition. pp. 1701–1708.
  • Tapiador et al. (2011) Tapiador, Francisco J., Silvania Avelar, Carlos Tavares-Correa and Rainer Zah. 2011. “Deriving fine-scale socioeconomic information of urban areas using very high-resolution satellite imagery.” International Journal of Remote Sensing 32(21):6437–6456.
  • Todorov et al. (2005) Todorov, Alexander, Anesu N Mandisodza, Amir Goren and Crystal C Hall. 2005. “Inferences of competence from faces predict election outcomes.” Science 308(5728):1623–1626.
  • Torres (2018) Torres, Michelle. 2018. “Give me the full picture: Using computer vision to understand visual frames and political communication.”.
    http://qssi.psu.edu/new-faces-papers-2018/torres-computer-vision-and-political-communication
  • Toté et al. (2015) Toté, Carolien, Domingos Patricio, Hendrik Boogaard, Raymond van der Wijngaart, Elena Tarnavsky and Chris Funk. 2015. “Evaluation of satellite rainfall estimates for drought and flood monitoring in Mozambique.” Remote Sensing 7(2):1758–1776.
  • Tufekci and Wilson (2012) Tufekci, Zeynep and Christopher Wilson. 2012. “Social Media and the Decision to Participate in Political Protest: Observations From Tahrir Square.” Journal of Communication 62(2):363–379.
  • Tumasjan et al. (2010) Tumasjan, Andranik, Timm Oliver Sprenger, Philipp G Sandner and Isabell M Welpe. 2010. “Predicting elections with Twitter: What 140 characters reveal about political sentiment.” ICWSM 10(1):178–185.
  • Uijlings et al. (2013) Uijlings, Jasper RR, Koen EA Van De Sande, Theo Gevers and Arnold WM Smeulders. 2013. “Selective search for object recognition.” International journal of computer vision 104(2):154–171.
  • Vernon et al. (2014) Vernon, Richard JW, Clare AM Sutherland, Andrew W Young and Tom Hartley. 2014. “Modeling first impressions from highly variable facial images.” Proceedings of the National Academy of Sciences 111(32):E3353–E3361.
  • Viola and Jones (2004) Viola, Paul and Michael J Jones. 2004. “Robust real-time face detection.” International journal of computer vision 57(2):137–154.
  • Wang, Li and Luo (2016) Wang, Yu, Yuncheng Li and Jiebo Luo. 2016. Deciphering the 2016 US Presidential Campaign in the Twitter Sphere: A Comparison of the Trumpists and Clintonists. In ICWSM.
  • Weidmann and Schutte (2017) Weidmann, Nils B and Sebastian Schutte. 2017. “Using night light emissions for the prediction of local wealth.” Journal of Peace Research 54(2):125–140.
  • Wilson et al. (2012) Wilson, Jeffrey S, Cheryl M Kelly, Mario Schootman, Elizabeth A Baker, Aniruddha Banerjee, Morgan Clennin and Douglas K Miller. 2012. “Assessing the built environment using omnidirectional imagery.” American journal of preventive medicine 42(2):193–199.
  • Won, Steinert-Threlkeld and Joo (2017) Won, Donghyeon, Zachary C Steinert-Threlkeld and Jungseock Joo. 2017. Protest Activity Detection and Perceived Violence Estimation from Social Media Images. In Proceedings of the 2017 ACM on Multimedia Conference. ACM pp. 786–794.
  • Zebrowitz and Montepare (2008) Zebrowitz, Leslie A and Joann M Montepare. 2008. “Social psychological face perception: Why appearance matters.” Social and personality psychology compass 2(3):1497–1517.
  • Zeiler and Fergus (2014) Zeiler, Matthew D and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision. Springer pp. 818–833.
  • Zhang and Pan (2018) Zhang, Han and Jennifer Pan. 2018. “CASM: A Deep-Learning Approach for Identifying Collective Action Events with Text and Image Data from Social Media.”.
    https://www.princeton.edu/~hz2/files/protest_method.pdf