Advances in Machine Learning for the Behavioral Sciences

11/08/2019, by Tomas Kliegr, et al.

The areas of machine learning and knowledge discovery in databases have considerably matured in recent years. In this article, we briefly review recent developments as well as classical algorithms that stood the test of time. Our goal is to provide a general introduction to different tasks such as learning from tabular data, behavioral data, or textual data, with a particular focus on actual and potential applications in the behavioral sciences. The supplemental appendix to the article also provides practical guidance for using the methods by pointing the reader to proven software implementations. The focus is on R, but we also cover some libraries in other programming languages as well as systems with easy-to-use graphical interfaces.




1 Introduction

Machine learning has considerably matured in recent years, and has become a key enabling technology for many data-intensive tasks. Advances in neural network-based deep learning methodologies have yielded unexpected and unprecedented performance levels in tasks as diverse as image recognition, natural language processing, and game playing. Yet, these techniques are not universally applicable, the key impediments being their hunger for data and their lack of interpretable results. These limitations make them less suitable for the behavioral sciences, where data are typically scarce and where results that do not yield insights into the nature of the processes underlying the studied phenomena are often considered of little value.

This article presents an up-to-date curated survey of machine learning methods applicable to behavioral research. Since being able to understand a model is a prerequisite for uncovering the causes and mechanisms of the underlying phenomena, we favored methods that generate interpretable models from the multitude of those available. However, we also provide pointers to state-of-the-art methods in terms of predictive performance, such as neural networks.

Each covered method is described in nontechnical terms. To help researchers identify the best tool for their research problem, we put emphasis on examples: most methods covered are complemented with references to their existing or possible use in the behavioral sciences. Each described method is also supplemented with a description of software that implements it, provided in the Supplemental Appendix (available online). Given the predominance of R as a language for statistical programming in the behavioral sciences, we focus in particular on R packages. We also cover some libraries in other programming languages, most notably Python, as well as systems with easy-to-use graphical interfaces.

The survey is organized by the character of the input data. In the “Tabular Data” section, we cover structured, tabular data, for which we present an up-to-date list of methods used to generate classification models, as well as algorithms for exploratory and descriptive data mining. The “Behavioral Data” section covers methods and systems that can be used to collect and process behavioral data, focusing on clickstreams resulting from web usage mining, and methods developed for learning preference models from empirical data. The latter two areas can, for example, be combined for consumer choice research based on data obtained from an online retailer. Given the uptake of social media both as sources of data and objects of study, the “Textual Data” section provides in-depth coverage of textual data, including syntactic parsing and document classification methods used to categorize content, as well as new advances that allow a representation of individual documents using word embeddings. The Internet also provides new machine-readable resources, which contain a wealth of information that can aid the analysis of arbitrary content. Knowledge graphs and various lexical resources, covered in the “External Knowledge Sources” section, can be used, for example, for enriching the content of small documents (microposts), which are an increasingly common form of online communication. The “Related Work” section discusses related work and also covers miscellaneous topics, such as machine-learning-as-a-service systems. These can provide behavioral scientists with the ability to process very large data sets with little setup cost. The conclusion summarizes the methods covered in this article, focusing on the performance-interpretability trade-off. It also discusses emerging trends and challenges, such as the legal and ethical dimensions of machine learning.

2 Tabular Data

The task that has received the most attention in the machine learning literature is the supervised learning scenario: Given a database of observations described with a fixed number of measurements or features and a designated attribute, the class, find a mapping that is able to compute the class value from the feature values of new, previously unseen observations. While there are statistical techniques that are able to solve particular instances of this problem, machine learning techniques provide a strong focus on the use of categorical, non-numeric attributes, and on the immediate interpretability of the result. They also typically provide simple means for adapting the complexity of the models to the problem at hand. This, in particular, is one of the main reasons for the increasing popularity of machine learning techniques in both industry and academia.

Education    Marital Status   Sex      Has Children   Approve?
----------   --------------   ------   ------------   --------
primary      single           male     no             no
primary      single           male     yes            no
primary      married          male     no             yes
university   divorced         female   no             yes
university   married          female   yes            yes
secondary    single           male     no             no
university   single           female   no             yes
secondary    divorced         female   no             yes
secondary    single           female   yes            yes
secondary    married          male     yes            yes
primary      married          female   no             yes
secondary    divorced         male     yes            no
university   divorced         female   yes            no
secondary    divorced         male     no             yes

Table 1: A sample database

Table 1 shows a small, artificial sample database, taken from Billari et al. [2006]. The database contains the results of a hypothetical survey with 14 respondents concerning the approval or disapproval of a certain issue. Each individual is characterized by four attributes—Education (with possible values primary school, secondary school, or university), Marital Status (with possible values single, married, or divorced), Sex (male or female), and Has Children (yes or no)—that encode rudimentary information about their sociodemographic background. The last column, Approve?, encodes whether the individual approved or disapproved of the issue.

The task is to use the information in this training set to derive a model that is able to predict whether a person is likely to approve or disapprove based on the four demographic characteristics. As most classical machine learning methods tackle a setting like this, we briefly recapitulate a few classical algorithms, while mentioning some new developments as well.

2.1 Induction of Decision Trees

Figure 1: A decision tree describing the dataset shown in Table 1

The induction of decision trees is one of the oldest and most popular techniques for learning discriminatory models, which has been developed independently in the statistical [Breiman et al., 1984, Kass, 1980] and machine learning [Quinlan, 1986] literatures. A decision tree is a particular type of classification model that is fairly easy to induce and to understand. In the statistical literature [cf., e.g., Breiman et al., 1984], decision trees are also known as classification trees. Related techniques for predicting numerical class values are known as regression trees.

Figure 1 shows a sample tree which might be induced from the data of Table 1. To classify a specific instance, the decision tree first asks “What is the marital status of the given instance?” If the answer is “married”, it assigns the class “yes”. If the answer is “divorced” or “single”, an additional question is asked.

In general, the classification of a new example starts at the top node—the root. In our example, the root is a decision node, which corresponds to a test of the value of the Marital Status attribute. Classification then proceeds by moving down the branch that corresponds to a particular value of this attribute, arriving at a new decision node with a new attribute. This process is repeated until we arrive at a terminal node—a so-called leaf—which is not labeled with an attribute but with a value of the target attribute (Approve?). For all examples that arrive at the same leaf, the same target value will be predicted. Figure 1 shows leaves as rectangular boxes and decision nodes as ellipses.

Decision trees are learned in a top-down fashion: The program selects the best attribute for the root of the tree, splits the set of examples into disjoint sets (one for each value of the chosen attribute, containing all training examples that have the corresponding value for this attribute), and adds corresponding nodes and branches to the tree. If there are new sets that contain only examples from the same class, a leaf node is added for each of them and labeled with the respective class. For all other sets, a decision node is added and associated with the best attribute for the corresponding set as described above. Hence, the dataset is successively partitioned into non-overlapping, smaller datasets until each set only contains examples of the same class (a pure node). Eventually, a pure node can always be found via successive partitions unless the training data contain two identical but contradictory examples, that have the same feature values but different class values.
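The recursive partitioning step described above can be sketched in plain Python (the helper names are ours, and the data is a fragment of Table 1):

```python
def partition(rows, attribute):
    """Split rows into disjoint subsets, one per value of the attribute."""
    subsets = {}
    for row in rows:
        subsets.setdefault(row[attribute], []).append(row)
    return subsets

def is_pure(rows, target="Approve?"):
    """A node is pure if all its examples share the same class value."""
    return len({row[target] for row in rows}) == 1

# four rows from Table 1, restricted to two attributes
data = [
    {"Marital Status": "single",  "Sex": "male",   "Approve?": "no"},
    {"Marital Status": "married", "Sex": "male",   "Approve?": "yes"},
    {"Marital Status": "married", "Sex": "female", "Approve?": "yes"},
    {"Marital Status": "single",  "Sex": "female", "Approve?": "yes"},
]

for value, subset in partition(data, "Marital Status").items():
    print(value, "pure" if is_pure(subset) else "impure")
```

Pure subsets become leaves; impure ones are split again on the best remaining attribute.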

The crucial step in decision tree induction is the choice of an adequate attribute. Typical attribute selection criteria use a function that measures the purity of a node, that is, the degree to which the node contains only examples of a single class. This purity measure is computed for a node and all successor nodes that result from using an attribute for splitting the data. The difference between the original purity value and the sum of the values of the successor nodes, weighted by the relative sizes of these nodes, is used to estimate the utility of this attribute, and the attribute with the largest utility is selected for expanding the tree. The algorithm C4.5 uses information-theoretic entropy as a purity measure [Quinlan, 1986], whereas CART uses the Gini index [Breiman et al., 1984]. C5.0, the successor to C4.5, is noted for the best performance among all tree learning algorithms in the seminal benchmark article of Fernandez-Delgado et al. [2014].
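For the data in Table 1, the utility of splitting on Marital Status can be computed directly (a minimal sketch; the two functions mirror the entropy and Gini measures named above):

```python
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gain(rows, attribute, target, purity=entropy):
    """Parent purity minus the size-weighted purity of the child nodes."""
    labels = [r[target] for r in rows]
    children = {}
    for r in rows:
        children.setdefault(r[attribute], []).append(r[target])
    weighted = sum(len(ls) / len(rows) * purity(ls) for ls in children.values())
    return purity(labels) - weighted

# the Marital Status and Approve? columns of Table 1, in row order
marital = ["single", "single", "married", "divorced", "married", "single",
           "single", "divorced", "single", "married", "married", "divorced",
           "divorced", "divorced"]
approve = ["no", "no", "yes", "yes", "yes", "no", "yes", "yes", "yes", "yes",
           "yes", "no", "no", "yes"]
rows = [{"Marital Status": m, "Approve?": a} for m, a in zip(marital, approve)]

print(round(gain(rows, "Marital Status", "Approve?"), 3))  # prints 0.247
```

Note that the "married" branch is already pure (all four married respondents approve), which is what makes this split attractive.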

Overfitting refers to the use of an overly complex model that results in worse performance on new data than would be achievable with a simpler model [Mitchell, 1997, ch. 3]. Tree models may overfit due to specialized decision nodes that refer to peculiarities of the training data. In order to receive simpler trees and to fight overfitting, most decision tree algorithms apply pruning techniques that simplify trees after learning by removing redundant decision nodes.

A general technique for improving the prediction quality of classifiers is to form an ensemble: learning multiple classifiers whose individual predictions are joined into a collective final prediction. The best-known technique is random forests [Breiman, 2001], which uses resampling to learn a variety of trees from different samples of the data. Random forests also use different random subsets of all available attributes, which not only increases the variance in the resulting trees but also makes the algorithm quite fast. However, the increased predictive accuracy comes with a substantial decrease in the interpretability of the learned concepts.
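The resampling step, bootstrap sampling, can be illustrated in a few lines of Python; a bootstrap sample of size n drawn with replacement contains, on average, only about 63% of the distinct original examples, which is what makes the individual trees differ:

```python
import random

def bootstrap(example_ids, rng):
    """Draw a sample of the same size as the original, with replacement."""
    return [rng.choice(example_ids) for _ in example_ids]

rng = random.Random(42)
ids = list(range(1000))
fractions = [len(set(bootstrap(ids, rng))) / len(ids) for _ in range(100)]

# each tree of the forest would be grown on one such bootstrap sample,
# additionally using a random subset of the attributes at every split
print(round(sum(fractions) / len(fractions), 2))  # close to 0.63
```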

Applications in Behavioral Sciences

Given that they are not only well-known in machine learning and data mining, but are also firmly rooted in statistics, decision trees have seen a large number of applications in behavioral sciences, of which we can list just a few. McArdle and Ritschard [2013] provide an in-depth introduction to this family of techniques, and also demonstrate their use in a number of applications in demographic, medical, and educational areas. In demography, Billari et al. [2006] have applied decision tree learning to the analysis of differences in the life courses in Austria and Italy, where the key issue was to model these events as binary temporal relations. Similar techniques have also been used in survival analysis. For example, so-called survival trees have been used in [De Rose and Pallara, 1997]. In the political sciences, decision trees have been used for modeling international conflicts [Fürnkranz et al., 1997] and international negotiation [Druckman et al., 2006]. Rosenfeld et al. [2012] also used decision trees to model people’s behavior in negotiations. In psychology, Walsh et al. [2017] used random forests to predict future suicide attempts of patients.

2.2 Induction of Predictive Rule Sets

Another traditional machine learning technique is the induction of rule sets [Fürnkranz et al., 2012]. The learning of rule-based models has been the main research goal in the field of machine learning since its beginning in the early 1960s. Rule-based techniques have also received some attention in the statistical community [Friedman and Fisher, 1999].

IF Marital Status = married THEN yes
IF Sex = female THEN yes
IF Sex = male THEN no
DEFAULT yes
Figure 2: A smaller rule set describing the dataset shown in Table 1

Comparison between Rule and Tree Models

Rule sets are typically simpler and more comprehensible than decision trees. A decision tree can also be read as a set of rules: each leaf of the tree corresponds to a single rule consisting of a conjunction of all conditions on the path from the root to that leaf.

The main difference between the rules generated by a decision tree and the rules generated by a rule learning algorithm is that the former rule set consists of non-overlapping rules that span the entire instance space – each possible combination of feature values will be covered by exactly one rule. Relaxing this constraint by allowing potentially overlapping rules that need not span the entire instance space may often result in smaller rule sets.

However, in this case, we need mechanisms for tie-breaking: Which rule to choose when more than one covers the given example. We also need mechanisms for default classifications: What classification to choose when no rule covers the given example. Typically, one prefers rules with a higher ratio of correctly classified examples from the training set.

Example of a Rule Model

Figure 2 shows a particularly simple rule set for the data in Table 1, which uses two different attributes in its first two rules. Note that these two rules are overlapping, i.e., several examples will be covered by more than one rule. For instance, examples 3 and 10 are covered by both the first and the third rule. These conflicts are typically resolved by using the more accurate rule, i.e., the rule that covers a higher proportion of examples that support its prediction (the first one in our case). Again, pruning is a good idea for rule learning, which means that the rules only need to cover examples that are mostly from the same class. It turns out to be advantageous to prune rules immediately after they have been learned before successive rules are learned [Fürnkranz, 1997].
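The rule set of Figure 2 and the accuracy-based conflict resolution just described can be sketched in plain Python (the data is Table 1 restricted to the attributes the rules use):

```python
# Table 1 reduced to the attributes used by the rules in Figure 2
rows = [dict(zip(("Marital Status", "Sex", "Approve?"), t)) for t in [
    ("single", "male", "no"), ("single", "male", "no"),
    ("married", "male", "yes"), ("divorced", "female", "yes"),
    ("married", "female", "yes"), ("single", "male", "no"),
    ("single", "female", "yes"), ("divorced", "female", "yes"),
    ("single", "female", "yes"), ("married", "male", "yes"),
    ("married", "female", "yes"), ("divorced", "male", "no"),
    ("divorced", "female", "no"), ("divorced", "male", "yes"),
]]

rules = [  # (condition attribute, condition value, predicted class)
    ("Marital Status", "married", "yes"),
    ("Sex", "female", "yes"),
    ("Sex", "male", "no"),
]

def accuracy(rule, rows, target="Approve?"):
    """Fraction of covered training examples that the rule predicts correctly."""
    attr, value, pred = rule
    covered = [r for r in rows if r[attr] == value]
    return sum(r[target] == pred for r in covered) / len(covered)

def classify(example, rules, rows, default="yes"):
    covering = [r for r in rules if example[r[0]] == r[1]]
    if not covering:
        return default          # no rule covers the example
    # conflict resolution: prefer the more accurate covering rule
    best = max(covering, key=lambda r: accuracy(r, rows))
    return best[2]

# example 3 is covered by the first and third rule; the first one wins
print(classify(rows[2], rules, rows))
```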

The idea to try to prune or simplify each rule right after it has been learned has been exploited in the well-known RIPPER algorithm [Cohen, 1995]. This algorithm has been frequently used in applications because it learns very simple and understandable rules. It also added a postprocessing phase for optimizing a rule set in the context of other rules. The key idea is to remove one rule out of a previously learned rule set and try to relearn the rule in the context of previous rules and subsequent rules. Another type of approach to rule learning, which heavily relies on effective pruning methods, is Classification Based on Associations [Liu et al., 1998] and succeeding algorithms. Their key idea is to use algorithms for discovering association rules (cf. “Discovering Interesting Rules” section), and then combine a selection of the found rules into a predictive rule model. (The Classification Based on Associations algorithm does not generate a rule set but a rule list; the difference is that in a predictive rule list, the order of the rules is important, as it signifies precedence.)

Current Trends

Current work in inductive rule learning is focused on finding simple rules via optimization [Dash et al., 2018, Wang et al., 2017, Malioutov and Meel, 2018], mostly with the goal that simple rules are more interpretable. However, there is also some evidence that shorter rules are not always more convincing than more complex rules [Fürnkranz et al., 2018, Stecher et al., 2016]. Another line of research focuses on improving the accuracy of rule models, often by increasing their expressiveness through fuzzification, i.e., by making the decision boundary between different classes softer. At the expense of lower interpretability, fuzzy rule learning algorithms such as SLAVE [García et al., 2014], FURIA [Hühn and Hüllermeier, 2009] and FARC-HD [Alcala-Fdez et al., 2011] often outperform models with regular, “crisp” rules.

Applications in Behavioral Sciences

Similar to decision trees, rule learning can be generally used for prediction or classification in cases where interpretability of the model is important. Rule learning could also be useful in domains where the output of the model should be easily applicable for a practitioner, such as a physician or a psychologist, given that the resulting model can be easier to remember and apply than a logistic regression or a decision-tree model.

Multiple studies used the RIPPER algorithm [Cohen, 1995], which is considered the state of the art in inductive rule learning. Classification rules may be used, for example, to categorize documents. One study [Stumpf et al., 2009] used RIPPER and other algorithms to classify emails; RIPPER outperformed Naive Bayes, another popular machine learning algorithm, in terms of classification accuracy. Furthermore, rule-based explanations were considered, on average, the most understandable, which can be especially useful when the output of the algorithm must be interpreted or built upon.

Other uses of RIPPER include classifying the strengths of opinions in nested clauses [Wilson et al., 2004] and predicting students’ performance [Kotsiantis et al., 2002]. Some of the studies using decision trees are also used for rule learning [Fürnkranz et al., 1997, Billari et al., 2006].

Rule learning is suggested as a possible computational model in developmental psychology [Shultz, 2013]. These algorithms, or decision tree models convertible to rules, could, therefore, be used in psychology to simulate human reasoning.

2.3 Discovering Interesting Rules

The previous section focused on the use of rules for prediction, but rule learning can also be adapted for exploratory analysis, where only rules corresponding to interesting patterns in the data are generated.

A commonly used approach for this task is association rule learning. Algorithms belonging to this family are characterized by outputting all rules that match user-defined constraints on interestingness. These constraints are called interest measures and are typically defined by two parameters: minimum confidence threshold and minimum support threshold.

If we consider a rule r: IF Antecedent THEN Consequent, then rule confidence is the proportion of objects correctly classified by the rule among all objects matched by the antecedent of the rule. An object is correctly classified when it matches the entire rule (its antecedent and consequent), and incorrectly classified if it matches only the antecedent but not the consequent. Rule support is typically defined as the proportion of objects correctly classified by the rule among all objects in the training data.

Example. Let us consider an object o with the attribute values income = low and district = London, whose target attribute has the value risk = low, and the rule r: IF income = low AND district = London THEN risk = high. Object o matches rule r, because o meets all conditions in the antecedent of r. Rule r will incorrectly classify o, because the class assigned by the rule consequent (risk = high) does not match the value of the target attribute risk of o.
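These two interest measures can be computed directly (the objects below are illustrative, chosen so that the rule covers some objects correctly and some incorrectly):

```python
objects = [
    {"income": "low",  "district": "London", "risk": "high"},
    {"income": "low",  "district": "London", "risk": "low"},
    {"income": "low",  "district": "Leeds",  "risk": "high"},
    {"income": "high", "district": "London", "risk": "low"},
]

antecedent = {"income": "low", "district": "London"}
consequent = {"risk": "high"}

def matches(obj, conditions):
    return all(obj[a] == v for a, v in conditions.items())

covered = [o for o in objects if matches(o, antecedent)]   # matched by antecedent
correct = [o for o in covered if matches(o, consequent)]   # correctly classified

confidence = len(correct) / len(covered)   # 1 of 2 covered objects -> 0.5
support = len(correct) / len(objects)      # 1 of 4 objects -> 0.25
print(confidence, support)
```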

Apriori [Agrawal et al., 1993] is the most well-known algorithm for mining association rules. There are also newer algorithms, such as FP-Growth, which can provide faster performance. While association rule mining is commonly used for discovering interesting patterns in data, the simplicity of the generated rules as well as restricted options for constraining the search space may become a limitation.
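The level-wise search behind Apriori can be sketched in a few lines of Python (the transactions are illustrative; full Apriori additionally prunes candidates that have an infrequent subset before counting their support):

```python
transactions = [
    {"low_income", "london", "high_risk"},
    {"low_income", "london", "high_risk"},
    {"low_income", "leeds"},
    {"high_income", "london"},
]
min_support = 0.5  # minimum fraction of transactions

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# level 1: frequent single items
items = {i for t in transactions for i in t}
frequent = [{frozenset([i]) for i in items if support({i}) >= min_support}]

# grow itemsets level by level; a superset can only be frequent
# if the sets it was built from are frequent (the Apriori principle)
while frequent[-1]:
    candidates = {a | b for a in frequent[-1] for b in frequent[-1]
                  if len(a | b) == len(a) + 1}
    frequent.append({c for c in candidates if support(c) >= min_support})

result = set().union(*frequent)
print(sorted(sorted(s) for s in result))
```

Association rules are then derived from each frequent itemset by splitting it into an antecedent and a consequent and keeping the splits that pass the confidence threshold.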

One common problem with the application of association rule mining stems from the fact that all rules matching the user-defined interestingness thresholds are returned. There may be millions of such rules even for small datasets, making the resulting list of rules difficult to interpret. A possible solution is to apply pruning, which removes redundant rules. Another limitation of association rule mining is a lack of direct support for numeric attributes.

An alternative approach to pruning is to better focus the generation of association rules. This approach is provided by the GUHA method [Hájek et al., 2010], which was initially developed with the intent to automatically search for all statistical hypotheses supported by the data. The method provides the user with many fine-grained settings for expressing what should be considered an interesting hypothesis. The trade-off is that GUHA is slower on larger datasets than association rule mining performed with Apriori [Rauch and Simunek, 2017].

Another related task applicable to descriptive and explorative data mining is subgroup discovery, which finds groups of instances in the data that exhibit “distributional unusualness with respect to a certain property of interest” [Wrobel, 1997]. A number of quality measures have been developed specifically for subgroup discovery, but the interest measures applied in association rule mining can be used as well. By choosing a suitable quality measure, the subgroup discovery task can thus be adapted to a range of diverse goals, such as mining for unexpected patterns. A subgroup can be considered unexpected when it significantly deviates from the total population in terms of the selected quality measure [Atzmueller, 2015].

Subgroup discovery approaches are algorithmically diverse, with both association rule mining and predictive rule learning algorithms used as a base approach [Herrera et al., 2011]. Subgroup discovery can be preferred over association rule mining when the task at hand involves a numeric target attribute. Some subgroup discovery algorithms also counter the problem of too many generated rules with the convenient top-k approach, which returns only the top k subgroups according to the selected quality measure.
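One quality measure commonly used in the subgroup discovery literature (though not named above) is weighted relative accuracy (WRAcc), which balances the size of a subgroup against how far its target share deviates from that of the whole population; a minimal sketch:

```python
def wracc(subgroup, population, target):
    """Weighted relative accuracy: coverage times the difference between
    the target share in the subgroup and in the whole population."""
    coverage = len(subgroup) / len(population)
    p_sub = sum(target(x) for x in subgroup) / len(subgroup)
    p_all = sum(target(x) for x in population) / len(population)
    return coverage * (p_sub - p_all)

# illustrative data: 100 people, 30 with the property of interest;
# a candidate subgroup of 20 people, 15 of whom have it
population = [1] * 30 + [0] * 70
subgroup = [1] * 15 + [0] * 5

print(round(wracc(subgroup, population, lambda x: x == 1), 3))  # 0.2 * (0.75 - 0.30)
```

A WRAcc of zero means the subgroup is unremarkable; large positive (or negative) values flag distributionally unusual subgroups.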

Applications in Behavioral Sciences

Association rule mining has been extensively used to find interesting patterns in data in a number of disciplines. Selected recent applications include exploration of mathematics anxiety among engineering students [Herawan et al., 2011] or discovering color-emotion relationships [Feng et al., 2010]. An interdisciplinary review of applications of subgroup discovery is provided by Herrera et al. [2011]. More recently, subgroup discovery was used, for example, to study the relationship between technology acceptance and personality by Behrenbruch et al. [2012]. Goh and Ang [2007] provide an accessible introduction to association rule mining aimed at behavioral researchers.

2.4 Neural Networks and Deep Learning

Neural networks have a long history in artificial intelligence and machine learning. The first works were motivated by the attempt to model neurophysiological insights, which resulted in mathematical models of neurons, so-called perceptrons [Rosenblatt, 1962]. Soon, their limitations were recognized [Minsky and Papert, 1969], and interest in them subsided until Rumelhart et al. [1986] introduced backpropagation, which makes it possible to train multi-layer networks effectively. While a perceptron can essentially only model a linear function, connecting various input signals to an output signal by weighting them with learned weights, multi-layer networks pass the linear output through non-linear so-called activation functions, which allows one to model arbitrary functions with complex neural networks [Hornik, 1991]. This insight led to a large body of research in the 1990s, resulting in a wide variety of applications in industry, business, and science [Widrow et al., 1994], before the attention in machine learning moved to alternative methods such as support vector machines.

Recently, however, neural networks have surfaced again in the form of so-called deep learning, which often leads to better performance [Goodfellow et al., 2016, Lecun et al., 2015, Schmidhuber, 2015]. Interestingly, the success of these methods is not so much based on new insights—the key methods have essentially been proposed in the 1990s—but on the availability of huge labeled datasets and powerful computer hardware that allows their use for training large networks.

The basic network structure consists of multiple layers of fully connected nodes, where each node in one layer takes the outputs of all nodes in the previous layer as input. For training such networks, the input signals are fed into the first layer (the input layer), and the output signal at the last layer (the output layer) is compared to the desired output. The difference between the output signal and the desired output is propagated backward through the network, and each node adapts the weights that it puts on its input signals so that the error is reduced. For this adaptation, error gradients are estimated, which indicate the direction in which the weights have to be changed in order to minimize the error. These estimates are typically not computed from single examples, but from small subsets of the available data, so-called mini-batches. Several variants of this stochastic gradient descent algorithm have been proposed, with AdaGrad [Duchi et al., 2011] being one of the most popular. Overfitting the data has to be avoided with techniques such as dropout learning, which in each optimization step randomly exempts a fraction of the network nodes from training [Srivastava et al., 2014].
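As a toy illustration of backpropagation with stochastic gradient descent, the following plain-Python sketch trains a small one-hidden-layer network on the XOR problem (the network size, learning rate, and iteration count are arbitrary choices for this example):

```python
import math
import random

rng = random.Random(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

H = 4            # hidden units (arbitrary choice)
lr = 1.0         # learning rate (arbitrary choice)
W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [rng.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(W1, b1)]
    return h, sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = loss()
for step in range(8000):
    x, t = data[step % len(data)]       # "mini-batch" of size one
    h, y = forward(x)
    dy = 2 * (y - t) * y * (1 - y)      # error gradient at the output
    for j in range(H):                  # propagate the error backward
        dh = dy * W2[j] * h[j] * (1 - h[j])
        W2[j] -= lr * dy * h[j]
        for i in range(2):
            W1[j][i] -= lr * dh * x[i]
        b1[j] -= lr * dh
    b2 -= lr * dy

print(loss() < initial)  # the squared error should have decreased
```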

Multiple network layers allow the network to develop data abstractions, which is the main feature that distinguishes deep learning from alternative learning algorithms. This is most apparent when auto-encoders are trained, where a network is trained to map the input data onto itself but is forced to project it through a lower-dimensional embedding space on the way [Vincent et al., 2010].

In addition to the conventional fully connected layers, there are various special types of network connections. For example, in computer vision, convolutional layers are commonly used: they train multiple sliding windows that move over the image data and process just a part of the image at a time, thereby learning to recognize local features. These features are subsequently abstracted into more and more complex visual patterns [Krizhevsky et al., 2017]. For temporal data, one can use recurrent neural networks, which do not make predictions for individual input vectors but for a sequence of input vectors. To do so, they feed abstracted information from previous data points forward to the next layers. A particularly successful architecture is LSTM networks, which allow the learner to control the amount of information flow between successive data points [Hochreiter and Schmidhuber, 1997].

The main drawback of these powerful learning machines is the lack of interpretability of their results. Understanding the meaning of the generated variables is crucial for transparent and justifiable decisions. Consequently, the interest in methods that make learned models more interpretable has increased with the success of deep learning. Some research has been devoted to trying to convert such arcane models to more interpretable rule-based [Andrews et al., 1995] or tree-based models [Frosst and Hinton, 2017], which may be facilitated with appropriate neural network training techniques [González et al., 2017]. Instead of making the entire model interpretable, methods like LIME [Ribeiro et al., 2016] provide local explanations for inscrutable models, allowing a trade-off between fidelity to the original model and the interpretability and complexity of the local model. There is also research on developing alternative deep learning methods, most notably sum-product networks [Peharz et al., 2017]. These methods are firmly rooted in probability theory and graphical models and are therefore easier to interpret than neural networks.

Applications in Behavioral Sciences

Neural networks have been studied and applied in psychological research within the scope of connectionist models of human cognition since about the 1980s [Houghton, 2004]. The study of artificial neural networks in this context has intensified in recent years in response to algorithmic advances. McKay et al. [2017, p. 467] review approaches involving artificial neural networks for studying psychological problems and disorders. For example, schizophrenic thinking has been studied by purposefully damaging artificial neural networks. Neural networks have also been used to study non-pathological aspects of human decision making, such as consumer behavior [Greene et al., 2017].

Deep neural networks have enjoyed considerable success in areas such as computer vision [Krizhevsky et al., 2017], natural language understanding [Deng and Liu, 2018], and game-playing [Silver et al., 2016]. However, these success stories are based on the availability of large amounts of training data, which may be an obstacle to wide use in behavioral sciences.

3 Behavioral Data

Machine learning and data mining have developed a variety of methods for analyzing behavioral data, ranging from mimicking behavioral traces of human experts (also known as behavioral cloning [Sammut, 1996]) to the analysis of consumer behavior in the form of recommender systems [Jannach et al., 2010]. In this section, we will look at two key enabling technologies, the analysis of log data and the analysis of preferential data.

3.1 Web Log and Mobile Usage Mining

Logs of user interactions with web pages and mobile applications can serve as a trove of data for psychological research seeking to understand, for example, consumer behavior and information foraging strategies. The scientific discipline providing the tools and means for studying this user data in the form of click streams is called web usage mining [Liu, 2011]. Many web usage mining approaches focus on the acquisition and preprocessing of data. These two steps are also the main focus of this section.

Data Collection. For web usage mining, there are principally two ways of collecting user interactions. Historically, the administrators of the servers hosting a web site configured the server so that each request for a web page was logged and stored in a text file. Each record in this web log contains information such as the name of the page requested, a timestamp, the IP address of the visitor, the name of the browser, and the resolution of the screen, providing input for web usage mining. An alternative is to embed Javascript trackers in all web pages of the monitored web site instead of relying on web logs. When a user requests a web page, the script is executed in the user’s browser. It can collect similar types of information as web logs, but it can also interact with the content of the page, acquiring, for example, the price and category of the product displayed. The script can further be extended to track user behavior within the web page, including mouse movements. This information is then typically sent to a remote server, providing web analytics as a service. In general, Javascript trackers offer mostly advantages over web logs, as they can collect more information and are easier to set up and operate. Figure 3A presents an example of a clickstream collected from a travel agency website, and Figure 3B shows the additional information about the content of the page which can be sent by the Javascript tracker.
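For illustration, a single web log record in the widespread “combined” format can be parsed with a regular expression (the log line below is fabricated, in the spirit of the travel-agency example):

```python
import re

# fields: IP, identity, user, [timestamp], "request", status, size,
# "referrer", "user agent (browser)"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<page>\S+) \S+" (?P<status>\d+) \S+ '
    r'"[^"]*" "(?P<browser>[^"]*)"'
)

line = ('203.0.113.7 - - [08/Nov/2019:10:15:32 +0000] '
        '"GET /tours/rome.html HTTP/1.1" 200 5123 '
        '"-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"')

record = LOG_PATTERN.match(line).groupdict()
print(record["page"], record["ip"], record["timestamp"])
```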

Figure 3: Data collection for web usage mining

Data Enrichment. In addition to user interactions, data collection may involve obtaining a semantic description of the data being interacted with, such as the price and category of a product. This information can be sent by the tracked web page. When this is not possible, one can resort to using web crawlers and scrapers. A web crawler is software that downloads web pages and other content from a given list of web sites and stores them in a database. Web scrapers provide means for subsequent processing of the content of the downloaded web pages. Given a description of the information to look for, such as prices or product categories, a scraper finds this information on the provided web page and saves it in a structured way to a database.
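A minimal scraping sketch using only the Python standard library is shown below. The class names marking the price and category ("price", "category") and the HTML snippet are assumptions about a hypothetical page layout, not a universal convention; real scrapers are usually configured per site.

```python
from html.parser import HTMLParser

class ProductScraper(HTMLParser):
    """Extracts text from elements whose class attribute marks a field."""

    def __init__(self):
        super().__init__()
        self._field = None   # field currently being read, if any
        self.data = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in ("price", "category"):
            self._field = cls

    def handle_data(self, data):
        if self._field:
            self.data[self._field] = data.strip()
            self._field = None

page = '<div class="category">Ski trips</div><span class="price">499 EUR</span>'
scraper = ProductScraper()
scraper.feed(page)
```

In practice, dedicated libraries with CSS-selector support are more convenient, but the extraction logic is the same.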

Further enrichment of data can be performed, for example, through mapping IP addresses to regions via dedicated databases and software services. Their outputs include, among other information, zip codes, which might need to be further resolved to variables meaningful for psychological studies. This can be achieved using various openly accessible datasets. For example, for the United States, there is the Income tax statistics dataset, which maps zip codes to several dozen income-related attributes. Other openly available datasets can be used in a similar way. This enrichment is exemplified in Figure 3C-D.

Data Preprocessing and Mining. The output of the data collection phase for web usage mining can be loosely viewed as a set of user interactions. User interactions that take place within a given time frame (such as 30 minutes) are organized into sessions. Each user interaction is also associated with a unique user identifier. When web logs are used, individual records may need to be grouped into sessions by a heuristic algorithm, possibly resulting in some errors. On the other hand, records are naturally grouped into sessions when Javascript-based trackers are used.
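The timeout-based sessionization heuristic can be sketched in a few lines of Python. The user identifiers, pages, and timestamps below are made up for illustration; the 30-minute timeout follows the text.

```python
from datetime import datetime, timedelta

def sessionize(clicks, timeout=timedelta(minutes=30)):
    """clicks: list of (user_id, timestamp, page) tuples, sorted by time.
    A new session starts whenever the gap to the user's previous click
    exceeds the timeout."""
    sessions = {}    # user_id -> list of sessions (each a list of pages)
    last_seen = {}   # user_id -> timestamp of the previous click
    for user, ts, page in clicks:
        if user not in sessions or ts - last_seen[user] > timeout:
            sessions.setdefault(user, []).append([])
        sessions[user][-1].append(page)
        last_seen[user] = ts
    return sessions

t = lambda m: datetime(2019, 11, 8, 10, m)
clicks = [("u1", t(0), "index.html"),
          ("u1", t(5), "Norway.html"),
          ("u1", t(50), "Ski.html")]   # 45-minute gap -> new session
sessions = sessionize(clicks)
```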

Clickstream data are in a sequential format, in which, for example, sequential patterns or rules [Agrawal and Srikant, 1995] can be discovered.

Example. Considering the input presented in Fig. 3A, a minimum support threshold of 30%, a maximum gap of 2 between consecutive elements, and a minimum confidence of 50%, the list of discovered sequential rules includes: IF Norway.html, AlpTrip.html THEN Ski.html, conf = 100%, supp = 50%. This rule says that in all (100%) sessions where the user visited Norway.html and later AlpTrip.html, the user subsequently also visited Ski.html. The number of sessions complying with this rule amounted to 50% of all sessions.

Note that the elements in the consequent of a sequential rule occur at a later time than the elements of the antecedent. As shown in [Liu, 2011, p. 540-543], the sequential representation can also be transformed to a tabular format, which allows for the application of many standard implementations of machine learning algorithms.
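The support and confidence of a single sequential rule can be evaluated directly, as in the Python sketch below. The sessions are made up to mirror the example in the text, and the sketch ignores the maximum-gap constraint for brevity; mining all rules above the thresholds is what dedicated sequential pattern miners do.

```python
def occurs_in_order(pages, session):
    """True if all pages occur in the session in the given order."""
    pos = 0
    for page in session:
        if pos < len(pages) and page == pages[pos]:
            pos += 1
    return pos == len(pages)

def rule_stats(antecedent, consequent, sessions):
    """Support and confidence of the rule IF antecedent THEN consequent,
    where the consequent must occur after the antecedent."""
    body = sum(occurs_in_order(antecedent, s) for s in sessions)
    full = sum(occurs_in_order(antecedent + consequent, s) for s in sessions)
    support = full / len(sessions)
    confidence = full / body if body else 0.0
    return support, confidence

sessions = [
    ["Norway.html", "AlpTrip.html", "Ski.html"],
    ["Norway.html", "AlpTrip.html", "Ski.html", "Contact.html"],
    ["Index.html", "Contact.html"],
    ["Index.html", "Norway.html"],
]
support, confidence = rule_stats(["Norway.html", "AlpTrip.html"],
                                 ["Ski.html"], sessions)
```

On these toy sessions the rule from the example holds with supp = 50% and conf = 100%.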

Applications in Behavioral Sciences

The use of clickstreams has a direct application in the study of consumer behavior. For example, Senecal et al. [2005] examined the use of product recommendations in online shopping. Other related research involves using various cognitive phenomena to explain the effect of online advertisements [Rodgers and Thorson, 2000], determine the visitor’s intent [Moe, 2003], or analyze reasons for impulse buying on the Internet [Koski, 2004]. However, the use of data from web sites does not have to be limited to the study of consumer behavior. For example, primacy and recency effects were used to explain the effect of link position on the probability of a user clicking on the link [Murphy et al., 2006]. Process tracing methods have a rich history in the study of decision making and some methods, for example, mouse tracking analysis [Stillman et al., 2018], can be easily employed with data from Javascript trackers.

3.2 Preference Learning

Preference learning is a recent addition to the suite of learning tasks in machine learning [Fürnkranz and Hüllermeier, 2010]. Roughly speaking, preference learning is about inducing predictive preference models from empirical data, thereby establishing a link between machine learning and research fields related to preference modeling and decision making. The key difference to conventional supervised machine learning settings is that the training information is typically not given in the form of single target values, like in classification and regression, but instead in the form of pairwise comparisons expressing preferences between different objects or labels.

In general, the task of preference learning is to rank a set of objects based on observed preferences. The ranking may also depend on a given context. For example, the preference between red wine and white wine for dinner often depends on the meal one has ordered. Perhaps the best-known instantiation of preference learning is the recommender system [Jannach et al., 2010, Gemmis et al., 2010], which solves the task of ranking a set of products according to their interest to a given user. In many cases, neither the products nor the user are characterized with features, in which case the ranking is based on similarities between the recommendations across users (user-to-user correlation) or items (item-to-item correlation) [Breese et al., 1998]. In other cases, we can observe features of the context, but the objects are only designated with unique labels; this task is known as label ranking [Vembu and Gärtner, 2010]. In object ranking, on the other hand, the objects are described with features, but there is no context information available [Kamishima et al., 2010]. Finally, if both the contexts and the objects are characterized with features, we have the most general ranking problem, dyad ranking [Schäfer and Hüllermeier, 2018], where a set of objects is ranked over a set of different contexts. The best-known example is the problem of learning to rank in Web search, where the objects are web pages, the contexts are search queries, and the task is to learn to rank web pages according to their relevance to a query.

Preferences are typically given in the form of pairwise comparisons between objects. Alternatively, the training information may also be given in the form of (ordinal) preference degrees attached to the objects, indicating an absolute (as opposed to a relative/comparative) assessment.

There are two main approaches to learning representations of preferences, namely utility functions, which evaluate individual alternatives, and preference relations, which compare pairs of competing alternatives. From a machine learning point of view, the two approaches give rise to two different kinds of learning. The latter, learning a preference relation, deviates more strongly from conventional problems like classification and regression, as it involves prediction of complex structures, such as rankings or partial order relations, rather than a prediction of single values. Moreover, training input in preference learning will not be offered in the form of complete examples, as is usually the case in supervised learning, but it may comprise more general types of information, such as relative preferences or different kinds of indirect feedback and implicit preference information. On the other hand, the learning of a utility function, where the preference information is used to learn a function that assigns a numerical score to a given object, is often easier to apply because it enforces transitivity on the predicted rankings.

Applications in Behavioral Sciences

For many problems in the behavioral sciences, people are required to make judgments about the quality of certain courses of action or solutions. However, humans are often not able to determine the precise utility value of an option, but they are typically able to compare the quality of two options. Thurstone's Law of Comparative Judgment essentially states that such pairwise comparisons correspond to an internal, unknown utility scale [Thurstone, 1927]. Recovering this hidden information from such qualitative preferences is studied in various areas such as ranking theory [Marden, 1995], social choice theory [Rossi et al., 2011], voting theory [Coughlin, 2008], sports [Langville and Meyer, 2012], negotiation theory [Druckman, 1993], decision theory [Bouyssou et al., 2002], democratic peace theory [Cuhadar and Druckman, 2014], and marketing research [Rao et al., 2007]. Thus, many results in preference learning are based on established statistical models for ranking data, such as the Plackett-Luce [Plackett, 1975, Luce, 1959] or Bradley-Terry [Bradley and Terry, 1952] models, which allow an analyst to model probability distributions over rankings.
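As an illustration, the Bradley-Terry model can be fitted from raw pairwise comparisons with the classical minorization-maximization iteration of Hunter (2004). This is a minimal sketch, shown in Python; the items and comparison outcomes are invented for illustration.

```python
from collections import Counter

def bradley_terry(comparisons, iterations=100):
    """Fit Bradley-Terry strengths w_i, where P(i beats j) = w_i/(w_i + w_j).
    comparisons: list of (winner, loser) pairs."""
    items = {i for pair in comparisons for i in pair}
    wins = Counter(winner for winner, _ in comparisons)
    pairs = Counter(frozenset(p) for p in comparisons)  # comparison counts
    w = {i: 1.0 for i in items}
    for _ in range(iterations):
        # MM update: w_i <- W_i / sum_j N_ij / (w_i + w_j)
        new = {}
        for i in items:
            denom = sum(n / (w[i] + w[j])
                        for pair, n in pairs.items() if i in pair
                        for j in pair if j != i)
            new[i] = wins[i] / denom if denom else w[i]
        total = sum(new.values())          # normalize to sum to 1
        w = {i: v / total for i, v in new.items()}
    return w

# "a" beats "b" twice and loses once; "a" and "c" split; "b" and "c" split.
strengths = bradley_terry([("a", "b"), ("a", "b"), ("b", "a"),
                           ("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")])
```

The fitted strengths recover the intuitive ordering, with "a" ranked highest.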

Given that preference and ranking problems are ubiquitous, computational models for solving such problems can improve prediction and lead to new insights. For example, in voting theory and social choice, Bredereck et al. [2017] use computational methods to analyze several parliamentary voting procedures.

4 Textual Data

Much data analyzed in the behavioral sciences take the form of text. The rise of online communication has dramatically increased the volume of textual data available to behavioral scientists. In this section, we will review methods developed in computational linguistics and machine learning that can help the researcher to sift through textual data in an automated way. These methods increase the scale at which data can be processed and improve the reproducibility of analyses since a subjective evaluation of a piece of text can be replaced by automated processing, which produces the same results given the same inputs.

We review various methods for representing text with vectors, providing a gateway for further processing with machine learning algorithms. This is followed by methods for text annotation, including additional information, such as parts of speech for individual words or the political orientation of people mentioned in the text. The section concludes with machine learning algorithms for document classification, which operates on top of the vector-based representation of text.

4.1 Word Vectors and Word Embeddings

A vector space model was developed to represent a document in the given collection as a point in a space [Turney and Pantel, 2010]. The position of the document is specified by a vector, which is typically derived from the frequency of occurrence of individual words in the collection. The notion of vector space models was further extended to other uses, including representation of words using their context.

Vector-based representation has important psychological foundations [Hinton et al., 1986, Turney and Pantel, 2010]. Word vectors closely relate to a distributed representation; that is, using multiple (reusable) features to represent a word. Landauer et al. [2013] provide further empirical and theoretical justification for psychological plausibility of selected vector space models.

There are multiple algorithms that can be applied to finding word vectors. They share a common input, an unlabeled collection of documents, and their output can be used to represent each word as a list or vector of weights. Depending on the algorithm, the degree to which the individual weights can be interpreted varies substantially. The algorithms also differ in terms of how much the quality of the resulting vectors depends on the size of the provided collection of documents. Table 2 is aimed at helping the practitioner to find the right method for the task at hand. (It should be emphasized that this comparison is only indicative; for details, see, for example, Edgar et al. [2016], Cimiano et al. [2003].) All of the methods covered in Table 2 are briefly described in the following text.

Method Required data size Features Algorithmic approach
BoW small Explicit (terms) Term-document matrix
ESA medium Explicit (documents) Inverted index
LDA smaller Latent topics Generative model
LSA smaller Latent concepts Matrix factorization
word2vec large Uninterpretable Neural network
GloVe large Uninterpretable Regression model
Table 2: Methods generating word vectors

Bag of Words (BoW)

One of the most commonly applied types of vector space model is based on a term-document matrix, where rows correspond to terms (typically words) and columns to documents. For each term, the matrix expresses the number of times it appears in the given document. This representation is also called a bag of words. The term frequencies (TFs) act as weights that represent the degree to which the given word describes the document. To improve results, these weights are further adjusted through normalization or through computing inverse document frequencies (IDFs) in the complete collection. IDF reflects the observation that rarer terms – those that appear only in a small number of documents – are more useful in discriminating documents in the collection from each other than terms that tend to appear in all or most documents. A bag-of-words representation incorporating IDF scores is commonly referred to as TF-IDF.
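The TF-IDF weighting can be computed from scratch in a few lines. The sketch below, in Python, uses the textbook formulation idf = log(N/df) over a toy three-document corpus; production implementations (e.g., in R's tm package or scikit-learn) differ in smoothing details.

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of tokenized documents. Returns one {term: weight}
    dictionary per document, with weight = tf * log(N / df)."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

docs = [["ski", "trip", "norway"],
        ["ski", "trip", "alps"],
        ["income", "tax", "statistics"]]
vectors = tfidf(docs)
```

Note how the rarer term "norway" (in one document) receives a higher weight than "ski" (in two), matching the IDF intuition above.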

Semantic Analysis

The explicit semantic analysis (ESA) approach [Gabrilovich and Markovitch, 2007] represents a particular word using a weighted list of documents (typically Wikipedia articles). ESA represents words based on an inverted index, which it builds from documents in the provided knowledge base. (ESA assumes that the documents in the collection form a knowledge base, with each document covering a different topic.) Each dimension in a word vector generated by ESA corresponds to a document in the training corpus, and the specific weight indicates to what extent that document represents the given word.

Latent semantic analysis (LSA) [Landauer and Dumais, 1997] and latent Dirichlet allocation (LDA) [Blei et al., 2003] are two older, well-established algorithms, which are often used for topic modeling, namely, the identification of topics or concepts best describing a given document in the collection. The concepts and topics produced by these methods are latent. That is, LDA topics are not given an explicit label by the method (such as “finances”), but instead can be interpreted through weights of associated words (such as “money” or “dollars” [Chen and Wojcik, 2016]).

Semantic Embeddings

Word2vec [Mikolov et al., 2013] is a state-of-the-art approach to generating word vectors. The previously covered algorithms generate interpretable word vectors essentially based on analyzing counts of occurrences of words. A more recent approach is based on predictive models. These use a predictive algorithm – word2vec uses a neural network – to predict a word given a particular context or vice versa. Word vectors created by word2vec (and related algorithms) are sometimes called word embeddings: an individual word is represented by a list of weights (real numbers).

GloVe (Global Vectors for Word Representation) [Pennington et al., 2014] is an algorithm inspired by word2vec, which uses a weighted least squares model trained on global word-word co-occurrence counts. Word embeddings trained by the GloVe algorithm do particularly well on word analogy tasks, where the goal is to answer questions such as “Athens is to Greece as Berlin is to ...?”.

Quality of Results vs. Interpretability of Word Vectors

Predictive algorithms such as word2vec have been shown to provide better results than models based on analyzing counts of co-occurrence of words across a range of lexical semantic tasks, including word similarity computation [Baroni et al., 2014]. While the individual dimensions in word2vec or GloVe models do not directly correspond to explicit words or concepts as in ESA, distances between word vectors can be computed to find analogies and compute word similarities (see Figure 4).
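The analogy mechanism reduces to vector arithmetic plus a nearest-neighbour search under cosine similarity. The Python sketch below uses hand-made 3-dimensional vectors so that it is self-contained; real embeddings (word2vec, GloVe) have hundreds of dimensions and are learned from corpora.

```python
import math

# Toy vectors: dimensions loosely encode "Greek", "German", "is a city".
vectors = {
    "athens":  [0.9, 0.1, 0.8],
    "greece":  [0.9, 0.1, 0.1],
    "berlin":  [0.1, 0.9, 0.8],
    "germany": [0.1, 0.9, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    """Word closest to vec(a) - vec(b) + vec(c), excluding the inputs."""
    target = [x - y + z
              for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

# "Athens is to Greece as ? is to Germany"
answer = analogy("athens", "greece", "germany")
```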

Figure 4: Nearest words to the word “anger” (Embedding Projector, Word2Vec 10K model)

Applications in Behavioral Sciences

Caliskan et al. [2017] have shown that semantic association of words measured using the distance of their embeddings generated by the GloVe algorithm can reproduce results obtained with human subjects using the implicit association test. The results suggest that implicit associations might be partly influenced by similarities of words that co-occur with concepts measured by the implicit association test. The method could also be fruitful in predicting implicit associations and examining possible associations of people in the past. Word embeddings might also be useful for the preparation of stimuli in tasks where semantic similarity of words is important, such as in semantic priming or memory research. The method provides a means of creating stimuli and can also be used to measure semantic similarity in models of performance on tasks depending on the semantic similarity of words. For example, Howard and Kahana [2002] used LSA to examine the semantic similarity of words recalled in sequence in a memory study. Similarly, the Deese-Roediger-McDermott paradigm [Roediger and McDermott, 1995] uses semantically related words to elicit false memories. The described methods could then be used to measure the semantic similarity of words, which could influence the probability or strength of the false memories.

The LDA algorithm is typically used for topic modeling. Based on an analysis of the input documents, it generates a list of topics, and each document is assigned a list of scores expressing the degree to which the document corresponds to each of the topics. Recent uses of LDA and word2vec include the detection of fake news on Twitter [Helmstetter and Paulheim, 2018]. For other examples of uses of the LSA and LDA algorithms in a psychological context, we refer the reader to Chen and Wojcik [2016], Edgar et al. [2016].

4.2 Text Annotation

Textual documents can be extended with an additional structure using a variety of algorithms developed for natural language processing.

Syntactic Parsing

Analysis of a textual document often starts with syntactic tagging. This breaks the input text into tokens and associates the tokens with tags, such as parts of speech and punctuation. Syntactic parsing may also group tokens into larger structures, such as noun chunks or sentences. Other types of processing include lemmatization, that is, reducing the different forms of a word to one single form, which is particularly important for inflectional languages, such as Czech.

The result of syntactic parsing is typically used in further linguistic processing, but it also serves as a source of insights on the writing style of a particular group of subjects [O’dea et al., 2017].

Named Entity Recognition (NER)

Syntactic parsing can already output noun chunks, such as names consisting of multi-word sequences (“New York”). Named entity recognition goes one step further, by associating each of these noun chunks with an entity type. The commonly recognized types of entities are [Tjong Kim Sang and De Meulder, 2003]: persons, locations, organizations, and miscellaneous entities that do not belong to the previous three groups.

NER systems are pretrained on large tagged textual corpora and are thus generally language dependent. Adjusting them to a different set of target classes requires a substantial amount of resources, particularly of tagged training data.

Wikification: Linking Text to Knowledge Graphs

The NER results are somewhat limited in terms of the small number of types recognized and lack of additional information on the entity. A process popularly known as wikification addresses these limitations by linking entities to external knowledge bases. The reason why this process is sometimes called wikification [Mihalcea and Csomai, 2007] is that multiple commonly used knowledge bases are derived from Wikipedia.

The first step in entity linking is called mention detection. The algorithm identifies parts of the input text, which can be linked to an entity in the domain of interest. For example, for input text “Diego Maradona scored a goal”, mention detection will output “Diego Maradona” or the corresponding positions in the input text.

When mentions have been identified, the next step is their linking to the knowledge base. One of the computational challenges in this process is the existence of multiple matching entries in the knowledge base for a given mention. For example, the word “Apple” appearing in an analyzed Twitter message can be disambiguated in Wikipedia to Apple_Inc or Apple (fruit).

URI         support  types                         surfaceForm  offset  sim   perc
Apple_Inc.  14402    Organisation, Company, Agent  Apple Inc.   5       1.00  2.87E-06
Steve_Jobs  1944     Person, Agent                 Steve Jobs   27      1.00  8.66E-11
ITunes      13634    Work, Software                iTunes       53      0.98  2.12E-02
Table 3: Wikification example for the input text “Late Apple Inc. Co-Founder Steve Jobs ’Testifies’ In iTunes Case”, generated by DBpedia Spotlight. The columns have the following meaning. URI: the linked entity (values were stripped of the leading namespace), support: indicates how prominent the entity is, measured by the number of inlinks in Wikipedia, types: the entity types (also stripped of the leading namespace), surfaceForm: the entity as it appears in the input tweet, offset: the starting position of the mention in the input tweet in characters, sim: similarity between context vectors and the context surrounding the surface form, perc (percentageOfSecondRank): indicates confidence in disambiguation (the lower this score, the further the first-ranked entity was “in the lead”).

Always assigning the most frequent meaning of the given word has been widely adopted as a baseline in word sense disambiguation research [Navigli, 2009]. When entity linking is performed, the knowledge base typically provides a machine-readable entity type, which might be more fine-grained than the type assigned by NER systems. An example of a wikification output is shown in Table 3.
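The most-frequent-sense baseline is trivially implementable once candidate entities with prominence counts are available, as in the Python sketch below. The candidate lists are hypothetical (the Apple_Inc. and Steve_Jobs support counts echo Table 3; the competing entries and their counts are invented for illustration).

```python
# mention -> list of (knowledge-base entry, prominence count) candidates.
# Counts for the competing senses are made up for this example.
candidates = {
    "Apple": [("Apple_Inc.", 14402), ("Apple_(fruit)", 3201)],
    "Jobs":  [("Steve_Jobs", 1944), ("Job_(role)", 410)],
}

def disambiguate(mention):
    """Most-frequent-sense baseline: pick the most prominent candidate."""
    return max(candidates[mention], key=lambda entry: entry[1])[0]

linked = {mention: disambiguate(mention) for mention in candidates}
```

Systems such as DBpedia Spotlight refine this baseline with contextual similarity, as the sim and perc columns of Table 3 indicate.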

Entity Salience and Text Summarization

When text is represented by entities, an optional processing step is to determine the level of salience of the entity in the text. Entities with high salience can help to summarize content of longer documents, but the output of entity salience algorithms can also serve as input for subsequent processing, such as document classification.

Supervised entity salience algorithms, such as the one described by Gamon et al. [2013], are trained on a number of features derived from the entity mention (e.g., whether the word starts with an upper-case or lower-case letter), from the local context (how many characters the entity is from the beginning of the document), and from the global context (how frequently the entity occurs in inlinks and outlinks). Knowledge bases can be used as a complementary source of information [Dojchinovski et al., 2016].

Sentiment Analysis

With the proliferation of applications in social media, the analysis of sentiment and related psychological properties of text has gained in importance. Sentiment analysis encompasses multiple tasks, such as determining the valence and intensity of sentiment, determination of subjectivity, and detection of irony [Serrano-Guerrero et al., 2015].

Most systems rely on lexicon-based analysis, machine learning, or a combination of both approaches. Lexicon-based approaches rely on the availability of lists of words, terms, or complete documents that are preclassified into different categories of sentiment. A well-known example developed for psychometric purposes is the LIWC2015 Dictionary, which assigns 6,400 words to several dozen nuanced classes such as swear words, netspeak, or religion [Pennebaker et al., 2015].
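A minimal lexicon-based scorer illustrates the approach: words are pre-classified as positive or negative, and a document is scored by the difference of counts normalized by length. The tiny word lists below are invented for illustration; real systems use resources such as LIWC with thousands of entries and handle negation, intensifiers, and irony.

```python
# Illustrative miniature sentiment lexicon (not from LIWC or any real resource).
POSITIVE = {"happy", "great", "love", "excellent"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def sentiment(text):
    """Score in [-1, 1]: (positive hits - negative hits) / word count."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return score / len(words) if words else 0.0

score = sentiment("I love this great hotel")
```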

Applications in Behavioral Sciences

Entities linked to knowledge graphs can be used to improve the results of many natural language processing tasks. Troisi et al. [2018], for example, studied variables influencing the choice of a university by using wikification to find topics discussed in the context of writing about universities in various online sources. External information can be particularly useful in domains where the available documents are short and thus do not contain much information. To this end, Varga et al. [2014] report a significant improvement in performance when the content of tweets is linked to knowledge graphs, as opposed to using only the lexical content of the input tweets.

The LIWC system has been widely used in the behavioral sciences (see, for example, Donohue et al. [2014]). Among other topics, it has been used to study close relationships, group processes, deception, and thinking styles [Tausczik and Pennebaker, 2010]. In general, it can easily be used to study differences in the communication of various groups. For example, it was used by Sylwester and Purver [2015] to analyze psychological differences between Democrats and Republicans. This research focused on general linguistic features, such as part-of-speech tags and sentiment analysis. The study found, for example, that those who identified as Democrats more commonly used first-person singular pronouns, and that the expression of positive emotions was positively correlated with following Democrats, but not Republicans.

Many uses of sentiment analysis deal with microposts such as Twitter messages. Examples of this research include characterization of debate performance [Diakopoulos and Shamma, 2010] or analysis of polarity of posts [Speriosu et al., 2011].

4.3 Document Classification

Document classification is a common task performed on top of a vector space representation of text, such as bag of words, but document classification algorithms can also take advantage of entity-annotated text [Varga et al., 2014]. The goal of document classification is to assign each document in a given corpus to one of a set of document categories. The training data consist of documents for which the target class is already known and specified in the input data.

In the following, we describe a centroid-based classifier, a well-performing algorithm. Next, we cover a few additional algorithms and tasks.

Centroid Classifier [Han and Karypis, 2000]. The centroid classifier is one of the simplest classifiers working on top of the BoW representation. The input for the training phase is a set of documents for each target class, and the output is a centroid for each category. A centroid is a word vector intended to represent the documents in the category; it is computed as the average of the word vectors of the documents belonging to the category.

The application of the model works as follows. For each test document with an unknown class, its similarity to all target classes is computed using a selected similarity measure. The class with the highest similarity is selected. There are several design choices when implementing this algorithm, such as the word weighting method, document length normalization, and the similarity measure. The common approach to the first two choices is TF-IDF, covered in the “Word Vectors and Word Embeddings” subsection, and L1 normalization. L1 normalization is performed by dividing each element in the given vector by the sum of absolute values of all elements in the vector. The similarity measure used for document classification is typically cosine similarity.
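The two phases just described fit in a short Python sketch. For brevity it uses raw term frequencies with L1 normalization and cosine similarity; a fuller implementation would use TF-IDF weights, as discussed above. The training documents and class labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def l1_normalize(vec):
    """Divide each element by the sum of absolute values of all elements."""
    total = sum(abs(v) for v in vec.values())
    return {t: v / total for t, v in vec.items()}

def centroid(docs):
    """Average of the L1-normalized term-frequency vectors of the docs."""
    acc = defaultdict(float)
    for doc in docs:
        for t, v in l1_normalize(Counter(doc)).items():
            acc[t] += v / len(docs)
    return dict(acc)

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

training = {
    "travel":  [["ski", "trip", "norway"], ["alps", "ski", "hotel"]],
    "finance": [["income", "tax", "bank"], ["tax", "statistics", "income"]],
}
centroids = {label: centroid(docs) for label, docs in training.items()}

def classify(doc):
    """Assign the class whose centroid is most similar to the document."""
    vec = l1_normalize(Counter(doc))
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))
```

For example, `classify(["ski", "holiday", "norway"])` is assigned to the travel class even though "holiday" was never seen in training.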

Other Tasks and Approaches

The centroid classifier is a simple approach, which has the advantage of good interpretability. The simplicity of the algorithm can make it a good choice for large datasets. Centroid-based classifiers have been noted to perform excellently on multiple different collections of documents, but they are not suitable for representing classes that contain fine-grained subclasses [Pang et al., 2015].

The support vector machine (SVM) [Boser et al., 1992] is a frequently used algorithm for text classification, which can be adapted to some types of problems where centroid-based classification cannot reasonably be used. According to experiments reported by Pang et al. [2015], the SVM is a recommended algorithm for large balanced corpora, that is, corpora with a similar proportion of documents belonging to the individual classes. SVMs can also be adapted to hierarchical classification, where target classes can be further subdivided into subclasses [Dumais and Chen, 2000]. Another adaptation of the text classification problem is multilabel text classification, where a document is assigned multiple categories.

Applications in Behavioral Sciences

Document classification methods have varied uses. One possible use is in predicting a feature of a person based on a text they wrote. For example, using a training set of documents, it is possible to train a model to distinguish between documents written by men and women. Given a document for which an author is not known, the algorithm may be able to say whether the document was more likely to be written by a man or a woman. Similarly, in [Komisin and Guinn, 2012], SVM and Bayes classifiers were used to identify persona types based on word choice. Profiling using SVMs was also successfully applied for distinguishing among fictional characters [Flekova and Gurevych, 2015].

The use of document classification can be further extended. Once the model is trained to classify documents using a list of features, it is possible to study and interpret the distinguishing features themselves. That is, it might be of interest not only to be able to predict the gender of the author of a document but also to say what aspects of the documents written by males and females differ.

5 External Knowledge Sources

Enrichment with external knowledge can be used to improve results of machine learning tasks, but the additional information can also help to gain new insights about the studied problem [Paulheim, 2018].

Two major types of knowledge sources for the machine learning tasks covered in this article are knowledge graphs and lexical databases. In this section, we cover DBpedia and Wikidata, prime examples of knowledge graphs that are semi-automatically generated from Wikipedia. As a lexical database, we cover WordNet, an expert-created thesaurus with thousands of applications across many disciplines.

5.1 Knowledge Graphs

Resources providing a mix of information in a structured and unstructured format are called knowledge bases. A knowledge base can be called a knowledge graph when the information contained in it has a network structure and can be obtained with structured queries. (There is no universal graph query language used to obtain information from knowledge graphs, but the openly available knowledge graphs covered in this section support SPARQL [Harris et al., 2013].) The goal of a typical query is to retrieve a list of entities along with their selected properties, given a set of conditions. An entity roughly corresponds to a thing in human knowledge described by the knowledge graph.

DBpedia [Lehmann et al., 2015] is one of the largest and oldest openly available knowledge graphs. The English version of DBpedia covers more than 6 million entities, and the graph is also available for multiple other languages. For a knowledge base to contain information on an entity, it must have been previously populated. DBpedia is populated mostly by algorithms analyzing semistructured documents (Wikipedia articles).

Wikidata [Vrandečić and Krötzsch, 2014] is another widely used knowledge graph, which has been available since 2012. Wikidata currently contains information on 45 million items or entities. Similar to DBpedia, Wikidata is partly populated by robots extracting data from Wikipedia, but it also allows the general public to contribute.

Information from DBpedia and Wikidata can be obtained either through a web interface, with a SPARQL query, or by downloading the entire knowledge graph.

Other Knowledge Graphs

Thanks to the use of global identifiers for entities and their properties, many knowledge graphs are connected into the Linked Open Data Cloud. A list of more than 1,000 knowledge graphs, cataloged by domain (such as life sciences, linguistics, or media), is maintained online.

In addition to open initiatives, there are proprietary knowledge graphs, which can be accessed via various APIs. These include Google Knowledge Graph Search API, Microsoft’s Bing Entity Search API, and Watson Discovery Knowledge Graph.

Applications in Behavioral Sciences

One of the main uses of knowledge graphs in the behavioral sciences is in the study of the spread of disinformation [Luca Ciampaglia et al., 2015, Fernandez and Alani, 2018]. DBpedia is used for computational fact-checking in several systems, including DeFacto [Gerber et al., 2015]. Knowledge graphs are also used to enhance the understanding of text by linking keywords and entities appearing in it to more general concepts. DBpedia has also been used to analyze the discourse of extremism-related content, including the detection of offensive posts [Halloran et al., 2016, Soler-Company and Wanner, 2019, Saif et al., 2017].

5.2 WordNet and Related Lexical Resources

WordNet is a large English thesaurus created at Princeton University [Fellbaum, 2010]. It covers nouns, verbs, adjectives, and adverbs. Synonyms are grouped together into synsets, that is, sets of synonyms. In WordNet 3.0, there are about 150,000 words grouped into more than 100,000 synsets. For each synset, a short dictionary explanation called a gloss is available. Several types of relations between synsets are captured, depending on the type of synset, such as hypo-hypernymy, antonymy, or holonymy-meronymy. For example, for the noun “happiness” WordNet returns the synonym “felicity”, and for “sad” the antonym “glad”.
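The structure just described can be illustrated with a toy, hand-built miniature of WordNet-style synsets in Python; the entries, glosses, and relations below are invented for illustration and are not taken from the real WordNet, which is typically accessed through a library such as NLTK.

```python
# A hypothetical miniature of WordNet-style synsets: each synset groups
# synonymous lemmas, carries a gloss, and points to antonymous synsets.
synsets = {
    "happiness.n.01": {
        "lemmas": {"happiness", "felicity"},
        "gloss": "state of well-being characterized by contentment",
        "antonyms": {"sadness.n.01"},
    },
    "sadness.n.01": {
        "lemmas": {"sadness", "unhappiness"},
        "gloss": "emotions experienced when not in a state of well-being",
        "antonyms": {"happiness.n.01"},
    },
}

def synonyms(word):
    """All lemmas sharing a synset with the given word (excluding itself)."""
    result = set()
    for entry in synsets.values():
        if word in entry["lemmas"]:
            result |= entry["lemmas"] - {word}
    return result

print(synonyms("happiness"))  # {'felicity'}
```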

Use for Word Similarity Computation

WordNet is also an acclaimed lexical resource that is widely used in the literature for word similarity and word sense disambiguation computations. With WordNet, one can algorithmically compute the semantic similarity between a word and one or more other words. There are many algorithms – or formulas – for this purpose, which differ predominantly in the way they use the paths between the two words in the WordNet thesaurus, as well as in the way they use external information, such as how rare the given word is in some large collection of documents. Well-established algorithms include the Resnik [Resnik, 1995] and Lin [Lin, 1998] measures. A notable example in the behavioral context is the Pirró and Seco measure [Pirró and Seco, 2008], which is inspired by the feature-based theory of similarity proposed by Tversky [1977].
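As an illustration of how such formulas work, the sketch below implements Lin's measure over a hypothetical mini-taxonomy with invented concept probabilities; real implementations operate over the full WordNet hierarchy, with probabilities estimated from corpus frequencies.

```python
import math

# Hypothetical mini-taxonomy: child -> parent (invented for illustration).
parent = {"dog": "animal", "cat": "animal", "car": "vehicle",
          "animal": "entity", "vehicle": "entity"}

# Hypothetical concept probabilities; a concept subsumes its descendants,
# so probabilities grow toward the root, which has probability 1.
prob = {"dog": 0.10, "cat": 0.10, "animal": 0.25,
        "car": 0.15, "vehicle": 0.20, "entity": 1.0}

def ancestors(c):
    """The concept itself plus all of its ancestors up to the root."""
    chain = [c]
    while c in parent:
        c = parent[c]
        chain.append(c)
    return chain

def ic(c):
    """Information content: rarer concepts are more informative."""
    return -math.log(prob[c])

def lcs(a, b):
    """Lowest common subsumer: the most specific shared ancestor."""
    anc_b = set(ancestors(b))
    for c in ancestors(a):
        if c in anc_b:
            return c

def lin_similarity(a, b):
    """Lin's measure: shared information relative to total information."""
    return 2 * ic(lcs(a, b)) / (ic(a) + ic(b))

print(round(lin_similarity("dog", "cat"), 3))  # related via "animal"
print(round(lin_similarity("dog", "car"), 3))  # only share the root
```

Since the root carries zero information content, concepts whose only common ancestor is the root receive a similarity of zero, while identical concepts score one.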

Use for Sentiment Analysis

Further elaborating on the variety of possible uses of WordNet, recent research has provided an extension called “WordNet-feelings” [Siddharthan et al., 2018], which assigns more than 3,000 WordNet synsets to nine categories of feeling. A related resource used for sentiment classification is SentiWordNet [Baccianella et al., 2010].
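To illustrate how such a lexical resource can drive a simple sentiment analysis, the sketch below scores text against a hypothetical miniature lexicon in the style of SentiWordNet; the word scores are invented, and real applications would additionally handle word senses and negation.

```python
# A hypothetical miniature lexicon in the style of SentiWordNet:
# each word maps to (positivity, negativity) scores in [0, 1].
# The scores below are invented for illustration.
lexicon = {
    "happy": (0.80, 0.00),
    "glad": (0.70, 0.05),
    "sad": (0.00, 0.75),
    "report": (0.00, 0.00),
}

def sentiment(text):
    """Average positivity minus negativity over the words found in the
    lexicon; words not covered by the lexicon are ignored."""
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    if not scores:
        return 0.0
    return sum(p - n for p, n in scores) / len(scores)

print(sentiment("I am happy and glad"))  # positive score
print(sentiment("a sad report"))         # negative score
```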

Applications in Behavioral Sciences

WordNet is often used in the behavioral sciences to complement free association norms, which are costly and time-consuming to develop [Maki et al., 2006]. Maki et al. [2004] showed that semantic distance computed from WordNet is related to participants’ judgment of similarity.

Specific uses of WordNet in behavioral research include studies of perceptual inference [Johns and Jones, 2012], access to memory [Buchanan, 2010], and predicting survey responses [Arnulf et al., 2014]. For example, Arnulf et al. [2014] showed that semantic similarity of items computed with an algorithm using WordNet predicted observed reliabilities of scales as well as associations between different scales.

6 Related Work

In this section, we point readers to several works that also aimed at communicating recent advances in machine learning algorithms and software to researchers in behavioral science. McArdle and Ritschard [2013] provide an edited volume exploring many topics and applications at the intersection of exploratory data mining and the behavioral sciences. Methodologically, the book has a strong focus on decision tree learning, exploring its use in areas as diverse as life-course analysis, the identification of academic risks, and clinical prediction, to name but a few.

Tonidandel et al. [2018] provide a discussion of “big data” methods applicable to the organizational sciences, which is complemented by a list of various software systems across different programming languages (Python, R, …), environments (cloud, desktop), and tasks (visualization, parallel computing, …). Varian [2014] reviews selected “big data” methods in the context of econometrics, focusing on decision trees and random forests.

Chen and Wojcik [2016] give a practical introduction to “big data” research in psychology, providing an end-to-end guide covering topics such as the selection of a suitable database and options for data acquisition and preprocessing, with attention to web-based APIs and the processing of HTML data. Their article focuses on methods suitable for text analysis, giving a detailed discussion, including worked examples, for selected methods (LSA, LDA). There is also a brief overview of the main subtasks in data mining, such as classification or clustering, as well as advice on processing large datasets, referring to the MapReduce framework.
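Since Chen and Wojcik point readers to the MapReduce framework, its core idea can be sketched in plain Python: a map step emits key-value pairs, a shuffle groups the pairs by key, and a reduce step aggregates each group. Real frameworks such as Hadoop or Spark distribute these steps across machines; the toy word count below runs on a single one.

```python
from collections import defaultdict

def map_step(document):
    """Emit a (word, 1) pair for every word in one document."""
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    """Group emitted values by key, as the framework would do
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_step(key, values):
    """Aggregate all values for one key: here, a word count."""
    return key, sum(values)

documents = ["big data big models", "big ideas"]
pairs = [pair for doc in documents for pair in map_step(doc)]
counts = dict(reduce_step(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 3, 'data': 1, 'models': 1, 'ideas': 1}
```

Because each document is mapped independently and each key is reduced independently, both phases parallelize naturally, which is what makes the pattern suitable for very large collections.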

Machine Learning vs. Big Data

While many articles use the term “big data”, most data sets in behavioral science would not qualify. According to Kitchin [2017] and Gandomi and Haider [2015], big data consist of terabytes or more of data. Processing data of this size requires adapting existing algorithms so that they can be executed in parallel in cloud- or grid-based computational environments. R users have the option to use some of the R packages for high-performance computing. Examples of dedicated big data architectures include Apache Spark and cloud-based machine learning services [Hashem et al., 2015].

Machine Learning as a Service

In this article, we focused on packages available in the R ecosystem. (For a general introductory reference to R, we refer, e.g., to Torgo [2010].) The R data frame, usually used to store research data, is principally limited to data that do not exceed the size of available memory [Lantz, 2015, p. 399], which constrains the size of data that can be analyzed with packages using this structure. As noted above, there are several options for scaling to larger data, but the behavioral scientist may find it most convenient to use a cloud-based machine learning (MLaaS) system, such as BigML.

MLaaS systems provide a comfortable web-based user interface, do not require installation or programming skills, and can process very large datasets. The disadvantages of using API-based or web tools such as MLaaS include impeded reproducibility of studies that used them for analysis: because these systems are frequently updated, a researcher reproducing the analysis may not be able to employ the specific release of the system that was used to generate the results.

7 Conclusion

The continuing shift of communication and interaction channels to online media provides a new set of challenges and opportunities for behavioral scientists. The fact that much interaction is performed online also allows for evolution in research methods. For example, certain research problems may no longer require costly laboratory studies, as suitable data can be obtained from the logs of interactions automatically created by social networking applications and web sites. This article aimed to introduce a set of methods that allow such data to be analyzed in a transparent and reproducible way. Where possible, we therefore suggested software released under an open source license.

We put emphasis on selecting proven algorithms, favoring those that generate interpretable models that can be easily understood by a wide range of users. When easy-to-interpret models lead to worse results than more complex models, it is possible to use the latter to improve the former. For example, Agrawal et al. [2019] used neural networks to predict moral judgments. Because the neural network model was itself not easily interpretable, they looked at situations where the neural network model fared particularly well in comparison to a simpler, but more easily interpretable, choice model. They then iteratively updated the choice model to better predict judgments in situations where the neural network model predicted better. A similar strategy can be used generally by behavioral scientists if the interpretability of the models is considered valuable.

There are several other noteworthy areas of machine learning that could be highly relevant to particular subdomains of behavioral science but that we left uncovered due to space constraints. These include reinforcement learning, image processing, and the discovery of interesting patterns in data. Another interesting technological trend in how data are collected and processed is the connection between crowdsourcing services and Machine Learning as a Service offerings. Crowdsourcing may decrease costs by outsourcing some parts of research, such as finding and recruiting participants, and can also aid replicability by engaging large and varied participant samples.

Employing MLaaS systems may have benefits in terms of setup costs, ease of processing, and the security of the stored data. On the other hand, experimenters relying on crowdsourcing lose control of the laboratory environment, and MLaaS may impede the reproducibility and accountability of the analysis, since the results of these systems may vary over time as they are frequently updated. See the article by Crump [2019] in this issue on the challenges of recruiting participants.

Overall, we expect that the largest challenge for the behavioral scientist in the future will not be the choice or availability of suitable machine learning methods. More likely, it will be ensuring compliance with external constraints and requirements concerning ethical, legal, and reproducible aspects of the research.


The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: TK was supported by the Faculty of Informatics and Statistics, University of Economics, Prague, by grant IGA 33/2018 and by institutional support for research projects. TK would like to thank BigML Inc. for providing a subscription that allowed the processing of large datasets to be tested free of charge. The work of ŠB was supported by the Internal Grant Agency of the Faculty of Business Administration, University of Economics, Prague (Grant No. IP300040).


  • Agrawal et al. [2019] Mayank Agrawal, Joshua C Peterson, and Thomas L Griffiths. Using machine learning to guide cognitive modeling: A case study in moral reasoning. arXiv preprint arXiv:1902.06744, 2019.
  • Agrawal and Srikant [1995] Rakesh Agrawal and Ramakrishnan Srikant. Mining sequential patterns. In Data Engineering, 1995. Proceedings of the Eleventh International Conference on, pages 3–14. IEEE, 1995.
  • Agrawal et al. [1993] Rakesh Agrawal, Tomasz Imieliński, and Arun Swami. Mining association rules between sets of items in large databases. In Acm sigmod record, volume 22, pages 207–216. ACM, 1993.
  • Alcala-Fdez et al. [2011] Jesús Alcala-Fdez, Rafael Alcala, and Francisco Herrera. A fuzzy association rule-based classification model for high-dimensional problems with genetic rule selection and lateral tuning. IEEE Transactions on Fuzzy systems, 19(5):857–872, 2011.
  • Andrews et al. [1995] Robert Andrews, Joachim Diederich, and Alan B. Tickle. Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8(6):373–389, 1995.
  • Arnulf et al. [2014] Jan Ketil Arnulf, Kai Rune Larsen, Øyvind Lund Martinsen, and Chih How Bong. Predicting survey responses: How and why semantics shape survey statistics on organizational behaviour. PloS one, 9(9):e106361, 2014.
  • Atzmueller [2015] Martin Atzmueller. Subgroup discovery. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 5(1):35–49, 2015.
  • Baccianella et al. [2010] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC, volume 10, pages 2200–2204, 2010.
  • Baroni et al. [2014] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 238–247, 2014.
  • Behrenbruch et al. [2012] Kay Behrenbruch, Martin Atzmüller, Christoph Evers, Ludger Schmidt, Gerd Stumme, and Kurt Geihs. A personality based design approach using subgroup discovery. In Marco Winckler, Peter Forbrig, and Regina Bernhaupt, editors, Human-Centered Software Engineering, pages 259–266, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. ISBN 978-3-642-34347-6.
  • Billari et al. [2006] Francesco C. Billari, Johannes Fürnkranz, and Alexia Prskawetz. Timing, sequencing, and quantum of life course events: A machine learning approach. European Journal of Population, 22(1):37–65, 2006.
  • Blei et al. [2003] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022, 2003.
  • Boser et al. [1992] Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pages 144–152. ACM, 1992.
  • Bouyssou et al. [2002] Denis Bouyssou, Eric Jacquet-Lagreze, Patrice Perny, Roman Słowiński, Daniel Vanderpooten, and Philippe Vincke, editors. Aiding Decisions with Multiple Criteria — Essays in Honor of Bernard Roy. Kluwer Academic Publishers, Boston, 2002.
  • Bradley and Terry [1952] Ralph A. Bradley and Milton E. Terry. The rank analysis of incomplete block designs — I. The method of paired comparisons. Biometrika, 39:324–345, 1952.
  • Bredereck et al. [2017] Robert Bredereck, Jiehua Chen, Rolf Niedermeier, and Toby Walsh. Parliamentary voting procedures: Agenda control, manipulation, and uncertainty. Journal of Artificial Intelligence Research, 59:133–173, 2017.
  • Breese et al. [1998] John S. Breese, David Heckerman, and Carl Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In G.F. Cooper and S. Moral, editors, Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98), pages 43–52, Madison, WI, 1998. Morgan Kaufmann.
  • Breiman [2001] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
  • Breiman et al. [1984] Leo Breiman, Jerome H. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth & Brooks, Pacific Grove, CA, 1984.
  • Buchanan [2010] E Buchanan. Access into memory: Differences in judgments and priming for semantic and associative memory. Journal of Scientific Psychology, 1:1–8, 2010.
  • Caliskan et al. [2017] Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
  • Chen and Wojcik [2016] Eric Evan Chen and Sean P Wojcik. A practical guide to big data research in psychology. Psychological Methods, 21(4):458, 2016.
  • Cimiano et al. [2003] Philipp Cimiano, Antje Schultz, Sergej Sizov, Philipp Sorg, and Steffen Staab. Explicit versus latent concept models for cross-language information retrieval. In IJCAI, 2003.
  • Cohen [1995] William W. Cohen. Fast effective rule induction. In Proceedings of the Twelfth International Conference on International Conference on Machine Learning, ICML’95, pages 115–123, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. ISBN 1-55860-377-8.
  • Coughlin [2008] Peter J. Coughlin. Probabilistic Voting Theory. Cambridge University Press, 2008.
  • Crump [2019] Larry Crump. Conducting field research effectively. American Behavioral Scientist, page 0002764219859624, 2019.
  • Cuhadar and Druckman [2014] Esra Cuhadar and Daniel Druckman. Representative decision-making: challenges to democratic peace theory. In Handbook of International Negotiation, pages 3–14. Springer, 2014.
  • Dash et al. [2018] Sanjeeb Dash, Oktay Günlük, and Dennis Wei. Boolean decision rules via column generation. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31 (NeurIPS-18), pages 4660–4670, Montréal, Canada, 2018.
  • De Rose and Pallara [1997] A. De Rose and A. Pallara. Survival trees: An alternative non-parametric multivariate technique for life history analysis. European Journal of Population, 13:223–241, 1997.
  • Deng and Liu [2018] Li Deng and Yang Liu. Deep Learning in Natural Language Processing. Springer-Verlag, 2018.
  • Diakopoulos and Shamma [2010] Nicholas A. Diakopoulos and David A. Shamma. Characterizing debate performance via aggregated twitter sentiment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, pages 1195–1198, New York, NY, USA, 2010. ACM. ISBN 978-1-60558-929-9.
  • Dojchinovski et al. [2016] Milan Dojchinovski, Dinesh Reddy, Tomas Kliegr, Tomas Vitvar, and Harald Sack. Crowdsourced corpus with entity salience annotations. In LREC, 2016.
  • Donohue et al. [2014] William A Donohue, Yuhua Liang, and Daniel Druckman. Validating liwc dictionaries: the oslo i accords. Journal of Language and Social Psychology, 33(3):282–301, 2014.
  • Druckman [1993] Daniel Druckman. The situational levers of negotiating flexibility. Journal of Conflict Resolution, 37(2):236–276, 1993.
  • Druckman et al. [2006] Daniel Druckman, Richard Harris, and Johannes Fürnkranz. Modeling international negotiation: Statistical and machine learning approaches. In Robert Trappl, editor, Programming for Peace: Computer-Aided Methods for International Conflict Resolution and Prevention, volume 2 of Advances in Group Decision and Negotiation, pages 227–250. Kluwer Academic Publishers, Dordrecht, 2006.
  • Duchi et al. [2011] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Dumais and Chen [2000] Susan Dumais and Hao Chen. Hierarchical classification of web content. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 256–263. ACM, 2000.
  • Edgar et al. [2016] Altszyler Edgar, Ribeiro Sidarta, Sigman Mariano, and Fernández Slezak Diego. Comparative study of lsa vs word2vec embeddings in small corpora: a case study in dreams database. In ASAI Simposio Argentino de Inteligencia Artificial, 2016.
  • Fellbaum [2010] Christiane Fellbaum. Wordnet. In Theory and applications of ontology: computer applications, pages 231–243. Springer, 2010.
  • Feng et al. [2010] Haifeng Feng, Marie-Jeanne Lesot, and Marcin Detyniecki. Using association rules to discover color-emotion relationships based on social tagging. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pages 544–553. Springer, 2010.
  • Fernandez and Alani [2018] Miriam Fernandez and Harith Alani. Online misinformation: Challenges and future directions. In Companion Proceedings of the The Web Conference 2018, WWW ’18, pages 595–602, Republic and Canton of Geneva, Switzerland, 2018. International World Wide Web Conferences Steering Committee. ISBN 978-1-4503-5640-4. doi: 10.1145/3184558.3188730.
  • Fernandez-Delgado et al. [2014] Manuel Fernandez-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.
  • Flekova and Gurevych [2015] Lucie Flekova and Iryna Gurevych. Personality profiling of fictional characters using sense-level links between lexical resources. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1805–1816. Association for Computational Linguistics, 2015. doi: 10.18653/v1/D15-1208.
  • Friedman and Fisher [1999] Jerome H. Friedman and Nicholas I. Fisher. Bump hunting in high-dimensional data. Statistics and Computing, 9(2):123–143, 1999.
  • Frosst and Hinton [2017] Nicholas Frosst and Geoffrey E. Hinton. Distilling a neural network into a soft decision tree. In Tarek R. Besold and Oliver Kutz, editors, Proceedings of the 1st AI*AI International Workshop on Comprehensibility and Explanation in AI and ML, volume 2071 of CEUR Workshop Proceedings, Bari, Italy, 2017.
  • Fürnkranz [1997] Johannes Fürnkranz. Pruning algorithms for rule learning. Machine Learning, 27(2):139–171, 1997.
  • Fürnkranz and Hüllermeier [2010] Johannes Fürnkranz and Eyke Hüllermeier, editors. Preference Learning. Springer-Verlag, 2010. ISBN 978-3642141249.
  • Fürnkranz et al. [1997] Johannes Fürnkranz, Johann Petrak, and Robert Trappl. Knowledge discovery in international conflict databases. Applied Artificial Intelligence, 11(2):91–118, 1997.
  • Fürnkranz et al. [2012] Johannes Fürnkranz, Dragan Gamberger, and Nada Lavrač. Foundations of Rule Learning. Springer-Verlag, 2012. ISBN 978-3-540-75196-0.
  • Fürnkranz et al. [2018] Johannes Fürnkranz, Tomás Kliegr, and Heiko Paulheim. On cognitive preferences and the interpretability of rule-based models. arXiv preprint arXiv:1803.01316, 2018.
  • Gabrilovich and Markovitch [2007] Evgeniy Gabrilovich and Shaul Markovitch. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 1606–1611, San Francisco, CA, USA, 2007. Morgan Kaufmann Publishers Inc.
  • Gamon et al. [2013] Michael Gamon, Tae Yano, Xinying Song, Johnson Apacible, and Patrick Pantel. Identifying salient entities in web pages. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2375–2380. ACM, 2013.
  • Gandomi and Haider [2015] Amir Gandomi and Murtaza Haider. Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2):137–144, 2015.
  • García et al. [2014] David García, Antonio González, and Raúl Pérez. Overview of the slave learning algorithm: A review of its evolution and prospects. International Journal of Computational Intelligence Systems, 7(6):1194–1221, 2014.
  • Gemmis et al. [2010] Marco de Gemmis, Leo Iaquinta, Pasquale Lops, Cataldo Musto, Fedelucio Narducci, and Giovanni Semeraro. Learning preference models in recommender systems. In Fürnkranz and Hüllermeier [2010], pages 387–407. ISBN 978-3642141249.
  • Gerber et al. [2015] Daniel Gerber, Diego Esteves, Jens Lehmann, Lorenz Bühmann, Ricardo Usbeck, Axel-Cyrille Ngonga Ngomo, and René Speck. Defacto-temporal and multilingual deep fact validation. Journal of Web Semantics, 35:85 – 101, 2015. ISSN 1570-8268.
  • Goh and Ang [2007] Dion H. Goh and Rebecca P. Ang. An introduction to association rule mining: An application in counseling and help-seeking behavior of adolescents. Behavior Research Methods, 39(2):259–266, May 2007. ISSN 1554-3528. doi: 10.3758/BF03193156.
  • González et al. [2017] Camila González, Eneldo Loza Mencía, and Johannes Fürnkranz. Re-training deep neural networks to facilitate boolean concept extraction. In Proceedings of the 20th International Conference on Discovery Science (DS-17), volume 10558 of Lecture Notes in Computer Science, pages 127–143. Springer-Verlag, October 2017. ISBN 978-3-319-67785-9. doi: 10.1007/978-3-319-67786-6˙10.
  • Goodfellow et al. [2016] Ian J. Goodfellow, Yoshua Bengio, and Aaron C. Courville. Deep Learning. Adaptive Computation and Machine Learning. MIT Press, 2016. ISBN 978-0-262-03561-3.
  • Greene et al. [2017] Max N Greene, Peter H Morgan, and Gordon R Foxall. Neural networks and consumer behavior: Neural models, logistic regression, and the behavioral perspective model. The Behavior Analyst, 40(2):393–418, 2017.
  • Hájek et al. [2010] Petr Hájek, Martin Holeňa, and Jan Rauch. The guha method and its meaning for data mining. Journal of Computer and System Sciences, 76(1):34–48, 2010.
  • Halloran et al. [2016] KL O Halloran, S Tan, P Wignell, JA Bateman, DS Pham, Michele Grossman, and AV Moere. Interpreting text and image relations in violent extremist discourse: A mixed methods approach for big data analytics. Terrorism and Political Violence, October 2016. doi: 10.1080/09546553.2016.1233871.
  • Han and Karypis [2000] Eui-Hong Sam Han and George Karypis. Centroid-based document classification: Analysis and experimental results. In European conference on principles of data mining and knowledge discovery, pages 424–431. Springer, 2000.
  • Harris et al. [2013] Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. Sparql 1.1 query language. W3C recommendation, 21(10), 2013.
  • Hashem et al. [2015] Ibrahim Abaker Targio Hashem, Ibrar Yaqoob, Nor Badrul Anuar, Salimah Mokhtar, Abdullah Gani, and Samee Ullah Khan. The rise of “big data” on cloud computing: Review and open research issues. Information systems, 47:98–115, 2015.
  • Helmstetter and Paulheim [2018] S. Helmstetter and H. Paulheim. Weakly supervised learning for fake news detection on Twitter. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), volume 00, pages 274–277, Aug. 2018. doi: 10.1109/ASONAM.2018.8508520.
  • Herawan et al. [2011] Tutut Herawan, Prima Vitasari, and Zailani Abdullah. Mining interesting association rules of student suffering mathematics anxiety. In Jasni Mohamad Zain, Wan Maseri bt Wan Mohd, and Eyas El-Qawasmeh, editors, Software Engineering and Computer Systems, pages 495–508, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg. ISBN 978-3-642-22191-0.
  • Herrera et al. [2011] Franciso Herrera, Cristóbal José Carmona, Pedro González, and María José Del Jesus. An overview on subgroup discovery: foundations and applications. Knowledge and information systems, 29(3):495–525, 2011.
  • Hinton et al. [1986] G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. Distributed representations. In David E. Rumelhart, James L. McClelland, and CORPORATE PDP Research Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pages 77–109. MIT Press, Cambridge, MA, USA, 1986. ISBN 0-262-68053-X.
  • Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • Hornik [1991] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
  • Houghton [2004] George Houghton. Introduction to connectionist models in cognitive psychology: Basic structures, processes, and algorithms. In Connectionist models in cognitive psychology, pages 11–19. Psychology Press, 2004.
  • Howard and Kahana [2002] Marc W Howard and Michael J Kahana. When does semantic similarity help episodic retrieval? Journal of Memory and Language, 46(1):85–98, 2002.
  • Hühn and Hüllermeier [2009] Jens Hühn and Eyke Hüllermeier. Furia: an algorithm for unordered fuzzy rule induction. Data Mining and Knowledge Discovery, 19(3):293–319, 2009.
  • Jannach et al. [2010] Dietmar Jannach, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. Recommender Systems: An Introduction. Cambridge University Press, Cambridge, UK, 2010. ISBN 978-0-521-49336-9.
  • Johns and Jones [2012] Brendan T Johns and Michael N Jones. Perceptual inference through global lexical similarity. Topics in Cognitive Science, 4(1):103–120, 2012.
  • Kamishima et al. [2010] Toshihiro Kamishima, Hideto Kazawa, and Shotaro Akaho. A survey and empirical comparison of object ranking methods. In Fürnkranz and Hüllermeier [2010], pages 181–201. ISBN 978-3642141249.
  • Kass [1980] G. V. Kass. An exploratory technique for investigating large quantities of categorical data. Applied Statistics, 29:119–127, 1980.
  • Kitchin [2017] Rob Kitchin. Big data-hype or revolution. The SAGE handbook of social media research methods, pages 27–39, 2017.
  • Komisin and Guinn [2012] Michael Komisin and Curry Guinn. Identifying personality types using document classification methods. In Florida Artificial Intelligence Research Society Conference, 2012.
  • Koski [2004] Nina Koski. Impulse buying on the internet: encouraging and discouraging factors. Frontiers of E-business Research, 4:23–35, 2004.
  • Kotsiantis et al. [2002] S Kotsiantis, C Pierrakeas, and P Pintelas. Efficiency of machine learning techniques in predicting students’ performance in distance learning systems. Educational Software Development Laboratory Department of Mathematics, University of Patras, Greece, 2002.
  • Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
  • Landauer and Dumais [1997] Thomas K Landauer and Susan T Dumais. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 104(2):211, 1997.
  • Landauer et al. [2013] Thomas K Landauer, Danielle S McNamara, Simon Dennis, and Walter Kintsch. Handbook of latent semantic analysis. Psychology Press, 2013.
  • Langville and Meyer [2012] Amy M. Langville and Carl D. Meyer. Who’s #1? The Science of Rating and Ranking. Princeton University Press, 2012.
  • Lantz [2015] Brett Lantz. Machine learning with R. Packt Publishing Ltd, 2015.
  • Lecun et al. [2015] Yann Lecun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. ISSN 0028-0836. doi: 10.1038/nature14539.
  • Lehmann et al. [2015] Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167–195, 2015.
  • Lin [1998] Dekang Lin. An information-theoretic definition of similarity. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML ’98, pages 296–304, San Francisco, CA, USA, 1998. Morgan Kaufmann Publishers Inc. ISBN 1-55860-556-8.
  • Liu [2011] Bing Liu. Web data mining: exploring hyperlinks, contents, and usage data, 2nd ed. Springer Science & Business Media, 2011.
  • Liu et al. [1998] Bing Liu, Wynne Hsu, and Yiming Ma. Integrating classification and association rule mining. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining, KDD’98, pages 80–86. AAAI Press, 1998.
  • Luca Ciampaglia et al. [2015] Giovanni Luca Ciampaglia, Prashant Shiralkar, Luis Rocha, Johan Bollen, Filippo Menczer, and Alessandro Flammini. Computational fact checking from knowledge networks (vol 10, e0128193, 2015). PLoS ONE, 10, 10 2015. doi: 10.1371/journal.pone.0141938.
  • Luce [1959] Robert Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
  • Maki et al. [2004] William S Maki, Lauren N McKinley, and Amber G Thompson. Semantic distance norms computed from an electronic dictionary (wordnet). Behavior Research Methods, Instruments, & Computers, 36(3):421–431, 2004.
  • Maki et al. [2006] William S Maki, Marissa Krimsky, and Sol Muñoz. An efficient method for estimating semantic similarity based on feature overlap: Reliability and validity of semantic feature ratings. Behavior research methods, 38(1):153–157, 2006.
  • Malioutov and Meel [2018] Dmitry Malioutov and Kuldeep S. Meel. MLIC: A MaxSAT-based framework for learning interpretable classification rules. In John N. Hooker, editor, Proceedings fo the 24th International Conference on Principles and Practice of Constraint Programming (CP-18), volume 11008 of Lecture Notes in Computer Science, pages 312–327, Lille, France, 2018. Springer.
  • Marden [1995] John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.
  • McArdle and Ritschard [2013] John J. McArdle and Gilbert Ritschard, editors. Contemporary Issues in Exploratory Data Mining in Behavioral Sciences. Routledge, New York, 2013.
  • McKay et al. [2017] Dean McKay, Jonathan S Abramowitz, and Eric A Storch. Treatments for Psychological Problems and Syndromes. John Wiley & Sons, 2017.
  • Mihalcea and Csomai [2007] Rada Mihalcea and Andras Csomai. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 233–242. ACM, 2007.
  • Mikolov et al. [2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
  • Minsky and Papert [1969] Marvin Minsky and Seymour A. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, 1969. Expanded Edition 1990.
  • Mitchell [1997] Tom Mitchell. Machine Learning. McGraw-Hill Education, 1997.
  • Moe [2003] Wendy W Moe. Buying, searching, or browsing: Differentiating between online shoppers using in-store navigational clickstream. Journal of consumer psychology, 13(1-2):29–39, 2003.
  • Murphy et al. [2006] Jamie Murphy, Charles Hofacker, and Richard Mizerski. Primacy and recency effects on clicking behavior. Journal of Computer-Mediated Communication, 11(2):522–535, 2006.
  • Navigli [2009] Roberto Navigli. Word sense disambiguation: A survey. ACM computing surveys (CSUR), 41(2):10, 2009.
  • O’Dea et al. [2017] Bridianne O’Dea, Mark E Larsen, Philip J Batterham, Alison L Calear, and Helen Christensen. A linguistic analysis of suicide-related Twitter posts. Crisis, 2017.
  • Pang et al. [2015] Guansong Pang, Huidong Jin, and Shengyi Jiang. Cenknn: a scalable and effective text classifier. Data Mining and Knowledge Discovery, 29(3):593–625, 2015.
  • Paulheim [2018] Heiko Paulheim. Machine learning with and for semantic web knowledge graphs. In Reasoning Web International Summer School, pages 110–141. Springer, 2018.
  • Peharz et al. [2017] Robert Peharz, Robert Gens, Franz Pernkopf, and Pedro M. Domingos. On the latent variable interpretation in sum-product networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(10):2030–2044, 2017.
  • Pennebaker et al. [2015] James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. The development and psychometric properties of LIWC2015. Technical report, University of Texas at Austin, Austin, Texas, 2015.
  • Pennington et al. [2014] Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
  • Pirró and Seco [2008] Giuseppe Pirró and Nuno Seco. Design, implementation and evaluation of a new semantic similarity metric combining features and intrinsic information content. In Proceedings of the OTM 2008 Confederated International Conferences, CoopIS, DOA, GADA, IS, and ODBASE 2008. Part II on On the Move to Meaningful Internet Systems, OTM ’08, pages 1271–1288, Berlin, Heidelberg, 2008. Springer-Verlag. ISBN 978-3-540-88872-7.
  • Plackett [1975] Robin L. Plackett. The analysis of permutations. Applied Statistics, 24(2):193–202, 1975.
  • Quinlan [1986] John Ross Quinlan. Induction of decision trees. Machine Learning, 1:81–106, 1986.
  • Rao et al. [2007] Vithala R. Rao, Paul E. Green, and Jerry Wind. Applied Conjoint Analysis. SAGE Publications, 2007. ISBN 9780761914464.
  • Rauch and Simunek [2017] Jan Rauch and Milan Simunek. Apriori and GUHA: Comparing two approaches to data mining with association rules. Intelligent Data Analysis, 21(4):981–1013, 2017.
  • Resnik [1995] Philip Resnik. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 448–453, 1995.
  • Ribeiro et al. [2016] Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu Aggarwal, Dou Shen, and Rajeev Rastogi, editors, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-16), pages 1135–1144, San Francisco, CA, USA, 2016. ACM. doi: 10.1145/2939672.2939778.
  • Rodgers and Thorson [2000] Shelly Rodgers and Esther Thorson. The interactive advertising model: How users perceive and process online ads. Journal of interactive advertising, 1(1):41–60, 2000.
  • Roediger and McDermott [1995] Henry L Roediger and Kathleen B McDermott. Creating false memories: Remembering words not presented in lists. Journal of experimental psychology: Learning, Memory, and Cognition, 21(4):803, 1995.
  • Rosenblatt [1962] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, DC, 1962.
  • Rosenfeld et al. [2012] Avi Rosenfeld, Inon Zuckerman, Amos Azaria, and Sarit Kraus. Combining psychological models with machine learning to better predict people’s decisions. Synthese, 189(1):81–93, 2012.
  • Rossi et al. [2011] Francesca Rossi, Kristen Brent Venable, and Toby Walsh. A Short Introduction to Preferences: Between Artificial Intelligence and Social Choice. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2011.
  • Rumelhart et al. [1986] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations, pages 318–363. MIT Press, Cambridge, MA, 1986.
  • Saif et al. [2017] Hassan Saif, Thomas Dickinson, Leon Kastler, Miriam Fernandez, and Harith Alani. A semantic graph-based approach for radicalisation detection on social media. In European Semantic Web Conference, pages 571–587. Springer, 2017. doi: 10.1007/978-3-319-58068-5_35.
  • Sammut [1996] Claude Sammut. Automatic construction of reactive control systems using symbolic machine learning. Knowledge Engineering Review, 11(1):27–42, 1996.
  • Schäfer and Hüllermeier [2018] Dirk Schäfer and Eyke Hüllermeier. Dyad ranking using Plackett-Luce models based on joint feature representations. Machine Learning, 107(5):903–941, 2018.
  • Schmidhuber [2015] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. doi: 10.1016/j.neunet.2014.09.003.
  • Senecal et al. [2005] Sylvain Senecal, Pawel J Kalczynski, and Jacques Nantel. Consumers’ decision-making process and their online shopping behavior: a clickstream analysis. Journal of Business Research, 58(11):1599–1608, 2005.
  • Serrano-Guerrero et al. [2015] Jesus Serrano-Guerrero, Jose A Olivas, Francisco P Romero, and Enrique Herrera-Viedma. Sentiment analysis: A review and comparative analysis of web services. Information Sciences, 311:18–38, 2015.
  • Shultz [2013] Thomas R Shultz. Computational models in developmental psychology. The Oxford Handbook of Developmental Psychology, Vol. 1: Body and Mind, 1:477, 2013.
  • Siddharthan et al. [2018] Advaith Siddharthan, Nicolas Cherbuin, Paul J Eslinger, Kasia Kozlowska, Nora A Murphy, and Leroy Lowe. Wordnet-feelings: A linguistic categorisation of human feelings. arXiv preprint arXiv:1811.02435, 2018.
  • Silver et al. [2016] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Soler-Company and Wanner [2019] Juan Soler-Company and Leo Wanner. Automatic classification and linguistic analysis of extremist online material. In Ioannis Kompatsiaris, Benoit Huet, Vasileios Mezaris, Cathal Gurrin, Wen-Huang Cheng, and Stefanos Vrochidis, editors, MultiMedia Modeling, pages 577–582, Cham, 2019. Springer International Publishing. ISBN 978-3-030-05716-9.
  • Speriosu et al. [2011] Michael Speriosu, Nikita Sudan, Sid Upadhyay, and Jason Baldridge. Twitter polarity classification with label propagation over lexical links and the follower graph. In Proceedings of the First Workshop on Unsupervised Learning in NLP, EMNLP ’11, pages 53–63, Stroudsburg, PA, USA, 2011. Association for Computational Linguistics. ISBN 978-1-937284-13-8.
  • Srivastava et al. [2014] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
  • Stecher et al. [2016] Julius Stecher, Frederik Janssen, and Johannes Fürnkranz. Shorter rules are better, aren’t they? In Toon Calders, Michelangelo Ceci, and Donato Malerba, editors, Proceedings of the 19th International Conference on Discovery Science (DS-16), pages 279–294. Springer-Verlag, 2016.
  • Stillman et al. [2018] Paul E Stillman, Xi Shen, and Melissa J Ferguson. How mouse-tracking can advance social cognitive theory. Trends in cognitive sciences, 2018.
  • Stumpf et al. [2009] Simone Stumpf, Vidya Rajaram, Lida Li, Weng-Keen Wong, Margaret Burnett, Thomas Dietterich, Erin Sullivan, and Jonathan Herlocker. Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human-Computer Studies, 67(8):639 – 662, 2009. ISSN 1071-5819.
  • Sylwester and Purver [2015] Karolina Sylwester and Matthew Purver. Twitter language use reflects psychological differences between Democrats and Republicans. PLoS ONE, 10(9):e0137422, 2015.
  • Tausczik and Pennebaker [2010] Yla R Tausczik and James W Pennebaker. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1):24–54, 2010.
  • Thurstone [1927] Louis Leon Thurstone. A law of comparative judgment. Psychological Review, 34:278–286, 1927.
  • Tjong Kim Sang and De Meulder [2003] Erik F Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147. Association for Computational Linguistics, 2003.
  • Tonidandel et al. [2018] Scott Tonidandel, Eden B King, and Jose M Cortina. Big data methods: Leveraging modern data analytic techniques to build organizational science. Organizational Research Methods, 21(3):525–547, 2018.
  • Torgo [2010] Luís Torgo. Data Mining with R: Learning with Case Studies. Chapman and Hall/CRC Press, 2010. ISBN 9781439810187.
  • Troisi et al. [2018] Orlando Troisi, Mara Grimaldi, Francesca Loia, and Gennaro Maione. Big data and sentiment analysis to highlight decision behaviours: a case study for student population. Behaviour & Information Technology, 37(10-11):1111–1128, 2018.
  • Turney and Pantel [2010] Peter D Turney and Patrick Pantel. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37:141–188, 2010.
  • Tversky [1977] Amos Tversky. Features of similarity. Psychological Review, 84:327–352, 1977.
  • Varga et al. [2014] Andrea Varga, Amparo Elizabeth Cano Basave, Matthew Rowe, Fabio Ciravegna, and Yulan He. Linked knowledge sources for topic classification of microposts: A semantic graph-based approach. Web Semantics: Science, Services and Agents on the World Wide Web, 26:36–57, 2014.
  • Varian [2014] Hal R Varian. Big data: New tricks for econometrics. Journal of Economic Perspectives, 28(2):3–28, 2014.
  • Vembu and Gärtner [2010] Shankar Vembu and Thomas Gärtner. Label ranking algorithms: A survey. In Fürnkranz and Hüllermeier [2010], pages 45–64. ISBN 978-3642141249.
  • Vincent et al. [2010] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.
  • Vrandečić and Krötzsch [2014] Denny Vrandečić and Markus Krötzsch. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85, 2014.
  • Walsh et al. [2017] Colin G Walsh, Jessica D Ribeiro, and Joseph C Franklin. Predicting risk of suicide attempts over time through machine learning. Clinical Psychological Science, 1:12, 2017.
  • Wang et al. [2017] Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, and Perry MacNeille. A Bayesian framework for learning rule sets for interpretable classification. Journal of Machine Learning Research, 18:70:1–70:37, 2017.
  • Widrow et al. [1994] Bernard Widrow, David E. Rumelhart, and Michael A. Lehr. Neural networks: Applications in industry, business and science. Communications of the ACM, 37(3):93–105, 1994.
  • Wilson et al. [2004] Theresa Wilson, Janyce Wiebe, and Rebecca Hwa. Just how mad are you? Finding strong and weak opinion clauses. In AAAI, volume 4, pages 761–769, 2004.
  • Wrobel [1997] Stefan Wrobel. An algorithm for multi-relational discovery of subgroups. In European Symposium on Principles of Data Mining and Knowledge Discovery, pages 78–87. Springer, 1997.