Understanding Negations in Information Processing: Learning from Replicating Human Behavior

04/18/2017 ∙ by Nicolas Pröllochs, et al. ∙ University of Freiburg

Information systems experience an ever-growing volume of unstructured data, particularly in the form of textual materials. This represents a rich source of information from which one can create value for people, organizations and businesses. For instance, recommender systems can benefit from automatically understanding preferences based on user reviews or social media. However, it is difficult for computer programs to correctly infer meaning from narrative content. One major challenge is negations that invert the interpretation of words and sentences. As a remedy, this paper proposes a novel learning strategy to detect negations: we apply reinforcement learning to find a policy that replicates the human perception of negations based on an exogenous response, such as a user rating for reviews. Our method yields several benefits, as it eliminates the former need for expensive and subjective manual labeling in an intermediate stage. Moreover, the inferred policy can be used to derive statistical inferences and implications regarding how humans process and act on negations.




1 Introduction

When making decisions in their daily lives, humans base their reasoning on information, while pondering expected outcomes and the importance of individual arguments. At the same time, they are continuously confronted with novel information of potentially additional value (LaBerge and Samuels, 1974). Psychological theories suggest that, when processing information, humans constantly categorize and filter for relevant tidbits (LaBerge and Samuels, 1974; Schneider and Shiffrin, 1977). The outcome of this filtering then drives decision-making, which in turn affects interactions with information technology, personal relationships, businesses or whole organizations. Information Systems (IS) research (Briggs, 2015) thus strives for insights into how humans interpret and react to information in order “to understand and improve the ways people create value with information” (Nunamaker and Briggs, 2011, p. 20).

Information is increasingly encoded not only in quantitative figures, but also in qualitative formats, such as textual materials (Lacity and Janson, 1994). Common examples from the digital age include blog entries, posts on social media platforms, user-generated reviews or negotiations in electronic commerce. These materials predominantly encompass feedback, comments or reviews, and thus immediately impact the decision-making processes of individuals and organizations (Chau and Xu, 2012; Vodanovich et al., 2010). Rather than a manual text analysis, a computerized evaluation is generally preferable, as it can process massive volumes of documents, often in real time. Recent advances in information technology render it possible to automatically investigate the influence of qualitative information from narrative language and word-of-mouth – especially in order to gain an understanding of its content (Hirschberg and Manning, 2015). This, in turn, opens up novel opportunities for computerized decision support, such as question answering and information retrieval (e. g. (Vlas and Robinson, 2012)). Consequently, understanding decision-making and providing decision support both increasingly rely upon computerized natural language processing.

The exact interpretation of language is largely impracticable with computer programs at present. Among the numerous difficulties is the particularly daunting challenge of analyzing negations (Cruz et al., 2015; Pang and Lee, 2008), since their context-dependent nature hampers bag-of-words approaches for natural language. The latter method considers only word frequencies without looking at the order of words from the beginning to the end of a document. Such a careful consideration is, however, necessary as negations occur in various forms; they reverse the meanings of individual words, but also of phrases or even whole sentences (Councill et al., 2010). Thus, one must handle negations properly in order to achieve an accurate comprehension of texts. The importance of profound language understanding is demonstrated by the following exemplary applications:
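The limitation of bag-of-words counting can be seen in a minimal sketch (the polarity lists and function name are illustrative, not part of the paper's method):

```python
from collections import Counter

# Illustrative polarity lists; a real application would use a full dictionary.
POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "poor"}

def bag_of_words_tone(text):
    """Count polarity words while ignoring word order and context."""
    counts = Counter(text.lower().split())
    pos = sum(c for w, c in counts.items() if w in POSITIVE)
    neg = sum(c for w, c in counts.items() if w in NEGATIVE)
    return pos - neg

# Both sentences receive the same score, although their meanings are opposite:
print(bag_of_words_tone("this is a good product"))      # 1
print(bag_of_words_tone("this is not a good product"))  # 1
```

Because only frequencies are counted, the negation cue "not" leaves the score unchanged, which is exactly the failure mode the paper addresses.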

Recommender Systems.

Recommender systems support users by predicting their rating or preference for a product or service. An increasing number of user-generated reviews constitutes a compelling source of information (Archak et al., 2011). Hence, recommender systems must accurately classify positive and negative content in order to interpret the intended opinion contained within reviews (Cruz et al., 2015; Thet et al., 2010; Pang and Lee, 2008) or their credibility (Jensen et al., 2013).

Financial News.

Investors in financial markets predominantly base their decisions on news when choosing whether to exercise stock ownership. In addition to quantitative numbers, qualitative information found in news, such as tone and sentiment, strongly influences stock prices (e. g. (Henry, 2008; Tetlock, 2007)). For example, companies often frame negative news using positive words (Loughran and McDonald, 2011); therefore, empirical research, investors and automated traders demand the precise processing of negations.

Question Answering Systems.

Question answering systems support users with insights drawn from immense bodies of data. For instance, IBM’s Watson processes millions of medical documents in order to discover potential diseases and recommend treatments based on symptoms. Similarly, one can automatically determine software requirements from descriptions. For such applications of information retrieval, it is necessary to distinguish between certainty and beliefs in natural language by considering negations (Cruz Díaz et al., 2012; Rokach et al., 2008; Vlas and Robinson, 2012).


Negotiations.

Negotiations usually consist of a seesaw of offers and counter-offers, most of which are rejected until one is finally accepted. Even in the digital era, negotiations are based on textual arguments (Johnson and Cooper, 2015) and, in order for systems to automatically decode outcomes, it is necessary to examine language correctly (Lai et al., 2002; Twitchell et al., 2013). Similarly, this holds for cases in which language helps to predict deception in human communication (Fuller et al., 2013; Zhou et al., 2004).

Despite the fact that language offers a rich source of information, its processing and the underlying decision-making are still subject to ongoing research activities. This includes negation processing, which affects virtually every context or domain, since neglecting negations can lead to erroneous implications or false interpretations. In the following, we refer to the part of a document whose meaning is reversed as the negation scope. Identifying negation scopes is difficult as these are latent, unobservable and – even among experts – highly subjective (Councill et al., 2010). In addition, many machine learning algorithms struggle with this type of problem as it is virtually impossible to encode with a fixed-length vector while preserving its order and context (Hirschberg and Manning, 2015).

This paper develops a novel method, based on reinforcement learning, for detecting, understanding and interpreting negations in natural language. This approach exhibits several favorable features that overcome shortcomings found in prior works. Among them, reinforcement learning is well suited to learning tasks of varying lengths; that is, it can process sentences of arbitrary complexity while preserving the context and order of information. Furthermore, our approach eliminates the need for manual labeling of individual words and thus avoids the detrimental influence of subjectivity and misinterpretation. Instead, our model is trained solely on an exogenous response variable at document level. We refer to this score as the gold standard, not only because it represents a common term in text mining, but also to stress that it can reflect any human behavior of interest. As a result of its learning process, our method adapts to domain-specific cues or particularities of the given prose.

Our contribution goes beyond the pure algorithmic benefits, since we envision the goal of understanding human information processing. For this purpose, our approach essentially learns to replicate human decision-making in order to gain insights into the workings of the human mind when processing narrative content. In fact, the reinforcement learning approach is trained based on past decisions. While the model receives feedback as to how well it matches the human decision, it does not receive explicit information regarding how to improve accuracy. Rather, the learner iteratively processes information in textual materials and experiments with different variants of negation processing in a trial-and-error manner that imitates human behavior.

Reinforcement learning can considerably advance our understanding of decision-making. Indeed, learning itself has long been conceptualized as the process of creating new associations between stimuli, actions and outcomes, which then guide decision-making in the presence of similar stimuli (Niv et al., 2015). We thus show how to make the knowledge of these associations explicit: we propose a framework by which to study negation scopes that were previously assumed to be latent and unobservable. Contrary to this presumption, we manage to measure their objective perception. Therefore, we exploit the action-value function inside the reinforcement learning model in order to draw statistical inferences and derive conclusions regarding how the human mind processes and acts upon negations. As such, our approach presents an alternative or supplement to experiments, such as those of a psychological or neuro-physiological nature (along the lines of NeuroIS).

This paper is structured as follows. Section 2 provides an overview of related works that investigate human information processing of textual materials, while also explaining the motivation behind our research objective of understanding negations in natural language. Subsequently, Section 3 explains how we adapt reinforcement learning to improve existing methods of negation scope detection. In Section 4, we demonstrate our novel approach with applications from recommender systems and finance in order to contribute to existing knowledge of information processing. Finally, Section 5 discusses implications for IS research, practice and management.

2 Background

This section presents background on natural language processing. First, we discuss recent advances in computational intelligence and then outline challenges that arise when working with narrative content. We conclude by briefly reviewing previous works on the handling of negation in natural language.

2.1 Human Information Processing

Advances in computational intelligence have revolutionized our understanding of learning processes in the human brain. As a result, research has yielded precise theories regarding the reception of information and function of human memory (Niv et al., 2015). For instance, statistical models provide insights into human memory formation and the dynamics of memory updating, while also validating these theories by replicating experiments with statistical computations (Gershman et al., 2014). Reinforcement learning, especially, has gained considerable traction as it mines real experiences with the help of trial-and-error learning to understand decision-making (Niv et al., 2015). Accordingly, existing studies find that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant for predicting the outcome (Niv et al., 2015). Along these lines, a recent review argues for jointly combining both perception and learning in order to draw statistical inferences regarding information processing (Fiser et al., 2010). While the previous reference materials predominantly address visual perception, the focus of this paper is rather on natural language.

Behavioral theories suggest that human decision-makers seek as much information as possible in order to make an informed decision (Wilson, 1999). In the case of natural language, researchers have devised advanced methods to study the influence of textual information on the resulting decision. On the one hand, it is common to extract specific facts or features from the content and relate these to a decision variable (Thet et al., 2010). On the other hand, information diffusion is also frequently studied by measuring the overall tone of documents. This latter approach comprises a variety of different aspects of perception, including negative language, sentiment and emotions (Pang and Lee, 2008; Stieglitz and Dang-Xuan, 2013; Tetlock, 2007).

In the case of natural language, a variety of textual sources have served as research subjects for studying word-of-mouth communication and information diffusion. For instance, the dissemination of information and sentiment has been empirically tested in social networks (Trung et al., 2014), revealing that emotionally-charged tweets are retweeted more often and faster (Stieglitz and Dang-Xuan, 2013). Similarly, measuring the response to information allows one to test behavioral theories, such as attribution theories or the negativity bias, by distinguishing between the reaction to positive and negative content (Aggarwal et al., 2012; Stieglitz and Dang-Xuan, 2013).

2.2 Natural Language Processing

Addressing the above research questions on information processing requires accurate models for understanding and interpreting natural language. However, the majority of such methods only count the occurrences of words (or combinations), resulting in so-called bag-of-words methods. By doing so, these techniques have a tendency to ignore information relating to the order of words and their context (Hirschberg and Manning, 2015), such as inverted meanings through negations.

Neglecting negations can substantially impair accuracy when studying human information processing; for example, it is common “to see the framing of negative news using positive words” (Loughran and McDonald, 2011). To avoid false attributions, one must identify and predict negated text fragments precisely, since information is otherwise likely to be classified erroneously. This holds true not only for negations in information retrieval (Cruz Díaz et al., 2012; Rokach et al., 2008), but especially when studying sentiment (Cruz et al., 2015; Wiegand et al., 2010); even simple heuristics can yield substantial improvements in such cases (Jia et al., 2009).

2.3 Negation Processing

Previous methods for detecting, handling and interpreting negations can be grouped into different categories (cf. (Pröllochs et al., 2015, 2016; Rokach et al., 2008)).

Rule-based approaches are among the most common due to their ease of implementation and solid out-of-the-box performance. In addition, rules have been found to work effectively across different domains and rarely need fine-tuning (Taboada et al., 2011). They identify negations based on pre-defined lists of negating cues and then hypothesize a language model which assumes a specific interpretation by the audience. For example, some rules invert the meaning of all words in a sentence, while others suppose a forward influence of negation cues and thus invert only a fixed number of subsequent words (Hogenboom et al., 2011). Furthermore, a rule-based approach can also incorporate syntactic information in order to imitate subject and object (Padmaja et al., 2014). However, rules cannot effectively cope with implicit expressions or particular, domain-specific characteristics.
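A forward-influence rule of the kind described above can be sketched as follows; the cue list and window size are illustrative choices, not the exact rule used in any cited work:

```python
# Hypothetical cue list; rule-based systems rely on curated lists of this kind.
NEGATION_CUES = {"not", "no", "never", "isn't", "doesn't"}

def forward_scope(words, window=3):
    """Mark the `window` words following each negation cue as negated."""
    negated = [False] * len(words)
    for i, word in enumerate(words):
        if word.lower() in NEGATION_CUES:
            for j in range(i + 1, min(i + 1 + window, len(words))):
                negated[j] = True
    return negated

print(forward_scope("this is not a good product".split()))
# [False, False, False, True, True, True]
```

The fixed window illustrates the rigidity of such rules: the scope length is hard-coded rather than learned, so implicit or domain-specific negations are missed.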

Machine learning approaches can partially overcome previous shortcomings (Rokach et al., 2008), such as the difficulty of recognizing implicit negations. Common examples of such methods include generative probabilistic models in the form of hidden Markov models and conditional random fields (e. g. (Councill et al., 2010)). These methods can adapt to domain-specific language, but require more computational resources and rely upon ex ante transition probabilities. Although approaches based on unsupervised learning avoid the need for any labels, practical applications reveal inferior performance compared to supervised approaches (Pröllochs et al., 2015). The latter usually depend on manual labels at a granular level, which are not only costly but suffer from subjective interpretations (Councill et al., 2010).

3 Method Development

This section posits the importance of developing a novel method for learning negation scopes in textual materials. After first formulating a problem statement, we introduce our approach, which is based on reinforcement learning.

3.1 Rationale and Intuition of Proposed Methodology

Negation scope detection in related research predominantly relies on rule-based algorithms. Rule-based approaches entail several drawbacks: the list of negations must be pre-defined, and the criterion according to which a rule is chosen is usually arbitrary or determined via cross-validation. Rules aim to reflect the “ground truth” but fail to actually learn it.

For those seeking to incorporate a learning strategy, a viable alternative exists in the form of generative probabilistic models (e. g. hidden Markov models or conditional random fields (Rokach et al., 2008)). These process narrative language word-by-word and move between hidden states representing negated and non-negated parts. On the one hand, unsupervised learning can estimate the models without annotations, but yields less accurate results overall (Pröllochs et al., 2015). On the other hand, supervised learning offers better performance, but requires a training set with manual labels for each word (see Figure 1), which are supposed to approximate the latent negation scopes. Such labeling requires extensive manual work and is highly subjective, thus yielding only fair performance (Pröllochs et al., 2015, 2016). As a further drawback, many approaches from supervised machine learning are simply infeasible, as they usually require an input vector of a fixed, pre-defined length without considering its order. This circumstance thus necessitates a tailored method for dealing with negations in narrative materials.

Figure 1: Process chart compares the different stages of labeling, training and applying rules in order to evaluate information processing.

In contrast to these suboptimal methods, we propose a novel approach to determining latent negation scopes based on reinforcement learning. It works well with learning tasks of arbitrary length (Sutton and Barto, 1998) and is adaptable to domain-specific features and particularities. Since it relies only upon a gold standard at document level, it represents a more objective strategy. However, such an approach has been largely overlooked in previous works on natural language processing (see Section 2).

Reinforcement learning aims at learning a suitable policy directly through trial-and-error experience. It updates its knowledge episodically and learns the policy from past experience using only limited feedback in the form of a reward. This reward indicates the current performance of the classifier, but does not necessarily specify how to improve the policy. In addition, this type of learning can also handle highly complex sentences and is thus well suited to the given task.

3.2 Learning Task for Negation Detection

Understanding negations and their influence on language is – as previously mentioned – a non-trivial computational problem, since the underlying learning task suffers from several undesirable features:

  1. Even though sentences follow grammatical rules, they can be nested up to arbitrary complexity and thus become arbitrarily long.

  2. Words have a meaning based on their context, which is implicitly established by their order. By merely rearranging the word order, one can produce a completely different meaning. This constitutes a dependency according to which the meaning of words depends, in part, on all other words and their order in the same document.

  3. Negation scopes affect individual words; however, we lack annotations on a word-by-word basis. Instead, we only observe a gold standard for the whole document upon which we must reconstruct negation scopes for each individual word within the document.

Based on these challenging features, we can formalize the problem, resulting in the following learning task. Both negation scopes and sentences are of varying length depending on the specific document $d$; this length can theoretically range from one to infinity. Each word $w_i$ in document $d$ with $i = 1, \ldots, N_d$ thus represents an individual classification task, which also depends on all other words in that document, i. e.

$$n_i = f(w_i, c_i), \qquad n_i \in \{\text{negated}, \text{not negated}\},$$

where $c_i$ is an ordered list of variable length providing context information.

Each document $d$ comes with a single label $y_d$, i. e. the gold standard, which reflects the response of human decision-making to the text processing. In order to estimate $f$, we minimize the expected error (or any other loss-like function) between the gold standard and the result of a text processing function. The latter function maps the words as a predictor onto the gold standard. Examples of text processing functions are functions that measure the accuracy of information retrieval or sentiment based on the presence of polarity words.

3.3 Reinforcement Learning

Reinforcement learning constructs a suitable policy for negation classification through trial-and-error experience. That is, it mimics human-like learning and thus appears well suited to natural language processing. In the following section, we introduce its key elements and tailor the method to our problem statement.

The overall goal is to train an agent based on a recurrent sequence of interactions. After observing the current state, the agent decides upon an action. Based on the result of the action, the agent receives immediate feedback via a reward. It is important to note that the agent aims only to maximize the rewards, but it never requires pairs of input and the true output (i. e. words and a flag indicating whether they are negated). This forms a setting in which the agent learns the latent negation scopes.

More formally, the model consists of a finite set of environmental states $S$ and a finite set of actions $A$. The agent models the decision-maker by iteratively interacting with an environment over a sequence of discrete steps and seeks to maximize the reward over time. Here, the environment is a synonym for the states and the transition rules between them. At each iteration $t$, the decision-making agent observes a state $s_t \in S$. Based on the current state $s_t$, the agent picks an action $a_t \in A(s_t)$, where $A(s_t)$ is the subset of available actions in the given state. Subsequently, the agent receives feedback related to its decision in the form of a numerical reward $r_{t+1}$, after which it moves to the next state $s_{t+1}$. The entire process is depicted in Figure 2.

Figure 2: Interaction between agent and environment in reinforcement learning (Sutton and Barto, 1998).

In order to build up knowledge, reinforcement learning updates a state-action function $Q(s, a)$, which specifies the expected reward for each possible action $a$ in state $s$. This knowledge can then be used to infer the optimal behavior, i. e. the policy that maximizes the expected reward from any state. Specifically, the optimal policy chooses in state $s$ the action $a$ that maximizes $Q(s, a)$.

Several algorithms have been devised to learn an optimal policy, among which is an approach known as Q-learning (Sutton and Barto, 1998; Watkins and Dayan, 1992). This method seeks an optimal policy without an explicit model of the environment. In other words, it explicitly knows neither the reward function nor the state transition function (Hu and Wellman, 2003). Instead, it iteratively updates its action-value function based on past experience (Watkins and Dayan, 1992). In our case, we use a variant with eligibility traces, known as Watkins’s Q(λ), due to its better convergence; see (Sutton and Barto, 1998) for details.
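The core of tabular Q-learning is the one-step update of the action-value estimate. The paper uses the more elaborate Watkins's Q(λ) variant with eligibility traces, but the basic rule can be sketched as follows (function name, state labels and the hyperparameters alpha and gamma are illustrative):

```python
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One-step Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

actions = ("negate", "keep")
# After one rewarding experience, the estimate moves a fraction alpha toward it.
q_update("s0", "negate", reward=1.0, next_state="s1", actions=actions)
print(Q[("s0", "negate")])  # 0.1
```

Eligibility traces extend this rule by also crediting earlier state-action pairs of the episode, which speeds up the propagation of the terminal reward described later.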

3.4 Learning Negation Processing

In this section, we outline how we adapt reinforcement learning to our attempt to simulate human negation processing. In each iteration, the agent observes the current state, which we engineer as the combination of the $i$-th word in a document and the previous action $a_{t-1}$. This specification establishes a recurrent architecture whereby the previous negation can pass on to the next word.[1] At the same time, this allows for nested negations, as a word can first introduce a negation scope and a subsequent negation can potentially revert it, so that a non-negating action follows again. In our case, we incorporate the actual words into the states, while other variants are also possible, such as using part-of-speech tags or word stems instead. The latter variants work similarly; however, our tests suggest a lower out-of-sample performance.[2]

[1] Such a design is common in partially observable Markov decision processes (POMDPs for short), which feature a similar relaxation into so-called belief states (Kaelbling et al., 1996).

[2] By definition, the use of n-grams is not necessary, as the context is implicitly modeled by the ordered sequence of states and actions.

After observing the current state, the agent chooses an action from two possibilities: (1) it can mark the current word as negated or (2) it can mark it as not negated. Hence, we obtain the set of possible actions $A = \{\text{negated}, \text{not negated}\}$. Based on the selected action, the agent receives a reward, which updates the knowledge in the state-action function $Q$. This state-action function is then used to infer the best possible action in each state, i. e. the optimal policy.
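Combining the recurrent state design with the two actions, applying a learned policy to one document can be sketched as below (greedy policy only, exploration omitted; the names `label_document`, `NEGATE` and `KEEP`, and the toy Q-table are illustrative):

```python
NEGATE, KEEP = "negate", "keep"

def label_document(words, Q):
    """Greedily flag each word as negated or not; the state is the pair
    (current word, previous action), mirroring the recurrent design."""
    prev_action, flags = KEEP, []
    for word in words:
        state = (word, prev_action)
        # Choose the action with the higher estimated reward; ties default to KEEP.
        action = NEGATE if Q.get((state, NEGATE), 0.0) > Q.get((state, KEEP), 0.0) else KEEP
        flags.append(action == NEGATE)
        prev_action = action
    return flags

# A toy Q-table that opens a negation scope at "isn't" and carries it forward:
Q = {(("isn't", KEEP), NEGATE): 1.0, (("good", NEGATE), NEGATE): 1.0}
print(label_document(["this", "isn't", "good"], Q))  # [False, True, True]
```

Because the previous action is part of the state, the negation flag propagates from word to word until a state favors the non-negating action again.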

Our approach relies upon a text processing function that measures the correlation between a given gold standard at document level (e. g. the author’s rating in movie reviews) and the content of a document. We later show possible extensions (see Section 5.1), but for now demonstrate only how the tone (or sentiment) in a document works as a predictor of its exogenous assessment. Examples of such predicted variables are movie ratings in the case of reviews or stock market returns in the case of financial disclosures. Even though more advanced approaches from machine learning are possible, we prefer – for reasons of clarity – an approach based on pre-defined lists of positive and negative terms. We then measure the tone in document $d$ as the difference between positively and negatively opinionated terms divided by the overall number of terms in that document (Pang and Lee, 2008), i. e.

$$\mathit{Tone}(d) = \frac{\#\text{positive terms in } d - \#\text{negative terms in } d}{\#\text{terms in } d}.$$

If a term is negated by the policy, the polarity of the corresponding term is inverted, i. e. positively opinionated terms are counted as negative and vice versa. Our list of opinionated words originates from the Harvard IV General Inquirer dictionary, which contains entries marked with positive and negative connotations.
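The tone measure with policy-based inversion can be sketched as below (the tiny word lists are illustrative stand-ins for the Harvard IV categories):

```python
# Illustrative stand-ins for the Harvard IV positive and negative word lists.
POSITIVE = {"good", "fantastic"}
NEGATIVE = {"bad", "poor"}

def tone(words, negated):
    """Tone = (#positive - #negative) / #words, where the polarity of a
    negated term is inverted before counting."""
    score = 0
    for word, is_negated in zip(words, negated):
        if word in POSITIVE:
            score += -1 if is_negated else 1
        elif word in NEGATIVE:
            score += 1 if is_negated else -1
    return score / len(words)

words = "this isn't a good product".split()
print(tone(words, [False] * 5))                       # 0.2: "good" counted as positive
print(tone(words, [False, False, True, True, True]))  # -0.2: polarity inverted in scope
```

The same document thus yields opposite tone values depending on the negation scope, which is precisely the signal the agent exploits during learning.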

Let us now demonstrate the learning process via an example, in which the agent processes word-by-word the first document, “this is a good product”, with a gold standard indicating positive content. The agent might then, at random, decide to explore the environment by negating the word good. Upon reaching the last word, it receives feedback in the form of a zero reward, as there is no improvement from having negation scopes compared to having none. Thus, the agent will discard this action in the future. It then processes the second document (“this isn’t a good product” with a negative gold standard), where it negates all words following isn’t. As this inversion now better reflects the gold standard, the agent receives a positive reward and will apply this rule in the future. Ultimately, the agent is also able to learn a suitable policy for nested negations, e. g. for the third document “this product isn’t good but fantastic” with a positive gold standard. Based on the current policy, it negates all words following isn’t but receives a negative reward, as the resulting correlation is inferior compared to not incorporating negations. However, through further exploration, the agent learns that it is beneficial to terminate the negation scope after but. Thus, the agent will invert all words subsequent to isn’t and terminate the negation scope subsequent to but if (and potentially only if) the previous state is negated. Table 1 illustrates an exemplary resulting state-action function for this learning process.

Table 1: Exemplary table for the state-action function with recurrent state architecture and the two actions (negated, not negated). Each cell contains the expected reward for the corresponding state-action pair. The last column shows the optimal policy for each state.

We now specify a reward such that it incentivizes the outcome of the text processing function to match the gold standard. When processing a document, we cannot actually compute the reward (as not all negations are clear) until we have processed all words. Therefore, we set the reward before the last word to almost zero, i. e. $r_t \approx 0$ for all $t < T$. Upon reaching the final word, the agent compares the text processing function without any negation to the current policy $\pi$. The former is defined by the absolute difference between the gold standard $y_d$ and the tone $\mathit{Tone}(d)$, whereas the latter is defined by the absolute difference between the gold standard and the adjusted tone $\mathit{Tone}_\pi(d)$ under the current policy. The difference between these two text processing functions then returns the terminal reward. This results in the reward

$$r_T = \left| y_d - \mathit{Tone}(d) \right| - \left| y_d - \mathit{Tone}_\pi(d) \right|,$$

with a small constant $c$ that adds a reward for default (i. e. non-negating) actions to avoid overfitting.
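One plausible reading of this reward definition can be sketched as below, assuming the constant is granted per non-negating action; the function name, argument names and the default value of `c` are illustrative:

```python
def terminal_reward(gold, tone_plain, tone_policy, n_default, c=0.01):
    """Terminal reward: error of the tone without negation handling minus
    the error under the current policy, plus a small bonus c for each
    default (non-negating) action to discourage spurious negation scopes."""
    return abs(gold - tone_plain) - abs(gold - tone_policy) + c * n_default

# A policy that moves the tone from 0.2 to 0.8 toward a gold standard of 1.0
# reduces the error by 0.6 and earns a corresponding positive reward.
print(terminal_reward(1.0, tone_plain=0.2, tone_policy=0.8, n_default=0, c=0.0))  # 0.6
```

A positive value thus means the learned negation scopes explain the gold standard better than leaving negations untreated, and a negative value penalizes scopes that hurt the fit.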

At the beginning, we initialize the action-value function $Q$, i. e. the current knowledge of the agent, to zero for all states and actions.[3] The agent then successively observes a sequence of words, for each of which it can select between exploring new actions or taking the current optimal one. This choice is made by $\epsilon$-greedy selection, according to which the agent explores the environment by selecting a random action with probability $\epsilon$ or, alternatively, exploits the current knowledge with probability $1 - \epsilon$. In the latter case, the agent chooses the action with the highest estimated reward under the given policy.[4]

[3] This also controls our default action when encountering unknown states or words in the out-of-sample dataset. In such cases, the non-negated action is preferred.

[4] First, we perform iterations with a higher exploration rate; in a second phase, we run further iterations for fine-tuning with a lower exploration rate. See (Sutton and Barto, 1998; Watkins and Dayan, 1992) for detailed explanations.
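The $\epsilon$-greedy selection described above can be sketched as follows (the function name is illustrative; `Q` is a dictionary keyed by state-action pairs as in the earlier sketches):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon):
    """Explore a random action with probability epsilon; otherwise exploit
    the action with the highest estimated reward in the given state.
    Unknown state-action pairs default to 0.0, so unseen words favor
    whichever action is listed first under ties."""
    if random.random() < epsilon:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

# With epsilon = 0 the choice is purely greedy:
print(epsilon_greedy({("s", "negate"): 1.0}, "s", ["negate", "keep"], 0.0))  # negate
```

Annealing epsilon over the two training phases, as the footnote describes, shifts the agent from broad exploration toward pure exploitation of the learned policy.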

3.5 Inferences, Understanding and Hypothesis Testing

Our approach features several beneficial characteristics that make inferences and statistical testing easy. In contrast to black-box approaches in machine learning, we can use the state-action function to infer rules regarding how the content is processed because the function reflects the ground truth. For instance, this state-action function specifically determines which cues introduce explicit or implicit negations. Additionally, we can gain a metric of confidence about the rules by comparing the largest reward to all other rewards in a specific state. A larger discrepancy expresses higher confidence with regard to a certain action.
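The confidence metric described above can be sketched as the margin between the best action's expected reward and the runner-up (the function name is illustrative):

```python
def action_confidence(Q, state, actions):
    """Margin between the highest and second-highest expected reward in a
    state; a larger margin indicates a more clear-cut decision."""
    values = sorted((Q.get((state, a), 0.0) for a in actions), reverse=True)
    return values[0] - values[1]

# A state where negating clearly outperforms keeping yields a wide margin:
Q = {("s", "negate"): 0.9, ("s", "keep"): 0.2}
print(action_confidence(Q, "s", ["negate", "keep"]))  # 0.7 (approximately)
```

Such margins can be computed for every observed state, providing the basis for the statistical tests on negation cues discussed next.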

Applying the policy to out-of-sample documents benchmarks its performance in comparison to the absence of negation handling. Furthermore, we can study, for instance, which cues prompt this policy to introduce negation scopes, as well as their position, size or other characteristics as a basis for statistical testing.

4 Evaluating Negation Processing

This section evaluates our method for replicating human negation processing. First, we show how policy learning can help to yield a more accurate interpretation of movie reviews. We then detail the role of negation cues and compare explicit versus implicit negations. In the next step, we validate the robustness of our results and introduce a second application scenario which addresses the relevance of accurate negation handling in financial disclosures.

4.1 Case Study: Recommender System

Recommender systems can benefit greatly from user-generated reviews, which represent a rich source of information. We thus demonstrate our method using a common dataset (Pang and Lee, 2005) of movie reviews from the Internet Movie Database (IMDb), each annotated with an overall rating at the document level. We use the scaled dataset available from www.cs.cornell.edu/people/pabo/movie-review-data/; all reviews are written by four different authors and preprocessed, e. g. by removing explicit rating indicators (Pang and Lee, 2005). It is widely accepted that measuring the tone of movie reviews is particularly difficult because positive movie reviews often mention some unpleasant scenes, while negative reviews, conversely, often detail certain pleasant scenes (Turney, 2002). This corpus thus appears particularly suitable for a case study, since it allows one to examine the importance of human-like negation processing beyond simple rules. We use 10-fold cross validation to verify the predictive accuracy.

4.2 Policy Learning for Negation Processing

Shown below are the results from policy learning for negation processing. Table 2 summarizes the main results, which we explicate in more depth below. As a benchmark, we measure the proportion of the gold standard's variance that is explained by the tone when negations are left untreated, both in-sample and out-of-sample. We then compare this to our approach of policy learning, which, after the learning iterations, yields significant improvements in explained variance on both the in-sample and the out-of-sample set. A better handling of negations thus contributes to more accurate text processing (we later perform additional robustness checks; see Section 4.5).
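As a rough sketch of the benchmark metric, the share of gold-standard variance explained by the tone can be computed as a squared Pearson correlation; whether the paper uses exactly this estimator is an assumption on our part:

```python
def explained_variance(gold, tone):
    """Squared Pearson correlation between gold-standard ratings and the
    tone signal, i.e. the share of rating variance explained by tone.
    (Illustrative estimator; the paper's exact metric is not reproduced.)"""
    n = len(gold)
    mg, mt = sum(gold) / n, sum(tone) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gold, tone))
    var_g = sum((g - mg) ** 2 for g in gold)
    var_t = sum((t - mt) ** 2 for t in tone)
    return cov * cov / (var_g * var_t)
```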

Explained variance (no negation handling) | Explained variance (with negation policy) | Improvement (in %)
In-sample set
Out-of-sample set
Table 2: Comparison of gold standard variance explained by the tone. Figures are reported for both the in-sample and out-of-sample sets using 10-fold cross validation after the learning iterations.

We now provide descriptive statistics of negation scopes in order to gain further insights (Table 3). For this purpose, we apply the learned policy to the out-of-sample documents and record its effects. In the first place, the policy negates a large share of opinionated words, i. e. words that convey a positive or negative polarity. On average, each document contains several separate negation scopes of varying size and extent, all of which invert opinion words. The length of the corresponding negation scopes, i. e. sequences that are uniformly negated, is unevenly distributed, ranging from 1 to 18 words with a mean scope length of 1.74 words. Some negation scopes consist of only a single word, while the rest encompass two or more words.

Minimum length of negation scopes 1
Maximum length of negation scopes 18
Mean length of negation scopes 1.74
Share of negation scopes with 1 word
Share of negation scopes with 2 or more words
Share of negated polarity words
Mean number of negation scopes per document 75.18
Table 3: Descriptive statistics on the out-of-sample set after applying the in-sample policy.
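Descriptive statistics of the kind reported in Table 3 follow directly from a list of negation scopes; the helper below is illustrative and not the paper's code:

```python
def scope_statistics(scopes):
    """Descriptive statistics over negation scopes, each scope given as a
    list of uniformly negated words (illustrative helper)."""
    lengths = [len(s) for s in scopes]
    n = len(lengths)
    return {
        "min_length": min(lengths),
        "max_length": max(lengths),
        "mean_length": sum(lengths) / n,
        "share_single_word": sum(1 for l in lengths if l == 1) / n,
        "share_multi_word": sum(1 for l in lengths if l >= 2) / n,
    }
```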

4.3 Negation Cues

Negation scopes are typically initiated by specific cues that invert the meaning of surrounding words. These negation cues can be grouped into two categories. On the one hand, a negation cue can be explicit, such as not in the sentence, “This is not a terrible movie”. On the other hand, negations can also flip the meaning of sentences implicitly, e. g. “The actor did a great job in his last movie; it was the first and last time”.

Given this understanding, we investigate the individual effects of explicit and implicit negations on text reception. For this purpose, we group the words that initiate a negation scope according to their part-of-speech tag and depict in Figure 3 the resulting share of negation cues by word class. Here, the last bar relies on a list of explicit negations as proposed by Jia et al. (2009). We find evidence that a major share of negations (4 out of 8, i. e. 50 %) are evoked by explicit cues, while the remainder originate from implicit negations.
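Grouping cues by word class reduces to a counting exercise once part-of-speech tags are available; here the tagger output is passed in as a plain word-to-tag mapping, which is a simplifying assumption:

```python
from collections import Counter

def cues_by_word_class(cues, pos_of, explicit_list):
    """Group negation-cue words by part-of-speech tag and split them into
    explicit vs. implicit cues. `pos_of` is an assumed word -> POS mapping
    (in practice produced by a tagger); `explicit_list` follows a
    predefined cue list such as Jia et al. (2009)."""
    by_pos = Counter(pos_of[w] for w in cues)
    explicit = [w for w in cues if w in explicit_list]
    implicit = [w for w in cues if w not in explicit_list]
    return by_pos, explicit, implicit
```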


Figure 3: Negation cues per word class based on the policy learned after the learning iterations. The bars show the share of negated words in %, split into implicit and explicit negations.

We now provide additional descriptions of the appearance of implicit negations. For instance, we frequently observe words such as fairly or hopefully as part of implicit negation cues, which often transform a positive statement into a negative one, e. g. “Hopefully, the movie is better next time”. Contrary to our prior expectations, conjunctions seem unlikely to initiate a negation scope but are often responsible for double negations. Here, words such as but and nor frequently revert the meaning of negated words, i. e. terminate the negation scope, as in the sentence, “The movie is not great but absolutely unmissable”.

Table 4 provides statistics for all explicit negation cues based on Jia et al. (2009). (Here, we divide the corpus into two subsets: (a) an in-sample set of reviews, which we use to learn the agent, and (b) an out-of-sample set with the remaining documents, on which we test the implications. We forgo cross validation here since we desire a single model with which we can perform statistical analyses.) The frequency of the cues in the documents differs considerably, and some cues also involve a larger negation scope than others. For example, the negation word not negates several subsequent words on average, a figure that differs for the term without. Based on the Q-value, we assess their strength, i. e. a larger value indicates a higher reward from negating. We can also gain confidence in negations by comparing the gap between the highest and second-highest Q-value of each word. Interestingly, several words that were previously considered negation cues do not negate surrounding words in our case, namely, barely, less, hardly and rarely.

Word | Negating Action | Q-Value | Confidence (Difference to Second-Best Policy) | Occurrences | Mean Length of Negation Scope
Table 4: Explicit negation words from Jia et al. (2009) according to the policy learned after the learning iterations.

This provides evidence that static negation lists are generally inadequate in mimicking human perception. Even though explicit negations can be recognized with predefined lists of cues, implicit ones are often hidden and difficult to identify algorithmically. As a remedy to this shortcoming, our approach is capable of learning both kinds of negations and can handle them accordingly.

4.4 Behavioral Implications of Negation Processing

Policy learning is also a valuable tool for analyzing behavioral implications. In this section, we demonstrate potential policy learning applications that allow for the testing of certain hypotheses regarding human information processing of natural language.

As an example, our method allows one to test the hypothesis of whether negations appear evenly throughout different parts of narrative content. Such a test is not tractable for rules or supervised learning with intermediate labeling, since these introduce a subjective choice of negation cues, rules or labels; our method, in contrast, infers a negation policy from an exogenous response variable. Hence, we can evaluate where authors place negations when composing reviews: do they generally introduce negative aspects at the beginning or rather at the end? We thus compare the frequency of negations (as a proxy for negativity) across different parts of documents in the corpus. Let μ₁ denote the mean share of negated words in the first half of a document or sentence, and μ₂ the corresponding share in the second half. We can then test the null hypothesis H₀: μ₁ = μ₂ to infer behavioral implications.

As a result, we find that the second half of an out-of-sample document contains more negations than its first half on average. This difference is statistically significant when performing a two-sided Welch t-test. It also coincides with psychological research according to which senders of information are more likely to place negative content at the end (Legg and Sweeny, 2014); however, we can provide evidence outside of a laboratory setting by utilizing human information behavior in a real-life environment. Furthermore, the share of negated words also varies across different segments of sentences. At this level, however, the effect tends in the opposite direction: the first half of a sentence in the out-of-sample set contains more negations than the second half, a difference that is also statistically significant.
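The test statistic behind this comparison, Welch's t for two samples with unequal variances, can be computed directly; degrees of freedom and the resulting p-value are omitted in this sketch:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two samples with possibly unequal variances,
    as used to compare negation frequencies between document halves.
    (Sketch only; p-values would additionally require the
    Welch-Satterthwaite degrees of freedom.)"""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

In practice one would hand the two per-document negation shares to a library routine that also reports the p-value.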

4.5 Robustness Checks

We investigate the convergence of the reinforcement learning process to a stationary policy. Accordingly, Figure 4 visualizes the proportion of the gold standard's variance that is explained by the tone over the learning iterations. Here, the horizontal lines denote the explained variance in the benchmark setting (i. e. no negation handling). Both the in-sample and out-of-sample fits improve relatively quickly and outperform the benchmark considerably. In the end, we use the above policy, since subsequent iterations consistently show only minor fluctuations in the in-sample fit. This pattern indicates a fairly stationary outcome.

Figure 4: The fluctuating series show the converging explained variance of user ratings based on tone (i. e. sentiment) across the learning iterations using 10-fold cross validation, while the smooth lines show the same series after smoothing. Here, the dark gray series corresponds to the in-sample set, whereas the light gray series corresponds to the out-of-sample set. The horizontal line denotes the explained variance in the benchmark setting (without handling negations).

Next, we compare the performance of our reinforcement learning approach to common rules proposed in the literature (Hogenboom et al., 2011; Taboada et al., 2011), which essentially try to imitate the grammatical structure of a sentence. For this purpose, the negation rules search for the occurrence of specific cues based on predefined lists and then invert the meaning of a fixed number of surrounding words. We apply the individual rules to each document and again compare the out-of-sample fit; see Table 5 for the results. Negating a fixed window of subsequent words, similar to Dadvar et al. (2011), achieves the highest fit among all rules and exceeds the benchmark with no negation handling. Most importantly, our approach works even more accurately and dominates all of the rules.

Approach Correlation
Benchmark: no negation handling
Negating all subsequent words
Negating the whole sentence
Negating a fixed window of 1 word
Negating a fixed window of 2 words
Negating a fixed window of 3 words
Negating a fixed window of 4 words
Negating a fixed window of 5 words
Our approach (based on reinforcement learning)
Table 5: Comparison of the out-of-sample fit (using 10-fold cross validation) of our approach and different rules from the previous literature.
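A fixed-window baseline of the kind benchmarked above can be sketched as follows, using the common "not_" prefix to mark inverted words; the window size and cue list are placeholders:

```python
def negate_fixed_window(tokens, cues, window=4):
    """Baseline rule: after each negation cue, invert the polarity of the
    next `window` words by marking them with a 'not_' prefix.
    (Illustrative sketch of the rule family, not the paper's code.)"""
    out, remaining = [], 0
    for tok in tokens:
        if tok in cues:
            remaining = window  # open a new fixed-length negation scope
            out.append(tok)
        elif remaining > 0:
            out.append("not_" + tok)
            remaining -= 1
        else:
            out.append(tok)
    return out
```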

Finally, we evaluated further setups and methods for handling negations. We first tested alternative action sets for reinforcement learning that not only negate single words but also whole phrases, including backward negations. However, this configuration leads to inferior results on both the in-sample and out-of-sample sets. We also explored other dictionaries of opinionated words and the performance of generative probabilistic models; here, we find similar results for alternative dictionaries but inferior results for generative probabilistic models. All results confirm our findings (detailed results are available on request).

4.6 Comparison with Negation Processing in Financial News

Our second case study investigates information processing in financial markets by analyzing how qualitative content in financial disclosures influences stock prices. For this purpose, we use regulated ad hoc announcements from European companies (kindly provided by the Deutsche Gesellschaft für Ad-Hoc-Publizität, DGAP), all of which are written in English. These entail several advantages: ad hoc announcements must be authorized by company executives, their content is largely quality-checked by federal authorities, and previous evidence finds a strong relationship between content and the subsequent stock market reaction (Muntermann and Guettler, 2007). As our gold standard, we calculate the daily abnormal return of the corresponding stock (Konchitchki and O'Leary, 2011; MacKinlay, 1997). We measure the tone in these disclosures with the help of a finance-specific dictionary, the Loughran and McDonald (2011) dictionary, which contains entries with positive polarity as well as entries marked as negative.
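Dictionary-based tone measurement reduces to counting polarity hits; the net-tone formula below is a common choice and an assumption on our part, and the dictionary contents are placeholders rather than the actual Loughran-McDonald word lists:

```python
def tone(tokens, positive, negative):
    """Net tone of a document given sets of positive and negative
    dictionary entries: (pos - neg) / (pos + neg), a common convention.
    (Placeholder dictionaries; the Loughran-McDonald lists are not
    reproduced here.)"""
    pos = sum(1 for t in tokens if t in positive)
    neg = sum(1 for t in tokens if t in negative)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```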

As detailed below, we derive a policy for negation processing and briefly introduce our main findings. Again, our reinforcement learning approach strengthens the link between tone and market response: our method improves the out-of-sample explained variance over the benchmark without negation handling under 10-fold cross validation. Both the absolute explained variance and its improvement are, as expected, higher for movie reviews; this is a domain-specific disparity, since “very few control variables predict next-day returns” in efficient markets (Tetlock et al., 2008).

Next, we apply the learned policy to the out-of-sample documents and record its effects. Interestingly, we find that negations are less frequent in financial news. On average, each document contains fewer separate negation scopes, which invert a smaller share of all opinion words. The length of the corresponding negation scopes is also shorter, ranging from 1 to 14 words. Similarly to the results for the movie reviews, we find that the end of a document is more likely to contain negations than the beginning: on average, the second half of an out-of-sample announcement contains more negations than its first half, a difference that is significant at the 10 % level using a two-sided Welch t-test. This might suggest that authors of financial disclosures utilize negations as a tool to convey negative information through positive words. Overall, the results show that negations are domain-specific and depend on the particularities of the chosen prose. Additionally, the comparison strongly affirms that negation handling enhances the understanding of information processing for natural language of an arbitrary domain.

5 Discussion

In the following sections, we discuss the implications of our research, as our method not only improves text comprehension, but also suggests a new approach to understanding decision-making in the social and behavioral sciences. Furthermore, our research is highly relevant for practitioners when extending information systems with interfaces for natural language.

5.1 Extensibility

Our method of negation learning is not limited to the study of tone or sentiment; on the contrary, one can easily adapt it to all applications of natural language processing which utilize a gold standard and where negations play an important role. To accomplish this, one replaces the function calculating the sentiment with a corresponding counterpart that maps words onto a gold standard for the given application. Our only requirement is that this function takes into consideration – in some way – whether each word is negated or not.

To better illustrate this concept, we briefly describe how this works using two examples. We first consider a medical question-answering system into which users enter their symptoms and, in return, are provided a list of potential illnesses. The system bases its answers on a collection of medical reports, and one measures its performance by counting the number of correctly retrieved answers relative to the given input. For example, the system should return “fever” for the input “flu” when the corpus contains “a flu causes fever”. Reinforcement learning can improve accuracy in the presence of negations; i. e. it learns that diseases can also be unrelated to symptoms, as in the statement, “A flu does not result in high blood pressure”. As a second example, we assume an information system for negotiations that proposes offers to customers, who then reply in natural language. Subsequently, the information system automatically determines whether a customer’s response was positive or negative based on its content. A naïve bag-of-words model considers only specific cues (such as accept without context), whereas our approach can even learn to correctly classify cases with negations, e. g. by appending the prefix “not_” to words that are negated (Wiegand et al., 2010).
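The prefixing scheme of Wiegand et al. (2010) combines naturally with a learned per-word policy; the flat word-to-action mapping below is a simplification of the state-action function, introduced for illustration only:

```python
def apply_policy(tokens, policy, default="keep"):
    """Append the 'not_' prefix to words that the learned policy marks as
    negated (following Wiegand et al., 2010). `policy` is a simplified
    word -> action mapping; unknown words fall back to the non-negating
    default action."""
    return ["not_" + t if policy.get(t, default) == "negate" else t
            for t in tokens]
```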

We now generalize the reward function in order to search for an optimal negation policy for the above applications. For each document with a gold standard, we calculate the predictive performance of forecasting the gold standard both without negation handling and when using the current policy. The agent then gains a reward with a suitable constant: a small reward is added for default (i. e. non-negating) actions to avoid overfitting, while a further term rewards how much better the current policy approximates the gold standard compared to no treatment of negations. This definition thus extends reinforcement learning to seek optimal negation processing across almost arbitrary applications of natural language processing.
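A hedged sketch of such a generalized reward follows; the exact case structure and the value of the constant are our assumptions, not the paper's specification:

```python
def reward(action, default_action, perf_with_policy, perf_without, c=0.1):
    """Generalized reward sketch: a small bonus c for taking the default
    (non-negating) action to avoid overfitting, plus the improvement of
    the current policy's predictive performance over no negation handling.
    (The constant c and the case structure are illustrative assumptions.)"""
    bonus = c if action == default_action else 0.0
    return bonus + (perf_with_policy - perf_without)
```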

5.2 Limitations

The current research faces a number of limitations, which provide possibilities for further work. First and foremost, our method exhibits shortcomings when language is intricate, such as when a piece of text refers to content that is located in an entirely different part of the document. Sometimes one even requires additional background knowledge to correctly interpret the content, as in the statement, “The movie was not at all different from the last one”. This complexity poses challenges to natural language processing, not only for our method but also for those discussed in the related work. In addition, we predominantly focus on implementations where negations have a forward-looking scope, i. e. a negation cue affects subsequent words but not words that precede it. While we have also tested variants with a backward-looking analysis as part of our robustness checks, this still offers opportunities for additional variations with advanced actions, which could, for instance, invert the meaning of the full sentence or the subsequent object in order to further improve accuracy. Finally, further effort is necessary to develop an unsupervised variant that eliminates the need for a gold standard.

5.3 Implications for IS Research

The unique and enduring purpose of IS research as an academic field is to understand and improve the ways in which people, organizations and businesses create value with information (Briggs, 2015; Nunamaker and Briggs, 2011). Hence, the design and implementation of systems to provide “the right information to the right person at the right time was the raison d’être of early IS research” (Aggarwal et al., 2012). In the past, “information” predominantly referred to structured data, while companies nowadays also exploit unstructured data and especially textual materials. This development has found its way into IS research, which thus focuses on how textual information is processed. Among the earliest references to this area of inquiry is an article in Management Information Systems Quarterly from 1982 that explicitly addresses information processing (Robey and Taggart, 1982).

The field of information processing has gained great traction with advances in neuroscience and NeuroIS (Dimoka et al., 2011). By acquiring neuro-physiological data, scholars can gather information on how the human brain reacts to external stimuli. For this purpose, one measures (neuro-)physiological parameters (e. g. heart rate and skin conductance) to study the information processing of human agents (Riedl et al., 2014). This makes it possible to measure informational and cognitive overload in users in the course of their interaction with information systems (Riedl et al., 2014) and text-based information (Minas et al., 2014). However, NeuroIS remains very costly and the methods exhibit many weaknesses (Dimoka et al., 2012). For example, interpreting data from functional magnetic resonance imaging (fMRI) is hampered by the complexity and non-localizable activities of the brain.

Our computational intelligence method promises to fill the gap in existing approaches to understanding negations. It is analogous to revealed-preference estimation in economics, where the choices of individuals reveal their latent utility function, since we utilize text documents that users have tagged with a rating or gold standard. In addition, applying computational intelligence offers the potential to automatically unveil negations in texts without the need to manually label individual words. This entails several advantages: the understanding of language is highly subjective, whereas we derive the (latent) negation model that best fits the data. The results can thus also contribute to linguistic and psychological models of negation usage and representation (Khemlani et al., 2012). Overall, reinforcement learning manifests immense potential for future IS research involving the in-depth study of information processing.

5.4 Implications for Practitioners

Previous IS research argues that “a basic issue in the design of expert systems is how to equip them with representational and computational capabilities” (Zhang, 1987). As a remedy, this paper presents practitioners with a tool to improve the automated processing of natural language in their information systems. As such, our methodology can enhance the accuracy of decision support based on textual data. It does not necessarily require changes to the original algorithms; instead, our methodology can be built on top of existing routines and thus enables seamless integration into an existing tool chain.

Practitioners can benefit from negation handling when assessing the semantic orientation of written materials. For example, in the case of recommender systems and opinion mining, texts provide decision support by tracking the public mood in order to measure brand perception or judge the launch of a new product based on blog posts, comments, reviews or tweets. In our case study, we see a significant improvement in explained variance from adjusting for negated text units.

A better understanding of human language can spark business innovations in multiple areas. For instance, our approach facilitates the interactive control of information systems through natural language, such as in question-answering systems. With the advent of cognitive computing, the accurate processing of natural language will gain even more in importance (Modha et al., 2011). Ultimately, the relevance of our methodology goes beyond these examples and comprises almost all text-based applications of individuals, organizations and businesses.

6 Conclusion

Information is at the heart of all decision-making that affects humans, businesses and organizations. Consequently, understanding the formation of decisions represents a compelling research topic and yet knowledge gaps become visible when it comes to information processing with regard to natural language. Negations, for example, are a frequently utilized linguistic tool for expressing disapproval or framing negative content with positive words; however, existing methods struggle to accurately recognize and interpret negations in written text. Moreover, these methods are often tethered to large volumes of manually labeled data, which introduce an additional source of subjectivity and noise.

In order to address these shortcomings, this paper develops a novel approach based on reinforcement learning, which has the advantage of being human-like and thus capable of learning to replicate human decision-making. As a result, our evaluation shows superior performance in predicting negation scopes, while this method also reveals an unbiased approach to identifying negation scopes based on an exogenous response variable collected at document level. It thereby sheds light on the “ground truth” of negation scopes, which would have otherwise been latent and unobservable. In addition, reinforcement learning allows for hypothesis testing in order to pinpoint how humans process and act on negations. For instance, this paper demonstrates that negations are unequally distributed across document segments, showing that the second half of movie reviews and financial news items contain significantly more negations than the first half. Our approach serves as an intriguing alternative or supplement to experimental research, as it unleashes computational intelligence for the purpose of performing behavioral research, thereby fostering unprecedented insights into human information processing.


  • LaBerge and Samuels (1974) D. LaBerge, S. Samuels, Toward a Theory of Automatic Information Processing in Reading, Cognitive Psychology 6 (1974) 293–323.
  • Schneider and Shiffrin (1977) W. Schneider, R. M. Shiffrin, Controlled and Automatic Human Information Processing: Detection, Search, and Attention, Psychological Review 84 (1977) 1–66.
  • Briggs (2015) R. O. Briggs, Special Section: Cognitive Perspectives on Information Systems, Journal of Management Information Systems 31 (2015) 3–5.
  • Nunamaker and Briggs (2011) J. F. Nunamaker, R. O. Briggs, Toward A Broader Vision for Information Systems, ACM Transactions on Management Information Systems 2 (2011) 1–12.
  • Lacity and Janson (1994) M. C. Lacity, M. A. Janson, Understanding Qualitative Data: A Framework of Text Analysis Methods, Journal of Management Information Systems 11 (1994) 137–155.
  • Chau and Xu (2012) M. Chau, J. Xu, Business Intelligence in Blogs: Understanding Consumer Interactions and Communities, MIS Quarterly 36 (2012) 1189–1216.
  • Vodanovich et al. (2010) S. Vodanovich, D. Sundaram, M. Myers, Digital Natives and Ubiquitous Information Systems, Information Systems Research 21 (2010) 711–723.
  • Hirschberg and Manning (2015) J. Hirschberg, C. D. Manning, Advances in Natural Language Processing, Science 349 (2015) 261–266.
  • Vlas and Robinson (2012) R. E. Vlas, W. N. Robinson, Two Rule-Based Natural Language Strategies for Requirements Discovery and Classification in Open Source Software Development Projects, Journal of Management Information Systems 28 (2012) 11–38.
  • Cruz et al. (2015) N. P. Cruz, M. Taboada, R. Mitkov, A Machine-Learning Approach to Negation and Speculation Detection for Sentiment Analysis, Journal of the Association for Information Science and Technology, In Press (2015).
  • Pang and Lee (2008) B. Pang, L. Lee, Opinion Mining and Sentiment Analysis, Foundations and Trends in Information Retrieval 2 (2008) 1–135.
  • Councill et al. (2010) I. G. Councill, R. McDonald, L. Velikovich, What’s Great and What’s Not: Learning to Classify the Scope of Negation for Improved Sentiment Analysis, in: Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, Association for Computational Linguistics, Stroudsburg, PA, USA, 2010, pp. 51–59.
  • Archak et al. (2011) N. Archak, A. Ghose, P. G. Ipeirotis, Deriving the Pricing Power of Product Features by Mining Consumer Reviews, Management Science 57 (2011) 1485–1509.
  • Thet et al. (2010) T. T. Thet, J.-C. Na, C. S. G. Khoo, Aspect-Based Sentiment Analysis of Movie Reviews on Discussion Boards, Journal of Information Science 36 (2010) 823–848.
  • Jensen et al. (2013) M. L. Jensen, J. M. Averbeck, Z. Zhang, K. B. Wright, Credibility of Anonymous Online Product Reviews: A Language Expectancy Perspective, Journal of Management Information Systems 30 (2013) 293–324.
  • Henry (2008) E. Henry, Are Investors Influenced by How Earnings Press Releases are Written?, Journal of Business Communication 45 (2008) 363–407.
  • Tetlock (2007) P. C. Tetlock, Giving Content to Investor Sentiment: The Role of Media in the Stock Market, Journal of Finance 62 (2007) 1139–1168.
  • Loughran and McDonald (2011) T. Loughran, B. McDonald, When Is a Liability Not a Liability? Textual Analysis, Dictionaries, and 10-Ks, Journal of Finance 66 (2011) 35–65.
  • Cruz Díaz et al. (2012) N. P. Cruz Díaz, Maña López, Manuel J., J. M. Vázquez, V. P. Álvarez, A Machine-Learning Approach to Negation and Speculation Detection in Clinical Texts, Journal of the American Society for Information Science and Technology 63 (2012) 1398–1410.
  • Rokach et al. (2008) L. Rokach, R. Romano, O. Maimon, Negation Recognition in Medical Narrative Reports, Information Retrieval 11 (2008) 499–538.
  • Johnson and Cooper (2015) N. A. Johnson, R. B. Cooper, Understanding the Influence of Instant Messaging on Ending Concessions During Negotiations, Journal of Management Information Systems 31 (2015) 311–342.
  • Lai et al. (2002) H. Lai, W.-J. Lin, G. E. Kersten, Effects of Language Familiarity on e-Negotiation: Use of Native vs. Nonnative Language, in: Proceedings of the 42nd Hawaii International Conference on System Sciences (HICSS), IEEE, 2002, pp. 1–9.
  • Twitchell et al. (2013) D. P. Twitchell, M. L. Jensen, D. C. Derrick, J. K. Burgoon, J. F. Nunamaker, Negotiation Outcome Classification Using Language Features, Group Decision and Negotiation 22 (2013) 135–151.
  • Fuller et al. (2013) C. M. Fuller, D. P. Biros, J. Burgoon, J. Nunamaker, An Examination and Validation of Linguistic Constructs for Studying High-Stakes Deception, Group Decision and Negotiation 22 (2013) 117–134.
  • Zhou et al. (2004) L. Zhou, J. K. Burgoon, D. P. Twitchell, T. Qin, J. F. Nunamaker, A Comparison of Classification Methods for Predicting Deception in Computer-Mediated Communication, Journal of Management Information Systems 20 (2004) 139–166.
  • Niv et al. (2015) Y. Niv, R. Daniel, A. Geana, S. J. Gershman, Y. C. Leong, A. Radulescu, R. C. Wilson, Reinforcement Learning in Multidimensional Environments Relies on Attention Mechanisms, Journal of Neuroscience 35 (2015) 8145–8157.
  • Gershman et al. (2014) S. J. Gershman, A. Radulescu, K. A. Norman, Y. Niv, O. Sporns, Statistical Computations Underlying the Dynamics of Memory Updating, PLoS Computational Biology 10 (2014).
  • Fiser et al. (2010) J. Fiser, P. Berkes, G. Orbán, M. Lengyel, Statistically Optimal Perception and Learning: From Behavior to Neural Representations, Trends in Cognitive Sciences 14 (2010) 119–130.
  • Wilson (1999) T. D. Wilson, Models in Information Behaviour Research, Journal of Documentation 55 (1999) 249–270.
  • Stieglitz and Dang-Xuan (2013) S. Stieglitz, L. Dang-Xuan, Emotions and Information Diffusion in Social Media: Sentiment of Microblogs and Sharing Behavior, Journal of Management Information Systems 29 (2013) 217–248.
  • Trung et al. (2014) D. N. Trung, T. T. Nguyen, J. J. Jung, D. Choi, Understanding Effect of Sentiment Content Toward Information Diffusion Pattern in Online Social Networks: A Case Study on TweetScope, in: P. C. Vinh, V. Alagar, E. Vassev, A. Khare (Eds.), Context-Aware Systems and Applications, volume 128 of Lecture Notes for Computer Sciences, Social Informatics and Telecommunications Engineering, Springer, Cham, Switzerland, 2014, pp. 349–358.
  • Aggarwal et al. (2012) R. Aggarwal, R. Gopal, R. Sankaranarayanan, P. V. Singh, Blog, Blogger, and the Firm: Can Negative Employee Posts Lead to Positive Outcomes?, Information Systems Research 23 (2012) 306–322.
  • Wiegand et al. (2010) M. Wiegand, A. Balahur, B. Roth, D. Klakow, A. Montoyo, A Survey on the Role of Negation in Sentiment Analysis, in: Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, Association for Computational Linguistics, Stroudsburg, PA, USA, 2010, pp. 60–68.
  • Jia et al. (2009) L. Jia, C. Yu, W. Meng, The Effect of Negation on Sentiment Analysis and Retrieval Effectiveness, in: D. Cheung (Ed.), Proceeding of the 18th ACM Conference on Information and Knowledge Management (CIKM ’09), ACM, New York, NY, 2009, pp. 1827–1830.
  • Pröllochs et al. (2015) N. Pröllochs, S. Feuerriegel, D. Neumann, Enhancing Sentiment Analysis of Financial News by Detecting Negation Scopes, in: 48th Hawaii International Conference on System Sciences (HICSS), 2015, pp. 959–968.
  • Pröllochs et al. (2016) N. Pröllochs, S. Feuerriegel, D. Neumann, Detecting Negation Scopes for Financial News Sentiment Using Reinforcement Learning, in: 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 1164–1173.
  • Taboada et al. (2011) M. Taboada, J. Brooke, M. Tofiloski, K. Voll, M. Stede, Lexicon-Based Methods for Sentiment Analysis, Computational Linguistics 37 (2011) 267–307.
  • Hogenboom et al. (2011) A. Hogenboom, P. van Iterson, B. Heerschop, F. Frasincar, U. Kaymak, Determining Negation Scope and Strength in Sentiment Analysis, in: IEEE International Conference on Systems, Man, and Cybernetics, 2011, pp. 2589–2594.
  • Padmaja et al. (2014) S. Padmaja, S. Fatima, S. Bandu, Evaluating Sentiment Analysis Methods and Identifying Scope of Negation in Newspaper Articles, International Journal of Advanced Research in Artificial Intelligence 3 (2014) 1–6.
  • Sutton and Barto (1998) R. S. Sutton, A. G. Barto, Reinforcement Learning: An Introduction, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, 1998.
  • Watkins and Dayan (1992) C. J. C. H. Watkins, P. Dayan, Q-Learning, Machine Learning 8 (1992) 279–292.
  • Hu and Wellman (2003) J. Hu, M. P. Wellman, Nash Q-Learning for General-Sum Stochastic Games, Journal of Machine Learning Research 4 (2003) 1039–1069.
  • Kaelbling et al. (1996) L. P. Kaelbling, M. L. Littman, A. W. Moore, Reinforcement Learning: A Survey, Journal of Artificial Intelligence Research 4 (1996) 237–285.
  • Pang and Lee (2005) B. Pang, L. Lee, Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales, in: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL ’05), 2005, pp. 115–124.
  • Turney (2002) P. D. Turney, Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL ’02), Association for Computational Linguistics, 2002, pp. 417–424.
  • Legg and Sweeny (2014) A. M. Legg, K. Sweeny, Do You Want the Good News or the Bad News First? The Nature and Consequences of News Order Preferences, Personality and Social Psychology Bulletin 40 (2014) 279–288.
  • Dadvar et al. (2011) M. Dadvar, C. Hauff, F. de Jong, Scope of Negation Detection in Sentiment Analysis, in: Proceedings of the Dutch-Belgian Information Retrieval Workshop, University of Amsterdam, Amsterdam, Netherlands, 2011, pp. 16–20.
  • Muntermann and Guettler (2007) J. Muntermann, A. Guettler, Intraday Stock Price Effects of Ad Hoc Disclosures: The German Case, Journal of International Financial Markets, Institutions and Money 17 (2007) 1–24.
  • Konchitchki and O’Leary (2011) Y. Konchitchki, D. E. O’Leary, Event Study Methodologies in Information Systems Research, International Journal of Accounting Information Systems 12 (2011) 99–115.
  • MacKinlay (1997) A. C. MacKinlay, Event Studies in Economics and Finance, Journal of Economic Literature 35 (1997) 13–39.
  • Tetlock et al. (2008) P. C. Tetlock, M. Saar-Tsechansky, S. Macskassy, More Than Words: Quantifying Language to Measure Firms’ Fundamentals, Journal of Finance 63 (2008) 1437–1467.
  • Robey and Taggart (1982) D. Robey, W. Taggart, Human Information Processing in Information and Decision Support Systems, MIS Quarterly 6 (1982) 61.
  • Dimoka et al. (2011) A. Dimoka, P. A. Pavlou, F. D. Davis, NeuroIS: The Potential of Cognitive Neuroscience for Information Systems Research: Research Commentary, Information Systems Research 22 (2011) 687–702.
  • Riedl et al. (2014) R. Riedl, F. D. Davis, A. R. Hevner, Towards a NeuroIS Research Methodology: Intensifying the Discussion on Methods, Tools, and Measurement, Journal of the Association for Information Systems 15 (2014) 1–35.
  • Minas et al. (2014) R. K. Minas, R. F. Potter, A. R. Dennis, V. Bartelt, S. Bae, Putting on the Thinking Cap: Using NeuroIS to Understand Information Processing Biases in Virtual Teams, Journal of Management Information Systems 30 (2014) 49–82.
  • Dimoka et al. (2012) A. Dimoka, R. D. Banker, I. Benbasat, F. D. Davis, A. R. Dennis, D. Gefen, A. Gupta, A. Ischebeck, P. H. Kenning, P. A. Pavlou, et al., On the Use of Neurophysiological Tools in IS Research: Developing a Research Agenda for NeuroIS, MIS Quarterly 36 (2012) 679–702.
  • Khemlani et al. (2012) S. Khemlani, I. Orenes, P. N. Johnson-Laird, Negation: A Theory of Its Meaning, Representation, and Use, Journal of Cognitive Psychology 24 (2012) 541–559.
  • Zhang (1987) W.-R. Zhang, POOL: A Semantic Model for Approximate Reasoning and Its Application in Decision Support, Journal of Management Information Systems 3 (1987) 65–78.
  • Modha et al. (2011) D. S. Modha, R. Ananthanarayanan, S. K. Esser, A. Ndirango, A. J. Sherbondy, R. Singh, Cognitive Computing, Communications of the ACM 54 (2011) 62.