1 Introduction
The Enlightenment was the golden age of human rationality. In the eyes of a rationalist like Descartes or Spinoza, human reasoning is flawless, marching toward uncovering ultimate truth. Holding that human experience is at best partial and at worst misleading, rationalists took reasoning to be the means for attaining truth. In Kantian terminology, rationalists maintained that human a posteriori knowledge is prone to imperfections, but a priori knowledge is free of such impurities. Upon careful empirical investigation, however, human reasoning was found to systematically deviate from normative principles like the laws of probability and logic in numerous ways, thus strongly challenging rationalism. A few centuries after the birth of rationalism, human reasoning is now portrayed as anything but flawless, filled with numerous misjudgments, biases, and cognitive fallacies (e.g., Simon, 1972; Tversky & Kahneman, 1974; Kahneman, 2011).
In the past few decades, empirical investigations into human reasoning have led to the discovery of new cognitive fallacies, giving rise to a large, ever-growing number of documented fallacies, a state of affairs which can fairly be characterized as a zoo of cognitive fallacies (the term 'cognitive fallacy zoo' is inspired by analogous terminology in the computational complexity literature, the 'Complexity Zoo'; for details, visit https://complexityzoo.uwaterloo.ca/Complexity_Zoo) (e.g., Tversky & Kahneman, 1973, 1981b). A glance at over a hundred cognitive fallacies listed on Wikipedia (see Fig. 1) attests to this claim.
In this largely methodological work, we formally present a principled way to bring order to the cognitive fallacy zoo, allowing for a precise characterization of how various fallacies relate to one another. We introduce the idea of establishing an implication relationship (IR), denoted by $\Rightarrow$, between a pair of cognitive fallacies, formally characterizing how the occurrence of one fallacy implies another. More formally, for two cognitive fallacies $F_1, F_2$, the expression $F_1 \Rightarrow F_2$ denotes that $F_1$ leads to $F_2$, i.e., the occurrence of $F_1$ logically implies the occurrence of $F_2$. As a proof-of-concept, we present several examples of IRs involving experimentally well-documented cognitive fallacies: base-rate neglect (Tversky & Kahneman, 1981a), availability bias (Tversky & Kahneman, 1973), the conjunction fallacy (Tversky & Kahneman, 1983), the decoy effect (Huber, Payne, & Puto, 1982), the framing effect (Tversky & Kahneman, 1981b), and the Allais paradox (Allais, 1953).
The idea of establishing IRs between pairs of cognitive fallacies is analogous to, and partly inspired by, the foundational concept of reduction in computational complexity theory (see Karp, 1972; Papadimitriou, 2003; Arora & Barak, 2009; Sipser, 2006), which has played a profound role in theoretical computer science, allowing us to formally establish how various computational problems relate to each other and how the solution to one sheds light on that of another. After a brief discussion of the role of reduction in computational complexity, we return to the formal characterization of the notion of IR and subsequently present several examples of IRs involving experimentally well-documented cognitive fallacies. But first, some historical notes on reduction in computational complexity.
2 Reduction in Computational Complexity
The notion of reduction plays a fundamental role in computational complexity theory, and in theoretical computer science more generally. Informally put, a computational problem $A$ is reducible to a computational problem $B$ if every instance of $A$ can be transformed into an instance of $B$. Therefore, the reduction of $A$ to $B$ offers an indirect way of solving $A$: first reduce $A$ to $B$, and then solve $B$.
To further clarify the idea of reduction, we provide two examples. As a first example, consider two well-known computational problems, namely, Hamiltonian-Path and Hamiltonian-Cycle. The Hamiltonian-Path problem is defined as follows: given a (directed) graph $G$, is there a path which visits every node of $G$ exactly once? The Hamiltonian-Cycle problem is defined as follows: given a (directed) graph $G$, is there a cycle which visits every node of $G$ exactly once? It turns out that Hamiltonian-Cycle is reducible to Hamiltonian-Path. Given that the definitions of Hamiltonian-Cycle and Hamiltonian-Path are closely related (since a cycle is a path whose endpoints coincide), this reduction is not especially surprising.
As a second example, let us consider Hamiltonian-Path together with the 3-Colorability problem, defined as follows: given a graph $G$ and three distinct colors, can you color the nodes of $G$ such that the endpoints of every edge are colored differently? At first glance, Hamiltonian-Path and 3-Colorability appear to have no connection with one another whatsoever. Surprisingly, however, it turns out that Hamiltonian-Path can be reduced to 3-Colorability (the reduction can be established by a chain of straightforward reductions: Hamiltonian-Path to SAT, SAT to 3-SAT, and finally, 3-SAT to 3-Colorability). Thus, the question of whether a graph has a Hamiltonian path can be resolved by answering whether a corresponding graph is 3-colorable.
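To make the flavor of such transformations concrete, here is a minimal sketch of the standard Hamiltonian-Cycle-to-Hamiltonian-Path construction for undirected graphs, paired with a brute-force checker suitable only for tiny instances. The graph encoding and function names are ours, introduced purely for illustration.

```python
from itertools import permutations

def reduce_hc_to_hp(graph):
    """Transform an undirected graph (dict: node -> set of neighbours) so that
    the original has a Hamiltonian cycle iff the result has a Hamiltonian path.
    Standard construction: duplicate a vertex v as v', then attach fresh
    degree-1 endpoints s (to v) and t (to v') to pin down the path's ends."""
    v = next(iter(graph))
    g = {u: set(nbrs) for u, nbrs in graph.items()}
    g["v'"] = set(graph[v])           # v' gets the same neighbours as v
    for u in graph[v]:
        g[u].add("v'")
    g["s"], g["t"] = {v}, {"v'"}      # fresh endpoints
    g[v].add("s")
    g["v'"].add("t")
    return g

def has_hamiltonian_path(graph):
    """Brute-force check over all vertex orderings (tiny instances only)."""
    return any(all(b in graph[a] for a, b in zip(p, p[1:]))
               for p in permutations(graph))

# A triangle has a Hamiltonian cycle, so its transform has a Hamiltonian path.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(has_hamiltonian_path(reduce_hc_to_hp(triangle)))  # True
```

The point is not efficiency but form: solving the transformed instance answers the original question, which is exactly the indirect route to a solution that reduction provides.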
The idea of reduction has had profound implications for theoretical computer science, allowing for formally connecting seemingly unrelated computational problems to one another (see Fig. 2(a)). Had reduction not been introduced into theoretical computer science, every computational problem would have had to be investigated on its own, because the solution to one would not have shed any light on the solution to others. It was a major breakthrough when Richard Karp (1972) showed that a key computational problem called Satisfiability could be reduced to a number of other well-known computational problems, a contribution for which he was eventually awarded the Turing Award in 1985. It is also worth noting that the (in)famous P vs. NP problem in theoretical computer science, at its core, concerns the possibility or impossibility of establishing a particular form of reduction.
One might wonder whether an idea broadly analogous to reduction in computational complexity could be introduced into cognitive science, allowing for formally connecting seemingly different cognitive fallacies with one another and, hence, bringing order to the cognitive fallacy zoo in a principled manner. Primarily motivated by this, and by analogy with the notion of reduction in theoretical computer science, we introduce the idea of establishing IRs between cognitive fallacies, formally characterizing how one fallacy implies another.
3 Implication Relationships: Formalization
In what follows, we first formally introduce the idea of establishing an implication relationship (IR), denoted by $\Rightarrow$, between a pair of cognitive fallacies, followed by several examples of IRs involving experimentally well-documented cognitive fallacies.
Definition (Implication Relationship). For two cognitive fallacies/biases $F_1$ and $F_2$, the fallacy $F_1$ is said to implicate the fallacy $F_2$ (denoted by $F_1 \Rightarrow F_2$) if and only if the occurrence of $F_1$ logically implies the occurrence of $F_2$.
4 Examples of Implication Relationships
As a proof-of-concept, we next present several examples of IRs involving experimentally well-documented cognitive fallacies, namely, base-rate neglect (Tversky & Kahneman, 1981a), availability bias (Tversky & Kahneman, 1973), the conjunction fallacy (Tversky & Kahneman, 1983), the decoy effect (Huber et al., 1982), the framing effect (Tversky & Kahneman, 1981b), and the Allais paradox (Allais, 1953).
4.1 Case Study 1: Decoy Effect $\Rightarrow$ Framing Effect
As our first example, we formally establish an IR between two well-documented cognitive fallacies, namely, the decoy effect (DE) and the framing effect (FE).
The Framing Effect (FE): If people produce different responses for two equivalent tasks, the framing effect (FE) has occurred (Tversky & Kahneman, 1981b; Kahneman & Tversky, 1984). In that light, FE is a violation of the extensionality principle (Bourgeois-Gironde & Giraud, 2009), which prescribes that two equivalent tasks should elicit the same response.
FE is well captured by Tversky & Kahneman (1981b): Subjects were asked to "imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume the exact scientific estimate of the consequences of the programs are as follows." In one condition, subjects were presented with a choice between Programs A and B, while in another condition, subjects were asked to choose between Programs C and D:

Program A: 200 people will be saved.

Program B: There is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

Program C: 400 people will die.

Program D: There is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.

Despite the equivalence of these pairs of Programs (A is equivalent to C, and B to D), a majority of the first group preferred Program A (72%), while a majority of the second group preferred Program D (78%).
The Decoy Effect (DE): The decoy effect (DE) refers to a change in people's preference between two options when a third, asymmetrically-dominated option is presented, i.e., an option which is inferior to one option in all respects but, in comparison to the other option, is inferior in some respects and superior in others. In that light, DE is a violation of the independence of irrelevant alternatives axiom of rational decision theory (Ray, 1973), which prescribes the following: if $x$ is preferred to $y$ out of the choice set $\{x, y\}$, introducing a third option $z$, hence expanding the choice set to $\{x, y, z\}$, should not make $y$ preferable to $x$.
We are now well-positioned to formally present our result.
Proposition 1. The following holds: DE $\Rightarrow$ FE.
Proof. According to normative principles, the preference patterns for the choice sets $\{x, y\}$ and $\{x, y, z\}$ should be the same, with $z$ being an asymmetrically-dominated option. The rationale is the following: since $z$ is inferior to one option in all respects, rationally, $z$ should never be chosen; hence, the preference pattern for the choice sets $\{x, y\}$ and $\{x, y, z\}$ should be identical. Therefore, whenever people's preference pattern for the choice sets $\{x, y\}$ and $\{x, y, z\}$ differs (which is the case for DE), it logically implies a violation of the extensionality principle, hence granting the occurrence of FE. This concludes the proof.
The message of Proposition 1 is simple: from the standpoint of normative principles, the two choice sets $\{x, y\}$ and $\{x, y, z\}$ (with $z$ being an asymmetrically-dominated option) are equivalent; therefore, people's showing different preference patterns for the two choice sets, as is the case in DE, is a clear indication of FE. Proposition 1, therefore, formally establishes that the occurrence of DE leads to the occurrence of FE; that is, whenever DE occurs, so does FE.
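The argument can be illustrated with a small numeric sketch. The two-attribute options and the scoring rule below are hypothetical, chosen only so that $z$ is asymmetrically dominated by $y$:

```python
def rational_choice(options, utility):
    """A rational agent with a fixed utility function picks the best option."""
    return max(options, key=utility)

# Hypothetical two-attribute options (value, quality): z is asymmetrically
# dominated by y (worse than y on both attributes, but better than x on
# quality while worse than x on value).
attrs = {"x": (9, 3), "y": (6, 7), "z": (4, 6)}
utility = lambda o: sum(attrs[o])  # any fixed scoring rule will do

# The dominated decoy z never changes a rational agent's choice ...
print(rational_choice({"x", "y"}, utility),
      rational_choice({"x", "y", "z"}, utility))  # y y
# ... so an observed preference change (DE) is a differing response to two
# normatively equivalent tasks, i.e., an instance of FE.
```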
4.2 Case Study 2: Base-Rate Neglect $\Rightarrow$ Conjunction Fallacy
As our second example, we formally establish an IR between another pair of well-documented cognitive fallacies, namely, base-rate neglect (BRN) and the conjunction fallacy (CF). BRN and CF can be characterized as follows.

Base-Rate Neglect (BRN): Base-rate neglect (Tversky & Kahneman, 1981a) refers to people not considering prior probabilities in their judgments under uncertainty.
The Conjunction Fallacy (CF): For two events $A$ and $B$ presented with evidence $E$, people judge the probability of the conjunction $A \wedge B$ to be greater than that of $A$ (or $B$) in isolation. That is, more formally, people judge: $P(A \wedge B|E) > P(A|E)$. In that light, CF is a clear violation of the axioms of probability (since $A \wedge B \subseteq A$, the probability of $A \wedge B$, a subset of $A$, cannot in principle be greater than that of $A$).
CF is well captured in the famous Linda experiment by Tversky and Kahneman (1983). Presented with a description ($E$) of Linda, a politically active, single, outspoken, and very bright 31-year-old female, people overwhelmingly judge that Linda is more likely to be a feminist bank teller ($A \wedge B$) than to be a bank teller ($A$).
We are now well-positioned to formally present our result.
Proposition 2. The following holds: BRN $\Rightarrow$ CF.

Proof. Since $P(A \wedge B|E) = \frac{P(E|A \wedge B)\,P(A \wedge B)}{P(E)}$ and $P(A|E) = \frac{P(E|A)\,P(A)}{P(E)}$, we have:

$$\frac{P(A \wedge B|E)}{P(A|E)} = \frac{P(E|A \wedge B)}{P(E|A)} \cdot \frac{P(A \wedge B)}{P(A)},$$

where the term $\frac{P(A \wedge B)}{P(A)}$ indicates the ratio between the priors $P(A \wedge B)$ and $P(A)$. If BRN occurs (which results in the term $\frac{P(A \wedge B)}{P(A)}$ being dropped), it follows that:

$$\frac{P(A \wedge B|E)}{P(A|E)} = \frac{P(E|A \wedge B)}{P(E|A)}.$$

Assuming that $P(E|A \wedge B) > P(E|A)$, which is the case in the context of CF (see the Linda experiment discussed above), it follows that:

$$P(A \wedge B|E) > P(A|E),$$

hence CF occurs. This completes the proof.
In simple terms, Proposition 2 shows that the occurrence of BRN leads to the occurrence of CF, i.e., BRN gives rise to CF.
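The proof of Proposition 2 can be checked numerically. The likelihoods and priors below are hypothetical, Linda-style values chosen for illustration, not estimates from any experiment:

```python
def posterior_ratio(lik_conj, lik_A, prior_conj, prior_A):
    """Bayes: P(A&B|E)/P(A|E) = [P(E|A&B)/P(E|A)] * [P(A&B)/P(A)]."""
    return (lik_conj / lik_A) * (prior_conj / prior_A)

# Hypothetical Linda-style numbers: the description E fits a feminist bank
# teller (A&B) far better than a generic bank teller (A), while the priors
# respect the probability axioms, P(A&B) <= P(A).
lik_conj, lik_A = 0.90, 0.05      # P(E|A&B) >> P(E|A)
prior_conj, prior_A = 0.001, 0.02

full = posterior_ratio(lik_conj, lik_A, prior_conj, prior_A)
brn = lik_conj / lik_A            # BRN: the prior ratio is dropped

# With priors intact the ratio stays below 1 (no fallacy); dropping them
# pushes it above 1, i.e., P(A&B|E) > P(A|E): the conjunction fallacy.
print(full < 1 < brn)  # True
```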
4.3 Case Study 3: Allais Paradox $\Rightarrow$ Framing Effect
As our third example, we formally establish an IR between the Allais paradox (APX) and FE. APX can be characterized as follows. (See Sec. 4.1 for the characterization of FE.)
The Allais Paradox (APX): The Allais paradox refers to an observed reversal in participants' choices in two different experiments, each of which consists of a choice between two gambles, while in fact, according to the independence axiom of rational decision-making (Von Neumann & Morgenstern, 1953), no such reversal should occur. That is, although the independence axiom grants the equivalence of the two experiments, the pattern of people's preference nevertheless reverses from the first experiment to the second. (The reader is referred to Allais (1953) for a clear description of the two experiments.)
Proposition 3. The following holds: APX $\Rightarrow$ FE.
Proof. The proof is evident from the characterization of APX given above. Although the independence axiom of rational decisionmaking (Von Neumann & Morgenstern, 1953) grants the equivalence of the two experiments entertained in APX, the pattern of people’s preference nevertheless reverses from one to the other. That is, in the case of APX, people produce different responses for two equivalent experiments. Therefore, the occurrence of the Allais paradox logically implies the occurrence of the framing effect. This concludes the proof.
4.4 Case Study 4: Availability Bias $\Rightarrow$ Conjunction Fallacy
As our final example, we formally establish an IR between the well-documented availability bias (AvB) and CF. AvB can be concisely characterized as follows. (See Sec. 4.2 for the characterization of CF.)
The Availability Bias (AvB): Extreme events come to mind easily; people overestimate their probabilities and overrepresent them in decision-making (Tversky & Kahneman, 1973; Lieder et al., 2018; Nobandegani et al., 2018). Formally, people overestimate the probability of an event $o$, $P(o)$, in proportion to the absolute value of its subjective utility, $|u(o)|$ (Lieder et al., 2018; Bordalo, Gennaioli, & Shleifer, 2012). That is, people's subjective probability of event $o$ is given by $\hat{P}(o) \propto P(o)\,|u(o)|$. (We must emphasize that our establishing of the IR between AvB and CF depends only on the broad assumption that the more extreme an event is, the more people overestimate its probability, and holds for any form of $\hat{P}(o)$ satisfying this condition (Nobandegani et al., 2018). The form adopted here is therefore only one choice out of infinitely many possibilities satisfying the said condition, and hence is not necessary.)
Proposition 4. Let $A$ and $B$ be two events, and let $C$ denote the event corresponding to the occurrence of $A$ and $B$ together, i.e., the conjunction of the two events $A$ and $B$. Assuming that $|u(C)| \gg |u(A)|, |u(B)|$, the following holds: AvB $\Rightarrow$ CF.

Proof. According to the characterization of AvB given above, $\hat{P}(C) \propto P(C)\,|u(C)|$ and $\hat{P}(A) \propto P(A)\,|u(A)|$. We have,

$$\frac{\hat{P}(C)}{\hat{P}(A)} = \frac{P(C)}{P(A)} \cdot \frac{|u(C)|}{|u(A)|}.$$

It follows from the axioms of probability that $P(C) \leq P(A)$; hence, $\frac{P(A)}{P(C)} \geq 1$. However, since $|u(C)| \gg |u(A)|$, it follows that $\frac{|u(C)|}{|u(A)|} > \frac{P(A)}{P(C)}$. Therefore, altogether, $\frac{P(C)}{P(A)} \cdot \frac{|u(C)|}{|u(A)|} > 1$, which implies $\hat{P}(C) > \hat{P}(A)$, granting the validity of the conjunction fallacy (CF). This concludes the proof.
The message of Proposition 4 is simple. If people judge the conjunction of two events to be much more extreme than each of them individually (i.e., $|u(C)| \gg |u(A)|, |u(B)|$), then the occurrence of AvB leads to the occurrence of CF; that is, whenever AvB happens, so does CF.
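A small numeric sketch illustrates Proposition 4, assuming the particular form $\hat{P}(o) \propto P(o)\,|u(o)|$ discussed above; the probabilities and utilities are hypothetical:

```python
def subjective_prob(p, u):
    """Availability-biased subjective probability, assuming the particular
    form P_hat(o) proportional to P(o)*|u(o)| (one of many admissible forms)."""
    return p * abs(u)

# Hypothetical numbers: the conjunction C = A&B is less probable than A
# (as the axioms require), but judged far more extreme: |u(C)| >> |u(A)|.
P_A, P_C = 0.10, 0.04
u_A, u_C = -2.0, -50.0

# The extremity term overwhelms the probability ordering, so the conjunction
# is judged more probable than its own conjunct: the conjunction fallacy.
print(subjective_prob(P_C, u_C) > subjective_prob(P_A, u_A))  # True
```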
5 General Discussion
In this largely methodological work, we introduced the notion of an implication relationship (IR) between a pair of cognitive fallacies, formally characterizing how one logically implies the other.
A closer examination of Propositions 1 to 4 and their proofs reveals that IRs can be categorized into two broad types: logical-IRs (denoted by $\Rightarrow_l$) and causal-IRs (denoted by $\Rightarrow_c$).

Establishing a logical-IR, $F_1 \Rightarrow_l F_2$, from a fallacy $F_1$ to another fallacy $F_2$ implies that $F_1$ is a special case of $F_2$, with every instance of $F_1$ being an instance of $F_2$. For example, a closer examination of Proposition 1 and its proof reveals that DE is a special case of FE, with every instance of DE being an instance of FE in disguise. The same understanding holds for Proposition 3 and its proof, indicating that APX is simply a special case of FE, with every instance of APX being an instance of FE in disguise. Hence, using our newly introduced notation: DE $\Rightarrow_l$ FE and APX $\Rightarrow_l$ FE.

Establishing a causal-IR, $F_1 \Rightarrow_c F_2$, from a fallacy $F_1$ to another fallacy $F_2$ implies that the occurrence of $F_1$ brings about (i.e., causes) the occurrence of $F_2$. For example, a closer examination of Proposition 2 and its proof reveals that the occurrence of BRN brings about the occurrence of CF, i.e., there is a cause-effect relationship between BRN and CF, with BRN being the cause and CF the effect. The same understanding holds for Proposition 4 and its proof, indicating that the occurrence of AvB brings about the occurrence of CF, with AvB being the cause and CF the effect. Hence, using our newly introduced notation: BRN $\Rightarrow_c$ CF and AvB $\Rightarrow_c$ CF.

Drawing further on the analogy between IR and reduction in computational complexity, it is worth noting that there also exist several types of reduction in computational complexity, namely, Karp's reduction, Cook's reduction, truth-table reduction, L-reduction, A-reduction, P-reduction, E-reduction, AP-reduction, PTAS-reduction, etc.
Importantly, logical-IRs and causal-IRs have quite different implications. If $F_1 \Rightarrow_l F_2$ holds (implying that $F_1$ is a special case of $F_2$, as discussed above), it then follows that a complete account of $F_2$ should also account for $F_1$, and, in that sense, accounting for $F_2$ is more demanding than accounting for $F_1$ (more demanding in that a complete account of $F_2$ would necessarily have to explain a wider range of cases, including all instances of $F_1$ as a subset). For example, since DE is a special case of FE (see Proposition 1 and its proof), that is, DE is nothing but FE in disguise, any complete account of FE inevitably should also account for DE, implying that accounting for FE is more demanding than accounting solely for its special case, DE.

However, if $F_1 \Rightarrow_c F_2$ holds (implying that the occurrence of $F_1$ brings about $F_2$), it then follows that an account of $F_1$ naturally serves as an account of $F_2$, due to the following rationale: if a mechanism $M$ causes $F_1$, and $F_1$ causes $F_2$, it then follows that $M$ causes $F_2$, with $F_1$ serving as a mediator. In that light, establishing causal-IRs between various cognitive biases/fallacies has an intriguing implication: for any chain of causal-IRs $F_1 \Rightarrow_c F_2 \Rightarrow_c \cdots \Rightarrow_c F_n$, any mechanistic account of $F_1$ naturally serves as an account of $F_2, \ldots, F_n$. For example, since the occurrence of BRN causes the occurrence of CF (see Proposition 2 and its proof), any mechanistic account of BRN naturally serves as an account of CF, with BRN serving as a mediator. This understanding has an intriguing implication for studies of cognitive fallacies in general: establishing a chain of causal-IRs $F_1 \Rightarrow_c F_2 \Rightarrow_c \cdots \Rightarrow_c F_n$ clearly reveals which of the fallacies is most pivotal or fundamental to account for; the answer is, of course, the leftmost fallacy in the chain, $F_1$. This strongly suggests that directing efforts toward finding a comprehensive, satisfying account of $F_1$ would be the most rewarding research agenda, because, thanks to the established chain of causal-IRs, we would get comprehensive, satisfying accounts of $F_2, \ldots, F_n$ for free!
Identifying IRs could therefore systematize and guide a research agenda, substantially increasing research efficiency.
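As a toy illustration of how such an IR map could guide a research agenda, the snippet below records the IRs established in this paper as a directed graph and extracts the fallacies that no established IR points to; these sources are the natural targets for mechanistic accounts, since (along causal chains) accounts of them propagate downstream:

```python
# Toy "IR map": each edge points from an implicating fallacy to the implied one.
causal = {("BRN", "CF"), ("AvB", "CF")}    # causal-IRs (Propositions 2 and 4)
logical = {("DE", "FE"), ("APX", "FE")}    # logical-IRs (Propositions 1 and 3)

all_edges = causal | logical
nodes = {n for edge in all_edges for n in edge}
implied = {dst for _, dst in all_edges}

# Fallacies with no established IR pointing at them: sources of the map.
print(sorted(nodes - implied))  # ['APX', 'AvB', 'BRN', 'DE']
```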
Proposition 4, establishing AvB $\Rightarrow$ CF, demonstrates an interesting possibility wherein, under a set of auxiliary assumptions (e.g., $|u(C)| \gg |u(A)|, |u(B)|$ in this case), an IR can be established between two fallacies. The idea of establishing IRs under a set of assumptions widens the applicability of the notion of IR, allowing it to link together pairs of cognitive fallacies that would have little connection unless further assumptions are invoked. Drawing again on the analogy between IR and reduction in computational complexity, it is worth noting that in establishing reductions it is common practice to invoke various assumptions/constraints on the characterization of computational problems (e.g., a restricted variant of SAT instead of SAT) and/or on the forms of reductions themselves (e.g., polynomial-time or linear-time reductions). Importantly, these auxiliary assumptions should be empirically confirmed, motivating new and exciting experimental avenues of research: empirical confirmation of such assumptions justifies invoking them, whereas empirical disconfirmation discredits the established IR, inviting attempts to establish other IRs (in the hope that they would survive empirical tests), or to invoke other empirically validated assumptions which would save the established IR, motivating new theoretical and empirical work.
In this work, as a proof-of-concept, we established IRs between several well-documented cognitive biases; see Fig. 2(b). Future work should investigate the possibility of establishing IRs between a wider range of cognitive biases/fallacies, with the ultimate goal of developing a principled, comprehensive map of cognitive biases/fallacies, broadly resembling what is shown in Fig. 2(a) in the context of computational complexity. While many questions remain open, and much work is left to be done in this direction, we hope to have made some progress toward systematically bringing order to the cognitive fallacy zoo. We see our work as a first step in this direction.
Much like the fundamental concept of reduction in computational complexity, eminently featured in the work of Karp (1972), which revolutionized computational complexity and, arguably, theoretical computer science as a whole, we hope that pursuing the vision outlined in this paper will have a comparable impact on cognitive science and cognitive psychology.
Acknowledgments We would like to thank Marcel Montrey, Kevin da Silva Castanheira, and Peter Helfer for helpful comments on an earlier draft of this work. This work is supported by an operating grant to TRS from the Natural Sciences and Engineering Research Council of Canada.
References
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21(4), 503–546. http://www.jstor.org/stable/1907921
Arora, S., & Barak, B. (2009). Computational Complexity: A Modern Approach. Cambridge University Press.
Bordalo, P., Gennaioli, N., & Shleifer, A. (2012). Salience theory of choice under risk. Quarterly Journal of Economics, 127(3), 1243–1285.
Bourgeois-Gironde, S., & Giraud, R. (2009). Framing effects as violations of extensionality. Theory and Decision, 67(4), 385–404.
Huber, J., Payne, J. W., & Puto, C. (1982). Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9(1), 90–98.
Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350.
Karp, R. M. (1972). Reducibility among combinatorial problems. In Complexity of Computer Computations (pp. 85–103). Springer.
Lieder, F., Griffiths, T. L., & Hsu, M. (2018). Overrepresentation of extreme events in decision making reflects rational use of cognitive resources. Psychological Review.
Nobandegani, A. S., da Silva Castanheira, K., Otto, A. R., & Shultz, T. R. (2018). Over-representation of extreme events in decision-making: A rational metacognitive account. In Proceedings of the Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
Papadimitriou, C. H. (2003). Computational Complexity. John Wiley and Sons.
Ray, P. (1973). Independence of irrelevant alternatives. Econometrica: Journal of the Econometric Society, 987–991.
Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161–176.
Sipser, M. (2006). Introduction to the Theory of Computation (2nd ed.). Thomson Course Technology.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Tversky, A., & Kahneman, D. (1981a). Evidential impact of base rates. Stanford University, Department of Psychology.
Tversky, A., & Kahneman, D. (1981b). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.
Von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior (3rd ed.). Princeton University Press.