
Computational Complexity and the Nature of Quantum Mechanics (Extended version)

by   Alessio Benavoli, et al.

Quantum theory (QT) has been confirmed by numerous experiments, yet we still cannot fully grasp the meaning of the theory. As a consequence, the quantum world appears to us paradoxical. Here we shed new light on QT by having it follow from two main postulates: (i) the theory should be logically consistent; (ii) inferences in the theory should be computable in polynomial time. The first postulate is what we require of each well-founded mathematical theory. The computation postulate defines the physical component of the theory. We show that the computation postulate is the only true divide between QT, seen as a generalised theory of probability, and classical probability. All quantum paradoxes, and entanglement in particular, arise from the clash of trying to reconcile a computationally intractable, somewhat idealised, theory (classical physics) with a computationally tractable theory (QT) or, in other words, from regarding physics as fundamental rather than computation.



1 Introduction

Quantum theory (QT) is one of the most fundamental, and accurate, mathematical descriptions of our physical world. It dates back to the 1920s, and despite nearly a century having passed since its inception, we still do not have a clear understanding of the theory. In particular, we cannot fully grasp its meaning: why it is the way it is. As a consequence, we cannot come to terms with the many paradoxes it appears to lead to, its so-called "quantum weirdness".

This paper aims to explain QT while giving a unified reason for its many paradoxes. We pursue this goal by having QT follow from two main postulates:


The theory should be logically consistent.


Inferences in the theory should be computable in polynomial time.

The first postulate is what we essentially require of each well-founded mathematical theory, be it physical or not: it has to be based on a few axioms and rules from which we can unambiguously derive its mathematical truths. The second postulate will turn out to be central. It requires that there be an efficient way to execute the theory on a computer.

QT is an abstract theory that can be studied detached from its physical applications. For this reason, people often wonder which part of QT actually pertains to physics. In our representation, the answer to this question shows itself naturally: the computation postulate defines the physical component of the theory. But it is actually stronger than that: it states that computation is more primitive than physics.

Let us recall that QT is widely regarded as a “generalised” theory of probability. In this paper we make the adjective “generalised” precise. In fact, our coherence postulate leads to a theory of probability, in the sense that it disallows “Dutch books”: this means, in gambling terms, that a bettor on a quantum experiment cannot be made a sure loser by exploiting inconsistencies in their probabilistic assessments. But probabilistic inference is in general NP-hard. By imposing the additional postulate of computation, the theory becomes one of “computational rationality”: one that is consistent (or coherent), up to the degree that polynomial computation allows. This weaker, and hence more general, theory of probability is QT.

As a result, for a subject living inside QT, all is coherent. For us, living in the classical, and somewhat idealised, probabilistic world (not restricted by the computation postulate), QT displays some inconsistencies: precisely those that cannot be fixed in polynomial time. All quantum paradoxes, and entanglement in particular, arise from the clash of these two world views: i.e., from trying to reconcile an unrestricted theory (i.e., classical physics) with a theory of computational rationality (quantum theory). Or, in other words, from regarding physics as fundamental rather than computation.

But there is more to it. We show that the theory is "generalised" also in another direction, as QT turns out to be a theory of "imprecise" probability: in fact, requiring the computation postulate is similar to defining a probabilistic model using only a finite number of moments; and therefore, implicitly, to defining the model as the set of all probabilities compatible with the given moments. In QT, some of these compatible probabilities can actually be signed, that is, they allow for "negative probabilities". In our setting, these have no meaning per se; they are just a mathematical consequence of polynomially bounded coherence (or rationality).

1.1 Relations with the literature

Since its foundation, there have been two main ways to explain the differences between QT and classical probability. The first one, which goes back to Birkhoff and von Neumann [birkhoff1936logic], explains these differences with the premise that, in QT, the Boolean algebra of events is taken over by the "quantum logic" of projection operators on a Hilbert space. The second one is based on the view that the quantum-classical clash is due to the appearance of negative probabilities [feynman1987negative].

Recently, there has been a research effort, the so-called “quantum reconstruction”, which amounts to trying to rebuild the theory from more primitive postulates. The search for alternative axiomatisations of QT has been approached following different avenues: extending Boolean logic [birkhoff1936logic, mackey2013mathematical, jauch1963can], using operational primitives [hardy2011foliable, hardy2001quantum, barrett2007information, chiribella2010probabilistic], using information-theoretic postulates [barrett2007information, barnum2011information, van2005implausible, pawlowski2009information, dakic2009quantum, fuchs2002quantum, brassard2005information, mueller2016information], building upon the subjective foundation of probability [Caves02, Appleby05a, Appleby05b, Timpson08, longPaper, Fuchs&SchackII, mermin2014physics, pitowsky2003betting, Pitowsky2006, benavoli2016quantum, benavoli2017gleason] and starting from the phenomenon of quantum nonlocality [barrett2007information, van2005implausible, pawlowski2009information, popescu1998causality, navascues2010glance].

A common trait of all these approaches is that of regarding QT as a generalised theory of probability. But why is probability generalised in such a way, and what does it mean? Our paper appears to be the first to show that the answer to this question lies in the computational intractability of classical probability theory, contrasted with the polynomial-time complexity of QT.

Note that there have been previous investigations into the computational nature of QT, but they have mostly focused on topics of undecidability and of potential computational advantages of non-standard theories involving modifications of quantum theory [bacon2004quantum, aaronson2004quantum, aaronson2005quantum, chiribella2013quantum]. (The undecidability results in QT are usually obtained via a limiting argument, as the number of particles goes to infinity; see, e.g., [cubitt2015undecidability]. These results do not apply to our setting, as we rather take the stance that the Universe is a finite physical system.)

1.2 Outline of the paper

Section 2

is concerned with the coherence principle. We recall how Bayesian probability can be derived (via mathematical duality) from a set of logical axioms. Addressing self-consistency (coherence or rationality) in such a setting is a standard task in logic; in practice, it reduces to proving that a certain real-valued bounded function is non-negative.

Section 3 details the computation principle. We consider the problem of verifying the non-negativity of a function as above. This problem is generally undecidable or, when decidable, NP-hard. We make the problem polynomially solvable by redefining the meaning of (non-)negativity. We give our fundamental theorem (Theorem 2) showing that the redefinition is at the heart of the clash between Bayesian probability and computational rationality.

We show in Section 4 that QT is a special instance of computational rationality and hence that Theorem 2 is not only the sole difference between quantum and classical probability, but also the distinctive reason for all quantum paradoxes; this latter part is discussed in Section 5. In particular, to give further insight about the quantum-classical clash, in Section 5.2 we reconsider the question of local realism in the light of computational rationality; in Section 5.3 we show that the witness function, in the fundamental “entanglement witness theorem”, is nothing else than a negative function whose negativity cannot be assessed in polynomial time—whence it is not “negative” in QT.

Moreover, using Theorem 2, in Section 6 we devise an example of a computationally tractable theory of probability that is unrelated to QT but that admits entangled states. This shows in addition that the “quantum logic” and the “quasi-probability” foundations of QT are two faces of the same coin, being natural consequences of the computation principle.

We finally discuss the results in Section 7. The technical proofs of the paper are in the Appendix.

2 Desirability

2.1 Coherence postulate

De Finetti’s subjective foundation of probability [finetti1937] is based on the notion of rationality (self-consistency or coherence). This approach has then been further developed in [williams1975, walley1991], giving rise to the so-called theory of desirable gambles (TDG). (Contrary to what it may seem, TDG is not an “exotic” theory of probability; loosely speaking, it is just an equivalent reformulation of the well-known Bayesian decision theory (à la Anscombe-Aumann [anscombe1963]) once this is extended to deal with incomplete preferences [zaffalon2017a, zaffalon2018a].) In this setting probability is a derived notion, in the sense that it can be inferred via mathematical duality from a set of logical axioms that one can interpret as rationality requirements on the way a subject, let us call her Alice, accepts gambles on the results of an uncertain experiment. It goes as follows.

Let denote the possibility space of an experiment (e.g., or in QT). A gamble on is a bounded real-valued function of , interpreted as an uncertain reward. It plays the traditional role of variables or, in physical parlance, of observables. In the context we are considering, accepting a gamble by an agent is regarded as a commitment to receive, or pay (depending on the sign), utiles (abstract units of utility, indicating the satisfaction derived from an economic transaction; we can approximately identify them with money provided we deal with small amounts of it [finetti1974, Sec. 3.2.5]) whenever occurs. Given this view, if by we denote the set of all the gambles on , the subset of all non-negative gambles, that is, of gambles for which Alice is never expected to lose utiles, is given by . Analogously, the negative gambles, those for which Alice will certainly lose some utiles, even an epsilon, are defined as . In what follows, with we denote a finite (we will comment later on the case when it may not be finite) set of gambles that Alice finds desirable: these are the gambles that she is willing to accept and thus commits herself to the corresponding transactions.

The crucial question is now to provide a criterion for a set of gambles representing assessments of desirability to be called rational. Intuitively, Alice is rational if she avoids sure losses: that is, if, by considering the implications of what she finds desirable, she is not forced to find desirable a negative gamble. This postulate of rationality is called “no arbitrage” in economics and “no Dutch book” in the subjective foundation of probability. In TDG we formulate it through the notion of logical coherence which, despite the informal interpretation given above, is a purely syntactical (structural) notion. To show this, we need to define an appropriate logical calculus, that is, the tautologies and the inference rules (characterising the set of gambles that Alice must find desirable as a consequence of having desired in the first place), and based on it to characterise the family of consistent sets of assessments.

Given that non-negative gambles may increase Alice’s utility without ever decreasing it, we have that:

  1. should always be desirable.

This defines the tautologies of the calculus. We thus characterise the set of gambles that we must find desirable as a consequence of having desired in the first place, that is, the deductive closure of a set . Those gambles are the conical hull of gambles in . Indeed, whenever are desirable for Alice, then any positive linear combination of them should also be desirable (this amounts to assuming that Alice has a linear utility scale, which is a standard assumption in probability):


Moreover, we can assume that if Alice finds all gambles of type desirable, for any arbitrarily small positive , then she should also find desirable. This means that the actual deductive closure we are after is given by the map associating to the set:

  1. .

where is the topological closure operator given the supremum norm topology on . The set is the smallest closed convex cone that includes , and it is called the natural extension of , and sometimes is simply denoted by . Note that whenever is finite.
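To make the conical-hull construction concrete, here is a minimal numerical sketch (ours, not from the paper): for a two-outcome experiment, the natural extension of a finite set of assessments is the smallest closed convex cone containing those gambles together with the nonnegative quadrant, and in two dimensions such a cone is an angular sector, so membership can be decided by comparing angles. The function name and the restriction to cones narrower than a half-plane are our simplifying assumptions.

```python
import math

def natural_extension_contains(generators, g, tol=1e-9):
    """Decide whether the gamble g (a vector in R^2) belongs to the
    natural extension of `generators`: the smallest closed convex cone
    containing the generators and the nonnegative quadrant.

    In two dimensions a closed convex cone (other than a half-plane or
    the whole plane) is an angular sector, so membership reduces to an
    angle comparison.  We assume the total angular span of the
    generators stays below pi and away from the atan2 branch cut."""
    dirs = list(generators) + [(1.0, 0.0), (0.0, 1.0)]  # add the tautologies
    angles = [math.atan2(y, x) for x, y in dirs]
    lo, hi = min(angles), max(angles)
    if hi - lo >= math.pi:
        raise ValueError("cone spans a half-plane or more")
    a = math.atan2(g[1], g[0])
    return lo - tol <= a <= hi + tol
```

For instance, with the single assessment (-1, 1) the gamble (-1, 2) is in the natural extension, while (-1, 0.5) is not.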

In a betting system, a sure loss for an agent is represented by a negative gamble. Indeed, whatever the outcome of the experiment may be, accepting means accepting to pay some nonzero utiles. We therefore say that:

Definition 1 (Coherence postulate).

A set of desirable gambles is coherent if and only if

  1. .

As simple as it looks, expression A alone captures the coherence postulate as formulated in the introduction in the case of classical probability theory. This will be made precise in Section 2.3.

The following result, in addition to providing a necessary and sufficient condition for coherence, states that can be regarded as playing the role of the Falsum, and A can be reformulated as . Note that we have introduced the symbol to distinguish the unitary function in , i.e., for all , from the scalar (real number) . This will be convenient later in Section 3. The result is an immediate consequence of Theorem 3.8.5 and Claim 3.7.4 in [walley1991].

Proposition 1.

Let be a set of gambles. The following claims are equivalent

  1. is coherent,

  2. ,

  3. ,

  4. , for some gamble .

Postulate A, which presupposes postulates A and 1, provides the normative definition of TDG, referred to by . Based on it, in Subsection 2.3 we derive the axioms of classical, Bayesian, probability theory. This is simply based on the fact that, geometrically, is a closed convex cone. It is then clear from the above definition that is the minimal coherent set of desirable gambles. It characterises a state of full ignorance: a subject without knowledge about should only accept nonnegative gambles. Conversely, a coherent set of desirable gambles is called maximal if there is no other coherent set of desirable gambles including it. In terms of rationality, a maximal coherent set of desirable gambles is one that Alice cannot extend by accepting other gambles while at the same time remaining rational. It also represents a situation in which Alice is sure about the state of the system, as we will show in the next examples and section.

Example 1.

Let us consider the toss of a fair coin . A gamble in this case has two components and . If Alice accepts then she commits herself to receive/pay if the outcome is Heads and if Tails. Since a gamble is in this case an element of , , we can plot the gambles Alice accepts in a 2D coordinate system with coordinates and , see Figure 1. A says that Alice is willing to accept any gamble that, no matter the result of the experiment, may increase her wealth without ever decreasing it, that is with : Alice always accepts the first quadrant, Figure 1(a). Similarly, Alice does not accept any gamble that will surely decrease her wealth, that is with (this follows by A). In other words, Alice never accepts the interior of the third quadrant, Figure 1(b). Then we ask Alice about : she loses if Heads and wins if Tails. Since Alice knows that the coin is fair, she accepts this gamble as well as all the gambles of the form with , because this is just a “change of currency” (scaling). Similarly, she accepts all the gambles for any , since these gambles are even more favourable for her (additivity). Scaling and additivity follow by 1.

Now, we can ask Alice about , and the argument is symmetric to the above case. We therefore obtain the following set of desirable gambles (see Figure 1(c)): . Finally, we can ask Alice about : she loses if Heads and wins if Tails. Since the coin is fair, Alice accepts this gamble. A similar conclusion can be derived for the symmetric gamble . Figure 1(d) shows her final set of desirable gambles for the experiment concerned with the toss of a fair coin, which in a formula becomes . The resulting closed convex cone is maximal: Alice does not accept any other gamble. In fact, if Alice were also to accept, for instance, , then, since she has also accepted , i.e., , she would have to accept too (because of 1). However, is always negative: Alice always loses utiles in this case. In other words, by accepting Alice incurs a sure loss; she is irrational (A does not hold).




Figure 1: Alice’s coherent sets of desirable gambles for the experiment of tossing a fair coin.
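The sure-loss argument at the end of Example 1 can be spelled out numerically. The sketch below (our illustration, with hypothetical payoff values) shows that if Alice, who already accepts the fair-coin gambles (1, -1) and (-1, 1), also accepts any gamble whose payoffs sum to a negative number, then some positive combination of accepted gambles loses in every outcome:

```python
def dutch_book_against(g):
    """Given a gamble g = (g_H, g_T) with g_H + g_T < 0 that Alice
    accepts on top of the fair-coin cone, return a combination of
    accepted gambles that is negative in every outcome (a Dutch book).

    The fair-coin cone contains both s = (1, -1) and -s = (-1, 1), so
    g + lam * s is accepted for any real lam (a lam of either sign is a
    positive multiple of one of the two)."""
    g_h, g_t = g
    assert g_h + g_t < 0, "no sure loss: the mean payoff is nonnegative"
    lam = (g_t - g_h) / 2.0          # equalise the two payoffs
    combo = (g_h + lam, g_t - lam)   # g + lam * (1, -1)
    return combo                     # both entries equal (g_H + g_T)/2 < 0
```

For example, for g = (-0.6, 0.5) the returned combination pays -0.05 in both outcomes: a Dutch book.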

2.2 Inference

In the operational interpretation of , agents can buy/sell gambles from/to each other. Therefore, an agent must be able to determine the selling/buying prices for gambles. This can be formulated as an inference procedure on . For simplicity, we consider finite sets of assessments, and denote by the cardinality of a finite set .

Definition 2.

Let be a finite set of assessments of desirability, and be a coherent set of desirable gambles. Given , we denote with


the lower prevision of . The upper prevision of is equal to .

The lower prevision of a gamble is Alice’s supremum buying price for , i.e., how much she should pay to buy the gamble . The upper prevision is Alice’s infimum selling price for , i.e., how much she should ask to sell the gamble . We will show in Section 2.3 that the lower and upper previsions are just the lower and upper expectations for the gamble . By exploiting (1)–(1), we can equivalently rewrite (2) as:


or equivalently,


In other words, we have expressed the constraint in the above optimisation problems as a membership.

Example 2.

Let us consider again the coin example and the set of assessments . It can be verified that coincides with the maximal closed convex cone in Figure 1(d). In this case, the lower prevision for the gamble is and it is equal to the upper prevision. For maximal coherent sets of desirable gambles, lower and upper previsions always coincide. If Alice had accepted only the gambles resulting in the closed convex cone of Figure 1(c), then the lower prevision for the gamble would be and the upper prevision .
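On the dual side, the lower and upper previsions for the coin can be computed as the minimum and maximum expectation over the set of probabilities compatible with Alice’s assessments. In the sketch below (ours; representing that credal set as an interval of Heads-probabilities is our simplifying assumption for the two-outcome case), the expectation is linear in p, so the extrema sit at the endpoints:

```python
def lower_prevision(f, p_lo, p_hi):
    """Lower prevision of the gamble f = (f_H, f_T) when the credal set
    is the interval [p_lo, p_hi] of probabilities for Heads.  Since the
    expectation p*f_H + (1-p)*f_T is linear in p, the minimum is
    attained at an endpoint of the interval."""
    expectation = lambda p: p * f[0] + (1 - p) * f[1]
    return min(expectation(p_lo), expectation(p_hi))

def upper_prevision(f, p_lo, p_hi):
    # conjugacy: the upper prevision of f is minus the lower prevision of -f
    return -lower_prevision((-f[0], -f[1]), p_lo, p_hi)
```

With a singleton credal set (the maximal cone of Figure 1(d)) lower and upper previsions coincide with the fair-coin expectation; with a non-degenerate interval they differ, reflecting imprecision.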

Having defined lower and upper previsions, we can better understand A. A can be formulated as the following decision problem


there exists a combination of Alice’s desirable gambles that is negative. Let us assume such a combination exists, that is . Then another agent, Bob, could sell to Alice the gambles and she would accept them because is desirable to her and so (by 1). Overall Bob would give away . However, since , he actually gains utiles no matter the result of the experiment. Bob’s gain is equivalent to Alice’s loss (), hence Alice can be used as a money pump. (By 1, Alice would also accept the gambles for , allowing Bob to multiply his gain of .) In economics such a situation is called an arbitrage, while in the subjective foundation of probability it is called a Dutch book.

Hence, finally we notice that, by Equation (3) and Proposition 1, the problem of checking whether is coherent (the coherence problem) can be formulated as the following decision problem:


If the answer is “yes”, then the gamble belongs to , proving ’s incoherence. The coherence problem therefore also reduces to the problem of evaluating the nonnegativity of a function in the considered space (let us call this problem the “nonnegativity decision problem”).

2.3 Probabilistic interpretation through duality

The aim of this Section is to provide a natural probabilistic interpretation to the theory of desirable gambles . This is done by showing a stronger result, namely that the dual of a coherent set of desirable gambles is a closed convex set of probability charges:


where is the set of nonnegative charges. Observe that the term “charge” is used in analysis to denote a finitely additive set function [aliprantisborder, Ch.11], whereas a measure is a countably additive set function. In this paper we use charges to be more general, but this distinction does not really matter for the results about QT that we are going to present later on.

The key point in the duality proof is that (the set of all nonnegative gambles (real-valued bounded functions) on ) includes indicator functions. (An indicator function defined on a subset is a function that is equal to one for all elements in and zero for all elements outside .) This is crucial to prove that the dual of is always included in . We will see in the next sections that when this is not the case, the dual of a coherent set of desirable gambles is no longer a convex set of probabilities.

Note that, equipped with the supremum norm, constitutes a Banach space, and its topological dual is the space of all bounded functionals on it. We assume the weak topology on .

Let be the algebra of subsets of and let denote a charge: that is, a finitely additive set function on [aliprantisborder, Ch.11], [bhaskara1983], that can take positive and negative values. We have that every gamble on is integrable with respect to any finite charge [aliprantisborder, Th.11.8]. Therefore, for any gamble and finite charge we can define , which we can interpret as a linear functional on . We denote by the set of all finite charges on and by the set of nonnegative charges. is isometrically isomorphic to . The duality bracket between and is given by , with and .

A linear functional of gambles is said to be nonnegative whenever it satisfies : , for . A nonnegative linear functional is called a state if moreover it preserves the unitary constant gamble. In our context, this means , i.e., the linear functional is scale preserving. Hence, the set of states corresponds to the closed convex set of all probability charges.

We define the dual of a subset of as:

Proposition 2.

The dual of coincides with , whereas the dual of is the set of nonnegative charges .

Since is an anti-monotonic operation on the complete lattice of subsets of , the dual of any coherent set of desirable gambles is a closed convex cone in between those two extremes. Can they be characterised in some way? It actually turns out that the dual of a coherent set of desirable gambles can be completely described in terms of a (closed convex) set of states (probability charges). More precisely, we have that (all proofs can be found in the Appendix):

Theorem 1.

The map

establishes a bijection between coherent sets of desirable gambles and non-empty closed convex sets of states.

This means that we can write the dual of as the set


which is a closed convex set of probability charges. We have thus derived the axioms of probability (a non-negative function that integrates to one) from the coherence postulate A. Hence, as we are going to see at the end of this subsection, whenever an agent is coherent, Equation (9) states that desirability corresponds to non-negative expectation (for all probabilities in ). When is incoherent, turns out to be empty: there is no probability compatible with the assessments in . It is thus from this perspective that one has to understand the claim that expression A alone captures the coherence postulate, as formulated in the introduction, in the case of classical probability theory, and thus that the latter follows from it.

As an immediate corollary of Theorem 1, saying that a closed convex cone is coherent is equivalent to saying that its dual is a closed convex subset of states.

As already mentioned, once we have defined the duality between TDG and probability theory, we can immediately reformulate the lower and upper previsions by means of probabilities. Indeed, let be a finite set of assessments, and be a coherent set of desirable gambles. Given , the lower prevision of defined in Equation (2) can also be computed as:


which is equivalent to


The upper prevision of is defined .

Hence, the lower and upper prevision of w.r.t.  are just the lower and upper expectation of w.r.t. . In case is maximal, then includes only a single probability and, therefore, in this case:

That is, the solution of (10) coincides with the expectation of . We have considered the more general case because, as we will discuss in Section 4.3, QT is a theory of “imprecise” probability [walley1991]. (The term “imprecise” refers to the fact that the closed convex set may not be a singleton, that is, the probability may not be “precisely” specified. Imprecise probability theory is also referred to as robust Bayesian analysis.)

3 Taking computational complexity seriously

We have seen in Section 2.2 that the problem of checking whether or not is coherent can be formulated as the following decision problem:


If the answer is “yes”, then the gamble belongs to , proving ’s incoherence. Moreover, any inference task can ultimately be reduced to a problem of the form (12), see Section 2.2. Hence, the above decision problem unveils a crucial fact: the hardness of inference in classical probability corresponds to the hardness of evaluating the non-negativity of a function in the considered space (let us call this the “non-negativity decision problem”).

When is infinite (in this paper we consider the case ) and for generic functions, the non-negativity decision problem is undecidable. To avoid such an issue, we may impose restrictions on the class of allowed gambles and thus define on an appropriate subspace of . (The point is that we want defined on to coincide with the restriction to of when defined on . Given this property, we are then assured that the dual of a coherent set in can be identified with the dual of its deductive closure in , i.e. with the closed convex set of probability charges . In Appendix B we make the construction and claims precise.) For instance, instead of , we may consider : the class of multivariate polynomials of degree at most (we denote by the subset of non-negative polynomials and by the negative ones). In doing so, by Tarski-Seidenberg quantifier elimination theory [tarski1951decision, seidenberg1954new], the decision problem becomes decidable, but it is still intractable, being in general NP-hard. If we accept the so-called “Exponential Time Hypothesis” (that P ≠ NP) and we require that inference should be tractable (in P), we are stuck. What to do? A solution is to change the meaning of “being non-negative” for a function by considering a subset for which the membership problem in (12) is in P.
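A standard way to obtain such a P-decidable surrogate of nonnegativity for polynomial gambles is to replace “nonnegative” with “sum of squares” (SOS): a polynomial is SOS whenever it admits a positive-semidefinite Gram matrix, a condition checkable in polynomial time via semidefinite programming. The toy check below (our illustration, not the paper’s construction) does this by hand for univariate quadratics, where the Gram matrix is 2x2:

```python
def is_sos_quadratic(c0, c1, c2):
    """SOS surrogate of nonnegativity for p(x) = c0 + c1*x + c2*x**2.

    p is a sum of squares iff its Gram matrix
        G = [[c0, c1/2], [c1/2, c2]]
    is positive semidefinite; for a 2x2 symmetric matrix this reduces
    to nonnegative diagonal entries and nonnegative determinant.
    (In one variable SOS coincides with nonnegativity; in several
    variables SOS is strictly smaller, e.g. the Motzkin polynomial is
    nonnegative but not SOS.)"""
    det = c0 * c2 - (c1 / 2.0) ** 2
    return c0 >= 0 and c2 >= 0 and det >= 0
```

The gap between “nonnegative” and “certifiably nonnegative in polynomial time” that appears in several variables is exactly the kind of gap the redefinition above exploits.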

In other words, a computationally efficient TDG, which we denote by , should be based on a logical redefinition of the tautologies, i.e., by stating that

  1. should always be desirable,

in the place of A. The rest of the theory can develop following the footprints of the original theory. In particular, the deductive closure for is defined by:

  1. .

And sometimes we denote by . Again, for finite .

Finally, the coherence postulate, which now naturally encompasses the computation postulate, states that:

Definition 3 (P-coherence).

A set of desirable gambles is P-coherent if and only if

  1. .

Above we called P-coherent a set that satisfies a since, whenever contains all positive constant gambles, its incoherence can be verified in polynomial time by solving (for a justification of the non-computational part of this claim, see Proposition 9):


where denotes the unitary gamble in , i.e., for all . Hence, and (defined over ) have the same deductive apparatus; they just possibly differ in the considered set of tautologies, and thus in their (in)consistencies, as we only ask Alice to always accept gambles for which she can efficiently determine the nonnegativity (P-nonnegative gambles) and to never accept gambles for which she can efficiently determine the negativity (P-negative gambles).

3.1 Computationally efficient coherence and its consequences

Interestingly, we can associate, as before, a “probabilistic” interpretation to the calculus defined by the postulates above by computing the dual of a P-coherent set.


is a topological vector space, we can consider its dual space

of all bounded linear functionals . Hence, with the additional condition that the linear functional preserves the unitary gamble , the dual cone of a P-coherent is given by


Analogously to the previous cases, we call states the elements of the following closed convex set of linear functionals:


Hence, we can rewrite the dual as


To we can then associate its extension in , that is, the set of all charges on extending an element in . In general, however, this set does not yield a classical probabilistic interpretation of . This is because, whenever , there are negative gambles that cannot be proved to be negative in polynomial time.

Theorem 2.

Assume that includes all positive constant gambles and that it is closed (in ). Let be a P-coherent set of desirable gambles. The following statements are equivalent:

  1. includes a negative gamble that is not in .

  2. is incoherent, and thus is empty.

  3. is not (the restriction to of) a closed convex set of mixtures of classical evaluation functionals.

  4. The extension of in the space of all charges in includes only signed ones (negative-probabilities).

Theorem 2 is the central result of this paper. It states that whenever includes a negative gamble (item 1), there is no classical probabilistic interpretation for it (item 2). The other items suggest alternative solutions to overcome this deadlock: either to change the notion of evaluation functional (item 3) or to use negative probabilities as a means for interpreting (item 4).

Let us clarify item 3 above. In doing so, we introduce some terminology. Recall that is the collection of states (probability charges), that is, of all nonnegative linear functionals on that preserve the unitary constant gamble. The extremes of are the so-called atomic charges (Dirac deltas), that is, the functionals assigning to some given and 0 elsewhere. The linear functional defined by an atomic charge is a classical evaluation functional: it evaluates the function at . By the Krein–Milman theorem, each state is a convex combination of atomic charges or (when the space is not finite) a limit of such combinations. Hence, the linear functional induced by a state is a convex combination (mixture) of classical evaluation functionals, or a limit of such combinations. Recall that any positive functional on can be extended (possibly non-uniquely) to a positive functional on and that the restriction to of a positive functional on is a positive functional on . Hence, since includes charges (that are only affine combinations of classical evaluation functionals), its restriction to cannot be a closed convex set of mixtures of classical evaluation functionals. This explains item 3 of the last result.
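The notion of a state as a mixture of classical evaluation functionals can be illustrated for a finite possibility space (our sketch; the function names are hypothetical): a state is a convex combination of Dirac point-evaluations, and it automatically preserves the unit gamble:

```python
def state_functional(weights, points):
    """A classical state on a finite possibility space: a convex
    mixture of evaluation functionals delta_x (atomic charges).
    Applied to a gamble f it returns the expectation
    sum_i w_i * f(x_i)."""
    assert all(w >= 0 for w in weights), "mixture weights must be nonnegative"
    assert abs(sum(weights) - 1.0) < 1e-12, "mixture weights must sum to one"
    def functional(f):
        return sum(w * f(x) for w, x in zip(weights, points))
    return functional
```

The extreme points, obtained by putting all the weight on a single point, are exactly the classical evaluation functionals; item 3 of Theorem 2 says that the dual of a P-coherent set need not be representable in this form.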

Implicitly, Theorem 2 is also informing us on the properties of . Indeed, the fact that there is a P-coherent set that includes a negative gamble that is not P-negative, yields that , and therefore . As a consequence, the extremes of are in general only affine combinations of classical evaluation functions.

In the next section, we show that Quantum Theory is a paradigmatic instance of . Given this, Theorem 2 applies and it turns out to be the key to explains the weirdness of the microscopic world from the perspective of an external classic observer. However, before doing that, we briefly discuss inference in P-coherent theories.

3.2 Inference in P-coherent theories

In this subsection, we compare inference in theory with inference in the classical theory .

Let be a finite set of assessment in , and be the corresponding P-coherent set of desirable gambles in . The lower prevision of a gamble is defined as


The upper prevision as . Comparing (3) and (17), the reader can notice that in (3) becomes in (17).

Note that, by definition of , the membership of to can be evaluated in polynomial-time (its complexity class is P).

Since , may not be coherent (in ), we therefore have that, for every


meaning that cannot always be interpreted as a lower expectation. We claim that (18) is just a general formulation of so-called Bell-type inequalities in QT.

4 Coherence model for a quantum experiment

The aim of this section is to write down the gambling system for a quantum mechanics experiment. Since in QT any real-valued quantum observable is described by a Hermitian operator, this naturally defines a vector subspace of gambles . We will then show that evaluating the nonnegativity of a gamble in is not computationally efficient. This will lead us to define a P-coherence postulate in and, thus, via duality, to derive the first postulate of QT

Associated to any isolated physical system is a complex vector space with inner product (that is, a Hilbert space) known as the state space of the system. The system is completely described by its density operator, which is a positive operator with trace one, acting on the state space of the system.

In the last subsection we thus briefly discuss the case of all other axioms, and how to derive them.

4.1 Space of gambles in QT

Consider first a single particle system with

-degrees of freedom and let

be the -dimensional complex unit-sphere, i.e.,:

We can interpret an element as “input data” for some classical preparation procedure. For instance, in the case of the spin- particle (), if is the direction of a filter in the Stern-Gerlarch experiment, then is its one-to-one mapping into (apart from a phase term). For spin greater than , the variable associated to the preparation procedure cannot directly be interpreted in terms only of “filter direction”. Nevertheless, at least on the formal level, plays the role of a “hidden variable” in our model. Two vectors correspond to the same preparation procedure if with and (phase). will therefore plays the role of the phase space (the possibility space ) and of the “hidden-variable”. This hidden-variable model for QT is also discussed in [holevo2011probabilistic, Sec. 1.7], where the author explains why this model does not contradict the existing “no-go” theorems for hidden-variables, see also Section 7.3.

In QT any real-valued observable is described by a Hermitian operator. This naturally imposes restrictions on the type of functions in (12):

where and , with being the set of Hermitian matrices of dimension . Since is Hermitian and is bounded (), is a real-valued bounded function (a gamble). By using the bra-ket notation, a gamble is thus .

As before, Alice’s acceptance of a gamble depends on her beliefs (uncertainty) about the preparation procedure.

More generally, we can consider composite systems of particles each one with degrees of freedom. The corresponding possibility space is the cartesian product of the systems

whereas gambles are of the form


with , and where

denotes the tensor product between vectors, seen as column matrices.

The justification for the use of the tensor product in composite systems is the following (for a more in depth discussed see Section 7.4). In the theory of desirable gambles, structural judgements such as independence, corresponds to the product of gambles on the single variables. In the specific case we are considering, they have the form . Now, it is not difficult to verify that such product is mathematically the same as . By closing the set of product gambles under the operations of addition and scalar (real number) multiplication, we get the vector space whose domain coincide with the collection of gambles of the form as in (19). Hence, in the setting under consideration, the tensor product is ultimately a derived notion, not a primitive one.

For (a single particle), evaluating the non-negativity of the quadratic form boils down to checking whether the matrix is Positive Semi-Definite (PSD) and therefore can be performed in polynomial time. This is no longer true for : indeed, in this case there exist polynomials of type (19) that are non-negative, but whose matrix

is indefinite (it has at least one negative eigenvalue). Moreover, it turns out that problem (

12) is not tractable:

Proposition 3.

The problem of checking the nonnegativity of in (19) is NP-hard for .

This result was proven by [gurvits2003classical] for Hermitian biquadratic forms and by [ling2009biquadratic] in the real valued case.

Example 3.

Consider the following polynomial of complex variables of dimension , and

We have that is nonnegative in (it will be verified later), but it cannot be written as with (PSD).

4.2 QT as computational rationality

We have seen that the problem of checking the nonnegativity of a quadratic forms is in general a NP-hard problem. What to do? As discussed previously, a solution is to change the meaning of “being non-negative” by considering a subset for which the membership problem, and thus (12), is in P.

For functions of type (19), we can extend the notion of non-negativity that holds for a single particle to particles:

That is, the function is “non-negative” (P-nonnegative) whenever is PSD. P-nonnegative gambles are also called Hermitian sum-of-squares (see e.g. [d2009polynomial]). We will discuss about sum-of-squares polynomials in Sections 7.2 and 7.6. Observe that, in the non-negative constant functions have the form

with and

being the unitary gamble.

Similarly, a gamble is P-negative whenever is Negative-Definite (ND), that is:


We therefore can formulate postulates a-a, and thus in particular that is a P-coherent set of desirable gambles whenever .

In the following subsections, we are thence going to show how QT can be derived from this “computational rationality” model and how all paradoxes of QT can be explained as a consequence of P-coherence. Hence again, since both 1,a and 1,A are the same logical postulates parametrised by the appropriate meaning of “being negative/non-negative”, the only axiom truly separating classical probability theory from the quantum one is a (with the specific form of a), thus implementing the requirement of computational efficiency.

4.2.1 Duality

Recall from Section 3.1 that the set is the dual of .

The monomials form a basis of the space . Define the Hermitian matrix of scalars


and let , with and , be the vector of variables obtained by taking the elements of the upper triangular part of . Given any gamble , we can therefore rewrite as a function of the vector . This means that the dual space is isomorphic to , and we can thence define the dual maps between and as follows.

Definition 4.

Let be a closed convex cone in . Its dual cone is defined as


where is completely determined by via the definition (21).

Example 4.

Consider the case , then


and so


with , etc..

In discussing properties of the dual space, we need the following well-known result from linear algebra:

Lemma 1.

For any and , it holds that


By Lemma 1 and the definitions of and , we obtain the following result.

Proposition 4.

Let and Hermitian. Then for every , it holds that , where is defined in (21).

The next lemma states that the only symmetry in the matrix with is that of being self-adjoint.

Lemma 2.

Consider the matrix


Let denote the -th element of then for all (upper triangular elements) we have that iff and .

We then verify that

Proposition 5.

Let be a P-coherent set of desirable gambles. The following holds:


By P-coherence, includes , which is isomorphic to the closed convex cone of PSD matrices. We have that

From a standard result in linear algebra, see for instance [holevo2011probabilistic, Lemma 1.6.3], this implies that , i.e., it must be a PSD matrix. ∎

In what follows, we verify that, analogously to Section 3.1, the dual is completely characterised by a closed convex set of states. But before doing that, we have to clarify what is a state in this context.

In a P-coherent theory, postulate A is replaced with postulate a. Hence, to define what a state is, one cannot anymore refer to nonnegative gambles but to gambles that are P-nonnegative. This means that states are linear operators that assign nonnegative real numbers to P-nonnegative, and that additionally preserve the unit gamble. In the context of Hermitian gambles, the unitary gamble is



is the identity matrix. Therefore, we want that


Hence, the set of states is


By reasoning exactly as for Theorem 1, we then have the following result.

Theorem 3.

The map

is a bijection between P-coherent set of desirable gambles in and closed convex subsets of .

Hence, we can identify the dual of a P-coherent set of desirable gambles , with the closed convex set of states


which is equivalent to .

Notice that the matrices corresponding to states are density matrices, in fact (31) is equivalent to