A Data-Centric View on Computational Complexity: P ≠ NP

P = NP iff SAT ∈ P. We propose this to be false because the satisfiability problem for propositional logic formulas (SAT) is the Existential Halting Problem in disguise: solving it by syntactic analysis alone is therefore undecidable. Since the input space is finite, however, SAT can still be solved by simulating no fewer than all possible, that is exponentially many, configurations. In a nutshell, the halting portion of a program formulated for a Turing Machine can be expressed as one long propositional logic formula over previous memory states (binary variables). Solving SAT by analyzing a formula syntactically would therefore equate to solving the Existential Halting Problem. A propositional logic formula is nothing but a specific encoding of a truth table. There are 2^(2^n) unique truth tables of n binary variables, so we need at least 2^n bits to universally encode any truth table. Simulating fewer than 2^n configurations would thus violate the pigeonhole principle and therefore imply lossy compression. SAT, however, requires an exact solution, not an approximate one. Consequently, SAT needs an exponential number of decisions to be solved.




1 Introduction

The mathematicians Alonzo Church and Alan Turing formulated the thesis that a function on the natural numbers is computable by a human being following an algorithm, ignoring resource limitations, if and only if it is computable by a Turing machine [Chu36, Tur37]. Since then, the theory of computational complexity has built upon this thesis. In this article, we will show that the neglect of resource limitations is the cause of a large amount of confusion among modern-day computer scientists. Intuitively, if one asks “what is 3 × 4?”, the answer is immediately “12”. In contrast, the response to “what is 1 + 2 + 1 + 3 + 2 + 1 + 1 + 1?” will likely take a human much longer, despite the fact that both computations are considered constant time, commonly denoted as O(1). So even in an intuitive understanding of computability, there must be a notion that the same truth, encoded in different ways, will take longer to process. In our example, the second description of the number 12 has not been fully reduced yet. In other words, the description still contains potential computations that have to be performed to reduce the description length to a minimum.

The main focus of this article is to introduce a resource-aware interpretation of computability. We demonstrate that a computed decision is equivalent to a recorded decision (event), both of which can be measured in bits. This automatically leads to a conservation principle: a computation result that decides on an equiprobable [Bay63] input is not predictable or avoidable [Sha48]. The minimum number of decisions required to solve a particular problem therefore remains constant. However, decisions can be transferred from the input and thereby reduce computation. We explain why applying this conservation principle is advantageous by demonstrating it on a prominent problem [Coo71]: propositional logic satisfiability (SAT). We demonstrate that the cardinality of, what we call, the solution space of SAT is 2^n bits. Therefore an exact solution to SAT requires an exponential number of computation decisions when the input has only polynomial length in bits. We then set the result aside and analyze the solution space of another problem, which we call Binary Codebreaker (BCB). One can see that BCB is in NP. However, since the input length in bits is linear, there cannot be a polynomial algorithm for BCB, that is, BCB ∉ P. Applying the principle of complexity conservation, we therefore finally find certainty that the computational complexity classes P and NP are not the same [Woe17, Cla00]. Finally, the undecidability of the halting problem [Tur37] is a direct result of complexity conservation as well: predicting the decisions made during the execution of a program by only analyzing its syntax is not universally possible. One cannot reduce the size of the solution space to the length of the binary encoding of the program.

Formulas will be written in a notation familiar to most computer scientists [Knu08]. We will use the words algorithm, language and program interchangeably.

2 On the Length of a Propositional Logic Formula

A common myth among practitioners is that logic formulas are shorter than the corresponding truth tables. This is indeed commonly so, but not universally true, especially when formulas are encoded in binary. We start busting this myth by defining the notion of a truth table.

Definition 2.1 (Truth Table  [Pei02]).

A truth table of a corresponding propositional logic formula is a 2-dimensional matrix with entries from {0, 1}. The matrix has one column for each variable and one final column containing the result of evaluating the formula when the variables are configured according to the values in the preceding columns of the same row. Each row lists one combination of the input variables together with the evaluated output.

For convenience, we will refer to a truth table as a complete truth table when it contains rows for all possible configurations and results. We will commonly refer to the results as decisions.
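As a minimal sketch of Definition 2.1 (plain Python; the helper name `complete_truth_table` and the example formula are ours for illustration), a complete truth table can be enumerated mechanically:

```python
from itertools import product

def complete_truth_table(formula, n):
    """Enumerate all 2^n configurations of n binary variables and
    evaluate the formula on each one (Definition 2.1). Each returned
    row holds the configuration followed by the result column entry."""
    table = []
    for config in product((0, 1), repeat=n):
        table.append((*config, int(bool(formula(*config)))))
    return table

# Hypothetical example formula: (v1 AND v2) OR NOT v1
f = lambda v1, v2: (v1 and v2) or (not v1)
for row in complete_truth_table(f, 2):
    print(row)   # prints (0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 1)
```

The final entry of each row is one of the binary decisions referred to in the text.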

Definition 2.2 (Binary Decision).

A binary decision is the result of evaluating one variable configuration of a propositional logic formula.

Naturally, it follows that a binary decision is in {0, 1}. There is no apparent reason not to define decisions for larger alphabets as well, but this is left for future work.

A truth table can be represented (encoded) in many ways – in fact, infinitely many ways. For example, the table can be compressed with the Lempel–Ziv algorithm [ZL78] or it can be learned using an Artificial Neural Network [FK17]. In mathematics, a preferred way to represent a truth table is using the pre-defined functions of the Boolean Algebra [Boo54], thus creating the very encoding we call a Boolean formula [Hun33]. In fact, algebraic reformulation of a propositional logic formula using the equivalence operator (≡) denotes nothing but a lossless transformation (re-coding) of the underlying truth table. If an equivalent formula is shorter, the result is a reversible reduction of the description length. In computer science and information theory, this is commonly referred to as lossless compression.

Let F1 and F2 be propositional logic formulas. We denote the number of symbols needed to represent the formulas as |F1| and |F2|. Furthermore, we define |F1|min and |F2|min as the minimum number of symbols needed to equivalently represent the formulas F1 and F2, respectively.

Lemma 2.1 (Logic Encoding Lemma).

If two propositional logic formulas F1 and F2 describe the same truth table, then |F1|min = |F2|min.

F1 and F2 can be represented as complete truth tables T1 and T2, respectively. By assumption, T1 = T2. Now, we can find the minimum length formula representation for T1, which we call Fmin. Since T1 = T2, Fmin equivalently represents both F1 and F2. This means |F1|min = |Fmin| = |F2|min. ∎

This proof does not specify a rule to generate Fmin because this is trivially not relevant. Therefore, we use propositional logic formula and Boolean formula interchangeably in this article.

We will now recite a commonly known proof, invoking the pigeon hole principle, that universal lossless compression cannot exist.

Let S be the set containing all the strings that can be generated from the alphabet Σ = {0, 1}. Strings constructed from Σ consist of binary digits, commonly referred to as bits [Sha48]. Let the sets Sn, Sn−1 ⊂ S be defined as follows: Sn contains all words of length n bits and Sn−1 all words of length n − 1 bits, that is, |Sn| = 2^n and |Sn−1| = 2^(n−1). We can now define a transformation scheme with encoding function e: Sn → Sn−1 and decoding function d: Sn−1 → Sn. Compression is achieved iff |e(x)| < |x|. The compression scheme is lossless iff the output of the decoder is equal to the input of the encoder: d(e(x)) = x. To achieve universal lossless compression, we would have to guarantee lossless compression for any x ∈ Sn. For convenience, let us define the set L = {x ∈ Sn : d(e(x)) ≠ x}. That is, given a compression scheme, L contains all inputs that cannot be decoded without loss. Universal lossless compression therefore implies that L must be an empty set.

Lemma 2.2 (No Universal Reduction of Description Length).

For any compression scheme (e, d) as defined above, L ≠ ∅.

Let n = 1, hence |x| = 1 bit for every x ∈ S1. By definition, we require a set S0 such that e: S1 → S0 with |e(x)| < 1 bit. Since |x| = 1 bit, the only possible length for each element of S0 is 0. This means that the encoder will output 0 bits. It is self-evident that the only way for this to work is |S1| = 1, but S1 = {0, 1} has two elements while the only element of S0 is the empty string of 0 bits. By the pigeonhole principle, the decoder cannot distinguish the two inputs. Therefore, L ≠ ∅. ∎
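The counting behind this pigeonhole argument can be checked directly. A minimal sketch (plain Python; the helper names are ours): there are 2^n strings of exactly n bits but only 2^n − 1 strings of strictly fewer bits, so no encoder into shorter strings can be injective.

```python
def count_strings(length):
    """Number of distinct binary strings of exactly `length` bits."""
    return 2 ** length

def count_shorter(length):
    """Number of distinct binary strings strictly shorter than `length`
    bits, including the empty string: 2^0 + ... + 2^(length-1)."""
    return sum(2 ** k for k in range(length))  # equals 2^length - 1

for n in range(1, 12):
    # Pigeonhole: 2^n inputs cannot map injectively onto 2^n - 1 outputs,
    # so some string must be decoded incorrectly (L is non-empty).
    assert count_shorter(n) < count_strings(n)
```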

For our purposes, we are now interested in the worst case minimum number of bits required to represent a propositional logic formula. We call this the worst case minimum length. Intuitively, the worst case minimum length is the minimum number of bits that need to be allocated in memory such that a program is able to allow the user to input any propositional logic formula of n variables. We measure the number of bits needed to represent a set S of symbols by taking the logarithm base 2. That is, ⌈log2 |S|⌉ is the total number of bits needed to represent all symbols in S with a unique binary code.

Let |F| denote the length of a propositional logic formula F and |F|wc the worst case minimum length of an encoding for F in bits.

Lemma 2.3.

|F|wc = 2^n bits, with n being the number of variables in the formula.

There are 2^(2^n) unique truth tables. By Lemma 2.2, we need a minimum of log2(2^(2^n)) = 2^n bits to encode them. So |F|wc = 2^n bits. ∎
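The count of 2^(2^n) unique truth tables can be verified exhaustively for small n. A minimal sketch (plain Python; the helper name `truth_tables` is ours): every distinct result column of 2^n bits is a distinct Boolean function.

```python
from itertools import product
from math import log2

def truth_tables(n):
    """All distinct result columns of complete truth tables over n
    variables: each 2^n-bit column is one Boolean function."""
    return list(product((0, 1), repeat=2 ** n))

for n in (1, 2, 3):
    tables = truth_tables(n)
    assert len(tables) == 2 ** (2 ** n)   # 2^(2^n) unique truth tables
    assert log2(len(tables)) == 2 ** n    # hence 2^n bits to encode them
```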

Example 2.4.

Let us demonstrate our thought process as follows using the Boolean Algebra on a formula of n variables: there are 3 basic operations (NOT: ¬, AND: ∧, OR: ∨) which require at least log2(3) ≈ 1.58 bits to encode, practically speaking 2 bits. Together with the n variables (about log2(n) bits each) and parentheses, a formula of k symbols is drawn from an alphabet of roughly n + 5 symbols and can therefore distinguish at most on the order of (n + 5)^k truth tables. To cover all 2^(2^n) truth tables of Lemma 2.3, we need k · log2(n + 5) ≥ 2^n, so the binary encoding of a worst case formula cannot be shorter than 2^n bits. For small n, the per-symbol overhead of the operator and variable codes even puts typical formulas above the worst case minimum length of 2^n bits, and this remains true if we choose to repeat this calculation without rounding to integers.

As explained before, if, absurdly, one was able to compress a binary sequence of n bits to fewer than n bits universally, one could iteratively apply compression and shrink any binary sequence losslessly to 0 bits. If there was any set of logic operators that could make a formula universally shorter than a truth table, we could encode any binary file on a computer using these operators and decompress losslessly by simply evaluating that formula. In the general case, where no external assumptions can be applied, it does not matter whether we use a classical compression algorithm for the table, a machine learning method, or the predefined functions of the Boolean Algebra: only ⌈log2 N⌉ bits can universally represent N unique objects.

The human representation of a propositional logic formula is commonly smaller relative to a truth table because it uses a larger alphabet than the binary alphabet of the truth table. A typical alphabet for a Boolean formula of n variables is {v1, …, vn, ¬, ∧, ∨, (, )}. With a larger alphabet, the possible states can be represented in fewer symbols. In the same way, decimal number representation is shorter than binary number representation, e.g., 12 versus 1100. For propositional formulas, the shortest representation can be achieved if we choose to encode the formula with an alphabet of size 2^(2^n). Then |F| = 1 with F ∈ {0, …, 2^(2^n) − 1}. For example, F = 12 then denotes the truth table with result column 1100. In this notation, we can immediately see if a formula is satisfiable, that is, if there is a configuration of the variables that evaluates to 1: any formula that is not 0 is satisfiable. However, we will demonstrate in this article that it takes exponentially many computation steps to get all formulas into this representation.
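This large-alphabet encoding can be sketched concretely (plain Python; the helper name `encode` and the example formulas are ours for illustration): read the result column of the complete truth table as one big integer, and satisfiability becomes a single comparison against 0.

```python
from itertools import product

def encode(formula, n):
    """Encode a formula as the integer whose binary digits are the
    result column of its complete truth table (LSB = first row)."""
    code = 0
    for i, config in enumerate(product((0, 1), repeat=n)):
        code |= int(bool(formula(*config))) << i
    return code

# Hypothetical examples: a contradiction encodes to 0, everything else
# is satisfiable by inspection of the single-symbol representation.
contradiction = lambda v1, v2: v1 and not v1
tautology = lambda v1, v2: v1 or not v1
assert encode(contradiction, 2) == 0   # unsatisfiable
assert encode(tautology, 2) == 15      # all four rows evaluate to 1
```

Note that producing this representation requires evaluating all 2^n configurations, which is exactly the point made in the text.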

3 Conservation of Computational Complexity

Let us continue with the thought from the end of the previous Section. The shortest representation for propositional logic formulas can be achieved if we choose to encode a formula with n variables using an alphabet of size 2^(2^n). Then |F| = 1 with F ∈ {0, …, 2^(2^n) − 1}. Using a different example, F = 5 then denotes the truth table with result column 0101. We explained that, in this notation, we can immediately see if a formula is satisfiable because any formula that is not 0 is satisfiable. The word immediate stands to be corrected here: it takes 2^n / log2(b) comparisons to see if the formula is satisfiable or not, with b being the number base we are implementing the comparisons in. For example, in base 2, we require 2^n comparisons to be sure a formula is not satisfiable (see also Lemma 4.1). On the other end of the spectrum, it is easy to see that if the number base is chosen to be 2^(2^n), the search would take O(1). This is a direct consequence of Lemma 2.1: we can define a formula as whatever we want – as a truth table, as a Neural Network, or even as an algorithm that can be queried by a user (see Lemma 3.1). Ultimately, however, any equivalent transformation that is universally able to encode all formulas needs to describe the same 2^n bits. Intuitively: regardless of any computational assumptions, unless we are settling for an approximative result, the bits that a formula represents must be accessible to the computer in some way, either as input or as decisions made in computation. We will now formalize this discussion.
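The base-dependence of the comparison count can be sketched as follows (plain Python; the helper name `comparisons` is ours, and the cost model of one comparison per digit is the assumption stated in the text):

```python
from math import ceil, log2

def comparisons(n, base):
    """Digits of the 2^n-bit result column when written in `base`:
    each digit costs one comparison against 0."""
    return ceil(2 ** n / log2(base))

n = 4
assert comparisons(n, 2) == 2 ** n          # base 2: 2^n comparisons
assert comparisons(n, 2 ** (2 ** n)) == 1   # base 2^(2^n): one comparison
```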

The standard model for studying algorithmic complexity is the Turing Machine [Tur37]. Varying definitions can be found in the literature. For the discussion in this article, it suffices to distinguish between two main classes: deterministic and non-deterministic. Non-deterministic Turing Machines can perform more than one computation step at the same time. Deterministic Turing Machines can only perform one computation step at a time. In particular, we are interested in the languages they can implement: NP is the class of algorithms that can be solved on a non-deterministic Turing Machine (NTM) in polynomial time and P is the class of algorithms that can be solved on a deterministic Turing Machine (DTM) in polynomial time. For almost 50 years, it has been an open question whether the two complexity classes are the same, that is, whether there is a way to universally transform non-deterministic polynomial algorithms into deterministic polynomial algorithms. As a consequence, all algorithms in NP would be in P.

Without loss of generality [Coo00], we will assume the alphabet of any Turing Machine discussed in this article to be {0, 1}. In such a Turing Machine, all state transitions can be described by propositional logic functions. Algorithms in NP are therefore allowed to have a number of computation steps bounded by 2^p(s) while algorithms in P can only have p(s) steps, where p is a polynomial function, that is, p(s) = O(s^c) for a constant c and s the number of symbols in the input to the algorithm. We note that, for an input of zero length, it is trivially impossible to establish a model of complexity based on this definition.

We are now ready to address these problems by introducing the concept of a solution space.

Definition 3.1 (Solution Space).

The solution space of a corresponding problem is the smallest multiset [Knu98] of symbols that a Turing Machine must consider for an exact solution of the problem.

Considering a symbol here means that it can either be read from the input or computed.

Definition 3.2 (Independent Decision).

Let d be a binary decision (see Definition 2.2). We call d independent iff d does not depend on the outcome of any other decision.

It follows that a decision can only be independent if none of the variables in the configuration of the propositional logic formula underlying the decision depend on another decision. For example, a decision would be independent if each variable depended on an individual coin flip. The concept that a random coin flip defines the information content of 1 bit has been formalized by Shannon [Sha48]. He defined the Entropy of a discrete random variable X with possible values x1, …, xn and probability mass function P(X) as: H(X) = −Σi P(xi) log2 P(xi). The result is measured in bits. Shannon’s definition is more general than we will need in this article as we will leave dependent decisions to future work. However, we demonstrate the following consistency.

Let d be a binary decision. Consistent with [Sha48], we call H(d) the information content of d, measured in the unit bit.
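Shannon's formula specialized to a binary decision can be sketched directly (plain Python; the helper name `entropy` is ours): a fair, independent decision carries exactly 1 bit, and any bias reduces the information content.

```python
from math import log2

def entropy(p):
    """Shannon entropy H of a binary decision with P(d = 1) = p, in bits:
    H = -(p*log2(p) + (1-p)*log2(1-p))."""
    if p in (0.0, 1.0):
        return 0.0  # a fully determined decision carries no information
    return -(p * log2(p) + (1 - p) * log2(1 - p))

assert entropy(0.5) == 1.0   # a fair coin flip carries exactly 1 bit
assert entropy(0.9) < 1.0    # a biased (dependent) decision carries less
```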

Lemma 3.1.

[Equivalence of Computation and Encoding] d is independent ⟺ H(d) = 1 bit


Let d be a binary decision. Let v1, …, vn be the variables of a propositional logic formula F. Let c be a configuration of v1, …, vn and C be the set of all possible configurations, with |C| = 2^n.

By definition, if d is independent then all of the variables in the configuration of the propositional logic formula underlying the decision are independent. Using probabilities [Bay63], this is P(vi = 0) = P(vi = 1) = 1/2, and P(c) = 2^(−n). In other words, all configurations are equiprobable. With Shannon’s definition it follows H(C) = log2(2^n) = n bits. Now with d representing one of the 2^n rows of the result column of a truth table, it follows that the information content of d is 1 bit, which we denote consistently as H(d) = 1 bit.

In the other direction, H(d) = 1 bit needs to imply that P(d = 0) = P(d = 1) = 1/2 and thus that d is independent. This can be easily verified using the notation defined in the beginning of this section, where F ∈ {0, …, 2^(2^n) − 1} is the decimal representation of the result column of the truth table for F. Without losing generality, we will now only focus on the first row of the result column of the truth table for F. Since half of the numbers F are odd (first row 1) and half of the numbers are even (first row 0), one can verify that P(d = 0) = P(d = 1) = 1/2 over all truth tables represented by F. This is consistent with Definition 3.2. ∎
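The odd/even argument in the proof above can be checked exhaustively for small n. A minimal sketch (plain Python; the helper name `first_bit_balance` is ours): among all 2^(2^n) result columns, the first row is 1 in exactly half of them.

```python
from itertools import product

def first_bit_balance(n):
    """Count, over all 2^(2^n) truth tables of n variables, how often
    the first row of the result column is 1 (the odd integer encodings),
    and return that count together with the total number of tables."""
    ones = sum(col[0] for col in product((0, 1), repeat=2 ** n))
    return ones, 2 ** (2 ** n)

ones, total = first_bit_balance(2)
assert (ones, total) == (8, 16)   # exactly half: P(d = 1) = 1/2
```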

As explained before, assuming the alphabet of a Turing Machine is {0, 1} implies that all state transitions can be described by propositional logic decisions. In other words, 1 bit measures an independent decision, no matter if it is made before computation and passed as input or made during computation. Independent decisions cannot be predicted or avoided without loss of accuracy. Therefore, we can refer to them as irreducible. Consequently, we can now use the bit to measure worst-case computation steps.

Principle 1 (Conservation of Computational Complexity).

Decisions in the solution space defined by a problem can neither be predicted nor discarded, only transferred between input and algorithm.

4 Satisfiability

We are now ready to take a closer look at satisfiability. The satisfiability problem of propositional logic (SAT) is the following: given a propositional logic formula F, is F satisfiable? F is satisfiable iff there is a non-empty set of configurations of the n binary variables with alphabet {0, 1} such that F evaluates to 1. More formally, ∃c ∈ C: F(c) = 1. That is, the language is defined as SAT = {⟨F⟩ : F is a satisfiable propositional logic formula}, where ⟨F⟩ is the binary representation of F.

Lemma 4.1.

The size of the solution space (see Definition 3.1) of SAT is 2^n bits, with n being the number of variables in the formula.


It follows from Lemma 2.3 that a propositional logic formula has to be represented in at least 2^n bits in the worst case. Since all decisions in the result column are independent, the minimum number of bits it can be universally represented in is 2^n bits. By Lemma 3.1, we therefore need at least 2^n independent binary decisions to determine the unsatisfiability of a formula, even one that is already fully represented in 2^n bits (e.g., a truth table). The solution space is therefore 2^n bits. ∎

Before we analyze SAT further, it is helpful to remember that most formulas can be encoded in a polynomial number of bits. For the proof that follows, we are interested in patterns of unsatisfiable formulas that can be encoded in a polynomial number of bits. Finding an example of such a polynomial pattern is straightforward. We can use a prefix bit 0 to encode “operator follows” and a prefix bit 1 to encode “variable number with ⌈log2 n⌉ digits follows”, where n is the number of independent variables in the formula and the number is the index of the variable. Without loss of generality we assume that n is known based on a separate transmission of that information (which takes ⌈log2 n⌉ bits). With three Boolean operators, we therefore need 3 bits per operator (the prefix bit plus 2 bits to select the operator) and approximately log2 n bits per variable. Now we encode the following pattern of unsatisfiable formulas: F = vi ∧ ¬vi with 1 ≤ i ≤ n. It is easy to see that this formula pattern is bounded in length by O(log n) bits and is therefore polynomial. This particular pattern alone can encode n different unsatisfiable formulas.
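A sketch of one such polynomial pattern (plain Python; the instance form vi ∧ ¬vi and the cost formula in `encoding_bits` are our illustrative assumptions about the prefix-bit scheme): each instance is unsatisfiable and its encoding grows only logarithmically with the number of variables.

```python
from math import ceil, log2

# The pattern v_i AND NOT v_i is unsatisfiable for every variable v_i.
pattern = lambda vi: vi and not vi
assert not any(pattern(v) for v in (0, 1))   # no satisfying assignment

def encoding_bits(n):
    """Assumed cost of encoding one pattern instance with the prefix-bit
    scheme: 3 bits for each of the two operators plus two variable
    references of ceil(log2 n) bits each."""
    return 2 * 3 + 2 * ceil(log2(n))

# The encoding grows only logarithmically in the number of variables n,
# so the pattern is comfortably polynomial in length.
assert encoding_bits(1024) == 26
```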

We are now ready to demonstrate the worst case computational complexity of .

Theorem 4.2.

SAT ∉ P.

For a proof by contradiction, let us assume SAT ∈ P, that is, an algorithm A that solves SAT in polynomial time on a deterministic Turing Machine. From Lemma 4.1 we know that the solution space of SAT is 2^n bits. A polynomial encoding of F and a polynomial number of computation decisions would result in an overall polynomial count of decisions. Since SAT does not allow any a-priori assumptions about the structure of the input, a universal lossless reduction (see Lemma 2.2) of the solution space would be required to implement A such that it can cope with any input universally. Per Lemma 2.2, a universal lossless compression scheme does not exist. The number of decisions in A can therefore not be bounded by p(n). It follows SAT ∉ P. ∎

Consider A to be a syntax analyzer, that is, an algorithm that uses deduction rules. In general, there is an infinite number of formulas describing one truth table. This is easily seen from the fact that every formula can, for example, be “mirrored” infinitely with F ≡ ¬¬F. Even though the worst case minimum length of a formula is 2^n bits, the maximum length is infinite. Hence, in general, we would have to match against an infinite set of finite-length patterns that could be used to represent the same truth table: this would take infinitely many decisions. In other words, a syntax analysis is undecidable. We will present an additional demonstration of that in Section 6. If we consider a semantic analyzer, then only a reproduction of the complete truth table can lead to an exact result. This cannot be universally done in polynomial time as the truth table has an exponential number of entries. It immediately follows that the current solution of SAT is actually the best case: given a formula F, one needs to guess all 2^n variable configurations and then evaluate F in linear time.
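The "guess and check" strategy described above can be sketched deterministically (plain Python; the helper name `brute_force_sat` and the example formulas are ours): try all 2^n configurations, evaluating the formula in linear time on each.

```python
from itertools import product

def brute_force_sat(formula, n):
    """Best-case strategy from the text: enumerate all 2^n configurations
    and evaluate the formula on each one."""
    for config in product((0, 1), repeat=n):
        if formula(*config):
            return config   # a satisfying configuration
    return None             # all 2^n configurations checked: unsatisfiable

f = lambda v1, v2, v3: v1 and not v2 and v3
assert brute_force_sat(f, 3) == (1, 0, 1)
g = lambda v1, v2: v1 and not v1
assert brute_force_sat(g, 2) is None
```

On a deterministic machine the loop runs sequentially; a non-deterministic machine would explore every iteration as a separate path of linear length, which is the distinction the section draws.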

Our result is consistent with the No-Free-Lunch theorem [WM97]. It is well known that optimization needs context and cannot be universal. This is an equivalent formulation of the fact that a polynomial-time algorithm for SAT would need to rely on a universal lossless compression scheme (which cannot exist).

For a bigger picture, consider all unique files of length n bits. There are 2^n such files. We now encode all files using a binary-alphabet non-deterministic Turing Machine such that each non-deterministic path encodes one file (e.g., by guessing). This defines a non-deterministic Turing Machine of size O(2^n) with path length O(n). That is, the path length is polynomial. For example, a file of size n bits could be verified against a path in this machine in linear time. Now, from Lemma 2.2 it is clear that we cannot reduce this non-deterministic Turing Machine to a polynomial-size deterministic Turing Machine and be able to reproduce the content of all files. It immediately follows that the two machine types implement different solution spaces that are not universally reducible.

5 Applying the Conservation Principle Directly

With the concept of the solution space, the question whether P = NP can also be answered more directly. Let us ignore the result described in Section 4 for now.

We now define the following problem.

Definition 5.1 (Binary Codebreaker).

Given a set of n electric switches, each with a state from {0, 1}. The switches are configured secretly into a code lock such that only one of the 2^n possible sequences serves as a code to unlock a door. The manufacturing company built and sold exactly all 2^n locks of size n, each with a unique code.

The lock has a non-deterministic programmatic interface (reading multiple switch configurations at the same time). Also the code verification mechanism inside the lock is a non-deterministic Turing Machine. The lock and the codebreaker machine can therefore be treated as one machine.

The obvious question we want to answer is: what is the computational complexity of opening a lock with an unknown code? The language that breaks the code to open the door is therefore BCB = {⟨s⟩ : code s opens the lock}.

Lemma 5.1.

BCB ∈ NP.

It is easy to see that BCB can be verified in polynomial time, as even a deterministic Turing Machine can open the door in time linear in n using a binary comparison of the guess with the secret integer s.

A “guess and check” non-deterministic Turing Machine can therefore run through all 2^n combinations in time polynomial in the number of switches. That is, the computational steps on each path are bounded by O(n). ∎
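A deterministic counterpart to this "guess and check" machine can be sketched as follows (plain Python; the helper name `break_lock` and the example secret are ours): each guess is verified by a linear-time binary comparison, but the deterministic search must walk through up to 2^n guesses sequentially.

```python
def break_lock(lock, n):
    """Deterministic codebreaker: try all 2^n switch configurations.
    The lock itself verifies each guess in time linear in n."""
    for guess in range(2 ** n):
        if lock(guess):
            return guess
    return None

secret = 0b1011                        # hypothetical hidden code, n = 4
lock = lambda guess: guess == secret   # linear-time binary comparison
assert break_lock(lock, 4) == 0b1011
```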

Lemma 5.2.

The size of the solution space of BCB is 2^n + n bits, with n being the number of switches.


There are no assumptions to be made about the configurations of the switches. That is, all switch configurations are independent and so is any decision over them. In probabilistic terms, every configuration of switches has the same probability 2^(−n) to be successful. That is, the information content of a single binary decision is H(d) = 1 bit with P(d = 1) = 1/2 for an unknown code s. The solution space of BCB is therefore the sum of all combinations (2^n bits) and the time it takes to verify a solution (n bits). So the solution space of BCB is 2^n + n bits. ∎

Corollary 5.2.1.

BCB ∉ P.

Since the input is of polynomial size and the solution space is exponential, by the principle of conservation of computational complexity, BCB cannot run on a deterministic Turing Machine in polynomial time. That is, BCB ∉ P. ∎

BCB is therefore in NP but not in P. As an additional note, SAT seems not polynomially reducible to BCB, as the formula implied by BCB is by definition not only unknown but also satisfiable. This implies that BCB is most likely not NP-complete, but this is irrelevant here.

Corollary 5.2.2.

P ≠ NP.

To show that P ≠ NP, it suffices to show that there is a language L with L ∈ NP and L ∉ P [Coo00]. Therefore it follows from Corollary 5.2.1 (or Theorem 4.2) that P ≠ NP. ∎

6 On the Halting Problem

We now introduce an alternative explanation for the undecidability of the halting problem [Tur37] based on the principle of conservation of computational complexity.

A solution to a decision problem requires a certain number of irreducible decisions. Based on Lemma 2.2 and Lemma 3.1, it is universally impossible for a Turing Machine to skip decisions, thus reducing the solution space. If analyzing the syntax of a program of another Turing Machine required fewer decisions than running the program, it would imply, again, a violation of the conservation of the solution space. We can observe the following.

Let the language that a Turing Machine M accepts be L(M) with L(M) ⊆ {0, 1}*. For simplicity, we call the tape memory and allow random access by direct addressing. Furthermore, we will use a Turing-Machine-equivalent computation model, namely the WHILE program [Sch97] (p. 106). This model is based on Kleene’s Normal Form Theorem [Kle43], which states that any Turing-complete program can be expressed by only one WHILE loop with a Boolean condition. The condition cannot be omitted because otherwise the program is not Turing complete (μ-recursive), only LOOP-calculable (primitive recursive). This has been shown by [Sch97] (p. 121) based on [Ack28].

This means any program can be expressed as

WHILE F { P }

where F is a function returning 0 or 1 and P is an arbitrary sub program. If we choose the alphabet of the Turing Machine to be binary, the halting condition F must be a propositional logic formula. F checks a set of memory cells, modified by P, for the acceptance of L(M).

More specifically, we can express the program as

WHILE NOT F(cell[k],...,cell[m]) { P }

where cell[k], …, cell[m] are memory cells (Boolean variables) and P is only responsible for modifying their values. Without losing generality, we inverted the loop. P ultimately only leads to a configuration of the variables cell[k], …, cell[m].
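The normal form above can be sketched in executable form (plain Python; the halting condition `F`, the sub program `P`, and the cell layout are our hypothetical placeholders, as the text leaves them arbitrary):

```python
# Kleene normal form sketch: one loop, one Boolean halting condition.
cell = [0] * 8   # binary memory cells of the machine

def F(cells):
    """Hypothetical halting condition: a propositional formula over the
    inspected cells. Here: halt once cell[0] AND cell[1]."""
    return cells[0] and cells[1]

def P():
    """Hypothetical sub program: only modifies the cells F inspects."""
    if cell[0] == 0:
        cell[0] = 1
    else:
        cell[1] = 1

# WHILE NOT F(cell[k],...,cell[m]) { P }
while not F(cell[0:2]):
    P()

assert cell[0] == 1 and cell[1] == 1   # the loop halted
```

Deciding from the source text alone whether any run of `P` can ever make `F` true is exactly the satisfiability question discussed next.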

Deciding if F is satisfiable, that is, determining if there exists a configuration of cell[k], …, cell[m] such that F is true based on F’s syntax (this is, without running the program), equates to deciding if the program halts on some input w: F alone would be predicting if the loop can ever halt, ignoring the values of the variables. This is undecidable and known as the Existential Halting Problem (EHP). EHP is defined as follows: given a Turing Machine M, is there some input on which M halts, formally: ∃w: M halts on w.

This reduction of EHP to SAT is another way of showing that SAT is only solvable by simulating all states of the independent variables. In the case of F encased in a WHILE program, the independence needs to be resolved fully. That is, if some variables in F (memory cells) depend on other variables, they must be evaluated until each variable in F is independently valid, for example, until the only dependency left is a decision by the user. For many reasons, this can take infinite time. This means the cardinality (size) of the solution space might be infinite.

The halting problem is therefore undecidable because predictions of irreducible decisions are impossible.

7 Conclusions

To the best of our knowledge [Woe17, Cla00], the proofs contained in our article have not been proposed before. We presented the principle of conservation of computational complexity. This principle is derived from the simple fact that one bit represents two equiprobable states based on one independent binary decision. Under this viewpoint, it becomes obvious that skipping irreducible decisions is analogous to applying a lossy compression scheme that cannot guarantee an exact solution. Binarization of the input and computing space might seem strange at first. In the end, it is only a modern form of Goedelization [Göd31]. We would like to note that the field of Human Computer Interaction has long sensed the existence of a law of conservation of complexity [Saf10] that is in full agreement with our findings. Similarly, it is known in the machine learning and computer vision communities that the runtime of a signal processing algorithm for the same length input depends on the noisiness of the signal [FJ14]. Last but not least, the consequences of the principle of conservation of computational complexity inevitably lead to the determination that P ≠ NP.

In our minds, understanding complexity is an effort of reducing it. The original works of [Coo71] and [Kar72] demonstrate ingenious examples of complexity reduction. However, the apparent disadvantage of current complexity theory is that it neglects that an algorithm by itself is meaningless. An algorithm is only part of a more complex system. It is somewhat surprising that even a fantastic book like [Sch97] does not mention the notion of a bit even once. The bit connects computation to information theory and physics [Lan61, DAV98]. After all, computers are part of the physical universe and bound to its principles, such as the conservation of energy. As mentioned previously, investigating how Shannon’s Source Coding Theorem [Sha48] can be used as a universal measure for computational complexity therefore seems like a viable path for future research [CCV10, FM17].


This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. It was also partially supported by Lawrence Livermore Laboratory Directed Research & Development grants (17-ERD-096, 17-SI-004, and 18-ERD-021). IM number LLNL-JRNL-743757. Any findings and conclusions are those of the authors, and do not necessarily reflect the views of the funders. We want to thank our families for their support enduring weekend and late-night shifts writing this article. We also want to thank Dr. Mario Krell, Dr. Tomas Oppelstrup, Dr. Markus Schordan, Dr. Jason Lenderman, Dr. Adam Janin and Dr. Jeffrey Hittinger for encouraging remarks on this article. We want to thank Prof. Satish Rao and Prof. Richard Karp for taking the time to discuss with us. We thank our former PhD advisors, Prof. Raúl Rojas and Prof. Mikhail Dzugutov, as they continue to be incredible mentors. Prof. Jerome Feldman deserves thanks and credit for instrumental fundamental advice on this and other articles. We would like to point out that this article would not have been possible without the existence of Wikipedia.org.


  • [Ack28] Wilhelm Ackermann. Zum Hilbertschen Aufbau der reellen Zahlen. Mathematische Annalen, 99(1):118–133, 1928.
  • [Bay63] Thomas Bayes. An essay towards solving a problem in the doctrine of chances. 1763.
  • [Boo54] George Boole. An Investigation of the Laws of Thought on which are Founded the Mathematical Theories of Logic and Probabilities. Walton and Maberly, 1854.
  • [CCV10] Massimo Cencini, Fabio Cecconi, and Angelo Vulpiani. Chaos. From Simple Models to Complex Systems, volume 17. World Scientific, 2010.
  • [Chu36] Alonzo Church. A note on the Entscheidungsproblem. Journal of Symbolic Logic, 1(1):40–41, 1936.
  • [Cla00] Clay Mathematical Institute. Millennium Problems. http://www.claymath.org/millennium-problems, 2000.
  • [Coo71] Stephen A Cook. The complexity of theorem-proving procedures. In Proceedings of the third annual ACM symposium on Theory of computing, pages 151–158. ACM, 1971.
  • [Coo00] Stephen Cook. The P versus NP Problem. Official Problem Description, Clay Mathematical Institute, 2000.
  • [DAV98] Mikhail Dzugutov, Erik Aurell, and Angelo Vulpiani. Universal relation between the Kolmogorov-Sinai entropy and the thermodynamical entropy in simple liquids. Physical Review Letters, 81(9):1762, 1998.
  • [FJ14] Gerald Friedland and Ramesh Jain. Multimedia Computing. Cambridge University Press, 2014.
  • [FK17] Gerald Friedland and Mario Krell. A capacity scaling law for artificial neural networks. CoRR, abs/1708.06019, 2017.
  • [FM17] Gerald Friedland and Alfredo Metere. An isomorphism between maximum Lyapunov exponent and Shannon’s channel capacity. arXiv preprint arXiv:1706.08638, 2017.
  • [Göd31] Kurt Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38(1):173–198, 1931.
  • [Hun33] Edward V Huntington. New sets of independent postulates for the algebra of logic, with special reference to Whitehead and Russell’s Principia Mathematica. Transactions of the American Mathematical Society, 35(1):274–304, 1933.
  • [Kar72] Richard M Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
  • [Kle43] Stephen Cole Kleene. Recursive predicates and quantifiers. Transactions of the American Mathematical Society, 53(1):41–73, 1943.
  • [Knu98] Donald Ervin Knuth. The art of computer programming, volume 2. Pearson Education, 1998.
  • [Knu08] Donald Ervin Knuth. The art of computer programming, volume 4, Pre-Fascicle 0B. Addison Wesley, 2008.
  • [Lan61] Rolf Landauer. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3):183–191, 1961.
  • [Pei02] Charles Sanders Peirce. Collected Papers of Charles Sanders Peirce, volume 5. Harvard University Press (1974), 1902.
  • [Saf10] Dan Saffer. Designing for interaction. New Riders Berkeley, 2010.
  • [Sch97] Uwe Schöning. Theoretische Informatik – Kurzgefaßt. Springer Verlag Berlin, 1997.
  • [Sha48] Claude E Shannon. A mathematical theory of communication, part i, part ii. Bell Syst. Tech. J., 27:623–656, 1948.
  • [Tur37] Alan Mathison Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1):230–265, 1937.
  • [WM97] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
  • [Woe17] GJ Woeginger. The P-versus-NP page. https://www.win.tue.nl/~gwoegi/P-versus-NP.htm, 2017.
  • [ZL78] Jacob Ziv and Abraham Lempel. Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory, 24(5):530–536, 1978.