Imagine that you have a collection of one billion lottery tickets scattered throughout your basement in no particular order. An official from the lottery announces the number of the winning lottery ticket. For a possible prize of one billion dollars, is it a good idea to search your basement until you find the winning ticket or until you come to the conclusion that you do not possess the winning ticket? Most people would think not - even if the winning lottery ticket were in your basement, performing such a search could take years, over thirty work-years, assuming that it takes you at least one second to examine each lottery ticket. Now imagine that you have a collection of only one thousand lottery tickets in your basement. Is it a good idea to search your basement until you find the winning ticket or until you come to the conclusion that you do not possess the winning ticket? Most people would think so, since doing so would take at most a few hours.
From these scenarios, let us postulate a general rule that the maximum time that it may take for one person to search n unsorted objects for one specific object is directly proportional to n. This is clearly the case for physical objects, but what about abstract objects? For instance, let us suppose that a dating service is trying to help n single women and n single men get married. Each woman gives the dating service a list of characteristics that she would like to see in her potential husband, for instance, handsome, caring, athletic, domesticated, etc. And each man gives the dating service a list of characteristics that he would like to see in his potential wife, for instance, beautiful, obedient, good cook, thrifty, etc. The dating service is faced with the task of arranging dates for each of its clients so as to satisfy everyone’s preferences.
Now there are n! (which is shorthand for n × (n − 1) × (n − 2) × ⋯ × 2 × 1) possible ways for the dating service to arrange dates between the n men and the n women, but only a fraction of such arrangements would satisfy all of its clients. If n = 100, it would take too long for the dating service’s computer to evaluate all n! possible arrangements until it finds an arrangement that would satisfy all of its clients. (100! is too large a number of possibilities for any modern computer to handle.) Is there an efficient way for the dating service’s computer to find dates with compatible potential spouses for each of the dating service’s clients so that everyone is happy, assuming that it is possible to do so? Yes, and here is how:
Matchmaker Algorithm - Initialize the set M = ∅ (the set of matched couples). Search for a list of compatible relationships between men and women that alternates between a compatible relationship not contained in set M, followed by a compatible relationship contained in set M, followed by a compatible relationship not contained in set M, followed by a compatible relationship contained in set M, and so on, ending with a compatible relationship not contained in set M, where both the woman in the first relationship and the man in the last relationship are not members of any compatible relationships contained in set M. (Such a list is known as an augmenting path.) Once such a list is found, for each compatible relationship r in the list, add r to M if r is not contained in M or remove r from M if r is contained in M. (Note that this procedure must increase the size of set M by one.) Repeat this procedure until no such list exists.
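The alternating-list search described above is the classical augmenting-path method for maximum bipartite matching; here is a minimal Python sketch under that interpretation (the function name and the toy data are mine):

```python
from collections import defaultdict

def maximum_matching(compatible, women):
    """Augmenting-path (Kuhn's) algorithm for maximum bipartite matching.

    `compatible` is a collection of (woman, man) pairs acceptable to both
    parties; returns a dict pairing each matched woman with a man.
    """
    likes = defaultdict(list)
    for w, m in compatible:
        likes[w].append(m)
    match_of_man = {}  # man -> woman he is currently paired with

    def try_augment(w, visited):
        # Search for an alternating list that ends at an unmatched man.
        for m in likes[w]:
            if m not in visited:
                visited.add(m)
                if m not in match_of_man or try_augment(match_of_man[m], visited):
                    match_of_man[m] = w  # flip memberships along the path
                    return True
        return False

    for w in women:
        try_augment(w, set())
    return {w: m for m, w in match_of_man.items()}

pairs = [("Ann", "Bob"), ("Ann", "Carl"), ("Beth", "Bob")]
print(maximum_matching(pairs, ["Ann", "Beth"]))
```

Each successful search flips the memberships along one alternating list, which, as the text notes, grows the matching by exactly one couple.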
Such an algorithm is guaranteed to efficiently find an arrangement that will satisfy all of the dating service’s clients whenever such an arrangement exists. So we see that with regard to abstract objects, it is not necessarily the case that the maximum time that it may take for one to search n unsorted objects for a specific object is directly proportional to n; in the dating service example, there are n! possible arrangements between the n men and the n women, yet it is not necessary for a computer to examine all n! arrangements in order to find a satisfactory arrangement. One might think that the problem of finding a satisfactory dating arrangement is easy for a modern computer to solve because the list of pairs of men and women who are compatible is relatively small (of size at most n^2, which is much smaller than the number of possible arrangements, n!) and because it is easy to verify whether any particular arrangement will make everyone happy. But this reasoning is invalid, as we shall demonstrate:
2 The SUBSET-SUM Problem
Consider the following problem: You are given a set S = {a_1, a_2, ..., a_n} of n integers and another integer t, which we shall call the target integer. You want to know if there exists a subset of S for which the sum of its elements is equal to t. (We shall consider the sum of the elements of the empty set to be zero.) This problem is called the SUBSET-SUM problem. Now, there are 2^n subsets of S, so one could naively solve this problem by exhaustively comparing the sum of the elements of each subset of S to t until one finds a subset-sum equal to t, but such a procedure would be infeasible for even the fastest computers in the world to implement when n = 100. Is there an algorithm which can considerably reduce the amount of work for solving the SUBSET-SUM problem? Yes, there is an algorithm discovered by Horowitz and Sahni in 1974, which we shall call the Meet-in-the-Middle algorithm, that takes on the order of 2^(n/2) steps to solve the SUBSET-SUM problem instead of the 2^n steps of the naive exhaustive comparison algorithm:
Meet-in-the-Middle Algorithm - First, partition the set S into two subsets, S_1 = {a_1, ..., a_⌈n/2⌉} and S_2 = {a_⌈n/2⌉+1, ..., a_n}. Let us define T_1 and T_2 as the sets of subset-sums of S_1 and S_2, respectively. Sort the lists T_1 and t − T_2 = {t − s : s ∈ T_2} in ascending order. Compare the first elements in both of the lists. If they match, then stop and output that there is a solution (since s_1 = t − s_2 implies s_1 + s_2 = t). If not, then compare the greater element with the next element in the other list. Continue this process until there is a match, in which case there is a solution, or until one of the lists runs out of elements, in which case there is no solution.
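Both the naive exhaustive method and the Meet-in-the-Middle method can be sketched in a few lines of Python (function names are mine):

```python
def subset_sums(nums):
    """All 2^len(nums) subset-sums of `nums` (duplicates kept)."""
    sums = [0]
    for x in nums:
        sums += [s + x for s in sums]
    return sums

def subset_sum_naive(numbers, target):
    """Exhaustive comparison against all 2^n subset-sums."""
    return any(s == target for s in subset_sums(numbers))

def subset_sum_mitm(numbers, target):
    """Meet-in-the-Middle: about 2^(n/2) time and space instead of 2^n."""
    half = len(numbers) // 2
    left = sorted(subset_sums(numbers[:half]))                       # subset-sums of the first half
    right = sorted(target - s for s in subset_sums(numbers[half:]))  # target minus sums of the second half
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            return True  # s_1 = target - s_2, so s_1 + s_2 = target
        if left[i] < right[j]:
            i += 1
        else:
            j += 1
    return False
```

The final loop is the march through the two sorted lists described above: at every step the smaller front element is discarded, so each list is scanned at most once.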
This algorithm takes on the order of 2^(n/2) steps, since it takes on the order of 2^(n/2) steps to sort the lists T_1 and t − T_2 (assuming that the computer can sort in linear-time) and on the order of 2^(n/2) steps to compare elements from the sorted lists T_1 and t − T_2. Are there any faster algorithms for solving SUBSET-SUM? 2^(n/2) is still a very large number when n = 100, even though this strategy is a vast improvement over the naive strategy. It turns out that no algorithm with a better worst-case running-time has ever been found since the Horowitz and Sahni paper. And the reason for this is because it is impossible for such an algorithm to exist. Here is an explanation why:
Explanation: To understand why there is no algorithm with a faster worst-case running-time than the Meet-in-the-Middle algorithm, let us travel back in time seventy-five years, long before the internet. If one were to ask someone back then what a computer is, one would have gotten the answer, “a person who computes (usually a woman)” instead of the present day definition, “a machine that computes”. Let us imagine that we knew two computers back then named Mabel and Mildred (two popular names for women in the 1930’s). Mabel is very efficient at sorting lists of integers into ascending order; for instance she can sort a set of ten integers in 15 seconds, whereas it takes Mildred 20 seconds to perform the same task. However, Mildred is very efficient at comparing two integers a and b to determine whether a < b, a > b, or a = b; she can compare ten pairs of integers in 15 seconds, whereas it takes Mabel 20 seconds to perform the same task.
Let’s say we were to give both Mabel and Mildred the task of determining whether there exists a subset of some four-element set, S = {a_1, a_2, a_3, a_4}, for which the sum of its elements adds up to a target t. Since Mildred is good at comparing but not so good at sorting, Mildred chooses to solve this problem by comparing t to all of the sixteen subset-sums of S. Since Mabel is good at sorting but not so good at comparing, Mabel decides to solve this problem by using the Meet-in-the-Middle algorithm. In fact, of all algorithms that Mabel could have chosen to solve this problem, the Meet-in-the-Middle algorithm is the most efficient for her to use on sets with only four integers. And of all algorithms that Mildred could have chosen to solve this problem, comparing t to all of the sixteen subset-sums of S is the most efficient algorithm for her to use on sets with only four integers.
Now we are going to use the principle of mathematical induction to prove that the best algorithm for Mabel to use for solving the SUBSET-SUM problem for large n is the Meet-in-the-Middle algorithm: We already know that this is true when n = 4. Let us assume that this is true for n, i.e., that of all possible algorithms for Mabel to use for solving the SUBSET-SUM problem on sets with n integers, the Meet-in-the-Middle algorithm has the best worst-case running-time. Then we shall prove that this is also true for n + 1:
Let T be the set of all subset-sums of the set {a_1, ..., a_n}. Notice that the SUBSET-SUM problem on the set {a_1, ..., a_{n+1}} of integers and target t is equivalent to the problem of determining whether (1) t ∈ T or (2) t′ ∈ T (where t′ = t − a_{n+1}). (The symbol ∈ means “is a member of”.) Also notice that these two subproblems, (1) and (2), are independent from one another in the sense that the values of t and t′ are unrelated to each other and are also unrelated to set T; therefore, in order to determine whether t ∈ T or t′ ∈ T, it is necessary to solve both subproblems (assuming that the first subproblem solved has no solution). So it is clear that if Mabel could solve both subproblems in the fastest time possible and also whenever possible make use of information obtained from solving subproblem (1) to save time solving subproblem (2) and whenever possible make use of information obtained from solving subproblem (2) to save time solving subproblem (1), then Mabel would be able to solve the problem of determining whether (1) t ∈ T or (2) t′ ∈ T in the fastest time possible.
We shall now explain why the Meet-in-the-Middle algorithm has this characteristic for sets of size n + 1: Partition the set so that a_{n+1} falls in the second half, and let T_1 and T_2′ denote the sets of subset-sums of the first half and of the second half with a_{n+1} removed, so that the sorted list t − T_2 is the merger of the lists t − T_2′ and t′ − T_2′. It is clear that by the induction hypothesis, the Meet-in-the-Middle algorithm solves each subproblem in the fastest time possible, since it works by applying the Meet-in-the-Middle algorithm to each subproblem, without loss of generality sorting and comparing elements in lists T_1 and t − T_2′ and also sorting and comparing elements in lists T_1 and t′ − T_2′ as the algorithm sorts and compares elements in lists T_1 and t − T_2. There are two situations in which it is possible for the Meet-in-the-Middle algorithm to make use of information obtained from solving subproblem (1) to save time solving subproblem (2) or to make use of information obtained from solving subproblem (2) to save time solving subproblem (1). And the Meet-in-the-Middle algorithm takes advantage of both of these opportunities:
Whenever the Meet-in-the-Middle algorithm compares two elements from lists T_1 and t − T_2 and the element in list T_1 turns out to be less than an element belonging to the sub-list t − T_2′, the algorithm makes use of information obtained from solving subproblem (1) (the fact that the element in list T_1 is less than the element in list t − T_2′) to save time solving subproblem (2) (the algorithm does not consider the element in list T_1 again).
Whenever the Meet-in-the-Middle algorithm compares two elements from lists T_1 and t − T_2 and the element in list T_1 turns out to be less than an element belonging to the sub-list t′ − T_2′, the algorithm makes use of information obtained from solving subproblem (2) (the fact that the element in list T_1 is less than the element in list t′ − T_2′) to save time solving subproblem (1) (the algorithm does not consider the element in list T_1 again).
Therefore, we can conclude that the Meet-in-the-Middle algorithm whenever possible makes use of information obtained from solving subproblem (1) to save time solving subproblem (2) and whenever possible makes use of information obtained from solving subproblem (2) to save time solving subproblem (1). So we have completed our induction step, proving the claim true for n + 1, assuming it true for n.
Therefore, the best algorithm for Mabel to use for solving the SUBSET-SUM problem for large n is the Meet-in-the-Middle algorithm. But is the Meet-in-the-Middle algorithm the best algorithm for Mildred to use for solving the SUBSET-SUM problem for large n? Since the Meet-in-the-Middle algorithm is not the fastest algorithm for Mildred to use for small n, is it not possible that the Meet-in-the-Middle algorithm is also not the fastest algorithm for Mildred to use for large n? It turns out that for large n, there is no algorithm for Mildred to use for solving the SUBSET-SUM problem with a faster worst-case running-time than the Meet-in-the-Middle algorithm. Why?
Notice that the Meet-in-the-Middle algorithm takes on the order of 2^(n/2) steps regardless of whether Mabel or Mildred applies it. And notice that the algorithm of naively comparing the target t to all of the 2^n subset-sums of set S takes on the order of 2^n steps regardless of whether Mabel or Mildred applies it. So for large n, regardless of who the computer is, the Meet-in-the-Middle algorithm is faster than the naive exhaustive comparison algorithm - from this example, we can understand the general principle that the asymptotic running-time of an algorithm does not differ by more than a polynomial factor when run on different types of computers [39, 40]. Therefore, since no algorithm is faster than the Meet-in-the-Middle algorithm for solving SUBSET-SUM for large n when applied by Mabel, we can conclude that no algorithm is faster than the Meet-in-the-Middle algorithm for solving SUBSET-SUM for large n when applied by Mildred. And furthermore, using this same reasoning, we can conclude that no algorithm is faster than the Meet-in-the-Middle algorithm for solving SUBSET-SUM for large n when run on a modern computing machine. ∎
So it doesn’t matter whether the computer is Mabel, Mildred, or any modern computing machine; the fastest algorithm which solves the SUBSET-SUM problem for large n is the Meet-in-the-Middle algorithm. Because once a solution to the SUBSET-SUM problem is found, it is easy to verify (in polynomial-time) that it is indeed a solution, we say that the SUBSET-SUM problem is in class NP. And because there is no algorithm which solves SUBSET-SUM that runs in polynomial-time (since the Meet-in-the-Middle algorithm runs in exponential-time and is the fastest algorithm for solving SUBSET-SUM, as we have shown above), we say that the SUBSET-SUM problem is not in class P. Then since the SUBSET-SUM problem is in class NP but not in class P, we can conclude that P ≠ NP, thus solving the P versus NP problem. The solution to the P versus NP problem demonstrates that it is possible to hide abstract objects (in this case, a subset of set S) without an abundance of resources - it is, in general, more difficult to find a subset of a set of only one hundred integers for which the sum of its elements equals a target integer than to find the winning lottery-ticket in a pile of one billion unsorted lottery tickets, even though the lottery-ticket problem requires many more resources (one billion lottery tickets) than the SUBSET-SUM problem requires (a list of one hundred integers).
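The easy verification step that places SUBSET-SUM in class NP can itself be sketched in a few lines (function name mine):

```python
def verify_certificate(subset, full_set, target):
    """Checking a proposed SUBSET-SUM solution takes polynomial time,
    even though finding one appears to require exponential time."""
    return all(x in full_set for x in subset) and sum(subset) == target
```

The contrast between this trivial checker and the exponential-time search above is exactly the gap that the P versus NP question asks about.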
3 Does P ≠ NP really matter?
Even though P ≠ NP, might there still be algorithms which efficiently solve problems that are in NP but not in P in the average-case scenario? (Since the P ≠ NP result deals only with the worst-case scenario, there is nothing to forbid this from happening.) The answer is yes; for many problems which are in NP but not in P, there exist algorithms which efficiently solve them in the average-case scenario [27, 38], so the statement that P ≠ NP is not as ominous as it sounds. In fact, there is a very clever algorithm which solves almost all instances of the SUBSET-SUM problem in polynomial-time [11, 25, 27]. (The algorithm works by converting the SUBSET-SUM problem into the problem of finding the shortest non-zero vector of a lattice, given its basis.) But even though for many problems which are in NP but not in P, there exist algorithms which efficiently solve them in the average-case scenario, in the opinion of most complexity-theorists, it is probably false that for all problems which are in NP but not in P, there exist algorithms which efficiently solve them in the average-case scenario.
Even though P ≠ NP, might it still be possible that there exist polynomial-time randomized algorithms which correctly solve problems in NP but not in P with a high probability regardless of the problem instance? (The word “randomized” in this context means that the algorithm bases some of its decisions upon random variables. The advantage of these types of algorithms is that whenever they fail to output a solution, there is still a good chance that they will succeed if they are run again.) The answer is probably no, as there is a widely believed conjecture that P = BPP, where BPP is the class of decision problems for which there are polynomial-time randomized algorithms that correctly solve them at least two-thirds of the time regardless of the problem instance.
4 Are Quantum Computers the Answer?
A quantum computer is any computing device which makes direct use of distinctively quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. As of today, the field of practical quantum computing is still in its infancy; however, much is known about the theoretical properties of a quantum computer. For instance, quantum computers have been shown to efficiently solve certain types of problems, like factoring integers, which are believed to be difficult to solve on a classical computer, e.g., a human-computer like Mabel or Mildred or a machine-computer like an IBM PC or an Apple Macintosh.
Is it possible that one day quantum computers will be built and will solve problems like the SUBSET-SUM problem efficiently in polynomial-time? The answer is that it is generally suspected by complexity theorists to be impossible for a quantum computer to solve the SUBSET-SUM problem (and all other problems which share a characteristic with the SUBSET-SUM problem in that they belong to a subclass of problems known as NP-complete problems) in polynomial-time. A curious fact is that if one were to solve the SUBSET-SUM problem on a quantum computer by brute force, the algorithm would have a running-time on the order of 2^(n/2) steps, which (by coincidence?) is the same asymptotic running-time of the fastest algorithm which solves SUBSET-SUM on a classical computer, the Meet-in-the-Middle algorithm [1, 4, 18].
In any case, no one has ever built a practical quantum computer and some scientists are even of the opinion that building such a computer is impossible; the acclaimed complexity theorist, Leonid Levin, wrote: “QC of the sort that factors long numbers seems firmly rooted in science fiction. It is a pity that popular accounts do not distinguish it from much more believable ideas, like Quantum Cryptography, Quantum Communications, and the sort of Quantum Computing that deals primarily with locality restrictions, such as fast search of long arrays. It is worth noting that the reasons why QC must fail are by no means clear; they merit thorough investigation. The answer may bring much greater benefits to the understanding of basic physical concepts than any factoring device could ever promise. The present attitude is analogous to, say, Maxwell selling the Daemon of his famous thought experiment as a path to cheaper electricity from heat. If he did, much of the insights of today’s thermodynamics might be lost or delayed.”
5 Unprovable Conjectures
In the early twentieth century, the famous mathematician, David Hilbert, proposed the idea that all mathematical facts can be derived from only a handful of self-evident axioms. In the 1930’s, Kurt Gödel proved that such a scenario is impossible by showing that for any proposed finite axiom system for arithmetic, there must always exist true statements that are unprovable within the system, if one is to assume that the axiom system has no inconsistencies. Alan Turing extended this result to show that it is impossible to design a computer program which can determine whether any other computer program will eventually halt. In the latter half of the 20th century, Gregory Chaitin defined a real number between zero and one, which he calls Ω, to be the probability that a randomly chosen computer program halts. And Chaitin proved that:
Theorem 1 - For any mathematics problem, the bits of Ω, when Ω is expressed in binary, completely determine whether that problem is solvable or not.
Theorem 2 - The bits of Ω are random and only a finite number of them are even possible to know.
From these two theorems, it follows that the very structure of mathematics itself is random and mostly unknowable! 
Even though Hilbert’s dream to be able to derive every mathematical fact from only a handful of self-evident axioms was destroyed by Gödel in the 1930’s, this idea has still had an enormous impact on current mathematics research. In fact, even though mathematicians as of today accept the incompleteness theorems proven by Gödel, Turing, and Chaitin as true, in general these same mathematicians also believe that these incompleteness theorems ultimately have no impact on traditional mathematics research, and they have thus adopted Hilbert’s paradigm of deriving mathematical facts from only a handful of self-evident axioms as a practical way of researching mathematics. Gregory Chaitin has been warning these mathematicians for decades now that these incompleteness theorems are actually very relevant to advanced mathematics research, but the overwhelming majority of mathematicians have not taken his warnings seriously. We shall directly confirm Chaitin’s assertion that incompleteness is indeed very relevant to advanced mathematics research by giving very strong evidence that two famous mathematics problems, determining whether the Collatz Conjecture is true and determining whether the Riemann Hypothesis is true, are impossible to solve:
The Collatz Conjecture - Here’s a fun experiment that you, the reader, can try: Pick any positive integer, n. If n is even, then compute n/2 or if n is odd, then compute (3n + 1)/2. Then let n equal the result of this computation and perform the whole procedure again until n = 1. For instance, if you had chosen n = 11, you would have obtained the sequence 17, 26, 13, 20, 10, 5, 8, 4, 2, 1.
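The experiment can be sketched in Python (using the “shortcut” step, where an odd n maps to (3n + 1)/2); the second function records the even/odd information at each step, which the unprovability argument below calls the parity vector:

```python
def collatz_sequence(n):
    """Iterate n -> n/2 (n even) or n -> (3n + 1)/2 (n odd) until n = 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
        seq.append(n)
    return seq

def parity_vector(n):
    """The parity (1 = odd, 0 = even) of n at the start of each iteration."""
    return [m % 2 for m in collatz_sequence(n)[:-1]]

print(collatz_sequence(11))  # [11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
print(parity_vector(11))     # [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
```

Of course, the `while` loop only terminates if the conjecture holds for the chosen input, which is precisely what is at issue.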
The Collatz Conjecture states that such an algorithm will always eventually reach n = 1 and halt. Computers have verified this conjecture to be true for all positive integers less than 2^68. Why does this happen? One can give an informal argument as to why this may happen as follows: Let us assume that at each step, the probability that n is even is one-half and the probability that n is odd is one-half. Then at each iteration, n will decrease by a multiplicative factor of about √3/2 ≈ 0.87 on average, which is less than one; therefore, n will eventually converge to one with probability one. But such an argument is not a rigorous mathematical proof, since the probability assumptions that the argument is based upon are not well-defined and even if they were well-defined, it would still be possible (although extremely unlikely, with probability zero) that the algorithm will never halt for some input.
Is there a rigorous mathematical proof of the Collatz Conjecture? As of today, no one has found a rigorous proof that the conjecture is true and no one has found a rigorous proof that the conjecture is false. In fact, Paul Erdös, who was one of the greatest mathematicians of the twentieth century, commented about the Collatz Conjecture: “Mathematics is not yet ready for such problems.” We can informally demonstrate that there is no way to deductively prove that the conjecture is true, as follows:
Explanation: First, notice that in order to be certain that the algorithm will halt for a given input n, it is necessary to know whether the value of n at the beginning of each iteration of the algorithm is even or odd. (For a rigorous proof of this, see “The Collatz Conjecture is Unprovable”.) For instance, if the algorithm starts with input 11, then in order to know that the algorithm halts at one, it is necessary to know that 11 is odd, 17 is odd, 26 is even, 13 is odd, 20 is even, 10 is even, 5 is odd, 8 is even, 4 is even, and 2 is even. We can express this information (odd, odd, even, odd, even, even, odd, even, even, even) as a vector of zeroes and ones, (1, 1, 0, 1, 0, 0, 1, 0, 0, 0). Let us call this vector the parity vector of 11. (If n never converges to one, then its parity vector must be infinite-dimensional.) If one does not know the parity vector of the input, then it is impossible to know what the algorithm does at each iteration and therefore impossible to be certain that the algorithm will converge to one. So any proof that the Collatz algorithm applied to n halts must specify the parity vector of n. Next, let us give a definition of a random vector:
Definition - We shall say that a vector x of n zeroes and ones is random if x cannot be specified in less than n bits in a computer text-file.
Example 1 - The vector of one million concatenations of the vector (0, 1) is not random, since we can specify it in less than two million bits in a computer text-file. (We just did.)
Example 2 - The vector of outcomes of one million coin-tosses has a good chance of fitting our definition of “random”, since much of the time the most compact way of specifying such a vector is to simply make a list of the results of each coin-toss, in which one million bits are necessary.
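This definition is, roughly, Kolmogorov complexity, and ordinary file compression gives a feel for it. A sketch comparing a highly structured million-bit string with a random one (zlib is only a crude stand-in for “shortest possible specification”):

```python
import random
import zlib

random.seed(0)
structured = b"01" * 500_000                                    # one million alternating characters
noisy = bytes(random.getrandbits(8) for _ in range(125_000))    # one million random bits

# The structured string has a very short description, so it compresses
# to a tiny fraction of its length; the random string is essentially
# incompressible, like the coin-toss vector of Example 2.
print(len(zlib.compress(structured)))
print(len(zlib.compress(noisy)))
```

No compressor can beat the counting argument used below: most strings of a given length simply have no shorter description.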
Now let us suppose that it were possible to prove the Collatz Conjecture and let n be the number of bits in a hypothetical computer text-file containing such a proof. And let x be a random vector of n + 1 zeroes and ones, as defined above. (It is not difficult to prove that at least half of all vectors of n + 1 zeroes and ones are random.) There is a mathematical theorem which says that there must exist a number m with the first n + 1 bits of its parity vector equal to x; therefore, any proof of the Collatz Conjecture must specify vector x (as we discussed above), since such a proof must show that the Collatz algorithm halts when given input m. But since vector x is random, at least n + 1 bits are required to specify vector x, contradicting our assumption that n is the number of bits in a computer text-file containing a proof of the Collatz Conjecture; therefore, a formal proof of the Collatz Conjecture cannot exist. ∎
The Riemann Hypothesis - There is also another famous unresolved conjecture, the Riemann Hypothesis, which has a characteristic similar to that of the Collatz Conjecture, in that it too can never be proven true. In the opinion of many mathematicians, the Riemann Hypothesis is the most important unsolved problem in mathematics. The reason why it is so important is because a resolution of the Riemann Hypothesis would shed much light on the distribution of prime numbers: It is well known that the number of prime numbers less than N is approximately Li(N) = ∫_2^N dt/ln(t). If the Riemann Hypothesis is true, then for large N, the error in this approximation must be bounded by C √N ln(N) for some constant C, which is comparable to the bound for a random walk, i.e., the sum of independent random variables, x_1 + x_2 + ⋯ + x_N, in which the probability that x_k = 1 is one-half and the probability that x_k = −1 is one-half.
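The prime-counting approximation is easy to check numerically; a sketch using a simple sieve (for brevity it uses the cruder classical approximation N/ln(N), which at N = 10^6 is accurate only to within about ten percent):

```python
from math import log

def prime_count(limit):
    """Count the primes below `limit` with the sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"                 # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # cross out every multiple of p starting at p*p
            sieve[p * p :: p] = bytearray(len(range(p * p, limit, p)))
    return sum(sieve)

n = 1_000_000
print(prime_count(n))      # 78498 primes below one million
print(round(n / log(n)))   # the crude approximation, roughly 8% low
```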
The Riemann-Zeta function is a complex function which can be defined by ζ(s) = (1 − 2^(1−s))^(−1) ∑_{n=1}^∞ (−1)^(n+1)/n^s when the real part of the complex number s is positive (and s ≠ 1). The Riemann Hypothesis states that if s is a complex root of ζ(s) and 0 < Re(s) < 1, then Re(s) = 1/2. It is well known that there are infinitely many roots of ζ(s) that have Re(s) = 1/2. And just like the Collatz Conjecture, the Riemann Hypothesis has been verified by high-speed computers - Re(s) = 1/2 holds for the first 10^13 nontrivial roots s of ζ(s). But it is still unknown whether there exists a root s of ζ(s) such that 0 < Re(s) < 1 and Re(s) ≠ 1/2. And just like the Collatz
Conjecture, one can give a heuristic probabilistic argument that the Riemann Hypothesis is true, as follows:
It is well known that the Riemann Hypothesis follows from the assertion that for large N, M(N) = μ(1) + μ(2) + ⋯ + μ(N) is bounded by C_ε N^(1/2 + ε) for every ε > 0 and some constant C_ε, where μ is the Möbius function defined on the positive integers, in which μ(n) = −1 if n is the product of an odd number of distinct primes, μ(n) = 1 if n is the product of an even number of distinct primes, and μ(n) = 0 otherwise (i.e., if n is divisible by the square of a prime). If we were to assume that M(N) is distributed as a random walk, which is certainly plausible since there is no apparent reason why it should not be distributed as a random walk, then by probability theory, M(N) is bounded for large N by C_ε N^(1/2 + ε) with probability one; therefore, it is very likely that the Riemann Hypothesis is true. We shall now explain why the Riemann Hypothesis is unprovable, just like the Collatz Conjecture:
Explanation: The Riemann Hypothesis is equivalent to the assertion that for each T > 0, the number of real roots t of ζ(1/2 + it), where 0 < t < T, is equal to the number of roots of ζ(s) in the critical strip {s : 0 < Re(s) < 1, 0 < Im(s) < T}. It is well known that there exists a continuous real function Z(t) (called the Riemann-Siegel Z-function) such that |Z(t)| = |ζ(1/2 + it)|, so the real roots of ζ(1/2 + it) are the same as the real roots of Z(t). (The formula for Z(t) is Z(t) = e^(iθ(t)) ζ(1/2 + it), where θ(t) = arg Γ(1/4 + it/2) − (t ln π)/2.) Then because the formula for the real roots of Z(t) cannot be reduced to a formula that is simpler than the equation Z(t) = 0, the only way to determine the number of real roots of ζ(1/2 + it) in which 0 < t < T is to count the changes in sign of the real function Z(t), where 0 < t < T.
So in order to prove that the number of real roots t of ζ(1/2 + it), where 0 < t < T, is equal to the number of roots of ζ(s) in the critical strip, which can be computed via a theorem known as the Argument Principle without counting the changes in sign of Z(t), where 0 < t < T [26, 30, 31], it is necessary to count the changes in sign of Z(t), where 0 < t < T. (Otherwise, it would be possible to determine the number of real roots of ζ(1/2 + it), where 0 < t < T, without counting the changes in sign of Z(t) by computing the number of roots of ζ(s) in the critical strip via the Argument Principle.) As T becomes arbitrarily large, the time that it takes to count the changes in sign of Z(t), where 0 < t < T, approaches infinity for the following reasons: (1) There are infinitely many changes in sign of Z(t). (2) The time that it takes to evaluate the sign of Z(t) approaches infinity as t → ∞. Hence, an infinite amount of time is required to prove that for each T > 0, the number of real roots of ζ(1/2 + it), where 0 < t < T, is equal to the number of roots of ζ(s) in the critical strip (which is equivalent to proving the Riemann Hypothesis), so the Riemann Hypothesis is unprovable. ∎
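The random-walk heuristic above for M(N) = μ(1) + ⋯ + μ(N) is easy to probe numerically; a small sketch (the sieve and the scale N = 10^4 are my own choices):

```python
def mobius_sieve(limit):
    """mu[n] for 0 <= n <= limit (mu[0] is unused)."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]            # one more distinct prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0                 # squared factor: mu vanishes
    return mu

mu = mobius_sieve(10_000)
mertens, worst = 0, 0.0
for n in range(1, 10_001):
    mertens += mu[n]
    if n > 1:
        worst = max(worst, abs(mertens) / n ** 0.5)
print(worst)  # |M(N)| stays below N^(1/2) throughout this range
```

This is only evidence, not proof: |M(N)| < √N (the old Mertens conjecture) is known to fail for some astronomically large N, even though the weaker N^(1/2 + ε) bound would still suffice for the Riemann Hypothesis.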
Chaitin’s incompleteness theorem implies that mathematics is filled with facts which are both true and unprovable, since it states that the bits of Ω completely determine whether any given mathematics problem is solvable and only a finite number of bits of Ω are even knowable. And we have shown that there is a very good chance that both the Collatz Conjecture and the Riemann Hypothesis are examples of such facts. Of course, we can never formally prove that either one of these conjectures is both true and unprovable, for obvious reasons. The best we can do is prove that they are unprovable and provide computational evidence and heuristic probabilistic reasoning to explain why these two conjectures are most likely true, as we have done. And of course, it is conceivable that one could find a counter-example to the Collatz Conjecture by finding a number for which the Collatz algorithm gets stuck in a nontrivial cycle or a counter-example to the Riemann Hypothesis by finding a complex root, s, of ζ(s) for which 0 < Re(s) < 1 and Re(s) ≠ 1/2. But so far, no one has presented any such counter-examples.
The theorems that the Collatz Conjecture and the Riemann Hypothesis are unprovable illustrate a point which Chaitin has been making for years, that mathematics is not so much different from empirical sciences like physics [8, 14]. For instance, scientists universally accept the law of gravity to be true based on experimental evidence, but such a law is by no means absolutely certain - even though the law of gravity has been observed to hold in the past, it is not inconceivable that the law of gravity may cease to hold in the future. So too, in mathematics there are conjectures like the Collatz Conjecture and the Riemann Hypothesis which are strongly supported by experimental evidence but can never be proven true with absolute certainty.
6 Computational Irreducibility
Up until the last decade of the twentieth century, the most famous unsolved problem in all of mathematics was to prove the following conjecture:
Fermat’s Last Theorem (FLT) - When n > 2, the equation x^n + y^n = z^n has no nontrivial integer solutions.
After reading the explanations in the previous section, a skeptic asked the author what the difference is between the previous argument that the Collatz Conjecture is unprovable and the following argument that Fermat’s Last Theorem is unprovable (which cannot possibly be valid, since Fermat’s Last Theorem was proven by Wiles and Taylor in the last decade of the twentieth century):
Invalid Proof that FLT is unprovable: Suppose that we have a computer program which computes $x^n + y^n$ and $z^n$ for each $(x, y, z, n)$ with $n > 2$ until it finds a nontrivial $(x, y, z, n)$ such that $x^n + y^n = z^n$ and then halts. Obviously, Fermat’s Last Theorem is equivalent to the assertion that such a computer program can never halt. In order to be certain that such a computer program will never halt, it is necessary to compute $x^n + y^n$ and $z^n$ for each $(x, y, z, n)$ to determine that $x^n + y^n \neq z^n$ for each nontrivial $(x, y, z, n)$. Since this would take an infinite amount of time, Fermat’s Last Theorem is unprovable. ∎
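For concreteness, the hypothetical search program in this argument can be sketched as follows. The sketch is bounded by a cutoff parameter so that it terminates; the program described in the proof runs the same search without any bound and halts only upon finding a counter-example.

```python
def flt_search(bound):
    """Search for a nontrivial counterexample to Fermat's Last Theorem,
    i.e., x**n + y**n == z**n with n > 2, checking all tuples with
    entries up to `bound`. The (hypothetical) unbounded version of this
    loop halts if and only if FLT is false."""
    for n in range(3, bound + 1):
        for x in range(1, bound + 1):
            for y in range(x, bound + 1):  # x <= y, by symmetry
                for z in range(1, bound + 1):
                    if x**n + y**n == z**n:
                        return (x, y, z, n)
    return None  # no counterexample found in this bounded region
```

By Wiles and Taylor’s theorem, the bounded search returns `None` for every cutoff; the point of the text is that this conclusion does not require actually evaluating both sides for every tuple.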
This proof is invalid, because the assertion that “it is necessary to compute $x^n + y^n$ and $z^n$ for each $(x, y, z, n)$ to determine that $x^n + y^n \neq z^n$ for each nontrivial $(x, y, z, n)$” is false. In order to determine that an equation is false, it is not necessary to compute both sides of the equation - for instance, it is possible to know that the equation $3x = 3y + 1$ has no integer solutions without evaluating $3x$ and $3y + 1$ for every pair $(x, y)$, since one can see that if there were any integer solutions, the left-hand-side of the equation would be divisible by three but the right-hand-side would not be divisible by three.
Question - So why can’t we apply this same reasoning to show that the proof that the Collatz Conjecture is unprovable is invalid? Just as it is not necessary to compute $x^n + y^n$ and $z^n$ in order to determine that $x^n + y^n \neq z^n$, is it not possible that one can determine that the Collatz algorithm will converge to one without knowing what the algorithm does at each iteration?
Answer - Because what the Collatz algorithm does at each iteration is precisely what determines whether or not the Collatz sequence converges to one, it is necessary to know what the Collatz algorithm does at each iteration in order to determine that the Collatz sequence converges to one. By contrast, because the exact values of $x^n + y^n$ and $z^n$ are not relevant to knowing that $x^n + y^n \neq z^n$ for each nontrivial $(x, y, z, n)$, it is not necessary to compute each $x^n + y^n$ and $z^n$ in order to determine that $x^n + y^n \neq z^n$ for each nontrivial $(x, y, z, n)$.
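The iteration-by-iteration behavior referred to here can be sketched as follows, assuming the standard formulation of the Collatz algorithm (halve an even number, send an odd number $n$ to $3n + 1$):

```python
def collatz_trajectory(n):
    """Run the Collatz algorithm starting at n and return the whole
    trajectory down to 1. Whether the sequence reaches 1 depends on
    what happens at every single step -- there is no known shortcut."""
    trajectory = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trajectory.append(n)
    return trajectory
```

For example, `collatz_trajectory(6)` produces the sequence 6, 3, 10, 5, 16, 8, 4, 2, 1.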
Exercise - You are given a deck of $n$ cards labeled $1, 2, \ldots, n$. You shuffle the deck. Then you perform the following “reverse-card-shuffling” procedure: Look at the top card, labeled $k$. If $k = 1$, then stop. Otherwise, reverse the order of the first $k$ cards in the deck. Then look at the top card again and repeat the same procedure. For example, if $n = 4$ and the deck were in the order $(3, 1, 4, 2)$ (where $3$ is the top card), then you would obtain $(3,1,4,2) \rightarrow (4,1,3,2) \rightarrow (2,3,1,4) \rightarrow (3,2,1,4) \rightarrow (1,2,3,4)$. Now, we present two problems:
Prove that such a procedure will always halt for any $n$ and any shuffling of the $n$ cards.
Find a closed formula for the maximum number of iterations that it may take for such a procedure to halt given the number of cards $n$ in the deck, or prove that no such formula exists. (The maximum number of iterations for $n = 1, 2, \ldots, 16$ are 0, 1, 2, 4, 7, 10, 16, 22, 30, 38, 51, 65, 80, 101, 113, 139.)
It is easy to use the principle of mathematical induction to solve the first problem. As for the second problem, it turns out that there is no closed formula; in other words, in order to find the maximum number of iterations that it may take for such a procedure to halt given the number of cards $n$ in the deck, it is necessary to perform the reverse-card-shuffling procedure on every possible permutation of the $n$ cards. This property of the Reverse-Card-Shuffling Problem, in which there is no way to determine the outcome of the reverse-card-shuffling procedure without actually performing the procedure itself, is called computational irreducibility. Notice that the notion of computational irreducibility also applies to the Collatz Conjecture and the Riemann Hypothesis in that an infinite number of irreducible computations are necessary to prove these two conjectures.
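The procedure and the brute-force computation of the maximum number of iterations can be sketched as follows; the exhaustive loop over all $n!$ permutations is exactly the irreducible computation described above.

```python
from itertools import permutations

def reverse_card_shuffle_steps(deck):
    """Run the reverse-card-shuffling procedure on a deck (deck[0] is
    the top card) and return the number of iterations until the top
    card is 1. Halting is guaranteed by the first problem above."""
    deck = list(deck)
    steps = 0
    while deck[0] != 1:
        k = deck[0]
        deck[:k] = reversed(deck[:k])  # reverse the order of the first k cards
        steps += 1
    return steps

def max_steps(n):
    """Brute-force the maximum number of iterations over all n!
    shufflings of the deck -- the text argues this exhaustive search
    cannot be replaced by a closed formula."""
    return max(reverse_card_shuffle_steps(p)
               for p in permutations(range(1, n + 1)))
```

Running `max_steps(n)` for $n = 1, \ldots, 7$ reproduces the first terms 0, 1, 2, 4, 7, 10, 16 of the sequence quoted above.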
Stephen Wolfram, who coined the phrase “computational irreducibility”, argues in his famous book, A New Kind of Science, that our universe is computationally irreducible, i.e., the universe is so complex that there is no general method for determining the outcome of a natural event without either observing the event itself or simulating the event on a computer. The dream of science is to be able to make accurate predictions about our natural world; in a computationally irreducible universe, such a dream is only possible for very simple phenomena or for events which can be accurately simulated on a computer.
7 Open Problems in Mathematics
In the present year of 2006, the most famous unsolved number theory problem is to prove the following:
Goldbach’s Conjecture - Every even number greater than two is the sum of two prime numbers.
Just like the Collatz Conjecture and the Riemann Hypothesis, there are heuristic probabilistic arguments which support Goldbach’s Conjecture, and Goldbach’s Conjecture has been verified by computers for a large number of even numbers. The closest anyone has come to proving Goldbach’s Conjecture is a proof of the following:
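The kind of computer verification mentioned here can be sketched as follows. This is a minimal trial-division sketch for small even numbers, not the optimized sieving used in the actual large-scale verifications:

```python
def is_prime(n):
    """Trial division -- adequate for small n."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def goldbach_pair(n):
    """For an even n > 2, return primes (p, q) with p + q == n,
    or None if no such pair exists (no failure has ever been found)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None
```

For instance, `goldbach_pair(28)` returns `(5, 23)`, and the search succeeds for every even number it has ever been tried on.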
Chen’s Theorem - Every sufficiently large even integer is either the sum of two prime numbers or the sum of a prime number and the product of two prime numbers.
Although the author cannot prove it, he believes the following:
Conjecture 1 - Goldbach’s Conjecture is unprovable.
Another famous conjecture which is usually mentioned along with Goldbach’s Conjecture in mathematics literature is the following:
The Twin Primes Conjecture - There are infinitely many prime numbers $p$ for which $p + 2$ is also prime.
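As with Goldbach’s Conjecture, the conjecture is easy to explore computationally; the following sketch lists the twin prime pairs below a limit (simple trial division, sufficient for illustration):

```python
def is_prime(n):
    """Trial division -- adequate for small n."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def twin_primes_up_to(limit):
    """List the pairs (p, p + 2) with both entries prime and p <= limit.
    The Twin Primes Conjecture asserts that this list grows without
    bound as the limit does."""
    return [(p, p + 2) for p in range(2, limit + 1)
            if is_prime(p) and is_prime(p + 2)]
```

For example, the twin prime pairs with $p \leq 50$ are $(3,5)$, $(5,7)$, $(11,13)$, $(17,19)$, $(29,31)$, and $(41,43)$.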
Just as with Goldbach’s Conjecture, the author cannot prove it, but he believes the following:
Conjecture 2 - The Twin Primes Conjecture is undecidable, i.e., it is impossible to know whether the Twin Primes Conjecture is true or false.
The SUBSET-SUM problem, the Collatz Conjecture, and the Riemann Hypothesis demonstrate to us that as finite human beings, we are all severely limited in our ability to solve abstract problems and to understand our universe. The author hopes that this observation will help us all to better appreciate the fact that there are still so many things which G-d allows us to understand.
I thank G-d, my parents, my wife, and my children for their support.
-  S. Aaronson, “NP-complete Problems and Physical Reality”, SIGACT News Complexity Theory Column, March 2005.
-  E. Belaga, “Reflecting on the Mystery: Outline of a scenario”, U. Strasbourg preprint, 10 pages, 1998.
-  S. Ben-David, B. Chor, O. Goldreich, and M. Luby, “On the Theory of Average Case Complexity”, Journal of Computer and System Sciences, Vol. 44, No. 2, April 1992, pp. 193-219.
-  C. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. “Strengths and weaknesses of quantum computing”, SIAM J. Comput., 26(5):1510-1523, 1997.
-  P.B. Bovet and P. Crescenzi, Introduction to the Theory of Complexity, Prentice Hall, 1994.
-  G.J. Chaitin, Algorithmic Information Theory, revised third printing, Cambridge University Press, 1990.
-  G.J. Chaitin, “Thoughts on the Riemann Hypothesis”, 2003. arXiv: math/0306042.
-  G.J. Chaitin, Meta Math!, Pantheon, October 2005.
-  J.R. Chen,“On the Representation of a Large Even Integer as the Sum of a Prime and the Product of at Most Two Primes”, Kexue Tongbao 17, 385-386, 1966.
-  T.H. Cormen, C.E. Leiserson, and R.L. Rivest, Introduction to Algorithms, McGraw-Hill, 1990.
-  M.J. Coster, A. Joux, B.A. LaMacchia, A.M. Odlyzko, C.P. Schnorr, and J. Stern, “Improved low-density subset sum algorithms”, Computational Complexity, 2 (1992), pp. 111-128.
-  R.E. Crandall, “On the $3x+1$ problem”, Math. Comp., 32 (1978) pp. 1281-1292.
-  J. Derbyshire, Prime Obsession, Joseph Henry Press, 2003.
-  K. Dombrowski, “Rational Numbers Distribution and Resonance”, Progress in Physics, vol. 1, April 2005, pp. 65-67.
-  C.A. Feinstein, “The Collatz Conjecture is Unprovable”, 2005. arXiv: math/0312309.
-  I.J. Good and R.F. Churchhouse, “The Riemann Hypothesis and Pseudorandom Features of the Möbius Sequence”, Math. Comp. 22 (1968), 857-861.
-  D.A. Grier, When Computers Were Human, Princeton University Press, 2005.
-  L.K. Grover, “A fast quantum mechanical algorithm for database search”, Proceedings, 28th Annual ACM Symposium on the Theory of Computing, pp. 212-219, May 1996.
-  R.K. Guy, Unsolved Problems in Number Theory, 3rd ed. New York: Springer-Verlag, 2004.
-  E. Horowitz and S. Sahni, “Computing Partitions with Applications to the Knapsack Problem”, Journal of the ACM, vol. 21, no. 2, April 1974, pp. 277-292.
-  R. Impagliazzo and A. Wigderson, “P = BPP if E requires exponential circuits: Derandomizing the XOR Lemma.” Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pages 220-229, 1997.
-  J.C. Lagarias, “The $3x+1$ problem and its generalizations”, Amer. Math. Monthly 92 (1985) 3-23. [Reprinted in: Conference on Organic Mathematics, Canadian Math. Society Conference Proceedings vol 20, 1997, pp. 305-331]. http://www.cecm.sfu.ca/organics/papers.
-  J.C. Lagarias, “$3x+1$ Problem Annotated Bibliography”, 2004. arXiv: math/0309224.
-  L.A. Levin. “The tale of one-way functions”. Problems of Information Transmission, 39(1):92-103, 2003.
-  A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC Press, 1996.
-  A.M. Odlyzko, “Analytic computations in number theory”, Mathematics of Computation 1943-1993: A Half-Century of Computational Mathematics, W. Gautschi (ed.), Amer. Math. Soc., Proc. Symp. Appl. Math. #48 (1994), pp. 451-463.
-  A.M. Odlyzko, “The rise and fall of knapsack cryptosystems”, Cryptology and Computational Number Theory, C. Pomerance (ed.), Am. Math. Soc., Proc. Symp. Appl. Math., pp. 75-88 #42 (1990).
-  A.M. Odlyzko, “Supercomputers and the Riemann zeta function”, Supercomputing 89: Supercomputing Structures & Computations, Proc. 4-th Intern. Conf. on Supercomputing, L.P. Kartashev and S.I. Kartashev (eds.), International Supercomputing Institute 1989, 348-352.
-  C.H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, NJ, 1982.
-  G.R. Pugh, “The Riemann-Siegel Formula and Large Scale Computations of the Riemann Zeta Function”, Master’s Thesis, University of British Columbia, 1998.
-  M. Rao and H. Stetkaer, Complex Analysis, World Scientific, 1991.
-  E. Roosendaal, (2003+), “On the $3x+1$ problem”. http://personal.computrain.nl/eric/wondrous/.
-  M.W. Shackleford, “Name Distributions in the Social Security Area, August 1997”, Actuarial Note 139, Social Security Administration, May 1998. http://www.ssa.gov/OACT/babynames/.
-  P. Shor, “Algorithms for quantum computation: discrete logarithms and factoring”, Proceedings 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20-22 November, 1994, IEEE Comput. Soc. Press, 124-134.
-  N.J.A. Sloane, On-Line Encyclopedia of Integer Sequences, No. A000375, 2005.
-  E.W. Weisstein. “Fermat’s Last Theorem.” From MathWorld–A Wolfram Web Resource, 2005. http://www.mathworld.wolfram.com.
-  E.W. Weisstein, “Riemann Hypothesis.” From MathWorld–A Wolfram Web Resource, 2005. http://www.mathworld.wolfram.com.
-  H.S. Wilf, “Backtrack: An Expected Time Graph Coloring Algorithm”, Inform. Proc. Letters, vol. 18, 1984, pp. 119-122.
-  G.J. Woeginger, “Exact Algorithms for NP-Hard Problems”, Lecture Notes in Computer Science, Springer-Verlag Heidelberg, Volume 2570, pp. 185-207, 2003.
-  S. Wolfram, “Undecidability and Intractability in Theoretical Physics”, Physical Review Letters, 54, pp. 735-738, 1985.
-  S. Wolfram, A New Kind of Science, Wolfram Media, Champaign, IL, 2002.
-  R. Zach, “Hilbert’s Program Then and Now”, 2005. arXiv: math/0508572.