Verification of PCP-Related Computational Reductions in Coq

11/19/2017 · by Yannick Forster, et al. · Saarland University

We formally verify several computational reductions concerning the Post correspondence problem (PCP) using the proof assistant Coq. Our verifications include a reduction of a string rewriting problem generalising the halting problem for Turing machines to PCP, and reductions of PCP to the intersection problem and the palindrome problem for context-free grammars. Interestingly, rigorous correctness proofs for some of the reductions are missing in the literature.


1 Introduction

A problem Q can be shown undecidable by giving an undecidable problem P and a computable function reducing P to Q. There are well-known reductions of the halting problem for Turing machines (TM) to the Post correspondence problem (PCP), and of PCP to the intersection problem for context-free grammars (CFI). We study these reductions in the formal setting of Coq’s type theory [16] with the goal of providing elegant correctness proofs.

Given that the reduction of TM to PCP appears in textbooks [9, 3, 15] and in the standard curriculum for theoretical computer science, one would expect that rigorous correctness proofs can be found in the literature. To our surprise, this is not the case. What is missing is the formulation of the inductive invariants that enable the necessary inductive proofs to go through. To draw an analogy with imperative programs: the correctness arguments in the literature argue about the correctness of programs with loops without stating and verifying loop invariants.

By inductive invariants we mean statements that are shown inductively and that generalise the obvious correctness statements one starts with. Every substantial formal correctness proof will involve the construction of suitable inductive invariants. Often it takes ingenuity to generalise a given correctness claim to one or several inductive invariants that can be shown inductively.

It took some effort to come up with the missing inductive invariants for the reductions leading from TM to PCP. Once we had the inductive invariants, we had rigorous and transparent proofs explaining the correctness of the reductions in a more satisfactory way than the correctness arguments we found in the literature.

Reduction of problems is transitive: given a reduction P ⪯ Q and a reduction Q ⪯ R, we have a reduction P ⪯ R. This way, complex reductions can be factorised into simpler reductions. Following ideas in the literature, we will establish the reduction chain

TM ⪯ SRH ⪯ SR ⪯ MPCP ⪯ PCP

where TM is the halting problem of single-tape Turing machines, SRH is a generalisation of the halting problem for Turing machines, SR is the string rewriting problem, and MPCP is a modified version of PCP fixing a first card. The most interesting steps are SR ⪯ MPCP and MPCP ⪯ PCP.

We also consider the intersection problem (CFI) and the palindrome problem (CFP) for a class of linear context-free grammars we call Post grammars. CFP asks whether a Post grammar generates a palindrome, and CFI asks whether for two Post grammars there exists a string generated by both grammars. We will verify reductions PCP ⪯ CFP and PCP ⪯ CFI, thus showing that CFP and CFI are both undecidable.

Coq’s type theory provides an ideal setting for the formalisation and verification of the reductions mentioned. The fact that all functions in Coq are total and computable makes the notion of computable reductions straightforward.

The correctness arguments coming with our approach are inherently constructive and are verified in the underlying constructive type theory. The main inductive data types we use are numbers and lists, which conveniently provide for the representation of strings, rewriting systems, Post correspondence problems, and Post grammars.

The paper is accompanied by a Coq development covering all results of this paper. The definitions and statements in the paper are hyperlinked with their formalisations in the HTML presentation of the Coq development at http://www.ps.uni-saarland.de/extras/PCP.

Organisation

We start with the necessary formal definitions covering all reductions we consider in Section 2. We then present each of the six reductions and conclude with a discussion of the design choices underlying our formalisations. Sections 3 to 8 on the reductions are independent and can be read in any order.

We only give definitions for the problems and do not discuss the underlying intuitions, because all problems are covered in a typical introduction to theoretical computer science and the interested reader can refer to various textbooks providing good intuitions, e.g. [9, 15, 3].

Contribution

Our reduction functions follow the ideas in the literature. The main contributions of the paper are the formal correctness proofs for the reduction functions. Here some ingenuity and considerable elaboration of the informal arguments in the literature were needed. As one would expect, the formal proofs heavily rely on inductive techniques. In contrast, the informal proof sketches in the literature do not introduce the necessary inductions (in fact, they don’t even mention inductive proofs). To the best of our knowledge, the present paper is the first paper providing formal correctness proofs for basic reductions to and from PCP.

2 Definitions


Formalising problems and computable reductions in constructive type theory is straightforward. A problem consists of a type X and a unary predicate p on X, and a reduction of a problem (X, p) to a problem (Y, q) is a function f : X → Y such that ∀x. p x ↔ q (f x). Note that the usual requirement that f is total and computable can be dropped since it is satisfied by every function in a constructive type theory. We write p ⪯ q and say that p reduces to q if a reduction of p to q exists.

[reduces_transitive] If p ⪯ q and q ⪯ r, then p ⪯ r.
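The following minimal Coq sketch illustrates problems, reductions, and the transitivity fact. It assumes the simplified presentation above; the identifiers are illustrative and need not coincide with those of the accompanying development.

    (* A problem is a predicate; a reduction is a function whose image preserves
       and reflects the predicates. *)
    Definition reduction {X Y : Type} (f : X -> Y) (p : X -> Prop) (q : Y -> Prop) : Prop :=
      forall x, p x <-> q (f x).

    Definition reduces {X Y : Type} (p : X -> Prop) (q : Y -> Prop) : Prop :=
      exists f : X -> Y, reduction f p q.

    (* Transitivity: compose the two reduction functions. *)
    Lemma reduces_transitive {X Y Z : Type} (p : X -> Prop) (q : Y -> Prop) (r : Z -> Prop) :
      reduces p q -> reduces q r -> reduces p r.
    Proof.
      intros [f Hf] [g Hg]. exists (fun x => g (f x)). intros x. split.
      - intros H. apply (proj1 (Hg (f x))), (proj1 (Hf x)), H.
      - intros H. apply (proj2 (Hf x)), (proj2 (Hg (f x))), H.
    Qed.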

The basic inductive data structures we use are numbers (ℕ) and lists. We write x ++ y for the concatenation of two lists, rev x for the reversal of a list, [f a | a ∈ A] for a map over a list, and [f a | a ∈ A, p a] for a map and filter over a list. Moreover, we write a ∈ A if a is a member of A, and A ⊆ B if every member of A is a member of B.

A string is a list of symbols, and a symbol is a number. The letters x, y, z, u, and v range over strings, and the letters a, b, c range over symbols. We write xy for x ++ y and xa for x ++ [a]. We use ε to denote the empty string. A palindrome is a string x such that x = rev x.

rev (xy) = (rev y)(rev x) and rev (rev x) = x.

[list_prefix_inv] If a ∉ x, a ∉ u, and x a y = u a v, then x = u and y = v.

Proof

By induction on x. ∎

A card or a rule x/y is a pair of two strings. When we call x/y a card, we see x as the upper and y as the lower string of the card. When we call x/y a rule, we see x as the left and y as the right side of the rule.

The letters A, B, C, P, and R range over lists of cards or rules.

2.1 Post Correspondence Problem

A stack is a list of cards. The [tau1] upper trace τ1 A and the [tau2] lower trace τ2 A of a stack A are strings defined as follows:

τ1 [] := ε        τ1 (x/y :: A) := x (τ1 A)
τ2 [] := ε        τ2 (x/y :: A) := y (τ2 A)

Note that τ1 A is the concatenation of the upper strings of the cards in A, and that τ2 A is the concatenation of the lower strings of the cards in A. We say that a stack A matches if τ1 A = τ2 A, and a match is a matching stack. An example for a match is the list [a/ab, ba/a], which satisfies τ1 [a/ab, ba/a] = aba = τ2 [a/ab, ba/a].

We can now define the predicate PCP for the Post correspondence problem:

PCP P := ∃A ⊆ P. A ≠ [] ∧ τ1 A = τ2 A

Note that PCP P holds iff there exists a nonempty match A ⊆ P. We then say that A is a solution of P. For instance, P := [ba/a, a/ab] is solved by the match [a/ab, ba/a].

While it is essential that A is a list providing for order and duplicates, P may be thought of as a finite set of cards.

We now define the predicate MPCP for the modified Post correspondence problem:

MPCP (x/y, P) := ∃A ⊆ x/y :: P. τ1 (x/y :: A) = τ2 (x/y :: A)

Informally, MPCP (x/y, P) is like PCP (x/y :: P) with the additional constraint that the solution for x/y :: P starts with the first card x/y.

Note that, contrary to most textbooks, we leave open whether x/y is an element of P and instead choose A as a subset of x/y :: P. While this might at first seem more complicated, it actually eases the formalisation. Including x/y into P would require MPCP to be a predicate on arguments packaging P together with a proof that x/y ∈ P, i.e. dependent pairs containing a proof.
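The following minimal Coq sketch shows one way to write down the traces and the PCP and MPCP predicates, assuming the simplified representation above (symbols as numbers, cards as pairs of strings); the identifiers are illustrative.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition card := (string * string)%type.
    Definition stack := list card.

    (* Upper and lower trace: concatenation of the upper/lower strings. *)
    Fixpoint tau1 (A : stack) : string :=
      match A with
      | [] => []
      | (x, _) :: A' => x ++ tau1 A'
      end.

    Fixpoint tau2 (A : stack) : string :=
      match A with
      | [] => []
      | (_, y) :: A' => y ++ tau2 A'
      end.

    (* PCP P: some nonempty stack of cards taken from P matches. *)
    Definition PCP (P : stack) : Prop :=
      exists A, incl A P /\ A <> [] /\ tau1 A = tau2 A.

    (* MPCP (c, P): a match over c :: P that starts with the card c. *)
    Definition MPCP (c : card) (P : stack) : Prop :=
      exists A, incl A (c :: P) /\ tau1 (c :: A) = tau2 (c :: A).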

2.2 String Rewriting

Given a list R of rules, we define string rewriting with two inductive predicates [rew] ≻_R and [rewt] ≻^*_R:

x/y ∈ R  ⟹  u x v ≻_R u y v        z ≻^*_R z        x ≻_R y ∧ y ≻^*_R z  ⟹  x ≻^*_R z

Note that ≻^*_R is the reflexive transitive closure of ≻_R, and that x ≻_R y says that y can be obtained from x with a single rewriting step using a rule in R.
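A minimal Coq sketch of the two rewriting predicates, assuming strings are lists of numeric symbols and rules are pairs of strings; the constructor names are illustrative.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition rule := (string * string)%type.
    Definition srs := list rule.

    (* A rule x/y of R rewrites u x v to u y v in one step. *)
    Inductive rew (R : srs) : string -> string -> Prop :=
    | rewB x y u v : In (x, y) R -> rew R (u ++ x ++ v) (u ++ y ++ v).

    (* rewt is the reflexive transitive closure of rew. *)
    Inductive rewt (R : srs) : string -> string -> Prop :=
    | rewtRefl z : rewt R z z
    | rewtStep x y z : rew R x y -> rewt R y z -> rewt R x z.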

The following hold:

  1. [PreOrder_rewt] If x ≻^*_R y and y ≻^*_R z, then x ≻^*_R z.
  2. [rewt_app] If x ≻^*_R y, then u x v ≻^*_R u y v.
  3. [rewt_subset] If x ≻^*_R y and R ⊆ S, then x ≻^*_S y.

Proof

By induction on x ≻^*_R y. ∎

Note that the induction lemma for string rewriting can be stated with the quantification placed on the outside. This makes it stronger than the lemma Coq infers. The quantification is crucial for many proofs that do induction on derivations x ≻^*_R y, and we use the lemma throughout the paper without explicitly mentioning it.

We define the predicates SR for the string rewriting problem and SRH for the generalised halting problem as follows:

SR (R, x, y) := x ≻^*_R y
SRH (R, x, a) := ∃y. x ≻^*_R y ∧ a ∈ y

We call the second problem the generalised halting problem, because it covers the halting problem for deterministic single-tape Turing machines, but also the halting problems for nondeterministic machines or for more exotic machines that, e.g., have a one-way infinite tape or can read multiple symbols at a time.
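A minimal Coq sketch of SR and SRH on top of the rewriting predicates (repeated here so the sketch is self-contained); the identifiers are illustrative.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition rule := (string * string)%type.
    Definition srs := list rule.

    Inductive rew (R : srs) : string -> string -> Prop :=
    | rewB x y u v : In (x, y) R -> rew R (u ++ x ++ v) (u ++ y ++ v).

    Inductive rewt (R : srs) : string -> string -> Prop :=
    | rewtRefl z : rewt R z z
    | rewtStep x y z : rew R x y -> rewt R y z -> rewt R x z.

    (* SR: does x rewrite to y with the rules R? *)
    Definition SR (i : srs * string * string) : Prop :=
      let '(R, x, y) := i in rewt R x y.

    (* SRH: can x reach some string containing the symbol a? *)
    Definition SRH (i : srs * string * symbol) : Prop :=
      let '(R, x, a) := i in exists y, rewt R x y /\ In a y.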

We postpone the definition of Turing machines and of the halting problem TM to section 8.

2.3 Post Grammars

A Post grammar (R, a) is a pair of a list R of rules and a symbol a. Informally, a Post grammar (R, a) is a special case of a context-free grammar with a single nonterminal S and two rules S → x S y and S → x a y for every rule x/y ∈ R, where a does not occur in R. We define the [sigma] projection σ_a A of a list A of rules with a symbol a as follows:

σ_a [] := a        σ_a (x/y :: A) := x (σ_a A) y

We say that a Post grammar (R, a) generates a string x if there exists a nonempty list A ⊆ R such that x = σ_a A. We then say that A is a derivation of x in (R, a).
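A minimal Coq sketch of the projection and the generation predicate, assuming symbols are natural numbers; the identifiers are illustrative.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition rule := (string * string)%type.

    (* sigma a R = x1 ++ ... ++ xn ++ [a] ++ yn ++ ... ++ y1 for R = [x1/y1; ...; xn/yn]. *)
    Fixpoint sigma (a : symbol) (R : list rule) : string :=
      match R with
      | [] => [a]
      | (x, y) :: R' => x ++ sigma a R' ++ y
      end.

    (* (R, a) generates z if z is the projection of a nonempty selection of rules of R. *)
    Definition generates (R : list rule) (a : symbol) (z : string) : Prop :=
      exists A, incl A R /\ A <> [] /\ z = sigma a A.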

We can now define the predicates for the problems CFP and CFI:

CFP (R, a) := ∃x. (R, a) generates x ∧ x = rev x
CFI (R1, R2, a) := ∃x. (R1, a) generates x ∧ (R2, a) generates x

Informally, CFP (R, a) holds iff the grammar (R, a) generates a palindrome, and CFI (R1, R2, a) holds iff there exists a string that is generated by both grammars (R1, a) and (R2, a). Note that, as Post grammars are special cases of context-free grammars, the reductions of PCP to CFP and CFI can be trivially extended to reductions to the respective problems for context-free grammars. We prove this formally [reduce_grammars] in the accompanying Coq development.

2.4 Alphabets

For some proofs it will be convenient to fix a finite set of symbols. We represent such sets as lists and speak of alphabets. The letter Σ ranges over alphabets. We say that an alphabet Σ covers a string, card, or stack if Σ contains every symbol occurring in the string, card, or stack. We may write x ⊆ Σ to say that Σ covers x, since both x and Σ are lists of symbols.

2.5 Freshness

At several points we will need to pick fresh symbols from an alphabet. Because we model symbols as natural numbers, a very simple definition of freshness suffices. We define a function fresh yielding, for an alphabet Σ, a symbol fresh Σ as follows:
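One possible definition, sketched in Coq below under the assumption that symbols are natural numbers, takes the successor of the sum of all symbols of the alphabet; the concrete definition in the development may differ, but any choice validating the two properties below works.

    Require Import List Lia.

    Definition fresh (A : list nat) : nat :=
      S (fold_right Nat.add 0 A).

    (* Everything at least as large as fresh A is not a member of A. *)
    Lemma fresh_spec' A n : fresh A <= n -> ~ In n A.
    Proof.
      revert n. unfold fresh. induction A as [|a A IH]; cbn; intros n H.
      - tauto.
      - intros [E | HI].
        + lia.
        + apply (IH n); [lia | exact HI].
    Qed.

    (* In particular, fresh A itself is not a member of A. *)
    Corollary fresh_spec A : ~ In (fresh A) A.
    Proof.
      intros H. eapply fresh_spec'; [ | exact H]. lia.
    Qed.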

The function fresh has the following characteristic property:

Lemma 1 ()

[fresh_spec’] For all Σ and n, fresh Σ ≤ n → n ∉ Σ.

Proof

By induction on Σ, with n generalised. ∎

The property is most useful when exploited in the following way:

Corollary 1 ()

[fresh_spec] For all Σ, fresh Σ ∉ Σ.

An alternative approach is to formalise alphabets explicitly as finite types Σ. This has the advantage that arbitrarily many fresh symbols can be introduced simultaneously, using definitions that extend Σ with new constructors, and symbols stemming from Σ can easily be shown different from the fresh symbols by inversion. However, this means that strings over Σ have to be explicitly embedded pointwise when used as strings over the extended type, which complicates proofs.

In general, both approaches have benefits and tradeoffs. Whenever proofs rely heavily on inversion (as e.g. our proofs in Section 8), the alternative approach is favorable. If proofs need the construction of many strings, as most of our proofs do, modelling symbols as natural numbers shortens proofs.

3 SRH to SR


We show that SRH (the generalised halting problem) reduces to SR (string rewriting). We start with the definition of the reduction function. Let R, x, and a0 be given.

We fix an alphabet Σ covering R, x, and a0. We now add rules to R that allow y ≻^* [a0] whenever a0 ∈ y; we write R' for the extended rule set.
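One natural way to realise such rules, sketched below in Coq as an assumption rather than as the development's actual construction, is to add deletion rules around a0 for every symbol of the covering alphabet.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition rule := (string * string)%type.

    (* For every symbol a of the covering alphabet sig, add rules that delete a
       to the left and to the right of a0; with these rules any string that
       contains a0 rewrites to [a0]. *)
    Definition del_rules (sig : list symbol) (a0 : symbol) : list rule :=
      map (fun a => ([a; a0], [a0])) sig ++ map (fun a => ([a0; a], [a0])) sig.

    (* The SR instance produced from the SRH instance (R, x, a0). *)
    Definition reduce_SRH_SR (R : list rule) (x : string) (a0 : symbol)
      (sig : list symbol) : list rule * string * string :=
      (R ++ del_rules sig a0, x, [a0]).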

Lemma 2 ()

[x_rewt_a0] If a0 ∈ x, then x ≻^*_{R'} [a0].

Proof

For all u, u a0 ≻^*_{R'} [a0] and a0 :: u ≻^*_{R'} [a0] follow by induction on u. The claim now follows with Fact 2.2 (1,2). ∎

Lemma 3 ()

[equi] SRH (R, x, a0) ↔ x ≻^*_{R'} [a0].

Proof

Let y be given with x ≻^*_R y and a0 ∈ y. Then y ≻^*_{R'} [a0] by Lemma 2. Moreover, x ≻^*_{R'} y by Fact 2.2 (3). Thus x ≻^*_{R'} [a0] by Fact 2.2 (1).

Let x ≻^*_{R'} [a0]. By induction on x ≻^*_{R'} [a0] it follows that there exists y such that x ≻^*_R y and a0 ∈ y. ∎

Theorem 3.1 ()

[reduction] SRH reduces to SR.

Proof

Follows with Lemma 3. ∎

4 SR to MPCP


We show that SR (string rewriting) reduces to MPCP (the modified Post correspondence problem). We start with the definition of the reduction function.

Let R, x, and y be given. We fix an alphabet Σ covering R, x, and y. We also fix two fresh symbols and define:

The idea of the reduction is as follows: assume a derivation of y from x that uses two rules of R. Then, omitting possible further rules in R, the following stack, written suggestively, matches:

And, vice versa, every matching stack starting with the first card will yield a derivation of y from x.

We now go back to the general case and state the correctness lemma for the reduction function.

Lemma 4 ()

[SR_MPCP_cor] x ≻^*_R y if and only if there exists a stack A over the constructed cards such that the first card followed by A matches.

From this lemma we immediately obtain the reduction theorem (Theorem 4.1). The proof of the lemma consists of two translation lemmas: Lemma 5 and Lemma 6. The translation lemmas generalise the two directions of Lemma 4 such that they can be shown with canonical inductions.

Lemma 5 ()

[SR_MPCP] Let and . Then there exists such that .

Proof

By induction on . In the first case, and . In the second case, and . By induction hypothesis there is such that . Let and for . We define . Now . ∎

Lemma 6 ()

[MPCP_SR] Let , , and . Then .

Proof

By induction on with and generalised. We do all cases in detail:

  • The cases where or are contradictory.

  • Let . By assumption, . Then , and .

  • Let for . Because is not in and by assumption , . And by induction hypothesis.

  • Let . By assumption, . Then and we have . By induction hypothesis, this yields as needed.

  • Let for and assume . Then and . By induction hypothesis, this yields as needed.

Theorem 4.1 ()

[reduction] SR reduces to MPCP.

Proof

Follows with Lemma 4. ∎

The translation lemmas formulate what we call the inductive invariants of the reduction function. The challenge of proving the correctness of the reduction function is finding strong enough inductive invariants that can be verified with canonical inductions.

5 MPCP to PCP


We show that MPCP (modified PCP) reduces to PCP.

The idea of the reduction is that for a stack A and a first card x/y, the stack x/y :: A matches if and only if a corresponding stack over hashed cards matches.

The reduction function implements this idea by constructing a dedicated first and a dedicated last card and by inserting #-symbols into the MPCP cards:

Let the first card x/y and the stack P be given. We fix an alphabet Σ covering x/y and P. We also fix two fresh symbols # and $. We define two functions [hash_L] and [hash_R] inserting the symbol # before, respectively after, every symbol of a string x:
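A minimal Coq sketch of the two hash functions, assuming symbols are natural numbers and writing h for the fixed hash symbol #; the names hash_L and hash_R follow the text.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.

    (* hash_L inserts the hash symbol h before every symbol of x,
       hash_R inserts it after every symbol of x. *)
    Definition hash_L (h : symbol) (x : string) : string :=
      flat_map (fun a => [h; a]) x.

    Definition hash_R (h : symbol) (x : string) : string :=
      flat_map (fun a => [a; h]) x.

    (* For example, hash_L 0 [1; 2] = [0; 1; 0; 2] and hash_R 0 [1; 2] = [1; 0; 2; 0],
       so h :: hash_R h x = hash_L h x ++ [h]. *)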

We define:

We now state the correctness lemma for the reduction function.

Lemma 7 ()

[MPCP_PCP_cor] There exists a stack A such that x/y :: A matches if and only if there exists a nonempty stack B over the constructed cards such that B matches.

From this lemma we immediately obtain the desired reduction theorem (Theorem 5.1). The proof of the lemma consists of two translation lemmas (Lemmas 10 and 11) and a further auxiliary lemma (Lemma 8).

Lemma 8 ()

[match_start] Every nonempty match over the constructed cards starts with the dedicated first card.

Proof

Let be a nonempty match . Then cannot be the first card of since the upper string and lower string of  start with different symbols. For the same reason cannot be the first card of if and both and are nonempty.

Consider . Then cannot be the first card of since no card of  has an upper string starting with .

Consider . Then cannot be the first card of since no card of  has a lower string starting with . ∎

For the proofs of the translation lemmas we need a few facts about hash_L and hash_R.

Lemma 9

The following hold:

  1. [hash_swap] # (hash_R x) = (hash_L x) #.
  2. [hash_L_app] hash_L (x y) = (hash_L x) (hash_L y).
  3. [hash_R_app] hash_R (x y) = (hash_R x) (hash_R y).
  4. [hash_L_diff] .
  5. [hash_R_inv] .

Proof

By induction on x. ∎

Lemma 10 ()

[MPCP_PCP] Let and . Then there exists a stack such that .

Proof

By induction on with and generalised. The case for follows from Lemma 9 (1) by choosing .

For the other case, let . Then by assumption . And thus by induction hypothesis there exists such that . By Lemma 9 (2) and (3), .

If , then choosing works. Otherwise, works. ∎

Lemma 11 ()

[PCP_MPCP] Let such that and . Then there exists a stack such that .

Proof

By induction on . The cases and yield contradictions using Lemma 9 (4). For , choosing works by Lemma 9 (5).

The interesting case is for with . By assumption and Lemma 9 (2) and (3) we know that . Now by induction hypothesis, where all premises follow easily, there is with and thus works. ∎

Theorem 5.1 ()

[reduction] MPCP reduces to PCP.

Proof

Follows with Lemma 7. ∎

6 PCP to CFP


We show that PCP reduces to CFP (the palindrome problem for Post grammars).

Let a be a symbol.

Let a ∉ x and a ∉ y. Then x a (rev y) is a palindrome iff x = y.

Proof

Follows with Facts 2 and 2. ∎

There is an obvious connection between matching stacks and palindromes: a stack

A = [x1/y1, ..., xn/yn]

matches if and only if the string

x1 ... xn a (rev yn) ... (rev y1)

is a palindrome, provided the symbol a does not appear in the stack (follows with Facts 2 and 6). Moreover, strings of this form may be generated by a Post grammar having a rule x/(rev y) for every card x/y in the stack. The observations yield a reduction of PCP to CFP.

We formalise the observations with a function γ that maps every card x/y to the card x/(rev y), extended pointwise to stacks.
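The following minimal Coq sketch gives one possible definition of γ, assuming cards are pairs of strings over numeric symbols; reversing the lower string of every card is an assumption that is consistent with Lemmas 12 and 14.

    Require Import List.
    Import ListNotations.

    Definition symbol := nat.
    Definition string := list symbol.
    Definition card := (string * string)%type.

    (* gamma reverses the lower string of every card; since rev (rev y) = y,
       gamma is an involution on stacks. *)
    Definition gamma (A : list card) : list card :=
      map (fun '(x, y) => (x, rev y)) A.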

Lemma 12 ()

[sigma_gamma] σ_a (γ A) = (τ1 A) a (rev (τ2 A)).

Proof

By induction on A using Fact 2. ∎

Lemma 13 ()

[tau_eq_iff] Let A be a stack and a be a symbol not occurring in A. Then A is a match if and only if σ_a (γ A) is a palindrome.

Proof

Follows with Lemma 12 and Facts 6 and 2. ∎

Lemma 14 ()

[gamma_invol] γ (γ A) = A, and if A ⊆ B then γ A ⊆ γ B.

Proof

By induction on A using Fact 2. ∎

Theorem 6.1 ()

[PCP_CFP] PCP reduces to CFP.

Proof

Let P be a list of cards. We fix a symbol a that is not in P and show PCP P ↔ CFP (γ P, a).

Let A ⊆ P be a nonempty match. It suffices to show that γ A ⊆ γ P and σ_a (γ A) is a palindrome. The first claim follows with Lemma 14, and the second claim follows with Lemma 13.

Let A ⊆ γ P be a nonempty stack such that σ_a A is a palindrome. By Lemma 14 we have A = γ (γ A) and γ A ⊆ P. Since γ A matches by Lemma 13, we have PCP P. ∎

7 PCP to CFI


We show that PCP reduces to CFI (the intersection problem for Post grammars). The basic idea is that a nonempty stack A matches if and only if the string

(τ1 A) # (γ A)

equals the string

(τ2 A) # (γ A),

where γ A is a string encoding the card sequence A itself, provided the symbol # does not occur in A. Moreover, strings of these forms can be generated by the Post grammars (γ1 A, #) and (γ2 A, #), respectively.

We fix a symbol # and formalise the observations with two functions γ1 and γ2 translating cards into rules, and a function [gamma] γ defined as follows:

Lemma 15 ()

[sigma_gamma1] σ_# (γ1 A) = (τ1 A) # (γ A) and σ_# (γ2 A) = (τ2 A) # (γ A).

Proof

By induction on A. ∎

Lemma 16 ()

[gamma1_spec] Let A ⊆ γ1 B. Then there exists A' ⊆ B such that A = γ1 A'; the analogous statement holds for γ2.

Proof

By induction on A using Fact 2. ∎

Lemma 17 ()

[gamma_inj] Let # not occur in A and B. Then γ A = γ B implies A = B.

Proof

By induction on A using Fact 2. ∎

Theorem 7.1 ()

[reduction] PCP reduces to CFI.

Proof

Let P be a list of cards. We fix a symbol # not occurring in P and define R1 := γ1 P and R2 := γ2 P. We show PCP P ↔ CFI (R1, R2, #).

Let A ⊆ P be a nonempty match. Then γ1 A ⊆ R1, γ2 A ⊆ R2, and σ_# (γ1 A) = σ_# (γ2 A) by Lemma 15.

Let A1 ⊆ R1 and A2 ⊆ R2 be nonempty lists such that σ_# A1 = σ_# A2. By Lemma 16 there exist nonempty stacks B1, B2 ⊆ P such that A1 = γ1 B1 and A2 = γ2 B2. By Lemma 15 we have (τ1 B1) # (γ B1) = (τ2 B2) # (γ B2). By Fact 2 we have τ1 B1 = τ2 B2 and γ B1 = γ B2. Thus B1 = B2 by Lemma 17. Hence B1 is a nonempty match. ∎

Hopcroft et al. [9] give a reduction of PCP to CFI by using grammars equivalent to the following Post grammars:

While this is in line with the presentation of PCP with indices, it complicates both the formal definition and the verification.

Hesselink [8] directly reduces CFP to CFI for general context-free grammars, making the reduction of PCP to CFI redundant. The idea is that a context-free grammar over an alphabet Σ generates a palindrome if and only if its intersection with the context-free grammar of all palindromes over Σ is non-empty. We give a [CFP_CFI] formal proof of this statement using a definition of context-free rewriting with explicit alphabets.

For Post grammars, this approach is not available, because the language of all palindromes is not expressible by a Post grammar.

8 TM to SRH


A Turing machine, independently of its concrete type-theoretic definition, always consists of an alphabet Σ, a finite collection of states Q, an initial state q0, a collection of halting states H, and a step function δ which controls the behaviour of the head on the tape. The halting problem for Turing machines TM then asks whether a Turing machine reaches a halting state when executed on a tape containing a string x.

In this section, we briefly report on our formalisation of a reduction from TM to SRH following ideas from Hopcroft et al. [9]. In contrast to the other sections, we omit the technical details of the proof, because there are abundantly many, and none of them is interesting from a mathematical standpoint. We refer the interested reader to [7] for all details.

In the development, we use a formal definition of Turing machines from Asperti and Ricciotti [1].

To reduce TM to SRH, a representation of configurations of Turing machines as strings is needed. Although the content of a tape can get arbitrarily big over the run of a machine, it is finite in every single configuration. It thus suffices to represent only the part of the tape that the machine has previously written to.

We write the current state to the left of the currently read symbol and, following [1], distinguish four non-overlapping situations: the tape is empty; the tape contains symbols and the head reads one of them; the tape contains symbols and the head reads none of them because it is in a left-overflow position where no symbol has been written before; or the right-overflow counterpart of the latter situation. Note the usage of left and right markers to indicate the ends of the previously written part.

The reduction from TM to SRH now works in three steps. Given a Turing machine M, one can define whether a configuration is reachable from another configuration using its transition function [1, 7]. First, we translate the transition function of the Turing machine into a string rewriting system using the translation scheme depicted in Table 1.


Table 1: Rewriting rules added for each transition of the machine, depending on the symbol read, the symbol written, and the head movement (columns Read, Write, Move). For example, if the transition function of the machine indicates that in state q, when symbol a is read, the machine proceeds to state q', writes nothing, and moves to the left, we add one rule together with one rule for every symbol of the alphabet.
Lemma 18 ()

[reduction_reach_] For all Turing machines M and configurations c1 and c2 there is an SRS R such that the string representation of c1 rewrites with R to the string representation of c2 if and only if the configuration c2 is reachable from the configuration c1 by the machine M.

In the development, we first reduce to a version of string rewriting with explicit alphabets, and then [reduction] reduce this version to string rewriting as defined before.

This proof is by far the longest in our development. In essence, it is only a shift of representation, making explicit that transition functions encode a rewriting relation on configurations. The proof is mainly a big case distinction over all possible shapes of configurations of a machine, which leads to a combinatorial explosion and a vast number of subcases. The proof, however, does not contain any surprises or insights.

Note that, although we work with deterministic machines in the Coq development, the translation scheme described in Table 1 also works for nondeterministic Turing machines.

The second step of the reduction is to incorporate the set H of halting states. We define an intermediate problem SRH', generalising the definition of SRH to strings:

SRH' (R, x, z) := ∃y. x ≻^*_R y ∧ ∃a. a ∈ z ∧ a ∈ y

Note that SRH (R, x, a) = SRH' (R, x, [a]). TM can then easily be reduced to SRH':

Lemma 19 ()

[halt_SRH’] TM reduces to SRH'.

Proof

Given a Turing machine M and a string x, M accepts x if and only if SRH' (R, c, z), where R is the system from the last lemma, c is the string representing the initial configuration with the starting state q0 of M and tape content x, and z is a string containing exactly all halting states of M. ∎

Third, we can reduce SRH' to SRH:

Lemma 20 ()

[SRH’_SRH] SRH' reduces to SRH.

Proof

Given an SRS R, a string x and a string z, we first fix an alphabet Σ covering R, x, and z, and a fresh symbol a. We then have SRH' (R, x, z) if and only if SRH (R ++ [ [b]/[a] | b ∈ z ], x, a). ∎

All three steps combined yield:

Theorem 8.1 ()

[Halt_SRH] TM reduces to SRH.

9 Discussion

We have formalised and verified a number of computational reductions to and from the Post correspondence problem based on Coq’s type theory. Our goal was to come up with a development as elegant as possible. Realising the design presented in this paper in Coq yields an interesting exercise in the verification of list-processing functions. If the intermediate lemmas are hidden and just the reductions and accompanying correctness statements are given, the exercise gains difficulty, since the correctness proofs for the reductions require the invention of general enough inductive invariants (Lemmas 5, 6, 10, 11). To our surprise, we could not find rigorous correctness proofs for the reductions in the literature (e.g., [9, 3, 15]). Teaching these reductions without rigorous correctness proofs in theoretical computer science classes seems bad practice. As the paper shows, elegant and rigorous correctness proofs using techniques generally applicable in program verification are available.

The ideas for the reductions are taken from Hopcroft et al. [9]. They give a monolithic reduction of the halting problem for Turing machines to MPCP. The decomposition into the chain TM ⪯ SRH ⪯ SR ⪯ MPCP is novel. Davis et al. [3] give a monolithic reduction based on different ideas. The idea for the reduction PCP ⪯ CFP is from Hesselink [8], and the idea for the reduction PCP ⪯ CFI appears in Hopcroft et al. [9].

There are several design choices we faced when formalising the material presented in this paper.

  1. We decided to formalise PCP without making use of the positions of the cards in the list P. Most presentations in the literature (e.g., [9, 15]) follow Post’s original paper [13] in using positions (i.e., indices) rather than cards in matches. An exception is Davis et al. [3]. We think formulating PCP with positions is an unnecessary complication.

  2. We decided to represent symbols as numbers rather than elements of finite types serving as alphabets. Working with implicit alphabets represented as lists rather than explicit alphabets represented as finite types saves bureaucracy.

  3. We decided to work with Post grammars (inspired by Hesselink [8]) rather than general context-free grammars, since Post grammars sharpen the result and enjoy a particularly simple formalisation. In the Coq development, we show that [reduce_grammars] Post grammars are an instance of context-free grammars.

Furthermore, we decided to put the focus of this paper on the elegant reductions and not to cover Turing machines in detail. While Turing machines are a widespread model of computation, even their concrete formal definition contains dozens of details, none of them interesting from a mathematical perspective.

The Coq development verifying the results of Sections 3 to 7 consists of about 850 lines, of which about one third realises specifications. The reduction SRH ⪯ SR takes 70 lines, SR ⪯ MPCP takes 105 lines, MPCP ⪯ PCP takes 206 lines, PCP ⪯ CFP takes 60 lines, and PCP ⪯ CFI takes 107 lines. The reduction TM ⪯ SRH takes 610 lines, 230 of them specification, plus a [singleTM] definition of Turing machines taking 291 lines.

Future Work

Undecidability proofs for logics are often done by reductions from PCP or related tiling problems. We thus want to use our work as a stepping stone to build a library of reductions which can be used to verify more undecidability proofs. We want to reduce PCP to the halting problem of Minsky machines to prove the undecidability of intuitionistic linear logic [11]. Another possible step would be to reduce PCP to validity for first-order logic [2], following the reduction from e.g. [12]. Many other undecidability proofs are also done by direct reductions from PCP, like the intersection problem for two-way automata [14], unification in third-order logic [10], typability in the λΠ-calculus [4], satisfiability for more applied logics like HyperLTL [5], or decision problems of first-order theories [17].

In this paper, we gave reductions directly as functions in Coq instead of appealing to a concrete model of computation. Writing down concrete Turing machines computing the reductions is possible in principle, but would be very tedious and distract from the elegant arguments our proofs are based on.

In previous work [6] we studied an explicit model of computation based on a weak call-by-value calculus L in Coq. L would allow an implementation of all reduction functions without much overhead, which would also formally establish the computability of all reductions.

Moreover, it should be straightforward to reduce PCP to the termination problem for L. Reducing the termination problem of L to TM would take considerable effort. Together, the two reductions would close the loop and verify the computational equivalence of TM, SRH, SR, PCP, and the termination problem for L. Both reducing PCP to L and implementing all reductions in L is an exercise in the verification of deeply embedded functional programs, and orthogonal in the necessary methods to the work presented here.