Lecture Notes on Automata, Languages, and Grammars

Cristopher Moore · July 30, 2019

These lecture notes are intended as a supplement to Moore and Mertens' The Nature of Computation or as a standalone resource, and are available to anyone who wants to use them. Comments are welcome, and please let me know if you use these notes in a course. There are 61 exercises. I emphasize that automata are elementary playgrounds where we can explore the issues of deterministic and nondeterministic computation. Unlike P vs. NP, we can prove that nondeterminism is equivalent to determinism for finite-state automata, and strictly more powerful than determinism for push-down automata. I also correct several historical and aesthetic injustices: in particular, the Myhill-Nerode theorem and the idea of building minimal DFAs from equivalence classes of prefixes are restored to their rightful place above the Pumping Lemma for regular languages. I also discuss the Pumping Lemma for context-free languages, and briefly discuss counter automata, queue automata, and the connection between unambiguous context-free languages and algebraic generating functions.


1 Finite-State Automata

Here is a deterministic finite-state automaton, or DFA for short:

It has three states, $\{q_0, q_1, q_2\}$, and an input alphabet with two symbols, $\Sigma = \{a, b\}$. It starts in the state $q_0$ (circled in bold) and reads a string of $a$s and $b$s, making transitions from one state to another along the arrows. It is deterministic because at each step, given its current state and the symbol it reads next, it has one and only one choice of state to move to.

This DFA is a tiny kind of computer. Its job is to say “yes” or “no” to input strings—to “accept” or “reject” them. It does this by reading the string from left to right, arriving in a certain final state. If its final state is in the dashed rectangle, in state $q_0$ or $q_1$, it accepts the string. Otherwise, it rejects it. Thus any DFA answers a simple yes-or-no question—namely, whether the input string is in the set of strings that it accepts.

The set of yes-or-no questions that can be answered by a DFA is a kind of baby complexity class. It is far below the class P of problems that can be solved in polynomial time, and indeed the problems it contains are almost trivial. But it is interesting to look at, partly because, unlike P, we understand exactly what problems this class contains. To put this differently, while polynomial-time algorithms have enormous richness and variety, making it very hard to say exactly what they can and cannot do, we can say precisely what problems DFAs can solve.

Formally, a DFA consists of a set of states $Q$, an input alphabet $\Sigma$, an initial state $q_0 \in Q$, a subset $F \subseteq Q$ of accepting states, and a transition function

$\delta : Q \times \Sigma \to Q$

that tells the DFA which state to move to next. In other words, if it’s in state $q$ and it reads the symbol $a$, it moves to a new state $q' = \delta(q, a)$. In our example, $Q = \{q_0, q_1, q_2\}$, $\Sigma = \{a, b\}$, the initial state is $q_0$, $F = \{q_0, q_1\}$, and the transition function is

$\delta(q_0, a) = q_0$, $\delta(q_0, b) = q_1$, $\delta(q_1, a) = q_0$, $\delta(q_1, b) = q_2$, $\delta(q_2, a) = \delta(q_2, b) = q_2$.

Given a finite alphabet $\Sigma$, let $\Sigma^*$ denote the set of all finite strings over $\Sigma$. For instance, if $\Sigma = \{a, b\}$ then

$\Sigma^* = \{\varepsilon, a, b, aa, ab, ba, bb, aaa, \ldots\}$

Here $\varepsilon$ denotes the empty string—that is, the string of length zero. Note that $\Sigma^*$ is an infinite set, but it only includes finite strings. It’s handy to generalize the transition function and let $\delta^*(q, w)$ denote the state we end up in if we start in a state $q$ and read a string $w$ from left to right. For instance, in our example DFA, $\delta^*(q_0, abb) = q_2$.

We can define $\delta^*$ formally by induction,

$\delta^*(q, \varepsilon) = q$ and $\delta^*(q, wa) = \delta(\delta^*(q, w), a)$.

Here $wa$ denotes the string $w$ followed by the symbol $a$. In other words, if the string is empty, do nothing. Otherwise, read all but the last symbol of $w$ and then read the last symbol. More generally, if $w = uv$, i.e., the concatenation of $u$ and $v$, an easy proof by induction gives

$\delta^*(q, uv) = \delta^*(\delta^*(q, u), v)$.

In particular, if the input string is $w$, the final state the DFA ends up in is $\delta^*(q_0, w)$. Thus the set of strings, or the language, that a DFA $M$ accepts is

$L(M) = \{w \in \Sigma^* : \delta^*(q_0, w) \in F\}$.

If $L = L(M)$, we say that $M$ recognizes $L$. That is, $M$ answers the yes-or-no question of whether $w$ is in $L$, accepting if and only if $w \in L$. For instance, our example automaton above recognizes the language of strings with no two $b$s in a row,

$L = \{w \in \{a, b\}^* : w \text{ contains no } bb\}$.
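
To make this concrete, here is a minimal Python sketch of this DFA. The code and state labels are illustrative, not part of the notes' formalism: $q_0$ means the last symbol was not a $b$, $q_1$ means it was, and $q_2$ is the dead state reached after seeing $bb$.

    # Transition table for the three-state "no bb" DFA described above.
    DELTA = {
        ("q0", "a"): "q0", ("q0", "b"): "q1",
        ("q1", "a"): "q0", ("q1", "b"): "q2",
        ("q2", "a"): "q2", ("q2", "b"): "q2",
    }
    START, ACCEPT = "q0", {"q0", "q1"}

    def accepts(w: str) -> bool:
        """Read w from left to right; accept iff we end in an accepting state."""
        state = START
        for symbol in w:
            state = DELTA[(state, symbol)]
        return state in ACCEPT

    assert accepts("") and accepts("abab") and not accepts("abba")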

Some languages can be recognized by a DFA, and others can’t. Consider the following exercise:

Exercise 1.

Show that for a given alphabet $\Sigma$, the set of possible DFAs is countably infinite (that is, it’s the same as the number of natural numbers) while the set of all possible languages $L \subseteq \Sigma^*$ is uncountably infinite (that is, it’s as large as the number of subsets of the natural numbers).

By Cantor’s diagonalization proof, this shows that there are infinitely (transfinitely!) many more languages than there are DFAs. In addition, languages like the set of strings of $0$s and $1$s that encode a prime number in binary seem too hard for a DFA to recognize. Below we will see some ways to prove intuitions like these. First, let’s give the class of languages that can be recognized by a DFA a name.

Definition 1.

If $\Sigma$ is a finite alphabet, a language $L \subseteq \Sigma^*$ is DFA-regular if there exists a DFA $M$ such that $L(M) = L$. That is, $M$ accepts a string $w$ if and only if $w \in L$.

This definition lumps together DFAs of all different sizes. In other words, a language is regular if it can be recognized by a DFA with one state, or two states, or three states, and so on. However, the number of states has to be constant—it cannot depend on the length of the input word.

Our example above shows that the language of strings of $a$s and $b$s with no $bb$ is DFA-regular. What other languages are?

Exercise 2.

Show that the following languages are DFA-regular.

  1. The set of strings in $\{a, b\}^*$ with an even number of $b$’s.

  2. The set of strings in $\{a, b\}^*$ where there is no $b$ anywhere to the left of an $a$.

  3. The set of strings in $\{0, 1\}^*$ that encode, in binary, an integer that is a multiple of $3$. Interpret the empty string as the number zero.

In each case, try to find the minimal DFA $M$, i.e., the one with the smallest number of states, such that $L(M) = L$. Offer some intuition about why you believe your DFA is minimal.

As you do the previous exercise, a theme should be emerging. The question about each of these languages is this: as you read a string $w$ from left to right, what do you need to keep track of at each point in order to be able to tell, at the end, whether or not $w \in L$?

Exercise 3.

Show that any finite language is DFA-regular.

Now that we have some examples, let’s prove some general properties of regular languages.

Exercise 4 (Closure under complement).

Prove that if $L$ is DFA-regular, then its complement $\bar{L} = \Sigma^* \setminus L$ is DFA-regular.

This is a simple example of a closure property—a property saying that the set of DFA-regular languages is closed under certain operations. Here is a more interesting one:

Proposition 1 (Closure under intersection).

If $L_1$ and $L_2$ are DFA-regular, then $L_1 \cap L_2$ is DFA-regular.

Proof.

The intuition behind the proof is to run both automata at the same time, and accept if they both accept. Let $M_1$ and $M_2$ denote the automata that recognize $L_1$ and $L_2$ respectively. They have sets of states $Q_1$ and $Q_2$, initial states $q_{0,1}$ and $q_{0,2}$, and so on. Define a new automaton $M$ as follows:

$Q = Q_1 \times Q_2$, $q_0 = (q_{0,1}, q_{0,2})$, $F = F_1 \times F_2$, and $\delta((q_1, q_2), a) = (\delta_1(q_1, a), \delta_2(q_2, a))$.

Thus $M$ runs both automata in parallel, updating both of them at once, and accepts if they both end in an accepting state. To complete the proof in gory detail, by induction on the length of $w$ we have

$\delta^*((q_{0,1}, q_{0,2}), w) = (\delta_1^*(q_{0,1}, w), \delta_2^*(q_{0,2}, w))$.

Since $w \in L_1 \cap L_2$ if and only if $w \in L_1$ and $w \in L_2$, in which case $\delta_1^*(q_{0,1}, w) \in F_1$ and $\delta_2^*(q_{0,2}, w) \in F_2$, $M$ recognizes $L_1 \cap L_2$. ∎
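
Here is an illustrative sketch of this product construction in Python. The representation is an assumption of the sketch, not the notes: each DFA is a (delta, start, accept) triple where delta is a complete table mapping (state, symbol) pairs to states.

    from itertools import product

    def intersect(dfa1, dfa2, alphabet):
        """Product construction for the intersection of two DFA languages."""
        (d1, s1, f1), (d2, s2, f2) = dfa1, dfa2
        states1 = {q for (q, _) in d1}
        states2 = {q for (q, _) in d2}
        delta = {((p, q), a): (d1[(p, a)], d2[(q, a)])
                 for p, q in product(states1, states2) for a in alphabet}
        accept = {(p, q) for p in f1 for q in f2}   # accept iff both accept
        return delta, (s1, s2), accept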

Note that this proof creates a combined automaton with $|Q_1| \cdot |Q_2|$ states. Do you think we can do better? Certainly we can in some cases, such as if $L_1 \cap L_2$ is the empty set. But do you think we can do better in general? Or do you think there is an infinite family of pairs of automata of increasing size such that, for every pair in this family, the smallest automaton that recognizes $L_1 \cap L_2$ has $|Q_1| \cdot |Q_2|$ states? We will resolve this question later on—but for now, see if you can come up with such a family, and an intuition for why $|Q_1| \cdot |Q_2|$ states are necessary.

Now that we have closure under complement and intersection, de Morgan’s law

$L_1 \cup L_2 = \overline{\bar{L}_1 \cap \bar{L}_2}$

tells us that the union of two DFA-regular languages is DFA-regular. Thus we have closure under union as well as intersection and complement. By induction, any Boolean combination of DFA-regular languages is DFA-regular.

Exercise 5.

Pretend you don’t know de Morgan’s law, and give a direct proof in the style of Proposition 1 that the set of DFA-regular languages is closed under union.

Exercise 6.

Consider two DFAs, $M_1$ and $M_2$, with $n_1$ and $n_2$ states respectively. Show that if $L(M_1) \neq L(M_2)$, there is at least one word of length less than $n_1 n_2$ that $M_1$ accepts and $M_2$ rejects or vice versa. Contrapositively, if all words of length less than $n_1 n_2$ are given the same response by both DFAs, they recognize the same language.

These closure properties correspond to ways to modify a DFA, or combine two DFAs to create a new one. We can switch the accepting and rejecting states, or combine two DFAs so that they run in parallel. What else can we do? Another way to combine languages is concatenation. Given two languages $L_1$ and $L_2$, define the language

$L_1 L_2 = \{uv : u \in L_1, v \in L_2\}$.

In other words, strings in $L_1 L_2$ consist of a string in $L_1$ followed by a string in $L_2$. Is the set of DFA-regular languages closed under concatenation?

In order to recognize $L_1 L_2$, we would like to run $M_1$ until it accepts $u$, and then start running $M_2$ on $v$. But there’s a problem—we might not know where $u$ ends and $v$ begins. For instance, let $L_1$ be $L$ from our example above, and let $L_2$ be the set of strings from the first part of Exercise 2, where there are an even number of $b$s. Then the word $abab$ is in $L_1 L_2$. But is it $\varepsilon \cdot abab$, or $a \cdot bab$, or $abab \cdot \varepsilon$? In each case we could jump from an accepting state of $M_1$ to the initial state of $M_2$. But if we make this jump in the wrong place, we could end up rejecting instead of accepting: for instance, if we try $ab \cdot ab$.

If only we were allowed to guess where to jump, and somehow always be sure that we’ll jump at the right place, if there is one…

2 Nondeterministic Finite-State Automata

Being deterministic, a DFA has no choice about what state to move to next. In the diagram of the automaton for $L$ above, each state has exactly one arrow pointing out of it labeled with each symbol in the alphabet. What happens if we give the automaton a choice, giving some states multiple arrows labeled with the same symbol, and letting it choose which way to go?

This corresponds to letting the transition function be multi-valued, so that it returns a set of states rather than a single state. Formally, it is a function into the power set $P(Q)$, i.e., the set of all subsets of $Q$:

$\delta : Q \times \Sigma \to P(Q)$

We call such a thing a nondeterministic finite-state automaton, or NFA for short.

Like a DFA, an NFA starts in some initial state $q_0$ and makes a series of transitions as it reads a string $w$. But now the set of possible computations branches out, letting the automaton follow many possible paths. Some of these end in an accepting state, and others don’t. Under what circumstances should we say that the NFA accepts $w$? How should we define the language that it recognizes? There are several ways we might do this, but the one we use is this: it accepts $w$ if and only if there exists a computation path ending in an accepting state. Conversely, it rejects $w$ if and only if all computation paths end in a rejecting state.

Note that this notion of “nondeterminism” has nothing to do with probability. It could be that only one out of exponentially many possible computation paths accepts, so that if the NFA flips a coin each time it has a choice, the probability that it finds this path is exponentially small. We judge acceptance not on the probability of an accepting path, but simply on its existence. This definition may seem artificial, but it is analogous to the definition of NP (see Chapter 4) where the answer to the input is “yes” if there exists a solution, or “witness,” that a deterministic polynomial-time algorithm can check.

Perhaps it’s time for an example. Let the input alphabet be $\Sigma = \{a, b\}$, and define an NFA as follows:

Let’s call this automaton $M_3$. It starts in state $q_0$, and can stay in state $q_0$ as long as it likes. But when it reads the symbol $b$, it can move to state $q_1$ if it prefers. After that, it moves inexorably to state $q_2$ and state $q_3$ regardless of the input symbol, accepting only if it ends in state $q_3$. There are no allowed transitions from state $q_3$. In other words, $F = \{q_3\}$.
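
As a sketch of the “accept if some path accepts” semantics, here is an illustrative recursive simulator. The dictionary encodes the transitions just described, with our own labels $q_0, \ldots, q_3$.

    # q0 loops on a and b; on b it may also guess that this b is 3rd-to-last.
    NFA = {
        ("q0", "a"): {"q0"}, ("q0", "b"): {"q0", "q1"},
        ("q1", "a"): {"q2"}, ("q1", "b"): {"q2"},
        ("q2", "a"): {"q3"}, ("q2", "b"): {"q3"},
    }

    def nfa_accepts(w: str, state: str = "q0") -> bool:
        """Accept iff SOME computation path ends in the accepting state q3."""
        if not w:
            return state == "q3"
        return any(nfa_accepts(w[1:], s) for s in NFA.get((state, w[0]), set()))

    assert nfa_accepts("abbaa") and not nfa_accepts("baaa")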

We call this automaton $M_3$ because it accepts the set of strings whose 3rd-to-last symbol is a $b$. But it has to get its timing right, and move from state $q_0$ to state $q_1$ when it sees that 3rd-to-last $b$. Otherwise, it misses the boat. How hard do you think it is to recognize this set deterministically? Take a moment to design a DFA that recognizes it. How many states must it have? As in Exercise 2, how much information does it need to keep track of as it reads left-to-right, so that whenever it stops, it “knows” whether the 3rd-to-last symbol was a $b$ or not?

As we did for DFAs, let’s define the class of languages that can be recognized by some NFA:

Definition 2.

A language $L$ is NFA-regular if there exists an NFA $M$ such that $L(M) = L$.

A DFA is a special case of an NFA, where $\delta(q, a)$ always contains a single state. Thus any language which is DFA-regular is automatically also NFA-regular.

On the other hand, with their miraculous ability to guess the right path, it seems as if NFAs could be much more powerful than DFAs. Indeed, where P and NP are concerned, we believe that nondeterminism makes an enormous difference. But down here in the world of finite-state automata, it doesn’t, as the following theorem shows.

Theorem 1.

Any NFA can be simulated by a DFA that accepts the same language. Therefore, a language is NFA-regular if and only if it is DFA-regular.

Proof.

The idea is to keep track of all possible states that an NFA can be in at each step. While the NFA’s transitions are nondeterministic, this set changes in a deterministic way. Namely, after reading a symbol $a$, a state $q'$ is possible if $q' \in \delta(q, a)$ for some state $q$ that was possible on the previous step.

This lets us simulate an NFA $N$ with a DFA $M$ as follows. If $N$’s set of states is $Q$, then $M$’s set of states is the power set $P(Q)$, its initial state is the set $\{q_0\}$, and its transition function is

$\delta_M(A, a) = \bigcup_{q \in A} \delta_N(q, a)$.

Then for any string $w$, $\delta_M^*(\{q_0\}, w)$ is the set of states that $N$ could be in after reading $w$. Finally, $N$ accepts $w$ if it could end in at least one accepting state, so we define the accepting set of $M$ as

$F_M = \{A \subseteq Q : A \cap F \neq \emptyset\}$.

Then $L(M) = L(N)$. ∎

While this theorem shows that any NFA can be simulated by a DFA, the DFA is much larger. If $N$ has $n$ states, then $M$ has $2^n$ states. Since $2^n$ is finite for any finite $n$, and since the definition of DFA-regular lets $M$ have any finite size, this shouldn’t necessarily bother us—the size of $M$ is “just a constant.” But as for our question about intersections above, do you think this is the best we can do?
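
The subset construction translates directly into code. Below is an illustrative Python sketch, under the same dictionary representation as before; it builds only the subsets reachable from $\{q_0\}$, which in the worst case is still all $2^n$ of them.

    from collections import deque

    def determinize(nfa_delta, start, accept, alphabet):
        """Subset construction: nfa_delta maps (state, symbol) -> set of states."""
        start_set = frozenset([start])
        dfa_delta, seen, queue = {}, {start_set}, deque([start_set])
        while queue:
            A = queue.popleft()
            for a in alphabet:
                B = frozenset(r for q in A for r in nfa_delta.get((q, a), ()))
                dfa_delta[(A, a)] = B
                if B not in seen:
                    seen.add(B)
                    queue.append(B)
        dfa_accept = {A for A in seen if A & accept}  # some possible state accepts
        return dfa_delta, start_set, dfa_accept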

Now that we know that DFA-regular and NFA-regular languages are the same, let’s use the word “regular” for both of them. Having two definitions for the same thing is helpful. If we want to prove something about regular languages, we are free to use whichever definition—that is, whichever type of automaton—makes that proof easier.

For instance, let’s revisit the fact that the union of two regular languages is regular. In Section 1, we proved this by creating a DFA that runs two DFAs $M_1$ and $M_2$ in parallel. But an NFA can do something like this:

This NFA guesses which of $M_1$ or $M_2$ it should run, rather than running both of them at once, and recognizes $L_1 \cup L_2$ with about $|Q_1| + |Q_2|$ states instead of $|Q_1| \cdot |Q_2|$ states. To make sense, the edges out of the new initial state have to be $\varepsilon$-transitions. That is, the NFA has to be able to jump from the new initial state to $q_{0,1}$ or $q_{0,2}$ without reading any symbols at all:

Allowing this kind of transition in our diagrams doesn’t change the power of NFAs at all (exercise for the reader). It also makes it easy to prove closure under concatenation, which we didn’t see how to do with DFAs:

Proposition 2.

If $L_1$ and $L_2$ are regular, then $L_1 L_2$ is regular.

Proof.

Start with NFAs $M_1$ and $M_2$ that recognize $L_1$ and $L_2$ respectively. We assume that $Q_1$ and $Q_2$ are disjoint. Define a new NFA $M$ with $Q = Q_1 \cup Q_2$, initial state $q_{0,1}$, accepting set $F = F_2$, and an $\varepsilon$-transition from each $q \in F_1$ to $q_{0,2}$. Then $M$ recognizes $L_1 L_2$. ∎

Another important operator on languages is the Kleene star. Given a language $L$, we define $L^*$ as the concatenation of $t$ strings from $L$ for any integer $t \geq 0$:

$L^* = \{w_1 w_2 \cdots w_t : t \geq 0 \text{ and } w_1, \ldots, w_t \in L\}$

This includes our earlier notation $\Sigma^*$ for the set of all finite sequences of symbols in $\Sigma$. Note that $t = 0$ is allowed, so $L^*$ includes the empty word $\varepsilon$. Note also that $w_1 w_2 \cdots w_t$ doesn’t mean repeating the same string $t$ times—the $w_i$ are allowed to be different. The following exercise shows that the class of regular languages is closed under $*$:

Exercise 7.

Show that if $L$ is regular then $L^*$ is regular. Why does it not suffice to use the fact that the regular languages are closed under concatenation and union?

Here is another fact that is easier to prove using NFAs than DFAs:

Exercise 8.

Given a string $w$, let $w^R$ denote $w$ written in reverse. Given a language $L$, let $L^R = \{w^R : w \in L\}$. Prove that $L$ is regular if and only if $L^R$ is regular. Why is this harder to prove with DFAs?

On the other hand, if you knew about NFAs but not about DFAs, it would be tricky to prove that the complement of a regular language is regular. The definition of nondeterministic acceptance is asymmetric: a string $w$ is in $\overline{L(M)}$ if every computation path leads to a state outside $F$. Logically speaking, the negation of a “there exists” statement is a “for all” statement, creating a different kind of nondeterminism. Let’s show that defining acceptance this way again keeps the class of regular languages the same:

Exercise 9.

A for-all NFA is one such that $L(M)$ is the set of strings $w$ where every computation path ends in an accepting state. Show how to simulate a for-all NFA with a DFA, and thus prove that a language is recognized by some for-all NFA if and only if it is regular.

If that was too easy, try this one:

Exercise 10.

A parity finite-state automaton, or PFA for short, is like an NFA except that it accepts a string $w$ if and only if the number of accepting paths induced by reading $w$ is odd. Show how to simulate a PFA with a DFA, and thus prove that a language is recognized by a PFA if and only if it is regular. Hint: this is a little trickier than our previous simulations, but the number of states of the DFA is the same.

Here is an interesting closure property:

Exercise 11.

Given finite words $u$ and $v$, say that a word $w$ is an interweave of $u$ and $v$ if I can get $w$ by peeling off symbols of $u$ and $v$, taking the next symbol of $u$ or the next symbol of $v$ at each step, until both are empty. (Note that $w$ must have length $|u| + |v|$.) For instance, if $u = ab$ and $v = ab$, then one interweave of $u$ and $v$ is $aabb$. Note that, in this case, we don’t know which $a$ in $w$ came from $u$ and which came from $v$.

Now given two languages $L_1$ and $L_2$, let $L$ be the set of all interweaves of $u$ and $v$, for all $u \in L_1$ and $v \in L_2$. Prove that if $L_1$ and $L_2$ are regular, then so is $L$.

Finally, the following exercise is a classic. If you are familiar with modern culture, consider the plight of River Song and the Doctor.

Exercise 12.

Given a language $L$, let $\frac{1}{2}L$ denote the set of words that can appear as first halves of words in $L$:

$\frac{1}{2}L = \{u : \text{there is a } v \text{ with } |v| = |u| \text{ and } uv \in L\}$,

where $|w|$ denotes the length of a word $w$. Prove that if $L$ is regular, then $\frac{1}{2}L$ is regular. Generalize this to the set of words that can appear as middle thirds of words in $L$:

$\{v : \text{there are } u, w \text{ with } |u| = |v| = |w| \text{ and } uvw \in L\}$

3 Equivalent States and Minimal DFAs

The key to bounding the power of a DFA is to think about what kind of information it can gather, and retain, about the input string—specifically, how much it needs to remember about the part of the string it has seen so far. Most importantly, a DFA has no memory beyond its current state. It has no additional data structure in which to store or remember information, nor is it allowed to return to earlier symbols in the string.

In this section, we will formalize this idea, and use it to derive lower bounds on the number of states a DFA needs to recognize a given language. In particular, we will show that some of the constructions of DFAs in the previous sections, for the intersection of two regular languages or to deterministically simulate an NFA, are optimal.

Definition 3.

Given a language $L$, we say that a pair of strings $u, v$ are equivalent with respect to $L$, and write $u \sim_L v$, if for all $w \in \Sigma^*$ we have

$uw \in L$ if and only if $vw \in L$.

Note that this definition doesn’t say that $u$ and $v$ are in $L$, although the case $w = \varepsilon$ shows that $u \in L$ if and only if $v \in L$. It says that $u$ and $v$ can end up in $L$ by being concatenated with the same set of strings $w$. If you think of $u$ and $v$ as prefixes, forming the first part of a string, then they can be followed by the same set of suffixes $w$.

It’s easy to see that $\sim_L$ is an equivalence relation: that is, it is reflexive, transitive, and symmetric. This means that for each string $u$ we can consider its equivalence class, the set of strings equivalent to it. We denote this as

$[u] = \{v : v \sim_L u\}$,

in which case $[u] = [v]$ if and only if $u \sim_L v$. Thus $\sim_L$ carves the set of all strings $\Sigma^*$ up into equivalence classes.

As we read a string from left to right, we can lump equivalent strings together in our memory. We just have to remember the equivalence class of what we have seen so far, since every string in that class behaves the same way when we add more symbols to it. For instance, our example language $L$ with no $bb$ has three equivalence classes:

  1. $[\varepsilon]$, the set of strings with no $bb$ that do not end in $b$.

  2. $[b]$, the set of strings with no $bb$ that end in $b$.

  3. $[bb]$, the set of strings that contain $bb$.

Do you see how these correspond to the three states of the DFA?
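
We can even discover these classes empirically. The following illustrative sketch (not part of the notes) buckets all short strings by how they behave under a sample of suffixes; for this language, exactly three buckets appear, matching the three states.

    from itertools import product

    def in_L(w: str) -> bool:          # membership test for our example language
        return "bb" not in w

    def strings(max_len: int, alphabet: str = "ab"):
        for n in range(max_len + 1):
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)

    # Bucket prefixes u by which sample suffixes w complete them to a word in L.
    suffixes = list(strings(4))
    classes = {}
    for u in strings(4):
        signature = tuple(in_L(u + w) for w in suffixes)
        classes.setdefault(signature, []).append(u)

    print(len(classes))   # 3: one class per state of the DFA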

Figure 1: If reading $u$ or $v$ puts $M$ in the same state, then so will reading $uw$ or $vw$, causing $M$ to accept both or reject both.

On the other hand, if two strings are not equivalent, we had better be able to distinguish them in our memory. Given a DFA $M$, let’s define another equivalence relation,

$u \sim_M v$ if $\delta^*(q_0, u) = \delta^*(q_0, v)$,

where $q_0$ and $\delta$ are $M$’s initial state and transition function. Two strings are equivalent with respect to $M$ if reading them puts $M$ in the same state as in Figure 1. But once we’re in that state, if we read a further word $w$, we will either accept both $uw$ and $vw$ or reject both. We prove this formally in the following proposition.

Proposition 3.

Suppose $L$ is a regular language, and let $M$ be a DFA that recognizes $L$. Then for any strings $u, v$,

if $u \sim_M v$ then $u \sim_L v$.

Proof.

Suppose $u \sim_M v$. Then for any string $w$, we have

$\delta^*(q_0, uw) = \delta^*(\delta^*(q_0, u), w) = \delta^*(\delta^*(q_0, v), w) = \delta^*(q_0, vw)$.

Thus $uw$ and $vw$ lead to the same final state. This state is either in $F$ or not, so $M$ either accepts both $uw$ and $vw$ or rejects them both. If $M$ recognizes $L$, this means that $uw \in L$ if and only if $vw \in L$, so $u \sim_L v$. ∎

Contrapositively, if $u$ and $v$ are not equivalent with respect to $L$, they cannot be equivalent with respect to $M$:

if $u \not\sim_L v$ then $u \not\sim_M v$.

Thus each equivalence class requires a different state. This gives us a lower bound on the number of states that $M$ needs to recognize $L$:

Corollary 1.

Let $L$ be a regular language. If $\sim_L$ has $k$ equivalence classes, then any DFA that recognizes $L$ must have at least $k$ states.

We can say more than this. The number of states of the minimal DFA that recognizes $L$ is exactly equal to the number of equivalence classes. More to the point, the states and equivalence classes are in one-to-one correspondence. To see this, first do the following exercise:

Exercise 13.

Show that if $u \sim_L v$, then $ua \sim_L va$ for any symbol $a$.

Thus for any equivalence class $[u]$ and any symbol $a$, we can unambiguously define an equivalence class $[ua]$. That is, there’s no danger that reading a symbol sends two strings in $[u]$ to two different equivalence classes. This gives us our transition function, as described in the following theorem.

Theorem 2 (Myhill-Nerode).

Let $L$ be a regular language. Then the minimal DFA for $L$, which we denote $M_L$, can be described as follows. It has one state for each equivalence class $[u]$. Its initial state is $[\varepsilon]$, its transition function is

$\delta([u], a) = [ua]$,

and its accepting set is

$F = \{[u] : u \in L\}$.

This theorem is almost tautological at this point, but let’s go through a formal proof to keep ourselves sharp.

Proof.

We will show by induction on the length of $w$ that $M_L$ keeps track of $w$’s equivalence class. The base case is clear, since we start in $[\varepsilon]$, the equivalence class of the empty word. The inductive step follows from

$\delta^*([\varepsilon], wa) = \delta(\delta^*([\varepsilon], w), a) = \delta([w], a) = [wa]$.

Thus we stay in the correct equivalence class each time we read a new symbol. This shows that, for all strings $w$,

$\delta^*([\varepsilon], w) = [w]$.

Finally, $M_L$ accepts $w$ if and only if $[w] \in F$, which by the definition of $F$ means that $w \in L$.

Thus $M_L$ recognizes $L$, and Corollary 1 shows that any DFA that recognizes $L$ has at least as many states as $M_L$. Therefore, $M_L$ is the smallest possible DFA that recognizes $L$. ∎

Theorem 2 also shows that the minimal DFA is unique up to isomorphism. That is, any two DFAs that both recognize $L$, and both have a number of states equal to the number of equivalence classes, have the same structure: there is a one-to-one mapping between their states that preserves the transition function, since both of them correspond exactly to the equivalence classes.

What can we say about non-minimal DFAs? Suppose that $M$ recognizes $L$. Proposition 3 shows that $\sim_M$ can’t be a coarser equivalence than $\sim_L$. That is, $\sim_M$ can’t lump together two strings that aren’t equivalent with respect to $L$. But $\sim_M$ could be finer than $\sim_L$, distinguishing pairs of words that it doesn’t have to in order to recognize $L$. In general, the equivalence classes of $\sim_M$ are pieces of the equivalence classes of $\sim_L$, as shown in Figure 2.

Figure 2: The equivalence classes of a language $L$ where $\sim_L$ has three equivalence classes (bold). A non-minimal DFA $M$ with six states that recognizes $L$ corresponds to a finer equivalence relation $\sim_M$ with smaller equivalence classes (dashed). It remembers more than it needs to, distinguishing strings that it would be all right to merge.

Let’s say that two states $q_1, q_2$ of $M$ are equivalent if the corresponding equivalence classes of $\sim_M$ lie in the same equivalence class of $\sim_L$. In that case, if we merge $q_1$ and $q_2$ in $M$’s state space, we get a smaller DFA that still recognizes $L$. We can obtain the minimal DFA by merging equivalent states until each equivalence class of $\sim_L$ corresponds to a single state. This yields an algorithm for finding the minimal DFA which runs in polynomial time as a function of the number of states.
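
Here is an illustrative Python sketch of that algorithm, phrased as partition refinement: start by separating accepting from rejecting states, then repeatedly split apart states whose successors land in different blocks. It assumes a complete transition table and that all states are reachable.

    def minimize(delta, states, alphabet, accept):
        """Partition refinement: merge states the language cannot distinguish."""
        blocks = {q: (q in accept) for q in states}  # coarsest split: accept vs. reject
        while True:
            # States stay together only if their successors lie in the same blocks.
            sig = {q: (blocks[q], tuple(blocks[delta[(q, a)]] for a in alphabet))
                   for q in states}
            ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
            refined = {q: ids[sig[q]] for q in states}
            if len(set(refined.values())) == len(set(blocks.values())):
                return refined   # maps each state to its block in the minimal DFA
            blocks = refined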

The Myhill-Nerode Theorem may seem a little abstract, but it is perfectly concrete. Doing the following exercise will give you a feel for it if you don’t have one already:

Exercise 14.

Describe the equivalence classes of the three languages from Exercise 2. Use them to give the minimal DFA for each language, or prove that the DFA you designed before is minimal.

We can also answer some of the questions we raised before about whether we really need as many states as our constructions above suggest.

Exercise 15.

Describe an infinite family of pairs of languages $L_1, L_2$ such that the minimal DFA for $L_1$ has $n_1$ states, the minimal DFA for $L_2$ has $n_2$ states, and the minimal DFA for $L_1 \cap L_2$ has $n_1 n_2$ states.

Exercise 16.

Describe a family of languages $L_n$, one for each $n > 0$, such that $L_n$ can be recognized by an NFA with $O(n)$ states, but the minimal DFA that recognizes $L_n$ has at least $2^n$ states. Hint: consider the NFA $M_3$ defined in Section 2 above.

The following exercises show that even reversing a language, or concatenating two languages, can greatly increase the number of states. Hint: the languages from the previous exercise have many uses.

Exercise 17.

Describe a family of languages $L_n$, one for each $n > 0$, such that $L_n$ can be recognized by a DFA with $O(n)$ states, but the minimal DFA for $L_n^R$ has at least $2^n$ states.

Exercise 18.

Describe a family of pairs of languages $L_1, L_2$, one for each $n > 0$, such that $L_1$ can be recognized by a DFA with a constant number of states and $L_2$ can be recognized by a DFA with $O(n)$ states, but the minimal DFA for $L_1 L_2$ has at least $2^n$ states.

The sapient reader will wonder whether there is an analog of the Myhill-Nerode Theorem for NFAs, and whether the minimal NFA for a language has a similarly nice description. It turns out that finding the minimal NFA is much harder than finding the minimal DFA. Rather than being in P, it is PSPACE-complete (see Chapter 8). That is, it is among the hardest problems that can be solved with a polynomial amount of memory.

4 Nonregular Languages

The Myhill-Nerode Theorem has another consequence. Namely, it tells us exactly when a language is regular:

Corollary 2.

A language $L$ is regular if and only if $\sim_L$ has a finite number of equivalence classes.

Thus to prove that a language is not regular—that no DFA, no matter how many states it has, can recognize it—all we have to do is show that $\sim_L$ has an infinite number of equivalence classes. This may sound like a tall order, but it’s quite easy. We just need to exhibit an infinite set of strings $u_1, u_2, \ldots$ such that $u_i \not\sim_L u_j$ for any $i \neq j$. And to prove that $u_i \not\sim_L u_j$, we just need to give a string $w$ such that $u_i w \in L$ but $u_j w \notin L$ or vice versa.

For example, given a string $w$ and a symbol $a$, let $\#_a(w)$ denote the number of $a$s in $w$. Then consider the following language:

$L = \{w \in \{a, b\}^* : \#_a(w) = \#_b(w)\}$

Intuitively, in order to recognize this language we have to be able to count the $a$s and $b$s, and to count up to any number requires an infinite number of states. Our definition of equivalence classes lets us make this intuition rigorous. Consider the set of words $a^i$ for $i \geq 0$. If $i \neq j$, then $a^i \not\sim_L a^j$ since

$a^i b^i \in L$ but $a^j b^i \notin L$.

Thus each $a^i$ corresponds to a different equivalence class. Any DFA with a finite number of states will fail to recognize $L$, since it will confuse $a^i$ with $a^j$ for sufficiently large $i$ and $j$. The best a DFA with $k$ states can do is count up to $k$.

Exercise 19.

Describe all the equivalence classes of $L$, starting with the classes $[a^i]$ described above.

Any automaton that recognizes $L$ has to have an infinite number of states. Figure 3 shows an infinite-state automaton that does the job. The previous exercise shows that this automaton is the “smallest possible,” in the sense that each equivalence class corresponds to a single state. Clearly this automaton, while infinite, has a simple finite description—but not a description that fits within the framework of DFAs or NFAs.

Figure 3: The smallest possible infinite-state automaton (yes, that makes sense) that recognizes the non-regular language $L = \{w : \#_a(w) = \#_b(w)\}$.
Exercise 20.

Consider the language

$L' = \{w \in \{a, b, c\}^* : \#_a(w) = \#_b(w) = \#_c(w)\}$

What are its equivalence classes? What does its minimal infinite-state machine look like?

Exercise 21.

The Dyck language $D$ is the set of strings of properly matched left and right parentheses,

$D = \{\varepsilon,\ (),\ ()(),\ (()),\ (())(),\ \ldots\}$

Prove, using any technique you like, that $D$ is not regular. What are its equivalence classes? What does its minimal infinite-state machine look like?

Now describe the equivalence classes for the language $D_2$ with two types of brackets, round and square. These must be nested properly, so that $[()]$ is allowed but $[(])$ is not. Draw a picture of the minimal infinite-state machine for $D_2$ in a way that makes its structure clear. How does this picture generalize to the language $D_k$ where there are $k$ types of brackets?

5 The Pumping Lemma

The framework of equivalence classes is by far the most simple, elegant, and fundamental way to tell whether or not a language is regular. But there are other techniques as well. Here we describe the Pumping Lemma, which states a necessary condition for a language to be regular. It is not as useful or as easy to apply as the Myhill-Nerode Theorem, but the proof is a nice use of the pigeonhole principle, and applying it gives us some valuable exercise in juggling quantifiers. It states that any sufficiently long string in a regular language can be “pumped,” repeating some middle section of it as many times as we like, and still be in the language.

As before we use $|w|$ to denote the length of a string $w$. We use $y^t$ to denote $y$ concatenated with itself $t$ times.

Lemma 1.

Suppose $L$ is a regular language. Then there is an integer $p$ such that any $w \in L$ with $|w| \geq p$ can be written as a concatenation of three strings, $w = xyz$, where

  1. $|xy| \leq p$,

  2. $|y| \geq 1$, i.e., $y \neq \varepsilon$, and

  3. for all integers $t \geq 0$, $xy^t z \in L$.

Proof.

As the reader may have guessed, the constant $p$ is the number of states in the minimal DFA $M$ that recognizes $L$. Including the initial state $q_0$, reading the first $p$ symbols of $w$ takes $M$ through $p + 1$ states. By the pigeonhole principle, two of these states must be the same, which we denote $q$. Let $x$ be the part of $w$ that takes $M$ to $q$ for the first time, let $y$ be the part of $w$ that brings $M$ back to $q$ for its first return visit, and let $z$ be the rest of $w$ as shown in Figure 4. Then $x$ and $y$ satisfy the first two conditions, and

$\delta^*(q_0, x) = \delta^*(q_0, xy) = q$.

By induction on $t$, for any $t \geq 0$ we have $\delta^*(q_0, xy^t) = q$, and therefore $\delta^*(q_0, xy^t z) = \delta^*(q, z)$. In particular, since $w = xyz \in L$ we have $\delta^*(q, z) \in F$, so $xy^t z \in L$ for all $t$. ∎

Figure 4: If a DFA has $p$ states, the first $p$ symbols of $w$ must cause it to visit some state $q$ twice. We let $x$ be the part of $w$ that first takes $M$ to $q$, let $y$ be the part of $w$ that brings $M$ back to $q$ for the first return visit, and let $z$ be the rest of $w$. Then for any $t \geq 0$, the word $xy^t z$ takes $M$ to the same state that $w$ does.
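
The proof is constructive, and turns directly into code. Given a DFA's transition table (in the same illustrative representation as before) and a sufficiently long word, this sketch finds the decomposition $w = xyz$ by locating the first repeated state:

    def pump_decomposition(delta, start, w):
        """Run the DFA on w, record the states visited, and split w = xyz
        at the first repeated state, exactly as in the proof above."""
        trace = [start]
        for symbol in w:
            trace.append(delta[(trace[-1], symbol)])
        first_visit = {}
        for j, state in enumerate(trace):
            if state in first_visit:               # pigeonhole: a repeat
                i = first_visit[state]
                return w[:i], w[i:j], w[j:]        # x, y, z with |xy| <= #states
            first_visit[state] = j
        return None   # happens only if |w| is less than the number of states

For the no-$bb$ DFA from Section 1 and $w = aaaa$, this returns $x = \varepsilon$, $y = a$, $z = aaa$: the loop on $q_0$ can be repeated any number of times.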

Note that the condition described by the Pumping Lemma is necessary, but not sufficient, for $L$ to be regular. In other words, while it holds for any regular language, it also holds for some non-regular languages. Thus we can prove that $L$ is not regular by showing that the Pumping Lemma is violated, but we cannot prove that $L$ is regular by showing that it is fulfilled.

Logically, the Pumping Lemma consists of a nested series of quantifiers. Let’s phrase it in terms of $\exists$ (there exists) and $\forall$ (for all). If $L$ is regular, then $\exists$ an integer $p$ such that $\forall w \in L$ with $|w| \geq p$, $\exists$ strings $x, y, z$ such that $w = xyz$, $|xy| \leq p$, and $|y| \geq 1$, such that $\forall$ integers $t \geq 0$, $xy^t z \in L$. Negating all this turns the $\exists$s into $\forall$s and vice versa. Thus if you want to use the Pumping Lemma to show that $L$ is not regular, you need to show that $\forall$ integers $p$, $\exists w \in L$ with $|w| \geq p$ such that $\forall$ strings $x, y, z$ such that $w = xyz$, $|xy| \leq p$, and $|y| \geq 1$, $\exists$ an integer $t \geq 0$ such that $xy^t z \notin L$.

You can think of this kind of proof as a game. You are trying to prove that $L$ is not regular, and I am trying to stop you. The $\exists$s are your turns, and the $\forall$s are my turns. I get to choose the integer $p$. No matter what $p$ I choose, you need to be able to produce a string $w \in L$ of length at least $p$, such that no matter how I try to divide it into a beginning, middle, and end by writing $w = xyz$, you can produce a $t$ such that $xy^t z \notin L$.

If you have a winning strategy in this game, then the Pumping Lemma is violated and $L$ is not regular. But it’s not enough, for instance, for you to give an example of a word that can’t be pumped—you have to be able to give such a word which is longer than any $p$ that I care to name.

Let’s illustrate this by proving that the language

$L = \{a^n b^n : n \geq 0\}$

is not regular. This is extremely easy with the equivalence class method, but let’s use the Pumping Lemma instead. First I name an integer $p$. You then reply with $w = a^p b^p$. No matter how I try to write it as $w = xyz$, the requirement that $|xy| \leq p$ means that both $x$ and $y$ consist of $a$s. In particular, $y = a^j$ for some $j \geq 1$, since $y \neq \varepsilon$. But then you can take $t = 0$, and point out that $xz = a^{p-j} b^p \notin L$. Any other $t \neq 1$ works equally well.

Exercise 22.

Prove that each of these languages is nonregular, first using the equivalence class method and then using the Pumping Lemma.

  1. .

  2. The language of palindromes over a two-symbol alphabet, i.e., $\{w \in \{a, b\}^* : w = w^R\}$.

  3. The language of words repeated twice, $\{ww : w \in \{a, b\}^*\}$.

  4. .

Exercise 23.

Given a language $L$, the language $\mathrm{sort}(L)$ consists of the words in $L$ with their characters sorted in alphabetical order. For instance, if

$L = \{abab, ba\}$

then

$\mathrm{sort}(L) = \{aabb, ab\}$.

Give an example of a regular language $L$ such that $\mathrm{sort}(L)$ is nonregular, and a nonregular language $L$ such that $\mathrm{sort}(L)$ is regular. You may use any technique you like to prove that the languages are nonregular.

6 Regular Expressions

In a moment, we will move on from finite to infinite-state machines, and define classes of automata that recognize many of the non-regular languages described above. But first, it’s worth looking at one more characterization of regular languages, because of its elegance and common use in the real world. A regular expression is a parenthesized expression formed of symbols in the alphabet $\Sigma$ and the empty word $\varepsilon$, combined with the operators of concatenation, union (often written $\mid$ instead of $\cup$) and the Kleene star $*$. Each such expression represents a language. For example,

$(a \cup ba)^* \, (b \cup \varepsilon)$

represents the set of strings generated in the following way: as many times as you like, including zero, print $a$ or $ba$. Then, if you like, print $b$. A moment’s thought shows that this is our old friend $L$, the language with no $bb$. There are many other regular expressions that represent the same language, such as

$(\varepsilon \cup b)(a \cup ab)^*$.

We can define regular expressions inductively as follows.

  1. The empty set $\emptyset$ is a regular expression.

  2. The empty word $\varepsilon$ is a regular expression.

  3. Any symbol $a \in \Sigma$ is a regular expression.

  4. If $R_1$ and $R_2$ are regular expressions, then so is $R_1 R_2$.

  5. If $R_1$ and $R_2$ are regular expressions, then so is $R_1 \cup R_2$.

  6. If $R$ is a regular expression, then so is $R^*$.

In case it isn’t already clear what language a regular expression represents, we can define this inductively as well: $\emptyset$ represents the empty language, $\varepsilon$ represents $\{\varepsilon\}$, a symbol $a$ represents $\{a\}$, $R_1 R_2$ represents the concatenation of the languages represented by $R_1$ and $R_2$, $R_1 \cup R_2$ represents their union, and $R^*$ represents the Kleene star of the language represented by $R$.

Regular expressions can express exactly the languages that DFAs and NFAs recognize. In one direction, the proof is easy:

Theorem 3.

If a language can be represented as a regular expression, then it is regular.

Proof.

This follows inductively from the fact that $\emptyset$, $\{\varepsilon\}$, and $\{a\}$ are regular languages, and that the regular languages are closed under concatenation, union, and $*$. ∎

The other direction is a little harder:

Theorem 4.

If a language is regular, then it can be represented as a regular expression.

Proof.

We start with the transition diagram of an NFA $M$ that recognizes $L$. We allow each arrow to be labeled with a regular expression instead of just a single symbol. We will shrink the diagram, removing states and edges and updating these labels in a certain way, until there are just two states left with a single arrow between them.

First we create a single accepting state $q_{\mathrm{acc}}$ by drawing $\varepsilon$-transitions to it from each state in $F$. We then reduce the number of states as follows. Let $q$ be a state other than $q_0$ and $q_{\mathrm{acc}}$. We can remove $q$, creating new transitions between its neighbors. For each pair of states $q_1$ and $q_2$ with arrows leading from $q_1$ to $q$ and from $q$ to $q_2$, labeled with $R_1$ and $R_2$ respectively, we create an arrow from $q_1$ to $q_2$ labeled with $R_1 R_2$. If $q$ had a self-loop labeled with $R$, we label the new arrow with $R_1 R^* R_2$ instead.

We also reduce the number of edges as follows. Whenever we have two arrows pointing from $q_1$ to $q_2$ labeled with expressions $R_1$ and $R_2$ respectively, we replace them with a single arrow labeled with $R_1 \cup R_2$. Similarly, if a state has two self-loops labeled $R_1$ and $R_2$, we replace them with a single self-loop labeled $R_1 \cup R_2$.

We show these rules in Figure 5. The fact that they work follows from our definition of the language represented by a regular expression. A path of length two gets replaced with $R_1 R_2$ since we go through one arrow and then the other, a loop gets replaced with $R^*$ since we can go around it any number of times, and a pair of arrows gets replaced with $R_1 \cup R_2$ since we can follow one arrow or the other.

After we have reduced the diagram to a single arrow from $q_0$ to $q_{\mathrm{acc}}$ labeled $R$, with perhaps a self-loop on $q_0$ labeled $R_0$, then the regular expression for the language is $R_0^* R$. If there is no such self-loop, then the regular expression is simply $R$. ∎

Figure 5: Rules for reducing the size of an NFA’s transition diagram while labeling its arrows with regular expressions. When the entire diagram has been reduced to $q_0$ and $q_{\mathrm{acc}}$ with a single arrow between them, its label is the regular expression for $L$.
Exercise 24.

Apply this algorithm to the DFA for $L$, our language with no $bb$, and for the three languages in Exercise 2. Note that there are usually multiple orders in which you can remove states from the diagram. Use these to produce two different regular expressions for each of these languages.

Like DFAs and NFAs, regular expressions make it easier to prove certain things about regular languages.

Exercise 25.

Recall the definition of $L^R$ from Exercise 8. Give a simple inductive proof using regular expressions that $L$ is regular if and only if $L^R$ is regular.

The reader might wonder why we don’t allow other closure operators, like intersection or complement, in our definition of regular expressions. In fact we can, and these operators can make regular expressions exponentially more compact. In practice we usually stick to concatenation, union, and $*$ because there are efficient algorithms for searching a text for strings matched by expressions of this form.
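
For instance, here is how the no-$bb$ language from Section 1 looks in Python's re module, which writes union as |. The particular pattern is one of many equivalent choices, corresponding to the expression $(a \cup ba)^*(b \cup \varepsilon)$ above.

    import re

    # (a|ba)*b? : print a or ba as often as you like, then optionally a final b.
    NO_BB = re.compile(r"(?:a|ba)*b?")

    def no_bb(w: str) -> bool:
        return NO_BB.fullmatch(w) is not None

    assert no_bb("abab") and no_bb("b") and not no_bb("abba")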

Note that unlike DFAs and NFAs, regular expressions do not give an algorithm for recognizing a language, taking a word $w$ as input and saying “yes” or “no” depending on whether $w \in L$ or not. Instead, they define a language, creating it “all at once” as a set. Below we will see yet another approach to languages—a grammar that generates words, building them from scratch according to simple rules. But first, let’s step beyond DFAs and NFAs, and look at some simple kinds of infinite-state machines.

7 Counter Automata

We saw earlier that some languages require an infinite-state machine to recognize them. Of course, any language can be recognized by some infinite-state machine, as the following exercise shows:

Exercise 26.

Show that any language can be recognized by a machine with a countably infinite set of states. Hint: consider an infinite tree where each node has $|\Sigma|$ children.

But the vast majority of such machines, like the vast majority of languages, have no finite description. Are there reasonable classes of infinite-state machines whose state spaces are structured enough for us to describe them succinctly?

A common way to invent such machines is to start with a finite-state automaton and give it access to some additional data structure. For instance, suppose we give a DFA access to a counter: a data structure whose states correspond to nonnegative integers. To keep things simple, we will only allow the DFA to access and modify this counter in a handful of ways. Specifically, the DFA can only update the counter by incrementing or decrementing its value by $1$. And rather than giving the DFA access to the counter’s value, we will only allow it to ask whether it is zero or not.

Let’s call this combined machine a deterministic one-counter automaton, or $1$-DCA for short. We can represent it in several ways. One is with a transition function of the form

$\delta : Q \times \{\text{zero}, \text{nonzero}\} \times \Sigma \to Q \times \{+1, -1, 0\}$

This function takes the current state of the DFA, the zeroness or nonzeroness of the counter, and an input symbol. It returns the new state of the DFA and an action to perform on the counter: increment ($+1$), decrement ($-1$), or leave it alone ($0$). As before, we specify an initial state $q_0$. For simplicity, we take the initial value of the counter to be zero.

The state space of a $1$-DCA looks roughly like Figure 3, although there the counter takes both positive and negative values. More generally, the state space consists of a kind of product of the DFA’s transition diagram and the natural numbers, with a state for each pair $(q, c)$ where $q \in Q$ and $c \in \mathbb{N}$. It accepts a word if its final state is in some accepting set. However, to allow its response to depend on the counter as well as on the DFA, we define $F$ as a subset of $Q \times \{\text{zero}, \text{nonzero}\}$.

We have already seen several non-regular languages that can be recognized by a one-counter automaton, such as the language of words with an equal number of $a$s and $b$s. Thus counter automata are more powerful than DFAs. On the other hand, a language like the set of palindromes seems to require much more memory than a one-counter automaton possesses. Can we bound the power of counter automata, as we bounded the power of finite-state automata before?
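
As an illustration, here is a sketch of a $1$-DCA for the equal-numbers language. The code is ours, not the notes': the DFA state remembers which symbol is currently in surplus, and the counter stores the size of that surplus, accessed only through increments, decrements, and zero-tests.

    def equal_ab(w: str) -> bool:
        """A one-counter automaton for {w : #a(w) = #b(w)}."""
        state, counter = "even", 0      # which symbol is in surplus, and by how much
        for symbol in w:
            if counter == 0:
                state = "a_surplus" if symbol == "a" else "b_surplus"
                counter += 1
            elif (state == "a_surplus") == (symbol == "a"):
                counter += 1            # the surplus grows
            else:
                counter -= 1            # the surplus shrinks
        return counter == 0             # accept iff the counts balance

    assert equal_ab("abba") and not equal_ab("aab")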

Let’s generalize our definition to automata with $k$ counters, allowing the DFA to increment, decrement, and check each one for zero, and call such things $k$-DCAs. As in Exercise 20, the state space of such a machine is essentially a $k$-dimensional grid. The following theorem uses the equivalence class machinery we invented for DFAs. It shows that, while the number of equivalence classes of a language accepted by a $k$-DCA may be infinite, it must grow in a controlled way as a function of the length of the input.

Theorem 5.

Let $M$ be a $k$-DCA. In its first $n$ steps, the number of different states it can reach is $O(n^k)$, i.e., at most $C n^k$ for some constant $C$ (where $C$ depends on $M$ but not on $n$). Therefore, if $M$ recognizes a language $L$, it must be the case that $\sim_L$ has $O(n^k)$ different equivalence classes among words of length $n$.

Exercise 27.

Prove this theorem.

Note that these automata are required to run in “real time,” taking a single step for each input symbol and returning their answer as soon as they reach the end of the input. Equivalently, there are no -transitions. As we will see in Section 7.6, relaxing this requirement makes even two-counter automata capable of universal computation.

Theorem 5 lets us prove pretty much everything we might want to know about counter automata. For starters, the more counters we have, the more powerful they are:

Exercise 28.

Describe a family of languages $L_k$ for $k \geq 1$ that can be recognized by a $k$-DCA but not by a $j$-DCA for any $j < k$.

The next exercise confirms our intuition that we need $k_1 + k_2$ counters to run a $k_1$-DCA and a $k_2$-DCA in parallel.

Exercise 29.

Show that for any integers $k_1, k_2$, there are languages $L_1$ and $L_2$ that can be recognized by a $k_1$-DCA and a $k_2$-DCA respectively, such that $L_1 \cap L_2$ and $L_1 \cup L_2$ cannot be recognized by a $k$-DCA for any $k < k_1 + k_2$.

On the other hand, if we lump $k$-DCAs together for all $k$ into a single complexity class, it is closed under all Boolean operations:

Exercise 30.

Say that $L$ is deterministic constant-counter if it can be recognized by a $k$-DCA for some constant $k$. Show that the class of such languages is closed under intersection, union, and complement.

We can similarly define nondeterministic counter automata, or NCAs. As with NFAs, we say that an NCA accepts a word if there exists a computation path leading to an accepting state. NCAs can be more powerful than DCAs with the same number of counters:

Exercise 31.

Consider the language

$L = \{a^l b^m c^n : l = m \text{ or } m = n\}$

Show that $L$ can be recognized by a $1$-NCA, but not by a $1$-DCA.

Note how the $1$-NCA for this language uses nondeterminism in an essential way, choosing whether to compare the $a$s with the $b$s, or the $b$s with the $c$s.

The next exercise shows that even $1$-NCAs cannot be simulated by DCAs with any constant number of counters. Thus unlike finite-state automata, for counter automata, adding nondeterminism provably increases their computational power.

Exercise 32.

Recall that $\mathrm{Pal}$ is the language of palindromes over a two-symbol alphabet. Show that

  1. $\mathrm{Pal}$ is not a deterministic constant-counter language, but

  2. its complement $\overline{\mathrm{Pal}}$ can be recognized by a $1$-NCA.

Conclude that $1$-NCAs can recognize some languages that cannot be recognized by a $k$-DCA for any $k$.

At the moment, it’s not clear how to prove that a language cannot be recognized by a $1$-NCA, let alone by a $k$-NCA. Do you have any ideas?

8 Stacks and Push-Down Automata

Let’s continue defining machines where a finite-state automaton has access to a data structure with an infinite, but structured, set of states. One of the most well-known and natural data structures is a stack. At any given point in time, it contains a finite string written in some alphabet. The user of this data structure—our finite automaton—is allowed to check to see whether the stack is empty, and to look at the top symbol if it isn’t. It can modify the stack by “pushing” a symbol on top of the stack, or “popping” the top symbol off of it.

A stack works in “last-in, first-out” order. Like a stack of plates, the top symbol is the one most recently pushed onto it. It can store an infinite amount of information, but it can only be accessed in a limited way—in order to see symbols deep inside the stack, the user must pop off the symbols above them.

Sadly, for historical reasons a finite-state automaton connected to a stack is called a push-down automaton or PDA, rather than a stack automaton (which is reserved for a fancier type of machine). We can write the transition function of a deterministic PDA, or DPDA, as follows. As before, $Q$ denotes the DFA’s state space and $\Sigma$ denotes the input alphabet, and now $\Gamma$ denotes the alphabet of symbols on the stack.

$\delta : Q \times (\Gamma \cup \{\text{empty}\}) \times \Sigma \to Q \times (\{\text{push } b : b \in \Gamma\} \cup \{\text{pop}, \text{none}\})$

This takes the DFA’s current state, the top symbol of the stack or the fact that it is empty, and an input symbol. It returns a new state and an action to perform on the stack.

We start in an initial state $q_0$ and with an empty stack. Once again, we accept a word if the final state is in some subset $F$. However, we will often want the criterion for acceptance to depend on the stack being empty, so we define $F$ as a subset of $Q \times \{\text{empty}, \text{nonempty}\}$. (This differs minutely from the definition you may find in other books, but it avoids some technical annoyances.) We denote the language recognized by a DPDA $M$ as $L(M)$.

Figure 6: A deterministic push-down automaton with three stack symbols, one for each type of bracket, recognizing a word in the language of properly nested and matched strings with three types of brackets. The stack is empty at the beginning and end of the process.

The canonical languages recognized by push-down automata are the bracket languages of Exercise 21, as illustrated in Figure 6. Each symbol has a matching partner: the DPDA pushes a stack symbol when it sees a left bracket, and pops when it sees a right bracket. There is a stack symbol for each type of bracket, and we use these symbols to check that the type of each bracket matches that of its partner. When the stack is empty, all the brackets have been matched, and we can accept. Due to the last-in, first-out character of the stack, these partners must be nested as in $[()]$, rather than crossing as in $([)]$. If we push one symbol and then another, we have to pop the second before we can pop the first.
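
The stack discipline in Figure 6 is easy to sketch in code. Here is an illustrative version for three bracket types; the representation of the stack as a Python list is our own choice.

    MATCH = {")": "(", "]": "[", "}": "{"}   # three types of brackets

    def balanced(w: str) -> bool:
        """Push on a left bracket, pop and compare on a right bracket,
        and accept iff the stack is empty at the end."""
        stack = []
        for symbol in w:
            if symbol in "([{":
                stack.append(symbol)                        # push
            elif not stack or stack.pop() != MATCH[symbol]:
                return False                                # pop on empty, or mismatch
        return not stack                                    # accept with empty stack

    assert balanced("([]{})") and not balanced("([)]")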

As you already know if you did Exercise 21, the state space of a PDA is shaped like a tree. If $|\Gamma| = k$, each node in this tree has $k$ children. Pushing the $i$th symbol corresponds to moving to your $i$th child, and popping corresponds to moving up to your parent. At the root of the tree, the stack is empty.

In the case of the bracket language, the DFA has a single state $q$, and $F = \{(q, \text{empty})\}$. With additional states, we can impose regular-language-like constraints, such as demanding that we are never inside more than one layer of curly brackets. More generally:

Exercise 33.

Show that the DPDA languages are closed under intersection with regular languages. That is, if $L_1$ can be recognized by a DPDA and $L_2$ is regular, then $L_1 \cap L_2$ can be recognized by a DPDA.

We can define NPDAs in analogy to our other nondeterministic machines, allowing $\delta$ to be multi-valued and accepting $w$ if a computation path exists that ends in an accepting state.

Exercise 34.

Show that the NPDA languages are also closed under intersection with regular languages.

As for counter machines, we will see below that NPDAs are strictly more powerful than DPDAs. For now, note that NPDAs can recognize palindromes:

Exercise 35.

Show that $\mathrm{Pal}$ can be recognized by an NPDA. Do you think it can be recognized by a DPDA? How could you change the definition of $\mathrm{Pal}$ to make it easier for a DPDA?

As we will see below, PDAs are incomparable with counter automata—each type of machine can recognize some languages that the other cannot. For now, consider the following exercises:

Exercise 36.

Show that a $1$-DCA can be simulated by a DPDA, and similarly for $1$-NCAs and NPDAs. Do you think this is true for two-counter automata as well?

Exercise 37.

Is the two-bracket language $D_2$ a deterministic constant-counter language, i.e., is it recognizable by a $k$-DCA for any constant $k$?

As for all our deterministic and nondeterministic machines, the DPDA languages are closed under complement, and the NPDA languages are closed under union. What other closure properties do you think these classes have? Which do you think they lack?

And how might we prove that a language cannot be recognized by a DPDA? Unlike counter automata, we can’t get anywhere by counting equivalence classes. In $n$ steps, a PDA can reach roughly $|Q| \cdot |\Gamma|^n$ different states. If $|\Gamma| \geq 2$, this is enough to distinguish any pair of words of length $n$ from each other, and this is what happens in $\mathrm{Pal}$. However, as we will see in the next two sections, DPDA and NPDA languages obey a kind of Pumping Lemma due to their nested nature, and we can use this to prove that some languages are beyond their ken.

9 Context-Free Grammars

The job of an automaton is to recognize a language—to receive a word as input, and answer the yes-or-no question of whether it is in the language. But we can also ask what kind of process we need to generate a language. In terms of human speech, recognition corresponds to listening to a sentence and deciding whether or not it is grammatical, while generation corresponds to making up, and speaking, our own grammatical sentences.

A context-free grammar is a model considered by the Chomsky school of formal linguistics. The idea is that sentences are recursively generated from internal mental symbols through a series of production rules. For instance, if $S$, $N$, $V$, and $A$ correspond to sentences, nouns, verbs, and adjectives, we might have rules such as $S \to N V$, $N \to A N$, and so on, and finally rules that replace these symbols with actual words. We call these rules context-free because they can be applied regardless of the context of the symbol on the left-hand side, i.e., independent of the neighboring symbols. Each sentence corresponds to a parse tree as shown in Figure 7, and parsing the sentence allows us to understand what the speaker has in mind.

Figure 7: The parse tree for a small, but delightful, English sentence.

Formally, a context-free grammar consists of a finite alphabet $V$ of variable symbols, an initial symbol $S \in V$, a finite alphabet $\Sigma$ of terminal symbols in which the final word must be written, and a finite set of production rules that let us replace a single variable symbol with a string composed of variables and terminals:

$A \to s$ where $A \in V$ and $s \in (V \cup \Sigma)^*$.

If $s, t \in (V \cup \Sigma)^*$, we write $s \Rightarrow t$ if there is a derivation, or sequence of production rules, that generates $t$ from $s$. We say that a grammar $G$ generates the language $L(G)$, consisting of all terminal words $w \in \Sigma^*$ such that $S \Rightarrow w$.

For instance, the following grammar has $V = \{S\}$, $\Sigma = \{(, ), [, ]\}$, and generates the language $D_2$ with two types of brackets:

$S \to \varepsilon \mid SS \mid (S) \mid [S]$   (1)

This shorthand means that $S$ can be replaced by any of the four strings $\varepsilon$, $SS$, $(S)$, and $[S]$.
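
As an illustration of how a grammar generates words, here is a sketch (ours, not the notes') that expands $S$ by repeatedly applying randomly chosen rules from (1), biased toward $\varepsilon$ so the derivation terminates.

    import random

    RULES = ["", "SS", "(S)", "[S]"]   # S -> ε | SS | (S) | [S], as in (1)

    def generate(depth: int = 10) -> str:
        """Expand S by a randomly chosen rule; force ε when depth runs out."""
        if depth == 0:
            return ""
        rule = random.choice(RULES)
        return "".join(generate(depth - 1) if c == "S" else c for c in rule)

    print(generate())   # e.g. '[()]()', a random word of the bracket language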