Computability logic (CoL), introduced in [4, 9, 16], is a semantical, mathematical and philosophical platform, and an ambitious program, for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which logic has more traditionally been.
Under the approach of CoL, formulas represent computational problems, and their “truth” is seen as algorithmic solvability. In turn, computational problems — understood in their most general, interactive sense — are defined as games played by a machine against its environment, with “algorithmic solvability” meaning existence of a machine that wins the game against any possible behavior of the environment. And a collection of the most basic and natural operations on interactive computational problems forms the logical vocabulary of the theory. With this semantics, CoL provides a systematic answer to the fundamental question “what can be computed?”, just as classical logic is a systematic tool for telling what is true. Furthermore, as it turns out, in positive cases “what can be computed” always allows itself to be replaced by “how it can be computed”, which makes CoL of potential interest not only in theoretical computer science, but in many more applied areas as well, including interactive knowledge base systems, resource-oriented systems for planning and action, and declarative programming languages.
On the logical side, CoL promises to be an appealing, constructive and computationally meaningful alternative to classical logic as a basis for applied theories. The first concrete steps towards realizing this potential have been made very recently in [18, 19], where the CoL-based versions CLA1 and CLA4 of Peano arithmetic were elaborated. All theorems of the former express number-theoretic computational problems with algorithmic solutions, and all theorems of the latter express number-theoretic computational problems with polynomial time solutions. In either case, solutions can be effectively extracted from proofs, which reduces problem-solving to theorem-proving. Furthermore, CLA4 has also been shown to be complete in the sense that every number-theoretic computational problem with a polynomial time solution is represented by some theorem of the system.
The formalism of CoL is open-ended, and is expected to undergo a series of extensions as studies of the subject advance. Correspondingly, among the main goals of CoL at the present early stage of development remains identifying the most natural and potentially interesting operations on computational problems, and finding axiomatizations for the corresponding sets of valid formulas. Considerable advances have already been made in this direction, and the present paper tells one more success story.
The main operations studied so far are:
Constant elementary games (0-ary operations):
⊤ (the automatically won game) and ⊥ (the automatically lost game).
∧ (parallel conjunction) and ∨ (parallel disjunction);
⋀ (parallel universal quantifier) and ⋁ (parallel existential quantifier);
parallel recurrence and parallel corecurrence.
⊓ (choice conjunction) and ⊔ (choice disjunction);
⊓x (choice universal quantifier) and ⊔x (choice existential quantifier).
△ (sequential conjunction) and ▽ (sequential disjunction);
△x (sequential universal quantifier) and ▽x (sequential existential quantifier);
sequential recurrence and sequential corecurrence.
∀ (blind universal quantifier) and ∃ (blind existential quantifier).
Branching recurrence and branching corecurrence.
The branching operations have a number of natural sharpenings, among which are the finite and the countable versions of branching recurrence and corecurrence.
There are also various reduction operations: →, defined by A → B = ¬A ∨ B; and the reduction operations obtained from it by strengthening the antecedent with one of the recurrences — the branching, parallel and sequential versions; etc.
The present paper introduces the following new group:
Toggling conjunction and toggling disjunction;
Toggling universal quantifier and toggling existential quantifier;
Toggling recurrence and toggling corecurrence;
Toggling-branching recurrence and toggling-branching corecurrence.
This group also induces two further reduction operations, obtained by strengthening the antecedent of → with the toggling recurrence and the toggling-branching recurrence, respectively.
The main technical result of this paper is constructing a sound and complete axiomatization for the propositional fragment of CoL whose logical vocabulary consists of ⊤, ⊥, ¬ and the parallel, choice, sequential and toggling conjunctions and disjunctions.
2 A quick tour of the operation zoo of computability logic
In this section we give a very brief and informal overview of the language of computability logic and the game-semantical meanings of its main operators for those unfamiliar with the subject. In what follows, ⊤ and ⊥ are symbolic names for the players to which we earlier referred as the machine and the environment, respectively.
First of all, it should be noted that computability logic is a conservative extension of classical logic. Classical propositions — as well as predicates as generalized propositions — are viewed as special, elementary sorts of games that have no moves and are automatically won by the machine if true, and lost if false. The languages of various reasonably expressive fragments of computability logic would typically include two sorts of atoms: elementary atoms p, q, r, s, … to represent elementary games, and general atoms P, Q, R, S, … to represent any, not-necessarily-elementary, games. The classically-shaped operators are conservative generalizations of the corresponding classical operations from elementary games to all games. This means that, when applied to elementary games, they again produce elementary games, and their meanings happen to coincide with the classical meanings.
2.1 Constant elementary games
These are two 0-ary “operations”, for which we use the same symbols ⊤ and ⊥ as for the two players. ⊤ is an elementary game automatically won by player ⊤, and ⊥ is an elementary game won by ⊥. Just as classical logic, computability logic sees no difference between two true or two false propositions, so that we have “Snow is white” = ⊤ and “Snow is black” = ⊥.
2.2 Negation

Negation ¬ is a role-switch operation: ¬A is obtained from A by turning ⊤’s (legal) moves and wins into ⊥’s (legal) moves and wins, and vice versa. For example, if Chess means the game of chess from the point of view of the white player, then ¬Chess is the same game from the point of view of the black player. (Here and later, we consider a version of chess with no draw outcomes — for instance, the one where a draw is declared to be a win for the black player — so that each play is won by one of the players and lost by the other player.) And where ⊤ is an elementary game automatically won by ⊤, ¬⊤ is an elementary game automatically won by ⊥ — there are no moves to interchange here, so only the winners are interchanged. From this explanation it must be clear that ¬, when applied to elementary games (propositions or predicates), indeed acts like classical negation, as promised.
2.3 Choice operations
The choice operations model decision steps in the course of interaction, with disjunction and existential quantifier meaning ⊤’s choices, and conjunction and universal quantifier meaning choices by ⊥. For instance, where f is a function, ⊓x⊔y(y = f(x)) is a game in which the first move/choice is by the environment, consisting in specifying a particular value m for x. Such a move, which intuitively can be seen as asking the question “what is the value of f(m)?”, brings the game down to the position ⊔y(y = f(m)). The next step is by the machine, which should specify a value n for y, further bringing the game down to the elementary game n = f(m), won by the machine if true and lost if false. ⊤’s move can thus be seen as answering/claiming that n is the value of f(m). From this explanation it must be clear that ⊓x⊔y(y = f(x)) represents the problem of computing f, with ⊤ having an algorithmic winning strategy for this game iff f is a computable function. Similarly, where p(x) is a predicate, ⊓x(p(x) ⊔ ¬p(x)) represents the problem of deciding p(x): here, again, the first move is by the environment, consisting in choosing a value m for x (asking whether p(m) is true); and the next step is by the machine which, in order to win, should choose the true disjunct of p(m) ⊔ ¬p(m), i.e. correctly answer the question. Formally, A ⊓ B can be defined as ¬(¬A ⊔ ¬B), or A ⊔ B can be defined as ¬(¬A ⊓ ¬B); furthermore, assuming that the universe of discourse is {0, 1, 2, …}, ⊓xA(x) can be defined as A(0) ⊓ A(1) ⊓ A(2) ⊓ … and ⊔xA(x) as A(0) ⊔ A(1) ⊔ A(2) ⊔ …. It should be mentioned that making an initial choice of a component by the corresponding player in a choice combination of games is not only that player’s privilege, but also an obligation: the player will be considered to lose the game if it fails to make a choice.
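The two-move game ⊓x⊔y(y = f(x)) described above admits a minimal executable sketch: the environment moves first by naming x, the machine answers with some y, and the machine wins iff its answer is correct. (The function and strategy names below are illustrative, not from the paper.)

```python
# A minimal model of the game ⊓x⊔y(y = f(x)): the environment moves first
# by choosing x, then the machine must answer with some y, winning iff
# y = f(x).  A machine has a winning strategy iff it can compute f.

def play_choice_game(f, env_move, machine_strategy):
    """Play one run of the game; return True iff the machine wins."""
    x = env_move                # environment's choice: down to ⊔y(y = f(x))
    y = machine_strategy(x)     # machine's choice: down to the elementary y = f(x)
    return y == f(x)

# A machine that computes f wins against every environment move:
square = lambda x: x * x
assert all(play_choice_game(square, x, square) for x in range(100))

# A machine that ignores x loses for some environment move:
assert not all(play_choice_game(square, x, lambda _: 0) for x in range(100))
```

The obligation clause of the text corresponds to the fact that failing to produce `y` at all would count as a loss for the machine.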
2.4 Parallel operations
The parallel operations combine games in a way that corresponds to the intuition of concurrent computations. Playing A ∧ B or A ∨ B means playing, in parallel, the two games A and B. In A ∧ B, ⊤ is considered the winner if it wins in both of the components, while in A ∨ B it is sufficient to win in one of the components. Then the parallel quantifiers are defined by ⋀xA(x) = A(0) ∧ A(1) ∧ A(2) ∧ … and ⋁xA(x) = A(0) ∨ A(1) ∨ A(2) ∨ …, with the parallel recurrence of A being the infinite conjunction A ∧ A ∧ A ∧ … and the parallel corecurrence of A being the infinite disjunction A ∨ A ∨ A ∨ ….
To appreciate the difference between choice operations and their parallel counterparts, let us compare the games Chess ∨ ¬Chess and Chess ⊔ ¬Chess. The former is, in fact, a simultaneous play on two boards, where on the left board ⊤ plays white, and on the right board ⊤ plays black. There is a simple strategy for ⊤ that guarantees success against any adversary. All that ⊤ needs to do is to mimic, in Chess, the moves made by ⊥ in ¬Chess, and vice versa. On the other hand, winning the game Chess ⊔ ¬Chess is not easy: here, at the very beginning, ⊤ has to choose between Chess and ¬Chess, and then win the chosen one-board game.
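The mimicking (copycat) strategy can be sketched on an abstract two-player zero-sum game, where a finished play is a move sequence and a referee function decides whether the “white” role won it. (An illustrative model only, not the formal CoL definition.)

```python
# Copycat for G ∨ ¬G: by copying the adversary's moves across the two
# boards, the machine keeps both boards carrying the same play while
# occupying opposite roles on them -- so it wins on exactly one board,
# which is enough to win the parallel disjunction.

def copycat_wins_disjunction(referee, adversary, rounds=10):
    play = []                                  # the shared move sequence
    for _ in range(rounds):
        play.append(adversary(tuple(play)))    # adversary moves on one board,
        # ...and the machine instantly repeats that move on the other board,
        # so the two boards always show the same position.
    white_won = referee(tuple(play))
    won_left = white_won           # left board: machine plays the white role
    won_right = not white_won      # right board (¬G): machine plays black
    return won_left or won_right

# Whatever the referee and however the adversary plays, the machine wins:
for parity in (0, 1):
    referee = lambda play, p=parity: sum(play) % 2 == p
    adversary = lambda play: len(play) * 3 + 1
    assert copycat_wins_disjunction(referee, adversary)
```

The point of the sketch is that `won_left or won_right` holds no matter how the referee decides the shared play: the machine's two results are complementary by construction.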
While all classical tautologies automatically hold when the classically-shaped operators are applied to elementary games, in the general (nonelementary) case the class of valid principles shrinks. For example, P → P ∧ P is no longer valid. The above “mimicking strategy” would obviously fail in the three-board game ¬Chess ∨ (Chess ∧ Chess), for here the best that ⊤ can do is to pair ¬Chess with one of the two conjuncts of Chess ∧ Chess. It is possible that then ¬Chess and the unmatched Chess are both lost, in which case the whole game will be lost. As much as this example may remind us of linear logic, it should be noted that the class of principles with parallel connectives validated by computability logic is not the same as the class of multiplicative formulas provable in linear or affine logic. An example separating CoL from both linear and affine logics is Blass’s principle ((P ∧ Q) ∨ (R ∧ S)) → ((P ∨ R) ∧ (Q ∨ S)), not provable in affine logic but valid in CoL. The same applies to principles containing choice (“additive”) and recurrence (“exponential”) operators.
2.5 Reduction

The operation →, defined in the standard way by A → B = ¬A ∨ B, is perhaps most interesting from the computability-theoretic point of view. Intuitively, A → B is the problem of reducing B to A. Putting it in other words, solving A → B means solving B while having A as an (external, environment-provided) computational resource. “Computational resource” is symmetric to “computational problem”: what is a problem (task) for the machine, is a resource for the environment, and vice versa. To get a feel for → as a problem reduction operator, let us look at reducing the acceptance problem to the halting problem. The halting problem can be expressed by
⊓x⊓y(H(x, y) ⊔ ¬H(x, y)),
where H(x, y)
is the predicate “Turing machine (encoded by) x halts on input y”. And the acceptance problem can be expressed by
⊓x⊓y(A(x, y) ⊔ ¬A(x, y)),
with A(x, y) meaning “Turing machine x accepts input y”. While the acceptance problem is not decidable, it is algorithmically reducible to the halting problem. In particular, there is a machine that always wins the game
⊓x⊓y(H(x, y) ⊔ ¬H(x, y)) → ⊓x⊓y(A(x, y) ⊔ ¬A(x, y)).
A strategy for solving this problem is to wait till the environment specifies values m and n for x and y in the consequent, thus asking the question “does machine m accept input n?”. In response, ⊤ selects the same values m and n for x and y in the antecedent (where the roles of ⊤ and ⊥ are switched), thus asking the counterquestion “does m halt on n?”. The environment will have to correctly answer this counterquestion, or else it loses. If it answers “No”, ⊤ also says “No” in the consequent, i.e., selects the right disjunct there, as not halting implies not accepting. Otherwise, if the environment’s response in the antecedent is “Yes”, ⊤ simulates machine m on input n until it halts and then selects, in the consequent, the left or the right disjunct depending on whether the simulation accepted or rejected.
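The core of this strategy can be sketched in a few lines, with toy “machines” modeled as ground-truth pairs (halts, accepts) and the halting problem supplied as an oracle-style resource. (All names here are illustrative.)

```python
# Reduction of acceptance to halting, sketched: ask the halting oracle
# first; only if the run is guaranteed to terminate do we simulate it.

def accepts_via_halting(machine, halting_oracle, simulate):
    """Answer "does machine accept?" with one question to the halting oracle."""
    if not halting_oracle(machine):
        return False          # not halting implies not accepting: answer "No"
    return simulate(machine)  # safe to simulate: the run is guaranteed to end

# Toy universe: machine = (halts_flag, accepts_flag); accepting implies halting.
toy_machines = [(False, False), (True, False), (True, True)]
oracle = lambda m: m[0]
simulate = lambda m: m[1]     # only ever called on halting machines
for m in toy_machines:
    assert accepts_via_halting(m, oracle, simulate) == m[1]
```

Note how the oracle call shields the strategy from the one scenario (a non-halting run) in which naive simulation would never return.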
2.6 Blind operations
The blind group of operations comprises ∀ and its dual ∃ (where ∃xA(x) = ¬∀x¬A(x)). The meaning of ∀xA(x) is similar to that of ⊓xA(x), with the difference that the particular value of x that the environment “selects” is invisible to the machine (more precisely, there is no move signifying such a “selection”), so that it has to play blindly in a way that guarantees success no matter what that value is. This way, ∀ and ∃ produce games with imperfect information.
Compare the problems ⊓x(Even(x) ⊔ Odd(x)) and ∀x(Even(x) ⊔ Odd(x)).
Both of them are about telling whether an arbitrary given number is even or odd; the difference is only in whether that “given number” is communicated to the machine or not. The first problem is an easy-to-win, two-move-deep game of a structure that we have already seen. The second game, on the other hand, is one-move deep with only the machine to make a move — select the “true” disjunct, which is hardly possible to do as the value of x remains unspecified.
As an example of a solvable nonelementary ∀-problem, let us look at
∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x + y) ⊔ Odd(x + y))),
solving which means solving what follows “∀x” without knowing the value of x. Unlike ∀x(Even(x) ⊔ Odd(x)), this game is certainly winnable: the machine waits till the environment selects a value n for y in the consequent and also selects one of the ⊔-disjuncts in the antecedent (if either selection is never made, the machine automatically wins). Then: if n is even, in the consequent the machine makes the same selection left or right as the environment made in the antecedent, and otherwise, if n is odd, it reverses the environment’s selection.
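The strategy just described depends only on the two visible moves, never on x itself, and this can be checked exhaustively on a small range:

```python
# The blind-quantifier strategy: the machine never learns x, but once the
# environment picks n for y and claims a parity for x in the antecedent,
# the parity of x + n is determined by those two visible moves alone.

def consequent_choice(antecedent_choice, n):
    """antecedent_choice: the environment's claim about x ('even' or 'odd')."""
    if n % 2 == 0:
        return antecedent_choice            # parity of x+n equals parity of x
    return "odd" if antecedent_choice == "even" else "even"

# The machine wins whenever the environment's antecedent claim is true
# (if the claim is false, the machine wins the implication automatically):
for x in range(50):
    for n in range(50):
        true_claim = "even" if x % 2 == 0 else "odd"
        expected = "even" if (x + n) % 2 == 0 else "odd"
        assert consequent_choice(true_claim, n) == expected
```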
2.7 Sequential operations
One of the ways to characterize the sequential conjunction A △ B is to say that this is a game that starts and proceeds as a play of A; it will also end as an ordinary play of A unless, at some point, ⊥ decides — by making a special switch move — to abandon A and switch to B. In such a case the play restarts, continues and ends as an ordinary play of B without the possibility to go back to A. The sequential disjunction A ▽ B is the same, only here it is ⊤ who decides whether and when to switch from A to B. These generalize to the infinite cases A0 △ A1 △ A2 △ … and A0 ▽ A1 ▽ A2 ▽ …: here the corresponding player can make any finite number n of switches, in which case the winner in the play will be the player who wins in An; and if an infinite number of switches are made, then the player responsible for this is considered the loser. The sequential quantifiers, as we may guess, are defined by △xA(x) = A(0) △ A(1) △ A(2) △ … and ▽xA(x) = A(0) ▽ A(1) ▽ A(2) ▽ …,
and the sequential recurrence and corecurrence of A are defined, accordingly, as A △ A △ A △ … and A ▽ A ▽ A ▽ ….
Below are a couple of examples providing insights into the computational intuitions associated with the sequential operations. See  for more.
Let p(x) be any predicate. Then the game ⊓x(¬p(x) ▽ p(x)) represents the problem of semideciding p(x): it is not hard to see that this game has an effective winning strategy by ⊤ iff p(x) is semidecidable (recursively enumerable). Indeed, if p(x) is semidecidable, a winning strategy is to wait until ⊥ selects a particular value n for x, thus bringing the game down to ¬p(n) ▽ p(n). After that, ⊤ starts looking for a certificate of p(n)’s being true. If and when such a certificate is found (meaning that p(n) is indeed true), ⊤ makes a switch move turning ¬p(n) into the true and hence ⊤-won p(n); and if no certificate exists (meaning that p(n) is false), then ⊤ keeps looking for a non-existent certificate forever and thus never makes any moves, meaning that the game ends as ¬p(n), which, again, is a true and hence ⊤-won elementary game. And vice versa: any effective winning strategy for ⊓x(¬p(x) ▽ p(x)) can obviously be seen as a semidecision procedure for p(x), which accepts an input n iff the strategy ever makes a switch move in the scenario where ⊥’s initial choice of a value for x is n.
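A bounded sketch of this correspondence, with a bounded search standing in for the unbounded certificate search, and with p(n) = “n is a perfect square” and the square root as certificate (an illustrative choice, not from the paper):

```python
# Semideciding p as a sequential-disjunction play: sit in the default
# "not-p" component and search for a certificate, switching to "p"
# (the single switch move) iff one is found.

def final_component(n, is_certificate, bound):
    component = "not-p"                    # the default (leftmost) component
    for c in range(bound):
        if is_certificate(n, c):
            component = "p"                # the one-and-only switch move
            break
    return component

is_cert = lambda n, c: c * c == n
for n in range(200):
    truly_p = any(c * c == n for c in range(n + 1))
    assert (final_component(n, is_cert, 200) == "p") == truly_p
```

In the unbounded original, a false p(n) simply means the loop never terminates, so no switch is ever made and the game ends in the true component ¬p(n).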
Algorithmic solvability (computability) of games has been shown to be closed under modus ponens and a number of other familiar or expected rules. In view of these closures, the validity (= “always computability”) of the principles discussed below implies certain known facts from the theory of computation. Needless to say, these examples demonstrate how CoL can be used as a systematic tool for defining new interesting properties and relations between computational problems, and for not only reproducing already known theorems but also discovering an infinite variety of new facts.
The following formula, which can be shown to be valid with respect to our semantics, implies — in a sense, “expresses” — the well known fact that, if both a predicate p(x) and its negation ¬p(x) are recursively enumerable (i.e., p(x) is both semidecidable and co-semidecidable), then p(x) is decidable:
⊓x(¬p(x) ▽ p(x)) ∧ ⊓x(p(x) ▽ ¬p(x)) → ⊓x(p(x) ⊔ ¬p(x)).   (1)
Actually, the validity of (1) means something more than just noted: it means that the problem of deciding p(x) is reducible to (the ∧-conjunction of) the problems of semideciding p(x) and ¬p(x). In fact, a reducibility in an even stronger sense (in a sense that has no name) holds, expressed by the following valid formula:
⊓x((¬p(x) ▽ p(x)) ∧ (p(x) ▽ ¬p(x)) → p(x) ⊔ ¬p(x)).   (2)
Computability logic defines computability of a game A(x) as computability of its ⊓-closure ⊓xA(x), so the prefix ⊓x can be safely removed in the above formula and, after writing simply “p” instead of “p(x)”, the validity of (2) means the same as the validity of the following propositional-level formula, provable in our sound and complete propositional system CL13:
(¬p ▽ p) ∧ (p ▽ ¬p) → p ⊔ ¬p.   (3)
Furthermore, the above principle is valid not only for predicates (elementary games), but also for all games that we consider, as evidenced by the provability of the following formula in (the sound) CL13:
(¬P ▽ P) ∧ (P ▽ ¬P) → P ⊔ ¬P.   (4)
Similarly, formula (1) remains valid with the general atom P instead of p:
⊓x(¬P(x) ▽ P(x)) ∧ ⊓x(P(x) ▽ ¬P(x)) → ⊓x(P(x) ⊔ ¬P(x)).   (5)
For our next example, remember the relation of mapping reducibility (more often called many-one reducibility) of a predicate p(x) to a predicate q(x), defined as the existence of an effective function g such that, for any n, p(n) is equivalent to q(g(n)). It is not hard to see that this relation holds if and only if the game
⊓x⊔y(p(x) ↔ q(y)),
which we abbreviate as p ≤ q, has an algorithmic winning strategy by ⊤. In this sense, p ≤ q expresses the problem of mapping reducing p(x) to q(x). Then the validity of the following formula implies the known fact that, if p(x) is mapping reducible to q(x) and q(x) is recursively enumerable, then so is p(x) (by the way, the same principle does not hold with “Turing reducible” instead of “mapping reducible”):
⊓x⊔y(p(x) ↔ q(y)) ∧ ⊓x(¬q(x) ▽ q(x)) → ⊓x(¬p(x) ▽ p(x)).   (6)
As in the earlier examples, the validity of (6), in fact, means something even more: it means that the problem of semideciding p(x) is reducible to the (∧-conjunction of the) problems of mapping reducing p(x) to q(x) and semideciding q(x).
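The classical content of (6) is the composition of a reduction function with a semidecision procedure, which can be sketched directly (bounded search again stands in for unbounded search; the toy g, q and certificate relation are illustrative choices):

```python
# From an effective g with p(x) <-> q(g(x)) and a semidecision procedure
# for q, we get a semidecision procedure for p by composition.

def semidecide_p(x, g, q_certificate, bound):
    """Accept x (return True) iff a q-certificate for g(x) shows up in time."""
    y = g(x)
    return any(q_certificate(y, c) for c in range(bound))

# Toy instance: q(y) = "y is even", with certificate c such that 2*c = y;
# p(x) = "x + x is even" (trivially always true) via g(x) = x + x.
g = lambda x: x + x
q_cert = lambda y, c: 2 * c == y
assert all(semidecide_p(x, g, q_cert, 100) for x in range(50))
```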
2.8 Branching operations
The branching operations come in the form of branching recurrence and its dual branching corecurrence, the latter being definable as the negation of the branching recurrence of the negated game. We have already seen two other — parallel and sequential — sorts of recurrences, and it might be a good idea to explain branching recurrence by comparing it with them.
What is common to all members of the family of (co)recurrence operations is that, when applied to a game A, they turn it into a game playing which means repeatedly playing A. In terms of resources, recurrence operations generate multiple “copies” of A, thus making A a reusable/recyclable resource. The difference between the various sorts of recurrences is in how “reusage” is exactly understood.
Imagine a computer that has a program successfully playing Chess. The resource that such a computer provides is obviously something stronger than just Chess, for it permits playing Chess as many times as the user wishes, while Chess, as such, only assumes one play. Even the simplest operating system would allow the user to start a session of Chess, then — after finishing or abandoning and destroying it — start a new play again, and so on. The game that such a system plays — i.e. the resource that it supports/provides — is the already familiar sequential recurrence of Chess, which assumes an unbounded number of plays of Chess in a sequential fashion. A more advanced operating system, however, would not require the user to destroy the old sessions before starting new ones; rather, it would allow the user to run as many parallel sessions as needed. This is what is captured by the parallel recurrence of Chess. As a resource, the parallel recurrence of Chess is obviously stronger than the sequential recurrence of Chess, as it gives the user more flexibility. But this is still not the strongest form of reusage. A really good operating system would not only allow the user to start new sessions of Chess without destroying old ones; it would also make it possible to branch/replicate each particular stage of each particular session, i.e. create any number of “copies” of any already reached position of the multiple parallel plays of Chess, thus giving the user the possibility to try different continuations from the same position. What corresponds to this intuition is the branching recurrence of Chess.
So, the user of the branching recurrence of A does not have to restart A from the very beginning every time it wants to reuse it; rather, it is (essentially) allowed to backtrack to any of the previous — not necessarily starting — positions and try a new continuation from there, thus depriving the adversary of the possibility to reconsider the moves it has already made in that position. This is in fact the type of reusage every purely software resource allows or would allow in the presence of an advanced operating system and unlimited memory: one can start running process A; then fork it at any stage thus creating two threads that have a common past but possibly diverging futures (with the possibility to treat one of the threads as a “backup copy” and preserve it for backtracking purposes); then further fork any of the branches at any time; and so on. The less flexible type of reusage of A assumed by the parallel recurrence, on the other hand, is closer to what infinitely many autonomous physical resources would naturally offer, such as an unlimited number of independently acting robots each performing task A, or an unlimited number of computers with limited memories, each one only capable of and responsible for running a single thread of process A. Here the effect of replicating/forking an advanced stage of A cannot be achieved unless, by good luck, there are two identical copies of the stage, meaning that the corresponding two robots or computers have so far acted in precisely the same ways. As for the sequential recurrence, it models the task performed by a single reusable physical resource — the resource that can perform task A over and over again any number of times.
The branching recurrence also has a series of weaker versions obtained by imposing various restrictions on the quantity and form of reusages. Among the interesting and natural weakenings of branching recurrence is the countable branching recurrence, in the style of Blass’s [1, 2] repetition operation. See  for a discussion of such operations.
Branching recurrence stands out as the strongest of all recurrence operations, allowing the reuse of A in the strongest algorithmic sense possible. This makes the associated reduction operation, which reduces B to the branching recurrence of A, the weakest and hence most general form of algorithmic reduction. The well known concept of Turing reduction has the same claims. The latter, however, is only defined for the traditional, non-interactive sorts of computational problems — two-step, input-output, question-answer sorts of problems that in our terms are written as ⊓x(p(x) ⊔ ¬p(x)) (the problem of deciding a predicate p) or ⊓x⊔y(y = f(x)) (the problem of computing a function f). And it is no surprise that our branching reduction, when restricted to such problems, turns out to be equivalent to Turing reduction. Furthermore, when A and B are traditional sorts of problems, branching reduction further turns out to be equivalent to parallel-recurrence-based reduction (but not to the sequential-recurrence-based one), as the differences between branching and parallel recurrence, while substantial in the general (truly interactive) case, turn out to be too subtle to be relevant when A is a game that models only a very short and simple potential dialogue between the interacting parties, consisting in just asking a question and giving an answer. The benefits from the greater degree of resource-reusage flexibility offered by branching recurrence (as opposed to parallel recurrence) are related to the possibility for the machine to try different reactions to the same action(s) by the environment in A. But such potential benefits cannot be realized when A is, say, ⊓x(p(x) ⊔ ¬p(x)), because here a given individual session of A immediately ends with an environment’s move, to which the machine simply has no legal or meaningful responses at all, let alone having multiple possible responses to experiment with.
Thus, both the branching and the parallel forms of reduction are conservative extensions of Turing reduction from traditional sorts of problems to problems of arbitrary degrees and forms of interactivity. Of these two operations, however, only the branching one has the moral right to be called a legitimate successor of Turing reducibility, in the sense that, just like Turing reducibility (in its limited context), branching rather than parallel reduction is an ultimate formal counterpart of our most general intuition of algorithmic reduction. And perhaps it is no accident that, as shown in [10, 13, 21], its logical behavior — along with that of the choice operations — is precisely captured by Heyting’s intuitionistic calculus. As an aside, this means that CoL offers a good justification — in the form of a mathematically strict and intuitively convincing semantics — of the constructivistic claims of intuitionistic logic, and a materialization of Kolmogorov’s well known yet so far rather abstract thesis, according to which intuitionistic logic is a logic of problems.
Our recurrence operations, in their logical spirit, are reminiscent of the exponential operators of linear logic. It should be noted that, as shown in , linear — in fact, affine — logic is sound but incomplete when its additives are read as our choice operators, multiplicatives as parallel operators, and exponentials as either parallel or branching recurrences. Here the sequential sort of recurrences stands out in that linear logic becomes simply unsound if its exponentials are interpreted as our sequential recurrences. The same applies to the toggling sorts of recurrences that will be introduced shortly.
Remember the concept of the Kolmogorov complexity k(x) of a number x, which can be defined as the size (logarithm) of the smallest Turing machine that returns x. Just like the acceptance problem, the Kolmogorov complexity problem ⊓x⊔y(y = k(x)) is known to be algorithmically reducible — specifically, Turing reducible — to the halting problem. Unlike the former case, however, the reduction in the latter case essentially requires repeated usage of the halting problem as a resource. Namely, the reducibility holds only in the sense of the branching, parallel or even sequential forms of reduction, but not in the sense of →. As an exercise, the reader may try to come up with an informal description of an algorithmic winning strategy for any one of these reductions.
2.9 Toggling operations
The new, toggling group of operations forms another natural phylum in this zoo of game operations.
One of the intuitive ways to characterize the toggling disjunction of two games A and B is the following. This game starts and proceeds as a play of A. It will also end as an ordinary play of A unless, at some point, ⊤ decides to switch to B, after which the game becomes B and continues as B. It will also end as B unless, at some point, ⊤ decides to switch back to A. In such a case the game again becomes A, where A resumes from the position in which it was abandoned (rather than from its start position, as would be the case, say, in A ▽ B ▽ A). Later ⊤ may again switch to (the abandoned position of) B, and so on. ⊤ wins the overall play iff it switches from one component to another (“changes its mind”, or “corrects its mistake”) at most finitely many times and wins in its final (and hence “real”) choice, i.e., in the component which was chosen last to switch to.
An alternative way to characterize the toggling disjunction is to say that it is played exactly as the choice disjunction A ⊔ B, with the only difference that ⊤ is expected to make a “choose A” or “choose B” move some finite number of times. If infinitely many choices are made, ⊤ loses. Otherwise, the winner in the play will be the player who wins in the component that was chosen last (“the eventual choice”). The case of ⊤ having made no choices at all is treated as if it had chosen A (thus, as in sequential disjunction, the leftmost component is the “default”, or “automatically made”, initial choice).
It is important to note that the adversary never knows whether a given choice of a component of the toggling disjunction is the last choice or not (and perhaps ⊤ itself does not know that either, every time it makes a choice honestly believing that the present choice is going to be final). Otherwise, if ⊤ was required to indicate that it has made its final choice, then the resulting operation would simply be the same as — more precisely, equivalent to — the choice disjunction A ⊔ B. Indeed, in the kind of games that we deal with (called static games), it never hurts a player to postpone making moves, so the adversary could just inactively wait till the last choice is declared, and start playing the chosen component only after that, as in the case of A ⊔ B; under these circumstances, making some temporary choices before making the final choice would not make any sense for ⊤, either.
What would happen if we did not require that ⊤ can change its mind only finitely many times? There would be no “final choice” in this case. So, the only natural winning condition in the case of infinitely many choices would be to say that ⊤ wins iff it simply wins in one of the components. But then the resulting operation would be the same as — more precisely, equivalent to — our kind old friend A ∨ B, as a smart ⊤ would always opt for keeping switching between components forever. That is, allowing infinitely many choices would amount to not requiring any choices at all, as is the case with ∨.
One may also ask what would happen if we allowed ⊤ to make an arbitrary initial choice between A and B and then reconsider its choice only (at most) once (“n times” instead of “once” for any particular n would not be natural or necessary to consider). Such an operation on games, albeit reasonable, would not be basic. That is because it can be expressed through our primitives as (A ▽ B) ⊔ (B ▽ A).
Thus, we have four basic and natural sorts of disjunctions: parallel ∨, toggling, sequential ▽ and choice ⊔; and denying any of these full citizenship would make computability logic unsettlingly incomplete. What is common between these four operations and what warrants the shared qualification “disjunction” is that each one is a “win one out of two” kind of a combination of games from ⊤’s perspective. ∨ is the weakest (easiest for ⊤ to win) kind of a disjunction, as it does not require any choices at all. Next comes the toggling disjunction, which does require a choice, but in the weakest sense possible. ⊔ is the hardest-to-win disjunction, requiring a choice in the strongest sense. ▽ is the next-hardest disjunction: it replaces the strict choice of ⊔ by the next-strictest kind, known in the traditional theory of computation as semidecision.
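The winning condition of toggling disjunction, in the “alternative characterization” given above, amounts to a simple referee over the record of ⊤’s (finitely many) choice moves (a sketch; the function name is illustrative):

```python
# Referee for toggling disjunction: given the machine's finite sequence of
# "A"/"B" choice moves and the truth values of the two components, the
# machine wins iff its eventual choice (the last move, defaulting to the
# leftmost component "A" when no move was made) names a true component.

def toggling_disjunction_winner(choices, a_true, b_true):
    eventual = choices[-1] if choices else "A"   # no choice = default left
    return a_true if eventual == "A" else b_true

assert toggling_disjunction_winner([], True, False)              # default A, A true
assert not toggling_disjunction_winner([], False, True)          # default A, A false
assert toggling_disjunction_winner(["B", "A", "B"], False, True) # last choice B, B true
assert toggling_disjunction_winner(["B", "A"], True, False)      # mistake corrected
```

An infinite sequence of choices, which this finite-record sketch cannot represent, is the one remaining case, and it counts as a loss for the machine.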
Does the (very) weak sort of choice captured by the toggling operations have a meaningful non-mathematical, everyday-life counterpart? Obviously it does. This is the kind of choice that one would ordinarily call (making a correct) choice after trial and error. So, an alternative, sexier name for our toggling operations could perhaps be “trial-and-error operations”. Indeed, a problem is generally considered to be solved after trial and error (a correct choice/solution/answer found) if, after perhaps coming up with several wrong solutions, a true solution is eventually found. That is, mistakes are tolerated and forgotten as long as they are eventually corrected. It is however necessary that new solutions stop coming at some point, so that there is a last solution whose correctness determines the success of the effort. Otherwise, if answers keep changing all the time, no answer has really been given after all. Or, imagine Bob has been married and divorced several times. Every time he said “I do”, he probably honestly believed that this time, at last, his bride was “the one”, with whom he would live happily ever after. Bob will be considered to have found his Ms. Right after all if and only if one of his marriages indeed turns out to be happy and final.
Back from our detour to the layman’s world: as we already know, for a predicate p(x), ⊓x(p(x) ⊔ ¬p(x)) expresses the problem of deciding p(x), and ⊓x(¬p(x) ▽ p(x)) expresses the weaker problem of semideciding p(x). What is then expressed by the toggling counterpart of the latter, with the toggling disjunction in place of ▽? This is also a decision-style problem, but still weaker than the problem of semideciding p(x). This problem has been studied in the literature under several names, the most common of which probably is recursively approximating p(x) (cf. , Definition 8.3.9). It means telling whether p(x) is true or not, but doing so in the same style as semideciding does in negative cases: by correctly saying “Yes” or “No” at some point (after perhaps taking back previous answers several times) and never reconsidering this answer afterwards. Observe that semideciding p(x) can be seen as always saying “No” at the beginning and then, if this answer is incorrect, changing it to “Yes” at some later time; so, when the answer is negative, this will be expressed by saying “No” and never taking back this answer, yet without ever indicating that the answer is final and will not change. (Unless, of course, the procedure halts by good luck. Halting without saying “Yes” can then be seen as an explicit indication that the original answer “No” was final.) Thus, the difference between semideciding and recursively approximating is that, unlike a semidecision procedure, a recursive approximation procedure can reconsider both negative and positive answers, and do so several times rather than only once.
According to Shoenfield’s Limit Lemma (cf. , Lemma 8.3.12), a predicate p(x) is recursively approximable (i.e., the problem of its recursive approximation has an algorithmic solution) iff p(x) is of Turing degree ≤ 0′, that is, iff p(x) is Turing reducible to the halting problem. It is known that this, in turn, means nothing but having the arithmetical complexity Δ₂, i.e., that both p(x) and its negation can be written in the form ∃y∀z s(x,y,z), where s(x,y,z) is a decidable predicate. (See Section 5.1 of  for a definition of all classes of the arithmetical hierarchy, including Δ₂.) In the theory of computability-in-principle (as opposed to, say, complexity theory), in importance, the class of predicates of complexity Δ₂ is second only to the classes of decidable, semidecidable and co-semidecidable predicates. This class also plays a special role in logic: it is known that a formula of classical predicate logic is valid if and only if it is true in every model where all atoms of the formula are interpreted as predicates of complexity Δ₂.
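For intuition, it may help to also recall the standard “limit” form of the lemma (stated here for the reader’s convenience): recursive approximability amounts to being a pointwise limit of a computable 0/1-valued function.

```latex
% Shoenfield's Limit Lemma, limit form: p is recursively approximable
% iff there is a computable g : N x N -> {0,1} such that, for every x,
% the value g(x,t) stabilizes as t grows, and
p(x) \;\Longleftrightarrow\; \lim_{t\to\infty} g(x,t) = 1 .
```

Here g(x,t) is exactly the (retractable) “Yes”/“No” answer that a recursive approximation procedure is displaying at time t, and the stabilization of g(x,·) corresponds to the requirement that the answer eventually stop changing.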
To see that recursive approximability of a predicate p(x) is equivalent to this predicate’s being of complexity Δ₂, first assume that p(x) is of complexity Δ₂, so that p(x) = ∃y∀z s(x,y,z) and ¬p(x) = ∃y∀z t(x,y,z) for some decidable predicates s and t. Then ⊓x(¬p(x) ⩔ p(x)) is solved by the following strategy. Wait till the environment specifies a value m for x, thus bringing the game down to ¬p(m) ⩔ p(m). Then initialize both y and z to 0, choose the p(m) component, and do the following:
- Step 1:
Check whether s(m,y,z) is true. If yes, increment z to z+1 and repeat Step 1. If not, switch to the ¬p(m) component, reset z to 0, and go to Step 2.
- Step 2:
Check whether t(m,y,z) is true. If yes, increment z to z+1 and repeat Step 2. If not, switch to the p(m) component, reset z to 0, increment y to y+1, and go to Step 1.
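To make the routine concrete, here is a minimal executable sketch of it (ours, not part of the original argument): the Σ₂ witnesses are supplied as decidable predicates s and t, the interaction is cut off after a fixed budget of primitive checks, and the answer announced last is the output. The toy predicates s and t below are hypothetical stand-ins, chosen so that p(x) means “x is even”.

```python
def approximate(s, t, x, budget):
    """Run the Step 1 / Step 2 routine for `budget` primitive checks and
    return the last announced answer: "Yes" = the p(x) component is
    currently chosen, "No" = the not-p(x) component is."""
    y, z = 0, 0
    answer, step = "Yes", 1          # start by choosing the p(x) component
    for _ in range(budget):
        if step == 1:                # verifying: for all z, s(x, y, z)?
            if s(x, y, z):
                z += 1               # no counterexample yet; keep checking
            else:
                answer, z, step = "No", 0, 2           # switch to not-p side
        else:                        # verifying: for all z, t(x, y, z)?
            if t(x, y, z):
                z += 1
            else:                    # y refuted on both sides; try y + 1
                answer, z, y, step = "Yes", 0, y + 1, 1
    return answer

# Toy instance: p(x) = "x is even", with witness y = 1 on either side.
s = lambda x, y, z: x % 2 == 0 and y == 1    # p(x)  = ∃y∀z s(x,y,z)
t = lambda x, y, z: x % 2 == 1 and y == 1    # ¬p(x) = ∃y∀z t(x,y,z)
```

Since exactly one of the two Σ₂ statements is true, the answer stabilizes once the routine reaches the smallest successful witness y.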
With a moment’s thought, one can see that the above algorithm indeed solves ⊓x(¬p(x) ⩔ p(x)). Indeed, exactly one of ∃y∀z s(m,y,z) and ∃y∀z t(m,y,z) is true; every unsuccessful candidate y gets refuted on both sides after finitely many checks, so the routine eventually reaches the smallest successful y and then stays in the corresponding step forever, making the last switch select the true component.
For the opposite direction, assume a given algorithm (machine) M solves ⊓x(¬p(x) ⩔ p(x)). Let s(x,y,z) be the predicate such that s(x,y,z) is true iff, in the scenario where the environment specified the value x at the beginning of the play, so that the game was brought down to ¬p(x) ⩔ p(x), we have:
at the y-th computation step, M chose the p(x) component;
at the z-th computation step (for z > y), M did not move.
Quite similarly, let t(x,y,z) be the predicate such that t(x,y,z) is true iff, in the scenario where the environment specified the value x at the beginning of the play, so that the game was brought down to ¬p(x) ⩔ p(x), we have:
either y = 0 or, at the y-th computation step, M chose the ¬p(x) component;
at the z-th computation step (for z > y), M did not move.
Of course, both s(x,y,z) and t(x,y,z) are decidable predicates, and hence ∃y∀z s(x,y,z) and ∃y∀z t(x,y,z) are of the required Σ₂ form. Now, it is not hard to see that p(x) = ∃y∀z s(x,y,z) and ¬p(x) = ∃y∀z t(x,y,z): since M wins the game, the component that remains active from some step on is exactly the true disjunct. So p(x) is indeed of complexity Δ₂.
As a real-life example of a predicate which is recursively approximable but neither semidecidable nor co-semidecidable, consider the predicate p(x,y), saying that number x is simpler than number y in the sense of Kolmogorov complexity, i.e., that K(x) < K(y), where K(n) (the Kolmogorov complexity of n) is the size of a smallest Turing machine that returns n on input 0. It is known that K(n) is bounded, never exceeding the size (logarithm) of n plus a certain constant c. Fix this c. Here is an algorithm which recursively approximates the predicate p(x,y), i.e., solves the problem ⊓x⊓y(¬p(x,y) ⩔ p(x,y)).
Wait till the environment brings the game down to ¬p(m,n) ⩔ p(m,n) for some m and n. Then start simulating, in parallel, all Turing machines of sizes not exceeding max(|m|,|n|)+c on input 0. Whenever you see that a machine M returns m and the size of M is smaller than that of any other previously found machine that returns m or n on input 0, choose p(m,n). Quite similarly, whenever you see that a machine M returns n and the size of M is smaller than that of any other previously found machine that returns n on input 0, as well as smaller than or equal to the size of any other previously found machine that returns m on input 0, choose ¬p(m,n). Obviously, the correct choice between ¬p(m,n) and p(m,n) will be made sooner or later and never reconsidered afterwards. This will happen when the procedure hits, in the role of M, a smallest machine that returns either m or n on input 0.
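The structure of this strategy can be illustrated with a toy executable model (our own illustration, not part of the formal treatment): real Turing machines are replaced by hypothetical (size, output) pairs, listed in the order in which the simulation “discovers” them, and each retractable choice is recorded. The last recorded choice is the final answer to “is x simpler than y?”.

```python
INF = float("inf")

def approximate_simpler(machines, x, y):
    """Replay the toggling strategy on a finite list of discovered
    (size, output) machines.  Appends True when the p component
    ("x is simpler") is chosen and False when the not-p component is;
    ties are resolved in favor of "not simpler"."""
    choices, best_x, best_y = [], INF, INF
    for size, out in machines:
        # A strictly smallest machine so far producing x: claim K(x) < K(y).
        if out == x and size < best_x and size < best_y:
            choices.append(True)
        # A machine producing y, smaller than all y-producers and no larger
        # than any x-producer found so far: claim the opposite.
        if out == y and size < best_y and size <= best_x:
            choices.append(False)
        if out == x:
            best_x = min(best_x, size)
        if out == y:
            best_y = min(best_y, size)
    return choices
```

In the toy model the final choice is correct: it is True exactly when the smallest discovered machine producing x is strictly smaller than the smallest one producing y.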
Once we have toggling disjunction ⩔, its dual operation ⩓ of toggling conjunction can be defined in a fully symmetric way, with the roles of the machine and the environment interchanged. That is, here it is the environment rather than the machine that makes (and retracts) choices. Equivalently, A₁ ⩓ … ⩓ Aₙ can be defined as ¬(¬A₁ ⩔ … ⩔ ¬Aₙ).
The toggling versions of quantifiers and recurrences are defined in the same way as in the case of parallel or sequential operations. Namely, the toggling quantifiers are the infinite toggling conjunction and disjunction over all constants, and the toggling recurrence and corecurrence of a game A are the infinite toggling conjunction and disjunction, respectively, of A, A, A, …
There is yet another natural sort of toggling (co)recurrence operations worth considering. We call these toggling-branching recurrence and toggling-branching corecurrence, respectively. Roughly, the toggling-branching (co)recurrence is to the branching (co)recurrence what the ordinary toggling (co)recurrence is to the parallel (co)recurrence. Namely, a play over the toggling-branching recurrence of A proceeds as over the branching recurrence of A, with the difference that now ⊤ is required to make a choice, which can be reconsidered any finite number of times, of a particular session/branch of A out of the many sessions that are being played. As with all other toggling operations, if choices are retracted infinitely many times, ⊤ loses. Otherwise, the winner in the overall game will be the player which wins in the session of A that was chosen last. The toggling-branching corecurrence, as expected, is the dual of the toggling-branching recurrence, which can be defined by interchanging the roles of the two players, or by applying ¬ to the toggling-branching recurrence of ¬A.
For our last example illustrating CoL operations at work, remember that Kolmogorov complexity is not a computable function, i.e., the problem ⊓x⊔z(z = K(x)) has no algorithmic solution. However, replacing the choice quantifier ⊔z in it with its toggling counterpart yields an algorithmically solvable (yet nontrivial) problem. A solution for the resulting problem goes like this. Wait till the environment chooses a number m for x, thus bringing the game down to the toggling disjunction, over all z, of the games z = K(m), i.e., to

0 = K(m) ⩔ 1 = K(m) ⩔ 2 = K(m) ⩔ … (7)
Initialize k to a sufficiently large number, such as |m| + c (where |m| is the size of m and c is the constant mentioned earlier), and then do the following routine:
ROUTINE: Choose the k-th ⩔-disjunct of (7), i.e., the disjunct k = K(m). Then start simulating on input 0, in parallel, all Turing machines whose sizes are smaller than k. If and when you see that one of such machines returns m, update k to the size of that machine, and repeat ROUTINE.
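In the same toy style as before (our illustration, with machines modeled as hypothetical (size, output) pairs in discovery order), ROUTINE amounts to announcing an upper bound and lowering it each time a smaller machine producing m is found; the sequence of announcements is the sequence of chosen disjuncts of (7), and the last announcement equals K(m) in the model.

```python
def approximate_K(machines, m, start):
    """Replay ROUTINE: announce `start` (a known upper bound on K(m)),
    then lower the announced value to `size` whenever a discovered
    machine of size < current value produces m.  Returns the list of
    announced values; the last one is final."""
    announced, k = [start], start
    for size, out in machines:
        if out == m and size < k:
            k = size
            announced.append(k)
    return announced
```

Note that the announced values strictly decrease, so only finitely many switches are ever made, as the toggling semantics requires.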
A similar argument convinces us that the problems and also have algorithmic solutions. So do the problems and , but certainly not the problem .
3 Toggling and sequential operations defined formally
In what follows, we rely on the first six sections of  as an external source. Although long,  is very easy to read and has a convenient glossary to look up any unfamiliar terms and symbols. (The glossary for the published version of  is given at the end of the book, rather than of the article, on pages 371-376. The reader may instead use the preprint version of , available at http://arxiv.org/abs/cs.LO/0507045, which includes both the main text and the glossary.) A reader not familiar with  or unwilling to do some parallel reading may want to either stop here or just browse the rest of the paper without attempting to go into the technical details of formal definitions and proofs. Owing to its very dynamic recent development, computability logic has already reached a point where it is no longer feasible to reintroduce all relevant concepts anew in each new paper on the subject; doing so would make any significant or fast progress within the project near impossible.
Here we only provide formal definitions for the toggling and sequential operations. Definitions of all other relevant operations can be found in . Definitions of sequential operations are also given in , in a form technically different from our present one but otherwise yielding equivalent (in every reasonable sense) concepts; whether to adopt the definitions of  or the present definitions of sequential operations is purely a matter of taste or convenience. As for the toggling operations, they have never been defined before.
In the definitions of this section, following , we use the notation Γ^α, where Γ is a run and α is a string, for the result of deleting from Γ all labmoves (labeled moves) except those that have the form ℘α.β for some string β and player ℘, and then further replacing each such remaining labmove by ℘β.
Let A₁, …, Aₙ (n ≥ 2) be any constant games. Let us agree that, in the context of a toggling or sequential disjunction or conjunction of these games, a switch move, or just a switch, means the move/string i for some i ∈ {1, …, n} (here we identify natural numbers with their decimal representations). When there are finitely many switches in a run Γ, by the active component of a toggling or sequential combination of A₁, …, Aₙ in Γ we mean the Aᵢ such that i is the last (rightmost) switch move in Γ; in case there are no switch moves in Γ at all, A₁ is considered to be the active component. The components other than the active one are said to be dormant.
The toggling disjunction A₁ ⩔ … ⩔ Aₙ of the games A₁, …, Aₙ is defined as follows:
A position Φ is a legal position of A₁ ⩔ … ⩔ Aₙ iff every move of Φ is either a switch i (i ∈ {1, …, n}) by ⊤, or the move i.α by either player, where i ∈ {1, …, n} and α is some string, and the following condition is satisfied: for each i ∈ {1, …, n}, Φ^{i.} is a legal position of Aᵢ.
Let Γ be a legal run of A₁ ⩔ … ⩔ Aₙ. Then Γ is a ⊤-won run of A₁ ⩔ … ⩔ Aₙ iff there are finitely many switches (by ⊤) in Γ and, where Aᵢ is the active component of A₁ ⩔ … ⩔ Aₙ in Γ, Γ^{i.} is a ⊤-won run of Aᵢ.
The toggling conjunction A₁ ⩓ … ⩓ Aₙ of the games A₁, …, Aₙ is defined as follows:
A position Φ is a legal position of A₁ ⩓ … ⩓ Aₙ iff every move of Φ is either a switch i (i ∈ {1, …, n}) by ⊥, or the move i.α by either player, where i ∈ {1, …, n} and α is some string, and the following condition is satisfied: for each i ∈ {1, …, n}, Φ^{i.} is a legal position of Aᵢ.
Let Γ be a legal run of A₁ ⩓ … ⩓ Aₙ. Then Γ is a ⊥-won run of A₁ ⩓ … ⩓ Aₙ iff there are finitely many switches (by ⊥) in Γ and, where Aᵢ is the active component of A₁ ⩓ … ⩓ Aₙ in Γ, Γ^{i.} is a ⊥-won run of Aᵢ.
The sequential disjunction A₁ ▽ … ▽ Aₙ of the games A₁, …, Aₙ is defined exactly as A₁ ⩔ … ⩔ Aₙ, with the only difference that, in order for a position Φ to be a legal position of A₁ ▽ … ▽ Aₙ, it should satisfy the following additional condition: whenever i₁, …, iₖ is the sequence of the switch moves made (by ⊤) in Φ, we have i₁, …, iₖ = 2, …, k+1.
The sequential conjunction A₁ △ … △ Aₙ of the games A₁, …, Aₙ is defined exactly as A₁ ⩓ … ⩓ Aₙ, with the only difference that, in order for a position Φ to be a legal position of A₁ △ … △ Aₙ, it should satisfy the following additional condition: whenever i₁, …, iₖ is the sequence of the switch moves made (by ⊥) in Φ, we have i₁, …, iₖ = 2, …, k+1.
As we see, a legal run Γ of A₁ ⩔ … ⩔ Aₙ is nothing but a legal run of the parallel disjunction A₁ ∨ … ∨ Aₙ with perhaps some switch moves by ⊤ inserted. As in the case of A₁ ∨ … ∨ Aₙ, such a Γ is seen as a parallel play in the n components, with the play (run) in each component Aᵢ being Γ^{i.}. The meaning of a switch move is that of a retractable choice of a disjunct. As we remember from , in order to win a parallel disjunction, it is sufficient for ⊤ to win in any one of its disjuncts. Winning a toggling disjunction is harder: here it is necessary for ⊤ to win in the disjunct that was chosen last. The difference between ⩔ and ▽ is that, while in the former switches/choices can go back and forth and the same component can be re-chosen many times, in the latter the chosen component should always be the one next after the previously chosen component. And, as always, either sort of conjunction is a dual of the corresponding sort of disjunction, obtained from it by interchanging the roles of ⊤ and ⊥.
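The winner-determination clause is simple enough to transcribe into code. Below is a small sketch of ours (not from the formal definition itself): a run is a finite list of strings, where a switch is a decimal string "i" and any other move has the form "i.m", and the verdict for each component’s subrun is delegated to a hypothetical oracle subwinner(i, subrun). Since a Python list is finite, the “finitely many switches” clause holds automatically here.

```python
def toggling_disjunction_winner(run, subwinner):
    """Winner (True = the machine wins) of a legal run of a toggling
    disjunction.  The active component is the one switched to last, or
    component 1 if no switch occurred; the machine wins iff it wins the
    subrun of the active component."""
    switches = [m for m in run if "." not in m]
    active = switches[-1] if switches else "1"
    # Subrun of the active component: keep its moves, strip the "i." prefix.
    subrun = [m.split(".", 1)[1] for m in run if m.startswith(active + ".")]
    return subwinner(int(active), subrun)
```

For a sequential disjunction the same function applies, once the run has additionally been checked for the 2, …, k+1 switch-order condition.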
A reasonable behavior in a toggling or sequential combination of static games, by either player, is to always assume that the latest choice of a component is final, correspondingly play only in the (currently) active component of the combination, and worry about the other components only if and when they become active. This is so because eventually the outcome of the game will be determined by what happened in the (then) active component. If there are strategically useful moves in a dormant component, they can always wait till that component becomes active (if and when this happens), as postponing moves in a static game never hurts a player. Yet, this “reasonable” behavior is not enforced by the rules of the game, and “unreasonable” yet innocent actions such as making moves in dormant components are legal. This particular design choice in Definition 3.1 has been made purely for considerations of simplicity. Alternative definitions that yield equivalent operations but are more restrictive/demanding on legal behaviors are not hard to come up with; one example is the definition of sequential operations given in . In any case, however, obtaining such definitions would not be quite as straightforward as declaring all moves in dormant components illegal: doing so could violate the important condition that the operations should preserve the static property of games. As we remember from , computability logic and its static games are asynchronous-communication-friendly. If the communication between the two players is asynchronous, ⊥ (in the case of ⩔ or ▽) or ⊤ (in the case of ⩓ or △) cannot be sure that the component it considers “active” and in which it wants to make a move is indeed still active; it is possible that the component has already been “deactivated” by the adversary, but this information has not arrived yet.
As an aside, the existence of a variety of alternative definitions for our game operations, and their robustness (modulo equivalence) with respect to technical variations, is a strong indication of the naturalness of those operations. This is in the same sense in which the existence of many equivalent versions of Turing machines and other models of computation, and the robustness of the class of computable functions with respect to those variations, is one of the strongest pieces of empirical evidence in favor of the Church-Turing thesis. Among the interesting alternatives to our present definitions of the four sorts of disjunctions, most clearly showing the similarities and differences between them, is the following. (1) Keep the definition of ∨ (from ) unchanged. (2) Define ⩔ as in Definition 3.1, with the only difference that not having made any switch/choice moves is now considered a loss for ⊤ (there is no “default” choice, that is). (3) Define ▽ as the version of (the new) ⩔ where the choices are required to be consecutive numbers starting from 1. (4) Define ⊔ as the version of (the new) ⩔ where only one single choice is allowed. Thus, the differences between the four disjunctions are in how many (if any) choices, and in what order, are required or allowed.
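The four-way comparison just described can be summarized executably (a sketch under the alternative definitions (1)-(4) above, with names of our own choosing): each sort of disjunction is characterized simply by which finite sequences of choice/switch moves are permissible in an n-ary combination.

```python
def choices_ok(switches, kind, n):
    """Which finite sequences of component choices are permissible in an
    n-ary disjunction, under the alternative definitions (1)-(4)."""
    if any(not 1 <= s <= n for s in switches):
        return False
    if kind == "parallel":    # (1): no choice machinery at all
        return switches == []
    if kind == "toggling":    # (2): any choices, but at least one
        return len(switches) >= 1
    if kind == "sequential":  # (3): consecutive choices 1, 2, 3, ...
        return len(switches) >= 1 and switches == list(range(1, len(switches) + 1))
    if kind == "choice":      # (4): exactly one choice
        return len(switches) == 1
    raise ValueError(kind)
```

For instance, the sequence 3, 1, 3 is a permissible behavior in a ternary toggling disjunction but in none of the other three.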
For the sake of readability, in the sequel we will often employ relaxed, intuitive and semiformal terminology when referring to moves, typically naming them by their meanings or effects on the game instead of the actual strings that those moves are. Consider, for example, the game (A ⊓ B) ⩓ ((C ⊓ D) ⩓ E) and the legal run ⟨⊥1.1, ⊥2.1.1, ⊥2, ⊥2.2, ⊥1⟩ of it. We may say that:
The effect of the move 1.1 by ⊥ is (or such a move signifies) choosing A within the A ⊓ B component. Notice that after this move is made in (A ⊓ B) ⩓ ((C ⊓ D) ⩓ E), the game is brought down to (in the sense that it continues as) A ⩓ ((C ⊓ D) ⩓ E). We may also refer to making the move 1.1 in the overall game as making the move 1 in its left ⩓-conjunct, because this is exactly the meaning of the move 1.1.
The effect of the next move 2.1.1 by ⊥ is choosing C in the (C ⊓ D) ⩓ E component. Such a move further brings the game down to A ⩓ (C ⩓ E).
The effect of the next, switch move 2 by ⊥ is activating the right component of A ⩓ (C ⩓ E), or switching from A to C ⩓ E in it. Remember that initially the active component of a toggling combination is always the leftmost component. Such a component in our case was A ⊓ B, which later became (evolved to) A.
The effect of the next move 2.2 by ⊥ is choosing E in the C ⩓ E component. This brings the overall game down to A ⩓ (C ⩓ E), where the active (sub)component within the right component is E.
The effect of the last move 1 by ⊥ is switching from C ⩓ E back to A, i.e. making the right component dormant and the left component active, as was the case when the game started.
Definition 3.1 straightforwardly extends from the n-ary cases to the infinite cases A₁ ⩔ A₂ ⩔ …, A₁ ⩓ A₂ ⩓ …, A₁ ▽ A₂ ▽ … and A₁ △ A₂ △ … by just changing “i ∈ {1, …, n}” to “i ∈ {1, 2, 3, …}”.
Even though we have officially defined the toggling and sequential operations only for constant games, our definitions extend to all games in the standard way, as explained in the second paragraph of Section 4 of . Namely, for any not-necessarily-constant games A₁, …, Aₙ, A₁ ⩔ … ⩔ Aₙ is the unique game such that, for any valuation (assignment of constants to variables) e, we have e[A₁ ⩔ … ⩔ Aₙ] = e[A₁] ⩔ … ⩔ e[Aₙ], and similarly for the other operations.