Realizability in the Unitary Sphere

04/18/2019 ∙ by Alejandro Díaz-Caro, et al. ∙ University of Buenos Aires

In this paper we present a semantics for a linear algebraic lambda-calculus based on realizability. This semantics characterizes a notion of unitarity in the system, answering a long-standing issue. We derive from the semantics a set of typing rules for a simply-typed linear algebraic lambda-calculus, and show how it extends both to classical and quantum lambda-calculi.


I Introduction

The linear-algebraic lambda calculus (Lineal) [1, 2, 3] is an extension of the lambda calculus where lambda terms are closed under linear combinations over a semiring $\mathcal S$: if $t$ and $u$ are two lambda terms, then so is $\alpha\cdot t + \beta\cdot u$, with $\alpha, \beta \in \mathcal S$. The original motivation of [1] for such a calculus was to set the basis for a future quantum calculus, where $\alpha\cdot t + \beta\cdot u$ could be seen as the generalization of the notion of quantum superposition to the realm of programs (in which case $\mathcal S$ is the field $\mathbb C$ of complex numbers).

In quantum computation, data is encoded in the state of a set of particles governed by the laws of quantum mechanics. The mathematical formalization postulates that quantum data is modeled as a unit vector in a Hilbert space. The quantum analogue of a Boolean value is the quantum bit, that is, a linear combination of the form $\alpha\cdot|0\rangle + \beta\cdot|1\rangle$, where $|0\rangle$ and $|1\rangle$ respectively correspond to “true” and “false”, and where $|\alpha|^2 + |\beta|^2 = 1$. In other words, the state is a linear combination of the Boolean values “true” and “false”, of $\ell_2$-norm equal to $1$: it is a unit vector in the Hilbert space $\mathbb C^2$.
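As a concrete (and standard) instance of the norm condition, consider the uniform superposition:

    \[
      |{+}\rangle \;=\; \tfrac{1}{\sqrt 2}\cdot|0\rangle + \tfrac{1}{\sqrt 2}\cdot|1\rangle,
      \qquad
      \big\| |{+}\rangle \big\|_2
        \;=\; \sqrt{\big|\tfrac{1}{\sqrt 2}\big|^2 + \big|\tfrac{1}{\sqrt 2}\big|^2}
        \;=\; 1 .
    \]

So $|{+}\rangle$ is a valid quantum bit, whereas $|0\rangle + |1\rangle$ (of $\ell_2$-norm $\sqrt 2$) is not.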

A quantum memory consists of a list of registers holding quantum bits. The canonical model for interacting with a quantum memory is the QRAM model [4]. A fixed set of elementary operations is allowed on each quantum register. Mathematically, these operations are modeled as unitary maps on the corresponding Hilbert spaces, that is, linear maps preserving the $\ell_2$-norm and orthogonality. These operations, akin to Boolean gates, are referred to as quantum gates, and they can be combined into linear sequences called quantum circuits. Quantum algorithms make use of a quantum memory to solve a particular classical problem. Such an algorithm therefore comprises, in particular, the description of a quantum circuit.
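For instance (standard material, not specific to this paper), the Hadamard gate $H$ is a typical one-qubit quantum gate; unitarity is the condition $H^\dagger H = I$:

    \[
      H \;=\; \tfrac{1}{\sqrt 2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
      \qquad
      H^\dagger H \;=\; \tfrac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
      \;=\; \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
    \]

Hence $H$ maps unit vectors to unit vectors; e.g. $H|0\rangle = |{+}\rangle$.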

Several existing languages for describing quantum algorithms, such as Quipper [5] and QWIRE [6], are purely functional and based on the lambda calculus. However, they only provide classical control: the quantum memory and the allowed operations are provided as black boxes. These languages are mainly circuit-description languages using opaque high-level operations on circuits. They do not feature quantum control, in the sense that the operations on quantum data are not programmable.

A lambda calculus with linear combinations of terms made “quantum” would make it possible to program those “black boxes” explicitly, and would provide an operational meaning to quantum control. However, when trying to identify quantum data with linear combinations of lambda terms, the problem arises from the norm condition on quantum superpositions. To be quantum-compatible, one cannot allow arbitrary linear combinations of programs. Indeed, programs should at the very least yield valid quantum superpositions, that is, linear combinations whose $\ell_2$-norm equals $1$—a property which turns out to be very difficult to preserve along the reduction of programs.

So far, the several attempts at reconciling linear algebraic lambda calculi with the $\ell_2$-norm have failed. At one end of the spectrum, [7] stores lambda terms directly in the quantum memory, and encodes the reduction process as a purely quantum process. Van Tonder shows that this forces all lambda terms in superposition to be mostly equivalent. At the other end of the spectrum, the linear algebraic approaches pioneered by Arrighi and Dowek consider a constraint-free calculus and try to recover quantum-like behavior by adding ad hoc term reductions [1] or type systems [8, 9, 10]. But while these approaches yield very expressive models of computation, none of them manages to precisely characterize linear combinations of terms of unit $\ell_2$-norm, or equivalently, the unitarity of the representable maps.

This paper answers this open question by presenting an algebraic lambda calculus together with a type system that enforces unitarity. To do so, we use semantic techniques coming from realizability [11] to decide on the unitarity of terms.

Since its creation by Kleene as a semantics for Heyting arithmetic, realizability has evolved into a versatile toolbox that can be used both in logic and in functional programming. Roughly speaking, realizability can be seen as a generalization of the notion of typing where the relation between a term and its type is not defined from a given set of inference rules, but from the very operational semantics of the calculus, via a computational interpretation of types seen as specifications. Types are first defined as sets of terms verifying certain properties, and then valid typing rules are derived from these properties rather than set up as axioms.
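Schematically, the standard Kleene-style clause for the arrow type (a generic illustration, not this paper's exact definition, which further constrains realizers to the unit sphere) reads:

    \[
      t \Vdash A \to B
      \quad\Longleftrightarrow\quad
      \text{for every value } v \Vdash A,\ \ t\,v \Vdash B .
    \]

With this definition, the usual application rule is not postulated: it becomes a provable property of the interpretation.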

The main feature of our realizability model is that types are not interpreted as arbitrary sets of terms or values, but as subsets of the unit sphere of a particular weak vector space [3], whose vectors are distributions (i.e. weak linear combinations) of “pure” values. Hence, by construction, all functions that are correct w.r.t. this semantics preserve the $\ell_2$-norm. As we shall see, this interpretation of types is not only compatible with the constructions of the simply typed lambda calculus (with sums and pairs), but it also allows us to distinguish pure data types (such as the type of pure Booleans) from quantum data types (such as the type of quantum Booleans, i.e. unit-norm superpositions of pure Booleans). Thanks to these constraints, the type system we obtain naturally enforces that the realizers of the type of maps from quantum Booleans to quantum Booleans are precisely the functions representing unitary operators of $\mathbb C^2$.

This realizability model therefore answers a hard problem [12]: it provides a unifying framework able to express not only classical control, with the presence of “pure” values, but also quantum control, with the possibility of interpreting quantum data types as (weak) linear combinations of classical ones.

I-A Contributions

(1) We propose a realizability semantics based on a linear algebraic lambda calculus, capturing a notion of unitarity through the use of an $\ell_2$-norm. As far as we know, such a construction is novel.

(2) The semantics provides a unified model for both classical and quantum control. Strictly containing the simply-typed lambda calculus, it not only serves as a model for a quantum circuit-description language, but also provides a natural interpretation of quantum control.

(3) In order to exemplify the expressiveness of the model, we show how a circuit-description language in the style of QWIRE [6] can be naturally interpreted in the model. Furthermore, we discuss how one can give, within the model, an operational semantics to a high-level operation on circuits usually provided as a black box in circuit-description languages: the control of a circuit.

I-B Related Works

Despite its original motivations, [10] showed that Lineal can handle the $\ell_1$-norm, which can be used for example to represent probabilistic distributions of terms. Also, a simplification of Lineal, without scalars, can serve as a model for non-deterministic computations [13]. And, in general, if we consider the standard values of the lambda calculus as a basis, then linear combinations of those form a vector space, which can be characterized using types [9]. In [14] a similar distinction between classical bits and quantum bits has also been studied. However, without unitarity, it is impossible to obtain a calculus that could be compiled onto a quantum machine. Finally, a concrete categorical semantics for such a calculus has recently been given in [15].

An alternative approach for capturing unitarity (of data superpositions and functions) consists in changing the language. Instead of starting from a lambda calculus, [16] defines and extends a reversible language to express quantum computation.

Lambda calculi with vectorial structures are not specific to quantum computation. Vaux [17] independently developed the algebraic lambda calculus (where linear combinations of terms are also terms), initially to study a fragment of the differential lambda calculus of [18]. Unlike its quantum-inspired cousin Lineal, the algebraic lambda calculus is morally call-by-name, and [19] shows the formal connection with Lineal.

Designing an (unconstrained) algebraic lambda calculus (in call-by-name [17] or in call-by-value [1]) raises the problem of how to enforce the confluence of reduction. Indeed, if the semiring of scalars is a ring, then since $1 + (-1) = 0$, it is possible to design a term reducing both to a chosen term $b$ and to the empty linear combination $\vec 0$. A simple solution to recover consistency is to weaken the vectorial structure and remove the equality $0\cdot t = \vec 0$ [3]. The vector space of terms becomes a weak vector space. This approach is the one we shall follow in our construction.
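The inconsistency can be made explicit with the classic fixpoint construction from the literature on algebraic lambda calculi (the term $Y_b$ below is the standard counterexample, spelled out here for illustration):

    \[
      Y_b \;:=\; (\lambda x.\, b + (x\,x))\,(\lambda x.\, b + (x\,x)),
      \qquad\text{so that}\qquad
      Y_b \;\to\; b + Y_b .
    \]

In a ring one computes $Y_b - Y_b = (1 + (-1))\cdot Y_b = 0\cdot Y_b = \vec 0$, but also $Y_b - Y_b \to (b + Y_b) - Y_b = b + (Y_b - Y_b)$, which in turn equals $b + \vec 0 = b$: the same term reaches both $\vec 0$ and $b$, breaking confluence. Removing the equality $0\cdot t = \vec 0$ blocks both erasing steps.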

This paper is concerned with modeling quantum higher-order programming languages. While the use of realizability techniques is novel, several other techniques have been used, based on positive matrices and categorical tools. For first-order quantum languages, [20] constructs a fully complete semantics based on superoperators. To model a strictly linear quantum lambda calculus, [21] shows that the compact closed category CPM based on completely positive maps forms a fully abstract model. Another approach has been taken in [22], with the use of a presheaf model on top of the category of superoperators. To accommodate duplicable data, [23] extends CPM using techniques developed for quantitative models of linear logic. Finally, a categorical semantics of circuit-description languages has recently been designed using linear-non-linear models by [24, 25].

I-C Outline

Section II presents the linear algebraic calculus and its weak vector space structure. Section III discusses the evaluation of term distributions. Section IV introduces the realizability semantics and the algebra of types arising from it. At the end of that section, Theorem IV.12 and Corollary IV.13 express that the type of maps from quantum bits to quantum bits only contains unitary functions. Section V introduces a notion of typing judgment and derives a set of valid typing rules from the semantics. Section V-B discusses the inclusion of the simply-typed lambda calculus in this unitary semantics. Finally, Section VI describes a small quantum circuit-description language and shows how it lives inside the unitary semantics.

TABLE I: Syntax of the calculus

II Syntax of the calculus

This section presents the calculus upon which our realizability model will be built. It is a lambda calculus extended with linear combinations of lambda terms, but with a subtlety: terms form a weak vector space.

II-A Values, terms and distributions

The language is made up of four syntactic categories: pure values, pure terms, value distributions and term distributions (Table I). As usual, the expressions of the language are built from a fixed denumerable set of variables, written $x$, $y$, $z$, etc.

In this language, a pure value is either a variable $x$, a λ-abstraction $\lambda x.\,t$ (whose body $t$ is an arbitrary term distribution), the void object $*$, a pair $\langle v, w\rangle$ of pure values, or one of the two variants $\mathsf{inl}(v)$ and $\mathsf{inr}(v)$ (where $v$ is a pure value). A pure term is either a pure value or a destructor, that is: an application $t\,u$, a sequence $t; u$ for destructing the void object (note the asymmetry: the sequence is a pure term whereas its second component is a term distribution; as a matter of fact, the sequence—which could also be written as a let-construct on the void object—is the nullary version of the pair-destructing let), a let-construct for destructing a pair into two variables, or a match-construct discriminating between the two variants (where the continuations and branches are arbitrary term distributions). A term distribution is simply a formal linear combination of pure terms, whereas a value distribution is a term distribution that is formed only from pure values. We also define Booleans using the following abbreviations: $\mathsf{tt} := \mathsf{inl}(*)$, $\mathsf{ff} := \mathsf{inr}(*)$, and, finally, an if-then-else construct as the match-construct on these two variants.
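To fix intuitions, here is a minimal Haskell sketch of the four syntactic categories (our own rendering: the constructor names and the choice of complex scalars are assumptions, not the paper's notation):

    import Data.Complex (Complex)

    type Scalar = Complex Double   -- the field of scalars
    type Var    = String

    -- Pure values
    data Value
      = V Var                  -- variable
      | Lam Var Dist           -- abstraction: its body is a term distribution
      | Unit                   -- void object
      | Pair Value Value       -- pair of pure values
      | InL Value
      | InR Value              -- the two variants
      deriving (Eq, Show)

    -- Pure terms: pure values plus the destructors
    data Term
      = Val Value
      | App Term Term                       -- application
      | Seq Term Dist                       -- void-destructing sequence
      | LetPair Var Var Term Dist           -- pair-destructing let
      | Match Term (Var, Dist) (Var, Dist)  -- variant-destructing match
      deriving (Eq, Show)

    -- A term distribution: a formal linear combination of pure terms.
    -- A value distribution is a Dist whose terms are all of the form (Val v).
    type Dist = [(Scalar, Term)]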

The notions of free and bound (occurrences of) variables are defined as expected, and in what follows, we shall consider pure values, pure terms, value distributions and term distributions up to α-conversion, silently renaming bound variables whenever needed. Writing the corresponding four sets as above, we have the expected inclusions: every pure value is a pure term, every pure term is a (one-summand) term distribution, and every value distribution is a term distribution.

II-B Distributions as weak linear combinations

Formally, the set of term distributions is equipped with a congruence $\equiv$ that is generated from the seven rules of Table II.

TABLE II: Congruence rules on term distributions

We assume that the congruence $\equiv$ is shallow, in the sense that it only traverses sums ($+$) and scalar multiplications ($\alpha\cdot{-}$), and stops at the level of pure terms. So that $t + u \equiv u + t$, but $\lambda x.(t + u) \not\equiv \lambda x.(u + t)$. (This important design choice will be justified in Section V-A, Remark V.5.) We easily check that:

Lemma II.1.

For all term distributions $t$, we have $0\cdot t \equiv 0\cdot t + 0\cdot t$.

Proof.

From $0 = 0 + 0$, we get $0\cdot t \equiv (0 + 0)\cdot t \equiv 0\cdot t + 0\cdot t$. ∎

On the other hand, the relation $0\cdot t + u \equiv u$ (erasure of a zero-weighted summand) cannot be derived from the rules of Table II, as we shall see below (Proposition II.6 and Example II.7). As a matter of fact, the congruence $\equiv$ implements the equational theory of a restricted form of linear combinations—which we shall call distributions—that is intimately related to the notion of weak vector space [3].

Definition II.2 (Weak vector space).

A weak vector space (over a given field $\mathcal K$) is a commutative monoid $(E, +, \vec 0)$ equipped with a scalar multiplication $\mathcal K \times E \to E$ such that for all $\alpha, \beta \in \mathcal K$ and $u, v \in E$, we have $\alpha\cdot(u + v) = \alpha\cdot u + \alpha\cdot v$, $(\alpha + \beta)\cdot u = \alpha\cdot u + \beta\cdot u$, $\alpha\cdot(\beta\cdot u) = (\alpha\beta)\cdot u$, and $1\cdot u = u$.

Remark II.3.

The notion of weak vector space differs from the traditional notion of vector space in that the underlying additive structure may be an arbitrary commutative monoid, whose elements do not necessarily have an additive inverse. So that in a weak vector space, the vector $(-1)\cdot u$ is in general not the additive inverse of $u$, and the product $0\cdot u$ does not simplify to $\vec 0$.

Weak vector spaces naturally arise in functional analysis as spaces of unbounded operators. Historically, the notion of unbounded operator was introduced by von Neumann to give a rigorous mathematical definition to the operators that are used in quantum mechanics. Given two (usual) vector spaces $E$ and $F$ (over the same field $\mathcal K$), recall that an unbounded operator from $E$ to $F$ is a linear map $f$ that is defined on a sub-vector space $D(f) \subseteq E$, called the domain of $f$. The sum of two unbounded operators $f$ and $g$ is defined by $D(f + g) := D(f) \cap D(g)$ and $(f + g)(x) := f(x) + g(x)$ (for all $x \in D(f + g)$), whereas the product of an unbounded operator $f$ by a scalar $\alpha$ is defined by $D(\alpha\cdot f) := D(f)$ and $(\alpha\cdot f)(x) := \alpha\,f(x)$ (for all $x \in D(f)$).

Example II.4.

The space of all unbounded operators from $E$ to $F$ is a weak vector space, whose null vector is the (totally defined) null function.

Indeed, we observe that an unbounded operator $f$ has an additive inverse if and only if $f$ is total, that is, if and only if $D(f) = E$—and in this case, the additive inverse of $f$ is the operator $(-1)\cdot f$. In particular, it should be clear to the reader that $f + (-1)\cdot f$ is not the null vector as soon as $D(f) \neq E$.
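A small worked instance of this phenomenon (our illustration, in the plane): take

    \[
      E = F = \mathbb R^2, \qquad
      D(f) = \{(x, 0) : x \in \mathbb R\}, \qquad
      f(x, 0) := (x, 0).
    \]

Then $f + (-1)\cdot f$ is the null function restricted to the line $D(f)$, not the totally defined null function: it is undefined outside $D(f)$, hence it is not the null vector of the space. Likewise $0\cdot f$ has domain $D(f)$, which illustrates why $0\cdot u$ does not simplify to $\vec 0$ in a weak vector space.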

We can now observe that, by construction:

Proposition II.5.

The space of all term distributions (modulo the congruence $\equiv$) is the free weak vector space (over the field of scalars) generated by the set of all pure terms, in the same way as the space of linear combinations over a given set $X$ is the free vector space generated by $X$. ∎

Again, the notion of distribution (or weak linear combination) differs from the standard notion of linear combination in that summands of the form $0\cdot t$ cannot be erased, so that the distribution $u + 0\cdot t$ is not equivalent to the distribution $u$ (provided $t$ does not already occur in $u$). In particular, the distribution $(-1)\cdot t$ is not the additive inverse of $t$, since $t + (-1)\cdot t \equiv 0\cdot t$, which is not the empty distribution. However, the equivalence of term distributions can be simply characterized as follows:

Proposition II.6 (Canonical form of a distribution).

Each term distribution $t$ can be written $t \equiv \alpha_1\cdot t_1 + \cdots + \alpha_n\cdot t_n$, where $\alpha_1, \dots, \alpha_n$ are arbitrary scalars (possibly equal to $0$), and where $t_1, \dots, t_n$ are pairwise distinct pure terms. This writing—which is called the canonical form of $t$—is unique, up to a permutation of the summands $\alpha_i\cdot t_i$ ($1 \le i \le n$). ∎

Example II.7.

Given distinct pure terms $t$ and $u$, we consider the term distributions $s_1 := u + 0\cdot t$ and $s_2 := u$. We observe that the distributions $s_1$ and $s_2$ (which are given in canonical form) do not have the same number of summands, hence they are not equivalent: $u + 0\cdot t \not\equiv u$.

Corollary II.8.

The congruence $\equiv$ is trivial on pure terms: $t \equiv u$ iff $t = u$, for all pure terms $t$ and $u$. ∎

Thanks to Proposition II.6, we can associate to each term distribution $t \equiv \alpha_1\cdot t_1 + \cdots + \alpha_n\cdot t_n$ (written in canonical form) its domain $\mathrm{dom}(t) := \{t_1, \dots, t_n\}$ and its weight $w(t) := \alpha_1 + \cdots + \alpha_n$. (Note that the domain of a distribution gathers all pure terms $t_i$, including those affected with a coefficient $\alpha_i = 0$; so the domain of a distribution should not be mistaken for its support.) The weight function $w$ is a linear function from the weak vector space of term distributions to the field of scalars, whereas the domain function $\mathrm{dom}$ is a morphism of commutative monoids, since we have $\mathrm{dom}(t + u) = \mathrm{dom}(t) \cup \mathrm{dom}(u)$ and $\mathrm{dom}(\alpha\cdot t) = \mathrm{dom}(t)$ for all term distributions $t, u$ and scalars $\alpha$. (Actually, the function $\mathrm{dom}$ is even linear, since the commutative and idempotent monoid of sets of pure terms under union has a natural structure of weak vector space whose trivial scalar multiplication is defined by $\alpha\cdot S := S$ for all $\alpha$ and $S$.)
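Continuing the Haskell sketch above (same caveat: our rendering, not the paper's), canonical forms, domains and weights can be computed directly; the key point is that merging coefficients never drops a summand, even when its coefficient is $0$:

    import Data.List (partition)

    -- Canonical form: merge equal pure terms by adding their coefficients
    -- (the rule alpha.t + beta.t == (alpha+beta).t), keeping zero-weighted
    -- summands, as required by the weak vector space structure.
    canonical :: Dist -> Dist
    canonical [] = []
    canonical ((a, t) : rest) =
      let (same, others) = partition ((== t) . snd) rest
      in  (a + sum (map fst same), t) : canonical others

    -- The domain gathers all pure terms, including zero-weighted ones.
    domainOf :: Dist -> [Term]
    domainOf = map snd . canonical

    -- The weight is the sum of all coefficients; it is linear.
    weightOf :: Dist -> Scalar
    weightOf = sum . map fst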

Remark II.9.

In practice, one of the main difficulties of working with distributions is that addition is not regular, in the sense that the relation $t + u \equiv t + u'$ does not necessarily imply that $u \equiv u'$. For example, $0\cdot t + u \equiv 0\cdot t + (0\cdot t + u)$, although $u \not\equiv 0\cdot t + u$ (for $t \notin \mathrm{dom}(u)$).

To simplify the notation, we shall adopt the following:

Convention II.10.

From now on, we consider term distributions modulo the congruence $\equiv$, and simply write $=$ for $\equiv$. This convention does not affect inner—or raw—distributions (which occur within a pure term, for instance in the body of an abstraction), which are still considered only up to α-conversion. (Intuitively, a distribution that appears in the body of an abstraction—or in the body of a let-construct, or in a branch of a match-construct—does not represent a real superposition; it only represents machine code that will later produce a particular superposition, after some substitution has been performed.) The same convention holds for value distributions.

To sum up, we now consider that $t + u = u + t$ (as a top-level distribution), but $\lambda x.(t + u) \neq \lambda x.(u + t)$.

II-C Extending syntactic constructs by linearity

Pure terms and term distributions are intended to be evaluated according to the call-by-basis strategy (Section III), which can be seen as the variant of the call-by-value strategy suited to a computing environment where all functions are linear by construction. Keeping this design choice in mind, it is natural to extend the syntactic constructs of the language by linearity, proceeding as follows: for all value distributions $u = \sum_i \alpha_i\cdot v_i$ and $u' = \sum_j \beta_j\cdot v'_j$, we let $u\,u' := \sum_{i,j} \alpha_i\beta_j\cdot(v_i\,v'_j)$, $\langle u, u'\rangle := \sum_{i,j} \alpha_i\beta_j\cdot\langle v_i, v'_j\rangle$, $\mathsf{inl}(u) := \sum_i \alpha_i\cdot\mathsf{inl}(v_i)$ and $\mathsf{inr}(u) := \sum_i \alpha_i\cdot\mathsf{inr}(v_i)$; the destructors are extended similarly, linearly in the destructed argument.

The value distribution $\langle u, u'\rangle$ will sometimes be written $u \otimes u'$ as well.

II-D Substitutions

Given a variable $x$ and a pure value $v$, we define an operation of pure substitution, written $\{v/x\}$, that associates to each pure value $w$ (resp. to each pure term $t$, to each raw value distribution, to each raw term distribution) a pure value $w\{v/x\}$ (resp. a pure term $t\{v/x\}$, a raw value distribution, a raw term distribution). The four operations are defined by mutual recursion as expected.

Although the operation $\{v/x\}$ is primarily defined on raw term distributions (i.e. by recursion on the tree structure of the distribution, without taking into account the congruence $\equiv$), it is compatible with the congruence $\equiv$, in the sense that if $t \equiv t'$, then $t\{v/x\} \equiv t'\{v/x\}$ for all pure values $v$. In other words, the operation of pure substitution is compatible with Convention II.10. It is also clear that, by construction, the operation is linear w.r.t. the distribution it acts on, so that $(\alpha\cdot t + \beta\cdot u)\{v/x\} = \alpha\cdot t\{v/x\} + \beta\cdot u\{v/x\}$ for all term distributions $t$ and $u$. (The same observations hold for the operation on value distributions.)

Moreover, the operation of pure substitution behaves well with the linear extension of the syntactic constructs of the language (cf. Appendix -D). And we have the expected substitution lemma: for all term distributions $t$ and for all pure values $v$ and $w$, provided $x \neq y$ and $x \notin FV(w)$, we have $t\{v/x\}\{w/y\} = t\{w/y\}\{v\{w/y\}/x\}$. We extend the notation to parallel substitution in the usual manner (cf. Remark .14 in Appendix -D).

From the operation of pure substitution, we define an operation of bilinear substitution for all term distributions $t$ and all value distributions $u = \sum_j \beta_j\cdot v_j$, letting $t\{u/x\} := \sum_j \beta_j\cdot t\{v_j/x\}$. By construction, the generalized operation of substitution is bilinear—which is consistent with the bilinearity of application (Section II-C). But beware! The bilinearity of the operation also makes its use often counter-intuitive, so this notation should always be used with the greatest caution. Indeed, substituting a superposition for a variable distributes over the summands of the superposition, and does not create cross terms between multiple occurrences of the substituted variable; Lemma .10 in Appendix -C gives the valid identities. In addition, bilinear substitution is not (completely) canceled when $x \notin FV(t)$, in which case $t\{u/x\} = w(u)\cdot t$, where $w(u)$ is the weight of $u$ (cf. Section II-B).
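A worked instance (using the pair notation of Section II-A) of how bilinear substitution behaves on a repeated variable, and on an absent one:

    \[
      \langle x, x\rangle\{(\alpha\cdot v + \beta\cdot w)/x\}
      \;=\; \alpha\cdot\langle v, v\rangle + \beta\cdot\langle w, w\rangle,
    \]

with no cross terms $\langle v, w\rangle$ or $\langle w, v\rangle$; and if $x \notin FV(t)$, then

    \[
      t\{(\alpha\cdot v + \beta\cdot w)/x\} \;=\; (\alpha + \beta)\cdot t,
    \]

which is $w(u)\cdot t$ rather than $t$ itself.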

III Evaluation

The set of term distributions is equipped with a relation of evaluation that is defined in three steps as follows.

III-A Atomic evaluation

First we define an asymmetric relation $\rightharpoonup$ of atomic evaluation (between a pure term and a term distribution) from the inference rules of Table III.

TABLE III: Inference rules of the relation of atomic evaluation

These rules basically implement a deterministic call-by-value strategy, where function arguments are evaluated from right to left. (The argument of an application is always evaluated before the function; this design choice is completely arbitrary, and we could have proceeded the other way around.) Also notice that no reduction is ever performed in the body of an abstraction, in the second argument of a sequence, in the body of a let-construct, or in a branch of a match-construct. Moreover, atomic evaluation is substitutive: if $t \rightharpoonup u$, then $t\{v/x\} \rightharpoonup u\{v/x\}$ for all pure values $v$.

III-B One-step evaluation

The relation of one-step evaluation is defined as follows:

Definition III.1 (One-step evaluation).

Given two term distributions $t$ and $t'$, we say that $t$ evaluates in one step to $t'$, and write $t \to t'$, when there exist a scalar $\alpha$, a pure term $t_0$ and two term distributions $u_0$ and $u$ such that $t \equiv \alpha\cdot t_0 + u$, $t_0 \rightharpoonup u_0$, and $t' \equiv \alpha\cdot u_0 + u$.
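In the Haskell sketch, Definition III.1 amounts to rewriting one summand of the distribution; `atomicStep` below is a placeholder for the deterministic rules of Table III (left abstract here):

    -- Atomic evaluation (Table III): deterministic, defined on pure terms.
    -- We leave it abstract in this sketch.
    atomicStep :: Term -> Maybe Dist
    atomicStep = undefined

    -- One-step evaluation (Definition III.1): find one summand alpha.t0
    -- whose pure term can make an atomic step t0 -> u0, and replace it
    -- by the scaled distribution alpha.u0, keeping the rest untouched.
    stepDist :: Dist -> Maybe Dist
    stepDist [] = Nothing
    stepDist ((a, t) : rest) =
      case atomicStep t of
        Just u0 -> Just (map (\(b, s) -> (a * b, s)) u0 ++ rest)
        Nothing -> fmap ((a, t) :) (stepDist rest)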

Notice that the relation of one-step evaluation is also substitutive. In addition, the strict determinism of the relation of atomic evaluation implies that the relation of one-step evaluation fulfills the following weak diamond property:

Lemma III.2 (Weak diamond).

If $t \to u_1$ and $t \to u_2$, then one of the following holds: either $u_1 \equiv u_2$; or $u_1 \to u_2$ or $u_2 \to u_1$; or $u_1 \to r$ and $u_2 \to r$ for some term distribution $r$. ∎

Remark III.3.

In the decomposition $t \equiv \alpha\cdot t_0 + u$ of Definition III.1, we allow that $\alpha = 0$: even a zero-weighted summand may be evaluated. For instance, if $t_0 \rightharpoonup u_0$, then $0\cdot t_0 + u \to 0\cdot u_0 + u$.

Remark III.4.

Given a pure term $t$ such that $t \rightharpoonup u$, we have $2\cdot t \to 2\cdot u$ by construction. Then we observe that, since $2\cdot t \equiv t + t$, we also have $2\cdot t \to t + u$.

This example does not jeopardize the confluence of evaluation, since we also have $t + u \to u + u \equiv 2\cdot u$.

III-C Evaluation

Finally, the relation of evaluation $\to^*$ is defined as the reflexive-transitive closure of the relation of one-step evaluation $\to$.

Proposition III.5 (Linearity of evaluation).

The relation $\to^*$ is linear, in the sense that:

  1. If $t \to^* t'$, then $\alpha\cdot t \to^* \alpha\cdot t'$ for all scalars $\alpha$.

  2. If $t \to^* t'$ and $u \to^* u'$, then $t + u \to^* t' + u'$. ∎

Example III.6.

In our calculus, the Hadamard operator, whose matrix in the basis $\{\mathsf{tt}, \mathsf{ff}\}$ is $\frac{1}{\sqrt 2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, is computed by the term

$\mathsf{H} := \lambda x.\ \mathsf{if}\ x\ \mathsf{then}\ \big(\tfrac{1}{\sqrt 2}\cdot\mathsf{tt} + \tfrac{1}{\sqrt 2}\cdot\mathsf{ff}\big)\ \mathsf{else}\ \big(\tfrac{1}{\sqrt 2}\cdot\mathsf{tt} + (-\tfrac{1}{\sqrt 2})\cdot\mathsf{ff}\big).$

Indeed, for all $\alpha, \beta$, we have $\mathsf{H}\,(\alpha\cdot\mathsf{tt} + \beta\cdot\mathsf{ff}) \to^* \tfrac{\alpha + \beta}{\sqrt 2}\cdot\mathsf{tt} + \tfrac{\alpha - \beta}{\sqrt 2}\cdot\mathsf{ff}$.
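Spelling out the computation, using the linear extension of application (Section II-C) and the linearity of evaluation (Proposition III.5):

    \[
      \mathsf H\,(\alpha\cdot\mathsf{tt} + \beta\cdot\mathsf{ff})
      \;=\; \alpha\cdot(\mathsf H\,\mathsf{tt}) + \beta\cdot(\mathsf H\,\mathsf{ff})
      \;\to^*\; \alpha\cdot\big(\tfrac{1}{\sqrt 2}\cdot\mathsf{tt} + \tfrac{1}{\sqrt 2}\cdot\mathsf{ff}\big)
            + \beta\cdot\big(\tfrac{1}{\sqrt 2}\cdot\mathsf{tt} - \tfrac{1}{\sqrt 2}\cdot\mathsf{ff}\big)
      \;\equiv\; \tfrac{\alpha+\beta}{\sqrt 2}\cdot\mathsf{tt} + \tfrac{\alpha-\beta}{\sqrt 2}\cdot\mathsf{ff}.
    \]

In particular $\mathsf H\,\mathsf{tt} \to^* \tfrac{1}{\sqrt 2}\cdot\mathsf{tt} + \tfrac{1}{\sqrt 2}\cdot\mathsf{ff}$, the analogue of $H|0\rangle = |{+}\rangle$, and the result has $\ell_2$-norm $1$ whenever $|\alpha|^2 + |\beta|^2 = 1$.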

Theorem III.7 (Confluence of evaluation).

If $t \to^* u_1$ and $t \to^* u_2$, then there is a term distribution $r$ such that $u_1 \to^* r$ and $u_2 \to^* r$.

Proof.

Writing $\to^=$ for the reflexive closure of $\to$, it is clear from Lemma III.2 that $\to^=$ fulfills the diamond property. Therefore, its transitive closure $\to^*$ fulfills the diamond property as well. ∎

III-D Normal forms

From what precedes, it is clear that the normal forms of the relation of evaluation are the term distributions of the form $\alpha_1\cdot t_1 + \cdots + \alpha_n\cdot t_n$ where each pure term $t_i$ is atomically irreducible. In particular, all value distributions are normal forms (but they are far from being the only normal forms in the calculus). From the property of confluence, it is also clear that when a term distribution reaches a normal form, this normal form is unique.

In what follows, we shall be more particularly interested in the closed term distributions  that reach a (unique) closed value distribution through the process of evaluation.

IV A semantic type system

In this section, we present the type system associated with the (untyped) language presented in Section II as well as the corresponding realizability semantics.

IV-A Structuring the space of value distributions

In what follows, we consider: the set of all closed pure terms; the space of all closed term distributions; the set of all closed pure values, which we shall call basis vectors; and the space of all closed value distributions, which we shall call vectors.

The space formed by all closed value distributions (i.e. vectors) is equipped with the inner product and the pseudo-$\ell_2$-norm that are defined, on canonical forms over the basis vectors, by

\[
  \Big\langle \sum_i \alpha_i\cdot v_i,\ \sum_i \beta_i\cdot v_i \Big\rangle := \sum_i \overline{\alpha_i}\,\beta_i,
  \qquad
  \|u\|_2 := \sqrt{\langle u, u\rangle},
\]

where the closed pure values $v_i$ are pairwise distinct, i.e. the basis vectors are taken to be orthonormal.