A comonadic view of simulation and quantum resources

04/22/2019
by Samson Abramsky et al.

We study simulation and quantum resources in the setting of the sheaf-theoretic approach to contextuality and non-locality. Resources are viewed behaviourally, as empirical models. In earlier work, a notion of morphism for these empirical models was proposed and studied. We generalize and simplify the earlier approach, by starting with a very simple notion of morphism, and then extending it to a more useful one by passing to a co-Kleisli category with respect to a comonad of measurement protocols. We show that these morphisms capture notions of simulation between empirical models obtained via `free' operations in a resource theory of contextuality, including the type of classical control used in measurement-based quantum computation schemes.


I Introduction

A key objective in the field of quantum information and computation is to understand the advantage which can be gained in information-processing tasks by the use of quantum resources. While a range of examples have been studied, to date a systematic understanding of quantum advantage is lacking.

One approach to achieving such a general understanding is through resource theories [1, 2], in which one considers a set of operations by which one system can be transformed into another. In particular, one considers “free operations”, which can be performed without consuming any additional resources of the kind in question. If a resource B can be constructed from a resource A using only free operations, then we say that A is convertible to B, or that B is reducible to A. This point of view is studied in some generality in [3, 4].

Another natural approach, which is familiar in computation theory, is to consider a notion of simulation; one asks if the behaviour of B can be produced by some protocol using A as a resource.

Both these points of view can be considered in relation to quantum advantage. Our focus in this paper is on quantum resources that take the form of non-local, or more generally contextual, correlations. Contextuality is one of the key signatures of non-classicality in quantum mechanics [5, 6], and has been shown to be a necessary ingredient for quantum advantage in a range of information-processing tasks [7, 8, 9, 2].

In previous work [2], a subset of the present authors showed how this advantage could be quantified in terms of the contextual fraction, and also introduced a range of free operations, which were shown to have the required property of being non-increasing with respect to the contextual fraction. Thus this work provided some of the basic ingredients for a resource theory of quantum advantage, with contextuality as the resource.

In [10], the other present author introduced a notion of simulation between (possibly contextual) behaviours, as morphisms between empirical models, in the setting of the “sheaf-theoretic” approach to contextuality introduced in [11]. This established a basis for a simulation-based approach to comparing resources.

In this paper, we bring these two approaches together.

  • On the simulation side, we enhance the treatment given in [10] by introducing a measurement protocols construction on empirical models (Section IV-B). Measurement protocols were first introduced in a different setting in [12]. This construction captures the intuitive notion, widely used in an informal fashion in concrete results in quantum information (e.g. [13]), of using a “box” or device by performing some measurement on it, and then, depending on the outcome, choosing some further measurements to perform. This form of adaptive behaviour also plays a crucial role in measurement-based quantum computing [14].

    We show that this construction yields a comonad on the category of empirical models. Hence, we are able to describe a very general notion of simulation of an empirical model d by an empirical model e in terms of co-Kleisli maps from e to d (Sections IV-C and IV-D).

  • We consider the algebraic operations previously introduced in [2] and introduce a new operation allowing a conditional measurement, a one-step version of adaptivity (Section III-A). We present an equational theory for these operations and use this to obtain normal forms for resource expressions (Section III-C).

  • Using these normal forms, we obtain one of our main results: we show that the algebraic notion of convertibility coincides with the existence of a simulation morphism (Section IV-D).

  • We also prove some further results, including a form of no-cloning theorem at the abstract level of simulations (Section IV-E).

II Empirical Models

We begin by introducing the main ingredients of the sheaf-theoretic approach to contextuality [11]. The central objects of study are empirical models. These describe the behaviours that we are considering as resources, which may be contextual.

The behaviour intended to be modelled is that of a physical system, governed perhaps by the laws of quantum mechanics, on which one may perform measurements and observe their outcomes. We abstract away from the details of the physical description of the system in question and consider only its observable behaviour, i.e. the empirical distributions of such measurement experiments.

We can therefore think of an empirical model as a black box, with which an agent might interact by way of questions (measurements) and answers (outcomes). The interface or type of such a box is given by a measurement scenario, which specifies the allowed measurements and the set of possible outcomes for each of them. In a single use of the black box, the agent may perform multiple measurements. However, a crucial feature that is typical in quantum systems is that some combinations of measurements may not be compatible. In particular, it is typically not the case that the agent may jointly perform all of the available measurements. The scenario must, therefore, specify which sets of measurements are compatible and can thus be performed together – or sequentially in any order – in a single use of the black box. Sets of compatible measurements are called measurement contexts.

This compatibility structure on measurements can be naturally described in terms of a simplicial complex. Recall that an (abstract) simplicial complex on a set X is a set of finite subsets of X, called faces, that is non-empty, downwards-closed in the inclusion order, and contains all the singletons. Concretely, these axioms amount to saying that any subset of a compatible set of measurements is a compatible set of measurements, and that any single measurement should be possible.

Definition 1.

A measurement scenario is a triple ⟨X, Σ, O⟩ where:

  • X is a finite set of measurements;

  • O = (O_x)_{x∈X} specifies, for each measurement x ∈ X, a finite non-empty set O_x of outcomes;

  • Σ is a simplicial complex on X, whose faces are called the measurement contexts.

We will often simply refer to these as scenarios and contexts. Note that a simplicial complex is determined by its maximal faces, called facets. Hence, the measurement compatibility structure can be specified by providing only the maximal contexts, as was the case e.g. in [11].
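
To fix ideas, here is a minimal Python sketch (ours, not from the paper) of one possible representation of scenarios: the measurements as a set of labels, O as a dictionary of outcome sets, and Σ as a set of frozensets generated from its facets.

    from itertools import combinations

    def complex_from_facets(facets):
        """Generate the simplicial complex Sigma from its maximal faces (facets)."""
        faces = set()
        for facet in facets:
            for k in range(len(facet) + 1):
                faces.update(frozenset(c) for c in combinations(facet, k))
        return faces

    def is_scenario(X, O, Sigma):
        """Check the conditions of Definition 1 for a triple <X, Sigma, O>."""
        if set(O) != set(X) or any(len(O[x]) == 0 for x in X):
            return False                       # every measurement needs a non-empty outcome set
        if not Sigma or any(frozenset({x}) not in Sigma for x in X):
            return False                       # Sigma non-empty and containing all singletons
        for sigma in Sigma:                    # downwards closure
            for k in range(len(sigma) + 1):
                if any(frozenset(c) not in Sigma for c in combinations(sigma, k)):
                    return False
        return True

    # A three-measurement example: a and c are incompatible, b is compatible with both.
    Sigma = complex_from_facets([{"a", "b"}, {"b", "c"}])
    print(is_scenario({"a", "b", "c"}, {x: {0, 1} for x in "abc"}, Sigma))   # True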

Definition 2.

Let ⟨X, Σ, O⟩ be a scenario. For any U ⊆ X, we write

    E(U) := ∏_{x∈U} O_x

for the set of assignments of outcomes to each measurement in the set U. When U ∈ Σ is a valid context, these are the joint outcomes one might obtain for the measurements in U. This extends to a sheaf E_{⟨X,Σ,O⟩} : P(X)^op → Set, with restriction maps E(U) → E(U′) for U′ ⊆ U given by the obvious projections. We call this the event sheaf. Whenever it does not give rise to ambiguity, we omit the subscript and denote the event sheaf more simply by E.

We write

    D : Set → Set

for the functor of finitely-supported probability distributions. For a set Y,

    D(Y) := { φ : Y → ℝ_{≥0} | supp(φ) is finite and ∑_{y∈Y} φ(y) = 1 },

where supp(φ) := { y ∈ Y | φ(y) > 0 }. The action of D on a function f : Y → Z is given by pushforward of distributions:

    D(f)(φ)(z) := ∑_{y ∈ f^{-1}(z)} φ(y).

Note that, in particular, the pushforward along a projection corresponds to taking marginal distributions.
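
As an illustration, here is a small Python sketch (ours, not the paper's) of the distribution functor's action: distributions are dictionaries from elements to probabilities, pushforward re-indexes them along a function, and marginalisation is the pushforward along a projection E(σ) → E(τ).

    from collections import defaultdict

    def pushforward(f, dist):
        """D(f): push a finitely-supported distribution forward along the function f."""
        out = defaultdict(float)
        for y, p in dist.items():
            out[f(y)] += p
        return dict(out)

    def marginal(dist, tau):
        """Marginalise a distribution on E(sigma) to a subset tau of sigma.

        Joint assignments are encoded as frozensets of (measurement, outcome) pairs,
        so the projection simply forgets the measurements outside tau.
        """
        return pushforward(lambda s: frozenset((x, o) for (x, o) in s if x in tau), dist)

    # A perfectly correlated distribution on the context {a, b}, marginalised to {a}.
    d = {frozenset({("a", 0), ("b", 0)}): 0.5, frozenset({("a", 1), ("b", 1)}): 0.5}
    print(marginal(d, {"a"}))   # each outcome of a has probability 0.5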

Definition 3.

An empirical model on a scenario ⟨X, Σ, O⟩, written e : ⟨X, Σ, O⟩, is a compatible family for Σ on the presheaf D ∘ E. More explicitly, it is a family e = (e_σ)_{σ∈Σ} where, for each σ ∈ Σ,

    e_σ ∈ D(E(σ))

is a probability distribution over the joint outcomes for the measurements in the context σ. Moreover, compatibility requires that the marginal distributions be well-defined: for any σ, τ ∈ Σ with τ ⊆ σ, one must have

    e_σ|_τ = e_τ,

i.e. for any t ∈ E(τ),

    e_τ(t) = ∑_{s ∈ E(σ), s|_τ = t} e_σ(s).

Note that compatibility can equivalently be expressed as the requirement that, for all facets (i.e. maximal contexts) σ and σ′ of Σ,

    e_σ|_{σ∩σ′} = e_{σ′}|_{σ∩σ′}.

Compatibility holds for all quantum realizable behaviours [11], and generalizes a property known as no-signalling [15], which we illustrate in the following example.
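
Continuing the Python sketch (our representation, not the paper's): a model can be stored as a dictionary from facets to distributions, and compatibility checked by comparing marginals on pairwise intersections, as in the equivalent facet formulation above.

    from collections import defaultdict
    from itertools import combinations

    def marginal(dist, tau):
        """Restrict each joint assignment (frozenset of (measurement, outcome) pairs) to tau."""
        out = defaultdict(float)
        for s, p in dist.items():
            out[frozenset((x, o) for (x, o) in s if x in tau)] += p
        return dict(out)

    def same_dist(d1, d2, eps=1e-9):
        keys = set(d1) | set(d2)
        return all(abs(d1.get(k, 0.0) - d2.get(k, 0.0)) <= eps for k in keys)

    def is_compatible(model):
        """Definition 3: the marginals of any two facets agree on their intersection.

        model: dict mapping each facet (frozenset of measurements) to a distribution
               over joint assignments on that facet.
        """
        for sigma, sigma2 in combinations(model, 2):
            tau = sigma & sigma2
            if not same_dist(marginal(model[sigma], tau), marginal(model[sigma2], tau)):
                return False
        return True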

Example 4.

Consider a bipartite black box shared between parties Alice and Bob, each of whom may choose to perform as their input one of two measurements. We call Alice’s measurements a_1 and a_2 and Bob’s measurements b_1 and b_2. Each measurement outputs an outcome that is either 0 or 1. The situation can be described by a measurement scenario in which X = {a_1, a_2, b_1, b_2}, O_x = {0, 1} for all x ∈ X, and the facets of Σ are

    {a_1, b_1},  {a_1, b_2},  {a_2, b_1},  {a_2, b_2}.

The probabilistic behaviour of such a black box could be given e.g. by Table I. This happens to show a well-studied behaviour known as a Popescu–Rohrlich (PR) box [16]. Rows of this table correspond to maximal measurement contexts, and columns to their joint outcomes. Each entry of the table gives the probability of obtaining as output the joint outcome indexing its column given that the input was the measurement context indexing its row. This behaviour is formalized as an empirical model e, with the entries in each row of the table directly specifying the probability distribution e_σ for a facet σ of Σ. It is straightforward to check that these distributions are compatible. The probability distributions for the non-maximal faces can then be obtained by marginalization. Note that these marginals are well-defined if and only if compatibility holds.

                 (0,0)   (0,1)   (1,0)   (1,1)
    (a_1, b_1)    1/2      0       0      1/2
    (a_1, b_2)    1/2      0       0      1/2
    (a_2, b_1)    1/2      0       0      1/2
    (a_2, b_2)     0      1/2     1/2      0

Table I: A PR box.

In this example, compatibility ensures that the local behaviour on Alice’s part of the box, as described by the probability distributions e_{{a_1}} and e_{{a_2}}, is independent of Bob’s choice of input, and vice versa. If this were not the case, then it would be possible e.g. for Bob to use the box to instantaneously signal to Alice by altering her locally observable behaviour through his choice of input.
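
In the Python sketch, the PR box can be written down directly from Table I and its no-signalling property checked by comparing Alice's marginals across Bob's two inputs (measurement names as above; this is our illustration, not code from the paper).

    from collections import defaultdict

    def marginal(dist, tau):
        """Restrict each joint assignment (frozenset of (measurement, outcome) pairs) to tau."""
        out = defaultdict(float)
        for s, p in dist.items():
            out[frozenset((x, o) for (x, o) in s if x in tau)] += p
        return dict(out)

    A = lambda **kw: frozenset(kw.items())   # convenient constructor for joint assignments

    pr_box = {
        frozenset({"a1", "b1"}): {A(a1=0, b1=0): 0.5, A(a1=1, b1=1): 0.5},
        frozenset({"a1", "b2"}): {A(a1=0, b2=0): 0.5, A(a1=1, b2=1): 0.5},
        frozenset({"a2", "b1"}): {A(a2=0, b1=0): 0.5, A(a2=1, b1=1): 0.5},
        frozenset({"a2", "b2"}): {A(a2=0, b2=1): 0.5, A(a2=1, b2=0): 0.5},
    }

    # Alice's statistics for a1 do not depend on whether Bob measures b1 or b2.
    print(marginal(pr_box[frozenset({"a1", "b1"})], {"a1"}))
    print(marginal(pr_box[frozenset({"a1", "b2"})], {"a1"}))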

Definition 5.

An empirical model e : ⟨X, Σ, O⟩ is said to be non-contextual if it is compatible with a global section for D ∘ E. In other words, e is non-contextual if there exists some d ∈ D(E(X)), a distribution over global assignments of outcomes to measurements, such that d|_σ = e_σ for all measurement contexts σ ∈ Σ. Otherwise, the empirical model is said to be contextual.

Noncontextuality characterizes classical behaviours. One way to understand this is that it reflects a situation in which the physical system being measured exists at all times in a definite state assigning outcome values to all properties that can be measured. Probabilistic behaviour may still arise, but only via stochastic mixtures or distributions on these global assignments. This may reflect an averaged or aggregate behaviour, or an epistemic limitation on our knowledge of the underlying global assignment.

III The algebraic viewpoint

III-A Operations on empirical models

We consider operations that transform and combine empirical models to form new ones. One should think of these as elementary operations that can be carried out classically, i.e. without using contextual resources beyond the empirical models given as arguments. For this reason, these operations are regarded as ‘free’ in the resource theory of contextuality.

Most of the operations presented here were introduced by a subset of the authors in [2]. A novelty is the idea of conditional measurement, which is intended to capture (a one-step version of) the kind of classical control of quantum systems that is used in adaptive measurement-based quantum computation schemes. Iterating this construction yields longer protocols of this kind.

For each operation, we give some brief motivating explanation followed by its definition. All the operations are summarized in Table II, as typing rules.

  • Zero model. Consider the unique scenario with no measurements:

        ⟨∅, Δ_0, ( )⟩.

    There is a single empirical model on this scenario, which we denote by z.

  • Singleton model. Consider the unique scenario that has a single measurement with a single outcome:

        ⟨1, Δ_1, (1)⟩.

    There is a single empirical model on this scenario, which we denote by u.

  • Translation of measurements. From an empirical model in a given scenario, we can build another in a different scenario, by mapping the measurements in the latter scenario to those in the former, taking care to respect compatibilities. In particular, this can capture the operation of restricting the allowed measurements (or the compatibilities). Note that it can also mean that two measurements in the new scenario are just different aliases for the same measurement being performed in the original model.

    The preservation of compatibilities is captured by the notion of simplicial map. Given simplicial complexes Σ and Σ′ on sets of vertices X and X′, respectively, a simplicial map f : Σ → Σ′ is a function between the vertex sets, f : X → X′, that maps faces of Σ to faces of Σ′, i.e. such that for all σ ∈ Σ, f(σ) ∈ Σ′.

    Given an empirical model e : ⟨X, Σ, O⟩ and a simplicial map f : Σ′ → Σ, the model f*e : ⟨X′, Σ′, f*O⟩, where (f*O)_{x′} := O_{f(x′)} for all x′ ∈ X′, is defined by pulling back along the map f: for any σ ∈ Σ′ and s ∈ E(σ),

        (f*e)_σ(s) := e_{f(σ)}(t)   if s = t ∘ f|_σ for some (necessarily unique) t ∈ E(f(σ)),   and 0 otherwise.

    Concretely, f*e can be implemented from e as follows: when a measurement x′ ∈ X′ is to be performed, one performs f(x′) instead. Requiring f to be a simplicial map guarantees that any set of compatible measurements in Σ′ can indeed be jointly measured in this manner.

  • Coarse-graining of outcomes. We can similarly consider a translation of outcomes.

    Given e : ⟨X, Σ, O⟩ and a family of functions h = (h_x : O_x → O′_x)_{x∈X}, define an empirical model e/h on the scenario ⟨X, Σ, O′⟩ as follows: for each σ ∈ Σ and s ∈ E_{O′}(σ),

        (e/h)_σ(s) := ∑ { e_σ(t) | t ∈ E_O(σ) such that h_x(t(x)) = s(x) for all x ∈ σ }.

    One can use e to implement e/h: one just performs the measurement x and applies the corresponding function h_x to the outcome obtained.

  • Probabilistic mixing. We can consider convex combinations of empirical models: from two models on the same scenario, a new model is constructed in which a coin, which may be biased, is flipped to choose which of the two models to use.

    Given empirical models e and e′ in ⟨X, Σ, O⟩ and a probability value λ ∈ [0, 1], the model e +_λ e′ is given as follows: for any σ ∈ Σ and s ∈ E(σ),

        (e +_λ e′)_σ(s) := λ e_σ(s) + (1 − λ) e′_σ(s).

  • Controlled choice. From two empirical models, we can construct a new model that can be used as either one or the other. The choice is determined by which measurements are performed, but the compatibility structure enforces that only one of the original models ends up being used.

    Let e : ⟨X, Σ, O⟩ and e′ : ⟨X′, Σ′, O′⟩ be empirical models. We consider a new scenario built out of these two. The measurements are X ⊔ X′, with outcomes given accordingly by the copairing [O, O′], i.e.:

        [O, O′]_x := O_x for x ∈ X,   [O, O′]_{x′} := O′_{x′} for x′ ∈ X′.

    The contexts are given by the coproduct of simplicial complexes

        Σ + Σ′ := { σ ⊆ X ⊔ X′ | σ ∈ Σ or σ ∈ Σ′ },

    ensuring that all the measurements performed in a single use come from the same one of the two original scenarios, so that only one of the empirical models is used.

    The empirical model e & e′ : ⟨X ⊔ X′, Σ + Σ′, [O, O′]⟩ is given as

        (e & e′)_σ := e_σ if σ ∈ Σ,   (e & e′)_σ := e′_σ if σ ∈ Σ′.

  • Tensor product. From two empirical models, a new one is built that allows the use of both models independently, in parallel.

    Let e : ⟨X, Σ, O⟩ and e′ : ⟨X′, Σ′, O′⟩ be empirical models. As in the previous case, consider a new scenario with measurements X ⊔ X′ and outcomes [O, O′]. The difference is that the contexts are now given by the simplicial join

        Σ ⋆ Σ′ := { σ ⊔ σ′ | σ ∈ Σ, σ′ ∈ Σ′ },

    corresponding to the fact that one may use measurements from the two scenarios in parallel. The empirical model e ⊗ e′ : ⟨X ⊔ X′, Σ ⋆ Σ′, [O, O′]⟩ is given as

        (e ⊗ e′)_{σ⊔σ′}(⟨s, s′⟩) := e_σ(s) · e′_{σ′}(s′)

    for all σ ∈ Σ, σ′ ∈ Σ′, s ∈ E(σ), and s′ ∈ E(σ′). (A small code sketch of this operation and of probabilistic mixing appears after this list.)

  • Conditioning on a measurement. Given an empirical model, one may perform two compatible measurements in sequence. But in such a situation, when one decides to perform the second measurement, the outcome of the first is already known. We could therefore consider the possibility of choosing which measurement to perform second depending on the outcome observed for the first measurement. This process can be considered as a measurement in its own right yielding as its outcome the pair consisting of the outcomes of the two constituent measurements. We can extend the original empirical model with such an extra measurement.

    In order to define this operation, we need the concept of link of a face in a simplicial complex. Given a simplicial complex Σ and a face σ ∈ Σ, the link of σ in Σ is the subcomplex of Σ whose faces are

        lk_Σ(σ) := { τ ∈ Σ | τ ∩ σ = ∅ and τ ∪ σ ∈ Σ }.

    If we think of Σ as representing the compatibility structure of a measurement scenario, and suppose that the measurements in a face σ have already been performed, then the complex lk_Σ(σ) represents the compatibility structure of the measurements that may still be performed, ensuring that overall one always performs a set of compatible measurements according to Σ.

    Let e : ⟨X, Σ, O⟩ be an empirical model, and take a measurement x ∈ X and a family of measurements y = (y_o)_{o∈O_x} with each y_o a vertex of the complex

        lk_Σ({x}).

    Consider a new measurement, abbreviated x?y, corresponding to performing x and then, depending on the outcome o observed, performing y_o. We call such a measurement a conditional measurement. Its outcome set is the dependent pair type

        O_{x?y} := ∑_{o∈O_x} O_{y_o} = { (o, o′) | o ∈ O_x, o′ ∈ O_{y_o} }.

    The compatibility structure is extended to take the new measurement into account:

        Σ[x?y] := Σ ∪ { σ ∪ {x?y} | σ ∈ Σ and σ ∪ {x, y_o} ∈ Σ for all o ∈ O_x }.

    Define the new model

        e[x?y] : ⟨X ∪ {x?y}, Σ[x?y], O[x?y]⟩

    as follows: for the old faces σ ∈ Σ,

        e[x?y]_σ := e_σ;

    for the new faces of the form σ ∪ {x?y} satisfying σ ∪ {x, y_o} ∈ Σ for all o ∈ O_x, we have, for any s ∈ E(σ) and (o, o′) ∈ O_{x?y},

        e[x?y]_{σ∪{x?y}}( s[x?y ↦ (o, o′)] ) := e_{σ∪{x,y_o}}( s[x ↦ o, y_o ↦ o′] ).
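
As promised above, here is a short Python sketch of two of these operations, probabilistic mixing and the tensor product, in the same dictionary-based representation used earlier. It is our own illustration under that encoding, not code from the paper.

    def mix(e1, e2, lam):
        """Probabilistic mixing e1 +_lam e2 of two models on the same scenario."""
        out = {}
        for sigma in e1:
            support = set(e1[sigma]) | set(e2[sigma])
            out[sigma] = {s: lam * e1[sigma].get(s, 0.0) + (1 - lam) * e2[sigma].get(s, 0.0)
                          for s in support}
        return out

    def tensor(e1, e2):
        """Tensor product of two models: both used independently, in parallel.

        Models are given context-by-context; measurement names are assumed disjoint,
        and the contexts of the join are unions of one context from each side.
        """
        out = {}
        for s1 in e1:
            for s2 in e2:
                out[s1 | s2] = {a1 | a2: p1 * p2
                                for a1, p1 in e1[s1].items()
                                for a2, p2 in e2[s2].items()}
        return out

    # Two independent fair coins, one measurement each.
    coin_x = {frozenset({"x"}): {frozenset({("x", 0)}): 0.5, frozenset({("x", 1)}): 0.5}}
    coin_y = {frozenset({"y"}): {frozenset({("y", 0)}): 0.5, frozenset({("y", 1)}): 0.5}}
    print(tensor(coin_x, coin_y)[frozenset({"x", "y"})])   # the four joint outcomes, 1/4 each
    print(mix(coin_x, coin_x, 0.5) == coin_x)              # mixing a model with itself changes nothing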

III-B The contextual fraction

The contextual fraction is a quantitative measure of the degree to which any empirical model exhibits contextuality [11, 2], which we define here using the operation of probabilistic mixing.

Definition 6.

Given an empirical model e, we consider the set of all possible decompositions

    e = d +_λ d′

such that d is non-contextual. The non-contextual fraction of e, denoted NCF(e), is defined to be the maximum value of λ over all such decompositions. The contextual fraction of e, denoted CF(e), is then defined as

    CF(e) := 1 − NCF(e).
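
The non-contextual fraction can be computed by linear programming, as discussed in [11, 2]. The following Python sketch (ours; it brute-forces the global assignments, so it is only feasible for small scenarios) sets up that LP with scipy.optimize.linprog: maximise the total weight of a sub-probability distribution on global assignments whose marginals are dominated by the empirical model.

    from itertools import product
    from scipy.optimize import linprog

    def noncontextual_fraction(model, outcomes):
        """NCF(e) for a model given as {facet: {assignment: probability}}.

        outcomes: dict mapping each measurement to its list of outcomes.
        Assignments are encoded as frozensets of (measurement, outcome) pairs.
        """
        X = sorted(outcomes)
        globals_ = [dict(zip(X, vals)) for vals in product(*(outcomes[x] for x in X))]

        # One inequality per (facet, local assignment):
        #   sum of weights of global assignments restricting to it <= its probability in e.
        rows, rhs = [], []
        for sigma, dist in model.items():
            meas = sorted(sigma)
            for vals in product(*(outcomes[x] for x in meas)):
                s = dict(zip(meas, vals))
                rows.append([1.0 if all(g[x] == s[x] for x in meas) else 0.0 for g in globals_])
                rhs.append(dist.get(frozenset(s.items()), 0.0))

        # Maximise the total weight (linprog minimises, so negate the objective).
        res = linprog(c=[-1.0] * len(globals_), A_ub=rows, b_ub=rhs,
                      bounds=(0, None), method="highs")
        return -res.fun

    # For the PR box of Table I this yields NCF = 0, hence CF = 1: maximal contextuality.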

A crucial property for a useful measure of contextuality is that it should be a monotone for the free operations of our resource theory. That is, it should be non-increasing under those elementary operations on empirical models that can be carried out classically.

Proposition 7.

For the operations we have introduced in Section III-A, the contextual fraction satisfies the expected monotonicity properties. Those for the operations already present in [2] were established there; for the new conditional measurement operation we have:

  • CF(e[x?y]) = CF(e),
    i.e. adjoining a conditional measurement leaves the contextual fraction unchanged.

Proof.

We will only show that CF(e[x?y]) = CF(e), as the other statements were proved in [2]. The inequality CF(e) ≤ CF(e[x?y]) holds by monotonicity of measurement translation, since e = i*(e[x?y]) where i : Σ → Σ[x?y] is the inclusion map.

For the other direction, note that (−)[x?y] preserves convex combinations and deterministic empirical models, i.e. those models that arise from (a delta distribution on) a single global assignment. Since non-contextual models are precisely those that are convex combinations of deterministic models, the operation (−)[x?y] takes non-contextual models to non-contextual models. Now, if e = d +_λ d′ where d is non-contextual, then e[x?y] = d[x?y] +_λ d′[x?y] and d[x?y] is also non-contextual. Consequently, NCF(e[x?y]) ≥ NCF(e), which means that CF(e[x?y]) ≤ CF(e). ∎

III-C Equational theory

We consider terms built out of variables and the operations of Section III:

    t ::= v | z | u | f*t | t/h | t +_λ t′ | t & t′ | t ⊗ t′ | t[x?y]

according to the ‘typing’ rules in Table II. Note that, due to the restriction forbidding repeated variables when building contexts, there is at most one occurrence of each variable in each well-typed term. We can interpret such a term as a composed ‘free’ operation on empirical models. More specifically, the typed term

    { v_i : ⟨X_i, Σ_i, O_i⟩ }_{i=1,…,n} ⊢ t : ⟨X, Σ, O⟩

represents an operation that takes empirical models e_i : ⟨X_i, Σ_i, O_i⟩, for i = 1, …, n, and builds a new empirical model, denoted t[e_1, …, e_n], on the scenario ⟨X, Σ, O⟩, according to the definition of the elementary operations given in the itemized list above.

Terms without variables should therefore correspond to empirical models that are ‘free’ as resources. Indeed, they are precisely the non-contextual ones.

Proposition 8.

A term without variables always represents a non-contextual empirical model. Conversely, every non-contextual empirical model can be represented by a term without variables.

Proof.

Using Proposition 7, it is straightforward to show by induction that every term without variables represents a model e with CF(e) = 0, which proves the first claim.

For the second claim, note that since probabilistic mixing is an allowed operation and non-contextual empirical models are precisely the mixtures of deterministic models, it suffices to show that every deterministic empirical model can be built from the operations. So let e : ⟨X, Σ, O⟩ be deterministic, and write s(x) for the certain outcome it assigns to each measurement x ∈ X. Then e = (f*u)/h, where f : Σ → Δ_1 is the unique simplicial map and h = (h_x)_{x∈X} is defined by h_x : ⋆ ↦ s(x). ∎

    (var)     Γ, { v : ⟨X, Σ, O⟩ } ⊢ v : ⟨X, Σ, O⟩

    (zero)    Γ ⊢ z : ⟨∅, Δ_0, ( )⟩

    (singl)   Γ ⊢ u : ⟨1, Δ_1, (1)⟩

    (meas)    Γ ⊢ t : ⟨X, Σ, O⟩,  f : Σ′ → Σ simplicial   ⟹   Γ ⊢ f*t : ⟨X′, Σ′, f*O⟩

    (outc)    Γ ⊢ t : ⟨X, Σ, O⟩,  h = (h_x : O_x → O′_x)_{x∈X}   ⟹   Γ ⊢ t/h : ⟨X, Σ, O′⟩

    (mix)     Γ ⊢ t : ⟨X, Σ, O⟩,  Γ′ ⊢ t′ : ⟨X, Σ, O⟩,  λ ∈ [0, 1]   ⟹   Γ, Γ′ ⊢ t +_λ t′ : ⟨X, Σ, O⟩

    (choice)  Γ ⊢ t : ⟨X, Σ, O⟩,  Γ′ ⊢ t′ : ⟨X′, Σ′, O′⟩   ⟹   Γ, Γ′ ⊢ t & t′ : ⟨X ⊔ X′, Σ + Σ′, [O, O′]⟩

    (prod)    Γ ⊢ t : ⟨X, Σ, O⟩,  Γ′ ⊢ t′ : ⟨X′, Σ′, O′⟩   ⟹   Γ, Γ′ ⊢ t ⊗ t′ : ⟨X ⊔ X′, Σ ⋆ Σ′, [O, O′]⟩

    (cond)    Γ ⊢ t : ⟨X, Σ, O⟩,  x ∈ X,  y = (y_o)_{o∈O_x} with each y_o a vertex of lk_Σ({x})   ⟹   Γ ⊢ t[x?y] : ⟨X ∪ {x?y}, Σ[x?y], O[x?y]⟩

Table II: Free operations on empirical models

We present a list of equations (1)–(28) between terms, with term variables denoted t, t′, t″. These equations are intended to capture equality up to a congruence – isomorphism of empirical models, defined below. Implicit is the assumption that the terms on both sides of the equality signs are well-typed in the same typing context, according to the rules of Table II. That is, as we shall see in Proposition 10, when we write t = t′, we are implicitly thinking of all the typing contexts Γ such that Γ ⊢ t : ⟨X, Σ, O⟩ and Γ ⊢ t′ : ⟨X, Σ, O⟩.

For most of these equations, it is enough that the term on the left-hand side be well-typed in a given context for the term on the right-hand side to also be. The exception to this rule is equation (23). It is important not to be misled into reading it as meaning that any two consecutive extensions with conditional measurements commute. This is only the case when both are conditional measurements of the original model, i.e. when the second conditional measurement does not make use of the first. In the notation of the equation in question, assuming that the term on the left is well-typed, additional side conditions – expressing exactly that the second conditional measurement does not depend on the first – are required in order to be able to also type the term on the right.

  • The controlled choice is a commutative monoid with neutral element :

    (1)
    (2)
    (3)
  • The product is a commutative monoid with neutral element :

    (4)
    (5)
    (6)
  • Standard axioms of convex combinations:

    (7)
    (8)
    (9)
    (10)
  • Measurement and outcome transformations:

    (11)
    (12)
    (13)

    where .

  • Convex combinations and the other operations:

    (14)
    (15)
    (16)
    (17)
    (18)
  • Transformations and other operations:

    (19)
    (20)
    (21)
    (22)
  • Conditional measurements and other operations

    (23)
    (24)

    where, for , we have that is the extension .

    (25)

    where for , we have for all , and extends the family with mapping a pair to .

    (26)
    (27)
  • Choice can be eliminated:

    (28)

    where is the inclusion of simplicial complexes (it acts as identity on the vertices).

The above equational theory is intended to capture equality up to the following notion of isomorphism.

Definition 9.

Two empirical models e : ⟨X, Σ, O⟩ and e′ : ⟨X′, Σ′, O′⟩ are said to be isomorphic, written e ≅ e′, if there is a simplicial isomorphism f : Σ → Σ′ and a family of bijections h = (h_x : O_x → O′_{f(x)})_{x∈X} such that, for all σ ∈ Σ and s ∈ E(σ),

    e_σ(s) = e′_{f(σ)}(s′),   where s′(f(x)) := h_x(s(x)) for all x ∈ σ.

Note that these isomorphisms coincide exactly with the isomorphisms of the category that is defined in the next section.

Proposition 10 (Soundness).

The equational theory given by equations (1)–(28) is sound. That is, if t = t′ is one of these equations, then for any context Γ = { v_i : ⟨X_i, Σ_i, O_i⟩ }_{i=1,…,n} such that Γ ⊢ t : ⟨X, Σ, O⟩ and Γ ⊢ t′ : ⟨X, Σ, O⟩, and for any empirical models e_i : ⟨X_i, Σ_i, O_i⟩, we have

    t[e_1, …, e_n] ≅ t′[e_1, …, e_n].

The proof is a tedious but straightforward verification of the conditions. It is an open question whether this equational theory is complete. An important step towards proving completeness – or towards finding the missing equations – is provided by the following normal form result. It establishes that, using the equations, we can transform any term into one where the operations are applied in a certain order.

Proposition 11 (Normal form).

Let Γ ⊢ t : ⟨X, Σ, O⟩. Then t can be rewritten using equations (1)–(28) into a term in which, reading from the innermost level outwards, the operations occur in the following order: variables and the base cases z and u, then tensor products, then conditional measurement extensions, then translation of measurements and coarse-graining of outcomes, and finally probabilistic mixing at the top level.

Proof.

We are always using the rules from left to right (see remark immediately before the equations).

First, note that rule (28) allows us to rewrite the choice operation in terms of the others, so we can assume that t has no occurrences of choice.

Using rules (14)–(18), all uses of probabilistic mixing can be taken to the top level. By the kind of associativity rule (10), t can then be rewritten to a term of the form t_1 +_{λ_1} (t_2 +_{λ_2} (⋯ +_{λ_{n−1}} t_n)) where the terms t_i do not use the mixing operation.

Now, let t_i be a term without mixing. By a similar argument, using equations (20), (22) and (24)–(25), we can commute translation of measurements and outcomes to the top level relative to the remaining operations. Using (13), all coarse-grainings of outcomes can be commuted upwards, and using (11) and (12), one can combine consecutive applications of either of these two operations. Therefore t_i can be rewritten as a single translation of measurements and a single coarse-graining of outcomes applied to some term s_i without occurrences of mixing, translation of measurements, or coarse-graining of outcomes.

From rule (27), s_i can be rewritten to have the form r_i[x_1?y_1]⋯[x_k?y_k] where r_i only uses base cases and the product operation. ∎

IV The categorical viewpoint

In this section we make precise the idea of using one empirical model to simulate the behaviour of another one. In fact, there are several notions of simulation, depending on the powers allowed to those doing the simulating. The simplest notion is deterministic and has a clear intuitive meaning: to use e : ⟨X, Σ, O⟩ to simulate d : ⟨Y, T, Q⟩ amounts to giving a measurement π(y) ∈ X for every y ∈ Y and a deterministic way of interpreting the outcomes of π(y) as outcomes of y, i.e. a map h_y : O_{π(y)} → Q_y. For such a protocol to succeed, π has to be simplicial and the outcome statistics of e must, after interpretation, give rise to the statistics of d.

One can then consider ways of extending such simulations. In [10] more expressive power was obtained by allowing π(y) to be a set of measurements instead of a singleton, and by allowing outcomes to be interpreted stochastically.

Here we obtain even more general simulations by letting π(y) be an adaptive measurement protocol on ⟨X, Σ, O⟩. Classical shared randomness can then be modelled by allowing the use of auxiliary non-contextual empirical models.

IV-A Deterministic simulations

Definition 12.

Let ⟨X, Σ, O⟩ and ⟨Y, T, Q⟩ be measurement scenarios. A deterministic morphism (π, h) : ⟨X, Σ, O⟩ → ⟨Y, T, Q⟩ consists of:

  • a simplicial map π : T → Σ;

  • a natural transformation h : E_{⟨X,Σ,O⟩} ∘ π ⇒ E_{⟨Y,T,Q⟩}; equivalently, a family of maps h_y : O_{π(y)} → Q_y for each y ∈ Y.

The composite of the morphisms (π, h) : ⟨X, Σ, O⟩ → ⟨Y, T, Q⟩ and (π′, h′) : ⟨Y, T, Q⟩ → ⟨Z, U, R⟩ is given by (π ∘ π′, (h′_z ∘ h_{π′(z)})_{z∈Z}).

Given an empirical model e : ⟨X, Σ, O⟩, its pushforward along a deterministic morphism (π, h) : ⟨X, Σ, O⟩ → ⟨Y, T, Q⟩ is the empirical model (π, h)_*e : ⟨Y, T, Q⟩ defined by: for any σ ∈ T and t ∈ E(σ),

    ((π, h)_*e)_σ(t) := ∑ { e_{π(σ)}(s) | s ∈ E(π(σ)) such that h_y(s(π(y))) = t(y) for all y ∈ σ }.

Let e : ⟨X, Σ, O⟩ and d : ⟨Y, T, Q⟩ be empirical models. Then a deterministic simulation e → d is a deterministic morphism (π, h) : ⟨X, Σ, O⟩ → ⟨Y, T, Q⟩ such that

    (π, h)_*e = d.

The category of empirical models and deterministic simulations is denoted by Emp_det.

The reason that natural transformations h : E_{⟨X,Σ,O⟩} ∘ π ⇒ E_{⟨Y,T,Q⟩} correspond to families of maps h_y : O_{π(y)} → Q_y for each y ∈ Y is the following: both E_{⟨X,Σ,O⟩} ∘ π and E_{⟨Y,T,Q⟩} are sheaves on a discrete space, and hence morphisms between them can be glued along any covering – and in particular along the covering by singletons.
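
Concretely, in the dictionary-based Python sketch used throughout, a deterministic morphism is a pair of dictionaries (pi, h) and the pushforward just re-labels measurements and translates outcomes; a deterministic simulation is then the assertion that the pushforward of e equals d. This is our illustration of the definition, not code from the paper.

    from collections import defaultdict

    def pushforward(e, pi, h, target_contexts):
        """Push the model e forward along a deterministic morphism (pi, h).

        pi: dict, each target measurement y -> the source measurement pi[y] simulating it
        h:  dict, each target measurement y -> function from outcomes of pi[y] to outcomes of y
        e is assumed to be given on (at least) the contexts pi(sigma) that are needed.
        """
        result = {}
        for sigma in target_contexts:
            src = frozenset(pi[y] for y in sigma)
            dist = defaultdict(float)
            for s, p in e[src].items():
                s = dict(s)
                dist[frozenset((y, h[y](s[pi[y]])) for y in sigma)] += p
            result[sigma] = dict(dist)
        return result

    # A uniform three-outcome measurement x deterministically simulates a biased bit y.
    e = {frozenset({"x"}): {frozenset({("x", o)}): 1 / 3 for o in (0, 1, 2)}}
    d = pushforward(e, pi={"y": "x"}, h={"y": lambda o: o % 2},
                    target_contexts=[frozenset({"y"})])
    print(d)   # y = 0 with probability 2/3, y = 1 with probability 1/3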

The category Emp_det and the category of measurement scenarios are in fact symmetric monoidal with the product ⊗ defined in Section III-A. The action on morphisms is given by

    (π, h) ⊗ (π′, h′) := (π ⊔ π′, [h, h′]).

IV-B Measurement protocols

Deterministic simulations are fairly limited in their expressive power. For instance, one might want to use classical randomness in simulations. If one thinks of an empirical model as a black box, even more is possible: one could first perform a measurement, then based on the observed outcome choose which compatible measurement to perform next, and so on. Such procedures are known as measurement protocols [12] or wirings [17] in the literature on non-locality.

The main task of this section is to formalize carefully the notion of measurement protocol. We define an operation MP that takes a measurement scenario ⟨X, Σ, O⟩ and builds a new scenario MP⟨X, Σ, O⟩, whose measurements are all the measurement protocols over ⟨X, Σ, O⟩. This turns out to be functorial, and moreover, a comonad. Hence, using the co-Kleisli category, one can think of more general simulations e → d as deterministic simulations MP(e) → d. Moreover, we will see that classical randomness comes for free by considering simulations of the form MP(e ⊗ c) → d, where c is a non-contextual empirical model.

A measurement protocol is a certain kind of decision tree: the root is the first measurement, and the outcomes obtained dictate which measurements to choose next, i.e. which branch of the tree to pick. Rooted trees can be formalized in various ways: e.g. recursively, as certain graphs, or in terms of prefix-closed sets of words. We have chosen to formalize them using the latter approach. We try to keep the intuitive picture in mind as it can give sense to the proofs, which may seem somewhat technical otherwise.

Definition 13.

A run on a measurement scenario ⟨X, Σ, O⟩ is a sequence (x_1, o_1) ⋯ (x_n, o_n) such that x_1, …, x_n are distinct, {x_1, …, x_n} ∈ Σ, and each o_i ∈ O_{x_i}.

A run r = (x_1, o_1) ⋯ (x_n, o_n) determines a context σ_r := {x_1, …, x_n} and a joint assignment on that context, s_r ∈ E(σ_r), given by s_r(x_i) := o_i. Two runs r and r′ are said to be consistent if they agree on common measurements, i.e. for every x ∈ σ_r ∩ σ_{r′} we have s_r(x) = s_{r′}(x).

Given runs r and r′, we denote their concatenation by r · r′. Note that r · r′ might not be a run.

Definition 14.

A measurement protocol on ⟨X, Σ, O⟩ is a non-empty set P of runs satisfying the following conditions:

  1. if r · (x, o) ∈ P then r ∈ P;

  2. if r · (x, o) ∈ P, then r · (x, o′) ∈ P for every o′ ∈ O_x;

  3. if r · (x, o) ∈ P and r · (x′, o′) ∈ P, then x = x′.

One can think of such a measurement protocol as a (deterministic) strategy for interacting with an empirical model, seen as a black box whose interface is given by its measurement scenario: Condition (iii) expresses that the previously observed outcomes determine the next measurement to be performed, while Condition (ii) captures the fact that every outcome of a performed measurement may in principle be observed and so the protocol must specify how to react to each possibility, either by performing a new measurement or by stopping.
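
In the Python sketch, a run can be encoded as a tuple of (measurement, outcome) pairs and a protocol as a set of such tuples; the three conditions of Definition 14 are then directly checkable. The function below is our illustration of that check (it also verifies that each run really is a run of the given scenario).

    def is_protocol(P, outcomes, faces):
        """Check Definition 14 for a set P of runs on a scenario with the given outcomes and faces."""
        if not P:
            return False
        for r in P:
            xs = [x for x, _ in r]
            # Each run uses distinct, jointly compatible measurements with legal outcomes.
            if len(set(xs)) != len(xs) or frozenset(xs) not in faces:
                return False
            if any(o not in outcomes[x] for x, o in r):
                return False
            if r:
                prefix, (x, _) = r[:-1], r[-1]
                if prefix not in P:                                          # condition 1
                    return False
                if any(prefix + ((x, o2),) not in P for o2 in outcomes[x]):  # condition 2
                    return False
        for r in P:
            # Condition 3: after any run, at most one next measurement is offered.
            nexts = {s[len(r)][0] for s in P if len(s) == len(r) + 1 and s[:len(r)] == r}
            if len(nexts) > 1:
                return False
        return True

    # Adaptive protocol: measure a; if the outcome is 0, additionally measure b; otherwise stop.
    O = {"a": (0, 1), "b": (0, 1)}
    faces = {frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}
    P = {(), (("a", 0),), (("a", 1),), (("a", 0), ("b", 0)), (("a", 0), ("b", 1))}
    print(is_protocol(P, O, faces))   # True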

Definition 15.

Given a scenario ⟨X, Σ, O⟩, we build a scenario MP⟨X, Σ, O⟩:

  • its set of measurements is the set of measurement protocols on ⟨X, Σ, O⟩;

  • the outcome set of a measurement protocol P is its set of maximal runs, i.e. those r ∈ P that are not a proper prefix of any other r′ ∈ P;

  • a set {P_1, …, P_n} of measurement protocols is compatible whenever for any choice of pairwise consistent runs r_i ∈ P_i, for i = 1, …, n, we have σ_{r_1} ∪ ⋯ ∪ σ_{r_n} ∈ Σ.

Definition 16.

Given an empirical model e : ⟨X, Σ, O⟩, we define the empirical model MP(e) : MP⟨X, Σ, O⟩ as follows. For a compatible set {P_1, …, P_n} of measurement protocols and an assignment of a maximal run r_i to each P_i, we set

    MP(e)_{{P_1,…,P_n}}(r_1, …, r_n) := e_{σ_{r_1} ∪ ⋯ ∪ σ_{r_n}}(s_{r_1} ∪ ⋯ ∪ s_{r_n})

when the runs r_1, …, r_n are pairwise consistent, and 0 otherwise.

One way of thinking about the definition above is that it identifies a measurement protocol with the set of all situations in which one might find oneself while carrying out the protocol. Informally, compatibility of measurement protocols means they can be interleaved in any order whatsoever, and when running them one never ends up performing incompatible measurements.

It is clear that MP(e) satisfies the compatibility (or no-signalling) condition: calculating the probability of a joint outcome in MP(e) corresponds to calculating the probability of a joint outcome in e, so no-signalling in e gives no-signalling for MP(e).
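
For a single protocol, Definition 16 says that the probability of a maximal run is just the probability, under e, of the joint assignment that the run determines. The Python fragment below illustrates this for the adaptive protocol used earlier (our encoding; e is assumed to be given on the contexts that the runs determine, otherwise marginalise first).

    def run_probability(e, run):
        """MP(e)-probability of a maximal run of a single protocol.

        The run (x1,o1)...(xn,on) determines the context {x1,...,xn} and the joint
        assignment xi -> oi; its probability is e evaluated there.
        """
        context = frozenset(x for x, _ in run)
        return e[context].get(frozenset(run), 0.0)

    # A model in which a and b are jointly measurable and independently uniform.
    e = {
        frozenset({"a"}): {frozenset({("a", 0)}): 0.5, frozenset({("a", 1)}): 0.5},
        frozenset({"a", "b"}): {
            frozenset({("a", 0), ("b", 0)}): 0.25, frozenset({("a", 0), ("b", 1)}): 0.25,
            frozenset({("a", 1), ("b", 0)}): 0.25, frozenset({("a", 1), ("b", 1)}): 0.25,
        },
    }
    # Maximal runs of the protocol "measure a; if 0, also measure b": probabilities sum to 1.
    for r in [(("a", 1),), (("a", 0), ("b", 0)), (("a", 0), ("b", 1))]:
        print(r, run_probability(e, r))   # 0.5, 0.25, 0.25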

IV-C Measurement protocols as a comonad

We now show that MP is in fact a comonad on empirical models. We do this by verifying the conditions for a co-Kleisli triple. We work in the category of measurement scenarios – getting a comonad on empirical models requires little further effort.

First, we need to build a deterministic morphism

    MP⟨X, Σ, O⟩ → ⟨X, Σ, O⟩.

Intuitively, it is clear how this should be done: every measurement x ∈ X can be viewed as a measurement protocol that only measures x and stops afterwards. Formally, the simplicial map underlying this morphism is defined by mapping each measurement x to the protocol

    { ε } ∪ { (x, o) | o ∈ O_x },

where ε is the empty word. The map of outcomes is given by sending an outcome (i.e. maximal run) (x, o) of this protocol to o.
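
In code, the counit's underlying data is trivial to write down: each measurement becomes the one-step protocol that measures it and stops, and each of that protocol's maximal runs is sent back to the outcome it records (again, our illustrative encoding).

    def one_step_protocol(x, outcomes):
        """The protocol associated to a single measurement x: perform x and stop.

        Its runs are the empty run together with (x, o) for each outcome o of x;
        the maximal runs are therefore in bijection with the outcomes of x.
        """
        return {()} | {((x, o),) for o in outcomes[x]}

    def outcome_of_maximal_run(run):
        """Send a maximal run (x, o) of the one-step protocol back to the outcome o."""
        return run[0][1]

    P = one_step_protocol("a", {"a": (0, 1)})
    print(sorted(P))                               # [(), (('a', 0),), (('a', 1),)]
    print(outcome_of_maximal_run((("a", 1),)))     # 1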

Next, for scenarios ⟨X, Σ, O⟩ and ⟨Y, T, Q⟩, we define an extension operator that lifts a morphism

    (π, h) : MP⟨X, Σ, O⟩ → ⟨Y, T, Q⟩

to a morphism

    (π, h)* : MP⟨X, Σ, O⟩ → MP⟨Y, T, Q⟩.

Given a measurement protocol P over ⟨Y, T, Q⟩, we wish to define (π, h)*(P), a measurement protocol over ⟨X, Σ, O⟩. Intuitively, the extension works as follows: when running P, any time one needs to perform a measurement y, one performs the measurement protocol π(y) instead, maps its outcome to an outcome of y using h_y, and consults the measurement protocol P to see what to do next. However, there is a slight catch: if a measurement protocol requires one to perform a measurement in ⟨X, Σ, O⟩ that has already been done, there is no need to redo it – one can simply reuse the previous outcome. To make this precise, we first define a merge operation for compatible runs: the merge of r and r′ is r followed by those entries of r′ whose measurements do not already occur in r, in the order in which they occur in r′.

We extend this merge operation to all pairs of runs by declaring it to be undefined whenever r and r′ are not compatible.

We are now in a position to define (π, h)*(P), but we first motivate the formal definition. What are the runs of (π, h)*(P)? At least, they should contain those that can be interpreted as runs of P: for instance, if (y_1, q_1) ⋯ (y_n, q_n) ∈ P, then (π, h)*(P) contains runs that can be interpreted as (y_1, q_1) ⋯ (y_n, q_n) by (π, h). These are precisely those obtained by merging r_1, …, r_n in order, where each r_i is a maximal run of π(y_i) satisfying h_{y_i}(r_i) = q_i. However, (π, h)*(P) contains more runs, since any prefix of such a run must be included.

We define (π, h)*(P) as the closure of the set