## 1 Introduction

Controlled experimentation is the gold standard for causal inference due to the simple fact that it can make experimental groups statistically equivalent in all ways but for the treatment assignment, so that any observed differences can only be attributed to a causal effect or to noise. The less noise, the more certain we are that differences attest to a true causal effect. Therefore, in the pursuit of scientific discoveries, it behooves us to minimize noise. One way to do so is to increase the number of units, but this is often expensive and so an economical researcher should seek to eliminate as much noise as possible on a given budget of units.

Noise in experiments arises both due to the randomness in sampling units from a population and due to the randomness of treatment assignment. The latter, of course, is completely within the control of the researcher and is often called the design. Therefore, conditional on the sample (or, similarly, if we treat sampling as non-random), we should seek the design with minimal noise.

A common design is complete randomization, where a random subset of fixed size is chosen for treatment. However, such a design may well result in an assignment that appears “imbalanced.” An oft-quoted criticism by Student (W. Gosset) is that it “would be pedantic to continue with an arrangement of [field] plots known beforehand to likely lead to a misleading conclusion,” referring to completely randomized experiments in agriculture (Student, 1938). Both the judgment of “imbalance” and the supposed foreknowledge of misleading conclusions, however, must depend on some understanding of how post-treatment outcomes depend on pre-treatment variables. Student, for example, mentions experimental group disparities in average “fertility slopes.”

Recently, Kallus (2018) developed a systematic framework of how such a priori understanding on the structure of this relationship translates to optimal design, using the lens of a priori (meaning, before randomization and treatment) balance in pre-treatment variables.
The framework is phrased as a zero-sum game against Nature, where the experimenter seeks to eliminate noise and Nature interferes adversarially but is constrained by the assumed structure.
In the absence of structure, it is in fact *impossible* to improve upon complete randomization, referred to by Kallus (2018) as “no free lunch.”
When one assumes structure in the form of restrictions on the conditional expectation function (CEF) of post-treatment outcomes given pre-treatment variables, other (still randomized) designs become optimal.
In particular, Kallus (2018) formalizes “noise” as post-treatment estimation *variance*, so these designs minimize worst-case variance, or are minimax with respect to expected squared error.

In this paper, I re-emphasize how minimax-variance optimal design *does not* imply no randomization.
To elucidate this, I re-introduce the framework of Kallus (2018) in a simple, instructive manner that more clearly highlights its minimax structure.
In particular, even when one assumes CEF structure, I demonstrate that *optimal* designs are still *randomized*, even beyond the random flipping of “treatment” and “control” labels on the experimental groups. I discuss the optimality of such randomization and the limited cases in which randomizing only the treatment label on a single partition of units is optimal.
Furthermore, I show how one can correctly trade-off balance and additional randomization by relaxing the assumed CEF structure.
I additionally revisit the question of randomization (i.e., design-based) inference and present a new constrained-optimization formulation to find the minimax-optimal design subject to a uniformity constraint to enable randomization testing at a given significance level.

On the way, I also discuss Johansson et al. (2020), who recently compared rerandomization (Morgan and Rubin, 2012) and the pure-strategy optimal design (PSOD) of Kallus (2018), which is a heuristic offered for the mixed-strategy optimal design (MSOD), which is the minimax-optimal design. I thank and congratulate Johansson et al. (2020) for a thought-provoking paper. I use the opportunity in this paper to set straight a few errors I found in it: randomization beyond just treatment blinding *is* in fact minimax-variance optimal; Kallus (2018) proposes the MSOD as the minimax-optimal design, which *does* randomize beyond two symmetric assignments; designing for optimal precision subject to enabling randomization inference does not require uniform randomization over a restricted set; optimal schemes as I propose herein exist; and finally Theorem 1 (“no free lunch”) and Example 1 of Kallus (2018) are correct (and are misquoted by Johansson et al., 2020), showing that, *in the worst case*, randomizing between two symmetric assignments, and in particular optimizing the Mahalanobis distance between experimental group means, can increase variance by a factor of order $n$ relative to complete randomization.

## 2 The Framework of Kallus (2018) Redux and Refined

We briefly review the framework of Kallus (2018), presenting it anew more clearly as minimax over parameters and focusing on the case of two treatment arms. First, we set up the problem. Our sample consists of $n$ units with (observed) pre-treatment variables $X_i$ and (unobserved) potential outcomes $Y_i(+1), Y_i(-1)$, for $i = 1, \dots, n$. Each unit is assumed independent of the others (but not necessarily identically distributed). Define $\mathbf{X} = (X_1, \dots, X_n)$ and $\mathbf{Y} = (Y_1(+1), Y_1(-1), \dots, Y_n(+1), Y_n(-1))$.

We are interested in estimating and making inferences on the sample average treatment effect (SATE): $\tau = \frac{1}{n}\sum_{i=1}^n (Y_i(+1) - Y_i(-1))$. Toward that end, we can choose treatment assignments $\mathbf{W} \in \{-1, +1\}^n$ and get to observe $Y_i = Y_i(W_i)$. We refer to treatment $+1$ as “treatment” and $-1$ as “control.” (Note also our convention of bold type for tuples.) For simplicity, suppose $n$ is even and that $\mathbf{1}^\top \mathbf{W} = 0$, where $\mathbf{1}$ is the vector of ones. We focus on the SATE estimator $\hat\tau = \frac{2}{n}\sum_{i=1}^n W_i Y_i$. Since outcomes are not observed before treatment, $\mathbf{W}$ must be independent of $\mathbf{Y}$ given $\mathbf{X}$. A *design* is a distribution over assignments, which specifies how we choose the treatment assignment *conditional* on $\mathbf{X}$; we treat the design as a random variable measurable with respect to $\mathbf{X}$. We require that every assignment supported by the design has $\mathbf{1}^\top \mathbf{W} = 0$, and that $\mathbf{W}$ and $-\mathbf{W}$ have the same probability. We refer to the latter property as *blinding* (the identity of treatment). Specifically, given $\mathbf{X}$ and given a design, $(\mathbf{W}, \mathbf{Y})$ have a joint distribution (conditional on $\mathbf{X}$). Denoting by $\mathsf{D}$ the design we choose for every $\mathbf{X}$ (hence a random variable), by repeating Theorem 7 of Kallus (2018) we can show that, since $\mathbb{E}[\mathbf{W} \mid \mathbf{X}] = \mathbf{0}$ due to blinding, $\hat\tau - \tau = \frac{2}{n}\sum_{i=1}^n W_i \cdot \frac{1}{2}(Y_i(+1) + Y_i(-1))$ by algebra, and cross-unit covariances vanish for $i \neq j$ by independence, we have

(1)  $\displaystyle \mathbb{E}\big[(\hat\tau - \tau)^2 \,\big|\, \mathbf{X}\big] \;=\; \frac{4}{n^2}\,\mathbb{E}\big[(\mathbf{W}^\top \mathbf{f})^2 \,\big|\, \mathbf{X}\big] \;+\; \frac{4}{n^2}\sum_{i=1}^n \mathrm{Var}\Big(\tfrac{1}{2}\big(Y_i(+1) + Y_i(-1)\big) \,\Big|\, \mathbf{X}\Big),$

where $f_i = \tfrac{1}{2}(\mu_{+1,i} + \mu_{-1,i})$, $\mu_{w,i} = \mathbb{E}[Y_i(w) \mid \mathbf{X}]$, and we use the subscript $w$ to denote a dummy variable. Notice that only the first term depends on the design, through the distribution of $\mathbf{W}$, and that $\mathbf{f}$ is measurable with respect to $\mathbf{X}$ *alone*. We of course do not know $\mathbf{f}$ so we consider a minimax framework. Given $\mathbf{X}$ and some set $\mathcal{U}$ of potential values for $\mathbf{f}$, we define the worst-case design-dependent term $V(\mathsf{D}, \mathcal{U}) = \max_{\mathbf{f} \in \mathcal{U}} \mathbb{E}_{\mathbf{W} \sim \mathsf{D}}\big[(\mathbf{W}^\top \mathbf{f})^2\big]$.

The *minimax-optimal design* is defined as the one minimizing this worst case (given $\mathbf{X}$). Calling this the minimax design is based on the fact that, per Eq. 1, if we are given a random set $\mathcal{U}$ measurable with respect to $\mathbf{X}$ and we choose the minimax design for *each* $\mathbf{X}$, then this experimental procedure minimizes the maximum variance of the (unbiased) estimator $\hat\tau$ over all measurable choices of design. This optimal design is called the MSOD in Kallus (2018) to emphasize that it is a mixed strategy in this zero-sum game, i.e., it *randomizes* over unit partitions. The PSOD is defined by Kallus (2018) as the design that only randomizes over the assignments that minimize the single-assignment worst case $M(\mathbf{w}) = \max_{\mathbf{f} \in \mathcal{U}} (\mathbf{w}^\top \mathbf{f})^2$. Since this may not be minimax-optimal, it is given purely as a *heuristic approximation* for the minimax-optimal MSOD and for the purpose of showing that various existing methods such as blocking, group-mean matching, and nonbipartite pair matching are recovered as the PSOD for certain choices of $\mathcal{U}$.

Notice that since $\mathbf{1}^\top \mathbf{W} = 0$, we have $\mathbf{W}^\top(\mathbf{f} + c\mathbf{1}) = \mathbf{W}^\top \mathbf{f}$ for any constant $c$, and so, without loss of generality, it suffices to restrict attention to $\mathbf{1}^\top \mathbf{f} = 0$. Next, notice that we can simplify $\mathbb{E}_{\mathbf{W} \sim \mathsf{D}}[(\mathbf{W}^\top \mathbf{f})^2] = \mathbf{f}^\top \Sigma \mathbf{f}$, where $\Sigma = \mathbb{E}_{\mathbf{W} \sim \mathsf{D}}[\mathbf{W}\mathbf{W}^\top]$, $\Sigma_{ii} = 1$, $\mathbf{1}^\top \Sigma \mathbf{1} = 0$. Note $\Sigma \succeq 0$. In the following, we will often consider sets $\mathcal{U}$ induced by a positive semidefinite matrix $Q$, for which the worst case takes the spectral form $\lambda_{\max}(Q^{1/2} \Sigma Q^{1/2})$.
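To make this setup concrete, here is a minimal Python sketch (the encoding and names are mine, not from the paper) of balanced $\pm 1$ assignments, the difference-in-means SATE estimator, and the second-moment matrix $\Sigma = \mathbb{E}[\mathbf{W}\mathbf{W}^\top]$ of a design:

```python
import itertools

def balanced_assignments(n):
    """All w in {-1,+1}^n with equal-sized arms (1'w = 0)."""
    out = []
    for treated in itertools.combinations(range(n), n // 2):
        w = [-1] * n
        for i in treated:
            w[i] = +1
        out.append(tuple(w))
    return out

def sate_estimator(w, y):
    """Difference-in-means estimator (2/n) * sum_i w_i y_i."""
    return 2.0 / len(w) * sum(wi * yi for wi, yi in zip(w, y))

def second_moment(design):
    """Sigma = E[w w'] for a design given as {assignment: probability}."""
    n = len(next(iter(design)))
    sigma = [[0.0] * n for _ in range(n)]
    for w, p in design.items():
        for i in range(n):
            for j in range(n):
                sigma[i][j] += p * w[i] * w[j]
    return sigma

n = 4
assignments = balanced_assignments(n)
# Complete randomization: uniform over all balanced assignments (blinded,
# since the support is closed under negation).
cr = {w: 1.0 / len(assignments) for w in assignments}
sigma = second_moment(cr)
# Under CR: sigma[i][i] = 1 and sigma[i][j] = -1/(n-1) for i != j.
```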

## 3 The Optimality of Complete Randomization

A natural next question is, what *is* the minimax-optimal design? That, of course, depends on $\mathcal{U}$.
If we have no particular knowledge about $\mathbf{f}$, we should not constrain $\mathcal{U}$ in any informative fashion. However, the worst case scales with the magnitude of $\mathbf{f}$, so we must restrict $\mathcal{U}$ in some other way (equivalently, we must measure variance relative to the *magnitude* of $\mathbf{f}$). An uninformative restriction must be permutation symmetric, i.e., invariant to permutations of the coordinates of $\mathbf{f}$.
An important permutation-symmetric example is $\mathcal{U}_{\mathrm{CR}} = \{\mathbf{f} : \mathbf{f}^\top \Sigma_{\mathrm{CR}} \mathbf{f} \le 1\}$, where $\Sigma_{\mathrm{CR}}$ denotes the second-moment matrix of the complete randomization design. It is of interest as it amounts to measuring one’s variance relative to complete randomization’s. A basic computation shows $\Sigma_{\mathrm{CR}} = \frac{1}{n-1}(nI - J)$, where $I$ is the identity matrix and $J$ the matrix of ones, and that $\mathbf{f}^\top \Sigma_{\mathrm{CR}} \mathbf{f} = \frac{n}{n-1}\|\mathbf{f}\|_2^2$ for $\mathbf{1}^\top \mathbf{f} = 0$. Other examples include norm balls $\{\mathbf{f} : \|\mathbf{f}\|_p \le 1\}$ for any $p$-norm. Then, a basic exercise in convexity and symmetry shows that, whenever $\mathcal{U}$ is permutation symmetric, the minimax-optimal design *is* complete randomization. This is the “no free lunch” theorem of Kallus (2018, Theorem 1): one cannot improve on complete randomization unless one is willing to assume *structure*, i.e., some deviation from permutation symmetry. (While Theorem 1 of Kallus, 2018 considers worst-case values of the outcomes, here I take a more proper minimax approach, considering worst-case values of the *parameters* $\mathbf{f}$, conditioned on $\mathbf{X}$. I also do not assume identical unit distributions. The proof argument is exactly the same.)
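The “no free lunch” result can be checked by brute force on a tiny example. In the sketch below (a numerical illustration under my own encoding, not a proof), the adversary set consists of all permutations of a fixed centered vector, which is permutation symmetric, and complete randomization's worst-case design term $\max_{\mathbf{f}} \mathbb{E}[(\mathbf{W}^\top \mathbf{f})^2]$ is never beaten by any single-partition design:

```python
import itertools

def balanced_assignments(n):
    out = []
    for treated in itertools.combinations(range(n), n // 2):
        w = [-1] * n
        for i in treated:
            w[i] = +1
        out.append(tuple(w))
    return out

def design_term(design, f):
    """E_{W~design}[(W.f)^2], the design-dependent variance term (up to constants)."""
    return sum(p * sum(wi * fi for wi, fi in zip(w, f)) ** 2
               for w, p in design.items())

n = 6
assignments = balanced_assignments(n)
cr = {w: 1.0 / len(assignments) for w in assignments}

f0 = (3.0, 1.0, 0.0, -1.0, -1.0, -2.0)       # centered: entries sum to 0
adversary = set(itertools.permutations(f0))  # permutation-symmetric set

def worst_case(design):
    return max(design_term(design, f) for f in adversary)

cr_worst = worst_case(cr)
single_worsts = [worst_case({w: 0.5, tuple(-x for x in w): 0.5})
                 for w in assignments]
# CR's worst case is (weakly) the best among all these designs.
```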

## 4 The Suboptimality of a Single Assignment

Let us now consider a design that only uses a single partition of units and simply randomizes the identity of treatment, i.e., $\mathbf{W} \in \{\mathbf{w}, -\mathbf{w}\}$ with probability $\frac{1}{2}$ each, for some fixed $\mathbf{w}$. Then $\Sigma = \mathbf{w}\mathbf{w}^\top$ and we can compute $V(\mathsf{D}, \mathcal{U}) = \max_{\mathbf{f} \in \mathcal{U}} (\mathbf{w}^\top \mathbf{f})^2$. In comparison, complete randomization attains the benchmark worst case on $\mathcal{U}_{\mathrm{CR}}$ by construction. This says that, given $\mathcal{U}$, for *any* single $\mathbf{w}$, there *always* exists an $\mathbf{f} \in \mathcal{U}$ attaining this worst case (since $\mathcal{U}$ is closed); in particular, there *always* exists some $\mathbf{f}$ aligned with the partition $\mathbf{w}$.
Now, if the worst-case $\mathbf{f}$ and the outcome variances are bounded over $n$ (e.g., constant), then by Eq. 1 complete randomization has variance vanishing as $n \to \infty$, while a design using a single partition has *non-vanishing* variance. The existence of such $\mathbf{f}$ is a mathematical fact.
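The rates can be illustrated numerically (with my own constants, and with the $4/n^2$ scaling of Eq. 1 as reconstructed above): take the adversarial CEF vector aligned with the fixed partition, so that the single-partition design's variance term stays constant while complete randomization's vanishes:

```python
def cr_design_term(f):
    """E[(W.f)^2] under complete randomization with balanced +/-1 arms;
    equals n/(n-1) * ||f||^2 for centered f (1'f = 0)."""
    n = len(f)
    return n / (n - 1) * sum(x * x for x in f)

results = {}
for n in [4, 8, 16, 32]:
    w = [+1] * (n // 2) + [-1] * (n // 2)  # a fixed single partition
    f = list(w)                            # adversarial CEF aligned with w
    single_term = sum(wi * fi for wi, fi in zip(w, f)) ** 2  # (w.f)^2 = n^2
    cr_term = cr_design_term(f)                              # n^2/(n-1), order n
    # After the 4/n^2 scaling of the estimator's variance:
    results[n] = (4.0 / n**2 * single_term, 4.0 / n**2 * cr_term)
# results[n] = (4.0, ~4/n): non-vanishing vs. vanishing.
```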

Example 1 of Kallus (2018) provides an explicit construction of such an $\mathbf{f}$ purely for illustration (i.e., not as a proof; the existence is already proven by computing the worst case above). Taking the outcome variances constant, the single-partition design's variance is bounded away from zero while complete randomization's vanishes. The example is specially constructed so that $\mathbf{w}$ and $-\mathbf{w}$ *uniquely* optimize any scaled Euclidean distance between group means, $\|\bar{X}_+ - \bar{X}_-\|_A$, where $A$ is positive definite. E.g., if $A$ is the inverse sample covariance matrix, this gives the Mahalanobis distance. (The example is also appealing as it recovers the worst-case behavior of nonbipartite pair matching and blocking. Note also that in journal production a typo was introduced in Example 1 and overlooked in the proofing, whereby some symbols are typeset incorrectly and two terms were dropped; the typo does not appear in the earlier arXiv preprint version, which gives the correct construction.)

Johansson et al. (2020) cite this example and *incorrectly* claim that $\mathbf{w}$ and $-\mathbf{w}$ do *not* uniquely optimize the Mahalanobis distance. In fact, they misquote the example. While they attribute to it a simpler construction of the covariates, Example 1 of Kallus (2018) clearly provides a *different*, much more involved formula and writes that “This rather complicated construction essentially yields with just enough perturbation so that the assignment [] uniquely minimizes Mahalanobis distance between group means” (where the fact that $-\mathbf{w}$ is also optimal is implicit since we always blind the identity of treatment, so we only discuss the unit partitions). Of course, if we had considered the simpler covariates as our pre-treatment variables, this would not be the uniquely optimal partition, but these are *not* the pre-treatment variables in Example 1 of Kallus (2018).
More generally, it is a fact, per the above, that whenever $\mathbf{w}$ and $-\mathbf{w}$ uniquely optimize the Mahalanobis distance, there will *always* exist some mean-outcome vector $\mathbf{f}$ such that the design randomizing over all optimizers of Mahalanobis distance has non-vanishing variance. Example 1 of Kallus (2018) is just one (correct) explicit example. The claim of Johansson et al. (2020) that “The mistake of Kallus (2018) stems from the incorrect assumption that the allocation [] uniquely minimizes the Mahalanobis distance for all []” is patently false: they consider a *different* set of covariates than Kallus (2018).

## 5 The Optimality of Restricted Randomization: The Mixed-Strategy Optimal Design

The next natural question is, when is something *different* from CR minimax-optimal? That, again, depends on $\mathcal{U}$.
Consider the case of $\mathcal{U}$ induced by a positive semidefinite matrix $Q$ as above. Then the optimal design is that which minimizes $\lambda_{\max}(Q^{1/2} \Sigma Q^{1/2})$, so it depends on the spectrum of $Q$. In one extreme, $Q$ is permutation symmetric, in which case the whole spectrum of $Q$ must be concentrated at a single value (aside from the eigenvector $\mathbf{1}$), and CR becomes optimal. In the other extreme, $Q = \mathbf{v}\mathbf{v}^\top$ is of rank one, in which case the worst case for a supported assignment $\mathbf{w}$ is driven by $(\mathbf{w}^\top \mathbf{v})^2$, and the *single* partition that solves $\min_{\mathbf{w}} (\mathbf{w}^\top \mathbf{v})^2$ becomes the optimal design. In between these extremes, when $Q$ has a dispersed spectrum, something between perfectly partitioning a single vector and complete randomization is optimal: the MSOD of Kallus (2018).

To motivate other constructions of $\mathcal{U}$, suppose that there exists a single CEF $\mu$ such that $f_i = \mu(X_i)$. This, for example, would be guaranteed if units were identically distributed. Now let $\mathcal{U} = \{(\mu(X_1), \dots, \mu(X_n)) : \mu \in \mathcal{F}\}$, where $\mathcal{F}$ is some class of functions. Suppose that $\mathcal{F}$ is the unit ball of a reproducing kernel Hilbert space (RKHS), i.e., $\mathcal{F} = \{\mu : \|\mu\|_k \le 1\}$ for a positive semidefinite kernel $k$. Then one can show that the worst case is $\lambda_{\max}(K^{1/2} \Sigma K^{1/2})$, where $K_{ij} = k(X_i, X_j)$ is the Gram matrix. Examples of positive semidefinite kernels, as given in Kallus (2018), are linear $k(x, x') = x^\top \Theta x'$, polynomial $k(x, x') = (1 + x^\top \Theta x')^s$, and Gaussian $k(x, x') = \exp(-\frac{1}{2}(x - x')^\top \Theta (x - x'))$, all for some positive semidefinite $\Theta$. This offers the researcher a flexible modeling framework and clearly connects assumptions on the structure of the CEF to optimal design. The Gaussian kernel is notable for being a *universal* kernel: its span is dense in continuous functions (or, similarly, in square-integrable functions). This ensures model-free consistency even without assuming the CEF lies in $\mathcal{F}$ (Kallus, 2018, Theorem 13).
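Here is a sketch of the Gram matrices for these kernels (taking $\Theta = I$ for simplicity; the toy covariates and the pairing example are my own) and of the single-assignment worst case $\mathbf{w}^\top K \mathbf{w}$:

```python
import math

def linear_k(x, z):      # linear kernel with Theta = identity
    return sum(a * b for a, b in zip(x, z))

def poly_k(x, z, s=2):   # polynomial kernel
    return (1.0 + linear_k(x, z)) ** s

def gauss_k(x, z):       # Gaussian kernel with Theta = identity
    return math.exp(-0.5 * sum((a - b) ** 2 for a, b in zip(x, z)))

def gram(kernel, xs):
    return [[kernel(xi, xj) for xj in xs] for xi in xs]

def imbalance(w, K):
    """w' K w: the worst-case squared imbalance of the single assignment w
    over the unit ball of the RKHS with Gram matrix K."""
    n = len(w)
    return sum(w[i] * K[i][j] * w[j] for i in range(n) for j in range(n))

xs = [(0.0,), (0.1,), (1.0,), (1.1,)]  # toy covariates in two close pairs
K = gram(gauss_k, xs)
w_paired = (+1, -1, +1, -1)  # split each similar pair across arms: balanced
w_split = (+1, +1, -1, -1)   # split along the gap: imbalanced
```

Pairing similar units into opposite arms yields a much smaller $\mathbf{w}^\top K \mathbf{w}$ than splitting along the gap, which is exactly how kernel choice encodes balance.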

Consider next the linear kernel with a positive definite $\Theta$. We can then rewrite $\mathcal{U} = \{(\beta^\top X_1, \dots, \beta^\top X_n) : \beta^\top \Theta^{-1} \beta \le 1\}$, i.e., the set of CEFs are the linear functions with coefficients bounded in $\Theta$-scaled norm. Notice that in this case, $K$ has rank at most the covariate dimension $d$. Now, if the design uses the single partition $\mathbf{w}$, then $\mathbf{w}^\top K \mathbf{w}$ is proportional to the $\Theta$-scaled squared Euclidean distance between group means. Therefore, among *single*-partition designs, the optimal one, i.e., the PSOD, minimizes the distance in experimental group means in pre-treatment variables; the Mahalanobis distance if $\Theta$ is the inverse sample covariance. However, if $d \ge 2$ then $K$ is generally of rank higher than one and a single partition is *not* minimax-optimal. This means that, unlike the characterization of Johansson et al. (2020), even in the simple linear-CEF setting that recovers Mahalanobis mean matching, a single partition is *not* optimal and randomization beyond just blinding *is necessary* to achieve minimax-optimality, that is, the so-called MSOD proposed in Kallus (2018). So even when we care about balancing experimental group means, and even when we are optimizing only for minimax variance, we should *still* randomize over partitions.

There is also an easy way to trade off optimality for a given single CEF against additional randomization. Suppose we have a guess $\mathbf{f}_0$ for the CEF values. If we set $\mathcal{U}$ to the singleton $\{\mathbf{f}_0\}$, then the worst case is just $\mathbf{f}_0^\top \Sigma \mathbf{f}_0$ and the optimal design is the single partition that minimizes the subset-sum difference for the vector $\mathbf{f}_0$, as in the rank-one example in the first paragraph of this section. This is optimal if the guess is correct, but if we knew the outcome CEFs we would not need an experiment to begin with. If we want to introduce additional randomization (e.g., if we are unsure of our guess $\mathbf{f}_0$), we may use $Q = \mathbf{f}_0 \mathbf{f}_0^\top + \lambda I$ for $\lambda \ge 0$, i.e., wash out the spectrum of $Q$. While $\lambda = 0$ recovers the perfect partitioning of $\mathbf{f}_0$, $\lambda \to \infty$ recovers complete randomization. This exactly corresponds to considering the CEF set given by perturbing $\mathbf{f}_0$ by the ball of the Dirac kernel $k(x, x') = \mathbb{I}[x = x']$ (note this is not a Mercer kernel). More generally, given any $Q$, we can regularize the corresponding minimax-optimal design toward more randomization by using $Q + \lambda I$ instead. Alternatively, given any $\mathcal{F}$ (e.g., an RKHS ball), one can also expand it by a class of bounded functions; expanding by arbitrary bounded functions washes out more structure as we increase $\lambda$. The result is similar to Kapelner et al. (2020) but using a proper minimax framework on parameters rather than introducing adversarial choice of random variables.
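For the singleton-guess case, the optimal single partition can be brute-forced for small $n$ (a sketch under my own naming; this is the subset-sum-style balancing of the guessed CEF values, not the paper's algorithm):

```python
import itertools

def best_single_partition(f0):
    """Brute-force the balanced +/-1 assignment minimizing (w . f0)^2,
    i.e., a subset-sum-style perfect partitioning of the guessed CEF values."""
    n = len(f0)
    best_w, best_val = None, float("inf")
    for treated in itertools.combinations(range(n), n // 2):
        w = [-1] * n
        for i in treated:
            w[i] = +1
        val = sum(wi * fi for wi, fi in zip(w, f0)) ** 2
        if val < best_val:
            best_w, best_val = tuple(w), val
    return best_w, best_val

f0 = [4.0, 3.0, 2.0, 1.0]  # guessed mean outcomes
w_star, v_star = best_single_partition(f0)
# {4,1} vs {3,2}: both arms sum to 5, so the partition is perfect.
```

Under the regularization $Q = \mathbf{f}_0 \mathbf{f}_0^\top + \lambda I$, larger $\lambda$ would spread probability beyond $\pm\mathbf{w}^\star$ toward complete randomization.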

Given $\mathcal{U}$, the minimax-optimal design (the *randomized* MSOD) solves $\min_{\mathsf{D}} \max_{\mathbf{f} \in \mathcal{U}} \mathbf{f}^\top \Sigma_{\mathsf{D}} \mathbf{f}$, where $\Sigma_{\mathsf{D}} = \mathbb{E}_{\mathbf{W} \sim \mathsf{D}}[\mathbf{W}\mathbf{W}^\top]$. This, however, may be a difficult optimization problem. For that reason, Kallus (2018) provides an outer approximation, which relaxes the set of feasible second-moment matrices $\Sigma_{\mathsf{D}}$, as well as an inner approximation, which restricts the design's support to a given set of candidate assignments. Both approximations give rise to tractable semidefinite programs.

## 6 When Is a Single Partition Optimal?

Section 5 shows that a single partition (randomizing between $\mathbf{w}$ and $-\mathbf{w}$) is minimax-optimal when $Q$ is rank one. Otherwise, it is generally *suboptimal*.
It is worth mentioning that in the minimax framework it is optimal to randomize only because the researcher plays first and Nature second in this zero-sum game, in the sense that the researcher first announces her mixed (i.e., randomized) strategy (a distribution over assignments) but not the specific assignment she will play, and then Nature chooses an action adversarially to maximize the expected loss averaged over draws from the researcher’s strategy. Once the first player announces her strategy, the second player does not benefit from randomization. Indeed, if the roles were reversed, and first Nature announced its distribution over $\mathbf{f}$ and then the researcher chose a design, then the researcher would not benefit from randomization (for minimizing squared error). This is *precisely* the Bayesian setting: the distribution announced by Nature is the prior over $\mathbf{f}$. In this setting, the researcher need not randomize to minimize squared error. But this assumes we know a prior and is therefore unappealing in a controlled experiment, where we can potentially have assumption-free correct causal inference if we randomize. The minimax framework may therefore be preferable.

## 7 Optimizing for Randomization Inference

The minimax-optimality framework deals with optimizing *precision*, but it does not explicitly handle inference. One can attempt to study the marginal sampling distribution of $\hat\tau$, but that may prove difficult. A more convenient and assumption-free approach may be to use a randomization (aka Fisher exact) test. I here provide an extension of the MSOD that constrains the optimization to ensure designs that support randomization testing at a given significance level.

A randomization test can be run for *any* design to test the sharp null hypothesis $H_0: Y_i(+1) = Y_i(-1)$ for all $i$. First fix some test statistic $T(\mathbf{W}, \mathbf{Y})$ (it can also depend on $\mathbf{X}$ since everything is conditional on $\mathbf{X}$). E.g., the absolute mean difference $|\hat\tau|$, or the absolute value of the two-sample $t$-statistic (either pooled variance or Welch’s). Then, after assignment and treatment, we record the observed value $T_{\mathrm{obs}}$. Under $H_0$, $\mathbf{Y}$ would also be what we observed if we made another treatment assignment, so the null distribution of the statistic given $(\mathbf{X}, \mathbf{Y})$ is that of $T(\mathbf{W}', \mathbf{Y})$ where $\mathbf{W}' \sim \mathsf{D}$. This gives the $p$-value $p = \mathbb{P}(T(\mathbf{W}', \mathbf{Y}) \ge T_{\mathrm{obs}})$, which can be approximated by Monte-Carlo simulation from $\mathsf{D}$. By construction, if we only reject when $p \le \alpha$ then our type-I error rate must be at most $\alpha$. A concern, however, is the power of the test. One may hope that if precision is high, then power would also be high. But, if we only randomize over a single partition, then for the above examples of two-sided statistics, we will always have $p = 1$ and we never reject the null. We must therefore ensure that each assignment only occurs with probability at most $\alpha/2$ (focusing on two-sided statistics, which take equal values on $\mathbf{w}$ and $-\mathbf{w}$). The MSOD, despite being randomized, may or may not have this property for a given $\mathcal{U}$.
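A Monte-Carlo randomization test takes only a few lines; the sketch below (statistic and names are my own choices) also demonstrates the power failure: under a design supported only on a single partition and its mirror, a two-sided statistic always yields $p = 1$:

```python
import itertools, random

def abs_mean_diff(w, y):
    return abs(2.0 / len(w) * sum(wi * yi for wi, yi in zip(w, y)))

def randomization_p_value(design, w_obs, y_obs, stat=abs_mean_diff,
                          draws=2000, seed=0):
    """Monte-Carlo p-value for the sharp null: re-draw assignments from the
    design, keep outcomes fixed, and compare the statistic to the observed."""
    rng = random.Random(seed)
    assignments, probs = zip(*design.items())
    t_obs = stat(w_obs, y_obs)
    hits = sum(stat(rng.choices(assignments, weights=probs)[0], y_obs) >= t_obs
               for _ in range(draws))
    return hits / draws

w = (+1, +1, -1, -1)
y = (5.0, 6.0, 1.0, 2.0)

# Single-partition design: only w and -w are possible, and a two-sided
# statistic takes the same value on both, so p is always 1.
single = {w: 0.5, tuple(-x for x in w): 0.5}
p_single = randomization_p_value(single, w, y)

# Complete randomization over all balanced assignments can reject.
all_w = [tuple(+1 if i in t else -1 for i in range(4))
         for t in itertools.combinations(range(4), 2)]
cr = {v: 1.0 / len(all_w) for v in all_w}
p_cr = randomization_p_value(cr, w, y)
```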

I therefore propose the *inference-constrained* MSOD, which, given $\mathcal{U}$ and a significance level $\alpha$, solves:

(2)  $\displaystyle \min_{\mathsf{D}}\ \max_{\mathbf{f} \in \mathcal{U}}\ \mathbf{f}^\top \Sigma_{\mathsf{D}} \mathbf{f} \quad \text{subject to} \quad \mathsf{D}(\mathbf{W} = \mathbf{w}) \le \alpha/2 \ \text{ for all } \mathbf{w}.$

The constraint ensures that each assignment has probability at most $\alpha/2$ so that, if chosen, the randomization test can potentially return $p \le \alpha$ if the statistic is indeed extreme. Equation 2 can be a difficult optimization problem so I propose the following approximation based on Kallus (2018, Algorithm 3). Set $\mathcal{W}_0 = \emptyset$; for $j = 1, \dots, k$, solve $\mathbf{w}^{(j)} \in \arg\min_{\mathbf{w} \notin \mathcal{W}_{j-1}} M(\mathbf{w})$ and set $\mathcal{W}_j = \mathcal{W}_{j-1} \cup \{\mathbf{w}^{(j)}, -\mathbf{w}^{(j)}\}$. Then $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(k)}$ are the top $k$ solutions to $\min_{\mathbf{w}} M(\mathbf{w})$ (the top two, if unique, give the PSOD). Each optimization problem is a binary optimization problem with a convex-quadratic objective and linear constraints and can be solved with off-the-shelf solvers such as Gurobi. Then let $V$ have the columns $\mathbf{w}^{(1)}, \dots, \mathbf{w}^{(k)}$ and solve

(3)  $\displaystyle \min_{p \in \mathbb{R}^k}\ \max_{\mathbf{f} \in \mathcal{U}}\ \mathbf{f}^\top V \operatorname{diag}(p)\, V^\top \mathbf{f} \quad \text{subject to} \quad \sum_{j=1}^k p_j = 1, \quad 0 \le p_j \le \alpha/2 \ \text{ for all } j,$

and set the design to choose $\pm\mathbf{w}^{(j)}$ with probability $p_j$ (split equally by blinding). Equation 3 is a tractable semidefinite program. Notice we need $k \ge 2/\alpha$ for Eq. 3 to be feasible. If $2/\alpha$ is integral, setting $k = 2/\alpha$ forces Eq. 3 to choose the design that uniformly randomizes over the top $k$ solutions to $\min_{\mathbf{w}} M(\mathbf{w})$. This latter alternative approach was considered in Kallus (2018, Example 4) but was found empirically less powerful than a bootstrap test. Focusing solely on randomization tests, Eq. 2 is exactly the minimax-optimal design for optimizing variance subject to the constraint that no single assignment occurs with probability more than $\alpha/2$. For larger $k$, Eq. 3 provides a good approximation of this design.
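The spirit of this heuristic can be sketched by brute force for small $n$ (the real algorithm solves binary-quadratic and semidefinite programs; here I simply rank all balanced assignments by $M(\mathbf{w}) = \mathbf{w}^\top K \mathbf{w}$ and randomize uniformly over the best ones while keeping each probability at most $\alpha/2$):

```python
import itertools, math

def balanced_assignments(n):
    out = []
    for treated in itertools.combinations(range(n), n // 2):
        w = [-1] * n
        for i in treated:
            w[i] = +1
        out.append(tuple(w))
    return out

def imbalance(w, K):  # M(w) = w' K w
    n = len(w)
    return sum(w[i] * K[i][j] * w[j] for i in range(n) for j in range(n))

def inference_constrained_design(K, alpha):
    """Uniformly randomize over the (at least) ceil(2/alpha) best-balanced
    assignments, added together with their negations to preserve blinding,
    so that every assignment probability is at most alpha/2."""
    n = len(K)
    cands = sorted(balanced_assignments(n), key=lambda w: imbalance(w, K))
    k = max(2, math.ceil(2.0 / alpha))
    chosen = []
    for w in cands:
        if w not in chosen:
            chosen.append(w)
            neg = tuple(-x for x in w)
            if neg not in chosen:
                chosen.append(neg)
        if len(chosen) >= k:
            break
    return {w: 1.0 / len(chosen) for w in chosen}

K = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]  # toy Gram
design = inference_constrained_design(K, alpha=0.25)  # needs >= 8 assignments
```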

## 8 The Suboptimality of Rerandomization

Morgan and Rubin (2012) proposed the design that uniformly randomizes over the assignments satisfying $M(\mathbf{W}) \le a$ for a threshold $a$, which they operationalize by repeatedly sampling uniformly from all balanced assignments until the threshold is met, termed rerandomization. Specifically, they use for $M$ the Mahalanobis distance between experimental group means and recommend setting $a = F_d^{-1}(p_a)$, where $F_d$ is the cumulative distribution function of the $\chi^2$-distribution with $d$ degrees of freedom (appealing to the asymptotic distribution of the Mahalanobis distance) and $p_a$ is a target acceptance probability, which this procedure approximates. This is particularly notable for nicely formalizing and theoretically characterizing what was previously a haphazard practice of researchers repeatedly clicking “Recalculate” (F9) in Excel whenever the unconstrained randomization of units obtained appeared “imbalanced” or “undesirable” (this is what “historically haphazard” refers to in Kallus, 2018; not to the method of Morgan and Rubin, 2012 nor the use of Mahalanobis or linear projections, as suggested by Johansson et al., 2020). Johansson et al. (2020) highlight that rerandomization is also notable for both improving precision *and* enabling randomization inference, as long as $a$ is chosen so that sufficiently many assignments are accepted.

It is important to note, however, that in our minimax framework the rerandomization design is *not* minimax-optimal for $\mathcal{U}$. The only exception is the rank-one case, where a single partition (i.e., the PSOD) is optimal as discussed in Section 6; this is equivalent to shrinking the acceptance threshold until only the optimizers remain (which Johansson et al., 2020 also refer to as “optimal rerandomization,” perhaps oxymoronically). However, in practice, we generally have rank greater than one, in which case, firstly, the minimax-optimal design (the MSOD) requires *more* randomization than a single partition, and the rerandomization design is *not* minimax-optimal for *any* value of $a$. In particular, even if $a$ is set to retain just enough assignments to enable inference, not only do we still not obtain the optimal design, but using rejection sampling will also have very bad running time as it is solving constraint satisfaction by brute-force Monte Carlo. In particular, rerandomization requires roughly $1/p_a$ samples in expectation. This is not an issue if $p_a$ is fixed with $n$, but it also means that we are uniformly randomizing over very many partitions, so we are even *less* optimal. If $p_a$ is decreasing with $n$ so that the number of accepted assignments is roughly constant, this means we need an exponentially growing number of samples. This is even worse at inference time when we need to again sample multiple times from this design, each time needing exponentially many samples.
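The running-time point can be seen in a rejection-sampling sketch (one covariate, with the absolute group-mean difference as a stand-in for the Mahalanobis criterion; the thresholds are my own): tightening the acceptance criterion inflates the number of draws roughly like $1/p_a$.

```python
import random

def rerandomize(x, threshold, rng, max_draws=100000):
    """Redraw a balanced assignment until the absolute difference in
    covariate group means is at most the threshold; return it along with
    the number of draws consumed."""
    n = len(x)
    idx = list(range(n))
    for draws in range(1, max_draws + 1):
        rng.shuffle(idx)
        w = [0] * n
        for i in idx[: n // 2]:
            w[i] = +1
        for i in idx[n // 2:]:
            w[i] = -1
        mean_diff = abs(2.0 / n * sum(wi * xi for wi, xi in zip(w, x)))
        if mean_diff <= threshold:
            return w, draws
    raise RuntimeError("no acceptable assignment found")

rng = random.Random(1)
x = [float(i) for i in range(12)]  # a single covariate
# Loose thresholds accept almost immediately; tight thresholds need many
# more draws, reflecting the ~1/p_a expected cost of rejection sampling.
loose = [rerandomize(x, 2.0, rng)[1] for _ in range(200)]
tight = [rerandomize(x, 0.05, rng)[1] for _ in range(200)]
```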

## 9 Concluding Remarks

I here studied the minimax-optimal design when conditional mean outcomes may vary in a given set. This more clearly and formally positions the framework of Kallus (2018) as a minimax one and makes clear that the MSOD defined therein is the minimax-optimal design. This also demonstrated that the design that is minimax-optimal for variance *does* still randomize over more than one partition. Since this, nonetheless, only optimizes for precision, it may not ensure sufficient uniformity for randomization inference at a given significance level to have power to detect violations of the null. I therefore proposed the new inference-constrained MSOD (Eq. 2) and a tractable heuristic for it (Eq. 3). While rerandomization designs with sufficiently large acceptance sets enable randomization inference, they do not optimize any principled error objective. Instead, the inference-constrained MSOD is minimax-optimal for precision among all designs with sufficient uniformity to enable randomization inference at a given significance level $\alpha$.

I thank and congratulate Johansson et al. (2020) for a thought-provoking paper and for highlighting the importance of randomization inference. However, I find it made a few errors, which I have here used the opportunity to set straight: that minimax-variance optimality does not mean using a single unit partition, i.e., the minimax-optimal design randomizes beyond blinding, and that this was proposed in Kallus (2018); that one can enable randomization inference while still targeting a principled objective; and that Theorem 1 and Example 1 of Kallus (2018) are correct. Their work nonetheless inspired my proposal herein of inference-constrained MSODs and I welcome the continued vigorous conversation.

## References

- Johansson et al. (2020) Johansson, P., D. B. Rubin, and M. Schultzberg (2020). On optimal rerandomization designs. Journal of the Royal Statistical Society: Series B (Statistical Methodology). To appear.
- Kallus (2018) Kallus, N. (2018). Optimal a priori balance in the design of controlled experiments. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80(1), 85–112.
- Kapelner et al. (2020) Kapelner, A., A. M. Krieger, M. Sklar, U. Shalit, and D. Azriel (2020). Harmonizing optimized designs with classic randomization in experiments. The American Statistician, 1–12.
- Morgan and Rubin (2012) Morgan, K. L. and D. B. Rubin (2012). Rerandomization to improve covariate balance in experiments. Ann. Statist. 40(2), 1263–1282.
- Student (1938) Student (1938). Comparison between balanced and random arrangements of field plots. Biometrika 29(3/4), 363–378.