On the complexity of hazard-free circuits

11/06/2017
by Christian Ikenmeyer, et al.

The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.


1 Introduction

We study the problem of hazards in Boolean circuits. This problem naturally occurs in digital circuit design, specifically in the implementation of circuits in hardware (e.g. [Huf57, Cal58]), but it is also closely related to questions in logic (e.g. [Kle52, Kör66, Mal14]) and cybersecurity ([TWM09, HOI12]). The same objects go by different names in these fields; for simplicity of presentation, we use the parlance of hardware circuits throughout the paper.

A Boolean circuit is a circuit that uses and-, or-, and not-gates in the traditional sense of [GJ79, problem MS17], i.e., the and- and or-gates have fan-in two. The standard approach to studying hardware implementations of Boolean circuits is to use the digital abstraction, in which voltages on wires and at gates are interpreted as either logical 0 or logical 1. More generally, this approach is suitable for any system in which there is a guarantee that the inputs to the circuit and the outputs of the gates of the circuit can be reliably interpreted in this way (i.e., be identified as the Boolean value matching the gate's truth table).

Kleene Logic and Hazards

Several independent works ([Got48], [YR64] and references therein) observed that Kleene's classical three-valued strong logic of indeterminacy [Kle52, §64] captures the issues arising from non-digital inputs. The idea is simple and intuitive. The two-valued Boolean logic is extended by a third value u representing any unknown, uncertain, undefined, transitioning, or otherwise non-binary value. We call both Boolean values stable, while u is called unstable. The behavior of a Boolean gate is then extended as follows. Let B = {0, 1} and T = {0, 1, u}. Given a string x ∈ T^n, a resolution of x is defined as a string in B^n that is obtained by replacing each occurrence of u in x by either 0 or 1. If a k-ary gate (with one output) is subjected to inputs x ∈ T^k, it outputs b ∈ B iff it outputs b for all resolutions of x, otherwise it outputs u. In other words, the gate outputs a Boolean value b if and only if its output does not actually depend on the unstable inputs. This results in the following extended specifications of and, or, and not gates:

not | 0 1 u
    | 1 0 u

and | 0 1 u
  0 | 0 0 0
  1 | 0 1 u
  u | 0 u u

or  | 0 1 u
  0 | 0 1 u
  1 | 1 1 1
  u | u 1 u

By induction over the circuit structure, a circuit C with n input gates now computes a function C : T^n → T.
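To make these semantics concrete, here is a minimal Python sketch (an illustration of ours, not code from the paper) that lifts a Boolean gate to ternary inputs exactly as described: the lifted gate returns a Boolean value iff all resolutions of its inputs agree, and u otherwise. It reproduces the tables above.

    from itertools import product

    U = 'u'  # the unstable value

    def resolutions(xs):
        """All Boolean tuples obtained by replacing each 'u' by 0 or 1."""
        choices = [[0, 1] if x == U else [x] for x in xs]
        return product(*choices)

    def lift(boolean_gate):
        """Extend a Boolean gate to ternary inputs as in Kleene's strong logic."""
        def ternary_gate(*xs):
            outs = {boolean_gate(*r) for r in resolutions(xs)}
            return outs.pop() if len(outs) == 1 else U
        return ternary_gate

    AND = lift(lambda a, b: a & b)
    OR  = lift(lambda a, b: a | b)
    NOT = lift(lambda a: 1 - a)

    # Reproduces the tables above, e.g. and(0, u) = 0 but and(1, u) = u.
    assert AND(0, U) == 0 and AND(1, U) == U and OR(1, U) == 1 and NOT(U) == U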

Unfortunately, in some cases the circuit might behave in an undesirable way. Consider a multiplexer circuit (MUX), which for Boolean inputs x_0, x_1 and a select bit s outputs x_0 if s = 0 and x_1 if s = 1. A straightforward circuit implementation is shown in Figure 1(a). Despite the fact that MUX(1, 1, 0) = MUX(1, 1, 1) = 1, one can verify that in Figure 1(a) the circuit outputs u on the input (1, 1, u). Such behaviour is called a hazard:

(a) Multiplexer with a hazard at (1, 1, u)

(b) Hazard-free multiplexer
Figure 1: Two circuits that implement the same Boolean multiplexer function. One has a hazard, the other one is hazard-free.
1.1 Definition (Hazard).

We say that a circuit C on n inputs has a hazard at x ∈ T^n iff C(x) = u and there is a Boolean value b such that for all resolutions y of x we have C(y) = b. If C has no hazard, it is called hazard-free.

The name hazard-free has different meanings in the literature; our definition is taken from [DDT78]. In Figure 1(b) we see a hazard-free circuit for the multiplexer function. Note that this circuit uses more gates than the one in Figure 1(a). The problem of detecting hazards and constructing hazard-free circuits has given rise to a large body of literature, see Section 2. The question whether hazards can be avoided in principle was settled by Huffman.
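The hazard at (1, 1, u) and its repair can be checked mechanically. The following sketch is our own; the two formulas are assumptions about what Figure 1 shows, namely (x0 ∧ ¬s) ∨ (x1 ∧ s) for the circuit with the hazard and the same formula with the redundant term x0 ∧ x1 added for the hazard-free one. It evaluates both circuits gate by gate over the three-valued logic and searches for hazards by brute force.

    from itertools import product

    U = 'u'

    def resolutions(xs):
        choices = [[0, 1] if x == U else [x] for x in xs]
        return product(*choices)

    def lift(gate):
        def g(*xs):
            outs = {gate(*r) for r in resolutions(xs)}
            return outs.pop() if len(outs) == 1 else U
        return g

    AND, OR, NOT = lift(lambda a, b: a & b), lift(lambda a, b: a | b), lift(lambda a: 1 - a)

    def mux_hazardous(x0, x1, s):          # assumed formula for Figure 1(a)
        return OR(AND(x0, NOT(s)), AND(x1, s))

    def mux_hazard_free(x0, x1, s):        # assumed formula for Figure 1(b): adds x0 AND x1
        return OR(OR(AND(x0, NOT(s)), AND(x1, s)), AND(x0, x1))

    def mux_spec(x0, x1, s):               # the Boolean multiplexer function itself
        return x1 if s else x0

    def has_hazard(circuit):
        for x in product([0, 1, U], repeat=3):
            vals = {mux_spec(*r) for r in resolutions(x)}
            if len(vals) == 1 and circuit(*x) == U:   # all resolutions agree, output unstable
                return x
        return None

    print(has_hazard(mux_hazardous))    # (1, 1, 'u') -- the hazard from the text
    print(has_hazard(mux_hazard_free))  # None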

1.2 Theorem ([Huf57]).

Every Boolean function has a hazard-free circuit computing it.

He immediately noted that avoiding hazards is potentially expensive [Huf57, p. 54]:

“In this example at least, the elimination of hazards required a substantial increase in the number of contacts.”

Indeed, his result is derived using a clause construction based on the prime implicants of the considered function, of which there can be exponentially many, see e.g. [CM78]. There has been no significant progress on the complexity of hazard-free circuits since Huffman's work. Accordingly, the main question we study in this paper is:

What is the additional cost of making a circuit hazard-free?

Our Contribution

Unconditional lower bounds.

Our first main result is that monotone circuit lower bounds directly yield lower bounds on hazard-free circuits. A circuit is monotone if it only uses and-gates and or-gates, but does not use any not-gates. For a Boolean function f, denote (i) by size(f) its Boolean complexity, i.e., the size of a smallest circuit computing f, (ii) by size_u(f) its hazard-free complexity, i.e., the size of a smallest hazard-free circuit computing f, and (iii), if f is monotone, by size_mon(f) its monotone circuit complexity, i.e., the size of a smallest monotone circuit computing f. We show that size_u properly extends size_mon to the domain of all Boolean functions.

1.3 Theorem.

If f is monotone, then size_u(f) = size_mon(f).

We consider this connection particularly striking, because hazard-free circuits are highly desirable in practical applications, whereas monotone circuits may seem like a theoretical curiosity with little immediate applicability. Moreover, to our surprise the construction underlying Theorem 1.3 yields a circuit computing a new directional derivative of the function at a point x in direction y, which we call the hazard derivative. (Interestingly, this is closely related to, but not identical to, the Boolean directional derivative defined in e.g. [dRSdlVC12, Def. 3], which has applications in cryptography. To the best of our knowledge, the hazard derivative has not appeared in the literature so far.) The derivative at 0 equals the function itself if the function is monotone (and not constant 1). We consider this observation to be of independent interest, as it provides additional insight into the structure of hazard-free circuits.

We get the following (non-exhaustive) list of immediate corollaries that highlight the importance of Theorem 1.3.

1.4 Corollary (using monotone lower bound from [Raz85]).

Define the Boolean permanent function PER_n : {0, 1}^{n×n} → {0, 1} as

PER_n(x) = ⋁_{π ∈ S_n} x_{1,π(1)} ∧ ⋯ ∧ x_{n,π(n)},

where S_n denotes the set of permutations of {1, …, n}. We have size(PER_n) = n^{O(1)} and size_u(PER_n) = n^{Ω(log n)}.

1.5 Corollary (using monotone lower bound from [Tar88]).

There exists a family of functions f_n such that size(f_n) = n^{O(1)} and size_u(f_n) ≥ 2^{n^ε} for a constant ε > 0.

In particular, there is an exponential separation between size(f_n) and size_u(f_n), where the difference does not originate from an artificial exclusion of not-gates, but rather from the requirement to avoid hazards. We even obtain separation results for non-monotone functions!

1.6 Corollary.

Let det_n be the determinant over the field F_2 with two elements, that is,

det_n(x) = ⨁_{π ∈ S_n} x_{1,π(1)} ∧ ⋯ ∧ x_{n,π(n)}.

We have size(det_n) = n^{O(1)} and size_u(det_n) = n^{Ω(log n)}.

Another corollary of Theorem 1.3 separates circuits of linear size from their hazard-free counterparts.

1.7 Corollary (using monotone lower bound from [AB87]).

There exists a family of functions f_N such that size(f_N) = O(N) but size_u(f_N) is at least exponential in N^ε for some constant ε > 0, where the number of input variables of f_N is N.

As a final example, we state a weaker, but still substantial separation result for Boolean matrix multiplication.

1.8 Corollary (using monotone lower bound from [Pat75, MG76], see also the earlier [Pra74]).

Let BMM_n : {0, 1}^{n×n} × {0, 1}^{n×n} → {0, 1}^{n×n} be the Boolean matrix multiplication map, i.e., BMM_n(X, Y) = Z with Z_{i,j} = ⋁_{k=1}^{n} X_{i,k} ∧ Y_{k,j}. Every circuit computing BMM_n with fewer than 2n³ − n² gates has a hazard. In particular, every circuit that implements Strassen's algorithm [Str69] or any of its later improvements (see e.g. [LG14]) has a hazard.

Since our methods are based on relabeling circuits only, analogous translations can be performed for statements about other circuit complexity measures, for example, the separation result for the circuit depth from [RW92]. The previously best lower bounds on the size of hazard-free circuits are restricted to depth 2 circuits (with unbounded fan-in and not counting input negations), see Section 2.

Parametrized upper bound.

These hardness results imply that we cannot hope for a general construction of a small hazard-free circuit for a function f even if size(f) is small. However, the task becomes easier when restricting to hazards with a limited number of unstable input bits.

1.9 Definition (-bit hazard).

For a natural number k, a circuit C on n inputs has a k-bit hazard at x ∈ T^n iff C has a hazard at x and u appears at most k times in x.

Such a restriction on the number of unstable input bits has been considered in many papers (see e.g. [YR64, ZKK79, Ung95, HOI12]), but the state-of-the-art in terms of asymptotic complexity has not improved since Huffman's initial construction [Huf57], which is of size exponential in n, see the discussion of [TY12, TYM14] in [Fri17, Sec. "Speculative Computing"]. We present a construction whose blow-up is exponential in k, but polynomial in n. In particular, if k is constant and the given circuit is of polynomial size, this is an exponential improvement.

1.10 Corollary.

Let C be a circuit with n inputs, s gates, and depth d. Then there is a circuit that computes the same function, has no k-bit hazards, and whose number of gates and depth exceed s and d by at most a factor that is polynomial in n for every fixed k.

Further results.

We round off the presentation by a number of further results. First, to further support the claim that the theory of hazards in circuits is natural, we prove that it is independent of the set of gates (and, or, not), as long as the set of gates is functionally complete and contains a constant function, see Corollary A.4. Second, it appears unlikely that much more than logarithmically many unstable bits can be handled with only a polynomial circuit size blow-up.

1.11 Theorem.

Fix a monotonically weakly increasing sequence k(n) of natural numbers with k(n) ≤ n and set c(n) := ⌊k(n)/⌈log₂ n⌉⌋. If Boolean circuits deciding c(n)-CLIQUE on graphs with n vertices require a circuit size of at least s(n), then there exists a function f_n with size(f_n) = n^{O(1)} for which circuits without k(n)-bit hazards require Ω(s(n)) many gates to compute f_n.

In particular, if k(n) is only slightly superlogarithmic, then Theorem 1.11 provides a function where the circuit size blow-up is superpolynomial if we insist on having no k(n)-bit hazards. In this case c(n) is slightly superconstant, which means that "Boolean circuits deciding c(n)-CLIQUE require size at least n^{Ω(c(n))}" is a consequence of a nonuniform version of the exponential time hypothesis (see [LMS11]), i.e., smaller circuits would be a major algorithmic breakthrough.

We remark that, although it has not been done before, deriving conditional lower bounds such as Theorem 1.11 is rather straightforward. In contrast, Theorem 1.3 yields unconditional lower bounds.

Finally, determining whether a circuit has a hazard is NP-complete, even for 1-bit hazards (Theorem 6.5). This matches the fact that the best known algorithms for these tasks have exponential running time [Eic65]. Interestingly, this also means that if NP ≠ coNP, then, given a circuit, there is no polynomial-time verifiable certificate of size polynomial in the size of the circuit proving that the circuit is hazard-free, or even free of 1-bit hazards.

2 Related work

Multi-valued logic is a very old topic and several three-valued logic definitions exist. In 1938 Kleene defined his strong logic of indeterminacy [Kle38, p. 153], see also his later textbook [Kle52, §64]. It can be readily defined by encoding u as 1/2 and setting not(x) = 1 − x, and(x, y) = min(x, y), and or(x, y) = max(x, y), as is commonly done in fuzzy logic [PCRF79, Roj96]. This happens to model the behavior of physical Boolean gates and can be used to formally define hazards. This was first realized by Goto in [Got49, p. 128], which is the first paper that contains a hazard-free implementation of the multiplexer, see [Got49, Fig. 75]. The third truth value in circuits was mentioned one year earlier in [Got48]. As far as we know, this early Japanese work went unnoticed in the Western world at first. The first structural results on hazards appeared in a seminal paper by Huffman [Huf57], who proved that every Boolean function has a hazard-free circuit. This is also the first paper that observes the apparent circuit size blow-up that occurs when insisting on a hazard-free implementation of a function. Huffman mainly focused on 1-bit hazards, but notes that his methods carry over to general hazards. Interestingly, our Corollary 1.10 shows that for 1-bit hazards the circuit size blow-up is polynomially bounded, while for general hazards we get the strong separation of Corollary 1.5.

The importance of hazard-free circuits is already highlighted for example in the classical textbook [Cal58]. Three-valued logic for circuits was introduced by Yoeli and Rinon in [YR64]. In 1965, Eichelberger published the influential paper [Eic65], which shows how to use three-valued logic to detect hazards in exponential time. This paper also contains the first lower bound on hazard-free depth 2 circuits: A hazard-free and-or circuit with negations at the inputs must have at least as many gates as its function has prime implicants, which can be an exponentially large number, see e.g. [CM78]. Later work on lower bounds was also only concerned with depth 2 circuits, for example [ND92].

Mukaidono [Muk72] was the first to formally define a partial order of definedness, see also [Muk83b, Muk83a], where it is shown that a ternary function is computable by a circuit iff it is monotone under this partial order. In 1981 Marino [Mar81] used a continuity argument to show (in a more general context) that specific ternary functions cannot be implemented; for example, there is no circuit that implements the detector function d with d(0) = d(1) = 0 and d(u) = 1.

Nowadays the theory of three-valued logic and hazards can be found for example in the textbook [BS95]. A fairly recent survey on multi-valued logic and hazards is given in [BEI01].

Recent work models clocked circuits [FFL18]. Applying the standard technique of "unrolling" a clocked circuit into a combinational circuit, one sees that the computational power of clocked and unclocked circuits is the same. Moreover, lower and upper bounds translate between the models as expected; using r rounds of computation changes circuit size by a factor of at most r. However, [FFL18] also models a special type of registers, masking registers, which have the property that if they output u when being read in some clock cycle, they output a stable value in all subsequent rounds (until written to again). With these registers, each round of computation enables computing strictly more (ternary) functions. Interestingly, adding masking registers also breaks the relation between hazard-free and monotone complexity: [FFL18] presents a transformation that trades a factor O(k) blow-up in circuit size for eliminating k-bit hazards. In particular, choosing k = n, a linear blow-up suffices to construct a hazard-free circuit out of an arbitrary hazardous implementation of a Boolean function.

Seemingly unrelated, in 2009 a cybersecurity paper [TWM09] was published that studies information flow on the Boolean gate level. The logic of the information flow happens to be Kleene’s logic and thus results transfer in both directions. In particular (using different nomenclature) they design a circuit (see [TWM09, Fig. 2]) that computes the Boolean derivative, very similar to our construction in Proposition 4.10. In the 2012 follow-up paper [HOI12] the construction of this circuit is monotone (see [HOI12, Fig. 1]) which is a key property that we use in our main structural correspondence result Theorem 1.3.

There is an abundance of monotone circuit lower bounds that all translate to hazard-free complexity lower bounds, for example [Raz85, AG87, Yao89, RW92] and references in [GS92] for general problems, but also [Weg82] and references therein for explicit problems, [Pra74, Pat75, MG76] for matrix multiplication, and [Blu85] for the Boolean convolution map. This last reference also implies that any implementation of the Fast Fourier Transform to solve Boolean convolution must have hazards.

On a very high level, some parts of our upper-bound construction in Section 5 are reminiscent of [NW96, Prop. 6.5] or [Gol11].

3 Definitions

We study functions T^n → T that can be implemented by circuits. The Boolean analogue is just the set of all Boolean functions. In our setting this is more subtle. First of all, if a circuit gets a Boolean input, then by the definition of the gates it also outputs a Boolean value. Thus every function that is computed by circuits preserves stable values, i.e., yields a Boolean value on a Boolean input. Now we equip T with a partial order ≤ such that u is the least element and 0 and 1 are incomparable elements greater than u, see [Muk72]. We extend this order to T^n in the usual way, i.e., componentwise. For tuples x, y ∈ T^n the statement x ≤ y means that y is obtained from x by replacing some unstable values with stable ones. Since the gates and, or, and not are monotone with respect to ≤, every function computed by a circuit must be monotone with respect to ≤. It turns out that these two properties capture precisely what can be computed:

3.1 Proposition ([Muk72, Thm. 3]).

A function f : T^n → T can be computed by a circuit iff f preserves stable values and is monotone with respect to ≤.

A function that preserves stable values and is monotone with respect to ≤ shall be called a natural function. A function g : T^n → T is called an extension of a Boolean function f : B^n → B if the restriction of g to B^n coincides with f.

Observe that any natural extension g of a Boolean function f must satisfy the following. If y and z are resolutions of some x ∈ T^n (in particular y, z ∈ B^n and x ≤ y, x ≤ z) such that f(y) ≠ f(z), it must hold that g(y) = 0 and g(z) = 1 (or vice versa), due to preservation of stable values. By ≤-monotonicity, this necessitates that g(x) = u, the only value "smaller" than both 0 and 1. Thus, one cannot hope for a stable output of a circuit if the input has two resolutions with different outputs. In contrast, if all resolutions of x produce the same output, we can require a stable output at x, i.e., that a circuit computing f is hazard-free.

3.2 Definition.

For a Boolean function f : B^n → B, define its hazard-free extension f^u : T^n → T as follows:

f^u(x) = b if f(y) = b for every resolution y of x (for b ∈ B), and f^u(x) = u otherwise.

Hazard-free extensions are natural functions and are exactly those functions that are computed by hazard-free circuits, as can be seen for example by Theorem 1.2. Equivalently, f^u is the unique extension of f that is monotone with respect to ≤ and pointwise maximal with respect to ≤ among all such extensions.
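Definition 3.2 can be turned directly into a small brute-force procedure. The sketch below (our illustration, not code from the paper) computes the hazard-free extension of a given Boolean function by enumerating resolutions; it is convenient as a reference implementation when experimenting with hazards.

    from itertools import product

    U = 'u'

    def resolutions(x):
        return product(*[[0, 1] if xi == U else [xi] for xi in x])

    def hazard_free_extension(f):
        """Return the ternary function that outputs b iff f is b on all resolutions."""
        def f_u(*x):
            vals = {f(*r) for r in resolutions(x)}
            return vals.pop() if len(vals) == 1 else U
        return f_u

    xor = hazard_free_extension(lambda a, b: a ^ b)
    assert xor(1, 0) == 1 and xor(U, 0) == U and xor(U, U) == U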

We remark that later on we will also use the usual order 0 ≤ 1 on B and its componentwise extension to B^n. We stress that the term monotone Boolean function refers to functions monotone with respect to this order.

4 Lower bounds on the size of hazard-free circuits

In this section, we prove that size_u(f) = size_mon(f) for monotone functions f, from which Corollaries 1.4 to 1.8 follow. Our first step is to show that size_u(f) ≤ size_mon(f), which is straightforward.

Conditional lower bound

In this section we prove Theorem 1.11, which is a direct consequence of the following proposition and the observation that c(n)·⌈log₂ n⌉ ≤ k(n).

4.1 Proposition.

Fix a monotonically weakly increasing sequence c(n) of natural numbers with c(n) ≤ n. There is a function f_n with size(f_n) = n^{O(1)} and the following property: if f_n can be computed by circuits of size s that are free of c(n)·⌈log₂ n⌉-bit hazards, then there are Boolean circuits of size O(s) that decide c(n)-CLIQUE.

Proof.

The function f_n gets as input the adjacency matrix of a graph G on n vertices and a list of c(n) vertex indices, each encoded in binary with ⌈log₂ n⌉ many bits:

f_n(G, v_1, …, v_{c(n)}) = 1 iff the vertices v_1, …, v_{c(n)} are pairwise distinct and pairwise adjacent in G.

Clearly size(f_n) = n^{O(1)}. Let C compute f_n and have no c(n)·⌈log₂ n⌉-bit hazards. By the definition of c(n)·⌈log₂ n⌉-bit hazards, it follows that C(G, u, …, u) ≠ 0 iff G contains a c(n)-clique. From C we construct a circuit C′ that decides c(n)-CLIQUE as follows. We double each gate and each wire. Additionally, after each doubled not-gate we twist the two wires, so that this not construction sends (0, 1) to (0, 1) instead of to (1, 0). Stable inputs 0 and 1 to C are doubled to (0, 0) and (1, 1), respectively, whereas the input u is encoded as the Boolean pair (0, 1). It is easy to see that the resulting circuit simulates C. Our circuit C′ should have n² inputs and should satisfy C′(G) = 1 iff G contains a c(n)-clique, thus we fix the rightmost c(n)·⌈log₂ n⌉ input pairs to the constant pair (0, 1) to obtain C′. From the two output gates, we treat the right output gate as the output of C′, while dismissing the left output gate. ∎
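The doubling-and-twisting simulation can be checked on the level of single gates. The sketch below is our own; the concrete encoding 0 ↦ (0, 0), 1 ↦ (1, 1), u ↦ (0, 1) is our assumption of one consistent choice, not necessarily the one used in the paper. It verifies that doubled and- and or-gates together with a twisted doubled not-gate reproduce the ternary gate behaviour using Boolean gates only; with this encoding the second (right) component of a pair is 0 exactly when the simulated ternary value is 0, which is why a single output gate suffices for deciding CLIQUE.

    from itertools import product

    U = 'u'
    ENC = {0: (0, 0), 1: (1, 1), U: (0, 1)}   # assumed dual-rail encoding
    DEC = {v: k for k, v in ENC.items()}

    def and2(p, q): return (p[0] & q[0], p[1] & q[1])   # doubled and-gate
    def or2(p, q):  return (p[0] | q[0], p[1] | q[1])   # doubled or-gate
    def not2(p):    return (1 - p[1], 1 - p[0])         # doubled not-gate, wires twisted

    def ternary_and(a, b):   # Kleene semantics, for reference
        return 0 if 0 in (a, b) else (1 if (a, b) == (1, 1) else U)
    def ternary_or(a, b):
        return 1 if 1 in (a, b) else (0 if (a, b) == (0, 0) else U)
    def ternary_not(a):
        return U if a == U else 1 - a

    for a, b in product([0, 1, U], repeat=2):
        assert DEC[and2(ENC[a], ENC[b])] == ternary_and(a, b)
        assert DEC[or2(ENC[a], ENC[b])] == ternary_or(a, b)
        assert DEC[not2(ENC[a])] == ternary_not(a)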

Monotone circuits are hazard-free

4.2 Lemma.

Monotone circuits are hazard-free. In particular, for monotone Boolean functions f we have size_u(f) ≤ size_mon(f).

Proof.

We prove the claim by induction over the number of computation gates in the circuit. Trivially, a monotone circuit without computation gates is hazard-free, as it merely forwards some input to the output. For the induction step, let C be a monotone circuit computing a function f such that the gate computing the output of C receives as inputs the outputs of two hazard-free monotone subcircuits A and B. We denote by a and b the natural functions computed by A and B, respectively. The gate computing the output of C can be an and- or an or-gate, and we will treat both cases in parallel. Let x ∈ T^n be arbitrary with the property that C(y) = 1 for all resolutions y of x. Denote by z_0 the resolution of x in which all u's are replaced by 0. The fact that C(z_0) = 1 implies that (a(z_0) = 1 and b(z_0) = 1) for the and-gate, respectively (a(z_0) = 1 or b(z_0) = 1) for the or-gate. By monotonicity of a and b as Boolean functions, this extends from z_0 to all resolutions y of x, because z_0 ≤ y and thus a(z_0) ≤ a(y) and b(z_0) ≤ b(y). Since A and B are hazard-free by the induction hypothesis, we have (a(x) = 1 and b(x) = 1), respectively (a(x) = 1 or b(x) = 1). As basic gates are hazard-free, we conclude that C(x) = 1.

The case that C(y) = 0 for all resolutions y of some x is analogous, where z_0 is replaced by z_1, the resolution of x in which all u's are replaced by 1. ∎
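Lemma 4.2 is easy to confirm experimentally for small circuits. The sketch below (our own, with an arbitrary example formula) evaluates a monotone circuit gate by gate over the three-valued logic and checks that the result coincides with the hazard-free extension of the computed Boolean function on every ternary input, which is equivalent to hazard-freeness.

    from itertools import product

    U = 'u'

    def resolutions(x):
        return product(*[[0, 1] if xi == U else [xi] for xi in x])

    def lift(g):
        def h(*xs):
            vals = {g(*r) for r in resolutions(xs)}
            return vals.pop() if len(vals) == 1 else U
        return h

    AND, OR = lift(lambda a, b: a & b), lift(lambda a, b: a | b)

    def circuit(x, y, z):                  # a monotone circuit: (x AND y) OR (y AND z)
        return OR(AND(x, y), AND(y, z))

    def spec(x, y, z):                     # the Boolean function it computes
        return (x & y) | (y & z)

    for x in product([0, 1, U], repeat=3):
        vals = {spec(*r) for r in resolutions(x)}
        expected = vals.pop() if len(vals) == 1 else U
        assert circuit(*x) == expected     # gate-by-gate ternary evaluation is hazard-free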

The following sections show a much deeper relationship between monotone and hazard-free circuits. A key concept is the derivative, which we will discuss next.

Derivatives of natural functions

Let f : T^n → T be a natural function and x ∈ B^n be a stable input. If x′ ≤ x, that is, if x′ is obtained from x by replacing stable bits by u, then f(x′) ≤ f(x). This means that there are two possibilities for f(x′) — either f(x′) = f(x) or f(x′) = u.

We can encode in one Boolean function the information about how the value of f changes from f(x) to u when the bits of the input change from stable to unstable. It is reminiscent of the idea of the derivative in analysis or the Boolean derivative, which also show how the value of the function changes when the input changes. To make the connection more apparent, we introduce a notation for replacing stable bits by unstable ones: if x, y ∈ B^n, then x + u·y denotes the tuple that is obtained from x by changing the values to u in all positions in which y has a 1, and keeping the other values unchanged. Formally,

(x + u·y)_i = x_i if y_i = 0, and (x + u·y)_i = u if y_i = 1.

This notation is consistent with interpreting the addition and multiplication on T as the hazard-free extensions of the usual addition modulo 2 and multiplication on B (xor and and).

Any tuple z ∈ T^n can be presented as x + u·y for some x, y ∈ B^n. As we have seen, f(x + u·y) is either f(x) or u. This condition can also be written as f(x + u·y) = f(x) + u·b for some b ∈ B.

4.3 Definition.

Let f : T^n → T be a natural function. The hazard derivative (or just derivative for short) of f is the Boolean function df : B^n × B^n → B such that

f(x + u·y) = f(x) + u·df(x; y).   (4.4)

In other words, df(x; y) = 0 if f(x + u·y) = f(x) ∈ B, and df(x; y) = 1 if f(x + u·y) = u.

For a Boolean function f : B^n → B we use the shorthand notation df := d(f^u).

Consider for example the disjunction or. The values of or(a + u·da, b + u·db) are as follows:

or  | 0 1 u
  0 | 0 1 u
  1 | 1 1 1
  u | u 1 u

Thus,

d or((a, b); (da, db)) = (¬a ∧ db) ∨ (¬b ∧ da) ∨ (da ∧ db).   (4.5a)

Similarly, we find

d and((a, b); (da, db)) = (a ∧ db) ∨ (b ∧ da) ∨ (da ∧ db),   (4.5b)

d not(a; da) = da,   (4.5c)

d c = 0 for every constant gate c.   (4.5d)

Caveat: Since natural functions are exactly those ternary functions defined by circuits, we can obtain df from the ternary evaluations of any circuit computing f. For Boolean functions f it is more natural to think of df as a property of the function f, because the correspondence to circuits is not as close: we can obtain df from the ternary evaluations of any hazard-free circuit computing f on Boolean inputs.

In general, we can find the derivative of a Boolean function as follows:

4.6 Lemma.

For f : B^n → B and x, y ∈ B^n, we have df(x; y) = ⋁_{z ≤ y} (f(x) ⊕ f(x ⊕ z)). In particular, if f(x) = 0, then df(x; y) = ⋁_{z ≤ y} f(x ⊕ z).

Proof.

Resolutions of x + u·y coincide with x at positions where y has a 0 and have arbitrary stable bits at positions where y has a 1. Therefore, each resolution of x + u·y can be presented as x ⊕ z for some z ∈ B^n such that z_i = 0 whenever y_i = 0, that is, z ≤ y. Hence, the set of all resolutions of x + u·y is {x ⊕ z : z ≤ y}.

The derivative df(x; y) = 1 if and only if f^u(x + u·y) = u. By definition of hazard-freeness, this happens when f takes both values 0 and 1 on this set, in other words, when f(x ⊕ z) ≠ f(x) for some z ≤ y. The disjunction ⋁_{z ≤ y} (f(x) ⊕ f(x ⊕ z)) represents exactly this statement. ∎
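Lemma 4.6 can likewise be checked by brute force. The sketch below (our own, with an arbitrary example function) computes the hazard derivative both from its definition via the hazard-free extension and from the closed form of Lemma 4.6, and verifies that the two agree.

    from itertools import product

    U = 'u'

    def resolutions(x):
        return product(*[[0, 1] if xi == U else [xi] for xi in x])

    def hazard_free_extension(f):
        def f_u(*x):
            vals = {f(*r) for r in resolutions(x)}
            return vals.pop() if len(vals) == 1 else U
        return f_u

    def derivative(f, x, y):
        """df(x; y): does the hazard-free value of f become unstable when the
        positions marked by y are made unstable?"""
        f_u = hazard_free_extension(f)
        xu = tuple(U if yi else xi for xi, yi in zip(x, y))   # x + u*y
        return 1 if f_u(*xu) == U else 0

    def derivative_formula(f, x, y):
        """The closed form from Lemma 4.6."""
        n = len(x)
        subs = [z for z in product([0, 1], repeat=n)
                if all(zi <= yi for zi, yi in zip(z, y))]
        return max(f(*x) ^ f(*[xi ^ zi for xi, zi in zip(x, z)]) for z in subs)

    f = lambda a, b, c: (a & b) ^ c        # running example of ours
    for x in product([0, 1], repeat=3):
        for y in product([0, 1], repeat=3):
            assert derivative(f, x, y) == derivative_formula(f, x, y)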

As a corollary, we obtain a surprisingly close relation between monotone Boolean functions and their derivatives. For a natural function f and any fixed x ∈ B^n, let df(x; ·) denote the Boolean function that maps y to df(x; y), and define the shorthand df(x; ·) := d(f^u)(x; ·) for a Boolean function f.

4.7 Corollary.

Suppose that f : B^n → B is monotone with f(0, …, 0) = 0. Then df(0; ·) = f.

4.8 Lemma.

For natural f and fixed x ∈ B^n, the function df(x; ·) is a monotone Boolean function.

Proof.

Note that the expression x + u·y is antimonotone in y: if y ≤ y′, i.e., y′ is obtained from y by replacing 0s with 1s, then x + u·y′ is obtained from x + u·y by replacing more stable bits of x with u, so x + u·y′ ≤ x + u·y. Thus, if y ≤ y′, f being natural yields that

f(x + u·y′) ≤ f(x + u·y),

so df(x; y) ≤ df(x; y′). ∎

We can also define derivatives for vector functions f : T^n → T^m with natural components f_1, …, f_m as df(x; y) := (df_1(x; y), …, df_m(x; y)). Note that the equation (4.4) still holds and uniquely defines the derivative for vector functions.

The following statement is the analogue of the chain rule in analysis.

4.9 Lemma (Chain rule).

Let f : T^n → T^m and g : T^m → T^l be natural functions and h = g ∘ f. Then

dh(x; y) = dg(f(x); df(x; y)).

Proof.

Use equation (4.4):

h(x + u·y) = g(f(x + u·y)) = g(f(x) + u·df(x; y)) = g(f(x)) + u·dg(f(x); df(x; y)) = h(x) + u·dg(f(x); df(x; y)),

and the claim follows with another application of (4.4). ∎
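The chain rule can also be verified numerically. In the sketch below (our own), the natural functions are obtained as hazard-free extensions of small Boolean functions and composed at the ternary level; note that the composition need not itself be a hazard-free extension, which is exactly why the chain rule is stated for natural functions.

    from itertools import product

    U = 'u'

    def resolutions(x):
        return product(*[[0, 1] if xi == U else [xi] for xi in x])

    def hf(f):                                       # hazard-free extension
        def f_u(*x):
            vals = {f(*r) for r in resolutions(x)}
            return vals.pop() if len(vals) == 1 else U
        return f_u

    def deriv(F, x, y):
        """Componentwise derivative of a natural (ternary) vector function F."""
        xu = tuple(U if yi else xi for xi, yi in zip(x, y))
        return tuple(1 if v == U else 0 for v in F(*xu))

    # f: B^3 -> B^2 and g: B^2 -> B^1, taken as hazard-free (hence natural) extensions.
    f1, f2 = hf(lambda a, b, c: a & b), hf(lambda a, b, c: b | c)
    g1 = hf(lambda p, q: p ^ q)
    F = lambda a, b, c: (f1(a, b, c), f2(a, b, c))
    G = lambda p, q: (g1(p, q),)
    H = lambda a, b, c: G(*F(a, b, c))               # ternary composition g o f

    for x in product([0, 1], repeat=3):
        for y in product([0, 1], repeat=3):
            assert deriv(H, x, y) == deriv(G, F(*x), deriv(F, x, y))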

Using monotone circuits to compute derivatives

In this section we show how to efficiently compute derivatives by transforming circuits to monotone circuits. Our main tool is the chain rule (Lemma 4.9).

For a circuit C and a gate g of C, let C_g denote the natural function computed at the gate g.

4.10 Proposition.

From a circuit C we can construct a circuit C′ by independently replacing each gate g on inputs g_1, …, g_m (where m ≤ 2) by a subcircuit on inputs g_1, …, g_m, g′_1, …, g′_m with two output gates g and g′ (the wiring between these subcircuits in C′ is the same as the wiring between the gates in C, but in C′ we have two parallel wires for each wire in C) such that C′_g(x, y) = C_g(x) and C′_{g′}(x, y) = dC_g(x; y) for Boolean inputs x, y ∈ B^n.

Proof.

To construct C′, we extend C with new gates. For each gate g in C, we add a new gate g′. If g is the input gate x_i, then g′ is the input gate y_i. If g is a constant gate, then g′ is the constant-0 gate.

The most interesting case is when g is a gate implementing a function φ with incoming edges from gates g_1, …, g_m (in our definition of the circuit, the arity m is 1 or 2, but the construction works without modification in the general case). In this case, we add to C′ a subcircuit which takes g_1, …, g_m and their counterparts g′_1, …, g′_m as inputs and has g′ as its output gate, and which computes dφ((g_1, …, g_m); (g′_1, …, g′_m)). For the sake of concreteness, for the gate types not, and, or this construction according to (4.5) is depicted in Figure 2.

Figure 2: Gates in C get replaced by subcircuits in the construction of C′.

Clearly C′_g(x, y) = C_g(x). By induction on the structure of the circuit, we now prove that C′_{g′}(x, y) = dC_g(x; y). In the base case, if g is an input or constant gate, the claim is obvious. If g is a gate of type φ with incoming edges from g_1, …, g_m, then

C_g(x) = φ(C_{g_1}(x), …, C_{g_m}(x)).

By the chain rule,

dC_g(x; y) = dφ((C_{g_1}(x), …, C_{g_m}(x)); (dC_{g_1}(x; y), …, dC_{g_m}(x; y))).

By the induction hypothesis, C′_{g′_i}(x, y) = dC_{g_i}(x; y) for all i, thus the induction step succeeds by construction of the subcircuit computing g′. ∎

Note that this construction can be seen as a simulation of the behavior of the circuit C on the input x + u·y: the value computed at the gate g on this input is C_g(x) + u·dC_g(x; y), and in C′ the gates g and g′ compute the two parts of this expression separately.
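The construction can be emulated directly in software. The sketch below is our own; the local derivative rules are our reconstruction of (4.5). It propagates a (value, derivative) pair through the multiplexer circuit of Figure 1(a) and checks the result against the ternary evaluation of the same circuit, i.e., against the statement of Proposition 4.10.

    from itertools import product

    # local rules: value and hazard derivative of each basic gate
    def NOT(a, da):        return 1 - a, da
    def AND(a, da, b, db): return a & b, (a & db) | (b & da) | (da & db)
    def OR(a, da, b, db):  return a | b, ((1 - a) & db) | ((1 - b) & da) | (da & db)

    def mux_pair(x, dx):
        """Doubled version of the multiplexer circuit of Figure 1(a):
        each wire of the original circuit carries a (value, derivative) pair."""
        (x0, x1, s), (dx0, dx1, ds) = x, dx
        ns, dns = NOT(s, ds)
        a0, da0 = AND(x0, dx0, ns, dns)
        a1, da1 = AND(x1, dx1, s, ds)
        return OR(a0, da0, a1, da1)

    # brute-force reference: derivative = 1 iff the ternary evaluation becomes u
    U = 'u'
    def lift(g):
        def h(*xs):
            rs = product(*[[0, 1] if v == U else [v] for v in xs])
            vals = {g(*r) for r in rs}
            return vals.pop() if len(vals) == 1 else U
        return h
    tAND, tOR, tNOT = lift(lambda a, b: a & b), lift(lambda a, b: a | b), lift(lambda a: 1 - a)
    def mux_ternary(x0, x1, s):
        return tOR(tAND(x0, tNOT(s)), tAND(x1, s))

    for x in product([0, 1], repeat=3):
        for dx in product([0, 1], repeat=3):
            xu = tuple(U if d else v for v, d in zip(x, dx))
            value, deriv = mux_pair(x, dx)
            assert value == mux_ternary(*x)                      # the original output
            assert deriv == (1 if mux_ternary(*xu) == U else 0)  # the hazard derivative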

By fixing the first half of the input bits in C′ we now establish the link to monotone complexity. In the following theorem the case x = (0, …, 0) will be of particular interest.

4.11 Theorem.

For f : B^n → B and fixed x ∈ B^n, it holds that size_mon(df(x; ·)) ≤ size_u(f).

Proof.

Let C be a hazard-free circuit for f of minimal size and let x ∈ B^n be fixed. We start by constructing the circuit C′ from Proposition 4.10 and for each gate g in C we remember the corresponding subcircuit in C′. For each subcircuit we call the inputs g_1, …, g_m the primary inputs and their counterparts g′_1, …, g′_m the secondary inputs. From C′ we now construct a monotone circuit on n inputs that computes df(x; ·) as follows. We fix the leftmost n input bits in C′ to x. This assigns a Boolean value to each primary input in each constructed subcircuit. Each constructed subcircuit's secondary output g′ now computes some Boolean function in the secondary inputs g′_1, …, g′_m. If the values at the secondary inputs are dC_{g_1}(x; y), …, dC_{g_m}(x; y), then the value at the secondary output is dC_g(x; y). Lemma 4.8 implies that the computed function is monotone (which can alternatively be seen directly from Figure 2, where fixing all primary inputs makes all not-gates superfluous). However, the only monotone functions on at most two input bits are the identity (on one input), and, or, and the constants. Thus, we can replace each subcircuit in C′ by (at most) one monotone gate, yielding the desired monotone circuit that has at most as many gates as C and outputs dC(x; y) = df(x; y), where the equality holds because C is hazard-free and hence computes f^u. ∎
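The key step of the proof, that fixing the primary inputs collapses every derivative subcircuit to a single monotone gate, can be checked exhaustively for the rules of (4.5) (again in our reconstruction):

    from itertools import product

    d_and = lambda a, da, b, db: (a & db) | (b & da) | (da & db)
    d_or  = lambda a, da, b, db: ((1 - a) & db) | ((1 - b) & da) | (da & db)

    monotone_2bit = {
        'and':  lambda da, db: da & db, 'or':  lambda da, db: da | db,
        'da':   lambda da, db: da,      'db':  lambda da, db: db,
        'zero': lambda da, db: 0,       'one': lambda da, db: 1,
    }

    for rule in (d_and, d_or):
        for a, b in product([0, 1], repeat=2):
            table = tuple(rule(a, da, b, db) for da, db in product([0, 1], repeat=2))
            # the restricted subcircuit must coincide with one of the monotone 2-bit gates
            assert any(table == tuple(g(da, db) for da, db in product([0, 1], repeat=2))
                       for g in monotone_2bit.values())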

We now use this construction to prove Theorem 1.3.

Proof of Theorem 1.3.

The claim is trivial for the constant 1 function. Note that this is the only monotone function f with f(0, …, 0) = 1. Hence assume that f is monotone with f(0, …, 0) = 0. By Lemma 4.2, we have that size_u(f) ≤ size_mon(f). The other direction can be seen via Corollary 4.7 and Theorem 4.11: size_mon(f) = size_mon(df(0; ·)) ≤ size_u(f). ∎

Theorem 1.3 shows that the hazard-free complexity can be seen as an extension of monotone complexity to general Boolean functions. Thus, known results about the gap between general and monotone complexity transfer directly to hazard-free complexity.

Unconditional lower bounds

Corollaries 1.4, 1.5, and 1.8 are immediate applications of Theorem 1.3. Interestingly, however, we can also derive results on non-monotone functions, which is illustrated by Corollary 1.6.

Proof of Corollary 1.6.

The fact that the determinant can be computed efficiently is well known.

Consider the derivative d det_n(0; ·) (Lemma 4.6). If there exists a permutation π such that all y_{i,π(i)} are 1, then, replacing all the other entries of y with 0, we get a matrix z ≤ y with det_n(z) = 1, and hence d det_n(0; y) = 1. If there is no such permutation, then all the summands in the definition of det_n(y) are 0, and this is also true for all matrices z ≤ y; in this case, d det_n(0; y) = 0. Combining both cases, we get that d det_n(0; ·) equals the Boolean permanent function PER_n from Corollary 1.4. The lower bound then follows from [Raz85] and Theorem 4.11 (as in Corollary 1.4). ∎
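The identity d det(0; ·) = PER used in this proof can be confirmed numerically for small n; the sketch below (our own) does so for n = 3 via the formula of Lemma 4.6.

    from itertools import permutations, product

    n = 3

    def det_f2(m):                       # determinant over F_2: parity of all-ones permutations
        return sum(all(m[i][p[i]] for i in range(n)) for p in permutations(range(n))) % 2

    def perm_bool(m):                    # Boolean permanent PER_n
        return int(any(all(m[i][p[i]] for i in range(n)) for p in permutations(range(n))))

    def d_det_at_zero(y):
        """Lemma 4.6 at x = 0: d det(0; y) = OR over all z <= y of det_f2(z)."""
        ones = [(i, j) for i in range(n) for j in range(n) if y[i][j]]
        for bits in product([0, 1], repeat=len(ones)):
            z = [[0] * n for _ in range(n)]
            for (i, j), b in zip(ones, bits):
                z[i][j] = b
            if det_f2(z):
                return 1
        return 0

    for cells in product([0, 1], repeat=n * n):
        y = [list(cells[i * n:(i + 1) * n]) for i in range(n)]
        assert d_det_at_zero(y) == perm_bool(y)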

We can combine this technique with the ideas from the proof of Theorem 1.11 to show even stronger separation results, exhibiting a family of functions for which the complexity of Boolean circuits is linear, yet the complexity of hazard-free circuits grows almost as fast as in Corollary 1.5.

4.12 Lemma.

Let g : B^n → B be a monotone Boolean function with g(0, …, 0) = 0 and let f : B^n × B^m → B be a function such that g(x) = 1 iff f(x, w) = 1 for some w ∈ B^m. Then size_mon(g) ≤ size_u(f).

Proof.

Using Lemma 4.6, we obtain

df((0, …, 0); (y, 1, …, 1)) = ⋁_{z ≤ y} ⋁_{w ∈ B^m} f(z, w) = ⋁_{z ≤ y} g(z) = g(y)

(note that g(0, …, 0) = 0 implies f(0, …, 0) = 0), which means that a monotone circuit for g can be obtained from a monotone circuit for df((0, …, 0); ·) by substituting the constant 1 for the inputs corresponding to w. The statement then follows from Theorem 4.11. ∎

Proof of Corollary 1.7.

We use the NP-complete family from the paper of Alon and Boppana [AB87]. Let F_q denote a finite field with q elements. We encode subsets S ⊆ F_q × F_q using Boolean variables in a straightforward way. The function POLY maps S to 1 iff there exists a polynomial p of degree at most k over F_q such that (a, p(a)) ∈ S for every a ∈ F_q.

Alon and Boppana proved that for a suitable choice of k the monotone complexity of this function is at least exponential in q^δ for some constant δ > 0. For simplicity, we fix such a choice of k as a function of q. In this case, the lower bound is exponential in a power of the number of input variables.

We define f as the verifier for this instance of POLY. The function f takes q² + (k+1)⌈log₂ q⌉ variables. The first q² inputs encode a subset S ⊆ F_q × F_q, and the remaining (k+1)⌈log₂ q⌉ inputs encode the coefficients of a polynomial p of degree at most k over F_q, each coefficient using ⌈log₂ q⌉ bits. The value f(S, p) = 1 iff (a, p(a)) ∈ S for all a ∈ F_q. To implement the function f, for each element a ∈ F_q we compute the value p(a) using finite field arithmetic. Each such computation requires a number of gates polynomial in k and log q. Then we use p(a) as a selector in a multiplexer to compute the value indicating whether (a, p(a)) is contained in S, choosing it from all the bits of the input corresponding to pairs of the form (a, ·). This multiplexer requires O(q) additional gates for each element a. The result is the conjunction of the computed values for all a ∈ F_q. The total size of the circuit is linear in the size of the input.

The lower bound on the hazard-free complexity follows from the Alon-Boppana lower bound and Lemma 4.12. ∎

5 Constructing k-bit hazard-free circuits

In this section we prove Corollary 1.10.

For a collection S of subsets of {1, …, n}, denote by size_S(f) the minimum size of a circuit whose outputs coincide with f^u whenever the set of input positions with unstable bits is a subset of a set in the collection S. Thus, ≤-monotonicity of natural functions implies that on such inputs a circuit computing f can only err by outputting u instead of a stable value, i.e., by a hazard. Excluding k-bit hazards therefore means that we consider S = {S ⊆ {1, …, n} : |S| = k}, i.e., S contains all subsets of {1, …, n} with exactly k elements. The minimum circuit depth depth_S(f) is defined analogously.

As the base case of our construction, we construct circuits handling only fixed positions for the (up to) k unstable bits, i.e., S = {S} for a single set S ⊆ {1, …, n} with |S| = k. This is straightforward with an approach very similar to speculative computing [TY12, TYM14].

We take 2^k copies of a circuit computing f. In the i-th copy we fix the inputs in S to the binary representation of i. Now we use a hazard-free multiplexer to select one of these 2^k outputs, where the original input bits from S are used as the select bits. A hazard-free k-bit multiplexer of size O(2^k) can be derived from the 1-bit construction given in Figure 1(b).
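The base case can be illustrated with the multiplexer example from the introduction. In the sketch below (our own; the hazard-free one-bit multiplexer is realized abstractly as a hazard-free extension rather than as the concrete circuit of Figure 1(b)), the only position allowed to be unstable is the select bit, the two copies of the hazardous implementation fix this bit to 0 and 1, and a hazard-free multiplexer selects between them. The result is free of all hazards confined to that position, even though the copies themselves come from a hazardous circuit.

    from itertools import product

    U = 'u'

    def resolutions(x):
        return product(*[[0, 1] if v == U else [v] for v in x])

    def hf(g):                                   # hazard-free extension of a Boolean function
        def g_u(*x):
            vals = {g(*r) for r in resolutions(x)}
            return vals.pop() if len(vals) == 1 else U
        return g_u

    tAND, tOR, tNOT = hf(lambda a, b: a & b), hf(lambda a, b: a | b), hf(lambda a: 1 - a)

    def f(x0, x1, s):                            # the multiplexer function
        return x1 if s else x0

    def f_hazardous(x0, x1, s):                  # Figure 1(a), has a hazard at (1, 1, u)
        return tOR(tAND(x0, tNOT(s)), tAND(x1, s))

    mux1 = hf(lambda s, d0, d1: d1 if s else d0) # hazard-free 1-bit multiplexer

    def speculative(x0, x1, s):
        # copies of the hazardous circuit with the select bit fixed to 0 and to 1
        copy0, copy1 = f_hazardous(x0, x1, 0), f_hazardous(x0, x1, 1)
        return mux1(s, copy0, copy1)             # select with the original bit s

    # hazard-free for every input whose u's are confined to the select position
    for x0, x1 in product([0, 1], repeat=2):
        for s in [0, 1, U]:
            vals = {f(x0, x1, r) for r in ([0, 1] if s == U else [s])}
            expected = vals.pop() if len(vals) == 1 else U
            assert speculative(x0, x1, s) == expected
    print(speculative(1, 1, U))                  # 1, whereas f_hazardous(1, 1, u) == 'u'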

5.1 Lemma.

A k-bit multiplexer MUX_k receives inputs s ∈ B^k and x_0, …, x_{2^k − 1} ∈ B. It interprets s as an index from {0, …, 2^k − 1} and outputs x_s. There is a hazard-free circuit for MUX_k of size O(2^k) and depth O(k).

Proof.

A hazard-free MUX_1 of size 6 and depth 3 is given in Figure 1(b); its correctness is verified by a simple case analysis. From a hazard-free MUX_1 and the hazard-free MUX_{k−1} we construct a hazard-free circuit MUX_k as follows: