Characteristics of Reversible Circuits for Error Detection

12/03/2020 ∙ by Lukas Burgholzer, et al. ∙ Johannes Kepler University Linz

In this work, we consider error detection via simulation for reversible circuit architectures. We rigorously prove that reversibility augments the performance of this simple error detection protocol to a considerable degree. A single randomly generated input is guaranteed to unveil a single error with a probability that only depends on the size of the error, not the size of the circuit itself. Empirical studies confirm that this behavior typically extends to multiple errors as well. In conclusion, reversible circuits offer characteristics that reduce masking effects – a desirable feature that is in stark contrast to irreversible circuit architectures.

I Introduction

The detection of errors is a fundamental problem in electrical engineering and computer science. Given two circuits $C$ and $C'$ with $n$ inputs and $m$ outputs, the task is to decide whether they describe the same functionality on the logical level.

Many approaches exist that address this important and challenging problem. In this work, we focus on error detection protocols that only require simulation runs of the two circuits—as opposed to formal verification techniques which explicitly utilize structural knowledge about both circuits [dischCombinationalEquivalenceChecking2007, marques-silvaCombinationalEquivalenceChecking1999, molitorEquivalenceCheckingDigital2010, jhaEquivalenceCheckingUsing1997, clarkeModelChecking2018, biereSymbolicModelChecking1999]. This is a severe restriction, but simulations alone are—in principle—sufficient to solve this task. If the two circuits are equivalent, they have the same input-output behavior. Conversely, suppose that they are functionally distinct. Then, there exists at least one input string for which the two circuits produce distinct outputs. In formulas:

$\exists\, x \in \{0,1\}^n : \quad C(x) \neq C'(x). \qquad (1)$

Such an input successfully detects the discrepancy between $C$ and $C'$ and serves as a counterexample to the equivalence of both circuits.

The problem, however, is how to find counterexamples (1). If we only allow simulations of both circuits, i.e., we consider them as black boxes, we do not have actionable advice on how to choose promising input strings and we may as well generate inputs uniformly at random: $x \sim \mathrm{Unif}(\{0,1\}^n)$, i.e., we flip an unbiased coin for each input bit ($\Pr[x_i = 0] = \Pr[x_i = 1] = 1/2$, where $x = x_1 \cdots x_n$). Subsequently, we simulate both circuits with this input and check whether they produce the same output: $C(x) \overset{?}{=} C'(x)$. If the outputs are distinct, we have found a counterexample. The circuits cannot be equivalent. But if the outputs are the same, the test is inconclusive. In this case, we must repeat it with new (randomly generated) inputs until we either find a counterexample (non-equivalence) or have exhausted all possible inputs (equivalence). The latter, unfortunately, can be a very real possibility. The two circuits $C$ and $C'$ may differ on a single input only and it is extremely unlikely to quickly find this input by (random) chance.
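As a minimal illustration, the following Python sketch implements this black-box protocol. Function names and the trial budget are our own choices for exposition; any simulator mapping bit tuples to output tuples can be plugged in.

```python
import random

def random_input(n):
    """Sample an n-bit input uniformly at random: one fair coin flip per bit."""
    return tuple(random.randint(0, 1) for _ in range(n))

def find_counterexample(circuit_a, circuit_b, n, max_trials=10_000):
    """Black-box equivalence test via random simulation.

    Returns an input on which the two circuits disagree (a counterexample),
    or None if none was found -- an inconclusive outcome, not a proof of
    equivalence."""
    for _ in range(max_trials):
        x = random_input(n)
        if circuit_a(x) != circuit_b(x):
            return x  # provably non-equivalent
    return None
```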

To make matters worse, classical circuits can mask even “small” errors very effectively. For $n = 8$, this is illustrated in Fig. 1. A cascade of logical AND gates, realizing the functionality $f(x) = x_1 \wedge \cdots \wedge x_8$ (ideal circuit $C$), is affected by a single bit-flip error (erroneous implementation $C'$) in the second layer. It is easy to check that only 4 out of all $2^8 = 256$ input strings can detect this discrepancy.

Fig. 1: Error detection in classical circuits is hard: Suppose that a cascade of logical AND gates, realizing the Boolean function $f(x) = x_1 \wedge \cdots \wedge x_8$, is affected by a single bit-flip error (red) in the second layer. Only 4 out of the $2^8 = 256$ possible input strings can detect this error.
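This count is easy to reproduce by brute force. The placement of the flipped wire below is our reconstruction of Fig. 1 (a first-layer output feeding a second-layer gate); the point is the tiny detection fraction.

```python
from itertools import product

def ideal(x):
    """Cascade of AND gates computing x1 AND ... AND x8."""
    a, b = x[0] & x[1], x[2] & x[3]
    c, d = x[4] & x[5], x[6] & x[7]
    return (a & b) & (c & d)

def faulty(x):
    """Same cascade, with a bit-flip on one wire entering the second layer."""
    a, b = x[0] & x[1], x[2] & x[3]
    c, d = x[4] & x[5], x[6] & x[7]
    return ((a ^ 1) & b) & (c & d)  # flipped wire (red in Fig. 1)

detecting = [x for x in product((0, 1), repeat=8) if ideal(x) != faulty(x)]
print(len(detecting))  # 4 -- out of 2**8 = 256 possible inputs
```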

Masking is a serious issue for error detection using simulation techniques. No malicious intent is required to fool randomly generated inputs; the circuit may do it all by itself. Needless to say, this issue has been well-known for decades. Error detection based on random inputs (alone) often pales in comparison to other, more sophisticated techniques. Today’s state of the art is governed by constraint-based stimuli generation techniques [yuanConstraintbasedVerification2006, biereSATATPGBoolean2002, willeSMTbasedStimuliGeneration2009, kitchenStimulusGenerationConstrained2007, gentFastMultilevelTest2016], fuzzing [laeuferRFUZZCoveragedirectedFuzz2018], etc. But on the positive side, error detection using randomly chosen inputs is based on minimal assumptions, namely the possibility to simulate two circuits as black boxes. Moreover, it is intuitive, and individual simulation runs are easy and fast to execute.

II Summary of results: Error detection in reversible circuits

We have seen that, in general, simulation with (uniformly) random inputs is not a viable strategy for detecting errors in classical circuits. Already a single “small” error can be exceedingly difficult to detect (masking). Perhaps surprisingly, this dark picture lightens up considerably if we consider reversible implementations of logical functionalities. As the name suggests, reversible circuits are circuits whose action can be undone by running the circuit backwards. More formally, $n$-bit reversible circuits implement permutations on the set $\{0,1\}^n$ of all $n$-bit strings. This, in particular, implies that the number of input and output bits must be the same ($m = n$). Despite these restrictions, reversible circuits are universal, i.e., any logical function can be implemented by a reversible circuit [toffoliReversibleComputing1980] and efficient mapping techniques are readily available [zulehnermakeitreversible2017, maslovReversibleCascadesMinimal2004, zilicReversibleCircuitTechnology2007] (this implementation may require strictly more bits than the original function, though). Negation (NOT), reversible exclusive-or (CNOT) and the Toffoli gate (CCNOT) are examples of simple reversible functionalities. Viewed as a logic gate, CCNOT is also universal: every reversible circuit can be constructed from Toffoli gates alone [toffoliReversibleComputing1980].
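To make the model concrete, here is a small Python sketch of reversible gates as invertible maps on bit tuples. These helpers are our own scaffolding (not from the paper) and are reused in later examples.

```python
def NOT(i):
    """Bit-flip on line i, as an invertible map on bit tuples."""
    def gate(x):
        y = list(x)
        y[i] ^= 1
        return tuple(y)
    return gate

def MCX(controls, target):
    """Multi-controlled NOT: flips `target` iff all control lines read 1.
    CNOT is MCX([c], t); the Toffoli gate (CCNOT) is MCX([c1, c2], t)."""
    def gate(x):
        y = list(x)
        if all(x[c] for c in controls):
            y[target] ^= 1
        return tuple(y)
    return gate

def compose(*gates):
    """Apply gates from left to right; every such circuit is a permutation."""
    def circuit(x):
        for g in gates:
            x = g(x)
        return x
    return circuit
```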

Fig. 2: Illustration of main rigorous contributions: Simulations with uniformly random inputs completely expose any single error in a given reversible circuit. The two scenarios are exactly equivalent (“no masking”). In the lower scenario, the probability of correct distinction is governed by the size $k$ of the error, not the total number $n$ of lines.

To summarize, reversible circuits bear strong similarities to classical (irreversible) circuits, but there are some notable additional characteristics. Chief among them is reversibility itself, which implies that information cannot easily escape. Here, we show that this has profound implications for error detection with random inputs. More precisely,

  1. reversible circuits can never mask single errors (rigorous result, see Proposition 1)

  2. the probability of detecting a single error only depends on its size, not the total number of bits (unsurprising rigorous result, see Lemma 2)

  3. multiple errors are typically even easier to detect (empirical studies, see Fig. 3 and discussions in Section IV)

The first two insights are mathematical statements that address single errors only. They readily follow from reversibility and fundamental properties of uniformly random input strings. We refer to Section III for details and Fig. 2 for illustrative caricatures. When combined, they imply the following confidence bound for detecting single errors with random inputs.

Theorem 1.

Suppose that a general reversible circuit is affected by a single error of size $k$ and fix $\delta \in (0,1)$ (confidence). Then, at most $N = \lceil 2^{k-1} \ln(1/\delta) \rceil$ randomly selected inputs suffice to witness this error with probability (at least) $1 - \delta$.

For $k = 1$—a single bit-flip error (NOT) anywhere within the circuit—this statement actually becomes deterministic: already a single (random) input is guaranteed to detect this error with certainty. We emphasize that this statement is true irrespective of the number of lines $n$ and the circuit’s size. It is simply impossible to hide a single bit-flip inside a reversible circuit. Such a behavior is strikingly different from irreversible circuit architectures. There it can routinely happen that order $2^n$ random inputs are necessary to detect even a single bit-flip error, see e.g. Fig. 1.
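The resulting sample-size rule is a one-liner. We stress that the closed form $N = \lceil 2^{k-1}\ln(1/\delta) \rceil$ is our reading of the exponential bound in Theorem 2 below.

```python
import math

def inputs_for_confidence(k, delta):
    """Random inputs sufficient to expose a single size-k error with
    probability at least 1 - delta (cf. Theorems 1 and 2)."""
    if k == 1:
        return 1  # a lone bit-flip is detected by *every* input
    return math.ceil(2 ** (k - 1) * math.log(1 / delta))

print(inputs_for_confidence(1, 0.01))  # 1  (deterministic case)
print(inputs_for_confidence(3, 0.01))  # 19 random inputs for 99% confidence
```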

The multiple-error case is much more intricate, because error locations and circuit structure start to matter. This leads to drastically different best-case (independent errors) and worst-case (severe masking) behavior. To better understand the typical behavior of multiple errors, we resort to numerical simulations. These indicate a (close-to) best-case behavior: the probability of failing to detect a total of $L$ errors is exponentially suppressed in $L$, see Fig. 3. Additional simulation results and details are provided in Section IV.

Fig. 3: Typical accumulation effects for multiple errors (log-log plot): number $L$ of randomly injected errors ($x$-axis) vs. average number of random inputs required to detect erroneous behavior ($y$-axis) in a generic $n$-bit reversible circuit with a fixed number of gates. Different colors denote worst-case errors of increasing size $k$. Solid lines track the theoretical best-case behavior (independent errors, see Eq. (5) below). For small $L$, the plot highlights an excellent agreement between typical (diamonds) and best-case (solid lines) behavior.

Note that a similar line of thought has recently been presented for the domain of quantum computing (which bears many similarities to reversible circuits). More precisely, a verification scheme heavily based on simulation has been proposed in [burgholzerPowerSimulationEquivalence2020] and refined in [burgholzerRandomStimuliGeneration2021]. A similar theoretical result has been presented in [lindenLightweightDetectionSmall2020].

III Rigorous theory for single errors

III-A Reversible circuits and error model

We will work in the reversible circuit model for $n$ input bits (and $n$ output bits). A high level of mathematical abstraction already suffices to deduce powerful consequences. An $n$-bit reversible circuit $C$ implements a permutation of the set $\{0,1\}^n$ of all $n$-bit strings. Reversing the circuit, that is, running it backwards, produces the unique permutation $C^{-1}$ that undoes the original circuit: $C^{-1} \circ C = C \circ C^{-1} = \mathbb{I}$, where $\mathbb{I}(x) = x$ for all $x \in \{0,1\}^n$ is the identity permutation (“do nothing”). This defining feature suffices to deduce three elementary properties that will form the basis of our proof strategy.

Lemma 1 (Characteristics of reversible circuits).

Consider $n$-bit reversible circuits $C, C_1, C_1', C_2$ and an $n$-bit string $x \in \{0,1\}^n$. Then,

  1. output equivalence is unaffected by composition: $C_2 \circ C_1(x) = C_2 \circ C_1'(x)$ if and only if $C_1(x) = C_1'(x)$,

  2. invariance of the uniform distribution: $x \sim \mathrm{Unif}(\{0,1\}^n)$ implies $C(x) \sim \mathrm{Unif}(\{0,1\}^n)$,

  3. non-trivial action: suppose $C \neq \mathbb{I}$. Then, there are at least two bit strings $x \in \{0,1\}^n$ such that $C(x) \neq x$.

Proof.

All proofs utilize the fact that reversible circuits act like permutations on the set of all bit strings.

  1. Permutations are invertible transformations. As such, they preserve equivalence: $y = y'$ if and only if $C(y) = C(y')$ for any reversible circuit $C$. The claim follows from setting $C = C_2$, $y = C_1(x)$ and $y' = C_1'(x)$.

  2. The uniform distribution over $n$-bit strings assigns the same weight $2^{-n}$ to each of the $2^n$ bit strings. Permuting the bit strings cannot affect the weights and, by extension, the uniform distribution itself.

  3. The number of invariant bit strings ($C(x) = x$) is equal to the number of fixed points of the underlying permutation. A non-trivial permutation of $2^n$ elements can have at most $2^n - 2$ fixed points (transposition). ∎
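These three properties are easy to sanity-check numerically. The following sketch (our own illustration) treats a random permutation of $\{0,1\}^3$ as a stand-in reversible circuit:

```python
import random
from itertools import product

n = 3
strings = list(product((0, 1), repeat=n))

# An n-bit reversible circuit is nothing but a permutation of the 2**n strings.
perm = dict(zip(strings, random.sample(strings, len(strings))))

# (ii) A permutation merely relabels outcomes, so the image of the uniform
# distribution is again uniform: the multiset of outputs equals the inputs.
assert sorted(perm.values()) == sorted(strings)

# (iii) Any non-identity permutation must move at least two strings.
moved = [x for x in strings if perm[x] != x]
assert len(moved) == 0 or len(moved) >= 2
```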


Fig. 4: (Single) error model and compatible circuit decomposition: An ideal reversible circuit (blue) is corrupted by a single reversible error (red). The error location begets a decomposition of ideal and corrupted circuit into matching constituents: $C = C_2 \circ C_1$ (ideal) and $C' = C_2 \circ E \circ C_1$ (corrupted).

Different reversible circuits of compatible bit-size $n$ can be combined to yield another (larger) circuit: $C_2 \circ C_1(x) = C_2(C_1(x))$ for input $x \in \{0,1\}^n$ (“composition”). The reverse direction is also possible (“decomposition”) and, arguably, more interesting. Circuit diagrams provide a well-established tool that does precisely that: they decompose a possibly complicated circuit into a structured sequence of simpler building blocks. We use circuit decomposition on a rather high level to reason about single errors in reversible circuits. Suppose that an $n$-bit reversible circuit $C$ is affected by a reversible error $E$ that produces a functionally different circuit $C'$. Then, the location of this error within the circuit suggests a compatible decomposition into three parts:

  1. $C_1$ describes the original functionality up to the location where the error occurs (“past”),

  2. $E$ captures the error as an additional circuit layer on all $n$ bits (“present”),

  3. $C_2$ describes the original functionality from the error location onwards (“future”).

In summary,

$C = C_2 \circ C_1 \quad \text{and} \quad C' = C_2 \circ E \circ C_1, \qquad (2)$

and we refer to Fig. 4 for a visual illustration.
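The decomposition of Eq. (2) can be written down directly in the toy model from Section II. The sketch below reuses the hypothetical NOT/MCX/compose helpers introduced there; the concrete gates are arbitrary choices for illustration.

```python
# `compose` applies gates left to right, so compose(C1, E, C2) realizes
# the mathematical composition C2 . E . C1 from Eq. (2).
C1 = compose(MCX([0], 1), MCX([1, 2], 3))  # "past"
E  = MCX([0, 1], 2)                        # injected Toffoli error ("present")
C2 = compose(MCX([3], 0), NOT(1))          # "future"

ideal_circuit     = compose(C1, C2)        # C  = C2 . C1
corrupted_circuit = compose(C1, E, C2)     # C' = C2 . E . C1
```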

III-B No masking for random inputs

We now have all building blocks in place to present and derive the main conceptual result of this work. It addresses the probability of detecting single errors in arbitrary reversible circuits (2) based on a single random input $x \sim \mathrm{Unif}(\{0,1\}^n)$.

Proposition 1 (no masking).

Fix $C = C_2 \circ C_1$ (ideal circuit) and $E \neq \mathbb{I}$ (single, reversible error). Then, the probability of detecting this discrepancy with a random input only depends on the error $E$, not the actual circuit. More precisely,

$\Pr_x\left[C(x) \neq C'(x)\right] = \Pr_x\left[E(x) \neq x\right],$

where the probability is taken with respect to the uniform distribution over all $2^n$ possible input strings.

Proof.

This statement is an immediate consequence of two elementary characteristics of reversible circuit architectures. Apply Lemma 1 (i) to remove the effect of $C_2$,

$\Pr_x\left[C(x) \neq C'(x)\right] = \Pr_x\left[C_2 \circ C_1(x) \neq C_2 \circ E \circ C_1(x)\right] = \Pr_x\left[C_1(x) \neq E \circ C_1(x)\right],$

and note that, according to Lemma 1 (ii), $x \sim \mathrm{Unif}(\{0,1\}^n)$ implies $C_1(x) \sim \mathrm{Unif}(\{0,1\}^n)$. ∎

Although simple to prove, Proposition 1 pinpoints remarkable differences between reversible and irreversible circuits. As illustrated in Fig. 2, the former cannot hide errors from randomly sampled inputs (“no masking”).

We emphasize that a uniformly random selection of input strings is crucial to arrive at such a powerful conclusion. Reversibility alone is enough to ignore the final portion $C_2$ of the circuit (after the error has occurred): reversible circuits always map (non-)equal bit strings to (non-)equal bit strings. In contrast, the first portion $C_1$ of the circuit (before the error has occurred) can affect concrete inputs $x$. But if $x$ is sampled uniformly at random, then $C_1(x)$ will be a different, but still uniformly random, bit string. The uniform distribution is special in the sense that it is invariant under reversible transformations. The circuit may affect every concrete input, but it does not affect the underlying distribution.
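Proposition 1 is easy to probe empirically. Continuing the toy decomposition from above (reusing E, ideal_circuit and corrupted_circuit), a Monte Carlo estimate of both sides should agree up to sampling noise:

```python
import random

def estimate(event, n, trials=200_000):
    """Monte Carlo estimate of Pr[event(x)] for uniformly random x."""
    return sum(event(tuple(random.randint(0, 1) for _ in range(n)))
               for _ in range(trials)) / trials

n = 4  # the toy decomposition above acts on 4 lines
p_circuits = estimate(lambda x: ideal_circuit(x) != corrupted_circuit(x), n)
p_error    = estimate(lambda x: E(x) != x, n)
print(p_circuits, p_error)  # both concentrate around 1/4 = 2**(1-k) for k = 3
```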

III-C Only error size matters

We have seen that uniformly random inputs can uncover single errors in a general reversible circuit. According to Proposition 1, the probability of witnessing a discrepancy only depends on the error, not the underlying circuit structure.

We say that an error has size $k$ if it only affects $k$ bits in a non-trivial fashion; the remaining $n - k$ bits are not touched at all. We refer to Fig. 2 for a visual illustration of this summary parameter. Intuitively, we would expect that “large” errors are easier to detect than “small” ones and that the number of lines $n$ plays an active role. However, the following simple statement shows that the worst-case probability of detecting an error is exponentially suppressed with respect to the error size $k$, but is independent of the actual number of bits $n$.

Lemma 2 (only error size matters).

Suppose that $E \neq \mathbb{I}$ is a non-trivial error that only affects $k$ bits in a non-trivial fashion and $x \in \{0,1\}^n$ is sampled from the uniform distribution. Then,

$\Pr_x\left[E(x) \neq x\right] \geq 2/2^k = 2^{1-k}.$

Proof.

Suppose, without loss of generality, that the error only affects the $k$ least-significant bits, i.e., $E = \mathbb{I} \times \tilde{E}$, where $\tilde{E}$ acts on $\{0,1\}^k$ and $\mathbb{I}$ acts on the remaining $n-k$ bits. Since $E$ is reversible, its restriction $\tilde{E}$ to the relevant bits must also be reversible. Moreover, $\tilde{E} \neq \mathbb{I}$, because $E$ is non-trivial. Lemma 1 (iii) then implies that there must be at least 2 bit strings $\tilde{x} \in \{0,1\}^k$ that are affected by $\tilde{E}$. Finally, we use the fact that $x \sim \mathrm{Unif}(\{0,1\}^n)$ implies that the $k$ least-significant bits are also distributed uniformly: $\tilde{x} \sim \mathrm{Unif}(\{0,1\}^k)$. Therefore,

$\Pr_x\left[E(x) \neq x\right] = \Pr_{\tilde{x}}\left[\tilde{E}(\tilde{x}) \neq \tilde{x}\right] \geq 2/2^k = 2^{1-k}.$ ∎

This probability bound is actually sharp. Worst-case errors of size $k$ permute exactly 2 out of the $2^k$ possible $k$-bit inputs on which they act. Concrete examples of such a behavior are NOT ($k = 1$), CNOT ($k = 2$), CCNOT ($k = 3$) and, more generally, a $(k-1)$-fold controlled NOT gate on $k$ bits (general $k$). The numerical simulations shown in Fig. 3 are based on injecting such worst-case errors at random circuit locations.
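A brute-force enumeration confirms that these gates saturate the bound of Lemma 2. The sketch reuses the MCX helper from Section II:

```python
from itertools import product

def detection_probability(error, k):
    """Exact Pr[E(x) != x] for an error acting on k bits (brute force)."""
    moved = sum(1 for x in product((0, 1), repeat=k) if error(x) != x)
    return moved / 2 ** k

for k in (1, 2, 3, 4):
    worst = MCX(list(range(k - 1)), k - 1)  # (k-1)-fold controlled NOT
    print(k, detection_probability(worst, k))  # equals 2 ** (1 - k)
```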

III-D General confidence bound for detecting single errors

We now have all necessary ingredients to establish a rigorous performance guarantee for reversible error detection with (uniformly) random inputs. The following statement bounds the number of uniformly random inputs that may be required to detect a single error of size $k$.

Theorem 2.

Fix $C$ (ideal circuit) and $E$ (single error) of size $k$. Suppose that $x_1, \ldots, x_N \in \{0,1\}^n$ are (independent) uniformly random inputs. Then,

$\Pr\left[C(x_i) = C'(x_i) \text{ for all } 1 \leq i \leq N\right] \leq \left(1 - 2^{1-k}\right)^N \leq \exp\left(-2^{1-k} N\right).$

In words, the probability of failing to detect a single error is exponentially suppressed in the number $N$ of random test inputs.

Theorem 1 above is a streamlined consequence of this observation: setting $N = \lceil 2^{k-1} \ln(1/\delta) \rceil$ provides a concrete number of repetitions that ensures that we detect the discrepancy with probability (at least) $1 - \delta$.

Proof of Theorem 2.

For $N = 1$ (one random input), the claim readily follows from combining Proposition 1 and Lemma 2 (more precisely, their contrapositions):

$\Pr_x\left[C(x) = C'(x)\right] = 1 - \Pr_x\left[E(x) \neq x\right] \leq 1 - 2^{1-k}.$

This bound readily extends to the general $N$-case by using the assumption that the individual input strings are all sampled independently. Joint probabilities of independent events factorize and we conclude

$\Pr\left[C(x_i) = C'(x_i) \text{ for all } i\right] = \prod_{i=1}^{N} \Pr\left[C(x_i) = C'(x_i)\right] \leq \left(1 - 2^{1-k}\right)^N. \qquad (3)$

Apply $1 - t \leq \exp(-t)$, valid for all $t \in \mathbb{R}$ (convexity of the exponential function), to complete the argument. ∎

The bound provided in Theorem 2 is simple, but not sharp (the inequality $1 - t \leq \exp(-t)$ is never tight for $t > 0$). As such, it always under-estimates the actual confidence level. This discrepancy is most pronounced for small error sizes $k$. The extreme case is a single NOT error ($k = 1$). For $k = 1$, the bound in Eq. (3) becomes (exactly) zero. By contraposition, every possible input bit string is guaranteed to detect a single bit-flip error that is hidden anywhere within the circuit.

IV Empirical analysis for multiple errors

In the previous section, we have established strong theoretical support for detecting single errors. At its heart has been the decomposition illustrated in Fig. 4. Reversibility and uniformly random inputs have subsequently allowed us to discuss away the circuit portions $C_1$ and $C_2$ completely. In turn, we were able to focus exclusively on the error $E$ itself.

Fig. 5: Partial simplification for multiple errors: Simulation with uniformly random inputs exposes multiple errors only partially. Everything before the first error ($C_1$) and after the last error ($C_3$) can be safely ignored, but the part in between ($C_2$) does matter. Different circuit structures can lead to strikingly different error detection probabilities.

For more than one error, this is in general no longer an option. While we can safely ignore circuit contributions before the first and after the last error, the circuit part in between cannot be ignored, see Fig. 5. The relation between errors and intermediate circuit parts governs how likely it is to witness the overall error.

In this section, we analyze error accumulation effects in generic reversible circuits. To obtain guiding intuition, we will first isolate and discuss the two extreme cases. Independent errors (best case, see Section IV-A) and maximal masking (worst case, see Section IV-B) turn out to behave in a radically different fashion. Subsequent numerical studies demonstrate that typical error accumulation effects closely follow the best-case trajectory: Multiple errors are typically much easier to detect than a single error.

IV-A Best-case behavior: Commuting and independent errors

Fig. 6: Best-case scenario for two errors: One of the errors, say $E_1$, commutes with the relevant circuit part $C_2$. Reordering allows us to treat the two errors as a single effective error $E = E_2 \circ E_1$. In addition, $E_1$ and $E_2$ affect disjoint bit collections (independence) and $\Pr_x[E(x) = x]$ factorizes nicely into two disjoint components: $\Pr_x[E(x) = x] = \Pr_x[E_1(x) = x] \cdot \Pr_x[E_2(x) = x]$ (quadratic improvement).

Let us first discuss two errors $E_1, E_2$ of size $k$ each. An extension to multiple errors ($L > 2$) and different sizes will be straightforward. Fig. 5 provides valuable guidance for potential best-case behavior. Suppose that one of the errors, say $E_1$, can be pulled through the central circuit part $C_2$ without affecting it: $C_2 \circ E_1 = E_1 \circ C_2$. If circuit and error commute in such a fashion, we can group both errors into a single layer and have effectively reduced the problem to the single-error case which we already understand:

$C' = C_3 \circ E_2 \circ C_2 \circ E_1 \circ C_1 = C_3 \circ (E_2 \circ E_1) \circ C_2 \circ C_1 = C_3 \circ E \circ (C_2 \circ C_1) \quad \text{with } E = E_2 \circ E_1.$

The only remaining question is: what is the probability of failing to detect the cumulative error $E = E_2 \circ E_1$ with a single random input? This failure probability is smallest if the two errors are independent in the sense that they act on disjoint sets of $k$ bits each. A uniformly random input then ensures that the failure probability factorizes:

$\Pr_x\left[E(x) = x\right] = \Pr_x\left[E_1(x) = x\right] \cdot \Pr_x\left[E_2(x) = x\right] \leq \left(1 - 2^{1-k}\right)^2.$

This argument readily extends to multiple errors ($L \geq 2$). Taking the complement ensures

$\Pr_x\left[C(x) \neq C'(x)\right] = 1 - \prod_{l=1}^{L} \Pr_x\left[E_l(x) = x\right] \geq 1 - \left(1 - 2^{1-k}\right)^L, \qquad (4)$

provided that all errors commute with the circuit (first equality) and act on disjoint subsets of bits (second inequality). Rel. (4) highlights that the probability of (best case) error detection increases substantially with the number of errors $L$. Intuitively, this makes sense: more errors should be easier to detect. This insight has implications for the number of random inputs that are required to detect best-case errors of size $k$ each. To pinpoint them, it is instructive to view a single simulation run as a biased coin toss: we detect a discrepancy with probability $p = 1 - (1 - 2^{1-k})^L$ (“heads”) and fail to detect it with probability $1 - p$ (“tails”). When attempting to detect a discrepancy, we input new randomly generated inputs until we find a mismatch. This is equivalent to tossing the biased coin until “heads” appears. The expected number of required coin tosses to achieve this goal is $\mathbb{E}[N] = 1/p$ (geometric distribution). Together with Rel. (4), this analogy allows us to conclude that we expect to require

$\mathbb{E}[N] = \left(1 - \left(1 - 2^{1-k}\right)^L\right)^{-1} \qquad \text{(best case)} \qquad (5)$

random inputs to detect $L$ commuting and independent errors of size $k$ each. This bound is sharp: it holds with equality if each of the $L$ errors is a worst-case error of size $k$, e.g. a $(k-1)$-fold controlled NOT gate.

We conclude this section with a simplified interpretation of Rel. (5). For small $L$ (in comparison to $2^{k-1}$), the claim is comparable to $\mathbb{E}[N] \approx 2^{k-1}/L$, which can also be observed in Fig. 3: the slopes of the solid lines match this estimate rather well whenever the number of errors $L$ is small compared to $2^{k-1}$. Under best-case assumptions, detecting $L$ errors of size $k$ is $L$ times easier than detecting a single error of the same size.
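Equation (5) and its small-$L$ approximation are one-liners to evaluate; a minimal sketch:

```python
def expected_inputs(L, k):
    """Expected number of random inputs under the best-case model of Eq. (5):
    L commuting, independent worst-case errors of size k each."""
    p = 1 - (1 - 2 ** (1 - k)) ** L  # per-input detection probability, Eq. (4)
    return 1 / p                     # mean of a geometric distribution

print(expected_inputs(1, 5))  # 16.0 = 2**(k-1) for a single error
print(expected_inputs(4, 5))  # ~4.40, close to the estimate 2**(k-1)/L = 4
```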

IV-B Worst-case: anti-commuting errors and masking

Fig. 7: Worst-case scenario for two errors: Two bit-flip errors ($k = 1$) affect one control line of an $(n-1)$-fold controlled NOT gate. These errors do not commute with the relevant circuit part $C_2$. Quite the opposite: two errors of size $k = 1$ produce an effective error of size $n - 1$. To make matters even worse, such a multi-controlled NOT error is extremely difficult to detect: $\Pr_x[C(x) \neq C'(x)] = 2^{2-n}$ (masking).

We expect that worst-case error accumulation should occur when errors and relevant circuit portion do not commute at all (“anti-commutation”). If this is the case, the probability of detecting errors can become exponentially small in the total number of bits. We illustrate this by means of an example that is depicted in Fig. 7: $E_1$ and $E_2$ are bit-flip errors ($k = 1$) that affect the first bit, while $C_2$ is an $(n-1)$-fold controlled NOT gate. It is easy to check that

$E_2 \circ C_2 \circ E_1 = C_2 \circ \bar{C},$

where $\bar{C}$ is an $(n-2)$-fold controlled NOT gate that acts on all bits, except the very first one ($k = n - 1$). This is a single worst-case error of almost maximal size. Proposition 1 and Lemma 2 assert

$\Pr_x\left[C(x) \neq C'(x)\right] = \Pr_x\left[\bar{C}(x) \neq x\right] = 2/2^{n-1} = 2^{2-n}.$

This success probability is exponentially small in the total number of bits and we expect to require a total of

$\mathbb{E}[N] = 2^{n-2} \qquad \text{(worst case)} \qquad (6)$

random inputs in order to detect the discrepancy. Even worse error accumulation effects can occur for more errors ($L > 2$) and/or larger error sizes ($k > 1$). But already Rel. (6) is almost as bad as it can be. It is only a factor of two away from $2^{n-1}$—the absolute worst case for distinguishing any pair of reversible circuits, see Lemma 1 (iii).
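For small $n$, the identity behind this worst-case example can be verified exhaustively. The sketch below (reusing the helpers from Section II) checks our reading of the accumulation identity $E_2 \circ C_2 \circ E_1 = C_2 \circ \bar{C}$ for $n = 4$:

```python
from itertools import product

n = 4
C2   = MCX(list(range(n - 1)), n - 1)     # (n-1)-fold controlled NOT
X0   = NOT(0)                             # bit-flip on the first (control) line
Cbar = MCX(list(range(1, n - 1)), n - 1)  # controlled NOT avoiding line 0

lhs = compose(X0, C2, X0)  # apply E1, then C2, then E2 (i.e. E2 . C2 . E1)
rhs = compose(Cbar, C2)    # apply Cbar, then C2        (i.e. C2 . Cbar)

assert all(lhs(x) == rhs(x) for x in product((0, 1), repeat=n))
```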

IV-C Empirical studies

The multiple-error case is intricate by comparison, because the interplay between error (locations) and underlying circuit geometry starts to matter. We have seen that this leads to strikingly different best- (commuting errors, Sub. IV-A) and worst-case (anti-commuting errors, Sub. IV-B) behavior. Concrete problem instances fall into the wide range between these extreme cases. In this section, we employ numerics to delineate typical behavior.

We study the effect of size-$k$ errors in reversible circuits with $n$ lines. For a given number of lines $n$, we construct random reversible circuits composed of arbitrary multi-controlled NOT gates. When injecting errors of size $k$, we always consider $(k-1)$-fold controlled NOT gates, which represent the worst-case behavior discussed in Section III-C. Without loss of generality, we assume that these errors are geometrically local, i.e., they only affect neighbouring lines. All experiments were repeated with different random seeds in order to ensure statistically meaningful results.
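The exact circuit sizes, gate counts and repetition numbers are not reproduced here; the following sketch merely fixes the sampling model described above (random multi-controlled NOT circuits, geometrically local worst-case errors), again reusing the MCX helper from Section II.

```python
import random

def random_reversible_circuit(n, num_gates):
    """Random circuit of arbitrary multi-controlled NOT gates on n lines."""
    gates = []
    for _ in range(num_gates):
        # first sampled line is the target, the remaining lines act as controls
        lines = random.sample(range(n), random.randint(1, n))
        gates.append(MCX(lines[1:], lines[0]))
    return gates

def inject_worst_case_error(gates, n, k):
    """Insert a (k-1)-fold controlled NOT on k neighbouring lines at a
    random position within the gate list (geometrically local error)."""
    first = random.randrange(n - k + 1)
    error = MCX(list(range(first, first + k - 1)), first + k - 1)
    pos = random.randrange(len(gates) + 1)
    return gates[:pos] + [error] + gates[pos:]
```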

Fig. 8: Confirmation of theoretical results: Scatter plot of required simulations ($y$-axis) for detecting a single error of size $k$ in circuits with two different numbers of lines $n$ (left and right plot). Different colors denote varying values of $k$. This experimentally confirms that the distribution of required simulations does not depend on the number of lines and that the number of required simulations grows exponentially with the size of the error.

First and foremost, we confirm the central aspects of the theory developed in Sec. III. To this end, we consider the injection of a single error of size $k$ and count the number of simulations required to detect it. The results are depicted in Fig. 8. In contrast to classical intuition, the probability of detecting a single error of size $k$ is (1) completely independent of the circuit under consideration, and (2) diminishes exponentially in the error size $k$, i.e., smaller errors are easier to detect than larger ones. This is in excellent agreement with Theorem 2. On average, the required simulations follow the predicted trajectory with no apparent deviation. Additionally, the distribution of results is the same when simulating the full circuits $C$ and $C'$ as when simulating only the error $E$ itself.

The next set of numerical experiments pilots us into more interesting territory, namely the multiple-error case. We have already teased the results in the introduction and summarized them in Fig. 3. The averaged number of required inputs highlights an excellent agreement between the observed behavior and the best-case scenario discussed in Section IV-A. The deviation from this optimum for higher numbers of errors can be explained by accumulation effects of errors not acting independently (see Section IV-B).

Fig. 9: Comparison of worst-case and average-case errors: performed simulations ($x$-axis) vs. cumulative distribution function (cdf) for detecting errors of size $k$ ($y$-axis). The red curve corresponds to injecting worst-case errors, while the blue curve delineates the cdf for detecting randomly generated errors of the same size. This shows that average-case errors require far fewer simulations than worst-case ones.

Last but not least, we emphasize that—up to this point—theoretical and empirical results have been contingent on a worst-case assumption: each injected size-$k$ error is a $(k-1)$-fold controlled NOT gate. In a final series of evaluations, we analyzed the success probability after conducting a certain number of simulations when choosing errors at random. More precisely, each size-$k$ error is a randomly selected gate sequence with the additional constraint that none of the $k$ relevant lines remains unaffected (such a scenario would produce an error of size (at most) $k - 1$). We expect that this error model captures typical behavior in a more accurate fashion. The results are shown in Fig. 9 and highlight a considerable discrepancy between random (blue) and worst-case (red) errors. This is not at all surprising: random errors of size $k$ tend to factorize into several independent contributions, and the probabilities of detecting them with random inputs factorize accordingly, see Sub. IV-A. Such factorizations lead to an increased error detection probability within (very) few simulation runs.

V Conclusion

In this work, we have shown the impact of the reversible circuit paradigm on the probability of detecting errors in circuits. Our rigorous analysis shows that, as opposed to classical/irreversible circuits, reversible circuits can never mask single errors and that the probability of detecting a single error only depends on the error’s size, not at all on the surrounding circuit. Empirical evaluations have shown that, in the case of multiple errors, the detection probability is very close to the theoretical best case. Finally, we have observed that, once the assumption of worst-case errors is dropped, the probability of detecting these errors increases even further.

Acknowledgments

The authors want to thank J. Küng for inspiring discussions throughout the early stages of this project.

This work has partially been supported by the LIT Secure and Correct Systems Lab funded by the State of Upper Austria as well as by BMK, BMDW, and the State of Upper Austria in the frame of the COMET Programme managed by FFG.