Certifying Certainty and Uncertainty in Approximate Membership Query Structures – Extended Version

04/28/2020 ∙ by Kiran Gopinathan, et al. ∙ National University of Singapore

Approximate Membership Query structures (AMQs) rely on randomisation for time- and space-efficiency, while introducing a possibility of false positive and false negative answers. Correctness proofs of such structures involve subtle reasoning about bounds on probabilities of getting certain outcomes. Because of these subtleties, a number of unsound arguments in such proofs have been made over the years. In this work, we address the challenge of building rigorous and reusable computer-assisted proofs about probabilistic specifications of AMQs. We describe the framework for systematic decomposition of AMQs and their properties into a series of interfaces and reusable components. We implement our framework as a library in the Coq proof assistant and showcase it by encoding in it a number of non-trivial AMQs, such as Bloom filters, counting filters, quotient filters and blocked constructions, and mechanising the proofs of their probabilistic specifications. We demonstrate how AMQs encoded in our framework guarantee the absence of false negatives by construction. We also show how the proofs about probabilities of false positives for complex AMQs can be obtained by means of verified reduction to the implementations of their simpler counterparts. Finally, we provide a library of domain-specific theorems and tactics that allow a high degree of automation in probabilistic proofs.


1 Introduction

Approximate Membership Query structures (AMQs) are probabilistic data structures that compactly implement (multi-)sets via hashing. They are a popular alternative to traditional collections in algorithms whose utility is not affected by some fraction of wrong answers to membership queries. Typical examples of such data structures are Bloom filters [Bloom:1970:STH:362686.362692], quotient filters [BenderFJKKMMSSZ12, PaghPR05], and count-min sketches [CormodeM05]. In particular, versions of Bloom filters find many applications in security and privacy [ErlingssonPK14, GerbetKL15, NaorY19], static program analysis [NasreRGK09], databases [cassandra-bloom], web search [GoodwinHLCCEH17], suggestion systems [medium-bloom], and blockchain protocols [eth-bloom, GervaisCKG14].

Hashing-based AMQs achieve efficiency by means of losing precision when answering queries about membership of certain elements. Luckily, most of the applications listed above can tolerate some loss of precision. For instance, a static points-to analysis may consider two memory locations as aliases even if they are not (a false positive), still remaining sound. However, it would be unsound for such an analysis to claim that two locations do not alias in the case they do (a false negative). Even if it increases the number of false positives, a randomised data structure can be used to answer aliasing queries in a sound way—as long as it does not have false negatives [NasreRGK09]. But how much precision would be lost if, e.g., a Bloom filter with certain parameters is chosen to answer these queries? Another example, in which quantitative properties of false positives are critical, is the security of Bitcoin’s Nakamoto consensus [Nakamoto:08] that depends on the counts of block production per unit time [GervaisCKG14].

In light of the applications described above, two kinds of properties specifying the behaviour of AMQs are of particular interest:

  • No-False-Negatives properties, stating that a set-membership query for an element $x$ always returns true if $x$ is, in fact, in the set represented by the AMQ.

  • Properties quantifying the rate of False Positives by providing a probabilistic bound on getting a wrong “yes”-answer to a membership query, given certain parameters of the data structure and the past history of its usage.

Given the importance of such claims for practical applications, it is desirable to have machine-checked formal proofs of their validity. And, since many of the existing AMQs share a common design structure, one may expect that a large portion of those validity proofs can be reused across different implementations.

Computer-assisted reasoning about the absence of false negatives in a particular AMQ (the Bloom filter) has been addressed to some extent in the past [BlotDL16]. However, to the best of our knowledge, mechanised proofs of probabilistic bounds on the rates of false positives have never been developed for such structures. Furthermore, no other existing AMQs have been formally verified to date, and no attempts were made towards characterising the commonalities in their implementations in order to allow efficient proof reuse.

In this work, we aim to advance the state of the art in machine-checked proofs of probabilistic theorems about false positives in randomised hash-based data structures. As recent history demonstrates, when done in a “paper-and-pencil” way, such proofs may contain subtle mistakes [Bose2008Oct, christensen2010new] due to misinterpreted assumptions about relations between certain kinds of events. These mistakes are not surprising, as the proofs often need to perform a number of complicated manipulations with expressions that capture probabilities of certain events. Our goal is to factor out these reasoning patterns into a standalone library of reusable program- and specification-level definitions and theorems, implemented in a proof assistant, enabling computer-aided verification of a variety of AMQs.

Our contributions. 

The key novel observation we make in this work is the decomposition of the common AMQ implementations into the following components: (a) a hashing strategy and (b) a state component that operates over hash outcomes, together capturing most AMQs that provide fixed constant-time insertion and query operations. Any AMQ that is implemented as an instance of those components enjoys the no-false-negatives property by construction. Furthermore, such a decomposition streamlines the proofs of structure-specific bounds on false positive rates, while allowing for proof reuse for complex AMQ implementations, which are built on top of simpler AMQs [PutzeSS09]. Powered by those insights, this work makes the following technical contributions:

  • A Coq-based mechanised framework Ceramist, specialised for reasoning about AMQs (Ceramist stands for Certified Approximate Membership Structures). Implemented as a Coq library, it provides a systematic decomposition of AMQs and their properties in terms of Coq modules and uses these interfaces to derive certain properties “for free”, as well as supporting proof-by-reduction arguments between classes of similar AMQs.

  • A library of non-trivial theorems for expressing closed-form probabilities on false positive rates in AMQs. In particular, we provide the first mechanised proof of the closed form for Stirling numbers of the second kind [GKP1994, Chapter 6].

  • A collection of proven facts and tactics for effective construction of proofs of probabilistic properties. Our approach adopts the style of Ssreflect reasoning [Gonthier-al:TR, Maboubi-Tassi:MathComp], and expresses its core lemmas in terms of rewrites and evaluation.

  • A number of case study AMQs mechanised via Ceramist: ordinary [Bloom:1970:STH:362686.362692] and counting [TarkomaRL12] Bloom filters, quotient filters [BenderFJKKMMSSZ12, PaghPR05], and Blocked AMQs [PutzeSS09].

For ordinary Bloom filters, we provide the first mechanised proof that the probability of a false positive in a Bloom filter can be written as a closed-form expression in terms of the input parameters; a bound that has often been mis-characterised in the past due to oversight of subtle dependencies between the components of the structure [Bloom:1970:STH:362686.362692, mitzenmacher2005]. For Counting Bloom filters, we provide the first mechanised proofs of several of their properties: the absence of false negatives, the false positive rate, the fact that an element can be removed without affecting queries for other elements, and the fact that Counting Bloom filters preserve the number of inserted elements irrespective of the randomness of the hash outputs. For quotient filters, we provide a mechanised proof of the false positive rate and of the absence of false negatives. Finally, alongside the standard Blocked Bloom filter [PutzeSS09], we derive two novel AMQ data structures: Counting Blocked Bloom filters and Blocked Quotient filters, and prove the corresponding no-false-negatives and false-positive-rate results for all of them. Our case studies illustrate that Ceramist can be repurposed to verify new hash-based AMQ structures, including ones that have not been described in the literature, but rather have been obtained by composing existing AMQs via the “blocked” construction.

Our mechanised development [ceramist] is entirely axiom-free, and is compatible with Coq 8.11.0 [Coq-manual] and MathComp 1.10 [Maboubi-Tassi:MathComp]. It relies on the infotheo library [AffeldtHS14] for encoding discrete probabilities.

Paper outline. 

We start by providing the intuition on Bloom filters, our main motivating example, in Sec. 2. We proceed by explaining the encoding of their semantics, auxiliary hash-based structures, and key properties in Coq in Sec. 3. Sec. 4 generalises that encoding to a general AMQ interface, and provides an overview of Ceramist, its embedding into Coq, showcasing it by another example instance—Counting Bloom filters. Sec. 5 describes the specific techniques that help to structure our mechanised proofs. In Sec. 6, we report on the evaluation of Ceramist on various case studies, explaining in detail our compositional treatment of blocked AMQs and their properties. Sec. 7 provides a discussion on the state of the art in reasoning about probabilistic data structures.

2 Motivating Example

Ceramist is a library specialised for reasoning about AMQ data structures in which the underlying randomness arises from the interaction of one or more hashing operations. To motivate this development, we thus consider applying it to the classical example of such an algorithm—a Bloom filter [Bloom:1970:STH:362686.362692].

2.1 The Basics of Bloom Filters

Bloom filters are probabilistic data structures that provide compact encodings of mathematical sets, trading increased space efficiency for a weaker membership test [Bloom:1970:STH:362686.362692]. Specifically, when testing membership for a value not in the Bloom filter, there is a possibility that the query may be answered as positive. Thus a property of direct practical importance is the exact probability of this event, and how it is influenced by the other parameters of the implementation.

A Bloom filter $\mathit{bf}$ is implemented as a binary vector of $m$ bits (all initially zeros), paired with a sequence of $k$ hash functions $f_1, \ldots, f_k$, collectively mapping each input value to a vector of $k$ indices from $\{1, \ldots, m\}$; these indices determine the bits set to true in the $m$-bit array. Assuming an ideal selection of hash functions, we can treat the output of $f_1, \ldots, f_k$ on new values as a uniformly-drawn random vector. To insert a value $x$ into the Bloom filter, we treat each element of the “hash vector” produced from $x$ as an index into $\mathit{bf}$ and set the corresponding bits to ones. Similarly, to test membership for an element $y$, we check that all bits specified by the hash vector of $y$ are raised.
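For intuition, writing $\mathit{inds}(x)$ for the set of indices produced by hashing $x$ (a notation of ours, used informally here), the two operations can be summarised as:

$\mathsf{insert}(\mathit{bf}, x):\ \mathit{bf}[i] := \mathsf{true} \text{ for each } i \in \mathit{inds}(x) \qquad\qquad \mathsf{query}(\mathit{bf}, y) = \bigwedge_{i \in \mathit{inds}(y)} \mathit{bf}[i]$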

2.2 Properties of Bloom Filters

Given this model, there are two obvious properties of practical importance: that of false positives and of false negatives.

False Negatives.

It turns out that these definitions are sufficient to guarantee the lack of false-negatives with complete certainty, i.e., irrespective of the random outcome of the hash functions. This follows from the fact that once a bit is raised, there are no permitted operations that will unset it.

Theorem 2.1 (No False Negatives)

If $x \in \mathit{bf}$, then $\Pr\left[x \in_? \mathit{bf}\right] = 1$, where $\in_?$ stands for the approximate membership test, while the relation $\in$ means that $x$ has been previously inserted into $\mathit{bf}$.

False Positives.

This property is more complex: the occurrence of a false positive is entirely dependent on the particular outcomes of the hash functions, and one needs to consider situations in which the hash functions happen to map some values to overlapping sets of indices. That is, after inserting a series of values $x_1, \ldots, x_n$, a subsequent query for a different value $y$ might incorrectly return true.

This leads to subtle dependencies that can invalidate the analysis, and has led to a number of incorrect probabilistic bounds on the event, including in the analysis by Bloom in his original paper [Bloom:1970:STH:362686.362692]. Specifically, Bloom first considered the probability that inserting $n$ distinct items into the Bloom filter will set a particular bit $i$. From the independence of the hash functions, he was able to show that the probability of this event has a simple closed-form representation:

Lemma 1 (Probability of a single bit being set)

If the only values previously inserted into $\mathit{bf}$ are $x_1, \ldots, x_n$, then the probability of a particular bit at position $i$ being set is

$1 - \left(1 - \frac{1}{m}\right)^{kn}$

Bloom then claimed that the probability of a false positive was simply the probability of a single bit being set, raised to the power of $k$, reasoning that a false positive for an element only occurs when all $k$ bits corresponding to its hash outputs are set.

Unfortunately, as was later pointed out by Bose et al. [Bose2008Oct], since the bits specified by the hash outputs may overlap, we cannot guarantee the independence that is required for any such simple relation between the probabilities. Bose et al. rectified the analysis by instead interpreting the bits within a Bloom filter as maintaining a set $B$, corresponding to the indices of raised bits. With this interpretation, an element $y$ only tests positive if the random set of indices $\mathit{inds}(y)$ produced by the hash functions on $y$ is such that $\mathit{inds}(y) \subseteq B$. Therefore, the chance of a positive result for $y$ resolves to the chance that the random set of indices from hashing $y$ is a subset of the union of $\mathit{inds}(x_i)$ for the previously inserted values $x_i$. The probability of this reduced event is described by the following theorem:

Theorem 2.2 (Probability of False Positives)

If the only values inserted into $\mathit{bf}$ are $x_1, \ldots, x_n$, then for any $y \notin \{x_1, \ldots, x_n\}$, the probability of a query for $y$ returning true is

$\Pr\left[y \in_? \mathit{bf}\right] \;=\; \frac{1}{m^{k(n+1)}} \sum_{i=1}^{m} i^{k}\, i!\, \binom{m}{i} \genfrac\{\}{0pt}{}{kn}{i}$

where $\genfrac\{\}{0pt}{}{kn}{i}$ stands for the Stirling number of the second kind, counting the partitions of a set of size $kn$ into $i$ non-empty subsets (so that $i!\,\genfrac\{\}{0pt}{}{kn}{i}$ is the number of surjections from a set of size $kn$ onto a set of size $i$).

The key step in capturing these program properties is in treating the outcomes of hashes as random variables, and then propagating this randomness to the results of the other operations. A formal treatment of program outcomes requires a suitable semantics, representing programs as distributions over such random variables. In moving to mechanised proofs, we must first fully characterise this semantics, formally defining a notion of a probabilistic computation in Coq.

3 Encoding AMQs in Coq

To introduce our encoding of AMQs and their probabilistic behaviours in Coq, we continue with our running example, transitioning from mathematical notation to Gallina, Coq’s language. The rest of this section will introduce each of the key components of this encoding through the lens of Bloom filters.

3.1 Probability Monad

Our formalisation represents probabilistic computations using an embedding following the style of the FCF library [Petcher-Morissett:POST15]. We do not use FCF directly, due to its primary focus on cryptographic proofs, wherein it provides little support for proving probabilistic bounds directly, instead prioritising a reduction-based approach of expressing arbitrary computations as compositions of known distributions.

Following the adopted FCF notation, a term of type Comp A represents a probabilistic computation returning a value of type A, and is constructed using the standard monadic operators ret and bind, along with an additional primitive rand that allows sampling from a uniform distribution over the range $\{0, \ldots, m-1\}$:

ret  : forall (A : finType), A -> Comp A
bind : forall (A B : finType), Comp A -> (A -> Comp B) -> Comp B
rand : forall (m : nat), Comp 'I_m
We implement a Haskell-style do-notation over this monad to allow descriptions of probabilistic computations within Gallina. For example, the following code is used to implement the query operation for the Bloom filter:

  hash_res <- hash_vec_int x hashes;   (* hash x using the hash functions *)
  let (new_hashes, hash_vec) := hash_res in
  (* check if all the corresponding bits are set *)
  let qres := bf_query_int hash_vec bf in
  (* return the query result and the new hashes *)
  ret (new_hashes, qres).

In the above listing, we pass the queried value x along with the hash functions hashes to a probabilistic hashing operation hash_vec_int to hash x over each function in hashes. The result of this random operation is then bound to hash_res and split into its constituent components—a sequence of hash outputs hash_vec and an updated copy new_hashes of the hash functions, now incorporating the mapping for x. Then, having mapped our input into a sequence of indices, we can query the Bloom filter for membership using a corresponding deterministic operation bf_query_int to check that all the bits specified by hash_vec are set. Finally, we complete the computation by returning the query outcome qres and the updated hash functions new_hashes using the ret operation to lift our result to a probabilistic outcome.

Using the code snippet above, we can define the query operation bf_query as a function that maps a Bloom filter, a value to query, and a collection of hash functions to a probabilistic computation returning the query result and an updated set of hash functions. However, because our computation type does not impose any particular semantics, this result only encodes the syntax of the probabilistic query and has no actual meaning without a separate interpretation.

Thus, given a Gallina term of type Comp A, we must first evaluate it into a distribution over possible results to state properties on the probabilities of its outcomes. We interpret our monadic encoding in terms of Ramsey’s probability monad [Ramsey-Pfeffer:POPL02], which decomposes a complex distribution into a composition of primitive ones bound together via conditional distributions. To capture this interpretation within Coq, we use the encoding of this monad from the infotheo library [Affeldt-Hagiwara:ITP12, AffeldtHS14], and provide a function that evaluates computations into distributions by recursively mapping them to the probability monad. Here, dist A represents infotheo’s encoding of distributions over a finite support A, defined as being composed of a measure function and a proof that the sum of the measure over the support produces 1.
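Schematically, the evaluation proceeds by structural recursion on the computation; in our notation (with $\llbracket \cdot \rrbracket$ for the evaluation function and $\delta_a$ for the point distribution at $a$, both notations ours):

$\llbracket \mathsf{ret}\ a \rrbracket = \delta_{a} \qquad \llbracket \mathsf{rand}\ m \rrbracket = \mathsf{Uniform}(\{0, \ldots, m-1\}) \qquad \llbracket \mathsf{bind}\ c\ f \rrbracket = \llbracket c \rrbracket \gg= (\lambda x.\ \llbracket f\ x \rrbracket)$

where $\gg=$ is the bind of the probability monad, defined pointwise as $(d \gg= g)(v) = \sum_{x} d(x) \cdot g(x)(v)$.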

This mapping from computations to distributions must be applied to a program (involving, e.g., a Bloom filter) before stating its probability bound. Therefore, we hide this evaluation process behind a notation that allows stating probabilistic properties in a form closer to their mathematical counterparts: we write $\Pr\left[c = v\right]$ for the probability that the distribution obtained by evaluating the computation $c$ assigns to the outcome $v$. Here, $v$ is an arbitrary element in the support of the distribution induced by $c$. Finally, we introduce a binding operator to allow concise representation of dependent distributions: $\Pr\left[x \leftarrow c;\ f(x) = v\right]$ denotes the probability of the outcome $v$ in the distribution obtained by evaluating $c$ and feeding its result to $f$.

3.2 Representing Properties of Bloom Filters

We define the state of a Bloom filter (BF) in Coq as a binary vector of a fixed length $m$, using Ssreflect’s m.-tuple data type:

Record BF := mkBF { bloomfilter_state: m.-tuple bool }.
Definition bf_new : BF :=  (* construct a BF with all bits cleared *).
Definition bf_get_int i : BF -> bool :=   (* retrieve BF’s ith bit *).

We define the deterministic components of the Bloom filter implementation as pure functions taking an instance of BF and a series of indices assumed to be obtained from earlier calls to the associated hash functions:

bf_add_int   : BF -> seq 'I_m -> BF
bf_query_int : seq 'I_m -> BF -> bool

That is, bf_add_int takes the Bloom filter state and a sequence of indices to insert and returns a new state with the requested bits also set. Conversely, bf_query_int returns true iff all the queried indices are set. These pure operations are then called within a probabilistic wrapper that handles hashing the input and the book-keeping associated with hashing to provide the standard interface for AMQs:

bf_add   : B -> HashVec B * BF -> Comp (HashVec B * BF)
bf_query : B -> HashVec B * BF -> Comp (HashVec B * bool)

The HashVec B component (to be defined in Sec. 3.3), parameterised over an input type B, keeps track of known results of the involved hash functions and is provided as an external parameter to the functions rather than being a part of the data structure, to reflect typical uses of AMQs, wherein the hash operation is pre-determined and shared by all instances.
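To give a flavour of the pure layer, here is a minimal sketch of how the two internal operations might look (the bit-setting helper set_bit below is hypothetical, and the actual definitions in the development may differ):

(* set_bit : 'I_m -> BF -> BF is a hypothetical helper returning a copy
   of the filter with the i-th bit of its state set to true *)
Definition bf_add_int (bf : BF) (inds : seq 'I_m) : BF :=
  foldr set_bit bf inds.

(* a query succeeds iff every index in the hash vector is raised *)
Definition bf_query_int (inds : seq 'I_m) (bf : BF) : bool :=
  all (fun i => tnth (bloomfilter_state bf) i) inds.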

With these definitions and notation, we can now state the main theorems of interest about Bloom filters directly within Coq (bf_addm below is a trivial generalisation of the insertion to multiple elements).

Theorem 3.1 (No False Negatives)

For any Bloom filter state $\mathit{bf}$, a vector of hash functions $\mathit{hs}$, and a value $x$, after having inserted $x$ into $\mathit{bf}$, followed by a series of other inserted elements $\mathit{xs}$, the result of a query for $x$ (the boolean component of the output of bf_query) is always true. That is, in terms of probabilities:

$\Pr\left[\mathit{res}_1 \leftarrow \texttt{bf\_add}\ x\ (\mathit{hs}, \mathit{bf});\ \mathit{res}_2 \leftarrow \texttt{bf\_addm}\ \mathit{xs}\ \mathit{res}_1;\ \texttt{bf\_query}\ x\ \mathit{res}_2 = \mathsf{true}\right] = 1$

Lemma 2 (Probability of Flipping a Single Bit)

For a vector $\mathit{hs}$ of hash functions of length $k$, after inserting a series of $n$ distinct values $\mathit{xs}$, all unseen in $\mathit{hs}$, into an empty Bloom filter $\mathit{bf}$, represented by a vector of $m$ bits, the probability of any of its indices $i$ being set is

$\Pr\left[\mathit{res} \leftarrow \texttt{bf\_addm}\ \mathit{xs}\ (\mathit{hs}, \mathit{bf});\ \texttt{bf\_get}\ i\ \mathit{res} = \mathsf{true}\right] \;=\; 1 - \left(1 - \frac{1}{m}\right)^{kn}$

Here, bf_get is a simple embedding of the pure function bf_get_int into a probabilistic computation.

Theorem 3.2 (Probability of a False Positive)

After having inserted a series of $n$ distinct values $\mathit{xs}$, all unseen in $\mathit{hs}$, into an empty Bloom filter $\mathit{bf}$, for any unseen $y \notin \mathit{xs}$, the probability of a subsequent query for $y$ returning true is given as

$\frac{1}{m^{k(n+1)}} \sum_{i=1}^{m} i^{k}\, i!\, \binom{m}{i} \genfrac\{\}{0pt}{}{kn}{i}$

The proof of this theorem required us to provide the first axiom-free mechanised proof for the closed form for Stirling numbers of the second kind [GKP1994].

In the definitions above, we used the output of the hashing operation as the boundary between the deterministic and probabilistic components of the Bloom filter. For instance, in our earlier description of the Bloom filter query operation in Sec. 3.1, we were able to implement the entire operation with the only probabilistic operation being the call hash_vec_int x hashes. In general, structuring AMQ operations as manipulations with hash outputs via pure deterministic functions allows us to decompose reasoning about the data structure into a series of specialised properties about its deterministic primitives and a separate set of reusable properties on its hash operations.

3.3 Reasoning about Hash Operations

We encode hash operations within our development using a random-oracle-based implementation. In particular, in order to keep track of the hash values learnt by hashing previously observed inputs, we represent the state of a hash function from elements of type B to a range 'I_m using a finite map, ensuring that previously hashed values produce the same hash output:

Definition HashState B := FixedMap B I_m.

The state is paired with a hash function generating uniformly random outputs for unseen values, and otherwise returning the same value as its prior invocations:

Definition hash value state : Comp (HashState B * 'I_m) :=
  match find value state with
  | Some(output) => ret (state, output)
  | None => rnd <- rand m;
            new_state <- put value rnd state;
            ret (new_state, rnd)
  end.

A hash vector is a generalisation of this structure to represent a vector of states of independent hash functions:

Definition HashVec B := k.-tuple (HashState B).

The corresponding hash operation over the hash vector, hash_vec_int, is then defined as a function taking a value and the current hash vector and then returning a pair of the updated hash vector and associated random vector, internally calling out to hash to compute individual hash outputs.
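A simplified sketch of this operation, using plain seq instead of the development’s length-indexed tuples (so the code below is illustrative of the shape rather than the actual definition):

Fixpoint hash_vec_int (x : B) (hs : seq (HashState B))
    : Comp (seq (HashState B) * seq 'I_m) :=
  match hs with
  | [::] => ret ([::], [::])
  | h :: rest =>
      res <- hash x h;              (* hash x with the head function   *)
      rec <- hash_vec_int x rest;   (* process the remaining functions *)
      ret (res.1 :: rec.1, res.2 :: rec.2)
  end.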

This random-oracle-based implementation allows us to formulate several helper theorems for simplifying probabilistic computations using hashes by considering whether the hashed values have been seen before or not. For example, if we know that a value has not been seen before, then the probability of obtaining any particular vector of indices for it is the same as that of drawing the vector from a uniform distribution. We can formalise this intuition in the form of the following theorem:

Theorem 3.3 (Uniform Hash Output)

For any two hash vectors $\mathit{hs}$, $\mathit{hs}'$ of length $k$, a value $x$ that has not been hashed before, and an output vector $\mathit{inds}$ of length $k$ obtained by hashing $x$ via $\mathit{hs}$, if the state $\mathit{hs}'$ has the same mappings as $\mathit{hs}$ and additionally maps $x$ to $\mathit{inds}$, then the probability of obtaining the pair $(\mathit{hs}', \mathit{inds})$ is uniform:

$\Pr\left[\texttt{hash\_vec\_int}\ x\ \mathit{hs} = (\mathit{hs}', \mathit{inds})\right] \;=\; \frac{1}{m^{k}}$

Similarly, there are also often cases where we are hashing a value that we have already seen. In these cases, if we know the exact indices a value hashes to, we can prove a certainty on the value of the outcome:

Theorem 3.4 (Hash Consistency)

For any hash vector $\mathit{hs}$ and a value $x$, if $\mathit{hs}$ maps $x$ to outputs $\mathit{inds}$, then hashing $x$ again will certainly produce $\mathit{inds}$ and not change $\mathit{hs}$, that is, $\Pr\left[\texttt{hash\_vec\_int}\ x\ \mathit{hs} = (\mathit{hs}, \mathit{inds})\right] = 1$.

By combining these types of probabilistic properties about hashes with the earlier Bloom filter operations, we are able to prove the prior theorems about Bloom filters by reasoning primarily about the core logical interactions of the deterministic components of the data structure. This decomposition is not just applicable to the case of Bloom filters, but can be extended into a general framework for obtaining modular proofs of AMQs, as we will show in the next section.

4 Ceramist at Large

Zooming out from the preceding discussion of Bloom filters, we now present Ceramist in its full generality, describing its high-level design in terms of the various interfaces that one instantiates to obtain verified AMQ implementations.

The core of our framework revolves around the decomposition of an AMQ data structure into separate interfaces for hashing (AMQHash) and state (AMQ), generalising the specific decomposition used for Bloom filters (hash vectors and bit vectors respectively). More specifically, the AMQHash interface captures the probabilistic properties of the hashing operation, while the AMQ interface captures the deterministic interactions of the state with the hash outcomes.

4.1 AMQHash Interface

The AMQHash interface generalises the behaviours of hash vectors (Sec. 3.3) to provide a generic description of the hashing operation used in AMQs.

The interface first abstracts over the specific types used in the prior hashing operations (such as, e.g., HashVec B) by treating them as opaque parameters: using a parameter AMQHashState to represent the state of the hash operation; types Key and Value encoding the hash inputs and outputs respectively, and finally, a deterministic operation to encode the interaction of the state with the outputs and inputs. For example, in the case of a single hash, the state parameter AMQHashState would be HashState B, while for a hash vector this would instead be HashVec B.

To use this hash state in probabilistic computations, the interface assumes a separate probabilistic operation that will take the hash state and randomly generate an output (e.g., hash for single hashes and hash_vec_int for hash vectors):

Parameter AMQHash_hash: Key -> AMQHashState -> Comp (AMQHashState * Value).

Then, to abstractly capture the kinds of reasoning about the outcomes of hash operations done with Bloom filters in Sec. 3.3, the interface assumes a few predicates on the hash state to provide information about its contents:

Parameter AMQHash_hashstate_contains: AMQHashState -> Key -> Value -> bool.
Parameter AMQHash_hashstate_unseen: AMQHashState -> Key -> bool.

These components are then combined together to produce more abstract formulations of the previous Theorems 3.3 and 3.4 on hash operations.

Property 1 (Generalised Uniform Hash Output)

There exists a probability $p$, such that for any two AMQ hash states $h$, $h'$, a value $x$ that is unseen in $h$, and an output $v$ obtained by hashing $x$ via $h$, if the state $h'$ has the same mappings as $h$ and additionally maps $x$ to $v$, then the probability of obtaining the pair $(h', v)$ is given by $\Pr\left[\texttt{AMQHash\_hash}\ x\ h = (h', v)\right] = p$.

Property 2 (Generalised Hash Consistency)

For any AMQ hash state $h$ and a value $x$, if $h$ maps $x$ to an output $v$, then hashing $x$ again will certainly produce $v$ and not change $h$: $\Pr\left[\texttt{AMQHash\_hash}\ x\ h = (h, v)\right] = 1$.

Proofs of these corresponding properties must also be provided to instantiate the AMQHash interface. Conversely, components operating over this interface can assume their existence, and use them to abstractly perform the same kinds of simplifications as done with Bloom filters, resolving many probabilistic proofs to dealing with deterministic properties on the AMQ states.

4.2 The AMQ Interface

Building on top of an abstract AMQHash component, the AMQ interface then provides a unified view of the state of an AMQ and how it deterministically interacts with the output type Value of a particular hashing operation.

As before, the interface begins by abstracting the specific types and operations of the previous analysis of Bloom filters, first introducing a type AMQState to capture the state of the AMQ, and then assuming deterministic implementations of the typical add and query operations of an AMQ:

Parameter AMQ_add_internal: AMQState -> Value -> AMQState.
Parameter AMQ_query_internal: AMQState -> Value -> bool.

In the case of Bloom filters, these would be instantiated with the BF, bf_add_int and bf_query_int operations respectively (cf. Sec. 3.2), thereby setting the associated hashing operation to the hash vector (Sec. 3.3).

As we move on to reason about the behaviours of these operations, the interface diverges slightly from that of the Bloom filter by conditioning the behaviours on the assumption that the state has sufficient capacity:

Parameter AMQ_available_capacity: AMQState -> nat -> bool.

While the Bloom filter has no real deterministic notion of a capacity, this cannot be said of all AMQs in general, such as the Counting Bloom filter or Quotient filter, as we will discuss later.

With these definitions in hand, the behaviours of the AMQ operations are characterised using a series of associated assumptions:

Property 3 (AMQ insertion validity)

For a state $s$ with sufficient capacity, inserting any hash output $v$ into $s$ via AMQ_add_internal will produce a new state for which any subsequent query for $v$ via AMQ_query_internal will return true.

Property 4 (AMQ query preservation)

For any AMQ state $s$ with sufficient remaining capacity, if a query for a particular hash output $v$ in $s$ via AMQ_query_internal happens to return true, then inserting any further outputs into $s$ will produce a state for which a query for $v$ will still return true.

Even though these assumptions seemingly place strict restrictions on the permitted operations, we found that these properties are satisfied by most common AMQ structures. One potential reason for this might be that they are in fact sufficient to ensure the No-False-Negatives property, standard for most AMQs:

Theorem 4.1 (Generalised No False Negatives)

For any AMQ state $s$ and a corresponding hash state $h$, after having inserted an element $x$ into $s$, followed by a series of other inserted elements $\mathit{xs}$, the result of a query for $x$ is always true. That is,

$\Pr\left[\mathit{res}_1 \leftarrow \texttt{AMQ\_add}\ x\ (h, s);\ \mathit{res}_2 \leftarrow \texttt{AMQ\_addm}\ \mathit{xs}\ \mathit{res}_1;\ \texttt{AMQ\_query}\ x\ \mathit{res}_2 = \mathsf{true}\right] = 1$

Here, AMQ_add, AMQ_addm, and AMQ_query are generalisations of the probabilistic wrappers of Bloom filters (cf. Sec. 3.1) for doing the bookkeeping associated with hashing and delegating to the internal deterministic operations.
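For illustration, the generic insertion wrapper can be sketched as follows, mirroring bf_add from Sec. 3.1 (the actual definition in Ceramist may differ in details):

Definition AMQ_add (x : Key) (st : AMQHashState * AMQState)
    : Comp (AMQHashState * AMQState) :=
  res <- AMQHash_hash x st.1;                (* hash the input value *)
  ret (res.1, AMQ_add_internal st.2 res.2).  (* delegate to the pure add *)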

The generalised Theorem 4.1 illustrates one of the key facilities of our framework, wherein by simply providing components satisfying the AMQHash and AMQ interfaces, it is possible to obtain proofs of certain standard probabilistic properties or simplifications for free.

Figure 1: Overview of Ceramist and the dependencies between its components.

The diagram in Fig. 1 provides a high-level overview of the interfaces of Ceramist, their specific instances, and the dependencies between them, demonstrating Ceramist’s take on compositional reasoning and proof reuse. For instance, the Bloom filter implementation instantiates the AMQ interface and uses, as a component, hash vectors, which themselves instantiate the AMQHash interface used by AMQ. The Bloom filter itself is also used as a proof-reduction target by the Counting Bloom filter. We will elaborate on these and other noteworthy dependencies between the interfaces and instances of Ceramist in the following sections.

4.3 Counting Bloom Filters through Ceramist

To provide a concrete demonstration of the use of the AMQ interface, we now switch over to a new running example—Counting Bloom filters [TarkomaRL12]. A Counting Bloom filter is a variant of the Bloom filter in which individual bits are replaced with counters, thereby allowing the removal of elements. The implementation of the structure closely follows the Bloom filter, generalising the logic from bits to counters: insertion increments the counters specified by the hash outputs, while queries treat counters as set if greater than 0. In the remainder of this section, we will show how to encode and verify the Counting Bloom filter for the standard AMQ properties. We have also proven two novel domain-specific properties of Counting Bloom filters, which, due to space limits, we outline in Appendix 0.A.

First, as the Counting Bloom filter uses the same hashing strategy as the Bloom filter, the hash interface can be instantiated with the Hash Vector structure used for the Bloom filter, entirely reusing the earlier proofs on hash vectors. Next, in order to instantiate the AMQ interface, the state parameter can be defined as a vector of bounded integers, all initially set to 0:

Record CF := mkCF { countingbloomfilter_state: m.-tuple 'I_p.+1 }.
Definition cf_new : CF := (* a new CF with all counters set to 0 *).

As mentioned before, the add operation increments counters rather than setting bits, and the query operation treats counters greater than 0 as raised.

cf_add_int   : CF -> seq 'I_m -> CF
cf_query_int : seq 'I_m -> CF -> bool
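A minimal sketch of these operations, using unbounded nat counters for readability (the development instead uses bounded integers, precisely to preclude overflow):

Definition cf_add_int (cf : seq nat) (inds : seq 'I_m) : seq nat :=
  foldr (fun i cs => set_nth 0 cs i (nth 0 cs i).+1) cf inds.

Definition cf_query_int (inds : seq 'I_m) (cf : seq nat) : bool :=
  all (fun i => 0 < nth 0 cf i) inds.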

To prevent integer overflows, the counters in the Counting Bloom filter are bounded to some range $[0, p]$, so the overall data structure, too, has a maximum capacity: it would not be possible to insert a value if doing so would raise any of the counters above their maximum. To account for this, the capacity parameter of the AMQ interface is instantiated with a simple predicate cf_available_capacity that verifies that the structure can support $l$ further inserts by ensuring that each counter has at least $l \cdot k$ spaces free (where $k$ is the number of hash functions used by the data structure).

The add operation can be shown to be monotone on the value of any counter when there is sufficient capacity (Property 3). The remaining properties of the operations also trivially follow, thereby completing the instantiation, and allowing the automatic derivation of the No-False-Negatives result via Theorem 4.1.

4.4 Proofs about False Positive Probabilities by Reduction

As the observable behaviour of the Counting Bloom filter almost exactly matches that of the Bloom filter, it seems reasonable that the same probabilistic bounds should also apply to this data structure. To facilitate such proof arguments, we provide the AMQMap interface that allows the derivation of probabilistic bounds by reducing one AMQ data structure to another.

The AMQMap interface is parameterised by two AMQ data structures, AMQ A and AMQ B, using the same hashing operation. It is assumed that the corresponding bounds on false positive rates have already been proven for AMQ B, while they have not been for AMQ A. The interface first assumes the existence of a mapping from the state of AMQ A to AMQ B, which satisfies a number of properties:

Parameter AMQ_state_map:  A.AMQState -> B.AMQState.

In the case of our Counting Bloom filter example, this mapping would convert the Counting Bloom filter state to a bit vector by mapping each counter to a raised bit if its value is greater than 0. To enable the transfer of the false positive rate bound, the AMQMap interface then requires the behaviour of this mapping to satisfy a number of additional assumptions:

Property 5 (AMQ Mapping Add Commutativity)

Adding a hash output to the AMQ B obtained by applying the mapping to an instance of AMQ A produces the same result as first adding a hash output to AMQ A and then applying the mapping to the result.

Property 6 (AMQ Mapping Query Preservation)

Applying B’s query operation to the result of mapping an instance of AMQ A produces the same result as applying A’s query operation directly.

In the case of reducing Counting Bloom filters (A) to Bloom filters (B), both results follow from the fact that after incrementing some counters, all of the incremented counters will have values greater than 0 and thus be mapped to raised bits.

Having instantiated the AMQMap interface with the corresponding function and proofs about it, it is now possible to derive the false positive rate of Bloom filters for Counting Bloom filters for free through the following generalised lemma:

Theorem 4.2 (AMQ False Positive Reduction)

For any two AMQs A and B related by the AMQMap interface, if the false positive rate for B after inserting $n$ items is given by a function $f(n)$, then the false positive rate for A is also given by $f(n)$. That is, in terms of probabilities, for an empty state $s$ of A with a hash state $h$, a series of $n$ distinct values $\mathit{xs}$ and a value $y$, all unseen in $h$:

$\Pr\left[\mathit{res} \leftarrow \texttt{AMQ\_addm}\ \mathit{xs}\ (h, s);\ \texttt{AMQ\_query}\ y\ \mathit{res} = \mathsf{true}\right] \;=\; f(n)$

5 Proof Automation for Probabilistic Sums

We have, until now, avoided discussing details of how facts about the probabilistic computations can be composed, and thereby also the specifics of how our proofs are structured. As it turns out, most of this process resolves to reasoning about summations over real values as encoded by Ssreflect’s bigop library. Our development also relies on the tactic library by Martin-Dorel and Soloviev [Martin-DorelS16].

In this section, we outline some of the most essential proof principles facilitating the proofs-by-rewriting about probabilistic sums. While most of the provided rewriting primitives are standalone general equality facts, some of our proof techniques are better understood as combining a series of rewritings into a more general rewriting pattern. To delineate these two cases, we will use the terminology Pattern to refer to a general pattern that our library supports by means of a dedicated Coq tactic, while Lemma will refer to standalone proven equalities.

5.1 The Normal Form for Composed Probabilistic Computations

When stating properties on outcomes of a probabilistic computation (cf. Sec. 3.1), the computation must first be recursively evaluated into a distribution, where the intermediate results are combined using the probabilistic bind operator. Therefore, when decomposing a probabilistic property into smaller subproofs, we must rely on its semantics, which is defined for discrete distributions as follows:

$\Pr\left[x \leftarrow c;\ f(x) = v\right] \;=\; \sum_{x'} \Pr\left[c = x'\right] \cdot \Pr\left[f(x') = v\right]$

Expanding this definition, one can represent any statement on the outcome of a probabilistic computation in a normal form composed of only nested summations over a product of the probabilities of each intermediate computational step. This paramount transformation is captured as the following pattern:

Pattern 1 (Bind normalisation)

$\Pr\left[x_1 \leftarrow c_1;\ x_2 \leftarrow c_2(x_1);\ \ldots;\ c(x_1, \ldots, x_n) = v\right] \;=\; \sum_{x_1'} \Pr\left[c_1 = x_1'\right] \cdot \sum_{x_2'} \Pr\left[c_2(x_1') = x_2'\right] \cdots \Pr\left[c(x_1', \ldots, x_n') = v\right]$

Here, by $c_i(x_{i-1}') = x_i'$, we denote the event in which the result of evaluating the command $c_i$ is $x_i'$, where $x_{i-1}'$ is the result of evaluating the previous command in the chain. This transformation then allows us to resolve the proof of a given probabilistic property into proving simpler statements on its substeps. For instance, consider the implementation of the Bloom filter’s query operation from Section 3.1. When proving properties of the result of a particular query (as in Theorem 3.1), we use this rule to decompose the program into its component parts, namely as being the product of a hash invocation and the deterministic query operation bf_query_int. This allows dealing with the hash operation and the deterministic component separately by applying subsequent rewritings to each factor on the right-hand side of the above equality.
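Concretely, for the Bloom filter query program above, this normalisation yields (in our rendering):

$\Pr\left[\texttt{bf\_query}\ x\ (\mathit{hs}, \mathit{bf}) = v\right] \;=\; \sum_{(\mathit{hs}', \mathit{is})} \Pr\left[\texttt{hash\_vec\_int}\ x\ \mathit{hs} = (\mathit{hs}', \mathit{is})\right] \cdot \Pr\left[\mathsf{ret}\ (\mathit{hs}', \texttt{bf\_query\_int}\ \mathit{is}\ \mathit{bf}) = v\right]$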

5.2 Probabilistic Summation Patterns

Having resolved a property into our normal form via a tactic implementing Pattern 1, the subsequent reductions rely on the following patterns and lemmas.

Sequential composition.

When reasoning about the properties of composite programs, it is common for some subprogram $c$ to return a probabilistic result that is then used as the argument of a probabilistic function $f$. This composition is encapsulated by the operation $x \leftarrow c;\ f(x)$, as used by Theorems 3.1, 2, and 3.2. The corresponding programs, once converted to the normal form, are characterised by having factors within their internal product that simply evaluate the probability of a final $\mathsf{ret}$ statement producing a particular value $x'$:

$\ldots \cdot \Pr\left[\mathsf{ret}\ g(x) = x'\right] \cdot \ldots$

Since the return operation is defined as a delta distribution with a peak at the returned value $g(x)$, we can simplify the statement by removing the summation over $x'$ and replacing all occurrences of $x'$ with $g(x)$, via the following pattern:

Pattern 2 (Probability of a Sequential Composition)

$\sum_{x'} \Pr\left[\mathsf{ret}\ g(x) = x'\right] \cdot P(x') \;=\; P(g(x))$

Notice that, without loss of generality, Pattern 2 assumes that the $\mathsf{ret}$-containing factor is at the head of the product. Our tactic implicitly rewrites the statement to this form.

Plausible statement sequencing.

One common issue with the normal form is that, as each statement is evaluated over the entirety of its support, some of the dependencies between statements are obscured. That is, the outputs of one statement may in fact be constrained to some subset of the complete support. To recover these dependencies, we provide the following theorem, which allows reducing computations under the assumption that their inputs are plausible:

Lemma 3 (Plausible Sequencing)

For any computation sequence $x \leftarrow c_1;\ c_2(x)$, if it is possible to reduce the computation $c_2(x)$ to a simpler form $c_2'(x)$ whenever $x$ is amongst the plausible outcomes of $c_1$ (i.e., $0 < \Pr\left[c_1 = x\right]$ holds), then it is possible to rewrite $c_2$ to $c_2'$ without changing the resulting distribution:

$\Pr\left[x \leftarrow c_1;\ c_2(x) = v\right] \;=\; \Pr\left[x \leftarrow c_1;\ c_2'(x) = v\right]$

Plausible outcomes.

As was demonstrated in the previous paragraph, it is sometimes possible to gain knowledge that a particular value $v$ is a plausible outcome of a composite probabilistic computation:

$0 < \Pr\left[x \leftarrow c_1;\ c_2(x) = v\right]$

This fact in itself is not particularly helpful, as it does not immediately provide any usable constraints on the value $v$. However, we can now turn this inequality into a conjunction of inequalities for individual probabilities, thus getting more information about the intermediate steps of the computation:

Pattern 3

If $0 < \Pr\left[x \leftarrow c_1;\ c_2(x) = v\right]$, then there exists some $x'$ such that $0 < \Pr\left[c_1 = x'\right]$ and $0 < \Pr\left[c_2(x') = v\right]$.

This transformation is possible due to the fact that probabilities are always non-negative, thus if a summation is positive, there must exist at least one element in the summation that is also positive.

Summary of the development.

By composing these components together, we obtain a comprehensive toolbox for effectively reasoning about probabilistic computations. We find that our summation patterns end up encapsulating most of the book-keeping associated with our encoding of probabilistic computations, which, combined with the AMQ/AMQHash decomposition from Sec. 4, allows for a fairly straightforward approach for verifying properties of AMQs.

5.3 A Simple Proof of Generalised No False Negatives Theorem

To showcase the fluid interaction of our proof principles in action, let us consider the proof of the generalised No-False-Negatives Theorem 4.1, stating the following:

$\Pr\left[\mathit{res}_1 \leftarrow \texttt{AMQ\_add}\ x\ (h, s);\ \mathit{res}_2 \leftarrow \texttt{AMQ\_addm}\ \mathit{xs}\ \mathit{res}_1;\ \texttt{AMQ\_query}\ x\ \mathit{res}_2 = \mathsf{true}\right] = 1 \qquad (1)$

As with most of our probabilistic proofs, we begin by applying normalisation Pattern 1 to reduce the computation into our normal form:

$\sum_{\sigma_1} \underbrace{\Pr\left[\texttt{AMQ\_add}\ x\ (h, s) = \sigma_1\right]}_{(a)} \cdot \sum_{\sigma_2} \underbrace{\Pr\left[\texttt{AMQ\_addm}\ \mathit{xs}\ \sigma_1 = \sigma_2\right]}_{(b)} \cdot \underbrace{\Pr\left[\texttt{AMQ\_query}\ x\ \sigma_2 = \mathsf{true}\right]}_{(c)}$

We label the factors to be rewritten as (a), (b) and (c) for the convenience of the presentation, indicating the correspondence to the components of the statement (1). From here, as all inserted values are assumed to be unseen, we can use Property 1 in conjunction with the sequencing Pattern 2 to reduce the hash-containing factors (a) and (b): each hash invocation on an unseen value contributes the probability $p$ from the statement of Property 1, while the deterministic insertions are propagated into the subsequent factors. Here we write $\oplus$ and $\oplus_h$ for the deterministic operations AMQ_add_internal and AMQHash_add_internal, respectively. Then, using Pattern 3 for decomposing plausible outcomes, it is possible to separately show that any plausible hash state produced by AMQ_addm must still map $x$ to its hash output $v$, as hash operations preserve existing mappings. Combining this fact with Lemma 3 (plausible sequencing) and Hash Consistency (Property 2), we can derive that the execution of AMQHash_hash on $x$ in the resulting hash state must certainly return $v$, simplifying the summation even further.

Finally, as the final state $\sigma_2$ is a plausible outcome of AMQ_addm called on the state $s \oplus v$, we can then show, using Property 4 (query preservation), that querying for $v$ on it must succeed. Therefore, the entire summation reduces to the summation of distributions over their support, which can be trivially shown to be 1.

6 Overview of the Development and More Case Studies

Section                      Specifications (LOC)   Proofs (LOC)
Bounded containers           286                    1051
Notation (§3.1)              77                     0
Summations (§5)              742                    2122
Hash operations (§4.1)       201                    568
AMQ framework (§4.2)         594                    695
Bloom filter (§3.2)          322                    1088
Counting BF (§4.4, §0.A)     312                    674
Quotient filter (§6.1)       197                    633
Blocked AMQ (§6.2)           269                    522

The Ceramist mechanised framework is implemented as a library in the Coq proof assistant [ceramist]. It consists of three main sub-parts, each handling a different aspect of constructing and reasoning about AMQs: (i) a library of bounded-length data structures, enhancing MathComp’s [Maboubi-Tassi:MathComp] support for reasoning about finite sequences with varying lengths; (ii) a library of probabilistic computations, extending the infotheo probability theory library [AffeldtHS14] with definitions of deeply embedded probabilistic computations and a collection of tactics and lemmas on summations described in Sec. 5; and (iii) the AMQ interfaces and instances representing the core of our framework described in Sec. 4.

Alongside these core components, we also include four specific case studies to provide concrete examples of how the library can be used for practical verification. Our first two case studies are the mechanisation of the Bloom filter [Bloom:1970:STH:362686.362692] and the Counting Bloom filter [TarkomaRL12], as discussed earlier. In proving the false positive rate for Bloom filters, we follow the proof by Bose et al. [Bose2008Oct], also providing the first mechanised proof of the closed form expression for Stirling numbers of the second kind. Our third case study provides a mechanised verification of the quotient filter [BenderFJKKMMSSZ12]. Our final case study is a mechanisation of the Blocked AMQ, a family of AMQs with a common aggregation strategy. We instantiate this abstract structure with each of the prior AMQs, obtaining, among others, a mechanisation of Blocked Bloom filters [PutzeSS09]. The sizes of each library component, along with references to the sections that describe them, are given in the table above.

Of particular note, thanks to the extensive proof reuse supported by Ceramist, the proof size for each of our case studies progressively decreases, with around a 50% reduction in size from our initial proofs for Bloom filters to the final case studies of the different Blocked AMQ instances.

6.1 Quotient Filter

A quotient filter [BenderFJKKMMSSZ12] is a type of AMQ data structure optimised to be more cache-friendly than other typical AMQs. In contrast to the relatively simple internal vector-based states of the Bloom filters, a quotient filter works by internally maintaining a hash table to track its elements.

The internal operations of a quotient filter build upon a fundamental notion of quotienting, whereby a single $(q+r)$-bit hash outcome is split into two parts by treating the upper $q$ bits (the quotient) and the lower $r$ bits (the remainder) separately. Whenever an element is inserted or queried, the item is first hashed over a single hash function and the output is then quotiented. The operations of the quotient filter then work by using the $q$-bit quotient to specify a bucket of the hash table, and the $r$-bit remainder as a proxy for the element, such that a query for an element will succeed if its remainder can be found in the corresponding bucket.
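In other words, for a hash output $h \in \{0, \ldots, 2^{q+r}-1\}$, the two parts are computed as (our rendering):

$\mathit{quotient}(h) = \lfloor h / 2^{r} \rfloor \qquad\qquad \mathit{remainder}(h) = h \bmod 2^{r}$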

A false positive can occur only if the outputs of the hash function happen to collide exactly for two particular values (collisions in just the quotient or just the remainder are not sufficient to produce an incorrect result). Therefore, it is possible to reduce the event of a false positive in a quotient filter to the event that at least one of several draws from a uniform distribution produces a particular value. We encode quotient filters by instantiating the AMQHash interface from Sec. 4.1 with a single hash function, rather than the vector of hash functions used by the Bloom filter variants (Sec. 2). The size of the output of this hashing operation is defined to be $2^{q+r}$, and a corresponding quotienting operation is defined by taking the quotient and remainder from dividing the hash output by $2^{r}$. With this encoding, we are able to provide a mechanised proof of the false positive rate for the quotient filter implemented using a $(q+r)$-bit hash as being:

Theorem 6.1 (Quotient filter False Positive Rate)

For a hash function $h$, after inserting a series of $n$ unseen, distinct values $\mathit{xs}$ into an empty quotient filter $\mathit{qf}$, for any unseen $y \notin \mathit{xs}$, the probability of a query for $y$ returning true is given by:

$1 - \left(1 - \frac{1}{2^{q+r}}\right)^{n}$

6.2 Blocked AMQ

Blocked Bloom filters [PutzeSS09] are a cache-efficient variant of Bloom filters, where a single instance of the structure is composed of a vector of independent Bloom filters, with an additional “meta”-hash operation used to distribute values between the blocks. When querying for a particular element, the meta-hash operation is first consulted to select the instance to which the query should be delegated.

While prior research has only focused on applying this blocking design to Bloom filters, we found that this strategy is in fact generic over the choice of AMQ, allowing us to formalise an abstract Blocked AMQ structure, and later instantiate it for particular choices of “basic” AMQs. As such, this data structure highlights the scalability of Ceramist wrt. composition of programs and proofs.

Our encoding of Blocked AMQs within Ceramist is done by means of two higher-order modules, as shown in Fig. 1: (i) a multiplexed-hash component, parameterised over an arbitrary hashing operation, and (ii) a blocked-state component, parameterised over some instantiation of the AMQ interface. The multiplexed hash captures the relation between the meta-hash and the hashing operations of the basic AMQ, randomly multiplexing hashes to particular hashing operations of the sub-components. We construct a multiplexed hash as a composition of the hashing operation used by the AMQ in each of the blocks, and a meta-hash function to distribute queries between the blocks. The state of this structure is defined as a pairing of the states of the per-block hashing operations, one for each of the $b$ blocks of the AMQ, with the state of the meta-hash function. As such, hashing a value with this operation produces a pair, where the first element is obtained by hashing over the meta-hash to select a particular block, and the second element is produced by hashing the value again over the hash operation of this selected block. With this custom hashing operation, the state component of the Blocked AMQ is defined as a sequence of states of the underlying AMQ, one for each block. The insertion and query operations work on the output of the multiplexed hash by using the first element to select a particular element of the sequence, and then using the second element as the value to be inserted into, or queried on, this selected state.
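Schematically, for a vector $B$ of per-block AMQ states and a value $y$, with $\mathit{meta}$ denoting the meta-hash (our rendering):

$\mathsf{query}_{\mathsf{blocked}}(B, y) \;=\; \mathsf{query}\bigl(B[\mathit{meta}(y)],\ \mathit{hash}_{\mathit{meta}(y)}(y)\bigr)$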

Having instantiated the data structure as described above, we proved the following abstract result about the false positive rate for blocked AMQs:

Theorem 6.2 (Blocked AMQ False Positive Rate)

For any AMQ with a false positive rate after inserting $i$ elements estimated as $f(i)$, for a multiplexed hash function $h$, after having inserted $n$ distinct values $\mathit{xs}$, all unseen in $h$, into an empty Blocked AMQ filter composed of $b$ instances of the underlying AMQ, for any unseen $y \notin \mathit{xs}$, the probability of a subsequent query for $y$ returning true is given by:

$\sum_{i=0}^{n} \binom{n}{i} \frac{(b-1)^{n-i}}{b^{n}}\, f(i)$

We instantiated this interface with each of the previously defined AMQ structures, obtaining the Blocked Bloom filters, Counting Blocked Bloom filters and Blocked Quotient filter along with proofs of similar properties for them, for free.

7 Discussion and Related Work

Proofs about AMQs.

While there has been a wealth of prior research into approximate membership query structures and their probabilistic bounds, the prevalence of paper-and-pencil proofs has meant that errors in analysis have gone unnoticed and propagated throughout the literature.

The most notable example is in Bloom’s original paper [Bloom:1970:STH:362686.362692], wherein dependencies between setting bits led to an incorrect formulation of the bound (equation (17)), which has since been repeated in several papers [Mitzenmacher2002compressed, BroderM03, Dharmapurikar2004Deep, Dharmapurikar2006Longest] and even textbooks [mitzenmacher2005]. While this error was later identified by Bose et al. [Bose2008Oct], their own analysis was also marred by an error in their definition of Stirling numbers of the second kind, resulting in yet another incorrect bound, corrected two years later by Christensen et al. [christensen2010new], who avoided the error by eliding Stirling numbers altogether and deriving the bound directly. Furthermore, despite these corrections, many subsequent papers [PutzeSS09, Jing2009weighted, Li2009memory, debnath2011bloomflash, TarkomaRL12, LimLee2015Complement, QiaoLC11] still use Bloom’s original incorrect bounds. For example, in their analysis of Blocked Bloom filters, Putze et al. [PutzeSS09] derive an incorrect bound on the false positive rate by assuming that the false positive rates of the constituent Bloom filters are given by Bloom’s bound.

Mechanically Verified Probabilistic Algorithms.

Past research has also focused on the verification of probabilistic algorithms, and our work builds on the results and ideas from several of these developments.

The ALEA library also tackles the task of proving properties of probabilistic algorithms [audebaud2009proofs]. In contrast to our choice of a deep embedding for encoding probabilistic computations, ALEA uses a shallow embedding through a Giry monad [giry1982categorical], representing probabilistic programs as measures over their outcomes. As ALEA axiomatises a custom type to represent the subset of reals between 0 and 1 for capturing probabilities, they must independently prove any properties on reals required for their theorems, considerably increasing the proof effort.

The Foundational Cryptography Framework (FCF) [Petcher-Morissett:POST15] was developed for proving the security properties of cryptographic programs and provides an encoding for probabilistic algorithms. Rather than developing specific tooling for solving probabilistic obligations as we do, their library prioritises a proof strategy of proving the probabilistic properties of computations by reducing them to standard “difficult” programs with known distributions. While this strategy closely follows the typical structure of cryptographic proofs, their simple encoding increases the complexity of directly proving probabilistic properties.

Tassarotti et al.’s Polaris [tassarotti2019separation] library is a Coq framework for reasoning about probabilistic concurrent algorithms. Polaris uses the same reduction strategy for probabilistic specifications as the FCF library, inheriting some of the same issues with proving standalone bounds.

Hölzl considered mechanised verification of probabilistic programs in Isabelle/HOL [Holzl:CPP17]. While Hölzl uses a similar composition of probability and computation monads to encode and evaluate probabilistic programs, his construction defines the semantics of programs as infinite Markov chains, represented as a co-inductive stream of probabilistic outputs. This design makes the encoding unsuitable for capturing terminating programs, yet it is the only encoding we are aware of that enables probabilistic proofs about non-terminating programs.

Our previous effort on mechanising the probabilistic properties of blockchains also considered the encoding of probabilistic computations in Coq [Gopinathan-Sergey:CoqPL19]. While that work also relied on infotheo’s probability monad, it primarily considered the mechanisation of a restricted form of probabilistic properties (those with complete certainty), and did not deliver reusable tooling for this task.

While the Ceramist development is the first, to the best of our knowledge, that provides a mechanised proof of the probabilistic properties of Bloom filters, prior research has considered their deterministic properties. Blot et al. [BlotDL16] provided a mechanised proof of the absence of false negatives for their implementation of a Bloom filter as part of their work on a library for using abstract sets to reason about the bit-manipulations in low-level programs.

Proofs of differential privacy.

A popular motivation for reasoning about probabilistic computations is for the purposes of demonstrating differential privacy.

Barthe et al.’s CertiPriv framework [BartheKOB12] extends ALEA to support reasoning using a Probabilistic Relational Hoare logic, and uses this fragment to prove probabilistic non-interference arguments. However, CertiPriv focuses on proving relational probabilistic properties of coupled computations rather than explicit numerical bounds as we do. More recently, Barthe et al. [strub2019relational] have developed a mechanisation that supports a more general coupling between distributions. In the future, we plan to employ Ceramist for extending the verification of AMQs to infer the induced probabilistic bounds on differential privacy guarantees [ErlingssonPK14].

8 Conclusion

The key properties of Approximate Membership Query structures are inherently probabilistic. Formalisations of those properties are frequently stated incorrectly, due to the complexity of the underlying proofs. We have demonstrated the feasibility of conducting such proofs in a machine-assisted framework. The main ingredients of our approach are a principled decomposition of structure definitions and proof automation for manipulating probabilistic sums. Together, they enable scalable and reusable mechanised proofs about a wide range of AMQs.

Acknowledgements. 

We thank Georges Gonthier, Karl Palmskog, George Pîrlea, Prateek Saxena, and Anton Trunov for their comments on the preliminary versions of the paper. We thank the CPP’20 referees (especially Reviewer D) for pointing out that the formulation of the closed form for Stirling numbers of the second kind, which we adopted as an axiom from the work by Bose et al. [Bose2008Oct] who used it in the proof of Theorem 3.2, implied False. This discovery has forced us to prove the closed form statement in Coq from the first principles, thus getting rid of the corresponding axiom and eliminating all potentially erroneous assumptions. Finally, we are grateful to the CAV’20 reviewers for their feedback.

Ilya Sergey’s work has been supported by the grant of Singapore NRF National Satellite of Excellence in Trustworthy Software Systems (NSoE-TSS) and by Crystal Centre at NUS School of Computing.

References

Appendix 0.A Domain-Specific Properties of Counting Bloom Filters

While the No-False-Negatives and false positive rate properties are practically important aspects of an AMQ, in the case of a Counting Bloom filter, there are a few other probabilistic behaviours of the structure that are of importance. One such property is the ability to remove some elements from a Counting Bloom filter without affecting queries for other ones, by decrementing the counters corresponding to the removed element.

To demonstrate the flexibility of our framework, we also provide a mechanised proof of the validity of this removal operation, which, to the best of our knowledge, has not been previously formalised:

Theorem 0.A.1 (Counting Bloom filter removal)

For any Counting Bloom filter $\mathit{cf}$ with sufficient capacity and associated hashes $\mathit{hs}$, removing a previously inserted value $x$ will not change the result of a query for any other previously inserted value $y$; that is, a query for $y$ after the removal of $x$ still certainly returns true.

The operation cf_remove from the theorem statement deletes a value from the Counting Bloom filter by decrementing the associated counters, and is provided as a custom operation externally to the other Ceramist components, as removal operations are not a typical operation in AMQ interfaces.
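In the style of the earlier sketches (again with unbounded nat counters for readability; names are illustrative), removal can be rendered as:

Definition cf_remove_int (cf : seq nat) (inds : seq 'I_m) : seq nat :=
  foldr (fun i cs => set_nth 0 cs i (nth 0 cs i).-1) cf inds.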

Our development also provides a proof of another specialised property of the structure—that inserting any value will increase the total sum of the counters by a fixed amount. This property characterises how the modified state of the Counting Bloom filter allows tracking more detailed information, than just element membership, in terms of the exact number of insertions.

Theorem 0.A.2 (Certainty of Counter Increments)

For any Counting Bloom filter $\mathit{cf}$ and a value $x$ that was not previously inserted into $\mathit{cf}$, if the sum of the values of all counters in $\mathit{cf}$ is $P$, then after inserting $x$, the sum of the counters will certainly increase by the number $k$ of hash functions; that is, the new sum is $P + k$ with probability 1.