Fast Privacy-Preserving Punch Cards

06/10/2020 ∙ by Saba Eskandarian, et al.

Loyalty programs in the form of punch cards that can be redeemed for benefits have long been a ubiquitous element of the consumer landscape. However, their increasingly popular digital equivalents, while providing more convenience and better bookkeeping, pose a considerable risk to consumer privacy. This paper introduces a privacy-preserving punch card protocol that allows firms to digitize their loyalty programs without forcing customers to submit to corporate surveillance. We also present a number of extensions that allow our scheme to provide other privacy-preserving customer loyalty features. Compared to the best prior work, we achieve a 14x reduction in the computation and a 25x reduction in communication required to perform a "hole punch," a 62x reduction in the communication required to redeem a punch card, and a 394x reduction in the computation time required to redeem a card. Much of our performance improvement can be attributed to removing the reliance on pairings present in prior work, which has only addressed this problem in the context of more general loyalty systems. By tailoring our scheme to punch cards and related loyalty systems, we demonstrate that we can reduce communication and computation costs by orders of magnitude.






Code Repositories


A privacy-preserving digital version of punch cards used in store loyalty programs


1. Introduction

Punch cards that can be redeemed for rewards after a number of purchases are a widely-used incentive for customer loyalty. Although these time-tested loyalty schemes remain popular, they are increasingly being replaced with digital equivalents that reside in mobile apps instead of physical wallets. The benefits of going digital for business owners include stronger defenses against counterfeit cards, a more convenient customer experience, and better bookkeeping around the popularity and efficacy of their loyalty program (Harbour, 2017; Brown, [n.d.]).

Unfortunately, digital loyalty programs also introduce myriad new opportunities for customers’ privacy to be violated (Brown, [n.d.]; Lakshmanan, 2019), e.g., by linking customer behavior across transactions. This kind of tracking can be conducted by the business itself, a third-party loyalty service, or a malicious actor who gains access in a data breach. Thus any firm that wants to protect customer privacy should attempt to ensure that its digital loyalty program does not collect unnecessary data. But is it possible to digitize the traditional punch card without damaging customer privacy?

One approach to this problem is via standard anonymous credential techniques (Chaum, 1985; Camenisch and Lysyanskaya, 2001, 2004). Ecash systems (Camenisch et al., 2005; Belenkiy et al., 2009) or even the uCentive system (Milutinovic et al., 2015), which is specifically designed for loyalty programs, can be used to give a customer an unlinkable token for each purchase. However, storage and computation costs to hold and redeem a token in these systems must be linear in the number of “hole punches” a customer acquires.

A recent line of work, beginning with the Black Box Accumulation (BBA) of Jager and Rupp (Jager and Rupp, 2016), removes this linear dependence on the number of hole punches. Although individual hole punches are unlinkable in the original BBA scheme, the processes of issuing and redeeming a punch card are not. This shortcoming is rectified in the later BBA+ and Updatable Anonymous Credential Systems (UACS) works by Hartung et al. (Hartung et al., 2017) and Blömer et al. (Blömer et al., 2019), as well as the recent improvements of Bobolz et al. (Bobolz et al., 2020), all of which additionally extend the idea of black box accumulation to support a broader set of functionalities.

This work introduces new protocols specifically designed to support privacy-preserving digital punch cards. By focusing specifically on the requirements of punch cards and similar points-based loyalty programs, we are able to make both qualitative and quantitative improvements over prior work. Unlike the works listed above, our main protocol does not rely on pairings, enabling significant performance improvements. Moreover, by stepping away from previous abstractions used for punch cards, we can handle punch card issuance non-interactively, meaning that a customer can generate a new, unpunched card without any interaction with the server. As an ancillary benefit, this removes a potential denial of service opportunity in prior systems, where a customer could register many punch cards without actually needing to earn any punches.

In terms of performance, our scheme reduces the client-side computation required to generate a new punch card compared to prior work (in addition to not requiring interaction with the server), reduces the total client and server computation time to perform a card punch by 14x, and reduces the time to redeem a card by 394x. Communication costs to punch and redeem a card are also reduced, by 25x and 62x respectively.

Our core protocol is quite simple. To generate a punch card, a client picks a random secret and hashes it to a point in an elliptic curve group using a hash function modeled as a random oracle (Fiat and Shamir, 1986; Bellare and Rogaway, 1993). To receive a hole punch, the client masks this group element and sends it to the server, who sends it back raised to a server-side secret value, along with a proof that this was done honestly. Finally, after several punches, the client redeems the card by sending the unmasked version along with the initial random secret to the server. The server checks that the group element submitted matches the hash of the random secret raised to the appropriate exponent. It also checks that the punch card being redeemed has not been redeemed before. Since the server is not involved in card issuance and only ever sees separately masked versions of the card, it cannot link a redeemed card to any past transaction. We prove, in the Algebraic Group Model (AGM) (Fuchsbauer et al., 2018), that a malicious customer cannot successfully claim more rewards than it is entitled to.

We also present a number of extensions to our main scheme that allow us to handle variations on the typical punch card. For example, we can handle special promotions where users get multiple punches, programs where purchases receive a fixed number of points instead of a single punch, and even private ticketing systems. Our most involved extension allows customers to merge the points on two punch cards without revealing anything to the server about the individual punch cards being merged. This extension uses pairings, but it still maintains the other advantages of our protocol and outperforms prior work, albeit by a smaller margin.

Our schemes are implemented in Rust with an Android wrapper for testing on mobile devices, and all our code and raw performance data are open source at

2. Design Goals

This section describes our goals for a punch card scheme. We give security definitions and contrast the goals of our work with those of closely related works.

2.1. Functionality Goals

A punch card scheme consists of three components. First, a client running on a customer’s phone should be able to create a new punch card. Next, the client and a server running a loyalty program can interact in order for the server to give the client a “hole punch.” Finally, a client can submit a completed punch card to the server for verification, and the server will accept valid punch cards that have not already been redeemed. The server keeps a database of previously redeemed cards to make sure a client doesn’t redeem the same card multiple times. After verifying a card, the server can give the client some out-of-band reward. In general, each of these steps can be a multi-round interactive protocol between the two parties. However, since all our protocols involve exactly one round, we present the syntax of a punch card scheme below as consisting of individual algorithms instead of interactive protocols.

A punch card scheme defined with respect to a security parameter is defined as follows.

  • : On input a security parameter , the initial server setup produces server public and secret keys, as well as an empty database to record previously redeemed punch cards.

  • : On input a security parameter , the algorithm generates a new punch card and a punch card secret .

  • : On input the server keys and a punch card, outputs an updated punch card and a proof that the punch card was updated correctly.

  • : Given the public key, a punch card secret , the accompanying punch card , a server-updated punch card value , and a proof , outputs an updated secret and card if the proof is accepted and otherwise.

  • : Given a punch card secret and the corresponding punch card , outputs an updated secret and card that are ready to be sent to the server for redemption.

  • : On input the server keys, redeemed card database, a punch card, the accompanying secret, and an integer determining the required number of punches for redemption, outputs a bit determining whether or not the punch card is accepted and an updated database .
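The six algorithms above can be collected into a single interface. The Python skeleton below is a sketch only: every method name is a placeholder of our own choosing, since the paper's identifiers did not survive extraction.

```python
class PunchCardScheme:
    """Single-round punch card scheme: every step is one algorithm, not an
    interactive protocol. All method names here are illustrative placeholders."""

    def server_setup(self, sec_param: int):
        """Return (server secret key, public key, empty redeemed-card database)."""
        raise NotImplementedError

    def issue(self, sec_param: int):
        """Return (new punch card, punch card secret); needs no server interaction."""
        raise NotImplementedError

    def server_punch(self, sk, pk, card):
        """Return (updated card, proof that the punch was applied honestly)."""
        raise NotImplementedError

    def client_punch(self, pk, secret, card, punched_card, proof):
        """Return (updated secret, updated card) if the proof verifies, else None."""
        raise NotImplementedError

    def unmask(self, secret, card):
        """Return (secret, card) ready to be sent to the server for redemption."""
        raise NotImplementedError

    def server_verify(self, sk, pk, db, card, secret, n):
        """Return (accept bit, updated database); accept iff the card has n punches
        and does not already appear in the database."""
        raise NotImplementedError
```
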

Correctness for a punch card scheme is defined in a straightforward way. An honestly generated punch card that has received punches should be accepted by an honest server. This should hold true even after many punch cards have been generated and redeemed.

Definition 2.1 (Correctness).

We say that a punch card scheme is correct if, for honestly generated server keys and any number of punches, the following set of operations, repeated sequentially, results in the server accepting each redeemed card, with all but negligible probability in the security parameter.

The functionality we desire from our punch cards is, at a high level, similar to that offered by black box accumulation (BBA) (Jager and Rupp, 2016). Although we offer a similar functionality, we will do so with stronger security guarantees and significantly improved performance. On the other hand, BBA+ (Hartung et al., 2017), UACS (Blömer et al., 2019), and Bobolz et al. (Bobolz et al., 2020) offer additional features that might be useful in other kinds of loyalty programs, such as reducing balances and partially spending accrued rewards. These features enable other applications, but, as described in Section 1, they render the solutions less effective for the original punch card problem. Bobolz et al. introduce the possibility of recovering from a partially completed spend that gets interrupted mid-protocol, e.g., due to a communication or hardware fault. Our scheme avoids the potential for this problem entirely because redemption only requires a single message from the client to the server.

One way in which our setting differs fundamentally from that of BBA+, UACS, and Bobolz et al. is the way in which we prevent a punch card from being redeemed more than once. In our setting, the server has access to a database of all previously redeemed cards when deciding whether or not to accept a new punch card submitted for verification. BBA+, UACS, and Bobolz et al. consider an offline double spending scenario where the server may not have access to such a database but must be able to identify clients who have double spent punch cards after the fact. We do not pursue this goal for three reasons, listed in order of increasing importance below.

  1. Not necessary: point-of-sale terminals often require an internet connection to work, so synchronizing spent punch cards between different locations of a firm with multiple branches can happen online with less performance cost than an offline verification approach.

  2. Prohibitively expensive: the performance cost of checking whether a punch card was double spent in prior work is prohibitive, requiring at least one exponentiation for each previously redeemed punch card. This would be about 8 orders of magnitude slower than the hash table lookup required in our setting (as measured on our evaluation setup).

  3. Requires real-world identity: identifying the human user who double spent a punch card in a way that the person can be penalized requires some notion of real-world identity tied to the punch card client. This means that any loyalty system providing such a feature would require a user’s real-world identity in order to operate. This violates our original goal of making a punch card loyalty program digital with no damage to user privacy.

2.2. Security Goals

Figure 1. Real privacy experiment. The experiment makes use of oracles for issuing, punching, and redeeming punch cards, which have access to shared state keeping track of issued punch cards and the public key, subject to the restriction that the redemption oracle is only called once on each punch card.

Figure 2. Ideal privacy experiment. The experiment makes use of the same oracles, which have access to shared state keeping track of issued punch cards and the public key, subject to the same restriction that the redemption oracle is only called once on each punch card.

At a high level, a punch card scheme must provide two kinds of security guarantees. First, it must protect client privacy such that the server learns nothing from messages sent by the client. Second, it must be sound in that no client can redeem more rewards than it has honestly accrued through valid hole punches authorized by the server.

We define privacy using a simulation-based definition. This means that in order for privacy to be satisfied, there must exist a simulator algorithm that can generate the view of the punch card server without access to client-side secrets. Informally, if the server can’t distinguish between the output of the simulator and a real client, then it surely can’t learn anything from interacting with a real client because it could have received the same information by running the simulator on its own.

Our privacy definition defines real and ideal privacy experiments, both of which begin with the challenger initializing an empty table mapping unique integer identifiers to punch cards and a counter that is incremented each time a new punch card is issued. The adversary is allowed to pick server secret and public keys , and then it is allowed to interact with oracles , , and which play the role of the client in the punch card scheme. In the real privacy experiment, these oracles act as wrappers around the , , and functions, simply calling the functions on the requested punch card (identified by a number chosen at issuance) and performing bookkeeping when punch cards are issued, updated, or redeemed. The ideal privacy experiment replaces each of these functions with calls to simulator algorithms , , and which have no access to punch card secrets. At the end of each experiment, the adversary outputs a distinguishing bit .

Definition 2.2 (Privacy).

Let be a punch card scheme. Then for a security parameter , and for every adversary made up of algorithms and , there exists a simulator made up of algorithms , , and such that the outputs of the experiments (Figure 1) and (Figure 2) are computationally indistinguishable.

In particular, we say that a punch card scheme has privacy if there exists a negligible function such that for any efficient adversary , we have

Our soundness definition resembles that of BBA (Jager and Rupp, 2016), which requires that a malicious client can only redeem as many punches as it has accrued. Aside from modifying the syntax of the definition to match our own, we have also modified it to allow the adversary to interleave hole punches and redemptions instead of requiring that all redemptions occur at the end of the protocol.

Definition 2.3 (Soundness).

Let be a punch card scheme. Then for a security parameter and adversary , we define the soundness experiment in Figure 3. We say that a punch card scheme satisfies soundness if there exists a negligible function such that for any efficient adversary , we have

Figure 3. Soundness experiment. The experiment outputs 1 if the adversary redeems more punches than it has accrued, and 0 otherwise. It makes use of oracles for punching and redeeming cards, all of which have access to the shared experiment state.

As in BBA, this definition does not capture whether or not a client can transfer value from one punch card to another or merge separate, partially filled punch cards to redeem a single, larger card. In fact, it is not entirely clear if this kind of card merging is a malicious behavior to be avoided or a beneficial feature to be desired. This kind of merging appears to be difficult to do in our main construction, but we show how to extend our scheme to allow a limited degree of merging in Section 4.

3. Privacy-Preserving Punch Cards

This section describes our main punch card scheme. In addition to its quantitative improvements over prior work, which we measure in Section 5, our scheme has a number of other desirable properties:

  • Whereas all prior works make use of pairings, either because they rely on Groth-Sahai proofs (Groth and Sahai, 2008) or Pointcheval-Sanders signatures (Pointcheval and Sanders, 2016), our punch card scheme does not require pairings.

  • We require no communication at all to issue a new punch card – a client can do this on its own without server involvement. This removes a potential denial of service opportunity present in prior work, where a client could initiate a number of punch cards without making any purchases, thereby making the server incur unnecessary storage and computation at no cost to the malicious client.

  • Our redemption process involves a client sending a single message to the server, so there is no potential for the process to be interrupted mid-protocol and no need for a recovery process of the form proposed by Bobolz et al. (Bobolz et al., 2020).

3.1. Main Construction

A basic scheme. We will begin with a bare-bones version of our scheme that provides neither privacy nor soundness. From this starting point, we will gradually build up to our actual scheme. Throughout, we will work in a group of prime order .

To set up the initial scheme, the server chooses a secret , and a client chooses a group element to represent the punch card. To receive a hole punch, the client sends to the server, who returns . To redeem a card after punches, the client submits and to the server, who accepts if and has not been previously used in a redeemed card.
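To make the arithmetic concrete, the sketch below simulates the bare-bones scheme in a toy Schnorr group (the order-509 subgroup of squares mod 1019) using Python's modular exponentiation. The group, the secret values, and the punch count are illustrative stand-ins, not the paper's parameters; a real deployment would use an elliptic curve group.

```python
p, q, g = 1019, 509, 4   # toy Schnorr group: g generates the order-q subgroup mod p

s = 123                  # server's secret exponent in Z_q
card = pow(g, 77, p)     # client's chosen group element (the punch card)

# three hole punches: each punch raises the card to the server secret s
punched = card
for _ in range(3):
    punched = pow(punched, s, p)

# redemption: the server accepts if the submitted value equals card^(s^3)
assert punched == pow(card, pow(s, 3, q), p)
```

Because exponents compose modulo the group order, n punches leave the client holding the original card raised to the n-th power of the server secret, which the server can recompute directly at redemption time.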

Adding privacy. The scheme above clearly provides no privacy because the server can link the different times it sees a punch card. We can make punches made on the same card unlinkable by only sending the server masked versions of the punch card, in a way reminiscent of standard oblivious PRF constructions (Naor and Reingold, 2004; Freedman et al., 2005). The punch card is always masked with a fresh value before being sent to the server, so the server only sees the masked card, not the card itself. The mask is removed (via exponentiation by its inverse) before the next mask is applied. This means that the server sees a different random group element each time it punches a card. Moreover, an honest server only sees a random group element and a power of it at redemption time.
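A quick sketch of the masking round trip, in the same toy Schnorr group used throughout these examples (parameters illustrative): the client masks the card with a fresh exponent r, the server punches the masked card, and the client strips the mask by exponentiating with the inverse of r modulo the group order.

```python
import secrets

p, q, g = 1019, 509, 4   # toy Schnorr group: g generates the order-q subgroup mod p
s = 123                  # server secret
card = pow(g, 77, p)     # client's punch card

r = secrets.randbelow(q - 1) + 1     # fresh random mask in Z_q
masked = pow(card, r, p)             # what the server actually sees
punched_masked = pow(masked, s, p)   # server's punch on the masked card

# removing the mask (exponentiation by r^{-1} mod q) yields card^s
unmasked = pow(punched_masked, pow(r, -1, q), p)
assert unmasked == pow(card, s, p)
```

Since r is fresh and uniform each time, the masked value is an independent random group element from the server's perspective, which is what makes successive punches on the same card unlinkable.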

Unfortunately, this does not actually suffice to provide privacy against an actively malicious server. Consider a malicious server who always follows the scheme above, but during one hole punch (for a client it later wishes to re-identify) it uses a different secret so that except with negligible probability. Then when an unsuspecting client attempts to redeem its punch card, instead of submitting , it really submits , allowing the server to identify it.

We can handle the attack above by having the server give a zero knowledge proof of knowledge that it has honestly punched a card. To facilitate this, we require the server setup to also output a public key , for some publicly known generator . Then the server can prove at punching time that it is returning a punch card such that , i.e., that form a DDH tuple (Diffie and Hellman, 1976). This can be proven efficiently with a generic Chaum-Pedersen proof (Chaum and Pedersen, 1992) made non-interactive in the random oracle model (Fiat and Shamir, 1986; Bellare and Rogaway, 1993). The server generates the proof and sends it to the client along with the punched card . The client rejects the updated card if the proof does not verify. We denote proofs using the notation of Camenisch and Stadler (Camenisch and Stadler, 1997), where represents the Chaum-Pedersen proof, and require the standard zero knowledge and existential soundness properties (Boneh and Shoup, 2020).
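The Chaum-Pedersen proof can be sketched as a minimal non-interactive (Fiat-Shamir) version in Python over the same toy Schnorr group. The group parameters and the use of SHA-256 as the challenge hash are illustrative assumptions, not the paper's instantiation; in the scheme, the server runs the prover with its secret and the client's masked card in place of u.

```python
import hashlib
import secrets

p, q, g = 1019, 509, 4   # toy Schnorr group: g generates the order-q subgroup mod p

def challenge(*vals) -> int:
    # Fiat-Shamir: hash the statement and commitments down to a challenge in Z_q
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int, u: int):
    # prove that (g, g^x, u, u^x) is a DDH tuple, i.e. log_g(h) = log_u(v) = x
    h, v = pow(g, x, p), pow(u, x, p)
    k = secrets.randbelow(q - 1) + 1
    A, B = pow(g, k, p), pow(u, k, p)          # commitments
    c = challenge(g, h, u, v, A, B)
    z = (k + c * x) % q                        # response
    return h, v, A, B, z

def verify(u: int, h: int, v: int, A: int, B: int, z: int) -> bool:
    c = challenge(g, h, u, v, A, B)
    return (pow(g, z, p) == A * pow(h, c, p) % p
            and pow(u, z, p) == B * pow(v, c, p) % p)
```

Verification works because g^z = g^(k + cx) = A · h^c, and likewise in the base u, so both equations hold exactly when the same exponent x links both pairs.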

Adding soundness. The two modifications above ensure that the scheme provides privacy, but it still fails to provide soundness, as a malicious client can redeem more points than it has received punches. Consider a client who at first honestly follows the protocol and redeems a punch card. Next, it submits a masked version of the redeemed card for another punch and gets it back with one more punch applied. Finally, it submits the result as another valid punch card. According to the scheme described thus far, the server would accept this second punch card redemption, meaning that the malicious client can redeem a second full card's worth of punches even though it only received one additional punch.

The attack above works because the client can choose any group element it wants as . We modify our scheme to provide soundness by forcing clients to generate as the output of a hash function modeled as a random oracle . In particular, instead of choosing a random , the client chooses a random and sets . When redeeming a punch card, instead of sending , the client sends , and the server checks that . Since the hash function is modeled as a random function, a malicious client cannot find the preimage of a group element under , eliminating the attack.
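One standard way to hash into a prime-order group so that nobody learns a discrete log of the output is to square a hash value modulo a safe prime, which lands in the subgroup of quadratic residues. The sketch below uses this trick in the same toy group; the parameters and secrets are illustrative, and the real scheme would use an elliptic-curve hash-to-group instead.

```python
import hashlib

p, q = 1019, 509          # toy safe-prime group: squares mod p have prime order q

def hash_to_group(secret: bytes) -> int:
    # squaring maps into the order-q subgroup; the result's discrete log is unknown
    x = int.from_bytes(hashlib.sha256(secret).digest(), "big") % p
    return pow(x, 2, p)

s = 321                   # server secret
secret = b"card-secret"

# client derives the card from its secret, then earns n = 10 punches
card = hash_to_group(secret)
for _ in range(10):
    card = pow(card, s, p)

# at redemption the server re-derives H(secret)^(s^n) and compares
assert card == pow(hash_to_group(secret), pow(s, 10, q), p)
```

Because the client can no longer choose the card as an arbitrary group element, it cannot smuggle a previously punched value in as a fresh card, which is exactly the attack the hash eliminates.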

With this defense, our scheme now provides both privacy and soundness. We formalize our construction as follows.

Construction 1 (Punch Card Scheme).

Let be a group of prime order with generator , and let be a hash function , modeled as a random oracle.

We construct our punch card scheme as follows:

  • : Select random and set . Initialize as an empty hash table, and return , , and .

  • : First, select a random secret and a random masking value . Then compute . Let . Return .

  • : Compute as well as the proof of knowledge . Output .

  • : First, verify the proof . If verification fails, output . Otherwise, begin by interpreting as . Then sample a new random masking value and compute . Set , and output .

  • : Begin by interpreting as with and . Then compute . Return (as ) and .

  • : Check whether and whether . If the first check returns true and the second returns false, insert into and return . Otherwise, return .
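Putting the pieces together, here is an end-to-end sketch of the construction in Python, again over a toy Schnorr group, with SHA-256 standing in for both the random oracle and the Fiat-Shamir challenge. All function names, parameters, and the group itself are illustrative stand-ins for the algorithms above, and the group is far too small for real use.

```python
import hashlib
import secrets

p, q, g = 1019, 509, 4   # toy Schnorr group: g generates the order-q squares mod p

def hash_to_group(secret: bytes) -> int:
    # random-oracle hash into the order-q subgroup (by squaring mod the safe prime p)
    return pow(int.from_bytes(hashlib.sha256(secret).digest(), "big") % p, 2, p)

def challenge(*vals) -> int:
    # Fiat-Shamir challenge for the Chaum-Pedersen proof
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def server_setup():
    s = secrets.randbelow(q - 1) + 1
    return s, pow(g, s, p), set()               # secret key, public key, redeemed db

def issue():
    # no server interaction: pick a secret, hash it to a card, apply a fresh mask
    secret = secrets.token_bytes(16)
    r = secrets.randbelow(q - 1) + 1
    return secret, r, pow(hash_to_group(secret), r, p)

def server_punch(sk, pk, card):
    punched = pow(card, sk, p)
    k = secrets.randbelow(q - 1) + 1            # Chaum-Pedersen proof that
    A, B = pow(g, k, p), pow(card, k, p)        # (g, pk, card, punched) is a DDH tuple
    c = challenge(g, pk, card, punched, A, B)
    return punched, (A, B, (k + c * sk) % q)

def client_punch(pk, r, card, punched, proof):
    A, B, z = proof
    c = challenge(g, pk, card, punched, A, B)
    if pow(g, z, p) != A * pow(pk, c, p) % p or \
       pow(card, z, p) != B * pow(punched, c, p) % p:
        return None                             # reject a dishonest punch
    r_new = secrets.randbelow(q - 1) + 1
    # swap the old mask r for a fresh mask r_new
    return r_new, pow(punched, pow(r, -1, q) * r_new % q, p)

def unmask(r, card):
    return pow(card, pow(r, -1, q), p)          # strip the final mask before redemption

def server_verify(sk, db, secret, card, n):
    ok = card == pow(hash_to_group(secret), pow(sk, n, q), p) and secret not in db
    if ok:
        db.add(secret)                          # record the card as redeemed
    return ok
```

A full round trip issues a card, punches it n times (verifying each proof and remasking), unmasks, and redeems once; a second redemption of the same secret is rejected by the database check.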

Observe that the asymptotic complexity of almost every operation in our punch card scheme depends only on the security parameter , with two exceptions. The first exception is that operations on have amortized time complexity , but in the worst case a read/write to could depend on the number of previously redeemed punch cards. The other exception is the exponentiation performed in , where group operations are required. However, since the same is often used for every punch card in practice, the server could precompute to remove the logarithmic dependence on .

3.2. Security

We now discuss the security of our constructions. We begin by proving the privacy of our punch card scheme.

Theorem 3.1 ().

Assuming the existential soundness of the Chaum-Pedersen proof system, our punch card scheme has privacy (Definition 2.2) in the random oracle model.


We begin by describing the simulator .

  • : This simulator samples and outputs a random group element .

  • : This simulator verifies the proof that form a DDH tuple and outputs if verification fails. Otherwise, it samples and outputs a random group element .

  • : This simulator samples a random string and computes . It outputs .

Next, we show through a short series of hybrids that for our punch card scheme.

  • This hybrid is the real privacy experiment .

  • In this hybrid, we add an abort condition to the execution of the experiment. The experiment aborts and outputs 0 if outputs (i.e., it accepts the proof ) but it is not the case that .

    This hybrid is indistinguishable from by the soundness of the Chaum-Pedersen proof system. In particular, an adversary who can distinguish between and can be used by an algorithm to break the soundness of the proof system as follows. plays the role of the adversary in the soundness game for the Chaum-Pedersen proof, and plays the role of the challenger to in either or with probability each. Whenever causes experiment to abort due to the check introduced in this hybrid, submits the proof and the statement to the soundness challenger. Otherwise, outputs .

    The algorithm described above breaks the soundness of the Chaum-Pedersen proof with the same advantage that distinguishes between and . To see why, observe that the only difference in the view of between and occurs when aborts. Thus must cause the experiment to abort with probability at least equal to its distinguishing advantage between and . But whenever aborts, has a statement and proof that violate the soundness of the Chaum-Pedersen proof, so it wins the soundness game with the same advantage.

  • In this hybrid, the challenger switches to record-keeping in the table in the way does and replaces calls to , , and with calls to , , and , respectively.

    This hybrid is indistinguishable from because the distribution of the adversary ’s view is identical in the two hybrids. We will establish this by considering the oracles , , and one at a time.

    • : In , the value returned by this oracle is determined by , which, since is modeled as a random oracle, corresponds to a uniformly random element of . In , the value is directly chosen as a uniformly random element . In both hybrids, is simply the next value of a counter that is incremented with each query. Thus the distribution of the output of the oracle is identical across the two hybrids.

    • : In both and , the oracle verifies and outputs if verification fails (and the game aborts if verification succeeds for a false statement). Thus we only need to consider cases where the proof verifies, i.e., when . In this case, selects a random and outputs , which is distributed uniformly at random in . In , the value of is directly chosen as a uniformly random value . Thus the distribution of the output of the oracle is identical across the two hybrids.

    • : In , this oracle returns the secret used to generate the punch card stored at as well as the value of that punch card after removing the last mask to get . The value of is distributed uniformly at random in . The value of is equal to raised to the server secret as many times as there was a successful call to – that is, a call whose output was not . This is the case because in each such call, the punch card value stored in is raised to and its mask is replaced with a new one. The final unmasking operation results in a punch card value , where is the number of successful calls to .

      In , clearly has the same distribution as in because in it is sampled directly as . The value also has the same distribution as in because the table gradually keeps count of the number of successful calls to , so can compute directly.

  • This hybrid is identical to except the abort condition introduced in is removed. As was the case in , this hybrid is indistinguishable from the preceding hybrid by the soundness of the Chaum-Pedersen proof system. It also corresponds to the ideal privacy game , completing the proof.

Having proven privacy, we now turn to soundness. We prove the soundness of our scheme in the algebraic group model (AGM) (Fuchsbauer et al., 2018), where for every group element the adversary produces, it must also give a representation of that group element in terms of elements it has already seen. This is a strictly weaker model (in the sense that it puts fewer restrictions on the adversary) than the widely-used generic group model (Shoup, 1997), in which some of the prior works on privacy-preserving loyalty programs have been proven secure (Blömer et al., 2019; Bobolz et al., 2020). Our proof relies on the -discrete log assumption, which assumes the computational hardness of winning the following game.

Definition 3.2 (-discrete log game).

The -discrete log game for a group of prime order is played between a challenger and an adversary . The challenger samples and sends to . The adversary responds with a value , and the challenger outputs 1 iff .
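Reconstructing the elided notation, the standard formulation of the game can be sketched as follows (symbols are our own choices, with a group of prime order p, generator g, and parameter q):

```latex
\begin{aligned}
&\textbf{Challenger:}\quad x \xleftarrow{\$} \mathbb{Z}_p,\quad
  \text{send } \left(g,\; g^{x},\; g^{x^2},\; \ldots,\; g^{x^q}\right) \text{ to } \mathcal{A}.\\
&\textbf{Adversary:}\quad \text{respond with } x' \in \mathbb{Z}_p.\\
&\textbf{Challenger:}\quad \text{output } 1 \iff x' = x.
\end{aligned}
```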

Depending on the concrete group in which the assumption is made, the -discrete log game could be vulnerable to Brown-Gallant-Cheon attacks (Brown and Gallant, 2004; Cheon, 2006), which reduce the security of the assumption by a factor of . Fortunately this attack only negligibly affects the security of the scheme, as is at most a polynomial in the security parameter .

We now state and prove our soundness theorem.

Theorem 3.3 ().

Assuming the zero-knowledge property of the Chaum-Pedersen proof system and the -discrete log assumption in , our punch card scheme has soundness (Definition 2.3) in the algebraic group model with random oracles.


Since already refers to the order of the group , we will refer to the -discrete log assumption throughout this proof. The high-level idea of the proof is to program random oracle queries with re-randomizations of powers of given by the -discrete log challenger. Then, whenever a punch card is given by the adversary, the algebraic adversary must also give a representation of the punch card in terms of group elements it has seen before. As such, the challenger can pick out the component and replace it with in its response. Then a punch card that is accepted before receiving punches must include a second representation of , allowing us to solve for .

We now formalize the proof idea sketched above. Our proof proceeds through a series of hybrids.

  • This hybrid is the soundness experiment .

  • In this hybrid, we replace the proof output by with a simulated proof.

    The zero-knowledge property of the Chaum-Pedersen proof guarantees that the proof can be simulated. Since hybrids and are identical save for the real proof in and the simulated proof in , the output of an adversary who distinguishes between and can also be used to distinguish between a real and simulated proof with the same advantage.

  • In this hybrid, we add an abort condition to the execution of the experiment. The experiment aborts and outputs 0 if the oracle ever outputs when it receives a value of that the adversary has not previously queried from the random oracle .

    This hybrid is indistinguishable from because the probability of an adversary successfully triggering this abort condition is negligible in and there are no other differences between and . In order for the oracle to output , it must be the case that outputs 1, which means that . But since is modeled as a random function and has not been queried before, its output is chosen uniformly at random in , that is, . But then is also distributed uniformly at random in , and the probability .

  • In this hybrid, we modify how the challenger computes the output of and of the random oracle . Recall that since is an algebraic adversary, every group element it sends is accompanied by a representation in terms of the previous group elements it has seen: the generator , returned punch cards for the queries it has made to the oracle, and random oracle outputs for the random oracle queries it has made.

    Let for . Whenever the adversary makes a call to the oracle on a previously unqueried point , the challenger samples and sets . Since is distributed uniformly at random in , so is .

    Next, whenever the adversary makes a call to the punch oracle, instead of raising the submitted card to the k-th power directly, the challenger looks at the algebraic representation of the card submitted by the adversary and replaces each occurrence of g_i with g_(i+1), including replacing g = g_0 with g_1. Since the only elements the adversary has seen are g, random oracle outputs, and the previous results of the punch oracle, the challenger can keep track of which elements contain which g_i as it sends them to the adversary. The outputs of the punch oracle in Hybrid 3 are identical to the outputs in Hybrid 2, because the process described here results in exactly the group element that would have been returned by raising the card to the k-th power.

    Since all the changes in Hybrid 3 result in identically distributed outputs as in Hybrid 2, the two hybrids are indistinguishable.
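The challenger's bookkeeping in the last hybrid can be illustrated with a toy sketch (not the paper's implementation): working purely with exponents modulo a prime q standing in for the group order, an element is held as its coefficient vector over the basis g_i = g^(k^i), and a "punch" shifts every index up by one, which coincides with multiplying the element's exponent by k, i.e., raising it to the k-th power. All concrete values below are illustrative.

```python
import random

q = 2_147_483_647  # prime; stands in for the group order
k = 123_456_789    # stands in for the server's secret exponent
n = 5

random.seed(1)
# algebraic representation: coefficients a_i over the basis g_i = g^(k^i)
coeffs = [random.randrange(q) for _ in range(n)]

# exponent of the element this representation denotes: e = sum(a_i * k^i) mod q
e = sum(a * pow(k, i, q) for i, a in enumerate(coeffs)) % q

# the challenger's punch rule: shift every basis index i -> i+1
shifted = [0] + coeffs
e_shifted = sum(a * pow(k, i, q) for i, a in enumerate(shifted)) % q

# shifting the representation is the same as exponentiation by k
assert e_shifted == (e * k) % q
```

The point of the demo is that the challenger never needs to apply k directly to the adversary's card; rewriting the representation produces the identical element, which is what lets the reduction later substitute discrete-log challenge values for the g_i.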

From Hybrid 3, we can prove that any algebraic adversary A who wins the soundness game can be used by an algorithm B, described below, to break the n-discrete log assumption in G. Algorithm B plays the role of the adversary in the n-discrete log game while simultaneously playing the role of the challenger in Hybrid 3. Algorithm B simulates Hybrid 3 exactly to A, except that it uses the n-discrete log challenge messages as the values of g_i. That is, g_i = g^(x^i). Moreover, it sets the public key to g_1 in the setup phase. Observe that the g_i are distributed identically as in Hybrid 3, so this is a perfect simulation of Hybrid 3 with x playing the role of k. The value of n required in the assumption depends on the maximum number of sequential punch requests on the same group element.

Now, if A wins the soundness game, it means that it completed more successful redemptions than the punches it received allow. This, in turn, implies that there was some successful punch card redemption where the accepted card had not been previously punched n times, i.e., the representation of the card does not contain g_n. But since successful verification requires that the card equal H(s) raised to the n-th power of the secret key, and the algebraic adversary must give a representation of the card it submits, we now have two different representations of the same group element, which together yield a degree-n equation in x. This equation can be solved for x using standard root-finding techniques (Shoup, 2006), allowing B to recover x and win the n-discrete log game. ∎
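The final root-extraction step can be made concrete with a toy sketch (illustrative values only, not the paper's code): two representations of the same element over the basis g_i = g^(x^i) subtract to a nonzero polynomial that vanishes at x, and for a small prime we can recover x by exhaustive root search. A real reduction would use polynomial root-finding over Z_q as in (Shoup, 2006).

```python
q = 101   # small prime so we can search exhaustively
x = 37    # the discrete log the reduction wants to recover

# Representation 1: element g^(5 + 9*x^2 + x^3), written over the basis g_i = g^(x^i)
rep1 = [5, 0, 9, 1]

# Representation 2 of the *same* element, using only g_0 (engineered for the demo,
# like an accepted card whose representation lacks g_n)
val = sum(a * pow(x, i, q) for i, a in enumerate(rep1)) % q
rep2 = [val, 0, 0, 0]

# Their difference is a nonzero polynomial with x as a root mod q
diff = [(a - b) % q for a, b in zip(rep1, rep2)]
roots = [t for t in range(q)
         if sum(c * pow(t, i, q) for i, c in enumerate(diff)) % q == 0]

assert x in roots          # x is among the (at most deg-many) candidates
assert len(roots) <= 3     # a cubic over a prime field has at most 3 roots
```

In the reduction proper, B checks each candidate root against the challenge g^x to identify the correct one.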

4. Merging Punch Cards

Having described our main construction, we now consider another feature sometimes enjoyed by physical punch cards that we may want to reproduce digitally: merging partially-filled cards. Just as in real life, it is possible to “merge” two punch cards by redeeming them separately and taking into account the sum of the number of punches across the two cards. However, this process reveals the number of punches held by each card at redemption time, information that the customer may want to hide. We can hide the value of the two cards being merged by resorting to pairings.

Definition 4.1 (Pairings (Boneh and Shoup, 2020)).

Let G_1, G_2, G_T be three cyclic groups of prime order q, where g_1 ∈ G_1 and g_2 ∈ G_2 are generators. A pairing is an efficiently computable function e: G_1 × G_2 → G_T satisfying the following properties:

  • Bilinear: for all u ∈ G_1, v ∈ G_2, and a, b ∈ Z_q, we have e(u^a, v^b) = e(u, v)^(ab).

  • Non-degenerate: e(g_1, g_2) is a generator of G_T.

When G_1 = G_2, we say that the pairing is a symmetric pairing. We refer to G_1 and G_2 as the pairing groups and refer to G_T as the target group.

Using a symmetric pairing, we can quite simply merge two punch cards without revealing the number of punches on each. Before redeeming punch cards p_1 and p_2, which have n_1 and n_2 punches, respectively, with n_1 + n_2 = n, the client computes p* = e(p_1, p_2). To redeem a merged card, the client sends the server the merged punch card p* along with s_1 and s_2, the secrets for the two punch cards merged into p*. The server checks that p* = e(H(s_1), H(s_2))^(k^n). The bilinear property of the pairing ensures that e(H(s_1)^(k^(n_1)), H(s_2)^(k^(n_2))) = e(H(s_1), H(s_2))^(k^(n_1+n_2)) = e(H(s_1), H(s_2))^(k^n). We can even hide whether or not a redeemed punch card is merged by generating a fresh punch card before redemption and merging a complete card with it.
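The merge check can be sanity-checked in a toy exponent-arithmetic model (not a real pairing library): a group element g^a is held as its exponent a, and the symmetric pairing e(g^a, g^b) = gt^(ab) becomes multiplication of exponents mod q. All names and values below are illustrative.

```python
q = 2_147_483_647   # prime; stands in for the group order
k = 987_654_321     # server secret key
n = 10              # punches needed to redeem

h1, h2 = 13, 17     # stand-ins for the hashed secrets H(s1), H(s2)
n1, n2 = 4, 6       # partial punch counts, n1 + n2 = n

# each punch multiplies the card's exponent by k
card1 = h1 * pow(k, n1, q) % q
card2 = h2 * pow(k, n2, q) % q

# client: merged card c* = e(card1, card2) -> product of exponents
merged = card1 * card2 % q

# server: e(H(s1), H(s2))^(k^n) -> h1*h2*k^n in the exponent
check = h1 * h2 % q * pow(k, n, q) % q

assert merged == check  # bilinearity: k^(n1) * k^(n2) = k^n
```

The check passes for any split n_1 + n_2 = n, which is why neither partial count is revealed to the server.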

The performance of symmetric pairings is far worse than that of asymmetric pairings, so we would like to have a scheme that works for asymmetric pairings as well. Unfortunately, directly converting the idea above to asymmetric pairings meets with some difficulties. Since each punch card must belong to either G_1 or G_2, we can only merge pairs of cards where one lies in G_1 and the other lies in G_2. But this is a decision that must be made when a card is first issued, restricting each punch card to being merged only with cards that belong to the other pairing group.

We resolve this problem by splitting each punch card into two components, one in each pairing group. Each component behaves as a punch card in the original scheme. Generating a punch card is similar to the original scheme, but the secret is hashed by two different functions H_1: {0,1}* → G_1 and H_2: {0,1}* → G_2. Each hole punch repeats the punch protocol of the original scheme twice, once in G_1 and once in G_2. Redeeming a card requires merging the G_1 and G_2 components of the two cards with each other as above, and since the client has a version of each punch card in both groups, it can merge them as before.

We formalize this sketch of a solution below. We replace the algorithm from our punch card syntax with a new algorithm that merges two punch cards before redeeming them.

Construction 2 (Mergable Punch Card Scheme).

Let G_1, G_2, G_T be groups of prime order q with generators g_1, g_2, and let H_1: {0,1}^λ → G_1 and H_2: {0,1}^λ → G_2 be hash functions, modeled as random oracles. We construct our punch card scheme as follows:

  • Server setup: Select random k ← Z_q and set pk_1 = g_1^k, pk_2 = g_2^k. Initialize D as an empty hash table, and return sk = k, pk = (pk_1, pk_2), and D.

  • Card issuance (client): First, select a random secret s ← {0,1}^λ and random masking values r_1, r_2 ← Z_q. Then compute p_1 = H_1(s)^(r_1) and p_2 = H_2(s)^(r_2). Let p = (p_1, p_2). Return (s, r_1, r_2, p).

  • Punch (server): First, interpret sk as k and p as (p_1, p_2). Compute p_1' = p_1^k and p_2' = p_2^k, as well as the proofs of knowledge π_1, showing that log_{p_1}(p_1') = log_{g_1}(pk_1), and π_2, showing the analogous statement in G_2. Output (p_1', p_2', π_1, π_2).

  • Punch processing (client): First, interpret pk as (pk_1, pk_2), p as (p_1, p_2), p' as (p_1', p_2'), and π as (π_1, π_2). Next, verify the proofs π_1 and π_2. If either verification fails, output ⊥. Then sample new random masking values u_1, u_2 ← Z_q and compute p_1'' = (p_1')^(u_1), p_2'' = (p_2')^(u_2). Finally, output (p_1'', p_2'').

  • Merge and redeem (client): Begin by interpreting the first card p as (p_1, p_2), the second card q as (q_1, q_2), the first card's secrets as (s_1, r_1, r_2), and the second card's secrets as (s_2, t_1, t_2). Then compute p* = e(p_1^(1/r_1), q_2^(1/t_2)). Return p* and (s_1, s_2).

  • Verification (server): Begin by interpreting the redemption message as (p*, s_1, s_2). Then perform the following checks: whether p* = e(H_1(s_1), H_2(s_2))^(k^n), whether s_1 ∈ D, and whether s_2 ∈ D.

    If the first check returns true and the other checks return false, insert s_1 and s_2 into D and return 1. Otherwise, return 0.

Although not included in our formal construction, our scheme could be extended to allow more punches to occur on a merged card, so long as the client indicates that it is a merged card being punched and the punch/proof occur over elements in the target group G_T. Note that this scheme only allows for two punch cards to be merged. Our general strategy for merging punch cards could be extended to more than two cards using multilinear maps (Boneh and Silverberg, 2002; Garg et al., 2013; Coron et al., 2013), but a construction that allows merging of more than two cards while only relying on efficient standard primitives would require new techniques. This is an interesting problem for future work to address.
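The two-component scheme can be traced end to end in the same toy exponent-arithmetic model used above (a sketch, not the paper's code; all names, the masking convention, and the exponent model are illustrative): each card carries one component per pairing group, punches act on both components, and redemption pairs one card's G_1 component with the other card's G_2 component after unmasking.

```python
import random

q = 2_147_483_647   # prime; stands in for the group order
k = 5_551_212       # server secret
n = 8               # punches needed to redeem
random.seed(7)

def issue():
    # H1(s), H2(s) stand-ins plus multiplicative masks r1, r2
    h1, h2 = random.randrange(1, q), random.randrange(1, q)
    r1, r2 = random.randrange(1, q), random.randrange(1, q)
    return {"h": (h1, h2), "r": (r1, r2), "card": (h1 * r1 % q, h2 * r2 % q)}

def punch(card):
    # server raises both components to k -> multiply exponents by k in this model
    c1, c2 = card
    return (c1 * k % q, c2 * k % q)

a, b = issue(), issue()
for _ in range(3):
    a["card"] = punch(a["card"])   # 3 punches on card a
for _ in range(5):
    b["card"] = punch(b["card"])   # 5 punches on card b

# redeem: unmask a's G1 component and b's G2 component, then "pair" them
inv = lambda t: pow(t, -1, q)
u1 = a["card"][0] * inv(a["r"][0]) % q   # = H1(s_a) * k^3
u2 = b["card"][1] * inv(b["r"][1]) % q   # = H2(s_b) * k^5
merged = u1 * u2 % q

# server: e(H1(s_a), H2(s_b))^(k^n) in the exponent
server_check = a["h"][0] * b["h"][1] % q * pow(k, n, q) % q
assert merged == server_check
```

Because every card carries components in both groups, any two cards can play either role in the pairing, avoiding the issuance-time commitment that a naive asymmetric construction would require.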

We now state and prove our security theorems for the mergable punch card scheme. The only change required in the security games to account for replacing the original redeem algorithm with the merge-and-redeem algorithm is that the redeem oracle in the privacy game takes in two card indices instead of just one and passes both corresponding punch cards to the new algorithm.

Theorem 4.2 ().

Assuming the existential soundness of the Chaum-Pedersen proof system, our mergable punch card scheme has privacy in the random oracle model.

Proof (sketch).

We begin by describing the simulator .

  • Simulated issuance: This simulator samples and outputs two random group elements p_1 ← G_1 and p_2 ← G_2.

  • Simulated punch processing: This simulator interprets the punch message as (p_1', p_2', π_1, π_2) and verifies both proofs, outputting ⊥ if either verification fails. Otherwise, it samples and outputs two random group elements q_1 ← G_1 and q_2 ← G_2.

  • Simulated redemption: This simulator samples two random strings and computes