Weak-Key Analysis for BIKE Post-Quantum Key Encapsulation Mechanism

04/29/2022
by Mohammad Reza Nosouhi, et al.

The evolution of quantum computers poses a serious threat to contemporary public-key encryption (PKE) schemes. To address this impending issue, the National Institute of Standards and Technology (NIST) is currently undertaking the Post-Quantum Cryptography (PQC) standardization project, intending to evaluate and subsequently standardize suitable PQC scheme(s). One such attractive approach, called Bit Flipping Key Encapsulation (BIKE), has made it to the final round of the competition. Despite having some attractive features, the IND-CCA security of BIKE depends on the average decoder failure rate (DFR), a higher value of which can facilitate a particular type of side-channel attack. Although BIKE adopts a Black-Grey-Flip (BGF) decoder that offers a negligible DFR, the effect of weak keys on the average DFR has not been fully investigated. Therefore, in this paper, we first implement the BIKE scheme and then, through extensive experiments, show that weak keys can be a potential threat to the IND-CCA security of the BIKE scheme and thus need attention from the research community prior to standardization. To address this issue, we also propose a key-check algorithm that can potentially supplement the BIKE mechanism and prevent users from generating and adopting weak keys.


1 Introduction and Background

Key encapsulation mechanisms (KEMs) play a critical role in the security of the internet and other communication systems. They allow two remote entities to securely agree on a symmetric key without explicit sharing, which can subsequently be used to establish an encrypted session. The currently used KEMs are mostly based on RSA [32] or ECC [24]. RSA and ECC are PKE schemes whose underlying security relies on the difficulty of the integer factorization and discrete logarithm problems, respectively. However, these problems can be solved in polynomial time by quantum computation models [38] with the so-called Cryptographically Relevant Quantum Computers (CRQC) [31]. Thus, it is believed that the current PKE schemes (and KEMs, consequently) will be insecure in the post-quantum era [27], [6].

To secure KEMs, tremendous research efforts have been made to design quantum-safe PKE schemes. Currently, NIST is also undertaking a standardization project for quantum-safe KEMs. Of the numerous approaches, code-based cryptosystems (CBC) are considered a promising alternative to the existing PKE schemes [10], [18]. Based on the theory of error-correcting codes, their underlying security relies on the fact that decoding a codeword without knowledge of the encoding scheme is an NP-complete problem [37]. The idea of CBC, incepted by McEliece in 1978 [25], has remained secure against classical and quantum attacks, but at the cost of a larger key size. To circumvent this issue, Misoczki et al. designed Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) codes to develop the QC-MDPC variant of the McEliece scheme [26]. This variant has received much attention because it offers comparable security with significantly smaller key sizes. The Bit Flipping Key Encapsulation (BIKE) [30] mechanism, submitted to NIST for standardization as a quantum-safe KEM, is built on top of the QC-MDPC variant. Due to its promising security and performance features, BIKE has been selected in the final round of the NIST standardization competition as an alternate candidate [39]. In addition, BIKE has recently been added to the list of supported KEM schemes for the post-quantum TLS protocol used in the AWS Key Management Service (KMS) offered by Amazon [33].

The QC-MDPC variant and BIKE leverage a probabilistic and iterative decoder in their decapsulation modules. The original design of the QC-MDPC variant employed the orthodox Bit-Flipping (BF) decoder [19] with slight modifications [26]. However, the BF decoder (as a probabilistic and iterative decoder) suffers from a higher DFR. Specifically, the decoder shows poor decoding performance when the number of iterations is restricted for performance considerations. When it fails in the decryption/decapsulation process, performance degrades, and side-channel/reaction attacks can also be facilitated. For example, Guo et al. [20] introduced an efficient reaction attack for the QC-MDPC variant known as the GJS attack. In a GJS attack, the attacker first sends crafted ciphertexts to the victim, Alice, while observing the reaction of her decoder for every ciphertext (i.e., success or failure). Then, utilizing the correlation between the faulty ciphertext patterns and Alice's private key, the attacker can fully recover her private key. Further, Nilsson et al. [29] proposed a novel technique for fast generation of the crafted ciphertexts to improve the efficiency of the GJS attack. This attack can be regarded as a weaker version of a Chosen-Ciphertext Attack (CCA), since the adversary only needs to observe the decoder's reaction without access to a full decryption oracle (i.e., it does not analyze any decrypted plaintext).

To tackle these attacks, the Fujisaki-Okamoto (FO) CCA transformation model has been adopted in BIKE [21], [11]. The FO model uses a ciphertext protection mechanism so that the receiver can check the integrity of a received ciphertext. Thus, the ability of a GJS attacker to craft ciphertexts is limited. Although the FO model significantly mitigates the threat of reaction attacks, the scheme still must deploy a decoder with a negligible DFR to provide Indistinguishability under Chosen-Ciphertext Attack (IND-CCA) security. Sendrier et al. [35] argued that to provide λ-bit IND-CCA security, the average DFR (taken over the whole keyspace) must be upper bounded by 2^(−λ). For this reason, several modifications of the BF decoding algorithm have been proposed to offer a negligible DFR [35, 34, 15, 12, 14, 28]. For example, the latest version of BIKE deploys the Black-Grey-Flip (BGF) decoder [14], the state-of-the-art variant of the BF algorithm. BGF uses only five iterations for decoding while offering a negligible DFR.

However, it has been shown that there are some (private) key structures for which the probabilistic decoders show poor performance in terms of DFR [12], [36]. They are referred to as weak-keys, since they are potentially at risk of disclosure through side-channel/reaction attacks (such as the GJS attack) [20], [29]. Although the number of weak-keys is much smaller than the size of the entire keyspace, their effect on the average DFR must be analyzed to ensure they do not endanger the IND-CCA security of the scheme. For example, Drucker et al. [12] and Sendrier et al. [36] have recently conducted weak-key analyses for QC-MDPC-based schemes and showed that the average DFR is not notably affected by the weak-keys identified so far, implying that the current IND-CCA claims hold. However, the state-of-the-art BGF decoder has not been investigated in their analysis. For example, Sendrier et al. [36] considered the previous version of the BIKE scheme that was submitted to the second round of the NIST competition (i.e., BIKE-1). In BIKE-1, the BackFlip decoder [2] was deployed, which enables the scheme to provide 128-bit security in 100 iterations (i.e., BackFlip-100). Moreover, in their experiments, the number of iterations was limited to 20 for time-saving (still significantly larger than the BGF decoder's five iterations). To compensate for the reduced iterations, their results are compared against 97-bit security, i.e., the estimated security of BackFlip-20.

In view of the aforementioned discussion, the existing analysis of weak-keys does not extend to the latest version of BIKE, which adopts the contemporary BGF decoder. Therefore, it is important to investigate the impact of weak-keys on the latest version of BIKE. To the best of our knowledge, the effect of weak-keys on the average DFR of the BGF decoder has not been investigated. Thus, the IND-CCA security claims of the latest version of BIKE remain unclear. Motivated by this, we first implement the BIKE scheme in Matlab. Then, through extensive experiments, and based on the model for IND-CCA security presented in [21], we show that the contribution of weak-keys to the average DFR of BIKE's BGF decoder is greater than the maximum allowed level needed for achieving IND-CCA security. As a result, the negative effect of weak-keys on the average DFR cannot be ignored and must be addressed before claiming IND-CCA security for BIKE. To address the weak-keys issue, we also propose a key-check mechanism that can be integrated into the key generation module of the BIKE scheme to ensure the private keys generated by users are not weak. The main contributions of this paper are summarized as follows:

  • We perform an implementation of the BIKE scheme with the state-of-the-art BGF decoder and highlight key technical points required to implement the BIKE scheme.

  • Through extensive experiments, and using the formal model for proving IND-CCA security, we show that the negative effect of weak-keys on the average DFR is greater than the maximum allowed level. This may put the IND-CCA security of the BIKE mechanism at risk.

  • We propose a key-check algorithm that can be integrated into the key generation subroutine of BIKE to ensure that users do not generate and adopt weak (private) keys.

The paper is organized as follows. In Section 2, we provide the preliminaries required for understanding the working principles of the BIKE scheme. Section 3 presents the structure of weak-keys in the BIKE scheme and an intuitive understanding of their effect on IND-CCA security. In Section 4, we present the results of our experimental evaluation. Finally, after introducing the key-check mechanism in Section 5, we make concluding remarks in Section 6.

2 Preliminaries

In this section, we present the basic concepts that will help the readers to understand the other sections of the paper. We first briefly review the QC-MDPC variant of the McEliece scheme (we refer the readers to [40, 26] for more information about the McEliece scheme and its QC-MDPC variant). Then, we review the BF decoding algorithm and the state-of-the-art BGF decoder. Finally, we describe the latest version of the BIKE scheme.

2.1 QC-MDPC Codes

Before we review QC-MDPC codes, we present some key concepts and definitions from the field of error correction codes. Error correction codes are widely used in communication protocols and recording systems to ensure reliable data transmission and storage. Considering a block of k information bits, the error correction code computes n − k redundancy bits (based on some encoding equations) and creates an n-bit block of data (called a codeword) consisting of the k information bits and the n − k redundancy bits. The codeword is subsequently sent to the relevant destination, which exploits the redundant bits (based on some decoding rules) to detect and correct any errors in the received message and successfully retrieve the actual information.

Definition 1 (Linear Block Code [9]).

A code C is a linear error correction code if the modulo-2 sum (i.e., XOR binary operation) of any two or more codewords is a valid codeword.

Definition 2 (Hamming weight [8]).

The Hamming weight of a codeword is defined as the number of non-zero bits in the codeword.

Definition 3 (Generator Matrix [9]).

The linear block code C has a k × n generator matrix G which defines the one-to-one mapping between the k-bit message block m and the corresponding n-bit codeword c, i.e., c = mG.

Thus, G is used by the encoder to generate the distinct codeword c associated with the message block m. Note that the number of valid codewords of C is 2^k, which can be much smaller than 2^n (since k < n), i.e., not every binary vector of length n is a valid codeword of C. Thus, C can be considered a k-dimensional subspace of the n-dimensional binary vector space.

Definition 4 (Systematic Code [16]).

C is called a systematic code if its generator matrix is written in the form G = [I_k | A], in which I_k is a k × k identity matrix and A is a k × (n − k) coefficient matrix.

If C is systematic, in each n-bit codeword c, the first k bits are equal to the corresponding message block m, and the rest of the block contains the parity-check (redundant) bits.

Definition 5 (Parity-check Matrix [9]).

The parity-check matrix H of a linear code C is an (n − k) × n matrix that is orthogonal to all the codewords of C, i.e., c is a valid codeword of C if and only if Hc^T = 0, where ^T denotes the matrix transpose operation.

If G is written in the systematic form (i.e., G = [I_k | A]), it can be shown that H is computed as H = [A^T | I_(n−k)]. The decoder of C uses H to decode the received vector.

Definition 6 (Syndrome of a received vector [9]).

Consider x = c + e as a vector received by the decoder, where e is the error vector of maximum Hamming weight t that represents the bits of c flipped by the noisy channel. The syndrome of x is computed as S = Hx^T.

For the syndrome vector S, we have S = Hx^T = H(c + e)^T = He^T, because Hc^T = 0. Thus, once x is received by the decoder, its syndrome S is first computed as S = Hx^T. Then, the decoder needs to obtain e by solving He^T = S, which is then used to compute the sent codeword as c = x + e. Finally, the message block m associated with c is returned as the decoded vector.

Definition 7 (Syndrome Decoding (SD) Problem [7]).

Given the parity-check matrix H and the syndrome vector S, the SD problem searches for a vector e with Hamming weight at most t such that He^T = S.

The SD problem was proved to be NP-complete if the parity-check matrix H is random [7]. This establishes the essential security feature required by code-based cryptosystems to be quantum-resistant. This is because quantum computation models are considered to be unable to efficiently solve NP-complete problems [1].

Definition 8 (Quasi-cyclic (QC) code [4]).

The binary linear code C is QC if there exists an integer n_0 such that every cyclic shift of a codeword by n_0 bits results in another valid codeword of C.

In a systematic QC code, each codeword c consists of n_0 blocks of r bits each. Thus, every block includes both information bits and parity bits. In a QC code with n_0 blocks, we have n = n_0 r.

In this case, it is shown that the parity-check matrix H of C is composed of n_0 circulant blocks of size (n − k) × (n − k) (or r × r, equivalently) [4], which is written as

H = [H_0 | H_1 | ... | H_(n_0 − 1)]. (1)

where each circulant block H_i has the following format:

        | h_0      h_1      ...  h_(r−1) |
H_i =   | h_(r−1)  h_0      ...  h_(r−2) |   (2)
        | ...      ...      ...  ...     |
        | h_1      h_2      ...  h_0     |

Note that H_i can be described by its first row only, i.e., the other rows are obtained by cyclic shifts of the first row. It is also shown that the generator matrix G of the above QC code can be written as

G = [I | (H_(n_0 − 1)^(−1) H_0)^T ... (H_(n_0 − 1)^(−1) H_(n_0 − 2))^T]. (3)

The above format can be proved using the fact that GH^T = 0 and by performing some linear algebra operations on it.

Definition 9 (QC-MDPC codes [26]).

An (n, r, w)-QC-MDPC code is a QC code of length n = n_0 r and dimension k = n − r whose parity-check matrix has a constant row weight of w.

Note that here, we only consider those QC-MDPC codes in which n_0 = 2, i.e., n = 2r and k = r.

The most important characteristic of QC-MDPC codes is that the circulant blocks in the parity-check matrix can be described by their first rows only (r bits each). Thus, to construct H, one needs only the first row of each circulant block. Moreover, the parity-check matrix has a relatively small Hamming weight (i.e., w ≪ n). Therefore, instead of storing all r bits of a first row, the positions (indexes) of its non-zero bits can be used to store H. These are the key features of QC-MDPC codes that enable them to significantly mitigate the key size issue of the original McEliece scheme. As we will see in the next subsection, the private key of the QC-MDPC variant (and BIKE, consequently) is the parity-check matrix of the selected code.
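The cyclic-shift structure just described is easy to state in code. The sketch below (with a tiny illustrative r, not a BIKE parameter) builds a circulant block from its first row and shows the index-based sparse representation that makes QC-MDPC keys compact.

```python
import numpy as np

def circulant(first_row):
    """Build an r x r circulant block: row j is the first row
    cyclically shifted j positions."""
    r = len(first_row)
    return np.array([np.roll(first_row, j) for j in range(r)])

# Toy first row with r = 8 and Hamming weight 3 (illustrative only;
# BIKE Level 1 uses r = 12,323 and much larger weights).
first_row = np.array([1, 0, 0, 1, 0, 0, 1, 0])
H0 = circulant(first_row)

# Every row has the same weight, so the whole r x r block is fully
# determined by the non-zero positions of the first row.
assert all(row.sum() == 3 for row in H0)
sparse_rep = np.flatnonzero(first_row)    # compact storage of the block
assert list(sparse_rep) == [0, 3, 6]
```

Storing only `sparse_rep` instead of the full r x r block is exactly the storage trick described above.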

In the next subsection, we briefly review the new variant of the McEliece scheme that works based on QC-MDPC codes.

2.2 The QC-MDPC PKE Scheme

This cryptosystem is a variant of the McEliece code-based PKE scheme. It was proposed by Misoczki et al. [26] to mitigate the key size problem of the original McEliece scheme. It consists of three subroutines, i.e., Key Generation, Encryption, and Decryption (see Fig. 1).

Fig. 1: Block diagram of the QC-MDPC variant of the McEliece scheme.

2.2.1 Key Generation

Private Key: In this variant, the parity-check matrix H of the underlying (n, r, w)-QC-MDPC code plays the role of the private key. It has the format shown in Eq. (1). To generate H, n_0 circulant blocks of size r × r must be generated. To do this, for each block H_i (0 ≤ i ≤ n_0 − 1), a random sequence of r bits with Hamming weight w_i is generated such that the w_i sum to w. This sequence is taken as the first row of H_i. Then, the other rows are computed through cyclic shifts of the first row (i.e., j cyclic shifts to generate row j + 1, 1 ≤ j ≤ r − 1).

To store the private key, only the indexes of the w non-zero bits of the first rows are needed (roughly w⌈log_2 r⌉ bits). This is because each circulant block is represented by its first row only, which can be stored using the indexes of its non-zero bits. This is much less than what is needed to store the private key of the original McEliece scheme.

Public Key: The generator matrix G of the underlying (n, r, w)-QC-MDPC code is the public key of this cryptosystem. It can be computed from the private key using Eq. (3). Since G is quasi-cyclic (similar to the circulant blocks in H), it can be represented by its first row only, which has n bits. Note that the first k bits belong to the identity matrix and do not need to be stored (they always have a specific format). Thus, n − k = r bits are required to store the public key, which is a significant reduction in key size compared with the original McEliece scheme. Unlike H, the first row of G does not necessarily have a small (and fixed) Hamming weight. Thus, the idea of storing the indexes of non-zero bits cannot be used for the storage of the public key.

2.2.2 Encryption

The encryption of a plaintext message m is performed using the following equation:

x = mG + e,

where e is a random vector of weight t that is determined based on the error-correcting capability of the corresponding decoder.
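In code, this encryption step is a single vector-matrix product plus a sparse error vector over GF(2). The sketch below uses a toy systematic generator matrix with made-up dimensions, not real QC-MDPC parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

k, n, t = 4, 8, 1              # toy sizes; BIKE works with thousands of bits
A = rng.integers(0, 2, size=(k, n - k))
G = np.hstack([np.eye(k, dtype=int), A])   # systematic form G = [I_k | A]

def encrypt(m, G, t):
    """x = mG + e over GF(2), with a random weight-t error vector e."""
    n = G.shape[1]
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=t, replace=False)] = 1
    return (m @ G + e) % 2, e

m = np.array([1, 0, 1, 1])
x, e = encrypt(m, G, t)
# Removing e exposes the systematic part: the first k bits of mG equal m.
assert ((x + e) % 2)[:k].tolist() == m.tolist()
```

The final assertion is exactly the systematic-form leakage discussed later in Section 2.2.3, which motivates the CCA transformation.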

Parameter | Description | Level 1 (128-bit) | Level 2 (192-bit) | Level 3 (256-bit)
r | Size of circulant blocks in the parity-check matrix H | 12,323 | 24,659 | 40,973
w | Row weight of the parity-check matrix H | 142 | 206 | 274
t | Hamming weight of the error vector e | 134 | 199 | 264
ℓ | Size of the generated symmetric key (bits) | 256 | 256 | 256

TABLE I: System parameters of BIKE for different security levels.

2.2.3 Decryption

The receiver performs the following procedure to decrypt the received ciphertext x.

  • Apply x to the corresponding t-error-correcting decoder, which leverages the knowledge of H for efficient decoding. The decoder finds the error vector e and returns the corresponding codeword c = x + e.

  • Return the first k bits of c as the decoded plaintext message m (because G is in the systematic form).

Note that the systematic form of G can make the scheme vulnerable to chosen-ciphertext and message-recovery attacks. The reason is that the first k bits of the ciphertext x include a copy of the plaintext m with some possibly flipped bits (since x = mG + e). In fact, two ciphertexts x_1 and x_2 are most likely distinguishable if an attacker knows their corresponding plaintexts m_1 and m_2. In the worst case, if the first k bits of e are all zero, the ciphertexts are certainly distinguishable. To address this issue, a CCA transformation model can be used (e.g., [23]) that converts the plaintext m into a random vector whose observation brings no useful knowledge to a CCA attacker.

Regarding the above-mentioned decoder, several decoding algorithms have been proposed so far [19, 35, 34, 15, 12, 14, 28]. We refer the readers to [14] and [12] for more information about the most efficient QC-MDPC decoders.
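The core idea shared by these decoders can be sketched compactly: in every iteration, count each bit's unsatisfied parity checks (upc) and flip the bits whose count is highest (or above a threshold). The following is a simplified illustration of the plain BF decoder [19] on a toy code; it is not the BGF variant used by BIKE, which adds black/grey bit classification.

```python
import numpy as np

def bit_flip_decode(H, x, max_iter=10):
    """Plain bit-flipping: repeatedly flip the bits involved in the most
    unsatisfied parity checks until the syndrome is zero."""
    x = x.copy()
    for _ in range(max_iter):
        S = H @ x % 2                  # syndrome of the current estimate
        if not S.any():
            return x                   # all parity checks satisfied
        upc = H.T @ S                  # unsatisfied-check count per bit
        x[upc == upc.max()] ^= 1       # flip the most-suspect bits
    return x                           # may still be wrong: decoding failure

# Toy demonstration with the (7,4) Hamming parity-check matrix.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
c = np.array([1, 0, 1, 1, 0, 1, 0])    # valid codeword: H c^T = 0
x = c.copy(); x[4] ^= 1                # single-bit channel error
assert (bit_flip_decode(H, x) == c).all()
```

Note that on this toy input the decoder briefly flips a correct bit alongside the wrong one before converging; this oscillation is a miniature version of the failure mechanism discussed for weak keys in Section 3.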

Fig. 2: Block diagram of the BIKE scheme. The two operator symbols in the figure represent the concatenation and de-concatenation operations, respectively.

2.3 The BIKE Scheme

BIKE [30] is a code-based KEM scheme that leverages the QC-MDPC PKE scheme for encryption/decryption. It has recently qualified for the final round of the NIST standardization project as an alternate candidate. In previous rounds of the NIST competition, BIKE was submitted in the form of three different versions (BIKE-1, BIKE-2, and BIKE-3), each satisfying the needs of a specific group of cryptographic applications (e.g., bandwidth, latency, security, etc.). However, in the final round, following the recommendation of NIST, it was submitted as a single version that relies heavily on BIKE-2. The final version suggests three sets of system parameters to satisfy the three different security levels defined by NIST, i.e., level 1 (128-bit security), level 2 (192-bit security), and level 3 (256-bit security) (see Table I). To address the IND-CCA security issues that exist in the QC-MDPC variant, the Fujisaki-Okamoto (FO) CCA transformation model has been integrated into the BIKE scheme [21], [11] (see Fig. 2). In addition, in the final version of BIKE, the state-of-the-art BGF decoder [14] has been deployed in the decapsulation subroutine, providing a negligible DFR in five iterations.

BIKE includes three subroutines, namely, key generation, encapsulation, and decapsulation (see Fig. 2). The procedure is started by Bob, who wants to establish an encrypted session with Alice. They need to securely share a symmetric key to start their encrypted session. To do this, Bob first generates his public and private keys by running the key generation subroutine. Then, he sends his public key to Alice, who uses it to generate the ciphertext C and the symmetric key (using the encapsulation subroutine). The first part of C (i.e., c_0) is the main data encrypted by the underlying QC-MDPC scheme, while the second part (i.e., c_1) protects it against malicious manipulation. Then, Alice sends C to Bob through an insecure channel. By running the decapsulation subroutine, Bob applies c_0 to the corresponding QC-MDPC decoder (i.e., the BGF decoder) to decrypt the data. He also checks the integrity of C to ensure it has not been changed. Finally, Bob generates the same symmetric key as Alice computed.

In BIKE, all the circulant matrix blocks (e.g., H_0 and H_1 of the parity-check matrix) are treated as elements of a polynomial ring, since this increases the efficiency of the computations required in the key generation, encapsulation, and decapsulation subroutines. In this regard, considering a = (a_0, a_1, ..., a_(r−1)) as the first row of the circulant matrix A, the r-bit sequence a can be represented by the polynomial a(x) = a_0 + a_1 x + ... + a_(r−1) x^(r−1) (see [2] for more information). In the following, we briefly review the three subroutines of the BIKE scheme. We refer readers to [30] for more detailed information about BIKE.
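The equivalence behind this polynomial-ring view can be checked numerically: multiplying a vector by a circulant matrix is the same as multiplying the corresponding polynomials modulo x^r − 1 over GF(2). The sketch below verifies this on a toy r; it is an illustration of the identity, not BIKE's arithmetic library.

```python
import numpy as np

r = 8  # toy block size

def circulant(first_row):
    return np.array([np.roll(first_row, j) for j in range(len(first_row))])

def polymul_mod(a, b, r):
    """a(x) * b(x) mod (x^r - 1) over GF(2): exponents wrap modulo r."""
    out = np.zeros(r, dtype=int)
    for i in np.flatnonzero(a):
        for j in np.flatnonzero(b):
            out[(i + j) % r] ^= 1
    return out

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=r)   # first row of circulant A
v = rng.integers(0, 2, size=r)   # arbitrary vector / polynomial v(x)

# Vector-matrix product over GF(2) == polynomial product mod (x^r - 1).
assert (v @ circulant(a) % 2 == polymul_mod(v, a, r)).all()
```

Because of this identity, implementations can replace r x r matrix operations with r-coefficient polynomial operations, which is the efficiency gain mentioned above.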

2.3.1 Key Generation

Private Key: Since BIKE works based on the QC-MDPC variant, the private key is the parity-check matrix of the underlying (n, r, w)-QC-MDPC code with n_0 = 2 circulant blocks. To generate the private key, two random polynomials h_0 and h_1 are generated, each with Hamming weight w/2. Then, σ, a random sequence of ℓ bits, is generated. Finally, the private key is set as sk = (h_0, h_1, σ).

Public Key: The public key is set as h = h_1 h_0^(−1).

2.3.2 Encapsulation

The encapsulation subroutine takes the public key h as input and generates the ciphertext C and the symmetric key K. To do this, three hash functions (modelled as random oracles) H, K, and L are defined and used here. The following procedure is performed in this subroutine.

  • Randomly select an ℓ-bit vector m from the message space M.

  • Compute (e_0, e_1) = H(m), where e_0 and e_1 are error vectors of r bits each such that their total Hamming weight is t.

  • Compute C = (c_0, c_1) = (e_0 + e_1 h, m ⊕ L(e_0, e_1)) and send it to the recipient.

  • Compute K = K(m, C) as the secret symmetric key.

2.3.3 Decapsulation

The decapsulation subroutine takes the private key sk and the ciphertext C as input and generates the symmetric key K as follows.

  • Compute the syndrome c_0 h_0 and apply it to the corresponding BGF decoder to obtain the error vectors e_0' and e_1'.

  • Compute m' = c_1 ⊕ L(e_0', e_1'). If (e_0', e_1') ≠ H(m'), set m' = σ.

  • Compute K = K(m', C).

The deployed CCA transformation model prevents a CCA attacker (e.g., a GJS attacker) from freely choosing the error vector required by the attack procedure and submitting the resulting crafted ciphertext to the receiver (i.e., from crafting a ciphertext based on a malicious plan). This is because the error vectors are computed by the one-way hash function H. If the attacker changes the legitimate error vectors, the integrity check at the receiver will fail (i.e., (e_0', e_1') ≠ H(m')). Therefore, to feed the ciphertext with a desired error vector e, the attacker has to find the corresponding vector m such that H(m) = e. This imposes a heavy burden on the attacker, since many queries must be submitted to the random oracle to identify the corresponding vector m. We will discuss this problem in the next section.
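The structure of this protection, deriving the error vector from m via a hash and implicitly rejecting ciphertexts whose re-derived error vector mismatches, can be sketched as follows. This is a structural illustration only: SHA-256 stands in for BIKE's hash functions H, K, and L, and the QC-MDPC encryption/decoding step is abstracted away, so none of the byte-level details are BIKE's.

```python
import hashlib

def H_hash(m: bytes) -> bytes:
    """Stand-in for the random oracle H deriving the error vector from m."""
    return hashlib.sha256(b"H" + m).digest()

def L_hash(e: bytes) -> bytes:
    return hashlib.sha256(b"L" + e).digest()

def K_hash(m: bytes, c: bytes) -> bytes:
    return hashlib.sha256(b"K" + m + c).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encaps(m: bytes):
    e = H_hash(m)                 # error vector is a hash output, not a free choice
    c1 = xor(m, L_hash(e))        # m is masked by L(e)
    return (e, c1), K_hash(m, e + c1)

def decaps(C, sigma: bytes):
    e, c1 = C                     # a real decapsulation would decode e from c_0
    m = xor(c1, L_hash(e))
    if H_hash(m) != e:            # FO-style integrity check
        m = sigma                 # implicit rejection with the secret seed sigma
    return K_hash(m, e + c1)

m, sigma = b"\x01" * 32, b"\xff" * 32
C, K_alice = encaps(m)
assert decaps(C, sigma) == K_alice            # honest ciphertext: keys match
tampered = (xor(C[0], b"\x01" + b"\x00" * 31), C[1])
assert decaps(tampered, sigma) != K_alice     # crafted error vector is rejected
```

The last assertion shows the GJS attacker's problem: altering the error vector breaks the H(m') check, so the attacker learns nothing from the returned key.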

3 Weak-Keys in the BIKE Scheme

In this section, we first present a formal definition for the weak-keys that we consider in this work. Then, we discuss the effect of weak-keys on the IND-CCA security of BIKE and review the weak-key structures that have been identified so far.

3.1 Definition of Weak-Keys

In this work, we consider a private key of the BIKE scheme as weak if decoding the ciphertexts generated using its corresponding public key results in a much higher DFR than the average DFR of the decoder. Regardless of performance degradation issues, such weak-keys can offer a significant advantage to a CCA adversary for conducting a reaction attack (see the next subsection for more details). It is noteworthy that prior research has indicated the possibility of recovering weak private keys from the corresponding public key in the QC-MDPC PKE scheme [5], [3]. Specifically, it has been shown that there exist weak private keys whose structure can facilitate an adversary in compromising them by applying linear algebra techniques such as the extended Euclidean algorithm to their corresponding public keys. However, those weak structures are not relevant to IND-CCA security (the adversary does not need to conduct a chosen-ciphertext attack to compromise those keys) and thus are not considered in this work. Instead, we assume that recovering the private key from the corresponding public key is infeasible in the BIKE scheme. Therefore, the attacker needs to conduct a chosen-ciphertext attack to recover the private key by leveraging the decoder's reactions.

3.1.1 The Negative Effect of Weak-Keys on DFR

To gain an intuitive insight into the impact of weak-keys on the decoder's performance, we elaborate on the following question.

How do columns of H with a large intersection result in a higher DFR? To answer this question, we consider H as the parity-check matrix (i.e., private key) in which columns i and j (i ≠ j) have m non-zero bits at exactly the same positions (i.e., m intersections between the two columns). If H is a normal key (i.e., not a weak key), it has been shown that the largest possible value of m is usually small (e.g., 5) compared with the Hamming weight of each column [36]. Now, assume we have a private key H in which m (for the i-th and j-th columns) is much larger than that of normal keys. Also, assume that e_i = 1 and e_j = 0 in the original error vector e. In this case, it is intuitive that the numbers of unsatisfied parity-check equations (i.e., the upc counters) for the i-th and j-th bits will be highly correlated, since the two bits have similar connections in the corresponding Tanner graph. In other words, both the i-th and j-th bit nodes are involved in almost the same parity-check equations due to the large intersection between them. Thus, in a decoding iteration, if a specific parity-check equation (that involves bit nodes i and j) is unsatisfied, it is counted towards both the i-th and j-th bits. Hence, it is highly likely that the decoder flips both bits, due to their (correlated) upc values being greater than the set threshold. In fact, a real error at the i-th bit creates a situation that convinces the decoder to incorrectly consider the j-th bit as an error bit as well. In the next iteration, the same procedure is performed: this time, the j-th bit that was mistakenly flipped (i.e., the decoder flipped it to 1 in the previous iteration while its real value is 0) again results in high values for the correlated upc counters of bits i and j. Thus, the decoder (again) identifies both of them as error bits and flips their values. Although this corrects the j-th bit, it re-introduces an incorrect value at the i-th bit. This process repeats back and forth across the iterations, making the decoder incapable of finding the correct vector e and leading to a decoding failure.
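The intersection number m is simply the overlap of the two columns' supports, and the coupling of the two upc counters follows directly from it. The sketch below uses toy hand-picked columns (hypothetical values, not from a real BIKE key) to make both points concrete.

```python
import numpy as np

def intersection(col_i, col_j):
    """m: number of parity-check equations shared by bits i and j,
    i.e., positions where both columns of H are non-zero."""
    return int(np.sum(col_i & col_j))

# Two toy columns with a large overlap: both bits participate in almost
# the same parity checks.
col_i = np.array([1, 1, 1, 1, 0, 0, 1, 0])
col_j = np.array([1, 1, 1, 1, 0, 0, 0, 1])
m = intersection(col_i, col_j)
assert m == 4

# For any syndrome S, upc_i = col_i . S and upc_j = col_j . S share the
# m overlapping checks, so they can differ only in the non-shared ones:
# |upc_i - upc_j| <= (w_i - m) + (w_j - m).
S = np.array([1, 0, 1, 1, 0, 1, 1, 0])
upc_i, upc_j = int(col_i @ S), int(col_j @ S)
assert abs(upc_i - upc_j) <= int(col_i.sum() + col_j.sum() - 2 * m)
```

When m is close to the column weight, the bound above is tiny, so the two counters move together and the decoder tends to flip both bits at once, which is the oscillation described in the text.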

3.2 Weak-Keys and IND-CCA Security of BIKE

As stated earlier, BIKE deploys a probabilistic and iterative decoder with a fixed number of iterations, ensuring the constant-time implementation needed to suppress any side-channel knowledge available to an adversary [12]. This decoder can fail to successfully decode a ciphertext in the allowed number of iterations. The decoding capability of such probabilistic decoders is generally represented in terms of the (average) DFR. Prior research has shown that a higher DFR of a decoder deployed in QC-MDPC-based schemes (such as BIKE) can facilitate the efficient recovery of the private key through an attack referred to as the GJS attack [20]. In this attack, some specific formats of the error vector e are used to craft a large group of ciphertexts that are submitted to the decryption oracle (this constitutes a CCA). Then, utilizing the decryption failures, the attacker can recover the private key. As mentioned before, to circumvent this possibility, the BIKE mechanism adopts the FO CCA transformation model [21], which prevents the attacker from cherry-picking the desired error vector e needed for a successful GJS attack (i.e., in BIKE, e is the output of a one-way hash function, which prevents attackers from crafting specifically chosen ciphertexts). However, despite this remedy, a low DFR is still essential for ensuring the required level of security (see below for details).

For a formal proof of IND-CCA security, we first need to define a δ-correct KEM scheme. Based on the analysis provided in [21], a KEM scheme is δ-correct if

E_(sk,pk)[ Pr[ Decaps(sk, C) ≠ K : (C, K) ← Encaps(pk) ] ] ≤ δ. (4)

Note that the term on the left-hand side of the above inequality is the average DFR (hereinafter denoted as DFR_avg) taken over the entire key space and all the error vectors. Thus, if DFR_avg ≤ δ, the KEM scheme is said to be δ-correct.

For a δ-correct KEM scheme, it is shown that the advantage of an IND-CCA adversary A is upper bounded as

Adv(A) ≤ q·δ + ε, (5)

where q is the number of queries that A needs to submit to the random oracle (i.e., the hash function H) to find the valid vector m that yields the desired error vector (see Fig. 2). The term ε in Eq. (5) is a complex term that relates to IND-CPA security; it is not relevant to the IND-CCA analysis and is thus not considered here.

Based on the above definitions, a KEM scheme is IND-CCA secure, offering λ-bit security, if Adv(A) < T_A · 2^(−λ) [21], where T_A is the running time of A, approximated as T_A ≈ q·T_q, with T_q representing the running time of a single query. Since typically T_q ≥ 1, we have T_A ≥ q, such that q·δ ≤ T_A·δ, resulting in the requirement δ ≤ 2^(−λ). Therefore, to provide λ-bit IND-CCA security, the KEM scheme must be δ-correct with δ ≤ 2^(−λ), or equivalently (using Eq. (4)),

DFR_avg ≤ 2^(−λ). (6)

However, for QC-MDPC codes used in the BIKE scheme, there exists no known mathematical model for an accurate estimation of (taken over the whole key space and error vectors). Instead, corresponding to needed security-level is estimated through experiments with limited number of ciphertexts (although sufficiently large) and then applying (linear) extrapolation (see [35] for more details). As a result, the claimed DFR may necessarily not be same as the actual . In the worst case, we assume that there is a group of keys for which the value of DFR is high (i.e., the set of weak-keys ). If the weak-keys have not been used in the experiments performed for estimating the average DFR, then the actual may be larger than the estimated DFR (obtained empirically) such that the condition in Eq. (6) is not met. Therefore, the impact of weak keys on must be investigated to estimate the actual value of average DFR and ensure the IND-CCA security of the scheme. To formulate the equation for IND-CCA security in presence of weak-keys, we consider as size of (i.e., the number of weak-keys), as the average DFR taken over , as the set of other keys (i.e., is equal to the whole key space ), and as the average DFR taken over . In this case, becomes;

$\overline{\mathrm{DFR}} = \alpha\,\overline{\mathrm{DFR}}_w + (1-\alpha)\,\overline{\mathrm{DFR}}_r$,   (7)

where $\alpha = |\mathcal{K}_w|/|\mathcal{K}|$ and $1-\alpha = |\mathcal{K}_r|/|\mathcal{K}|$. By combining Eqs. (6) and (7), we have

$\alpha\,\overline{\mathrm{DFR}}_w + (1-\alpha)\,\overline{\mathrm{DFR}}_r \leq 2^{-\lambda}$.   (8)

From (8), the modified condition for IND-CCA security is obtained as

$\dfrac{|\mathcal{K}_w|}{|\mathcal{K}|}\,\overline{\mathrm{DFR}}_w + \dfrac{|\mathcal{K}_r|}{|\mathcal{K}|}\,\overline{\mathrm{DFR}}_r \leq 2^{-\lambda}$.   (9)

Therefore, to provide $\lambda$-bit IND-CCA security, the set of weak-keys must be negligible enough (compared with $\mathcal{K}$) that the condition in Eq. (9) is satisfied (even if $\overline{\mathrm{DFR}}_w$ is significantly larger than $2^{-\lambda}$).
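The weighted-average condition in Eqs. (7) and (9) is easy to sanity-check numerically. The following sketch (with hypothetical parameter values, using exact rational arithmetic via Python's `fractions` to avoid underflow at these magnitudes) evaluates the mixture and tests it against the $2^{-\lambda}$ threshold:

```python
from fractions import Fraction

def average_dfr(n_weak, n_total, dfr_weak, dfr_rest):
    """Eq. (7): average DFR as a mixture of weak-key and remaining-key DFRs."""
    alpha = Fraction(n_weak, n_total)          # alpha = |K_w| / |K|
    return alpha * dfr_weak + (1 - alpha) * dfr_rest

def meets_ind_cca(n_weak, n_total, dfr_weak, dfr_rest, sec_bits=128):
    """Eq. (9): the average DFR must stay at or below 2^(-lambda)."""
    return average_dfr(n_weak, n_total, dfr_weak, dfr_rest) <= Fraction(1, 2**sec_bits)

# A weak-key set of relative size 2^-100 with DFR_w = 2^-40 is still harmless...
print(meets_ind_cca(1, 2**100, Fraction(1, 2**40), Fraction(1, 2**130)))  # True
# ...but the same set with DFR_w = 2^-20 already violates the condition.
print(meets_ind_cca(1, 2**100, Fraction(1, 2**20), Fraction(1, 2**130)))  # False
```

The second call fails because $2^{-100} \cdot 2^{-20} = 2^{-120} > 2^{-128}$: the weak-key term alone can dominate the bound even when the bulk of the key space decodes well.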

3.3 Structure of Weak-Keys in BIKE

The weak-keys for the BIKE scheme can be determined by adopting the approach proposed in [20] (i.e., the GJS attack methodology for recovering private keys). In this attack, the attacker primarily targets $h_0$ (i.e., the first block of the private key). If $h_0$ is successfully recovered, then the attacker can easily compute $h_1$ from the public key $h$ by performing simple linear algebra operations. Precisely, using Eq. (3), we can re-write $h$ as $h = h_1 h_0^{-1}$, which results in $h_1 = h\, h_0$, since $h_0^{-1} h_0 = 1$. Therefore, the attacker can easily obtain $h_1$.

To find $h_0$, the attacker selects the error vector e from a special subset $\Psi_\delta$. The parameter $\delta$ is a distance between two indexes (positions) $i$ and $j$ in the first row of $H_0$ (denoted by $h_0$), formally defined as

$d(i, j) = \min\{(i - j) \bmod r,\ (j - i) \bmod r\}$.

For example, considering the cyclic structure, the distance between the first and last bits is 1 because $(0 - (r-1)) \bmod r = 1$. Based on the above definition, $\Psi_\delta$ is generated from error vectors whose first half consists of pairs of non-zero bits at distance $\delta$.

Note that the second half of the error vector e selected from $\Psi_\delta$ is an all-0 vector, i.e., the Hamming weight of $e_0$ is $t$. Each $p_i$ indicates the position of a non-zero bit in $e_0$.
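The cyclic distance and the sampling of an error vector from the special subset described above (denoted $\Psi_\delta$ here) can be sketched as follows. This is a minimal illustrative sketch, not the exact GJS sampling procedure; the function names and the pair-placement strategy are ours:

```python
import random

def cyclic_distance(i, j, r):
    """Distance between bit positions i and j in a length-r circulant row."""
    return min((i - j) % r, (j - i) % r)

def sample_error_from_psi(delta, r, t, rng=random.Random(1)):
    """Sample e = (e0, e1): e0 holds t/2 pairs of ones at cyclic distance delta,
    e1 is the all-zero vector (as in the GJS-style error set)."""
    e0 = [0] * r
    placed = 0
    while placed < t // 2:
        p = rng.randrange(r)
        q = (p + delta) % r
        if e0[p] == 0 and e0[q] == 0:  # keep pairs disjoint so wt(e0) == t
            e0[p] = e0[q] = 1
            placed += 1
    return e0, [0] * r
```

For instance, `sample_error_from_psi(7, 149, 10)` yields a weight-10 first half built from five pairs at distance 7, with an all-zero second half.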

As demonstrated in prior research [20], when the error vectors are selected from $\Psi_\delta$, there exists a strong correlation between the decoding failure probability and the existence of distance $\delta$ between the non-zero bits in $h_0$ (i.e., the first row of the first circulant block $H_0$). In other words, if distance $\delta$ exists between two non-zero bits in $h_0$, then the probability of decoding failure is much smaller in contrast with the case when such a distance does not exist. Utilizing this important observation, the attacker first (empirically) computes the probability of decoding failure for various distances $\delta$. This is done by submitting many ciphertexts (generated using the error vectors selected from $\Psi_\delta$) to the decryption oracle and recording the corresponding result of the decryption (i.e., successful or failed). Then, each value of $\delta$ is classified into either the existing or the not-existing class based on the obtained failure probability, i.e., a specific value of $\delta$ with a small failure probability is categorized as existing, and vice versa. Based on the categorized distances, the distance spectrum of $h_0$ is defined as follows:

Moreover, since a distance $\delta$ may appear multiple times in $h_0$, the multiplicity of $\delta$ in the distance spectrum is defined as the number of pairs of non-zero bits of $h_0$ lying at cyclic distance $\delta$:

Finally, based on the obtained distance spectrum and the multiplicity of every distance $\delta$, the attacker can reconstruct $h_0$. To do this, the attacker assigns the first two non-zero bits of $h_0$ at position 0 and at the minimum distance in the spectrum. Then, the third non-zero bit is put (iteratively) at a position such that the two distances between the third position and the previous two positions exist in the spectrum. This iterative procedure continues until all the non-zero bits of $h_0$ are placed at their positions. Note that the attacker needs to perform one or multiple cyclic shifts on the obtained vector to find the actual vector $h_0$. This is because the first non-zero bit was placed at position 0, which is not necessarily the case in $h_0$.
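The distance spectrum and the multiplicities described above are straightforward to compute from the support (non-zero positions) of the first private-key block; a minimal sketch:

```python
from itertools import combinations
from collections import Counter

def distance_spectrum(h0, r):
    """Map each cyclic distance to its multiplicity: how often that distance
    occurs between pairs of non-zero positions in the length-r vector h0."""
    support = [i for i, bit in enumerate(h0) if bit]
    mult = Counter()
    for i, j in combinations(support, 2):
        mult[min((i - j) % r, (j - i) % r)] += 1
    return mult
```

For example, with $r = 11$ and non-zero positions $\{0, 2, 5\}$, the spectrum contains distances 2, 5, and 3 (from the pairs (0,2), (0,5), and (2,5)), each with multiplicity 1.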

The structure of weak-keys in BIKE is determined based on the concepts introduced in the GJS attack (i.e., distance, distance spectrum, and multiplicity of a distance). In this regard, three types of weak-keys have been specified in [36], which are detailed below.

3.3.1 Type 1

In the first type, considering the polynomial representation of binary vectors, the weak-key with $f$ consecutive non-zero positions and Hamming weight $w/2$ is defined as [36]

$h_0 = \phi_\delta\!\left(x^{\ell}\left(\sum_{j=0}^{f-1} x^{j} + g\right)\right)$,   (10)

where $\delta$ is the distance between non-zero bits of the $f$-bit pattern in the weak-key, $\ell$ determines the beginning position of the $f$-bit pattern, and $\phi_\delta$ is a mapping function that replaces $x^{j}$ with $x^{j\delta}$ and thus results in distance $\delta$ between any two successive 1s in $h_0$.

Note that, to construct the weak-key in this format, each block (with Hamming weight $w/2$) is first divided into two sections. The first section is an $f$-bit block in which all the bits are set to 1 (i.e., $\sum_{j=0}^{f-1} x^{j}$ in the polynomial form). The second section is $g$, an $(r-f)$-bit block with Hamming weight $w/2 - f$ and randomly chosen non-zero bits. Then, the two sections are concatenated, and an $\ell$-bit cyclic shift is applied (using the $x^{\ell}$ term). Finally, by applying $\phi_\delta$, the block of $f$ consecutive non-zero bits (at distance 1) of the first section is mapped to a block of $f$ non-zero bits at distance $\delta$. Note that, in this type of weak-keys, considering the distances $k\delta$ ($1 \leq k \leq f-1$), a lower bound for the multiplicity metric can be obtained as $f - k$ for the distance $k\delta$.
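The construction just described (all-ones section, random section $g$, cyclic shift by $\ell$, then the $\phi_\delta$ mapping) can be sketched as follows. The function and parameter names are ours, and the sketch assumes $\gcd(\delta, r) = 1$ so that $\phi_\delta$ permutes positions and preserves the Hamming weight:

```python
import random

def type1_weak_key(r, w_half, f, delta, ell, rng=random.Random(7)):
    """Sketch of a Type 1 weak key: f consecutive ones, padded with
    w/2 - f random ones, cyclically shifted by ell, then mapped by
    phi_delta (position j -> j * delta mod r)."""
    v = [0] * r
    for j in range(f):                 # f-bit all-ones pattern
        v[j] = 1
    while sum(v) < w_half:             # random second section g
        v[rng.randrange(f, r)] = 1
    shifted = [v[(j - ell) % r] for j in range(r)]   # x^ell cyclic shift
    mapped = [0] * r
    for j, bit in enumerate(shifted):                # apply phi_delta
        if bit:
            mapped[(j * delta) % r] = 1
    return mapped
```

After the mapping, the original consecutive pattern sits at positions $(\ell + k)\delta \bmod r$ for $k = 0, \ldots, f-1$, so the distance $\delta$ is guaranteed to appear with multiplicity at least $f - 1$.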

To compute the number of Type 1 weak-keys $|\mathcal{W}_1|$, first consider the second section of a weak-key defined using (10), i.e., $g$. It is an $(r-f)$-bit vector of Hamming weight $w/2 - f$. Thus, we have $\binom{r-f}{w/2-f}$ options for $g$. For the first part of the weak-key (i.e., the $f$-bit pattern), there is only one option, as all the bits are set to 1. The entire (concatenated) package is subsequently shifted (cyclically) by $\ell$ bits, where $0 \leq \ell \leq r-1$. Thus, we have $r$ different options for the cyclic shifts. Finally, there can be $\lfloor r/2 \rfloor$ different mappings $\phi_\delta$, as $1 \leq \delta \leq \lfloor r/2 \rfloor$. Consequently, for $|\mathcal{W}_1|$, we have [36]

$|\mathcal{W}_1| \leq 2\, r \left\lfloor \dfrac{r}{2} \right\rfloor \dbinom{r-f}{w/2-f}$.   (11)

Note that the factor 2 is essential in Eq. (11) because there are two circulant blocks in h. Finally, the fraction of weak-keys for this type is obtained as

$\dfrac{|\mathcal{W}_1|}{|\mathcal{K}|} \leq \dfrac{2\, r \lfloor r/2 \rfloor \binom{r-f}{w/2-f}}{\binom{r}{w/2}}$.   (12)

3.3.2 Type 2

In the second type of weak-keys identified in [36], the focus is on having a single distance with a high multiplicity. In this type, the weak-key with Hamming weight $w/2$ has a high multiplicity $m$ for some distance $\delta$. If $m$ takes its maximum value, the number of weak-keys is upper bounded by $2\, r \lfloor r/2 \rfloor$: in that case, the distance between all successive non-zero bits of $h_0$ is $\delta$. Unlike the first type, the blocks do not have a second section $g$; thus, the binomial term that appeared in Eq. (11) is replaced with 1.

However, for smaller values of $m$, the upper bound is obtained using a more complicated approach. In this regard, consider the general format for $h_0$ that starts with a run of 1s followed by a run of 0s, followed by 1s, etc. In this case, it is shown that the upper bound is [36]

(13)

where $\ell$ is the number of 1-blocks and 0-blocks.

Eq. (13) can be proved by applying the stars and bars principle [17] on the sets of 1-blocks and 0-blocks separately. According to this principle, the number of $\ell$-tuples of positive integers whose sum is $n$ is

$\dbinom{n-1}{\ell-1}$.   (14)

Considering the set of 1-blocks, the number of bits allocated to the first block varies over its admissible range, and for each such value, the principle is applied to the remaining blocks (note that the block sizes must sum to the total number of 1s). According to (14), this yields the first binomial term in Eq. (13). Similarly, applying the principle to the set of 0-blocks yields the second binomial term in Eq. (13). The factor $r$ accounts for the number of different circular shifts that are applicable in each case.
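Eq. (14) is easy to verify by brute force for small parameters; the sketch below counts $\ell$-tuples of positive integers summing to $n$ and compares the count against $\binom{n-1}{\ell-1}$:

```python
from math import comb
from itertools import product

def count_compositions(n, ell):
    """Brute-force count of ell-tuples of positive integers summing to n
    (the quantity that the stars and bars principle evaluates in closed form)."""
    return sum(1 for t in product(range(1, n + 1), repeat=ell) if sum(t) == n)

# Eq. (14) predicts C(n-1, ell-1); check a small case.
print(count_compositions(6, 3), comb(5, 2))  # both 10
```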

3.3.3 Type 3

Unlike the previous types that consider a single block of the parity-check matrix, this type of weak-keys is defined such that the columns of $H_0$ and $H_1$ (in the parity-check matrix) jointly create an ambiguity for the BF-based decoder, resulting in a high DFR. If a column of $H_0$ and a column of $H_1$ both have non-zero bits at exactly the same $m$ positions (i.e., $m$ intersections between the two columns) and $m$ is large, their numbers of unsatisfied parity-check equations (counted during the decoding procedure) will be highly correlated. In this case, if the error bit corresponding to either column is set in the real error vector e, the high level of correlation can prevent the decoder from finding e within the allowed number of iterations.

The upper bound on the number of weak-keys of this type is obtained as [36]:

(15)

Firstly, the $m$ intersection positions should be chosen from the set of $w/2$ positions of the non-zero bits. Then, once the positions of the common non-zero bits are determined, the remaining positions can be chosen from the remaining available positions. Finally, in every case, each of the $r$ circular shifts of the obtained vector is also a weak-key, which results in the $r$ term in Eq. (15). Note that there is no factor of 2 in Eq. (15) (in contrast with Eqs. (11) and (13)) because the second block of h follows the same structure as the first block, with the overlap condition expressed through the component-wise product of the corresponding columns.
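The intersection count $m$ that drives this weak-key type can be computed directly from the circulant structure. A small sketch (helper names are ours) builds a column of each circulant block from its first row and counts the common non-zero positions:

```python
def circulant_column(h, j):
    """Column j of the circulant matrix whose first row is h
    (row i of the matrix is h cyclically shifted by i)."""
    r = len(h)
    return [h[(j - i) % r] for i in range(r)]

def column_intersections(h0, h1, i, j):
    """Number m of row positions where column i of H0 and
    column j of H1 are both non-zero."""
    c0 = circulant_column(h0, i)
    c1 = circulant_column(h1, j)
    return sum(a & b for a, b in zip(c0, c1))
```

In the degenerate case $h_0 = h_1$ and $i = j$, the two columns coincide, so $m$ equals the column weight, i.e., the worst possible correlation.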

4 Experimental Setup

In this section, we first provide some key technical details about our BIKE implementation. Then, we present the results of our extensive experiments and provide a relevant discussion.

4.1 Our Implementation

We implemented the key generation, encryption, and decoding modules of the BIKE scheme in MATLAB. We used the BIKE system parameters suggested in the latest technical specification of BIKE [2] for the 128-bit security level. The simulations were performed on eight powerful servers, each equipped with an Intel(R) Xeon(R) 2.5 GHz CPU (6 processors) and 64 GB RAM. We used the extrapolation technique proposed in [35] to estimate the DFR of the BGF decoder for the weak-keys. In the following, we provide some technical details on our implementation of the key generation, encryption, and decoding procedures.

Key Generation: The most difficult challenge in the implementation of the key generation module is performing the polynomial inversion operation (over $\mathbb{F}_2[x]/(x^r - 1)$) that is needed to compute the public key from the private key (see Eq. (3) and Section 2.3.1). Note that, due to the large value of $r$, computing the inverse in the matrix domain is not an efficient approach and would add considerable computational overhead to our analysis. To obtain a light-weight inverse operation, we adopted the latest extension of the Itoh-Tsujii Inversion (ITI) algorithm [22, 13] for polynomial inversion. This algorithm is based on Fermat's little theorem, which gives the inverse of a polynomial a through exponentiation. The ITI algorithm provides an efficient calculation of this exponentiation through cyclic shifts of a's binary vector. To utilize this approach, the adopted extension of the ITI algorithm uses a novel technique to convert the exponentiation to a series of sub-components that are computed using easy-to-implement cyclic shifts.

Based on the mentioned approach, the adopted algorithm needs to perform only a logarithmic (in $r$) number of multiplications, plus shift-based squaring operations, to compute the inverse; the exact multiplication count depends on the Hamming weight of the exponent written in the binary format. Thus, it is a scalable algorithm in terms of $r$ and much more efficient than inverse computation in the matrix domain.
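A minimal sketch of the Fermat-style inversion is given below. It uses generic square-and-multiply rather than the optimized ITI addition chain, and a naive quadratic multiplication; it assumes the identity $a^{-1} = a^{2^{r-1}-2} \bmod (x^r - 1)$, which holds when $r$ is prime, 2 is primitive modulo $r$, and a has odd Hamming weight (as for valid BIKE key blocks):

```python
def poly_mul(a, b):
    """c = a * b mod (x^r - 1) over GF(2): c_k = XOR_i a_i * b_{(k-i) mod r}."""
    r = len(a)
    return [sum(a[i] & b[(k - i) % r] for i in range(r)) % 2 for k in range(r)]

def poly_inv(a):
    """Inverse via Fermat exponentiation, a^(2^(r-1) - 2), by square-and-multiply.
    The ITI optimization would replace most multiplications with cyclic shifts."""
    r = len(a)
    e = 2 ** (r - 1) - 2
    result = [1] + [0] * (r - 1)   # the constant polynomial 1
    base = a[:]
    while e:
        if e & 1:
            result = poly_mul(result, base)
        base = poly_mul(base, base)
        e >>= 1
    return result
```

As a tiny check, with $r = 5$ (where 2 is primitive mod 5) and $a = 1 + x + x^2$, the product of a with `poly_inv(a)` reduces to the constant polynomial 1.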

We also need to perform polynomial multiplications before generating the public key; the technical solution we developed for this is explained in the next paragraph. Finally, for each set of experiments, we saved the generated public and private keys in a file such that the keys can be easily accessed by the encryption and decoding scripts, respectively.

Encryption: To compute the ciphertexts, polynomial multiplication is the basic operation performed in the encryption module (see Section 2.3.2). We developed the following approach to compute the multiplication of two polynomials a and b of degree at most $r - 1$.

Assuming that $c = a \cdot b \bmod (x^r - 1)$, we computed the binary coefficients of c as $c_0, c_1, \ldots, c_{r-1}$. We observe that

$c_k = \bigoplus_{i=0}^{r-1} a_i\, b_{(k-i) \bmod r}, \qquad 0 \leq k \leq r-1$.   (16)

We implemented Eq. (16) to perform the multiplications required for computing the ciphertext. The same approach is also used to perform the polynomial multiplications in the key generation module.
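Eq. (16) transcribes directly into a naive $O(r^2)$ cyclic-convolution loop; a Python sketch (our MATLAB code is not reproduced here):

```python
def poly_mul(a, b):
    """Cyclic product c = a * b mod (x^r - 1) over GF(2), per Eq. (16):
    c_k is the XOR over i of a_i AND b_{(k-i) mod r}."""
    r = len(a)
    c = [0] * r
    for k in range(r):
        acc = 0
        for i in range(r):
            acc ^= a[i] & b[(k - i) % r]
        c[k] = acc
    return c
```

For example, with $r = 3$, multiplying $x$ by $x^2$ wraps around to give the constant polynomial 1, since $x^3 \equiv 1 \pmod{x^3 - 1}$.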

Decoding: We implemented the BGF decoder based on Algorithm 1 provided in [2] for the 128-bit security level. In each iteration, the syndrome vector is first updated using the received ciphertext/updated error vector and the private key. Then, the number of unsatisfied parity-check (upc) equations for each bit node is counted and compared with a threshold that is a function of the Hamming weight of the syndrome vector updated at each iteration. If the counter is larger than the threshold, the relevant bit in the error vector is flipped. The next iteration is executed using the updated error vector. Finally, after the allowed number of iterations, if the updated syndrome S is equal to the all-zero vector, vector e is returned as the recovered error vector; otherwise, the decoder returns failure.

Note that, in the first iteration of the BGF decoder, two additional steps are performed that are related to the Black and Gray lists of the bit nodes. These two lists are created and maintained in the first iteration to keep track of those bit flips that are considered uncertain. The Black list includes those bit nodes that have just been flipped, while the Gray list maintains the indexes of those bit nodes whose upc counter falls just below the threshold. Then, to gain more confidence in the flipped bits, a Black/Gray bit node is flipped only if its updated upc counter is larger than a second, empirically set threshold.

Fig. 3: Result of the linear extrapolation technique proposed in [35] for estimation of the DFR with varying parameter values (panels (a)-(f)).

4.2 Experimental Methodology

As mentioned before, there exists no formal mathematical model that leads to a precise computation of the DFR in BF-based decoders. To circumvent this, in prior works, the DFR is estimated empirically. We adopt a similar empirical approach for DFR estimation in this work, but with an emphasis on weak-keys, a feature that has been overlooked in prior works, specifically in the context of BGF decoders (i.e., the decoder recommended for the BIKE mechanism submitted to NIST). To conduct our analysis and visualize the impact of weak-keys upon BIKE (i.e., the BGF decoder), we leveraged our MATLAB implementation detailed above in Section 4.1. We started our analysis by crafting Type 1 weak-keys (see Section 3.3.1) for varying values of $r$ (selected such that 2 is primitive modulo $r$) and by incrementing the parameter $f$ from 5 to 40 in steps of 5 for each value of $r$.

Ideally, one needs to perform the analysis at the value of $r$ that results in the DFR corresponding to the required level of security. This is because prior research has shown that the average DFR must be upper bounded by $2^{-\lambda}$ for ensuring $\lambda$-bit IND-CCA security [35]. Therefore, for 128-bit security (i.e., the minimum requirement for NIST standardization), the DFR of the deployed decoder should be at most $2^{-128}$. In other words, on the order of $2^{128}$ ciphertexts must be generated and applied to the decoder to record a single failure, which is impracticable even on a powerful and efficient computing platform. In view of this bottleneck, prior research (see [35]) has resorted to extrapolation techniques applied to the DFR curve obtained with some smaller values of $r$ (small compared with the value needed for a DFR of $2^{-128}$, but sufficiently large to estimate the overall trend of the DFR). This technique is based on the assumption (supported by empirical data) that $\log_2 \mathrm{DFR}(r)$ is a concave and decreasing function of $r$. More precisely, in this approach, the DFR is empirically obtained for some smaller values of $r$ (which result in relatively large DFRs that can be measured using simulation), and then the last two points on the DFR curve are linearly extrapolated to obtain the third point that corresponds to the value of $r$ needed for the target security level.
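The two-point linear extrapolation on the $\log_2$-DFR curve amounts to the following short sketch (the numeric values in the example are illustrative, not measured data); under the concavity assumption, the extended chord lies above the true curve, so the result is a conservative upper estimate of $\log_2$ DFR at the target $r$:

```python
def extrapolate_log_dfr(r1, log2_dfr1, r2, log2_dfr2, r_target):
    """Extend the line through the last two measured (r, log2 DFR) points
    out to the target block length r_target."""
    slope = (log2_dfr2 - log2_dfr1) / (r2 - r1)
    return log2_dfr2 + slope * (r_target - r2)

# Illustrative: DFRs of 2^-10 at r=9000 and 2^-20 at r=10000
# extrapolate to roughly 2^-40 at r=12000.
print(extrapolate_log_dfr(9000, -10, 10000, -20, 12000))
```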

We adopt a similar methodology for our analysis; i.e., for each value of $f$ (5 to 40), we compute the DFR at two relatively small values of $r$ before extrapolating to the value proposed in BIKE [30] (corresponding to a DFR of $2^{-128}$). Precisely, we compute the DFR at each point with at least 1000 observed failures, ensuring a confidence interval of 95% [35]. Our analysis revealed that, as per expectations, the DFR increases with $f$, thereby allowing us to move the tested values of $r$ upward as $f$ grows. Moreover, for the largest values of $f$, since we expected large values of the DFR, we did not need to perform the extrapolation approach, i.e., the DFR was directly measured at the proposed value of $r$.

Once we obtain the DFR for each value of $f$ (5-40), Eq. (9) suggests that, for providing $\lambda$-bit IND-CCA security, the