Efficient and Secure ECDSA Algorithm and its Applications: A Survey

by Mishall Al-Zubaidie, et al.

Public-key cryptography algorithms, especially elliptic curve cryptography (ECC) and the elliptic curve digital signature algorithm (ECDSA), have been attracting attention from many researchers in different institutions because these algorithms provide security and high performance when used in many areas such as electronic healthcare, electronic banking, electronic commerce, electronic vehicular systems, and electronic governance. These algorithms heighten security against various attacks while improving performance to obtain efficiencies (time, memory, reduced computation complexity, and energy saving) in constrained-source environments and large systems. This paper presents a detailed, comprehensive and up-to-date survey of the ECDSA algorithm in terms of performance, security, and applications.





1 Introduction

Public-key encryption algorithms such as elliptic curve cryptography (ECC) and the elliptic curve digital signature algorithm (ECDSA) have been used extensively in many applications [1], especially in constrained-resource environments, due to their effectiveness in these environments. These algorithms have appropriate efficiency and security for such environments. Constrained-resource environments such as wireless sensor networks (WSN), radio frequency identifiers (RFID), and smart cards require high speed, low power consumption, and little bandwidth. ECC has been considered appropriate for these constrained-source environments [2]. These algorithms provide important security properties. For example, ECDSA provides integrity, authentication, and non-repudiation. ECDSA has been proven efficient because it uses small keys; thus the computation cost is small compared with other public-key cryptography algorithms, such as Rivest-Shamir-Adleman (RSA), the traditional digital signature algorithm (DSA) and ElGamal. For example, ECDSA with a 256-bit key offers the same level of security as the RSA algorithm with a 3072-bit key [3, 4]. Table 1 [5, 6, 7] shows a comparison of key sizes for public-key signature algorithms.

Algorithm | Key sizes (bits) | Ratio | Author(s) | Year | Based on
RSA | 1024 / 2048 / 3072 / 7680 / 15360 | 1:1 | Rivest, Shamir and Adleman | 1977 | Integer factorization
ElGamal | 1024 / 2048 / 3072 / 7680 / 15360 | 1:1 | Taher Elgamal | 1985 | Discrete logarithm
DSA | 1024 / 2048 / 3072 / 7680 / 15360 | 1:1 | David W. Kravitz | 1991 | Multiplicative group discrete log. (variants: Schnorr, Nyberg-Rueppel)
ECDSA | 160-223 / 224-255 / 256-383 / 384-511 / 512-more | 1:6-30 | Scott Vanstone | 1992 | Elliptic curve discrete log.
Table 1: Key sizes and related information for public-key signature algorithms

The preservation of efficiency and security in ECDSA is important. On one hand, several approaches have been developed to improve the efficiency of the ECDSA algorithm to reduce the cost of computation, energy, memory, and processor usage. The operation that consumes the most time in ECC/ECDSA is point multiplication (PM), also called scalar multiplication (SM). ECC uses PM for encryption and decryption, while ECDSA uses this operation to generate and verify the signature [8]. One can improve PM efficiency by improving finite field arithmetic (such as inversion, multiplication, and squaring), the elliptic curve model (such as Hessian and Weierstrass), the point representation (such as Projective and Jacobian), and the PM method (such as the Comb and Window methods) [9]. Many researchers have made improvements to PM to increase the performance of ECC/ECDSA, as we will see in Section 4.
On the other hand, the security improvement in ECDSA is no less important than its efficiency because this algorithm is designed primarily to provide security properties. ECDSA, like previous algorithms, may suffer from security vulnerabilities such as weak randomness, a bad random source [1], or leaking bits of the private key. Many researchers have also made improvements to close security gaps in the ECC/ECDSA algorithm by providing countermeasures against various attacks. But when selecting a countermeasure, there should be a balance between security and efficiency [10]. To maintain the security of these algorithms, it is important to use finite fields (either prime or binary) recommended by credible institutions such as the Federal Information Processing Standards (FIPS) or the National Institute of Standards and Technology (NIST). The choice of appropriate curves and finite fields according to authoritative organizations’ standards leads to secure ECDSA implementations [11]. Therefore, we note from the above that any encryption or signature algorithm should possess high performance and a high security level.

1.1 Our Contributions

Our contribution in this survey is to provide an updated study of three important parts in ECC/ECDSA. These items can be summarized as follows:


  • Presenting a study on the efficiency of ECC/ECDSA in terms of speed, time, memory, and energy.

  • Investigating the security problems of the ECC/ECDSA and countermeasures.

  • Giving a description of the most important applications that use ECC/ECDSA algorithms.

1.2 Structure of the paper

The remainder of this paper is structured as follows: Section 2 provides basic concepts and general information about the ECC and ECDSA algorithms. Section 3 describes existing surveys about ECC/ECDSA. Efficiency improvements to the ECDSA algorithm are presented in Section 4. In Section 5, we show security improvements to the ECDSA algorithm through countermeasures against attacks. ECDSA applications are described in Section 6. Finally, we present the conclusion and future work in Section 7.

2 Preliminaries of ECDSA

In this section, we will present fundamentals of elliptic curve cryptography (ECC) and basic concepts of the ECDSA algorithm.

2.1 ECC

ECC has been used to encrypt data to provide confidentiality in communication networks with limited capacity in terms of power and processing. This algorithm was independently proposed by Neal Koblitz and Victor Miller in 1985 [12]. It depends on the discrete logarithm problem (DLP), which is impervious to different attacks when parameters are selected accurately [13], i.e., the difficulty of obtaining k from P and Q = kP (where k is an integer and P and Q are two points on the curve). The small parameters used in ECC help to perform computations quickly. These computations are important in constrained-source environments with limited processing power, memory, bandwidth or power [7]. ECC provides encryption, signature and key exchange approaches [12]. Many operations are performed in ECC algorithms, organized in four layers, as shown in Figure 1 [14].

[Figure 1 shows the ECC operation hierarchy, top to bottom: cryptographic protocols; elliptic curve point multiplication (PM); point operations (point addition and point doubling); and finite field arithmetic.]

Figure 1: Arithmetic operations in ECC hierarchy

ECC uses two finite fields (the prime field and the binary field). The binary field uses two basis representations (normal and polynomial basis) [7], and is well suited to implementation in hardware [14]. Let q indicate the field type: if q = p (where p is a large prime), then ECC uses the prime field F_p. In the second case, if q = 2^m, then ECC uses the binary field F_2^m, where m is a prime integer [15]. ECC consists of a set of points (x, y), where x and y are integers, together with the point at infinity O, which provides an identity for the Abelian group law; the points satisfy the long form of the Weierstrass equation (in Affine coordinates):

y^2 + a1xy + a3y = x^3 + a2x^2 + a4x + a6


When the prime field is used with ECC, the simplified equation is as follows:

y^2 = x^3 + ax + b

where a, b ∈ F_p and 4a^3 + 27b^2 ≠ 0 (mod p). The law of chord-and-tangent is used in ECC to add two points on the curve. Let us suppose that P = (x1, y1) and Q = (x2, y2) are two points on the curve; the sum of these two points is a new point R = (x3, y3) (i.e., P + Q = R) [7]. ECC uses two addition operations, point addition (P + Q) and point doubling (P + P) (Figure 1), as in the following equations.

In the case of point addition, where P ≠ ±Q:

x3 = λ^2 − x1 − x2,   y3 = λ(x1 − x3) − y1

using the slope λ = (y2 − y1) / (x2 − x1).

In the case of point doubling, where R = 2P and P ≠ −P:

x3 = λ^2 − 2x1,   y3 = λ(x1 − x3) − y1

using the slope λ = (3x1^2 + a) / (2y1).

When the binary field is used with ECC, the simplified equation is as follows:

y^2 + xy = x^3 + ax^2 + b

where a, b ∈ F_2^m and b ≠ 0 [7]. The addition operations take the following form.

In the case of point addition, where P ≠ ±Q:

x3 = λ^2 + λ + x1 + x2 + a,   y3 = λ(x1 + x3) + x3 + y1

using the slope λ = (y1 + y2) / (x1 + x2).

In the case of point doubling, where R = 2P:

x3 = λ^2 + λ + a,   y3 = x1^2 + (λ + 1)x3

using the slope λ = x1 + y1/x1.
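The prime-field formulas above can be checked numerically. The sketch below is a toy verification only, assuming the small textbook curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1) (standardized curves are used in practice); it implements the Affine slope formulas exactly as written:

```python
# Check the prime-field group-law formulas on a small curve:
# y^2 = x^3 + 2x + 2 over F_17 (a common textbook curve, not a standard one).
p, a, b = 17, 2, 2

def on_curve(P):
    """True if P satisfies y^2 = x^3 + ax + b (mod p); None represents O."""
    if P is None:
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def point_add(P, Q):
    """Chord-and-tangent addition using the Affine slope formulas above."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # (3x1^2 + a)/(2y1)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # (y2 - y1)/(x2 - x1)
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

G = (5, 1)
print(point_add(G, G), on_curve(point_add(G, G)))  # (6, 3) True
```

Doubling G = (5, 1) gives 2G = (6, 3), which again satisfies the curve equation, confirming that the doubling slope and coordinate formulas are consistent.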
ECC operations for encryption and decryption are explained in Algorithm 1 [16].

1: Alice and Bob use the same domain parameters D = (q, a, b, G, n, h), where a and b are the curve coefficients, q determines the field type, G is the base point, n is the order of G, and h is the cofactor.
2: Alice selects a random integer d_a as her private key.
3: Alice generates the public key Q_a = d_a·G and sends Q_a and D to Bob.
4: Bob receives Q_a from Alice and selects a random integer k as an ephemeral private key, where 1 ≤ k ≤ n−1.
5: Bob encodes the message m as a point M on the elliptic curve.
6: Bob computes C1 = k·G and C2 = M + k·Q_a, and sends C1 and C2 to Alice.
7: Alice receives Bob’s message and decrypts it by computing M = C2 − d_a·C1 to obtain the plaintext.
Algorithm 1: ECC encryption and decryption algorithm
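The steps of Algorithm 1 can be sketched end-to-end. This is a minimal toy example, assuming the same small curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1) of order n = 19, and a message already encoded as a curve point; the names (d_a, Q_a, C1, C2, M) mirror the algorithm's steps and are illustrative:

```python
# Sketch of Algorithm 1 (EC ElGamal-style encryption/decryption) on a toy
# curve: y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1) of order n = 19.
p, a = 17, 2
G, n = (5, 1), 19

def add(P, Q):
    """Affine chord-and-tangent point addition; None represents O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication kP."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def neg(P):
    """Point negation: -(x, y) = (x, -y)."""
    return None if P is None else (P[0], -P[1] % p)

# Steps 1-3: shared domain parameters; Alice's key pair.
d_a = 7                     # Alice's private key
Q_a = mul(d_a, G)           # Alice's public key, sent to Bob
# Steps 4-6: Bob encrypts the point-encoded message M with ephemeral k.
M = mul(3, G)               # toy message point
k = 4                       # ephemeral key, 1 <= k <= n-1
C1, C2 = mul(k, G), add(M, mul(k, Q_a))
# Step 7: Alice decrypts M = C2 - d_a*C1.
recovered = add(C2, neg(mul(d_a, C1)))
print(recovered == M)       # True
```

Decryption works because C2 − d_a·C1 = M + k·Q_a − d_a·k·G = M + k·d_a·G − d_a·k·G = M.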

2.2 ECDSA

The ECDSA algorithm is used to guarantee data integrity and prevent tampering with the data. This algorithm was proposed by Scott Vanstone in 1992. Data integrity of messages is important in networks because an attacker can modify a message while it is transferred from source to destination [17]. Many organizations have adopted it as a standard, such as ISO (1998), ANSI (1999), and IEEE and NIST (2000) [7]. This algorithm is similar to the digital signature algorithm (DSA): both algorithms depend on the discrete logarithm problem (DLP), but the ECDSA algorithm uses a set of points on a curve and generates small keys. The ECDSA algorithm with a 160-bit key provides security equivalent to symmetric cryptography with an 80-bit key [17]. It is dramatically convenient for constrained-source devices because it uses small keys and provides fast signature computation [18]. Moreover, four point multiplication operations are used in the ECDSA algorithm: one in public key generation, one for signature generation and two for signature verification [19]. In addition, this algorithm consists of three procedures: key generation, signature generation, and signature verification. These operations are explained as follows:

  • Key generation:

    1. Select a random or pseudorandom integer d in the interval [1, n−1].

    2. Compute Q = dG.

    3. The public key is Q; the private key is d.

  • Signature generation:

    1. Select a random or pseudorandom integer k, 1 ≤ k ≤ n−1.

    2. Compute kG = (x1, y1) and convert x1 to an integer x̄1.

    3. Compute r = x̄1 mod n. If r = 0 then go to step 1.

    4. Compute k^(−1) mod n.

    5. Compute SHA-1(m) and convert this bit string to an integer e.

    6. Compute s = k^(−1)(e + dr) mod n. If s = 0 then go to step 1.

    7. The signature for the message m is (r, s).

  • Signature verification:

    1. Verify that r and s are integers in the interval [1, n−1].

    2. Compute SHA-1(m) and convert this bit string to an integer e.

    3. Compute w = s^(−1) mod n.

    4. Compute u1 = ew mod n and u2 = rw mod n.

    5. Compute X = u1·G + u2·Q.

    6. If X = O then reject the signature. Otherwise, convert the x-coordinate of X to an integer x̄1, and compute v = x̄1 mod n.

    7. Accept the signature if and only if v = r.
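The three procedures can be sketched together in code. The example below is a toy implementation only, assuming the small textbook curve y^2 = x^3 + 2x + 2 over F_17 with G = (5, 1) and group order n = 19; real deployments use standardized curves (e.g. P-256) and SHA-2 rather than SHA-1, which is kept here only to match the description above:

```python
import hashlib

# Toy ECDSA over the textbook curve y^2 = x^3 + 2x + 2 on F_17,
# base point G = (5, 1) of prime order n = 19. Illustrative only.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def point_add(P, Q):
    """Chord-and-tangent addition in Affine coordinates; None is O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Double-and-add point multiplication kP."""
    R = None
    while k:
        if k & 1: R = point_add(R, P)
        P, k = point_add(P, P), k >> 1
    return R

def hash_int(msg):
    # SHA-1 is used to match the survey's description; use SHA-2 in practice.
    return int.from_bytes(hashlib.sha1(msg).digest(), "big") % n

def sign(d, msg, k):
    """One signing attempt with ephemeral k; returns None if r or s is 0."""
    r = scalar_mult(k, G)[0] % n
    if r == 0: return None
    s = pow(k, -1, n) * (hash_int(msg) + d * r) % n
    return (r, s) if s != 0 else None

def verify(Q, msg, sig):
    r, s = sig
    if not (1 <= r <= n - 1 and 1 <= s <= n - 1): return False
    w = pow(s, -1, n)
    u1, u2 = hash_int(msg) * w % n, r * w % n
    X = point_add(scalar_mult(u1, G), scalar_mult(u2, Q))
    return X is not None and X[0] % n == r

d = 7                              # private key
Q = scalar_mult(d, G)              # public key Q = dG
msg = b"survey"
sig = next(s for k in range(1, n) if (s := sign(d, msg, k)))
print(verify(Q, msg, sig))         # True
```

Verification succeeds because u1·G + u2·Q = s^(−1)(e + dr)·G = k·G, so the recovered x-coordinate reduces to r.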

The ECDSA algorithm becomes unsuitable for signing messages (integrity) if used poorly or incorrectly. Validation of domain parameters is important to ensure strong security against different attacks; this algorithm is strong only if the parameters are well validated [20, 21]. The authors’ recommendation is to update the validation scheme.

2.3 ECDSA Implementations on Constrained Environments

The performance and security offered by the ECDSA algorithm make it suitable for several implementations on WSN, RFID, and smart cards. Digital signatures with ECDSA are more efficient on constrained-resource devices than those of DSA and RSA. Many authors have pointed to the possibility of using ECDSA in constrained-resource environments (limited memory, energy, and CPU capability). In the following subsections, we explain the use of ECDSA in these implementations.

2.3.1 Wireless Sensor Network (WSN)

A WSN consists of a group of nodes that communicate wirelessly with each other to gather information about a particular environment in various applications [22]. This network often has restricted sources such as energy and memory. Therefore, a WSN needs efficient algorithms to reduce the complexity of the computation in order to increase the length of the network lifetime. In addition, it requires a high level of security to prevent attacks. Several researchers have investigated the use of ECDSA algorithm in WSN and explained that ECDSA is convenient for WSN.
For example, Wander et al. [23] presented an energy study of public-key cryptography (ECC/ECDSA, RSA) on the Mica2dot sensor node with an Atmel ATmega128L (8-bit). They found that the transmission cost is double the receiving cost. They analysed signatures in ECDSA with a 160-bit key and RSA with a 1024-bit key: the signature verification cost in ECDSA is larger than signature generation, while RSA verification is cheaper than RSA signature generation. They noted that ECDSA has a lower energy cost than RSA. They concluded that ECC/ECDSA is more effective than RSA and feasible on constrained-source devices (WSN) because it generates small keys and certificates at the same security level as RSA. Also, an implementation of 160-bit ECC/ECDSA and 1024-bit RSA was carried out on the MICAz sensor node [24]. The authors used hybrid multiplication to reduce memory accesses. The ECDSA results on MICAz are a signature generation time of 1.3 s and a signature verification time of 2.8 s. For comparison, the authors also implemented ECDSA on TelosB, where the MICAz times were slightly lower than TelosB’s. The authors proved the possibility of using RSA and ECDSA on WSN. In addition, ECDSA (SHA-1) and RSA (AES) were analysed on several types of sensor nodes (Mica2dot, Mica2, MICAz, and TelosB) in terms of energy and time [25]. ECDSA uses short keys (160-bit), which reduces memory, computation, energy and transmitted data size, and is thus better than RSA. De Meulenaer et al. [26] discussed the energy cost evaluation (communication, computation) on WSN (TelosB and MICAz) using the Kerberos key distribution protocol (symmetric encryption) and the ECDH-ECDSA key agreement protocol (asymmetric encryption). They noted that the TelosB sensor node consumes less power than MICAz and that Kerberos performs better than ECDH-ECDSA. Moreover, Fan and Gong [27] implemented ECDSA on MICAz motes with the binary field (163-bit). They improved signature verification via the idea of cooperation between adjacent nodes.
Also, an ECDSA implementation was presented for the IRIS sensor node [28]; because this node has an 8-bit microcontroller, the author modified the SHA-1 code from the original 32 bits to 8 bits. Through implementation, the original algorithm proved better in size and time than the modified algorithm. The author demonstrated the possibility of using the ECDSA algorithm on the IRIS sensor node with its 8-bit microcontroller.
Recently, researchers in [29, 30] have applied the ECDSA algorithm as a lightweight authentication scheme in WSN. This demonstrates the effectiveness and efficiency of using ECDSA in WSN in terms of security and performance.

2.3.2 Radio Frequency Identifier (RFID)

Another implementation area for ECDSA is the radio frequency identifier (RFID). RFID is one of the technologies used in wireless communication. This technology has been used significantly in various fields, but it suffers from constrained resources such as area and power. Therefore, many researchers have considered the ECDSA algorithm the best choice for RFID tags because, unlike symmetric cryptography algorithms, ECDSA does not require pre-shared secret keys to be stored on the tag.
A hardware implementation of the ECDSA algorithm on RFID technology was presented in [31]. The authors used the ECDSA algorithm to authenticate the entity. They applied this algorithm over a prime field according to SECG standards. They accelerated the multiplication algorithm by combining integer multiplication and fast reduction. Through implementation, they obtained good results in reducing chip area and power consumption at 1 MHz. They concluded that the ECDSA computation requires 511864 clock cycles. Also, Hutter et al. [32] proposed an ECDSA processor for RFID tags. This processor provides security services (authentication, integrity, and non-repudiation). The authors used this processor to authenticate between the tag reader and the RFID tag. They used several countermeasures to prevent side-channel attacks, such as the Montgomery ladder algorithm, randomized Projective coordinates and randomness through SHA-1. In their results, the computation cost of the signature in this processor is 859188 clock cycles (127 ms at 6.78 MHz) and the area is 19115 GE. An implementation of the ECDSA algorithm over a prime field on RFID was presented in [33]. The authors exchanged the SHA-1 hash algorithm for the KECCAK hash to reduce the running time of ECDSA on RFID. Also, they used a fixed-base comb with w=4 to accelerate computation and reduce hardware requirements. Results show that their scheme occupies an area of 12448 GEs, has a power consumption of 42.42 at 1 MHz, and performs signature generation in less than 140 kcycles. With these results, their scheme competes with other schemes over both binary and prime fields.
Recently, ECDSA with password-based encryption was adopted in [34] to improve security and privacy in RFID. The authors pointed out that the lightweight operations performed by ECDSA in data signing are significantly effective in RFID. Also, Ibrahim and Dalkılıç [35] used ECDSA with the Shamir scheme to secure RFID technology. They applied the Shamir scheme to reduce the cost of two scalar multiplications to one. Finally, ECDSA has been implemented to secure RFID in IoT applications [36]. The authors proposed a shopping system and, through analysis and evaluation, argued that ECDSA is suitable for signing user requests in the shopping system.

2.3.3 Smart Cards

A smart card is a novel means of authentication, as it contains important information about users. Several algorithms are used with this technology to implement authentication mechanisms, such as RSA, DSA and ECDSA. The ECDSA algorithm is used in this area largely because of its advantages.
The design and implementation of ECC/ECDSA algorithms have been investigated for constrained-source devices like smart cards [12]. The authors used a Java card supporting the Java language, with the next generation integrated circuit card (NGICC) environment. Results showed that ECC/ECDSA is better than RSA. The results also pointed to the possibility of using these algorithms in other constrained-source wireless devices. Moreover, the EC algorithms (ECDSA and ECDH) with pairing were used on a Java card [37]. ECC requires little data to move between the card reader and the card, compared with RSA. Furthermore, Savari and Montazerolzohour [38] concluded that the ECC/ECDSA algorithm is more efficient in terms of power, storage and speed than RSA on the smart card. On the other hand, Bos et al. [1] discussed using ECDSA with the Austrian e-ID (smart card). They noted that ECDSA uses the same keys many times; therefore, they pointed to improving the randomness in ECDSA. In 2017, Dubeuf et al. [39] discussed the security risks of the ECDSA algorithm when applied to a smart card. They proposed developing the scalar operation algorithm when applying the Montgomery ladder method in ECDSA. They described how ECDSA offers a security solution for their smart card implementations.

3 Existing Surveys

In this section, we will present the existing surveys on ECC/ECDSA. In this survey, we focus on many aspects of the ECC/ECDSA algorithm. To begin, we present these articles and then explain the difference between our research and existing studies. Table 2 lists existing surveys for the ECC/ECDSA algorithm.

Paper | Contribution of existing surveys | Aspect | Year
[40] | Efficiency and flexibility in hardware implementations | Efficiency | 2007
[17] | Compared many different signature schemes | Efficiency | 2008
[41] | Attacks and countermeasures | Security | 2010
[42] | Hard problems in public cryptography algorithms | Security | 2011
[43, 10] | SCA and fault attacks, and countermeasures | Security | 2012, 2013
[44] | Security techniques in WSN | Applications | 2015
[45] | Attack strategies in Bitcoin and Ethereum | Applications | 2016
[6] | ECC/ECDSA with some applications | Applications | 2017
This work | Classification of efficiency, security methods and applications with updated contributions | Efficiency, security, applications | —
Table 2: Differences between existing surveys and our research

  • Performance and efficiency
    First of all, in [40] performance and flexibility were investigated for the ECC algorithm with accelerators through hardware implementations. Many points in hardware implementations of ECC were discussed, such as selecting curves, the group law, PM algorithms, and the selection of coordinates. In addition, it was pointed out that an architecture for multiple scalar multiplication in ECDSA verification should be supported, because this architecture leads to efficient hardware implementations. Much research has pointed out that using hardware accelerators leads to high performance but sacrifices flexibility, where reduction circuits should be used to recover the flexibility feature. Similarly, Driessen et al. [17] compared several different signature schemes (ECDSA, XTR-DSA, and NTRUSign) in terms of energy consumption, memory, key and signature lengths, and performance. Through implementation, the authors found that the NTRUSign algorithm is the best in terms of performance and memory. However, the NTRUSign algorithm suffers from security weaknesses against attacks.

  • Security and countermeasures
    A detailed study of attacks and countermeasures for the ECC algorithm is presented in [41]. The authors divided attacks into passive and active attacks. They explained that a countermeasure for a specific attack may be vulnerable to other attacks, so countermeasures should be selected carefully. Therefore, the authors made some recommendations on selecting countermeasures. Some surveys have studied public cryptography algorithms in terms of the computation of hard problems (the integer factorization problem (IFP), the discrete logarithm problem (DLP), lattices and error-correcting codes) on quantum and classical computers [42]. The authors described RSA, Rabin, ECC, ECDH, ECDSA, ElGamal, lattices (NTRU) and error-correcting codes (McEliece cryptography); they pointed out that ECC provides a higher security level than other cryptosystems and, in addition, presents advantages such as high speed, less storage, and smaller key sizes. But they did not discuss the use of ECC/ECDSA in applications and implementations of different technologies. Meanwhile, the authors in [43, 10] explained physical attacks on ECC algorithms, focusing on two known classes of physical attack: side channel analysis (SCA) and fault attacks. They also described many attacks within these two classes, and presented countermeasures against them, such as simple power analysis (SPA), differential power analysis (DPA) and fault attack (FA) countermeasures. Also, some recommendations were presented on countermeasures that add randomness, countermeasure selection, and implementation issues. However, none of these papers investigated non-physical attacks on public-key signature algorithms such as ECDSA.

  • Implementation and applications
    A study on security techniques in wireless sensor networks (WSNs) was presented in [44]. It focused on three features of WSN security: key management, authentication, and secure routing. This study pointed out that the ECC algorithm is convenient for constrained-resource devices. In addition, a survey on attack strategies against the ECC and ECDSA algorithms in Bitcoin and Ethereum was given in [45]. The author pointed out the different standards for curves (such as ANSI X9.63, IEEE P1363, and SafeCurves); this survey focused on SafeCurves with secp256k1 through using ECDSA, referring to SafeCurves as one of the strongest curve standards. The author suggested many basic points to prevent attacks on ECDSA or ECC. Finally, Harkanson and Kim [6] compared RSA and ECC/ECDSA, and pointed out that ECC/ECDSA exhibits the highest performance at the same security level as RSA. They noted that 69% of websites use ECC/ECDSA, 3% use RSA and the rest use other algorithms. They also described ECC with some applications (such as vehicular communication, e-health and iris pattern recognition). However, they duplicated material between implementations and applications; for example, RFID is a technology that can be used to implement a particular application.

In our survey, we study the ECDSA algorithm differently from previous studies. First, we integrate three aspects (efficiency, security, and applications) into one survey. Second, we systematically provide details of ECDSA aspects not covered by previous studies. Finally, we provide an updated explanation of all these aspects of ECDSA.

4 Efficiency Improvement on ECDSA

In this section, we will discuss the improvement of ECDSA’s efficiency in several areas, such as scalar multiplication, the coordinate system, and arithmetic operations. In each subsection, successive improvements by several authors are explained.

4.1 Efficiency Improvement of Scalar Multiplication

This section covers several strategies to improve scalar multiplication efficiency, in terms of scalar representation, curve operations, and ECDSA arithmetic operations.

4.1.1 Representation Improvement of the Scalar

In this section, we explain scalar representation methods for scalar multiplication. The implementation of scalar multiplication takes a large amount of time [46, 19] in the ECDSA and ECC algorithms; it accounts for more than 80% of the running time of key computation in sensor devices [13, 47]. Scalar multiplication (SM) or point multiplication (PM) is a sequence of point additions and point doublings that computes kP (where k is a large integer and P is a point on the curve) as P + P + … + P (k times) [48, 49]. The scalar 25 is represented in the traditional double-and-add (D&A) method, and in a shorter form reusing the common value 2(2P) + P = 5P, as follows [50]:

25P = 2(2(2(2P + P))) + P
25P = 2(2(2(2P) + P)) + (2(2P) + P)

The efficient and fast implementation of the ECC algorithm and its derivatives requires accelerating the SM implementation [51]. As we can see from step 5 of signature verification, the ECDSA algorithm requires two scalar multiplications [52], which are computationally complex operations. SM uses three field operations: inversion, squaring and multiplication (denoted i, s and m respectively), which are considered expensive for an ECC algorithm [48]. These operations are used to evaluate SM efficiency across implementations. When the Affine coordinate system is used with SM, both point doubling and point addition consume 2 multiplications, 1 squaring and 1 division in the field [48]. Improving SM reduces computation cost and running time, and thus improves the performance of the ECC algorithm and its derivatives. The traditional method (the double-and-add algorithm) uses base 2 for point doubling, as in 2(…(2(2P))), in addition to point addition [48]. In this method, point doubling is performed for each bit of the scalar k (represented in binary form), while point addition is performed for each bit of k equal to 1 [52]. Improving SM makes these algorithms convenient for constrained-source devices such as WSN, RFID, and smart cards [14] by reducing running time and energy consumption. Many researchers have presented scalar representation methods to reduce the computation complexity of kP. Many representations have been used for SM, such as the traditional method (double-and-add), the double-base chain (2,3), multibase representations ((2,3,5) and (2,3,7)) and point halving (1/2). All these representations lead to a shorter representation length (fewer terms) and lower Hamming weight. For instance, the multibase representation (2,3,5) has shorter term length and more redundancy than the double-base chain (2,3): representing a 160-bit scalar in multibase representation (MBNR) needs 15 terms, while the double-base chain (DBNR) needs 23 terms [50].
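The link between scalar representation and cost can be made concrete: with plain double-and-add, the number of doublings is fixed by the bit length, while the number of additions equals the Hamming weight minus one, which is exactly what shorter and sparser representations reduce. A small illustrative counter (assuming the plain binary method without precomputation):

```python
def double_and_add_cost(k):
    """Count point doublings and additions for binary double-and-add."""
    bits = bin(k)[2:]
    doublings = len(bits) - 1          # one doubling per bit after the MSB
    additions = bits.count("1") - 1    # one addition per nonzero bit after the MSB
    return doublings, additions

# 25 = (11001)_2, i.e. 25P = 2(2(2(2P + P))) + P: 4 doublings, 2 additions.
print(double_and_add_cost(25))  # (4, 2)
```

For a 160-bit scalar this gives 159 doublings plus roughly 80 additions on average; representations with fewer nonzero terms cut the addition count directly.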
Sometimes, improving SM representation methods may bring storage problems; for example, using point halving instead of point doubling with a polynomial basis requires greater storage in memory [53]. In the following subsections, we will discuss methods to improve the representation of the scalar in SM.

Double-Base Chain (2,3)

One of the methods used to develop doubling is tripling . This method was proposed to use bases 2 and 3 to reduce the execution time of the PM. Ciet et al. [54] proposed a point tripling operation and mixed it with point doubling depending on various methods to evaluate when 1 inversion is more than 6 multiplications that lead to improving tripling. They presented a comprehensive evaluation through i, s and m for operations types: , , , , , , . The authors noted that is faster than but has slightly more cost. They used the idea of Eisenträger et al. [15] in removing from equations when computing ; the authors used this idea with and removed when 1i is more than 6m to reduce computation cost. Their results proved that this scheme improved SM efficiency in ECC, ECDSA and ECDH. Double-base chain equation is


Where is , and () are integer numbers and and decreased monotonically ( and ). Furthermore, characteristic 3 was investigated with both Weierstrass and Hessian forms [11]. The authors pointed out that characteristic 3 is efficient in Weierstrass form, where tripling operation performs more efficiently than double and add, while characteristic 3 is not efficient with Hessian form (Triple-and-add (T&A) used with Weierstrass and double-and-add(D&A) used with Hessian). In addition, the scalar in the double base number system algorithm (DBNS) is analysed on superlinear EC in characteristic 3 [55]. Double-base chain is DBNS but with restriction doubling and tripling and increasing the number of point addition through a Horner-like manner. The authors proposed sublinear SM algorithm for PM (i.e. sublinear in scalar’s length) with running time O(), and it is faster than D&A and T & A. They pointed out that their algorithm is appropriate to use with large parameters in EC, where selecting large parameters leads to improving performance and security. Also, Dimitrov et al. [56] proposed double-base chains method through prime and binary fields, using point tripling with a Jacobian coordinate in prime fields, and quadrupling combined with quadruple-and-add with an Affine coordinate in binary fields. They used a modified greedy algorithm to convert to DBNS form. Also, they applied the same idea that was used in Eisenträger et al. [15] to evaluate only -coordinate in computation but with to increase SM efficiency. Through results, the authors have become efficient in speeding SM better than classical D&A (21%), NAF (13.5%) and ternary/binary (5.4%) in binary fields. In prime fields, their scheme also gets better results than D&A (25.8%), NAF (15.8%), 4NAF (6%). But results in [54] are better than this scheme in term inversions (with binary fields) in the computation case 4 and 4+, where their scheme obtains for 4 (2[i] + 3[s] + 3[m]) and for 4+ (3[i] + 3[s] + 5[m]). 
Results in [54] are 1[i] + 5[s] + 8[m] for 4P and 2[i] + 6[s] + 10[m] for 4P + Q (squaring is cheap and usually ignored in binary fields), while the scheme of [56] is better than [54] when prime fields are used.

Multibase Representation (2,3,5)

Multibase representation is a method to improve SM by adding a point quintupling operation (5P). It uses the 3 bases (2, 3, 5) and is called step multibase representation (SMBR). Multibase representations have shorter terms, more redundancy, and more sparseness than DBNS representations. For instance, representing a 160-bit scalar costs 23 terms in the base representation (2, 3) but only 15 terms in the base representation (2, 3, 5). The SMBR equation is presented as follows:

k = Σ_{i=1}^{m} s_i 2^{a_i} 3^{b_i} 5^{c_i}
where s_i ∈ {1, −1}, and (a_i, b_i, c_i) are non-negative integers whose sequences decrease monotonically. Mishra and Dimitrov [57] used SMBR with Affine coordinates in binary fields and Jacobian coordinates in prime fields, similar to the scheme in [56]; therefore, their scheme is a generalization of [56] but with 3 bases. They improved the greedy algorithm (mgreedy) in order to fit SMBR and to obtain shorter representations and faster running times. Moreover, they recommended using small values for the exponents because this does not affect the cost. Computation of quintupling was efficient in the prime field. They found that one quintupling formula costs 9s + 17m (with Affine) and 14s + 20m (with Jacobian), while another costs 12s + 22m (with Affine) and 16s + 26m (with Jacobian). Moreover, the cost of quintupling in the binary field (Affine) was 1i + 5s + 13m. Through these results, the authors achieved SM efficiency with prime and binary fields better than previous algorithms such as D&A, NAF, 3-NAF, 4-NAF, ternary/binary and double-base chains, whether with or without precomputed points. Finally, Longa and Miri [58] concluded that the corresponding cost is 26m + 13s.

Multibase Representation (2,3,7)

Multibase representation is a scalar representation method that relies on three bases to accelerate point multiplication. This variant uses the 3 bases (2, 3, 7) and is called multibase number representation (MBNR). This representation is a development of the previous representation methods (DBNR (2,3) and SMBR (2,3,5)). Purohit and Rawat [50] proposed a triple-base method (binary, ternary and septenary) with addition and subtraction. The greedy algorithm was used to convert an integer to the triple base (2, 3, 7) by finding the closest approximation to the scalar at each step. Their evaluation indicated that this method gives better results than previous scalar representation methods. They pointed out that direct septupling (7P) costs less than the two composite formulas it replaces: direct 7P costs 3i + 18m, while the alternatives cost 4i + 18m and 5i + 20m. Also, the squaring cost was neglected, being regarded as inexpensive in fields of characteristic 2. The MBNR representation is applied through the following equation:

k = Σ_{i=1}^{m} s_i 2^{a_i} 3^{b_i} 7^{c_i}
where s_i ∈ {1, −1}, and (a_i, b_i, c_i) are non-negative integers whose sequences decrease monotonically. MBNR (2,3,7) is better than SMBR (2,3,5) in terms of shorter representation length, more redundancy and more sparseness. The next example gives a comparison between using quintupling and septupling for the scalar 895712, as mentioned in [50].
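The worked example for 895712 did not survive extraction. As a stand-in, the sketch below runs a simple greedy conversion on that scalar in both base sets and checks the resulting representations (the helper `greedy_multibase` is mine, not the exact algorithm of [50], so term counts may differ from the paper):

```python
def greedy_multibase(k, bases):
    """Greedy signed multibase representation: k = sum of s_i * m_i, where each
    m_i is a product of powers of the given bases and s_i is +1 or -1."""
    cap = 2 * k
    # Enumerate every product of base powers up to cap.
    prods, frontier = {1}, {1}
    while frontier:
        nxt = set()
        for v in frontier:
            for b in bases:
                if v * b <= cap and v * b not in prods:
                    nxt.add(v * b)
        prods |= nxt
        frontier = nxt
    prods = sorted(prods)
    terms, sign = [], 1
    while k:
        m = min(prods, key=lambda v: abs(k - v))  # closest multibase integer
        terms.append((sign, m))
        r = k - m
        sign, k = (sign, r) if r >= 0 else (-sign, -r)
    return terms

k = 895712
t235 = greedy_multibase(k, (2, 3, 5))
t237 = greedy_multibase(k, (2, 3, 7))
assert sum(s * m for s, m in t235) == k
assert sum(s * m for s, m in t237) == k
```

Comparing `len(t235)` and `len(t237)` gives a rough feel for how the extra base shortens the representation under this greedy strategy.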

This method yields better scalar multiplication efficiency than previous representation methods; we note from the previous example that septupling is more redundant and has fewer terms than quintupling. Also, Chabrier and Tisserand [59] proposed MBNR (2,3,5,7) to represent the scalar without precomputation. They investigated the costs of their method: when a = −3, the cost is 18m + 11s (prime field). Their results indicate high performance in FPGA implementations, performing SM at high speed with reasonable storage and execution time.

Point Halving with DBNR and MBNR Representations

This section discusses point halving (PH) first, then the bases (1/2, 3), (1/2, 3, 5), and (1/2, 3, 7) respectively. The point halving method is one of the methods used to reduce the cost of point doubling: given a point Q, it computes the point P such that 2P = Q, i.e. P = (1/2)Q [48]. Knudsen [53] and Schroeppel [60] independently proposed point halving in order to accelerate point multiplication, aiming to reduce the computational complexity of point doubling. Point halving was suggested as a replacement for all point doublings, since halving is faster than doubling when the Affine coordinate system is used on a curve of minimal two-torsion [53]. This method was implemented on binary fields (polynomial and normal bases). Both polynomial and normal bases allow fast computation, but the polynomial basis suffers from a storage problem. This scheme neglects the squaring cost, as its evaluation depends on inversion and multiplication. To halve a point on the binary-field curve y² + xy = x³ + ax² + b, given Q = (u, v) = 2P, the point P = (x, y) is obtained from the following equations:

λ = x + y/x
λ² + λ = u + a
x² = v + u(λ + 1)
Through the previous equations we get point halving: the second equation is solved for λ, the third equation gives x, and then the values of x and λ give y from the first equation. Fong et al. [61] presented an analysis and comparison of the SM methods (point doubling and point halving) over binary fields using reduction polynomials of trinomial and pentanomial form (with a polynomial basis instead of a normal basis) for the curves of NIST's FIPS 186-2. Through their analysis, they found that point halving is roughly 29% faster than point doubling. They also noted that the τ-adic method (Frobenius endomorphism) used in [62] is faster than PH. They presented a comparison between double-and-add and halve-and-add (H&A) using the Affine and Projective coordinate systems, and pointed out that point doubling can be used with mixed coordinates while point halving must be used with Affine coordinates. They explained that signature verification in ECDSA requires 2 SM operations, which is expensive; performing ECDSA verification with halving is more efficient than with doubling, and halving is also better for constrained-resource settings under the assumption that storage is available. Ismail et al. [48] mentioned that point halving is faster than point doubling by 5-24%.
Point halving is combined with DBNR (2,3) in [52] to reduce computation complexity and to increase the efficiency of the scalar multiplication operations in ECC/ECDSA. The authors implemented their scheme on a Pentium D 3.00 GHz using C++ with the MIRACL library v5.0, which handles large integers. The scalar is first rescaled as

k' = 2^a k mod n

where n is a large integer whose value is close to the field size, and is then represented as

k = Σ_{i=1}^{m} s_i (1/2)^{a_i} 3^{b_i}
where s_i ∈ {1, −1} and (a_i, b_i) are non-negative integers that decrease monotonically. The authors compared their results with DBNR using binary fields (163-bit, 233-bit, and 283-bit) and a prime field (192-bit) according to NIST's standards. Their scheme achieved better results than the original double-base chain: it cut inversions by 1/2 and squarings by 1/3 and needed slightly fewer multiplications, thereby improving DBNR scalar multiplication.
In addition, Ismail et al. [48] presented an improvement of the SMBR method by using point halving with SMBR. The original algorithm used the bases (2, 3, 5) while the modified algorithm uses (1/2, 3, 5). The modified scheme adopts point halving and halve-and-add instead of point doubling and double-and-add. It applies the following equation:

k = Σ_{i=1}^{m} s_i (1/2)^{a_i} 3^{b_i} 5^{c_i}
where s_i ∈ {1, −1}, and (a_i, b_i, c_i) are non-negative integers whose sequences decrease monotonically. The greedy algorithm is used to convert an integer to the modified MBNR. The authors reported that MBNR (2,3,5) and the modified MBNR (1/2,3,5) have the same number of terms, but the costs of the modified MBNR (i=7, s=17, m=77) were less than the original MBNR (i=15, s=36, m=80) for k = 314159. Sometimes the original algorithm has fewer terms than the modified MBNR, but the modified algorithm remains cheaper in operations (i, s, m). The modified and original algorithms were applied on different curve sizes (163, 233, 283), with each curve tested on 100 random scalars. The results show that the modified scheme costs about 30% less computation than the original (2,3,5) scheme.
The authors in [51] proposed combining point halving with MBNR (2,3,7), using the bases (1/2, 3, 7) instead of (2, 3, 7). This scheme is less costly than the schemes with bases (2,3,5) and (2,3,7), and it is used with binary fields. The authors note that MBNR is convenient for ECC because of its shorter representation length and lower Hamming weight. The MBNR (1/2,3,7) representation uses the following equation:

k = Σ_{i=1}^{m} s_i (1/2)^{a_i} 3^{b_i} 7^{c_i}
where s_i ∈ {1, −1}, and (a_i, b_i, c_i) are non-negative integers whose sequences decrease monotonically. This scheme improves the performance of scalar multiplication by reducing computational complexity. Through their results, the authors pointed out that their scheme cut inversions by 1/2 and squarings by 1/3, with fewer multiplications compared with previous schemes. Table 3 shows the costs of the arithmetic operations of the scalar representations.

Scalar representation    Binary field           Prime field
                         i     s     m          i     s     m
1/2P                     -     -     1          -     -     -
1/2P + Q                 1     -     5          -     -     -
P + Q                    1     1     2          1     1     2
2P                       1     1     2          1     1     2
2P + Q                   1     2     9          1     2     9
3P                       1     4     7          1     4     7
3P + Q                   2     3     9          2     3     9
4P                       1     5     8          1     9     9
4P + Q                   2     6     10         2     4     11
5P                       1     5     13         -     10    15
5P + Q                   -     -     -          1     13    26
7P                       3     7     18         -     11    18
7P + Q                   -     -     -          1     22    28
1/2 & DBNR               DBNR/2  DBNR/3  slightly lower than DBNR (both fields)
1/2 & SMBR               SMBR/2  SMBR/3  slightly lower than SMBR (both fields)
1/2 & MBNR               MBNR/2  MBNR/3  slightly lower than MBNR (both fields)
Table 3: Costs of inversion (i), squaring (s), and multiplication (m) for scalar representation

4.1.2 Methods of Improving Curve Operations

In this section, we first describe the methods used to improve PM by reducing the cost of the addition formula operations (point addition and point doubling). In the second part, we explain research efforts that improve efficiency via other methods.

Improvement Via Various PM Methods

Many approaches can be used to improve PM methods, because the PM operation consumes a large share of the execution time of the ECC and ECDSA algorithms. Many researchers have presented different methods to improve the performance of the basic PM algorithm (D&A), such as the NAF, window, comb, Montgomery, and Frobenius methods. In this section, these algorithms are elaborated. Table 4 shows improvements on SM methods.


  • Double-and-Add (D&A) Method
    D&A is the basic algorithm for PM. It repeatedly applies two operations: point addition and point doubling. This algorithm is similar to the square-and-multiply algorithm [13] and works on the bits of the scalar k after it is represented in binary form: if a bit of k is zero, the D&A algorithm executes only a point doubling, while if the bit is one, it executes a point doubling and a point addition in that iteration. In this method, the number of point additions is, on average, half the number of point doublings. The D&A method is explained in algorithm 2.

    Input: point P on E; l-bit scalar k = (k_{l-1} … k_1 k_0)_2 with k_{l-1} = 1
    Output: Q = kP
    Q = P
    for i = l - 2 downto 0 do
         Q = 2Q
         if k_i = 1 then Q = Q + P
    return Q
    Algorithm 2 The D&A algorithm
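As a concrete illustration of algorithm 2, the sketch below runs double-and-add on a toy curve (y² = x³ + 2x + 2 over F₁₇ with base point (5, 1) of order 19, a common textbook example; not a secure curve) and checks it against naive repeated addition:

```python
# Affine point arithmetic over the toy curve y^2 = x^3 + 2x + 2 (mod 17);
# the base point (5, 1) has order 19. Illustration only -- not a secure curve.
P_MOD, A = 17, 2
O = None  # point at infinity

def add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                      # P + (-P) = O
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def double_and_add(k, p):
    """Left-to-right double-and-add: one doubling per bit, one addition per 1-bit."""
    q = O
    for bit in bin(k)[2:]:
        q = add(q, q)            # point doubling
        if bit == '1':
            q = add(q, p)        # point addition
    return q

P = (5, 1)
acc = O
for k in range(1, 20):           # check against naive repeated addition
    acc = add(acc, P)
    assert double_and_add(k, P) == acc
```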
  • Non Adjacent Form (NAF) Method
    NAF is a signed-digit representation introduced to reduce the execution time of PM; it outperforms the D&A algorithm. It does not allow any two nonzero digits of the scalar to be adjacent [13, 52], which reduces the Hamming weight, as the PM computation depends on the number of zeros and the bit length of the scalar. As a result, this algorithm reduces the number of point additions. The NAF method is faster than the square-and-multiply method [19]; it reduces the average number of point additions to one-third of the scalar length. It is represented using the following equation:

    k = Σ_{i=0}^{l-1} k_i 2^i

    where k_i ∈ {−1, 0, 1}. The NAF method is explained in algorithm 3 [63].

    Input: A positive integer k
    Output: NAF(k) = (k_{i-1}, …, k_1, k_0)
    i = 0
    while k ≥ 1 do
         if k is odd then
              k_i = 2 − (k mod 4), k = k − k_i
         else
              k_i = 0
         k = k/2, i = i + 1
    return (k_{i-1}, …, k_1, k_0)
    Algorithm 3 The NAF algorithm
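A minimal sketch of algorithm 3: it emits digits least-significant first and we check both that the digits reconstruct k and that no two nonzero digits are adjacent:

```python
def naf(k):
    """Non-adjacent form of k: digits in {-1, 0, 1}, least significant first,
    with no two adjacent nonzero digits."""
    digits = []
    while k >= 1:
        if k % 2 == 1:
            d = 2 - (k % 4)   # choose +/-1 so that the next digit becomes 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

for k in range(1, 200):
    d = naf(k)
    assert sum(di * 2**i for i, di in enumerate(d)) == k        # reconstructs k
    assert all(not (d[i] and d[i + 1]) for i in range(len(d) - 1))  # non-adjacency
```

For example, `naf(7)` gives the digits of 8 − 1 rather than 4 + 2 + 1, trading three nonzero digits for two.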

    When NAF is combined with a window method for a random point in PM with the Jacobian coordinate system, it can achieve an improvement of 33.7 point additions and 157.9 point doublings (with a 160-bit scalar) [64]. Also, Gallant et al. [65] combined simultaneous multiple point multiplication with left-to-right window-NAF, converting the form kP of the PM into the form k1P + k2φ(P), where φ is an efficiently computable endomorphism. This method improved point doubling (79 points) and point addition (38 points), leading to PM roughly 50% better than traditional methods. Similarly, Mishra and Dimitrov [57] compared their quintupling scheme with the NAF algorithm (in addition to NAF variants such as 3-NAF and 4-NAF). Furthermore, a right-to-left binary NAF algorithm without precomputation (intended for constrained devices) was used with a mixed coordinate system to improve PM [9]. The author compared this algorithm with left-to-right binary NAF; the results showed better performance than the left-to-right method. Also, Purohit et al. [51] pointed out that the multibase non-adjacent form (mbNAF) speeds up the execution time of PM by improving the scalar representation through multiple bases.

  • Frobenius Map Method
    Some researchers have used the Frobenius map algorithm instead of point doubling, because the Frobenius map τ performs only squaring operations [62] (that is, a point doubling is replaced by 2 squarings). It is represented by the following equations:

    τ(x, y) = (x², y²)

    where τ² + 2 = μτ with μ = ±1, and

    k = Σ_{i=0}^{l-1} u_i τ^i

    where u_i is 0 or ±1 and τ is the Frobenius endomorphism. The τ-adic expansion of the scalar is obtained through repeated division of k by τ [14], as in the foregoing equation. Algorithm 4 shows the combination of the Frobenius map (in place of point doubling) with D&A; this algorithm uses addition, subtraction, and the Frobenius map.

    Input: Point P, τ-adic expansion (u_{l-1}, …, u_1, u_0) of k
    Output: Q = kP
    Q = O
    for i = l - 1 downto 0 do
         Q = Frobenius(Q)
         if u_i = 1 then Q = Q + P
         else if u_i = -1 then Q = Q - P
    return Q
    Algorithm 4 Frobenius with D&A algorithm
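Algorithm 4 consumes a τ-adic expansion. The sketch below computes a τ-NAF expansion by repeated division by τ in the Solinas style (the helper names `tnaf` and `eval_tau` are mine) and verifies it symbolically in the ring Z[τ], where τ² = μτ − 2, rather than on an actual curve:

```python
def tnaf(k, mu=1):
    """tau-adic NAF digits of k (least significant first) for a Koblitz curve
    where tau^2 = mu*tau - 2, mu = +/-1. Digits lie in {-1, 0, 1}."""
    a, b, digits = k, 0, []          # (a, b) represents the element a + b*tau
    while a != 0 or b != 0:
        if a % 2 == 1:
            u = 2 - ((a - 2 * b) % 4)   # pick u so the next digit is 0
            a -= u
        else:
            u = 0
        digits.append(u)
        # divide (a + b*tau) by tau:  a + b*tau = tau * ((b + mu*a/2) - (a/2)*tau)
        a, b = b + mu * (a // 2), -(a // 2)
    return digits

def eval_tau(digits, mu=1):
    """Horner evaluation of sum u_i * tau^i inside Z[tau]."""
    a, b = 0, 0
    for u in reversed(digits):
        a, b = -2 * b, a + mu * b    # multiply (a + b*tau) by tau via tau^2 = mu*tau - 2
        a += u
    return a, b

for k in range(1, 100):
    for mu in (1, -1):
        assert eval_tau(tnaf(k, mu), mu) == (k, 0)   # expansion reproduces k
```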

    The Frobenius endomorphism (τ-adic expansion) was combined with point halving in [62]. The authors used point halving because it is three times faster than point doubling; therefore, they used τ-and-add instead of double-and-add on Koblitz curves, which have features that make them useful for accelerating PM. They used the NIST FIPS Koblitz curves K-163 and K-233 over binary fields. τ-NAF was used to reduce the number of point additions from l/2 to l/3. The results proved that τ-and-add based on τ-NAF is 14.28% faster than the plain Frobenius method and, in addition, requires no extra memory when a normal basis is used. A Frobenius expansion method for the special hyperelliptic curves introduced by Koblitz, with the GLV (Gallant-Lambert-Vanstone) endomorphism (using fields of large characteristic), was presented to improve the efficiency of scalar multiplication [66]. In their results, the authors largely improved point doubling, while point addition did not improve; this scheme improved scalar point multiplication by 15.6 to 28.3%. Reducing time and increasing performance in the ECDSA algorithm was achieved by replacing point doubling with the Frobenius scheme in PM [14]. The authors presented an ECDSA algorithm with subfield (Koblitz) curves over binary fields (163-bit key length). They explained that binary fields are convenient for hardware implementation. This scheme achieved good timing results: key generation took 0.2 ms, signature generation 0.8 ms and signature verification 0.4 ms. In addition, this scheme is suited to constrained-resource devices such as smart cards, WSN and RFID. In 2017, Liu et al. [67] presented a study of the application of the twisted Edwards curve (considered an efficient model for ECC/ECDSA) with the Frobenius endomorphism to reduce point doubling by 50% relative to traditional algorithms. They used a 207-bit key (prime field) with two hardware architectures (ASIC and FPGA). They pointed out that their method offers high performance with lower memory and time requirements compared to traditional Frobenius algorithms. Also, they mentioned that integrating their method with a window of size 2 would save 1/16 of the point additions.

  • Window Method (WM)
    The window algorithm improves the execution time of the D&A method by processing a specific window of scalar bits at a time; if the window size equals one, the window algorithm reduces to the D&A algorithm. This algorithm uses precomputed points in the case of fixed-point multiplication (FPM) and also needs fewer point additions than D&A [13, 46], as explained in algorithm 5.

    Input: point P on E(F_q), window width w, k = (k_{d-1}, …, k_1, k_0)_{2^w}
    Output: Q = kP
    Precompute: for j = 1 to 2^w - 1 do: P_j = jP
    Q = O
    for i = d - 1 downto 0 do
         Q = 2^w Q
         Q = Q + P_{k_i}
    return Q
    Algorithm 5 The window algorithm
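A minimal sketch of the fixed-window idea in algorithm 5. To keep it short, integer addition modulo a small stand-in group order replaces the curve group law (all names here are hypothetical); the windowing logic itself is unchanged by that substitution:

```python
# Windowed scalar multiplication sketch. The "points" are integers modulo N
# with modular addition standing in for the elliptic curve group law.
N = 1009                     # stand-in group order (hypothetical)
O = 0                        # identity element

def add(p, q):
    return (p + q) % N

def window_mul(k, p, w=4):
    """Left-to-right fixed-window method: precompute 1P..(2^w - 1)P, then for
    each w-bit window do w doublings and at most one table addition."""
    table = [O]
    for _ in range(2**w - 1):
        table.append(add(table[-1], p))          # table[j] = jP
    bits = bin(k)[2:]
    bits = bits.zfill(-(-len(bits) // w) * w)    # left-pad to a multiple of w
    q = O
    for i in range(0, len(bits), w):
        for _ in range(w):
            q = add(q, q)                        # w doublings per window
        q = add(q, table[int(bits[i:i + w], 2)]) # one precomputed addition
    return q

for k in range(1, 500):
    assert window_mul(k, 1) == k % N
```

The precomputation cost grows as 2^w, which is why the window width must be tuned to the device's memory, as the surveyed papers stress.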

    Wang and Li [24] used the NAF and window methods to improve the performance of the PM algorithm. They found the window method more efficient than NAF. They used hybrid multiplication instead of multi-precision multiplication to reduce memory accesses. As a practical example, their ECC results on MICAz were 1.3 s for signature generation and 2.8 s for signature verification, while on TelosB they were 1.6 s and 3.3 s respectively. The authors thereby demonstrated the feasibility of ECC on WSN. Also, a variable-length sliding window was combined with the NAF method to reduce point additions in the PM of the ECC and ECDSA algorithms [19]. The computational complexity of PM depends on the bit length of the scalar and its number of zeros. The authors divided the elements of NAF(k) into two window types (non-zero and zero windows) and further divided the non-zero window into six types in the sliding window. In addition, they used the Jacobian coordinate system for point doubling and the mixed Jacobian-Affine system for point addition. Their results demonstrate that their scheme is more efficient than the NAF, wNAF and square-and-multiply schemes in terms of point multiplication and public key generation time.

  • Comb Method (CM)
    This method uses a w × d binary matrix to compute FPM efficiently, where w is the number of rows and d the number of columns. It was introduced by Lim and Lee and is a special case of multi-exponentiation using Straus' trick [68]. The comb method uses precomputation to improve PM performance [13, 63], as in algorithm 6.

    Input: a point P, an integer k, and a window width w
    Output: Q = kP
    Precomputation stage:
    Compute [a_{w-1}, …, a_1, a_0]P = a_{w-1} 2^{(w-1)d} P + … + a_1 2^d P + a_0 P for all (a_{w-1}, …, a_1, a_0) ∈ {0, 1}^w
    Write k = K^{w-1} ∥ … ∥ K^1 ∥ K^0, padding with 0 on the left if necessary, where each K^j is a bit-string of length d
    Let K^j_i denote the i-th bit of K^j
    Q = O
    Evaluation stage:
    for i = d - 1 downto 0 do
         Q = 2Q
         Q = Q + [K^{w-1}_i, …, K^1_i, K^0_i]P
    return Q
    Algorithm 6 The comb algorithm
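A sketch of the Lim-Lee comb bookkeeping in algorithm 6; as in the window sketch, integer addition modulo a stand-in group order replaces the curve group law (hypothetical names), and the fixed-base precomputation is done with plain modular arithmetic since the base is known offline:

```python
# Lim-Lee comb method sketch over a stand-in group (integers mod N).
N = 1009
O = 0

def add(p, q):
    return (p + q) % N

def comb_mul(k, p, w=4):
    """Fixed-base comb: the scalar's bits form a w x d matrix; each bit-column
    selects one precomputed point, so evaluation needs only d doublings and
    d additions."""
    d = -(-max(k.bit_length(), 1) // w)             # columns (ceil division)
    bits = bin(k)[2:].zfill(w * d)
    # block j (j = 0 least significant) carries weight 2^(j*d)
    blocks = [bits[(w - 1 - j) * d:(w - j) * d] for j in range(w)]
    # offline precomputation: T[m] = sum over set bits j of m of 2^(j*d) * p
    T = [O] * (1 << w)
    for m in range(1, 1 << w):
        j = m.bit_length() - 1
        T[m] = add(T[m ^ (1 << j)], p * pow(2, j * d, N) % N)
    q = O
    for c in range(d):                               # one column per iteration
        q = add(q, q)                                # one doubling
        m = 0
        for j in range(w):
            if blocks[j][c] == '1':
                m |= 1 << j
        q = add(q, T[m])                             # one table addition
    return q

for k in range(1, 500):
    assert comb_mul(k, 1) == k % N
```

The loop length d ≈ l/w is what gives the comb its speed for a fixed base, at the price of a 2^w-entry table.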

    The comb method was modified in [69] to improve PM by combining it with width-w NAF. This method is essentially designed to reduce the number of point additions, and it presented better results than the original comb and NAF-with-comb methods. It is also used to accelerate multiple PM in the ECDSA algorithm. The authors explained that their algorithm is convenient for memory-constrained devices when suitable parameters, such as the window size, are chosen. It improved the computational cost by 33% to 38% compared to NAF with CM on constrained-source devices.

  • Montgomery Method
    The Montgomery algorithm eliminates the division operation and uses the reduction operation efficiently [24]. This algorithm was introduced by Montgomery; it uses only the x-coordinate and drops the y-coordinate, which increases PM performance.
    A method introduced by Eisenträger et al. improved PM performance with Affine coordinates in ECC [15]. This method eliminates a field multiplication when using left-to-right binary SM, by omitting the y-coordinate in the addition, doubling and tripling operations. The scheme thus saves field multiplications and achieves an improvement of 3.8% to 8.5% in PM cost. For example, to perform the operation 2P + Q, the authors compute (P + Q) + P without the intermediate y-coordinate; this trick leads to the improvement in the PM operation. For instance, for the scalar 1133044, the cost of the original PM is 23i + 41s + 23m, while the cost of the improved PM is 23i + 37s + 19m, i.e. an improvement in m and s. Ciet and Joye [54] used Eisenträger's idea, applying it when 1i costs more than 6m, to reduce the computation cost of PM. Also, Dimitrov et al. [56] used Eisenträger's idea to improve PM efficiency. The Montgomery method is presented in algorithm 7.

    Input: a point P on E(F_q) and a positive integer k = (k_{l-1}, …, k_1, k_0)_2
    Output: the point kP
    R_0 = P, R_1 = 2P
    for i = l - 2 downto 0 do
         if k_i = 0 then
              R_1 = R_0 + R_1, R_0 = 2R_0
         else
              R_0 = R_0 + R_1, R_1 = 2R_1
    return R_0
    Algorithm 7 Montgomery ladder algorithm
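A sketch of the ladder on the same toy curve used earlier (y² = x³ + 2x + 2 over F₁₇, base point (5, 1) of order 19; illustration only). Real implementations use x-only formulas; full affine points are kept here so the code stays short while still showing the ladder's fixed one-add-one-double pattern per bit:

```python
P_MOD, A = 17, 2
O = None  # point at infinity

def add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ladder(k, p):
    """Montgomery ladder: every bit costs exactly one addition and one
    doubling, giving the regular pattern that resists simple power analysis."""
    r0, r1 = O, p                    # invariant: r1 = r0 + p
    for bit in bin(k)[2:]:
        if bit == '1':
            r0, r1 = add(r0, r1), add(r1, r1)
        else:
            r1, r0 = add(r0, r1), add(r0, r0)
    return r0

P = (5, 1)
acc = O
for k in range(1, 20):
    acc = add(acc, P)
    assert ladder(k, P) == acc
```

The invariant r1 = r0 + P is what lets x-only variants recover the result without ever tracking y.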

    Saqib et al. [70] used the Montgomery algorithm in a parallel-sequential manner to accelerate PM in ECC on a Xilinx VirtexE XCV3200 FPGA device. Their scheme improved PM performance by roughly 51%. They compared their scheme with previous schemes on different hardware devices; results showed better performance, and they pointed out that their scheme balances memory size against time. The Montgomery algorithm was also suggested for use with ECDSA (with 224-bit and 256-bit keys), which has high computation and communication costs [71]. The authors analysed the time complexity, where PM consumes a large share of ECDSA's time. They pointed out that Montgomery offers better performance than other algorithms in constrained and mobile environments. Hutter et al. [47] used the Montgomery ladder for point multiplication in ECC (with randomized Projective coordinates) because it provides security against several attacks and performs all operations with only the x-coordinate, thereby increasing PM performance; they implemented it on an ASIC processor with the asymmetric cryptography algorithm ECDSA. In 2017, Liu et al. [72] adopted the Montgomery method with lightweight elliptic curves (twisted Edwards curves p159, p191, p223, and p255) to improve speed and to balance memory against performance (communication cost, execution time, memory) as well as security requirements. They implemented the ECDSA algorithm on Tmote Sky and MICAz nodes. During the implementation, they noted that their scheme offers memory efficiency, requiring 6.7 kbytes for an SM process instead of the 13.1 kbytes of the traditional Montgomery scheme. They recommended careful selection of ECDSA's curve parameters and a balance between security and efficiency requirements.

In summary, the best PM algorithm for ECC and ECDSA on random binary curves is the fixed-base comb with w = 4, while on Koblitz curves it is the fixed-base window TNAF (τ-adic NAF) with w = 6, in the case where memory is not constrained [73]. When memory is constrained, Montgomery is best for random binary curves and TNAF is best for Koblitz curves. The authors pointed out that Koblitz curves are faster than random binary curves for NIST's standard curves [73]. Moreover, Rafik and Mohammed [13] analysed in detail the SPM (scalar point multiplication) algorithm types D&A, window, and comb. Examining the results, they concluded that the CM method is faster than D&A and WM because CM uses fewer point doublings and additions, but it needs more memory. As for security, the D&A method is best, at 27%; D&A also has good memory requirements but needs more time. Therefore, they concluded that the choice among SPM methods in the ECC algorithm (or ECDSA or ECDH) depends on the user's application and resource constraints; if the execution time is large, the energy consumption increases. The authors implemented a Secure-CM and obtained a good execution time (1.57 s), demonstrating that ECC is applicable in WSN.

Improvement of Efficiency Via Other Methods

In this subsection, we will explain a set of different ideas from previous methods to improve ECDSA performance through reducing consumption time in signature generation and signature verification.
A method was suggested to accelerate signature verification in the ECDSA algorithm [74]. The authors used a small number of extra bits (side information, 1 or 2 bits) to narrow the set of candidate points in point multiplication; with the modified algorithm, point doublings were largely reduced. They implemented the traditional and modified ECDSA algorithms on an ARM7TDMI platform processor with a NIST finite field, and found the modified ECDSA verification to be 40% faster than traditional ECDSA. They showed that this change does not affect ECDSA standards and proved that their scheme has the same security as traditional ECDSA. Also, an addition formula was proposed using Euclidean Addition Chains (EAC) with Fibonacci numbers to avoid the difficulty of finding small chains [75]. The author used a Fibonacci-and-add algorithm instead of double-and-add (using Jacobian coordinates and characteristic greater than 3). Some improvements were added to this scheme through window and signed representations. The author compared this scheme with several others (D&A, NAF, 4-NAF and Montgomery ladder); results indicated that it outperforms D&A only when the addition is improved (window or signed representation). In addition, an improvement of signature verification in ECDSA was achieved through the cooperation of adjacent nodes in computing intermediate results [27]. The proposed scheme uses 1PM + 1Add in nodes that receive intermediate results instead of the 2PM + 1Add of the original scheme. The authors analysed the performance and security of the signature verification scheme against many attacks (independent and collusive) and noted that these attacks do not have a large effect on their scheme. The modified scheme saved 17.7-34.5% of energy consumption compared with the traditional scheme, as signature verification in the modified scheme is 50% faster than the original.
They implemented their scheme on MICAz motes.
Furthermore, Li et al. [46] proposed a scheme to improve scalar multiplication by reducing point additions. They presented a method to generate the scalar representation based on periodically generating an integer S. They achieved good results in reducing point additions compared with the binary scheme (l/2). Point doubling in their scheme is similar to previous schemes (D&A, NAF, WM). Their scheme does not require additional memory, and the growth of S remains small as the bit length of the scalar grows. It is also appropriate for hardware implementation because it needs only simple operands. Because scalar multiplication in ECDSA consumes a large amount of processing time, it drives power consumption [76]. The authors therefore replaced the linear point multiplication method with a divide-and-conquer algorithm. This algorithm uses a binary tree for quotient values and a skew tree for remainder values, processing points through parallel computation. Results showed better efficiency than linear scalar multiplication in terms of the number of clock pulses and power consumption.
In 2017, Guerrini et al. [77] proposed a scalar randomization method that relies on covering systems of congruences. The randomization is applied in the scalar representation using a mixed-radix SM algorithm. They generated n-covers randomly with a greedy method, depending on the available memory, and recommended selecting a large n. They pointed out that their method is more efficient, less expensive and incurs fewer additional arithmetic-operation costs than Coron's randomization and the D&A, NAF, and wNAF methods. Table 4 shows improvements on other SM methods.

Paper  SM method                            Subsequent improvements of SM (PA and PD)
[19]   Traditional NAF                      1/3 PA
[64]   wNAF                                 Random point: PA (33.7) and PD (157.9); fixed point: PA (30) and PD (15)
[65]   wNAF with endomorphism               Roughly 50% SM
[51]   mbNAF                                50% SM
[13]   Traditional D&A                      PA is 1/2 PD
[62]   τ-NAF with PH                        1/3 PA and reduced PD; 2n/7 PA and reduced PD
[66]   Frobenius with GLV                   28.3% SM
[67]   Frobenius with twisted Edwards curve 3n/4 PA and 1/2 PD
       Traditional window                   Reduced PA and PD compared with NAF (192, 256, 512)
[19]   Sliding window                       10% better than NAF; 27.4% better than wNAF
[13]   Traditional CM                       CM has fewer PA and PD than WM and D&A
[69]   CM with NAF                          33% SM
[69]   CM with wNAF                         38% SM
[15]   Traditional Montgomery               8.5% SM; memory size (SM) = 13.1k
[56]   Improved Montgomery                  Binary field: 21% SM better than D&A, 13.5% better than NAF; prime field: 25.8% better than D&A, 15.8% better than NAF
[54]   Improved Montgomery                  Reduced PA and PD versus traditional Montgomery
[70]   Montgomery with parallel-sequential  Roughly 51% SM with time = 0.056 ms
[72]   Montgomery with twisted Edwards curve Faster SM than traditional Montgomery; memory size (SM) = 6.7k
[74]   SM with side information             Verification 40% better than traditional
[75]   SM with Fibonacci                    Improved PA over D&A; reduced PA cost (10%)
[27]   SM with intermediate results         Improved signature verification (50%)
[46]   SM with integer S                    Improved PA (better than D&A, NAF, and WM)
[76]   SM with binary tree                  Reduced SM time and complex computations
[77]   SM with random n-cover               Reduced SM cost versus D&A, NAF, and wNAF
Table 4: Improvements on different SM methods

4.2 Efficiency Improvement of Coordinate Systems

ECDSA has to perform complex operations, and these operations consume resources on constrained devices. Using an appropriate coordinate system can reduce the high computation costs of the doubling and addition operations. Improving the computational complexity of PM depends on the point representation, which is an important consideration in curve operations [78]. Many different coordinate systems are used with these algorithms, such as Affine, Projective, Jacobian, Chudnovsky Jacobian, modified Jacobian, and mixed coordinates. Coordinate systems are used to represent points and contribute to speeding up computation in the ECC/ECDSA algorithms. No single coordinate system accelerates both point addition and point doubling [3]. Some coordinate systems need more or less computation time than others in the point doubling and point addition operations; for instance, point doubling uses less computation in Jacobian coordinates, while point addition uses more computation in Jacobian than in Projective and Chudnovsky Jacobian coordinates.


  • Using Affine Coordinate
    Affine is the basic coordinate system used with the doubling and addition operations. This system uses two coordinates (x, y). When point addition uses the Affine system, the cost of the resulting point is 1 inversion, 2 multiplications and 1 squaring, while the cost in the case of point doubling is 1 inversion, 2 multiplications, and 2 squarings [79]. The costly arithmetic operations in point addition and point doubling are multiplication, squaring and inversion; the inversion operation is much more expensive than multiplication. The Affine coordinate system uses inversion operations [24]: an inversion is required in both point addition and point doubling when using Affine coordinates. However, Affine coordinates need fewer multiplication operations than other coordinate systems such as Projective coordinates [64].

  • Using Projective Coordinate
    The inversion operation is dramatically expensive in point addition and point doubling. The Projective coordinate system removes this operation, which improves PM performance in ECC/ECDSA. This system uses three coordinates (X, Y, Z) with Z ≠ 0 [79], corresponding to the Affine point (x, y) = (X/Z, Y/Z). The Projective form of the curve equation (with prime fields) is [13]:

    Y²Z = X³ + aXZ² + bZ³
    López and Dahab also used three coordinates but using formula and 0 corresponding to Affine coordinate [80]. Their scheme improved original Projective on a curve in the binary field. Furthermore, Projective coordinate is used with random binary and koblitz curves ( and ) for NIST’s standards instead of Affine [73]. This coordinate leads to significant improvement in PM ECC and ECDSA. Then, Hutter et al. [47] used randomized Projective coordinate through generating a randomized number on x-coordinate in base point. Also, different coordinate systems have been discussed, such as Projective and Affine coordinates [13]. Through experiments, the authors found that Projective coordinates are faster by 91% than Affine coordinates but Projective coordinate avoids inversion operation, and vice versa for the memory, while the Projective coordinates require more memory because they use three coordinates . Oliveira et al. [78] proposed -Projective coordinate through using three coordinates with binary field to improve computation efficiency in PM. The formula used for coordinates is corresponding with -Affine formula, where the -Affine formula is . -Projective improved López and Dahab Projective(LD-Projective), where the cost in -Projective is 11m+2s while in LD-Projective it is 13m+4s. Also, cost in -Projective is 10m+6s while in LD-Projective is 11m+10s. In addition, -Projective improved H&A to 60% in squaring and 6% in multiplication better than LD-Projective. Their scheme was implemented on Sandy Bridge platform. The authors achieved computation 69500 clock cycles through the single core with H&A and 47900 clock cycles through the multi core combining Gallant, Lambert, and Vanstone, (GLV) technique, H&A and D&A in SM unprotected. Also, they achieved computation 114800 clock cycles through single core protected. Their scheme achieves performance better by 2% than Ivy Bridge and 46% than Haswell platforms. 
In 2017, Al-Somani [81] proposed a parallel SM method based on López-Dahab Projective coordinates that requires 4 multiplications for DBL and 16 multiplications for ADD. His scheme was applied on FPGA and AT processors and achieved high-speed SM when implemented on eight processors with López-Dahab Projective coordinates.
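To illustrate why removing the inversion matters, the sketch below doubles a point once in Affine coordinates (one field inversion per doubling) and once in standard homogeneous Projective coordinates (no inversion until the final conversion). It is a minimal illustration only: the toy curve y² = x³ + 2x + 3 over GF(97) and the point (3, 6) are chosen for readability and are not taken from the surveyed papers.

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over GF(p).
# Curve and point are illustrative, not from the surveyed work.
p, a, b = 97, 2, 3
P = (3, 6)  # satisfies 6^2 = 3^3 + 2*3 + 3 (mod 97)

def affine_double(P):
    """Point doubling in Affine coordinates: one field inversion per call."""
    x, y = P
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p  # the expensive inversion
    x3 = (lam * lam - 2 * x) % p
    return x3, (lam * (x - x3) - y) % p

def projective_double(X, Y, Z):
    """Doubling in homogeneous Projective coordinates (X:Y:Z), inversion-free."""
    W = (a * Z * Z + 3 * X * X) % p
    S = Y * Z % p
    B = X * Y * S % p
    H = (W * W - 8 * B) % p
    X3 = 2 * H * S % p
    Y3 = (W * (4 * B - H) - 8 * Y * Y * S * S) % p
    Z3 = 8 * S * S * S % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    """A single inversion, deferred to the very end of the computation."""
    zi = pow(Z, -1, p)
    return X * zi % p, Y * zi % p

# Both paths give the same result, 2P = (80, 10).
assert to_affine(*projective_double(P[0], P[1], 1)) == affine_double(P)
```

A full scalar multiplication performs many doublings and additions, so trading one inversion per step for a single inversion at the end is a large saving.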

  • Using Jacobian Coordinate
    This coordinate system does not require inversion in the addition formula (addition and doubling), but uses inversion only in the final computation stage. It is similar to the Projective coordinate (and is an improvement of it). Jacobian provides less running time in point doubling and more running time in point addition than the Projective coordinate [64]. This system uses three coordinates (X, Y, Z) with Z ≠ 0, which correspond to the Affine coordinates by (x, y) = (X/Z², Y/Z³) [79]:

Y² = X³ + aXZ⁴ + bZ⁶
    Due to its point addition, Jacobian consumes more computation time than other coordinate systems. The Chudnovsky Jacobian coordinate was proposed to accelerate point addition in the Jacobian coordinate. This system uses the coordinates (X, Y, Z, Z², Z³) and saves 1m+1s in point addition [82]. Its disadvantage is that point doubling is slower than in the Jacobian coordinate [9]. The author pointed out that Chudnovsky is the fastest in point addition and modified Jacobian is the fastest in point doubling. A Jacobian coordinate is used to improve running time in EC exponentiation with fixed and random points; this coordinate is better than the Affine and Projective coordinates [64]. Also, the modified Jacobian coordinate system achieves better performance than the Affine, Projective, Jacobian, and Chudnovsky Jacobian coordinate systems, where Affine has addition cost 1i+2m+1s and doubling cost 1i+2m+2s, Projective has addition cost 12m+2s and doubling cost 7m+5s, Jacobian has addition cost 12m+4s and doubling cost 4m+6s, Chudnovsky Jacobian has addition cost 11m+3s and doubling cost 5m+6s, and modified Jacobian has addition cost 13m+6s and doubling cost 4m+4s [3]. This coordinate system is applied to EC exponentiation over prime fields. Modified Jacobian is faster in point doubling than all the previous coordinate systems. In addition, the modified Jacobian coordinate is used to represent the addition formula when the ECDSA algorithm is implemented as a wireless authentication protocol on the ARM7TDMI processor [2]. The authors implemented ECDSA with different field sizes following the ANSI X9F1 and IEEE P1363 standards. Their results show a running time of 46.4 ms for signature generation and 92.4 ms for signature verification. They pointed out that their scheme improved bandwidth and storage compared with previous schemes. Brown et al. [79] pointed out that Jacobian is faster than Affine, Projective, and Chudnovsky in point doubling.
Overall, these coordinate systems speed up computation in ECC/ECDSA by reducing execution cycles.
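The Jacobian doubling described above can be sketched as follows. This is a minimal illustration of the textbook Jacobian doubling formulas on the same kind of toy curve used for illustration only (y² = x³ + 2x + 3 over GF(97)); the point values are assumptions chosen for readability, not from the surveyed papers.

```python
# Jacobian coordinates (X, Y, Z) represent the Affine point (X/Z^2, Y/Z^3).
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); parameters are illustrative only.
p, a = 97, 2

def jacobian_double(X, Y, Z):
    """Standard Jacobian point doubling: no field inversion is needed."""
    S = 4 * X * Y * Y % p
    M = (3 * X * X + a * pow(Z, 4, p)) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - 8 * pow(Y, 4, p)) % p
    Z3 = 2 * Y * Z % p
    return X3, Y3, Z3

def jacobian_to_affine(X, Y, Z):
    """One inversion at the end converts back to Affine coordinates."""
    zi = pow(Z, -1, p)
    return X * zi * zi % p, Y * pow(zi, 3, p) % p

# Doubling P = (3, 6) yields 2P = (80, 10), matching the Affine result.
assert jacobian_to_affine(*jacobian_double(3, 6, 1)) == (80, 10)
```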

  • Using Mixed Coordinate Systems
    The mixed coordinate system uses more than one coordinate system to represent the addition formula (each point uses a different system) [3, 9] to get the best performance and the least computation time in PM. Many different mixed coordinate systems exist, such as Jacobian-Affine, Affine-Projective, and Chudnovsky-Affine [79].
    Cohen et al. [3] pointed out that modified Jacobian mixed with other coordinate systems (Jacobian and Affine, or Jacobian and Chudnovsky Jacobian) presents a significant improvement in computation time over prime fields. Execution time is reduced using the Jacobian-Chudnovsky coordinate with a prime field (with window widths of 5 and 6, depending on the field size). Also, Jacobian and Chudnovsky coordinates improve PM performance, and Chudnovsky is preferred even though it requires some extra storage (precomputation of points) [79]. Different coordinate systems were investigated (Affine, Projective, Jacobian, and LD-Projective) for point addition and doubling with characteristic 3 in both Weierstrass and Hessian forms [11]. The authors noted that Jacobian is the most efficient with Weierstrass while Projective is the most efficient with Hessian. They also used the mixed coordinate (Affine-Projective) on both Weierstrass and Hessian; results indicate improved timing in the addition formula. The mixed coordinate system (Affine-Jacobian) is used to reduce m and s operations or to avoid i operations [24]. The authors pointed out that the mixed coordinate performs 6% better than the Jacobian coordinate. In the mixed coordinate, point addition consumes 8m and 3s (12 modular reductions) while point doubling consumes 4m and 4s (11 modular reductions). Also, a mixed coordinate is used with the right-to-left NAF algorithm [9], where Jacobian is used for point addition and modified Jacobian for point doubling. The author pointed out that modified Jacobian is fast in point doubling but slow in point addition; therefore, the author used Jacobian for point addition. The results show a scalar multiplication cost of 13.33ℓm (where ℓ is the bit length of the scalar) compared with Jacobian (15.33ℓm) and modified Jacobian (14.33ℓm).
However, when three additional temporary field variables are used, the left-to-right binary method described in [3] performs better. Jacobian-Affine is faster than other coordinate systems in point addition, so [19] uses the Jacobian coordinate for point doubling and Jacobian-Affine for point addition; its authors pointed out that Jacobian is the fastest in point doubling and the mixed coordinate is the fastest in point addition. Oliveira et al. [78] used mixed coordinates, where λ-Affine represents one point and λ-Projective the other; they achieved a computation cost lower than LD-Projective. In 2017, Pan et al. [83] indicated that the mixed Jacobian-Affine coordinate is more efficient than the Jacobian coordinate. They applied a mixed coordinate with ECDSA to improve the efficiency of the computations used to authenticate users during online registration. Their results indicated that the mixed coordinate reduces calculations in both point addition and point doubling. Table 5 shows the successive improvements of different coordinate systems in SM, such as Affine, Projective, Jacobian, and mixed coordinates.
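The Jacobian-Affine mixed addition discussed above can be sketched as follows. Because the Affine point implicitly has Z = 1, several multiplications drop out compared with a full Jacobian-Jacobian addition. This is a minimal illustration using the standard mixed-addition formulas on an illustrative toy curve (y² = x³ + 2x + 3 over GF(97)); the point values are assumptions for readability.

```python
# Mixed addition: Jacobian point (X1, Y1, Z1) plus Affine point (x2, y2).
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); values are illustrative only.
p = 97

def mixed_add(X1, Y1, Z1, x2, y2):
    """Jacobian-Affine mixed point addition (no inversion)."""
    Z1Z1 = Z1 * Z1 % p
    U2 = x2 * Z1Z1 % p          # x2 lifted into the Jacobian frame
    S2 = y2 * Z1 * Z1Z1 % p     # y2 lifted likewise
    H = (U2 - X1) % p
    R = (S2 - Y1) % p
    HH = H * H % p
    HHH = H * HH % p
    V = X1 * HH % p
    X3 = (R * R - HHH - 2 * V) % p
    Y3 = (R * (V - X3) - Y1 * HHH) % p
    Z3 = Z1 * H % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    zi = pow(Z, -1, p)
    return X * zi * zi % p, Y * pow(zi, 3, p) % p

# 2P in Jacobian form is (74, 14, 12) for P = (3, 6); adding P gives 3P.
assert to_affine(*mixed_add(74, 14, 12, 3, 6)) == (80, 87)
```

In a left-to-right scalar multiplication the precomputed table entries stay in Affine form, which is why mixed addition pays off at every addition step.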

Paper | Coordinate system | Cost / improvement of SM (PA and PD)
[64] | Affine | 1i+2m (PA) and 1i+2m (PD); fewer m than Projective
[79, 13] | Traditional Projective | 12m (PA) and 7m (PD); i is removed; 91% faster than Affine
[80] | LD-Projective | 17% better SM than traditional Projective (13m (PA) and 7m (PD)) at 14m (PA) and 4m (PD)
[78] | λ-Projective | 11m (PA) and 10m (PD), versus 13m (PA) and 11m (PD) for LD-Projective
[81] | LD-Projective with parallel SM | Improved SM execution time
[79, 13] | Traditional Jacobian | 12m (PA) and 4m (PD)
[9] | Chudnovsky Jacobian | 11m (PA) and 5m (PD)
[3, 2] | Modified Jacobian | 13m (PA) and 4m (PD); field sizes 160, 192, 224
[11] | Affine-Projective | Weierstrass form: 9m (PA) and 1i+2m (PD); Hessian form: 10m (PA) and 1i+5m (PD)
[24] | Affine-Jacobian | 8m (PA) and 4m (PD); 6% better performance than Jacobian
— | Jacobian-modified Jacobian | 11m (PA) and 3m (PD)
[19, 83] | Jacobian-Affine and λ-Affine | 8m (PA) and 4m (PD)
Table 5: Improvements of different coordinate systems

4.3 Efficiency Improvement Via Algorithm Simplification

This section presents several approaches to improving ECDSA performance by simplifying the algorithm: removing the inversion from signature generation and verification, aggregating signatures, using few keys, and using two public curve points instead of three shared ones.
The ECDSA algorithm is mainly used to provide integrity, authentication, and non-repudiation, relying on certification authority (CA) servers [84]. The authors added an ID, an ECDSA threshold, and a trust value to the public key to authorise a node to gain access to information. It has also been observed that data transmission with ECDSA in a wireless sensor network (WSN) consumes more energy than the computation operations [85]. To reduce energy consumption in data transmission, encrypted data aggregated from all sensor nodes are sent directly to the base station without intermediate decryption. This reduces energy consumption and increases network lifetime. Encryption and signing algorithms are used for the collected data [86, 85]: the EC-OU (Elliptic Curve Okamoto-Uchiyama) encryption algorithm maintains confidentiality and the ECDSA algorithm maintains integrity, with both used homomorphically. The idea behind this scheme is that the encryptions, signatures, and public keys gathered from all nodes are sent to an aggregator (CH); these aggregates are then forwarded to the next level (parent aggregators) and so on up to the BS. That is, decryption and signature verification are not performed at the aggregators but only at the BS, which has high capacity. The authors concluded that this scheme offers energy and time efficiency because encryptions and signatures are collected and sent to the BS in aggregate. The scheme also avoids performing signature verification, which takes more time than signature generation, at intermediate nodes, and it is efficient for large networks. In addition, the ECDSA algorithm was investigated in detail in [16], which identified problems related to the inversion operations in signature generation and verification. An inversion consumes a great deal of time, costing about 10 times as much as a multiplication.
In that scheme, the inversion operation was removed, and the results demonstrated better efficiency than the original ECDSA: less time, and therefore less computation and energy. The scheme was implemented on the MicaZ sensor node.
Using few keys saves energy and memory and reduces computation in the ECDSA algorithm [87]. Researchers used ECDSA to secure the connection between the gateway and the cluster head (CH), and between the CH and the nodes. This scheme broadcasts only session keys, which reduces energy, as the session keys are deleted after use. This also reduces memory usage (only a few keys are stored in memory). The gateway's public key requires little power, and the calculations are accomplished using a random number and a hash function, which need fewer computation operations. The scheme uses periodic authentication of the session key; therefore, it prevents attacks and is appropriate for constrained-resource environments. Finally, a modified ECDSA algorithm was proposed that makes two curve points public (the public key and the signature-verification point) [88], with the base point becoming private. In this algorithm, only one signature parameter is used, which removes overhead. The authors pointed out that the modified ECDSA algorithm is less complex (higher performance), using fewer PMs, point additions, and point doublings than the original ECDSA algorithm.
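For reference, the textbook ECDSA flow below marks the two modular inversions that the inversion-free variants discussed above aim to eliminate. This is a minimal sketch over the standard secp256k1 curve, using a fixed ephemeral k and demo private key purely for reproducibility; real implementations must use a fresh random (or RFC 6979 deterministic) k for every signature.

```python
import hashlib

# secp256k1 domain parameters (publicly standardized).
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    """Affine point addition/doubling on y^2 = x^3 + 7 (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return x, (lam * (P[0] - x) - P[1]) % p

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def sign(d, msg, k):
    e = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (e + r * d) % n      # inversion no. 1 (signing)
    return r, s

def verify(Q, msg, sig):
    r, s = sig
    e = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    w = pow(s, -1, n)                        # inversion no. 2 (verification)
    R = add(mul(e * w % n, G), mul(r * w % n, Q))
    return R is not None and R[0] % n == r

d = 0x1234567890ABCDEF                       # demo private key (illustrative)
Q = mul(d, G)
sig = sign(d, b"hello", k=0xCAFEBABE)        # fixed k for the demo only!
assert verify(Q, b"hello", sig)
assert not verify(Q, b"tampered", sig)
```

The two `pow(…, -1, n)` calls are exactly the operations the surveyed inversion-free schemes remove or defer, since each costs roughly an order of magnitude more than a multiplication.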

5 Security Improvement on ECDSA

In this section, we consider the security of the ECDSA algorithm and list mechanisms for improving it. We first review the types of attacks, security requirements, and countermeasures, categorizing attacks into non-physical and physical, where each category includes passive and active attacks. We then present security mechanisms that enhance and protect ECDSA's signatures from tampering.

5.1 Categories of Attacks

First of all, we give a review of the attacks that threaten the security level in the ECDSA algorithm. Attacks are becoming increasingly sophisticated, so at the same time there should be countermeasures against such attacks. But these countermeasures are expensive in terms of time, storage, and complex computations. Also, it is difficult to develop a countermeasure for each attack [89].
Implementations of the ECDSA algorithm need countermeasures against known attacks [4]. Using the standards of credible organizations such as IEEE, ISO, NIST, NSA, FIPS, and ANSI is extremely important to prevent many attacks; in particular, anomalous curves are weak against attacks. In addition, validation of ECDSA's parameters leads to strong security against attacks [21]. Protection of the private key and the ephemeral key in the ECC/ECDSA algorithm is essential because if an adversary obtains these keys, he/she can modify messages and signatures, rendering the ECDSA algorithm useless. Therefore, a group of countermeasures has been used to improve security. Many attacks, such as signature manipulation, Bleichenbacher, and restart attacks, work to recover the private key or the ephemeral key [21]. A set of conditions (the difficulty of the DLP, a one-way hash, collision resistance, unpredictability) must hold to prevent such attacks on ECDSA. We classify attacks on the ECDSA algorithm into non-physical and physical attacks. Each category includes many attacks that try to penetrate the signatures of messages in terms of integrity, authentication, and non-repudiation. We then explain the countermeasures applied by the researchers in their projects.

5.1.1 Non-Physical Attacks and Security Requirements

These attacks do not require physical access (only indirect access) to users' devices or the network to penetrate repositories' data or messages transferred between clients and a server. Attackers have used different strategies for non-physical attacks, such as sniffing, spoofing, eavesdropping, and modification, to break signatures and encrypted data transmitted through radio frequency signals or the Internet [90]. They implement non-physical threats to penetrate the security requirements of confidentiality, integrity, and availability (CIA) through analysis and modification operations. Non-physical attacks can use network devices indirectly; for example, a DDoS attack uses network devices to send a stream of messages that disables the server's services [91]. These attacks are categorized into passive and active attacks according to their strategies and intended targets.

Passive attacks

These attacks use eavesdropping, monitoring, and sniffing mechanisms to control and record the activities of network devices and then break user authentication signatures without altering or destroying the data [92]. The attacker needs to capture a large number of packets, using packet sniffing and packet analysis software to assist in analysing and penetrating signatures. These attacks are primarily intended to penetrate authentication (signatures), encryption (confidentiality), and access control (authorisation policies). During these attacks, the attacker tries to analyse the data to gain information about messages, such as sender and receiver IP addresses, transport protocol (TCP or UDP), location, data size, and time, as this information helps in the penetration operation [93]. Passive attacks do not change network data, but the attacker can use these analyses to attack the network in the future. This type of attack is difficult to detect by security precautions because the eavesdropper does not change any data, but a set of security requirements can prevent such attacks. Many research projects implement the ECDSA algorithm to prevent passive non-physical attacks. Table 8 shows passive non-physical attack types in many applications with ECDSA. A brief explanation of some of the most popular passive attacks follows.


  • Traffic analysis and scanning (eavesdropping): Monitors and analyses data as they travel over the Internet or wireless network, for example, monitoring data size, number of connections, open ports, visitor identity and vulnerabilities in operating systems [94].

  • Keylogger and snooping: Records authentication activities on victim devices by monitoring keystrokes for usernames/passwords [95].

  • Tracking: Records and traces users’ information such as location for network devices and personal information such as name, address, email, and age [96].

  • Guessing: The attacker attempts to guess a user's credential by relying on words (as in a dictionary attack) or on combinations of symbols (as in a brute-force attack) to gain access to the network [97].

Active attacks

An active attacker penetrates users' identities by creating, forging, modifying, replacing, injecting, destroying, or rerouting messages as they move through the network's nodes or the Internet [93]. This type of attack can use a sniffing (passive) attack to collect information and then change or destroy network data. For instance, a malicious attacker intercepts money-transfer messages in e-banking applications and then adds or deletes content for personal gain [95]. An active attacker can send false or fake signals to deceive network nodes into linking to a fake server and then redirect the nodes' packets to the legitimate server after modifying the data or authentication messages. Security requirements, in particular authentication and integrity, are significantly important to prevent many attacks such as man-in-the-middle (MITM) and replay [98]. According to many research projects [83, 99, 100, 20, 101], ECDSA's signatures are a security solution that prevents many attacks such as modification, spoofing, denial, and cyber attacks. Figure 2 shows the classification of non-physical attacks. Table 8 shows active non-physical attack types in many applications with ECDSA. A brief explanation of some of the most popular active attacks is given in the following.


  • Spoofing: Includes many attack types such as MITM, replay, routing, and hijacking. The attacker intercepts and modifies credential messages to gain access to the network in a MITM attack, while in a replay attack, the attacker intercepts messages and resends them later [97]. Also, routing/hijacking attacks change packet paths (changing the IP address) by exploiting vulnerabilities in the routing algorithms used to find the best path for moving packets [93], as in IP-spoofing and ARP-spoofing.

  • Denial: These attacks are categorized as denial of service (DoS) and distributed denial of service (DDoS). In DoS, the attacker uses his own devices to prevent the server from providing services to network members. This attack targets security requirements (CIA), while in DDoS the attacker uses his own devices as well as network devices to quickly destroy the network [96, 93, 98, 92].

  • Cyber: An attacker creates websites such as phishing attacks or social pages for Facebook and Twitter, such as social media attacks, to trick the user into believing that these websites are legitimate websites or pages. The user enters his/her confidential information such as username, password and card number in these counterfeit websites to allow this information to be accessed by the hacker [95, 97].

  • Modification: This attack changes, delays, or rearranges users' data packets after the attacker gains access as a legitimate user, or the attacker may already be a legitimate member of the network (an internal attack). Any change to this data can cause problems for the user, such as altered medical or diagnostic reports in e-health applications [96].

Figure 2: Classification of non-physical attacks

Security Requirements

The ECDSA algorithm provides three security requirements: authentication, integrity, and non-repudiation. However, integrating the ECDSA algorithm with security mechanisms such as encryption and authorisation provides many more security requirements (such as confidentiality, authorisation, availability, accountability, forward secrecy, backward secrecy, auditing, scalability, completeness, anonymity, pseudonymity, and freshness) when used in applications like e-health, e-banking, e-commerce, e-vehicular, and e-governance. We briefly explain the security requirements provided by ECDSA in the following.


  • Authentication has been used to authenticate legitimate users or data in the network and to prevent anyone else from accessing it. Namely, if users' identities or data come from a trusted source in the network they are accepted, but if from an unknown source they are ignored [102]. Many attacks, such as brute-force, keylogger, and credential guessing, attempt to penetrate the signatures' authentication service.

  • Integrity has been used to ensure that the transmitted data has not been tampered with or edited by the adversary [92]. Many attacks such as MITM, replay, hijacking, and phishing attempt to penetrate the signatures’ integrity service.

  • Non-repudiation has been used to detect the compromised nodes via the sender who cannot deny his message [98]. Many attacks such as repudiation, masquerading, and social media attempt to penetrate the signatures’ non-repudiation service.

5.1.2 Physical Attacks and Countermeasures

Physical attacks may be passive or active. The attacker applies a passive physical attack to analyse the signatures and break the authentication property (obtaining the private key) to become a legitimate user in the network. On the other hand, an active physical attack attempts to penetrate the integrity or non-repudiation property to change signatures and messages transferred between the network's nodes. Many physical attacks have been applied to ECC/ECDSA. These attacks exploit properties of the implementations of these algorithms in order to access the private key [4, 43]. The exploited properties include:


  • The power consumption

  • Electromagnetic radiation

  • Computation time

  • Errors

Passive attacks

This type of attack does not tamper with or modify the data but analyses leaked bits of data (such as the scalar's bits). These attacks take advantage of differences in the running time of operations, power consumption, and electromagnetic radiation. The attacker tries to obtain some leaked bits of the ephemeral key in order to reconstruct its full value. Attacks that exploit power consumption and electromagnetic radiation are called side-channel attacks (SCAs). SCA attacks consist of a suitable model, a power trace, and a statistical phase [4]. In these attacks, the attacker monitors the power consumption and exploits the unintended outputs (side-channel leakage of the secret key) of the device [13]. Power-consumption leakage is divided into transition-count leakage and Hamming-weight leakage: the first depends on the bits of a state variable at a given time, while the second depends on the number of 1-bits processed at a time [103], tracked via the device's voltages. SCA uses many methods to detect bits, such as analysing the distinction between the addition formula operations, creating a template, statistical analysis, re-used values, special points, auxiliary registers, and the link between the register address and the key [43] (passive attacks are depicted in Figure 3). SCA attacks (passive) consist of four major attacks:


  • Simple power analysis (SPA)
    In this attack, the attacker relies on a single trace of power consumption to detect the secret key bits. The attacker extracts these bits based on power consumption discrimination in the addition formula (point addition (PA) and point doubling (PD)) [13, 43].

  • Timing attack
    In this attack, the attacker relies on analysing the execution time of the addition formula and the arithmetic operations [69]. For example, the attacker analyses the processing time for PA and PD; if the processing time is greater, the operation is considered PA ("1"), otherwise PD ("0"), and the process is repeated until he/she obtains all bits of the ephemeral key.

  • Template attack
    In this attack, the attacker creates templates from a large number of traces of a controlled device. It uses a multivariate normal distribution to detect the key based on power consumption during data processing. The attacker obtains the ECDSA key by matching the best template with these traces [43, 104].

  • Differential power analysis (DPA)
    In this attack, the attacker uses many traces in a statistical analysis of power consumption. The attacker uses hypothetical points in SM with stored recorded results. He/she then compares these results with the power consumption of the controlled device to obtain a valid guess of the secret key bits [105].
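To make the SPA/timing leak concrete, the toy simulation below models only the sequence of point operations, not real power traces, and is purely illustrative. An unprotected left-to-right double-and-add executes an addition only for 1-bits, so the operation sequence an attacker distinguishes in a single trace spells out the secret scalar directly.

```python
def double_and_add_trace(k):
    """Unprotected left-to-right double-and-add; returns the sequence of
    point operations ('D' = doubling, 'A' = addition) that an SPA attacker
    would distinguish in a power trace."""
    trace = []
    for bit in bin(k)[3:]:       # skip '0b' and the implicit leading 1-bit
        trace.append("D")        # always double
        if bit == "1":
            trace.append("A")    # add only when the secret bit is 1
    return "".join(trace)

def spa_recover(trace):
    """Read the secret scalar back out of the operation sequence."""
    bits = "1"                   # the implicit leading bit
    i = 0
    while i < len(trace):
        if trace[i:i + 2] == "DA":
            bits += "1"; i += 2
        else:
            bits += "0"; i += 1
    return int(bits, 2)

secret = 0b101101
assert spa_recover(double_and_add_trace(secret)) == secret
```

Countermeasures such as atomic addition formulas, the Montgomery ladder, or constant-runtime variants work precisely by making this operation sequence independent of the key bits.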

General countermeasures have been used to prevent passive physical attacks [41]:


  • Elimination of relation between data and leakages.

  • Elimination of relation between fake data and real data.

The implementation of one of the former two countermeasures ensures data protection from attacks.

Figure 3: SCA (passive) attacks

The Jacobi form has been proposed instead of the Weierstrass form to prevent SCA attacks [106]. This form allows the addition formula operations to take identical time and power, which prevents SPA and DPA attacks. Unfortunately, this scheme is 70% less efficient than the original (Weierstrass) scheme: the average number of field operations in their scheme is 3664, while in the original scheme it is 2136. Many recommendations have been presented with PM's endomorphisms to protect against attacks such as Pohlig-Hellman and Pollard's rho via conditions on the curve parameters and group order [65]. Similarly, a study presented the DSA and ECDSA algorithms in detail and argued that these algorithms become unsuitable for signing messages (integrity) if applied incorrectly; the study focused on the parameter validation of DSA/ECDSA to ensure strong security against different attacks, and the author proved that these algorithms become strong if the parameters are well validated [21]. In addition, Dimitrov et al. [56] proposed protection mechanisms for the double-base chains method against SCA attacks (SPA and DPA), using side-channel atomicity against SPA and a randomization method against DPA [107]. Also, a right-to-left NAF algorithm without precomputations was proposed and its security investigated [9]. This algorithm protects against SPA, DPA, and doubling attacks through atomicity countermeasures.
The implementation of a template-based SPA attack on ECDSA was investigated on a 32-bit microprocessor (ARM7 architecture) [104]. The authors pointed out that this attack is applicable to ECDSA given three prerequisites (a few bits recoverable by a lattice attack, a fixed base point, and a microprocessor suitable for building templates) combined with their attack. Furthermore, Mohamed et al. [69] presented a scheme that combines the comb method with width-w NAF. However, this scheme is not resistant to SCA attacks; therefore, they suggested using a constant runtime to prevent SPA and timing attacks (by adding the point at infinity (O) in the addition formula and including it in the precomputed values). Physical attacks on ECC/ECDSA algorithms are explained in [43]. The authors described many SCA attacks, and they presented countermeasures and recommendations for applying ECC/ECDSA, such as adding randomness, countermeasure selection, and implementation issues, to prevent these attacks. Moreover, Rafik and Mohammed [13] analysed SPA and DPA attacks (DPA analysis was neglected because it is not viable against their ECC/ECDSA implementation, but SPA is applicable to ECDSA in WSN). DPA is not applicable when the scalar is random, but it is applicable against the secret key when the attacker knows the signed message [104]. SCA attacks work on information leaked from the SM. They concluded that protecting the ephemeral key is important in ECDSA against SPA attacks.
A template attack combined with a lattice attack was presented to retrieve ECDSA's private key over a prime field [108]. The authors pointed out that endomorphism curves can be penetrated by the Bleichenbacher attack using one bit of bias; they recovered an ECDSA secret key within a few hours given sufficient signatures, memory, and time. Varchola et al. [4] focused on the analysis of a correlation power analysis (CPA) attack on the ECDSA algorithm in FPGA. They did not use traditional power models (Hamming weight/distance) in the analysis of the CPA attack but used a chosen-plaintext approach instead. They proved that a successful CPA attack on ECDSA in FPGA is feasible for some parameter choices but has no impact for others (results obtained on the DISIPA platform). Also, an SPA attack was suggested for SM penetration based on the conditional subtraction in modular multiplication; constant-time modular multiplication prevents this attack on SM. On the other hand, Liu et al. [109] used secure-SM with randomized point coordinates to prevent SPA, DPA, and ZPA. Table 6 shows the passive physical attacks and countermeasures.
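The coordinate-randomization countermeasure mentioned above can be sketched as follows. This is a minimal illustration on a toy field GF(97) (the field and point are assumptions for readability); real implementations draw the blinding factor from a cryptographic RNG on every scalar multiplication. Multiplying Jacobian coordinates through by a random nonzero λ changes the internal representation, and hence the intermediate values an attacker correlates with, while leaving the represented point unchanged.

```python
import random

p = 97  # toy field, for illustration only

def randomize(X, Y, Z):
    """Randomized Jacobian representation: (lam^2*X, lam^3*Y, lam*Z)
    represents the same Affine point for any nonzero lam, so intermediate
    values (and hence the power profile) differ on every run."""
    lam = random.randrange(1, p)  # real code: cryptographically secure RNG
    return lam * lam * X % p, pow(lam, 3, p) * Y % p, lam * Z % p

def to_affine(X, Y, Z):
    zi = pow(Z, -1, p)
    return X * zi * zi % p, Y * pow(zi, 3, p) % p

P = (3, 6, 1)  # Jacobian form of the Affine point (3, 6)
for _ in range(5):
    # Every randomized representation still decodes to the same point.
    assert to_affine(*randomize(*P)) == (3, 6)
```

Because DPA correlates predicted intermediate values against many traces, making those intermediates unpredictable per execution is what defeats the statistical phase of the attack.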

Paper | Passive physical attack(s) | Countermeasure(s) | Year
[106] | SPA/DPA | Jacobi form | 2001
[65] | Pohlig-Hellman, Pollard's rho, Semaev-Satoh-Smart, Weil pairing, and Tate pairing | Parameters selection | 2001
[21] | Bleichenbacher and restart | Parameters validation | 2003
[56, 9, 107] | SPA/DPA/doubling | Side-channel atomicity and randomization | —
[104, 13] | SPA/template | Random base point, random scalar | —
[69] | SPA/timing | Constant runtime | 2012
[43] | SCA attacks | Adding randomness, appropriate countermeasure selection depending on the application and environment | —
[108] | Template with lattice | Large prime finite field | 2014
[4] | CPA | Chosen plaintext | 2015
[105] | SPA | Constant-time modular multiplication, point blinding, random Projective | —
[109] | SPA/DPA/ZPA | Secure-SM with randomized point coordinates | —
Table 6: Passive physical attacks and countermeasures

Active attacks

There is another type of SCA attack, called fault attacks, which uses errors to reveal some bits of the scalar. This type of attack exploits errors in dummy operations, memory blocks, invalid points and curves, and the difference between correct and faulty results. For example, the attacker obtains an erroneous signature by flipping one bit of the generator. This type of attack is a serious risk to ECC/ECDSA algorithms if proper countermeasures are not applied for the application and data-transfer environment (Figure 4 shows active SCA attacks). SCA attacks (active) consist of three main attacks:


  • Safe-error based analysis
    In this attack, the attacker uses faults to exploit dummy operations (used to resist SPA) and memory blocks (a register or memory location). Safe-error includes two types, c safe-error and m safe-error: the first exploits a vulnerability of the algorithm whereas the second exploits the implementation [41, 110].

  • Weak-curve based analysis
    In this attack, the attacker tampers with the curve parameters, where the error is injected by using an invalid point and invalid curve at a specified time for the parameters that were not correctly selected or validated. The attacker uses the errors to move SM from the strong curve to the weak and thus retrieves the private key [111, 43].

  • Differential fault analysis (DFA)
    In this attack, the attacker compares the difference between the correct and faulty results to retrieve the scalar bits. Differential fault analysis includes two types: classic DFA and sign-change FA [43, 112]. Validation of intermediate results and a random ephemeral key are used to prevent classic DFA attacks, while the Montgomery ladder and verification of the final results are used to prevent sign-change FA attacks [41].

General countermeasures have been used to prevent active physical attacks [41]:


  • Error-detection

  • Error-tolerance

The former countermeasure allows detection of the errors added by the attacker, whereas the latter prevents the attacker from discovering the scalar or private key even after injecting faults.

Figure 4: SCA (active) attacks

The C safe-error attack has been described on ECDSA [110]. This attack exploits dummy operations in the addition formula: the attacker can use errors in the dummy operations to discover bits of the secret scalar. The authors of [110] proposed using atomic patterns (the Verneuil and Rondepierre patterns) instead of dummy operations to repel the C safe-error attack. Also, Fournaris et al. [112] suggested using a residue number system (RNS) and leak-resistant arithmetic (LRA) to prevent both differential fault attacks (comparing the difference between correct and faulty results) and M safe-error attacks (using memory-block errors) that are intended to retrieve bits of the ephemeral key.
An invalid-point attack is carried out on a weak curve: faults injected during execution move the points to weak Weierstrass curves so as to retrieve the private key [111]. The authors used point validation as a countermeasure to prevent this attack: the device tests whether the point lies on the curve and, if it does not, generates an error message. Moreover, Bos et al. [113] proposed using Edwards curves with ECDSA instead of Weierstrass curves to prevent weak-curve attacks. They pointed out that Edwards curves provide constant-time and exception-free SM, so these attacks are not applicable. Similarly, Harkanson and Kim [6] showed that there are safe and unsafe curves, and demonstrated that the curve25519 curve with a 256-bit key length is secure and fast.
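The point-validation countermeasure amounts to checking the curve equation before any received point is used. A minimal sketch, using toy parameters of our own choosing (not a standardized curve):

```python
# Toy short-Weierstrass curve y^2 = x^3 + 2x + 3 over F_97 (illustrative only).
p, a, b = 97, 2, 3

def on_curve(P):
    """Reject the point at infinity and any (x, y) that does not
    satisfy the curve equation; run before scalar multiplication."""
    if P is None:
        return False
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

assert on_curve((3, 6))      # 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
assert not on_curve((3, 7))  # a tampered y-coordinate is rejected
```

In practice the check also verifies that the coordinates are reduced modulo p and, for curves with cofactor greater than one, that the point lies in the correct subgroup.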
A comprehensive survey presented information about fault attacks on symmetric and asymmetric cryptography algorithms [89]. The authors presented countermeasures to these attacks in different cryptography algorithms. They pointed out that such attacks may be applied to ECDSA, and presented countermeasures to prevent them: adding a cyclic redundancy check (CRC) to the private key, or using the public key to verify the signature before sending it, although the latter process is expensive [89]. In addition, fault attacks and their types are discussed in [43, 114]; the authors referred to countermeasures against these attacks such as point validation and curve integrity checking. According to [115], many countermeasures are known to repel various fault attacks, such as algorithm restructuring, physical protection of the device, and randomization techniques in computation processes and power consumption with independent implementation. However, the authors noted that fault detection, intrusion detection, algorithmic resistance, and correction techniques are capable of eliminating fault-injection attacks. Furthermore, Barenghi et al. [116] presented a fault attack to retrieve the private key. Their attack depended on injecting errors into the implementation of the modular arithmetic operations in ECDSA signatures, using multiprecision multiplication faults on the scalar to penetrate signatures. Recently, Liao et al. [117] discussed invasive SCA attacks, such as fault attacks, on ECC algorithms. The attacker injects errors into the victim’s device using tools such as voltage glitches and lasers. The authors pointed out that timing and location greatly affect the success of fault attacks; therefore, they proposed a single multiplication module and a single division module to fix the timing of all operations. Table 7 shows the active physical attacks and countermeasures.

| Paper | Active physical attack(s) | Countermeasure(s) | Year |
|-------|---------------------------|-------------------|------|
| [89] | Fault | CRC and public key | 2004 |
| [114], [43] | Fault attacks and types | Point validation, curve integrity check, coherence check and combined curve | |
| [115] | Fault | Fault detection, intrusion detection, algorithmic resistance and correction techniques | |
| [111] | Invalid point | Point validation | 2015 |
| [110] | C safe-error | Atomic pattern | 2016 |
| [113] | Weak-curve | Edwards curve | 2016 |
| [112] | Differential fault and M safe-error | RNS and LRA (random base permutation) | |
| [116] | Fault | Multiprecision multiplication | |
| [117] | Invasive fault | Fix computing time | 2017 |

Table 7: Active physical attacks and countermeasures
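The CRC countermeasure attributed to [89] can be sketched as follows: store an integrity tag alongside the private key and check it before every signing operation, so a fault injected into the stored key is detected instead of producing an exploitable faulty signature. CRC32 and the byte layout below are our own illustrative choices:

```python
import zlib

def protect(key: bytes) -> bytes:
    """Append a CRC32 integrity tag to the stored private key."""
    return key + zlib.crc32(key).to_bytes(4, "big")

def load_checked(stored: bytes) -> bytes:
    """Verify the tag before the key is used for signing."""
    key, tag = stored[:-4], stored[-4:]
    if zlib.crc32(key).to_bytes(4, "big") != tag:
        raise ValueError("fault detected in stored private key")
    return key

stored = protect(b"\x07" * 32)
assert load_checked(stored) == b"\x07" * 32

# A single flipped bit in the key material is caught before signing.
tampered = bytes([stored[0] ^ 1]) + stored[1:]
try:
    load_checked(tampered)
    raise AssertionError("fault not detected")
except ValueError:
    pass
```

Note that a CRC detects accidental or injected corruption but is not cryptographically binding; the verify-before-release countermeasure (checking the signature with the public key) is stronger but, as the survey notes, more expensive.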

5.2 Security Improvement Via the Random

One of the problems that leads to weak security in the ECDSA algorithm is that the ephemeral value is not sufficiently random, so ECDSA produces unsafe keys. Increasing randomness in ECDSA prevents penetration of signed messages, whether under physical attacks or while messages are transferred from source to destination. For instance, Bos et al. [1] pointed out that the ECDSA used in Bitcoin suffers from poor randomness in the signature, which leads to penetration of clients’ accounts by attackers. Therefore, keys should not be duplicated and should not be used for more than one message. The randomness source in ECDSA is also extremely important to prevent key leakage [118]: a bad randomness source leads to the generation of a bad signature, and a bad signature allows the attacker to leak bits of the key and thus discover the private key. Danger et al. [10], Fan et al. [41], and Fan and Verbauwhede [43] pointed to many countermeasures using randomization in the scalar (blinding, splitting, and group), the point (blinding and coordinates), register addresses, the EC, and field isomorphism. This section explains some researchers’ ideas to improve randomness in the ECDSA algorithm.

5.2.1 Random of Scalar

Point multiplication in the ECDSA algorithm uses a random scalar to provide strong signatures as a countermeasure against data modification. However, using a scalar with weak randomness allows the attacker to reveal it and then forge signatures. The scalar should be subject to several tests to prevent weak randomness and to make it difficult for attackers to predict. Many methods have been used to generate random scalars, such as hash functions or dedicated generators.
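The hash-based approach to deriving the ephemeral scalar can be sketched as follows. This is a simplified illustration in the spirit of deterministic nonce derivation (cf. RFC 6979), not the exact construction of any scheme in this survey; the function name, byte layout, and use of the P-256 group order are our own assumptions:

```python
import hashlib

# Group order of the NIST P-256 curve.
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def derive_nonce(priv_key: int, msg_hash: bytes) -> int:
    """Derive the ephemeral scalar k by hashing the private key and the
    message hash together: k is bound to the message, so two distinct
    messages never reuse the same nonce. Simplified sketch only."""
    counter = 0
    while True:
        data = priv_key.to_bytes(32, "big") + msg_hash + counter.to_bytes(4, "big")
        k = int.from_bytes(hashlib.sha256(data).digest(), "big")
        if 1 <= k < N:          # reject out-of-range candidates
            return k
        counter += 1

k1 = derive_nonce(7, hashlib.sha256(b"message 1").digest())
k2 = derive_nonce(7, hashlib.sha256(b"message 2").digest())
assert k1 != k2 and 1 <= k1 < N   # distinct messages -> distinct nonces
```

Binding k to both the key and the message removes the dependence on a possibly weak runtime randomness source, which is exactly the failure mode discussed below.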
Randomization sources: Hutter et al. [32] used the SHA-1 algorithm to generate the ephemeral key and obtain appropriate randomness as a countermeasure against side-channel attacks, with the random generation performed according to the FIPS standard. They pointed out that using a hash function increases signature security because the ephemeral value changes with each message. In the signature algorithm they also used a modified signing equation to prevent DPA attacks. Moreover, Nabil et al. [14] used the W7 algorithm to increase the randomness of the integer in the ECDSA algorithm, and showed that the W7 generator is better than other generators in performance and area. Also, [1] focused on the weaknesses of ECC/ECDSA through a study of four protocols: SSH, TLS, Bitcoin, and the Austrian citizen card. They discovered that ECDSA suffers from security weaknesses in the same way as previous security systems, and focused on three points: deployment, weak keys, and vulnerable signatures. They gathered databases belonging to the four protocols and found several cases of repeated public keys in SSH and TLS. Bitcoin suffers from signatures that share nonces, which allows an attacker to compute the corresponding private keys and steal coins. The Austrian citizen card likewise suffers from multi-use of keys. The big problem with signatures is insufficient randomness in generating keys or nonces, as in the Debian OpenSSL incident. Koblitz and Menezes [119] discussed the random oracle model with ECDSA in the real world. They pointed out that this model is less dependent on weak random generators and gives more flexibility with a modified ECDSA; as they explained, the modified ECDSA algorithm prevents chosen-message attacks.
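The nonce-reuse weakness observed in Bitcoin signatures can be reproduced end to end on a toy curve. The sketch below (insecurely small parameters, chosen only for illustration) signs two messages with the same ephemeral k and then recovers both k and the private key from the two public signatures alone:

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 with generator G of prime order 19.
P_MOD, A = 17, 2
G, N = (5, 1), 19

def inv(x, m):
    return pow(x, -1, m)  # modular inverse (Python 3.8+)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % P_MOD == 0:
        return None  # point at infinity
    if P == Q:
        lam = (3 * P[0] ** 2 + A) * inv(2 * P[1], P_MOD)
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], P_MOD)
    lam %= P_MOD
    x = (lam * lam - P[0] - Q[0]) % P_MOD
    return (x, (lam * (P[0] - x) - P[1]) % P_MOD)

def scalar_mult(k, P):
    R = None
    while k:              # double-and-add
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(d, h, k):
    r = scalar_mult(k, G)[0] % N
    s = inv(k, N) * (h + d * r) % N
    return r, s

d, k = 7, 10            # private key; the SAME nonce used twice (the flaw)
h1, h2 = 11, 4          # two (toy) message hashes
r1, s1 = sign(d, h1, k)
r2, s2 = sign(d, h2, k)
assert r1 == r2         # identical r values expose the nonce reuse

# Attacker's computation, from public signatures and hashes only:
# s1 - s2 = k^-1 (h1 - h2)  =>  k = (h1 - h2) / (s1 - s2) mod n
k_rec = (h1 - h2) * inv(s1 - s2, N) % N
d_rec = (s1 * k_rec - h1) * inv(r1, N) % N
assert (k_rec, d_rec) == (k, d)  # ephemeral and private keys recovered
```

The same algebra scales unchanged to real 256-bit curves, which is why a single repeated nonce is fatal in practice.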
Randomization techniques: randomization with NAF recoding was proposed to prevent DPA attacks [120]; the scheme incurs a certain cost in point additions and point doublings. The authors also proposed using a randomized signed-scalar representation with an addition-subtraction multiplication algorithm to prevent SPA attacks, with the same cost for both point addition and doubling. Their scheme is resistant to timing attacks because the scalar changes on each run depending on the randomized signed-scalar representation, and they showed through experimental results that it is resistant to power attacks. In addition, Samotyja and Lemke-Rust [121] evaluated randomization techniques such as scalar blinding and splitting with the prime-256, brainpool256r1, and Ed25519 curves in ECC/ECDSA. Their results demonstrated that these techniques prevent SCA attacks. However, Bhattacharya et al. [122] investigated the application of scalar blinding and splitting techniques in ECC/ECDSA with asynchronous samples and noted that these techniques are vulnerable to branch misprediction, DPA, and template attacks. To eliminate such attacks, they recommended making execution independent of the scalar or implementing parallel random branching executions. Dou et al. [107] noted that integrating scalar randomization and side-channel atomicity techniques counters SCA attacks such as SPA and DPA. Similarly, Poussier et al. [123] pointed out that scalar blinding is a countermeasure against SPA.
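Scalar blinding, one of the techniques evaluated in [121, 122], can be illustrated on a toy curve: replace k by k' = k + r·n for a fresh random r, which leaves the resulting point unchanged (since nG = O) while decorrelating the scalar bits processed on each run. The parameters below are insecurely small and purely illustrative:

```python
import secrets

# Toy curve y^2 = x^3 + 2x + 2 over F_17, generator G of prime order 19.
P_MOD, A = 17, 2
G, N = (5, 1), 19

def inv(x, m):
    return pow(x, -1, m)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % P_MOD == 0:
        return None  # point at infinity
    lam = ((3 * P[0] ** 2 + A) * inv(2 * P[1], P_MOD) if P == Q
           else (Q[1] - P[1]) * inv(Q[0] - P[0], P_MOD)) % P_MOD
    x = (lam * lam - P[0] - Q[0]) % P_MOD
    return (x, (lam * (P[0] - x) - P[1]) % P_MOD)

def scalar_mult(k, P):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

k = 13                          # secret scalar
r = secrets.randbelow(100) + 1  # fresh blinding factor each run
blinded = k + r * N             # k' = k + r*n
# Same resulting point, but the bit pattern fed to double-and-add differs
# on every execution, which is what frustrates DPA-style averaging.
assert scalar_mult(blinded, G) == scalar_mult(k, G)
```

On real curves the blinding factor must be large enough (e.g. at least 32-64 bits) that the blinded scalar cannot be recovered by the lattice and branch-prediction attacks noted in [122].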

5.2.2 Random of point repres