A 0.16pJ/bit Recurrent Neural Network Based PUF for Enhanced Machine Learning Attack Resistance

12/13/2018 ∙ by Nimesh Shah, et al. ∙ IIT Kharagpur ∙ Nanyang Technological University

Physically Unclonable Function (PUF) circuits are finding widespread use due to the increasing adoption of IoT devices. However, existing strong PUFs such as the Arbiter PUF (APUF) and its compositions are susceptible to machine learning (ML) attacks because the challenge-response pairs have a linear relationship. In this paper, we present a Recurrent-Neural-Network PUF (RNN-PUF) which uses a combination of feedback and the XOR function to significantly improve resistance to ML attacks, without a significant reduction in reliability. ML attack accuracy is also partly reduced by using a shared comparator with offset cancellation to remove bias and save power. From simulation results, we obtain an ML attack accuracy of 62%, which represents a 33.5% improvement in figure-of-merit over the conventional NN-PUF. Power consumption is estimated to be 12.3uW, with an energy/bit of 0.16pJ.


1 Introduction

Device connectivity in the Internet of Things (IoT) is predicted to increase exponentially, while electronics usage in automobiles has also increased significantly. With the arrival of self-driving-automobile technology, the role of hardware security will only become more important. PUFs provide a safe way to generate unique identification (ID) or authentication bits, compared with non-volatile memories that can be hacked using widely known techniques. Both analog and digital circuit techniques have been used to design PUFs [1, 2, 3], with a preference for sub-threshold analog techniques for low-power operation. There are two kinds of PUFs, weak and strong [4]: the weak one is used solely for ID, while the strong one is used for authentication and/or ID. Strong PUFs have a large challenge-response pair (CRP) space, such that an attacker cannot predict the response to some arbitrary challenge even with knowledge of many other CRPs. However, several mathematical models have been developed for existing strong PUFs, largely using machine learning (ML) techniques to learn the expected patterns of the PUF's CRPs [5]. Alternative design strategies using analog components have been proposed, as in [6, 7], with improved resistance compared to the classic Arbiter PUF. In this paper, we present a technique to significantly decrease the machine learning attack accuracy (MLAA), specifically for analog PUFs using neural networks (NN), without a heavy penalty on reliability. Our RNN-PUF uses a combination of recurrence (feedback) and XOR to generate a response whose relationship with the particular challenge is obscured from the attacker by the randomness provided by the previous response.

Our contributions in this work are as follows:

  • We first show the high MLAA of a recently proposed analog NN-PUF, demonstrating that analog PUFs are not inherently robust against ML attacks.

  • We propose a simple method to reduce MLAA for the NN-PUF at the cost of more area.

  • We propose the RNN-PUF that has both low MLAA and low area.

  • We propose a design guide for the RNN-PUF in which area and MLAA/reliability are traded off.

The paper is organized as follows: Section 2 describes the conventional PUF based on current mirror arrays (CMA). Section 3 introduces the RNN-PUF, its operational details and its reliability/accuracy. Section 4 discusses the design guide, while Section 5 presents power consumption simulations and comparison with state-of-art. Finally, Section 6 concludes the paper.

2 The Conventional NN-PUF

The conventional PUF without recurrence is based on a Neural-Network PUF [8] using a CMA with high mismatch. We choose the NN-PUF as the base PUF because it can be reused for ML, another important functionality becoming ubiquitous in IoT [9]. It also has the lowest area per bit reported so far. Similar arrays may be constructed out of the current mirrors reported in [7, 10]. Figure 1 shows the schematic of the conventional NN-PUF. A 1-bit response is generated by comparing the output currents of a column pair specified by the column challenge. The row challenge bits select which rows of transistors are active. The current values for each column of the column pair are summed, and the two results are sent to the comparator, which outputs 1 if I_X(c) > I_Y(c) and 0 otherwise, where I_X(c) denotes the output current of column X given a challenge c. The only difference is that we show a shared comparator, while [8] used current-controlled oscillators (CCO) to digitize the current values. We note that the maximum number of possible column pairs is C(C-1)/2, where C is the number of columns. However, we can only get C-1 independent column pairs, because if we know that I_A > I_B and I_B > I_C, then we know for sure that I_A > I_C [4].
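To make the column-pair comparison concrete, it can be sketched in a few lines of Python. The array size, the log-normal mismatch spread, and the challenge encoding below are illustrative assumptions, not the device model from [8]:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
ROWS, COLS = 64, 8

# Illustrative per-device mismatch: each mirror's current deviates randomly
# from nominal (the log-normal spread is an assumption, not measured data).
currents = rng.lognormal(mean=0.0, sigma=0.3, size=(ROWS, COLS))

def nn_puf_response(row_challenge, col_x, col_y):
    """1-bit response: compare the summed currents of two challenged columns."""
    active = row_challenge.astype(bool)      # row challenge selects active rows
    i_x = currents[active, col_x].sum()
    i_y = currents[active, col_y].sum()
    return int(i_x > i_y)

challenge = rng.integers(0, 2, size=ROWS)
bit = nn_puf_response(challenge, col_x=0, col_y=1)
```

Note that for any fixed row challenge, the comparisons are transitive, which is exactly why only C-1 of the column pairs are independent.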

Figure 1: Architecture of Conventional NN-PUF

2.1 Conventional NN-PUF Reliability

In order to improve the reliability of the NN-PUF, it was shown in [8] that we can use a dynamic threshold control I_th, but it comes at the cost of discarding more CRPs. In that scheme, a response is considered valid if and only if |I_X - I_Y| > I_th. If invalid, a different challenge has to be used.
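The validity filter is a simple magnitude check; a minimal sketch follows, where the current units and the numeric values are arbitrary placeholders:

```python
def is_valid_crp(i_x, i_y, i_th):
    """Keep a CRP only if the column currents differ by more than I_th, so
    temperature/voltage drift is unlikely to flip the comparator decision."""
    return abs(i_x - i_y) > i_th

# A marginal pair is discarded; a well-separated pair is kept.
marginal = is_valid_crp(1.000, 1.004, i_th=0.015)    # False: discard
separated = is_valid_crp(1.10, 1.00, i_th=0.015)     # True: keep
```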

Figure 2 shows the reliability of the conventional PUF against I_th for the collected CRPs. Reliability is defined as the probability of the 1-bit output being correct across different operating conditions such as temperature, voltage, and time. As shown in [8], we also observed that reliability variation with temperature is far more dominant than other effects, so we will use this worst-case reliability over temperature for our analysis going forward. The reliability of this NN-PUF without recurrence is denoted p; this notation will be useful when we discuss the reliability of the RNN-PUF.

Figure 2: Conventional NN-PUF reliability variation against I_th
Figure 3: Conventional PUF ML Accuracy - Linear SVM

2.2 Conventional NN-PUF ML Attack Accuracy

The idea of using ML to predict responses of PUFs is well known [5, 11]. We attempt to use different ML algorithms to predict the response to a given challenge. To assess the ML resistance of the basic structure, the column-pair part of the challenge is initially kept fixed, i.e., it is not randomized across different row parts of the challenge. Using a Linear Support Vector Machine (LSVM), we obtained the results shown in Figure 3. Since the prediction accuracy exceeds the reliability beyond a certain number of training CRPs, that number can be used to denote the secure space of the conventional PUF [12]. The results are not surprising, because the conventional current-mirror-based PUF, for a fixed column pair, is similar to the APUF that has been accurately modeled before. The linear relationship between challenge and response is easily learned by the LSVM with very few training CRPs. This result shows that even analog PUFs are prone to ML attacks; there is nothing inherently secure about analog PUFs, and careful design is necessary.
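The attack itself needs nothing exotic. Below is a hedged sketch in which a least-squares linear model stands in for the LSVM, attacking a toy linear PUF whose response is the sign of a weighted sum of challenge bits; the weights and CRP counts are invented for illustration, not taken from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_BITS, N_TRAIN, N_TEST = 64, 4000, 1000

# Toy linear PUF: the response is the sign of a weighted sum of challenge
# bits, mimicking the additive currents of a fixed column pair.
w = rng.normal(size=N_BITS)
phi = 2 * rng.integers(0, 2, size=(N_TRAIN + N_TEST, N_BITS)) - 1
responses = (phi @ w > 0).astype(float) * 2 - 1      # labels in {-1, +1}

# Least-squares linear model as a stand-in for the Linear SVM attack.
w_hat, *_ = np.linalg.lstsq(phi[:N_TRAIN], responses[:N_TRAIN], rcond=None)
pred = np.sign(phi[N_TRAIN:] @ w_hat)
acc = (pred == responses[N_TRAIN:]).mean()
# Any linear learner models a linear PUF well above the 50% guessing rate.
```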

Another problem with conventional PUFs such as those in [6, 8] is the low Hamming Distance Test [13] value of the output due to the comparator's bias, which skews the output toward 0 or 1 and makes it easier to predict the response. For the NN-PUF in [8], this bias can be modeled as a gain factor multiplying the output current of the X-th column.

Figure 4: (a) Recurrent Neural Network PUF concept (b) Circuit-level RNN-PUF core (c) High speed comparator with offset cancellation

3 Proposed RNN-PUF

In order to improve over a conventional PUF's attack susceptibility, we propose the RNN-PUF, whose concept is shown in Figure 4(a). Feed-forward neural networks such as the Extreme Learning Machine have been implemented in hardware before [8] and used in dual mode as a PUF. Our hypothesis was that if we apply the idea of Recurrent Neural Networks to a PUF, it may improve resistance to ML attacks by introducing non-linearity into the CRP space through the feedback mechanism. However, this means that one needs as many fed-back bits as there are challenge bits. A high number of fed-back output bits increases both power and area, and also reduces reliability. In order to avoid these issues, we propose another improvement wherein we XOR the feedback bits with the challenge bits.

The core of the RNN-PUF is an array of current mirrors with a shared comparator at the output, as shown in Figures 4(b) and 4(c). If there are R rows of current mirrors and N bits are fed back, then each feedback bit is XOR-ed with R/N challenge bits to produce the new challenge after feedback. By sharing the comparator, we can drastically cut power consumption compared with individual comparators for each column pair. Additionally, by using offset cancellation we can remove the comparator bias. Another free parameter of the RNN-PUF is T, the number of times recurrence is applied before producing the final output bit. We show a design space exploration for choosing N and T in Section 4.
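A behavioral sketch of the recurrence loop is shown below, assuming one feedback bit per cycle (N = 1), two recurrence cycles (T = 2), and an invented mismatch model; the column-pair schedule is also made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
ROWS, COLS = 64, 8
T = 2                      # recurrence cycles; N = 1 feedback bit per cycle

# Illustrative mismatch currents (placeholder for real sub-threshold mismatch).
currents = rng.lognormal(mean=0.0, sigma=0.3, size=(ROWS, COLS))

def puf_bit(row_challenge, col_x, col_y):
    active = row_challenge.astype(bool)
    return int(currents[active, col_x].sum() > currents[active, col_y].sum())

def rnn_puf_response(challenge, col_pairs):
    """T recurrence cycles: each intermediate output bit is XOR-ed into the
    challenge before the next evaluation; col_pairs holds T + 1 pairs."""
    ch = challenge.copy()
    for t in range(T):
        fb = puf_bit(ch, *col_pairs[t])
        # With N = 1, the single feedback bit covers all ROWS challenge bits;
        # for N > 1, each bit would be XOR-ed with ROWS // N challenge bits.
        ch = ch ^ fb
    return puf_bit(ch, *col_pairs[T])

challenge = rng.integers(0, 2, size=ROWS)
bit = rnn_puf_response(challenge, [(0, 1), (2, 3), (4, 5)])
```

The final bit depends on intermediate comparisons the attacker never observes, which is the source of the non-linearity exploited in the next section.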

3.1 RNN-PUF ML Attack Accuracy

To make a fair comparison, we first show the MLAA reduction from using a shared comparator for the RNN-PUF, as opposed to using a CCO for each column pair as in the conventional PUF. For these results, we randomize the column pair selection and disable recurrence, i.e., the circuit is just a conventional PUF without the comparator's bias. The results are shown in Table 1 and are generated from the collected CRPs using 5-fold cross-validation. There is considerable improvement from averaging out the comparator's bias, and we get a much stronger PUF against ML attacks.

Figure 5: MLAA (Boosted Trees) comparison for NN-PUF and RNN-PUF, 25,000 CRPs for each
                 W/ comp. bias   W/o comp. bias
Deep NN          71.14%          52.74%
Linear SVM       51.3%           50.7%
Bag/Boost Trees  63.5%           50.9%
Table 1: ML attack accuracies: conventional NN-PUF with and without comparator bias

Now we enable recurrence for the RNN-PUF and use a shared comparator to remove bias. Although the NN-PUF with randomized column pairs is very resistant to ML attacks, Figure 5 shows that we can design the RNN-PUF (N = 1, T = 2) with far fewer columns of current mirrors (and hence much smaller area) for the same MLAA compared with the NN-PUF. As with the conventional PUF ML tests, we fix the final column pair selection while the intermediate-cycle column pair selections are random. MLAAs for several algorithms are shown in Figure 6 for N = 1 and T = 2, as a function of training CRPs. An accuracy close to 50% means that the RNN-PUF (with N = 1, T = 2) is very resilient to ML attacks, and predicting the response is no better than a 50-50 guess. Higher-order SVMs such as the Cubic SVM also show MLAA of around 50%, so their results are not shown here. Advanced ML algorithms such as Ensemble Classifiers and Boosted/Bagged Trees are known to be more effective at breaking strong PUFs [5, 14] compared to SVMs. However, even these algorithms are unable to break the RNN-PUF, showing the effectiveness of the non-linear structure due to recurrence. These results are all the more meaningful because even if an attacker has physical access to the RNN-PUF, he would not be able to access its intermediate outputs without tampering with, and destroying, the PUF circuitry. As a result, trying to model the RNN-PUF by reading out the intermediate bits would be difficult even with sophisticated equipment.

It is useful to compare the RNN-PUF with other PUFs that have had non-linearity introduced into them. Two such PUFs are the XOR APUF and the Feed-Forward APUF (FF-APUF), both of which have been attacked using ML techniques [5, 15, 16]. In the k-XOR APUF, k APUF outputs are XOR-ed to generate one response bit. This is like XOR-ing several column pair outputs of the conventional NN-PUF; hence, it is not similar to our RNN-PUF, since we use both XOR-ing and cascading operations. In the FF-APUF, bits are generated at random locations within an APUF and are then fed as part of the challenge bits to the rest of the APUF delay chain. This is comparable to feeding back output bits of the conventional NN-PUF as the challenge, without XOR-ing. Thus, it is safe to say that no attacks have been performed on APUF-like structures that use both XOR-ing and cascading, which is what the RNN-PUF is based on. The PUF construction closest to the RNN-PUF is the composite PUF in [17], but no modeling attacks on it have been discussed.

Figure 6: MLAA for RNN-PUF with N = 1 and T = 2 vs. number of training CRPs.

3.2 RNN-PUF Reliability

The reliability of the RNN-PUF can be decomposed into the reliabilities of an XOR and a cascade of PUFs [17]. Figure 7 shows the RNN-PUF as a composition of PUFs. This view lets us relate the RNN-PUF reliability to a conventional PUF's reliability. We first analyze the reliability of the XOR operation and of the cascade operation separately, and then combine them for the final analysis. The XOR and cascade reliabilities are given by [17]:

R_XOR = p_a * p_b + (1 - p_a)(1 - p_b)    (1)
R_cas = p_a * p_b    (2)
Figure 7: RNN-PUF modeled as a composition of NN-PUFs

where p_a and p_b are the individual reliabilities of any two processes/variables. We analyze the reliabilities as a function of N and T, where N is the number of intermediate bits fed back and T is the number of recurrences. For the XOR operation, p_a is the reliability of a bit fed back from the PUF, while p_b is the reliability of the challenge bits, which is just 1. So R_XOR simply reduces to p_a. For the cascade operation, p_a is the reliability of the PUF output, while p_b is the reliability of the PUF behind it in the chain. So the final reliability is just the product p_a * p_b.

Therefore, the theoretical reliability can be modeled by a recursive equation:

R(i) = p * R(i-1)^N, with R(0) = p    (3)

where p is the reliability of the NN-PUF and R(i) is the reliability of each of the N fed-back bits after i cycles. This is the theoretical lower bound on the reliability. In practice, the observed reliability is higher, because there is a chance that the final output is the same even though the intermediate bits are different. Ideally, the probability that the final output is correct despite incorrect intermediate bits is 1/2. This notion can be expressed with another equation in which a correction term is added:

R'(i) = p * R'(i-1)^N + (1/2) * (1 - R'(i-1)^N)    (4)
Figure 8: (a) Reliability vs. T. (b) Reliability vs. N.

where the second term, with the multiplier of 1/2, denotes the case when the final output is correct even though the intermediate bits are incorrect. We can improve the reliability by increasing I_th, but at the cost of reducing the secure CRP space. We optimize this trade-off in our Design Guide in Section 4.
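The recursive reliability model is easy to evaluate numerically. The sketch below implements a lower-bound recursion (wrong intermediate bits always produce a wrong output) and a corrected version that credits the 1/2 chance of a correct output despite wrong intermediates; the closed forms are our interpretation of [17], and p = 0.95 is a made-up example value, not a measured reliability:

```python
def rnn_reliability(p, n_fb, t_cycles):
    """Return (lower-bound, corrected) reliability after t_cycles recurrences
    with n_fb fed-back bits, each NN-PUF evaluation having reliability p."""
    lower = corrected = p                      # R(0) = p: one plain evaluation
    for _ in range(t_cycles):
        lower = p * lower ** n_fb              # wrong intermediates -> wrong output
        r_int = corrected ** n_fb              # prob. all fed-back bits correct
        corrected = p * r_int + 0.5 * (1 - r_int)  # half the bad cases still land right
    return lower, corrected

lo, hi = rnn_reliability(p=0.95, n_fb=1, t_cycles=2)
# For N = 1 the lower bound collapses to p^(T+1); the corrected value sits above it.
```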

Figure 9: FoM comparison of RNN-PUF for different N and T values.

Figures 8(a) and 8(b) show reliability as a function of T and N. When choosing N, we make sure that N is less than half the number of columns, so that we are always choosing independent column pairs and avoid low Hamming-Distance-Test (HDT) values. We observe that the reliability drops as N or T is increased, so we should choose low values of N and T. We use these results in Section 4. Recurrence causes the reliability of the RNN-PUF to drop compared with that of the conventional PUF; however, the gain in ML attack resistance far exceeds the loss in reliability.

4 Design Guide for RNN-PUF

The RNN-PUF has five parameters that we can control: N, T, I_th, and the number of rows (R) and columns (C). The FoM used to compare the effect of these parameters is the difference between reliability and MLAA:

FoM = Reliability - MLAA    (5)

Figure 9 shows the RNN-PUF's FoM for different N and T. We notice that MLAA immediately drops to around 50% for N = 1 and T = 1 for the 128x128 RNN-PUF, with the reliability being the highest for these parameters. Increasing N or T decreases reliability, so increasing them does not improve the FoM much, as we see in Figure 9. As a result, it is better to go with a low value of T, either 1 or 2, and N = 1. In this way, power consumption also stays low, since only one or two column pairs are being used. We recommend T = 2, since there is then a higher chance that the intermediate challenge is different from the original one and hence more difficult to predict.

If we keep the number of rows fixed and increase the number of columns C, we see a clear trend of decreasing MLAA in Figure 10(a). Since reliability stays almost the same, the FoM increases. However, increasing the number of columns increases area. Beyond 8 columns the additional drop in MLAA is marginal, and with reliability remaining the same, we note that 8 columns give sufficient performance.

Assuming 8 columns, the number of rows R needs to be at least 32. This is because for R = 16 there would only be about 262,000 CRPs, which can be cracked in a few days using a brute-force method. If we go with R = 32, we do get a high number of CRPs, but in order to get high reliability our I_th needs to be around 0.015, which means we lose 25% of the CRPs. After losing these CRPs, the remaining secure space of CRPs may or may not be sufficient to withstand a brute-force attack, depending on how quickly the CRPs can be collected.
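The CRP arithmetic above can be reproduced under one assumption about pairing, namely that an array with C columns offers C/2 disjoint (independent) column pairs; that assumption is what reproduces the roughly 262,000 figure:

```python
def crp_space(rows, cols):
    """Total CRPs = 2^rows row challenges times cols // 2 disjoint column
    pairs. The pairing rule is an assumption chosen to match the ~262,000
    figure quoted for a small-row design, not a formula from the paper."""
    return (2 ** rows) * (cols // 2)

small = crp_space(16, 8)        # ~262k CRPs: brute-forceable in days
large = crp_space(64, 8)        # ample CRPs before I_th filtering
secure = large * 0.75           # after discarding ~25% of CRPs via I_th
```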

Figure 10: (a) Trade-off of area vs. secure space (N = 1 and T = 2) (b) MLAA for 64x8 RNN-PUF with N = 1 and T = 2 (c) Reliability vs. I_th for 64x8 RNN-PUF with N = 1 and T = 2.
                 ASPDAC'18 [3]  FF-Arb [5]  XOR-Arb [5]  VLSI'17 [6]  VLSI'17 [7]  TCAS'17 [8]  This Work
Array Size       32x2           72x2        128x3        64x64        (5x13)x2     128x128      64x8
Temp. Range, C   -              27 to 70    -            0 to 80      -20 to 80    -45 to 90    -45 to 90
Reliability, %   -              90.2        -            89           88           92.5         92 (a)
MLAA, %          80             95.5        99           77 (b)       60           95 (c)       61
FoM, %           -              -5.3        -            12           28           -2.5         31
Energy/bit, pJ   -              -           -            0.097        11           3.36         0.16
Technology       FPGA           0.18um      FPGA         28nm         0.13um       0.35um       65nm
(a) Worst-case native reliability at 80C for fair comparison with [6] and [7], without any dynamic thresholding.
(b) For 10,000/64 = 156 CRPs/column, from Fig. 9 in [6].
(c) For 200,000/64 = 3125 CRPs/col. pair, from Fig. 3 in this paper.
Table 2: Comparison Table

Furthermore, by having only 32 rows, we do not get sufficient entropy [14]. Figure 10(a) compares the FoM for R = 32 and R = 64. We can see that the FoM does not degrade for R = 64 despite slightly worse reliability. As a result, we can confidently choose 64 rows. In order to have enough CRPs, we go for the 64x8 RNN-PUF with N = 1 and T = 2. Using Figure 10(c), we pick I_th = 0.015, so we lose about 25% of the CRPs to achieve a reliability of 92%. The MLAA for the remaining CRPs using several algorithms is around 61% at best, as seen in Figure 10(b), implying the FoM is still a respectable 31%. Figure 10(c) also shows the theoretical reliability plotted using Equation 4 and the lower bound using Equation 3. We can observe that Equation 4 accurately models the actual reliability. In these equations, we use the fact that N = 1, so the lower bound reduces to R(i) = p * R(i-1).

5 Power Consumption & Comparison

We estimate the power consumption of the RNN-PUF by simulating the 64x8 array of current mirrors, the shared comparator, and the XOR gates. The technology used is UMC 65nm with a 1.2V supply voltage. Since the comparator is shared and T = 2, the comparator fires three times before we get the final output. All the XORs operate in parallel; hence, the latency of this operation is just that of one XOR gate. The total power consumed is 12.3uW, of which the array and comparator consume 11.6uW, and the XORs just 0.7uW. At the simulated clock frequency, the time from reset to obtaining the final output bit after recurrence is about 13ns. As a result, we obtain an approximate energy/bit value of 0.16pJ/bit.
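The energy figure follows directly from power times latency; a one-line cross-check, where the 13ns latency is inferred from the reported power and energy rather than being an independent measurement:

```python
# Cross-check of the reported numbers: energy/bit = power x latency.
power_w = 12.3e-6       # total simulated power (array + comparator + XORs)
latency_s = 13e-9       # approx. reset-to-final-output time for T = 2 (inferred)
energy_per_bit = power_w * latency_s
# about 0.16 pJ/bit, consistent with the reported figure
```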

Table 2 compares our results with recent work. Ref. [7] shows good ML resistance, but its results are only for 10,000 CRPs, as opposed to almost 200,000 CRPs collected for our work, over a wider temperature range. Additionally, the RNN-PUF consumes less energy and has a higher FoM of 31% compared to 28% at the worst-case temperature of 80C. For a fair comparison with the RNN-PUF, which uses 8 columns, we have deduced the MLAAs of [6] and [8] for the whole array rather than for just a single column pair. For a multi-column/column-pair PUF, one can run several ML attackers in parallel, one per column/column pair, because each column/column pair is easily attacked individually, and thereby crack the whole PUF.

6 Conclusions

We have presented a Recurrent Neural Network strong PUF that is more resistant to ML attacks than conventional PUFs. Recurrence, along with the XOR operation, introduces non-linearity that makes it difficult even for advanced ML algorithms like Ensemble Classifiers to model the RNN-PUF. We analyzed the trade-offs and developed a design guide to optimally design the RNN-PUF. Reducing the MLAA came at the cost of reduced reliability, but the gap between reliability and attack accuracy is large and positive enough to choose the RNN-PUF over a conventional PUF. Power consumption is kept low due to sub-threshold operation of the current mirrors, making the RNN-PUF attractive for use in IoT devices. For future work, we will implement the RNN-PUF as a mixed-signal ASIC and will try using RNN algorithms to perform ML attacks on the PUF.

References

  • [1] A. Alvarez, W. Zhao and M. Alioto. 15fJ/b static physically unclonable functions for secure chip identification with <2% native bit instability and 140X Inter/Intra PUF hamming distance separation in 65nm. 2015 Intl. Solid-State Circuits Conf. (ISSCC).
  • [2] S. Stanzione, D. Puntin and G. Iannaccone. CMOS Silicon Physical Unclonable Functions Based on Intrinsic Process Variability. 2011 Journal of Solid-State Circuits.
  • [3] Q. Ma et al. A Machine Learning Attack Resistant Multi-PUF Design on FPGA. 2018 23rd Asia and S. Pac. Design Auto. Conf. (ASP-DAC).
  • [4] R. Maes. Physically Unclonable Functions: Constructions, Properties and Applications. Springer, pp. 32-33, 2013.
  • [5] U. Ruhrmair et al. Modeling attacks on physical unclonable functions. 2010 Proc. of the ACM Conf. on Comp. and Comms. Sec.
  • [6] S. Jeloka et al. A sequence dependent challenge-response PUF using 28nm SRAM 6T bit cell. 2017 Symp. on VLSI Circuits.
  • [7] X. Xi, H. Zhuang, N. Sun and M. Orshansky. Strong subthreshold current array PUF with 2^65 challenge-response pairs resilient to machine learning attacks in 130nm CMOS. 2017 Symp. on VLSI Circuits.
  • [8] Z. Wang et al. Current Mirror Array: A Novel Circuit Topology for Combining Physical Unclonable Function and Machine Learning. 2017 Trans. on Circuits and Sys. I: Reg. Papers.
  • [9] P. Whatmough et al. A 28nm SoC with a 1.2GHz 568nJ/prediction sparse deep-neural-network engine with >0.1 timing error rate tolerance for IoT applications. 2017 Intl. Solid-State Circuits Conf. (ISSCC).
  • [10] R. Kumar and W. Burleson. On design of a highly secure PUF based on non-linear current mirrors. 2014 Intl. Symp. on Hardware-Oriented Sec. and Trust (HOST).
  • [11] D. Lim. Extracting secret keys from integrated circuits. M.S. thesis, Mass. Inst. Tech., Cambridge, 2004.
  • [12] G. Hospodar, R. Maes and I. Verbauwhede. Machine learning attacks on 65nm Arbiter PUFs: Accurate modeling poses strict bounds on usability. 2012 Intl. Wksh. on Info. Forensics and Sec. (WIFS).
  • [13] P. Nguyen, D. Sahoo, R. Chakraborty and D. Mukhopadhyay. Security Analysis of Arbiter PUF and Its Lightweight Compositions Under Predictability Test. 2016 ACM Trans. Design Auto. Elec. Sys.
  • [14] A. Vijayakumar, V. Patil, C. Prado and S. Kundu. Machine learning resistant strong PUF: Possible or a pipe dream?. 2016 Intl. Symp. on Hardware Oriented Sec. and Trust (HOST).
  • [15] B. Gassend, D. Lim, D. Clarke, M. Dijk and S. Devadas. Identification and Authentication of Integrated Circuits. 2004 Concurrency Computat.: Pract. Exper.
  • [16] G. Suh and S. Devadas. Physical unclonable functions for device authentication and secret key generation. 2007 Proc. of the 44th annual Des. Auto. Conf. (DAC).
  • [17] D. Sahoo et al. Composite PUF: A new design paradigm for Physically Unclonable Functions on FPGA. 2014 Intl. Symp. on Hardware-Oriented Sec. and Trust (HOST).