1 Introduction
Device connectivity in the Internet of Things (IoT) is predicted to grow exponentially, while electronics content in automobiles has also increased significantly. With the arrival of self-driving automobile technology, the role of hardware security will only become more important. Physically unclonable functions (PUFs) provide a safe way to generate unique identification (ID) or authentication bits, compared with non-volatile memories that can be hacked using widely known techniques. Both analog and digital circuit techniques have been used to design PUFs [1, 2, 3], with a preference for subthreshold analog techniques for low-power operation. There are two kinds of PUFs, weak and strong [4]: the weak kind is used solely for ID, while the strong kind is used for authentication and/or ID. Strong PUFs have a large challenge-response pair (CRP) space, such that an attacker cannot predict the response to an arbitrary challenge even with knowledge of many other CRPs. However, several mathematical models have been developed for existing strong PUFs, largely using machine learning (ML) techniques to learn the expected patterns of a PUF's CRPs [5]. Alternative design strategies using analog components, such as [6, 7], have been proposed with improved resistance over the classic Arbiter PUF (APUF). In this paper, we present a technique to significantly decrease the machine learning attack accuracy (MLAA), specifically for analog PUFs based on neural networks (NNs), without a heavy penalty in reliability. Our RNN-PUF uses a combination of recurrence (feedback) and XOR to generate a response whose relationship with the applied challenge is obscured from the attacker by the randomness provided by the previous response.
Our contributions in this work are as follows:

We first show a high MLAA for a recently proposed analog NN-PUF, demonstrating that analog PUFs are not inherently robust against ML attacks.

We propose a simple method to reduce the MLAA of the NN-PUF at the cost of more area.

We propose the RNN-PUF, which has both low MLAA and low area.

We provide a design guide for the RNN-PUF in which area is traded off against MLAA and reliability.
The paper is organized as follows: Section 2 describes the conventional PUF based on current mirror arrays (CMAs). Section 3 introduces the RNN-PUF, its operational details, and its reliability/accuracy. Section 4 discusses the design guide, while Section 5 presents power consumption simulations and a comparison with the state of the art. Finally, Section 6 concludes the paper.
2 The Conventional NN-PUF
The conventional PUF without recurrence is based on a Neural-Network PUF (NN-PUF) [8] using a CMA with high mismatch. We choose the NN-PUF as the base PUF because it can be reused for ML, another important functionality becoming ubiquitous in IoT [9]. It also has the smallest area per bit reported so far. Similar arrays may be constructed out of the current mirrors reported in [7, 10]. Figure 1 shows the schematic of the conventional NN-PUF. A 1-bit response is generated by comparing the output currents of a column pair specified by the column challenge. The row challenge bits select which rows of transistors are active. The current values for each column of the column pair are summed, and the two results are sent to the comparator, which outputs 1 or 0 depending on which column's summed current is larger for the given challenge. The only difference from [8] is that we show a shared comparator, whereas [8] used current-controlled oscillators (CCOs) to digitize the current values. We note that with C columns, the maximum number of possible column pairs is C(C-1)/2, but only C-1 of these pairs are independent: if we know that column A's current exceeds column B's, and B's exceeds column C's, then we know for sure that A's exceeds C's [4].
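As a toy illustration of this response generation, the following Python sketch models the CMA with a hypothetical lognormal mismatch (the mismatch spread, array size, and function names are illustrative assumptions, not values from the paper) and produces one response bit by comparing column-current sums:

```python
import numpy as np

def make_cma(rows, cols, sigma=0.3, seed=0):
    # Hypothetical mismatch model: each mirror's output current is a
    # nominal 1.0 scaled by lognormal device mismatch of spread sigma.
    rng = np.random.default_rng(seed)
    return np.exp(rng.normal(0.0, sigma, (rows, cols)))

def nn_puf_bit(cma, row_challenge, col_a, col_b):
    # The row challenge selects the active rows; the currents of the two
    # selected columns are summed and compared by an ideal comparator.
    active = row_challenge.astype(bool)
    i_a = cma[active, col_a].sum()
    i_b = cma[active, col_b].sum()
    return int(i_a > i_b)

cma = make_cma(64, 8)
row_ch = np.random.default_rng(1).integers(0, 2, 64)
bit = nn_puf_bit(cma, row_ch, 0, 1)
```

The same challenge always activates the same set of mirrors, so the response is repeatable on one chip (up to noise), while mismatch across chips makes it unique.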
2.1 Conventional NN-PUF Reliability
In order to improve the reliability of the NN-PUF, it was shown in [8] that a dynamic threshold control can be used, but it comes at the cost of discarding more CRPs. In that scheme, a response is considered valid if and only if the difference between the two compared currents exceeds the threshold. If invalid, a different challenge has to be used.
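A minimal sketch of this validity check, reusing a hypothetical lognormal mismatch model (the threshold value `i_th`, the array size, and the names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
cma = np.exp(rng.normal(0.0, 0.3, (64, 8)))   # hypothetical mismatch model

def nn_puf_bit_thresholded(cma, row_challenge, col_a, col_b, i_th):
    # A response is valid only if the compared column currents differ by
    # more than i_th; otherwise the CRP is discarded (None is returned)
    # and a different challenge must be used.
    active = row_challenge.astype(bool)
    i_a = cma[active, col_a].sum()
    i_b = cma[active, col_b].sum()
    return None if abs(i_a - i_b) <= i_th else int(i_a > i_b)

row_ch = rng.integers(0, 2, 64)
bit = nn_puf_bit_thresholded(cma, row_ch, 0, 1, i_th=0.5)
```

Raising `i_th` discards the marginal CRPs whose sign could flip across temperature or voltage, trading CRP count for reliability.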
Figure 2 shows the reliability of the conventional PUF as a function of the threshold. Reliability is defined as the probability of a 1-bit output being correct across different operating conditions (temperature, voltage, aging, etc.). As in [8], we observed that the reliability variation with temperature is far more dominant than the other effects, so we use the worst-case reliability at 80°C for our analysis going forward. The reliability of this NN-PUF without recurrence is denoted R0; this notation will be useful when we discuss the reliability of the RNN-PUF.
2.2 Conventional NN-PUF ML Attack Accuracy
The idea of using ML to predict responses of PUFs is well known [5, 11]. We attempt to use different ML algorithms to predict the response to a given challenge. To assess the ML resistance of the basic structure, the column-pair part of the challenge is initially kept fixed, i.e., it is not randomized across different row parts of the challenge. Using a Linear Support Vector Machine (LSVM), we obtained the results shown in Figure 3. Since the attack accuracy exceeds the reliability beyond a certain number of CRPs, that number can be used to denote the secure space of the conventional PUF [12]. The results are not surprising: the conventional current-mirror-based PUF, for a fixed column pair, is similar to the APUF, which has been accurately modeled before. The linear relationship between challenge and response is easily learned by the LSVM with very few training CRPs. This result shows that even analog PUFs are prone to ML attacks; there is nothing inherently secure about analog PUFs, and careful design is necessary.
Another problem with conventional PUFs such as those in [6, 8] is a low Hamming Distance Test [13] value of the output due to the comparator's offset, which biases the output toward 0 or 1 and makes the response easier to predict. For the NN-PUF in [8], this can be modeled as a gain factor multiplying the output current of the Xth column.
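To see why a fixed column pair is so easy to learn, note that the response reduces to the sign of a weighted sum of the row-challenge bits, where the weights are the per-row current differences between the two columns. The sketch below is our own toy model of this, using plain least squares as a stand-in for the LSVM (the sizes and the Gaussian weight assumption are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS, N_CRP = 32, 2000

# Toy linear PUF: for a fixed column pair, the response is the sign of
# w . c, where w holds the per-row current differences (assumed Gaussian).
w = rng.normal(0.0, 1.0, ROWS)
C = rng.integers(0, 2, (N_CRP, ROWS)).astype(float)
r = (C @ w > 0).astype(float)

# Attacker model: fit a linear predictor on half the CRPs via least
# squares (a stand-in for the Linear SVM; no ML library is assumed).
n_train = N_CRP // 2
X = np.hstack([C, np.ones((N_CRP, 1))])            # append a bias column
coef, *_ = np.linalg.lstsq(X[:n_train], 2 * r[:n_train] - 1, rcond=None)
pred = (X[n_train:] @ coef > 0).astype(float)
acc = (pred == r[n_train:]).mean()
```

Because the challenge-response mapping is linear, even this crude fit predicts held-out responses far better than chance with only a modest training set.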
3 Proposed RNN-PUF
To improve on a conventional PUF's attack susceptibility, we propose the RNN-PUF, whose concept is shown in Figure 4(a). Feedforward neural networks such as the Extreme Learning Machine have been implemented in hardware before [8] and used in dual mode as a PUF. Our hypothesis was that applying the idea of Recurrent Neural Networks to a PUF may improve resistance to ML attacks by introducing nonlinearity into the CRP space through the feedback mechanism. A naive implementation, however, needs as many fed-back bits as there are challenge bits, and a high number of fed-back output bits increases both power and area while also reducing reliability. To avoid these issues, we propose a further improvement wherein we XOR the feedback bits with the challenge bits.
The core of the RNN-PUF is an array of current mirrors with a shared comparator at the output, as shown in Figures 4(b) and 4(c). If there are R rows of current mirrors and N bits are fed back, then each feedback bit is XORed with a group of challenge bits to produce the new challenge after feedback. By sharing the comparator, we drastically cut down power consumption compared with individual comparators for each column pair. Additionally, by using offset cancellation we can remove the comparator's bias. Another free parameter of the RNN-PUF is F, the number of times recurrence is applied before producing the final output bit. We show a design-space exploration for choosing N and F in Section 4.
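A behavioral sketch of the recurrence follows. This is our own illustrative model: the slice-wise XOR of each feedback bit into the challenge, the per-cycle random pair selection, and all names are assumptions about one plausible realization, not the paper's exact circuit behavior.

```python
import numpy as np

def rnn_puf_response(cma, challenge, pairs, F, rng):
    # pairs holds N column pairs; each cycle produces N intermediate bits,
    # XORs each bit into its own rows//N-wide slice of the challenge, then
    # re-randomizes the intermediate column pairs for the next cycle.
    rows, cols = cma.shape
    N = len(pairs)
    ch = challenge.copy()
    for _ in range(F):
        bits = []
        for a, b in pairs:
            active = ch.astype(bool)
            bits.append(int(cma[active, a].sum() > cma[active, b].sum()))
        for k, bit in enumerate(bits):
            ch[k * rows // N:(k + 1) * rows // N] ^= bit
        pairs = [tuple(rng.choice(cols, 2, replace=False)) for _ in pairs]
    # Final evaluation uses the current pair selection; in the ML tests the
    # final column pair is kept fixed while intermediate ones are random.
    a, b = pairs[0]
    active = ch.astype(bool)
    return int(cma[active, a].sum() > cma[active, b].sum())

rng = np.random.default_rng(3)
cma = np.exp(rng.normal(0.0, 0.3, (64, 8)))    # hypothetical mismatch model
ch = rng.integers(0, 2, 64)
bit = rnn_puf_response(cma, ch, [(0, 1)], F=2, rng=rng)
```

Because the effective challenge of each cycle depends on earlier response bits, the final bit is no longer a linear function of the applied challenge.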
3.1 RNN-PUF ML Attack Accuracy
To make a fair comparison, we first show the MLAA reduction obtained by using a shared comparator for the RNN-PUF, as opposed to a CCO per column pair in the conventional PUF. For these results, we randomize the column-pair selection and disable recurrence, i.e., the circuit is just a conventional PUF but without the comparator's bias. The results are shown in Table 1 and are generated using 5-fold cross-validation on the collected CRPs. There is considerable improvement from averaging out the comparator's bias, and we get a much stronger PUF against ML attacks.
Algorithm        | W/ comp. bias | W/o comp. bias
Deep NN          | 71.14%        | 52.74%
Linear SVM       | 51.3%         | 50.7%
Bag/Boost Trees  | 63.5%         | 50.9%
We now enable recurrence for the RNN-PUF and use a shared comparator to remove bias. Although the NN-PUF with randomized column pairs is very resistant to ML attacks, Figure 5 shows that we can design the RNN-PUF with far fewer columns of current mirrors (and hence much smaller area) for the same MLAA as the NN-PUF. As with the conventional PUF ML tests, we fix the final column-pair selection, while the column-pair selections of the intermediate cycles are random. MLAAs for several algorithms are shown in Figure 6. An accuracy close to 50% means that the RNN-PUF is very resilient to ML attacks: predicting the response is a 50-50 guess. Higher-order SVMs such as the cubic SVM also show MLAAs of around 50%, so their results are not shown here. Advanced ML algorithms such as ensemble classifiers and boosted/bagged trees are known to be more effective at breaking strong PUFs than SVMs [5, 14]. However, even these algorithms are unable to break the RNN-PUF, showing the effectiveness of the nonlinear structure created by recurrence. These results are all the more meaningful because even an attacker with physical access to the RNN-PUF cannot read its intermediate outputs without tampering with, and destroying, the PUF circuitry. As a result, modeling the RNN-PUF from its intermediate bits would be difficult even with sophisticated equipment.
It is useful to compare the RNN-PUF with other PUFs into which nonlinearity has been introduced. Two such PUFs are the XOR APUF and the Feed-Forward APUF (FF-APUF), both of which have been attacked using ML techniques [5, 15, 16]. In the XOR APUF, several APUF outputs are XORed to generate one response bit. This is like XORing several column-pair outputs of the conventional NN-PUF; it is not similar to our RNN-PUF, which combines XORing and cascading. In the FF-APUF, bits generated at random locations within an APUF are fed as part of the challenge bits to the rest of the APUF delay chain. This is comparable to feeding back output bits of the conventional NN-PUF as the challenge, without XORing. Thus, it is safe to say that no attacks have been performed on APUF-like structures that use both XORing and cascading, which is what the RNN-PUF is based on. The construction closest to the RNN-PUF is the composite PUF in [17], but no modeling attacks on it have been discussed.
3.2 RNN-PUF Reliability
Reliability of the RNN-PUF can be decomposed into the reliabilities of an XOR of PUF outputs and a cascade of PUFs [17]. Figure 7 shows the RNN-PUF as such a composition; this lets us relate the RNN-PUF's reliability to a conventional PUF's reliability. We first analyze the reliability of the XOR operation and of the cascade operation separately, and then combine them. The XOR and cascade reliabilities are given by [17]:
R_XOR(r1, r2) = r1 r2 + (1 - r1)(1 - r2)    (1)
R_casc(r1, r2) = r1 r2    (2)
where r1 and r2 are the individual reliabilities of the combined bits. We analyze the reliabilities as a function of N and F, where N is the number of intermediate bits fed back and F is the number of recurrence cycles. For the XOR, r1 is the reliability of a bit fed back from the PUF, while r2 is the reliability of the challenge bits, which is just 1, so R_XOR reduces to r1. For the cascade, r1 is the reliability of the current PUF evaluation, i.e., R0, while r2 is the reliability of the stage behind it in the chain, so the reliability after a cascade stage is just the product of the two.
Therefore, the theoretical reliability can be modeled by a recursive equation:
R(k+1) = R0 R(k)^N,  R(0) = 1    (3)
where R0 is the reliability of the NN-PUF and R(k) is the single-bit reliability after k evaluations of the array, so the N fed-back bits are all correct with probability R(k)^N. This is a theoretical lower bound on the reliability. In practice, the observed reliability is higher, because there is a chance that the final output is correct even though the intermediate bits are wrong; ideally, this happens with probability 1/2. This notion can be expressed by adding a correction term:
R'(k+1) = R0 R(k)^N + (1/2)(1 - R(k)^N)    (4)
where the second term, with its multiplier of 1/2, covers the case in which the final output is correct even though intermediate bits are incorrect. We can improve the reliability by increasing the threshold, but at the cost of reducing the secure CRP space. We optimize this trade-off in the design guide of Section 4.
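The lower bound and the corrected value can be evaluated numerically. The sketch below encodes our reading of Equations (3) and (4) (the symbol choices and the recursion indexing are our reconstruction), with R(k) the single-bit reliability after k array evaluations:

```python
def rel_lower_bound(r0, n, f):
    # Eq. (3): R(k+1) = r0 * R(k)**n with R(0) = 1. With f recurrence
    # cycles, the final bit is produced by the (f+1)-th evaluation.
    r = 1.0
    for _ in range(f + 1):
        r = r0 * r ** n
    return r

def rel_with_correction(r0, n, f):
    # Eq. (4): add 0.5 * (1 - R(f)**n) for the case where the final bit
    # is correct even though some intermediate bits were wrong.
    r_inter = 1.0
    for _ in range(f):
        r_inter = r0 * r_inter ** n
    return r0 * r_inter ** n + 0.5 * (1.0 - r_inter ** n)
```

For n = 1 the lower bound reduces to r0**(f+1), and for r0 > 0.5 the corrected value always lies between the lower bound and r0.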
Figures 8(a) and 8(b) show reliability as a function of N and F. When choosing N, we make sure that N is less than half the number of columns, so that we always choose independent column pairs and avoid low Hamming-Distance-Test (HDT) values. We observe that reliability drops as N or F is increased, so low values of N and F should be chosen; we use these results in Section 4. Recurrence causes the reliability of the RNN-PUF to drop compared with that of the conventional PUF, but the gain in ML attack resistance far exceeds the loss in reliability.
4 Design Guide for RNN-PUF
The RNN-PUF has several parameters that we can control: N, F, the threshold, the number of rows (R), and the number of columns (C). The figure of merit (FoM) used to compare the effect of these parameters is the difference between reliability and MLAA:
FoM = Reliability - MLAA    (5)
Figure 9 shows the RNN-PUF's FoM for different N and F. We notice that the MLAA immediately drops to around 50% even for the smallest values of N and F in the 128x128 RNN-PUF, with reliability being highest for those parameters. Increasing N or F decreases reliability, so increasing them does not improve the FoM much, as seen in Figure 9. As a result, it is better to go with a low value of N, either 1 or 2, and the minimum F. In this way, power consumption also stays low, since only one or two column pairs are used. We recommend the larger of these N values, since there is then a higher chance that the intermediate challenge differs from the original one and is hence more difficult to predict.
If we keep the number of rows fixed and increase the number of columns C, we see a clear trend of decreasing MLAA in Figure 10(a). Since reliability stays almost the same, the FoM increases. However, increasing the number of columns also increases area. Beyond 8 columns the MLAA has already dropped to around 50%, and with reliability the same as with fewer columns, we note that 8 columns give sufficient performance.
Assuming 8 columns, the number of rows must be large enough: with too few rows there would only be about 262,000 CRPs, which can be cracked in a few days using brute force. With a larger row count we do get a high number of CRPs, but in order to get high reliability the threshold needs to be around 0.015, which means we lose 25% of the CRPs. After losing these CRPs, the remaining secure space of CRPs may or may not be sufficient to withstand a brute-force attack, depending on how quickly CRPs can be collected.
Furthermore, with too few rows we do not get sufficient entropy [14]. Figure 10(a) compares the FoM for the two candidate row counts, and the FoM of the larger array does not degrade due to worse reliability. As a result, we can confidently choose 64 rows. In order to have enough CRPs, we go for the 64x8 RNN-PUF. Using Figure 10(c) we pick the threshold so that we lose about 25% of the CRPs in exchange for high reliability. The MLAA for the remaining CRPs is around 50% at best across several algorithms, as seen in Figure 10(b), implying that the FoM is still a respectable 31%. Figure 10(c) also shows the theoretical reliability plotted using Equation (4) and the lower bound using Equation (3); Equation (4) accurately models the observed reliability. In these equations we substitute the chosen values of N and F.
5 Power Consumption & Comparison
We estimate the power consumption of the RNN-PUF by simulating the 64x8 array of current mirrors, the shared comparator, and the XOR gates. The technology used is UMC 65 nm with a 1.2 V supply voltage. Since the comparator is shared, it fires three times before we get the final output. All the XORs operate in parallel; hence, the latency of this operation is just that of one XOR gate. The total power consumed is 12.3 uW, of which the array and comparator consume 11.6 uW, and the XORs just 0.7 uW. Multiplying this power by the latency from reset to the final output bit at the chosen clock frequency gives the approximate energy per bit in pJ/bit.
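The energy-per-bit figure is just power times latency. The sketch below shows the arithmetic with the simulated power number; the latency value and the microwatt unit interpretation are our placeholders for illustration, not figures from the paper.

```python
# Energy/bit = average power x time to produce one response bit.
power_uw = 12.3        # total simulated power (assumed to be in microwatts)
latency_ns = 300.0     # hypothetical reset-to-output latency, illustrative
energy_pj_per_bit = power_uw * latency_ns * 1e-3   # 1 uW * 1 ns = 1e-3 pJ
```

Substituting the actual clock period and cycle count from the measurement setup into the same arithmetic yields the reported energy/bit.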
Table 2 compares our results with recent work. Ref. [7] shows good ML resistance, but its results cover only 10,000 CRPs, as opposed to the almost 200,000 CRPs collected for our work over a wider temperature range. Additionally, the RNN-PUF consumes less energy and has a higher FoM of 31%, compared to 28%, at the worst-case temperature of 80°C. For a fair comparison with the RNN-PUF, we have deduced the MLAAs of [6] and [8] for the whole array rather than for a single column pair: for a multi-column/column-pair PUF, one can run several ML attackers in parallel, one per column/column pair, because each column/column pair is easily attacked individually, and thereby crack the whole PUF.
6 Conclusions
We have presented a Recurrent Neural Network strong PUF that is more resistant to ML attacks than conventional PUFs. Recurrence, along with the XOR operation, introduces nonlinearity that makes it difficult even for advanced ML algorithms such as ensemble classifiers to model the RNN-PUF. We analyzed the trade-offs and derived a design guide for optimally designing the RNN-PUF. Reducing the MLAA came at the cost of reduced reliability, but the gap between reliability and attack accuracy remains large and positive enough to choose the RNN-PUF over a conventional PUF. Power consumption is kept low by subthreshold operation of the current mirrors, making the RNN-PUF attractive for use in IoT devices. For future work, we will implement the RNN-PUF as a mixed-signal ASIC and will try using RNN algorithms to perform ML attacks on the PUF.
References
 [1] A. Alvarez, W. Zhao and M. Alioto. 15-fJ/b static physically unclonable functions for secure chip identification with 2% native bit instability and 140X inter/intra-PUF Hamming distance separation in 65nm. 2015 Intl. Solid-State Circuits Conf. (ISSCC).
 [2] S. Stanzione, D. Puntin and G. Iannaccone. CMOS Silicon Physical Unclonable Functions Based on Intrinsic Process Variability. 2011 Journal of Solid-State Circuits.
 [3] Q. Ma et al. A Machine Learning Attack Resistant MultiPUF Design on FPGA. 2018 23rd Asia and S. Pac. Design Auto. Conf. (ASPDAC).
 [4] R. Maes. Physically Unclonable Functions: Constructions, Properties and Applications. Springer, pp. 3233, 2013.
 [5] U. Ruhrmair et al. Modeling attacks on physical unclonable functions. 2010 Proc. of the ACM Conf. on Comp. and Comms. Sec.
 [6] S. Jeloka et al. A sequence dependent challenge-response PUF using 28nm SRAM 6T bit cell. 2017 Symp. on VLSI Circuits.
 [7] X. Xi, H. Zhuang, N. Sun and M. Orshansky. Strong subthreshold current array PUF with 2^65 challenge-response pairs resilient to machine learning attacks in 130nm CMOS. 2017 Symp. on VLSI Circuits.
 [8] Z. Wang et al. Current Mirror Array: A Novel Circuit Topology for Combining Physical Unclonable Function and Machine Learning. 2017 Trans. on Circuits and Sys. I: Reg. Papers.
 [9] P. Whatmough et al. A 28nm SoC with a 1.2GHz 568nJ/prediction sparse deep-neural-network engine with 0.1 timing error rate tolerance for IoT applications. 2017 Intl. Solid-State Circuits Conf. (ISSCC).
 [10] R. Kumar and W. Burleson. On design of a highly secure PUF based on nonlinear current mirrors. 2014 Intl. Symp. on Hardware-Oriented Sec. and Trust (HOST).
 [11] D. Lim. Extracting secret keys from integrated circuits. M.S. thesis, Mass. Inst. Tech., Cambridge, 2004.
 [12] G. Hospodar, R. Maes and I. Verbauwhede. Machine learning attacks on 65nm Arbiter PUFs: Accurate modeling poses strict bounds on usability. 2012 Intl. Wksh. on Info. Forensics and Sec. (WIFS).
 [13] P. Nguyen, D. Sahoo, R. Chakraborty and D. Mukhopadhyay. Security Analysis of Arbiter PUF and Its Lightweight Compositions Under Predictability Test. 2016 ACM Trans. Design Auto. Elec. Sys.
 [14] A. Vijayakumar, V. Patil, C. Prado and S. Kundu. Machine learning resistant strong PUF: Possible or a pipe dream?. 2016 Intl. Symp. on Hardware Oriented Sec. and Trust (HOST).
 [15] B. Gassend, D. Lim, D. Clarke, M. Dijk and S. Devadas. Identification and Authentication of Integrated Circuits. 2004 Concurrency Computat.: Pract. Exper.
 [16] G. Suh and S. Devadas. Physical unclonable functions for device authentication and secret key generation. 2007 Proc. of the 44th annual Des. Auto. Conf. (DAC).
 [17] D. Sahoo et al. Composite PUF: A new design paradigm for Physically Unclonable Functions on FPGA. 2014 Intl. Symp. on Hardware-Oriented Sec. and Trust (HOST).