I Introduction
Physically unclonable functions, or PUFs, are electronic devices that are used to produce unique identifiers. Small variations of the manufacturing process are exploited so that any two devices, built according to the same description, will likely produce different identifiers. Moreover, since such process variations are intrinsically random, they cannot be controlled to replicate the behavior of another device, hence the name physically unclonable functions. PUFs find many applications: the identifier can be used to generate a unique cryptographic key, which cannot be easily extracted from the device; it can be recorded during manufacturing into a whitelist to prevent counterfeiting or overproduction; and it can also be employed in the implementation of challenge-response protocols at a low cost. This is especially valuable on devices where implementing asymmetric cryptography primitives is too computationally expensive.
There are several ways to build PUFs. SRAM PUFs [1] exploit the states of SRAM cells after powering up, while ring-oscillator (RO) PUFs [2] exploit delay differences of signals in electronic circuits. In this paper, we analyze another delay PUF, called the loop-PUF, first proposed in [3]. Our analysis will also be valid for the RO-sum PUF [4], which shares essentially the same mathematical model, as well as the arbiter PUF [5]. In the remainder of this paper, we will write PUF as a shorthand for loop-PUF, RO-sum PUF or arbiter PUF.
I-A Modelization and Notations
A PUF of size $n$ generates one identifier bit, or response bit, when queried with a challenge $c = (c_1, \dots, c_n)$, a sequence of $n$ values $+1$ or $-1$. The PUF is characterized by $n$ weights, denoted by $w = (w_1, \dots, w_n)$, that represent delay differences of the PUF circuit. As explained in [6], for each challenge $c$, the response bit of the PUF of parameters $w$ is equal to $\operatorname{sign}(\langle c, w \rangle)$.
The base for all logarithms in this paper is equal to $2$, and all entropies are given in bits.
Due to manufacturing process variations, the weights $w = (w_1, \dots, w_n)$ are modeled as realizations of random variables $W = (W_1, \dots, W_n)$. In [6], a Gaussian model was analyzed, where the Gaussian nature of the variables is justified by simulations of process variations in electronic circuits [7]. More generally, our analysis is valid for any $W$ whose components are i.i.d. continuous variables with symmetric densities about $0$ (whose support contains $0$). The i.i.d. assumption is justified by the fact that delays are caused by “identical” circuit elements that lie in different locations in the circuit and can, therefore, be considered independent. In particular, each $W_i$ is the difference between two delays caused by such “identical” independent elements, which justifies the symmetry assumption. Simulations in Section V will be made in the Gaussian model, for which the weight distribution is centered isotropic.
I-B Problem Statement
The security of PUFs is related to Rényi entropies of various orders [8].
The min-entropy $H_\infty$ is related to the maximum (worst-case) probability of successfully cloning a given PUF. Therefore, the min-entropy should be as large as possible to ensure a given worst-case security level.
The collision entropy $H_2$ is related to the average probability that two randomly chosen PUFs have the same identifier. Therefore, $H_2$ should also be as large as possible to ensure a given average security level against collisions.
The classical Shannon entropy $H_1$ is known to provide a resistance criterion against modeling attacks, which predict the response to a new challenge given previous responses to other challenges [9]. Again, $H_1$ should be as large as possible.
The max-entropy $H_0$ is simply the logarithm of the total number of PUFs. $H_0$ upper bounds all the other entropies $H_\alpha$ ($\alpha > 0$). Theoretically, it is possible to choose a non-i.i.d. weight distribution such that all PUFs are equiprobable, yielding $H_\alpha = H_0$ for every $\alpha$. In this case it is sufficient to count PUFs. In practice, however, due to the assumption of i.i.d. weights (typically Gaussian), the upper bound $H_0$ will not be attained. Therefore, it is important to derive an efficient method to estimate the various Rényi entropies.
Estimating the various Rényi entropies typically requires estimating the entire PUF probability distribution. However, because a PUF of size $n$ is determined by its $2^n$ response bits, there can be as many as $2^{2^n}$ PUFs of size $n$. The naive complexity thus increases very rapidly with $n$, in the order of $2^{2^n}$.
I-C Outline
In this paper, we link the analysis of PUFs to the theory of Boolean Threshold Functions (BTF) and build an algorithm that accurately estimates the PUF probability distribution and entropies up to size $n = 10$. Our algorithm relies on determining equivalence classes of PUFs with the same probability, and then estimating the probability within each class. The classes are determined using Chow parameters from BTF theory. The remainder of the paper is organized as follows. Section II recalls known results from the theory of BTFs which we adapt to PUFs. The key results on the equivalence classes are proved in Section III. Section IV describes the simulation algorithm that allows us, in Section V, to determine the PUF distributions and entropies up to $n = 10$. Finally, Section VI concludes.
II The Chow Parameters of PUFs
Definition 1 (PUF)
Let $w \in \mathbb{R}^n$ be such that $\langle c, w \rangle \neq 0$ for all $c \in \{-1,+1\}^n$. The PUF of size $n$ and weight sequence $w$ is the function $f_w : \{-1,+1\}^n \to \{-1,+1\}$ defined as

$f_w(c) = \operatorname{sign}(\langle c, w \rangle)$   (1)

where $\langle \cdot, \cdot \rangle$ is the usual scalar product.
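For concreteness, the response computation of Definition 1 can be sketched in a few lines of Python (NumPy assumed; the size $n = 4$, the seed, and the Gaussian weights are illustrative choices, not fixed by the definition):

```python
import itertools
import numpy as np

def puf_response(w, c):
    """Response bit (1): the sign of the scalar product <c, w>."""
    s = float(np.dot(c, w))
    assert s != 0.0, "Definition 1 requires <c, w> != 0 for every challenge"
    return 1 if s > 0 else -1

rng = np.random.default_rng(0)
n = 4
w = rng.standard_normal(n)  # Gaussian weights, as in the model of the introduction
table = [puf_response(w, np.array(c))
         for c in itertools.product((-1, 1), repeat=n)]
print(table)
# self-duality: negating the challenge negates the response
assert all(table[i] == -table[15 - i] for i in range(16))
```

The final assertion checks the self-duality property $f_w(-c) = -f_w(c)$, which underlies the correspondence with self-dual BTFs.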
This definition coincides with the so-called “self-dual” BTFs of $n$ variables [10]. BTFs have been studied since the 1950s as building blocks for Boolean circuits [11] and also find applications in machine learning [12]. Leveraging the correspondence between PUFs and BTFs, we adapt fundamental results from BTF theory to conveniently characterize PUFs.

II-A All PUFs are Attainable
Recall that in our framework, the PUF parameters $w = (w_1, \dots, w_n)$ are realizations of a random vector $W = (W_1, \dots, W_n)$. Under this probabilistic model, a PUF becomes a random function $f_W$ such that, for any (deterministic) challenge $c$, the response $f_W(c)$ is a random variable.

Lemma 1
For every PUF $f_w$, we have $P(f_W = f_w) > 0$.

In other words, every PUF can be reached by a realization of the weights with positive probability (even though any single realization $w$ has probability zero).
By assumption all components of $W$ are i.i.d. with a symmetric density whose support contains $0$. Hence the support of the density of $W$ is an $n$-dimensional manifold containing the origin in its interior. Let $f_w$ be fixed and let $\mathcal{C}$ be the cone (scale-invariant set) of all $w'$ such that $f_{w'} = f_w$. This cone has apex $0$ and contains the intersection of all half-spaces $\{w' : f_w(c)\, \langle c, w' \rangle > 0\}$ where $c$ ranges over the challenges. Therefore, it is an $n$-dimensional manifold which intersects the support of the density with positive volume. Hence $P(f_W = f_w) > 0$.
II-B Chow Parameters Characterize PUFs
First introduced by Chow [13] and later studied by Winder [11], who gave them their name, the so-called Chow parameters uniquely define a Boolean threshold function. Their definition is especially simple for PUFs:
Definition 2 (Chow parameters)
The Chow parameters $\chi(f) = (\chi_1, \dots, \chi_n)$ of a PUF $f$ of size $n$ are defined as

$\chi(f) = \sum_{c \in \{-1,+1\}^n} c\, f(c)$   (2)

where the vector sum is carried out componentwise.
We remark that for $n \ge 1$, all Chow parameters are even integers. This is due to the fact that a sum of an even number of $\pm 1$ terms must be even. More precisely,

$\chi_i = \#\{c : c_i f(c) = +1\} - \#\{c : c_i f(c) = -1\}$   (3)

$= 2\, \#\{c : c_i f(c) = +1\} - 2^n.$   (4)
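Definition (2) can be evaluated by brute force over the $2^n$ challenges; the following sketch (NumPy assumed, $n = 5$ illustrative) also checks the parity remark above:

```python
import itertools
import numpy as np

def chow_parameters(w):
    """Chow parameters (2): componentwise sum of c * f_w(c) over all challenges."""
    C = np.array(list(itertools.product((-1, 1), repeat=len(w))))  # all 2^n challenges
    return C.T @ np.sign(C @ w).astype(int)

rng = np.random.default_rng(1)
chi = chow_parameters(rng.standard_normal(5))
print(chi)
# all Chow parameters are even integers, bounded by 2^n in absolute value
assert all(int(x) % 2 == 0 for x in chi)
assert all(abs(int(x)) <= 2**5 for x in chi)
```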
Theorem 1 (Chow’s theorem [13])
Two PUFs with the same Chow parameters are identical.
For completeness, we give a new proof of Chow’s theorem rewritten in our PUF framework. Such a proof turns out to be very simple.
Let $f_w$ and $f_{w'}$ be two PUFs with identical Chow parameters:

$\sum_{c} c\, f_w(c) = \sum_{c} c\, f_{w'}(c).$   (5)

Since $f_w(c) - f_{w'}(c) \in \{0, \pm 2\}$, simplifying this expression by $2$, we obtain

$\sum_{c} c\, \frac{f_w(c) - f_{w'}(c)}{2} = 0$   (6)

which is equivalent to

$\sum_{c \,:\, f_w(c) \neq f_{w'}(c)} c\, f_w(c) = 0.$   (7)

Taking the scalar product with $w$, we get

$\sum_{c \,:\, f_w(c) \neq f_{w'}(c)} f_w(c)\, \langle c, w \rangle = \sum_{c \,:\, f_w(c) \neq f_{w'}(c)} |\langle c, w \rangle| = 0$   (8)

which implies $\langle c, w \rangle = 0$ whenever $f_w(c) \neq f_{w'}(c)$. Now we assumed that $\langle c, w \rangle$ is never zero by Def. 1. Thus $f_w = f_{w'}$.
II-C Consequence on the Max-Entropy
An upper bound on the maxentropy can be easily deduced from Chow’s theorem.
Corollary 1
There are no more than $(2^n + 1)^n$ PUFs of size $n$, i.e., the max-entropy of the PUF of size $n$ satisfies

$H_0 \le n \log_2(2^n + 1).$   (9)
A more refined version of this bound can be found in [14, Corollary 10.2]. The proof of (9) is again particularly simple for PUFs.

The Chow parameters $\chi_i$, $1 \le i \le n$, satisfy

$\chi_i \le \sum_{c \in \{-1,+1\}^n} |c_i| = 2^n$   (10)

and similarly, $\chi_i \ge -2^n$. Since there are $2^n + 1$ even integers between $-2^n$ and $2^n$, there can be only $(2^n + 1)^n$ different values taken by the Chow parameters. The conclusion follows from Chow’s Theorem 1.
A lower bound on $H_0$ is also easily found from the representation of Definition 1, as given by the following proposition. The corresponding bound for the number of BTFs was first established independently by Smith [15] and Yajima et al. [16] in the 1960s.

Proposition 1

The max-entropy satisfies

$H_0 \ge \frac{(n-1)(n-2)}{2} + 1.$   (11)
Recall from Lemma 1 that every PUF can be reached by a realization of the weights with positive probability. Hence it is sufficient to count the functions $f_w$ for all $w \in \mathbb{R}^n$ in order to lower-bound the total number of PUFs.

Let $f_w$ be a PUF of size $n$. Applying some small perturbation to $w$ if necessary (without affecting $f_w$), we may always assume that the $2^{n-1}$ absolute values $|\langle c, w \rangle|$ (challenges $\pm c$ giving the same value) are all distinct.

Now let $\lambda > 0$ be such that $\lambda$ is different from all the $|\langle c, w \rangle|$, and define $w' = (w_1, \dots, w_n, \lambda)$. For any challenge $(c, c_{n+1}) \in \{-1,+1\}^{n+1}$, we have

$f_{w'}(c, c_{n+1}) = \operatorname{sign}(\langle c, w \rangle + c_{n+1} \lambda).$   (12)

Depending on how many of the $2^{n-1}$ values $|\langle c, w \rangle|$ are smaller or larger than $\lambda$, we can construct $2^{n-1} + 1$ different PUF functions of size $n+1$. Hence each PUF of size $n$ gives rise to more than $2^{n-1}$ PUFs of size $n+1$, so that the total numbers $N_n$ of PUFs satisfy $N_{n+1} > 2^{n-1} N_n$. The result follows by finite induction from $N_1 = 2$: $N_n > 2^{(n-2) + (n-3) + \cdots + 0} \cdot 2 = 2^{\frac{(n-1)(n-2)}{2} + 1}$.
More recently, Zuev [17] has shown that, asymptotically, $H_0 \sim n^2$. As a result, instead of evaluating the probabilities of up to $2^{2^n}$ different PUFs, we will only have to evaluate about $2^{n^2}$ probabilities.
As apparent in the proof of Zuev [17, Theorem 1], although through different geometrical considerations on normal vectors of hyperplanes, the number of PUFs to be considered can be further reduced by a factor of about $2^n\, n!$. Section III will derive the exact compression factor using the equivalence classes on Chow parameters.

II-D Order and Sign Stability of Chow Parameters
An important property of the Chow parameters is that the $\chi_i$ share the (weak) signs and the relative order of the weights $w_i$.
Lemma 2
Let $f_w$ be a PUF with weights $w$, and let $\chi$ be the corresponding Chow parameters. Then

1. if $w_i > 0$ then $\chi_i \ge 0$, and if $w_i < 0$ then $\chi_i \le 0$;

2. if $w_i \le w_j$ then $\chi_i \le \chi_j$.
A similar result was shown by Chow in [13], although with another definition of the Chow parameters. Again we give a simplified proof in the PUF framework.
We first prove that $w_i > 0$ implies $\chi_i \ge 0$, the other case being similar. Suppose that $w_i > 0$. Let $A$ (resp. $B$) be the set of challenges $c$ such that $c_i f_w(c) = +1$ (resp. $c_i f_w(c) = -1$). By definition,

$\chi_i = |A| - |B|.$   (13)

We show the existence of an injective mapping from $B$ to $A$. Consider the one-to-one mapping $\sigma$ that flips the $i$-th coordinate:

$\sigma(c) = (c_1, \dots, c_{i-1}, -c_i, c_{i+1}, \dots, c_n).$   (14)

For any $c \in B$, $f_w(c) = -c_i$, and

$\langle \sigma(c), w \rangle = \langle c, w \rangle - 2 c_i w_i$   (15)

where both terms on the right-hand side have the sign of $-c_i$ (recall that $w_i > 0$), so that

$f_w(\sigma(c)) = \operatorname{sign}(\langle \sigma(c), w \rangle) = -c_i.$   (16)

Therefore, $\sigma(c)_i\, f_w(\sigma(c)) = (-c_i)(-c_i) = +1$ and $\sigma(c) \in A$. Hence, the bijection $\sigma$ induces an injection from $B$ to $A$. This implies that $|B| \le |A|$, hence $\chi_i \ge 0$.
To prove the second part, assume that $w_i < w_j$ (the case $w_i = w_j$ yields $\chi_i = \chi_j$ by symmetry). Let $f_{w'}$ be the PUF of size $n-1$ given by the weight sequence $w'$, where $w'$ is obtained from $w$ by dropping $w_i$ and $w_j$ and appending the last weight $w'_{n-1} = w_j - w_i > 0$. Say the Chow parameters of $f_{w'}$ are $\chi'$. According to the first part of this lemma, we have $\chi'_{n-1} \ge 0$. Now, expand the expression of $\chi_j - \chi_i$ as

$\chi_j - \chi_i = \sum_{c} (c_j - c_i)\, f_w(c)$   (17)

$= 2 \sum_{c \,:\, c_i = -c_j} c_j\, f_w(c)$   (18)

$= 2\, \chi'_{n-1} \ge 0$   (19)

where (19) holds because, for the challenges with $c_i = -c_j$, $\langle c, w \rangle = \sum_{k \neq i,j} c_k w_k + c_j (w_j - w_i)$, so that these responses are exactly those of the size-$(n-1)$ PUF $f_{w'}$.
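Both parts of Lemma 2 are easy to check numerically. The sketch below (NumPy assumed; sizes and seeds illustrative) also works out the small example $w = (3, 1)$, whose Chow parameters are $(4, 0)$: a strictly positive weight may thus have a zero Chow parameter, which is why the sign agreement in part 1 is only weak.

```python
import itertools
import numpy as np

def chow_parameters(w):
    """Chow parameters (2), by brute force over all challenges."""
    C = np.array(list(itertools.product((-1, 1), repeat=len(w))))
    return C.T @ np.sign(C @ w).astype(int)

# w = (3, 1) gives chi = (4, 0): weak sign agreement only
assert list(chow_parameters(np.array([3.0, 1.0]))) == [4, 0]

rng = np.random.default_rng(2)
for _ in range(200):
    w = rng.standard_normal(4)
    chi = chow_parameters(w)
    assert np.all(chi * np.sign(w) >= 0)             # part 1 (weak signs)
    assert np.all(np.diff(chi[np.argsort(w)]) >= 0)  # part 2 (order preserved)
```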
III Equivalence Classes and Chow Parameters
Since the $W_i$ are i.i.d. symmetric random variables, the joint probability distribution of the weights is invariant under permutations and sign changes. Therefore, all PUFs that can be obtained from one another by permuting or changing the signs of their weights can be clustered together into equivalence classes of PUFs with the same probability $P(f_W = f)$.
We now establish several properties of these equivalence classes for PUFs, known as “self-dual” classes [10] in the context of BTFs. Zuev [17] had already mentioned $2^n\, n!$ elements per class in a special case. Our generalization (Theorem 3) is mentioned in a different form in [18, § 3.1.2] for calculating the total number of BTFs, yet we could not find formal proofs published in the literature.
We give a formal definition of the equivalence classes by the action of the group

$G = \{-1,+1\}^n \rtimes \mathfrak{S}_n$   (20)

where $\mathfrak{S}_n$ is the symmetric group of degree $n$. An element $g = (s, \sigma) \in G$ is determined by the permutation $\sigma \in \mathfrak{S}_n$ and the sign changes $s \in \{-1,+1\}^n$.
Proposition 2
For any $g = (s, \sigma) \in G$ and $w \in \mathbb{R}^n$, define $g \cdot w$ such that

$(g \cdot w)_i = s_i\, w_{\sigma^{-1}(i)}, \qquad 1 \le i \le n.$   (21)

This defines a group action of $G$ on $\mathbb{R}^n$, where the group law in $G$ is defined by

$(s, \sigma)\,(s', \sigma') = (s \cdot \sigma(s'),\ \sigma \sigma')$   (22)

with $\sigma(s')_i = s'_{\sigma^{-1}(i)}$ and where $\cdot$ denotes the componentwise product.

$G$ is clearly a group with identity $((1, \dots, 1), \mathrm{id})$. For any $g = (s, \sigma)$, $g' = (s', \sigma')$ and $w \in \mathbb{R}^n$,

$(g \cdot (g' \cdot w))_i = s_i\, (g' \cdot w)_{\sigma^{-1}(i)}$   (23)

$= s_i\, s'_{\sigma^{-1}(i)}\, w_{\sigma'^{-1}(\sigma^{-1}(i))}$   (24)

$= (s \cdot \sigma(s'))_i\, w_{(\sigma \sigma')^{-1}(i)}$   (25)

$= ((g g') \cdot w)_i.$   (26)

This shows that (21) defines a group action of $G$ on $\mathbb{R}^n$.
Thus we can say that the group $G$ also acts on the PUFs of size $n$, the action being defined as

$g \cdot f_w = f_{g \cdot w}.$   (27)
In keeping with Lemma 2, we now show that the group action is carried over to Chow parameters:
Theorem 2
Let $f_w$ be a PUF with Chow parameters $\chi$, and let $g \in G$. The Chow parameters of $g \cdot f_w$ are $g \cdot \chi$.

Let $g = (s, \sigma)$. For any challenge $c$, we have that $\langle c, g \cdot w \rangle = \langle g^{-1} \cdot c, w \rangle$. Thus, by the change of variable $c' = g^{-1} \cdot c$,

$\chi(g \cdot f_w) = \sum_{c} c \, \operatorname{sign}(\langle g^{-1} \cdot c, w \rangle)$   (28)

$= \sum_{c'} (g \cdot c')\, \operatorname{sign}(\langle c', w \rangle) = g \cdot \chi.$   (29)
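Theorem 2 can be verified on random instances; in the sketch below (NumPy assumed, $n = 4$ illustrative), the permutation and sign-change parts of a group element are drawn at random and applied to both the weights and the Chow parameters:

```python
import itertools
import numpy as np

def chow_parameters(w):
    """Chow parameters (2), by brute force over all challenges."""
    C = np.array(list(itertools.product((-1, 1), repeat=len(w))))
    return C.T @ np.sign(C @ w).astype(int)

rng = np.random.default_rng(3)
n = 4
w = rng.standard_normal(n)
perm = rng.permutation(n)            # permutation part of g
signs = rng.choice((-1, 1), size=n)  # sign-change part of g
g_w = signs * w[perm]                # the action (21) on the weights
# Theorem 2: the same signed permutation acts on the Chow parameters
assert np.array_equal(chow_parameters(g_w), signs * chow_parameters(w)[perm])
```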
Changing the signs of the weights or permuting them is reflected by the same operation on the Chow parameters. This allows us to compute the size of the equivalence classes:
Theorem 3
Let $f$ be a PUF with Chow parameters $\chi$. For every $k \ge 0$, let $m_k$ be the number of Chow parameters equal to $k$ in absolute value, and let $G \cdot f$ be the orbit of $f$ by $G$, that is, the equivalence class containing $f$. Then

$|G \cdot f| = \frac{2^n\, n!}{2^{m_0} \prod_{k \ge 0} m_k!}.$   (30)

By applying the well-known orbit-stabilizer theorem (see for instance [19, p. 89]), we have

$|G \cdot f| = \frac{|G|}{|\mathrm{Stab}(f)|}$   (31)

where $\mathrm{Stab}(f)$ is the stabilizer of $f$ in $G$. The size of the orbit of $f$ can therefore be deduced from the size of its stabilizer. Now the latter can be easily computed. Let $g = (s, \sigma)$ be such that $g \cdot f = f$; by Chow’s Theorem 1 and Theorem 2, this is equivalent to $g \cdot \chi = \chi$, i.e., $s_i\, \chi_{\sigma^{-1}(i)} = \chi_i$ for all $i$. Hence $\sigma$ must preserve each set of coordinates carrying the same absolute Chow value, while $s_i$ is determined whenever $\chi_i \neq 0$ and is free whenever $\chi_i = 0$. The number of such $g$ is exactly $2^{m_0} \prod_{k \ge 0} m_k!$.
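Formula (30) can be cross-checked against a brute-force enumeration of an orbit for a small size (a sketch, NumPy assumed; $n = 4$ illustrative):

```python
import itertools
from math import factorial
import numpy as np

n = 4
C = np.array(list(itertools.product((-1, 1), repeat=n)))  # all 2^n challenges

def truth_table(v):
    return tuple(np.sign(C @ v).astype(int))

rng = np.random.default_rng(4)
w = rng.standard_normal(n)
# enumerate the orbit of f_w under all 2^n * n! signed permutations of G
orbit = {truth_table(np.array(s) * w[list(p)])
         for p in itertools.permutations(range(n))
         for s in itertools.product((-1, 1), repeat=n)}
# predicted size (30), from the multiplicities of the absolute Chow values
chi = np.abs(C.T @ np.sign(C @ w).astype(int))
mult = {k: int(np.sum(chi == k)) for k in set(chi.tolist())}
stab = 2 ** mult.get(0, 0) * int(np.prod([factorial(m) for m in mult.values()]))
predicted = (2**n * factorial(n)) // stab
assert len(orbit) == predicted
```

Enumerating all $2^n\, n!$ group elements is only feasible for small $n$; Theorem 3 exists precisely to avoid such enumerations.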
IV Monte-Carlo Algorithm
As seen in the introduction to the previous section, all PUFs in one equivalence class have the same probability. It follows that the probability of any particular PUF can be deduced from the probability of the class to which it belongs. Therefore, to determine the various entropies, it suffices to find a method that estimates the probabilities of the various equivalence classes.
In this section, we propose an algorithm that exploits a definition of a canonical PUF in each equivalence class, in such a way that, given any PUF, it is trivial to determine the corresponding canonical PUF. As expected, only about $2^{n^2}/(2^n\, n!)$ probabilities need to be estimated, instead of approximately $2^{n^2}$.
Definition 3 (Canonical PUF)
A canonical PUF of size $n$ is a PUF whose Chow parameters satisfy

$0 \le \chi_1 \le \chi_2 \le \cdots \le \chi_n.$   (32)

The canonical form of a PUF $f$ is the canonical PUF belonging to the same class, i.e., $g \cdot f$ where $g \in G$ is such that $g \cdot f$ is canonical.
This notion was first introduced by Winder [11] and is related to the concept of “prime” functions independently studied by Chow [13].
Proposition 3 (Uniqueness of the canonical PUF)

Two canonical PUFs in the same class are equal.

Let $f$ and $f'$ be two canonical PUFs in the same equivalence class. Their Chow parameters are identical up to sign changes and permutation. Since both are canonical, the signs and order are fixed by (32). Their Chow parameters are thus identical and $f = f'$ by Chow’s Theorem 1.
Proposition 4

Let $w$ be a weight sequence of a PUF $f_w$, and let $g \in G$ be such that $v = g \cdot w$ satisfies

$0 \le v_1 \le v_2 \le \cdots \le v_n.$   (33)

Then $f_v$ is the canonical form of the PUF $f_w$.

Let us denote by $\chi$ (resp. $\chi'$) the Chow parameters of $f_w$ (resp. $f_v$). The PUF obtained from the weights $v = g \cdot w$ is $g \cdot f_w$, which belongs to the class of $f_w$. From Lemma 2, the $\chi'_i$ satisfy the same ordinal relations and have the same (weak) signs as the $v_i$. Therefore $0 \le \chi'_1 \le \cdots \le \chi'_n$, so that $f_v$ is a canonical PUF.
These results allow us to efficiently estimate the PUF distribution by Monte-Carlo methods, as described in Algorithm 1. Such an algorithm can be used for any i.i.d. weight distribution with a symmetric density (not necessarily Gaussian).
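Since the listing of Algorithm 1 is not reproduced here, the following sketch illustrates the idea (NumPy assumed; the size $n = 4$ and the sample size are illustrative and much smaller than in the actual experiments): sample Gaussian weights, map each sample to the truth table of its canonical form via Proposition 4, and accumulate class frequencies.

```python
import itertools
from collections import Counter
from math import log2
import numpy as np

n = 4
C = np.array(list(itertools.product((-1, 1), repeat=n)))  # all 2^n challenges

def class_key(w):
    """Truth table of the canonical form of f_w: sort the absolute values of
    the weights (Proposition 4), then evaluate on every challenge."""
    return tuple(np.sign(C @ np.sort(np.abs(w))).astype(int))

rng = np.random.default_rng(5)
samples = 20000
counts = Counter(class_key(rng.standard_normal(n)) for _ in range(samples))
class_probs = {k: m / samples for k, m in counts.items()}
# plug-in estimate of the entropy of the equivalence-class distribution
H_classes = -sum(p * log2(p) for p in class_probs.values())
print(len(class_probs), H_classes)
```

Each sample costs one sort of $n$ absolute values plus $2^n$ sign evaluations, and only one counter entry per equivalence class is stored.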
V Entropy Estimation
In this section, we present the simulation results in the Gaussian case where the weights are i.i.d. $\mathcal{N}(0, 1)$. Exact values were already determined up to $n = 4$ in [20].
V-A Estimating the Max-Entropy
According to Lemma 1, every PUF can be attained by some realization of the weights. Therefore, the max-entropy $H_0$ of the PUF distribution is simply the logarithm of the total number of PUFs with $n$ weights. This number is equal to the total number of BTFs of $n-1$ variables and has been computed up to $n = 10$ in [18, § 3.1.2], see Table I.
 n   # PUFs                 $H_0$ (bits)
 1   2                      1
 2   4                      2
 3   14                     3.8074…
 4   104                    6.7004…
 5   1882                   10.8781…
 6   94572                  16.5291…
 7   15028134               23.8411…
 8   8378070864             32.9640…
 9   17561539552946         43.9974…
 10  144130531453121108     57.0001…
V-B Estimating the Shannon Entropy
For any PUF $f$, let $[f]$ denote the equivalence class of $f$, $|[f]|$ its cardinality, $p_{[f]} = P(f_W \in [f])$ its probability, $\mathcal{F}$ the set of all PUFs, and $\mathcal{F}/G$ the quotient set induced by the action of the group $G$. Then, letting $[f_W]$ denote the equivalence class of the random PUF $f_W$, one has

$H_1 = -\sum_{f \in \mathcal{F}} P(f_W = f)\, \log P(f_W = f)$   (34)

$= -\sum_{[f] \in \mathcal{F}/G} |[f]|\ \frac{p_{[f]}}{|[f]|}\, \log \frac{p_{[f]}}{|[f]|}$   (35)

$= -\sum_{[f] \in \mathcal{F}/G} p_{[f]}\, \log p_{[f]} + \sum_{[f] \in \mathcal{F}/G} p_{[f]}\, \log |[f]|$   (36)

$= H([f_W]) + E\big[\log |[f_W]|\big].$   (37)
In other words, the Shannon entropy of the PUF distribution is simply the sum of the entropy of the equivalence classes and the average of their logarithmic size. The latter term can be estimated using the unbiased empirical mean, and a confidence interval can be determined using Student’s t-distribution [21]. The former term, however, is an entropy, for which no unbiased estimator exists [22]. The NSB estimator [23] has a reduced bias and a low variance. However, because we generated many more PUFs than there are equivalence classes (by a factor of at least 100000), the plug-in estimator, based on the empirical frequency estimates, performs quite well: its bias can be upper-bounded as described in [22] and was found to be negligible. The results are summarized in Table II.

 n   Sample size   $H_1$ (bits)
 1   —             1
 2   —             2
 3   —             3.6655…
 4   —             6.2516…
 5                 10.0134 – 10.0156
 6                 15.1903 – 15.1925
 7                 21.9856 – 21.9879
 8                 30.5628 – 30.5645
 9                 41.0367 – 41.0384
 10                53.4737 – 53.4740
V-C Estimating the Collision Entropy
The collision entropy $H_2$ was estimated using an unbiased estimator adapted from [24, § 1.4.2]. Let $n_{[f]}$ be the number of PUF samples that belong to the equivalence class $[f]$ among a number $N$ of PUF samples, where $N$ is Poisson-distributed with parameter $\lambda$, so that each $n_{[f]}$ is Poisson-distributed with parameter $\lambda_{[f]} = \lambda\, p_{[f]}$. We can compute

$E\Big[\sum_{[f]} \frac{n_{[f]}\,(n_{[f]} - 1)}{\lambda^2\, |[f]|}\Big] = \sum_{[f]} \frac{E[n_{[f]}(n_{[f]} - 1)]}{\lambda^2\, |[f]|}$   (38)

$= \sum_{[f]} \frac{\lambda_{[f]}^2}{\lambda^2\, |[f]|}$   (39)

$= \sum_{[f]} \frac{p_{[f]}^2}{|[f]|} = \sum_{f} P(f_W = f)^2$   (40)

where we used the fact that $E[n_{[f]}(n_{[f]} - 1)] = \lambda_{[f]}^2$ from [24, § 2.2]. It follows that

$\widehat{P}_2 = \sum_{[f]} \frac{n_{[f]}\,(n_{[f]} - 1)}{\lambda^2\, |[f]|}$   (41)

is an unbiased estimator for the power sum $\sum_f P(f_W = f)^2 = 2^{-H_2}$. As can also be checked, the variance of this estimator admits the same upper bound as the one described in [24, § 1.4.2]. This allows us to determine confidence intervals for the collision entropy, as shown in Table III.
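The estimator (41) can be combined with the canonical form of Proposition 4 and the class sizes of Theorem 3, as in the following sketch (NumPy assumed; the size $n = 4$ and the Poisson parameter $\lambda$ are illustrative, much smaller than the paper’s sample sizes):

```python
import itertools
from collections import Counter
from math import factorial, log2
import numpy as np

n, lam = 4, 30000
C = np.array(list(itertools.product((-1, 1), repeat=n)))  # all 2^n challenges

def class_key(w):
    """Truth table of the canonical form of f_w (Proposition 4)."""
    return tuple(np.sign(C @ np.sort(np.abs(w))).astype(int))

def class_size(key):
    """Equivalence-class size (30), via the Chow parameters of the class."""
    chi = np.abs(C.T @ np.array(key))
    mult = Counter(chi.tolist())
    stab = 2 ** mult.get(0, 0) * int(np.prod([factorial(m) for m in mult.values()]))
    return (2**n * factorial(n)) // stab

rng = np.random.default_rng(6)
N = rng.poisson(lam)  # Poisson-distributed sample size, as required by (41)
counts = Counter(class_key(rng.standard_normal(n)) for _ in range(N))
# unbiased estimate (41) of the power sum, then the collision entropy
P2_hat = sum(m * (m - 1) / (lam**2 * class_size(k)) for k, m in counts.items())
H2_hat = -log2(P2_hat)
print(H2_hat)  # typically close to the exact value 5.7105... for n = 4 (Table III)
```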
 n   Sample size   $H_2$ (bits)
 1   —             1
 2   —             2
 3   —             3.5462…
 4   —             5.7105…
 5                 8.4551 – 8.4568
 6                 11.5977 – 11.6023
 7                 14.8819 – 14.89805
 8                 18.5201 – 18.5753
 9                 22.0309 – 22.4067
 10                25.9070 – 26.1983
V-D Estimating the Min-Entropy
In order to determine the min-entropy $H_\infty$ of the PUF distribution, one needs to estimate the probability of the most likely PUF. Our experiments, as well as those of Delvaux et al. [25], strongly suggest that for a Gaussian distribution of the weights, the most likely PUFs are the $2n$ PUFs corresponding to the Boolean functions $f(c) = c_i$ and $f(c) = -c_i$, for $1 \le i \le n$. The maximum likelihood estimator of that probability is simply the sample frequency, which is an unbiased estimator. A confidence interval for this estimator can be obtained using the Wilson score interval [26], which yields a confidence interval for the min-entropy $H_\infty$.
Because we have already determined that there are exactly $2n$ PUFs in the equivalence class of the most likely PUF, we only need to estimate a confidence interval on the sample frequency of that equivalence class. Once such an interval $[p_-, p_+]$ is obtained, the confidence interval for the min-entropy is given by $[-\log_2(p_+/2n),\ -\log_2(p_-/2n)]$.
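A sketch of this interval computation (plain Python; the counts 31000 out of 100000 are hypothetical numbers chosen for illustration, not the paper’s measurements):

```python
from math import log2, sqrt

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval [26] for a binomial proportion (95% by default)."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# hypothetical class frequency: 31000 hits out of 100000 samples, for n = 4
n = 4
lo, hi = wilson_interval(31000, 100000)
# convert the class-frequency interval into a min-entropy interval
H_min_lo, H_min_hi = -log2(hi / (2 * n)), -log2(lo / (2 * n))
print(H_min_lo, H_min_hi)
```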
The confidence intervals of the minentropy are presented in Table IV.
 n   Sample size   $H_\infty$ (bits)
 1   —             1
 2   —             2
 3   —             3.2086…
 4   —             4.5850…
 5                 6.1006 – 6.1008
 6                 7.7352 – 7.7354
 7                 9.4731 – 9.4735
 8                 11.3020 – 11.3024
 9                 13.2123 – 13.2132
 10                15.1899 – 15.1901
VI Conclusions and Perspectives
While it had been previously shown [6] that the entropy of the loop-PUF with $n$ elements could exceed $n$ bits, the exact values were only known for very small values of $n$. Making the link with BTF theory using Chow parameters, we have extended these results to provide accurate approximations up to $n = 10$. Our results suggest that the entropy of the loop-PUF might be quadratic in $n$: this would be a very positive result for circuit designers, since it implies that the PUF has a very good resistance to machine learning attacks. However, because the min-entropy and the collision entropy are much smaller (closer to linear in $n$ in our estimates), the resistance to cloning may not be as high as expected.
Two interesting theoretical aspects of the PUF entropy are still open. First, to what extent does the entropy of the PUF stay close to the max-entropy for larger values of $n$? Second, is it possible to obtain a quasi-quadratic entropy when choosing only a small subset of all $2^n$ possible challenges? The latter point is of great practical interest, since it would reduce the time required to obtain the PUF identifier while maintaining a high resistance to machine learning attacks.
For values of $n$ larger than $10$, our method seems to become too costly in space and time to produce accurate estimates of the PUF probability distributions under reasonable conditions. One could perhaps have recourse to entropy estimation methods that dispense with learning the distribution itself, such as the NSB estimator [23]. This could be used to check the predicted trend of the PUF entropy for increasing $n$.
Acknowledgment
The authors would like to thank Prof. Gadiel Seroussi, who first suggested a possible link between our problem and BTF theory at the LAWCI’18 conference in Campinas, Brazil.
References
 [1] D. E. Holcomb, W. P. Burleson, and K. Fu, “Power-up SRAM state as an identifying fingerprint and source of true random numbers,” IEEE Transactions on Computers, vol. 58, no. 9, pp. 1198–1210, 2009.
 [2] G. E. Suh and S. Devadas, “Physical unclonable functions for device authentication and secret key generation,” in 44th ACM/IEEE Design Automation Conference, 2007, pp. 9–14.
 [3] Z. Cherif, J.-L. Danger, S. Guilley, and L. Bossuet, “An easy-to-design PUF based on a single oscillator: The Loop PUF,” in 15th Euromicro Conference on Digital System Design (DSD). IEEE, 2012, pp. 156–162.
 [4] M.-D. M. Yu and S. Devadas, “Recombination of physical unclonable functions,” in 35th Annual GOMACTech Conference, 2010.
 [5] B. Gassend, D. Clarke, M. Van Dijk, and S. Devadas, “Delay-based circuit authentication and applications,” in Proceedings of the 2003 ACM Symposium on Applied Computing. ACM, 2003, pp. 294–301.
 [6] O. Rioul, P. Solé, S. Guilley, and J.-L. Danger, “On the entropy of physically unclonable functions,” in IEEE International Symposium on Information Theory (ISIT), July 2016, pp. 2928–2932.
 [7] H. Chang and S. S. Sapatnekar, “Statistical timing analysis considering spatial correlations using a single PERTlike traversal,” in Proceedings of the 2003 IEEE/ACM International Conference on ComputerAided Design. IEEE Computer Society, 2003, p. 621.
 [8] A. Rényi, “On measures of entropy and information,” in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: University of California Press, 1961.
 [9] A. Vijayakumar, V. C. Patil, C. B. Prado, and S. Kundu, “Machine learning resistant strong PUF: Possible or a pipe dream?” in IEEE International Hardware Oriented Security and Trust, 2016, pp. 19–24.
 [10] E. Goto and H. Takahasi, “Some theorems useful in threshold logic for enumerating Boolean functions,” in IFIP Congress, 1962, pp. 747–752.
 [11] R. O. Winder, “Single stage threshold logic,” in Symposium on Switching Circuit Theory and Logical Design. IEEE, 1961, pp. 321–332.

 [12] T. M. Cover, “Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition,” IEEE Transactions on Electronic Computers, no. 3, pp. 326–334, 1965.
 [13] C.-K. Chow, “On the characterization of threshold functions,” in Symposium on Switching Circuit Theory and Logical Design, 1961, pp. 34–38.
 [14] S.-T. Hu, Threshold Logic. Univ. of California Press, 1965.
 [15] D. R. Smith, “Bounds on the number of threshold functions,” IEEE Transactions on Electronic Computers, no. 3, pp. 368–369, June 1966.
 [16] S. Yajima and T. Ibaraki, “A lower bound of the number of threshold functions,” IEEE Transactions on Electronic Computers, vol. EC-14, no. 6, pp. 926–929, Dec. 1965.
 [17] Y. A. Zuev, “Methods of geometry and probabilistic combinatorics in threshold logic,” Discrete Mathematics and Applications, vol. 2, no. 4, pp. 427–438, 1992.
 [18] N. Gruzling, “Linear separability of the vertices of an n-dimensional hypercube,” Master’s thesis, University of Northern British Columbia, 2008.
 [19] T. W. Hungerford, Algebra, ser. Graduate Texts in Mathematics. New York: SpringerVerlag, 1980, vol. 73.
 [20] A. Schaub, O. Rioul, J. J. Boutros, J.-L. Danger, and S. Guilley, “Challenge codes for physically unclonable functions with Gaussian delays: A maximum entropy problem,” Latin American Week on Coding and Information, UNICAMP, Campinas, Brazil, pp. 22–27, 2018.
 [21] Student, “The probable error of a mean,” Biometrika, pp. 1–25, 1908.
 [22] L. Paninski, “Estimation of entropy and mutual information,” Neural computation, vol. 15, no. 6, pp. 1191–1253, 2003.
 [23] I. Nemenman, F. Shafee, and W. Bialek, “Entropy and inference, revisited,” in Advances in Neural Information Processing Systems, 2002, pp. 471–478.
 [24] J. Acharya, A. Orlitsky, A. T. Suresh, and H. Tyagi, “The complexity of estimating Rényi entropy,” in Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2014, pp. 1855–1869.
 [25] J. Delvaux, D. Gu, and I. Verbauwhede, “Upper bounds on the min-entropy of RO sum, arbiter, feed-forward arbiter, and S-ArbRO PUFs,” in Hardware-Oriented Security and Trust (AsianHOST), IEEE Asian. IEEE, 2016, pp. 1–6.
 [26] E. B. Wilson, “Probable inference, the law of succession, and statistical inference,” Journal of the American Statistical Association, vol. 22, no. 158, pp. 209–212, 1927.