Supervised Machine Learning Techniques for Trojan Detection with Ring Oscillator Network

03/12/2019 · by Kyle Worley, et al.

With the globalization of the semiconductor manufacturing process, electronic devices are vulnerable to malicious modification of hardware in the supply chain. The ever-increasing threat of hardware Trojan attacks against integrated circuits has spurred a need for accurate and efficient detection methods. A ring oscillator network (RON) can detect a Trojan by capturing the difference in power consumption between a Trojan-free circuit and a Trojan-inserted circuit. However, process variation and measurement noise are the major obstacles to detecting hardware Trojans with high accuracy. In this paper, we quantitatively compare four supervised machine learning algorithms and classifier optimization strategies for maximizing accuracy and minimizing the false positive rate (FPR). These supervised learning techniques reduce the false positive rate by nearly 40% compared to principal component analysis (PCA) and convex hull classification, while maintaining greater than 90% binary classification accuracy.


I Introduction

While the transition from vertically integrated supply chains to horizontally integrated ones has decreased costs for integrated circuit (IC) designers, the "fabless" approach comes with the steep price of trust [1, 2, 4, 5, 6, 8, 9, 10, 11]. Semiconductor designers must now entrust their intellectual property (IP) to multiple parties in order to have their ICs manufactured at foundries [11, 8]. Not only do they run the risk of having their IP stolen, but it is not uncommon for untrusted system integrators and foundries to insert hardware Trojans before shipping the final product [1, 2, 4, 5]. These Trojans are capable of leaking sensitive information, disabling key portions of the IC, self-destructing the chip, or hindering performance [1, 2]. This has driven the need for fast, accurate, and simple methods of detecting infected ICs before they are able to taint the supply chain.

With this rising need, research in the field of hardware security has focused on finding optimal methods of detecting and classifying Trojans. Initial research suggested the use of semi-invasive strategies such as scanning electron microscopy (SEM) for failure analysis. However, this is too expensive and time consuming to be applied to every IC. Netlist failure detection techniques were also unsuccessful, because Trojans that add functional logic remain undetected.

The most promising technique relies on side channel information, as it is non-invasive and can be gathered quickly. By monitoring side channel information from an IC's power grid it is possible to detect Trojans through their additional activity [15]. In [13], the authors developed a ring oscillator network (RON) in a chip's power grid for hardware Trojan detection. The increased switching activity from Trojan activation manifests itself in decreased RO frequencies due to the variable voltage drop in the chip's power network. Using Principal Component Analysis (PCA) and convex hull classification [19, 20], they were able to achieve greater than 80% classification accuracy with a false positive rate of 50%.

This was improved upon in [16] using a genetic algorithm (GA) for feature reduction and a support vector machine (SVM) for classification. Feature reduction shrinks the feature space, which decreases training time and the possibility of overfitting. The genetic algorithm is built upon the idea of natural selection, where the best features "survive" through each generation; when the algorithm terminates, the surviving features form the optimal feature set. This, in addition to the use of an SVM, resulted in 99.6% classification accuracy and a reduced false positive rate. However, [16] still suffers from a large FPR.

In this paper, we present a supervised machine learning approach for the classification of Trojan-free and Trojan-infected ICs using a RON. The results show that we maintain accuracy similar to previous work while reducing the FPR by using the K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naive Bayes, and ensemble classification algorithms [23, 24, 25, 26]. Experimental results show detection accuracy of at least 88%, with some classifiers reaching 97.4%. Low false positive rates (FPR) were also achieved; in the case of two classifiers, a 0% FPR was reached.

The rest of this paper is laid out as follows: Section II provides the necessary background information, Section III discusses our proposed classification method, and Section IV contains the results and discussion of our proposed method. Section V concludes the paper.

II Background and Experimental Set-up

II-A Hardware Trojan and Related Work

Hardware Trojans are malicious modifications made by attackers during the design and manufacturing process [1, 2, 5, 6]. Trojans can be used to degrade performance, steal information, or block functionality of an IC [1, 2]. These unwanted circuit additions are often hard to detect because they are not always triggered or activated by standard test procedures [1, 2]. Trojans also come in a wide variety of forms, so no single filter can catch them all. They pose the risk that chip owners receive secretly modified hardware, which could lead to devastating consequences [1, 2]. A hardware Trojan can be classified into three main categories according to its physical, activation, and action characteristics [1, 2]. The physical characteristics describe the various hardware manifestations of Trojans according to their shape and size; the activation characteristics describe the conditions that activate the Trojans; and the action characteristics refer to the behavior of the Trojans [1, 2].

Current Trojan detection methods largely focus on (i) functional testing and (ii) side channel analysis [6, 7, 13, 14, 15, 16]. Functional verification attempts to activate Trojans by applying test vectors and comparing the responses with the correct results [1, 15]. The difficulty with this approach is the rarity with which some hardware Trojans are activated; it is nearly impossible to explore every possible state of a circuit and search for Trojan activity [6]. Side-channel analyses, in contrast, detect Trojans by analyzing physical characteristics of the IC chip such as transient current, leakage current, delay, energy, heat generation, or EM radiation [6, 7, 12, 13, 14, 15]. In both approaches, the outputs of the circuit under test are compared with the outputs of a golden circuit. Typically, an adversary would design a Trojan to evade detection by ensuring rare activation, to evade logic testing, and minimal physical characteristics, like size, to escape side-channel-based testing. Backside optical imaging of the fabricated chip enables extraction of the full standard cell layout of the chip with its watermarks, which in turn can be validated with image processing against the expected simulated layout to detect any changes made to accommodate hardware Trojans [12]. A challenge in backside imaging is obtaining a high enough spatial resolution for an accurate representation of a nanometer-scale circuit [12].

II-B Supervised Learning

In the field of machine learning there are two main approaches in use: supervised and unsupervised learning [22]. Unsupervised learning is outside the scope of this paper and will not be discussed further for the sake of brevity. Supervised learning algorithms work under the assumption that the training data is labeled before being processed by the algorithm. By labeling the data, the algorithm knows the desired output for the given input set and can create a hypothesis for determining the desired output for future inputs [22]. Within supervised learning exist two problem types: regression and classification. Regression algorithms map input values to a real-valued output, e.g., predicting stock market prices given a feature set. Classification algorithms, on the other hand, place a set of input data points into one or more "classes", e.g., Trojan free or Trojan infected.

In this paper, a binary classification approach [21, 22] is used to classify each IC as either Trojan free or Trojan infected. In order to properly train the classifier we must operate under the assumption that we have data from both Trojan-free and Trojan-infected circuits. Obtaining known Trojan-free ICs is a challenge in and of itself, and knowing which ICs are infected with Trojans will require some other method of detection until enough data can be collected to train a classifier.

II-C K-Nearest Neighbors

One of the simplest and yet most popular machine learning algorithms is k-nearest neighbors (KNN). When used for classification, the k nearest training samples in the feature space classify a new point through a simple majority vote. This simplicity comes at the cost of longer classification times for larger data sets. The value k is a positive integer, and in the case of binary classification it is useful to set k to an odd number to prevent a split decision. The distance metric can be any method of calculating distance, but Euclidean distance is often used. It can be defined as follows:

d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \quad (1)

As can be seen in Fig. 1, the value of k affects the classifier's performance. By comparing the mean error and accuracy values across a series of k values it is possible to find the optimal value for a data set.

Fig. 1: An example of the KNN algorithm showing the effect the value of k has on classification. Notice that the new example will be classified as class 1 if k = 1, but class 2 if k = 3 [17].

In this paper, the classifier was trained using a range of k values from 1 to 40, and the value with the best FPR and accuracy was selected without overfitting. It is usually safe to select a value of k near the square root of the number of training samples. However, low values of k lead to a classifier that performs worse on noisy data, and high values of k can lead to overfitting the classifier to the training data.
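For illustration, the k sweep described above could be implemented as in the following minimal sketch using scikit-learn. The data names (X_train, y_train, X_test, y_test) are hypothetical placeholders for the RO frequency features and Trojan labels, not the authors' actual pipeline:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

def sweep_k(X_train, y_train, X_test, y_test, k_max=40):
    """Train a KNN classifier for k = 1..k_max and report accuracy and FPR."""
    results = []
    for k in range(1, k_max + 1):
        knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
        knn.fit(X_train, y_train)
        y_pred = knn.predict(X_test)
        # labels=[0, 1]: 0 = Trojan free (negative), 1 = Trojan infected (positive)
        tn, fp, fn, tp = confusion_matrix(y_test, y_pred, labels=[0, 1]).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        fpr = fp / (fp + tn)  # clean chips incorrectly flagged as infected
        results.append((k, accuracy, fpr))
    return results
```

A sweep of this form produces the accuracy and FPR curves of the kind shown in Figs. 4 and 5, from which a single k can be chosen.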

II-D Support Vector Machine

Another popular machine learning algorithm is the support vector machine (SVM) [24]. While not as simple as KNN, it is much more powerful for classification and regression applications. Training an SVM consists of finding the optimal hyperplane that linearly separates the data points with the largest possible margin between the two classes. However, not all data can be linearly separated by a hyperplane, in which case we must apply a "kernel trick" to transform the feature space.

If we define our training data as a set of points of the form (\mathbf{x}_i, y_i), where \mathbf{x}_i is a feature vector and y_i is -1 or 1 to represent the class of \mathbf{x}_i, then we can define our hyperplane as the set of points satisfying the following equation [24]:

\mathbf{w} \cdot \mathbf{x} - b = 0 \quad (2)

If the data set is linearly separable, then one class can be defined as anything on or above the boundary \mathbf{w} \cdot \mathbf{x} - b = 1 and the other class as anything on or below \mathbf{w} \cdot \mathbf{x} - b = -1. To train the SVM we want to maximize the margin between these two hyperplanes; since the margin is 2/\|\mathbf{w}\|, this is equivalent to minimizing \|\mathbf{w}\|. Thus the problem simplifies to:

\min_{\mathbf{w}, b} \|\mathbf{w}\| \quad \text{subject to} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i - b) \geq 1, \; i = 1, \dots, n \quad (3)

The classifier will then be defined by \mathbf{w} and b. An example of a hyperplane used to separate two classes of data can be seen in Fig. 2.

Fig. 2: Example of an SVM hyperplane separating two classes of data. In this example A, B, and C would be classified by computing the dot product; all would be classified as class 'x' [18].

This method will only work for linearly separable data; the classification of other data requires replacing the dot product with a nonlinear kernel function, hence the name "kernel trick". By using the kernel function we can place a hyperplane in a higher-dimensional nonlinear feature space. In this paper a Gaussian radial basis function was used:

K(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\gamma \|\mathbf{x}_i - \mathbf{x}_j\|^2\right) \quad (4)
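A minimal sketch of an RBF-kernel SVM corresponding to Eq. (4), using scikit-learn. The data names are placeholders, and the C and gamma values shown are illustrative rather than the tuned values (Section III describes how they are selected):

```python
from sklearn.svm import SVC

# kernel="rbf" applies the Gaussian radial basis function of Eq. (4);
# gamma is the kernel width parameter, and C weights classification error
# against margin size. Values here are illustrative placeholders.
svm = SVC(kernel="rbf", C=1.0, gamma=0.1)
svm.fit(X_train, y_train)    # X_train, y_train: assumed RO-frequency data and labels
y_pred = svm.predict(X_test)
```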

II-E Naive Bayes

The construction of classifiers using Naive Bayes is a relatively simple process that can produce highly accurate and fast classification results using a probabilistic approach [25]. Using Bayes' theorem we can compute the probability that a data point belongs to a class C_k given the presence of one of the features x of the data [25]:

P(C_k \mid x) = \frac{P(x \mid C_k)\, P(C_k)}{P(x)} \quad (5)

However, when building a classifier we are interested in the probability that a data point belongs to class C_k given multiple features. Using the chain rule, Bayes' theorem can be expanded to account for this. Assuming the n features in the data set can be represented as \mathbf{x} = (x_1, \dots, x_n), it follows [25]:

P(C_k \mid x_1, \dots, x_n) = \frac{P(C_k)\, P(x_1, \dots, x_n \mid C_k)}{P(x_1, \dots, x_n)} \quad (6)

Now if we assume the conditional independence of the features in the set:

P(C_k \mid x_1, \dots, x_n) \propto P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k) \quad (7)

Finally, to create a classification rule we must have a way of making decisions. Using a maximum a posteriori rule, or simply choosing the most probable outcome, we can decide how to assign class labels to data points:

\hat{y} = \operatorname*{arg\,max}_{k} \; P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k) \quad (8)

Naive Bayes can be applied in one of three ways to estimate the likelihood of the features: a Gaussian classifier assumes the features follow a Gaussian distribution, a Multinomial classifier assumes multinomially distributed data, and a Bernoulli classifier assumes binary-valued features. The Gaussian classifier was selected in this paper due to the continuous nature of the data set.
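A minimal sketch of the Gaussian Naive Bayes classifier using scikit-learn, with placeholder data names. GaussianNB estimates a per-class mean and variance for each feature, matching the Gaussian likelihood assumption above:

```python
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()                      # assumes Gaussian-distributed features per class
gnb.fit(X_train, y_train)               # assumed labels: 0 = Trojan free, 1 = Trojan infected
y_pred = gnb.predict(X_test)            # MAP decision as in Eq. (8)
posteriors = gnb.predict_proba(X_test)  # per-class posterior probabilities
```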

II-F Ensemble Learning

Ensemble learning is another technique for producing better prediction results from machine learning algorithms [26]. It operates on the premise that combining the predictive power of individual algorithms can increase the overall predictive power. Several popular strategies include voting, bagging, stacking, boosting, and "bucket of models". In this paper, we implement only a simple voting method: the output of each of the three classifiers is taken as a vote, and the class with the most votes becomes the output of the ensemble. Theoretically, given an odd number of classifiers in the ensemble, a decision that represents the best of each classifier will be made every time [26]. However, if the ensemble is made up of an even number of classifiers, split decisions can arise; such a decision is said to be unstable. This can be mitigated by using an odd number of classifiers or by weighting the votes to reduce the possibility of a split decision.
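The simple voting scheme described above can be sketched with scikit-learn's VotingClassifier; the hyperparameter values are illustrative placeholders, and voting="hard" implements one vote per classifier with majority rule:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=2)),
        ("svm", SVC(kernel="rbf", C=1.0, gamma=0.1)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # each classifier casts one vote; the majority class wins
)
ensemble.fit(X_train, y_train)
y_pred = ensemble.predict(X_test)
```

With three estimators the vote count is odd, so no split decision can occur; two-classifier ensembles such as KNN+SVM would need a tie-breaking rule or weighted votes.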

II-G Ring Oscillator Network and Trojan Detection

Recent work has shown that a ring oscillator (RO) network (RON) connected to the power supply structure of an IC can be used to detect hardware Trojan activity. As shown in Fig. 3, ROs consisting of inverters and a NAND gate for activation control are placed in a vertical orientation within the power structure of an IC. The ROs are provided test patterns from a linear feedback shift register and a decoder. The outputs are selected using a multiplexer, and a counter registers the number of oscillations from the selected RO; the RO's frequency can then be derived from the oscillation count. Any Trojan inserted into an IC will result in extra noise in the power supply structure that would not otherwise be present in a "golden" chip. By injecting the same test patterns into every IC, the Trojans should be at least partially activated and thus cause extra noise. Since an RO's frequency is directly related to its power supply voltage, this Trojan-caused power supply noise propagates to the RO's frequency and results in differing measurements between clean and infected ICs [13, 16].

However, the frequency differences are not always discernible to the human eye, nor to simple algorithmic classification strategies, due to process variations and other factors. In [13], Principal Component Analysis (PCA) was used as a means of feature reduction: the data set contained the frequency data from 8 ROs, but through feature reduction could be accurately represented with just three components. A simple convex hull classification method was then used to classify each IC as either Trojan free or into one of the 23 Trojan categories. While the RON is successful at detecting the difference between Trojan-free and Trojan-infected circuits, the false positive rate was nearly 50%. Using the data collected from the RON, we will try to improve on this false positive rate while maintaining above 90% classification accuracy.
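As a point of reference, the PCA reduction used in [13] can be sketched as follows; ro_frequencies is a hypothetical (n_chips, 8) array holding the eight RO frequencies per chip, not the actual data set:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=3)                    # reduce 8 RO frequencies to 3 components
reduced = pca.fit_transform(ro_frequencies)  # resulting shape: (n_chips, 3)
print(pca.explained_variance_ratio_)         # variance captured by each component
```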

Fig. 3: The ring oscillator network used for Trojan detection. While 8 ROs are used in this configuration, the structure will differ based on the power network of the IC being protected [13, 16].

II-H Experimental Set-up

We conducted our experiments on eight FPGA boards (Nexys4 DDR development board [27]). Each FPGA board is divided into four separate regions to increase the sample size. Each region is treated as an individual IC, and the Trojan and RON architecture are implemented in only a single region at a time to ensure that one region (i.e., one IC) does not interfere with another. We used a total of eight 41-stage ROs in each region (i.e., IC). We distributed combinational and sequential Trojans [28] randomly within a region, using several Trojan benchmarks from Trusthub [28]. We measured the average RO frequency at room temperature and nominal operating voltage over 50 measurements (with Trojan and without Trojan) to cancel out measurement noise. We included ITC-99 benchmarks [29] for normal operation.
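The measurement-averaging step amounts to a simple mean over repeated reads. In the sketch below, raw_frequencies is a hypothetical (n_regions, 8, 50) array of 50 frequency measurements for each of the eight ROs in a region; the random data is a stand-in for real counter readings:

```python
import numpy as np

# Placeholder data: 4 regions x 8 ROs x 50 repeated frequency measurements (Hz).
raw_frequencies = np.random.default_rng(0).normal(100e6, 1e4, size=(4, 8, 50))

# Average the 50 repeated measurements per RO to suppress measurement noise,
# yielding one frequency feature per RO per region.
avg_frequencies = raw_frequencies.mean(axis=-1)  # shape: (4, 8)
```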

III Method

Our method is to take the four previously discussed supervised classification approaches and optimize them for accuracy and a low FPR. The main motivation is to reduce the potential waste of Trojan-free ICs that would otherwise be discarded after being classified as infected. However, accuracy must be maintained to prevent Trojans from being introduced into the supply chain.

In the collected data, each chip has readings for two "golden" (Trojan-free) samples and 23 Trojan-inserted samples, gathered using the test setup described in Section II-H. This data was then labeled accordingly and used to train the classifiers.

The KNN classifier is optimized by finding the value of k that maintains accuracy while minimizing the FPR. By training the KNN classifier over a range of k values and different training sample sizes we were able to select the best value for our data set. The SVM classifier is optimized using two parameters pertaining to the Gaussian kernel function, C and γ. C can be considered the weight that correct classification has over maximizing the margin between the two classes. γ is the inverse of the variance of the Gaussian function; thus, a small γ leads to a large variance, so points can be considered similar even if they are not close together, and vice versa. In order to find the optimal C and γ values we used a grid search, in which a given set of values is exhaustively evaluated until the best values for the data set are found. The Naive Bayes Gaussian classifier is not tuned with any parameters. The three classifiers are then combined in a simple voting ensemble in the following combinations: KNN+SVM+Naive Bayes, KNN+SVM, KNN+Naive Bayes, and SVM+Naive Bayes. The KNN and SVM classifiers retain the same optimization parameters as when trained individually.

Fig. 4: A plot showing the effect the value of k has on classifier accuracy. As can be seen, a value of k = 2 provides sufficient accuracy without overfitting.

Fig. 5: A plot showing the effect the value of k has on the classifier's false positive rate. As shown, higher values of k lead to overfitting and higher false positive rates.

IV Results

Following the method above, each classifier was trained and optimized for three different-sized data sets consisting of 6 chips, 12 chips, and 24 chips. Each sample size was repeated for 20 trials, and the average accuracy, false positive rate (FPR), false negative rate (FNR), true negative rate (TNR), and true positive rate (TPR) were calculated and recorded. Accuracy is the fraction of correct classifications, (TP + TN) / (TP + TN + FP + FN); the four rates are defined as follows:

FPR = \frac{FP}{FP + TN} \quad (9)

FNR = \frac{FN}{FN + TP} \quad (10)

TNR = \frac{TN}{TN + FP} \quad (11)

TPR = \frac{TP}{TP + FN} \quad (12)
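The metrics in Eqs. (9)–(12) can be computed directly from a confusion matrix, as in the following sketch; y_true and y_pred are placeholder label vectors (0 = Trojan free, 1 = Trojan infected):

```python
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
fpr = fp / (fp + tn)                        # Eq. (9)
fnr = fn / (fn + tp)                        # Eq. (10)
tnr = tn / (tn + fp)                        # Eq. (11)
tpr = tp / (tp + fn)                        # Eq. (12)
accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct classifications
```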

The optimization step of the training revealed useful properties of our data set. Initial estimates for the value of k when training the KNN classifier used the square root of the number of samples in the training data set. While this resulted in a very accurate classifier, it came at the cost of an FPR greater than or equal to 50%. This is most likely a result of the data set having little noise and being very prone to overfitting. In Figure 4, the accuracy for a range of k values is depicted; note that as k increases the accuracy tends to plateau as a result of overfitting. Figure 5 shows the same plateau for the FPR. Since we want to avoid overfitting and low values of k perform well, we can assume the data set is not noisy. This led to the decision to use k = 2 for every sample size. This maintained the greater than 90% accuracy benchmark and had a best-case FPR of only 9.4%, a nearly 40% decrease compared to PCA and convex hull classification. Even with small training sets the KNN maintained an FPR under 20% (Table I).

Metric      6 Samples   12 Samples   24 Samples
TNR         0.813       0.815        0.906
FPR         0.187       0.185        0.094
FNR         0.075       0.063        0.051
TPR         0.916       0.927        0.745
Accuracy    0.916       0.927        0.945
TABLE I: KNN Classifier Results

Optimizing the SVM proved more difficult than the KNN classifier. The grid search was quick to converge on a C value of 1 and a γ value of 0.1, but the FPR left much to be desired. As can be seen in Table II, the SVM is very accurate, but when trained on fewer samples it struggles with a high FPR. Using a balancing optimization it was still able to achieve 97.4% classification accuracy and a 7.1% FPR (Table II), outperforming convex hull classification and approaching the results achieved in [16]. This leads us to believe that with a larger data set and increased training set sizes the SVM could become more accurate and reduce the FPR even further. Unfortunately, it is not always possible to have large data sets due to the factors outlined above.

Metric      6 Samples   12 Samples   24 Samples
TNR         0.445       0.605        0.929
FPR         0.555       0.355        0.071
FNR         0.017       0.023        0.023
TPR         0.983       0.977        0.977
Accuracy    0.940       0.946        0.974
TABLE II: SVM Classifier Results

Despite its many operating assumptions, the Naive Bayes classifier is a simple and fast yet powerful method. With no optimization the classifier produced results that were slightly less accurate than the other classifiers. At the 6-chip sample size the classifier was 88.3% accurate but had only a 6.9% FPR. The accuracy dropped only 0.1% when increasing the training sample size to 12 chips, but the FPR dropped to 4.5%, the lowest FPR of any non-ensemble classifier (Table III). The Naive Bayes classifier produced the best results in terms of FPR but was held back by a higher FNR, which led to reduced accuracy. In theory, this could be reduced by tuning the decision threshold, but doing so would most likely increase the FPR.

Metric      6 Samples   12 Samples   24 Samples
TNR         0.931       0.955        0.939
FPR         0.069       0.045        0.061
FNR         0.121       0.124        0.127
TPR         0.879       0.876        0.873
Accuracy    0.883       0.882        0.873
TABLE III: Naive Bayes Gaussian Classifier Results

When using ensemble learning the hope is that the results are better than those of each of the individual classifiers by themselves. However, it also runs the risk of the opposite occurring. We encountered both situations while training the ensembles. The ensemble containing all three classifiers performed better than the lone SVM classifier at the lower training sample sizes, yet it was outperformed at the 24-chip sample size (Table IV). The ensembles pairing Naive Bayes with the SVM or the KNN had the lowest overall FPRs of all classifiers, but struggled to exceed the desired 90% binary classification accuracy threshold (Tables VI and VII). This can be attributed to the Naive Bayes classifier's characteristics dominating those of the other classifiers. Despite the lower accuracy, at the 24-chip training sample size both ensembles had a 0% FPR. Overall, the best ensemble method was the combination of the SVM and KNN classifiers (Table V). At the lower training sample sizes the FPR was only 19.6% and 16.4%, while keeping an accuracy of 92.1% and 93.0%, respectively. At the 24-chip training sample size the FPR was 0.3% higher than the SVM alone, and came with a 3.4% accuracy loss.

Metric      6 Samples   12 Samples   24 Samples
TNR         0.785       0.796        0.908
FPR         0.215       0.204        0.092
FNR         0.066       0.062        0.055
TPR         0.934       0.938        0.945
Accuracy    0.922       0.927        0.943
TABLE IV: SVM+KNN+NB Ensemble Classifier Results

Metric      6 Samples   12 Samples   24 Samples
TNR         0.804       0.836        0.926
FPR         0.196       0.164        0.074
FNR         0.069       0.062        0.058
TPR         0.931       0.938        0.942
Accuracy    0.921       0.930        0.940
TABLE V: SVM+KNN Ensemble Classifier Results

Metric      6 Samples   12 Samples   24 Samples
TNR         0.939       0.953        1.000
FPR         0.061       0.047        0.000
FNR         0.125       0.127        0.129
TPR         0.875       0.873        0.871
Accuracy    0.880       0.879        0.881
TABLE VI: SVM+NB Ensemble Classifier Results

Metric      6 Samples   12 Samples   24 Samples
TNR         0.982       0.993        1.000
FPR         0.018       0.007        0.000
FNR         0.122       0.126        0.137
TPR         0.878       0.874        0.863
Accuracy    0.886       0.883        0.873
TABLE VII: KNN+NB Ensemble Classifier Results

Considering the results, the choice of the best approach depends heavily on the data set and desired outcomes. The Naive Bayes and KNN classifiers are extremely fast and simple, and do well at maintaining low FPRs and moderate accuracy across the sample sizes. Combining the SVM and KNN classifiers in an ensemble maintained greater than 90% accuracy while keeping the FPR lower than using an SVM alone. Thus, with very little data the best classification performance will come from Naive Bayes or an ensemble containing the Naive Bayes classifier, such as the KNN+NB ensemble. However, with sufficient data the SVM classifier alone still provides the best trade-off between accuracy and FPR.

V Conclusion

In this paper, we presented a quantitative comparison of the performance of four supervised machine learning algorithms when classifying ICs based on their ring oscillator network frequencies. This method was able to achieve 97.4% binary classification accuracy and an FPR of just 7.1% when using an SVM classifier, and ensemble approaches achieved 88% accuracy with no false positives. Despite these promising results, supervised learning approaches are often impractical in a real supply chain. Finding proven "golden" chips is a challenge in itself, and knowing which chips are infected at the scale assumed in the data set is near impossible. Future work is planned to use unsupervised approaches and only "golden" chip data to classify ICs as Trojan free or infected, to maximize utility in real supply chains.

VI Acknowledgment

This work was supported in part by the National Science Foundation under Grant Number CNS-1850241 and UAH NFR. We would like to thank Dr. Tehranipoor and Dr. Forte for sharing the resources on Trusthub [28].

References

  • [1] M. Tehranipoor and F. Koushanfar, “A Survey of Hardware Trojan Taxonomy and Detection,” in IEEE Design & Test of Computers, vol. 27, no. 1, pp. 10-25, Jan.-Feb. 2010.
  • [2] Shakya, Bicky, Tony He, Hassan Salmani, Domenic Forte, Swarup Bhunia, and Mark Tehranipoor. “Benchmarking of hardware Trojans and maliciously affected circuits.” Journal of Hardware and Systems Security 1, no. 1 (2017): 85-102.
  • [3] Mark Tehranipoor and Hassan Salmani, “Trojan Benchmarks” Available: https://www.trust-hub.org/benchmarks/trojan
  • [4] Jyothi, Vinayaka, and Jeyavijayan JV Rajendran. “Hardware Trojan Attacks in FPGA and Protection Approaches.” In The Hardware Trojan War, pp. 345-368. Springer, Cham, 2018.
  • [5] Salmani, Hassan. “Trusted Testing Techniques for Hardware Trojan Detection.” In Trusted Digital Circuits, pp. 109-119. Springer, Cham, 2018.
  • [6] Cui, Xiaotong, Kaijie Wu, and Ramesh Karri. “Hardware Trojan detection using path delay order encoding with process variation tolerance.” In 2018 IEEE 23rd European Test Symposium (ETS), pp. 1-2. IEEE, 2018.
  • [7] Plusquellic, Jim, and Fareena Saqib. “Detecting Hardware Trojans Using Delay Analysis.” In The Hardware Trojan War, pp. 219-267. Springer, Cham, 2018.
  • [8] M. T. Rahman, D. Forte, Q. Shi, G. K. Contreras and M. Tehranipoor, “CSST: Preventing distribution of unlicensed and rejected ICs by untrusted foundry and assembly,” 2014 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), Amsterdam, 2014, pp. 46-51.
  • [9] Rahman, Md Tauhidur, Domenic Forte, and Mark M. Tehranipoor. “Protection of Assets from Scan Chain Vulnerabilities Through Obfuscation.” In Hardware Protection through Obfuscation, pp. 135-158. Springer, Cham, 2017.
  • [10] Rahman, Md Tauhidur, Domenic Forte, Quihang Shi, Gustavo K. Contreras, and Mohammad Tehranipoor. “CSST: an efficient secure split-test for preventing IC piracy.” In Test Workshop (NATW), 2014 IEEE 23rd North Atlantic, pp. 43-47. IEEE, 2014.
  • [11] Banga, Mainak, and Michael S. Hsiao. “Hardware IP Trust.” In The Hardware Trojan War, pp. 75-100. Springer, Cham, 2018.
  • [12] Boyou Zhou et al., “Detecting Hardware Trojans using backside optical imaging of embedded watermarks,” 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, 2015, pp. 1-6.
  • [13] Shane Kelly, Xuehui Zhang, Mohammed Tehranipoor, and Andrew Ferraiuolo, ”Detecting Hardware Trojans using On-chip Sensors in an ASIC Design.” Journal of Electronic Testing 31, no. 1 (2015): 11-26.
  • [14] Tehranipoor, M., & Koushanfar, F. (2013). “A Survey of Hardware Trojan Taxonomy and Detection.” IEEE Design & Test, 1–1.
  • [15] Wang, Xiaoxiao & Tehranipoor, Mark & Plusquellic, J. (2008). Detecting malicious inclusions in secure hardware: Challenges and solutions. 15-19. 10.1109/HST.2008.4559039.
  • [16] Karimian, Nima & Tehranipoor, Fatemeh & Forte, Domenic & Rahman, Md Tauhidur. (2015). Genetic Algorithm for Hardware Trojan Detection with Ring Oscillator Network (RON).
  • [17] Bronshtein, A, “A quick introduction to K-Nearest Neighbors Algorithm”. 2017.
  • [18] Ng, A. “CS229 Lecture Notes: Support Vector Machines”. Stanford University, 2018.
  • [19] Lever, Jake, Martin Krzywinski, and Naomi Altman. ”Points of significance: Principal component analysis.” (2017): 641.
  • [20] Tang, Min, Jie-yi Zhao, Ruo-feng Tong, and Dinesh Manocha, ”GPU accelerated convex hull computation.” Computers & Graphics 36, no. 5 (2012): 498-506.
  • [21] Yang, Zhiguang, and Haizhou Ai. “Demographic classification with local binary patterns.” In International Conference on Biometrics, pp. 464-473. Springer, Berlin, Heidelberg, 2007.
  • [22] Chapelle, Olivier, Bernhard Scholkopf, and Alexander Zien. “Semi-supervised learning (Chapelle, O. et al., eds.; 2006) [book reviews].” IEEE Transactions on Neural Networks 20, no. 3 (2009): 542-542.
  • [23] Denoeux, Thierry. ”A k-nearest neighbor classification rule based on Dempster-Shafer theory.” IEEE transactions on systems, man, and cybernetics 25, no. 5 (1995): 804-813.
  • [24] Suykens, Johan AK, and Joos Vandewalle, “Least squares support vector machine classifiers.” Neural processing letters 9, no. 3 (1999): 293-300.
  • [25] McCallum, Andrew, and Kamal Nigam. ”A comparison of event models for naive bayes text classification.” In AAAI-98 workshop on learning for text categorization, vol. 752, no. 1, pp. 41-48. 1998.
  • [26] Dietterich, Thomas G. “Ensemble learning.” (2002).
  • [27] Digilent, “Nexys 4 DDR Artix-7 FPGA Trainer Board.” Available: https://store.digilentinc.com/nexys-4-ddr-artix-7-fpga-trainer-board-recommended-for-ece-curriculum/
  • [28] Trusthub Trojan benchmarks. Available: https://trust-hub.org/benchmarks/trojan
  • [29] ITC-99 benchmarks. Available: https://www.cerc.utexas.edu/itc99-benchmarks/bench.html