1. Introduction
The considerably high capital cost of semiconductor manufacturing motivates most semiconductor companies to outsource the fabrication of their integrated circuit (IC) designs to contract foundries. Despite the reduced cost and other benefits, this trend has led to ever-increasing security risks such as IC counterfeiting, piracy, and unauthorized overproduction by the contract foundries (Manoj et al., 2018; Sayadi et al., 2018; Stangl et al., 2018; Subramanyan et al., 2015a). The overall financial risk caused by such counterfeit and unauthorized ICs was estimated to be over $169 billion per year (most counterfeited parts represent a $169 billion potential challenge for global semiconductor industry, [n. d.]). A major threat arises from reverse engineering: an attacker can fully identify an IC's functionality by stripping it layer by layer and extracting the unveiled gate-level netlist. To prevent such reverse engineering, IC obfuscation techniques have been extensively researched in recent years (Azar et al., 2019; Yasin et al., 2017). The general idea is to camouflage some gates in an IC so that their gate types cannot be determined by optical reverse engineering, while preserving the same functionality as the original gates. Such techniques were highly effective until very recently, when attacks based on logical analysis were invented and widely applied (El Massad et al., 2015). These attacks exploit the fact that there are limited types of gates (e.g., AND, OR, XOR) in an IC, so an attacker can brute-force all possible combinations of types for the camouflaged gates to find the one that functions identically to the targeted IC. As brute force is usually prohibitively expensive, efficient methods such as Boolean satisfiability (SAT)-based attacks have more recently been proposed and have attracted enormous attention (Roshanisefat et al., 2018b; Liu et al., 2016).
The runtime for a SAT attack to reverse engineer an IC depends heavily on the complexity of the camouflaged IC, and can vary from milliseconds to thousands of years or more depending on the number and layout of camouflaged gates. A successful obfuscation defense therefore aims to force attackers to spend a prohibitive amount of time (i.e., many years) on deobfuscation. However, camouflaging each gate comes at a heavy cost in finance, power, and area; this tradeoff forces us to search for an optimal layout instead of purely increasing the number of camouflaged gates. In other words, we must select the best set of gates to camouflage in order to maximize the deobfuscation runtime. Although this selection can significantly influence the deobfuscation runtime, until now it has generally been based on human heuristics or experience, which is arbitrary and suboptimal (Khaleghi and Rao, 2018). This is mainly because it is infeasible to use trial and error over all the different ways of obfuscation: there are millions of combinations to try, and the runtime for each try (i.e., running the attacker) can be days, weeks, or years.
To address this issue, this paper focuses on efficient and scalable ways to estimate the runtime for an attacker to deobfuscate a camouflaged IC. This research topic is highly underexplored because of its significant challenges: 1) Difficulty in characterizing the hidden and sophisticated algorithmic mechanisms of attackers. In recent years, a large number of deobfuscation methods have been proposed with various techniques (Khaleghi and Rao, 2018). In order to beat the defender in practice, methods with increasingly sophisticated theories, rules, and heuristics have been proposed and adopted. The behavior of such highly nonlinear and strongly coupled systems is prohibitively hard for conventional simple models (e.g., linear regression and support vector machines (Bishop and Mitchell, 2014)) to characterize. 2) Difficulty in extracting determinant features from discrete, graph-structured ICs. The inputs of the runtime estimation problem are the IC and the gates selected for camouflaging, where the first input is a heterogeneous graph while the second is a vector with discrete values. Conventional feature extraction methods cannot be applied to such data without significant information loss. Hence, it is highly challenging to formulate them intactly and integrate them seamlessly into mathematical forms that can be fed to conventional computational and machine learning models. 3) Requirement of high efficiency and scalability for deobfuscation runtime estimation. The key to defending against deobfuscation is speed: the faster the defender can estimate the deobfuscation runtime for each candidate set of camouflaged gates, the more candidate sets the defender can evaluate, and hence the better the obfuscation will be. Moreover, the estimation speed must not be sensitive to different obfuscation strategies, so that the defense strategy remains controllable.
This work addresses all the above challenges and proposes the first generic framework for deobfuscation runtime prediction, based on graph deep learning techniques. In recent years, deep learning methods have achieved immense success in complex cognitive tasks such as object recognition and machine translation, which has motivated their generalization to graph-structured data (Kipf and Welling, 2017). By formulating ICs and their camouflaged gates as multi-attributed graphs, this work leverages and extends state-of-the-art graph deep learning methods such as Graph Convolutional Networks (GCN) (Kipf and Welling, 2017) to instantiate a graph regressor. This end-to-end deep graph regressor can characterize the underlying, sophisticated process by which an attacker deobfuscates ICs. It can also automatically extract the discriminative features that determine the deobfuscation runtime, achieving accurate runtime prediction. Once trained, the estimator predicts the runtime almost instantly by a single feed-forward propagation. The major contributions of this paper are:
Proposing a new framework, ICNet, for deobfuscation runtime estimation based on graph deep learning.

Developing a new multi-attributed graph convolutional neural network for graph regression.

Conducting systematic experimental evaluations and analyses on real-world datasets.
We evaluate this proof-of-concept on the ISCAS-85 benchmark for one replacement policy and one SAT attack tool (Subramanyan et al., 2015b) that employs the Lingeling solver. However, the approach can be applied to any circuit, replacement policy, or SAT solver, as the GCN learns the patterns and is not confined to any particular circuit, replacement policy, or solver.
The rest of the paper is organized as follows. Section 2 reviews existing work in this area. Section 3 elaborates a graph deep learning model for the SAT runtime prediction task. Section 4 presents experiments on real-world data. Section 5 concludes by summarizing the study's important findings.
2. Background and Related Work
Here, we discuss logic obfuscation and SAT attacks, followed by graph convolutional networks and relevant works.
2.1. Logic Obfuscation and SAT Attacks
Logic obfuscation, often referred to as logic locking (Yasin et al., 2016b), is a hardware security solution that hides the IP using key-programmable logic gates. The activation of the obfuscated IP is accomplished in a trusted regime before releasing the product to the market, thereby reducing the probability that an attacker obtains the secret configuration keys. During the activation phase, the correct key is applied to these key-programmable gates to recover the correct functionality of the IC/IP. The correct key is then stored in the IC in a tamper-proof memory.
Although obfuscation schemes try to minimize the probability of an attacker determining the correct key, and thus to prevent pirated and illegal copies, the introduction of the SAT attack showed that these schemes can be broken (Subramanyan et al., 2015b). To perform a SAT attack, the attacker is required to have access to a functional IC along with the obfuscated netlist. The SAT attack iteratively finds Distinguishing Input Patterns (DIPs): an input X that produces two different outputs Y1 ≠ Y2 when two different key values K1 and K2 are applied. Each DIP can thus be used to distinguish correct from incorrect keys. The number of DIPs discovered during the SAT-based attack equals the number of iterations needed to unlock the obfuscated design. In each iteration, a new constraint is added to the SAT solver, until the solver can no longer find a satisfying assignment; at that point the correct key has been found. The SAT-based attack is summarized in Algorithm 1.
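The DIP-based loop can be illustrated with a toy sketch. The snippet below brute-forces an explicit key set in place of a real SAT solver (the actual attack keeps the key constraints implicit in a CNF instance); the `locked_circuit` with a single XOR key bit is a hypothetical example, not part of the attack tool.

```python
from itertools import product

def locked_circuit(x, key):
    # Hypothetical locked gate: an AND whose output is XOR-masked
    # by one key bit; the correct key is (0,).
    return (x[0] & x[1]) ^ key[0]

def oracle(x):
    # Functional (activated) IC that the attacker can query.
    return locked_circuit(x, (0,))

def sat_attack(circuit, oracle, n_inputs, n_keys):
    # Explicit candidate key set; a real attack encodes this in SAT.
    keys = list(product([0, 1], repeat=n_keys))
    while True:
        # Find a DIP: an input where two surviving keys disagree.
        dip = next((x for x in product([0, 1], repeat=n_inputs)
                    if len({circuit(x, k) for k in keys}) > 1), None)
        if dip is None:
            return keys[0]  # all surviving keys are functionally equivalent
        y = oracle(dip)     # query the functional IC on the DIP
        # Add the constraint: keep only keys consistent with the DIP output.
        keys = [k for k in keys if circuit(dip, k) == y]
```

Here a single DIP, e.g. X = (0, 0), already separates the two candidate keys, so `sat_attack(locked_circuit, oracle, 2, 1)` terminates after one iteration with the correct key.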
Different SAT-hard schemes such as (Xie and Srivastava, 2018; Yasin et al., 2016a) have been proposed. Furthermore, new obfuscation schemes that exploit the non-Boolean behavior of circuits (Xie and Srivastava, 2017), and therefore cannot be converted into a SAT instance, have been proposed for SAT resilience. Some such defenses add cycles into the design (Roshanisefat et al., 2018a), which may cause the SAT attack to get stuck in an infinite loop; however, advanced SAT-based attacks such as CycSAT (Zhou et al., 2017) can extract the correct key despite such defenses.
To ensure that a proposed defense is robust against SAT attacks, defenders need to run rigorous simulations, or actually run the attack, to verify whether the defense is strong enough. The work in (Selsam et al., 2018) utilizes a neural network with single-bit supervision to predict whether a given circuit in Conjunctive Normal Form (CNF) can be decrypted. However, this is limited to a few kinds of SAT solvers and cannot be applied to SAT-hard solutions such as the SMT attack (Zamiri Azar et al., 2019), a superset of SAT attacks. With the proposed GCN-based predictor, in contrast, the defender can determine the deobfuscation time in a single run of the GCN, which takes only a few seconds.
2.2. Graph Convolutional Networks
Spectral graph theory studies the properties of a graph in relation to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph. Many graph and geometric convolution methods have been proposed recently. The spectral convolution methods (Defferrard et al., 2016; Kipf and Welling, 2017) are the mainstream graph convolution algorithms. Their theory is based on graph Fourier analysis (Shuman et al., 2013). A polynomial approximation was first proposed by (Hammond et al., 2011). Inspired by this, graph convolutional neural networks (GCNNs) (Defferrard et al., 2016) are a successful attempt at generalizing the powerful convolutional neural networks (CNNs), which handle Euclidean data, to modeling graph-structured data. Kipf and Welling proposed a simplified type of GCNN (Kipf and Welling, 2017), called graph convolutional networks (GCNs). The GCN model naturally integrates the connectivity patterns and feature attributes of graph-structured data and significantly outperforms many state-of-the-art methods. With rational functions, GCN can model non-smooth signals in the spectral domain (Chen et al., 2018).
3. Proposed Graph Learning based SAT Runtime Prediction
This section introduces the problem setting and presents the proposed deobfuscation time prediction method.
3.1. Problem Setting
First, a circuit is modeled as a graph G = (V, E, A), where V is a set of n vertices (gates), E represents the edges, and A is an unweighted adjacency matrix. A signal defined on the nodes may be regarded as a vector x ∈ ℝⁿ. The combinatorial graph Laplacian is defined as L = D − A, where D is the degree matrix.
Accordingly, we formulate the estimation of the running time on an IC as a regression task. Specifically, the model accepts the graph structure along with gate attributes as input and predicts the running time:

(1)  t̂ = g_θ(f(G, X))

where f is a function of the graph structure G and of X, which denotes the attributes of the gates such as gate type. Function f can accept heterogeneous data formats for G and X, since G is often represented as a matrix while X is a vector. θ indicates the parameters of the standard neural network layers (such as fully-connected layers) connecting the output of f to the labeled time t. The goal is to learn the parameters of both f and g_θ so that the difference between t̂ and t is minimized.
3.2. ICNet
ICNet is a neural network based on the graph convolution operator. As shown in Figure 1, ICNet encodes the obfuscated circuit on the left-hand side into two components:

graph structure G: the complete set of local connections is often used to represent the graph structure. Typically, a graph Laplacian is employed, since it contains the gate-wise connections.

gate attributes X: gate-level information can be encoded as a numerical vector of input features. Such information can include the gate type, whether the gate is obfuscated, and so on.
By applying a GCN, we can easily build a model that automatically learns the relationship between the circuit and the deobfuscation time. However, the original graph convolutional operator is not suitable for circuits, since the graph Laplacian makes the operator behave like label propagation, i.e., it assumes the attributes of each gate are similar to those of its neighbors. This is called the smoothness assumption, and it does not fit the fact that, in theory, the gate type or encryption location of a gate does not determine its neighbors' related attributes. The issue stems from the graph Laplacian, which weights each node i by its degree d_i (the diagonal entry of row i) and each of its neighbors by −1. Consequently, these contributions cancel out when the gate representations are aggregated using a sum, and the model can hardly learn the relationship between the summed residues and the labeled time. To solve this issue, our model employs several policies to enhance the traditional GCN for circuit learning.

Graph representation: our model uses the adjacency matrix instead of the graph Laplacian. This representation avoids the intrinsic smoothness assumption, which is not compatible with ICs.

Feature aggregation: the mean function is a typical method for aggregating node features into a single number. However, the mean does not consider how many values are being aggregated. A more flexible way is to build a neural network that learns the feature aggregation automatically.

Gate aggregation: similarly, the mean can be used to aggregate gate representations into a circuit-level graph representation. Because real-world aggregation is complicated, another neural network is designed to learn the gate aggregation function.
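The cancellation argument behind the first policy can be checked numerically: every row of the combinatorial Laplacian sums to zero, so sum-aggregating a Laplacian-propagated constant signal collapses it to zero, whereas the adjacency matrix preserves it. A minimal numpy sketch (the 3-gate path graph is an arbitrary illustration):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # toy 3-gate circuit graph
D = np.diag(A.sum(axis=1))
L = D - A                               # combinatorial Laplacian

x = np.ones(3)                          # a constant gate attribute
print((L @ x).sum())                    # 0.0: contributions cancel out
print((A @ x).sum())                    # 4.0: adjacency keeps the signal
```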
Our model is based on the GCN setting (Kipf and Welling, 2017), which simplifies the layer parameters of the graph convolutional operator and applies an approximation technique to boost efficiency. Graph convolutional networks (GCNs), as a state-of-the-art deep learning method for graphs, focus on processing graph signals defined on undirected graphs.
As L is a real symmetric positive semidefinite matrix, it has a complete set of orthonormal eigenvectors and associated ordered real non-negative eigenvalues, identified as the frequencies of the graph. The Laplacian is diagonalized by the Fourier basis U: L = U Λ Uᵀ, where Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues λ_1, …, λ_n. The graph Fourier transform of a signal x is defined as x̂ = Uᵀx, and its inverse as x = Ux̂ (Shuman et al., 2013; Shuman et al., 2016). To enable the formulation of fundamental operations such as filtering in the vertex domain, the convolution operator on a graph is defined in the Fourier domain such that x ∗ y = U((Uᵀx) ⊙ (Uᵀy)), where ⊙ is the element-wise product and x, y are two signals defined on the vertex domain. It follows that a vertex signal x is filtered by a spectral filter g_θ as y = U g_θ(Λ) Uᵀ x. Note that a real symmetric matrix can be decomposed as g_θ(L) = U g_θ(Λ) Uᵀ since L = U Λ Uᵀ. D. K. Hammond et al. and Defferrard et al. (Hammond et al., 2011; Defferrard et al., 2016) apply a polynomial approximation to the spectral filter so that:

(2)  g_θ(Λ) ≈ Σ_{k=0..K} θ_k Λ^k, and hence y ≈ Σ_{k=0..K} θ_k L^k x
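The spectral machinery above can be reproduced in a few lines of numpy; the 3-node path graph is again an arbitrary toy example:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A            # combinatorial Laplacian

lam, U = np.linalg.eigh(L)           # eigenvalues = graph frequencies, U = Fourier basis
assert np.allclose(U @ np.diag(lam) @ U.T, L)   # L = U Lambda U^T

x = np.array([1.0, 2.0, 3.0])
x_hat = U.T @ x                      # graph Fourier transform
x_back = U @ x_hat                   # inverse transform
assert np.allclose(x, x_back)
```

A polynomial filter such as θ_0 I + θ_1 L can then be applied directly in the vertex domain, with no eigendecomposition at all, which is exactly what makes the approximation of (Hammond et al., 2011; Defferrard et al., 2016) efficient.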
According to the analysis above, the graph Laplacian matrix is replaced with the adjacency matrix A. To fit the whole-graph-level regression task, the proposed method designs two aggregation neural networks. Formally, the model is denoted as:

(3)  t̂ = g_θ(Agg_gate(Agg_feat(σ(A X W))))

where σ is a nonlinear activation, W denotes the convolutional layer weights, and Agg_feat and Agg_gate are the feature and gate aggregation networks. However, the running time tends to grow at an exponential rate as the number of encrypted gates increases. Therefore, the model is modified to predict the logarithm of the runtime:

(4)  log t̂ = g_θ(Agg_gate(Agg_feat(σ(A X W))))
As shown in Fig. 1, the model applies one or two graph convolutional operations to fuse information from the graph structure and gate attributes in the spectral domain. Then two sets of neural networks perform the feature and gate aggregation. The last few layers are fully connected and predict the runtime.
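The pipeline of Fig. 1 can be summarized in a short forward-pass sketch. This is a simplified illustration, not the released implementation: a mean stands in for the learned aggregation networks, and all layer sizes and weight names are assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def icnet_forward(A, X, W1, W2, w_out):
    # Graph convolution with the adjacency matrix (no Laplacian smoothing):
    H = relu(A @ X @ W1)            # (n_gates, hidden)
    # Feature/gate aggregation (mean as a stand-in for the learned NNs):
    g = relu(H.mean(axis=0) @ W2)   # circuit-level representation
    # Fully-connected head predicts the log-runtime of Eq. (4):
    return float(g @ w_out)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(5, 5)).astype(float)   # toy 5-gate circuit
X = rng.random((5, 7))                              # mask + one-hot type features
log_t = icnet_forward(A, X, rng.random((7, 4)), rng.random((4, 4)), rng.random(4))
```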
3.3. Algorithm description
Algorithm 1 first prepares the graph adjacency matrix as the circuit connection representation (line 2). To fit the machine learning schema, the whole dataset is split into training and testing sets; each set is then split into small batches to improve learning efficiency (lines 3–4). ICNet training is an iterative process that updates the model until the residues are small enough or have converged (lines 6–13). First, the model parameters are initialized from a Gaussian or uniform distribution. In each iteration, a batch of the training set is selected randomly. Using Equation 4, the model computes the predicted runtime (line 10) and then calculates the residues between the real runtime and the prediction (line 11). Following the standard deep learning schema, the model updates its parameters using the derivatives with respect to the parameters, scaled by the learning rate (line 12).
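The iterative procedure of lines 6–13 follows the standard mini-batch gradient descent pattern. A minimal sketch on a stand-in linear model (a real run would use the ICNet forward pass with an autodiff framework; the toy data and hyperparameters below are arbitrary assumptions):

```python
import numpy as np

def train(X, t, lr=0.5, epochs=500, batch=8, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])          # init parameters from a Gaussian (line 5)
    for _ in range(epochs):                  # iterate until converged (lines 6-13)
        idx = rng.choice(len(X), size=batch, replace=False)  # random batch
        pred = X[idx] @ w                    # forward pass (line 10)
        resid = pred - t[idx]                # residues (line 11)
        w -= lr * X[idx].T @ resid / batch   # gradient step (line 12)
    return w

X = np.linspace(0.1, 1.0, 20).reshape(-1, 1)
t = 2.0 * X[:, 0]                            # toy target: runtime = 2 * feature
w = train(X, t)                              # w converges toward 2.0
```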
4. Evaluation
This section elaborates the evaluation of the proposed method, ICNet, against competitive baselines. The graph deep learning baselines are ChebNet (Defferrard et al., 2016) and GCN (Kipf and Welling, 2017); the input of these models is exactly the same as that of our model. We also compare against several state-of-the-art regression models (https://scikit-learn.org/stable/modules/linear_model.html):

Linear Regression (LR)

LASSO

Epsilon-Support Vector Regression (SVR); two kernels are applied: polynomial (Poly) and RBF

Ridge Regression (RR)

Elastic Net (EN)

Orthogonal Matching Pursuit (OMP)

SGD Regression (SGD)

Least Angle Regression (LARS)

Theil-Sen Estimator (Theil)
These regression models do not model the graph using the Laplacian or adjacency matrix, since they can only accept feature vectors. Therefore, the inputs are encoded as the mean or sum over the concatenation of the Laplacian (or adjacency) matrix and the gate attributes.
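The baseline encoding just described can be sketched as follows (a minimal illustration; the concrete matrix shapes and the helper name are assumptions):

```python
import numpy as np

def baseline_input(A, X, agg="mean"):
    # Concatenate the structure matrix (Laplacian or adjacency) with the
    # gate attributes, then collapse across gates so the flat regressors
    # receive a fixed-length feature vector.
    M = np.concatenate([A, X], axis=1)       # (n_gates, n_gates + n_feats)
    return M.mean(axis=0) if agg == "mean" else M.sum(axis=0)

A = np.array([[0, 1], [1, 0]], dtype=float)  # toy 2-gate adjacency
X = np.array([[1, 0], [0, 1]], dtype=float)  # toy gate attributes
v = baseline_input(A, X, agg="sum")          # array([1., 1., 1., 1.])
```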
4.1. Data processing
The datasets are obtained by running the SAT attack (Subramanyan et al., 2015b, a) on the real-world ISCAS-85 benchmark. First, we take a circuit, select a random gate, and replace it with a LUT of fixed size (LUT size 4 in the current work). To deobfuscate, we run the SAT attack algorithm (Subramanyan et al., 2015b, a) with the obfuscated circuit netlist as input. We monitor the time the SAT attack takes to decode the key, which is the deobfuscation time. The proposed model is evaluated on two datasets:

Dataset 1: the total number of encryption locations ranges from 1 to 350; this tests whether the model is sensitive to the number of encrypted gates.

Dataset 2: the total number of encryption locations ranges from 1 to 3; this tests whether the model can handle very small values.
The circuit used in the experiments is the same throughout, with a total of 1529 gates. For the graph deep learning methods, the graph is represented using the Laplacian or adjacency matrix, while for the general regression baselines, the Laplacian or adjacency matrix is summed or averaged across gates. Though the evaluations shown here are a mere proof-of-concept of how powerful the proposed GCN-based deobfuscation runtime prediction is, it can be applied to a SAT-hardening solution utilizing any replacement policy, LUT size, and other SAT parameters by retraining the GCN.
4.2. Experiment configuration
The gate features used in the experiments include:

gate mask: if the gate is encrypted, the value is set to 1, otherwise 0.

gate type: the gate types include {AND, NOR, NOT, NAND, OR, XOR}; they are encoded using one-hot coding, e.g., [1,0,0,0,0,0] for an AND gate and [0,1,0,0,0,0] for a NOR gate.
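Putting the two features together, each gate becomes a 7-dimensional vector (1 mask bit followed by 6 one-hot type bits); the helper name below is ours:

```python
GATE_TYPES = ["AND", "NOR", "NOT", "NAND", "OR", "XOR"]

def encode_gate(gate_type, is_encrypted):
    # gate mask (1 if encrypted) followed by the one-hot gate type
    onehot = [1 if g == gate_type else 0 for g in GATE_TYPES]
    return [1 if is_encrypted else 0] + onehot

encode_gate("AND", True)    # [1, 1, 0, 0, 0, 0, 0]
encode_gate("NOR", False)   # [0, 0, 1, 0, 0, 0, 0]
```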
For the graph deep learning models (ChebNet and ICNet), the graph structure is represented using the graph Laplacian or adjacency matrix. These models employ the ADAM optimizer (Le et al., 2011) and stop learning when the training loss has converged. The implementation of our model will be available online. All the baselines and the proposed model are tested on two different feature sets, since it is unknown a priori whether the gate type is useful:

Location: Only the gate mask is included.

All features: Besides gate mask, gate type is also included.
For node aggregation, we apply sum and mean, since they are the most popular choices. A deep learning model can also learn the aggregation automatically with a neural network; in the results, the suffix NN (e.g., ICNet-NN) denotes this automatic version. It is expected that a deep neural network can learn an aggregation at least as good as the hand-picked sum or mean.
Table 1. Mean squared error (MSE) on Dataset 1.

| Method | Location, Sum | Location, Mean | All feat, Sum | All feat, Mean |
|---|---|---|---|---|
| SVR RBF | 1.6791 | 0.6784 | 1.6675 | 0.6739 |
| SVR Poly | 0.1913 | 2.1890 | 0.1696 | 2.2091 |
| SGD | 2.1450e+25 | 2.1823 | 1.0430e+26 | 2.2072 |
| LR | 0.2839 | 0.2284 | 0.2449 | 0.2253 |
| RR | 0.2309 | 2.1508 | 0.2058 | 2.1738 |
| LASSO | 0.9213 | 2.1843 | 1.0127 | 2.2083 |
| EN | 0.5763 | 2.1843 | 0.6409 | 2.2083 |
| OMP | 1.8182 | 1.9192 | 1.8651 | 2.0337 |
| LARS | 1.9968 | 2.1277 | 2.0434 | 2.1833 |
| Theil | 0.2948 | 0.2238 | 0.2385 | 0.2277 |
| ChebNet | 0.1484 | 8.8370e+33 | 0.1761 | 0.1760 |
| ChebNet-NN (learned agg.) | 0.17858 | | 3.8549e+27 | |
| GCN | 0.3364 | 0.4149 | 0.2496 | 0.3290 |
| GCN-NN (learned agg.) | 0.1811 | | 0.1606 | |
| ICNet | 0.1534 | 0.1256 | 0.2390 | 0.1902 |
| ICNet-NN (learned agg.) | 0.0843 | | 0.1367 | |
Table 2. Mean squared error (MSE) on Dataset 2.

| Method | Location, Sum | Location, Mean | All feat, Sum | All feat, Mean |
|---|---|---|---|---|
| SVR RBF | 0.0051 | 0.0048 | 0.0050 | 0.0051 |
| SVR Poly | 0.0048 | 0.0048 | 0.0048 | 0.0051 |
| SGD | 7.6301e+25 | 0.0045 | 2.0675e+26 | 0.0049 |
| LR | 6.9063e+23 | 4.6521e+20 | 7.2916e+25 | 5.8600e+23 |
| RR | 0.0070 | 0.0045 | 0.0065 | 0.0049 |
| LASSO | 0.0047 | 0.0045 | 0.0046 | 0.0049 |
| EN | 0.0047 | 0.0045 | 0.0046 | 0.0049 |
| OMP | 0.0047 | 0.0045 | 0.0045 | 0.0049 |
| PAR | 0.0054 | 0.1918 | 0.0051 | 0.3143 |
| LARS | 0.0047 | 0.0045 | 0.0046 | 0.0049 |
| Theil | N/A | N/A | N/A | N/A |
| ChebNet | 0.0047 | 0.0045 | 4.3570e+28 | 0.0048 |
| ChebNet-NN (learned agg.) | 0.0043 | | 0.0047 | |
| GCN | 0.0061 | 0.0046 | 0.0048 | 0.0050 |
| GCN-NN (learned agg.) | 0.0050 | | 0.1606 | |
| ICNet | 0.0049 | 0.0047 | 0.0040 | 0.0043 |
| ICNet-NN (learned agg.) | 0.0051 | | 0.0048 | |
4.3. Regression Results
In the Dataset 1 experiment (Table 1), all methods achieved acceptable mean squared error except SGD (sum), which did not learn a reasonable model to predict the runtime, as its error is very large (on the order of e+25/e+26). Most regression methods are sensitive to the aggregation method: for example, using only the location feature, the MSE of RR is 0.2309 with sum but 2.1508 with mean. Other sensitive models include SVR, LASSO, and EN. The best regression baselines are LR and Theil, which achieve an MSE of around 0.22. The graph deep learning model ChebNet is slightly better than the best regression model; however, ChebNet is not stable and is sensitive to the aggregation method and feature set, sometimes yielding very large errors. Our model, ICNet, is stable across feature and aggregation settings and outperformed all other methods. Note that ICNet-NN (MSE of 0.0843 with location features) is better than ICNet with sum or mean, which demonstrates that a better aggregation method exists and that a deep neural network can learn it automatically. Note also that ICNet is better than GCN under all settings, which shows that our improvements are effective in the circuit scenario.
Dataset 2 is more challenging: all the runtimes are small, so the model has to be very precise to achieve a low MSE. All methods are at almost the same MSE level (Table 2). Once again, some regression models are unstable, such as SGD and LR. The graph deep learning methods ChebNet and ICNet remain at the best error level. ChebNet can achieve the best level but is sensitive to the settings, while ICNet is insensitive to all configurations. ICNet-NN is still the best method, outperforming its mean and sum versions. Moreover, ICNet is more stable than GCN and ChebNet.
4.4. Prediction behavior analysis
In this section, we show several predicted values alongside the real values to analyze the prediction behavior.
Since there is little difference among methods on Dataset 2, we focus on several competitive baselines from the Dataset 1 experiments. Several baselines performed very badly, such as OMP and SGD, which only output values around a constant level. SVR (RBF) is also poor and yields a constant value when the real runtime exceeds a threshold. The results of EN and LASSO are positively correlated with the real values, but the correlation parameters differ significantly from the truth. LR, RR, SVR (Poly), and Theil predict values relatively closer to the truth than the other baselines, but with high variance. The proposed method, ICNet, predicts the values very precisely and with small variance.
5. Conclusion
In this work, we have introduced a neural network model for predicting SAT attack runtime on ICs. To properly fuse the graph structure and gate attributes, an enhanced graph convolutional operator is introduced. The proposed method avoids the attribute propagation that is built into the original GCN but is not suitable for ICs. Experiments on real-world datasets suggest that the proposed model can accurately model the runtime as a function of the circuit graph.
References
 Azar et al. (2019) Kimia Zamiri Azar, Hadi Mardani Kamali, Houman Homayoun, and Avesta Sasan. 2019. SMT Attack: Next Generation Attack on Obfuscated Circuits with Capabilities and Performance Beyond the SAT Attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems (2019), 97–122.
 Bishop and Mitchell (2014) Christopher M Bishop and Tom M Mitchell. 2014. Pattern Recognition and Machine Learning. (2014).
 Chen et al. (2018) Zhiqian Chen, Feng Chen, Rongjie Lai, Xuchao Zhang, and ChangTien Lu. 2018. Rational Neural Networks for Approximating Jump Discontinuities of Graph Convolution Operator. arXiv preprint arXiv:1808.10073 (2018).
 Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems. 3844–3852.
 El Massad et al. (2015) Mohamed El Massad, Siddharth Garg, and Mahesh V Tripunitara. 2015. Integrated Circuit (IC) Decamouflaging: Reverse Engineering Camouflaged ICs within Minutes.. In NDSS.
 Hammond et al. (2011) David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. 2011. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis 30, 2 (2011), 129–150.
 Khaleghi and Rao (2018) Soroush Khaleghi and Wenjing Rao. 2018. Hardware Obfuscation Using Strong PUFs. In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). IEEE, 321–326.
 Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR.
 Le et al. (2011) Quoc V Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y Ng. 2011. On optimization methods for deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning. Omnipress, 265–272.
 Liu et al. (2016) Duo Liu, Cunxi Yu, Xiangyu Zhang, and Daniel Holcomb. 2016. Oracle-guided incremental SAT solving to reverse engineer camouflaged logic circuits. In Design, Automation & Test in Europe Conference & Exhibition (DATE), 2016. IEEE, 433–438.
 Manoj et al. (2018) P. D. Sai Manoj, Ferdinand Brasser, L. Davi, A. Dhavlle, T. Frassetto, S. Rafatirad, A. Sadeghi, A. Sasan, H. Sayadi, S. Zeitouni, and H. Homayoun. 2018. Hardware-Assisted Security: Understanding Security Vulnerabilities and Emerging Attacks for Better Defenses. In International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES).
 most counterfeited parts represent a $169 billion potential challenge for global semiconductor industry ([n. d.]) IHS Technology Press Release: Top 5 most counterfeited parts represent a $169 billion potential challenge for global semiconductor industry. [n. d.]. https://technology.ihs.com/405654/top-5-most-counterfeited-parts-represent-a-169-billion-potential-challenge-for-global-semiconductor-market
 Roshanisefat et al. (2018a) Shervin Roshanisefat, Hadi Mardani Kamali, and Avesta Sasan. 2018a. SRCLock: SAT-Resistant Cyclic Logic Locking for Protecting the Hardware. In Proceedings of the 2018 on Great Lakes Symposium on VLSI (GLSVLSI ’18).
 Roshanisefat et al. (2018b) Shervin Roshanisefat, Harshith K Thirumala, Kris Gaj, Houman Homayoun, and Avesta Sasan. 2018b. Benchmarking the Capabilities and Limitations of SAT Solvers in Defeating Obfuscation Schemes. arXiv preprint arXiv:1805.00054 (2018).
 Sayadi et al. (2018) H. Sayadi, N. Patel, P. D. Sai Manoj, A. Sasan, S. Rafatirad, and H. Homayoun. 2018. Ensemble Learning for Hardware-based Malware Detection: A Comprehensive Analysis and Classification. In ACM/EDAA/IEEE Design Automation Conference.
 Selsam et al. (2018) Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. 2018. Learning a SAT Solver from Single-Bit Supervision. ArXiv abs/1802.03685 (2018).

 Shuman et al. (2013) David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. 2013. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine 30, 3 (2013), 83–98.
 Shuman et al. (2016) David I Shuman, Benjamin Ricaud, and Pierre Vandergheynst. 2016. Vertex-frequency analysis on graphs. Applied and Computational Harmonic Analysis 40, 2 (2016), 260–291.
 Stangl et al. (2018) J. Stangl, T. Loruenser, and P. D. Sai Manoj. 2018. A Fast and Resource Efficient FPGA Implementation of Secret Sharing for Storage Applications. In ACM/EDAA/IEEE Design Automation and Test in Europe (DATE).
 Subramanyan et al. (2015a) Pramod Subramanyan, Sayak Ray, and Sharad Malik. 2015a. Evaluating the security of logic encryption algorithms. In Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on. IEEE, 137–143.
 Subramanyan et al. (2015b) Pramod Subramanyan, Sayak Ray, and Sharad Malik. 2015b. Evaluating the security of logic encryption algorithms. In Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on. IEEE, 137–143.
 Xie and Srivastava (2017) Y. Xie and A. Srivastava. 2017. Delay locking: Security enhancement of logic locking against IC counterfeiting and overproduction. In ACM/EDAC/IEEE Design Automation Conference (DAC). 1–6.
 Xie and Srivastava (2018) Y. Xie and A. Srivastava. 2018. Anti-SAT: Mitigating SAT Attack on Logic Locking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2018), 1–1.
 Yasin et al. (2016a) M. Yasin, B. Mazumdar, J. J. V. Rajendran, and O. Sinanoglu. 2016a. SARLock: SAT attack resistant logic locking. In 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST).
 Yasin et al. (2016b) M. Yasin, J. J. Rajendran, O. Sinanoglu, and R. Karri. 2016b. On Improving the Security of Logic Locking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 35, 9 (Sept 2016), 1411–1424.
 Yasin et al. (2017) Muhammad Yasin, Abhrajit Sengupta, Mohammed Thari Nabeel, Mohammed Ashraf, Jeyavijayan JV Rajendran, and Ozgur Sinanoglu. 2017. Provably-secure logic locking: From theory to practice. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1601–1618.
 Zamiri Azar et al. (2019) Kimia Zamiri Azar, Hadi Mardani Kamali, Houman Homayoun, and Avesta Sasan. 2019. SMT Attack: Next Generation Attack on Obfuscated Circuits with Capabilities and Performance Beyond the SAT Attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems (2019), 97–122.
 Zhou et al. (2017) H. Zhou, R. Jiang, and S. Kong. 2017. CycSAT: SAT-based attack on cyclic logic encryptions. In 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). 49–56.