CryptoSPN
AI algorithms, and machine learning (ML) techniques in particular, are increasingly important to individuals' lives, but have caused a range of privacy concerns addressed by, e.g., the European GDPR. Using cryptographic techniques, it is possible to perform inference tasks remotely on sensitive client data in a privacy-preserving way: the server learns nothing about the input data and the model predictions, while the client learns nothing about the ML model (which is often considered intellectual property and might contain traces of sensitive data). While such privacy-preserving solutions are relatively efficient, they are mostly targeted at neural networks, can degrade the predictive accuracy, and usually reveal the network's topology. Furthermore, existing solutions are not readily accessible to ML experts, as prototype implementations are not well-integrated into ML frameworks and require extensive cryptographic knowledge. In this paper, we present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs). SPNs are a tractable probabilistic graphical model that allows a range of exact inference queries in linear time. Specifically, we show how to efficiently perform SPN inference via secure multi-party computation (SMPC) without accuracy degradation while hiding sensitive client and training information with provable security guarantees. Next to foundations, CryptoSPN encompasses tools to easily transform existing SPNs into privacy-preserving executables. Our empirical results demonstrate that CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
In our increasingly connected world, the abundance of user information and availability of data analysis techniques originating from artificial intelligence (AI) research has brought machine learning (ML) techniques into daily life. While these techniques are already deployed in many applications like credit scoring, medical diagnosis, biometric verification, recommender systems, fraud detection, and language processing, emerging technologies such as self-driving cars will further increase their popularity.
These examples show that progress in AI research certainly has improved user experience, potentially even saving lives when employed for medical or safety purposes. However, as the prevalent usage in modern applications often requires the processing of massive amounts of sensitive information, the impact on user privacy has come into the public spotlight.
This culminated in privacy regulations such as the European General Data Protection Regulation (GDPR), which came into effect in 2018. Not only does the GDPR provide certain requirements to protect user data, it also includes restrictions on decisions based on user data, which may be interpreted as a “right to explanation” [17]
. Luckily, new deep probabilistic models that encode the joint distribution, such as sum-product networks (SPNs)
[36], can indicate whether the model is fit to predict the data at hand, or raise a warning otherwise. This increases trust as they “know when they do not know”. Moreover, SPNs can also perform inference with missing information [35], an important aspect for real-life applications.

Generally, probabilistic graphical models [24] provide a framework for understanding what inference and learning are, and have therefore emerged as one of the principal theoretical and practical approaches to ML and AI [14]
. However, one of the main challenges in probabilistic modeling is the trade-off between the expressivity of the models and the complexity of performing various types of inference, as well as learning them from data. This inherent trade-off is clearly visible in powerful – but intractable – models like Markov random fields, (restricted) Boltzmann machines, (hierarchical) Dirichlet processes, and variational autoencoders. Despite these models’ successes, performing inference on them resorts to approximate routines. Moreover, learning such models from data is generally harder as inference is a sub-routine of learning, requiring simplified assumptions or further approximations. Having guarantees on tractability at inference and learning time is therefore a highly desired property in many real-world scenarios.
Tractable graphical models such as SPNs guarantee exactly this: performing exact inference for a range of queries. They compile probabilistic inference routines into efficient computational graphs similar to deep neural networks, but encode a joint probability distribution. As a result, they can not only be used for one ML task, but support many different tasks by design, ranging from outlier detection (joint distribution) to classification or regression (conditional inference). They have been successfully used in numerous real-world applications such as image classification, completion and generation, scene understanding, activity recognition, and language and speech modeling. Despite these successes, it is unclear how one can develop an SPN framework that is GDPR-friendly.
As a naive solution, SPN tasks can be performed only on client devices to ensure that no sensitive information is handed out, but this requires the service provider to ship a trained model to clients, thereby giving up valuable intellectual property and potentially leaking sensitive data as such models often contain traces of sensitive training data, e.g., due to unintended memorization [7]. Therefore, current “ML as a Service” (MLaaS) applications usually send and hence leak client data to a remote server operated by the service provider to perform inference (cf. top of Figure 1).
Even if the service provider and the remote server are considered trustworthy, the privacy of clients can still be compromised by breaches, hacks, and negligent or malicious insiders. Such incidents occur frequently even at high-profile companies: recently, Microsoft Outlook was hacked [45] and AT&T’s customer support was bribed [29]. Thus, it is not enough to protect client data just from outsiders, it must also be hidden from the server to ensure privacy.
Previously, protecting the identity of individuals via anonymization techniques was seen as sufficient when learning on or inferring from data of a collection of users. Such techniques reduce raw data to still enable extraction of knowledge without individuals being identifiable. However, recent works conclude that current de-identification measures are insufficient and unlikely to satisfy GDPR standards [41].
We believe this indicates that cryptographic measures should be employed to satisfy today’s privacy demands. The cryptographic literature has actively developed protocols and frameworks for efficient and privacy-preserving ML in the past years. So far, efforts were focused on deep/convolutional neural networks, see
[38] for a recent systematization of knowledge. There, usually a scenario is considered where the server holds a model and performs private ML inference on a client’s data, with no information except for the inference result being revealed to the client (cf. bottom of Figure 1).

Existing frameworks mostly rely on homomorphic encryption (HE), secure multi-party computation (SMPC), or a combination of both, to enable private inference with various security, resource, and usage properties. As many ML tasks today already require intense computational resources, the overhead incurred by introducing cryptographic privacy mechanisms is substantial. Though a line of prominent frameworks from CryptoNets [15] to XONN [39] has established increased efficiency and relatively low execution times for private inference, research has mainly focused on NNs
by looking for efficient ways to securely compute common activation functions, sometimes degrading accuracy by using more efficient approximations. Existing frameworks only possess a low degree of automation and often require very low-level model descriptions, making it hard for non-experts to run private inference using their own models. Additionally, for approaches using SMPC, it is very common that the topology of the NN is leaked, which might reveal some model information to the client.
In this work, we present foundations and tools for privacy-preserving ML in the unexplored domain of sum-product networks (SPNs). Our framework, which we call CryptoSPN, demonstrates that SPNs can very well be protected with cryptographic measures. Specifically, after presenting the necessary background for private ML and SPNs (Section 2), we show how to efficiently perform private SPN inference using SMPC (Section 3
). We combine techniques from both AI and applied cryptography to achieve this. Contrary to popular SMPC-based approaches for protecting NNs, ours leaks no information from the network topology by using Random Tensorized SPNs (RAT-SPNs)
[35]. We implement CryptoSPN using the state-of-the-art SMPC framework ABY [12] and provide an open-source tool that can transform SPN instances from the SPFlow framework
[33] into privacy-preserving executables (Section 4). CryptoSPN is easily usable by non-experts and intended to make private ML available to the broader AI community working on a wide range of sophisticated models. In an experimental evaluation (Section 5), we show that CryptoSPN performs private inference in reasonable time while preserving accuracy. With our work, we push private ML beyond NNs and bring attention to the crucial, emerging task of making a variety of ML applications private.

We start with the necessary background on secure computation, existing privacy-preserving ML solutions, and SPNs.
First described by [46], the concept of secure computation (SC) lets computational parties (e.g., a client and a server) evaluate arbitrary functions on secret inputs without leaking any information but the results. For example, a server can calculate statistics on client data without learning the raw data, or a group of clients can jointly schedule meetings without revealing their availability. The SC research community has put forth efficient schemes with practical implementations for applications that rely on homomorphic encryption (HE) or secure multi-party computation (SMPC). The former allows computations directly on encrypted data, whereas in SMPC an interactive protocol is executed between parties that, in the end, reveals only the desired output. A general rule of thumb is that SMPC requires more communication, whereas computation is the bottleneck for HE. In this work, we rely on secure two-party computation, i.e., SMPC with two parties: client and server.
We briefly recapitulate the most influential works on preserving privacy in machine learning tasks using SC techniques.
Privacy-preserving neural network inference was first proposed in [34, 42, 3]
. Secure classification via hyper-plane decision, naive Bayes, and decision trees was presented in
[5]. SecureML [31] provides SMPC-friendly linear regression, logistic regression, and neural network training using SGD as well as secure inference. With CryptoNets
[15], the race for the fastest NN-based privacy-preserving image classification began: MiniONN [28], Chameleon [40], Gazelle [20], XONN [39], and DELPHI [30] are only some of the proposed frameworks. These frameworks mostly offer privacy-preserving deep/convolutional neural network inference based on HE or SMPC protocols, or even combinations of both techniques in different computational and security models. However, they are not readily accessible to ML experts, as prototype implementations are not well-integrated into ML frameworks and require extensive cryptographic knowledge to secure applications. Moreover, these frameworks are often engineered towards delivering outstanding performance for benchmarks with certain standard data sets (e.g., MNIST [26]
), but fail to generalize in terms of accuracy and performance. There are only very few attempts to directly integrate privacy technology into ML frameworks: for TensorFlow, there exists rudimentary support for differential privacy
[1], HE [44], and SMPC [10], and for Intel’s nGraph compiler there exists an HE backend [4]. Very recently, Facebook’s AI researchers released CrypTen [18], which provides an SMPC integration with PyTorch. However, currently not much is known about the underlying cryptographic techniques and, therefore, its security guarantees.
Trusted execution environments (TEEs) are an intriguing alternative to cryptographic protocols. They use hardware features to shield sensitive data processing tasks. TEEs are widely available, e.g., via Intel Software Guard Extensions (SGX), and therefore are explored for efficiently performing ML tasks [25]. Unfortunately, Intel SGX provides no provable security guarantees and requires software developers to manually incorporate defenses against software side-channel attacks, which is extremely difficult. Moreover, severe attacks on Intel SGX allowed attackers to extract private data from the TEE, making SGX less secure than cryptographic protocols [8].
In SMPC, the function to be computed securely is represented as a Boolean circuit consisting of XOR and AND gates: each gate is securely computed based on the encrypted outputs of preceding gates, and only the values of the output wires of the entire circuit are decrypted to obtain the overall output. The intermediate results leak no information and only the outputs are decrypted by running a corresponding sub-protocol.
The literature considers two security settings: semi-honest, where the involved parties are assumed to honestly follow the protocol but want to learn additional information about other parties’ inputs, and malicious, which even covers active deviations from the protocol. SMPC protocols are usually divided into two phases: a setup phase that can be performed independently of the inputs (e.g., during off-peak hours or at night), and an online phase that can only be executed once the inputs are known. Most of the “expensive” operations can be performed in the setup phase in advance such that the online phase is very efficient. Two prominent SMPC protocols are Yao’s garbled circuit (GC) [46] and the GMW protocol [16]. As we heavily rely on floating-point operations with high-depth circuits, we use Yao’s GC protocol, which has a constant round complexity (the round complexity of the GMW protocol depends on the circuit depth and, hence, is not suited for our case).
We present a schematic overview for the secure evaluation of a single gate in Figure 2 and refer to [27] for further technical details.
The central idea of this protocol is to encode the function (more precisely, its representation as a Boolean circuit) and the inputs such that the encoding reveals no information about the inputs but can still be used to evaluate the circuit. This encoding is called “garbling”, and individual garbled values can be seen as encryptions. We use the common notation $\tilde{f}$ and $\tilde{x}$ to refer to a garbled circuit or garbled inputs, respectively. The evaluation of the garbled circuit (consisting of many garbled gates) on the garbled inputs, in turn, results in an encoding $\tilde{y}$ of the output. The encoded output can only be decoded jointly to the plain value $y = f(x)$, i.e., both parties have to agree to do so. In the protocol, one of the parties, the “garbler”, is in charge of creating the garbled circuit. The other party, the “evaluator”, obtains the garbled circuit, evaluates it, and then both parties jointly reveal the output.
The wires $w$ in the Boolean circuit of a function $f$ are assigned two randomly chosen labels / keys: $k_w^0$ and $k_w^1$, indicating that the wire’s plain value is $0$ or $1$, respectively. Though there is a label for each possible value $v \in \{0, 1\}$, only the label $k_w^v$ for the actual plain value $v$ of wire $w$ is used to evaluate the garbled circuit. The garbler creates both labels and therefore is the only one who knows the mapping to plaintext values – for the evaluator, a single random-looking label reveals no information.
The garbler creates a randomly permuted “garbled” gate $\tilde{g}$ in the form of an encrypted truth table for each gate $g$ in the circuit of $f$, and sends all garbled gates to the evaluator. The key idea is to use an encryption scheme that takes two encryption keys. For each truth table entry, the label associated with the plaintext value of the outgoing wire is encrypted using the labels associated with the plain values of the two incoming wires as encryption keys (cf. Figure 2).
Now, if the evaluator is in possession of the labels $k_a^{v_a}$ and $k_b^{v_b}$ corresponding to the incoming wires’ values $v_a$ and $v_b$, then exactly one entry of $\tilde{g}$ can be successfully decrypted using $k_a^{v_a}$ and $k_b^{v_b}$ as decryption keys (a special type of encryption scheme is used to detect whether a decryption was successful). This results in $k_c^{g(v_a, v_b)}$, the label of the outgoing wire of $g$ associated with the desired plaintext value $g(v_a, v_b)$. Since only the desired entry can be decrypted and the labels are chosen randomly and independently of the wire values, the evaluator can perform this computation without learning any plaintext information.
The remaining challenge is that the evaluator needs to obtain the correct garbled inputs (i.e., the labels corresponding to its inputs) without revealing those inputs to the garbler. This is solved by a cryptographic protocol called oblivious transfer (OT), which enables one party with input bit $b$ to obliviously obtain one of two strings $s_0, s_1$ held by another party: the first party learns $s_b$ without revealing $b$, and learns nothing about $s_{1-b}$. With this building block, Yao’s GC protocol is composed as follows (cf. Figure 2): In the setup phase, the garbler creates all wire labels as well as the garbled circuit $\tilde{f}$ and sends $\tilde{f}$ to the evaluator. During the online phase, the garbler sends the labels corresponding to its own input to the evaluator, while the evaluator’s garbled inputs are obtained via OT. Then, the evaluator evaluates $\tilde{f}$ gate by gate. The output can be jointly decrypted if the parties reveal the output label associations.
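As a toy illustration of a single garbled AND gate, consider the following minimal Python sketch (helper names are our own; real implementations such as ABY use point-and-permute, free-XOR, and half-gates instead of the trial decryption shown here):

```python
import hashlib
import os
import random

def gate_hash(k1: bytes, k2: bytes) -> bytes:
    """Derive a one-time pad for one truth-table entry from two wire labels."""
    return hashlib.sha256(k1 + k2).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Garbler: two random 128-bit labels per wire (input wires a, b; output c),
# one label for plaintext value 0 and one for value 1.
labels = {w: [os.urandom(16), os.urandom(16)] for w in "abc"}

# Garbled AND gate: for every input combination, encrypt the output label
# under the two matching input labels; then randomly permute the table.
table = [xor(labels["c"][va & vb], gate_hash(labels["a"][va], labels["b"][vb]))
         for va in (0, 1) for vb in (0, 1)]
random.shuffle(table)

# Evaluator: holds exactly one label per input wire (here va = vb = 1,
# obtained via OT) and trial-decrypts every entry. In practice, a checkable
# encryption (or point-and-permute) pinpoints the single valid entry.
ka, kb = labels["a"][1], labels["b"][1]
candidates = [xor(entry, gate_hash(ka, kb)) for entry in table]
recovered = [c for c in candidates if c in labels["c"]]  # the one valid label
```

With the inputs $v_a = v_b = 1$, the evaluator recovers exactly the output-wire label for $1 \wedge 1 = 1$, yet the label itself reveals nothing about the plaintext value.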
Improvements on OT [19, 2] and the garbling scheme [22, 47] have significantly reduced the overhead of Yao’s GC protocol, making it viable for practical applications. Specifically, with the half-gates optimization, the GC created and sent in the setup phase requires $2\kappa$ bits per binary AND gate, where $\kappa$ is the symmetric security parameter (e.g., $\kappa = 128$ for the currently recommended security level). Obliviously transferring the labels corresponding to the evaluator’s input requires communication linear in $\kappa$ per input bit, split between the setup and online phases. Additionally, the labels for the garbler’s input must be sent in the online phase ($\kappa$ bits per input bit). The protocol only requires a constant number of rounds of interaction.
Recent years have seen a significant interest in tractable probabilistic representations such as Arithmetic Circuits (ACs) [9], Cutset Networks [37], and SPNs [36]. In particular, SPNs, an instance of ACs, are deep probabilistic models that can represent high-treewidth models [48] and facilitate exact inference for a range of queries in time linear in the network size.
Formally, an SPN is a rooted directed acyclic graph, consisting of sum, product, and leaf
nodes. The scope of an SPN is the set of random variables (RVs) appearing in the network. An SPN can be defined recursively as follows: (1) a tractable univariate distribution is an SPN; (2) a product of SPNs defined over different scopes is an SPN; and (3) a convex combination of SPNs over the same scope is an SPN. Thus, a product node in an SPN represents a factorization over independent distributions defined over different RVs, while a sum node stands for a mixture of distributions defined over the same RVs. From this definition, it follows that the joint distribution modeled by such an SPN is a valid normalized probability distribution [36].

To answer probabilistic queries in an SPN, we evaluate the nodes starting at the leaves. Given some evidence, the probability output of querying leaf distributions is propagated bottom-up. For product nodes, the values of the child nodes are multiplied and propagated to their parents. For sum nodes, we sum the weighted values of the child nodes. The value at the root indicates the probability of the asked query. To compute marginals, i.e., the probability of partial configurations, we set the probability at the leaves of the marginalized variables to $1$ and then proceed as before. This also allows us to compute conditional queries such as $P(Y \mid E) = P(Y, E)/P(E)$. Finally, using a bottom-up and a top-down pass, we can compute approximate MPE states [36]. All these operations traverse the network at most twice and therefore run in linear time w.r.t. the size of the SPN.
Given the AC structure of SPNs, SMPC is a fitting mechanism to preserve privacy in SPN inference as it relies on securely evaluating a circuit. Compared to, e.g., NNs, SPNs do not have alternating linear and non-linear layers, which would complicate the application of SMPC protocols. Here, we are concerned with private SPN inference in a setting where the client has a private input and the server is in possession of a model; in the end, the server learns nothing, and the client only learns the inference result (cf. bottom of Figure 1). Unfortunately, we cannot use the arithmetic version of the GMW protocol, as it only provides integer or fixed-point operations, which is insufficient for tractable and normalized probabilistic inference as in the case of SPNs. Instead, CryptoSPN uses Yao’s GC protocol, which evaluates Boolean circuits and thereby allows us to use floating point operations by including Boolean sub-circuits corresponding to IEEE 754-compliant 32- or 64-bit floating point operations [11] in the circuit representation of the to-be-evaluated SPN.
Our approach (cf. Figure 3) is to transform the SPN into a Boolean circuit and then evaluate it via SMPC. The server input consists of all the model parameters of the SPN (i.e., weights for the weighted sums and parameters for the leaf distributions), the client input consists of the evidence, and the output is the root node value. We perform all computations in the log-domain using the well-known log-sum-exp trick, which also provides a runtime advantage for our SMPC approach as it replaces products with more efficient additions. Contrary to convention, we use the log2 domain in CryptoSPN since the circuits for log2 and exp2 are significantly smaller than those for natural log and exp.
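The log2-domain computation of a sum node can be sketched as follows (function names are our own; the max-factoring shown is the standard numerically stable form of the log-sum-exp trick):

```python
import math

def log2_sum_exp2(log2_terms):
    """log2(sum_i 2^t_i), stabilized by factoring out the maximum term."""
    m = max(log2_terms)
    return m + math.log2(sum(2.0 ** (t - m) for t in log2_terms))

def sum_node_log2(log2_weights, log2_children):
    # In the log2 domain, multiplying each child by its weight becomes an
    # addition of log2 values; the outer sum needs exp2/log2 conversions.
    return log2_sum_exp2([lw + lc for lw, lc in zip(log2_weights, log2_children)])

# Check against the linear-domain computation on tiny probabilities that
# would underflow without the log-domain representation.
ws = [0.3, 0.7]
ps = [1e-30, 5e-31]
expected = math.log2(sum(w * p for w, p in zip(ws, ps)))
got = sum_node_log2([math.log2(w) for w in ws], [math.log2(p) for p in ps])
```

Product nodes become plain additions of log2 values, which is exactly why the log domain is cheaper under SMPC.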
Due to the SMPC security properties, all the model parameters are hidden from the client, and the input values or evidence are hidden from the server. However, this naive approach alone does not provide our desired privacy guarantees since the circuit evaluated in SMPC is public. Therefore, the topology of the SPN is leaked to the client, including which RVs (the scope) are used in which leaves. Depending on how the SPN was learned, this might reveal information about the server’s model, such as correlations among RVs, number of mixtures, etc. To hide this information, one could make use of generic private function evaluation techniques such as incorporating universal circuits (UCs) [43]. UCs allow one party to choose a function as its private input, which is then obliviously evaluated on the other party’s input such that nothing about the function or the input is revealed. Employing these generic techniques, however, would drastically increase the overhead we introduce via SMPC. For this reason, the related work on SMPC for private NN inference usually assumes that the NN topology is public, with the impact on model privacy being unclear in this situation. To mitigate these concerns in CryptoSPN, we tailor efficient techniques stemming from both AI and applied cryptography research specifically to SPNs. The first method hides specifics of the training data by using Random Tensorized SPNs (RAT-SPNs) [35], while the second method allows hiding the scope of any existing SPN without the need to re-learn a RAT-SPN.
It is possible that the structure of a general SPN leaks information about the training data. To hide any information that could be revealed by the SPN structure, we propose to use RAT-SPNs [35]. The RAT-SPN structure is built randomly via region graphs. Given a set of RVs $X$, a region $R$ is defined as any non-empty subset of $X$. Given any region $R$, a $k$-partition $\mathcal{P}$ of $R$ is a collection of $k$ non-overlapping sub-regions $R_1, \dots, R_k$ whose union is $R$, i.e., $R_i \neq \emptyset$ for all $i$, $R_i \cap R_j = \emptyset$ for $i \neq j$, and $R_1 \cup \dots \cup R_k = R$. This partitioning algorithm randomly splits the RVs, and we recursively split the regions until we reach a desired partitioning depth. Here, we consider only 2-partitions. From these region graphs, we can construct an SPN by specifying the number of uniform leaves per RV. Since the structure-building algorithm is data-agnostic (it only knows the number of RVs in the dataset), there is no information leakage. This also means that any initial random structure for a given number of RVs is a valid initial structure for any other dataset with the same number of dimensions. After obtaining the structure, we use a standard optimization algorithm for parameter estimation. The structure produced by the RAT-SPN algorithm is regular, and the values of the parameters after optimization encode the knowledge needed to build the joint distribution. In our scheme, the parameters are only visible to the service provider. Using a random structure also enables us to choose the size of the SPNs, which allows service providers to trade off model complexity, efficiency, and accuracy.
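The data-agnostic, recursive 2-partitioning of the RV set can be sketched as follows (a simplified illustration with hypothetical names; actual RAT-SPNs additionally attach sum nodes and leaves to the regions, which we omit):

```python
import random

def random_region_graph(rvs, depth, repetitions, rng):
    """Recursively 2-partition the RV set, independently of any data.
    Returns a list of (region, [sub_region, sub_region]) partitions."""
    partitions = []

    def split(region, d):
        if d == 0 or len(region) < 2:
            return
        shuffled = list(region)
        rng.shuffle(shuffled)                 # random, data-agnostic split
        mid = len(shuffled) // 2
        left, right = set(shuffled[:mid]), set(shuffled[mid:])
        partitions.append((region, [left, right]))
        split(left, d - 1)
        split(right, d - 1)

    for _ in range(repetitions):              # independent random repetitions
        split(set(rvs), depth)
    return partitions

rng = random.Random(0)
parts = random_region_graph(range(8), depth=2, repetitions=1, rng=rng)
# Every 2-partition is non-empty, non-overlapping, and covers its region.
```

Since the procedure only sees the number of RVs, the resulting structure reveals nothing about the training data, as argued above.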
Since the scope of a node is defined by the scopes of its children, it suffices to hide the leaves’ scopes. Concretely, for each leaf, we have to hide which RV $x_i$, $i \in \{1, \dots, n\}$, of the client’s $n$ RVs is selected. This corresponds to an oblivious array access in each leaf, where an array can be accessed without revealing the accessed location $i$. There exist efficient methods to do this based on homomorphic encryption [6] or on the secure evaluation of selection networks [23] via SMPC. A recent study of private decision tree evaluation [21] shows that selection networks outperform selection based on HE in both total and online runtime. Hence, CryptoSPN obliviously selects RVs via securely evaluating a selection network.
Similar to their usage in decision trees, we add just one selection network below the SPN instead of selecting one variable per leaf. That is, the variable input of the secure leaf computation (see below) is the outcome of the selection network, which selects the variables for the $L$ leaves of the SPN from the $n$ client inputs according to a server input denoting which leaf uses which RV. If $L \geq n$ (which we assume is true since RVs are usually used more than once), the complexity of such a selection network grows only quasi-linearly in $L$ [23], beating the trivial solution of $L$ parallel $n$-input multiplexers with complexity in $\mathcal{O}(L \cdot n)$. The setup and online communication then follow from the Yao costs of the selection network’s AND gates [21]. Hereby, one can hide the scope of any SPN, including ones learned through other methods [13] (although the topology is still leaked). We propose this approach to increase privacy in cases where leaking the topology is deemed acceptable, or where re-learning the structure of an already existing SPN is infeasible.
Because the secure computation of each floating point operation introduces overhead, our approach at the leaf level is to let the respective parties locally pre-compute as many terms as possible before inputting them into the secure SPN evaluation. For Gaussians in the log2 domain, the result can be evaluated in SMPC with just two multiplications and two additions based on the client’s RV input $x$ and server inputs $s_1$, $s_2$, and $s_3$ derived from mean $\mu$ and variance $\sigma^2$:

$$\log_2 \mathcal{N}(x; \mu, \sigma^2) = s_1 \cdot (x + s_2)^2 + s_3, \quad s_1 = -\tfrac{1}{2\sigma^2 \ln 2}, \; s_2 = -\mu, \; s_3 = -\log_2\!\left(\sigma\sqrt{2\pi}\right).$$

Thus, for each leaf, the SPN circuit requires $2\,\mathrm{AND}_b(\mathrm{MUL}) + 2\,\mathrm{AND}_b(\mathrm{ADD})$ AND gates, where $\mathrm{AND}_b(\mathrm{OP})$ denotes the number of AND gates for a $b$-bit floating point operation OP, cf. [11]. Additionally, for the entire SPN, $b \cdot n$ bits of client input and $3 b L$ bits of server input are added, where $L$ is the number of leaves and $n$ is the number of RVs.
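Assuming server-side constants of the shape described above (our naming; the server derives them locally from $\mu$ and $\sigma$), the pre-computation and the two-multiplication/two-addition evaluation can be sketched as:

```python
import math

def gaussian_leaf_server_inputs(mu, sigma):
    """Server-side precomputation for a Gaussian leaf in the log2 domain
    (constants named s1, s2, s3 for illustration)."""
    s1 = -1.0 / (2.0 * sigma ** 2 * math.log(2))
    s2 = -mu
    s3 = -math.log2(sigma * math.sqrt(2.0 * math.pi))
    return s1, s2, s3

def gaussian_leaf_secure_eval(x, s1, s2, s3):
    # Inside SMPC, this costs exactly two additions and two multiplications.
    t = x + s2          # addition
    return s1 * t * t + s3  # two multiplications, one addition

mu, sigma, x = 1.5, 0.8, 2.0
got = gaussian_leaf_secure_eval(x, *gaussian_leaf_server_inputs(mu, sigma))
expected = math.log2(math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
                     / (sigma * math.sqrt(2 * math.pi)))
```

The result matches the log2 of the plain Gaussian density, so no accuracy is lost by the server-side rewriting.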
Similarly, we can securely compute Poisson leaves with just one multiplication and two additions based on the client’s RV inputs $k$ and $\log_2(k!)$, and server inputs $s_1$ and $s_2$ derived from mean $\lambda$:

$$\log_2 \mathrm{Pois}(k; \lambda) = k \cdot s_1 + s_2 - \log_2(k!), \quad s_1 = \log_2 \lambda, \; s_2 = -\tfrac{\lambda}{\ln 2}.$$

This results in the leaf size $\mathrm{AND}_b(\mathrm{MUL}) + 2\,\mathrm{AND}_b(\mathrm{ADD})$ with input sizes of $2 b n$ client bits and $2 b L$ server bits for the entire SPN.
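The corresponding party-local pre-computation for a Poisson leaf can be sketched as follows (function names are our own; the client supplies both its count and the locally computed log-factorial term):

```python
import math

def poisson_client_inputs(k):
    # The client locally precomputes log2(k!) from its own count k.
    return float(k), math.log2(math.factorial(k))

def poisson_server_inputs(lam):
    # Server-side constants derived from the leaf's mean lambda.
    return math.log2(lam), -lam / math.log(2)

def poisson_leaf_secure_eval(k, log2_k_fact, s1, s2):
    # Inside SMPC: one multiplication and two additions
    # (the subtraction is an addition of a client-supplied term).
    return k * s1 + s2 - log2_k_fact

k, lam = 4, 2.5
got = poisson_leaf_secure_eval(*poisson_client_inputs(k),
                               *poisson_server_inputs(lam))
expected = math.log2(lam ** k * math.exp(-lam) / math.factorial(k))
```

Again, the secure evaluation reproduces the exact log2 Poisson probability.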
Bernoulli leaves consist of just one MUX gate, selecting one of two server inputs $s_1 = \log_2 p$ and $s_2 = \log_2(1 - p)$ based on the binary client RV input $x$:

$$\log_2 \mathrm{Ber}(x; p) = \begin{cases} s_1 & \text{if } x = 1,\\ s_2 & \text{if } x = 0.\end{cases}$$

Hence, they have a complexity of $b$ AND gates per leaf, yielding the costs $\mathrm{AND}_b(\mathrm{MUX})$ per leaf, $n$ client input bits, and $2 b L$ server input bits.
Due to the log2 domain, computations of a product node $N$ just introduce a complexity of $(|\mathrm{ch}(N)| - 1) \cdot \mathrm{AND}_b(\mathrm{ADD})$, where $\mathrm{ch}(N)$ denotes the children of node $N$. For the same reason, the complexity of a sum node $N$, which computes $\log_2 \sum_i 2^{\log_2 w_i + \log_2 p_i}$ over its children $i$, is:

$$|\mathrm{ch}(N)| \cdot \left(\mathrm{AND}_b(\mathrm{ADD}) + \mathrm{AND}_b(\mathrm{EXP2})\right) + (|\mathrm{ch}(N)| - 1) \cdot \mathrm{AND}_b(\mathrm{ADD}) + \mathrm{AND}_b(\mathrm{LOG2}).$$
Putting all of the presented building blocks together, we get the following amount of AND gates (the only relevant cost metric for Yao’s GC protocol) for an SPN with $n$ RVs and $L$ leaves of distribution $D$ that operates with $b$-bit precision and consists of a set of sum nodes $\mathbf{S}$ and product nodes $\mathbf{P}$, where $\mathrm{ch}(N)$ for $N \in \mathbf{S} \cup \mathbf{P}$ denotes the children of node $N$:

$$\mathrm{AND}_{\mathrm{SPN}} = L \cdot \mathrm{AND}_b(D) + \sum_{N \in \mathbf{S}} \mathrm{AND}_b^{\mathrm{sum}}(|\mathrm{ch}(N)|) + \sum_{N \in \mathbf{P}} (|\mathrm{ch}(N)| - 1) \cdot \mathrm{AND}_b(\mathrm{ADD}),$$

where $\mathrm{AND}_b(D)$ and $\mathrm{AND}_b^{\mathrm{sum}}(\cdot)$ denote the per-leaf and per-sum-node gate counts given above. In addition, we also have $|x_C|$ client input bits stemming from the RVs and $|x_S|$ server input bits stemming from the leaf parameters as well as the sum weights. Therefore, using Yao’s GC protocol, CryptoSPN has setup communication of approximately

$$2\kappa \cdot \mathrm{AND}_{\mathrm{SPN}} + \mathcal{O}(\kappa \cdot |x_C|) \text{ bits}$$

and online communication of approximately

$$\kappa \cdot \left(|x_C| + |x_S|\right) \text{ bits},$$

where $\kappa$ is the symmetric security parameter (e.g., $\kappa = 128$).
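These counts can be tallied with a small estimator like the following. Note that the per-operation AND-gate counts below are made-up placeholders for illustration only; the real $b$-bit IEEE 754 circuit sizes are given in [11]:

```python
def spn_and_gates(num_gauss_leaves, sum_fanouts, prod_fanouts, cost):
    """Total AND gates: Gaussian leaves (2 MUL + 2 ADD each), product nodes
    ((k-1) ADDs in the log2 domain), and sum nodes (log2-sum-exp2 style:
    per child one ADD and one EXP2, (k-1) ADDs to sum, one final LOG2)."""
    leaves = num_gauss_leaves * (2 * cost["MUL"] + 2 * cost["ADD"])
    prods = sum((k - 1) * cost["ADD"] for k in prod_fanouts)
    sums = sum(k * (cost["ADD"] + cost["EXP2"]) + (k - 1) * cost["ADD"]
               + cost["LOG2"] for k in sum_fanouts)
    return leaves + prods + sums

def yao_setup_bits(and_gates, kappa=128):
    # Half-gates garbling: 2*kappa bits of setup communication per AND gate.
    return 2 * kappa * and_gates

# Placeholder per-operation AND-gate counts for 32-bit floats (NOT the
# real numbers from [11]).
cost32 = {"ADD": 1820, "MUL": 3016, "EXP2": 9740, "LOG2": 11490}
gates = spn_and_gates(4, sum_fanouts=[2, 2], prod_fanouts=[2, 2], cost=cost32)
setup_gb = yao_setup_bits(gates) / 8 / 1e9   # rough setup traffic in GB
```

Such an estimator lets a service provider predict the setup traffic of a candidate RAT-SPN size before garbling anything.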
If one does not use RAT-SPNs but instead our scope-hiding private SPN evaluation, obliviously selecting the leaves’ RVs adds the setup and online communication of the selection network (cf. Section 3.1.2) on top of the respective per-leaf costs for Gaussian, Poisson, and Bernoulli leaves.
As we use RAT-SPNs, the underlying structure, size, and depth of the SPN are known to the client. However, this structure is randomly generated from hyper-parameters alone. Therefore, the structure is independent of the training data and leaks no private information. The number of random variables is known by both parties, as usual (e.g., [15, 20, 28, 30, 39, 40]). And while the values of the input variables are hidden, the output is not; it might reveal some information, but it is data that inherently has to be revealed.
The protocols we use in our implementation of CryptoSPN are provably secure in the semi-honest model [27]. In the studied setting, it is reasonable to assume the server is semi-honest, as reputable service providers are confined by regulations and potential audits. Furthermore, detected malicious behaviour would hurt their reputation, providing an economic incentive to behave honestly. However, these regulations and incentives do not exist for the client’s device, which can be arbitrarily modified by the client or harmful software.
Fortunately, CryptoSPN can easily be extended to provide security against malicious clients as it relies on Yao’s GC protocol. There, the only messages sent by the (potentially malicious) client are in the oblivious transfer. Thus, one just needs to instantiate a maliciously secure OT protocol to achieve security against malicious clients, which incurs only a negligible performance overhead [2].
We implemented CryptoSPN using the state-of-the-art SMPC framework ABY [12] with the floating point operation sub-circuits of [11] and the selection network circuit of [21]. ABY implements various SMPC protocols in C++ and provides APIs for the secure evaluation of supplied circuits within these protocols. It also supports single instruction, multiple data (SIMD) instructions, which allows CryptoSPN to batch-process multiple queries at the same time. Notably, like most other SMPC frameworks, ABY requires a very low-level circuit description of the function that is computed securely, making it hard for AI researchers and others without a background in cryptography to actually perform private ML inference. Motivated by this gap, we integrate CryptoSPN with SPFlow [33], an open-source Python library that provides an interface for SPN learning, manipulation, and inference. For users, CryptoSPN appears as another SPFlow export that enables private SPN inference. Specifically, CryptoSPN allows ML experts to easily transform an SPN in SPFlow into a privacy-preserving ABY program with just the SPN as input. The resulting ABY program can be compiled into an executable for simple deployment on the client and server side. CryptoSPN is available at https://encrypto.de/code/CryptoSPN.
We evaluate CryptoSPN on random SPNs trained with SPFlow for the standard datasets provided in [13], and on regular SPNs for nips, a count dataset from [32]. We evaluate models with both 32- and 64-bit floating point precision to study the trade-off between accuracy and efficiency. The experiments are performed on two machines with Intel Core i9-7960X CPUs. We use a symmetric security parameter of 128 bits, in line with current recommendations. The connection between both machines is restricted in bandwidth and round-trip time to simulate a realistic wide-area network (WAN) for a client-server setting.
Table 1: For each evaluated SPN (accidents, baudio, bbc, bnetflix, book, c20ng, cr52, cwebkb, dna, jester, kdd, kosarek, msnbc, msweb, plants, pumsb_star, tmovie, tretail, nltcs, and two regular SPNs for nips), the table reports the number of leaves, edges, and layers, together with setup time (s), setup communication (GB), online time (s), and online communication (MB) for both 32- and 64-bit precision.
Our benchmarks are given in Table 1. Compared to previous works focused on NNs, we evaluate a variety of datasets, which shows that CryptoSPN can easily transform any SPN into a privacy-preserving version. In addition to the theoretical analysis of Section 3.2, we also investigate RAT-SPNs of various sizes for the nltcs dataset of [13] to gain a practical sense of how different SPN parameters affect our runtime. Moreover, we use two regular SPNs trained for nips to see how hiding the scope (cf. Section 3.1.2) increases the runtime.
Generally, our results shown in Table 1 demonstrate that we achieve a tractable setup and highly efficient online performance for medium-sized SPNs. Specifically, the setup phase incurs costs on the order of minutes and gigabytes, while the online phase takes only a few seconds and megabytes. Though multiple seconds might seem like a significant slow-down in some cases, this is certainly justified in many scenarios where privacy demands outweigh the costs of privacy protection (such as legal requirements for medical diagnostics).
While no single parameter appears to be decisive for the runtimes, we observe that some parameters are significantly more influential than others:
The number of sum nodes has a significantly larger effect than the number of products or leaves, which is expected given the costly log2 and exp2 operations at sum nodes. However, since the absolute number of sums is still relatively small, the additional input weights do not noticeably affect online communication.
Though differences in the number of RVs, product nodes, leaves, and edges do influence the runtimes, the deviations have to be very large to have a noticeable effect. For instance, among the SPNs for accidents, baudio, and msweb, the SPN for msweb requires roughly twice the number of RVs and edges of the other two before a significant runtime deviation appears.
When looking at the SPNs for nltcs, the first three SPNs have roughly the same density, and their runtimes scale with their size. The last two SPNs, however, have a noticeably higher density at comparable size and result in much higher runtimes. Thus, density (especially the number of edges) is a much more significant parameter than plain network size.
Yet, depending on the SPN, the combined costs of several less significant parameters can outweigh those of a single significant one. This is in line with our theoretical analysis in Section 3.2: the circuit’s size depends on the number of children (with different costs for sums and products) as well as the number of RVs and leaves. The number of layers has no direct effect because the round complexity of Yao’s GC protocol is independent of the circuit depth. As for the regular SPNs for nips, one can observe that the effects of hiding RV assignments are insignificant compared to the overall performance.
Using 64-bit precision roughly doubles the costs of 32-bit precision, which is expected as the floating point sub-circuits are about twice the size [11]. Comparing the log-probabilities computed by CryptoSPN to a plain evaluation with SPFlow, we observe only a negligible RMSE for both 32- and 64-bit models. We stress that this insignificant loss in accuracy is not due to the cryptographic measures, but rather due to the more SMPC-friendly computation in the log2 domain.
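To sketch why the log2-domain computation is SMPC-friendly and numerically benign: a sum node over its children's log2-probabilities can be evaluated with a log2-sum-exp2 trick that avoids floating point underflow. The function below is an illustrative stand-in, not CryptoSPN's actual circuit implementation.

```python
import math

def log2_sum_node(weights, child_log2_probs):
    """Evaluate a sum node in the log2 domain:
    log2(sum_i w_i * 2^{l_i}), computed stably by factoring out max(l_i)."""
    m = max(child_log2_probs)
    acc = sum(w * 2.0 ** (l - m) for w, l in zip(weights, child_log2_probs))
    return m + math.log2(acc)

# Moderate values: matches the direct computation.
w, l = [0.6, 0.4], [-2.0, -4.0]
direct = math.log2(sum(wi * 2.0 ** li for wi, li in zip(w, l)))
assert math.isclose(log2_sum_node(w, l), direct)

# Tiny probabilities: the direct computation underflows (2**-1500 is 0.0
# in double precision), while the log2-domain version stays accurate.
print(log2_sum_node(w, [-1500.0, -1502.0]))  # close to -1500 + log2(0.7)
```

Product nodes become plain additions of log2-values, so the expensive exp2/log2 sub-circuits are only needed at sum nodes, matching the observation above that sums dominate the cost.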
Resolving privacy issues in ML applications has become a pressing challenge for researchers, not least due to recent legal regulations such as the GDPR. By combining efforts from both AI and applied cryptography research, we presented CryptoSPN, which successfully addresses this challenge for the evaluation of sum-product networks (SPNs), a model class that supports a wide variety of desired ML tasks. The protocols of CryptoSPN together with the tools developed for ML experts deliver efficient yet highly accurate SPN inference while providing unprecedented protection guarantees that even cover the network scope and structure. With our work serving as a foundation, future research can investigate further efficiency improvements (e.g., via quantization techniques appropriate for SPNs), hiding the structure of SPNs that cannot be re-trained, and private SPN learning.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 850990 PSOTI). It was co-funded by the Deutsche Forschungsgemeinschaft (DFG) within SFB 1119 CROSSING/236615297 and GRK 2050 Privacy & Trust/251805230, and by the BMBF and HMWK within ATHENE. KK also acknowledges the support of the Federal Ministry of Education and Research (BMBF), grant number 01IS18043B “MADESI”.
Alejandro Molina, Sriraam Natarajan, and Kristian Kersting, ‘Poisson sum-product networks: A deep architecture for tractable multivariate Poisson distributions’, in AAAI, 2017.
Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Xiaoting Shao, Martin Trapp, Kristian Kersting, and Zoubin Ghahramani, ‘Random sum-product networks: A simple and effective approach to probabilistic deep learning’, in UAI, 2019.
M. Sadegh Riazi, Mohammad Samragh, Hao Chen, Kim Laine, Kristin Lauter, and Farinaz Koushanfar, ‘XONN: XNOR-based oblivious deep neural network inference’, in USENIX Security, 2019.
Han Zhao, Mazen Melibari, and Pascal Poupart, ‘On the relationship between sum-product networks and Bayesian networks’, in ICML, 2015.