Encrypted statistical machine learning: new privacy preserving methods

27 August 2015 · Louis J. M. Aslett, et al.

We present two new statistical machine learning methods designed to learn on fully homomorphic encrypted (FHE) data. The introduction of FHE schemes following Gentry (2009) opens up the prospect of privacy preserving statistical machine learning analysis and modelling of encrypted data without compromising security constraints. We propose tailored algorithms for applying extremely random forests, involving a new cryptographic stochastic fraction estimator, and naïve Bayes, involving a semi-parametric model for the class decision boundary, and show how they can be used to learn and predict from encrypted data. We demonstrate that these techniques perform competitively on a variety of classification data sets and provide detailed information about the computational practicalities of these and other FHE methods.




1 Introduction

Privacy requirements around data can impede the uptake and application of statistical analysis and machine learning algorithms. Traditional cryptographic methods enable safe long-term storage of information, but when analysis is to be performed the data must first be decrypted. Rivest et al. (1978) first showed that it may be possible to design an encryption scheme supporting restricted mathematical computation without decrypting. However, it was not until Gentry (2009) that a scheme able to support theoretically arbitrary computation was proposed. Briefly, these so-called homomorphic encryption schemes allow certain mathematical operations, such as addition and multiplication, to be performed directly on the cipher texts (encrypted data), yielding encrypted results which upon decryption render the same results as if the operations had been performed on the plain texts (original data). These schemes are reviewed in a companion report to this paper (Aslett, Esperança and Holmes, 2015) in a manner accessible to statisticians and machine learners, with accompanying high level open source software in R to allow users to explore the various issues.[1]

[1] In this report we will assume that the reader is familiar with the basic concepts of fully homomorphic encryption and some of the practical computational constraints, as overviewed in Aslett, Esperança and Holmes (2015) and Gentry (2010).

Privacy constraints enter into many areas of modern data analysis, from biobanks and medical data to the impending wave of ‘wearable devices’ such as smart watches, which generate large amounts of personal biomedical data (Anderlik and Rothstein, 2001; Kaufman et al., 2009; Angrist, 2013; Brenner, 2013; Ginsburg, 2014). Moreover, with the advent of cloud computing many data owners are looking to outsource storage and computing, but, particularly with non-centralised services, there may be concerns about security during data analysis (Liu et al., 2011). Indeed, encryption may even be desirable on internal network-connected systems, to provide an additional layer of security.

Although homomorphic encryption in theory promises arbitrary computation, the practical constraints mean that this is presently out of reach for many algorithms (Aslett, Esperança and Holmes, 2015). This motivates interest in tailored machine learning methods which can be practically applied. This paper contributes two such methods, developing FHE-amenable approximations to extremely random forests and to naïve Bayes, such that both learning and prediction can be performed encrypted, something which is not possible with the original version of either technique.

We are not the first to explore secure approaches to machine learning on encrypted data. Graepel et al. (2012) implemented two binary classification algorithms for homomorphically encrypted data: Linear Means and Fisher’s Linear Discriminant. They make scaling adjustments which preserve the results, but leave the fundamental methodology unchanged. Bost et al. (2014) developed a two-party computation framework and used a mix of different partially and fully homomorphic encryption schemes, which allows them to use machine learning techniques based on hyperplane decisions, naïve Bayes and binary decision trees; again the fundamental methodologies are unchanged, but here substantial communication between two (‘honest but curious’) parties is required.

These represent the two existing approaches to working within the constraints imposed by homomorphic encryption: either use existing methods amenable to homomorphic computation, or invoke multi-party methods. Here, we consider tailored approximations to two statistical machine learning models which make them amenable to homomorphic encryption, so that all stages of fitting and prediction can be computed encrypted. Thus, herein we contribute two machine learning algorithms tailored to the framework of fully homomorphic encryption and provide an R package implementing them (Aslett and Esperança, 2015). These techniques do not require multi-party communication.

Aside from classification techniques, other privacy preserving statistical methods have been proposed in the literature, such as linear regression (Wu and Haven, 2012) and predictive machine learning using pre-trained models (e.g., logistic regression; Bos et al., 2014).

In Section 2 a brief recap of homomorphic encryption and consequences for data representation is presented, with the unfamiliar reader directed to Aslett, Esperança and Holmes (2015) for a fuller review. Section 3 contains a novel implementation of extremely random forests (Geurts et al., 2006; Cutler and Zhao, 2001) including a stochastic approximation to tree voting. In Section 4 a novel semi-parametric naïve Bayes algorithm is developed that utilises logistic regression to define the decision boundaries. Section 5 details empirical results of classification performance on a variety of tasks taken from the UCI machine learning repository, as well as demonstrating the practicality with performance metrics from fitting a completely random forest using the Amazon EC2 cloud platform. Section 6 offers a discussion and conclusions.

2 Homomorphic encryption and data representation

We shall adopt a public key encryption scheme having public key k_p and secret key k_s, equipped with algorithms Enc and Dec which encrypt and decrypt messages respectively. Encryption maps a message from message space M to an element of cipher text space C. A scheme is then said to be homomorphic for some operations ∘ acting in message space (such as addition or multiplication) if there are corresponding operations ⋄ acting in cipher text space satisfying, for all messages m₁, m₂, the property Dec(Enc(m₁) ⋄ Enc(m₂)) = m₁ ∘ m₂.

A scheme is fully homomorphic if it is homomorphic for both addition and multiplication. We shall consider herein the particular homomorphic encryption scheme of Fan and Vercauteren (2012), a high performance and easy to use implementation of which is available in R (Aslett, 2014a), and assume that the reader is familiar with the basic principles of this approach (Aslett, Esperança and Holmes, 2015).

2.1 Practical limitations

Although FHE schemes exist, it is worth briefly recalling the practical constraints in implementing arbitrary algorithms, as they impact and motivate the tailored developments presented in this paper. Some of the current practical implementation issues include:

Message space: Real value encryption lies outside existing FHE schemes, so measurements must typically be stored as integers. Given an integer measurement, the choice of the corresponding message space representation will have consequences for computational cost and memory requirements. For example, an integer could be represented directly in an integer message space, or in a binary message space, which involves writing the value in base 2 and encrypting each bit separately.

The major consequence is that performing simple operations such as addition and multiplication under the binary representation involves manual binary arithmetic, which is much more expensive than the single operation involved when the natural integer representation is used. For instance, adding two 32-bit values in a binary representation would involve over 256 fundamental operations using standard full binary adder logic. Consequently, we do not consider FHE schemes where binary representation is the only option, and instead require an integer message space for the new techniques to be presented in Sections 3 and 4.

However, although the integer representation is more efficient computationally, it still does not naturally accommodate the kinds of data commonly encountered in statistics and machine learning applications, so that even representing data requires careful consideration.

Cipher text size: existing FHE schemes result in substantial inflation in the size of the data. For example, in Fan and Vercauteren (2012) the cipher text space is a Cartesian product of high-degree polynomial rings with coefficients belonging to a large integer ring. When using the default parameter values in that paper, each cipher text comprises two polynomials of high degree, with each coefficient being a 128-bit integer. This means that 1MB of message data can grow to approximately 16.4GB of encrypted data, a more than 16,000-fold increase in storage size.

Computational speed: due in part to the increased data size, but also to the complex cipher text spaces, the cost of performing operations is high. For example, in Fan and Vercauteren (2012) arithmetic on simple integer messages is achieved by performing complex polynomial arithmetic in the cipher text space.

To make this concrete, imagine adding the numbers 2 and 3 to produce 5. Basic parameter choices for the Fan and Vercauteren (2012) encryption scheme mean that not only does this simple addition involve adding high-degree polynomials, but the 128-bit integer coefficients of those polynomials are too large to be natively represented or operated on by modern CPUs.

Indeed, the theoretical latency for integer addition on a modern CPU is 1 clock cycle, so that 2+3 executes in under a nanosecond. By contrast, the optimised C++ implementation of Fan and Vercauteren (2012) in Aslett (2014a) takes around 3 milliseconds to perform the same computation encrypted, more than six orders of magnitude slower.

Division and comparison: existing integer message space schemes cannot perform encrypted division and are unable to evaluate binary comparison operations such as <, > and =, so mathematical operations are currently restricted to addition and multiplication.

Cryptographic noise: the semantic security necessary in existing schemes involves injection of some noise into the cipher texts, which grows as operations are performed. Typically the noise growth under multiplication can be significant so that after a certain depth of multiplications the cipher text must be ‘refreshed’. This refresh step is usually computationally expensive, so that in practice the parameters of the encryption scheme are usually chosen a priori to ensure that all necessary operations for the algorithm to be applied can be performed without any refresh being required.

Thus, the restriction to integers, addition and multiplication, combined with a limit on noise growth emanating from multiplication operations, means that in reality the constraints of homomorphic encryption allow only moderate degree polynomials of integers to be computed encrypted. Even so, the speed of evaluation will be relatively slow compared to the unencrypted counterparts, as demonstrated in our examples in Section 5.

2.2 Data representation

One consequence of the above is that we need to transform data to make it amenable to FHE analysis. We show that certain transformations also allow for limited forms of computation involving comparison operations such as equality and range membership. We consider two simple approaches below.

2.2.1 Quantisation for real values

Given that many current homomorphic schemes work in the space of integers (Aslett, Esperança and Holmes, 2015), it may be necessary to make approximations when manipulating real-valued variables. Graepel et al. (2012) proposed an approximation method where real values are first approximated by rationals (a ratio of two integers) and the denominators then cleared, by multiplying the entire dataset by a pre-specified integer and rounding the results to the nearest integer.

One suggestion here is more straightforward: choose a desired level of accuracy, say φ, representing the number of decimal places to be retained; then multiply the data by 10^φ and round to the nearest integer. This avoids the need for rational approximations and the double approximation caused by the denominator-clearing step. More precisely, for a given precision φ, a real value x is approximated by [x · 10^φ], where [·] denotes rounding to the nearest integer. This transformation adequately represents real values in an integer space, in the sense that relative distances are approximately maintained.

For data sets of finite precision (the typical case in real applications), no loss of precision is necessary if φ is selected to equal the accuracy (i.e., number of decimal places) of the most accurate value in the data set. Otherwise, in cases where transformations are required (e.g., logarithms), precision is under the user’s control. Note that the parameter φ regulates the accuracy of the input (data), not that of the output (result). To the extent that the output accuracy depends on the input accuracy and also on the complexity of the algorithm, the choice of φ should take both factors into consideration.

In particular, when evaluating homogeneous polynomial expressions, no intermediate scaling is required, since every term will have scaling 10^(dφ), where d is the degree of the homogeneous polynomial. Where scaling is required, it will be known a priori based on the algorithm and is not data dependent.
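As an illustrative sketch (plain Python, with φ written as `phi`; this is not the paper's R implementation), the quantisation and the homogeneous-polynomial descaling work as follows:

```python
def quantise(x, phi):
    """Approximate a real value by an integer, retaining phi decimal places."""
    return round(x * 10 ** phi)

# Relative distances are approximately maintained:
a, b = quantise(1.50, 2), quantise(2.25, 2)     # 150, 225

# A homogeneous polynomial term of degree d in quantised inputs carries the
# known scale factor 10^(d * phi), so descaling can wait until after decryption:
term = a * b                                     # degree d = 2
descaled = term / 10 ** (2 * 2)                  # recovers 1.5 * 2.25
```

Because the scale factor depends only on φ and the polynomial degree, not on the data, it can be removed once, after decryption.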

2.2.2 Quantisation to categorical or ordinal

The approach above encodes real values by an integer representation, but this increases substantially the number of multiplication operations involved. Instead, by transforming continuous measurement values into categorical or ordinal ones via a quantisation procedure, it is possible to dispense with the need to track appropriate scaling in the algorithm. This simple solution has not, as far as we are aware, been taken in the applied cryptography literature to date. Moreover, this quantisation procedure allows some computations involving comparison operations (equality and range membership) to be performed, as detailed below.

Let X be a design matrix, with element x_ij recording the jth predictor variable for the ith observation, which may be continuous, categorical or ordinal. In each case, consider a partition {S_1, …, S_K} of the support of the variable to be quantised: the S_k are pairwise disjoint and their union is the whole support. There are at least two routes one may take to quantisation:

  1. x_ij is encoded as an indicator vector of length K, with a 1 in the position of the partition element containing x_ij and 0 elsewhere. For example, a natural choice for a continuous variable is the partition induced by its quintiles.

  2. If the partition elements are additionally ordered, so that every value in S_k is smaller than every value in S_(k+1), then another option is to replace x_ij by the ordinal index of the partition element containing it, so that the support becomes {1, …, K}.

Both approaches transform continuous, categorical or ordinal values to an encoding which can be represented directly in the message space of homomorphic schemes. Note that for categorical or discrete variables in the design matrix, these procedures can be exact, whilst for continuous ones they may introduce an approximation.

Thus, for example, a given design matrix maps to two possible representations corresponding to the two procedures above.
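A minimal sketch of the two encodings in plain Python (representing the partition by its interior cut points is our own illustrative choice):

```python
import bisect

def encode_indicator(x, cuts):
    """Method 1: a 0/1 indicator over the K = len(cuts) + 1 partition elements."""
    v = [0] * (len(cuts) + 1)
    v[bisect.bisect_left(cuts, x)] = 1    # index of the element containing x
    return v

def encode_ordinal(x, cuts):
    """Method 2: the ordinal index (1..K) of the partition element containing x."""
    return bisect.bisect_left(cuts, x) + 1
```

Categorical variables can be encoded exactly by treating each category as its own partition element; for continuous variables the cut points might be, say, empirical quintiles.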

Recall from §2.1 and Aslett, Esperança and Holmes (2015) that comparisons of equality cannot usually be made on encrypted content. However, Method 1 can be seen to enable encrypted indicators for simple tests of equality, since comparisons simply become inner products: for two indicator vectors u and v, the inner product Σ_k u_k v_k equals 1 when they encode the same partition element; otherwise the sum is zero. In particular, note that this is a homogeneous polynomial of degree 2, requiring only 1 multiplication depth in the analysis.

Likewise, it is possible to evaluate an encrypted indicator for whether a value lies in a given range, because summing the entries of the indicator vector corresponding to the partition elements in that range gives 1 when the value lies in the range; otherwise the sum is zero.

Conversely, Method 2 may be preferred in linear modelling situations, where a single coefficient then represents the change in response for an incremental change in the quantised encoding, whereas in a linear modelling context Method 1 results in separate estimates of effect for each category of the encoding.

Note that this is not a binary representation of the kind critiqued in §2.1: here the values are binary indicators given an integer representation in an integer message space. Counting the number of indicators, for example, is therefore simple addition, as opposed to the binary arithmetic described earlier.
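The two identities can be sketched using only additions and multiplications, exactly the operations an FHE scheme supplies (plain Python on plain values; under encryption each `+` and `*` would act on cipher texts):

```python
def eq_indicator(u, v):
    """Equality test of two indicator encodings via an inner product:
    1 iff they encode the same partition element (multiplication depth 1)."""
    return sum(ui * vi for ui, vi in zip(u, v))

def range_indicator(u, lo, hi):
    """Indicator that the encoded value lies in partition elements lo..hi:
    a pure sum, needing no multiplications at all."""
    return sum(u[lo:hi + 1])

x = [0, 1, 0, 0]                       # value in partition element 1
assert eq_indicator(x, [0, 1, 0, 0]) == 1
assert eq_indicator(x, [0, 0, 1, 0]) == 0
assert range_indicator(x, 0, 1) == 1
```

These indicator products are the building blocks used for tree branches in Section 3.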

In the next two sections we present the tailored statistical machine learning techniques developed specifically with the constraints of homomorphic encryption in mind.

3 Extremely Random Forests

Extremely or perfectly random forests (Geurts et al., 2006; Cutler and Zhao, 2001) can exhibit competitive classification performance against their more traditional counterpart (Breiman, 2001). Forest methods combine many decision trees in an ensemble classifier and empirically often perform well on complex non-linear classification problems. Traditional random forests involve extensive comparison operations and evaluation of split quality at each level, operations which are either prohibitive or impossible to compute homomorphically in current schemes. However, we show that a tailored version of extremely or perfectly random forests can be computed fully encrypted, where both fitting and prediction are possible, with all operations performed in cipher text space. Moreover, we highlight that the completely random nature of the method allows for incremental learning and divide-and-conquer learning on large data, so that massive parallelism can be employed to ameliorate the high costs of encrypted computation. In particular, this is demonstrated in a real multi-core cluster example in §5.4.

3.1 Completely Random Forests (CRF)

To begin, we assume the training data are encoded as in Method 1 (§2.2.2) so that the comparison identities in (1) and (2) can be used. In overview, the most basic form of the proposed algorithm then proceeds as follows:

  1. Predictor variables at each level in a tree are chosen uniformly (“completely”) at random from a subset of the full predictor set. Additionally, the split points are chosen uniformly (“completely”) at random from the set of potential split points. Identity (2) then provides an indicator variable for the branch in which an observation lies, so that a product of such indicators provides an indicator for a full branch of a decision tree. Then (1) enables the pseudo-comparison involved in counting how many observations of a given class are in each leaf of the tree.

  2. Step 1 is repeated for each tree in the forest independently, using a random subset of predictors per tree, so that many such trees are grown. Each observation casts one vote per tree, according to the terminal leaf and class to which it belongs. Note that Step 2 can be performed in parallel as the trees are grown independently of one another.

  3. At prediction, the same identities as in Step 1 can be used to create an indicator which picks out the appropriate vote from each tree, for each class.

The detailed algorithm is given in Appendix A.
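The steps above can be sketched on unencrypted indicator-encoded data as follows (plain Python; the function and variable names are our own, and class labels are assumed one-hot encoded so that vote counting uses only additions, subtractions and multiplications, mirroring what would run on cipher texts):

```python
import random
from itertools import product

def grow_tree(n_vars, n_bins, depth):
    """Choose a (variable, split) pair per level, completely blind to the data."""
    return [(random.randrange(n_vars), random.randrange(1, n_bins))
            for _ in range(depth)]

def branch_indicator(x, tree, path):
    """Product of per-level range indicators: 1 iff observation x follows path.
    x[j] is the one-hot encoding of variable j; only +, - and * are used."""
    ind = 1
    for (j, split), go_low in zip(tree, path):
        low = sum(x[j][:split])               # 1 iff value falls below the split
        ind *= low if go_low else 1 - low
    return ind

def leaf_counts(data, labels, tree, n_classes):
    """Per-leaf, per-class vote counts as sums of indicator products."""
    depth = len(tree)
    return {path: [sum(branch_indicator(x, tree, path) * y[c]
                       for x, y in zip(data, labels))
                   for c in range(n_classes)]
            for path in product([True, False], repeat=depth)}
```

Because each count is a sum of indicator products, counts from different trees, or from different shards of the data, combine by simple addition.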

This algorithm is referred to as a ‘completely random forest’ since it takes the random growth of trees to a logical extreme with tree construction performed completely blindfold from the data. This is in contrast to, for example, extremely random forests (Geurts et al., 2006) where optimisation takes place over a random selection of splits and variables, and tree growth can terminate upon observing node purity or underpopulation. It is also different to perfectly random tree ensembles (Cutler and Zhao, 2001), where random split points are constructed between observations known to belong to different classes. Neither of those approaches can be directly implemented within the constraints of fully homomorphic encryption.

The model returns an encrypted prediction as a count of the votes[2] for each class category in the message space, given encrypted training data and an encrypted test prediction point. The user decrypts the counts using the private encryption key and forms a predictive empirical ‘probability’ for each class as its share of the total votes.

[2] The vote is the total number of training samples of a category lying in the same leaf node as the prediction point, across the trees.
3.2 Cryptographic stochastic fraction estimate

In conventional forest algorithms each tree gets a single prediction ‘vote’ regardless of the number of training samples that were present in a leaf node for prediction. This is in contrast to the above, where due to encryption constraints the algorithm simply counts the total number of training samples from each category falling in the leaf node of the prediction point, summed across all trees. The difficulty in matching to convention is that converting the number of training samples in a category to the vote of the most probable category for each tree is not possible under current FHE schemes, and would need to be done through decryption.

To address this we propose a method of making an asymptotically consistent stochastic approximation to enable voting from each tree. This is done by exploiting the fact that the adjustment required can be approximated via an appropriate encrypted Bernoulli process by sampling with replacement. This stochastic adjustment can be computed entirely encrypted.

There are several approaches to estimating class probabilities from an ensemble of trees. Perhaps most common is the average vote forest, which is not possible here because comparisons between class votes to establish the maximum vote in a leaf cannot be computed. An alternative is the relative class frequencies approach, which also appears to be beyond reach encrypted because of the need to perform division and to represent non-integer values. An obvious solution to the representation issue, as already discussed in §2.2, is to scale the relative class frequency by the training set size and round to the nearest integer, giving a quantity which can be represented encrypted, albeit seemingly still not computed due to the division.

Note that the scaled denominator of this fraction is at least 1, so that its reciprocal lies in (0, 1] and can be treated as a probability. In other words, one can view an unbiased stochastic approximation to the fraction we require as a draw from a Geometric distribution with that reciprocal as its success probability.
This transforms the problem from performing division to performing encrypted random number generation, where the distribution parameter involves division, which initially may seem worse. However, observe that each leaf count arises from summing a binary vector of indicator products over the training observations. Consequently, exchanging the order of summation, the denominator can itself be treated as the sum of a single binary vector whose length is the number of training observations. In other words, this vector (one per tree and leaf) is an encrypted sequence of 0s and 1s containing precisely the correct number of 1s, such that blind random sampling with replacement from its elements produces an (encrypted) Bernoulli process with the required success probability. Hereinafter the dependence on tree and leaf is left implicit.

Thus, the objective is to sample a Geometric random variable encrypted, but at this stage it is only possible to generate the encrypted Bernoulli process underlying the desired Geometric distribution. This finally shifts the problem to that of counting the number of leading zeros in an encrypted Bernoulli process: in other words, resample with replacement a vector of fixed length and, without decrypting, establish the number of leading zeros.

To achieve this it is possible to draw on an algorithm used in CPU hardware to determine the number of leading zeros in an IEEE floating point number, an operation required when renormalising the mantissa (the coefficient in scientific notation). Let b be the resampled vector of length n_b, and assume n_b is a power of 2 (this maximises the estimation accuracy for a fixed number of multiplications):

  1. For i = 0, 1, …, log₂(n_b) − 1: set b_j ← b_j OR b_(j − 2^i) for every j ≥ 2^i, where for binary values the OR can be computed encrypted as u OR v = u + v − uv.

  2. The number of leading zeros is n_b minus the sum of the elements of b.

In summary, this corresponds to increasing power-of-2 bit-shifts of the vector which are then OR’d with itself, all of which can be computed encrypted.
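A sketch of the leading-zero count on a plain 0/1 vector (Python; under encryption each OR would be evaluated as the degree-2 polynomial u + v − uv on cipher texts):

```python
def count_leading_zeros(bits):
    """Count leading zeros of a 0/1 vector whose length is a power of 2.
    Power-of-2 shifts propagate the first 1 to every later slot, so the
    number of leading zeros is the length minus the final sum."""
    n = len(bits)
    v = list(bits)
    shift = 1
    while shift < n:
        # OR the vector with itself shifted by `shift`: u OR v = u + v - u*v
        v = [v[j] if j < shift else v[j] + v[j - shift] - v[j] * v[j - shift]
             for j in range(n)]
        shift *= 2
    return n - sum(v)
```

Each round costs one multiplication per slot, so the multiplicative depth is the base-2 logarithm of the vector length.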

Thus, upon termination of the above algorithm, the leading-zero count yields an approximately unbiased (for large enough n_b) encrypted estimator of the desired fraction. It is important to note that the multiplicative depth required for this algorithm is log₂(n_b), and recall that multiplicative depth is restricted under current FHE schemes (Aslett, Esperança and Holmes, 2015) if expensive cipher text refreshing is to be avoided. Hence, in practice the resample size n_b will typically be restricted to a small value even for large datasets. However, this is desirable: it enables some shrinkage to take place by placing an upper bound of n_b on the fraction estimate. Terminal leaves will then in expectation receive the correct adjustment whenever at least a 1/n_b share of the training data lies in that decision path of the tree; leaves with fewer observations will undergo shrinkage in expectation.

Note in particular that CRFs are inherently discrete and the probability of regrowing exactly the same tree twice is nonzero, so that asymptotically the same tree will be regrown infinitely often with probability 1. If the encrypted stochastic fraction is recomputed in each new tree then asymptotically the correct adjustment will be made.

3.3 Further implementation issues

We highlight a couple of additional implementation issues that are important for the practical machine learning of completely random forests on FHE data.

3.3.1 Calibration

The first point to note is that there is no calibration of the trees, or indeed of the forest. Consequently there should be no presumption that the class with the largest vote count provides the “best” prediction under unequal misclassification loss. As such, the traditional training and testing setup is crucially important in order to select optimal decision boundaries according to whatever criteria are relevant to the subject matter of the problem, such as false positive and false negative rates. This is the only step which must be performed unencrypted: the responses of the test set must be visible, though note that the predictors need not be, since step 3 for prediction is computed homomorphically.

3.3.2 Incremental and parallel computation

One key advantage of CRFs is that learning is incremental as new data become available: there is no need to recompute the entire fit because there is no optimisation step, so once used, encrypted data can be archived at lower cost; moreover, adding new observations has linear growth in computational cost.

Indeed, the whole algorithm is embarrassingly parallel in both the number of trees and the number of observations. One can compute the trees independently, and the data can be split into shards whereby the vote counts are computed for each shard separately, using the same seed in the random number generator for growing trees, and then simply combined additively afterwards (a comparatively cheap operation). This is highlighted in a real-world large scale example in §5.4.
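Because the fitted object is just a table of vote counts, shard-wise results combine by elementwise addition (a sketch in plain Python; the names are illustrative, and the same addition could be performed on cipher texts):

```python
def combine_counts(count_maps):
    """Merge per-shard leaf/class vote counts by elementwise addition,
    a cheap operation that is also available homomorphically."""
    merged = {}
    for counts in count_maps:
        for leaf, votes in counts.items():
            acc = merged.get(leaf, [0] * len(votes))
            merged[leaf] = [a + b for a, b in zip(acc, votes)]
    return merged

shard_a = {"leaf0": [3, 1], "leaf1": [0, 2]}
shard_b = {"leaf0": [1, 1], "leaf1": [4, 0]}
assert combine_counts([shard_a, shard_b]) == {"leaf0": [4, 2], "leaf1": [4, 2]}
```

Sharing the random number generator seed across shards ensures every shard grows identical trees, so their leaf tables align and the merge is exact.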

3.3.3 Theoretical parameter requirements for Fan and Vercauteren (2012)

For a discussion of practical requirements for the parameter selection in homomorphic encryption schemes, see Appendix B.

The CRF fitting, prediction and forest combination are all implemented in the open source R package EncryptedStats (Aslett and Esperança, 2015) and can be run on unencrypted data as well as data encrypted using the HomomorphicEncryption (Aslett, 2014a) package. These are briefly described in Appendix D.

The next section introduces the second novel method tailored for FHE.

4 Naïve Bayes Classifiers

The Naïve Bayes (NB) classifier is a popular generative classification algorithm that models the joint probability of predictor variables independently for each response class, and then uses Bayes rule and an independence assumption among predictors to construct a simple classifier (Ng and Jordan, 2002; Hastie et al., 2009, p.210). The advantages and disadvantages have been extensively described in the literature (see, for example, Rennie et al., 2003). Although the independence assumption underlying NB is often violated, the linear growth in complexity for large numbers of predictors and the simple closed-form expressions for the decision rules make the approach attractive in “big-data” situations. Moreover, as highlighted by Domingos and Pazzani (1997), there is an important distinction between classification accuracy (predicting the correct class) and accurately estimating the class probability, and hence NB can perform well in classification error rate even when the independence assumption is violated by a wide margin. Essentially, although it produces biased probability estimates, this does not necessarily translate into a high classification error (Hand and Yu, 2001). Consequently, NB remains a well established and popular method.

The Naïve Bayes framework

Consider the binary classification problem with a set of d predictors x = (x_1, …, x_d) and binary response y ∈ {0, 1}. The NB classifier uses Bayes’ theorem for prediction, coupled with an independence assumption, p(x | y) = ∏_j p(x_j | y), which embodies a compromise between accuracy and tractability, to obtain a conditional class probability. This allows NB to separately model the conditional distributions of the predictor variables, p(x_j | y), and then construct the prediction probability via Bayes’ theorem.
The most popular forms for the distributions are multinomial for categorical predictors and Gaussian for continuous predictors. As shown in Appendix C, it is possible to work with multinomial distributions directly in cipher text space, albeit at a non-trivial multiplicative depth, but Gaussian distributions lie outside the scope of FHE. In the next subsection we propose a tailored semi-parametric NB model that is amenable to cipher text computation. Crucially, this novel method scales to an arbitrary number of predictors at no additional multiplicative depth, making it well suited to encrypted computation.

4.1 Semi-parametric Naïve Bayes

The NB classifier solves the classification task using a generative approach, i.e., by modelling the distribution of the predictors (Ng and Jordan, 2002). However, distributions such as the Gaussian cannot be directly implemented within an FHE scheme, as they involve division and exponentiation operators as well as continuous values. Here, we show that it is possible to model the decision boundary between the two response classes more explicitly, without a parametric model for the distributions of the predictors, while still remaining in the NB framework. As will become clear, this corresponds to a discriminative approach to classification where the decision boundary of the conditional class probabilities is modelled semi-parametrically.

To begin, note that the expression for the log-odds prediction from NB can be rearranged to give


Using this identity, we now propose to model the decision boundary directly using the linear logistic form, rather than parameterise via a distribution function. That is we assume,


where, in this work, is taken to be a linear predictor of the form . The independence structure means that each term can be optimised independently by way of an approximation to logistic regression, amenable to homomorphic computation (presented below; §4.2), since the standard iteratively reweighted least squares fitting procedure is not computable under the restrictions of homomorphic encryption. Optimisation of is done independently of ,


The estimated log-odds are then


or, equivalently, in terms of conditional class probabilities


Hence equation (11) can be computed after decryption from the factors (numerator and denominator) that comprise it, to form the conditional class predictions.

This defines a semi-parametric Naïve Bayes (SNB) classification model that assumes a logistic form for the decision boundary in each predictor variable, and where each logistic regression involves at most two parameters. As we will show in the next section, in this setting there is a suitable approximation to the maximum likelihood estimates which are traditionally computed using the iteratively reweighted least squares algorithm.

4.2 Logistic Regression

The SNB algorithm proposed in the previous section requires a homomorphic implementation of simple logistic regression involving an intercept and a single slope parameter. In this section an approximation to logistic regression based on the first iteration of the Iteratively Reweighted Least Squares (IRLS) algorithm is proposed and some of its theoretical properties analysed. Apart from its use in SNB, the approximation to logistic regression described in this section also stands on its own as a classification method amenable to homomorphic computation.

4.2.1 First step of Iteratively Reweighted Least Squares

Optimisation in logistic regression is typically achieved via IRLS based on Newton–Raphson iterations. Starting from an initial guess , the first step involves updating the auxiliary variables


where denotes the th row of . By starting with an initial value the initialisation step of the algorithm is simplified: for all we have , , and . In the second step, the parameter estimates of are updated using generalised least squares


and these two steps are repeated until convergence is achieved.

In what follows, only the first-iteration update of is considered, because it provides an approximation to logistic regression and, most importantly, one which is computationally feasible under homomorphic encryption; this will be termed the one-step approximation. Note that implementation of the full IRLS algorithm is infeasible under FHE because the weights cannot be updated, as this requires the evaluation of non-polynomial functions.
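At the zero starting value the IRLS weights and working response are constant, so the one-step update reduces to a single least-squares solve. The following NumPy sketch illustrates this; the paper's implementation is in the R package EncryptedStats, and this Python version with its own variable names is purely illustrative.

```python
import numpy as np

def one_step_logistic(X, y):
    """One IRLS (Newton-Raphson) step for logistic regression from beta = 0.

    At beta = 0 every fitted probability is 1/2, so the IRLS weights are the
    constant 1/4 and the working response is z = 4*(y - 1/2).  The update is
    then a single least-squares solve whose numerator and denominator need
    only additions and multiplications; under FHE the final division/solve
    would be deferred to the client after decryption.
    """
    z = 4.0 * (y - 0.5)                  # working response at beta = 0
    return np.linalg.solve(X.T @ X, X.T @ z)

# Simple logistic regression: intercept plus one slope.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
prob = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
y = (rng.random(500) < prob).astype(float)
X = np.column_stack([np.ones(500), x])
beta = one_step_logistic(X, y)           # shrunk approximation to the truth
```

Because the weights never need updating, no non-polynomial function is evaluated: only the final solve requires division, and that can happen client-side after decryption.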

The particular form of these (paired) one-step approximating equations for an intercept and a single slope parameter is shown in the following section. We show an additional simplification (so-called unpaired estimates) which improves encrypted computational characteristics further at the cost of greater approximation.

4.2.2 Paired, one-step approximation to simple logistic regression

In the SNB model the conditional log-odds are optimised for each variable separately, following the independence assumption. In this case the design matrix contains a single column of 1’s for the intercept and a single predictor variable column, and the first step of IRLS leads to


and where


To use the one-step approximation in combination with the SNB classifier detailed above (§4.1), the quantity — required for the classification of a new observation — can then be estimated as , where .

This corresponds to a standard, paired optimisation strategy, that is, to optimise intercept and slope jointly, using an approximation targeting


An alternative to this approach would be to optimise intercept and slope independently (i.e., in an unpaired fashion), that is, targeting


in which case all take the same value and, therefore, this is equivalent to estimating all independently and including a global intercept, for some .

We will see that this is computationally appealing due to the simplified estimating equations when using the same one-step approximation.

4.2.3 Unpaired, one-step approximation to simple logistic regression

In the case of unpaired estimation (or in the absence of an intercept term) the estimating equations for and are simpler. To distinguish between unpaired and paired estimates we denote the unpaired by and , respectively. All have a common form


while have the form


Note that this unpaired formulation also arises in the case of centred predictors, , so that the unpaired approach is completely equivalent to the paired approach and introduces no additional approximation if it is possible for the data to be centred prior to encryption. Note that this is trivially achievable for ordinal quintile data by representing each quintile by , rather than . Consequently, where it is possible to centre the data it should be done in order to take advantage of these computational benefits at no approximation cost.
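The exact equivalence of the unpaired and paired slope estimates under centring can be checked numerically. The sketch below is illustrative Python, not the paper's R code; the constant-weight one-step forms are written directly from the IRLS step at zero.

```python
import numpy as np

def unpaired_slope(x, y):
    # Unpaired one-step estimate (no intercept): numerator and denominator
    # are plain sums of products, so each is FHE-computable on its own.
    z = 4.0 * (y - 0.5)
    return (x @ z) / (x @ x)

def paired_slope(x, y):
    # Paired one-step estimate: intercept and slope solved jointly.
    z = 4.0 * (y - 0.5)
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ X, X.T @ z)[1]

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, size=200)        # deliberately not centred
y = (rng.random(200) < 0.5).astype(float)

xc = x - x.mean()                        # centre prior to "encryption"
# With centred predictors the unpaired slope matches the paired one exactly.
assert np.isclose(unpaired_slope(xc, y), paired_slope(xc, y))
```

The unpaired form involves strictly fewer encrypted multiplications, which is why centring before encryption is worthwhile whenever the data holder can do it.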

Furthermore, when the data are represented as in section 2.2.2, this can be rewritten as follows. Define the following auxiliary variables


In words, counts the number of observations in the th bin of the th predictor —or, equivalently, the number of elements of (the th column of ) which are equal to —and for which the corresponding response is equal to , for . In this case, Equation (21) becomes


so that for a binary predictor we find


where . This simple expression aids in the derivation of some theoretical results regarding the bias of the estimator.

Figure 1: Shrinkage in . Because the estimator is bounded between -2 and 2, the shrunk value is always equal to 2 for absolute values of greater than .

Shrinkage of

Let denote the true parameter and denote the one-step IRLS estimate from Equation (24). The one-step “early stopping” shrinks or “regularises” the estimate towards the origin as,


The shrinkage, as a function of the true parameter , is shown in Figure 1; it is negligible when is small, but increases linearly with it (in magnitude) for outside the interval . The reason for this is clear from the formula in Equation (24): the range of the one-step estimate is bounded.

In particular, note that this shrinkage is a highly desirable property in light of the independence assumptions made in SNB. Indeed, the one-step procedure empirically outperforms full-convergence IRLS when predictors are highly correlated, and moreover does not significantly underperform otherwise (tests were performed on all datasets to be presented in §5). Therefore the one-step method is not only a necessity for encrypted computation, but offers potential improvements in performance over the standard algorithm.
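The shrinkage can be illustrated numerically. The closed form used below for the binary-predictor estimate, 2(a − b)/(a + b) in terms of the class counts, is our reconstruction consistent with the stated (−2, 2) bound; treat it as an assumption rather than the paper's exact expression. The simulation shows the estimate concentrating near 2 tanh(β/2) rather than the true β.

```python
import math, random

def one_step_binary(a, b):
    # One-step estimate for a binary predictor, in terms of the counts
    # a = #{x = 1, y = 1} and b = #{x = 1, y = 0}.  This closed form is a
    # reconstruction consistent with the (-2, 2) bound, not a quote.
    return 2.0 * (a - b) / (a + b)

random.seed(0)
beta_true = 3.0                        # true log-odds among x = 1 observations
p = 1.0 / (1.0 + math.exp(-beta_true))
n = 100_000
a = sum(random.random() < p for _ in range(n))
b = n - a
beta_hat = one_step_binary(a, b)

# The estimate concentrates near 2*tanh(beta/2) ~ 1.81 rather than beta = 3:
# negligible shrinkage for small beta, but bounded by 2 in magnitude.
assert abs(beta_hat - 2.0 * math.tanh(beta_true / 2.0)) < 0.05
```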

Generalisation error

Figure 2: (Left) For all , the generalisation error shrinks monotonically with increased and converges to an asymptotic curve—in green; (Right) The rate at which the generalisation error curves approach the asymptotic curve is decreasing with for all values of (several shown here) and stabilises at around .

Define and . Then, the generalisation error for can be written as


and approximated by the polynomial


Using the moment generating function for the Binomial distribution one can compute the higher-order expectations and thus arrive at a formula which depends only on and ,


The expression is dominated by in the sense that even for very small values of , the generalisation error is close (according to our approximation) to the one obtained asymptotically,


Figure 2 (left) shows how the value of affects the generalisation error; and Figure 2 (right) shows the speed at which the generalisation error converges to the one given by the (approximate) asymptotic curve, for several values of .

The generalisation error results highlight that while the estimates of the true class conditional probabilities may be unstable, the classification error rate achieved by SNB may be low as the classifier only has to get the prediction on the correct side of the decision boundary.

4.2.4 Theoretical parameter requirements for Fan and Vercauteren (2012)

For a discussion of practical requirements for the parameter selection in homomorphic encryption schemes, see Appendix B.

The SNB fitting and prediction for both paired and unpaired approximations are implemented in the open source R package EncryptedStats (Aslett and Esperança, 2015) and can be run on unencrypted data as well as data encrypted using the HomomorphicEncryption (Aslett, 2014a) package. These are briefly described in Appendix D.

In the next section, the two new machine learning techniques tailored for homomorphic encryption are empirically tested on a range of data sets, and a real example using a cluster of servers to fit a completely random forest is described.

5 Results

In this section we apply the encrypted machine learning methods presented in Sections 3 and 4 to a number of benchmark learning tasks.

5.1 Classification performance

We tested the methods on 20 data sets of varying type and dimension from the UCI machine learning data repository (Lichman, 2013), each of which is described in Appendix E. For the purposes of achieving many test replicates, the results in this subsection were generated from unencrypted runs (the code paths in the EncryptedStats package for unencrypted and encrypted values are identical), with checks to ensure that the unencrypted and encrypted versions give the same results. Runtime performance for the encrypted versions is given below.

Figure 3: Performance of various methods. For each model and dataset, the AUC for 100 stratified randomisations of the training and testing sets; the horizontal lines represent the frequency of class ; an asterisk indicates the method can be computed encrypted.

Figure 3 shows the comparison of these novel methods with each other as well as with their traditional counterparts. The traditional methods included are full logistic regression (LR-full), Gaussian naïve Bayes (GNB) and random forests (RF), none of which can be computed within the constraints of homomorphic encryption. The methods of this paper included are completely random forests (CRF), paired (SNB-paired) and unpaired (SNB-unpaired) semi-parametric naïve Bayes, and multinomial naïve Bayes (MNB). The CRFs are all 100 trees grown 3 levels deep, including stochastic fraction estimate ().

The performance measure used is the area under the ROC curve (AUC, ranging from 0 to 1). For each model and dataset the algorithms were run with the same 100 stratified randomisations of the training and testing sets (split in the proportion 80%/20%, respectively), so that each point on the graph represents the AUC for one train/test split and one method.

The first two data sets (infl and neph) are very easy classification problems and the new techniques match the traditional techniques perfectly in this setting; they keep pace almost uniformly in the other relatively easy data sets (bcw_d, bcw_o, monks3), the unpaired SNB on bcw_d being the exception.

Unsurprisingly, the traditional random forest tends to perform best in the more challenging data sets (Fernández-Delgado et al., 2014), though only in 4 of the data sets does it clearly outperform all the other methods by the AUC metric. Indeed, in the most challenging data sets (blood, bcw_p and haber) the new methods proposed in this work exhibit slightly better average performance than their counterparts.

The unpaired SNB results were computed without centring (which would be equivalent to paired) and affirm the observation in the previous section that centring, or paired computation, is always to be preferred where available.

The SNB method with IRLS run to convergence is not presented in the figure: as alluded to in the previous section, the natural shrinkage of the one-step estimator meant the latter performed equally well in most situations; in the chess and musk1 cases the average ratio of full-convergence to one-step AUCs was 0.76 and 0.67 respectively.

As an aside, the unencrypted versions of these new methods have good computational properties which will scale to massive data sets, because the most complex operations involved are addition and multiplication, which modern CPUs can evaluate with a few clock cycles' latency. In addition, in all cases even these simple operations can be performed in parallel and map directly to CPU vector instructions.
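For instance, the per-bin, per-class counts that drive these methods reduce (unencrypted) to a single matrix product over one-hot encoded data. The following NumPy sketch is illustrative only; all names are ours, not from the paper's software.

```python
import numpy as np

rng = np.random.default_rng(42)
n, bins = 1000, 5
x_bin = rng.integers(0, bins, size=n)    # quintile-binned predictor
y = rng.integers(0, 2, size=n)           # binary response

X = np.eye(bins)[x_bin]                  # n x 5 one-hot predictor
Y = np.column_stack([1 - y, y])          # n x 2 one-hot response
counts = X.T @ Y                         # 5 x 2 table of per-bin class counts

# The same table via an explicit (slow) loop, for comparison.
check = np.zeros((bins, 2))
for xi, yi in zip(x_bin, y):
    check[xi, yi] += 1
assert np.array_equal(counts, check)
```

The matrix product form is exactly the kind of add-and-multiply workload that vectorises well on modern hardware.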

5.2 Timings and memory use

All the encrypted methods presented in this work can be implemented to scale reasonably linearly in the data set size, so performance numbers are provided per 100 observations and per predictor (for logistic regression and semi-parametric naïve Bayes) or per tree (for completely random forests).

For reproducibility the timings were measured on a c4.8xlarge compute cluster Amazon EC2 instance. This corresponds to 36 cores of an Intel Xeon E5-2666 v3 (Haswell) CPU clocked at 2.9GHz. Table 1 shows the relevant timings using the EncryptedStats and HomomorphicEncryption packages.

                                     SNB-paired   CRF,    CRF,    CRF,
Fitting                              18.0         12.5    45.2    347
Prediction                           7.8          15.1    48.3    353
Approx memory per encrypted value    154KB        128KB   528KB   KB
Table 1: Approximate running times (in seconds, on a c4.8xlarge EC2 instance) per 100 observations. SNB timings are per predictor; CRF timings are per tree, where is the depth of tree grown. (An ‘encrypted value’ is a single integer value encrypted, so for example using quintiles each is stored as 5 encrypted values.)

Note that the CRF does not scale linearly in the depth of the tree grown, not only because of the non-linear growth in computational complexity of the trees, but also because as increases the parameters used for the Fan and Vercauteren (2012) scheme must be increased in such a way that the raw performance of encrypted operations drops. This drop is due to the increasing coefficient size and polynomial degree of the cipher text space. See Appendix B for a discussion of the impact of tree parameters on the encryption algorithm.

5.3 Forest parameter choices

The completely random forest has a few choices which can tune the performance: number of trees , depth of trees to build and whether to use stochastic fractions (and to what upper estimate, ). An empirical examination of these now follows.

5.3.1 Forest trees and depth

We explored the effect of varying the number and depth of the trees used in the algorithm, by varying from 10 to 1000 and from 1 to 6. We found a clear trend in most cases that growing a large forest is much more important than growing tall trees: indeed, in many cases 1000-tree forests with 1 level perform as well as those with 6 levels. Figure 7 in Appendix F plots the results. The depth of trees only appears relevant in large forests for the iono, magic and two musk data sets. This may indicate that these data sets contain more non-linearities, a hypothesis supported by the poor performance of the linear methods in these cases. Indeed, trees 1 level deep appear to be good candidates for additive models.

5.3.2 Stochastic fraction

The CRF algorithm presented in §3 has the option, presented in §3.2, of including an unbiased encrypted stochastic fraction estimate in order to reduce the amount of shrinkage that small leaves undergo. To analyse the impact that this has on the performance, the AUC was recomputed after setting different values for from 0 (i.e. original algorithm, no stochastic fraction) through to 64 in all 100 train/test splits of the 20 datasets that were presented already.

Figure 4: The AUC change for the iono data set with different values for in the stochastic fraction estimate. The horizontal axis is always the AUC with no stochastic fraction estimate; the vertical axis is the AUC for the shown value of ; one point per train/test split.

Figure 4 shows the results from the data set with the largest improvement in AUC performance from among the 20 data sets when using the stochastic fraction estimate. Improvements in AUC of up to 12.6% were achieved using the stochastic fraction versus omitting it. All points which are above the line indicate improvements in AUC when using the stochastic fraction for the particular train/test split.

Figure 5: The AUC change for the bcw_p data set with different values for in the stochastic fraction estimate. The horizontal axis is always the AUC with no stochastic fraction estimate; the vertical axis is the AUC for the shown value of ; one point per train/test split.

None of the 20 tested datasets decreased in average AUC with increasing , but Figure 5 shows the data set for which the AUC was least improved. In this instance it is striking that all the points cluster around the line, showing that even in the worst case the stochastic fraction estimate has essentially negligible impact.

These figures empirically illustrate the fact that the stochastic fraction has the potential to dramatically improve the performance of completely random forests, whilst not really having a negative impact in those situations where it does not help. As such, it would seem to make sense to include by default.

5.4 Case study in encrypted cloud computing machine learning

To demonstrate the potential of cloud computing resources for sensitive data analysis, we undertook a benchmark case study: a fully encrypted analysis of the original Wisconsin breast cancer data set using a compute cluster of 1152 CPU cores on Amazon Web Services, at a total cost of less than US$ 24 at the time of writing. The resources used here are readily available to any scientist.

5.4.1 The problem setup

As mentioned earlier in §3.3.2, the completely random forest is amenable to embarrassingly parallel computation, whereby the data can be split into shards using the same random seed, the expensive fitting step performed on each shard, and the final fit produced by inexpensively summing the individual tree shards.
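The shard-and-sum pattern can be sketched as follows. For brevity the per-shard "fit" is reduced to accumulating class counts, standing in for the encrypted tree-fitting step; all names here are illustrative, not from the EncryptedStats package.

```python
import numpy as np

def fit_shard(x_bin, y, bins=5):
    # Stand-in for the encrypted fitting step: accumulate per-bin class
    # counts, which under FHE would be ciphertext additions.
    counts = np.zeros((bins, 2), dtype=int)
    for xi, yi in zip(x_bin, y):
        counts[xi, yi] += 1
    return counts

rng = np.random.default_rng(7)
x = rng.integers(0, 5, size=99)
y = rng.integers(0, 2, size=99)

# Split into shards of at most 32 observations, fit each independently
# (one cloud server per shard), then cheaply sum the per-shard results.
shards = [(x[i:i + 32], y[i:i + 32]) for i in range(0, len(x), 32)]
combined = sum(fit_shard(xs, ys) for xs, ys in shards)
assert np.array_equal(combined, fit_shard(x, y))
```

Because the combination step is a plain sum, it is itself homomorphically computable, so shards never need to be co-located or decrypted.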

The training part of the Wisconsin data () was initially split into shards of at most 32 observations each, resulting in 17 full 32-observation shards and an 18th shard of 3 observations. These shards were then each encrypted using the HomomorphicEncryption R package (Aslett, 2014a) under the Fan and Vercauteren scheme using parameters:

This renders a theoretical cipher text size of KB per encrypted value, ignoring keys and overhead. These parameters offer around 158 bits of security (using bounds in Lindner and Peikert (2011)) — informally, this means on the order of fundamental operations must be performed on average in order to break the encryption. On disk, each gzipped shard of 32 cipher texts occupied about 737MB for the predictors and about 33.7MB for the responses, for a total disk space of around 13.8GB.

This data was uploaded to an Amazon S3 bucket, with the transfer time using the University internet connection being approximately 16 minutes. If this data were to be stored long term on Amazon S3, it would cost US$ 0.42 per month at the time of writing.

Once the data was in place, an Amazon SQS queue was set up to store a reference to each shard. This queue acts as a simple job dispatch system: it eliminates inter-server communication, allowing each server in the cluster to remain completely independent for maximum speed, and it ensures no duplication of work.

With these elements in place, the RStudio AMI ami-628c8a0a (Aslett, 2014b) was extended to add a startup script which (in summary) fetches the work to perform from the SQS queue, downloads it from S3, executes the forest building using the EncryptedStats R package, and uploads the result to the S3 bucket.

5.4.2 The fitting run

The fitting run used Amazon’s spot instances: these are a ‘stock market’ for unused capacity, where it is often possible to bid below the list price for compute servers. The completely random forest is well suited to exploiting low spot prices on EC2 wherever they may arise because it can be formulated in an embarrassingly parallel manner and launched in very geographically dispersed regions without regard for connectivity speeds, since communication costs between nodes are effectively zero.

When the run was performed on 5 May 2015, the spot price for c3.8xlarge instances was lowest in Dublin, Ireland and São Paulo, Brazil. Consequently, the data was replicated to two S3 buckets local to these regions and the customised AMI copied. Then a cluster of 18 c3.8xlarge servers was launched in each region, giving a total of 1152 CPU cores and GB of RAM.

Each server was set up to compute 50 trees on its shard of data, and every shard was handed out twice so that a total of 100 trees were fitted. Tree growth for the two different sets of 50 trees was initialised from a common random seed, eliminating the need for servers to communicate the trees grown.

After 1 hour and 36 minutes the cluster had completed the full run and finished uploading the encrypted tree fit (that is, encrypted versions of ) back to the S3 bucket. The total space required for storing the 36 forests of 50 trees fitted on each shard was 15.6GB. At this juncture, the forests could be combined homomorphically to produce a single forest of 100 trees which would then require 868MB to store. Note that with the tree fitted, it would then be possible to archive the 13.8GB of original data so that only 868MB needs to be stored long term or downloaded.

The cost of the 36 machines for 2 hours was US$ 23.86 (about £ 15.66). Note that the spot prices were not exceptionally low on the day in question and no effort was made to select an opportune moment. By the same token the price is inherently variable and it may be necessary to wait a short time for favourable spot prices to arise if there are none at the time of analysis.

5.4.3 Results

The encrypted version of the forest was downloaded, decrypted and compared to the results achieved when performing the same fit using an unencrypted version of the data, starting tree growth from the same seeds and using identical R code from the EncryptedStats package (separate code is not required due to the HomomorphicEncryption package fully supporting operator overloading). The resultant fit from both encrypted and unencrypted computation was in exact agreement.

6 Discussion

Fully homomorphic encryption schemes open up the prospect of privacy preserving machine learning applications. However, practical constraints of existing FHE schemes demand tailored approaches. With this aim, we made bespoke adjustments to two popular machine learning methods, namely extremely random forests and naïve Bayes classifiers, and demonstrated their performance on a variety of classifier learning tasks. We found the new methods to be competitive against their unencrypted counterparts.

To the best of our knowledge these represent the first machine learning schemes tailored explicitly for homomorphic encryption so that all stages (fitting and prediction) can be performed encrypted without any multi-party computation or communication.

Furthermore, the unencrypted versions of these new methods will scale to massive data sets, because the most complex operations involved are addition and multiplication. Indeed, for most of the presented algorithms even these simple operations can be performed in parallel and map directly to CPU vector instructions. This is an interesting avenue for future research.


The authors would like to thank the EPSRC and LSI-DTC for support. Louis Aslett and Chris Holmes were supported by the i-like project (EPSRC grant reference number EP/K014463/1). Pedro Esperança was supported by the Life Sciences Interface Doctoral Training Centre doctoral studentship (EPSRC grant reference number EP/F500394/1).


  • Anderlik and Rothstein (2001) Anderlik, M. and Rothstein, M. A. (2001), ‘Privacy and confidentiality of genetic information: what rules for the new science?’, Annual Review of Genomics and Human Genetics 2, 401–433.
  • Angrist (2013) Angrist, M. (2013), ‘Genetic privacy needs a more nuanced approach’, Nature 494, 7.
  • Aslett (2014a) Aslett, L. J. M. (2014a), HomomorphicEncryption: Fully Homomorphic Encryption. R package version 0.2.
  • Aslett (2014b) Aslett, L. J. M. (2014b), RStudio AMIs for Amazon EC2 cloud computing. AMI ID ami-628c8a0a.
  • Aslett and Esperança (2015) Aslett, L. J. M. and Esperança, P. M. (2015), EncryptedStats: Encrypted Statistical Machine Learning. R package version 0.1.
  • Aslett et al. (2015) Aslett, L. J. M., Esperança, P. M. and Holmes, C. C. (2015), A review of homomorphic encryption and software tools for encrypted statistical machine learning, Technical report, University of Oxford. arXiv:1508.06574 [stat.ML].
  • Bos et al. (2014) Bos, J. W., Lauter, K. and Naehrig, M. (2014), ‘Private predictive analysis on encrypted medical data’, Journal of Biomedical Informatics 50, 234--243.
  • Bost et al. (2014) Bost, R., Popa, R. A., Tu, S. and Goldwasser, S. (2014), ‘Machine learning classification over encrypted data’, Cryptology ePrint Archive, Report 2014/331: eprint.iacr.org/2014/331.
  • Breiman (2001) Breiman, L. (2001), ‘Random forests’, Machine learning 45(1), 5--32.
  • Brenner (2013) Brenner, S. E. (2013), ‘Be prepared for the big genome leak’, Nature 498, 139.
  • Cutler and Zhao (2001) Cutler, A. and Zhao, G. (2001), ‘PERT - perfect random tree ensembles’, Computing Science and Statistics 33, 490--497.
  • Domingos and Pazzani (1997) Domingos, P. and Pazzani, M. (1997), ‘On the optimality of the simple bayesian classifier under zero-one loss’, Machine Learning 29(2--3), 103--130.
  • Fan and Vercauteren (2012) Fan, J. and Vercauteren, F. (2012), ‘Somewhat practical fully homomorphic encryption’, IACR Cryptology ePrint Archive .
  • Fernández-Delgado et al. (2014) Fernández-Delgado, M., Cernadas, E., Barro, S. and Amorim, D. (2014), ‘Do we need hundreds of classifiers to solve real world classification problems?’, The Journal of Machine Learning Research 15(1), 3133--3181.
  • Gentry (2009) Gentry, C. (2009), A fully homomorphic encryption scheme, PhD thesis, Stanford University.
  • Gentry (2010) Gentry, C. (2010), ‘Computing arbitrary functions of encrypted data’, Communications of the ACM 53(3), 97--105.
  • Geurts et al. (2006) Geurts, P., Ernst, D. and Wehenkel, L. (2006), ‘Extremely randomized trees’, Machine Learning 63(1), 3--42.
  • Ginsburg (2014) Ginsburg, G. (2014), ‘Medical genomics: Gather and use genetic data in health care.’, Nature 508(7497), 451--453.
  • Graepel et al. (2012) Graepel, T., Lauter, K. and Naehrig, M. (2012), ML Confidential: Machine learning on encrypted data, in T. Kwon, M.-K. Lee and D. Kwon, eds, ‘Information Security and Cryptology (ICISC 2012)’, Vol. 7839 of Lecture Notes in Computer Science, Springer, pp. 1--21.
  • Hand and Yu (2001) Hand, D. J. and Yu, K. (2001), ‘Idiot’s bayes---not so stupid after all?’, International Statistical Review 69(3), 385--398.
  • Hastie et al. (2009) Hastie, T., Tibshirani, R. and Friedman, J. (2009), The elements of statistical learning, Springer.
  • Kaufman et al. (2009) Kaufman, D. J., Murphy-Bollinger, J., Scott, J. and Hudson, K. L. (2009), ‘Public opinion about the importance of privacy in biobank research’, American Journal of Human Genetics 85(5), 643--654.
  • Lichman (2013) Lichman, M. (2013), ‘UCI Machine Learning Repository’, http://archive.ics.uci.edu/ml/index.html.
  • Lindner and Peikert (2011) Lindner, R. and Peikert, C. (2011), Better key sizes (and attacks) for lwe-based encryption, in ‘Topics in Cryptology--CT-RSA 2011’, Springer, pp. 319--339.
  • Liu et al. (2011) Liu, J., Goraczko, M., James, S., Belady, C., Lu, J. and Whitehouse, K. (2011), The data furnace: heating up with cloud computing, in ‘Proceedings of the 3rd USENIX conference on hot topics in cloud computing’, Vol. 11.
  • Ng and Jordan (2002) Ng, A. Y. and Jordan, M. I. (2002), On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes, in T. Dietterich, S. Becker and Z. Ghahramani, eds, ‘Advances in Neural Information Processing Systems (NIPS’01)’, Vol. 14, MIT Press, pp. 841--848.
  • Rennie et al. (2003) Rennie, J. D. M., Shih, L., Teevan, J. and Karger, D. R. (2003), Tackling the poor assumptions of naive bayes text classifiers, in ‘International Conference on Machine Learning (ICML’03)’, AAAI, pp. 616--623.
  • Rivest et al. (1978) Rivest, R. L., Adleman, L. and Dertouzos, M. L. (1978), ‘On data banks and privacy homomorphisms’, Foundations of Secure Computation 4(11), 169--180.
  • Wu and Haven (2012) Wu, D. and Haven, J. (2012), ‘Using homomorphic encryption for large scale statistical analysis’.

Appendix A Completely Random Forest (CRF) algorithm

In detail, consider a training set with observations consisting of a categorical response, , and predictors, , (categorical / ordinal / continuous), for . All variables (predictors and response) are first transformed using method 1 from §2.2.2 prior to encryption. Thus, for and . Consider these herein to be in encrypted form.

The proposed algorithm for Completely Random Forests (CRFs) is then as follows:


  1. Specify the number of trees to grow, , and the depth to make each tree, .

  2. For each , build a tree in the forest:

    1. Tree growth: For each , build a level:

      1. Level will have branches (splits), each of which will have a partition applied as follows. For each , construct the partitions:

        1. Splitting variable: Select a variable at random from among the predictors. Due to the encoding of §2.2.2, this variable has a partition associated with it.

        2. Split point: Create a partition of at random in order to perform a split on variable , where each for some , with and . Note that for categorical predictors this is a random assignment of levels from the partition to each , while for ordinal predictors a split point is chosen and the partition formed by the levels either side of the split.

Note also the indexing of to emphasise that if variable is selected more than once (in different levels or trees) a different random split is chosen.

    2. Tree fitting: The total number of training observations belonging to category in the completely randomly grown tree at terminal leaf is then:


      Figure 6 (page 6) is useful for understanding this.

      Note in particular that written this way is simply a polynomial and can be computed homomorphically with multiplicative depth . Both and involve only indices of the algorithm which will not be encrypted. Thus, the training data can be evaluated on the tree without the use of comparisons.

      The total number of training observations in this terminal leaf is then simply:

      Thus a single tree, , fitted to a set of training data, is defined by the tuple of sets:

  3. Prediction: Once the forest has been grown, attention turns to prediction. Given an encrypted test observation with predictors , the objective is to predict the response category. Define to be the number of votes for response category , which can be simply computed as:

    where and are as defined above. Each is returned from the cloud to the client. The client decrypts and forms a predictive empirical ‘probability’ as:


Hence, the proposed CRF algorithm makes use of a quantisation procedure on the data, followed by completely random selection of variable and completely random partition on the quantile bins, in order to eliminate any need to perform comparisons.
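The comparison-free counting idea can be sketched in a few lines. This is our own illustrative Python reconstruction, not the paper's R code: with one-hot encoded bins, membership of a random partition is a sum of 0/1 entries, leaf membership is a product of such sums across levels, and class counts follow by multiplying with the one-hot response.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, bins, depth = 200, 4, 5, 2

x_bin = rng.integers(0, bins, size=(n, p))
X = np.eye(bins)[x_bin]                  # n x p x bins one-hot indicator bits
y = rng.integers(0, 2, size=n)
Y = np.column_stack([1 - y, y])          # n x 2 one-hot response

# One random root-to-leaf path: at each level pick a predictor and a random
# subset of its bins.  Partition membership is a sum of 0/1 entries, so no
# encrypted comparison is ever needed.
path = []
for _ in range(depth):
    j = int(rng.integers(p))
    subset = (rng.random(bins) < 0.5).astype(float)
    path.append((j, subset))

member = np.ones(n)
for j, subset in path:
    member *= X[:, j, :] @ subset        # 1 if x_j lies in the subset, else 0

# Per-class counts in this leaf: a low-degree polynomial in the encrypted bits.
leaf_counts = member @ Y
assert np.isclose(leaf_counts.sum(), member.sum())
```

Every operation here is an addition or multiplication of the (notionally encrypted) 0/1 entries, which is precisely why the tree can be fitted homomorphically.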

Tree growth (2a) occurs unencrypted and is a very fast operation.

Tree fitting (2b) involves counting the number of training observations which lie in each terminal node of each tree. In computing , the inner sum evaluates whether an observation has predictor value in the relevant partition at level of tree (by (