1. Introduction
Differential privacy (Dwork et al., 2006a; Dwork et al., 2006b) has become the gold standard for rigorous privacy guarantees. This has prompted the development of many mechanisms, including the classical Laplace mechanism (Dwork et al., 2006b) and the Exponential mechanism (McSherry and Talwar, 2007). In addition, there are other mechanisms that build upon these two classical ones, such as those based on data partition and aggregation (Xiao et al., 2011b; Hay et al., 2010; Qardaji et al., 2013b; Li et al., 2014; Zhang et al., 2014; Xiao et al., 2012; Cormode et al., 2012; Qardaji et al., 2013a; Xu et al., 2013; Acs et al., 2012; Nissim et al., 2007), and those based on adaptive queries (Hardt and Rothblum, 2010; Hardt et al., 2012; Li et al., 2010; Li and Miklau, 2012; Yuan et al., 2012; Lyu et al., 2016; Dwork et al., 2010). From this observation, differentially-private mechanisms may be categorized into basic and derived mechanisms. The privacy guarantee of basic mechanisms is self-contained, whereas that of derived mechanisms is achieved through a combination of basic mechanisms, composition theorems, and the post-processing invariance property (Dwork, 2008).
In this work, we design a basic mechanism for matrix-valued queries. Existing basic mechanisms for differential privacy are typically designed for scalar-valued query functions. However, in many practical settings, the query functions are multi-dimensional and can be succinctly represented as matrix-valued functions. Examples of matrix-valued query functions in real-world applications include the covariance matrix (Dwork et al., 2014; Blum et al., 2005; Chaudhuri et al., 2012), the kernel matrix (Kung, 2014), the adjacency matrix (Godsil and Royle, 2013), the incidence matrix (Godsil and Royle, 2013), the rotation matrix (Hughes et al., 2014), the Hessian matrix (Thacker, 1989), the transition matrix (Gilks et al., 1995), and the density matrix (White, 1992), which find applications in statistics (Friedman et al., 2001; Vapnik, 2013), graph theory (Godsil and Royle, 2013), differential equations (Thacker, 1989), computer graphics (Hughes et al., 2014), Markov chain simulation (Gilks et al., 1995), quantum mechanics (White, 1992), and many other fields (Wikipedia, 2017).
One property that distinguishes matrix-valued query functions from scalar-valued query functions is the relationship and interconnection among the elements of the matrix. One may naively treat these matrices as merely a collection of scalar values, but that could prove suboptimal, since the structure and relationship among these scalar values are often informative and essential to the understanding and analysis of the system. For example, in graph theory, the adjacency matrix is symmetric for an undirected graph, but not for a directed graph (Godsil and Royle, 2013) – an observation that is difficult to extract from simply looking at the collection of elements without considering how they are arranged in the matrix.
In differential privacy, the standard method for a matrix-valued query function is to extend a scalar-valued mechanism by adding independent and identically distributed (i.i.d.) noise to each element of the matrix (Dwork et al., 2006b; Dwork et al., 2006a; Dwork and Roth, 2014). However, this method may not be optimal, as it fails to utilize the structural characteristics of the matrix-valued noise and query function. Although some advanced methods have explored this possibility in an iterative/procedural manner (Hardt and Rothblum, 2010; Hardt et al., 2012; Nikolov et al., 2013), the structural characteristics of the matrices remain largely under-investigated. This is partly due to the lack of a basic mechanism designed directly for matrix-valued query functions, which makes the utilization and application of available tools in matrix analysis challenging.
In this work, we formalize the study of matrix-valued differential privacy, and present a new basic mechanism that can readily exploit the structural characteristics of the matrices – the Matrix-Variate Gaussian (MVG) mechanism. The high-level concept of the MVG mechanism is simple – it adds a matrix-variate Gaussian noise scaled to the sensitivity of the matrix-valued query function (cf. Fig. 1). We rigorously prove that the MVG mechanism guarantees differential privacy. Moreover, due to the multi-dimensional nature of the noise and the query function, the MVG mechanism allows flexibility in the design via the novel notion of directional noise. An important consequence of the concept of directional noise is that the matrix-valued noise in the MVG mechanism can be devised to affect certain parts of the matrix-valued query function less than others, while providing the same privacy guarantee. In practice, this property could be beneficial as the noise can be tailored to minimally impact the intended utility.
Finally, to illustrate the effectiveness of the MVG mechanism, we conduct experiments on three privacy-sensitive real-world datasets – Liver Disorders (Lichman, 2013; Forsyth and Rada, 1986), Movement Prediction (Bacciu et al., 2014), and Cardiotocography (Lichman, 2013; de Campos et al., 2000). The experiments include three tasks involving matrix-valued query functions – regression, finding the first principal component, and covariance estimation. The results show that the MVG mechanism can outperform four prior state-of-the-art mechanisms – the Laplace mechanism, the Gaussian mechanism, the Exponential mechanism, and the JL transform – in utility in all experiments.
To summarize, our main contributions are as follows.

We formalize the study of matrix-valued query functions in differential privacy and introduce the novel Matrix-Variate Gaussian (MVG) mechanism.

We rigorously prove that the MVG mechanism guarantees differential privacy.

We introduce the novel concept of directional noise, and propose two simple algorithms to implement it with the MVG mechanism.

We evaluate our approach on three real-world datasets and show that our approach can outperform four prior mechanisms in all experiments, and yields utility close to the non-private baseline.
2. Prior Works
Existing mechanisms for differential privacy may be categorized into two types: the basic mechanism (Dwork et al., 2006b; Dwork et al., 2006a; Blocki et al., 2012; McSherry and Talwar, 2007; Dwork and Roth, 2014; Liu, 2016a; Blum and Roth, 2013; Upadhyay, 2014b, a); and the derived mechanism (Nissim et al., 2007; Johnson et al., 2017; Xiao et al., 2011b; Hay et al., 2010; Qardaji et al., 2013b; Li et al., 2014; Zhang et al., 2014; Xiao et al., 2012; Cormode et al., 2012; Qardaji et al., 2013a; Xu et al., 2013; Acs et al., 2012; Hay et al., 2016; Proserpio et al., 2014; Hardt et al., 2012; Hardt and Rothblum, 2010; Day and Li, 2015; Kenthapadi et al., 2012; Chanyaswad et al., 2017; Xu et al., 2017; Li et al., 2011; Jiang et al., 2013; Li et al., 2010; Li and Miklau, 2012; Hardt and Rothblum, 2010; Dwork et al., 2010; Lyu et al., 2016). Since our work concerns the basic mechanism design, we focus our discussion on this type, and provide a general overview of the other.
2.1. Basic Mechanisms
Basic mechanisms are those whose privacy guarantee is self-contained, i.e. the guarantee is not deduced from another mechanism. Here, we discuss four popular existing basic mechanisms.
2.1.1. Laplace Mechanism.
The classical Laplace mechanism (Dwork et al., 2006b) adds noise drawn from the Laplace distribution scaled to the sensitivity of the query function. It was initially designed for a scalar-valued query function, but can be extended to a matrix-valued query function by adding i.i.d. Laplace noise to each element of the matrix. The Laplace mechanism provides the strong differential privacy guarantee and is relatively simple to implement. However, its generalization to a matrix-valued query function does not automatically utilize the structure of the matrices involved.
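For concreteness, the element-wise extension described above can be sketched as follows. This is our own minimal illustration (the function name and parameter values are hypothetical), not a reference implementation from the cited works.

```python
import numpy as np

def laplace_mechanism_matrix(query_answer, l1_sensitivity, epsilon, rng=None):
    """Element-wise Laplace mechanism for a matrix-valued query answer:
    add i.i.d. Laplace(l1_sensitivity / epsilon) noise to every entry."""
    rng = np.random.default_rng() if rng is None else rng
    scale = l1_sensitivity / epsilon
    return query_answer + rng.laplace(0.0, scale, size=query_answer.shape)

# Example: perturb a 3x4 matrix query at epsilon = 1.0.
answer = np.arange(12, dtype=float).reshape(3, 4)
private_answer = laplace_mechanism_matrix(
    answer, l1_sensitivity=2.0, epsilon=1.0, rng=np.random.default_rng(0))
```

Note that the noise scale depends only on the scalar sensitivity and ε, not on how the entries are arranged, which is exactly the structural blindness discussed above.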
2.1.2. Gaussian Mechanism.
The Gaussian mechanism (Dwork and Roth, 2014; Dwork et al., 2006a; Liu, 2016a) uses i.i.d. additive noise drawn from the Gaussian distribution scaled to the sensitivity. The Gaussian mechanism guarantees (ε, δ)-differential privacy. Like the Laplace mechanism, it also does not automatically consider the structure of the matrices.
2.1.3. Johnson-Lindenstrauss (JL) Transform.
The JL transform method (Blocki et al., 2012) uses multiplicative noise to guarantee differential privacy. It is, in fact, a rare basic mechanism designed for a matrix-valued query function. Despite its promise, previous works show that the JL transform method can be applied only to queries with certain properties, e.g.

Blocki et al. (Blocki et al., 2012) use a random matrix, whose elements are drawn i.i.d. from a Gaussian distribution, and the method is applicable to the Laplacian of a graph and the covariance matrix;

Blum and Roth (Blum and Roth, 2013) use a hash function that implicitly represents the JL transform, and the method is suitable for a sparse query;

Upadhyay (Upadhyay, 2014a, b) uses a multiplicative combination of random matrices to provide a JL transform that is applicable to any matrix-valued query function whose singular values are all above a threshold.
2.1.4. Exponential Mechanism.
The Exponential mechanism uses noise introduced via the sampling process (McSherry and Talwar, 2007). It draws its query answers from a custom distribution designed to preserve differential privacy. To provide reasonable utility, the distribution is chosen based on the quality function, which indicates the utility score of each possible sample. Due to its generality, it has been utilized for many types of query functions, including matrix-valued query functions. We experimentally compare our approach to the Exponential mechanism, and show that, with a slightly weaker privacy guarantee, our method can yield significant utility improvement.
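The sampling process can be sketched as follows for a finite candidate set; this is our own toy illustration of the McSherry-Talwar construction (the function name and example data are hypothetical).

```python
import numpy as np

def exponential_mechanism(candidates, quality, sensitivity, epsilon, rng=None):
    """Sample a candidate with probability proportional to
    exp(epsilon * quality / (2 * sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([quality(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # stabilized before normalizing
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy example: privately release the most frequent value in a small list.
data = [1, 1, 1, 2, 3]
candidates = [1, 2, 3]
quality = lambda c: data.count(c)   # a counting query has sensitivity 1
picks = [exponential_mechanism(candidates, quality, 1.0, 8.0,
                               rng=np.random.default_rng(seed))
         for seed in range(100)]
```

With a generous ε, the mechanism returns the true mode in the vast majority of draws while never ruling out the other candidates.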
Summary
In summary, our method differs from the four existing basic mechanisms as follows. In contrast with the i.i.d. noise in the Laplace and Gaussian mechanisms, the MVG mechanism allows non-i.i.d. noise (cf. Sec. 5). As opposed to the multiplicative noise in the JL transform and the sampling noise in the Exponential mechanism, the MVG mechanism uses additive noise for matrix-valued query functions.
2.2. Derived Mechanisms
Derived mechanisms – also referred to as “revised algorithms” by Blocki et al. (Blocki et al., 2012) – are those whose privacy guarantee is deduced from other basic mechanisms via the composition theorems and the post-processing invariance property (Dwork, 2008). Derived mechanisms are often designed to provide better utility by exploiting some properties of the query function or the data.
The general techniques used by derived mechanisms are often translatable among basic mechanisms, including our MVG mechanism. Given our focus on a novel basic mechanism, these techniques are less relevant to our work, and we leave the integration of these techniques into the MVG framework to future work. Some of the popular techniques used by derived mechanisms are summarized here.
2.2.1. Sensitivity Control.
This technique reduces the amount of noise required by employing a sensitivity measure that can be smaller than the global sensitivity, e.g. the smooth sensitivity (Nissim et al., 2007) and the elastic sensitivity (Johnson et al., 2017).
2.2.2. Data Partition and Aggregation.
This technique uses data partition and aggregation to produce more accurate query answers (Xiao et al., 2011b; Hay et al., 2010; Qardaji et al., 2013b; Li et al., 2014; Zhang et al., 2014; Xiao et al., 2012; Cormode et al., 2012; Qardaji et al., 2013a; Xu et al., 2013; Acs et al., 2012). The partition and aggregation processes are done in a differentially-private manner, either via the composition theorems and the post-processing invariance property (Dwork, 2008), or with a small extra privacy cost. Hay et al. (Hay et al., 2016) nicely summarize many works that utilize this concept.
2.2.3. Non-uniform Data Weighting.
This technique lowers the level of perturbation required for the privacy protection by weighting each data sample or dataset differently (Proserpio et al., 2014; Hardt et al., 2012; Hardt and Rothblum, 2010; Day and Li, 2015). The rationale is that each sample in a dataset, or each instance of the dataset itself, has a heterogeneous contribution to the query output. Therefore, these mechanisms place a higher weight on the critical samples or instances of the database to provide better utility.
2.2.4. Data Compression.
This approach reduces the level of perturbation required for differential privacy via dimensionality reduction. Various dimensionality reduction methods have been proposed. For example, Kenthapadi et al. (Kenthapadi et al., 2012), Xu et al. (Xu et al., 2017), Li et al. (Li et al., 2011) and Chanyaswad et al. (Chanyaswad et al., 2017) use random projection; Jiang et al. (Jiang et al., 2013) use principal component analysis (PCA) and linear discriminant analysis (LDA); Xiao et al. (Xiao et al., 2011b) use wavelet transform; and Acs et al. (Acs et al., 2012) use lossy Fourier transform.
2.2.5. Adaptive Queries.
These methods use past/auxiliary information to improve the utility of the query answers. Examples are the matrix mechanism (Li et al., 2010; Li and Miklau, 2012), the multiplicative weights mechanism (Hardt and Rothblum, 2010; Hardt et al., 2012), the low-rank mechanism (Yuan et al., 2012, 2015), correlated noise (Nikolov et al., 2013; Xiao et al., 2011a), least-squares estimation (Nikolov et al., 2013), boosting (Dwork et al., 2010), and the sparse vector technique (Dwork and Roth, 2014; Lyu et al., 2016). We also note that some of these adaptive methods can be used in the restricted case of a matrix-valued query that can be decomposed into multiple linear vector-valued queries (Nikolov et al., 2013; Hardt and Talwar, 2010; Yuan et al., 2015; Yuan et al., 2016; Nikolov, 2015; Hardt and Rothblum, 2010). However, such an approach does not generalize to arbitrary matrix-valued queries.
Summary
We conclude with the following three main observations. (a) First, the MVG mechanism falls into the category of basic mechanisms. (b) Second, techniques used in derived mechanisms are generally applicable to multiple basic mechanisms, including our novel MVG mechanism. (c) Therefore, for a fair comparison, we will compare the MVG mechanism with the four state-of-the-art basic mechanisms presented in this section.
3. Background
3.1. MatrixValued Query
We use the term dataset interchangeably with database, and represent it with the data matrix X, whose columns are the vector samples/records. The matrix-valued query function f(X) ∈ R^{m×n} has m rows and n columns. (Note that we use the capital N for the size of the dataset, but the small m and n for the dimensions of the query output.) We define the notion of neighboring datasets as two datasets that differ by a single record, and denote it as d(X₁, X₂) = 1. We note, however, that although the neighboring datasets differ by only a single record, f(X₁) and f(X₂) may differ in every element.
We denote a matrix-valued random variable with the calligraphic font, e.g. 𝒵, and its instance with the bold font, e.g. Z. Finally, as will become relevant later, we use the columns of the data matrix X to denote the samples in the dataset.
3.2. Differential Privacy
Differential privacy (Dwork, 2006; Dwork et al., 2006a) guarantees that the involvement of any one particular record of the dataset would not drastically change the query answer.
Definition 1 (Differential privacy).
A mechanism 𝒜 on a query function f is (ε, δ)-differentially-private if for all neighboring datasets (X₁, X₂), and for all possible measurable matrix-valued outputs S,
Pr[𝒜(f(X₁)) ∈ S] ≤ e^ε Pr[𝒜(f(X₂)) ∈ S] + δ.
3.3. Matrix-Variate Gaussian Distribution
One of our main innovations is the use of noise drawn from a matrix-variate probability distribution. Specifically, in the MVG mechanism, the additive noise is drawn from the matrix-variate Gaussian distribution, defined as follows (Dawid, 1981; Gupta and Varga, 1992; Nguyen, 1997; Dutilleul, 1999; Iranmanesh et al., 2010; Waal, 2006).
Definition 2 (Matrix-variate Gaussian distribution).
An m × n matrix-valued random variable 𝒵 follows the matrix-variate Gaussian distribution, 𝒵 ∼ MVG_{m,n}(M, Σ, Ψ), if its probability density is
p(Z) = exp(−(1/2) tr[Ψ⁻¹ (Z − M)ᵀ Σ⁻¹ (Z − M)]) / ((2π)^{mn/2} |Ψ|^{m/2} |Σ|^{n/2}),
where M ∈ R^{m×n} is the mean matrix, Σ ∈ R^{m×m} is the row-wise covariance, and Ψ ∈ R^{n×n} is the column-wise covariance.
Notably, the density function of MVG_{m,n}(M, Σ, Ψ) looks similar to that of the multivariate Gaussian, N_m(μ, Σ). Indeed, the matrix-variate Gaussian distribution is a generalization of the multivariate Gaussian to a matrix-valued random variable. This leads to a few notable additions. First, the mean vector μ now becomes the mean matrix M. Second, in addition to the traditional row-wise covariance matrix Σ, there is also the column-wise covariance matrix Ψ. The latter is due to the fact that not only could the rows of the matrix be distributed non-uniformly, but so could its columns.
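The connection to the ordinary multivariate Gaussian can be checked numerically. In the sketch below (our own illustration, not from the cited works), Σ = A Aᵀ and Ψ = B Bᵀ, and the column-stacking identity vec(A G Bᵀ) = (B ⊗ A) vec(G) shows that a draw with these two covariances is a multivariate Gaussian with Kronecker-structured covariance Ψ ⊗ Σ.

```python
import numpy as np

# Sketch: a matrix-variate Gaussian draw can be realized as Z = A @ G @ B.T
# with G i.i.d. standard normal, Sigma = A A^T (row-wise) and Psi = B B^T
# (column-wise).  The vec/Kronecker identity below implies
# cov(vec(Z)) = (B kron A)(B kron A)^T = Psi kron Sigma.
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m))      # any square root of Sigma
B = rng.standard_normal((n, n))      # any square root of Psi
G = rng.standard_normal((m, n))

Z = A @ G @ B.T
vec = lambda M: M.flatten(order="F") # column-stacking vectorization
lhs = vec(Z)
rhs = np.kron(B, A) @ vec(G)
```

The identity holds exactly (up to floating-point error), independent of the particular draw of G.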
We may intuitively explain this addition as follows. If we draw n i.i.d. samples z₁, …, z_n from N_m(0, Σ) and concatenate them into a matrix Z = [z₁, …, z_n], then it can be shown that Z is drawn from MVG_{m,n}(0, Σ, I), where I is the identity matrix (Dawid, 1981). However, if we consider the case when the columns of Z are not i.i.d., and are instead distributed with the covariance Ψ, then it can be shown that Z is distributed according to MVG_{m,n}(0, Σ, Ψ) (Dawid, 1981).
3.4. Relevant Matrix Algebra Theorems
We restate two major theorems in linear algebra that are essential to the subsequent analysis. The first is used in multiple parts of the analysis, including the privacy proof and the interpretation of the results, while the second is the concentration bound essential to the privacy proof.
Theorem 1 (Singular value decomposition (SVD) (Horn and Johnson, 2012)).
A matrix A ∈ R^{m×n} can be decomposed as A = U Λ Vᵀ, where U ∈ R^{m×m} and V ∈ R^{n×n} are unitary, and Λ ∈ R^{m×n} is a diagonal matrix whose diagonal elements are the ordered singular values of A, denoted as σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_{min(m,n)} ≥ 0.
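As a quick numerical check of Theorem 1 (our own illustration using NumPy's SVD routine), the factorization reconstructs the matrix exactly, the factors have orthonormal columns, and the singular values come out in non-increasing order:

```python
import numpy as np

# Numerical check of the SVD theorem on a random 4x3 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
reconstructed = U @ np.diag(s) @ Vt
```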
Theorem 2 (LaurentMassart (Laurent and Massart, 2000)).
For a matrix-variate random variable 𝒵 ∼ MVG_{m,n}(0, I, I), δ ∈ (0, 1), and ζ(δ)² = mn + 2√(−mn ln δ) − 2 ln δ,
Pr[‖𝒵‖_F² ≤ ζ(δ)²] ≥ 1 − δ.
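This bound can be sanity-checked by simulation, since the squared Frobenius norm of an i.i.d. standard normal m × n matrix is chi-square with mn degrees of freedom. The dimensions and δ below are our own illustrative choices:

```python
import numpy as np

# Laurent-Massart tail bound for chi-square with D = m*n degrees of freedom:
# P( chi2_D >= D + 2*sqrt(D*x) + 2*x ) <= exp(-x).  With x = ln(1/delta),
# the squared Frobenius norm exceeds zeta(delta)^2 with probability <= delta.
def zeta_squared(m, n, delta):
    x = np.log(1.0 / delta)
    D = m * n
    return D + 2.0 * np.sqrt(D * x) + 2.0 * x

rng = np.random.default_rng(2)
m, n, delta = 5, 4, 0.05
trials = 20000
samples = rng.standard_normal((trials, m, n))
fro_sq = (samples ** 2).sum(axis=(1, 2))
coverage = np.mean(fro_sq <= zeta_squared(m, n, delta))
```

Empirically the coverage is well above 1 − δ, reflecting that the bound is conservative.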
4. MVG Mechanism: Differential Privacy with Matrix-Valued Query
Matrix-valued query functions differ from their scalar counterparts in that vital information is contained in how the elements are arranged in the matrix. To fully exploit these structural characteristics of matrix-valued query functions, we present our novel mechanism for matrix-valued query functions: the Matrix-Variate Gaussian (MVG) mechanism.
First, let us introduce the sensitivity of the matrixvalued query function used in the MVG mechanism.
Definition 3 (Sensitivity).
Given a matrix-valued query function f(X), define the sensitivity as,
s₂(f) = sup_{d(X₁,X₂)=1} ‖f(X₁) − f(X₂)‖_F,
where ‖·‖_F is the Frobenius norm (Horn and Johnson, 2012).
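As a hedged illustration of this definition (our own sketch, not from the paper's experiments): for the identity query on a dataset whose records (columns) are clipped to l2 norm at most b, neighboring datasets differ in one column, so the Frobenius distance between query answers is at most 2b.

```python
import numpy as np

def frobenius_distance(F1, F2):
    """Frobenius distance between two matrix-valued query answers."""
    return np.linalg.norm(F1 - F2, ord="fro")

rng = np.random.default_rng(3)
b, n, N = 1.0, 4, 6
X1 = rng.standard_normal((n, N))
X1 /= np.maximum(1.0, np.linalg.norm(X1, axis=0) / b)  # clip columns to norm b
X2 = X1.copy()
X2[:, 0] = -X1[:, 0]        # neighboring dataset: one record replaced
dist = frobenius_distance(X1, X2)   # <= 2*b for the identity query
```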
Then, we present the MVG mechanism as follows.
Definition 4 (MVG mechanism).
Given a matrix-valued query function f(X) ∈ R^{m×n}, and a matrix-valued random variable 𝒵 ∼ MVG_{m,n}(0, Σ, Ψ), the MVG mechanism is defined as,
MVG(f, X) = f(X) + 𝒵,
where Σ is the row-wise covariance matrix, and Ψ is the column-wise covariance matrix.
So far, we have not specified how to pick Σ and Ψ according to the sensitivity in the MVG mechanism. We discuss the explicit form of Σ and Ψ next.
Table 1. Notation.
X: database/dataset whose columns are data samples and rows are attributes/features.
MVG_{m,n}(0, Σ, Ψ): matrix-variate Gaussian distribution with zero mean, the row-wise covariance Σ, and the column-wise covariance Ψ.
f(X) ∈ R^{m×n}: matrix-valued query function.
H_r: generalized harmonic number of order r.
H_{r,1/2}: generalized harmonic number of order r of 1/2.
σ(Σ⁻¹): vector of non-increasing singular values of Σ⁻¹.
σ(Ψ⁻¹): vector of non-increasing singular values of Ψ⁻¹.
As the additive matrix-valued noise of the MVG mechanism is drawn from MVG_{m,n}(0, Σ, Ψ), the parameters to be designed for the MVG mechanism are the two covariance matrices Σ and Ψ. The following theorem presents a sufficient condition for the values of Σ and Ψ to ensure that the MVG mechanism preserves (ε, δ)-differential privacy.
Theorem 3 (Sufficient condition).
Let σ(Σ⁻¹) and σ(Ψ⁻¹) be the vectors of non-increasingly ordered singular values of Σ⁻¹ and Ψ⁻¹, respectively, and let the relevant variables be defined according to Table 1. Then, the MVG mechanism guarantees (ε, δ)-differential privacy if Σ and Ψ satisfy the following condition (note that the dependence on δ is via ζ(δ) in β),
(1) ‖σ(Σ⁻¹)‖₂ ‖σ(Ψ⁻¹)‖₂ ≤ (−β + √(β² + 8αε))² / (4α²),
where α = [H_r + H_{r,1/2}] γ² + 2 H_r γ s₂(f), β = 2 (mn)^{1/4} H_r s₂(f) ζ(δ), r = min(m, n), and γ = sup_X ‖f(X)‖_F.
Proof.
(Sketch) We provide only a proof sketch here. The full proof can be found in Appendix A.
The MVG mechanism guarantees (ε, δ)-differential privacy if, for every pair of neighboring datasets (X₁, X₂) and all measurable sets S,
Pr[f(X₁) + 𝒵 ∈ S] ≤ e^ε Pr[f(X₂) + 𝒵 ∈ S] + δ.
Using Definition 2, this is satisfied if the ratio of the two matrix-variate Gaussian densities of the output is bounded by e^ε. By inserting this ratio inside the integral on the left side, it is sufficient to show that the log-density-ratio is bounded by ε with probability ≥ 1 − δ. By algebraic manipulations, we can express this condition as,
tr[Ψ⁻¹YᵀΣ⁻¹Δ + Ψ⁻¹ΔᵀΣ⁻¹Y + Ψ⁻¹f(X₂)ᵀΣ⁻¹f(X₂) − Ψ⁻¹f(X₁)ᵀΣ⁻¹f(X₁)] ≤ 2ε,
where Δ = f(X₁) − f(X₂) and Y is the mechanism output. This is the necessary condition that has to be satisfied for all neighboring (X₁, X₂) with probability ≥ 1 − δ for the MVG mechanism to guarantee (ε, δ)-differential privacy. Therefore, we refer to it as the characteristic equation. From here, the proof analyzes the four terms in the sum separately since the trace is additive. The analysis relies on the following lemmas in linear algebra.
Lemma 1 (Merikoski-Sarria-Tarazaga (Merikoski et al., 1994)).
The non-increasingly ordered singular values σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_r of a matrix A have values in the interval [0, ‖A‖_F].
Lemma 2 (von Neumann (von Neumann, 1937)).
Let A, B ∈ R^{m×n}; σ_i(A) and σ_i(B) be the non-increasingly ordered singular values of A and B, respectively; and r = min(m, n). Then, |tr(AᵀB)| ≤ ∑_{i=1}^{r} σ_i(A) σ_i(B).
Lemma 3 (Trace magnitude bound (Horn and Johnson, 1991)).
Let σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_n be the non-increasingly ordered singular values of a square matrix A ∈ R^{n×n}. Then, |tr(A)| ≤ ∑_{i=1}^{n} σ_i.
The proof, then, proceeds with the analysis of each term in the characteristic equation as follows.
The first term: tr[Ψ⁻¹YᵀΣ⁻¹Δ]. Let us denote Y = f(X) + Z, where f(X) and Z are any possible instances of the query and the noise, respectively. Then, we can rewrite the first term as, tr[Ψ⁻¹f(X)ᵀΣ⁻¹Δ] + tr[Ψ⁻¹ZᵀΣ⁻¹Δ]. Both parts are then bounded by their singular values via Lemma 2. The singular values are, in turn, bounded via Lemma 1 and Theorem 2 with probability ≥ 1 − δ. This gives a bound for the first term in terms of γ, s₂(f), ζ(δ), and the singular values of Σ⁻¹ and Ψ⁻¹.
The second term: tr[Ψ⁻¹ΔᵀΣ⁻¹Y]. By following the same steps as in the first term, the second term has the exact same bound as the first term. The third term, tr[Ψ⁻¹f(X₂)ᵀΣ⁻¹f(X₂)], is bounded similarly via Lemma 2 and Lemma 1.
The fourth term: −tr[Ψ⁻¹f(X₁)ᵀΣ⁻¹f(X₁)]. Since this term has the negative sign, we use Lemma 3 to bound its magnitude by its singular values. Then, we use Lemma 1 to bound the singular values. This gives the bound for the fourth term in terms of γ and the singular values of Σ⁻¹ and Ψ⁻¹.
Four terms combined: by combining the bounds on the four terms and rearranging them, the characteristic equation becomes α t² + β t − 2ε ≤ 0, where t = (‖σ(Σ⁻¹)‖₂ ‖σ(Ψ⁻¹)‖₂)^{1/2}. This is a quadratic inequality, whose roots are (−β ± √(β² + 8αε))/(2α). Since we know t ≥ 0, we have the solution,
t ≤ (−β + √(β² + 8αε)) / (2α),
which implies the criterion in Theorem 3. ∎
Remark 1.
In Theorem 3, we assume that the Frobenius norm of the query function is bounded for all possible datasets by a finite constant γ. This assumption is valid in practice because real-world data are rarely unbounded (cf. (Liu, 2016b)), and it is a common assumption in the analysis of differential privacy for multi-dimensional query functions (cf. (Dwork et al., 2006b; Chaudhuri et al., 2012; Zhou et al., 2009; Dwork et al., 2014)).
Remark 2.
The values of the generalized harmonic numbers, H_r and H_{r,1/2}, can be obtained from a table lookup for a given value of r, or can easily be computed recursively (Sondow and Weisstein, 2017).
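The recursion mentioned in the remark is straightforward to implement; a minimal sketch (the function names are our own):

```python
def harmonic(m, r=1):
    """Generalized harmonic number H_{m,r} = sum_{i=1}^{m} 1 / i**r.
    (r = 1 gives the ordinary harmonic number; the theorem also uses r = 1/2.)"""
    return sum(1.0 / i ** r for i in range(1, m + 1))

def harmonic_recursive(m, r=1):
    """Recursive form: H_{m,r} = H_{m-1,r} + 1 / m**r, with H_{0,r} = 0."""
    return 0.0 if m == 0 else harmonic_recursive(m - 1, r) + 1.0 / m ** r
```

For example, harmonic(3) = 1 + 1/2 + 1/3 = 11/6.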
The sufficient condition in Theorem 3 yields an important observation: the privacy guarantee of the MVG mechanism depends only on the singular values of Σ and Ψ. In other words, we may have multiple instances of (Σ, Ψ) that yield the exact same privacy guarantee (cf. Fig. 2). To better understand this phenomenon, in the next section, we introduce the novel concept of directional noise.
5. Directional Noise
Recall from Theorem 3 that the differential-privacy condition for the MVG mechanism only applies to the singular values of the two covariance matrices Σ and Ψ. Here, we investigate the ramification of this result via the novel notion of directional noise.
5.1. Motivation for Non-i.i.d. Noise
For a matrix-valued query function, the standard method for basic mechanisms that use additive noise is to add independent and identically distributed (i.i.d.) noise to each element of the matrix query. However, as is common in matrix analysis (Horn and Johnson, 2012), the matrices involved often have geometric and algebraic characteristics that can be exploited. As a result, it is usually the case that only certain “parts” – a term which will be defined more precisely shortly – of the matrices contain useful information. In fact, this is one of the rationales behind many compression techniques, such as the popular principal component analysis (PCA) (Kung, 2014; Murphy, 2012; Bishop, 2006). For this reason, adding the same amount of noise to every “part” of the matrix query may be highly suboptimal.
5.2. Directional Noise as a Noni.i.d. Noise
Let us elaborate further on the “parts” of a matrix. In matrix analysis, matrix factorization (Horn and Johnson, 2012) is often used to extract underlying properties of a matrix. This is a family of algorithms, and the specific choice depends upon the application and the types of insight it requires. In our application, the informative factorization is the singular value decomposition (SVD) (Theorem 1) of the two covariance matrices of MVG_{m,n}(0, Σ, Ψ).
Consider first the covariance matrix Σ, and write its SVD as Σ = W₁ Λ W₂ᵀ. It is well-known that, for the covariance matrix, we have the equality W₁ = W₂ since it is positive definite (cf. (Abdi, 2007; Horn and Johnson, 2012)). Hence, let us more concisely write the SVD of Σ as,
Σ = W Λ Wᵀ.
This representation gives us a very useful insight into the noise generated from Σ: it tells us the directions of the noise via the column vectors of W, and the variance of the noise in each direction via the singular values in Λ.
For simplicity, consider a two-dimensional multivariate Gaussian distribution, i.e. N₂(0, Σ), so there are two column vectors of W. The geometry of this distribution can be depicted by an ellipsoid, e.g. the dash contour in Fig. 3, Left (cf. (Murphy, 2012, ch. 4), (Bishop, 2006, ch. 2)). This ellipsoid is characterized by its two principal axes – the major and the minor axes. It is well-known that the two column vectors from SVD, i.e. w₁ and w₂, are unit vectors pointing in the directions of the major and minor axes of this ellipsoid, and, more importantly, the length of each axis is characterized by its corresponding singular value, i.e. λ₁ and λ₂, respectively (cf. (Murphy, 2012, ch. 4)) (recall from Theorem 1 that λ₁ ≥ λ₂). This is illustrated by Fig. 3, Left. Therefore, when considering this 2D multivariate Gaussian noise, we arrive at the following interpretation of the SVD of its covariance matrix: the noise is distributed toward the two principal directions specified by w₁ and w₂, with the variance scaled by the respective singular values, λ₁ and λ₂.
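This interpretation can be verified numerically: construct a 2-D covariance from chosen axes and variances, and the SVD recovers both. The rotation angle and variances below are our own illustrative choices.

```python
import numpy as np

# Build Sigma = W diag(variances) W^T with chosen principal directions W.
theta = np.pi / 6
W = np.array([[np.cos(theta), -np.sin(theta)],   # columns: major/minor axes
              [np.sin(theta),  np.cos(theta)]])
variances = np.array([4.0, 1.0])                 # lambda_1 >= lambda_2
Sigma = W @ np.diag(variances) @ W.T

# SVD of the covariance recovers the directions (up to sign) and variances.
U, s, _ = np.linalg.svd(Sigma)
```

The variance of the noise along a principal direction w is wᵀ Σ w, which equals the corresponding singular value.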
We can extend this interpretation to a more general dimension, and also to the other covariance matrix Ψ. Then, we have a full interpretation of MVG_{m,n}(0, Σ, Ψ) as follows. The matrix-valued noise distributed according to MVG_{m,n}(0, Σ, Ψ) has two components: the row-wise noise and the column-wise noise. The row-wise noise and the column-wise noise are characterized by the two covariance matrices, Σ and Ψ, respectively, as follows.
For the row-wise noise.

The row-wise noise is characterized by Σ.

SVD of Σ, i.e. Σ = W_Σ Λ_Σ W_Σᵀ, decomposes the row-wise noise into two components – the directions and the variances of the noise in those directions.

The directions of the row-wise noise are specified by the column vectors of W_Σ.

The variance of each row-wise-noise direction is indicated by its corresponding singular value in Λ_Σ.
For the column-wise noise.

The column-wise noise is characterized by Ψ.

SVD of Ψ, i.e. Ψ = W_Ψ Λ_Ψ W_Ψᵀ, decomposes the column-wise noise into two components – the directions and the variances of the noise in those directions.

The directions of the column-wise noise are specified by the column vectors of W_Ψ.

The variance of each column-wise-noise direction is indicated by its respective singular value in Λ_Ψ.
Since MVG_{m,n}(0, Σ, Ψ) is fully characterized by its covariances, these two components of the matrix-valued noise provide a complete interpretation of the matrix-variate Gaussian noise.
5.3. Directional Noise & MVG Mechanism
We now revisit Theorem 3. Recall that the sufficient differential-privacy condition for the MVG mechanism puts the constraint only on the singular values of Σ and Ψ. However, as discussed in the previous section, the singular values of Σ and Ψ only indicate the variance of the noise in each direction, but not the directions the variances are attributed to. In other words, Theorem 3 suggests that the MVG mechanism preserves differential privacy as long as the overall variances of the noise satisfy a certain threshold, but these variances can be attributed non-uniformly across the directions.
This claim certainly warrants further discussion, and we defer it to Sec. 6, where we present the technical detail on how to practically implement this concept of directional noise. It is important to note here only that this claim does not mean we can avoid adding noise in any particular direction altogether. On the contrary, there is still a minimum amount of noise required in every direction for the MVG mechanism to guarantee differential privacy; the noise simply can be distributed unevenly across directions (see Fig. 3, Right, for an example).
5.4. Utility Gain via Directional Noise
There are multiple ways to exploit the notion of directional noise to enhance the utility of differential privacy. Here, we present two methods: via domain knowledge, and via SVD/PCA.
5.4.1. Utilizing Domain Knowledge
This method is best described by examples. Consider first the personalized warfarin dosing problem (Fredrikson et al., 2014), which can be considered as a regression problem with the identity query, f(X) = X. In the i.i.d. noise scheme, every feature used in the warfarin dosing prediction is equally perturbed. However, domain experts may have prior knowledge that some features are more critical than others, so directional noise designed such that the more critical features are perturbed less can potentially yield better prediction performance.
Let us next consider a slightly more involved matrix-valued query: the covariance matrix, i.e. f(X) = X Xᵀ / N, where X has zero mean and the columns are the records/samples. Consider now the Netflix prize dataset (Netflix, 2009; Narayanan and Shmatikov, 2008). A popular method for solving the Netflix challenge is via low-rank approximation (Bell et al., 2008), which often involves the covariance matrix query function (Blum et al., 2005; McSherry and Mironov, 2009; Chaudhuri et al., 2012). Suppose domain experts indicate that some features are more informative than others. Since the covariance matrix has the underlying property that each row and column correspond to a single feature (Murphy, 2012), we can use this domain knowledge with directional noise by adding less noise to the rows and columns corresponding to the informative features.
In both examples, the directions chosen are among the standard basis vectors, e.g. e₁ = [1, 0, …, 0]ᵀ, which are among the simplest forms of directions.
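A minimal numerical sketch of this design freedom, under the reading of Theorem 3 that (for a fixed Ψ) only the l2 norm of the singular values of Σ⁻¹ is constrained: the per-direction precisions can be reallocated across standard-basis directions without changing that norm. All numbers below are our own illustrative choices, not values from the paper.

```python
import numpy as np

budget = 2.0          # assumed cap on ||sigma(Sigma^{-1})||_2 from Theorem 3
k = 4                 # number of features / standard-basis directions

iid = np.full(k, budget / np.sqrt(k))          # i.i.d. design: equal precision
crit = 1.7                                      # extra precision on feature 0
rest = np.sqrt((budget ** 2 - crit ** 2) / (k - 1))
directional = np.array([crit] + [rest] * (k - 1))

# Noise variance per direction is the reciprocal of the precision, so the
# critical feature is perturbed less under the directional design.
noise_var_iid = 1.0 / iid
noise_var_dir = 1.0 / directional
```

Both designs spend the same singular-value budget, yet the directional one concentrates its accuracy on the critical feature at the cost of the others.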
5.4.2. Using DifferentiallyPrivate SVD/PCA
When domain knowledge is not available, an alternative approach is to derive the directions via SVD or PCA. In this context, SVD and PCA are identical, with the main difference being that SVD is compatible with any matrix-valued query function, while PCA is best suited to the identity query. Hence, the terms may be used interchangeably in the subsequent discussion.
As we show in Sec. 5.2, SVD/PCA can decompose a matrix into its directions and variances. Hence, we can set aside a small portion of the privacy budget to derive the directions from the SVD/PCA of the query function. This is illustrated in the following example. Consider again the warfarin dosing problem (Fredrikson et al., 2014), and assume that we do not possess any prior knowledge about the predictive features. We can learn this information from the data by spending a small privacy budget on deriving differentially-private principal components (P.C.). Each P.C. can then serve as a direction and, with directional noise, we can selectively add less noise in the highly informative directions indicated by PCA.
Clearly, the directions in this example are not necessarily among the standard basis, but can be any unit vector. This example illustrates how directional noise can provide additional utility benefit even without domain knowledge. There have been many works on differentially-private SVD/PCA (Hardt and Roth, 2013a; Kapralov and Talwar, 2013; Blum et al., 2005; Dwork et al., 2014; Chaudhuri et al., 2012; Blocki et al., 2012; Hardt and Roth, 2012, 2013b), so this method is very generally applicable. Again, we reiterate that an approach similar to the SVD example applies to a general matrix-valued query function. Fig. 3, Right, illustrates this. In the illustration, the query function has two dimensions, and we have obtained the utility direction, e.g. from SVD, as represented by the green line. This can be considered the utility subspace we desire to be least perturbed. The many small circles in the illustration represent how the i.i.d. noise and directional noise are distributed under the 2D multivariate Gaussian distribution. Clearly, directional noise can reduce the perturbation experienced in the utility directions.
In the next section, we discuss how to implement directional noise with the MVG mechanism in practice and propose two simple algorithms for two types of directional noise.
6. Practical Implementation
The differential privacy condition in Theorem 3, even along with the notion of directional noise in the previous section, still leaves a large design space for the MVG mechanism. In this section, we present two simple algorithms that implement the MVG mechanism with two types of directional noise appropriate for a wide range of real-world applications. We conclude the section with a discussion of a sampling algorithm for the MVG distribution.
As discussed in Sec. 5.3, Theorem 3 states that the MVG mechanism satisfies differential privacy as long as the singular values of the row-wise and column-wise noise covariances satisfy the sufficient condition. This provides tremendous flexibility in the choice of the directions of the noise. First, we notice from the sufficient condition in Theorem 3 that the singular values of the two covariances are decoupled, i.e. they can be designed independently so long as, when combined, they satisfy the specified condition. Hence, the row-wise noise and the column-wise noise can be considered as the two modes of noise in the MVG mechanism. By this terminology, we discuss two types of directional noise: the unimodal and the equi-modal directional noise.
6.1. Unimodal Directional Noise
For the unimodal directional noise, we select one mode of the noise to be directional, whereas the other mode is set to be i.i.d. For this discussion, we assume that the row-wise noise is directional, while the column-wise noise is i.i.d.; the opposite case can be readily treated with a similar analysis.
We note that, apart from simplifying the practical implementation discussed shortly, this type of directional noise is appropriate for many applications. For example, for the identity query, we may not possess any prior knowledge on the quality of each sample, so the best strategy would be to use i.i.d. column-wise noise (recall that samples are the column vectors).
Formally, the unimodal directional noise sets the column-wise noise covariance to the identity matrix. This, consequently, reduces the design space for the MVG mechanism with directional noise to only that of the row-wise noise covariance. Next, consider the left side of Eq. (1), and we have
(2) 
If we square both sides of the sufficient condition and rearrange, we obtain a form of the condition in which the row-wise noise in each direction is decoupled:
(3) 
This form gives a very intuitive interpretation of the directional noise. First, we note that, to have small noise in a given direction, the noise variance in that direction has to be small (cf. Sec. 5.2). However, the sum of the inverse variances over all directions, which would hence be large, is limited by the quantity on the right side of Eq. (3). This, in fact, explains why, even with directional noise, we still need to add noise in every direction to guarantee differential privacy: if we set the noise in one direction to zero, its inverse variance becomes unbounded, which immediately violates the sufficient condition in Eq. (3).
From Eq. (3), each summand is the inverse of the noise variance in one direction, so we may think of it as a precision measure of the query answer in that direction. The intuition is that the higher this value, the lower the noise added in that direction and, hence, the more precise the query value in that direction. From this description, the constraint in Eq. (3) can aptly be called the precision budget, and we immediately have the following theorem.
Theorem 4.
For the MVG mechanism with unimodal directional noise, the precision budget is the quantity on the right side of Eq. (3).
Finally, the remaining task is to determine the directions of the noise and form the row-wise covariance accordingly. To do so systematically, we first decompose the covariance by SVD. This decomposition represents it by two components: the directions of the row-wise noise, indicated by the matrix of singular vectors, and the variances of the noise, indicated by the singular values. Since the precision budget only constrains the singular values, this decomposition allows us to freely choose any unitary matrix of singular vectors, such that each of its columns indicates an independent direction of the noise. Therefore, we present the following simple approach to design the MVG mechanism with unimodal directional noise: under a given precision budget, allocate more precision to the directions of more importance.
Alg. 1 formalizes this procedure. It takes as inputs, among other parameters, the precision allocation strategy and the noise directions. The precision allocation strategy is a vector whose elements correspond to the importance of the directions indicated by the orthonormal columns of the direction matrix; the higher the value of an element, the more important its direction. Moreover, the algorithm enforces a normalization constraint on this vector to ensure that the precision budget is not overspent. The algorithm proceeds as follows. First, compute the quantities that determine the precision budget, and then the budget itself. Second, assign precision to each direction based on the precision allocation strategy. Third, derive the variance of the noise in each direction accordingly. Then, compute the row-wise covariance from the noise variances and directions, and draw a matrix-valued noise sample from the resulting MVG distribution. Finally, output the query answer with the additive matrix noise.
We make a remark here about choosing the directions of the noise. As discussed in Sec. 5, any orthonormal set of vectors can serve as the directions. The simplest instance is the set of standard basis vectors.
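The procedure above can be sketched in a few lines of NumPy. This is a simplified sketch, not the paper's exact Alg. 1: the precision budget is taken as a given input (its closed form is stated in Theorem 4), the directions are the columns of an orthonormal matrix, and the function name and signature are our own.

```python
import numpy as np

def mvg_unimodal(query_answer, theta, W, precision_budget, rng=None):
    """Sketch of the unimodal design: row-wise directional noise,
    i.i.d. column-wise noise.

    query_answer     : (m, n) matrix-valued query output f(X)
    theta            : length-m precision allocation (positive, sums to <= 1)
    W                : (m, m) matrix with orthonormal columns (the directions)
    precision_budget : total precision (given by Theorem 4; assumed known)
    """
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta, dtype=float)
    m, n = query_answer.shape
    # Every direction needs some noise (cf. Eq. (3)), and the allocation
    # must not overspend the precision budget.
    assert np.all(theta > 0) and theta.sum() <= 1 + 1e-9
    precision = theta * precision_budget         # precision per direction
    variances = 1.0 / precision                  # noise variance per direction
    # Row-wise covariance Sigma = W diag(variances) W^T, so
    # B = W diag(std) satisfies B B^T = Sigma.
    B = W @ np.diag(np.sqrt(variances))
    Z = B @ rng.standard_normal((m, n))          # Z ~ MVG(0, Sigma, I)
    return query_answer + Z
```

Drawing the noise as `B @ N` with i.i.d. standard normal `N` is the affine-transform sampler discussed in Sec. 6.3, specialized to an identity column-wise covariance.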
6.2. Equi-Modal Directional Noise
Next, we consider the type of directional noise in which the row-wise and column-wise noise are distributed identically, which we call the equi-modal directional noise. We recommend this type of directional noise for symmetric query functions, i.e. those whose output matrix equals its transpose. This covers a wide range of query functions including the covariance matrix (Dwork et al., 2014; Blum et al., 2005; Chaudhuri et al., 2012), the kernel matrix (Kung, 2014), the adjacency matrix of an undirected graph (Godsil and Royle, 2013), and the Laplacian matrix (Godsil and Royle, 2013). The motivation for this recommendation is that, for symmetric query functions, any prior information about the rows applies similarly to the columns, so it is reasonable to use identical row-wise and column-wise noise.
Formally, this type of directional noise imposes that the row-wise and column-wise noise covariances are identical. Following a derivation similar to the unimodal case, we have the following precision budget.
Theorem 5.
For the MVG mechanism with equi-modal directional noise, the precision budget takes a form analogous to that of Theorem 4.
6.3. Sampling from the MVG Distribution
One remaining question on the practical implementation of the MVG mechanism is how to efficiently draw the noise from the matrix-variate Gaussian distribution. One approach to implementing such a sampler is via an affine transformation of samples drawn i.i.d. from the standard normal distribution. The transformation is described by the following lemma (Dawid, 1981).
Lemma 4.
Let a matrix-valued random variable have its elements drawn i.i.d. from the standard normal distribution. Then, adding the mean matrix to its affine transformation by the factor matrices of the row-wise and column-wise covariances yields a sample distributed according to the corresponding matrix-variate Gaussian distribution.
This transformation consequently allows the conversion between samples drawn i.i.d. from the standard normal distribution and a sample drawn from the MVG distribution. To derive the transformation matrices from the given row-wise and column-wise covariances, we solve two matrix equations, each factorizing a covariance into the product of a matrix and its transpose; the solutions can be acquired readily via the Cholesky decomposition or SVD (cf. (Horn and Johnson, 2012)). We summarize the steps for this implementation here using SVD:
(1) Draw i.i.d. samples from the standard normal distribution, and form them into a matrix of the same size as the query output.
(2) Derive the two transformation matrices from the SVDs of the row-wise and column-wise covariances, respectively.
(3) Compute the sample by applying the affine transformation of step (2) to the matrix from step (1).
The complexity of this method depends on that of the standard normal sampler used, plus an additional cost for the SVDs (Golub and Loan, 1996), which scales with the dimensions of the matrix-valued query output rather than with the number of samples or records in the database. The memory needed is on the order of the three matrices required in step (3).
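The three steps above can be sketched as follows, factoring each covariance into a matrix times its transpose via SVD, per Lemma 4. The function name and signature are illustrative, not the paper's.

```python
import numpy as np

def sample_mvg(M, Sigma, Psi, rng=None):
    """Draw one sample from the matrix-variate Gaussian MVG(M, Sigma, Psi)
    via the affine transform of i.i.d. standard normal entries (Lemma 4).

    M     : (m, n) mean matrix
    Sigma : (m, m) row-wise covariance
    Psi   : (n, n) column-wise covariance
    """
    rng = np.random.default_rng(rng)
    m, n = M.shape
    # Step (2): factor Sigma = B_S B_S^T and Psi = B_P B_P^T via SVD.
    U_s, s_s, _ = np.linalg.svd(Sigma)
    U_p, s_p, _ = np.linalg.svd(Psi)
    B_S = U_s @ np.diag(np.sqrt(s_s))
    B_P = U_p @ np.diag(np.sqrt(s_p))
    # Step (1): i.i.d. standard normal matrix of the query-output size.
    N = rng.standard_normal((m, n))
    # Step (3): affine transformation.
    return M + B_S @ N @ B_P.T
```

A Cholesky factorization could replace the SVDs when the covariances are strictly positive definite; the SVD route also handles rank-deficient covariances.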
7. Experimental Setups
We evaluate the proposed MVG mechanism on three experimental setups and datasets; Table 2 summarizes them. In all experiments, 100 trials are carried out, and the average and 95% confidence interval are reported.
7.1. Experiment I: Regression
7.1.1. Task and Dataset.
The first experiment considers a regression application on the Liver Disorders dataset (Lichman, 2013; McDermott and Forsyth, 2016), which contains 5 features from the blood samples of 345 patients. We hold out the samples from 97 patients for testing, so the private dataset contains 248 patients. Following suggestions by Forsyth and Rada (Forsyth and Rada, 1986), we use these features to predict the average daily alcohol consumption. All features and teacher values are normalized to a bounded range.
7.1.2. Query Function and Evaluation Metric.
We perform regression in a differentially-private manner via the identity query. Since regression involves the teacher values, we treat them as an additional feature, so the query size becomes 6 × 248 (cf. Table 2). We use kernel ridge regression (KRR) (Kung, 2014; Pedregosa et al., 2011) as the regressor, and the root-mean-square error (RMSE) (Murphy, 2012; Kung, 2014) as the evaluation metric.
7.1.3. MVG Mechanism Design.
As discussed in Sec. 6.1, Alg. 1 is appropriate for the identity query, so we employ it for this experiment. The sensitivity of this query is derived in Appendix B. To identify the informative directions for allocating the precision budget, we implement both methods discussed in Sec. 5.4 as follows.
(a) For the method using domain knowledge (denoted MVG1), we refer to Alatalo et al. (Alatalo et al., 2008), which indicates that alanine aminotransferase (ALT) is the most indicative feature for predicting alcohol consumption behavior. Additionally, from our prior experience with regression problems, we anticipate that the teacher value (Y) is another important feature to allocate more precision budget to. With this setup, we use the standard basis vectors as the directions (cf. Sec. 5.4) and employ the following binary precision allocation strategy.
- Allocate a given percentage of the precision budget to the two important features (ALT and Y) in equal amounts.
- Allocate the rest of the precision budget equally among the remaining features.
We vary this percentage and report the best results. (In a real-world deployment, this parameter selection process should also be made private (Chaudhuri and Vinterbo, 2013).)
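For illustration, the binary allocation strategy can be expressed as a small helper. The function name is ours, and the feature indices in the usage note are hypothetical placements of ALT and Y in the feature vector.

```python
import numpy as np

def binary_allocation(num_features, important_idx, tau):
    """Binary precision allocation: a fraction `tau` of the precision
    budget is split equally among the important directions; the
    remaining fraction is split equally among the other directions."""
    n_imp = len(important_idx)
    theta = np.full(num_features, (1.0 - tau) / (num_features - n_imp))
    theta[list(important_idx)] = tau / n_imp
    return theta        # elements sum to 1: the budget is fully allocated
```

For instance, with 6 features and the two important ones at (hypothetical) indices 4 and 5, `binary_allocation(6, [4, 5], 0.7)` assigns 0.35 to each important direction and 0.075 to each of the remaining four.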
(b) For the method using differentially-private SVD/PCA (denoted MVG2), given the total privacy budget reported in Sec. 8, we spend a portion of it on the derivation of the two most informative directions via the differentially-private PCA algorithm in (Blum et al., 2005). We specify the first two principal components as the indicative features for a fair comparison with the method using domain knowledge. The remaining budget is then used for Alg. 1. Again, for a fair comparison, we use the same binary precision allocation strategy in Alg. 1 for MVG2.
7.2. Experiment II: 1st Principal Component
Table 2. Summary of the experimental setups.

|                      | Exp. I                                        | Exp. II                              | Exp. III                                    |
|----------------------|-----------------------------------------------|--------------------------------------|---------------------------------------------|
| Task                 | Regression                                    | 1st P.C.                             | Covariance estimation                       |
| Dataset              | Liver (Lichman, 2013; Forsyth and Rada, 1986) | Movement (Bacciu et al., 2014)       | CTG (Lichman, 2013; de Campos et al., 2000) |
| # samples            | 248                                           | 2,021                                | 2,126                                       |
| # features           | 6                                             | 4                                    | 21                                          |
| Query                | Identity                                      | Covariance                           | Identity                                    |
| Query size           | 6 × 248                                       | 4 × 4                                | 21 × 2,126                                  |
| Eval. metric         | RMSE                                          | Error in captured variance (Eq. (4)) | RSS (Eq. (5))                               |
| MVG Alg.             | 1                                             | 2                                    | 1                                           |
| Source of directions | Domain knowledge (Alatalo et al., 2008) / PCA (Blum et al., 2005) | Data collection setup (Bacciu et al., 2014) | Domain knowledge (Santos et al., 2006) |
7.2.1. Task and Dataset.
The second experiment considers the problem of determining the first principal component (1st P.C.) via principal component analysis (PCA). This is one of the most popular problems in machine learning and differential privacy. We consider only the first principal component for two reasons. First, many prior works on differentially-private PCA consider this problem or the similar problem of deriving a few major P.C.s (cf. (Chaudhuri et al., 2012; Blocki et al., 2012; Dwork et al., 2014)), which allows us to compare our approach to the state-of-the-art approaches for a well-studied problem. Second, in practice, this method for deriving the 1st P.C. may be applied iteratively to derive the rest of the principal components (cf. (Jolliffe, 1986)).
We use the Movement Prediction via RSS (Movement) dataset (Bacciu et al., 2014), which consists of radio signal strength measurements from 4 sensor anchors (ANC{0–3}), corresponding to the 4 features, over 2,021 movement samples. The feature data all lie in a bounded range.
7.2.2. Query Function and Evaluation Metric.
We consider the covariance matrix query and use SVD to derive the 1st P.C. from it. Hence, the query size is 4 × 4. We adopt the quality metric commonly used for the 1st P.C. (Kung, 2014), also used by Dwork et al. (Dwork et al., 2014): the captured variance. For a given P.C., the captured variance on the covariance matrix is the variance of the data projected onto that P.C. To be consistent with the other experiments, we report the absolute error in the captured variance relative to its maximum. It is well-established that this maximum is equal to the largest eigenvalue of the covariance matrix (cf. (Horn and Johnson, 2012, Theorem 4.2.2), (Rao, 1964)). Hence, the metric in Eq. (4) is the gap between the largest eigenvalue of the covariance matrix and the variance captured by the derived P.C. For the ideal, non-private case, the error would clearly be zero.
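One consistent reading of this metric can be sketched as follows. The helper name is ours; the largest eigenvalue is obtained with `eigvalsh`, which applies since the covariance matrix is symmetric.

```python
import numpy as np

def captured_variance_error(S, v):
    """Error of a candidate first P.C. `v` on a symmetric covariance
    matrix S: the gap between the largest eigenvalue of S (the maximum
    capturable variance) and the variance captured by v."""
    v = v / np.linalg.norm(v)                # ensure v is a unit vector
    lam_max = np.linalg.eigvalsh(S)[-1]      # eigvalsh sorts ascending
    return lam_max - v @ S @ v               # zero iff v is an optimal P.C.
```

For example, on a diagonal covariance the error vanishes when `v` is the axis of largest variance and grows as `v` rotates away from it.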
7.2.3. MVG Mechanism Design.
As discussed in Sec. 6.2, Alg. 2 is appropriate for the covariance query, so we employ it for this experiment. The sensitivity of this query is derived in Appendix B. To identify the informative directions for allocating the precision budget, we inspect the data collection setup described in (Bacciu et al., 2014), and treat two of the four anchors (ANC0 and ANC3) as the more informative ones due to their proximity to the movement path. Hence, we use the standard basis vectors as the directions (cf. Sec. 6) and allocate more precision budget to these two features using the same strategy as in Exp. I.
7.3. Experiment III: Covariance Estimation
7.3.1. Task and Dataset.
The third experiment considers a problem similar to Exp. II but with a different flavor: estimating the covariance matrix from the perturbed database. This differs from Exp. II in three ways. First, for covariance estimation, we are interested in every P.C. Second, as mentioned in Exp. II, many previous works do not consider every P.C., so the prior works for comparison are different. Third, to give a different taste of our approach, we consider the method of input perturbation for estimating the covariance, i.e. we query the noisy database and use it to compute the covariance. We use the Cardiotocography (CTG) dataset (Lichman, 2013; de Campos et al., 2000), which consists of 21 features from 2,126 fetal samples, each bounded within a fixed range.
7.3.2. Query Function and Evaluation Metric.
We consider the method via input perturbation, so we use the identity query; the query size is 21 × 2,126. We adopt the captured variance as the quality metric as in Exp. II, but since we are interested in every P.C., we consider the residual sum of squares (RSS) (Friedman et al., 2001) over all P.C.s. This is similar to the total residual variance used by Dwork et al. ((Dwork et al., 2014, p. 5)). Formally, the covariance estimate is computed from the perturbed database, and the set of P.C.s is derived from this estimate. The RSS in Eq. (5) then sums, over all P.C.s, the squared gap between each eigenvalue of the true covariance matrix (cf. Exp. II) and the variance captured by the corresponding derived P.C. Clearly, in the non-private case, the RSS is zero.
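Under one consistent reading of Eq. (5), the RSS can be sketched as below. The function name is ours, and the P.C.s of the perturbed covariance are taken as its right singular vectors (ordered by decreasing singular value).

```python
import numpy as np

def rss(S_true, S_perturbed):
    """Residual sum of squares between the eigenvalues of the true
    covariance S_true and the variances captured on S_true by the
    principal components of the perturbed covariance S_perturbed."""
    lam = np.sort(np.linalg.eigvalsh(S_true))[::-1]   # eigenvalues, descending
    _, _, Vt = np.linalg.svd(S_perturbed)             # rows of Vt are the P.C.s
    captured = np.array([v @ S_true @ v for v in Vt]) # captured variances
    return float(np.sum((lam - captured) ** 2))
```

When the perturbation is zero, the derived P.C.s capture exactly the eigenvalues of the true covariance and the RSS vanishes; any rotation of the P.C.s away from the true eigenvectors increases it.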
7.3.3. MVG Mechanism Design.
Since we consider the identity query, we employ Alg. 1 for this experiment. The sensitivity is derived in Appendix B. To identify the informative directions to allocate the precision budget to, we refer to the domain knowledge from Costa Santos et al. (Santos et al., 2006), which identifies three features as most informative, viz. fetal heart rate (FHR), % time with abnormal short-term variability (ASV), and % time with abnormal long-term variability (ALV). Hence, we use the standard basis vectors as the directions and allocate more precision budget to these three features using the same strategy as in Exp. I.
7.4. Comparison to Previous Works
Since our approach falls into the category of basic mechanisms, we compare our work to the four prior state-of-the-art basic mechanisms discussed in Sec. 2.1, namely, the Laplace mechanism, the Gaussian mechanism, the Exponential mechanism, and the JL transform method.
For Exp. I and III, since we consider the identity query, the four previous works for comparison are by Dwork et al. (Dwork et al., 2006b), Dwork et al. (Dwork et al., 2006a), Blum et al. (Blum et al., 2013), and Upadhyay (Upadhyay, 2014a), for the four basic mechanisms, respectively.
For Exp. II, we consider the 1st P.C. As this problem has been well-investigated, we compare our approach to state-of-the-art algorithms specially designed for it. The four algorithms based on the four prior basic mechanisms are, respectively: Dwork et al. (Dwork et al., 2006b), Dwork et al. (Dwork et al., 2014), Chaudhuri et al. (Chaudhuri et al., 2012), and Blocki et al. (Blocki et al., 2012). We note that these four algorithms are designed and optimized specifically for this particular application, so they utilize the positive-semidefinite (PSD) nature of the matrix query. On the other hand, the MVG mechanism used here is generally applicable to matrix queries beyond this particular application and makes no assumption about the PSD structure of the matrix query. In other words, we intentionally give a favorable edge to the compared methods to show that, despite this handicap, the MVG mechanism can still perform comparably well.
For all previous works, we use the parameter values as suggested by the authors of the method, and vary the free variable before reporting the best performance.
Finally, we recognize that some of these prior works provide a privacy guarantee different from ours, namely, the stronger ε-differential privacy. Nevertheless, we present these prior works for comprehensive coverage of prior basic mechanisms, and we keep this difference in mind when discussing the results.
8. Experimental Results
Table 3, Table 4, and Table 5 report the experimental results for Experiment I, II, and III, respectively. The performance shown is an average over 100 trials, along with the 95% confidence interval.
Method  RMSE ()  

Nonprivate      1.226 
Random guess      
MVG1 (Alg. 1 + knowledge in (Alatalo et al., 2008))  1.  
MVG2 (Alg. 1 + DPPCA (Blum et al., 2005))  1.  
Gaussian (Dwork et al. (Dwork et al., 2006a))  1.  
JL transform (Upadhyay (Upadhyay, 2014a))  1.  
Laplace (Dwork et al. (Dwork et al., 2006b))  1.  0  
Exponential (Blum et al. (Blum et al., 2013))  1.  0 
8.1. Experiment I: Regression
Table 3 reports the results for Exp. I. Here are the key observations.
- Compared to the non-private baseline, the best MVG mechanism (MVG1) yields similar performance (difference of .004 in RMSE).
- Compared to other basic mechanisms, i.e. the Gaussian mechanism and the JL transform, the best MVG mechanism (MVG1) has better utility (by .003 and .0006 in RMSE, respectively) with the same privacy guarantee.
- Compared to other basic mechanisms, i.e. the Laplace and Exponential mechanisms, the best MVG mechanism (MVG1) provides significantly better utility (~150%) with a slightly weaker differential privacy guarantee.
- Even when some privacy budget is spent on deriving the directions via PCA (Blum et al., 2005) (MVG2), the MVG mechanism still yields the best performance among the non-MVG methods.
Overall, the results from regression show the promise of the MVG mechanism: our approach outperforms all other basic mechanisms. Although it provides a weaker privacy guarantee than some of them, it can provide considerably more utility (~150%). As advocated by Duchi et al. (Duchi et al., 2013) and Fienberg et al. (Fienberg et al., 2010), this tradeoff could be attractive in some settings, e.g. critical medical or emergency situations.
8.2. Experiment II: 1st Principal Component
Method  Err. (  

Nonprivate      
Random guess      
MVG (Alg. 2)  1.  
Gaussian (Dwork et al. (Dwork et al., 2014))  1.  
JL transform (Blocki et al. (Blocki et al., 2012))  1.  
Laplace (Dwork et al. (Dwork et al., 2006b))  1.  0  
Exponential (Chaudhuri et al. (Chaudhuri et al., 2012))  1.  0 
Table 4 reports the results for Exp. II. Here are the key observations.
- Compared to the non-private baseline, the MVG mechanism has a reasonably small error.
- Compared to other basic mechanisms, i.e. the Gaussian mechanism and the JL transform, the MVG mechanism provides better utility with the same privacy guarantee (.01 and .0001 smaller error, respectively).
- Compared to other basic mechanisms, i.e. the Laplace and Exponential mechanisms, the MVG mechanism yields higher utility with a slightly weaker differential privacy guarantee (.03 and .01 smaller error, respectively).
Overall, the MVG mechanism provides the best utility, though we admit that, with a weaker privacy guarantee, it does not provide a significant utility increase over the Exponential mechanism by Chaudhuri et al. (Chaudhuri et al., 2012). Nevertheless, that method (Chaudhuri et al., 2012) utilizes the positive-semidefinite (PSD) characteristic of the matrix query and is known to be among the best algorithms for this specific task, whereas the MVG mechanism used in the experiment is more general. Furthermore, we show in the full version of this work that, when utilizing the PSD characteristic of the query function, the MVG mechanism can significantly outperform all three methods being compared here (Chanyaswad et al., 2018). Again, in some applications, this tradeoff of weaker privacy for better utility might be desirable (Fienberg et al., 2010; Duchi et al., 2013), and the MVG mechanism clearly provides the best tradeoff.
8.3. Experiment III: Covariance Estimation
Method  RSS (  

Nonprivate      
Random guess      
MVG (Alg. 1)  1.  
Gaussian (Dwork et al. (Dwork et al., 2006a))  1.  
JL transform (Upadhyay (Upadhyay, 2014a))  1.  
Laplace (Dwork et al. (Dwork et al., 2006b))  1.  0  
Exponential (Blum et al. (Blum et al., 2013))  1.  0 
Table 5 reports the results for Exp. III. Here are the key observations.
- Compared to the non-private baseline, the MVG mechanism has a very small RSS error.
- Compared to other basic mechanisms, i.e. the Gaussian mechanism and the JL transform, the MVG mechanism provides better utility with the same privacy guarantee (.004 and .001 smaller RSS error, respectively).
- Compared to other basic mechanisms, i.e. the Laplace and Exponential mechanisms, the MVG mechanism gives better utility with a slightly weaker differential privacy guarantee (.005 and .006 smaller RSS error, respectively).
Overall, the MVG mechanism provides the best utility (smallest error). Even when compared to the methods with a stronger privacy guarantee, the MVG mechanism can provide much higher utility. Again, we point out that in some settings the tradeoff of weaker privacy for better utility might be favorable (Fienberg et al., 2010; Duchi et al., 2013), and our approach provides the best tradeoff.
9. Discussion and Future Works
9.1. Effect of Directional Noise on Utility
In Sec. 5.4, we discuss how the choice of noise directions can affect the utility. Here, we investigate this effect on the obtained utility in the three experiments. Fig. 4 depicts our results.
Fig. 4, Left, shows the direction comparison from Exp. I. We compare four choices of directions. Direction (a), which uses the domain knowledge (ALT) and the teacher label (Y), yields the best result when compared to: (b) using only the domain knowledge (ALT), (c) using only the teacher label (Y), and (d) using an arbitrary extra feature (ALT+Y+AST).
Fig. 4, Middle, shows the direction comparison from Exp. II. We compare four choices of directions. Direction (a), which makes full use of the prior information (ANC0 and ANC3), performs best when compared to: (b), (c) using only partial prior information (ANC0 or ANC3, respectively), and (d) having the wrong priors completely (ANC1 and ANC2).
Fig. 4, Right, shows the comparison from Exp. III. We compare three choices of directions. Direction (a), which uses the domain knowledge (FHR, ASV, ALV), gives the best performance compared to: (b) using completely wrong priors (all other features), and (c) having no prior at all (i.i.d. noise).
The results illustrate three key points. First, as seen in all three plots in Fig. 4, the choice of directions has an effect on the performance. Second, as indicated by Fig. 4, Right, directional noise performs much better than i.i.d. noise. Third, as seen in Fig. 4, Left and Middle, there may be multiple instances of directional noise that can lead to comparable performance. The last observation shows the promise of the notion of directional noise, as it signals the robustness of the approach.
9.2. Runtime Comparison
Method  Runtime (ms)  

Exp. I  Exp. II  Exp. III  
MVG  36.2  1.8  
Gaussian ((Dwork et al., 2006a), (Dwork et al., 2014))  1.0  0.4  10.0 
JL transform ((Upadhyay, 2014a), (Blocki et al., 2012))  192.7  637.4  
Laplace ((Dwork et al., 2006b))  0.4  0.5  8.0 
Exponential ((Blum et al., 2013), (Chaudhuri et al., 2012))  627.2  2,112.7 
Next, we present an empirical runtime comparison between the MVG mechanism and the compared methods in Table 6. The experiments are run on an AMD Opteron 6320 Processor with 4 cores using Python 2.7, along with the NumPy (Oliphant, 2006), SciPy (Jones et al., 2001), Scikit-learn (Pedregosa et al., 2011), and emcee (Foreman-Mackey et al., 2013) packages. The results show that, although the MVG mechanism runs much slower than the Gaussian and Laplace mechanisms, it runs much faster than the JL transform and the Exponential mechanism.
Both observations are as expected. First, the MVG mechanism runs slower than the i.i.d.-based Gaussian and Laplace mechanisms because it incurs the computational overhead of deriving the non-i.i.d. noise. The amount of overhead depends on the size of the query output, as discussed in Sec. 6.3. Second, the MVG mechanism runs much faster than the JL transform because, in addition to requiring an SVD to modify the singular values of the matrix query and i.i.d. Gaussian samples similar to the MVG mechanism, the JL transform incurs a runtime overhead for the construction of its projection matrix, which involves multiple matrix multiplications. Finally, the MVG mechanism runs much faster than the Exponential mechanism since drawing samples from the distribution defined by the Exponential mechanism may not be efficient.
9.3. Directional Noise as a Generalized Subspace Projection
Directional noise provides utility gain by adding less noise in useful directions and more in the others. This has a connection to subspace projection, or dimensionality reduction, which simply removes the non-useful directions. The main difference between the two is that, with directional noise, the non-useful directions are kept, albeit highly perturbed. Despite the heavy perturbation, these directions may still contribute to the utility performance.
We test this hypothesis by running two additional regression experiments (Exp. I) as follows. Given the same two important features (ALT & Y), we use the Gaussian mechanism (Dwork et al., 2006a) and the JL transform method (Upadhyay, 2014a) to perform the regression task using only these two features. Under the same privacy guarantee, the resulting RMSEs of both methods are significantly worse than that of the MVG mechanism. Specifically, by incorporating all features with directional noise via the MVG mechanism, we can achieve an over-150% gain in utility over the dimensionality-reduction alternatives.
9.4. Exploiting Structural Characteristics of the Matrices
In this work, we derive the sufficient condition for the MVG mechanism without making any assumption on the query function. However, many practical matrix-valued query functions have a specific structure, e.g. the covariance matrix is positive semidefinite (PSD) (Horn and Johnson, 2012), and the Laplacian matrix (Godsil and Royle, 2013) is symmetric. Therefore, future work may look into exploiting these intrinsic characteristics of the matrices in the derivation of the differential-privacy condition for the MVG mechanism.
9.5. Precision Allocation Strategy Design
Alg. 1 and Alg. 2 take as an input the precision allocation strategy vector, whose elements are chosen to reflect how informative or useful each direction is. The design of this vector to optimize the utility gain from directional noise is an interesting topic for future research. In our experiments, for example, our prior knowledge only tells us whether each direction is highly informative or not, without any finer granularity on its level of usefulness. Hence, we adopt the binary allocation strategy, i.e. give most of the precision budget to the useful directions in equal amounts, and give the rest of the budget to the other directions in equal amounts. An interesting direction for future work is to investigate instances in which the knowledge about the directions is more granular.
10. Conclusion
We study the matrix-valued query function in differential privacy, and present the MVG mechanism, which is designed specifically for this type of query function. We prove that the MVG mechanism guarantees (ε, δ)-differential privacy and, consequently, introduce the novel concept of directional noise, which can be used to reduce the impact of the noise on the utility of the query answer. Finally, we evaluate our approach experimentally on three matrix-valued query functions over three privacy-sensitive datasets, and the results show that our approach provides utility improvements over existing methods in all of the experiments.
Acknowledgement
The authors would like to thank the reviewers for their valuable feedback that helped improve the paper. This work was supported in part by the National Science Foundation (NSF) under Grants CNS1553437, CCF1617286, and CNS1702808; an Army Research Office YIP Award; and faculty research awards from Google, Cisco, Intel, and IBM.
References
 Abdi (2007) Herve Abdi. 2007. Singular value decomposition (SVD) and generalized singular value decomposition. Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage (2007), 907–912.
 Acs et al. (2012) Gergely Acs, Claude Castelluccia, and Rui Chen. 2012. Differentially private histogram publishing through lossy compression. In ICDM. IEEE.
 Alatalo et al. (2008) P. I. Alatalo, H. M. Koivisto, J. P. Hietala, K. S. Puukka, R. Bloigu, and O. J. Niemela. 2008. Effect of moderate alcohol consumption on liver enzymes increases with increasing body mass index. AJCN 88, 4 (Oct 2008), 1097–1103.
 Bacciu et al. (2014) Davide Bacciu, Paolo Barsocchi, Stefano Chessa, Claudio Gallicchio, and Alessio Micheli. 2014. An experimental characterization of reservoir computing in ambient assisted living applications. Neural Computing and Applications 24, 6 (2014), 1451–1464.
 Bell et al. (2008) Robert M. Bell, Yehuda Koren, and Chris Volinsky. 2008. The BellKor 2008 solution to the Netflix Prize. Statistics Research Department at AT&T Research (2008).
 Bishop (2006) Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer (2006).
 Blocki et al. (2012) Jeremiah Blocki, Avrim Blum, Anupam Datta, and Or Sheffet. 2012. The Johnson-Lindenstrauss transform itself preserves differential privacy. In FOCS. IEEE.
 Blum et al. (2005) Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. 2005. Practical privacy: the SuLQ framework. In PODS. ACM.
 Blum et al. (2013) Avrim Blum, Katrina Ligett, and Aaron Roth. 2013. A learning theory approach to non-interactive database privacy. JACM 60, 2 (2013), 12.
 Blum and Roth (2013) Avrim Blum and Aaron Roth. 2013. Fast private data release algorithms for sparse queries. Springer, 395–410.
 Chanyaswad et al. (2018) Thee Chanyaswad, Alex Dytso, H. Vincent Poor, and Prateek Mittal. 2018. A Differential Privacy Mechanism Design Under Matrix-Valued Query. arXiv preprint arXiv:1802.10077 (2018).
 Chanyaswad et al. (2017) Thee Chanyaswad, Changchang Liu, and Prateek Mittal. 2017. Coupling Random Orthonormal Projection with Gaussian Generative Model for Non-Interactive Private Data Release. arXiv:1709.00054 (2017).
 Chaudhuri et al. (2012) Kamalika Chaudhuri, Anand Sarwate, and Kaushik Sinha. 2012. Near-optimal differentially private principal components. In NIPS.
 Chaudhuri and Vinterbo (2013) Kamalika Chaudhuri and Staal A. Vinterbo. 2013. A stability-based validation procedure for differentially private machine learning. In NIPS.
 Cormode et al. (2012) Graham Cormode, Cecilia Procopiuc, Divesh Srivastava, Entong Shen, and Ting Yu. 2012. Differentially private spatial decompositions. In ICDE. IEEE.
 Dawid (1981) A. Philip Dawid. 1981. Some matrixvariate distribution theory: notational considerations and a Bayesian application. Biometrika 68, 1 (1981), 265–274.

 Day and Li (2015) Wei-Yen Day and Ninghui Li. 2015. Differentially private publishing of high-dimensional data using sensitivity control. In CCS. ACM.
 de Campos et al. (2000) Diogo Ayres de Campos, Joao Bernardes, Antonio Garrido, Joaquim Marques de Sa, and Luis Pereira-Leite. 2000. SisPorto 2.0: a program for automated analysis of cardiotocograms. Journal of Maternal-Fetal Medicine 9, 5 (2000), 311–318.
 Duchi et al. (2013) John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local privacy and statistical minimax rates. In FOCS. IEEE.
 Dutilleul (1999) Pierre Dutilleul. 1999. The MLE algorithm for the matrix normal distribution. Journal of statistical computation and simulation 64, 2 (1999), 105–123.
 Dwork (2006) Cynthia Dwork. 2006. Differential privacy. Springer, 1–12.
 Dwork (2008) Cynthia Dwork. 2008. Differential privacy: A survey of results. In TAMC. Springer.
 Dwork et al. (2006a) Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. 2006a. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT. Springer.
 Dwork et al. (2006b) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006b. Calibrating noise to sensitivity in private data analysis. In TCC. Springer.
 Dwork and Roth (2014) Cynthia Dwork and Aaron Roth. 2014. The algorithmic foundations of differential privacy. FnT-TCS 9, 3–4 (2014), 211–407.
 Dwork et al. (2010) Cynthia Dwork, Guy N. Rothblum, and Salil Vadhan. 2010. Boosting and differential privacy. In FOCS. IEEE.
 Dwork et al. (2014) Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, and Li Zhang. 2014. Analyze gauss: optimal bounds for privacypreserving principal component analysis. In STOC. ACM.

 Fienberg et al. (2010) Stephen E. Fienberg, Alessandro Rinaldo, and Xiaolin Yang. 2010. Differential privacy and the risk-utility tradeoff for multidimensional contingency tables. In PSD. Springer.
 Foreman-Mackey et al. (2013) Daniel Foreman-Mackey, David W. Hogg, Dustin Lang, and Jonathan Goodman. 2013. emcee: the MCMC hammer. Publications of the Astronomical Society of the Pacific 125, 925 (2013), 306.
 Forsyth and Rada (1986) Richard Forsyth and Roy Rada. 1986. Machine learning: applications in expert systems and information retrieval. Halsted Press.
 Fredrikson et al. (2014) Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. 2014. Privacy in Pharmacogenetics: An EndtoEnd Case Study of Personalized Warfarin Dosing.. In USENIX Security.
 Friedman et al. (2001) Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2001. The elements of statistical learning. Vol. 1. Springer series in statistics New York.
 Gilks et al. (1995) Walter R. Gilks, Sylvia Richardson, and David Spiegelhalter. 1995. Markov chain Monte Carlo in practice. CRC press.
 Godsil and Royle (2013) Chris Godsil and Gordon F. Royle. 2013. Algebraic graph theory. Vol. 207. Springer Science & Business Media.
 Golub and Loan (1996) Gene H. Golub and Charles F. Van Loan. 1996. Matrix computations. Johns Hopkins University Press, Baltimore, MD, USA (1996), 374–426.

 Gupta and Varga (1992) A. K. Gupta and T. Varga. 1992. Characterization of matrix variate normal distributions. Journal of Multivariate Analysis 41, 1 (1992), 80–88.
 Hardt et al. (2012) Moritz Hardt, Katrina Ligett, and Frank McSherry. 2012. A simple and practical algorithm for differentially private data release. In NIPS.

 Hardt and Roth (2012) Moritz Hardt and Aaron Roth. 2012. Beating randomized response on incoherent matrices. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing. ACM, 1255–1268.
 Hardt and Roth (2013a) Moritz Hardt and Aaron Roth. 2013a. Beyond worst-case analysis in private singular vector computation. In STOC. ACM.
 Hardt and Roth (2013b) Moritz Hardt and Aaron Roth. 2013b. Beyond worst-case analysis in private singular vector computation. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing. ACM, 331–340.
 Hardt and Rothblum (2010) Moritz Hardt and Guy N. Rothblum. 2010. A multiplicative weights mechanism for privacypreserving data analysis. In FOCS. IEEE.
 Hardt and Talwar (2010) Moritz Hardt and Kunal Talwar. 2010. On the geometry of differential privacy. In Proceedings of the forty-second ACM symposium on Theory of computing. ACM, 705–714.
 Hay et al. (2016) Michael Hay, Ashwin Machanavajjhala, Gerome Miklau, Yan Chen, and Dan Zhang. 2016. Principled evaluation of differentially private algorithms using DPBench. In SIGMOD/PODS. ACM.
 Hay et al. (2010) Michael Hay, Vibhor Rastogi, Gerome Miklau, and Dan Suciu. 2010. Boosting the accuracy of differentially private histograms through consistency. PVLDB 3, 12 (2010), 1021–1032.
 Horn and Johnson (1991) Roger A. Horn and Charles R. Johnson. 1991. Topics in matrix analysis. Cambridge University Press.
 Horn and Johnson (2012) Roger A. Horn and Charles R. Johnson. 2012. Matrix analysis. Cambridge university press.
 Hughes et al. (2014) John F. Hughes, Andries Van Dam, James D. Foley, and Steven K. Feiner. 2014. Computer graphics: principles and practice. Pearson Education.
 Iranmanesh et al. (2010) Anis Iranmanesh, M. Arashi, and SMM Tabatabaey. 2010. On conditional applications of matrix variate normal distribution. Iranian Journal of Mathematical Sciences and Informatics 5, 2 (2010), 33–43.
 Jiang et al. (2013) X. Jiang, Z. Ji, S. Wang, N. Mohammed, S. Cheng, and L. Ohno-Machado. 2013. Differential-Private Data Publishing Through Component Analysis. Trans. on Data Privacy 6, 1 (Apr 2013), 19–34.
 Johnson et al. (2017) Noah Johnson, Joseph P. Near, and Dawn Song. 2017. Practical Differential Privacy for SQL Queries Using Elastic Sensitivity. arXiv:1706.09479 (2017).
 Jolliffe (1986) Ian T. Jolliffe. 1986. Principal Component Analysis and Factor Analysis. Springer, 115–128.
 Jones et al. (2001) Eric Jones, Travis Oliphant, Pearu Peterson, et al. 2001–. SciPy: Open source scientific tools for Python. http://www.scipy.org/
 Kapralov and Talwar (2013) Michael Kapralov and Kunal Talwar. 2013. On differentially private low rank approximation. In SODA. SIAM.
 Kenthapadi et al. (2012) Krishnaram Kenthapadi, Aleksandra Korolova, Ilya Mironov, and Nina Mishra. 2012. Privacy via the Johnson-Lindenstrauss transform. arXiv:1204.2606 (2012).
 Kung (2014) S. Y. Kung. 2014. Kernel Methods and Machine Learning. Cambridge University Press.
 Laurent and Massart (2000) Beatrice Laurent and Pascal Massart. 2000. Adaptive estimation of a quadratic functional by model selection. Annals of Statistics (2000), 1302–1338.
 Li et al. (2014) Chao Li, Michael Hay, Gerome Miklau, and Yue Wang. 2014. A data- and workload-aware algorithm for range queries under differential privacy. PVLDB 7, 5 (2014), 341–352.
 Li et al. (2010) Chao Li, Michael Hay, Vibhor Rastogi, Gerome Miklau, and Andrew McGregor. 2010. Optimizing linear counting queries under differential privacy. In PODS. ACM.
 Li and Miklau (2012) Chao Li and Gerome Miklau. 2012. An adaptive mechanism for accurate query answering under differential privacy. PVLDB 5, 6 (2012), 514–525.
 Li et al. (2011) Yang D. Li, Zhenjie Zhang, Marianne Winslett, and Yin Yang. 2011. Compressive mechanism: Utilizing sparse representation in differential privacy. In WPES. ACM.
 Lichman (2013) M. Lichman. 2013. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml.
 Liu (2016a) Fang Liu. 2016a. Generalized Gaussian mechanism for differential privacy. arXiv:1602.06028 (2016).
 Liu (2016b) Fang Liu. 2016b. Model-based differential private data synthesis. arXiv:1606.08052 (2016).
 Lyu et al. (2016) Min Lyu, Dong Su, and Ninghui Li. 2016. Understanding the sparse vector technique for differential privacy. arXiv:1603.01699 (2016).
 McDermott and Forsyth (2016) James McDermott and Richard S. Forsyth. 2016. Diagnosing a disorder in a classification benchmark. Pattern Recognition Letters 73 (2016), 41–43.
 McSherry and Mironov (2009) Frank McSherry and Ilya Mironov. 2009. Differentially private recommender systems: building privacy into the net. In KDD. ACM.
 McSherry and Talwar (2007) Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In FOCS. IEEE.
 Merikoski et al. (1994) Jorma Kaarlo Merikoski, Humberto Sarria, and Pablo Tarazaga. 1994. Bounds for singular values using traces. Linear Algebra Appl. 210 (1994), 227–254.
 Meyer (2000) Carl D. Meyer. 2000. Matrix analysis and applied linear algebra. Vol. 2. Siam.
 Murphy (2012) Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective. The MIT Press.
 Narayanan and Shmatikov (2008) Arvind Narayanan and Vitaly Shmatikov. 2008. Robust de-anonymization of large sparse datasets. In S&P. IEEE.
 Netflix (2009) Netflix. 2009. Netflix Prize. http://www.netflixprize.com/. Accessed on: 08/10/2017.
 Nguyen (1997) Truc T. Nguyen. 1997. A note on matrix variate normal distribution. Journal of Multivariate Analysis 60, 1 (1997), 148–153.
 Nikolov (2015) Aleksandar Nikolov. 2015. An improved private mechanism for small databases. In International Colloquium on Automata, Languages, and Programming. Springer, 1010–1021.
 Nikolov et al. (2013) Aleksandar Nikolov, Kunal Talwar, and Li Zhang. 2013. The geometry of differential privacy: the sparse and approximate cases. In STOC. ACM.
 Nissim et al. (2007) Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. 2007. Smooth sensitivity and sampling in private data analysis. In STOC. ACM.
 Oliphant (2006) Travis E. Oliphant. 2006. A guide to NumPy. Vol. 1. Trelgol Publishing USA.
 Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, and Vincent Dubourg. 2011. Scikit-learn: Machine learning in Python. JMLR 12, Oct (2011), 2825–2830.
 Proserpio et al. (2014) Davide Proserpio, Sharon Goldberg, and Frank McSherry. 2014. Calibrating data to sensitivity in private data analysis: a platform for differentially private analysis of weighted datasets. PVLDB 7, 8 (2014), 637–648.
 Qardaji et al. (2013a) Wahbeh Qardaji, Weining Yang, and Ninghui Li. 2013a. Differentially private grids for geospatial data. In ICDE. IEEE.
 Qardaji et al. (2013b) Wahbeh Qardaji, Weining Yang, and Ninghui Li. 2013b. Understanding hierarchical methods for differentially private histograms. PVLDB 6, 14 (2013), 1954–1965.
 Rao (1964) C. Radhakrishna Rao. 1964. The use and interpretation of principal component analysis in applied research. Sankhya: The Indian Journal of Statistics, Series A (1964), 329–358.
 Santos et al. (2006) C. Costa Santos, João Bernardes, Paul MB Vitányi, and Luis Antunes. 2006. Clustering fetal heart rate tracings by compression. In CBMS. IEEE.
 Sondow and Weisstein (2017) Jonathan Sondow and Eric W. Weisstein. 2017. Harmonic Number. http://mathworld.wolfram.com/HarmonicNumber.html.
 Thacker (1989) William Carlisle Thacker. 1989. The role of the Hessian matrix in fitting models to measurements. JGR: Oceans 94, C5 (1989), 6177–6196.
 Upadhyay (2014a) Jalaj Upadhyay. 2014a. Circulant matrices and differential privacy. analysis 16 (2014), 47.
 Upadhyay (2014b) Jalaj Upadhyay. 2014b. Randomness Efficient Fast-Johnson-Lindenstrauss Transform with Applications in Differential Privacy and Compressed Sensing. arXiv:1410.2470 (2014).

 Vapnik (2013) Vladimir Vapnik. 2013. The nature of statistical learning theory. Springer Science & Business Media.
 von Neumann (1937) J. von Neumann. 1937. Some matrix inequalities and metrization of metric space. Tomsk Univ. Rev. 1 (1937), 286–296.
 Waal (2006) DJ De Waal. 2006. MatrixValued Distributions. Encyclopedia of statistical sciences (2006).
 White (1992) Steven R. White. 1992.