1 Introduction
Organizations, companies, and governments collect data from a variety of sources, including social networking, transactions, smart Internet of Things devices, industrial equipment, electronic commerce, and more, and mine this massive data for valuable hidden information useful to modern life. The extensive collection and further processing of personal information in the context of big data analytics and machine learning-based artificial intelligence raises serious privacy concerns. For example, in March 2018 it was reported that Cambridge Analytica had harvested the personal data of millions of Facebook profiles without the users' consent for political advertising purposes in the 2016 US presidential election, a major political scandal that caused a worldwide uproar. Despite the benefits of analytics, it is unacceptable for big data to come at the cost of privacy. Therefore, the present study shifts the discussion from "big data versus privacy" to "big data with privacy", adopting privacy and data protection principles as an essential value
[5]. Privacy-preserving data publishing (PPDP) and various artificial intelligence-empowered learning/computing paradigms have gained significant attention in both academia and industry. It is thus of the utmost importance to strike the right balance between making use of big data technologies and protecting individuals' privacy and personal data [5]. Intuitively, one could rely on naive identity removal to protect data privacy, but in practice it does not always work. For instance, AOL released an anonymized partial three-month search history to the public in 2006. Although personally identifiable information was carefully processed, some identities were accurately re-identified: The New York Times quickly located one user, a widowed woman who suffered from certain diseases and had three dogs. Such real-world privacy leakage problems and attack instances clearly demonstrate the importance of data privacy preservation.
The problem of data privacy protection was first put forward by Dalenius in the late 1970s [6]: Dalenius pointed out that the purpose of protecting private information in a database is to prevent any user (including legitimate users and potential attackers) from obtaining accurate information about arbitrary individuals. Following that, many operable privacy preservation models, including k-anonymity, l-diversity [20], and t-closeness [18], were proposed. However, each model generally provides protection against only a specific type of attack and cannot defend against newly developed ones. A fundamental cause of this deficiency is that the security of a privacy preservation model depends heavily on the background knowledge of the attacker, yet it is almost impossible to define the complete set of possible background knowledge an attacker may have.
Dwork originally proposed the concept of differential privacy (DP) in 2006 to protect against privacy disclosure in statistical databases [4]. Under differential privacy, the query results of a dataset are insensitive to the change of a single record: whether a single record exists in the dataset has little effect on the output distribution of the analytical results. As a result, an attacker cannot obtain accurate individual information by observing the results, since the risk of privacy disclosure caused by adding or deleting a single record is kept within an acceptable range. Unlike anonymization models, DP assumes that the attacker has maximal background knowledge, and it rests on a sound mathematical foundation with a formal definition and rigorous proofs.
It is worth noting that differential privacy is a definition, or standard, for quantifying privacy risk rather than a single tool; it is widely used in statistical estimation, data publishing, data mining, and machine learning. It is a promising privacy framework that has become a popular research topic in both academia and industry and can potentially be implemented in various application scenarios. However, DP is a strict privacy standard: data utility is likely to be poor when a meaningful privacy guarantee is provided. The goal of this paper is to summarize and analyze state-of-the-art research in the field of differential privacy and its applications in privacy-preserving data publishing, machine learning, deep learning, and federated learning; to point out a series of limits and open challenges in the corresponding research areas; and thereby to provide approachable strategies for researchers and engineers implementing DP in real-world applications. We place more focus on practical applications of differential privacy than on detailed theoretical analysis of differentially private algorithms.
The rest of this paper is organized as follows. We present the background of differential privacy in Section 2. Section 3 introduces the differentially private data publishing problem and presents some of its challenges. In Section 4, we summarize existing research on applying differential privacy to deep learning and federated learning. Section 5 concludes the paper with a discussion of future research directions and open problems in differential privacy applications.
2 Preliminary of Differential Privacy
Differential privacy can be achieved by injecting a controlled level of statistical noise into a query result to hide the consequence of adding or removing an arbitrary individual from a dataset. That is, when querying two almost identical datasets (differing in only one record), the results are perturbed so that an attacker cannot, with high probability, glean any new knowledge about an individual; in other words, whether or not a given individual is present in the dataset cannot be inferred.
2.1 Definition of Differential Privacy
Let f be a query function to be evaluated on a dataset D. Algorithm M runs on the dataset and sends back M(D), which could be f(D) with a controlled amount of random noise added. The goal of differential privacy is to make M(D) as close to f(D) as possible, thus ensuring data utility (enabling the user to learn the target value as accurately as possible), while preserving the privacy of individuals through the added random noise. The main procedure can be seen in Figure 1.
Definition 1
(Neighboring Datasets) Two datasets D and D′ are considered to be neighboring if d(D, D′) = 1, where d(D, D′) is the number of records on which D and D′ differ.
Definition 2
(Differential Privacy [8]) A randomized algorithm M is (ε, δ)-differentially private if for any two datasets D and D′ with d(D, D′) = 1, and for all sets S of possible outputs, we have

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ,

where ε and δ are non-negative real numbers.
When δ = 0, the algorithm M is ε-differentially private; we say a mechanism gives approximate differential privacy when δ > 0. The parameter ε is a small positive real number called the privacy budget, which controls the probability of the algorithm producing almost the same outputs on two neighboring datasets. It reflects the level of privacy preservation that algorithm M can provide. For example, if we set ε = ln 2, any result is at most twice as likely to be generated by dataset D as by any of D's neighbors D′.
The smaller the ε, the higher the level of privacy preservation: a smaller ε provides greater privacy preservation at the cost of lower data accuracy due to more added noise. When ε = 0, the level of privacy preservation reaches its maximum, i.e., "perfect" protection; in this case the algorithm outputs results with indistinguishable distributions on any two neighboring datasets, but those results no longer reflect any useful information about the dataset. Therefore, the setting of ε should consider the trade-off between privacy requirements and data utility. In practical applications, ε usually takes very small values such as 0.01 and 0.1, or in some cases ln 2 and ln 3.
2.2 Noise Mechanism of Differential Privacy
Sensitivity is the key parameter determining the magnitude of the added noise; it is the largest change to the query result caused by adding or deleting any single record in the dataset. Accordingly, global sensitivity, local sensitivity, the smooth upper bound, and smooth sensitivity are defined under the differential privacy model. Because of space limitations, we will not introduce them in detail here.
(1) Laplace Mechanism
The Laplace distribution (centered at 0) with scale b is the distribution with probability density function

p(x | b) = (1 / 2b) · exp(−|x| / b).

Let Lap(b) denote the Laplace distribution (centered at 0) with scale b.
Definition 3
(Laplace Mechanism [8]) For a dataset D and a function f with global sensitivity Δf, the Laplace mechanism M(D) = f(D) + Y is ε-differentially private, where Y ∼ Lap(Δf / ε).
The Laplace mechanism is suitable for the protection of numerical results. Taking the counting function f as an example: since the global sensitivity of counting is 1, that is, Δf = 1, for a chosen privacy budget ε the Laplace mechanism outputs M(D) = f(D) + Lap(1/ε).
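As a concrete illustration, the Laplace mechanism for a counting query can be sketched in a few lines of Python; the dataset, predicate, and ε value below are illustrative, and the inverse-CDF sampler stands in for any Laplace noise source.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from the Laplace distribution Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # Counting query: global sensitivity is 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 31, 45, 52, 38, 29, 61]          # illustrative records
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

With ε = 0.5 the noise scale is 2, so individual answers fluctuate around the true count of 3; a smaller ε widens the noise, directly illustrating the privacy/utility trade-off above.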
(2) Exponential Mechanism

The Laplace mechanism is appropriate only for preserving the privacy of numerical results. Nevertheless, in many practical implementations, query results are entity objects. McSherry et al. put forward the exponential mechanism [21] for situations where the "best" output needs to be selected. Let the output domain of a query function be Range, and let each value r ∈ Range be an entity object. In the exponential mechanism, a function u(D, r), called the utility function of the output value r, is employed to evaluate the quality of r.
Definition 4
(Exponential Mechanism [21]) Given a random algorithm M with input dataset D and output entity object r ∈ Range, let u(D, r) be the utility function and Δu be the global sensitivity of u. If algorithm M selects and outputs r from Range with probability proportional to exp(ε · u(D, r) / (2Δu)), then M is ε-differentially private.
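The selection rule above can be sketched as follows; the purchase data, candidate set, and frequency-based utility function are hypothetical, chosen only because a frequency count has global sensitivity 1.

```python
import math
import random

def exponential_mechanism(data, candidates, utility, sensitivity, epsilon):
    # Each candidate r is selected with probability proportional to
    # exp(epsilon * u(data, r) / (2 * sensitivity)).
    weights = [math.exp(epsilon * utility(data, r) / (2.0 * sensitivity))
               for r in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy task: privately report the most popular item. The utility u(D, r) is
# the frequency of r in D; adding or removing one record changes any
# frequency by at most 1, so the global sensitivity of u is 1.
purchases = ["apple"] * 30 + ["banana"] * 5 + ["cherry"] * 2
candidates = ["apple", "banana", "cherry"]
winner = exponential_mechanism(purchases, candidates,
                               lambda d, r: d.count(r), 1.0, epsilon=1.0)
```

Because "apple" has a much higher utility, it is selected with overwhelming probability, while the rare candidates retain a small but non-zero chance, which is exactly what yields the ε-DP guarantee.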
2.3 Local Differential Privacy
Traditional centralized differential privacy provides privacy protection on the premise that there is a trusted third-party data collector who does not steal or disclose users' sensitive information. Local differential privacy [7] does not assume the existence of any trusted third-party data collector; instead, it transfers the process of privacy protection to each user, so that each user independently processes and protects their own sensitive information.
Definition 5
(Local Differential Privacy [7]) Given n users, each corresponding to one record, a privacy algorithm M with domain Dom(M) and range Ran(M) satisfies ε-local differential privacy if, for any two records x, x′ ∈ Dom(M) and any output t ∈ Ran(M),

Pr[M(x) = t] ≤ e^ε · Pr[M(x′) = t].
One can see from this definition that local differential privacy provides privacy by controlling the similarity between the output distributions of any two records, while each user processes its own data independently. In other words, the privacy-preserving process is transferred from the data collector to the individual user, so that a trusted third party is no longer needed and privacy attacks stemming from data collection by an untrusted third party are avoided. The framework of local differential privacy can be seen in Figure 2.
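A minimal local-DP sketch is randomized response, the classic mechanism in which each user flips their own bit before reporting it; the population size, true proportion, and ε below are illustrative.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    # Report the true bit with probability e^eps / (e^eps + 1), otherwise
    # flip it; this satisfies epsilon-local differential privacy.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit

def estimate_proportion(reports, epsilon: float) -> float:
    # Unbiased estimate of the true fraction of 1s from the noisy reports.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

true_bits = [random.random() < 0.3 for _ in range(20000)]   # ~30% hold the trait
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
est = estimate_proportion(reports, epsilon=1.0)
```

No collector ever sees a trustworthy individual bit, yet the aggregate proportion can still be recovered to within a few percent, which is the essence of the local model.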
3 Differentially Private Data Publishing
3.1 Differential Privacy in Tabular Data Publishing
The goal of differentially private data publishing is to release aggregate or synthetic information to the public without disclosing any individual's information. Generally, there are two settings in the data publishing scenario: interactive and non-interactive. In the interactive setting, users submit query requests to the data curator, who answers each query with a noisy result; the fixed privacy budget is gradually exhausted as the number of queries increases. In the non-interactive setting, the data curator publishes statistical information about the dataset that satisfies differential privacy, and submitted queries are answered directly from the published synthetic dataset.
The challenge of the interactive setting is that the number of queries is limited because the privacy budget is easily exhausted: a higher-accuracy answer to one query requires less noise and hence a larger ε, which usually leaves budget for a smaller number of queries.
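The budget-exhaustion behavior can be sketched with a toy curator that tracks the remaining budget under sequential composition; the class and its interface are hypothetical, not a standard API.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class InteractiveCurator:
    # Sequential composition: the epsilons of answered queries add up,
    # and the curator refuses to answer once the total budget runs out.
    def __init__(self, data, total_epsilon: float):
        self.data = data
        self.remaining = total_epsilon

    def count(self, predicate, epsilon: float) -> float:
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        true_count = sum(1 for r in self.data if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

curator = InteractiveCurator(list(range(100)), total_epsilon=1.0)
answer1 = curator.count(lambda x: x < 50, epsilon=0.5)
answer2 = curator.count(lambda x: x >= 50, epsilon=0.5)
# A third query with epsilon=0.5 would now raise: the budget is exhausted.
```

The sketch makes the tension explicit: each query either consumes more budget (for accuracy) or tolerates more noise, and either way the total number of answerable queries is bounded.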
High sensitivity presents a big challenge for data publishing in the non-interactive setting, since high sensitivity means a large magnitude of noise and low data utility, especially for big and complex data, which we will introduce in detail in Section 3.3. Another problem is that the published synthetic dataset can only be used for particular purposes or targeted at fixed query functions.
3.2 Differential Privacy in Graph Data Publishing
With the widespread application of social networks, the increasing volume of user-generated data has become a rich source that can be published to third parties for data analysis and recommendation systems. Generally, social networking data can be modeled as a graph G = (V, E), where V is the set of nodes and E is the set of relational activities (edges) between nodes. Analyzing graph data, such as social network data, has great potential social benefit and helps generate insights into the laws of data change and trend characteristics. The most popular tasks of social network analysis include degree distribution, subgraph counting (triangle counting, k-star counting, k-triangle counting, etc.), and edge weight analysis. In reality, various types of privacy attacks on social networks, such as de-anonymization attacks [11, 12, 15, 23, 14] and inference attacks [10, 16], have raised the stakes for privacy protection, while a large amount of personal user data has already been exposed.
However, the privacy issue for graphs is more complicated, starting from how to model and formalize the notion of "privacy" in a graph network. Differential privacy originates from tabular data; the key to extending it to social networks is to determine the neighboring input entries, that is, how to define "adjacent graphs". Figure 3 shows existing definitions of DP on graph data, namely node differential privacy, edge differential privacy, out-link differential privacy, and partition differential privacy; for details, refer to [13].
3.3 Challenges on Differentially Private Data Publishing
In this subsection, we present a few challenges and open problems in differentially private data publishing, especially for big data, complex networks, and dynamic and continuous data publishing.
As the term suggests, big data involves massive amounts of data arriving at great speed, and it exhibits characteristics that raise challenges in gathering, analysis, storage, and privacy preservation. Of its many characteristics, the 5Vs describe big data's nature best: Volume, Velocity, Variety, Veracity, and Value.
Differential privacy on complex and high-volume network structures. Network structures such as social networks and traffic networks are often complex. Since query sensitivities are usually high, much noise has to be added to query results to achieve differential privacy. This noise may significantly degrade the utility of the output data, rendering it useless. Moreover, it may be hard to compute sensitivities efficiently, whether global or smooth, precise or approximate, as the computational complexity may be too high (or even NP-hard) to be practical for many complex graph analysis queries.
Differential privacy on high-dimensional data. Most differentially private data publishing techniques cannot work effectively for high-dimensional data. On one hand, since the sensitivities and entropy of different dimensions vary, evenly distributing the total privacy budget across dimensions degrades performance. On the other hand, the "curse of dimensionality" is a common challenge in big data perturbation: a dataset with many dimensions and large attribute domains yields a very low signal-to-noise ratio and extremely low, even useless, data utility.
Differential privacy on correlated data. Differential privacy offers a neat privacy guarantee, but it is a strict standard that assumes all records are independent, and correlation or dependence may undermine the privacy guarantees of differential privacy mechanisms. Unfortunately, real-world data are rarely strictly independent: they may be correlated at both the tuple (record) level and the attribute level. For example, salary is strongly correlated with education level and occupation in a dataset.
Differential privacy on high-velocity data. Velocity in big data refers to the crucial characteristic of capturing data dynamically. In practical applications such as recommendation systems and trajectory data, the data are dynamically updated to capture the evolving behaviors of users. Differential privacy on continuous data streams faces critical challenges of noise accumulation and privacy budget allocation across time sequences.
4 Differentially Private Machine Learning
4.1 Differential Privacy in Deep Learning
The privacy protection provided by DP can also benefit existing deep learning models. The differential privacy framework for deep learning is illustrated in Fig. 4. Generally, noise can be added to the gradients, the inputs, or the embeddings. Abadi et al. [1] introduced the first DP-preserving optimization algorithm, DP-SGD, in which DP is achieved by adding Gaussian noise at every SGD optimization step. Arachchige et al. [2] introduced a model named LATENT, which achieves protection by transforming the real-valued low-dimensional representation into a discrete vector. Lecuyer et al. [17] proposed a model named PixelDP, which achieves the goal by adding Gaussian noise in the hidden layers of a CNN model. Different from these works, Phan et al. [22] proposed a method that directly perturbs the inputs: the model injects different levels of noise into each pixel of an image based on a relevance score [3].

4.2 Differential Privacy in Federated Learning
The research field of federated learning focuses on learning a model where data is stored across a distributed system. As pointed out by Wei et al. [25], attackers can recover data information from the gradients, and a DP-preserving learning model can protect against such information leakage in the federated learning setting. Wei et al. [24] integrated a DP algorithm into a secure multi-party computation (SMC) framework, where DP is used to perturb the response to each query in the SMC protocol. Geyer et al. [9] introduced a DP algorithm focused on hiding the data source. In addition to using the same SGD framework as DP-SGD, the algorithm also randomly omits a portion of the data to protect data privacy.
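A simplified sketch of the per-client step common to such DP federated learning schemes (clip the local gradient, then add Gaussian noise calibrated to the clipping bound) might look as follows; the function name and parameters are illustrative, and a full scheme would also track the accumulated privacy loss across rounds.

```python
import math
import random

def dp_local_update(gradient, clip_norm, noise_multiplier):
    # Clip the client's gradient to L2 norm clip_norm, then add Gaussian
    # noise whose standard deviation is calibrated to the clipping bound,
    # before the update is uploaded to the aggregation server.
    norm = math.sqrt(sum(g * g for g in gradient))
    factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    sigma = noise_multiplier * clip_norm
    return [g * factor + random.gauss(0.0, sigma) for g in gradient]

client_grad = [3.0, 4.0]                    # toy gradient with L2 norm 5
update = dp_local_update(client_grad, clip_norm=1.0, noise_multiplier=0.1)
```

Clipping bounds each client's influence on the aggregate (the sensitivity), which is what allows the Gaussian noise to translate into a (ε, δ) guarantee; the server then averages the noisy updates as usual.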
4.3 Challenges on Differentially Private Machine Learning
Model dependency. Other than the gradient-based approach, most deep learning-based DP algorithms introduced in this paper are tightly coupled to a particular deep learning model. For example, LATENT and PixelDP are designed only for CNNs. A DP approach that does not rely on the data or the model would be a promising direction in DP research.
Accuracy loss of federated learning due to added noise. In the federated learning model, differential privacy-based approaches add noise to the uploaded parameters, which inevitably degrades model accuracy and further affects the convergence of the global aggregation. Moreover, there are few practical frameworks integrating differential privacy with other cryptography-based methods, which hinders the industrial development of federated learning.
5 Future Directions and Conclusions
Differential privacy is a strong standard of privacy protection with a solid mathematical definition that can be applied in various application scenarios. However, differential privacy is not a panacea for all privacy problems, and research on differential privacy is still in its infancy. There remain misunderstandings, inappropriate applications, and flawed implementations of differential privacy. In this section, we propose a few future research directions and open problems that are worthy of more attention.
5.1 Combination of Differential Privacy and Other Technologies
As mentioned regarding the privacy preservation of high-dimensional data, it is feasible and promising to combine effective dimensionality reduction techniques with differential privacy to address this issue. Specifically, one can try both linear and non-linear transformations, such as compressive sensing and manifold learning, which map a high-dimensional space to a low-dimensional representation.
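A rough sketch of such a pipeline, assuming a simple random linear projection as the dimensionality reduction step; the noise scale used here is a placeholder, since the sensitivity of the projected statistic would have to be derived before a real release.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from Lap(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def random_projection(rows, out_dim, seed=0):
    # Project d-dimensional rows onto out_dim dimensions with a fixed
    # random Gaussian matrix (a simple linear dimensionality reduction).
    rng = random.Random(seed)
    d = len(rows[0])
    proj = [[rng.gauss(0.0, 1.0 / math.sqrt(out_dim)) for _ in range(out_dim)]
            for _ in range(d)]
    return [[sum(row[i] * proj[i][j] for i in range(d)) for j in range(out_dim)]
            for row in rows]

data = [[random.random() for _ in range(50)] for _ in range(200)]  # toy data
low = random_projection(data, out_dim=5)
# Perturb the low-dimensional aggregate rather than all 50 dimensions;
# the scale 0.1 is a stand-in for sensitivity/epsilon in a real mechanism.
mean = [sum(row[j] for row in low) / len(low) for j in range(5)]
noisy_mean = [m + laplace_noise(0.1) for m in mean]
```

The benefit is that the privacy budget is spent on 5 coordinates instead of 50, mitigating the signal-to-noise collapse discussed in Section 3.3.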
With the heightened privacy concerns in federated learning, IoT networks, and other distributed environments, the combination of local differential privacy, multi-party computation, sampling, and anonymization will be a future topic that needs open-ended exploration. Secure multi-party computation is a cryptography-based technique that can be too costly, or even infeasible, on computationally constrained devices, while anonymization models have their own shortcomings regarding assumptions on background knowledge. Nevertheless, combining these techniques can boost the performance of differential privacy. Specifically, differential privacy with a sampling step can greatly amplify the level of privacy preservation [19], based on which we can adapt the idea of anonymization to the participants of DP processing. For example, in the federated learning scenario, we can randomly pick clients and parts of the differentially private local updates to form a shuffle model. Moreover, interdisciplinary techniques combining local differential privacy and secure multi-party computation involve secure computation, privacy preservation, and dataset partitioning, and need to tackle high communication cost and low data utility.
5.2 Variation of Differential Privacy and Personalized Privacy
Differential privacy provides a strong and strict privacy guarantee at the cost of low data utility, yet it may be stronger than necessary in some practical applications. To achieve a better trade-off between privacy and utility, various relaxations and extensions of differential privacy need to be proposed, and in fact many such definitions already exist, such as crowd-blending privacy, individual differential privacy, and probabilistic indistinguishability. However, most of these remain at the stage of theoretical definition or are limited to specific scenarios. The great challenge is how to apply these extensions widely in practical applications.
On the other hand, conventional data privacy preservation mechanisms aim to retain as much data utility as possible while ensuring sufficient privacy protection for sensitive data, and such schemes implicitly assume that all data users have the same data access privilege level. In reality, data users often have different levels of access to the same data and personalized requirements on the privacy preservation level or data utility. It is a big challenge to achieve personalized privacy and multi-level data utility, and designing a uniform framework for this is itself a hard problem.
5.3 Misunderstandings of Differential Privacy vs More Than Privacy
As mentioned for differentially private data publishing, the data utility of the outputs is likely to be very poor, or else a large privacy budget, that is, a lower privacy preservation level, must be used, in which case we cannot be sure how much privacy is actually provided. Moreover, when differential privacy is applied to federated learning, it is used on local updates of parameters, while traditional differential privacy is designed for record data contributed by different individuals under the assumption that the data are independent. However, in federated/distributed learning, all local data come from the same client and are unlikely to be independent.
On the other hand, differential privacy can do more, although misconceptions and misuses of it exist. Besides providing privacy preservation by hiding individual information within aggregate information, differential privacy, viewed from the opposite perspective of its definition, can ensure that the distribution of outcomes remains essentially unchanged when any individual record in the training data is modified, and the applications of this property remain to be explored. Secondly, differential privacy can also protect against malicious attacks on learning techniques, such as poisoning attacks in federated learning, which can help improve the accuracy of the trained model. Thirdly, specific differentially private methods can be combined with reward mechanisms in distributed learning to provide privacy preservation and, at the same time, incentivize more clients to participate in the learning process.
References
 [1] (2016) Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318. Cited by: §4.1.
 [2] (2019) Local differential privacy for deep learning. IEEE Internet of Things Journal 7 (7), pp. 5827–5842. Cited by: §4.1.
 [3] (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7), pp. e0130140. Cited by: §4.1.
 [4] (2006) Differential privacy. Automata, languages and programming, pp. 1–12. Cited by: §1.
 [5] (2015) Privacy by design in big data: an overview of privacy enhancing technologies in the era of big data analytics. arXiv preprint arXiv:1512.06000. Cited by: §1.
 [6] (1977) Towards a methodology for statistical disclosure control. Statistik Tidskrift 15, pp. 429–444. Cited by: §1.
 [7] (2013) Local privacy and statistical minimax rates. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pp. 429–438. Cited by: §2.3, Definition 5.
 [8] (2006) Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pp. 265–284. Cited by: Definition 2, Definition 3.
 [9] (2017) Differentially private federated learning: a client level perspective. arXiv preprint arXiv:1712.07557. Cited by: §4.2.
 [10] (2014) Joint link prediction and attribute inference using a social-attribute network. ACM Transactions on Intelligent Systems and Technology (TIST) 5 (2), pp. 1–20. Cited by: §3.2.
 [11] (2015) On your social network de-anonymizablity: quantification and large scale evaluation with seed knowledge. In NDSS. Cited by: §3.2.
 [12] (2017) De-SAG: on the de-anonymization of structure-attribute graph data. IEEE Transactions on Dependable and Secure Computing. Cited by: §3.2.
 [13] (2021) Applications of differential privacy in social network analysis: a survey. IEEE Transactions on Knowledge and Data Engineering. Cited by: §3.2.
 [14] (2021) Structure-attribute-based social network de-anonymization with spectral graph partitioning. IEEE Transactions on Computational Social Systems. Cited by: §3.2.
 [15] (2018) SA framework based de-anonymization of social networks. Procedia Computer Science 129, pp. 358–363. Cited by: §3.2.
 [16] (2013) Do online social network friends still threaten my privacy?. In Proceedings of the third ACM conference on Data and application security and privacy, pp. 13–24. Cited by: §3.2.
 [17] (2019) Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 656–672. Cited by: §4.1.
 [18] (2007) t-closeness: privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd International Conference on Data Engineering, pp. 106–115. Cited by: §1.
 [19] (2012) On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy. In Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security, pp. 32–33. Cited by: §5.1.
 [20] (2007) l-diversity: privacy beyond k-anonymity. ACM Transactions on Knowledge Discovery from Data (TKDD) 1 (1), pp. 3–es. Cited by: §1.
 [21] (2007) Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), pp. 94–103. Cited by: §2.2, Definition 4.
 [22] (2017) Adaptive laplace mechanism: differential privacy preservation in deep learning. In 2017 IEEE International Conference on Data Mining (ICDM), pp. 385–394. Cited by: §4.1.
 [23] (2018) Optimal active social network de-anonymization using information thresholds. In 2018 IEEE International Symposium on Information Theory (ISIT), pp. 1445–1449. Cited by: §3.2.
 [24] (2020) Federated learning with differential privacy: algorithms and performance analysis. IEEE Transactions on Information Forensics and Security 15, pp. 3454–3469. Cited by: §4.2.
 [25] (2020) A framework for evaluating gradient leakage attacks in federated learning. arXiv preprint arXiv:2004.10397. Cited by: §4.2.