No-Go Theorems for Data Privacy

05/28/2020
by Thomas Studer, et al.
Universität Bern

Controlled query evaluation (CQE) is an approach to guarantee data privacy for database and knowledge base systems. CQE-systems feature a censor function that may distort the answer to a query in order to hide sensitive information. We introduce a high-level formalization of controlled query evaluation and define several desirable properties of CQE-systems. Finally we establish two no-go theorems, which show that certain combinations of these properties cannot be obtained.



1 Introduction

Controlled query evaluation (CQE) refers to a data privacy mechanism where the database (or knowledge base) is equipped with a censor function. This censor checks, for each query, whether answering it would reveal sensitive information to a user. If this is the case, then the censor will distort the answer. Essentially, there are two ways in which an answer may be distorted:

  1. the CQE-system may refuse to answer the query [18] or

  2. the CQE-system may give an incorrect answer, i.e. it lies [10].

This censor-based approach has the advantage that the task of maintaining privacy is separated from the task of keeping the data. This gives more flexibility than an integrated approach (like hiding rows in a database) and guarantees that no information is leaked through otherwise unidentified inference channels. Controlled query evaluation has been applied to a variety of data models and control mechanisms, see, e.g. [5, 6, 7, 8, 9, 21].

No-go theorems are well-known in theoretical physics where they describe particular situations that are not physically possible. Often the term is used for results in quantum mechanics like Bell’s theorem [4], the Kochen–Specker theorem [15], or, for a more recent example, the Frauchiger–Renner paradox [12]. Nurgalieva and del Rio [16] provide a modal logic analysis of the latter paradox. Arrow’s theorem [2] in social choice theory is also a no-go theorem, stating that no voting system can be designed that meets certain given fairness conditions. Pacuit and Yang [17] present a version of independence logic in which Arrow’s theorem is derivable.

In the present paper we develop a highly abstract model for dynamic query evaluation systems like CQE. We formulate several desirable properties of CQE-systems in our framework and establish two no-go theorems saying that certain combinations of those properties are impossible. The main contribution of this paper is the presentation of the abstract logical framework as well as the high-level formulation of the no-go theorems. Note that some particular instances of our results have already been known [5, 21].

There are many different notions of privacy available in the literature. For our results, we rely on provable privacy [19, 20], which is a rather weak notion of data privacy. Note that using a weak definition of privacy makes our impossibility theorems actually stronger since they state that under certain conditions not even this weak form of privacy can be achieved.

Clearly our work is also connected to the issues of lying and deception. Logics dealing with these notions are introduced and studied, e.g., in [1, 22, 13].

2 Logical Preliminaries

Let W be a set. We use P(W) to denote the power set of W. For sets X and Y we use X, Y for X ∪ Y. Moreover, in such a context we write A for the singleton set {A}. Hence X, A stands for X ∪ {A}.

Definition 1.

A logic L is given by

  1. a set F of formulas and

  2. a consequence relation ⊩ for F, that is a relation between sets of formulas and formulas, i.e. ⊩ ⊆ P(F) × F, satisfying for all X, Y ⊆ F and A, B ∈ F:

    1. reflexivity: X, A ⊩ A;

    2. weakening: X ⊩ A implies X, Y ⊩ A;

    3. transitivity: X ⊩ A and Y, A ⊩ B imply X, Y ⊩ B.

Transitivity is sometimes called cut. The previous definition gives us single conclusion consequence relations, which is sufficient for the purpose of this paper. For other notions of consequence relations see, e.g., [3] and [14].

As usual, we write ⊩ A for ∅ ⊩ A. A formula A is called a theorem of L if ⊩ A.

We do not specify the logic L any further. The only thing we need is a consequence relation as given above. For instance, L may be classical propositional logic with ⊩ being the usual derivation relation (see Section 4) or L may be a description logic with ⊩ being its semantic consequence relation [21].
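For the propositional instance, the consequence relation can be decided by brute-force enumeration of truth assignments. The following toy checker (the tuple encoding of formulas and all function names are our own, not the paper's) realizes such a relation; reflexivity, weakening, and transitivity hold for it because it implements semantic entailment:

```python
from itertools import product

# Formulas: atoms are strings; compound formulas are tuples such as
# ("not", A), ("and", A, B), ("or", A, B), ("imp", A, B).

def atoms(f):
    """Collect the atomic propositions occurring in a formula."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    """Truth of a formula under a valuation v: atom -> bool."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    if op == "or":
        return holds(f[1], v) or holds(f[2], v)
    if op == "imp":
        return (not holds(f[1], v)) or holds(f[2], v)
    raise ValueError(op)

def entails(X, a):
    """X ⊩ a: every valuation satisfying all of X also satisfies a."""
    props = sorted(set().union(atoms(a), *(atoms(f) for f in X)))
    for bits in product([False, True], repeat=len(props)):
        v = dict(zip(props, bits))
        if all(holds(f, v) for f in X) and not holds(a, v):
            return False
    return True
```

For example, `entails(["p"], ("or", "p", "q"))` is `True` while `entails(["p"], "q")` is `False`.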

Definition 2.
  1. A logic L is called consistent if there exists a formula A such that ⊮ A.

  2. A set X of L-formulas is called L-consistent if there exists a formula A such that X ⊮ A.

We need a simple modal logic ML over L.

Definition 3.

The set of formulas of ML is given inductively by:

  1. if A is a formula of L, then □A is a formula of ML;

  2. ⊥ is a formula of ML;

  3. if F and G are formulas of ML, so is F → G, too.

We define the remaining classical connectives ¬, ∧, ∨, and ↔ as usual. Note that ML is not a fully-fledged modal logic. For instance, it does not include nested modalities.

We give semantics to ML-formulas as follows.

Definition 4.

An ML-model M is a set of sets of L-formulas, that is

M ⊆ P(F).

Definition 5.

Let M be an ML-model. Truth of an ML-formula in M is inductively defined by:

  1. M ⊨ □A iff X ⊩ A for all X ∈ M;

  2. M ⊭ ⊥;

  3. M ⊨ F → G iff M ⊭ F or M ⊨ G.

We use the following standard definition.

Definition 6.

Let Γ be a set of ML-formulas.

  1. We write M ⊨ Γ iff M ⊨ F for each F ∈ Γ.

  2. Γ is called satisfiable iff there exists an ML-model M with M ⊨ Γ.

  3. Γ entails a formula F, in symbols Γ ⊨ F, iff for each ML-model M we have that M ⊨ Γ implies M ⊨ F.

3 Privacy

Definition 7.

A privacy configuration is a triple (KB, AK, Sec) that consists of:

  1. the knowledge base KB, which is only accessible via the censor;

  2. the set of a priori knowledge AK, which formalizes general background knowledge known to the attacker and the censor;

  3. the set of secrets Sec, which should be protected by the censor.

A privacy configuration satisfies the following conditions:

  1. KB is L-consistent (consistency);

  2. {KB} ⊨ AK (truthful start);

  3. AK ⊭ □A for each A ∈ Sec (hidden secrets).

Note that in the above definition, KB and Sec are sets of L-formulas while AK is a set of ML-formulas. Thus AK may not only contain domain knowledge but also knowledge about the structure of KB. This is further explained in Section 4.

A query to a knowledge base is simply a formula of L.

Given a logic L, we can evaluate a query q over a knowledge base KB. There are two possible answers: t (true) and u (unknown).

Definition 8.

The evaluation function ev is defined by:

ev(KB, q) := t if KB ⊩ q, and ev(KB, q) := u otherwise.

If the language of the logic includes negation, then one may also consider an evaluation function that can return the value f (false), i.e. one defines ev(KB, q) := f if KB ⊩ ¬q. However, in the general setting of this paper, we cannot include this case.
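As a quick illustration, ev can be written generically over any entailment check (the function names and the string encoding of the answer values are ours, not the paper's):

```python
def ev(kb, query, entails):
    """Evaluation function: t if the knowledge base entails the query, u otherwise."""
    return "t" if entails(kb, query) else "u"

# A toy entailment relation for knowledge bases that are plain sets of atoms:
member = lambda kb, q: q in kb

print(ev({"p", "q"}, "p", member))  # t
print(ev({"p", "q"}, "s", member))  # u
```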

A censor has to hide the secrets. In order to achieve this, it can not only answer t and u to a query but also r (refuse to answer). We denote the set of possible answers of a censor by

Ans := {t, u, r}.

Let S be a set. Then S^ω denotes the set of infinite sequences of elements of S.

Definition 9.

A censor is a mapping that assigns an answering function

C_(KB,AK,Sec) : F^ω → Ans^ω

to each privacy configuration (KB, AK, Sec). By abuse of notation, we also call the answering function C_(KB,AK,Sec) a censor. A sequence Q ∈ F^ω is called a query sequence.

Usually, the privacy configuration will be clear from the context. In that case we simply use C instead of C_(KB,AK,Sec).

Given a sequence S, we use S_i to denote its i-th element. That is, for a query sequence Q, we use Q_i to denote the i-th query and C(Q)_i to denote the i-th answer of the censor.

Example 10.

Let L be classical propositional logic. We define a privacy configuration with KB := {p, q}, AK := ∅, and Sec := {q}. A censor yields an answering function C, which applied to a query sequence yields a sequence of answers, e.g.,

C((p, p ∨ q, q, q, …)) = (t, t, r, r, …).

In this case, C gives true answers since KB ⊩ p and KB ⊩ p ∨ q, and it protects the secret by refusing to answer the query q.

Another option for the answering function would be to answer the third query with u, i.e., it would lie (instead of refuse to answer) in order to protect the secret.

A further option would be to always refuse the answer, i.e.

C(Q) = (r, r, r, …) for every query sequence Q.

This, of course, would be a trivial (and useless) answering function that would, however, preserve all secrets.
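A refusing censor of this kind can be sketched as follows (a toy fragment in our own encoding: knowledge bases are sets of atoms, queries are atoms or disjunctions, and the censor only blocks queries that literally are secrets — it does not detect indirect leaks):

```python
def entails(kb, query):
    """KB ⊩ query for an atomic KB and positive queries (atoms or disjunctions)."""
    if isinstance(query, str):          # an atomic query
        return query in kb
    _, left, right = query              # a disjunction ("or", left, right)
    return entails(kb, left) or entails(kb, right)

def censor(kb, secrets, queries):
    """Truthful censor: answers t/u honestly, refuses (r) on direct secret queries."""
    answers = []
    for q in queries:
        truthful = "t" if entails(kb, q) else "u"
        if truthful == "t" and q in secrets:
            answers.append("r")         # refuse rather than reveal a secret
        else:
            answers.append(truthful)
    return answers

# KB = {p, q}, Sec = {q}, queries p, p ∨ q, q:
print(censor({"p", "q"}, {"q"}, ["p", ("or", "p", "q"), "q"]))  # ['t', 't', 'r']
```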

In this paper, we will consider continuous censors only, which are given as follows.

Definition 11.

A censor is continuous iff for each privacy configuration and for all query sequences Q and Q′ and all n we have that

Q|n = Q′|n implies C(Q)|n = C(Q′)|n,

where for an infinite sequence S, we use S|n to denote the initial segment of S of length n, i.e. S|n = (S_1, …, S_n).

Continuity means that the answer of a censor to a query does not depend on future queries, see also Lemma 14.

A censor is called truthful if it does not lie.

Definition 12.

A censor C is called truthful iff for each privacy configuration (KB, AK, Sec), all query sequences Q, and all sequences

A = C_(KB,AK,Sec)(Q)

we have that for all i

A_i = r or A_i = ev(KB, Q_i).

Hence a truthful censor may refuse to answer a query in order to protect a secret but it will not give an incorrect answer.

In the modal logic ML over L, we can express what knowledge one can gain from the answers of a censor to a query. This is called the content of the answer.

Definition 13.

Given an answer to a query , we define its content as follows:

Assume that we are given a privacy configuration and a censor . We define the content of the answers of the censor to a query sequence up to by

where . Note that here we have also included the a priori knowledge.
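In code, the content map and Con(Q, n) might look as follows (formulas are represented as nested tuples; an assumed encoding of ours, not the paper's):

```python
def content(query, answer):
    """con(q, a): t yields box q, u yields not box q, a refusal carries no information."""
    if answer == "t":
        return ("box", query)
    if answer == "u":
        return ("not", ("box", query))
    return ("top",)                      # r: the trivially true formula

def con(ak, queries, answers, n):
    """Con(Q, n): the a priori knowledge plus the contents of the first n answers."""
    return set(ak) | {content(q, a) for q, a in zip(queries[:n], answers[:n])}
```

For a fixed query sequence and censor, con(ak, queries, answers, m) is a subset of con(ak, queries, answers, n) whenever m ≤ n, which is the monotonicity observed in Lemma 14 below.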

The following is a trivial observation showing the role of continuity.

Lemma 14.

Let C be a continuous censor. The content function is monotone in the second argument: for m ≤ n we have

Con(Q, m) ⊆ Con(Q, n).

We call a censor credible if it does not return contradicting answers.

Definition 15.

A censor is called credible iff for each privacy configuration and for every query sequence Q and every n, the set Con(Q, n) is satisfiable.

Definition 16.

The full content of a knowledge base KB is given by

FC(KB) := {□A | KB ⊩ A} ∪ {¬□A | KB ⊮ A}.

Lemma 17.

For any knowledge base KB, we have that

{KB} ⊨ FC(KB).

Proof.

Let A be an L-formula. We distinguish:

  1. KB ⊩ A. Then □A ∈ FC(KB) and further {KB} ⊨ □A.

  2. KB ⊮ A. Then ¬□A ∈ FC(KB) and further {KB} ⊨ ¬□A. ∎

Lemma 18.

We let (KB, AK, Sec) be a privacy configuration. Further we let C be a truthful censor. For every query sequence Q and every n, we have that

AK ∪ FC(KB) ⊨ F for each F ∈ Con(Q, n).

Proof.

By induction on n. The base case is trivial since

Con(Q, 0) = AK.

Induction step. Since C is truthful, we have

A_{n+1} = r or A_{n+1} = ev(KB, Q_{n+1}).

We distinguish:

  1. A_{n+1} = r. Then con(Q_{n+1}, A_{n+1}) = ⊤ and the claim follows immediately from the induction hypothesis.

  2. A_{n+1} = ev(KB, Q_{n+1}). Then

    Con(Q, n+1) = Con(Q, n) ∪ {con(Q_{n+1}, A_{n+1})}.

    The claim follows from the induction hypothesis and

    con(Q_{n+1}, A_{n+1}) ∈ FC(KB),

    which holds by Definition 16. ∎

The following corollary is a generalization of Cor. 30 in [21].

Corollary 19.

Every truthful censor is credible.

Proof.

Let (KB, AK, Sec) be a privacy configuration and C be a truthful censor for it. By Definition 7, we have {KB} ⊨ AK. Thus by the two previous lemmas, we find that for each query sequence Q and each n,

{KB} ⊨ AK ∪ FC(KB)

and

{KB} ⊨ Con(Q, n),

that means Con(Q, n) is satisfiable for each n and thus C is credible. ∎

There are several properties that a ‘good’ censor should fulfil. We call a censor effective if it protects all secrets.

Definition 20.

A censor is called effective iff for each privacy configuration (KB, AK, Sec) and for every query sequence Q and every n, we have

Con(Q, n) ⊭ □A for each A ∈ Sec.

A ‘good’ censor should only distort an answer to a query when it is absolutely necessary, i.e. when giving the correct answer would leak a secret. We call such a censor minimally invasive.

Definition 21.

Let C be an effective and credible censor. This censor is called minimally invasive iff for each privacy configuration (KB, AK, Sec) and for each query sequence Q, we have that whenever

C(Q)_n ≠ ev(KB, Q_n),

replacing

C(Q)_n by ev(KB, Q_n)

would lead to a violation of effectiveness or credibility, that is, for any censor C′ such that

C′(Q)_i = C(Q)_i for all i ≠ n

and

C′(Q)_n = ev(KB, Q_n),

we have that for some m

Con′(Q, m) ⊨ □A for some A ∈ Sec

or

Con′(Q, m) is not satisfiable,

where Con′ denotes the content function of C′.

It is a trivial observation that a truthful, effective and minimally invasive censor has to answer the same query always in the same way.

Lemma 22.

Let C be a truthful, effective and minimally invasive censor. Further let (KB, AK, Sec) be a privacy configuration and Q be a query sequence with Q_m = Q_n for some m and n. Then

C(Q)_m = C(Q)_n.

Consider a truthful, effective, continuous and minimally invasive censor and a given query sequence. If the censor refuses to answer some query, then giving the correct answer instead would immediately reveal a secret.

Lemma 23.

Let C be a truthful, effective, continuous and minimally invasive censor. Further let (KB, AK, Sec) be a privacy configuration and Q be a query sequence. Let n be the least natural number such that

C(Q)_n ≠ ev(KB, Q_n).

Let C′ be such that

C′(Q)_i = C(Q)_i for all i ≠ n

and

C′(Q)_n = ev(KB, Q_n).

Then it holds that

Con′(Q, n) ⊨ □A for some A ∈ Sec.

Proof.

Consider the query sequence Q′ given by Q′_i = Q_i for i ≤ n and Q′_i = Q_n for i > n, i.e. Q′ has the form (Q_1, …, Q_n, Q_n, Q_n, …). In particular, we have Q′|n = Q|n. Thus by continuity of the censor we find

C(Q′)|n = C(Q)|n.

Thus C(Q′)_n ≠ ev(KB, Q′_n). By the definition of minimally invasive we find that for some m

Con′(Q′, m) ⊨ □A for some A ∈ Sec (1)

or

Con′(Q′, m) is not satisfiable. (2)

Since the censor C′ is truthful and by Corollary 19, we find that (2) is not possible. Thus (1) holds for some m.

By the definition of Q′ and the previous lemma we find

con(Q′_i, C′(Q′)_i) = con(Q′_n, C′(Q′)_n) for all i with n < i ≤ m

if m > n. Thus, in case m > n, (1) implies

Con′(Q′, n) ⊨ □A for some A ∈ Sec. (3)

In case m ≤ n, we find by Lemma 14 that

Con′(Q′, m) ⊆ Con′(Q′, n).

Thus again (1) implies (3), which finishes the proof. ∎

Next we define the notion of a repudiating censor, which guarantees that for each secret there is always a knowledge base in which the secret does not hold and which, given as input to the answering function, produces the same answers as the actual knowledge base. Hence this definition provides a version of plausible deniability for all secrets.

Definition 24.

A censor C is called repudiating iff for each privacy configuration (KB, AK, Sec) and each query sequence Q, there are knowledge bases KB_A (for A ∈ Sec) such that

  1. (KB_A, AK, Sec) is a privacy configuration for each A ∈ Sec;

  2. KB_A ⊮ A for each A ∈ Sec;

  3. C_(KB_A,AK,Sec)(Q)_i = C_(KB,AK,Sec)(Q)_i for each A ∈ Sec and each i.

Now we can establish our first no-go theorem, which is a generalization of Th. 50 in [21].

Theorem 25 (First No-Go Theorem).

A continuous and truthful censor satisfies at most two of the properties effectiveness, minimal invasion, and repudiation.

Proof.

Let the censor C be continuous, truthful, effective, and minimally invasive. We show that C cannot be repudiating. We let A be an L-formula such that {A} is L-consistent and A is not a theorem of L, and consider the privacy configuration given by

KB := {A},  AK := ∅,  Sec := {A},

and the query sequence Q = (A, A, A, …). We set

a := C_(KB,AK,Sec)(Q)_1.

Obviously, we have a = r since otherwise C would either be lying (i.e. not be truthful) or revealing a secret (i.e. not be effective).

Now assume that C is repudiating. Then there exists a knowledge base KB′ such that

  1. (KB′, AK, Sec) is a privacy configuration;

  2. KB′ ⊮ A;

  3. C_(KB′,AK,Sec)(Q)_i = C_(KB,AK,Sec)(Q)_i for each i.

Let b := C_(KB′,AK,Sec)(Q)_1. Because of KB′ ⊮ A and C being truthful, we find that b = u or b = r.

Suppose towards a contradiction that

b = r. (4)

Now let C′ be a censor as in Lemma 23, i.e. such that

C′(Q)_1 = ev(KB′, Q_1) = u and C′(Q)_i = C_(KB′,AK,Sec)(Q)_i for all i ≠ 1. (5)

By Lemma 23 we get

Con′(Q, 1) ⊨ □A. (6)

However, by (5) we also have Con′(Q, 1) = AK ∪ {¬□A}, which is satisfiable and does not entail □A; this contradicts (6).

Hence (4) is not possible and thus we have b = u. This, however, contradicts condition 3 above since a = r. We conclude that C cannot be repudiating. ∎

4 Non-refusing censors

In this section we study censors that do not refuse to answer a query.

Definition 26.

A censor is non-refusing if it never assigns the answer r to a query.

Of course, a non-refusing censor has to lie in order to keep the secrets. That means that if a censor of this kind is to be effective, then it cannot be truthful.

Even if we consider lying censors, we work with the assumption that

an attacker believes every answer of the censor. (7)

Otherwise, we are in a situation where an attacker cannot believe any answer because the attacker does not know which answers are correct and which are wrong, which means that any answer could be a lie. In that case, querying a knowledge base would not make any sense at all. (This is, of course, not completely true: it is possible to distort knowledge bases in such a way that privacy is preserved but statistical inferences are still informative, see, e.g. [11].)

Because of the assumption (7), we can use our notions of effectiveness (Definition 20) and credibility (Definition 15) also in the context of lying censors: an attacker should not believe any secret and the beliefs should be satisfiable.

Theorem 25 about truthful censors did not make any assumptions on the underlying logic L. The next theorem about non-refusing censors is less general as it is based on classical logic. We will use p and q for atomic propositions and A, B for arbitrary formulas.

Moreover, we assume that the knowledge base KB only contains atomic facts (we say KB is atomic). That is, if A ∈ KB, then A is either of the form p or of the form ¬p where p is an atomic proposition. Hence we find that if KB ⊩ A ∨ B for two distinct atomic propositions A and B, then KB ⊩ A or KB ⊩ B. We can formalize this using the set of a priori knowledge by letting AK contain

□(A ∨ B) → (□A ∨ □B)

for all distinct literals A and B.

Now we can establish our second no-go theorem, which is a generalization of the results of [5].

Theorem 27 (Second No-Go Theorem).

Let the logic L be based on classical logic. A continuous and non-refusing censor cannot be at the same time effective and minimally invasive.

Proof.

Let the censor C be continuous, non-refusing, and minimally invasive. We show that C cannot be effective. Let L be classical propositional logic. We consider the knowledge base

KB := {p, ¬q},

where both p and q shall be kept secret, i.e.

Sec := {p, q}.

Further we assume that it is a priori knowledge that KB is atomic. Thus, in particular,

□(p ∨ q) → (□p ∨ □q) ∈ AK  and  □(¬p ∨ ¬q) → (□¬p ∨ □¬q) ∈ AK.

We consider the query sequence Q = (p ∨ q, ¬p ∨ ¬q, p, p, …) and set A = C(Q).

We find A_1 = t since C is minimally invasive and KB might contain q instead of p. Further, we find A_2 = t since C is minimally invasive and KB might contain ¬p instead of ¬q.

Note that after issuing the first two queries of the sequence Q, an attacker knows that p or q must be entailed by KB. But since the attacker does not know which one is the case, no secret is leaked. Formally we have

Con(Q, 2) ⊨ □(p ∨ q) (8)
and
Con(Q, 2) ⊨ □(¬p ∨ ¬q). (9)

Using the a priori knowledge AK, we obtain from (8) and (9)

Con(Q, 2) ⊨ □p ∨ □q (10)
and
Con(Q, 2) ⊨ □¬p ∨ □¬q, (11)

respectively. On the other hand, both models {{p, ¬q}} and {{¬p, q}} satisfy Con(Q, 2), hence

Con(Q, 2) ⊭ □p (12)
and
Con(Q, 2) ⊭ □q. (13)

Because of (10), it is known at this stage that a secret holds, but by (12) and (13) an attacker does not know which one and hence privacy is still preserved.

Now comes the third query, which is p. There are two possibilities for a non-refusing censor to choose from:

  1. A_3 = t (which is true). We find Con(Q, 3) ⊨ □p and the secret p is leaked.

  2. A_3 = u (which is a lie). We find Con(Q, 3) ⊨ ¬□p. By (10) and Lemma 14 we get Con(Q, 3) ⊨ □q and the secret q is leaked.

In both cases, a secret is leaked. Thus the censor cannot be effective. ∎

To avoid this problem, a censor must not only protect the single elements of Sec but also their disjunction [5]. For the privacy configuration of the previous proof that means C must also protect p ∨ q. Then the query p ∨ q would already be answered with u because the answer t, as shown above, reveals □p ∨ □q.

Note that protecting the disjunction of all secrets is not as simple as it sounds. Consider, for instance, a hospital information system that should protect the disease a patient is diagnosed with. In this case, protecting the disjunction of all secrets means protecting the information that the patient has some disease. This, however, is not feasible as it is general background knowledge that everybody who is a patient in a hospital has some disease. Worse than that, sometimes the disjunction of all secrets may even be a logical tautology, which cannot be protected.
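The leak behind the second no-go theorem can be replayed mechanically: enumerate the atomic knowledge bases over p and q that are compatible with the attacker's beliefs after the answers t, t, u, and observe that all of them entail the secret q. This is a small sanity check in our own toy encoding; it assumes the query sequence p ∨ q, ¬p ∨ ¬q, p discussed above:

```python
from itertools import product

def entails(kb, f):
    """KB ⊩ f for a set of literals kb; f is a literal or a disjunction ("or", ...)."""
    if isinstance(f, str):
        return f in kb
    return entails(kb, f[1]) or entails(kb, f[2])

def believes(kb, bf):
    """Truth of believed formulas of the form ("box", f) or ("not", ("box", f))."""
    if bf[0] == "box":
        return entails(kb, bf[1])
    return not believes(kb, bf[1])

# Contents of the three answers ("-p" stands for the literal not-p):
believed = [
    ("box", ("or", "p", "q")),        # answer t to p or q
    ("box", ("or", "-p", "-q")),      # answer t to not-p or not-q
    ("not", ("box", "p")),            # the lying answer u to p
]

# Enumerate the consistent atomic knowledge bases over p and q:
candidates = []
for p_lit, q_lit in product(["p", "-p", None], ["q", "-q", None]):
    kb = {lit for lit in (p_lit, q_lit) if lit is not None}
    if all(believes(kb, bf) for bf in believed):
        candidates.append(kb)

print(candidates)  # the only remaining candidate contains q but not p
assert all(entails(kb, "q") for kb in candidates)
```

Every knowledge base the attacker still considers possible entails q, so the attacker believes the secret q: exactly the leak derived in the proof.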

5 Conclusion

In this paper, we have established two no-go theorems for data privacy using tools from modal logic. We are confident that logical methods will play an important role for finding new impossibility theorems or for better understanding already known ones, see, e.g., the logical analyses carried out in [16] and [17].

Another line of future research relates to the fact that refusing to answer a query can give away the information that there exists a secret that could be inferred from some other answer. Similar phenomena may occur in multi-agent systems when one of the agents refuses to communicate. For example, imagine the situation of an oral exam where the examiner asks a question and the student keeps silent. In this case the examiner learns that the student does not know the answer to the question for otherwise the student would have answered.

It is also possible that refusing an answer can lead to knowing that someone else knows a certain fact. Consider the following scenario. A father enters a room where his daughter is playing and he notices that one of the toys is in pieces. So he asks who has broken the toy. The daughter does not want to betray her brother (who actually broke it) and she also does not want to lie. Therefore, she refuses to answer her father’s question. Of course, then the father knows that his daughter knows who broke the toy for otherwise the daughter could have said that she does not know.

We believe that it is worthwhile to study the above situations using general communication protocols that include the possibility of refusing an answer and to investigate the implications of refusing in terms of higher-order knowledge.

References

  • [1] T. Ågotnes, H. van Ditmarsch, and Y. Wang. True lies. Synthese, 195(10):4581–4615, 2018.
  • [2] K. J. Arrow. A difficulty in the concept of social welfare. Journal of Political Economy, 58(4):328–346, 1950.
  • [3] A. Avron. Simple consequence relations. Inf. Comput., 92(1):105–139, 1991.
  • [4] J. S. Bell. On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1:195–200, 1964.
  • [5] J. Biskup. For unknown secrecies refusal is better than lying. Data and Knowledge Engineering, 33(1):1–23, 2000.
  • [6] J. Biskup and P. A. Bonatti. Lying versus refusal for known potential secrets. Data and Knowledge Engineering, 38(2):199–222, 2001.
  • [7] J. Biskup and P. A. Bonatti. Controlled query evaluation for enforcing confidentiality in complete information systems. International Journal of Information Security, 3(1):14–27, 2004.
  • [8] J. Biskup and P. A. Bonatti. Controlled query evaluation for known policies by combining lying and refusal. Annals of Mathematics and Artificial Intelligence, 40(1):37–62, 2004.
  • [9] J. Biskup and T. Weibert. Keeping secrets in incomplete databases. International Journal of Information Security, 7(3):199–217, 2008.
  • [10] P. A. Bonatti, S. Kraus, and V. S. Subrahmanian. Foundations of secure deductive databases. Transactions on Knowledge and Data Engineering, 7(3):406–422, 1995.
  • [11] F. du Pin Calmon and N. Fawaz. Privacy against statistical inference. In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1401–1408. IEEE, 2012.
  • [12] D. Frauchiger and R. Renner. Quantum theory cannot consistently describe the use of itself. Nature Communications, 9, 2018.
  • [13] B. Icard. Lying, deception and strategic omission: définition et évaluation. PhD thesis, Université Paris Sciences et Lettres, 2019.
  • [14] R. Iemhoff. Consequence relations and admissible rules. Journal of Philosophical Logic, 45(3):327–348, 2016.
  • [15] S. Kochen and E. Specker. The problem of hidden variables in quantum mechanics. Indiana Univ. Math. J., 17:59–87, 1968.
  • [16] N. Nurgalieva and L. del Rio. Inadequacy of modal logic in quantum settings. In P. Selinger and G. Chiribella, editors, Proceedings 15th International Conference on Quantum Physics and Logic, QPL 2018, Halifax, Canada, 3–7 June 2018, volume 287 of EPTCS, pages 267–297, 2019.
  • [17] E. Pacuit and F. Yang. Dependence and independence in social choice: Arrow’s theorem. In S. Abramsky, J. Kontinen, J. Väänänen, and H. Vollmer, editors, Dependence Logic: Theory and Applications, pages 235–260. Springer, 2016.
  • [18] G. L. Sicherman, W. De Jonge, and R. P. Van de Riet. Answering queries without revealing secrets. ACM Trans. Database Syst., 8(1):41–59, Mar. 1983.
  • [19] K. Stoffel and T. Studer. Provable data privacy. In K. V. Andersen, J. Debenham, and R. Wagner, editors, Database and Expert Systems Applications, pages 324–332. Springer, 2005.
  • [20] P. Stouppa and T. Studer. A formal model of data privacy. In I. Virbitskaite and A. Voronkov, editors, Perspectives of Systems Informatics, pages 400–408. Springer, 2007.
  • [21] T. Studer and J. Werner. Censors for boolean description logic. Transactions on Data Privacy, 7:223–252, 2014.
  • [22] H. van Ditmarsch. Dynamics of lying. Synthese, 191(5):745–777, 2014.