Differentially Private Formation Control

04/06/2020 ∙ by Calvin Hawkins, et al. ∙ University of Florida

As multi-agent systems proliferate, there is increasing demand for coordination protocols that protect agents' sensitive information while allowing them to collaborate. To help address this need, this paper presents a differentially private formation control framework. Agents' state trajectories are protected using differential privacy, which is a statistical notion of privacy that protects data by adding noise to it. We provide a private formation control implementation and analyze the impact of privacy upon the system. Specifically, we quantify tradeoffs between privacy level, system performance, and connectedness of the network's communication topology. These tradeoffs are used to develop guidelines for calibrating privacy in terms of control theoretic quantities, such as steady-state error, without requiring in-depth knowledge of differential privacy. Additional guidelines are also developed for treating privacy levels and network topologies as design parameters to tune the network's performance. Simulation results illustrate these tradeoffs and show that strict privacy is inherently compatible with strong system performance.


I Introduction

Multi-agent systems, such as robotic swarms and social networks, require agents to share information to collaborate. In some cases, the information shared between agents may be sensitive. For example, self-driving cars share location data to be routed to a destination. Geo-location data and other data streams can be quite revealing about users, and sensitive data should be protected. However, this data must still be useful for multi-agent coordination. Thus, privacy in multi-agent control must simultaneously protect agents’ sensitive data while guaranteeing that privatized data enables the network to achieve a common task.

This type of privacy has recently been achieved using differential privacy. Differential privacy stems from the computer science literature, where it was originally used to protect sensitive data when databases are queried [dwork2014algorithmic, dwork2006calibrating]. Differential privacy is appealing because it is immune to post-processing and robust to side information [dwork2014algorithmic]. These properties mean that privacy guarantees are not compromised by performing operations on differentially private data, and that they are not weakened much by an adversary with additional information about data-producing agents [kasiviswanathan2014semantics].

Recently, differential privacy has been applied to dynamic systems [le2013differentially, yazdani2018differentially, hale2017cloud, le2017differentially, jones2019towards, mitraC, geirEnt]. One form of differential privacy in dynamic systems protects sensitive trajectory-valued data, and this is the notion of differential privacy used in this paper. Privacy of this form ensures that an adversary is unlikely to learn much about the state trajectory of a system by observing its outputs. In multi-agent control, this lets an agent share its outputs with other agents while protecting its state trajectory from those agents and eavesdroppers [le2013differentially, yazdani2018differentially, hale2017cloud, le2017differentially].

In this paper, we develop a framework for private multi-agent formation control using differential privacy. Formation control is a well-studied network control problem in which agents assemble into a desired configuration, whether robots physically arranging into geometric shapes or non-physical agents maintaining relative state offsets. For differential privacy, agents add privacy noise to their states before sharing them with other agents. The other agents use privatized states in their update laws, and then this process repeats. Adding privacy noise makes this problem equivalent to a certain consensus protocol with measurement noises. This paper focuses on private formation control, though the methods presented can be used to design and analyze other private consensus-style protocols. The private formation control protocol can be implemented in a completely distributed manner, and, contrary to some other privacy approaches, it does not require a central coordinator.

Beyond the privacy implementation, we develop guidelines for calibrating privacy in formation control. Specifically, we bound the quality of formation, or performance of the system, in terms of agents’ privacy parameters and connectedness of the network. We develop guidelines by using these bounds to trade off degraded performance for stricter privacy requirements and a less connected communication topology. This ultimately allows us to formulate privacy guidelines based on control-theoretic properties without requiring users to have an in-depth understanding of differential privacy. Guidelines are also developed by analyzing the sensitivity of system performance to changes in privacy parameters and communication topology to determine which has a larger impact on system performance. Furthermore, we develop necessary and sufficient conditions for when private formation control networks achieve a desired performance level.

The rest of the paper is organized as follows. Section II gives graph theory and differential privacy background. Section III states the differentially private formation control problem and Section IV solves it. Section V provides guidelines for calibrating privacy based on performance requirements for specific communication topologies. In Section VI, we analyze the sensitivity of system performance to changes in privacy and communication topology. Next, Section VII provides simulations, and Section VIII concludes the paper.

II Background and Preliminaries

In this section we briefly review the required background on graph theory and differential privacy.

II-A Graph Theory Background

A graph G = (V, E) is defined over a set of nodes V, and edges are contained in the set E. For N nodes, V is indexed over {1, …, N}. The edge set of G is a subset E ⊆ V × V, where the pair (i, j) ∈ E if nodes i and j share a connection and (i, j) ∉ E if they do not. This paper considers undirected, weighted, simple graphs. Undirectedness means that an edge (i, j) ∈ E is not distinguished from (j, i) ∈ E. Simplicity means that (i, i) ∉ E for all i ∈ V. Weightedness means that the edge (i, j) has a weight a_ij > 0. Of particular interest are connected graphs.

Definition 1

A graph G is connected if, for all i, j ∈ V, there is a sequence of edges one can traverse from node i to node j.

This paper uses the weighted graph Laplacian, which is defined with weighted adjacency and weighted degree matrices. The weighted adjacency matrix A of G is defined element-wise as A_ij = a_ij if (i, j) ∈ E and A_ij = 0 otherwise.

Because we only consider undirected graphs, A is symmetric. The weighted degree of node i is defined as d_i = Σ_{j=1}^{N} A_ij. The maximum degree is d_max = max_i d_i. The degree matrix D is the diagonal matrix D = diag(d_1, …, d_N). The weighted Laplacian of G is then defined as L = D − A.
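The adjacency, degree, and Laplacian constructions above can be sketched in a few lines. The function name and the example edge list below are illustrative choices, not from the paper:

```python
import numpy as np

def weighted_laplacian(num_nodes, weighted_edges):
    """Build the weighted Laplacian L = D - A of an undirected graph from an
    edge list of (i, j, w) triples with i != j and weight w > 0."""
    A = np.zeros((num_nodes, num_nodes))
    for i, j, w in weighted_edges:
        A[i, j] = A[j, i] = w        # undirected: A is symmetric
    D = np.diag(A.sum(axis=1))       # weighted degrees d_i on the diagonal
    return D - A

# 4-node line graph with unit weights
L = weighted_laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
```

By construction each row of L sums to zero, since the diagonal entry d_i exactly cancels the off-diagonal entries −a_ij.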

Let λ_k(·) denote the k-th smallest eigenvalue of a matrix. By definition, λ_1(L) = 0 for all graph Laplacians L, and λ_1(L) ≤ λ_2(L) ≤ ⋯ ≤ λ_N(L). The value of λ_2(L) plays a key role in this paper and is defined as follows.

Definition 2 ([fiedler1973algebraic])

The algebraic connectivity of a graph G is the second smallest eigenvalue λ_2(L) of its Laplacian, and G is connected if and only if λ_2(L) > 0.

Agent i’s neighborhood set N_i is the set of all agents agent i can communicate with, defined as N_i = {j ∈ V : (i, j) ∈ E}.
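The algebraic connectivity in Definition 2 is straightforward to compute numerically. This sketch uses a 4-node cycle graph with unit weights as a hypothetical example:

```python
import numpy as np

def algebraic_connectivity(L):
    """Second-smallest eigenvalue lambda_2 of a symmetric graph Laplacian."""
    return float(np.linalg.eigvalsh(L)[1])   # eigvalsh returns ascending order

# Laplacian of a 4-node cycle graph with unit weights (illustrative choice)
L_cycle = np.array([[ 2., -1.,  0., -1.],
                    [-1.,  2., -1.,  0.],
                    [ 0., -1.,  2., -1.],
                    [-1.,  0., -1.,  2.]])
lam2 = algebraic_connectivity(L_cycle)
assert lam2 > 0   # Definition 2: the graph is connected iff lambda_2 > 0
```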

II-B Differential Privacy Background

This section provides a brief description of the differential privacy background needed for the remainder of the paper. More complete expositions can be found in [le2013differentially, cynthia2006differential]. Overall, the goal of differential privacy is to make similar pieces of data appear approximately indistinguishable from one another. Differential privacy is appealing because its privacy guarantees are immune to post-processing [cynthia2006differential]. For example, private data can be filtered without threatening its privacy guarantees [le2013differentially]. More generally, arbitrary post-hoc computations on private data do not harm differential privacy. In addition, after differential privacy is implemented, an adversary with complete knowledge of the mechanism used to implement privacy has no advantage over another adversary without mechanism knowledge [dwork2014algorithmic, dwork2006calibrating].

In this paper we use differential privacy to privatize state trajectories of mobile autonomous agents. We consider vector-valued trajectories of the form Z = (z(1), z(2), …), where z(k) ∈ R^d for all k. The ℓ2-norm of Z is defined as ‖Z‖_{ℓ2} = (Σ_{k=1}^∞ ‖z(k)‖_2²)^{1/2}, where ‖·‖_2 is the ordinary 2-norm on R^d. Define the set ℓ2^d = {Z : ‖Z‖_{ℓ2} < ∞}.

The set ℓ2^d only contains trajectories that converge to the origin. However, we want to privatize arbitrary trajectories, including those that do not converge at all. To do so, we consider a larger set of trajectories. Let the truncation operator P_T be defined as P_T[Z](k) = z(k) for k ≤ T and P_T[Z](k) = 0 for k > T. Then we define the set ℓ̃2^d = {Z : P_T[Z] ∈ ℓ2^d for all T ∈ N}, and we will privatize state trajectories in this set.

Consider a network of N agents, where agent i’s state trajectory is denoted by X_i. The k-th element of agent i’s trajectory is x_i(k) ∈ R^n for k ∈ N. Agent i’s state trajectory belongs to ℓ̃2^n.

Differential privacy is defined with respect to an adjacency relation. We provide privacy to single agents’ state trajectories (rather than collections of trajectories as in some other works), and our choice of adjacency relation is defined for single agents. In the case of dynamic systems, the adjacency relation gives a notion of how similar trajectories are and specifies which trajectories must be made approximately indistinguishable from each other.

Definition 3 (Adjacency [yazdani2018differentially])

Fix an adjacency parameter b_i > 0 for agent i. The adjacency relation Adj_{b_i} is defined as Adj_{b_i}(X, X′) = 1 if ‖X − X′‖_{ℓ2} ≤ b_i, and Adj_{b_i}(X, X′) = 0 otherwise.

In words, two state trajectories of agent i are adjacent if and only if the ℓ2-norm of their difference is upper bounded by b_i. This means that every state trajectory within distance b_i of agent i’s state trajectory must be made approximately indistinguishable from it to enforce differential privacy.
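As a quick illustration of this adjacency check, assuming scalar trajectories and the ℓ2-norm described above (the function name and values are hypothetical):

```python
import numpy as np

def are_adjacent(X, X_prime, b):
    """Adjacency test in the spirit of Definition 3: trajectories X and X'
    are adjacent iff the l2-norm of their difference is at most b."""
    diff = np.asarray(X, float) - np.asarray(X_prime, float)
    return float(np.sqrt(np.sum(diff ** 2))) <= b

# Hypothetical scalar trajectories differing by a bounded perturbation
X = [0.0, 0.0]
X_prime = [3.0, 4.0]          # ||X - X'|| = 5 exactly
assert are_adjacent(X, X_prime, b=5.0)
assert not are_adjacent(X, X_prime, b=4.9)
```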

To calibrate differential privacy’s protections, agent i selects privacy parameters ε_i > 0 and δ_i ∈ (0, 1). These parameters determine the level of privacy afforded to X_i, and smaller values of each give stronger privacy [yazdani2018differentially]. The value of δ_i can be regarded as the probability that differential privacy fails for agent i, while ε_i can be regarded as the information leakage about agent i.

The implementation of differential privacy in this work provides differential privacy for each agent individually. This will be accomplished by adding noise to sensitive data directly, an approach called “input perturbation” privacy in the literature [leny20]. The noise is added by a privacy mechanism, which is a randomized map. We now provide a formal definition of differential privacy, which states the guarantees a mechanism must provide. First, fix a probability space (Ω, F, P). We consider outputs in ℓ̃2^n and use a σ-algebra over ℓ̃2^n, denoted Σ [hajek2015random].

Definition 4 (Differential Privacy)

Let ε > 0 and δ ∈ (0, 1) be given. A mechanism M : ℓ̃2^n → ℓ̃2^n is (ε, δ)-differentially private if, for all adjacent X, X′ ∈ ℓ̃2^n, we have P[M(X) ∈ S] ≤ e^ε P[M(X′) ∈ S] + δ for all S ∈ Σ.

The Gaussian mechanism will be used to implement differential privacy. The Gaussian mechanism adds zero-mean i.i.d. noise drawn from a Gaussian distribution pointwise in time. Stating the required distribution uses the Gaussian tail integral, the Q-function, defined as

Q(y) = (1/√(2π)) ∫_y^∞ e^{−z²/2} dz.

Lemma 1 (Gaussian Mechanism [le2013differentially])

Let ε > 0, δ ∈ (0, 1/2), and b > 0 be given, and fix the adjacency relation Adj_b. Let K_δ = Q^{−1}(δ). The Gaussian mechanism for (ε, δ)-differential privacy takes the form x̃(k) = x(k) + v(k), where v is a stochastic process with v(k) ∼ N(0, σ²I_n) i.i.d. and

σ ≥ (b/2ε)(K_δ + √(K_δ² + 2ε)).

This mechanism provides (ε, δ)-differential privacy to X.

For convenience, let κ(δ, ε) = (1/2ε)(K_δ + √(K_δ² + 2ε)), so that the Gaussian mechanism requires σ ≥ b κ(δ, ε).
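The noise calibration in Lemma 1 can be sketched directly, assuming the form of the Gaussian mechanism from the cited work, with K_δ = Q^{−1}(δ) computed via the standard normal quantile. The function name and parameter values are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def gaussian_mechanism_sigma(epsilon, delta, b):
    """Noise standard deviation for (epsilon, delta)-differential privacy
    under the b-adjacency relation, assuming the calibration
    sigma >= (b / (2 * epsilon)) * (K + sqrt(K**2 + 2 * epsilon)),
    with K = Q^{-1}(delta), per the cited Gaussian mechanism."""
    K = NormalDist().inv_cdf(1.0 - delta)   # Q^{-1}(delta)
    return (b / (2.0 * epsilon)) * (K + sqrt(K * K + 2.0 * epsilon))

sigma = gaussian_mechanism_sigma(epsilon=0.5, delta=1e-5, b=1.0)
# Stricter privacy (smaller epsilon) demands more noise
assert gaussian_mechanism_sigma(0.1, 1e-5, 1.0) > sigma
```

Note that σ scales linearly in the adjacency parameter b, so protecting a larger neighborhood of trajectories requires proportionally more noise.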

III Problem Formulation

In this section we state and analyze the differentially private formation control problem.

Problem 1

Consider a network of N agents with communication topology modeled by the undirected, simple, connected, and weighted graph G. Let x_i(k) ∈ R^n be agent i’s state at time k, let N_i be agent i’s neighborhood set, let γ > 0 be a step size, and let a_ij > 0 be a positive weight on the edge (i, j).

  1. Implement the formation control protocol

    x_i(k+1) = x_i(k) + γ Σ_{j∈N_i} a_ij (x_j(k) − x_i(k) + Δ_ij)

    in a differentially private manner.

  2. Analyze the relationship between network performance, privacy, and the underlying graph topology.

We will solve Problem 1 by bounding the performance of the network in terms of the privacy parameters of each agent and the algebraic connectivity of the underlying graph. This will allow us to analyze the relationship between performance, privacy, and topology.

Remark 1

We consider formation control in R^n, which is equivalent to running n scalar-valued formation controllers. Therefore, we analyze the scalar case. This simplifies the forthcoming analysis while also giving a granular error analysis that allows for controlling error in each dimension independently. The n-dimensional controller is simply an n-fold replication of the scalar-valued controller. If an overall error bound for all dimensions is necessary, one need only multiply the forthcoming one-dimensional error bounds by the number of dimensions, n.

Before solving Problem 1, we give the necessary definitions for formation control. First, we define agent- and network-level dynamics. Then, we detail how each agent will enforce differential privacy. Lastly, we explain how differentially private communications affect the performance of a formation control protocol and how to quantify quality of a formation.

III-A Multi-agent Formation Control

The goal of formation control is for agents in a network to assemble into some geometric shape or set of relative states. Multi-agent formation control is a well-researched problem, and there are several mathematical formulations one can use to achieve similar results [jadbabaie2015scaling, krick2009stabilisation, ren2007information, ren2007consensus, fax2004information, olfati2007consensus, mesbahi2010graph]. We will define relative distances between agents that communicate, and the control objective is for all agents to maintain the relative distances to each of their neighbors. This approach is similar to that of [ren2007information] and the translationally invariant formations presented in [mesbahi2010graph].

We define Δ_ij ∈ R^n for all (i, j) ∈ E as the desired relative distance between agents i and j. For the formation to be feasible, Δ_ij = −Δ_ji for all (i, j) ∈ E. Agent i has state x_i(k) ∈ R^n at time k, and the network control objective is driving x_i(k) − x_j(k) → Δ_ij for all (i, j) ∈ E. It is important to note that there is an infinite set of points that can be in formation; the formation can be centered around any point in R^n and meet the control requirement, i.e., we allow formations to be translationally invariant [mesbahi2010graph].

Now we define the agents’ update law. Let p_1, …, p_N be any collection of points in formation, such that p_i − p_j = Δ_ij for all (i, j) ∈ E, and let b = (p_1ᵀ, …, p_Nᵀ)ᵀ be the network-level formation specification. We consider the formation control protocol

x_i(k+1) = x_i(k) + γ Σ_{j∈N_i} a_ij [(x_j(k) − p_j) − (x_i(k) − p_i)].   (1)

As noted in Remark 1, we analyze convergence of Equation (1) at the component level. Thus, while x_i(k) ∈ R^n, we select an arbitrary dimension ℓ ∈ {1, …, n} and provide analysis for

x_i^ℓ(k+1) = x_i^ℓ(k) + γ Σ_{j∈N_i} a_ij [(x_j^ℓ(k) − p_j^ℓ) − (x_i^ℓ(k) − p_i^ℓ)],   (2)

i.e., each agent’s ℓ-th component, which proceeds identically for each ℓ. Below, we also use the vector of ℓ-th components of b, denoted

b^ℓ = (p_1^ℓ, …, p_N^ℓ)ᵀ.   (3)

Let x^ℓ(k) = (x_1^ℓ(k), …, x_N^ℓ(k))ᵀ and z(k) = x^ℓ(k) − b^ℓ. Then we analyze

z(k+1) = (I − γL) z(k).   (4)

Letting P = I − γL, we may write z(k+1) = P z(k). In this form, we have the following convergence result.

Lemma 2 ([olfati2007consensus], Theorem 2)

If G is connected, P is doubly stochastic, and γ ∈ (0, 1/d_max), then the protocol in Equation (4) reaches consensus asymptotically and lim_{k→∞} z(k) = ((1/N) Σ_{i=1}^N z_i(0)) 1.

Because the protocol in Equation (4) reaches consensus over z, it solves the translationally invariant formation control problem [mesbahi2010graph]. Using Δ_ij^ℓ = p_i^ℓ − p_j^ℓ to denote the state offset between agents i and j in dimension ℓ, the node-level protocol in Equation (1) can be rewritten for a single component as

x_i^ℓ(k+1) = x_i^ℓ(k) + γ Σ_{j∈N_i} a_ij (x_j^ℓ(k) − x_i^ℓ(k) + Δ_ij^ℓ),   (5)

which we use below.
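The noiseless protocol above can be sketched as follows; the graph, weights, step size, and formation specification are hypothetical choices for illustration:

```python
import numpy as np

# Sketch of the noiseless component-level protocol: z(k+1) = P z(k) with
# P = I - gamma * L, where z = x - b and b holds the formation points.
gamma = 0.1
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])          # 3-agent line graph, unit weights
P = np.eye(3) - gamma * L

b = np.array([0.0, 1.0, 2.0])            # formation points p_i (one component)
x = np.array([5.0, -2.0, 7.0])           # initial states
z = x - b
for _ in range(2000):
    z = P @ z                            # consensus on the shifted states
x_final = z + b
```

The shifted states z reach consensus on their initial average, so the final states satisfy the desired offsets x_j − x_i = b[j] − b[i] regardless of where the formation ends up centered, which is exactly the translational invariance discussed above.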

III-B Private Communications

When agent i transmits to the agents in N_i, it potentially exposes its state trajectory X_i to them and to adversaries or eavesdroppers. Agent i therefore sends a differentially private version of X_i to its neighborhood.

Agent i starts by selecting privacy parameters ε_i > 0 and δ_i ∈ (0, 1/2), and adjacency relation Adj_{b_i} with b_i > 0. Agent i then privatizes its state trajectory with the Gaussian mechanism. Let X̃_i denote the differentially private version of X_i, where, pointwise in time, x̃_i(k) = x_i(k) + v_i(k), with v_i(k) ∼ N(0, σ_i²) i.i.d. and σ_i ≥ b_i κ(δ_i, ε_i). Thus agent i keeps the trajectory X_i differentially private. Agent i then shares x̃_i(k) − p_i, which is also differentially private because subtracting p_i is merely post-processing [cynthia2006differential].
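A minimal sketch of one privatized transmission under the input-perturbation scheme described above; the function name and numeric values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize_step(x_i, p_i, sigma_i, rng):
    """One step of input-perturbation privacy: agent i perturbs its state
    with Gaussian noise of standard deviation sigma_i, then transmits the
    noisy state minus its formation point p_i (subtracting p_i is
    post-processing, so the privacy guarantee is unaffected)."""
    x_tilde = x_i + rng.normal(0.0, sigma_i)
    return x_tilde - p_i

shared = privatize_step(x_i=3.0, p_i=1.0, sigma_i=0.5, rng=rng)
```

The shared value is unbiased for x_i − p_i, which is why neighbors can still use it in their update laws even though any single transmission is noisy.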

III-C Private Formation Control

When each agent is sharing differentially private information, the node-level formation control protocol becomes

x_i^ℓ(k+1) = x_i^ℓ(k) + γ Σ_{j∈N_i} a_ij (x̃_j^ℓ(k) − x_i^ℓ(k) + Δ_ij^ℓ),   (6)

where agent i uses x_i^ℓ(k) rather than x̃_i^ℓ(k) because it always has access to its own unprivatized state. The stochastic nature of this protocol implies that agents no longer exactly reach a formation and, in particular, the states will never exactly converge to a steady-state value.

To analyze performance, let x̄ denote the state vector that the protocol in Equation (4) would converge to with initial state x^ℓ(0) and without privacy. Also let e(k) = x^ℓ(k) − x̄, which is the distance of the current state to the state the protocol would converge to without differential privacy. To quantify the effects of privacy on the network as a whole, let E[‖e(k)‖²] be the aggregate error of the network, and let

e_ss = lim sup_{k→∞} E[‖e(k)‖²]   (7)

be the steady-state error of the network.
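The effect of privacy noise on steady-state error can be illustrated empirically. This sketch measures the mean squared deviation from the instantaneous network average, as a proxy for the steady-state error defined above, on a hypothetical 3-agent complete graph; all numeric choices are illustrative:

```python
import numpy as np

# Monte Carlo sketch of the noisy network-level dynamics z(k+1) = P z(k) + w(k)
# on a complete graph with 3 agents; w(k) = gamma * A v(k) models privacy
# noise entering through neighbors' transmitted states.
rng = np.random.default_rng(0)
gamma, sigma = 0.1, 0.5
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
A = np.diag(np.diag(L)) - L      # weighted adjacency matrix A = D - L
P = np.eye(3) - gamma * L

x = np.array([4.0, 0.0, -4.0])
errors = []
for k in range(20000):
    x = P @ x + gamma * A @ rng.normal(0.0, sigma, 3)
    if k > 5000:                  # discard the transient
        dev = x - x.mean()        # deviation from the instantaneous average
        errors.append(dev @ dev)
e_ss_estimate = float(np.mean(errors))
```

The estimate settles at a small positive value rather than zero: privacy noise prevents exact convergence, exactly as the discussion above anticipates.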

Problem 1 requires us to quantify the relationship between privacy, encoded by the parameters (ε_i, δ_i); performance, encoded by e_ss; and topology, encoded by λ_2(L). These quantitative tradeoffs are the subject of the next section.

IV Differentially Private Formation Control

In this section we solve Problem 1. First, we show how the private formation control protocol can be modeled as a Markov chain. Then, we solve Problem 1 by deriving performance bounds that are functions of the underlying graph topology and each agent’s privacy parameters.

IV-A Formation Control as a Markov Chain

Problem 1 takes the form of a consensus protocol with Gaussian i.i.d. noise perturbing each agent’s state, which has been previously studied in [jadbabaie2015scaling]. We begin by expanding x̃_j^ℓ(k) = x_j^ℓ(k) + v_j(k) in Equation (6), which yields

x_i^ℓ(k+1) = x_i^ℓ(k) + γ Σ_{j∈N_i} a_ij (x_j^ℓ(k) + v_j(k) − x_i^ℓ(k) + Δ_ij^ℓ).   (8)

For the purposes of analysis, we will consider equivalent network-level dynamics given as follows.

Lemma 3

Let agents use the communication graph G with weighted Laplacian L. Then Equation (8) can be represented at the network level as z(k+1) = Pz(k) + w(k), where P = I − γL and w(k) = γAv(k), with A = D − L the weighted adjacency matrix and v(k) = (v_1(k), …, v_N(k))ᵀ.

Using Equation (5), Equation (8) can be expanded as

x_i^ℓ(k+1) = x_i^ℓ(k) + γ Σ_{j∈N_i} a_ij (x_j^ℓ(k) − x_i^ℓ(k) + Δ_ij^ℓ) + γ Σ_{j∈N_i} a_ij v_j(k).

The last term encodes privacy noise. Without this term we have formation control without noise, which can be represented at the network level as z(k+1) = Pz(k), with z(k) = x^ℓ(k) − b^ℓ and P = I − γL. Next, let v(k) = (v_1(k), …, v_N(k))ᵀ. Using the facts that Σ_{j∈N_i} a_ij v_j(k) = [Av(k)]_i and A = D − L, we have z(k+1) = Pz(k) + γAv(k). The lemma follows from setting w(k) = γAv(k).

For analysis, we use the network-level update law

z(k+1) = Pz(k) + w(k).   (9)

The main result of this paper uses the fact that a doubly stochastic matrix P can serve as the transition matrix of a Markov chain, and the properties of that Markov chain can be used to analyze the network dynamics associated with P. Before stating our main results, we first define the conditions under which a network modeled by an undirected, weighted, connected graph can be modeled as a Markov chain and establish the properties of this Markov chain.

Lemma 4

For an undirected, weighted, simple, connected graph G, let γ > 0 be given. If for all i the step size and graph weights are designed such that γd_i ≤ 1, then the matrix P = I − γL is doubly stochastic.

The weighted graph Laplacian L has i-th row with diagonal entry d_i and off-diagonal entries −a_ij.

Then row i of P is row i of I − γL, which has diagonal entry 1 − γd_i and off-diagonal entries γa_ij.

If agents i and j are not neighbors, then a_ij = 0, which implies that P_ij = 0. Then

Σ_j P_ij = 1 − γd_i + γ Σ_{j≠i} a_ij = 1 − γd_i + γd_i = 1.

Because γd_i ≤ 1 and a_ij ≥ 0, every entry of P is greater than or equal to 0. Because the sum of every row of P is 1 and every entry is greater than or equal to 0, P is row stochastic. The same procedure can be used to show that P is column stochastic because a_ij = a_ji for all i, j. Therefore P is doubly stochastic.
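Lemma 4’s conclusion is easy to check numerically; this sketch uses a hypothetical 4-node graph and a step size chosen so that every diagonal entry of P stays nonnegative:

```python
import numpy as np

# Check that P = I - gamma * L is doubly stochastic when the step size keeps
# every diagonal entry 1 - gamma * d_i nonnegative.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  3., -1., -1.],
              [ 0., -1.,  2., -1.],
              [ 0., -1., -1.,  2.]])   # hypothetical 4-node graph, unit weights
d_max = float(np.max(np.diag(L)))
gamma = 1.0 / (d_max + 1.0)            # gamma * d_i < 1 for every node
P = np.eye(4) - gamma * L

assert np.all(P >= 0)                                  # nonnegative entries
assert np.allclose(P.sum(axis=0), 1.0)                 # column stochastic
assert np.allclose(P.sum(axis=1), 1.0)                 # row stochastic
```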

Graph Laplacian properties can be used to make stronger statements. In particular, the Laplacian of an undirected graph is always symmetric, and therefore P is symmetric. Let the stationary distribution of the Markov chain be π, which satisfies πᵀP = πᵀ. With the symmetry of P we have the following explicit form for π.

Lemma 5

If P is symmetric, then its stationary distribution is π = (1/N, …, 1/N)ᵀ.

See [pinsky2010introduction, Chapter 4]. Furthermore, we can make a stronger statement about P.

Lemma 6

Let γd_max < 1. If G is connected, simple, and finite, then the Markov chain with transition matrix P is irreducible, aperiodic, positive recurrent, and reversible.

The fact that G is connected and finite implies P is irreducible [levin2017markov, Chapter 1]. G being connected and simple, along with γd_max < 1 and Lemma 4, implies that P has self-loops and thus is aperiodic [levin2017markov, Chapter 1]. The existence of π implies positive recurrence [levin2017markov, Theorem 21.12]. Lastly, the above properties along with the symmetry of P allow us to use Kolmogorov’s criterion to deduce that P is reversible [kelly2011reversibility, Section 1.5].

Lemmas 4-6 allow us to develop bounds on the steady-state error in Equation (7) using [jadbabaie2015scaling]. That work details several specific cases of consensus protocols with noise vectors of the form in Equation (9). However, for this paper we are only interested in the results when P is symmetric and the noise is i.i.d., which take the following form.

Lemma 7 (From [jadbabaie2015scaling])

If P is irreducible, aperiodic, and reversible, and if the noises at the nodes are uncorrelated, so that the off-diagonal elements of the noise covariance matrix are zero and each node’s noise has equal variance, then e_ss is upper bounded in terms of that variance and K, where K is the Kemeny constant of the Markov chain with transition matrix P.

In this work, given that w(k) = γAv(k), we wish to relate e_ss to the agents’ graph topology encoded in L. We do so with the following bound.

Lemma 8

Let P be the transition matrix of a finite, irreducible, and reversible Markov chain, and let λ_2(P) be the second-largest eigenvalue of P. Then the Kemeny constant K is bounded via 1/(1 − λ_2(P)) ≤ K ≤ (N − 1)/(1 − λ_2(P)).

Proof: The bounds on K are derived in [levene2002kemeny]. Then, because P = I − γL implies λ_2(P) = 1 − γλ_2(L), we have K ≤ (N − 1)/(γλ_2(L)).
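For reversible chains, the Kemeny constant can be computed from the spectrum of P (it equals the sum of 1/(1 − λ) over the non-unit eigenvalues), which makes the sandwich bound in Lemma 8 easy to verify on a small example; the graph and step size here are illustrative:

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny constant of a reversible chain with symmetric transition matrix
    P: the sum of 1/(1 - lambda) over all eigenvalues of P except the unit
    Perron eigenvalue."""
    eigvals = np.sort(np.linalg.eigvalsh(P))[::-1]   # descending; eigvals[0] = 1
    return float(np.sum(1.0 / (1.0 - eigvals[1:])))

# P = I - gamma * L for a 4-node cycle graph with unit weights
gamma = 0.2
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
P = np.eye(4) - gamma * L
K = kemeny_constant(P)
lam2 = float(np.sort(np.linalg.eigvalsh(P))[::-1][1])  # second-largest eigenvalue
# Sandwich bound in the spirit of Lemma 8: 1/(1-lam2) <= K <= (N-1)/(1-lam2)
assert 1.0 / (1.0 - lam2) <= K <= 3.0 / (1.0 - lam2)
```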

IV-B Solving Problem 1

Now we state the first of our main results: a bound on performance in terms of agents’ level of privacy and underlying graph topology.

Theorem 1

Consider the network-level private formation control protocol z(k+1) = Pz(k) + w(k). If γd_max < 1, G is connected and undirected, and σ_i ≥ b_iκ(δ_i, ε_i) for all i, then e_ss is upper bounded in terms of the agents’ privacy parameters and λ_2(L).

With Lemma 7, e_ss is upper bounded in terms of the noise variance and the Kemeny constant K, and using Lemma 8 to upper bound K gives a bound in terms of γλ_2(L). Recalling from Lemma 3 that w(k) = γAv(k), the variance of each entry of w(k) is determined by the agents’ noise variances σ_i² and the edge weights. Then, because σ_i ≥ b_iκ(δ_i, ε_i) for all agents i, bounding these variances and substituting into the upper bound from Lemma 7 gives the result.

We can simplify Theorem 1 when each agent has the same privacy parameters. From this point onward, we consider the case where σ_i = b_iκ(δ_i, ε_i), so that each agent adds the minimum amount of noise needed to attain (ε_i, δ_i)-differential privacy.

Corollary 1 (Homogeneous Privacy Parameters)

Let each agent in the network have the same privacy parameters ε and δ and the same adjacency parameter b. Then the bound in Theorem 1 specializes accordingly.

The rest of the paper focuses on the homogeneous case presented in Corollary 1, though all forthcoming results are easily adapted to the heterogeneous case by considering minima and maxima over all agents where appropriate.

V Network Design Guidelines

In this section we give guidelines for designing a differentially private formation control network. The question of interest is: given a specific communication topology, how much privacy is each agent allowed to have while still meeting a given steady-state error requirement? As noted in Remark 1, we do this for each dimension of formation control individually. A smaller value of ε corresponds to being more private. Therefore an upper bound on e_ss, which is the measure of system performance, implies a lower bound on ε, each agent’s privacy parameter.

We derive an impossibility result and sufficient conditions on ε for meeting the error requirement on specific networks. We consider connected graphs with uniform weights, where a_ij = a for all (i, j) ∈ E. By construction, the graphs we consider in this paper are weight-balanced, which implies that for any weights, the protocol in Equation (4) will converge to the unweighted average, as seen in Lemma 2. Throughout this section we fix δ to be some small number and let ε vary to tune the level of privacy, which is common in differential privacy implementations [hsu2014differential].

In general, a more connected topology, i.e., one with larger λ_2(L), can accommodate stronger privacy and still achieve the desired level of performance. This principle is used to determine the conditions under which it is impossible to satisfy the error requirement, shown next.

Theorem 2 (Impossibility Result)

Given a network of N agents with specified γ, δ, b, and steady-state error requirement, compute λ_2(L). Then the requirement cannot be assured if ε falls below a threshold determined by these quantities.

Proof: Being unable to achieve the error requirement is equivalent to the steady-state error exceeding the required level. This holds if and only if a corresponding inequality in σ holds. Then, using σ = bκ(δ, ε), we solve for ε, giving the stated threshold.

We now derive necessary and sufficient conditions for assuring the error requirement for common graphs: the complete graph, line graph, cycle graph, and star graph. These conditions can easily be checked a priori and give a network designer a simple means of determining whether a specific network will meet performance requirements. Proofs of Corollaries 3-5 are similar to that of Corollary 2 and are omitted.

Corollary 2 (Complete graph)

The complete graph has algebraic connectivity λ_2(L) = Na [de2007old]. Consider a network of N agents with specified γ, δ, b, and steady-state error requirement, and communication topology modeled by the complete graph. The network can be shown to satisfy the requirement if and only if ε meets the threshold induced by this value of λ_2(L).

Proof: The complete graph satisfies the design requirements if the steady-state error bound is at most the required level. For the complete graph with all weights equal to a, we have λ_2(L) = Na, and substituting this value yields an inequality in σ. Then, using σ = bκ(δ, ε), we solve the inequality for ε to find the stated condition.

Corollary 3 (Cycle Graph)

The cycle graph has algebraic connectivity λ_2(L) = 2a(1 − cos(2π/N)) [de2007old]. Consider a network of N agents with specified γ, δ, b, and steady-state error requirement, and communication topology modeled by the cycle graph. The network is assured to satisfy the requirement if and only if ε meets the threshold induced by this value of λ_2(L).

Corollary 4 (Line Graph)

The line graph has algebraic connectivity λ_2(L) = 2a(1 − cos(π/N)) [de2007old]. Consider a network of N agents with specified γ, δ, b, and steady-state error requirement, and communication topology modeled by the line graph. The network is assured to satisfy the requirement if and only if ε meets the threshold induced by this value of λ_2(L).

Corollary 5 (Star Graph)

The star graph has algebraic connectivity λ_2(L) = a [de2007old]. Consider a network of N agents with specified γ, δ, b, and steady-state error requirement, and communication topology modeled by the star graph. The network is assured to satisfy the requirement if and only if ε meets the threshold induced by this value of λ_2(L).

Remark 2

Fix γ, δ, b, and the steady-state error requirement. The lower bounds on ε found in Corollaries 2-5 were calculated numerically for networks with a varying number of agents, the results of which are in Table I.

TABLE I: Comparison of the lower bounds on ε for the complete, cycle, line, and star graphs and for various numbers of agents. This table illustrates that more-connected graphs accommodate privacy better when the network size grows, because they allow ε to be smaller, which gives stronger privacy protections.

VI Sensitivity Results

Theorem 1 and Corollaries 2-5 show that performance of a network is a function of the network topology, each agent’s privacy parameters, adjacency relationship, step size, and the number of agents. Some of these parameters are global, in that the parameter depends on the entire network, and some are local, in that the parameter can change at the agent level. For example, the network communication topology is a global parameter, while each agent’s privacy parameter, ε_i, is a local parameter.

Consider the following example: given a network that is not performing as desired, one option is to change the network’s topology and allow more agents to communicate, while another is to loosen the agents’ privacy requirements. Depending on design constraints, it may be more effective to allow more agents to communicate or to relax privacy requirements. It is useful to understand when changing ε is more effective than changing the network topology and vice versa. In this section we therefore analyze how sensitive network performance is to local changes in privacy and global changes in topology.

We start by letting f denote the upper bound on e_ss found earlier, viewed as a function of ε and λ_2(L). Now we take the partial derivatives of f with respect to ε and λ_2(L):

(10)
(11)

These partial derivatives give an understanding of how much f changes with a change in either ε or the network topology. As mentioned previously, we would like to determine when changing one is more effective than the other, which leads to the following result.

Theorem 3

Let