Dynamically Protecting Privacy, under Uncertainty
We propose and analyze the ε-Noisy Goal Prediction Game to study a fundamental privacy versus efficiency tradeoff in dynamic decision-making under uncertainty. An agent wants to quickly reach a final goal in a network through a sequence of actions, while the effects of these actions are subject to random noise and perturbations. Meanwhile, an overseeing adversary observes the effects of the agent's past actions and tries to predict the goal. We are interested in understanding the probability that the adversary predicts the goal correctly (prediction risk) as a function of the time it takes the agent to reach her goal (delay). Our main results characterize the prediction risk versus delay tradeoff under various network topologies. First, we establish an asymptotically tight characterization in complete graphs, showing that (1) intrinsic uncertainty always leads to a strictly positive overhead for the agent, even as her delay tends to infinity, and (2) under a carefully designed decision policy the overhead can be merely additive with respect to the noise level, and thus has an asymptotically negligible effect on system performance. We further apply these insights to studying network topologies generated by random graphs and to designing private networks. In both cases, we show how to achieve an additive overhead even for relatively sparse, non-complete networks. Finally, for general graphs, we construct a private agent strategy that can operate under any level of intrinsic uncertainty. Our analysis centers on a new class of "noise-harvesting" agent strategies which adaptively combine intrinsic uncertainty with additional artificial randomization to achieve efficient obfuscation.
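The abstract does not spell out the model, so the following is only a minimal toy simulation intended to make the delay versus prediction-risk tradeoff and the "noise-harvesting" idea concrete. Everything in it is an assumption for illustration: a complete graph, a per-move noise probability `eps`, an agent that hides the goal among uniformly chosen decoy visits, and an adversary that simply guesses uniformly among the nodes it saw visited. It is not the paper's construction or its optimal adversary.

```python
import random


def play_round(n_nodes, eps, num_decoys, rng):
    """One round of a toy goal-prediction game on a complete graph.

    Returns (delay, adversary_correct), where delay is the step at which
    the agent first actually visits its secret goal.
    """
    goal = rng.randrange(n_nodes)
    # Intended itinerary: the goal hidden among uniformly chosen decoys.
    decoys = rng.sample([v for v in range(n_nodes) if v != goal], num_decoys)
    itinerary = decoys + [goal]
    rng.shuffle(itinerary)

    visited, delay, t, i = [], None, 0, 0
    while i < len(itinerary):
        intended = itinerary[i]
        # Intrinsic noise: with probability eps the move lands on a uniformly
        # random node instead of the intended one. The stray visit is "free"
        # cover that the agent harvests instead of paying for another decoy.
        actual = intended if rng.random() > eps else rng.randrange(n_nodes)
        visited.append(actual)
        t += 1
        if actual == goal and delay is None:
            delay = t
        if intended == goal and actual != goal:
            # The noisy move missed the goal; retry it at a random later slot
            # so the goal's position in the trajectory stays unpredictable.
            itinerary.insert(rng.randint(i + 1, len(itinerary)), goal)
        i += 1

    # Adversary: guesses uniformly among distinct visited nodes -- a simple
    # stand-in for an optimal predictor, enough to show the tradeoff.
    guess = rng.choice(list(set(visited)))
    return delay, guess == goal


if __name__ == "__main__":
    rng = random.Random(0)
    n, eps = 200, 0.3
    for num_decoys in (0, 4, 19):
        rounds = [play_round(n, eps, num_decoys, rng) for _ in range(5000)]
        avg_delay = sum(d for d, _ in rounds) / len(rounds)
        risk = sum(c for _, c in rounds) / len(rounds)
        print(f"decoys={num_decoys:2d}  avg delay={avg_delay:6.2f}  "
              f"empirical prediction risk={risk:.3f}")
```

Running the sketch shows the qualitative phenomenon the abstract describes: more decoy visits (longer delay) drive the empirical prediction risk down roughly like one over the number of distinct visited nodes, and the noisy detours add cover essentially for free, so the cost of noise shows up as a modest additive delay rather than a multiplicative one.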