
What's a little leakage between friends?

by Sebastian Angel et al.

This paper introduces a new attack on recent messaging systems that protect communication metadata. The main observation is that if an adversary manages to compromise a user's friend, it can use this compromised friend to learn information about the user's other ongoing conversations. Specifically, the adversary learns whether a user is sending other messages or not, which opens the door to existing intersection and disclosure attacks. To formalize this compromised friend attack, we present an abstract scenario called the exclusive call center problem that captures the attack's root cause, and demonstrate that it is independent of the particular design or implementation of existing metadata-private messaging systems. We then introduce a new primitive called a private answering machine that can prevent the attack. Unfortunately, building a secure and efficient instance of this primitive under only computational hardness assumptions does not appear possible. Instead, we give a construction under the assumption that users can place a bound on their maximum number of friends and are willing to leak this information.



1. Introduction

In the past few years there has been a renaissance of messaging systems (angel16unobservable; vandenhoof15vuvuzela; lazar16alpenhorn; kwon17atom; tyagi17stadium; alexopoulos17mcmix) that allow users to communicate online without their messages being observed by ISPs, companies, or governments. These systems target a property called metadata-privacy which is stronger than end-to-end encryption: encryption hides the content of messages but it does not hide their existence nor any of their associated metadata (identity of the sender or recipient, frequency, time, and duration of communication, etc.). While hiding metadata has been the subject of a long line of work dating back three decades (chaum81untraceable), there is renewed interest due to a proliferation of controversial surveillance practices (guardian13nsa; intercept14new; guardian15gchq; eff15dragnet; tor15cmu), and the monetization of users’ private information (wired12shady; cnet12facebook; forbes13att; reuters16yahoo).

Existing metadata-private messaging (MPM) systems guarantee that as long as the sender and the recipient of a message are not compromised, their communication cannot be observed by an adversary (the adversary learns that users are part of the system, but not whether they communicate). If either the sender or the recipient is compromised, MPM systems provide no guarantees (e.g., a compromised sender could trivially disclose to whom it is sending a message). In this paper we investigate whether an adversary—by compromising and leveraging a user’s friends—can learn anything about the user’s other ongoing communications.

At first glance the answer to the above question appears to be no (assuming that the user does not voluntarily disclose the existence of other communications to compromised friends). After all, the guarantees of MPM systems should prevent the adversary from learning anything about conversations between uncompromised clients. Nevertheless, we find that this is not actually the case: engaging in a conversation with a compromised client consumes a limited resource, namely the number of concurrent conversations that a user can support. By observing a client's responses (or lack thereof), a compromised friend can learn whether the user has fully utilized this limited resource (i.e., the user is busy talking to others). In Section 4 we show how this one bit of information enables existing intersection and disclosure attacks (raymond00traffic; agrawal03disclosure; kesdogan04hitting; kesdogan09breaking; danezis03statistical; mallesh10reverse; troncoso08perfect; danezis09vida; danezis04statistical; danezis07two; perezgonzales12understanding) that invalidate MPM systems' guarantees.

More interestingly, our compromised friend attack applies to all MPM systems that support a notion of dialing (lazar16alpenhorn) (or any other mechanism that allows clients to start new conversations over time). We give a formal characterization of the attack with a scenario that we call the exclusive call center problem, which abstracts away the design or implementation of MPM systems. We then introduce a primitive called a private answering machine that solves the abstract problem and can be used by clients of MPM systems to prevent the compromised friend attack. In particular, clients use a private answering machine to select with which friends to communicate, while guaranteeing that compromised friends learn no information about other ongoing communications.

Unfortunately, building a cryptographically-secure private answering machine that does not require placing assumptions on the number of callers (i.e., the number of friends that a user can have) or incurring prohibitive delay or bandwidth appears hard. We compromise on this point and give a construction that can be used by MPM systems under the assumption that users can place a bound on their maximum number of friends. Our construction has two limitations: (1) it leaks the bound chosen by the user, and (2) it increases the latency of communication between a pair of users proportional to the chosen bound. Despite these limitations, our work addresses a previously overlooked attack and allows users in MPM systems to communicate without leaking sensitive information.

In summary, the contributions of this work are:

  • An abstraction that captures the leakage from oversubscribing a fixed resource in the presence of adversarial probing (§3).

  • The compromised friend attack, which exploits the fixed communication capacity of MPM systems (§4).

  • The construction of a private answering machine (§5) that can be used in MPM systems to avoid leaking information to compromised friends about users' other ongoing conversations (§6).

2. Background

The goal of metadata-private messaging systems (vandenhoof15vuvuzela; kwon17atom; alexopoulos17mcmix; tyagi17stadium; angel16unobservable) is to allow a pair (or group) of friends to exchange bidirectional messages without leaking metadata to any party besides the sender and the recipient. A pair of users are friends if they have previously shared a secret, either out-of-band (e.g., in person at a coffee shop) or in-band with an add-friend protocol (lazar16alpenhorn). Users in these systems exchange a fixed number of messages with their friends in discrete time epochs called rounds; users participate in every round even if they are idle. This ensures that an attacker that monitors the network cannot tell when users are actively communicating with their friends or starting/stopping conversations. It also places a bound on the number of active conversations that a user can have at any time; we refer to this bound as the client's communication capacity.
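The round structure described above can be sketched as follows. This is a toy illustration, not any system's actual protocol; the capacity, message size, and function names are assumptions made for the example.

```python
import secrets

CAPACITY = 2   # hypothetical fixed number of message slots per round
MSG_LEN = 256  # hypothetical fixed message size in bytes

def run_round(active_conversations):
    """Emit exactly CAPACITY fixed-size messages every round.

    Real slots carry conversation traffic; unused slots carry
    indistinguishable dummy messages, so a network observer sees the
    same traffic pattern whether the user is idle or busy.
    """
    outgoing = []
    for convo in active_conversations[:CAPACITY]:
        outgoing.append((convo, secrets.token_bytes(MSG_LEN)))  # real (encrypted) payload
    while len(outgoing) < CAPACITY:
        outgoing.append((None, secrets.token_bytes(MSG_LEN)))   # dummy cover traffic
    return outgoing

# An idle user and a busy user produce identical-looking traffic:
assert len(run_round([])) == len(run_round(["alice", "bob"])) == CAPACITY
```

The truncation to `CAPACITY` slots is exactly the bound the paper calls communication capacity: a third conversation simply cannot be serviced in the same round.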

Once a client reaches its communication capacity, it cannot send messages to other friends until it ends an existing conversation. As a result, clients use a separate dialing protocol to coordinate the start and end of conversations. In a dialing protocol, a client sends a short message (a few bits) to a friend regardless of whether the friend’s client has reached its communication capacity. The dialing message is sufficient to notify a user that one of their friends wishes to communicate, and to agree on a round to start the conversation (lazar16alpenhorn). There are multiple ways in which a client can react to a dialing message. Some natural choices are:

  • If the client has not reached its communication capacity, it can automatically accept the call and start a new conversation.

  • The client could prompt the user (similar to calling a friend in Skype), who can choose to accept or reject the call.

  • If at capacity, the client could randomly end an existing conversation to make room for a new one.

Each of these choices is problematic. If the client's communication capacity is 1 (as in some of the existing systems (vandenhoof15vuvuzela; tyagi17stadium)) and the client automatically accepts calls, then any of the client's friends can easily learn when the client is not active in a conversation simply by calling. Leaving the choice to the user is slightly better, since the user can choose to ignore or delay accepting some calls, but their choices can still inadvertently lead to intersection attacks. Ending conversations randomly hurts usability and might still leak information. The goal of the next section is to formalize the desired properties of the client's answering mechanism.
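The first (and worst) of these policies can be made concrete with a short sketch. The function and caller names are hypothetical; the point is only that the accept/busy response itself is the leaked bit.

```python
def auto_accept_client(active_conversations, capacity=1):
    """Naive dialing policy: accept any incoming call if a slot is free."""
    def on_dial(caller):
        if len(active_conversations) < capacity:
            active_conversations.append(caller)
            return "accepted"
        return "busy"
    return on_dial

# A compromised friend ("mallory") probes the user simply by dialing:
idle_user = auto_accept_client(active_conversations=[])
busy_user = auto_accept_client(active_conversations=["alice"])

# The response directly reveals whether the user was already talking.
assert idle_user("mallory") == "accepted"  # user was not in a conversation
assert busy_user("mallory") == "busy"      # user is busy with someone else
```

Repeating this probe over many rounds gives the adversary exactly the stream of busy/idle observations that intersection attacks consume.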

3. Exclusive call center problem

In order to avoid the details of particular MPM systems, we introduce an abstract scenario called the exclusive call center problem. It consists of a call center that has k operators capable of receiving calls (i.e., the call center has communication capacity k). The call center promises exclusivity to a single organization. This might be desirable to ensure high quality of service, for legal reasons, or to prevent the accidental leak of trade or business secrets to callers of a different organization. When a caller issues a call, an automatic answering machine A routes the call to an available operator, who then processes the call. If A receives more calls than there are available operators, then A routes as many calls as it can, and notifies the remaining callers that all operators are busy.

While the above seems reasonable, the call center in question is greedy and wishes to oversubscribe its resources by contracting with a second organization, thereby violating its exclusivity agreement. This poses two problems for the call center. First, A cannot determine to which organization a call belongs; only an operator is in a position to make that distinction. Second, with the current decision logic of A (route to available operators, notify remaining callers that operators are busy), a group of callers from the same organization can collectively determine that they are not being given exclusive access to the call center (e.g., by placing k calls and noticing that not all of them are picked up). Given these issues and the limit of k operators (which is publicly known), can the call center do anything to maintain the illusion of exclusivity?
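The collective detection just described can be simulated in a few lines. The first-come-first-served routing below stands in for the naive answering machine; the organization and caller labels are made up for the example.

```python
def naive_answering_machine(calls, k):
    """Route calls first-come-first-served to the k operators; reject the rest."""
    return set(calls[:k])

K = 3  # number of operators, publicly known

# Colluding callers from the first organization place k simultaneous calls.
o1_calls = ["o1-a", "o1-b", "o1-c"]

# Scenario A: the call center honors exclusivity; all k calls are answered.
assert naive_answering_machine(o1_calls, K) == set(o1_calls)

# Scenario B: a hidden caller from a second organization occupies an operator.
mixed = ["o2-x"] + o1_calls
answered = naive_answering_machine(mixed, K)

# Not all k of the first organization's calls are picked up, so its
# callers detect the exclusivity violation by comparing notes.
assert len(answered & set(o1_calls)) < len(o1_calls)
```

This is why the answering machine cannot simply "route up to k and report busy": its responses are a deterministic function of load that the colluding callers can predict.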

The first observation that the call center's CEO makes is that while there are k operators, there is no guarantee that all of them are available at any given point in time. After all, operators are human and take breaks. This, the CEO believes, opens the door to some level of plausible deniability. In particular, if A gives a caller from the first organization a busy signal, it could mean:

  1. All k operators are busy handling other callers from the first organization.

  2. Some operators are busy handling callers from the first organization, and the remaining operators are on a break.

  3. Some operators are busy handling callers from the first organization, some are busy handling callers from the second organization, and some are on a break.

Possibility 1 is the expected scenario of a high-efficiency trustworthy call center. Possibility 2 is an unwanted outcome since it is inefficient, but it does not violate the contractual agreement. Possibility 3, however, violates the promise of exclusivity. The goal of the call center is to design A such that it is hard for either of the two organizations and their callers (assume no coordination between the organizations) to infer that possibility 3 is the one taking place. As we alluded to earlier, the key challenge is that A cannot distinguish between callers (and determine to which organization they belong), and therefore cannot selectively lie to keep a consistent set of responses. We thus ask whether there exists an A that can leverage the proposed ambiguity to fool the organizations into thinking they are exclusive.

We think of A as acting in rounds, where in each round A receives a set of calls S. We seek two informal properties from A.

  • Liveness: eventually a caller in S gets to talk to an operator.

  • Privacy: it is computationally hard for any colluding subset of callers K (some of whom may get to speak to operators) to distinguish between a scenario where S = K and a scenario where S = K ∪ {u} for some caller u ∉ K (i.e., it is difficult for the colluding subset of callers to determine whether they are the only callers or not).

The liveness guarantee is needed for A to be useful, but also to rule out a trivial solution: if A never puts anyone through to an operator, then the probability that any colluding set of callers can distinguish between S = K and S = K ∪ {u} is 1/2.

Security game.

To define privacy and liveness more formally, we use a security game played between an adversary Adv and a challenger, parameterized by a polynomial-time answering machine A and a security parameter λ. A takes as input a subset of callers S from the set of all possible callers U, a communication capacity k, and a random string r, where |r| = λ. A outputs a set of callers P ⊆ S such that |P| ≤ k.

  1. Adv is given oracle access to A, and can issue a polynomial number of queries to A with arbitrary inputs S, k, r. For each query, Adv can observe the corresponding result A(S, k, r).

  2. The challenger samples a random bit b uniformly in {0, 1}, and a random string r uniformly in {0, 1}^λ.

  3. Adv picks a set of callers K ⊂ U (where |K| < |U|) and a positive integer k, and sends them to the challenger.

  4. The challenger sets S = K if b = 0, and S = K ∪ {u} if b = 1 (where u is a uniform random element from the set U ∖ K).

  5. The challenger calls A to obtain P = A(S, k, r), where P ⊆ S.

  6. Finally, the challenger removes u from P (if it is present) and returns the result (P ∖ {u}) to Adv.

  7. Adv outputs its guess b′, and wins the security game if b′ = b.

In summary, the adversary’s goal in the game is to determine whether the challenger is communicating with the uncompromised caller u, after having compromised all of the other callers (represented by K).
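The game above can be run empirically against a candidate answering machine. The harness below is a sketch under assumed names; it pits a simple adversary strategy against a first-come-first-served machine and estimates the adversary's win rate, which should be far above 1/2 for such an insecure machine.

```python
import random

def fcfs_machine(calls, k):
    """Insecure answering machine: answer the first k calls it receives."""
    return set(calls[:k])

def play_game(answering_machine, adversary_callers, k, trials=2000):
    """Estimate the adversary's success rate in the security game.

    The challenger flips b; if b = 1 it adds one uncompromised caller u.
    The adversary sees the answered set with u removed and guesses b.
    """
    wins = 0
    for _ in range(trials):
        b = random.randrange(2)
        s = list(adversary_callers)
        if b == 1:
            s.append("u")       # the uncompromised caller
        random.shuffle(s)       # the challenger hides call order
        answered = answering_machine(s, k) - {"u"}
        # Strategy vs. a "route up to k" machine: if fewer than
        # min(k, |K|) of the compromised callers were answered, a hidden
        # caller must have taken a slot, so guess b = 1.
        guess = 1 if len(answered) < min(k, len(adversary_callers)) else 0
        wins += (guess == b)
    return wins / trials

rate = play_game(fcfs_machine, ["k1", "k2", "k3"], k=3)
assert rate > 0.7  # adversary wins well over half the time
```

Against this machine the adversary wins whenever the hidden caller u lands in one of the first k slots, so its advantage over random guessing is large; a private answering machine must drive this win rate down to 1/2 + negl(λ).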

Definition 1 (Private answering machine).

An answering machine A guarantees privacy if, in the above security game with parameter λ, for all PPT algorithms Adv there exists a negligible function negl (a function f is negligible if for every positive polynomial p there exists an integer N such that for all λ > N, f(λ) < 1/p(λ)) such that Pr[b′ = b] ≤ 1/2 + negl(λ), where the probability is over the random coins of A and the challenger.
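To see what a machine satisfying this definition might look like under the paper's bounded-friends assumption, consider the following toy sketch (this is an illustration of the idea, not the paper's actual construction; the bound, schedule, and names are assumptions). Each potential caller owns a dedicated slot in a repeating schedule, so whether a caller is answered depends only on the round number and never on the other ongoing calls.

```python
def slotted_answering_machine(friend_ids, round_number, bound=4):
    """Toy sketch: each of the user's (at most `bound`) friends owns a
    dedicated slot in a repeating schedule of length `bound`.

    A caller's answer depends only on public information (its slot and
    the round number), so compromised friends learn nothing about the
    user's other conversations. The cost: `bound` itself is leaked, and
    a caller may wait up to `bound` rounds to be answered.
    """
    schedule = {f: i for i, f in enumerate(sorted(friend_ids))}
    active_slot = round_number % bound
    return {f for f, slot in schedule.items() if slot == active_slot}

friends = ["alice", "bob", "carol"]
for r in range(8):
    answered = slotted_answering_machine(friends, r)
    assert len(answered) <= 1  # at most one friend per round
assert slotted_answering_machine(friends, 0) == {"alice"}
```

This mirrors the two limitations stated in the introduction: the construction leaks the user-chosen bound, and latency between a pair of users grows in proportion to it.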