Formal Analysis of an E-Health Protocol

08/25/2018
by Naipeng Dong, et al.

Given the sensitive nature of health data, security and privacy in e-health systems are of prime importance. It is crucial that an e-health system ensure that users remain private - even if they are bribed or coerced to reveal themselves, or others: a pharmaceutical company could, for example, bribe a pharmacist to reveal information which breaks a doctor's privacy. In this paper, we first identify and formalise several new but important privacy properties for enforcing doctor privacy. Then we analyse the security and privacy of a complicated and practical e-health protocol (DLV08). Our analysis uncovers ambiguities in the protocol, and shows to what extent these new privacy properties, as well as other security properties (such as secrecy and authentication) and privacy properties (such as anonymity and untraceability), are satisfied by the protocol. Finally, we address the ambiguities we found, which result in both security and privacy flaws, and propose suggestions for fixing them.




1 Introduction

The inefficiency of traditional paper-based health care and advances in information and communication technologies, in particular cloud computing, mobile, and satellite communications, constitute the ideal environment to facilitate the development of widespread electronic health care (e-health for short) systems. E-health systems are distributed health care systems using devices and computers which communicate with each other, typically via the Internet. E-health systems aim to support secure sharing of information and resources across different health care settings and workflows among different health care providers. The services of such systems are intended to be more secure, effective, efficient and timely than the currently existing health care systems.

Given the sensitive nature of health data, handling this data must meet strict security and privacy requirements. In traditional health care systems, this is normally implemented by controlling access to the physical documents that contain the health care data. Security and privacy are then satisfied, assuming only legitimate access is possible and assuming that those with access do not violate security or privacy.

However, the introduction of e-health systems upends this approach. The main benefit of e-health systems is that they facilitate digital exchange of information amongst the various parties involved. This has two major consequences: first, the original health care data is shared digitally with more parties, such as pharmacists and insurance companies; and second, this data can be easily shared by any of those parties with an outsider. Clearly, the assumption of a trusted network can no longer hold in such a setting. Given that it is trivial for a malicious entity to intercept or even alter digital data in transit, access control approaches to security and privacy are no longer sufficient. Therefore, we must consider security and privacy of the involved parties with respect to an outsider, the Dolev-Yao adversary [DY83], who controls the communication network (i.e., the adversary can observe, block, create and alter information). Communication security against such an adversary is mainly achieved by employing cryptographic communication protocols. Cryptography is also employed to preserve and enforce privacy, which prevents problems such as prescription bribery.

It is well known that designing such protocols is error-prone: time and again, flaws have been found in protocols that claimed to be secure (e.g., electronic voting systems [BT94, LK00] have been broken [HS00, LK02]). Therefore, we must require that security and privacy claims of an e-health protocol are verified before the protocol is used in practice. Without verifying that a protocol satisfies its security and privacy claims, subtle flaws may go undiscovered.

In order to objectively verify whether a protocol satisfies its claimed security and privacy requirements, each requirement must be formally defined as a property. Various security and privacy properties have already been defined in the literature, such as secrecy, authentication, anonymity and untraceability. We refer to these properties as regular security and privacy properties. While they are necessary to ensure security and privacy, by themselves these regular properties are not sufficient. Benaloh and Tuinstra pointed out the risk of subverting a voter [BT94] to sell her vote. The idea of coercing or bribing a party into nullifying their privacy is hardly considered in the literature of e-health systems (notable exceptions include [Mat98, dDLVV08]). However, this concept impacts e-health privacy: for example, a pharmaceutical company could bribe doctors to prescribe only their medicine. Therefore, we cannot consider privacy only with respect to the Dolev-Yao adversary. To fully evaluate privacy of e-health systems, we must also consider this new aspect of privacy in the presence of an active coercer - someone who is bribing or threatening parties to reveal private information. We refer to this new class of privacy properties as enforced privacy properties. In particular, we identify the following regular and enforced privacy properties [DJP12b] to counter doctor bribery: prescription privacy (a doctor cannot be linked to his prescriptions); receipt-freeness (a doctor cannot prove his prescriptions to the adversary, which deters doctor bribery); independency of prescription privacy (third parties cannot help the adversary to link a doctor to the doctor's prescriptions, which prevents others from undermining a doctor's prescription privacy); and independency of receipt-freeness (neither a doctor nor third parties can prove the doctor's prescriptions to the adversary, which prevents anyone from undermining a doctor's receipt-freeness).

Contributions. We identify three enforced privacy properties in e-health systems and are the first to provide formal definitions for them. In addition, we develop an in-depth applied pi calculus model of the DLV08 e-health protocol [dDLVV08]. As this protocol was designed for practical use in Belgium, it needed to integrate with the existing health care system. As such, it has become a complicated system with many involved parties that relies on complex cryptographic primitives to achieve a multitude of goals. We formally analyse privacy and enforced privacy properties of the protocol, as well as regular security properties. We identify ambiguities in the protocol description that cause both security and privacy flaws, and propose suggestions for fixing them. The ProVerif model and the full analysis of the DLV08 protocol can be found in [DJP12a].

Remark. This article is a revised and extended version of [DJP12b] that appears in the proceedings of the 17th European Symposium on Research in Computer Security (ESORICS’12). In this version we have added (1) the full formal modelling of the DLV08 protocol in the applied pi calculus (see Section 5); (2) the detailed analysis of secrecy and authentication properties of the protocol (see Section 6); and (3) details of the analysis of privacy properties of the protocol which are not described in the conference paper [DJP12b] (see Section 6). In addition, it contains an overview on privacy and enforced privacy in e-health systems (see Section 2) and a brief but complete description of the applied pi calculus (see Section 3.1).

2 Privacy and enforced privacy in e-health

Ensuring privacy in e-health systems has been recognised as a necessary prerequisite for the adoption of such systems by the general public [MRS06, KAB09]. However, due to the complexity of e-health settings, existing privacy control techniques, e.g., formal privacy methods, from domains such as e-voting (e.g., [DKR09, JPM09]) and e-auctions (e.g., [DJP11]) do not carry over directly. In e-voting and e-auctions, there is a natural division into two types of roles: participants (voters, bidders) and authorities (who run the election/auction). In contrast, e-health systems have to deal with a far more complex constellation of roles, including doctors, patients, pharmacists, insurance agencies, oversight bodies, etc. These roles interact in various ways with each other, requiring private data of one another, which makes privacy even more complex.

Depending on the level of digitalisation, health care systems have different security requirements. If electronic devices are only used to store patient records, then ensuring privacy mainly requires local access control. On the other hand, if data is communicated over a network, then communication privacy becomes paramount. Below, we sketch a typical situation of using a health care system, indicating what information is necessary where. This will help to gain an understanding for the interactions and interdependencies between the various roles.

Typically, a patient is examined by a doctor, who then prescribes medicine. The patient goes to a pharmacist to get the medicine. The medicine is reimbursed by the patient’s health insurance, and the symptoms and prescription of the patient may be logged with a medical research facility to help future research.

This overview hides many details. The patient may possess medical devices enabling her to undergo the examination at home, after which the devices digitally communicate their findings to a remote doctor. The findings of any examination (by doctor visit or by digital devices) need to be stored in the patient's health record, either electronic or on paper, which may be stored at the doctor's office, on a server in the network, on a device carried by the patient, or any combination of these. Next, the doctor returns a prescription, which also needs to be stored. The pharmacist needs to know what medicine is required, which is privacy-sensitive information. Moreover, to prevent abuse of medicine, the pharmacist must verify that the prescription came from an authorised doctor, is intended for this patient, and was not fulfilled before. On top of that, the pharmacist may be allowed (or even required) to substitute medicine of one type for another (e.g., brand medicine for generic equivalents), which again must be recorded in the patient's health record. For reimbursement, the pharmacist or the patient registers the transaction with the patient's health insurer. In addition, regulations may require that such information is stored (in aggregated form or directly) for future research or logged with government agencies. Some health care systems allow emergency access to health data, which complicates privacy matters even further. Finally, although a role may need to have access to privacy-sensitive data of other roles, this does not mean that its holder is trusted to ensure the privacy of those other roles. For instance, a pharmacist may sell his knowledge about prescription behaviour to a pharmaceutical company.

From the above overview on e-health systems, we can conclude that existing approaches to ensuring privacy from other domains deal with far simpler division of roles, and they are not properly equipped to handle the role diversity present in e-health systems. Moreover, they do not address the influence of other roles on an individual’s privacy. Therefore, current privacy approaches cannot be lifted directly, but must be redesigned specifically for the e-health domain.

In the following discussions, we focus on the privacy of the main actors in health care: patient privacy and doctor privacy. Privacy of roles such as pharmacists does not impact the core process in health care, and is therefore relegated to future work. We do not consider privacy of roles performed by public entities such as insurance companies, medical administrations, etc.

2.1 Related work

Patient privacy in e-health is traditionally seen as vital to establishing a good doctor-patient relationship. This is even more pertinent with the emergence of the Electronic Patient Record [And96]. A necessary early stage of e-health is to transform the paper-based health care process into a digital process. The most important changes in this stage are made to patient information processing, mainly health care records. Privacy policies are the de facto standard way to express privacy requirements for such patient records. There are three main approaches to implementing these requirements: access control, architectural design, and the use of cryptography.

Patient privacy by access control.

The most obvious way to preserve privacy of electronic health care records is to limit access to these records. The need for access control is supported by several privacy threats to personal health information listed by Anderson [And96]. Controlling access is not as straightforward as it sounds though: the need for access changes dynamically (e.g., a doctor only needs access to records of patients that he is currently treating). Consequently, there exists a wide variety of access control approaches designed for patient privacy in the literature, from simple access rules (e.g., [And96]), to consent-based access rules (e.g., [Lou98]), role-based access control (RBAC) (e.g., [RCHS03]), organisation based access control (e.g., [KBM03]), etc.

Patient privacy by architectural design.

E-health systems cater to a number of different roles, including doctors, patients, pharmacists, insurers, etc. Each such role has its own sub-systems or components. As such, e-health systems can be considered as a large network of systems, including administrative system components, laboratory information systems, radiology information systems, pharmacy information systems, and financial management systems. Diligent architectural design is an essential step to make such a complex system function correctly. Since privacy is important in e-health systems, keeping privacy in mind when designing the architecture of such systems is a promising path towards ensuring privacy [SV09]. Examples of how to embed privacy constraints in the architecture are given by the architecture of wireless sensor networks in e-health [KLS10], proxies that may learn location but not patient ID [MKDH09], an architecture for cross-institution image sharing in e-health [CHCK07], etc.

Cryptographic approaches to patient privacy.

Cryptography is necessary to ensure private communication between system components over public channels (e.g., [BB96]). For example, Van der Haak et al. [vWB03] use digital signatures and public-key authentication (for access control) to satisfy legal requirements for cross-institutional exchange of electronic patient records. Ateniese et al. [ACdD03] use pseudonyms to preserve patient anonymity, and enable a user to transform statements concerning one of his pseudonyms into statements concerning one of his other pseudonyms (e.g., transforming a prescription for the pseudonym used with his doctor to a prescription for the pseudonym used with the pharmacist). Layouni et al. [LVS09] consider communication between health monitoring equipment at a patient’s home and the health care centre. They propose a protocol using wallet-based credentials (a cryptographic primitive) to let patients control when and how much identifying information is revealed by the monitoring equipment. More recently, De Decker et al. [dDLVV08] propose a health care system for communication between insurance companies and administrative bodies as well as patients, doctors and pharmacists. Their system relies on various cryptographic primitives to ensure privacy, including zero-knowledge proofs, signed proofs of knowledge, and bit-commitments. We will explain this system in more detail in Section 4.

Doctor privacy.

A relatively understudied aspect is that of doctor privacy. Matyáš [Mat98] investigates the problem of enabling analysis of prescription information while ensuring doctor privacy. His approach is to group doctors, and release the data per group, hiding who is in the group. He does not motivate a need for doctor privacy, however. Two primary reasons for doctor privacy have been identified in the literature: (1) (Ateniese et al. [ACdD03]) to safeguard doctors against administrators setting specific efficiency metrics on their performance (e.g., requiring the cheapest medicine be used, irrespective of the patient’s needs). To address this, Ateniese et al. [Ad02, ACdD03] propose an anonymous prescription system that uses group signatures to achieve privacy for doctors; (2) (De Decker et al. [dDLVV08]) to prevent a pharmaceutical company from bribing a doctor to prescribe their medicine. A typical scenario can be described as follows. A pharmaceutical company seeks to persuade a doctor to favour a certain kind of medicine by bribing or coercing. To prevent this, a doctor should not be able to prove which medicine he is prescribing to this company (in general, to the adversary). This implies that doctor privacy must be enforced by e-health systems. De Decker et al. also note that preserving doctor privacy is not sufficient to prevent bribery: pharmacists could act as intermediaries, revealing the doctor’s identity to the briber, as pharmacists often have access to prescriptions, and thus know something about the prescription behaviour of a doctor. This observation leads us to formulate a new but important requirement of independency of prescription privacy in this paper: no third party should be able to help the adversary link a doctor to his prescription.

2.2 Observations

Current approaches to privacy in e-health, as witnessed from the literature study in Section 2.1, mostly focus on patient privacy as an access control or authentication problem. Even though doctor privacy is also a necessity, research into ensuring doctor privacy is still in its infancy. We believe that doctor privacy is as important as patient privacy and needs to be studied in more depth. It is also clear from the analysis that privacy in e-health systems needs to be addressed at different layers: access control ensures privacy at the service layer; privacy by architecture design addresses privacy concerns at the system/architecture layer; use of cryptography guarantees privacy at the communication layer. Since e-health systems are complex [TGC09] and rely on correct communications between many sub-systems, we study privacy in e-health as a communication problem. In fact, message exchanges in communication protocols may leak information which leads to a privacy breach [Low96, CKS04, DKR09].

Classical privacy properties, which are well-studied in the literature, attempt to ensure that privacy can be enabled. However, merely enabling privacy is insufficient in many cases: for such cases, a system must enforce user privacy instead of allowing the user to pursue it. One example is doctor bribery. To avoid doctor bribery, we take into account enforced privacy for doctors. In addition, we consider that one party’s privacy may depend on another party (e.g., in the case of a pharmacist revealing prescription behaviour of a doctor). In these cases, others can cause (some) loss of privacy. Obviously, ensuring privacy in such a case requires more from the system than merely enabling privacy. Consequently, we propose and study the following privacy properties for doctors in communication protocols in the e-health domain, in addition to regular security and privacy properties as we mentioned before in Section 1.

prescription privacy:

A protocol preserves prescription privacy if the adversary cannot link a doctor to his prescriptions.

receipt-freeness:

A protocol satisfies receipt-freeness if a doctor cannot prove his prescriptions to the adversary.

independency of prescription privacy:

A protocol ensures independency of prescription privacy if third parties cannot help the adversary to link a doctor to the doctor’s prescriptions.

independency of receipt-freeness:

A protocol ensures independency of receipt-freeness if a doctor cannot prove his prescriptions to the adversary, even when third parties share information with the adversary.

3 Formalisation of privacy properties

In order to formally verify properties of a protocol, the protocol itself as well as the properties need to be formalised. In this section, we focus on the formalisation of key privacy properties, while the formalisation of secrecy and authentication properties can be considered standard as studied in the literature [Low96, Bla01]. Thus secrecy and authentication properties are introduced later in the case study (Section 6.1) and are omitted in this section.

We choose the formalism of the applied pi calculus, due to its capability of expressing equivalence-based properties, which are essential for privacy, and the automatic verification supported by the tool ProVerif [Bla01]. The applied pi calculus is introduced in Section 3.1. Next, in Section 3.2, we show how to model e-health protocols in the applied pi calculus. Then, from Section 3.4 to Section 3.7, we formalise each of the privacy properties described at the end of Section 2.2. Finally, in Sections 3.8 and 3.9, we consider (strong) anonymity and (strong) untraceability in e-health, respectively. These concepts have been formally studied in other domains (e.g., [SS96, vMR08, BHM08, KT09, ACRR10, KTV10]), and thus are only briefly introduced in this section.

3.1 The applied pi calculus

The applied pi calculus is a language for modelling and analysing concurrent systems, in particular cryptographic protocols. The following (mainly based on [AF01, RS10]) briefly introduces its syntax, semantics and equivalence relations.

3.1.1 Syntax

The calculus assumes an infinite set of names, which are used to model communication channels and other atomic data, an infinite set of variables, which are used to model received messages, and a signature consisting of a finite set of function symbols, which are used to model cryptographic primitives. Each function symbol has an arity. A function symbol with arity zero is a constant. Terms (which are used to model messages) are defined as names, variables, or function symbols applied to terms (see Figure 1).

Figure 1: Terms in the applied pi calculus.
Example 1 (function symbols and terms).

Typical function symbols are enc with arity 2 for encryption and dec with arity 2 for decryption. The term for encrypting a message m with a key k is enc(m, k).

The applied pi calculus assumes a sort system for terms. Terms can be of a base type (e.g., Key, or a universal base type Data) or of type Channel⟨τ⟩, where τ is a type. A variable and a name can have any type. A function symbol can only be applied to, and return, terms of base type. Terms are assumed to be well-sorted and substitutions preserve types.

Terms are often equipped with an equational theory E - a set of equations on terms. The equational theory is normally used to capture features of cryptographic primitives. The equivalence relation induced by E is denoted as =_E.

Example 2 (equational theory).

The behaviour of symmetric encryption and decryption can be captured by the following equation:

dec(enc(x, y), y) = x

where x and y are variables.
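In the ProVerif input language (the tool used later for the automated analysis), this equational theory could be declared roughly as follows. This is a sketch in ProVerif's typed syntax; the type names are illustrative:

```proverif
(* Symmetric encryption: a constructor enc and a destructor dec. *)
type key.

fun enc(bitstring, key): bitstring.

(* The destructor encodes the equation dec(enc(x, y), y) = x. *)
reduc forall x: bitstring, y: key;
    dec(enc(x, y), y) = x.
```

Declaring dec as a destructor (reduc) rather than a constructor means that decryption only succeeds on terms that are actually encryptions under the matching key, which matches the intended equational behaviour.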

Systems are described as processes: plain processes and extended processes (see Figure 2).

Figure 2: Processes in the applied pi calculus.

In Figure 2, M and N are terms, n is a name, x is a variable and u is a metavariable, standing either for a name or a variable. The null process 0 does nothing. The parallel composition P | Q represents the sub-process P and the sub-process Q running in parallel. The replication !P represents an infinite number of copies of process P running in parallel. The name restriction νn.P binds the name n in the process P, which means the name n is secret to the adversary. The conditional if M = N then P else Q represents equality over the equational theory rather than strict syntactic identity. The message input in(u, x).P reads a message from channel u, and binds the message to the variable x in the following process P. The message output out(u, M).P sends the message M on the channel u, and then runs the process P. In both of these cases we may omit P when it is 0. Extended processes add variable restrictions and active substitutions. The variable restriction νx.P binds the variable x in the process P. The active substitution {M/x} replaces the variable x with the term M in any process that it comes into contact with. We say a process is sequential if it does not involve the parallel composition |, replication !, conditionals, or active substitutions. That is, a sequential process is either null or constructed using name/variable restrictions and message inputs/outputs. In addition, applying a syntactical substitution (i.e., a "let" binding in the ProVerif input language) to a sequential process still results in a sequential process. For simplicity of presentation, we use νñ.P as an abbreviation for νn1.⋯.νnl.P, where ñ is the sequence of names n1, …, nl. We also use the abbreviation seq.P to represent the process α1.α2.⋯.αl.P, where seq = α1.α2.⋯.αl and each αi is of the form νn, νx, in(u, x), or out(u, M). The intuition of seq.P is that when a process consists of a sequential sub-process seq followed by a sub-process P, we write the process in an abbreviated manner as seq.P. In addition, we write if M = N then P as an abbreviation for if M = N then P else 0.

Names and variables have scopes. A name is bound if it is under restriction. A variable is bound by restrictions or inputs. Names and variables are free if they are not delimited by restrictions or by inputs. The sets of free names, free variables, bound names and bound variables of a process P are denoted as fn(P), fv(P), bn(P) and bv(P), respectively. A term is ground when it does not contain variables. A process is closed if it does not contain free variables.

Example 3 (processes).

Consider a protocol in which A generates a nonce n, encrypts the nonce with a secret key k, then sends the encrypted message to B. Denote with P_A the process modelling the behaviour of A, with P_B the process modelling the behaviour of B, and the whole protocol by P:

P_A := νn.out(c, enc(n, k))    P_B := in(c, x)    P := νk.(P_A | P_B)

Here, c is a free name representing a public channel. Name k is bound in process P; name n is bound in process P_A. Variable x is bound in process P_B.
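The same protocol can be sketched in ProVerif notation; the declarations are illustrative assumptions, not part of the calculus itself:

```proverif
free c: channel.              (* public channel, controlled by the adversary *)

type key.
fun enc(bitstring, key): bitstring.
reduc forall x: bitstring, y: key; dec(enc(x, y), y) = x.

(* A: generate a nonce n and send it encrypted under k. *)
let A(k: key) =
    new n: bitstring;
    out(c, enc(n, k)).

(* B: receive a message and try to decrypt it with k. *)
let B(k: key) =
    in(c, z: bitstring);
    let n' = dec(z, k) in 0.

(* The secret key k is restricted; A and B run in parallel. *)
process
    new k: key;
    ( A(k) | B(k) )
```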

A frame is defined as an extended process built up from 0 and active substitutions by parallel composition and restrictions. The active substitutions in extended processes allow us to map an extended process A to its frame φ(A) by replacing every plain process in A with 0. The domain of a frame φ, denoted as dom(φ), is the set of variables for which the frame defines a substitution and which are not under a restriction.

Example 4 (frames).

Consider the extended process A = νk.νn.({enc(n, k)/y} | in(c, x)). The frame of this process, denoted as φ(A), is νk.νn.({enc(n, k)/y} | 0), which is structurally equivalent to νk.νn.{enc(n, k)/y}. The domain of this frame, denoted as dom(φ(A)), is {y}.

A context C[_] is defined as a process with a hole _, which may be filled with any process. An evaluation context is a context whose hole is not under a replication, a conditional, an input or an output.

Example 5 (context).

Process νn.(_ | Q) is an evaluation context. When we fill the hole with process P, we obtain the process νn.(P | Q).

3.1.2 Operational semantics

The operational semantics of the applied pi calculus is defined by: 1) structural equivalence (≡), 2) internal reduction (→), and 3) labelled reduction (−α→) of processes.

1) Intuitively, two processes are structurally equivalent if they model the same thing but differ in structure. Formally, structural equivalence of processes is the smallest equivalence relation on extended processes that is closed by α-conversion on names and variables and by application of evaluation contexts, satisfying the rules shown in Figure 3.

Figure 3: Structural equivalence in the applied pi calculus.

2) Internal reduction is the smallest relation on extended processes closed under structural equivalence and application of evaluation contexts, satisfying the rules shown in Figure 4.

Figure 4: Internal reduction in the applied pi calculus.

3) The labelled reduction models the environment interacting with the processes. It defines a relation A −α→ A′ as in Figure 5. The label α either denotes reading a term from the process's environment, or sending a name or a variable of base type to the environment.

Figure 5: Labelled reduction in the applied pi calculus.

3.1.3 Equivalences

The applied pi calculus defines observational equivalence and labelled bisimilarity to model the indistinguishability of two processes by the adversary. It is proved that the two relations coincide, when active substitutions are of base type [AF01, Liu11]. We mainly use the labelled bisimilarity for the convenience of proofs. Labelled bisimilarity is based on static equivalence: labelled bisimilarity compares the dynamic behaviour of processes, while static equivalence compares their static states (as represented by their frames).

Definition 1 (static equivalence).

Two terms M and N are equal in the frame φ, written as (M =_E N)φ, iff there exist a set of restricted names ñ and a substitution σ such that φ ≡ νñ.σ, Mσ =_E Nσ, and ñ ∩ (fn(M) ∪ fn(N)) = ∅.

Closed frames φ and ψ are statically equivalent, denoted as φ ≈s ψ, if
(1) dom(φ) = dom(ψ);
(2) for all terms M and N: (M =_E N)φ iff (M =_E N)ψ.

Extended processes A and B are statically equivalent, denoted as A ≈s B, if their frames are statically equivalent: φ(A) ≈s φ(B).

Example 6 (equivalence of frames [AF01]).

The frame νk.{enc(s0, k)/x} and the frame νk.{enc(s1, k)/x}, where s0 and s1 are free names, are equivalent. However, the frames νk.({enc(s0, k)/x} | {k/y}) and νk.({enc(s1, k)/x} | {k/y}) are not equivalent, because the adversary can discriminate them by testing dec(x, y) = s0. Similarly, the frames νk.{f(k)/x} and νk.{g(k)/x} are equivalent, where f and g are two function symbols without equations.
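Static equivalence of two frames that publish encryptions of different secrets under a restricted key can be checked mechanically: ProVerif's choice operator builds a biprocess whose two projections the tool attempts to prove observationally equivalent. A sketch, with illustrative names:

```proverif
free c: channel.
free s0, s1: bitstring [private].

type key.
fun enc(bitstring, key): bitstring.
reduc forall x: bitstring, y: key; dec(enc(x, y), y) = x.

(* The first variant outputs enc(s0, k), the second enc(s1, k);
   ProVerif tries to prove the two variants indistinguishable.
   Additionally outputting k would break the equivalence, since
   the adversary could then test dec(x, y) = s0. *)
process
    new k: key;
    out(c, enc(choice[s0, s1], k))
```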

Example 7 (static equivalence).

Process νn.({M/x} | P) is statically equivalent to process νn.({M/x} | Q), where P and Q are two closed plain processes, because the frames of the two processes are statically equivalent, i.e., φ(νn.({M/x} | P)) = νn.{M/x} = φ(νn.({M/x} | Q)).

Definition 2 (labelled bisimilarity).

Labelled bisimilarity (≈l) is the largest symmetric relation R on closed extended processes, such that A R B implies:
(1) A ≈s B;
(2) if A → A′, then B →* B′ and A′ R B′ for some B′;
(3) if A −α→ A′ with fv(α) ⊆ dom(A) and bn(α) ∩ fn(B) = ∅, then B →*−α→→* B′ and A′ R B′ for some B′, where * denotes zero or more internal reductions.

3.2 E-health protocols

In the existing e-voting and (sealed bid) e-auction protocols, where bribery and coercion have been formally analysed using the applied pi calculus (see e.g., [DKR09, DJP11]), the number of participants is determined a priori. In contrast with these protocols, e-health systems should be able to handle newly introduced participants (e.g., patients). To this end, we model user-types, and each user-type can be instantiated an unbounded number of times.

Roles.

An e-health protocol can be specified by a set of roles, each of which is modelled as a process. Each role specifies the behaviour of the user taking this role in an execution of the protocol. By instantiating the free variables in a role process, we obtain the process of a specific user taking the role.

Users.

Users taking a role R can be modelled by adding settings (identity, pseudonym, encryption key, etc.) to the process representing the role, that is init_u.R, where init_u is a sequential process which generates names/terms modelling the data of the user (see Figure 26), reads in setting data from channels (see Figure 25), or reveals data to the adversary (see Figure 26). A user taking a role multiple times is captured by adding replication to the role process, i.e., init_u.!R. A user may also take multiple roles. When the user uses two different settings in different roles, the user is treated as two separate users. If the user uses a shared setting in multiple roles, the user process is modelled as the user setting sub-process followed by the multiple role processes in parallel, e.g., init_u.(R1 | R2) when the user takes two roles R1 and R2.

User-types.

Users taking a specific role, potentially multiple times, belong to a user-type. The set of users of a type is captured by adding replication to the user-type process. Each role of a protocol naturally induces a user-type. In protocols where users are allowed to take multiple roles with one setting, we consider these users to form a new type. For example, a challenge-response protocol that specifies two roles, Initiator and Responder, has three user-types: the Initiator, the Responder, and users taking both Initiator and Responder (assuming a user may take both roles with the same setting). A user-type with multiple roles is modelled with the roles that a user of this type takes at the same time running in parallel under the shared setting. Since each user is an instance of a user-type, the formalisation of user-types allows us to model an unbounded number of users by simply adding replication to the user-types. In fact, in most cases roles and user-types are identical, and a user-type that allows a user to take multiple roles can itself be considered a new role. Hence, we use the terms roles and user-types interchangeably.

Protocol instances.

Instances of an e-health protocol with roles/user-types are modelled in the following form:

where the first sub-process models the private names and private channels of the protocol, and the second is a sequential process representing the settings of the protocol, such as generating/computing data and revealing information to the adversary (see Figure 23 for an example). Essentially, it models the global settings of an instance and the auxiliary channels used in the modelling of the protocol.

Doctor role/user-type.

More specifically, we have a doctor role/user-type of the form:

In the following, we focus on the behaviour of a doctor, since our goal is to formalise privacy properties for doctors. Each doctor is associated with an identity and can execute an unbounded number of sessions (modelled by the replication mark ‘!’ in front of the session sub-process). In case the doctor identity is revealed in the initialisation phase, we require, for the sake of a uniform formalisation of the later-defined privacy properties, that this unveiling does not appear in the main process. Instead, we model this case as identity generation immediately followed by unveiling of the identity on the public channel c. Note that we reserve the name c for the adversary's public channel. We require c to be free, to model that the public channel is controlled by the adversary; the adversary uses this channel by sending and receiving messages over it. Since the doctor identity is defined outside of the main process, the doctor identity appearing in that process is a free variable of the process. Hence, in the case that the doctor identity is revealed, the doctor process can simply be modelled with the doctor identity as a free variable. To distinguish the free variable from the name, we use an italic font for the free variable.

Within each session, the doctor creates a prescription. Since a prescription normally contains not only the prescribed medicines but also the time/date at which the prescription is generated, as well as other identifying information, we consider prescriptions to differ across sessions. In the case that a prescription can be prescribed multiple times, one can add the replication mark in front of the prescribing sub-process to model that the prescription can be prescribed in an unbounded number of sessions. Similarly, we use an italic font for the free variable referring to the prescription in the process.

Well-formed.

We require the protocol process to be well-formed, i.e., the process satisfies the following properties:

  1. the process is canonical: names and variables in the process never appear both bound and free, and each name and variable is bound at most once;

  2. data is typed, channels are ground, private channels are never sent on any channel;

  3. the initialisation sub-process may be null;

  4. the initialisation and setting sub-processes are sequential processes;

  5. the remaining sub-processes can be any processes (possibly the null process) such that the whole protocol process is a closed plain process.
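The canonical condition of item 1 can be sketched as a check on a toy pi-calculus syntax tree. The constructors and helper names below are hypothetical, chosen only for illustration; the check collects every binder and verifies that no name is bound twice or occurs both bound and free.

```python
# Checking the "canonical" binding discipline on a HYPOTHETICAL toy AST.

def binders(p, acc=None):
    """Collect every name/variable bound by a restriction or an input."""
    acc = [] if acc is None else acc
    tag = p[0]
    if tag == 'new':            # ('new', name, cont)
        acc.append(p[1]); binders(p[2], acc)
    elif tag == 'in':           # ('in', chan, var, cont)
        acc.append(p[2]); binders(p[3], acc)
    elif tag == 'out':          # ('out', chan, term, cont)
        binders(p[3], acc)
    elif tag == 'par':          # ('par', left, right)
        binders(p[1], acc); binders(p[2], acc)
    elif tag == 'bang':         # ('bang', cont)
        binders(p[1], acc)
    return acc                  # ('nil',) binds nothing

def atoms(t):
    """All atomic names occurring in a term (terms are nested tuples)."""
    if isinstance(t, tuple):
        return set().union(*(atoms(s) for s in t[1:]))
    return {t}

def free_atoms(p, env=frozenset()):
    """Names occurring outside the scope of any binder."""
    tag = p[0]
    if tag == 'new':
        return free_atoms(p[2], env | {p[1]})
    if tag == 'in':
        return ({p[1]} - env) | free_atoms(p[3], env | {p[2]})
    if tag == 'out':
        return ({p[1]} - env) | (atoms(p[2]) - env) | free_atoms(p[3], env)
    if tag == 'par':
        return free_atoms(p[1], env) | free_atoms(p[2], env)
    if tag == 'bang':
        return free_atoms(p[1], env)
    return set()

def canonical(p):
    bs = binders(p)
    return len(bs) == len(set(bs)) and not (set(bs) & free_atoms(p))

# new k. out(c, enc(m, k)): k bound once, c and m free -- canonical.
ok = ('new', 'k', ('out', 'c', ('enc', 'm', 'k'), ('nil',)))
# (new n. 0) | out(c, n): n is bound on the left but free on the right.
bad = ('par', ('new', 'n', ('nil',)), ('out', 'c', 'n', ('nil',)))
print(canonical(ok), canonical(bad))   # True False
```

The second process violates canonicity because n appears both bound (under the restriction) and free (in the output).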

Furthermore, we use a context (a process with a hole) to represent the honest users.

Dishonest agents are captured by the adversary (Section 3.3) with certain initial knowledge.

3.3 The adversary

We consider security and privacy properties of e-health protocols with respect to the presence of active attackers – the Dolev-Yao adversary. The adversary

  • controls the network – the adversary can block, read and insert messages over the network;

  • has computational power – the adversary can record messages and apply cryptographic functions to messages to obtain new messages;

  • has a set of initial knowledge – the adversary knows the participants and public information of all participants, as well as a set of his own data;

  • has the ability to initiate conversations – the adversary can take part in executions of protocols.

  • The adversary’s behaviour models that of every dishonest agent (cf. Section 6.9), which is achieved by including the initial knowledge of each dishonest agent in the adversary’s initial knowledge.

The behaviour of the adversary is modelled as a process running in parallel with the honest agents. The adversary does whatever he can to break the security and privacy requirements. We do not need to model the adversary explicitly, since he is embedded in the applied pi calculus as well as in the verification tool. Modelling the honest users’ behaviour is sufficient to verify whether the requirements hold.
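The first two capabilities, together with the perfect-cryptography assumption discussed below, can be illustrated by a small deduction sketch. The term constructors ('enc', 'pair') and function names are hypothetical and stand in for whatever equational theory the protocol uses.

```python
# A minimal sketch of Dolev-Yao message deduction under the perfect-
# cryptography assumption. Terms are nested tuples.

def analyse(knowledge):
    """Close a knowledge set under decomposition: split pairs, and
    decrypt a ciphertext whenever its key is already known."""
    kn = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(kn):
            parts = []
            if isinstance(t, tuple) and t[0] == 'pair':
                parts = [t[1], t[2]]
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in kn:
                parts = [t[1]]          # key known => plaintext learnt
            for p in parts:
                if p not in kn:
                    kn.add(p)
                    changed = True
    return kn

def derivable(t, kn):
    """Synthesis: a term is derivable if it is known outright, or can be
    built from derivable subterms by pairing or encrypting."""
    if t in kn:
        return True
    if isinstance(t, tuple) and t[0] in ('pair', 'enc'):
        return all(derivable(s, kn) for s in t[1:])
    return False

enc = lambda m, k: ('enc', m, k)
pair = lambda a, b: ('pair', a, b)

kn = analyse({enc(pair('secret', 'k2'), 'k1'), 'k1'})
print(derivable('secret', kn))            # True: k1 opens the outer layer
print(derivable(enc('secret', 'k2'), kn)) # True: k2 was learnt from the pair

kn2 = analyse({enc('s', 'k3')})
print(derivable('s', kn2))                # False: no key, ciphertext opaque
```

The last check is exactly the perfect-cryptography assumption: without the key, a ciphertext yields nothing.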

Limitations.

Note that the Dolev-Yao adversary model we use includes the “perfect cryptography” assumption. This means that the adversary cannot infer any information from cryptographic messages for which he does not possess a key. For instance, the attacker cannot decrypt a ciphertext without the correct key. Moreover, the adversary does not have the ability to perform side-channel attacks. For instance, fingerprinting a doctor based on his prescriptions is beyond the scope of this attacker model.

3.4 Prescription privacy

Prescription privacy ensures unlinkability of a doctor and his prescriptions, i.e., the adversary cannot tell whether a given prescription was prescribed by a given doctor. This requirement helps to prevent doctors from being influenced in the prescriptions they issue.

Normally, prescriptions are eventually revealed to the general public, for example for research purposes. In the DLV08 e-health protocol, prescriptions are revealed to the adversary observing the network. Therefore, in the extreme situation where there is only one doctor, the doctor's prescriptions are obviously revealed to the adversary: all the observed prescriptions belong to that doctor. To avoid such a case, prescription privacy requires at least one other doctor (referred to as the counter-balancing doctor). This ensures that the adversary cannot tell whether the observed prescriptions belong to the targeted doctor or to the counter-balancing doctor. With this in mind, unlinkability of a doctor to a prescription is modelled as indistinguishability between two honest users who swap their prescriptions, analogously to the formalisation of vote-privacy [DKR09]. By adopting the vote-privacy formalisation, prescription privacy is modelled as the equivalence of two doctor processes: in the first process, one honest doctor issues one prescription in one of his sessions and another honest doctor issues a second prescription in one of his sessions; in the second process, the two doctors swap these prescriptions.
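The role of the counter-balancing doctor can be illustrated with a deliberately crude abstraction, in which the adversary's entire observation is assumed to be the multiset of published prescriptions (a hypothetical simplification of what the DLV08 adversary actually sees).

```python
# HYPOTHETICAL abstraction: the adversary observes only the multiset of
# published prescriptions, not who issued them.
from collections import Counter

def observation(world):
    """world maps doctor -> prescription; only prescriptions leak."""
    return Counter(world.values())

# Two doctors who swap prescriptions yield identical observations, so the
# adversary cannot link doctor to prescription:
print(observation({'dA': 'pA', 'dB': 'pB'}) ==
      observation({'dA': 'pB', 'dB': 'pA'}))                  # True

# With a single doctor, swapping the prescription changes the observation,
# so the doctor's prescription is revealed:
print(observation({'dA': 'pA'}) == observation({'dA': 'pB'})) # False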

Definition 3 (prescription privacy).

A well-formed e-health protocol with a doctor role satisfies prescription privacy if for any two distinct possible doctors we have

where the two prescriptions range over all possible prescriptions, and the remaining sub-processes may be the null process.

The first process models an instance of a doctor. Its first sub-process models a prescribing session, and its second sub-process models the doctor's other prescribing sessions. Similarly, the second process models another doctor. On the right-hand side of the equivalence, the two doctors swap their prescriptions. The labelled bisimilarity captures that any dishonest third party (the adversary) cannot distinguish the two sides. The second doctor's process is called the counter-balancing process. We require the existence of the counter-balancing doctor, and of two distinct prescriptions, to avoid the situation in which all doctors issue the same prescription and thus every doctor's prescription is trivially revealed.

Note that the doctor identities are free names in the processes when the identities are initially public, and private names when the identities are initially private. Similarly, the prescriptions are free names in the processes when the prescriptions are revealed, and private names when they are kept secret. This convention also applies to the following definitions.

3.5 Receipt-freeness

Enforced privacy properties have been formally defined in e-voting and e-auctions; examples include receipt-freeness and coercion-resistance in e-voting [DKR09, JPM09], and receipt-freeness for non-winning bidders in e-auctions [DJP11]. De Decker et al. [dDLVV08] identify the need to prevent a pharmaceutical company from bribing a doctor to favour their medicine. Hence, a doctor's prescription privacy must be enforced by the e-health system to prevent doctor bribery. Intuitively, this means that even if a doctor collaborates, the adversary cannot be certain that the doctor has followed his instructions. Bribed users are not modelled as part of the adversary, as they may lie and are thus not trusted by the adversary. The domains differ in two respects: in e-voting and sealed-bid e-auctions the participants are fixed before execution, whereas in e-health new participants may keep joining; and in e-voting and sealed-bid e-auctions each participant executes the protocol exactly once, whereas in e-health a participant may be involved many, even unboundedly many, times. Thus, the formalisations from e-voting and sealed-bid e-auctions cannot be adopted directly. Inspired by the formalisations of receipt-freeness in e-voting [DKR09] and e-auctions [DJP11], we define receipt-freeness to be satisfied if there exists a process in which the bribed doctor does not follow the adversary's instructions (e.g., prescribing a particular medicine) that is indistinguishable from a process in which she does.

Modelling this property necessitates modelling a doctor who genuinely reveals all her private information to the adversary. This is achieved by the process transformation of Delaune et al. [DKR09], which transforms a plain process into one that shares all private information with the adversary over a channel chc. The transformation is defined as follows. Let P be a plain process and chc a fresh channel name. P^chc, the process that shares all of P's secrets, is defined as:

  • 0^chc ≜ 0,

  • (νn.P)^chc ≜ νn.out(chc, n).P^chc,

  • (in(u, x).P)^chc ≜ in(u, x).out(chc, x).P^chc,

  • (out(u, M).P)^chc ≜ out(u, M).P^chc,

  • (P | Q)^chc ≜ P^chc | Q^chc.

In addition, we also use the transformation P\out(chc,·) from [DKR09], which models the process P with all outputs on channel chc hidden. Formally, P\out(chc,·) ≜ νchc.(P | !in(chc, x)).
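At the level of event traces, the two transformations can be caricatured as follows. The event encoding is hypothetical and much coarser than the process-level definitions, but it conveys why hiding the outputs on chc undoes the sharing, which is the shape of Lemma 2 used later in this section.

```python
# Trace-level caricature of sharing secrets on a bribe channel chc and of
# hiding all outputs on chc. Events are HYPOTHETICAL triples, not
# applied pi calculus processes.

def share(trace, chc):
    """After each fresh name and each input, forward the value on chc."""
    out = []
    for e in trace:
        out.append(e)
        if e[0] == 'new':                 # ('new', n)
            out.append(('out', chc, e[1]))
        elif e[0] == 'in':                # ('in', chan, value)
            out.append(('out', chc, e[2]))
    return out

def hide(trace, chc):
    """Drop every output on chc."""
    return [e for e in trace if not (e[0] == 'out' and e[1] == chc)]

t = [('new', 'k'), ('in', 'c', 'x'), ('out', 'c', 'm')]
# Sanity check in the spirit of Lemma 2: sharing on chc and then hiding
# chc leaves the observable behaviour unchanged.
print(hide(share(t, 'chc'), 'chc') == t)   # True
```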

Definition 4 (receipt-freeness).

A well-formed e-health protocol with a doctor role satisfies receipt-freeness if for any two distinct possible doctors and any two distinct possible prescriptions, there exist processes such that:

where the lying process is a closed plain process, the bribery channel chc is a free fresh channel name, and the remaining sub-processes may be the null process.

In the definition, one sub-process models the sessions of the bribed doctor that are not bribed, and the other sub-processes model the doctor lying to the adversary about one of his prescriptions. The real prescription behaviour of the lying process is captured by the second equivalence. The first equivalence shows that the adversary cannot distinguish whether the doctor lied, given a counter-balancing doctor.

Remark

Receipt-freeness is stronger than prescription privacy (cf. Figure 6). Intuitively, this is true since receipt-freeness is like prescription privacy except that the adversary may gain more knowledge. Thus, if a protocol satisfies receipt-freeness (the adversary cannot break privacy with more knowledge), prescription privacy must also be satisfied (the adversary cannot break privacy with less knowledge). We prove this formally, following the proof that receipt-freeness is stronger than vote-privacy in [ACRR10]: by applying an evaluation context that hides the bribery channel on both sides of the first equivalence in Definition 4, we obtain Definition 3.

Proof.

If a protocol satisfies receipt-freeness, there exists a closed plain process such that the two equations in Definition 4 are satisfied. By applying the evaluation context (defined as in Section 3.5) on both sides of the first equation, we obtain

Lemma 1 [ACRR10]: Let and be two evaluation contexts such that and . We have that for any extended process .

Using Lemma 1, we can rewrite the left-hand side (1) and the right-hand side (2) of the equivalence as follows.

For equation , by the second equation in Definition 4, we have

Lemma 2 [ACRR10]: Let P be a closed plain process and chc a channel name that does not occur in P. We have that P^chc\out(chc,·) is labelled bisimilar to P.

For equation , using Lemma 2, we obtain that

By transitivity, we have

which is exactly Definition 3.∎
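In the generic notation of [DKR09] (writing the sharing transformation as P^chc and hiding as P\out(chc,·), and eliding the paper's concrete doctor sub-processes), the chain of equivalences in the proof above has the following shape; this is a sketch under assumed notation, not the paper's exact formulas.

```latex
% D_A, D_B stand for the two doctor processes, C[\_] for the honest
% context, and D' for the lying process of Definition 4.
\begin{align*}
\nu chc.\big(C[\,(D_A\{p_A\})^{chc} \mid D_B\{p_B\}\,] \mid {!}\,in(chc,x)\big)
  &\approx_\ell
\nu chc.\big(C[\,D' \mid D_B\{p_A\}\,] \mid {!}\,in(chc,x)\big)
  && \text{(context on Def.~4, eq.~1)}\\
C[\,(D_A\{p_A\})^{chc}\backslash out(chc,\cdot) \mid D_B\{p_B\}\,]
  &\approx_\ell
C[\,D'\backslash out(chc,\cdot) \mid D_B\{p_A\}\,]
  && \text{(Lemma 1)}\\
C[\,D_A\{p_A\} \mid D_B\{p_B\}\,]
  &\approx_\ell
C[\,D_A\{p_B\} \mid D_B\{p_A\}\,]
  && \text{(Lemma 2; Def.~4, eq.~2; transitivity)}
\end{align*}
```

The last line is exactly the equivalence required by Definition 3.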

The difference between this formalisation and receipt-freeness in e-voting [DKR09] and in e-auctions [DJP11] is that here only a part of the doctor process (the initialisation sub-process and one prescribing session) shares information with the adversary. In e-voting, each voter votes only once. In contrast, a doctor prescribes multiple times for various patients. As patients and their situations vary, a doctor cannot prescribe medicine from the bribing pharmaceutical company all the time. Therefore, only part of the doctor process shares information with the adversary. Note that we model only one bribed prescribing session, as this is the simplest scenario. The definition can be extended to model multiple bribed prescribing sessions by replacing the bribed sub-process with a sub-process modelling multiple doctor sessions. Note that the extended definition requires multiple sessions of the counter-balancing doctor, or multiple counter-balancing doctors.

Assume that several sessions of a doctor are bribed, and consider an arbitrary instantiation of the prescriptions in these bribed sessions. Assume there is a counter-balancing process for the bribed sessions: a process consisting of corresponding sessions from one or more honest doctors, in which the prescriptions of those sessions are instantiated with the bribed prescriptions, respectively. Following Definition 4, multi-session receipt-freeness can be defined as follows.

Definition 5 (multi-session receipt-freeness).

A well-formed e-health protocol with a doctor role satisfies multi-session receipt-freeness if, for any doctor with a number of bribed sessions and for any instantiation of the prescriptions in the bribed sessions, there exist processes such that

where the lying process is a closed plain process, chc is a free fresh channel name, the remaining sub-processes may be the null process, the counter-balancing process consists of a number of doctor processes running in parallel, and in some sessions of these doctor processes the prescriptions are instantiated with the bribed prescriptions.

Definition 4 (receipt-freeness) is a specific instance of this definition in which only one session of the targeted doctor is bribed. When multiple sessions are bribed, multi-session receipt-freeness requires the existence of more than one counter-balancing doctor session, and thus this extended definition is stronger than receipt-freeness (Definition 4): if a protocol satisfies multi-session receipt-freeness, where multiple sessions are bribed, then it satisfies receipt-freeness, where only one session is bribed. The intuition is that if there exists a lying process for multiple bribed sessions such that multi-session receipt-freeness is satisfied, then by hiding the communication with the adversary in the lying process except for one session, we obtain a lying process such that receipt-freeness is satisfied. The converse does not hold: when receipt-freeness is satisfied, multi-session receipt-freeness may not be. For example, suppose there are exactly two users, each generating two nonces, and the only revealed information is the four nonces. A user bribed on one session can lie to the adversary about his nonce, since the link between the user and the nonce is private. However, if the user is bribed on two sessions, i.e., on the links between himself and both of his nonces, then at least one of the claimed nonces has to be one he actually generated; hence multi-session receipt-freeness (for two sessions in particular) is not satisfied.

Remark that we restrict the way a bribed user collaborates with the adversary: we only model forwarding information to the adversary. The scenario in which the adversary provides prescriptions to a bribed doctor, similar to coercion in e-voting [DKR09], is not modelled. Although providing ready-made prescriptions is theoretically possible, we consider this not to be a practical attack: correct prescribing requires professional (sometimes empirical) medical expertise and depends heavily on examination of the patient, so no adversary can prepare an appropriate prescription without additional information. Moreover, forwarding an inappropriate prescription carries serious legal consequences for the forwarding doctor. Therefore, we omit the case where the doctor merely forwards an adversary-prepared prescription. The adversary could still prepare other information for the bribed doctor, for example the randomness of a bit-commitment. Such adversary-prepared information may lead to a stronger adversary than the one we consider. To model such a scenario, the verifier needs to specify exactly which information is prepared by the adversary; formalisation of such scenarios can follow the formal framework proposed in [DKR09, DJP13].

3.6 Independency of prescription privacy

Usually, e-health systems have to deal with a complex constellation of roles: doctors, patients, pharmacists, insurance companies, medical administration, etc. Each of these roles has access to different private information and has different privacy concerns. An untrusted role may be bribed to reveal private information to the adversary such that the adversary can break the privacy of another role. De Decker et al. [dDLVV08] note that pharmacists hold sensitive data which may be revealed to the adversary to break a doctor's prescription privacy. To prevent a party from revealing sensitive data that affects a doctor's privacy, e-health protocols are required to satisfy independency of prescription privacy. The DLV08 protocol, for example, requires prescription privacy independent of pharmacists [dDLVV08]. Intuitively, independency of prescription privacy means that even if another party reveals its information to the adversary, the adversary is not able to break a doctor's prescription privacy.

Definition 6 (independency of prescription privacy).

A well-formed e-health protocol with a doctor role satisfies prescription privacy independent of a role if for any two distinct possible doctors we have

where the two prescriptions range over all possible prescriptions, the independent role is a non-doctor role, and the remaining sub-processes may be the null process.

Note that we assume a worst-case situation in which the role genuinely cooperates with the adversary; for example, the pharmacist forwards all information obtained from channels hidden from the adversary. The equivalence requires that no matter how the role cooperates with the adversary, the adversary cannot link a doctor to the doctor's prescriptions. The cooperation between pharmacists and the adversary is modelled in the same way as the cooperation between bribed doctors and the adversary. We do not model the situation where the adversary prepares information for the pharmacists, as we focus on doctor privacy – information sent out by the pharmacist does not affect doctor privacy, so there is no reason to control this information. Instead of modelling the pharmacists as compromised users, our modelling allows the definition to be easily extended to model new properties capturing situations where pharmacists lie to the adversary due to, for example, a coalition between pharmacists and bribed doctors. In addition, although we do not model delivery of medicine, pharmacists do need to adhere to regulations when providing medicine; thus, an adversary who only controls the network cannot impersonate a pharmacist.

Just as receipt-freeness is stronger than prescription privacy, independency of prescription privacy is stronger than prescription privacy (cf. Figure 6). Intuitively, this holds since the adversary obtains at least as much information in independency of prescription privacy as in prescription privacy. Formally, one can derive Definition 3 from Definition 6 by hiding the sharing channel on both the left-hand side and the right-hand side of the equivalence in Definition 6.

Proof.

Consider a protocol that satisfies independency of prescription privacy, i.e., Definition 6. By applying the hiding evaluation context to both the left-hand side and the right-hand side of the equivalence in Definition 6, we obtain

According to Lemma 1, we have

According to Lemma 2, we have

Therefore, by transitivity, we have

which is exactly Definition 3. ∎

Note that the first step in the proof (application of an evaluation context) cannot be reversed. Therefore, prescription privacy is weaker than independency of prescription privacy.

3.7 Independency of receipt-freeness

We have discussed two situations in which a doctor's prescription behaviour can be revealed: when the doctor, or when another, different, party cooperates with the adversary. It is natural to consider the conjunction of the two, i.e., a situation in which the adversary coerces both a doctor and another (non-doctor) party. Since the adversary obtains more information, this constitutes a stronger attack on a doctor's prescription privacy. To address this problem, we define independency of receipt-freeness, which is satisfied when a doctor's prescription privacy is preserved even if both the doctor and another party reveal their private information to the adversary.

Definition 7 (independency of receipt-freeness).

A well-formed e-health protocol with a doctor role satisfies receipt-freeness independent of a role if for any two distinct possible doctors and any two distinct possible prescriptions, there exist processes such that:

where the lying process is a closed plain process, the independent role is a non-doctor role, chc is a free fresh channel name, and the remaining sub-processes may be the null process.

Independency of receipt-freeness implies receipt-freeness and independency of prescription privacy, each of which in turn implies prescription privacy (cf. Figure 6). The proof follows the same reasoning as the proofs in [DJP13]. Intuitively, the adversary obtains more information with independency of receipt-freeness (namely, from both doctor and pharmacist) than with either independency of prescription privacy (from the pharmacist only) or receipt-freeness (from the doctor only). If the adversary is unable to break a doctor's privacy using this much information, the adversary will not be able to break doctor privacy using less information. Therefore, if a protocol satisfies independency of receipt-freeness, then it must also satisfy independency of prescription privacy and receipt-freeness. Similarly, since the adversary obtains more information both in independency of prescription privacy and in receipt-freeness than in prescription privacy, if a protocol satisfies either independency of prescription privacy or receipt-freeness, it must also satisfy prescription privacy.

3.8 Anonymity and strong anonymity

Anonymity is a privacy property that protects users’ identities. We model anonymity as indistinguishability of processes initiated by two different users.

Definition 8 (doctor anonymity).

A well-formed e-health protocol with a doctor role satisfies doctor anonymity if for any doctor there exists another, distinct, doctor such that

A stronger notion of anonymity is defined in [ACRR10], capturing the situation in which the adversary cannot even find out whether a given user has participated in a session of the protocol or not.

Definition 9 (strong doctor anonymity [ACRR10]).

A well-formed e-health protocol with a doctor role satisfies strong doctor anonymity, if

Recall that the unveiling of a doctor's identity (when used) is performed outside the main process (see Section 3.2). Therefore, the above two definitions include neither generation nor unveiling of doctor identities in the initialisation phase.

Obviously, the concept of strong doctor anonymity is intended to be stronger than the concept of doctor anonymity. We show that it is impossible to satisfy strong doctor anonymity without satisfying doctor anonymity (arrow in Figure 6).

Proof.

Assume that a protocol satisfies strong doctor anonymity but not doctor anonymity. That is, the protocol satisfies Definition 9, i.e.,

but there exists no doctor such that the equation in Definition 8 is satisfied; that is, for all other doctors,