Type-based Relaxed Noninterference for Free

05/02/2019 · Minh Ngô et al. · Inria and Stevens Institute of Technology

Despite the clear need for specifying and enforcing information flow policies, existing tools and theories either fall short of practical languages, fail to encompass the declassification needed for practical requirements, or fail to provide provable guarantees. In this paper we make progress on provable guarantees encompassing declassification by leveraging type abstraction. We translate information flow policies, with declassification, into an interface for which an unmodified standard typechecker can be applied to a source program: if it typechecks, the program provably satisfies the policy. Our proof reduces security to the mathematical foundation of data abstraction, Reynolds' abstraction theorem. By proving this result for a large fragment of pure ML, we give evidence for the potential to build sound security tools using off-the-shelf language tools and their theories.


I Introduction

A longstanding challenge is the enforcement of information flow (IF) policy in software systems and applications implemented in conventional general-purpose programming languages. For high assurance, precise mathematical definitions are needed for policies, enforcement mechanisms, and program semantics. The latter, in particular, is a major challenge for languages in practical use. In order to minimize the cost of assurance, especially over time as systems evolve, it is desirable to leverage work on formal modeling with other goals such as functional verification, equivalence checking, and compilation.

To be auditable by stakeholders, policy should be expressed in an accessible way. This is one of several reasons why types play an important role in many works on IF. For example, Flowcaml [1] and Jif [2] express policy using types that include IF labels. They statically enforce policy using dedicated IF type checking and inference. Techniques from type theory are also used in security proofs such as those for Flowcaml and the calculus DCC [3].

IF is typically formalized as the preservation of indistinguishability relations between executions. Researchers have noticed that this should be an instance of a celebrated semantic basis in type theory: relational parametricity [4]. Relational parametricity provides an effective basis for formal reasoning about program transformations [5], and for representation independence and information hiding in program verification [6, 7]. The connection between IF and relational parametricity has recently been made precise, for DCC, by translation to System F and use of the existing parametricity theorem for that calculus [8].

In this work, we advance the state of the art in the connection between IF and relational parametricity, guided by three main goals. One of the goals motivating our work is to reduce the burden of defining dedicated type checking, inference, and security proofs for high assurance in programming languages. A promising approach towards this goal is the idea of leveraging type abstraction to enforce policy, and in particular, leveraging the parametricity theorem in programming languages to obtain security guarantees. A concomitant goal is to do so for practical policies that encompass selective downgrading, which is needed for the vast majority of policies of practical interest. Without downgrading, a password checker program or a program that calculates aggregate or statistical information must be considered insecure, for example.

To build on the type system and formal theory of a language without a priori IF features, policy needs to be encoded somehow, and the program may need to be transformed. For example, to prove that a typechecked DCC term is secure with respect to the policy expressed by its type, Bowman and Ahmed [8] encode the typechecking judgment by a nontrivial translation of both types and terms into types and terms of System F. Any translation becomes part of the assurance argument. A complicated translation will most likely also make it more difficult to use extant type checking/inference (and other development tools) in diagnosing security errors and developing secure code. This leads us to highlight a third goal, needed to achieve the first goal, namely to minimize the complexity of translation.

There is a major impediment to leveraging type abstraction: few languages are relationally parametric or have parametricity theorems! The lack of parametricity can be addressed by focusing on well-behaved subsets and leveraging additional features like ownership types that may be available for other purposes (e.g., in the Rust language). As for the paucity of parametricity theorems, we take hope in the recent advances in machine-checked metatheory, such as the correctness proofs of the CakeML and CompCert compilers, the VST logic for C, and the relational logic of Iris. For parametricity specifically, the most relevant work is Crary's formal proof of parametricity for the ML module calculus [9]. Our main result is a reduction of IF to parametricity in that calculus.

Before elaborating on our contributions let us review some prior work. The calculus DCC expresses policy using monad types indexed on levels in a lattice of security levels with the usual interpretation that flows are only allowed between levels in accord with the ordering. While DCC is a theoretical calculus, its monadic types fit nicely with the monads and monad transformers used by the Haskell language for computational effects like state and I/O. Algehed and Russo encode the typing judgment of DCC in Haskell using closed type families, one of the type system extensions supported by GHC [10]. However, they do not prove security; and DCC expresses strict noninterference, with no form of downgrading.

Bowman and Ahmed translate DCC to System F and prove the security theorem of DCC as a consequence of parametricity of System F [8]. (The original security proof for DCC does not leverage parametricity [3].) DCC relies on a subsidiary judgment about types, called "protected-at", and the cited works rely on the power of a highly expressive target calculus to encode this judgment. As we discuss in the related work section (§II), prior attempts to formalize security of DCC using parametricity in less powerful target calculi encountered difficulties in connection with the "protected-at" judgment. Most information flow type systems address practical policies in which the sensitive data is first order; they express and check security more simply than DCC. Our goals do not at all necessitate a system like DCC for policy.

Cruz et al. [11] consider policies in which downgrading is expressed in terms of allowed declassifier programs, encoding the "relaxed noninterference" idea of Li and Zdancewic [12] using type abstraction in the object calculus. The formulation of policy in terms of allowed operations is attractive and seems adaptable to practical languages. The idea is close to the use of an explicit "declass" operation as in Jif and other works, while keeping policy distinct from the program rather than embedded in it. Although the object calculus enjoys a parametricity theorem [13], the security proof of Cruz et al. is done from scratch. Moreover, they make a significant modification to the type system, introducing faceted types in order to express sensitivity from the perspective of observers at different levels. This makes good use of subtyping, already present in the object calculus, but is a concern with respect to our goals of leveraging existing tools and theorems.

Our first contribution is to translate policies with declassification (in the style of relaxed noninterference) into abstract types in a functional language, in such a way that typechecking the original program implies its security. We consider variations in which a thin wrapper is used, but we do not rely on a specialized security type system like DCC. A program that typechecks may use the secret inputs parametrically, e.g., storing them in data structures, but cannot look at the data until declassification has been applied. Our second contribution is to prove security by direct application of a parametricity theorem. We carry out this development twice: for the polymorphic lambda calculus, using the original theorem of Reynolds, and for the ML module calculus, using Crary's theorem. The second handles a large fragment of a real language while the first serves to expose the ideas. The technical details for ML are far too complicated to present in a conference paper, but complete details are presented in appendices.

The ML result makes a strong connection with a large fragment of a "real" language; however, we fall short of our practical goals because our development does not account for programs with high (or multiple level) computation and output. Although this is needed in general, there are many important programs where it does not matter, such as data mining computations that use sensitive inputs to calculate aggregate or statistical information, and many mobile apps. To solve this problem we could follow Cruz et al. and introduce a notion of faceted types for ML, but this would undercut the goal of leveraging existing tools. Instead we offer our third contribution, which is simply to pose this open problem: encode relaxed policies using type abstraction, encompassing multiple level computation and outputs while leveraging an existing parametricity theorem, or demonstrate that it cannot be done. For practical relevance, the encoding should target a language like ML with efficient type checking.

Outline

Section II describes related work and Section III introduces security policies for relaxed noninterference. Section IV recapitulates the abstraction theorem in the context of the simply typed and call-by-value lambda calculus, close to that of Reynolds [4], so we can expose the main ideas in a simple setting. Section V presents our first result: type-based encoding of policy, and proof of relaxed noninterference for this calculus by means of the abstraction theorem. Section VI discusses extensions of the first result for more expressive policies and more permissive checking. (Noninterference is undecidable, so we do not expect a type system to allow all secure programs. Although many variations and extensions are possible, in this article we devote the available space to working out the chosen versions in detail.) Section VII describes our result that uses the calculus and abstraction theorem of Crary [9], which formalizes the functional core of Standard ML and its module system. It supports unrestricted recursion at the term level, generative and applicative functors, higher-order functors, sealing, and translucent signatures. For this calculus we have carried out a development parallel to that in Section V, which can be found in full detail in the appendices. In Section VII we sketch the ideas using SML code. Section VIII concludes by highlighting limitations of our encodings and challenges for future work.

All results not proved in the paper are proved in appendices.

II Related Work

We focus on the closest related work regarding noninterference, declassification, and connections to the abstraction theorem. We refer the interested reader to [14] for the early history of language-based information flow security, and to [15] for a survey on declassification up to 2009.

Typing secure information flow

Pottier and Simonet [16] implement FlowCaml [1], the first type system for information flow analysis dealing with a real-sized programming language (a large fragment of OCaml), and they prove soundness. In comparison with our results: we do not consider any imperative features; they do not consider any form of declassification; their type system significantly departs from standard ML typing rules; and their security proof is not based on an abstraction theorem. An interesting question is whether their type system can be translated to System F or some other calculus with an abstraction theorem. FlowCaml provides type inference for security types. In this work, we rely on the standard ML type system to enforce security; Standard ML provides type inference, which endows our approach with an inference mechanism. Our work has a significant limitation compared with FlowCaml and other systems: as noted in Section I, our encoding does not allow computation that produces both secret and public outputs.

Barthe et al. [17] propose a modular method to reuse noninterference type systems and their proofs for declassification. They also provide a method to conclude soundness of declassification by using an existing theorem. In contrast to our work, their type system significantly departs from standard typing rules and does not make use of type abstraction.

Tse and Zdancewic [18] propose a security-typed language for robust declassification: declassification cannot be triggered unless there is a digital certificate to assert the proper authority. Their language inherits many features from System F and uses monadic labels as in DCC [3]. In contrast to our work, security labels are based on the Decentralized Label Model (DLM) [19], and are not semantically unified with the standard safety types of the language.

Compared with type systems, relational logics can specify IF policy and prove more programs secure through semantic reasoning [20, 21, 22, 23], but at the cost of more user guidance and less familiar notations.

Relaxed Noninterference

As discussed in the introduction, our policies and security property are based on the work of Li and Zdancewic [12], which proposes two kinds of declassification policies: local and global policies. Our approach supports both of them. Their source programs are written in a pure lambda calculus with recursion, like the language we consider in Sections IV and V except that we do not include recursion until Section C. Sabelfeld and Sands [15] evaluate the formalization of [12] with respect to guiding principles for declassification.

Connections between secure information flow and type abstraction

The Dependency Core Calculus (DCC) [3] expresses security policies using monadic types. It does not include declassification, and the noninterference theorem of [3] is proved from scratch. Tse and Zdancewic [24] translate the recursion-free fragment of DCC to System F. The main theorem for this translation aims to show that parametricity of System F implies noninterference. Shikuma and Igarashi identify a mistake in the proof [25]; they also give a noninterference-preserving translation for a version of DCC to the simply-typed lambda calculus. Although they make direct use of a specific logical relation, their results are not obtained by instantiating a general parametricity theorem. Bowman and Ahmed [8] finally provide a translation from the recursion-free fragment of DCC to System F, successfully proving that parametricity implies noninterference, via a correctness theorem for the translation (which is akin to a full abstraction property). Bowman and Ahmed’s translation makes essential use of the power of System F to encode judgments of DCC, raising the question whether a simpler target type system can suffice for security policies expressed differently from DCC.

These works are "translating noninterference to parametricity" in the sense of translating both programs and types. The practical implication is that one might leverage an existing type checker by translating both a program and its security policy into another program, such that its typability implies that the original conforms to the policy. Our work aims to cater more directly for practical application, by avoiding (or minimizing) the need to translate the program and hence avoiding the need to prove the correctness of a translation. This approach seems to have limitations—in particular, concerning computation that produces both public and secret outputs—which we pose as an open problem. Of course, in a sufficiently powerful type system, one can express the security property semantically. But then typechecking is undecidable and the translation does not serve our goals.

Cruz et al. [11] show that type abstraction implies relaxed noninterference. Like ours, their definition of relaxed noninterference is a standard extensional semantics, using partial equivalence relations. This is in contrast with [12], where the semantics is entangled with typability. They allow computation on secrets and require that the result of such computation cannot be released. In contrast to our work, their results are in the context of the object calculus [13], where they use subtyping to model the ordering of security levels. They do not attempt to use the abstraction theorem of the object calculus to conclude soundness. We conjecture that our approach can also be applied in the context of the object calculus for relaxed noninterference as defined in [11]. We leave this as future work.

Protzenko et al. [26] propose to use abstract types as the types for secrets and to use standard type systems for security. This is very close in spirit to our work. Their soundness theorem is about a property called "secret independence", which is very close to noninterference. In contrast to our work, they give no results for any kind of declassification and make no attempt to use an abstraction theorem.

Rajani and Garg [27] connect fine- and coarse-grained type systems for information flow in a lambda calculus with general references, defining noninterference (without declassification) as a step-indexed Kripke logical relation that expresses indistinguishability. Further afield, a connection between security and parametricity is made by Devriese et al. [28], featuring a negative result: System F cannot be compiled to the Sumii-Pierce calculus of dynamic sealing [29] (an idealized model of a cryptographic mechanism). Finally, information flow analyses have also been put at the service of parametricity: Washburn and Weirich [30] generalize parametricity in the presence of runtime type analysis, using security labels for data structures that should remain confidential in order to hide implementation details.

Abstraction theorems for other languages

Vytiniotis and Weirich [31] prove the abstraction theorem for R, which extends F with constructs that are useful for programming with type equivalence propositions. Rossberg et al. [32] show another path to parametricity for ML modules, by translating them to another calculus, System Fω. Crary's result [9] covers a large fragment of ML but not references and mutable state. Banerjee and Naumann [7] prove an abstraction theorem for a sequential Java-like language, using a form of ownership types to enforce abstraction for dynamically allocated mutable objects, and in later work they prove similar results using program annotations to enforce abstraction [33, 34]. (Around the same time, they proved noninterference for a security type system for a similar language, but from scratch rather than via an abstraction theorem [35, 36].) Ahmed et al. [37] develop a step-indexed logical relation for a language with references. Based on that work, Dreyer et al. [38] formulate a relational modal logic for proving contextual equivalence for the LADR language, which has general recursive types and general ML-style references atop System F. Timany et al. [39] give a logical relation for a state monad and use it to prove contextual equivalences. These works are important steps towards the development of abstraction theorems for rich fragments of practical languages.

III Declassification: Local Policies

The main idea in relaxed noninterference security policies is to specify, for each confidential input, how it can be released [12]. Inspired by this idea, our security policies, called local policies, map confidential inputs to a declassification function, or to a combination of an action and a declassification function (see §VI for a generalization). When a confidential input can be declassified via the combination of an action a and a function f, then the result of applying f to the result of a on the input is allowed to be made visible to a public observer. In other words, the confidential input can be declassified via the composition of f with a. The result of the action itself is not visible to the observer, only the result of the subsequent declassification function is. The input can be manipulated parametrically until a is applied, and then the result of a can be manipulated parametrically until f is applied. Thus the policy is applied to the original input, as usually advised in order to avoid laundering attacks [15].

The syntax for writing declassification functions and actions is as below, where n is an integer value and the operator symbol ranges over primitive arithmetic operators. (In this paper, the type of confidential inputs is int; we make this choice to stay close to [12]. The results presented in this paper can be generalized to accept confidential inputs of arbitrary types.)

Types
Terms
Actions
Declass. Functions

The static semantics and the dynamic semantics of the policy language are standard and similar to those of the simply typed, call-by-value lambda calculus with type variables (see §IV). For primitive operators, to simplify the presentation, we suppose that applications of operators to well-typed arguments always terminate. Therefore, evaluations of declassification functions and combinations on values always terminate.

For policies we refrain from using concrete syntax and instead give a simple formalization that facilitates later definitions.

Definition III.1 (Local Policy).

A local policy is a pair of a finite set of variables for confidential inputs and a partial mapping from variables in that set to declassification functions or combinations.

For simplicity we require that declassification functions and actions appearing in the policy are closed terms of function type, and that in a combination the action's result type matches the declassification function's argument type.

In the definition of local policies, if a confidential input is not associated with a declassification function or a combination, then it cannot be declassified. Formally, a combination is mathematically defined as a pair of an action and a declassification function, but we write it with a lighter notation for clarity.
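As an informal illustration (not part of the formal development), a local policy can be pictured as the following SML value; the names and the representation are ours, specialized to int-valued declassifiers for brevity.

  (* hedged sketch of Def. III.1: a policy pairs the confidential input
     variables with a partial map to declassifiers; an absent entry means
     the input can never be declassified *)
  datatype declassifier =
      Fun of (int -> int)                       (* declassify via a function *)
    | Combo of (int -> int) * (int -> int)      (* declassify via an action, then a function *)

  type policy =
    { inputs : string list,                     (* variables for confidential inputs *)
      declass : (string * declassifier) list }  (* partial mapping (association list) *)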

Example III.2 (Policy using a declassification function).

Consider the policy with a single confidential input whose declassification function computes its parity. The policy states that only the parity of the confidential input can be released to a public observer.

Example III.3 (Policy using a combination of an action and a declassification function).

(Inspired by [12, Example 3.2.1].) Assume that hash is a primitive operator. Consider the policy with a single confidential input that can be declassified via a combination whose action applies hash and whose declassification function keeps the lowest 64 bits of its argument. The policy states that the hashed value of the confidential input cannot be released, but the lowest 64 bits of its hashed value can.
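For concreteness, the declassifiers of Examples III.2 and III.3 can be written as ordinary functions. The following SML sketch is ours: hash is a primitive operator in the paper, so its body below is only a stand-in, and truncation modulo 2^64 is one plausible reading of "lowest 64 bits".

  (* Example III.2: only the parity of the confidential input may be released *)
  fun parity (x : int) : int = x mod 2

  (* Example III.3: the action applies hash, and the declassification function
     keeps the lowest 64 bits; the hash body is a placeholder, not a real hash *)
  fun hash (x : IntInf.int) : IntInf.int = x * 2654435761 + 12345
  fun lowest64 (h : IntInf.int) : IntInf.int = h mod IntInf.pow (2, 64)

  (* the composed declassifier: first the action, then the function *)
  val releaseHashed = lowest64 o hash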

The notion of action can be generalized to multiple steps of declassification, for example to specify the correct order of application of sanitizers [40]. Our encoding can be extended straightforwardly to multiple steps, at the cost of notational clutter we prefer to avoid in this presentation.

IV Abstraction Theorem

For source programs we choose the simply typed, call-by-value lambda calculus with integers and type variables, for two reasons: (1) the chosen language is similar to the language used in the paper of Reynolds [4], where the abstraction theorem was first proven, and (2) we want to illustrate our encoding approach (§V) in a minimal calculus. This section defines the syntax and semantics and presents key results that culminate in the abstraction theorem, a.k.a. parametricity. These results are basically standard. In fact our language is very close to the one in Reynolds [4, §2], for which we prove the abstraction theorem using contemporary notation. (Readers may find it helpful to consult references such as [41, Chapt. 49], [6, Chapt. 8], [42], [43] for background on logical relations and parametricity.)

IV-A Language

The syntax of the language is as below, with metavariables for type variables, term variables, and integer values. A value is closed when there is no free term variable in it. A type is closed when there is no type variable in it.

Types
Values
Terms
Eval. Contexts

We consider terms without type variables as source programs (the role of type variables is to encode policies, as explained in due course). We use small-step semantics, with the reduction relation defined inductively by these rules.

We write [v/x]t for the capture-avoiding substitution of v for free occurrences of x in t. Here and throughout, we use parentheses to disambiguate term structure. As usual, the starred arrow denotes the reflexive, transitive closure of the reduction relation.
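As a reading aid, here is a minimal SML rendering of this syntax together with capture-avoiding substitution, under the simplifying assumption that the substituted value is closed (which suffices for call-by-value reduction); the constructor names are ours.

  (* types: int, type variables, function types *)
  datatype ty = TInt | TVar of string | TArrow of ty * ty

  (* terms: integer literals, term variables, primitive operators, abstractions, applications *)
  datatype tm =
      Lit of int
    | Var of string
    | Prim of string * tm * tm
    | Lam of string * ty * tm
    | App of tm * tm

  (* values are integer literals and abstractions *)
  fun isValue (Lit _) = true
    | isValue (Lam _) = true
    | isValue _ = false

  (* [v/x]t for a closed value v, so no capture can occur *)
  fun subst v x t =
    case t of
        Lit _ => t
      | Var y => if y = x then v else t
      | Prim (p, t1, t2) => Prim (p, subst v x t1, subst v x t2)
      | Lam (y, ty1, body) => if y = x then t else Lam (y, ty1, subst v x body)
      | App (t1, t2) => App (subst v x t1, subst v x t2)

  (* beta rule of call-by-value reduction: App (Lam (x, ty, body), v) steps to subst v x body *)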

Typing rules

A typing context is a set of type variables. A term context is a mapping from term variables to types.

We write a well-formedness judgment to mean that a type is well-formed w.r.t. a typing context; the definition is standard and amounts to the requirement that the type variables occurring in the type are in the context. We say that a term is typable w.r.t. a typing context and a term context when there exists a well-formed type at which it can be typed.

The derivable typing judgments are defined inductively in Fig. 1. The rules are to be instantiated only with a term context that is well-formed under the typing context, in the sense that the type of every variable in the term context is well-formed. When the term context and the type context are empty, we omit them from the judgment.


Fig. 1: Typing rules

IV-B Logical relation

A logical relation is a type-indexed family of relations on values, based on given relations for type variables. From it is derived a relation on terms. The abstraction theorem says the latter is reflexive.

A term substitution is a finite map from term variables to closed values, and a type substitution is a finite map from type variables to closed types. In symbols:

Term Substitutions
Type Substitutions

We say that a term substitution respects a term context when they have the same domain and each variable is mapped to a closed value of the appropriate type. We say that a type substitution respects a typing context when its domain is the set of type variables in the context. For closed types, we consider the set of all binary relations over closed values of those types. An environment is a mapping from type variables to such relations; an environment is compatible with a typing context when it is defined on exactly the type variables of the context. The logical relation is inductively defined in Fig. 2. For each type, it is a relation on closed values; from it, a relation on terms is also derived.
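To fix intuitions before Fig. 2: the value relation has the standard shape, sketched below in LaTeX with our own notation (well-formedness side conditions elided).

  \begin{align*}
  \mathcal{V}[\![\mathsf{int}]\!]_{\rho} &= \{(n, n) \mid n \text{ an integer}\}\\
  \mathcal{V}[\![\alpha]\!]_{\rho} &= \rho(\alpha)\\
  \mathcal{V}[\![\tau_1 \to \tau_2]\!]_{\rho} &= \{(v_1, v_2) \mid \forall (u_1, u_2) \in \mathcal{V}[\![\tau_1]\!]_{\rho}.\ (v_1\, u_1, v_2\, u_2) \in \mathcal{E}[\![\tau_2]\!]_{\rho}\}\\
  \mathcal{E}[\![\tau]\!]_{\rho} &= \{(t_1, t_2) \mid t_1 \to^{*} v_1,\ t_2 \to^{*} v_2,\ (v_1, v_2) \in \mathcal{V}[\![\tau]\!]_{\rho}\}
  \end{align*}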

Lemma IV.1.

Suppose that for some and . For , it follows that:

  • if , then , and

  • if , then .

Fig. 2: Logical relation

We also apply a type substitution to a term substitution, meaning the term substitution obtained by applying the type substitution to each value in the range of the original, i.e.:

Given a type substitution and a term substitution respecting the contexts, we write their application to a term for the result of applying both substitutions to it. We extend the logical relation to pairs of term substitutions: two term substitutions are related under an environment when they have the same domain and map each variable to values related at that variable's type.

Definition IV.2 (Logical equivalence).

Terms are logically equivalent at a type in given typing and term contexts if both are typable at that type and, for every compatible environment, every type substitution respecting the typing context, and every pair of related term substitutions respecting the term context, the terms obtained by applying the substitutions are related by the logical relation at that type.

Theorem IV.3 (Abstraction [4]).

If a term is typable at a type w.r.t. given typing and term contexts, then it is logically equivalent to itself at that type.

V Type-based Relaxed Noninterference

In this section, we show how to encode security policies as standard types in the language of §IV, and we define and prove our first free theorem. The security property is called type-based relaxed noninterference (TRNI) and is taken from Cruz et al. [11].

Throughout this section, we consider a fixed policy (see Def. III.1). We treat free variables in a program as inputs and, without loss of generality, we assume that there are two kinds of inputs: integer values, which are considered confidential, and declassification functions and actions, which are fixed according to the policy. A public input can be encoded as a confidential input that can be declassified via the identity function.

V-A Views and indistinguishability

In order to define TRNI we define two term contexts, called the confidential view and the public view. The first view represents an observer that can access all confidential inputs, while the second represents an observer that can only observe declassified inputs. The views are defined using fresh term and type variables.

Confidential view

Consider first the set of inputs that cannot be declassified. We define the encoding for these inputs as a term context:

Next, we specify the encoding of confidential inputs that can be declassified. To this end, we define for each such input a term context that also binds fresh variables for its associated action and declassification function.

Finally, we obtain the term context encoding the confidential view of the policy by combining these encodings.

We assume that, for each input, the fresh variables introduced by this encoding are distinct from the input variables of the policy, distinct from each other, and distinct from the variables introduced for other inputs. We also make this assumption in the definition of the public view, to follow.

From the construction, the confidential view is a mapping, and every type in its range is a closed type. Therefore, it is well-formed for the empty set of type variables, so it can be used in typing judgments.

Example V.1 (Confidential view).

For the policy described in Example III.2, the confidential view gives the confidential input the type int, along with a variable for its declassification function at its concrete function type. For the policy described in Example III.3, the confidential view likewise gives the confidential input the type int, along with variables for the action and the declassification function at their concrete function types.
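Anticipating the SML rendering of Section VII, the confidential view for the policy of Example III.2 can be pictured as an un-sealed structure in which the secret is an ordinary int; the names and the placeholder input value below are ours.

  structure ConfidentialView =
  struct
    type secret = int                                 (* the secret is visible as a plain int *)
    val x : secret = 42                               (* hypothetical confidential input *)
    val declass_x : secret -> int = fn v => v mod 2   (* the parity declassifier *)
  end

  (* this observer may inspect the secret directly *)
  val peek = ConfidentialView.x div 7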

Public view

The basic idea is to encode local policies by using type variables. First we define the encoding for confidential inputs that cannot be declassified. We define a set of type variables, and a mapping for confidential inputs that cannot be declassified.

This serves to give the program access to at an opaque type.

In order to define the encoding for confidential inputs that can be declassified, we define the following:

The first form will serve to give the program access to the input only via a function variable that we will ensure is interpreted as the policy's declassification function; similarly for the second form. We define a type context and term context that comprise the public view, as follows.

where .

Example V.2 (Public view).

For the policy of Example III.2, the typing context in the public view has only one type variable. The term context in the public view gives the confidential input that type variable as its type, and gives the variable for the parity declassifier a function type from that type variable to int.

For the policy of Example III.3, the typing context in the public view has two type variables. The term context in the public view gives the confidential input the first type variable, gives the variable for the hash action a function type from the first type variable to the second, and gives the variable for the declassification function a function type from the second type variable to int.

From the construction, the public view's term context is a mapping, and every type in its range is well-formed in the public view's typing context. Thus, the term context is well-formed in that typing context, and both can be used in typing judgments.

Notice that in the public view of a policy, the types of variables for confidential inputs are not int. Thus, the public view does not allow programs in which concrete declassifiers are applied to confidential input variables, even when the applications are correct according to the policy (e.g., for the policy of Example III.2, a program applying the concrete parity function to the input variable does not typecheck in the public view). However, the public view does allow programs in which confidential input variables are used in applications of the declassifier variables associated with them (e.g., for the same policy, the program applying the declassifier variable to the input variable is well-typed in the public view).
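Continuing the SML picture, the public view corresponds to sealing such a structure with a signature that keeps the secret's type opaque, so the declassifier variable is the only observation of the input that typechecks; again, all names are ours.

  signature PUBLIC_VIEW =
  sig
    type secret                      (* opaque: plays the role of the type variable *)
    val x : secret                   (* the confidential input *)
    val declass_x : secret -> int    (* interpreted as the policy's parity function *)
  end

  structure PublicView :> PUBLIC_VIEW =
  struct
    type secret = int
    val x : secret = 42                               (* hypothetical confidential input *)
    val declass_x : secret -> int = fn v => v mod 2
  end

  (* well-typed: the input is used only through its associated declassifier variable *)
  val ok = PublicView.declass_x PublicView.x

  (* rejected by the typechecker: the sealed secret cannot be used as an int
  val bad = PublicView.x + 1
  *)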

Indistinguishability

The security property TRNI is defined in the usual way, using partial equivalence relations called indistinguishability relations. To define indistinguishability, we define a type substitution on the type variables of the public view, as follows:


The inductive definition of indistinguishability for a policy is presented in Figure 3. Indistinguishability is defined for types that are well-formed in the typing context of the public view. The definitions of indistinguishability for int and for function types are straightforward; in particular, two functions are indistinguishable at a function type if on any indistinguishable inputs they generate indistinguishable outputs. Since a type variable is used to encode confidential integer values that cannot be declassified, any two integer values are indistinguishable at it, according to rule Eq-Var1. For a type variable used to encode confidential integer values that can be declassified via a declassification function, two integer values are indistinguishable at it when applying that function to each of them yields the same integer value. The idea for type variables associated with combinations is similar.

Indistinguishability is illustrated in Example V.3.

Example V.3 (Indistinguishability).

For the policy of Example III.2, two integer values are indistinguishable at the type variable encoding the confidential input when both of them are even or both are odd.

For the policy of Example III.3, two values are indistinguishable at the type variable encoding the confidential input when they are integer values and the lowest 64 bits of their hashed values are the same.

Fig. 3: Indistinguishability
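As a sanity check of Example V.3, indistinguishability at the type of a declassifiable input amounts to comparing the declassifier's outputs; here is a minimal SML sketch for the parity policy.

  (* two inputs are indistinguishable for the policy of Example III.2
     exactly when their parities agree *)
  fun indistParity (v1 : int, v2 : int) : bool = (v1 mod 2 = v2 mod 2)

  val _ = indistParity (2, 4)   (* true: both even *)
  val _ = indistParity (2, 5)   (* false: different parity *)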

We say that two term substitutions are indistinguishable w.r.t. the policy if the following conditions hold.

  • both substitutions respect the confidential view (in particular, they have the same domain),

  • for every input declassifiable via a function, both substitutions map the associated function variable to that function,

  • for every input declassifiable via a combination, both substitutions map the associated action and function variables to that action and function,

  • for all other variables (the confidential inputs), the two substitutions map them to values that are indistinguishable at the corresponding type in the public view.

Note that each substitution maps the action and declassifier variables to the specific functions given in the policy, while input variables are mapped to indistinguishable values.

We now define type-based relaxed noninterference w.r.t. the policy, for a type well-formed in the public view's typing context. It says that indistinguishable inputs lead to indistinguishable results.

Definition V.4.

A term satisfies TRNI at a type (for the fixed policy) provided that it is typable at that type in the public view, it is typable in the confidential view, and for all pairs of indistinguishable term substitutions, the two resulting instantiations of the term are indistinguishable at that type.

Notice that if a term is well-typed in the public view then, by replacing all type variables in it with int, we get a term that is also well-typed in the confidential view. However, Definition V.4 also requires that the term itself is well-typed in the confidential view. This ensures that the definition is applied, as intended, to programs that do not contain type variables.

The definition of TRNI is indexed by a type for the result of the term. The type can be interpreted as constraining the observations to be made by the public observer. We are mainly interested in concrete output types, which express that the observer can do whatever they like and has full knowledge of the result. Put differently, TRNI for an abstract type expresses security under the assumption that the observer is somehow forced to respect the abstraction. For example, we consider the policy of Example III.2, where the confidential input can be declassified via