The number and variety of special-purpose computing devices is increasing dramatically. These include all kinds of embedded devices, cyber-physical systems (CPS) and Internet-of-Things (IoT) gadgets, utilized in various “smart” settings, such as homes, offices, factories, automotive systems and public venues. As society becomes increasingly accustomed to being surrounded by, and dependent on, such devices, their security becomes extremely important. For actuation-capable devices, malware can impact both security and safety, e.g., as demonstrated by Stuxnet . For sensing devices, malware can undermine privacy by obtaining ambient information. Furthermore, clever malware can turn vulnerable IoT devices into zombies that become sources for DDoS attacks. For example, in 2016, a multitude of compromised “smart” cameras and DVRs formed the Mirai Botnet , which was used to mount a massive-scale DDoS attack (the largest in history).
Unfortunately, security is typically not a key priority for low-end device manufacturers, due to cost, size or power constraints. It is thus unrealistic to expect such devices to have the means to prevent current and future malware attacks. The next best thing is detection of malware presence. This typically requires some form of Remote Attestation (RA) – a distinct security service for detecting malware on CPS, embedded and IoT devices. RA is especially applicable to low-end embedded devices that are incapable of defending themselves against malware infection. This is in contrast to more powerful devices (both embedded and general-purpose) that can avail themselves of sophisticated anti-malware protection. RA involves verification of the current internal state (i.e., RAM and/or flash) of an untrusted remote hardware platform (prover or Prv) by a trusted entity (verifier or Vrf). If Vrf detects malware presence, Prv's software can be reset or rolled back and out-of-band measures can be taken to prevent similar infections. In general, RA can help establish a static or dynamic root of trust in Prv and can also be used to construct other security services, such as software updates  and secure deletion . Hybrid RA (implemented as a HW/SW co-design) is a particularly promising approach for low-end embedded devices. It aims to provide the same security guarantees as (more expensive) hardware-based approaches, while minimizing modifications to the underlying hardware.
Even though numerous RA techniques with different assumptions, security guarantees, and designs have been proposed [41, 38, 34, 20, 29, 9, 19, 10, 37, 15, 16, 24, 37, 14], a major missing aspect of RA is the high assurance and rigor derivable from utilizing (automated) formal verification to guarantee security of the design and implementation of RA techniques. Because all aforementioned architectures and their implementations are not systematically designed from abstract models, their soundness and security can not be formally argued. In fact, our verification efforts revealed that a previous hybrid RA design – SMART  – wrongly assumed that disabling interrupts is an atomic operation, and hence opened the door to compromise of Prv's secret key in the window between the invocation of the disable-interrupts functionality and the time when interrupts are actually disabled. Another low/medium-end architecture – TrustLite  – also does not achieve our formal definition of soundness. In particular, this architecture is vulnerable to self-relocating malware (see  for details). Formal specification of properties and their (automated) verification significantly increases our confidence that such subtle issues are not overlooked.
In this paper we take a “verifiable-by-design” approach and develop, from scratch, an architecture for Verifiable Remote Attestation for Simple Embedded Devices (VRASED). VRASED is the first formally specified and verified RA architecture accompanied by a formally verified implementation. Verification is carried out fully, including hardware, software, and the composition of both, all the way up to end-to-end notions of soundness and security. The resulting verified implementation – along with its computer proofs – is publicly available . Formally reasoning about, and verifying, VRASED involves overcoming major challenges that have not been attempted in the context of RA and, to the best of our knowledge, not attempted for any security service implemented as a HW/SW co-design. These challenges include:
Formal definitions of: (i) end-to-end notions of RA soundness and security; (ii) a realistic machine model for low-end embedded systems; and (iii) VRASED's guarantees. These definitions must be made in a single formal system that is powerful enough to provide a common ground for reasoning about their interplay. In particular, our end goal is to prove that the definitions of soundness and security are implied by VRASED's guarantees when applied to our machine model. Our formal system of choice is Linear Temporal Logic (LTL). Background on LTL and our reasons for choosing it are discussed in Section II.
Automatic end-to-end verification of complex systems such as VRASED is challenging from the computability perspective, as the space of possible states is extremely large. To cope with this challenge, we take a “divide-and-conquer” approach. We start by dividing the end-to-end goal of soundness and security into smaller sub-properties that are also defined in LTL. Each HW sub-module, responsible for enforcing a given sub-property, is specified as a Finite State Machine (FSM), verified using a Model Checker, and automatically translated into the Verilog hardware description language. VRASED's SW relies on an F* verified implementation (see Section IV-C) which is also specified in LTL. This modular approach allows us to efficiently prove sub-properties enforced by individual building blocks in VRASED.
All proven sub-properties must be composed together in order to reason about security and soundness of VRASED as one whole system. To this end, we use a theorem prover to show (by using LTL equivalences) that the sub-properties that were proved for each of VRASED’s sub-modules, when composed, imply the end-to-end definitions of soundness and security. This modular approach enables efficient system-wide formal verification.
I-A The Scope of Low-End Devices
This work focuses on low-end devices based on low-power single-core microcontrollers with a few KBytes of program and data memory. A representative of this class of devices is the Texas Instruments MSP430 microcontroller (MCU) family . It has a 16-bit word size, resulting in 64 KBytes of addressable memory. SRAM is used as data memory and its size ranges between 4 and 16 KBytes (depending on the specific MSP430 model), while the rest of the address space is used for program memory, e.g., ROM and Flash. MSP430 is a Von Neumann architecture processor with common data and code address spaces. It can perform multiple memory accesses within a single instruction; its instruction execution time varies from 1 to 6 clock cycles, and its instruction length varies from 16 to 48 bits. MSP430 was designed for low power and low cost. It is widely used in many application domains, e.g., the automotive industry, utility meters, as well as consumer devices and computer peripherals. Our choice is also motivated by the availability of a well-maintained open-source MSP430 hardware design from Open Cores . Nevertheless, our machine model is applicable to other low-end MCUs in the same class as MSP430 (e.g., Atmel AVR ATMega).
Section II provides relevant background on RA and automated verification. Section III contains the details of the VRASED architecture and an overview of the verification approach. Section IV contains the formal definitions of end-to-end soundness and security and the formalization of the necessary sub-properties, along with the implementation of verified components to realize such sub-properties. Due to space limitations, the proofs of end-to-end soundness and security derived from the sub-properties are discussed in Appendix A. Section V discusses alternative designs that guarantee the same required properties and their trade-offs with the standard design. Section VI presents experimental results demonstrating the minimal overhead of the formally verified and synthesized components. Section VII discusses related work. Section VIII concludes with a summary of our results. End-to-end proofs of soundness and security, optional parts of the design, VRASED's API, comparison with other architectures, and discussion of VRASED's prototype can be found in Appendices A to E.
This section overviews RA and provides some background on automated verification.
II-A RA for Low-End Devices
As mentioned earlier, RA is a security service that facilitates detection of malware presence on a remote device. Specifically, it allows a trusted verifier (Vrf) to remotely measure the software state of an untrusted remote device (Prv). As shown in Figure 1, RA is typically realized via a simple challenge-response protocol:
Vrf sends an attestation request containing a challenge (Chal) to Prv. This request might also contain a token derived from a secret that allows Prv to authenticate Vrf.
Prv receives the attestation request and computes an authenticated integrity check over its memory and Chal. The memory region might be either pre-defined, or explicitly specified in the request. In the latter case, authentication of Vrf in step (1) is paramount to the overall security/privacy of Prv, as the request can specify arbitrary memory regions.
Prv returns the result to Vrf.
Vrf receives the result from Prv, and checks whether it corresponds to a valid memory state.
The authenticated integrity check can be realized as a Message Authentication Code (MAC) over Prv's memory. However, computing a MAC requires Prv to have a unique secret key (denoted by K) shared with Vrf. This K must reside in secure storage, where it is not accessible to any software running on Prv, except for attestation code. Since most RA threat models assume a fully compromised software state on Prv, secure storage implies some level of hardware support.
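To make the exchange concrete, the following Python sketch models both roles under the MAC-based realization described above. The function names, key sizes, and the choice of HMAC-SHA256 are illustrative assumptions for this sketch, not VRASED's actual code.

```python
import hmac
import hashlib
import os

def prv_attest(key: bytes, chal: bytes, memory: bytes) -> bytes:
    """Prover side: authenticated integrity check (a MAC) over memory and Chal."""
    return hmac.new(key, chal + memory, hashlib.sha256).digest()

def vrf_check(key: bytes, chal: bytes, expected_memory: bytes, report: bytes) -> bool:
    """Verifier side: recompute the MAC over the expected (benign) memory state."""
    expected = hmac.new(key, chal + expected_memory, hashlib.sha256).digest()
    return hmac.compare_digest(expected, report)

# Vrf and Prv share key K; a fresh challenge prevents replay of old reports.
K = os.urandom(32)
chal = os.urandom(32)
benign = b"firmware-v1"
assert vrf_check(K, chal, benign, prv_attest(K, chal, benign))
# A modified (infected) memory state produces a report that fails verification.
assert not vrf_check(K, chal, benign, prv_attest(K, chal, b"malware"))
```

Note that the security of this scheme collapses if regular software on Prv can read `K`, which is exactly why the rest of the paper focuses on hardware-enforced key protection.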
Prior RA approaches can be divided into three groups: software-based, hardware-based, and hybrid. Software-based (or timing-based) RA is the only viable approach for legacy devices with no hardware security features. Without hardware support, it is (currently) impossible to guarantee that K is not accessible by malware. Therefore, security of software-based approaches [42, 34] is attained by setting threshold communication delays between Vrf and Prv. Thus, software-based RA is unsuitable for multi-hop and jitter-prone communication, or settings where a compromised Prv is aided (during attestation) by a more powerful accomplice device. It also requires strong constraints and assumptions on the hardware platform and attestation usage [30, 33]. On the other extreme, hardware-based approaches require Prv's attestation functionality to be housed entirely within dedicated hardware, e.g., SGX , TrustZone , or Trusted Platform Modules (TPMs) . Such hardware features are too expensive (in terms of physical area, energy consumption, and actual cost) for low-end devices.
While both hardware- and software-based approaches are not well-suited for settings where low-end devices communicate over the Internet (which is often the case in the IoT), hybrid RA (based on HW/SW co-design) is a more promising approach. Hybrid RA aims at providing the same security guarantees as hardware-based techniques with minimal hardware support. SMART  is the first hybrid RA architecture targeting low-end MCUs. In SMART, attestation code is implemented in software. SMART's small hardware footprint guarantees that: (1) attestation code can not be modified, (2) attestation code has exclusive access to K, (3) no part of K remains in memory after attestation code terminates, and (4) attestation code runs atomically, i.e., from the first instruction until the last, without being interrupted. Property (4) is essential to prevent malware from relocating itself (during attestation) to evade detection. It also helps prevent Return-Oriented Programming (ROP) and similar attacks. We re-visit these properties and their corresponding requirements in Section III.
Despite much progress, a major missing aspect in RA research is the high assurance and rigor obtained by using (automated) formal methods to guarantee security of a concrete design and its implementation. (Note that the recent hybrid RA technique called HYDRA , which builds upon the formally verified seL4  microkernel, formally verifies neither the hardware modifications nor the software implementation of the attestation code.) We believe that verifiability and formal security guarantees are particularly important for hybrid designs aimed at low-end embedded and IoT devices, as their proliferation keeps growing. This serves as the main motivation for our efforts to develop the first formally verified RA architecture.
II-B Formal Verification, Model Checking & Linear Temporal Logic
Automated formal verification typically involves three basic steps. First, the system of interest (e.g., hardware, software, communication protocol) must be described using a formal model, e.g., a Finite State Machine (FSM). Second, properties that the model should satisfy must be formally specified. Third, the system model must be checked against formally specified properties to guarantee that the system retains such properties. This checking can be achieved via either Theorem Proving or Model Checking. In this work, we use the latter and our motivation for picking it is clarified below.
In Model Checking, properties are specified as formulae using Temporal Logic and system models are represented as FSMs. Hence, a system is represented by a triple (S, S0, T), where S is a finite set of states, S0 ⊆ S is the set of possible initial states, and T ⊆ S × S is the transition relation, i.e., it describes the set of states that can be reached in a single step from each state. The use of Temporal Logic to specify properties allows representation of expected system behavior over time.
We apply the model checker NuSMV , which can be used to verify generic HW or SW models. For digital hardware described at Register Transfer Level (RTL) – which is the case in this work – conversion from Hardware Description Language (HDL) to NuSMV model specification is simple. Furthermore, it can be automated and verified . This is because the standard RTL design already relies on describing hardware as an FSM.
In NuSMV, properties are specified in Linear Temporal Logic (LTL), which is particularly useful for verifying sequential systems. This is because it extends common logic statements with temporal clauses. In addition to propositional connectives, such as conjunction (∧), disjunction (∨), negation (¬), and implication (→), LTL includes temporal connectives, thus enabling sequential reasoning. We are interested in the following temporal connectives:
Xφ – neXt φ: holds if φ is true at the next system state.
Fφ – Future φ: holds if there exists a future state where φ is true.
Gφ – Globally φ: holds if φ is true in all future states.
φ U ψ – φ Until ψ: holds if there is a future state where ψ holds and φ holds in all states prior to that.
This set of temporal connectives, combined with propositional connectives (with their usual meanings), allows us to specify powerful rules. NuSMV works by checking LTL specifications against the system FSM for all reachable states in that FSM. In particular, all of VRASED's desired security properties are specified using LTL and verified by NuSMV.
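To build intuition for these connectives, the sketch below evaluates X, F, G, and U over a finite trace of states in Python. This is a simplification for illustration only: LTL semantics is defined over infinite traces, and NuSMV exhaustively checks the reachable state space rather than a single trace.

```python
# Each connective takes a predicate phi (and psi for U), a finite trace of
# states, and a position i; it returns whether the formula holds at i.

def X(phi, trace, i=0):
    """Xφ: φ holds at the next state."""
    return i + 1 < len(trace) and phi(trace[i + 1])

def F(phi, trace, i=0):
    """Fφ: φ holds at some present-or-future state."""
    return any(phi(s) for s in trace[i:])

def G(phi, trace, i=0):
    """Gφ: φ holds at every present-or-future state."""
    return all(phi(s) for s in trace[i:])

def U(phi, psi, trace, i=0):
    """φ U ψ: ψ eventually holds, and φ holds at every state before that."""
    for j in range(i, len(trace)):
        if psi(trace[j]):
            return True
        if not phi(trace[j]):
            return False
    return False

trace = ["idle", "busy", "busy", "done"]
busy = lambda s: s == "busy"
done = lambda s: s == "done"
assert X(busy, trace)                         # next state is busy
assert F(done, trace)                         # eventually done
assert not G(busy, trace)                     # not globally busy (state 0 is idle)
assert U(lambda s: s != "done", done, trace)  # not-done holds until done
```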
III Overview of VRASED
VRASED is composed of a HW module (HW-Mod) and a SW implementation (SW-Att) of Prv's behavior according to the RA protocol. HW-Mod enforces access control to K in addition to secure and atomic execution of SW-Att (these properties are discussed in detail below). HW-Mod is designed with minimality in mind. The verified FSMs contain a minimal state space, which keeps hardware cost low. SW-Att is responsible for computing an attestation report. As VRASED's security properties are jointly enforced by HW-Mod and SW-Att, both must be verified to ensure that the overall design conforms to the system specification.
III-A Adversarial Capabilities & Verification Axioms
We consider an adversary, Adv, that can control the entire software state, code, and data of Prv. Adv can modify any writable memory and read any memory that is not explicitly protected by access control rules, i.e., it can read anything (including secrets) that is not explicitly protected by HW-Mod. It can also re-locate malware from one memory segment to another, in order to hide it from being detected. Adv may also have full control over all Direct Memory Access (DMA) controllers on Prv. DMA allows a hardware controller to directly access main memory (e.g., RAM, flash or ROM) without going through the CPU.
We focus on the attestation functionality of Prv; verification of the entire MCU architecture is beyond the scope of this paper. Therefore, we assume that the MCU architecture strictly adheres to, and correctly implements, its specifications. In particular, our verification approach relies on the following simple axioms:
A1 - Program Counter: The program counter (PC) always contains the address of the instruction being executed in a given cycle.
A2 - Memory Address: Whenever memory is read or written, a data-address signal (D_addr) contains the address of the corresponding memory location. For a read access, a data read-enable bit (R_en) must be set, and for a write access, a data write-enable bit (W_en) must be set.
A3 - DMA: Whenever a DMA controller attempts to access main system memory, a DMA-address signal (DMA_addr) reflects the address of the memory location being accessed and a DMA-enable bit (DMA_en) must be set. DMA can not access memory when DMA_en is off (logical zero).
A4 - MCU Reset: At the end of a successful reset routine, all registers (including PC) are set to zero before resuming normal software execution flow. Resets are handled by the MCU in hardware; thus, the reset handling routine can not be modified.
A5 - Interrupts: Interrupts modify PC to point to the corresponding interrupt handler, which is at a fixed memory location.
Remark: Note that Axioms A1 to A5 are satisfied by the OpenMSP430 design.
SW-Att uses the HACL*  HMAC-SHA256 function, which is implemented and verified in F* (https://www.fstar-lang.org/). Verified F* code can be automatically translated to C, and the proof of correctness for this translation is provided in . However, even though efforts have been made to build formally verified C compilers (CompCert  is the most prominent example), there are currently no verified compilers targeting lower-end MCUs, such as MSP430. Hence, we assume that the standard compiler can be trusted to semantically preserve the expected behavior of the code, especially with respect to the following:
A6 - Callee-Saves-Register: Any register touched in a function is cleaned by default when the function returns.
A7 - Semantic Preservation: Functional correctness of the verified HMAC implementation in C, when converted to assembly, is semantically preserved.
Remark: Axioms A6 and A7 reflect the corresponding compiler specification (e.g., msp430-gcc).
Physical hardware attacks are out of scope in this paper. Specifically, Adv can not modify code stored in ROM, induce hardware faults, or retrieve secrets via physical presence side-channels. Protection against physical attacks is considered orthogonal and could be supported via standard tamper-resistance techniques .
III-B High-Level Properties of Secure Attestation
We now describe, at a high level, the sub-properties required for secure RA.
In section IV, we formalize these sub-properties in LTL and provide single end-to-end definitions for soundness and security.
Then we prove that VRASED’s design satisfies the aforementioned sub-properties and that the end-to-end definitions for soundness and security are implied by them.
The properties, shown in Figure 2, fall into two groups: key protection and safe execution.
As mentioned earlier, K must not be accessible by regular software running on Prv. To guarantee this, the following features must be correctly implemented:
P1- Access Control: K can only be accessed by SW-Att.
P2- No Leakage: Neither K, nor any function of K other than the correctly computed HMAC, can remain in unprotected memory or registers after execution of SW-Att.
P3- Secure Reset: Any memory tainted by SW-Att, and all registers (including PC), must be erased (or be inaccessible to regular software) after MCU reset. Since a reset might be triggered during SW-Att execution, lack of this property could result in leakage of privileged information about the system state or K. Erasure of registers as part of the reset ensures that no state from a previous execution persists. Therefore, the system must return to the default initialization state.
Safe execution ensures that K is properly and securely used by SW-Att for its intended purpose in the RA protocol. Safe execution can be divided into four sub-properties:
P4- Functional Correctness: SW-Att must implement the expected behavior of Prv's role in the RA protocol. For instance, if Vrf expects a response containing an HMAC of memory in a given address range, the SW-Att implementation should always reply accordingly. Moreover, SW-Att must always finish in finite time, regardless of input size and other parameters.
P5- Immutability: The SW-Att executable must be immutable. Otherwise, malware residing in Prv could modify SW-Att, e.g., to always generate valid measurements or to leak K.
P6- Atomicity: SW-Att execution can not be interrupted. The first reason for atomicity is to prevent leakage of intermediate values in registers and SW-Att's data memory (including locations that could leak functions of K) during SW-Att execution. This relates to P2 above. The second reason is to prevent roving malware from relocating itself to escape being measured by SW-Att.
P7- Controlled Invocation: SW-Att must always start from its first instruction and execute until its last instruction. Even though correct implementation of SW-Att is guaranteed by P4, isolated execution of chunks of a correctly implemented code could lead to catastrophic results. Potential ROP attacks could be constructed using gadgets of SW-Att (which, based on P1, have access to K) to compute valid attestation results.
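To illustrate what a P7 violation looks like, the hypothetical Python monitor below flags a PC trace that enters the SW-Att code region anywhere other than its first instruction, or leaves it from anywhere other than its last. The addresses and the software-monitor framing are illustrative only; VRASED enforces this property in verified hardware, not software.

```python
# Hypothetical SW-Att code region: CR = [CR_MIN, CR_MAX].
CR_MIN, CR_MAX = 0xA000, 0xA0FF

def in_cr(addr):
    return CR_MIN <= addr <= CR_MAX

def violates_controlled_invocation(pc_trace):
    """Return True if execution enters CR anywhere other than CR_MIN,
    or leaves CR from anywhere other than CR_MAX (a P7 violation)."""
    for prev, cur in zip(pc_trace, pc_trace[1:]):
        entering = (not in_cr(prev)) and in_cr(cur)
        leaving = in_cr(prev) and (not in_cr(cur))
        if entering and cur != CR_MIN:
            return True   # e.g., a ROP gadget invoked mid-SW-Att
        if leaving and prev != CR_MAX:
            return True   # SW-Att abandoned before its last instruction
    return False

# A legal invocation: enter at CR_MIN, run to CR_MAX, then leave.
assert not violates_controlled_invocation([0x4000, 0xA000, 0xA001, 0xA0FF, 0x4002])
# Jumping into the middle of SW-Att is flagged.
assert violates_controlled_invocation([0x4000, 0xA010])
```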
Beyond the aforementioned core security properties, in some settings, Prv might need to authenticate Vrf's attestation requests in order to mitigate potential DoS attacks on Prv. This functionality is also provided (and verified) as an optional feature in the design of VRASED. The differences between the standard design and the one with support for Vrf authentication are discussed in Appendix B.
III-C System Architecture
The VRASED architecture is depicted in Figure 3. VRASED is implemented by adding HW-Mod to the MCU architecture, e.g., MSP430. The MCU memory layout is extended to include Read-Only Memory (ROM) that houses SW-Att code and the key K used in the HMAC computation. Because K and SW-Att code are stored in ROM, immutability (P5) is guaranteed. VRASED also reserves a fixed part of the memory address space for the SW-Att stack; this amounts to a small fraction of the address space, as discussed in Section VI. (A separate region in RAM is not strictly required; alternatives and trade-offs are discussed in Section V.) Access control to dedicated memory regions, as well as SW-Att atomic execution, are enforced by HW-Mod. The memory backbone is extended to support multiplexing of the new memory regions. HW-Mod takes input signals from the MCU core: PC, R_en, W_en, D_addr, DMA_en, and DMA_addr. These inputs are used to determine a one-bit output signal, reset, that, when set to 1, resets the MCU core immediately, i.e., before execution of the next instruction. The reset output is triggered when HW-Mod detects any violation of the security properties.
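As a rough functional model of this logic, the Python sketch below computes, for one cycle's worth of input signals, whether the key-access rule (P1) is violated and the reset output should fire. The address layout and the function itself are hypothetical illustrations, not OpenMSP430's actual layout or HW-Mod's Verilog.

```python
# Illustrative memory layout (addresses are hypothetical).
CR = range(0xA000, 0xB000)   # SW-Att code ROM
KR = range(0xB000, 0xB040)   # key (K) ROM

def violation(pc, r_en, d_addr, dma_en, dma_addr):
    """One cycle of a P1-style access-control check.

    Returns True (i.e., drive the reset output) iff the key region KR is
    read by anything other than SW-Att code, including via DMA."""
    cpu_leak = r_en and d_addr in KR and pc not in CR
    dma_leak = dma_en and dma_addr in KR   # DMA bypasses the CPU entirely
    return cpu_leak or dma_leak

# SW-Att itself may read the key: no reset.
assert not violation(pc=0xA010, r_en=True, d_addr=0xB000, dma_en=False, dma_addr=0)
# Untrusted code reading the key triggers reset.
assert violation(pc=0x4000, r_en=True, d_addr=0xB000, dma_en=False, dma_addr=0)
# DMA reading the key triggers reset regardless of PC.
assert violation(pc=0xA010, r_en=False, d_addr=0, dma_en=True, dma_addr=0xB010)
```

In VRASED this check is combinational hardware verified against LTL invariants; the point of the sketch is only the shape of the decision, evaluated every cycle from the listed input signals.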
III-D Verification Approach
An overview of HW-Mod verification is shown in Figures 4 and 5. We start by formalizing properties discussed in this section using Linear Temporal Logic (LTL) to define invariants that must hold throughout the entire system execution. HW-Mod is implemented as a composition of sub-modules written in the Verilog hardware description language (HDL). Each sub-module implements the hardware responsible for ensuring a given subset of the LTL specifications. Each sub-module is described as an FSM in: (1) Verilog at Register Transfer Level (RTL); and (2) the Model-Checking language SMV . We then use the NuSMV model checker to verify that the FSM complies with the LTL specifications. If verification fails, the sub-module is re-designed.
Once each sub-module is verified, they are combined into a single Verilog design.
The composition is converted to SMV using the automatic translation tool Verilog2SMV .
The resulting SMV is simultaneously verified against all LTL specifications, to prove that the final Verilog design of HW-Mod complies with all security properties.
Remark: Automatic conversion of the composition of HW-Mod from Verilog to
SMV rules out the possibility of human mistakes in representing Verilog FSMs as SMV.
For the SW-Att part of VRASED, we use the HMAC-SHA256 implementation from the HACL* library  to compute an HMAC over the attested memory and the Chal received from Vrf. This function is formally verified with respect to memory safety, functional correctness, and cryptographic security. However, key secrecy properties (such as clean-up of memory tainted by the key) are not formally verified in HACL* and thus must be ensured by VRASED.
As the last step, we prove that the conjunction of the LTL properties guaranteed by HW-Mod and SW-Att implies soundness and security of the architecture. These are formally specified in Section IV-B.
IV Verifying VRASED
In this section we formalize the properties required for secure RA. For each property, we represent it as a set of LTL specifications and construct an FSM that is verified to conform to such specifications. Finally, the conjunction of these FSMs is implemented in Verilog HDL and translated to NuSMV using Verilog2SMV. The generated NuSMV description for the conjunction is proved to simultaneously hold for all specifications.
To facilitate generic LTL specifications that represent VRASED's architecture (see Figure 3), we use the following:
AR_min and AR_max: first and last physical addresses of the memory region to be attested;
CR_min and CR_max: physical addresses of the first and last instructions of SW-Att in ROM;
KR_min and KR_max: first and last physical addresses of the ROM region where K is stored;
XS_min and XS_max: first and last physical addresses of the RAM region reserved for SW-Att computation;
MAC_addr: fixed address that stores the result of the SW-Att computation (HMAC);
MAC_size: size of the HMAC result;
Table I uses the above definitions and summarizes the notation used in our LTL specifications throughout the rest of this paper.
To simplify specification of the defined security properties, without loss of generality, REG denotes a contiguous memory region between REG_min and REG_max, inclusive (REG = [REG_min, REG_max]). Therefore, the following equivalence holds:

A = REG ≡ (A ≥ REG_min) ∧ (A ≤ REG_max)

For example, the expression PC = CR holds when the current value of signal PC is within CR_min and CR_max, meaning that the MCU is currently executing an instruction in CR, i.e., a SW-Att instruction. This is because, in the notation introduced above, CR = [CR_min, CR_max].
IV-A1 FSM Representation
We now introduce the LTL specifications and the FSMs that are formally verified to hold for such specifications. As discussed in Section III, these FSMs correspond to the Verilog hardware design of HW-Mod sub-modules. The FSMs are implemented as Mealy machines, where the output can change at any time as a function of both the current state and the current input values (in contrast with Moore machines, where the output is defined solely based on the current state). Each FSM takes as input a subset of the following signals and wires: PC, R_en, W_en, D_addr, DMA_en, DMA_addr.
Each FSM has only one output, reset, that indicates whether any security property was violated. For the sake of presentation, we do not explicitly represent the value of the output for each state. Instead, we define the following implicit representation:
output reset is 1 whenever an FSM transitions to the Reset state;
output reset remains 1 until a transition leaving the Reset state is triggered;
output reset is 0 in all other states.
PC: Current Program Counter value (16 bits)
R_en: Signal that indicates if the MCU is reading from memory (1 bit)
W_en: Signal that indicates if the MCU is writing to memory (1 bit)
D_addr: Address for an MCU memory access (16 bits)
DMA_en: Signal that indicates if DMA is currently enabled (1 bit)
DMA_addr: Memory address being accessed by DMA, if any (16 bits)
CR: (Code ROM) Memory region where SW-Att is stored: CR = [CR_min, CR_max]
KR: (K ROM) Memory region where K is stored: KR = [KR_min, KR_max]
XS: (eXclusive Stack) Secure RAM region reserved for SW-Att computations: XS = [XS_min, XS_max]
MR: (MAC RAM) RAM region in which the SW-Att computation result is written: MR = [MAC_addr, MAC_addr + MAC_size - 1]. The same region is also used to pass the attestation challenge Chal as input to SW-Att
AR: (Attested Region) Memory region to be attested: AR = [AR_min, AR_max]. Can be fixed/pre-defined or specified in an authenticated request from Vrf
reset: A 1-bit signal that reboots the MCU when set to logic 1
A1, A2, …, A7: Verification axioms (outlined in Section III-A)
P1, P2, …, P7: Properties required for secure RA (outlined in Section III-B)
IV-B Formal Specifications of Soundness and Security
We now define the notions of RA soundness and security. Intuitively, soundness corresponds to computing an integrity-ensuring function over memory at time t. Our integrity-ensuring function is an HMAC computed on memory AR with a one-time key derived from K and Chal. Since SW-Att computation is not instantaneous, soundness must ensure that attested memory does not change during computation of the HMAC. This is the notion of temporal consistency in remote attestation . In other words, the result of a SW-Att call must reflect the entire state of the attested memory at the time when SW-Att is called. This notion is captured in LTL by Definition 1.
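In the notation of Table I, Definition 1 can be stated along the following lines (a reconstruction based on the surrounding discussion, not necessarily the verbatim published formula):

```latex
\{\, PC = CR_{min} \;\wedge\; AR = M \;\wedge\; MR = \mathit{Chal} \,\}
\;\rightarrow\;
\mathbf{F}\, \{\, PC = CR_{max} \;\wedge\; MR = HMAC(KDF(K, \mathit{Chal}),\, M) \,\}
```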
where M is any AR value and KDF is a secure key derivation function.
In Definition 1, PC = CR_min captures the time when SW-Att is called (execution of its first instruction). M and Chal are the values of AR and MR, respectively, at that time. From this pre-condition, Definition 1 asserts that there is a time in the future when SW-Att computation finishes and, at that time, MR stores the result of HMAC(KDF(K, Chal), M). Note that, to satisfy Definition 1, M and Chal in the resulting HMAC must correspond to the values in AR and MR, respectively, when SW-Att was called.
RA security is defined using the security game in Figure 7. It models an adversary Adv (a probabilistic polynomial time, ppt, machine) that has full control of the software state of Prv (as described in Section III-A). Adv can modify AR at will and call SW-Att a polynomial number of times in the security parameter (the bit-lengths of Chal and K). However, Adv cannot modify SW-Att code, which is stored in immutable memory. The game assumes that Adv does not have access to K, and that Adv only learns Chal after receiving it from Vrf as part of the attestation request.
In the following sections, we define SW-Att functional correctness and the LTL specifications in Equations 2-9, and formally verify that VRASED's design guarantees these LTL specifications. We derive the LTL specifications from the intuitive properties discussed in Section III-B and depicted in Figure 2. In Appendix A we formally prove that the conjunction of these properties achieves soundness (Definition 1) and security (Definition 2). We first show that VRASED guarantees that Adv can never learn K, thus satisfying the assumption in the security game. We then complete the proof of security via reduction, i.e., we show that the existence of an adversary that wins the game in Definition 2 implies the existence of an adversary that breaks the conjectured existential unforgeability of HMAC.
Iv-C Vrased SW-Att
To minimize required hardware features, hybrid RA approaches implement integrity-ensuring functions (e.g., HMAC) in software. VRASED's SW-Att implementation is built on top of HACL*'s HMAC implementation. HACL* code is verified to be functionally correct, memory safe, and secret independent. In addition, all memory is statically allocated on the stack, making it predictable and deterministic.
SW-Att is simple, as depicted in Figure 8. It first derives a new unique context-specific key from the master key K by computing an HMAC-based key derivation function, HKDF, on Chal. This key derivation can be extended to incorporate attested memory boundaries if Vrf specifies the range (see Appendix B). Finally, it calls HACL*'s HMAC, using the derived key as the HMAC key, on the memory range to be attested (AR in our notation). We emphasize that SW-Att resides in ROM, which guarantees P5 under the assumption of no hardware attacks. Moreover, as discussed below, HW-Mod enforces that no other software running on Prv can access memory allocated by SW-Att code, e.g., the key buffer allocated in line 2 of Figure 8.
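The two-step structure (derive a one-time key from K and Chal, then MAC the attested region) can be illustrated with the following Python sketch. This is not VRASED's actual implementation, which is verified C code from HACL*; the helper names and the minimal HKDF are illustrative, following RFC 5869:

```python
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): PRK = HMAC(salt, input keying material)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes = b"", length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869): stretch PRK into `length` bytes of key material."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def sw_att(master_key: bytes, chal: bytes, attested_region: bytes) -> bytes:
    """Sketch of SW-Att: derive a one-time key from (K, Chal), then
    compute HMAC-SHA256 over the attested memory region (AR)."""
    derived_key = hkdf_expand(hkdf_extract(chal, master_key))
    return hmac.new(derived_key, attested_region, hashlib.sha256).digest()
```

Because the key is derived from Chal, each attestation instance produces a fresh, unpredictable MAC, preventing replay of old results.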
HACL*'s verified HMAC is the core for guaranteeing P4 (Functional Correctness) in VRASED's design. SW-Att functional correctness means that, as long as the memory regions storing values used in SW-Att computation (AR, KR, and MR) do not change during its computation, the result of that computation is the correct HMAC. This guarantee can be formally expressed in LTL as in Definition 3. By this definition, the value in MR does not need to remain the same throughout the computation, as it will eventually be overwritten with the result of SW-Att computation.
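A sketch of Definition 3, consistent with the pre- and post-conditions of Definition 1 (the until-clause below, which excludes MR, is our assumed formulation):

```latex
\textbf{Definition 3 (SW-Att functional correctness, sketch).}
\[
\mathbf{G}: \Big\{\, PC = CR_{min} \,\wedge\, AR = M \,\wedge\, MR = Chal \,\wedge\,
\big[ (AR = M \,\wedge\, KR = K \,\wedge\, \neg reset)\ \mathbf{U}\ PC = CR_{max} \big]
\;\rightarrow\;
\mathbf{F}\,\big[\, PC = CR_{max} \,\wedge\, MR = HMAC\big(KDF(K, Chal),\, M\big) \,\big] \Big\}
\]
```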
where M is any arbitrary value for AR.
In addition, some HACL* properties, such as static/deterministic memory allocation, are used in alternative designs of VRASED to ensure P2 – see Section V.
Functional correctness implies that the HMAC implementation conforms to its published standard specification on all possible inputs, retaining the specification's cryptographic security. It also implies that HMAC executes in finite time. Secret independence ensures that no branches are taken as a function of secrets, i.e., K and the derived key in Figure 8. This mitigates leakage via timing side-channel attacks. Memory safety guarantees that implemented code is type safe, meaning that it never reads from, or writes to, invalid memory locations, out-of-bounds memory, or unallocated memory. This is particularly important for preventing ROP attacks, as long as P7 (controlled invocation) is also preserved. (Otherwise, even though the implementation is memory-safe and correct as a whole, chunks of memory-safe code could still be used in ROP attacks.)
Having all memory allocated statically allows us to either: (1) confine SW-Att execution to a fixed-size protected memory region inaccessible to regular software (including malware) running on Prv; or (2) ensure that the SW-Att stack is erased before the end of execution. Note that HACL* (like other cryptographic libraries, such as OpenSSL and NaCl) does not provide stack erasure, in order to improve performance. This practice is common because inter-process memory isolation is usually provided by the Operating System (OS). Therefore, P2 does not follow from the HACL* implementation, and erasure before SW-Att terminates must be guaranteed by other means. Recall that VRASED targets low-end MCUs that might run applications on bare metal and thus cannot rely on any OS features.
As discussed above, even though the HACL* implementation guarantees P4 and storage in ROM guarantees P5, these must be combined with P6 and P7 to provide safe execution. P6 and P7, along with the key protection properties (P1, P2, and P3), are ensured by HW-Mod, as described next.
Iv-D Key Access Control (HW-Mod)
If malware manages to read K from ROM, it can reply to Vrf with a forged result. The HW-Mod access control (AC) sub-module enforces that K can only be accessed by SW-Att (property P1).
Remark: We consider DMA implications for key access control (as well as for other properties) in Section IV-G.
Iv-D1 LTL Specification
The invariant for key access control (AC) is defined in LTL Specification (2). It stipulates that the system must transition to the Reset state whenever code from outside CR tries to read from within the key space KR.
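A sketch of LTL Specification (2), using the signal notation defined earlier (the precise formulation is an assumption):

```latex
\[
\mathbf{G}: \{\, \neg(PC \in CR) \,\wedge\, R_{en} \,\wedge\, (D_{addr} \in KR) \;\rightarrow\; reset \,\} \tag{2}
\]
```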
Iv-D2 Verified Model
Figure 10 shows the FSM implemented by the AC sub-module, which is verified to satisfy LTL Specification (2). This FSM has two states: Run and Reset. It sets the reset signal when transitioning to state Reset, which implies a hard-reset of the MCU. Once the reset process completes, the system leaves the Reset state.
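The behavior of this two-state machine can be simulated with a short Python sketch (the actual sub-module is Verilog; the boolean inputs below stand in for the hardware signals of the same names):

```python
# States of the key access-control FSM
RUN, RESET = 0, 1

def ac_step(state, pc_in_CR, r_en, d_addr_in_KR, reset_done=False):
    """One clock tick of the AC FSM. Returns (next_state, reset_signal).

    In Run: an illegal read (code outside CR reading the key region KR)
    forces a transition to Reset and asserts the reset line.
    In Reset: the reset line stays asserted until the MCU reset completes.
    """
    if state == RUN:
        if (not pc_in_CR) and r_en and d_addr_in_KR:
            return RESET, True   # illegal key access -> hard reset
        return RUN, False
    return (RUN, False) if reset_done else (RESET, True)
```

Note that SW-Att itself (PC inside CR) may read KR freely; only reads from outside CR trigger the reset.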
Iv-E Atomicity and Controlled Invocation (HW-Mod)
In addition to functional correctness, safe execution of attestation code requires immutability (P5), atomicity (P6), and controlled invocation (P7). P5 is achieved directly by placing SW-Att in ROM. Therefore, we only need to formalize invariants for the other two properties: atomicity and controlled invocation.
Iv-E1 LTL Specification
LTL Specification (3) enforces that the only way for SW-Att execution to terminate is through its last instruction: PC = CR_max. This is specified by checking the current and next PC values using the LTL neXt operator. In particular, if the current PC value is within the SW-Att region and the next PC value is outside it, then either the current PC value is the address of the last instruction in SW-Att (CR_max), or reset is triggered in the next cycle. Similarly, LTL Specification (4) enforces that the only way for PC to enter the SW-Att region is through its very first instruction: PC = CR_min. Together, these two invariants imply the controlled invocation captured by P7: it is impossible to jump into the middle of SW-Att, or to leave SW-Att before reaching its last instruction.
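LTL Specifications (3) and (4) can be sketched as follows (our assumed formulation, matching the prose description above):

```latex
\[
\mathbf{G}: \{\, (PC \in CR) \,\wedge\, \neg\mathbf{X}(PC \in CR)
\;\rightarrow\; (PC = CR_{max}) \,\vee\, \mathbf{X}(reset) \,\} \tag{3}
\]
\[
\mathbf{G}: \{\, \neg(PC \in CR) \,\wedge\, \mathbf{X}(PC \in CR)
\;\rightarrow\; \mathbf{X}(PC = CR_{min}) \,\vee\, \mathbf{X}(reset) \,\} \tag{4}
\]
```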
P6 is also satisfied through LTL Specifications (3) and (4). Atomicity could be violated by interrupts. However, if an interrupt occurs, PC changes to point to the interrupt-handling routine, whose address in OpenMSP430 is at a fixed location (Axiom A5) and, more importantly, outside CR. Therefore, if interrupts are not disabled by software running on Prv before calling SW-Att, any interrupt that might violate SW-Att atomicity will cause an MCU reset.
Iv-E2 Verified Model
Figure 11 presents the verified model for atomicity and controlled-invocation enforcement. The FSM has five states. Two basic states represent conditions when PC points to an address (1) outside CR, and (2) within CR, excluding the first and last instructions of SW-Att. Two further states represent conditions when PC points to the first and last instructions of SW-Att, respectively. The only possible path from the outside state to the mid-CR state is through the first-instruction state. Similarly, the only path from the mid-CR state to the outside state is through the last-instruction state. Any sequence of PC values not obeying these conditions triggers a transition to the fifth state, Reset, causing the MCU to reset.
Iv-F Key Confidentiality (HW-Mod)
To guarantee the secrecy of K, and thus satisfy P2, VRASED must enforce the following:
No leaks after attestation: any registers and memory accessible to applications must be erased at the end of each attestation instance, i.e., before application execution resumes.
No leaks on reset: since a reset can be triggered during attestation execution, any registers and memory accessible to regular applications must be erased upon reset.
In MSP430, all registers are zeroed out upon reset and at boot time (Axiom A4). Therefore, the only time register clean-up is necessary is at the end of SW-Att execution; this clean-up is guaranteed by the Callee-Saves-Register convention (Axiom A6).
Nonetheless, the leakage problem remains because of RAM allocated by SW-Att. Thus, we must guarantee that K is not leaked through "dead" memory, which could be accessed by applications (possibly malware) after SW-Att terminates. A simple and effective way of addressing this issue is by reserving a separate secure stack in RAM that is only accessible (i.e., readable and writable) by attestation code. All memory allocations by SW-Att must be done on this stack, and access control to the stack must be enforced by HW-Mod. As discussed in Section VI, the size of this stack is constant, 2.3KBytes, corresponding to 3.5% of the MSP430 16-bit address space. We also consider several VRASED variants and trade-offs between them in Section V.
Iv-F1 LTL Specification
Recall that XS denotes the contiguous secure memory region reserved for exclusive access by SW-Att. The LTL Specification for the secure stack sub-module is as follows:
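A sketch of LTL Specification (5), using the notation defined earlier (the exact formulation is an assumption):

```latex
\[
\mathbf{G}: \{\, \neg(PC \in CR) \,\wedge\, (R_{en} \vee W_{en}) \,\wedge\, (D_{addr} \in XS)
\;\rightarrow\; reset \,\} \tag{5}
\]
```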
We also want to prevent attestation code from writing into application memory. Therefore, it is only allowed to write to the designated fixed region for the HMAC result (MR) and to its own stack (XS).
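The corresponding invariant, LTL Specification (6), can be sketched as follows (formulation assumed from the surrounding description):

```latex
\[
\mathbf{G}: \{\, (PC \in CR) \,\wedge\, W_{en} \,\wedge\, \neg(D_{addr} \in XS)
\,\wedge\, \neg(D_{addr} \in MR) \;\rightarrow\; reset \,\} \tag{6}
\]
```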
In summary, invariants (5) and (6) enforce that only attestation code can read from/write to the secure reserved stack and that attestation code can only write to regular memory within the space reserved for the HMAC result. If any of these conditions is violated, the system resets.
Iv-F2 Verified Model
Iv-G DMA Support
So far, we have presented a formalization of HW-Mod sub-modules under the assumption that DMA is either not present or disabled on Prv. However, when present, a DMA controller can access arbitrary memory regions. Such memory access is performed concurrently on the memory backbone, without MCU intervention, while the MCU executes regular instructions.
DMA data transfer in MSP430 is performed using dedicated memory-bus signals, e.g., DMA_addr and DMA_en. Hence, regular memory access control (based on monitoring D_addr) does not apply to memory accesses by the DMA controller. Thus, a compromised DMA controller may violate P1 and P2 by directly reading K and intermediate SW-Att values in the attestation stack, respectively. In addition, it can assist Prv-resident malware in escaping detection by either copying it out of the measurement range or deleting it, which results in a violation of P6.
Iv-G1 LTL Specification
We introduce three additional LTL Specifications to protect against the aforementioned attacks. First, we enforce that DMA cannot access the key region KR:
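This can be sketched as LTL Specification (7), with the DMA signals defined earlier (formulation assumed):

```latex
\[
\mathbf{G}: \{\, DMA_{en} \,\wedge\, (DMA_{addr} \in KR) \;\rightarrow\; reset \,\} \tag{7}
\]
```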
Similarly, the LTL Specification preventing DMA access to the attestation stack is defined as:
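A sketch of LTL Specification (8), mirroring (7) but for the exclusive stack XS (formulation assumed):

```latex
\[
\mathbf{G}: \{\, DMA_{en} \,\wedge\, (DMA_{addr} \in XS) \;\rightarrow\; reset \,\} \tag{8}
\]
```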
Finally, invariant (9) specifies that DMA must always be disabled while PC is in the SW-Att region. This prevents the DMA controller from helping malware escape detection during attestation.
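Invariant (9) can be sketched as (formulation assumed):

```latex
\[
\mathbf{G}: \{\, DMA_{en} \,\wedge\, (PC \in CR) \;\rightarrow\; reset \,\} \tag{9}
\]
```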
Iv-G2 Verified Model
Iv-H HW-Mod Composition
Thus far, we have designed and verified individual HW-Mod sub-modules according to the methodology in Section III-D, illustrated in Figure 4. We now follow the workflow of Figure 5 to combine the sub-modules into a single Verilog module. Since each sub-module individually guarantees a subset of properties P1-P7, the composition is simple: the system must reset whenever any sub-module reset is triggered. This is implemented as a logical OR of the sub-modules' reset signals. The composition is shown in Figure 14.
To verify that all LTL specifications still hold for the composition, we use Verilog2SMV to translate HW-Mod to SMV and verify the SMV model against all of these specifications simultaneously.
Iv-I Secure Reset (HW-Mod)
Finally, we define the LTL Specification for secure reset (P3), a necessary property for the composition of all sub-modules. It guarantees that the MCU reset completes before the reset signal is turned off. At the end of a reset, all registers (including PC) are set to 0, per Axiom A4. Ensuring that reset remains triggered until this point is important to guarantee that nothing leaks through registers after a reset. While P1 guarantees that K is not leaked from ROM, the exclusive stack guarantees that K cannot be inferred from RAM, and Axiom A6 guarantees that registers are erased after SW-Att terminates, LTL Specification (10) is still needed to prevent leakage when a reset signal arrives during SW-Att execution, since K might remain in some registers.
Iv-I1 LTL Specification
To guarantee that the reset signal is active long enough for the MCU reset to finish and all registers to be cleaned up, the following must hold:
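A sketch of LTL Specification (10), assuming (per Axiom A4) that registers, including PC, read 0 at the end of the reset routine:

```latex
\[
\mathbf{G}: \{\, reset \;\rightarrow\; \big[\, reset\ \mathbf{U}\ (PC = 0) \,\big] \,\} \tag{10}
\]
```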
Invariant (10) states that when the reset signal is triggered, it can only be released after PC = 0. The transition out of the Reset state in all sub-modules presented in this section already takes this invariant into account. Thus, the HW-Mod composition also satisfies LTL Specification (10).
V Alternative Designs
We now discuss alternative designs for VRASED that guarantee the verified properties without requiring a separate secure stack region for SW-Att operations. Recall that HW-Mod enforces that only SW-Att can access this stack. Since memory usage in HACL* HMAC is deterministic, the size of the separate stack can be pre-determined (2.3K bytes in our implementation). While this results in overall (HW and SW) design simplicity, dedicating 3.5% of addressable memory to a secure stack might not be desirable. Therefore, we consider several alternatives. In Section VI, the costs of these alternatives are quantified and compared to the standard design of VRASED.
V-A Erasure on SW-Att
The most intuitive alternative to a reserved secure stack (which prevents accidental key leakage by SW-Att) is to encode the corresponding properties into the HACL* implementation and proof. Specifically, it would require extending the HACL* implementation to zero out all allocated memory before every function return. In addition, to retain verification of P2 (in Section III-B) and ensure no leakage, HACL*-verified properties must be extended to incorporate memory erasure. This is not yet supported in HACL*, and doing so would incur a slight performance overhead. However, the trade-off between performance and RAM savings might be worthwhile. Furthermore, the HACL* team confirmed to us (via private email communication in late April 2018) that this functionality is currently under development and is expected to become available in mid-2018.
At the same time, we note that, even with verified erasure as part of SW-Att, P2 is still not guaranteed if the MCU does not erase the entire RAM upon boot. This is necessary to handle the case in which Prv reboots in the middle of SW-Att execution: without a reserved stack, traces of K might persist in RAM. Since the memory range used by SW-Att execution is not fixed, hardware support is required to bootstrap secure RAM erasure before any software executes. In fact, such support is necessary for all approaches without a separate secure stack. The implication is that the secure RAM erasure routine itself must also be verified to ensure P2.
V-B Compiler-Based Clean-Up
While stack erasure in HACL* would integrate nicely with the overall proof of SW-Att, the assurance would be at the language abstraction level, and not necessarily at the machine level; the latter would require additional assumptions about the compilation tool chain. We could also consider performing stack erasure directly in the compiler. In fact, a recent proposal to do exactly that is zerostack, an extension to Clang/LLVM. In the case of VRASED, this feature could be applied to unmodified HACL* at compilation time, adding instructions that erase the stack before each function return, thus enabling P2 (again assuming a verified RAM-erasure routine upon boot). We emphasize that this approach increases the compiler's trusted code base. Ideally, it should be implemented and formally verified as part of a verified compiler suite, such as CompCert.
|Method|RAM Erasure Required Upon Boot?|FPGA LUT|FPGA Reg|FPGA Cell|Verilog LoC|ROM (bytes)|Sec. RAM (bytes)|Time to attest 4KB (CPU cycles)|Time to attest 4KB (ms at 8MHz)|
|Secure Stack (Section IV)|No|2014|846|2128|2613|4500|2332|3601216|450.15|
|Erasure on SW-Att (Section V-A)|Yes|2004|844|2116|2479|4522|0|3613283|451.66|
|Compiler-based Clean-up (Section V-B)*|Yes|-|-|-|-|-|-|-|-|
|Double-HMAC Call (Section V-C)|Yes|2004|844|2116|2479|4570|0|7201605|900.20|
*As mentioned in Section V-B, there is no formally verified msp430 compiler capable of performing stack erasure. Thus, we estimate the overhead of this approach by manually inserting the code required for erasing the stack in SW-Att.
V-C Double-HMAC Call
Finally, complete stack erasure could also be achieved directly using currently verified HACL* properties, without any further modifications. This approach involves invoking the HACL* HMAC function a second time, after the computation of the actual HMAC. The second "dummy" call would use the same input data; however, instead of K, an independent constant (e.g., a string of zeros) would be used as the HMAC key.
Recall that HACL* is verified to only allocate memory in a static and deterministic manner. Also, due to HACL*'s verified properties that mitigate side-channels, control flow does not change based on the secret key. Therefore, this deterministic and static allocation implies that, for inputs of the same size, any variable allocated by the first "real" HMAC call (tainted by K) would be overwritten by the corresponding variable in the second "dummy" call. The same caveat discussed in Section V-A applies here: secure RAM erasure at boot is still needed, for the same reasons. Admittedly, the double-HMAC approach consumes twice as many CPU cycles. Still, it might be a worthwhile trade-off, especially when memory is scarce and the previously discussed HACL* or compiler extensions are unavailable.
Vi Implementation and Evaluation
We now discuss implementation details and evaluate VRASED's overhead and performance. Section VI-B reports on verification complexity. Section VI-C discusses performance in terms of time and space complexity, as well as hardware overhead. We also provide a comparison between VRASED and other RA architectures targeting low-end devices, namely SANCUS and SMART, in Appendix C.
As mentioned earlier, we use OpenMSP430 as an open-core implementation of the MSP430 architecture. OpenMSP430 is written in the Verilog hardware description language (HDL) and can execute software generated by any MSP430 toolchain with near cycle accuracy. We modified the standard OpenMSP430 to implement the hardware architecture presented in Section III-C, as shown in Figure 3. This includes adding ROM to store K and SW-Att, adding HW-Mod, and adapting the memory backbone accordingly. We use Xilinx Vivado, a popular logic synthesis tool, to synthesize an RTL description of HW-Mod into FPGA hardware. The synthesized hardware consists of a number of logic cells; each consists of Look-Up Tables (LUTs) and registers. LUTs implement combinatorial boolean logic, while registers implement sequential logic elements, i.e., FSM states and data storage. We compiled SW-Att using the native msp430-gcc and used linker scripts to generate software images compatible with the memory layout of Figure 3. Finally, we evaluated VRASED on an FPGA platform targeting the Artix-7 class of devices.
Vi-B Verification Results
As discussed in Section III-B, VRASED's verification covers properties P1-P7. P5 is achieved directly by executing SW-Att from ROM, while HACL* HMAC verification implies P4. All other properties are automatically verified using the NuSMV model checker. Table III shows the verification results for VRASED's HW-Mod composition, as well as for the individual sub-modules. It shows that VRASED successfully achieves all required security properties. These results also demonstrate the feasibility of our verification approach: the verification process, running on a commodity desktop computer, consumes only a modest amount of memory and time (on the order of megabytes and seconds, respectively) for all properties, and does not run into the state explosion problem.
Vi-C Performance and Hardware Cost
We now report on VRASED's performance, considering the standard design (described in Section IV) and the alternatives discussed in Section V. We evaluate the hardware footprint, memory (ROM and secure RAM), and run-time. Table II summarizes the results.
Hardware Footprint. The secure stack approach adds around 434 lines of code in Verilog HDL. This corresponds to around 20% of the code in the original OpenMSP430 core. In terms of synthesized hardware, it requires 64 (3.3%) and 19 (2.3%) additional LUTs and registers respectively. Overall, VRASED contains 51 logic cells more than the unmodified OpenMSP430 core, corresponding to a 2.5% increase.
Memory. VRASED requires 4.5KB of ROM; most of which (96%) is for storing HACL* HMAC-SHA256 code. The secure stack approach has the smallest ROM size, as it does not need to perform a memory clean-up in software. However, this advantage is attained at the price of requiring 2.3KBytes of reserved RAM. This overhead corresponds to 3.5% of MSP430 16-bit address space.
Attestation Run-time. Attestation run-time is dominated by the time it takes to compute the HMAC of Prv's memory. The secure stack, erasure on SW-Att, and compiler-based clean-up approaches take roughly 0.45s to attest 4KB of RAM on an MSP430 device clocked at 8MHz. The double-HMAC approach invokes the HMAC function twice, making its run-time roughly two times slower.
Discussion. We consider VRASED's overhead to be affordable. The additional hardware, including registers, logic gates, and exclusive memory, results in only a 2-4% increase. The number of cycles required by SW-Att grows linearly with the size of the attested memory. As MSP430 typically runs at 8-25MHz, attestation of the entire RAM on a typical MSP430 can be computed in less than a second. VRASED's attestation is thus relatively cheap for Prv. As a point of comparison, consider a common cryptographic primitive such as the Curve25519 Elliptic-Curve Diffie-Hellman (ECDH) key exchange: a single execution of an optimized version of this protocol on MSP430 has been reported to take millions of cycles. As Table II shows, attestation of 4KBytes (the typical RAM size of some MSP430 models) can be computed three times faster.
Vii Related Work
We are unaware of any previous work that yielded a formally verified RA design. To the best of our knowledge, VRASED is the first verification of a security service implemented as a HW/SW co-design. Nevertheless, formal verification has been widely used as the de facto means to guarantee that a system is free of implementation errors and bugs, and in recent years several efforts have focused on verifying security-critical systems.
In terms of cryptographic primitives, Hawblitzel et al. verified new implementations of SHA, HMAC, and RSA. Beringer et al. verified the OpenSSL SHA-256 implementation. Bond et al. verified assembly implementations of SHA-256, Poly1305, AES, and ECDSA. More recently, Zinzindohoué et al. developed HACL*, a verified cryptographic library containing the entire cryptographic API of NaCl. As discussed earlier, HACL*'s verified HMAC forms the core of VRASED's software component.
Larger security-critical systems have also been successfully verified. For example, Bhargavan et al. implemented the TLS protocol with verified cryptographic security. CompCert is a C compiler that is formally verified to preserve C code semantics in generated assembly code. Klein et al. designed and proved functional correctness of seL4, the first fully verified general-purpose microkernel. More recently, Tuncay et al. verified a design for the Android OS app permissions model.
The importance of verifying RA has been recently acknowledged by Lugou et al., who discussed methodologies specifically for verifying HW/SW co-designs. A follow-on result proposed the SMASH-UP tool. By modeling a hardware abstraction, SMASH-UP allows automatic conversion of assembly instructions to their effects on the hardware representation. Similarly, Cabodi et al. [12, 11] discussed first steps towards formalizing hybrid RA properties. However, none of these results yielded a fully verified (and publicly available) RA architecture, such as VRASED.
Viii Conclusion
This paper presents VRASED, the first formally verified RA method, which combines a verified cryptographic software implementation with a verified hardware design to guarantee correct implementation of its security properties. VRASED is also the first verified security service implemented as a HW/SW co-design. VRASED was designed with simplicity and minimality in mind. It results in efficient computation and low hardware cost, realistic even for low-end embedded systems. VRASED's practicality is demonstrated via a publicly available implementation on the low-end MSP430 platform. The design and verification methodology presented in this paper can be extended to other MCU architectures. We believe that this work represents an important and timely advance in embedded systems security, especially with the rise of heterogeneous ecosystems of (inter-)connected IoT devices. Since most IoT devices cannot afford expensive computation (typically required by traditional security services designed for higher-end computers), we argue that a formally verified RA reference design is very important.
-  “VRASED source code,” https://www.dropbox.com/sh/vdlogjlp6ziy2r0/AADxQB1CV9QnVuL67nTDCpdaa, 2018.
-  M. Antonakakis et al., “Understanding the Mirai botnet,” in USENIX Security Symposium, 2017.
-  Arm Ltd., “Arm TrustZone,” 2018. [Online]. Available: https://www.arm.com/products/security-on-arm/trustzone
-  L. Beringer et al., “Verified correctness and security of OpenSSL HMAC,” in USENIX, 2015.
-  D. J. Bernstein et al., “The security impact of a new cryptographic library,” in International Conference on Cryptology and Information Security in Latin America, 2012.
-  K. Bhargavan et al., “Implementing TLS with verified cryptographic security,” in SP, 2013.
-  A. Bogdanov et al., “Spongent: The design space of lightweight cryptographic hashing,” IEEE Transactions on Computers, vol. 62, 2013.
-  B. Bond et al., “Vale: Verifying high-performance cryptographic assembly code,” in USENIX, 2017.
-  F. Brasser et al., “TyTAN: tiny trust anchor for tiny devices,” in DAC. ACM.
-  ——, “Remote attestation for low-end embedded devices: the prover’s perspective,” in DAC, 2016.
-  G. Cabodi et al., “Secure embedded architectures: Taint properties verification,” in DAS, 2016.
-  ——, “Formal verification of embedded systems for remote attestation,” WSEAS Transactions on Computers, vol. 14, pp. 760–769, 2015.
-  X. Carpent et al., “Reconciling remote attestation and safety-critical operation on simple iot devices,” in 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC). IEEE, 2018, pp. 1–6.
-  ——, “Temporal consistency of integrity-ensuring computations and applications to embedded systems security,” in ASIACCS, 2018.
-  ——, “ERASMUS: Efficient remote attestation via self-measurement for unattended settings,” in Design, Automation and Test in Europe (DATE), 2018.
-  ——, “Remote attestation of iot devices via SMARM: Shuffled measurements against roving malware,” in IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2018.
-  A. Cimatti et al., “NuSMV 2: An opensource tool for symbolic model checking,” in International Conference on Computer Aided Verification. Springer, 2002, pp. 359–364.
-  A. Duret-Lutz et al., “Spot 2.0: A framework for LTL and ω-automata manipulation,” in International Symposium on Automated Technology for Verification and Analysis. Springer, 2016, pp. 122–129.
-  K. Eldefrawy et al., “HYDRA: hybrid design for remote attestation (using a formally verified microkernel),” in Wisec. ACM, 2017.
-  ——, “SMART: Secure and minimal architecture for (establishing dynamic) root of trust,” in NDSS. Internet Society, 2012.
-  O. Girard, “openMSP430,” 2009.
-  C. Hawblitzel et al., “Ironclad apps: End-to-end security via automated full-system verification.” in OSDI, vol. 14, 2014, pp. 165–181.
-  G. Hinterwälder et al., “Full-size high-security ECC implementation on MSP430 microcontrollers,” in International Conference on Cryptology and Information Security in Latin America. Springer, 2014, pp. 31–47.
-  A. Ibrahim et al., “SeED: secure non-interactive attestation for embedded devices,” in ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2017.
-  Texas Instruments, “MSP430 ultra-low-power sensing & measurement MCUs,” http://www.ti.com/microcontrollers/msp430-ultra-low-power-mcus/overview.html.
-  Intel, “Intel Software Guard Extensions (Intel SGX).” [Online]. Available: https://software.intel.com/en-us/sgx
-  A. Irfan et al., “Verilog2SMV: A tool for word-level verification,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2016. IEEE, 2016, pp. 1156–1159.
-  G. Klein et al., “seL4: Formal verification of an OS kernel,” in Proceedings of the ACM SIGOPS 22Nd Symposium on Operating Systems Principles, ser. SOSP ’09. New York, NY, USA: ACM, 2009, pp. 207–220. [Online]. Available: http://doi.acm.org/10.1145/1629575.1629596
-  P. Koeberl et al., “TrustLite: A security architecture for tiny embedded devices,” in EuroSys. ACM, 2014.
-  X. Kovah et al., “New results for timing-based attestation,” in Proceedings of the IEEE Symposium on Research in Security and Privacy. IEEE Computer Society Press, 2012.
-  H. Krawczyk and P. Eronen, “HMAC-based extract-and-expand key derivation function (HKDF),” Internet Engineering Task Force, Internet Request for Comment RFC 5869, May 2010.
-  X. Leroy, “Formal verification of a realistic compiler,” Communications of the ACM, vol. 52, no. 7, pp. 107–115, 2009.
-  Y. Li et al., “Establishing software-only root of trust on embedded systems: Facts and fiction,” in Security Protocols—22nd International Workshop, 2015.
-  ——, “Viper: Verifying the integrity of peripherals’ firmware,” in CCS. ACM, 2011.
-  F. Lugou et al., “Toward a methodology for unified verification of hardware/software co-designs,” Journal of Cryptographic Engineering, 2016.
-  ——, “Smashup: a toolchain for unified verification of hardware/software co-designs,” Journal of Cryptographic Engineering, vol. 7, no. 1, pp. 63–74, 2017.
-  J. Noorman et al., “Sancus 2.0: A low-cost security architecture for iot devices,” ACM Trans. Priv. Secur., vol. 20, no. 3, pp. 7:1–7:33, Jul. 2017. [Online]. Available: http://doi.acm.org/10.1145/3079763
-  D. Perito and G. Tsudik, “Secure code update for embedded devices via proofs of secure erasure.” in ESORICS, 2010.
-  J. Protzenko et al., “Verified low-level programming embedded in F*,” Proceedings of the ACM on Programming Languages, 2017.
-  S. Ravi et al., “Tamper resistance mechanisms for secure embedded systems,” in Proceedings of the 17th International Conference on VLSI Design. IEEE, 2004, pp. 605–611.
-  A. Seshadri et al., “Scuba: Secure code update by attestation in sensor networks,” in ACM workshop on Wireless security, 2006.
-  ——, “Pioneer: Verifying code integrity and enforcing untampered code execution on legacy systems,” ACM SIGOPS Operating Systems Review, December 2005.
-  L. Simon et al., “What you get is what you C: Controlling side effects in mainstream C compilers,” in Proceedings of the Third IEEE European Symposium on Security and Privacy (EuroS&P). London, UK, Apr. 2018.
-  Texas Instruments, “MSP430 GCC user’s guide,” 2016.
-  Trusted Computing Group, “Trusted Platform Module (TPM),” 2017. [Online]. Available: http://www.trustedcomputinggroup.org/work-groups/trusted-platform-module/
-  G. S. Tuncay et al., “Resolving the predicament of Android custom permissions,” in ISOC Network and Distributed Systems Security Symposium (NDSS), 2018.
-  J. Vijayan, “Stuxnet renews power grid security concerns,” http://www.computerworld.com/article/2519574/security0/stuxnet-renews-power-grid-security-concerns.html, June 2010.
-  Xilinx, “Vivado design suite user guide,” 2017.
-  Xilinx Inc., “Artix-7 FPGA family,” 2018. [Online]. Available: https://www.xilinx.com/products/silicon-devices/fpga/artix-7.html
-  J.-K. Zinzindohoué et al., “Hacl*: A verified modern cryptographic library,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 1789–1806.
Appendix A: VRASED End-to-End Soundness and Security Proofs
VIII-A Proof Strategy
In this section we discuss the proofs of soundness (as in Definition 1) and security (as in Definition 2). Soundness is proved entirely via LTL equivalences. To prove security, we first show, via LTL equivalences, that VRASED guarantees that the adversary can never learn the key 𝒦. We then prove security of VRASED by showing a reduction from HMAC existential unforgeability to VRASED security. In other words, we show that the existence of an adversary that breaks VRASED implies the existence of an adversary HMAC-Adv able to break the conjectured existential unforgeability of HMAC. The full machine-checked proofs for the LTL equivalences (using the Spot 2.0 proof assistant) discussed in the remainder of this section are available in .
VIII-B Machine Model
To prove that VRASED’s design satisfies the end-to-end definitions of soundness and security for RA, we start by formally defining (in LTL) the memory and execution models corresponding to the architecture introduced in Section III.
The memory model in Definition 4 captures the fact that KR (the key region) and CR (the SW-Att code region) are ROM regions and, as such, are immutable. Hence, the values stored in these regions always correspond to 𝒦 and to the instructions of SW-Att, respectively. Finally, the memory model states that KR, CR, AR, XS, and MR are disjoint regions in the memory layout, corresponding to the architecture presented in Figure 3.
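To make the disjointness requirement concrete, the following Python sketch checks a candidate layout; the region bounds are hypothetical, chosen only to mirror the layout of Figure 3:

```python
# Hypothetical memory layout mirroring Figure 3; addresses are illustrative.
# Each region is a half-open interval [start, end).
REGIONS = {
    "KR": (0xA000, 0xA020),  # key region (ROM): stores the key
    "CR": (0xA020, 0xB000),  # code region (ROM): stores SW-Att instructions
    "XS": (0xD000, 0xD100),  # exclusive stack used by SW-Att
    "MR": (0xD100, 0xD200),  # MAC region: challenge input / HMAC output
    "AR": (0xE000, 0xF000),  # attested region covered by the HMAC
}

def disjoint(a, b):
    """True iff half-open intervals a and b do not overlap."""
    return a[1] <= b[0] or b[1] <= a[0]

def layout_ok(regions):
    """Memory-model requirement: all regions are pairwise disjoint."""
    names = list(regions)
    return all(
        disjoint(regions[n1], regions[n2])
        for i, n1 in enumerate(names)
        for n2 in names[i + 1:]
    )
```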
Our execution model, in Definition 5, captures MSP430 behavior in terms of the effects on processor signals when reading from and writing to memory. We do not model the effects of instructions that only modify register values (e.g., register-to-register ALU operations), because they are not needed in our proofs.
The execution model defines that a given memory address can be modified in two cases: by a CPU instruction or by DMA. In the first case (CPU), the W_en signal must be on and D_addr must contain the memory address being accessed. In the second case, the DMA_en signal must be on and DMA_addr must contain the address being modified by DMA. The requirements for reading from a given memory address are similar, except that R_en, instead of W_en, must be on. Finally, the execution model also captures the fact that an interrupt implies transitioning the PC value to a memory address that is necessarily outside CR.
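The access rules of the execution model can be paraphrased as small Python predicates; the signal names follow the description above, and the model is deliberately simplified:

```python
def can_modify(addr, w_en, d_addr, dma_en, dma_addr):
    """A memory address may change in one transition only if the CPU is
    writing to it (w_en on and d_addr == addr) or DMA is writing to it
    (dma_en on and dma_addr == addr)."""
    return (w_en and d_addr == addr) or (dma_en and dma_addr == addr)

def can_read(addr, r_en, d_addr, dma_en, dma_addr):
    """Reading is symmetric, with r_en in place of w_en."""
    return (r_en and d_addr == addr) or (dma_en and dma_addr == addr)
```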
VIII-C Soundness Proof
The proof of soundness follows from SW-Att’s functional correctness (expressed by Definition 3) and LTL specifications 3, 6, and 9:
Theorem 1. VRASED is sound according to Definition 1.
The formal computer proof for Theorem 1 can be found in . Here we convey the intuition behind the proof by splitting it into two parts. First, it is easy to see that SW-Att’s functional correctness (Definition 3) would imply Theorem 1 if the memory regions KR, CR, and AR never changed during SW-Att computation. The memory model (Definitions 4.1 and 4.2) already guarantees that KR and CR can never change. Therefore, what remains to be proved is that AR does not change during SW-Att computation. This is stated in Lemma 1.
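Lemma 1’s claim can be exercised with a toy Python simulation (hypothetical region bounds; this illustrates the intuition, not the formal LTL proof): when every CPU write lands in MR or XS and DMA stays disabled, a snapshot of AR is preserved.

```python
# Toy simulation of SW-Att execution under the execution model:
# every write targets MR or XS and DMA is disabled, so AR cannot change.
AR = range(0xE000, 0xE004)   # hypothetical attested region
MR = range(0xD100, 0xD104)   # hypothetical MAC region (SW-Att output)
XS = range(0xD000, 0xD004)   # hypothetical exclusive stack

mem = {a: a & 0xFF for a in list(AR) + list(MR) + list(XS)}
snapshot = {a: mem[a] for a in AR}

for i, addr in enumerate(list(MR) + list(XS)):
    # w_en on, d_addr == addr, dma_en off: a CPU write outside AR
    mem[addr] = i

assert all(mem[a] == snapshot[a] for a in AR)  # AR unchanged (Lemma 1)
```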
In turn, Lemma 1 can be proved by:
The reasoning behind Equation 11 is as follows:
LTL 3 prevents the CPU from stopping execution of SW-Att before its last instruction.
LTL 6 guarantees that the only memory regions written by the CPU during SW-Att execution are MR and XS, which do not overlap with AR.
LTL 9 prevents DMA from writing to memory during SW-Att execution.
Therefore, there is no means of modifying AR during SW-Att execution, implying Lemma 1. As discussed above, it is then easy to see that:
VIII-D Security Proof
Recall the definition of security given by the game in Figure 7. The game makes two key assumptions:
A SW-Att call results in a temporally consistent HMAC of AR, computed using a key derived from 𝒦 and Chal. This is already proved by VRASED’s soundness.
The adversary Adv never has knowledge of 𝒦.
By proving that VRASED’s design satisfies assumptions 1 and 2, we show that the capabilities of untrusted software (any DMA or CPU software other than SW-Att) on the prover are equivalent to the capabilities of the adversary Adv in the RA-game. Assumption 1 is implied by soundness; hence, we still need to prove assumption 2 before we can use the game to prove VRASED’s security. The proof of Adv’s inability to learn 𝒦 is facilitated by A6, the callee-saves-register convention stated in Section III. A6 directly implies that no information leaks through registers when SW-Att returns, because, before a function returns, registers must be restored to the same state as before the function call. Thus, untrusted software can only learn 𝒦 (or any function of 𝒦) through memory. However, if untrusted software can never read memory written by SW-Att, it never learns anything about 𝒦 (not even through timing side-channels, since SW-Att is secret-independent). From this observation, it suffices to prove that in VRASED untrusted software cannot read 𝒦 directly and can never read memory written by SW-Att. These conditions are stated in LTL in Lemma 2. We prove that VRASED satisfies Lemma 2 with a computer proof (available in ) of Equation 13. The reasoning behind this proof is similar to that of soundness and is omitted due to space constraints.
It is worth emphasizing that Lemma 2 does not restrict reads and writes to MR, since this memory is used for passing the challenge to, and receiving the result from, SW-Att. Nonetheless, the already-proved soundness and LTL 4 (which makes it impossible to execute fractions of SW-Att) guarantee that MR will not leak anything: at the end of SW-Att’s computation, it always contains an HMAC result, which does not leak information about the key used in the HMAC (the key derived from 𝒦, in our case). After proving Lemma 2, the capabilities of untrusted software on the prover are equivalent to the capabilities of the adversary Adv in the RA-game of Definition 2. Therefore, the last remaining piece in proving VRASED’s security is a reduction from HMAC security, according to the game in Definition 2. VRASED’s security is stated and proved in Theorem 2.
Theorem 2. VRASED is secure according to Definition 2 as long as HMAC is a secure MAC.
A MAC is defined as a tuple of algorithms (Gen, Mac, Vrf).
For the reduction we construct a slightly modified HMAC’, which has the same Mac and Vrf algorithms as standard HMAC, but where Gen outputs the key derived from 𝒦 and Chal instead of sampling a key at random.
Since the key derivation function is itself implemented as a Mac call, it is easy to see that the outputs of Gen are indistinguishable from random. In other words, the security of this slightly modified construction follows from the security of HMAC itself.
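A minimal Python sketch of this construction using the standard hmac module (hash choice and encodings are illustrative, not VRASED’s exact parameters):

```python
import hmac
import hashlib

def gen(master_key: bytes, chal: bytes) -> bytes:
    """Gen of HMAC': instead of sampling a fresh key, derive one from the
    master key and the challenge. The derivation is itself an HMAC call,
    so its output is indistinguishable from random if HMAC is a PRF."""
    return hmac.new(master_key, chal, hashlib.sha256).digest()

def mac(key: bytes, msg: bytes) -> bytes:
    """Mac: unchanged, standard HMAC-SHA256 (msg plays the role of AR)."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def vrf(key: bytes, msg: bytes, tag: bytes) -> bool:
    """Vrf: unchanged; recompute the tag and compare in constant time."""
    return hmac.compare_digest(mac(key, msg), tag)
```

An attestation check then chains the two steps: `vrf(gen(k, chal), memory, tag)` verifies a token produced with the derived key.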
Assuming that there exists an adversary Adv that wins the RA-game of Definition 2 with non-negligible probability, we show that such an adversary can be used to construct HMAC-Adv, which breaks the existential unforgeability of HMAC’ with the same probability in the standard existential unforgeability game. To that purpose, HMAC-Adv behaves as follows:
HMAC-Adv selects the challenge to be the same as in the RA-game and asks Adv to produce the same output used to win the RA-game.
HMAC-Adv outputs the resulting message–tag pair as its response to the challenge in the standard existential unforgeability game, where the tag is the output produced by Adv in step 1.
By construction, this message–tag pair is a valid response to a challenge in the existential unforgeability game considering HMAC’ as defined above. Therefore, HMAC-Adv wins the existential unforgeability game with the same probability that Adv has of winning the RA-game in Definition 2. ∎
Appendix B: Optional Verifier Authentication
Depending on the setting where