Synergia: Hardening High-Assurance Security Systems with Confidential and Trusted Computing

05/12/2022
by   Wojciech Ozga, et al.

High-assurance security systems require strong isolation from the untrusted world to protect the security-sensitive or privacy-sensitive data they process. Existing regulations impose that such systems must execute in a trustworthy operating system (OS) to ensure they are not collocated with untrusted software that might negatively impact their availability or security. However, the existing techniques to attest to the OS integrity fall short due to the cuckoo attack. In this paper, we first present a novel defense mechanism against the cuckoo attack, and we formally prove its correctness. Then, we implement it as part of an integrity monitoring and enforcement framework that attests to the trustworthiness of the OS from 3.7x to 8.5x faster than existing integrity monitoring systems. We demonstrate its practicality by protecting the execution of a real-world eHealth application, performing micro- and macro-benchmarks, and assessing the security risk.



1 Introduction

High-assurance security systems [28, 24, 49] leverage trusted execution environments (TEEs) [19, 52, 4] because TEEs offer strong integrity and confidentiality guarantees in the face of untrusted privileged software, e.g., firmware, hypervisors, operating systems (OSes), and administrators. However, applications executing in a TEE cannot exist without the OS, which manages the computing resources and controls applications’ life cycles. Thus, a trustworthy OS is an essential element of each high-assurance security system because it guarantees its safety and security. Otherwise, an untrustworthy OS might run malware that halts the victim application or steals secrets from the TEE via side-channel attacks [14, 87], as depicted in Figure 1. Germany introduced regulations requiring high-assurance security systems in the eHealth domain [28] to execute inside TEEs on a trustworthy OS [27]. State-of-the-art mechanisms to attest to the OS’s trustworthiness rely on the trusted platform module (TPM) [77], a secure element storing and certifying integrity measurements of firmware and the OS. Unfortunately, the TPM is vulnerable to the cuckoo attack (a.k.a. relay attack) [64, 22], which makes the TPM attestation untrustworthy. We propose a novel defense mechanism against the TPM cuckoo attack, and we implement it as part of a framework responding to the German eHealth systems regulations [27].

The integrity measurement architecture (IMA) [75] and the dynamic root of trust for measurement (DRTM) [71] are state-of-the-art mechanisms providing OS integrity auditing and enforcement. The DRTM securely loads the kernel to the memory, and IMA, which is part of that kernel, ensures that the kernel loads only software whose integrity is certified with a digital signature. Both technologies, when used together, ensure the load-time integrity of the kernel and of software loaded to the memory during the OS runtime. Specifically, the DRTM, a hardware technology implemented in the CPU, stops all cores except one, disables interrupts, measures the to-be-loaded kernel, and executes the kernel with the IMA integrity enforcement mechanism. IMA restricts software loaded to the memory by reading the digital signature corresponding to the given software from the file system and verifying that this software’s integrity measurement (a cryptographic hash over its binary) matches the original integrity measurement signed by a trusted party (Figure 2). Thus, only software certified by a trusted party can be loaded to the memory by the kernel.
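The appraisal flow above can be sketched as follows. This is an illustrative Python model, not the kernel's IMA code; in particular, it substitutes an HMAC under a hypothetical trusted-party key for the real digital signature scheme.

```python
import hashlib
import hmac

# Stand-in for the trusted party's key pair (assumption for this sketch).
TRUSTED_PARTY_KEY = b"trusted-party-signing-key"

def sign_measurement(binary: bytes) -> bytes:
    """Trusted party certifies the binary's integrity measurement (its SHA-256 hash)."""
    digest = hashlib.sha256(binary).digest()
    return hmac.new(TRUSTED_PARTY_KEY, digest, hashlib.sha256).digest()

def appraise(binary: bytes, signature: bytes) -> bool:
    """IMA-style check: recompute the hash and verify it against the signed measurement."""
    digest = hashlib.sha256(binary).digest()
    expected = hmac.new(TRUSTED_PARTY_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

binary = b"\x7fELF...contents of some certified executable"
sig = sign_measurement(binary)
assert appraise(binary, sig)              # certified software may be loaded
assert not appraise(binary + b"!", sig)   # tampered software is rejected
```

The key point is that the kernel never trusts the file itself, only the match between its freshly computed hash and the measurement certified by the trusted party.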

Figure 1: An adversary must run arbitrary software to mount a software side-channel attack that can compromise the confidentiality guarantee of Intel SGX. Colors are consistent across all figures.
Figure 2: Integrity measurement architecture (IMA) is part of the kernel. It approves software to execute and provides reporting capacity to verify what software has been executed since the load of the kernel.

The TPM enables auditing of the kernel and software integrity because DRTM and IMA store the corresponding integrity measurements in the tamper-proof TPM memory. The TPM then certifies the stored measurements to a verifier in accordance with the TPM remote attestation protocol. However, TPM remote attestation is prone to the cuckoo attack, which is a security issue for TPM-based systems [29, 47, 17]. In this attack, an adversary certifies the software integrity of the underlying computer using certified measurements of another computer (see Figure 3). A verifier connects to the compromised computer and communicates with the TPM to check the computer’s software integrity. The adversary prevents the verifier from accessing the local TPM by redirecting communication to a remote TPM. Consequently, the verifier reads the remote TPM, which attests to an arbitrary, trustworthy state, not the state of the compromised computer accessed by the verifier.

The existing defenses against the cuckoo attack have limited application in real-world data centers (DCs). The first approach relies on the time side-channel [25, 70], in which a remote TPM is unmasked by observing increased communication latency. This approach requires the calculation of hardware-specific statistics, is prone to false positives because the high TPM communication latency (including signature generation) makes distance bounding infeasible [64, 47], and requires stable measurement conditions in which extraneous OS services are suspended during the TPM communication [25], which are impractical assumptions for real-world DCs. Flicker [58] adopts another approach. It exploits DRTM to run an application in isolation from the untrusted OS, allowing it to communicate with the TPM directly. Flicker is insufficient for the targeted systems like [28] because i) it does not attest to the computer location, making the DRTM attestation untrustworthy because of simple hardware attacks [84] and cold-boot attacks [33], and ii) while it permits splitting applications into multiple services that run in isolation, it does not support systems with moderate throughput and latency requirements. In more detail, DRTM provides isolation in which the entire CPU executes only a single service at a time, and a single context switch takes 10-100s of milliseconds [58, 57]. This results in an estimated program execution throughput of about 1-10 requests per computer per second when running multiple eHealth services, like [27]. A practical solution requires that hundreds of services are processed in parallel per computer; we require an improvement of at least one order of magnitude in throughput compared to Flicker. Other approaches [20, 21] fall short in the context of the TPM because i) the TPM is a passive device controlled by software that could counterfeit its communication with external devices and ii) they would require human interaction during each computer boot.

The limitations of the existing solutions motivate us to propose a new automatic defense mechanism, practical at data-center scale, that deterministically detects the cuckoo attack and allows for the processing of parallel requests. We demonstrate that despite the differences in their threat models and designs, TEE- and TPM-based techniques complement each other, allowing for mitigating the cuckoo attack. Consequently, high-assurance security systems executing inside a TEE can attest to the OS integrity. Our solution builds trust in a remote computer starting from a piece of code executing inside the TEE, and then systematically extends it to the entire OS. First, we leverage the TEE to establish a trusted piece of code on an untrusted remote computer. We use it to verify that the computer is in the correct DC and to mitigate the cuckoo attack. This allows us to extend trust to the TPM, then to the loaded kernel and its integrity-enforcement mechanism, and, finally, to software being executed during the OS runtime.

We implement this approach in an integrity monitoring and enforcement framework called Synergía, which ensures that high-assurance security applications execute on a correctly initialized and integrity-enforced OS located in the expected DC. The high-assurance security systems conform to the TEE threat model, while they gain OS integrity guarantees under a less rigorous threat model typical for TPM-based systems. We perform a security risk analysis related to the use of these techniques in §6.

Altogether, we make the following contributions:

  1. We designed and implemented an integrity monitoring and enforcement framework called Synergía that:

    • attests to the OS trustworthiness (§13),

    • defends against the cuckoo attack (§5.1, §5.2),

    • provides a reliable approach to estimate the geolocation of physical servers beyond the simple TPM geo-tagging (§4.3),

    • provides local attestation, allowing decentralization of the monitoring system (§4.1, §4.4),

    • is itself remotely attestable (§5.4),

    • verifies the compliance of provisioned resources with a given policy (§4.2, §4.4).

  2. We assessed the security risk of Synergía (§6).

  3. We demonstrated Synergía protecting a real-world application in the eHealth domain (§7.1).

  4. We evaluated its security and performance (§7).

  5. We provided the formal proof of the protocol detecting the cuckoo attack (§7.4).

Figure 3: The cuckoo attack. The verifier connects to the compromised machine (left) and reads the TPM quote to verify its integrity. The quote is, however, retrieved from the remote TPM attached to a legitimate machine (right). The verifier cannot distinguish if the quote comes from the TPM attached to the local or remote machine.

2 Threat Model

We adopt the threat model of organizations, such as governments, banks, and healthcare institutions, legally bound to protect the security-sensitive data they process. In particular, we assume they execute high-assurance security systems in their own DCs or in a hybrid cloud in which security-critical resources are provisioned on-premises. This implies limited and well-controlled access to the DC, allowing us to assume that an adversary, e.g., a rogue operator, cannot perform physical or hardware attacks. To ensure that a high-assurance security system executes inside the DC, we only presume that dedicated computers, called trusted beacons, are located inside that DC and cannot be physically moved outside (§4.3).

Initially, we only trust the CPU (including its hardware features TEE and DRTM) and a small piece of code (the agent). Using the TEE attestation protocol, we ensure that the legitimate agent executes inside the TEE on a genuine CPU on some computer. Then, we use the agent to verify that the computer is located in the correct DC by measuring the proximity to the trusted beacon via a round-trip time distance-bounding protocol. Once we ensure that the agent runs in the expected DC (no physical and hardware attacks), we use it to establish trust with the local TPM with the help of our protocol formally proved to be resistant to the cuckoo attack (§7.4). At this point, we use the TPM to extend the trust to the kernel and its built-in integrity-enforcement mechanism, IMA. Eventually, we use IMA to expand trust to the software loaded during the OS runtime.

High-assurance security systems executing inside the TEE follow the TEE threat model, i.e., the operating system, firmware, other software, and the system administrator are untrusted. The additional guarantees of the operating system integrity follow the threat model of TPM-based systems, i.e., software whose integrity is enforced at load-time behaves in a trustworthy way also during its execution. The runtime integrity of a process can be enforced using existing techniques, such as control-flow integrity enforcement [44], fuzzing [88], formal proofs [89], memory-safe languages [55], or memory corruption mitigation techniques (position-independent executables, stack-smashing protection, relocation read-only techniques). Please note that many of these techniques are applied nowadays by default during the software packaging process, as in the case of Alpine Linux [2].

We assume a financially or governmentally motivated adversary who might gain root access to selected computers inside a DC by exploiting network or OS misconfigurations, exploiting vulnerabilities in the OS, or using social engineering. Her goal is to extract security-sensitive or privacy-sensitive data, e.g., personal data, credentials, or cryptographic material. She can stop or halt individual computers or processes, but she cannot stop all central monitoring service instances responsible for reporting security incidents. We consider an untrusted network where an adversary can view, inject, drop, and alter messages. She can call the API with any parameters and configure the routing, forcing packets to take faster or slower routes. Our network model is consistent with the classic Dolev-Yao adversary model [23]. We rely on the soundness of the cryptographic primitives employed within software and hardware components.

3 Design Decisions

Our objective is to provide a design that: i) enforces that only trusted software is executed on a computer; ii) monitors the remote computer’s OS to verify compliance with integrity requirements; iii) allows high-assurance security systems to gain insights into the OS integrity.

We start by introducing the existing integrity monitoring systems architecture [40, 35, 37] and adjust it to meet the security guarantees required by high-assurance security systems. Figure 4 shows the integrity monitoring architecture where a central server pulls integrity measurements from computers by communicating with dedicated software, the agent. The agent on each computer collects data from the underlying security and auditing subsystems that measure and enforce the OS integrity. Central servers aggregate the data in databases, verify it against whitelists, and notify the security officer about integrity violations. Such an architecture relies on the TPM as a root of trust.

  1. Enforce the load-time integrity with secure boot and OS integrity enforcement.

Secure boot [82] is the state-of-the-art technology to enforce that only trusted software bootstraps a computer. It relies on the chain of trust where each component measures the integrity (calculates a cryptographic hash) of the next component and executes it only if the hash matches a corresponding digital signature. The measured boot [76, 78] complements it by storing hashes in the TPM, thus enabling auditing.

IMA [67, 75] extends the functionality of measured boot and secure boot to the OS level. IMA is part of the kernel and verifies all files’ integrity (e.g., executables, configuration files, dynamic libraries) before they are loaded to the memory. In particular, IMA-appraisal [34] enforces that the kernel loads only files whose hashes are certified with digital signatures stored in the file system (Figure 2). Application execution is halted while a dynamic library is loaded and fails if the library does not pass the integrity check. IMA enables auditing by maintaining the IMA log, a dedicated file storing hashes of all files loaded to the memory since the kernel load. IMA adds each file to the IMA log and stores a hash over it in the TPM before the file is loaded to the memory. Any tampering with the IMA log is detectable because the IMA log’s integrity hash must match the value stored in the TPM.
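The tamper-evidence of the IMA log can be illustrated with a short sketch: replaying the log through the TPM's extend operation must reproduce the aggregate value held in the TPM. The all-zero initial register and the file names are simplifying assumptions of this model.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # A PCR can only be extended: new = hash(old || measurement).
    return hashlib.sha256(pcr + measurement).digest()

def replay_log(log):
    """Replay the IMA log entries to recompute the aggregate PCR value."""
    pcr = b"\x00" * 32  # assumed initial register state
    for file_contents in log:
        pcr = pcr_extend(pcr, hashlib.sha256(file_contents).digest())
    return pcr

log = [b"contents of /bin/agent", b"contents of /sbin/apparmour"]
tpm_pcr = replay_log(log)  # the value the TPM accumulated at load time

# An unmodified log replays to the TPM value; any tampering is detectable.
assert replay_log(log) == tpm_pcr
tampered = [b"contents of /bin/malware"] + log[1:]
assert replay_log(tampered) != tpm_pcr
```

Because an adversary cannot rewrite the TPM register, she cannot hide a loaded file: removing or altering any log entry breaks the hash chain against the TPM-held value.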

Figure 4: The architecture of existing integrity monitoring systems. The security officer uses a monitoring system to verify that high-assurance security systems execute on hosts running trusted software.
  2. Enable remote attestation to prove that secure boot and integrity enforcement are enabled.

The TPM remote attestation protocol [79] delivers a technical assurance of the computer’s integrity. The TPM chip digitally signs a report (quote) certifying the hashes recorded since the computer boot. The hashes reflect the loaded firmware and kernel and prove that integrity enforcement mechanisms are enabled. The verifier can check that the quote has not been manipulated because the TPM signs the quote with a signing key that is embedded in the TPM and linked to the certificate authority (CA) of the TPM manufacturer. However, the monitoring system cannot merely rely on the TPM attestation because it is vulnerable to the cuckoo attack [64]. It is indistinguishable whether an untrusted OS proves its integrity by presenting a quote from a local TPM or impersonates a trustworthy OS by presenting a quote from a remote TPM.

  3. Detect the cuckoo attack by authenticating the TPM with a secret random number.

The monitoring system must ensure that the quote originated from the local TPM, i.e., the TPM that collected integrity measurements from the software components that booted the OS on the underlying computer. We propose to extend the agent with the functionality of checking that it communicates with the local TPM. The general idea consists of sharing a randomly generated secret with the local TPM to identify it uniquely and then using the secret to authenticate the TPM (Figure 5). The main challenge is how to generate a secret and share it with the local TPM without revealing it to the adversary. Otherwise, the adversary can mount the cuckoo attack by sharing it with a remote TPM.

  4. Protect the secret in the TPM by relying on a one-way cryptographic hash function.

The TPM contains dedicated memory registers, called platform configuration registers (PCRs), that have an important property: they cannot be written directly but can only be extended with a new value using a cryptographic one-way hash function. The operation can be expressed as: PCR_extend(n, value): pcr[n] = hash(pcr[n] || value). We propose to extend the secret on top of the existing measurements stored in the PCR to achieve the following properties: i) an adversary cannot extract the secret from the PCR value after the secret is extended to the PCR because the hash function result is not invertible; ii) an adversary cannot reproduce the PCR value in another TPM without knowing the secret or finding a collision in the hash function; iii) after extending the TPM with the secret, the secret is no longer needed to identify the TPM because the PCR value extended with the secret is unique.
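A minimal sketch of these properties, assuming SHA-256 as the PCR hash and simulating the register as a plain byte string:

```python
import hashlib
import secrets

def pcr_extend(pcr: bytes, value: bytes) -> bytes:
    # PCR_extend(n, value): pcr[n] = hash(pcr[n] || value)
    return hashlib.sha256(pcr + value).digest()

# Existing measurements already accumulated in the PCR at boot.
boot_measurements = hashlib.sha256(b"measured firmware and kernel").digest()

secret = secrets.token_bytes(32)  # the randomly generated secret
obfuscated = pcr_extend(boot_measurements, secret)

# (i) the secret cannot be read back from `obfuscated` (one-way hash);
# (ii) without the secret, another TPM cannot reproduce the value:
assert pcr_extend(boot_measurements, secrets.token_bytes(32)) != obfuscated
# (iii) whoever knows the secret can recompute the value and thereby
# recognize the TPM that received it:
assert pcr_extend(boot_measurements, secret) == obfuscated
```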

  5. Leverage DRTM technology to provide a trusted and measured environment to access the local TPM.

We must ensure that the secret is shared with the local TPM securely. We do it in a trusted environment established by hardware technologies available in modern CPUs because these technologies also permit verification of the established execution environment’s integrity. Therefore, they allow detecting (post-factum) any secret extraction attempt, including software side-channel attacks, because such attacks require violating the kernel or initramfs integrity.

We propose generating the secret and extending it to PCRs inside the initramfs (a minimalistic root filesystem that provides a user space to perform initialization tasks, like loading device drivers, mounting network file systems, or decrypting a filesystem [65], before the OS is loaded) because DRTM allows for later verification of the kernel and initramfs integrity. Specifically, DRTM [71], a hardware technology that establishes an isolated execution environment to run code on a potentially untrusted computer, can be used during the boot process (e.g., by tboot [38]) to provide a measured load of the Linux kernel and initramfs.

The integrity measurements performed by DRTM cannot be forged because the TPM offers a dedicated range of PCRs (dynamic PCRs) that can only be reset or extended when the TPM is in a certain locality [41]; only the code executed by DRTM can enter such a locality. Therefore, the presence of measurements in dynamic PCRs confirms that the DRTM was executed, and the comparison of PCRs with the golden values confirms that the secret was shared with the local TPM because the correct TPM driver was used.

  6. Leverage Intel SGX to transfer the golden TPM PCR value to the OS runtime securely.

Once the secret is shared with the TPM, we must expose the unique local TPM’s identifier (the PCR value extended with the secret) to the agent running in the OS. To do so, we leverage Intel SGX [19], a hardware CPU extension that provides confidentiality and integrity guarantees to code executed in so-called enclaves in the presence of an adversary with root access to the computer. It offers a sealing [3] property that permits storing a secret on an untrusted disk where only the same enclave running on the same CPU can read it. Sealing and its reverse operation, unsealing, use a CPU- and enclave-specific key to encrypt and sign data in untrusted storage. We propose to communicate with the TPM from inside an enclave. First, the enclave executes in the initramfs, where it shares a secret with the local TPM and seals the expected value of the TPM PCR to the disk. Then, it executes in the untrusted OS, where it authenticates the TPM using the PCR value unsealed from the disk.

Figure 5: Defense against the cuckoo attack. The agent shares with the TPM a randomly generated secret, which is used later to authenticate the TPM. The PCR is the TPM’s tamper-resistant memory.
  7. Leverage the SGX local and remote attestation to expose integrity measurements to the verifiers.

SGX offers local and remote attestation protocols [42]. While both protocols allow verifying that the expected code runs on a genuine Intel CPU, the SGX local attestation additionally permits two enclaves to learn that they execute on the same CPU. We rely on this property to permit high-assurance security systems to establish trust with the agent running on the same computer. In this way, high-assurance security systems gain access to integrity measurements of the surrounding OS. Similarly, central monitoring services leverage the SGX remote attestation to establish trust with agents.

  8. Formally prove the protocol of establishing trust between the agent and the TPM.

We use formal verification techniques to prove that the Synergía protocol is resilient against the cuckoo attack because functional software testing cannot detect protocol errors since they only appear in the presence of a malicious adversary. We rely on automated security protocol verification approaches [48, 11, 5] because they can provide guarantees of the protocol’s correctness [6, 12, 53]. Specifically, we use the SAPIC [48] tool to implement a formal model of the Synergía protocol, verify its integrity, and prove that it is resilient against the cuckoo attack (§7.4).

4 Synergía architecture

Figure 6: Synergía architecture. The agent provides access to integrity measurements certified by the local TPM after mitigating the cuckoo attack. The high-assurance security system and the monitoring controller query the agent to verify the computer geolocation and operating system integrity.

4.1 High-level Overview

Figure 6 shows a high-level overview of the Synergía architecture, which consists of five entities. A security officer uses a controller to define security policies describing correct (trusted) OS configurations. The controller communicates with agents running on every computer to check whether high-assurance security systems are executed in a trusted environment defined in security policies. Both the controller and the high-assurance security system executing inside SGX systematically query the agent to check if the operating system integrity conforms to the criteria defined inside a security policy. Note that the integrity measurements are not aggregated or verified centrally. Instead, agents aggregate and verify them locally on computers. Agents verify their location using trusted beacons, services running in a known geographical location, e.g., a specific DC.

We distinguish between two types of verifiers communicating with agents: local and remote verifiers. A local verifier is a high-assurance security system that requires strong confidentiality guarantees. An example of such a service is a key management system [16, 49, 31] that executes inside an SGX enclave to protect integrity and confidentiality against privileged adversaries. The local verifier detects violations of the operating system integrity by communicating with the agent running on the same host.

A remote verifier, e.g., the controller, is an application running on a different computer than the agent. It aims to verify that the remote computer is located in a specific DC and that its OS is in the expected state. Typically, a remote verifier checks the integrity of a distributed system’s deployment, e.g., various services distributed over machines, data centers, and availability zones. The controller has broader knowledge about the network load, machine failures, service migrations, and software updates. It helps the security officer to manage the deployment while relying on individual services to react autonomously to integrity violations. The controller might be part of a security information and event management (SIEM) system that correlates system behavior to detect multi-faceted attacks [10].

4.2 Policy

chain: |-
  -----BEGIN CERTIFICATE-----
  # TPM manufacturer certificates
  -----END CERTIFICATE-----
whitelist:
  - pcrs:
      # secure boot / measured boot, PCRs 0-9
      - {id: 0, sha256: ff0c...e3}
      - {id: 3, sha256: e850...3e}
      # trusted boot (DRTM) PCRs, 17-19
      - {id: 18, sha256: f9d0...cb}
      - {id: 19, sha256: a1e7...00}
runtime:
  certificate: |-
      -----BEGIN CERTIFICATE-----
      # IMA uses a certificate to verify signatures
      -----END CERTIFICATE-----
  software:
      - name: agent-0.8.0
        whitelist:
          840f...72: /bin/agent
      - name: AppArmour
        whitelist:
          # hash of the executable
          1e73...f6: /sbin/apparmour
          # hash of the configuration file
          c39e...34: /etc/apparmour
location:
  - host: https://datacenter:10000/beacon
    max_latency: 2 # in milliseconds
    chain: |-
      -----BEGIN CERTIFICATE-----
      # TLS certificate chain of the trusted beacon
      -----END CERTIFICATE-----
Listing 1: Policy example

The security officer defines security policies (Listing 1) to declaratively state what software and dynamic libraries are permitted to run on the computer and what is the proper OS configuration. He creates distinct security policies for each high-assurance security system. For example, a key management system has a different policy than a system processing medical data because they use different dynamic libraries, software, and OS configurations. The monitoring controller reduces the burden of creating policies by allowing the definition of templates that can be combined to build individual policies with overlapping configurations. For example, services running on the same type of OS share the same template that describes software and configuration specific to that OS.

The agent uses the security policy to verify the OS integrity. The OS is trusted if and only if the load-time integrity measurements of the kernel and the load-time integrity measurements of files loaded to the memory during the OS runtime are declared on the whitelist or their corresponding digital signatures are verifiable using the certificate declared in the policy.

In more detail, the agent uses the TPM manufacturer’s CA certificate chain to verify that the TPM chip attached to the computer is legitimate (the chain field in Listing 1). The integrity of firmware and its configuration is represented as a whitelist of static PCRs, while the integrity of the Linux kernel and the initramfs is specified as a whitelist of dynamic PCRs (the whitelist section). Trusted configuration files, executables, and dynamic libraries are defined in the form of hashes and a signing certificate (the runtime section). Software updates are supported via complementary solutions [63, 9] and require specification of the certificate in the policy (the runtime certificate).
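The agent's trust decision described above can be sketched as follows. The function `os_is_trusted`, the simplified policy layout, and the stubbed `verify_signature` are illustrative assumptions for this sketch, not Synergía's actual code.

```python
# Simplified policy: map of measurement hash -> expected file path,
# mirroring the whitelist entries of Listing 1 (values are placeholders).
policy = {
    "whitelist": {"840f...72": "/bin/agent", "1e73...f6": "/sbin/apparmour"},
}

def verify_signature(path: str, measurement: str) -> bool:
    """Stand-in for verifying the IMA signature against the certificate
    declared in the policy's runtime section; always fails in this sketch."""
    return False

def os_is_trusted(ima_log, policy) -> bool:
    """Trusted iff every loaded file is whitelisted or carries a
    signature verifiable with the policy's certificate."""
    for path, measurement in ima_log:
        if policy["whitelist"].get(measurement) == path:
            continue  # load-time measurement is explicitly whitelisted
        if verify_signature(path, measurement):
            continue  # or certified by the trusted signer
        return False  # any unapproved file makes the OS untrusted
    return True

assert os_is_trusted([("/bin/agent", "840f...72")], policy)
assert not os_is_trusted([("/bin/malware", "dead...00")], policy)
```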

Figure 7: Trusted beacons. Agents rely on the trusted beacon to check that they are located in the expected data center. Only machines located inside the same data centers can achieve very low network latency required to prove their proximity.

4.3 Trusted Beacon

A policy might constrain the computers’ proximity to well-known trusted beacons deployed in DCs (the location section in Listing 1). A trusted beacon is a network service that responds to agents’ requests with the current timestamp. The agent can then estimate the physical machine’s proximity by measuring the network communication’s round-trip times. The adversary cannot accelerate network packets enough to achieve the very short round-trip time achievable only between machines in the same local network.

Figure 7 shows a high-level view of the trusted beacon proximity verification protocol. The trusted beacon contains an asymmetric keypair with a certificate issued by a trusted authority, e.g., the DC owner. These credentials, known only to the trusted beacon, prove that the DC owner placed the trusted beacon in the DC and that the trusted beacon executes in a trusted environment. The agent establishes trust with the trusted beacon by reading timestamps signed by the trusted beacon. The agent then estimates the network latency by calculating a trimmed mean from the differences between timestamps obtained from pairs of consecutive requests. A trimmed mean allows for tolerating network latency fluctuations because it excludes outliers.
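The latency estimation can be sketched as a trimmed mean over measured round-trip times; the function name, the trim fraction, and the sample values below are illustrative assumptions.

```python
from statistics import mean

def estimate_latency(rtts_ms, trim_fraction=0.1):
    """Trimmed mean of round-trip times: drop the lowest and highest
    samples so occasional network latency spikes do not skew the estimate."""
    k = int(len(rtts_ms) * trim_fraction)
    ordered = sorted(rtts_ms)
    trimmed = ordered[k:len(ordered) - k] if k else ordered
    return mean(trimmed)

# RTTs (ms) between agent and beacon; one sample hit a transient spike.
rtts = [1.1, 1.2, 1.0, 1.3, 9.7, 1.2, 1.1, 1.0, 1.2, 1.1]
MAX_LATENCY_MS = 2  # from the policy's location section

# The outlier is excluded, so the proximity check still passes.
assert estimate_latency(rtts) <= MAX_LATENCY_MS
```

Without trimming, the single 9.7 ms spike would push a plain mean toward the policy limit even though the machines share a local network.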

Our design does not restrict what security mechanisms must protect the trusted beacon. In particular, the trusted beacon could be a network-accessible hardware security module (HSM) [36] returning signed timestamps. An HSM is a crypto coprocessor offering the highest level of security against software and hardware attacks. It is embedded in a tamper-responsive enclosure to actively detect physical and hardware attacks and protect against side-channel attacks. A cheaper but less secure alternative might run a TEE-based application implementing the above-mentioned protocol over TLS. Related work [22] demonstrated that the network communication round-trip time between two SGX enclaves located in the same network is, on average, so low that it is not achievable from outside the data center.

4.4 Policy Verification Protocol

We designed the agent to act as a facade between the verifier and the TPM to enable multiple verifiers to check the OS integrity concurrently. Figure 8 shows how a verifier uses the policy verification protocol to attest to the OS integrity. The agent regularly reads the list of new software loaded by the OS and the quote, and persists them in a cache that reduces the policy verification latency for future requests. The local or remote verifier performs the SGX local or remote attestation [42] to verify the agent’s identity and integrity and the CPU’s genuineness. The local attestation also proves that the agent runs on the same CPU. Once the verifier deploys the policy, the agent checks that the computer complies with the policy, stores the policy, and returns the corresponding policy_id. The verifier uses the policy_id to re-evaluate the policy during future health checks.

Figure 8: Synergía policy verification protocol. The agent maintains a separate thread (the agent's cache) that constantly reads the platform's fresh integrity measurements. Verifiers query the agent in parallel to check the compliance of the platform with the policy.
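The deploy-then-recheck flow can be sketched as a minimal agent facade (illustrative Python; the class and method names are assumptions, not Synergía's actual API):

```python
import secrets

class Agent:
    """Hypothetical sketch of the agent acting as a facade between
    verifiers and the TPM."""

    def __init__(self, cache):
        self.cache = cache   # fresh quote and IMA events, refreshed by a background thread
        self.policies = {}   # policy_id -> deployed policy

    def deploy_policy(self, policy):
        # Check that the platform currently complies with the policy,
        # then store it under a random identifier for future health checks.
        if not self.cache.complies_with(policy):
            raise ValueError("platform violates the policy")
        policy_id = secrets.token_hex(16)
        self.policies[policy_id] = policy
        return policy_id

    def verify(self, policy_id):
        # Re-evaluate a previously deployed policy against cached measurements.
        return self.cache.complies_with(self.policies[policy_id])
```

Because deployed policies live only in the agent's SGX-protected map, many verifiers can poll `verify` concurrently without forcing a TPM interaction on every request.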

5 Implementation

We implemented Synergía on top of the Linux kernel. We use existing integrity enforcement mechanisms built into the Linux kernel, i.e., IMA-appraisal, kernel module signature verification, and AppArmor. We rely on the secure boot support built into the underlying firmware. We developed the remote attestation components, i.e., the agent in the memory-safe language Rust [55] and the monitoring controller in Python. We implemented the cuckoo attack detection mechanism and the policy verification protocol inside the agent. The monitoring controller allows defining policies, verifying the remote computer system's integrity, and alerting about integrity violations. We rely on the SCONE framework [7] and the SCONE cross-compiler to run Synergía inside an SGX enclave.

5.1 Computer bootstrap

Figure 9 illustrates the bootstrap of a computer, during which the agent collects the information required to detect the cuckoo attack. Consecutive UEFI components execute in a chain of trust; their integrity measurements are extended into static PCRs. UEFI loads the bootloader, which starts tboot. tboot leverages TXT [30, 39], which implements DRTM on Intel CPUs, to establish a trusted environment. tboot measures the integrity of the Linux kernel and initramfs, extends these measurements into dynamic PCRs, and executes them.

The initramfs has two essential properties: its integrity is reflected in dynamic PCRs, and failures during initramfs execution prevent the machine from booting. We rely on these properties to verify that the agent completed its execution. We refer to the agent execution inside initramfs as agent initialization.

During the agent initialization, the agent requests the TPM to create a new AIK, return the TPM's EK certificate, and return the quote certifying the PCRs. The agent performs the activation of credential procedure ([8] p. 109-111) to verify that the AIK was created by the TPM that possesses the private key associated with the EK certificate. The agent then obfuscates static PCRs by extending them with a random number generated inside the SGX enclave. To ensure that the obfuscation succeeded before the boot process continues, the agent reads the PCRs again and compares them to the expected pre-computed hashes. Afterwards, the AIK, the EK certificate, the TPM clock (which includes the computer reboot counter), and the PCRs (original and obfuscated) are persisted in the file system in the SGX-sealed configuration file. After the agent initialization finishes, the initramfs hands control over to the OS. The OS executes the agent together with startup services. We refer to the agent execution after the OS starts as agent runtime.

Figure 9: The platform boot process. To make the cuckoo attack detectable, the agent executes twice. First, during agent initialization, the agent executes in the measured environment, where it shares a secret with the TPM. Second, during agent runtime, the agent establishes trust with the local TPM or detects the cuckoo attack.
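The obfuscation step boils down to the standard TPM extend operation applied with an enclave-generated random value. A minimal hashlib-based sketch (assuming SHA-256 PCR banks; not the agent's actual code):

```python
import hashlib
import secrets

def pcr_extend(pcr: bytes, data: bytes) -> bytes:
    """TPM PCR extend semantics: new_value = SHA-256(old_value || data)."""
    return hashlib.sha256(pcr + data).digest()

# Sketch of the obfuscation performed during agent initialization.
static_pcr = bytes(32)                   # PCR value after the measured boot
nonce = secrets.token_bytes(32)          # random number generated inside the enclave
expected = hashlib.sha256(static_pcr + nonce).digest()  # pre-computed hash

obfuscated = pcr_extend(static_pcr, nonce)

# The agent re-reads the PCR and compares it to the expected value
# before letting the boot process continue.
assert obfuscated == expected
```

Because only the enclave knows the nonce, the obfuscated static PCR value acts as a secret shared between the agent and the physically attached TPM.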

5.2 Establish Trust

During the agent runtime, the agent verifies that no cuckoo attack occurred during agent initialization or agent runtime by ensuring that the following conditions are fulfilled:

Condition 1: the agent is able to unseal the configuration file. Relying on the properties of SGX unsealing, we conclude that the configuration file was created by an agent enclave running the same binary and that both enclaves were executed on the same SGX processor.

Condition 2: the dynamic PCRs read from the TPM match the golden dynamic PCRs. This proves that during agent initialization the agent enclave was executed in the trusted environment (Linux kernel, initramfs, and correct TPM driver) and that it successfully obfuscated the TPM.

Condition 3: the static PCRs read from the TPM match the obfuscated static PCRs read from the configuration file. This proves that the configuration file contains the information gathered earlier from the same TPM.

Condition 4: the reboot counter stored in the configuration matches the reboot counter value read from the fresh quote, which proves that the computer did not reboot since the agent initialization.

Finally, considering conditions 1-4 and what they indicate once fulfilled, we conclude that the quote was issued by the TPM that collected the software measurements during the computer bootstrap. §7.4 formally proves this claim.
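Taken together, conditions 2-4 reduce to three comparisons over the unsealed configuration and a fresh quote; condition 1 is implied by the unseal itself succeeding. A sketch with hypothetical field names (not Synergía's data layout):

```python
def establish_trust(unsealed: dict, quote: dict) -> bool:
    """Return True iff the fresh quote was issued by the TPM that was
    measured and obfuscated during agent initialization (sketch)."""
    # Condition 2: agent initialization ran in the trusted environment.
    if quote["dynamic_pcr"] != unsealed["golden_dynamic_pcr"]:
        return False
    # Condition 3: the quote comes from the same (obfuscated) TPM.
    if quote["static_pcr"] != unsealed["obfuscated_static_pcr"]:
        return False
    # Condition 4: the computer did not reboot since initialization.
    if quote["reboot_counter"] != unsealed["reboot_counter"]:
        return False
    return True
```

Any single mismatch, e.g. a quote relayed from a different (non-obfuscated) TPM, makes the function reject, which is exactly how the cuckoo attack is detected.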

5.3 Cache Updates

To decrease the policy verification latency, the agent starts a separate thread that reads the computer state to validate it against future policy verification requests. The agent periodically retrieves the quote, verifies that the quote certifies the PCR values read during the agent initialization, and repeatedly reads new events from the IMA log.

Hashes of all events are stored in the enclave's memory, together with the number of bytes read and the last value of the IMA PCR. To read new events, the agent first retrieves the quote and opens the IMA log file, skipping the already-processed bytes. It then reads a new event from the file and recalculates the integrity hash by extending it with the event's hash. This process is repeated for each new event and finishes when the integrity hash equals the hash of the IMA PCR retrieved from the quote. If the agent reaches the end of the IMA log and the integrity hash does not match the hash in the IMA PCR, it detects tampering of the IMA log, and the OS is considered compromised.
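The incremental check can be sketched as follows (a simplified model: real IMA aggregates into PCR 10 with template-specific hashing, and the agent additionally tracks the byte offset into the log file):

```python
import hashlib

def verify_ima_log(new_event_hashes, running_hash, quoted_pcr):
    """Extend the cached running hash with each new event's hash and stop
    once it matches the IMA PCR certified by the fresh quote. Returns the
    new running hash, or None if the end of the log is reached without a
    match (i.e., the log was tampered with)."""
    agg = running_hash
    for event_hash in new_event_hashes:
        agg = hashlib.sha256(agg + event_hash).digest()
        if agg == quoted_pcr:
            return agg          # caught up with the certified PCR value
    return None                 # tampering: log does not explain the PCR

# Example: a two-event log whose aggregate matches the quoted PCR.
h1, h2 = hashlib.sha256(b"event1").digest(), hashlib.sha256(b"event2").digest()
pcr = hashlib.sha256(hashlib.sha256(bytes(32) + h1).digest() + h2).digest()
assert verify_ima_log([h1, h2], bytes(32), pcr) == pcr
```

Caching the running hash is what lets the agent process only the log suffix on each cycle instead of re-reading the entire IMA log.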

5.4 Policy Verification

The agent exposes the policy verification functionality via a TLS-protected REST API endpoint to simplify the communication interface between verifiers and agents. It is enough for verifiers to check the agent's identity by verifying its X.509 certificate presented during the TLS handshake. Currently, TLS credentials are delivered to the agent via a KMS [31], but the verifier can also rely on SGX remote attestation [42] to ensure the agent's identity and integrity. As future work, the agent will create a self-signed certificate via SGX-RA-TLS [46], thus excluding the KMS from the trusted computing base.

Once a policy is deployed, the agent stores it in an in-memory key-value map under a randomly generated key, the policy_id, to permit tenants to verify the same policy again. The agent can be queried with the policy_id to verify that the OS integrity has not changed since the last verification. An adversary cannot change a deployed policy because SGX protects the agent's memory from tampering, i.e., SGX guarantees integrity, confidentiality, and freshness of data.

6 Security Risk Assessment

Synergía combines different security techniques to build a framework providing technical assurance that applications execute inside a TEE on a trustworthy OS. However, each technique operates under a different threat model, and a careful analysis of existing attacks is required to claim security guarantees.

6.1 Preventing Physical and Hardware Attacks

First of all, the applied techniques usually do not protect against hardware and physical attacks. The TPM is vulnerable to simple hardware attacks on its communication bus with the CPU that allow an adversary to reset the TPM [43] and replay arbitrary measurements [72], including measurements corresponding to the DRTM launch [83]. Similarly, Intel SGX is vulnerable to clock speed and voltage manipulation [60]. Direct memory access attacks [54] or cold-boot attacks [33] can compromise the entire operating system and any applications that store data in the main memory in plaintext. To prevent these kinds of attacks, we propose to attest to the physical location of the computer. Regulators require that DCs are access-controlled and that computers are placed inside security cages [27]. We argue that these techniques provide enough security to consider physical and hardware attacks inside a trusted data center negligible.

We use the concept of a trusted beacon to verify that the computer is located in the trusted DC. In the real world, the trusted beacon functionality could be provided by a hardware security module [36] or a trusted timestamping authority running on a computer with formally verified software [45, 66]. The only assumption is that trusted beacons must be securely placed inside the DC and then protected from being moved.

6.2 Establishing Trust with the Agent.

To verify that the computer is indeed located in the expected DC, we must rely on the agent executing on a potentially untrusted computer exposed to physical and hardware attacks. To authenticate the agent and verify that it executes on a genuine Intel SGX CPU, we leverage Intel SGX remote attestation [42]. In the past, researchers managed to extract Intel SGX attestation keys [80, 14], which allowed impersonating a genuine SGX CPU. The available mitigations are: i) relying on an on-premise data center attestation mechanism [68], ii) checking for revoked SGX attestation keys, and iii) verifying that the agent runs in the proximity of a trusted device to ensure that it is in the correct data center composed of legitimate SGX machines [22]. In all cases, we must trust the CPU manufacturer, the SGX design, the cryptographic primitives, and the CPU implementation. We consider these assumptions practical because they are common industry practices.

6.3 Establishing Trust with the TPM.

Synergía relies on TXT, SGX, and the TPM to detect the cuckoo attack. Researchers demonstrated that malware placed in the SMM could survive the TXT late launch [85]. To mitigate attacks on the SMM, Intel introduced the SMI transfer monitor, which constrains the system management interrupt handler and mitigates this class of attacks entirely. Other TXT-related and tboot vulnerabilities [86] stemmed from memory vulnerabilities in Intel's firmware and tboot implementations.

Intel SGX is vulnerable to microarchitectural and side-channel attacks that violate SGX confidentiality guarantees [14]. Intel constantly patches these vulnerabilities with microcode updates or hardware changes. Nonetheless, we consider these attacks to be a real threat because of their severity and the multitude of variants that keep appearing.

These attacks do not impact Synergía's guarantees because they only affect SGX confidentiality, not integrity. The only security-sensitive data that might be used to compromise Synergía is the secret shared between the agent and the TPM. However, the secret lives only during the agent initialization, where the presence of malware is detected. In more detail, an adversary could extract the secret shared between the agent and the TPM during the agent initialization to mount the cuckoo attack by sharing the secret with an arbitrary TPM. We formally prove (§7.4) that the Synergía protocol is immune to this kind of attack because the agent detects that the secret was leaked once it executes in agent runtime. The agent detects that malware was present during the agent initialization because both the initramfs and the kernel are measured by DRTM, and their measurements are securely transferred to the agent in agent runtime via SGX sealing. An adversary cannot tamper with the sealed data because only the same enclave running on the same CPU can seal and unseal it. Thus, the presence of malware and the secret leakage are revealed.

6.4 Establishing Trust with the OS.

Because the agent can read the kernel's load-time integrity from the dynamic PCRs in the TPM, it can ensure that the computer executes the kernel that was intended to load: even if an adversary boots a malicious kernel, she cannot tamper with the PCRs, which then reflect the malicious kernel's load.

An adversary who gains access to the computer by stealing credentials, using social engineering, or exploiting a misconfiguration cannot run arbitrary software because she does not have the signing key needed to issue the certificate required by the integrity-enforcement mechanism (IMA) to authorize the file.

However, an adversary might remotely exploit memory vulnerabilities in existing code, such as the Linux kernel or software executing on the system [15]. This is feasible because most system software is implemented in memory-unsafe languages. We assume that the operating system owner relies on the additional security mechanisms enumerated in §2 to enforce runtime process integrity. Typically, the system owner also minimizes the TCB by authorizing only crucial software to run on a computer, digitally signing only trusted software and relying on IMA-appraisal to enforce this during the OS runtime.

An adversary who gains access to the computer can restart it and disable the security mechanisms, or boot the computer into an untrusted state. In §7.2, we estimate the size of the vulnerability window within which the monitoring controller detects the computer integrity violation.

Other attack vectors are network side-channel attacks, such as NetCAT [50], and rowhammer attacks over the network [74]. In these attacks, an adversary does not have to run malware on the computer but instead sends malicious network packets that modern network cards place directly in the main memory. We assign a low risk to these classes of attacks because i) they are hard to perform in a noisy production environment, ii) they are detectable by network traffic monitoring tools and firewalls because they generate high network activity, and iii) mitigation techniques exist and can be applied independently [50, 74].

7 Evaluation

We evaluate Synergía along four axes. In §7.1, we demonstrate Synergía protecting a real-world application from the eHealth domain. Then, in §7.2 and §7.3, we evaluate Synergía's security and performance, respectively. Finally, in §7.4, we present the formal verification of the cuckoo attack detection protocol.

Testbed. Experiments execute on a rack-based cluster of three Dell PowerEdge R330 servers connected via 10 Gb Ethernet. Each server is equipped with an Intel Xeon E3-1270 v5 CPU, 64 GiB of RAM, and an Infineon 9665 TPM 2.0, and runs Ubuntu 16.04 LTS with Linux kernel v4.4.0-135-generic. The CPUs are on microcode patch level 0xc6. The EPC is configured to reserve 128 MiB of RAM. During all experiments, the agent, the monitoring controller, and the trusted beacon run on different machines.

                                          native    SCONE     Synergía
Execution time                            41 sec    52 sec    53 sec
Security level:
- tolerate rogue operator
- tolerate untrusted OS
- side-channel attacks
- data processed in correct geolocation

Table 1: The execution time of the eHealth application. Mean values calculated from 30 independent application executions. The standard deviation in all variants was 1 sec.

7.1 Protecting a Real-world eHealth Application

We leveraged Synergía to protect an eHealth application provided to us by a partner who requires protection of his intellectual property (the application's source code) and the confidentiality of the privacy-sensitive patients' data. This dataset contains concentrations of metabolites in cerebrospinal fluid samples from patients with bacterial meningitis, viral meningitis/encephalitis, and non-inflamed controls. The application, implemented in Python, uses a machine learning (ML) algorithm to understand pathophysiological networks and mechanisms as well as to identify disease-specific pathways that could serve as targets for host-directed treatments to reduce end-organ damage. We used publicly available SCONE docker images [69] to run the application inside a container executed inside an SGX enclave. We configured the OS to use IMA and to run Synergía's agent. On two other machines, we deployed the trusted beacon and the monitoring controller, which constantly queried the agent to verify the OS integrity.

We measured the execution time of the machine learning algorithm in three different variants: in native, the application executes in the untrusted OS; in SCONE, the application executes in the untrusted OS but inside an SGX enclave provided by SCONE; in Synergía, the application executes inside the SGX enclave on an integrity-enforced OS booted with Synergía. Table 1 shows that the machine learning algorithm's execution inside the SGX enclave takes 52 sec, 27% longer than the native execution (41 sec). Synergía further increases the application execution time by 2% compared to the plain SGX enclave execution. This is an acceptable performance overhead given the higher security guarantees offered by Synergía and the compliance with the privacy regulations required by EU law.

7.2 Security

An adversary cannot violate the computer system’s integrity if all integrity enforcement mechanisms are properly configured and enabled (including mechanisms protecting runtime process integrity §2) because the kernel rejects untrusted files from loading to the memory. However, an adversary can run arbitrary software if she gets enough privileges to boot the computer with disabled enforcement mechanisms. We run a set of micro-benchmarks to estimate the vulnerability window size expressed with Equation (1), during which the integrity violation remains undetected.

t_vw = t_q + n_max · t_e + t_v    (1)

where t_vw is the vulnerability window size, t_q is the time to read a TPM quote, n_max is the maximum number of events that can be opened within t_q, t_e is the time to read a single event from the IMA log, and t_v is the time required by the agent to verify the policy and by a verifier to send, receive, and process the verification request.

Signing scheme             TPM quote read latency
RSA 2048 with SHA-256      521 ms
ECDSA P256 with SHA-256    155 ms
HMAC with SHA-256          107 ms
Table 2: The latency of reading the TPM quote generated using different signing schemes. Mean values calculated from 30 experiment executions.

What is the latency of reading a TPM quote? Each time the agent reads the IMA log, it reads a fresh TPM quote to verify the IMA log's integrity. The TPM supports different signing schemes that have a direct impact on the TPM quote read latency. Table 2 shows that the TPM issues a quote using HMAC in 107 ms, which is 4.9× faster than when using RSA cryptography and 1.4× faster than when using ECDSA. Thus, selecting HMAC or ECDSA allows validating the IMA log's integrity faster than when using RSA. We assume the usage of ECDSA when reading a quote, i.e., a quote read latency of 155 ms.

Event type      Read latency of a single IMA log entry
ImaNg event     34 μs
ImaSig event    58 μs
Table 3: The latency of reading a single event from the IMA log. Mean values calculated from 1200 event readings.

What is the latency of reading integrity measurements? We measured the latency of reading new measurements from the IMA log to learn how fast the agent can detect the integrity violation. During the first read of the IMA log, the agent reads all measurements collected by IMA during the OS boot, which is typically the biggest chunk of the IMA log that has to be read by the agent at once. The bootstrap of Ubuntu Linux produces approximately 1800 measurements. The agent needs 130 ms to read all events from the IMA log, recalculate the IMA log integrity hash, and compare the hash to the IMA PCR.

After the initial IMA log read, the agent reads only the IMA measurements that are new since the last read. The time needed to read the integrity measurements depends on the number of new events measured and added to the IMA log. Table 3 shows that the agent requires 34 μs and 58 μs to retrieve a single ImaNg and ImaSig event, respectively. ImaNg, the default IMA event format, provides the file's integrity hash. The ImaSig event format extends ImaNg by also including the file's signature. So, the maximum event read time is 58 μs.

How much time does it take to detect the integrity violation? The vulnerability window for the attack consists of the time the agent takes to read a fresh quote, retrieve new events from the IMA log, and process the policy verification request. We assume that while the agent reads a quote, an adversary can cause IMA to open no more than 3875 files (according to our measurements, opening a file takes at least 40 μs). The agent would require about 225 ms to read these events and about 100 ms to verify them against the policy, see §7.3. Therefore, using Equation (1), we estimate that the policy verification protocol has a vulnerability window of approximately 805 ms.

7.3 Performance

Figure 10: Policy verification throughput. Default policy checks secure boot and trusted boot. Location proximity checks geolocation. Runtime verifies IMA measurements.

How scalable is Synergía? Can it efficiently verify policies on behalf of multiple verifiers? In our design, the agent is the security-critical component that performs local integrity attestation on behalf of high-assurance security systems, centralized monitoring services, and security officers. To assess the agent's ability to verify security policies, we measured the policy verification throughput, i.e., the rate at which the agent responds to verifiers' requests to verify the OS integrity. Our experiments compare four variants of the policy content: i) default, the policy contains only the definition of static and dynamic PCRs; ii) location proximity, the default policy content with additional constraints on the proximity to a trusted beacon; iii) runtime, the default policy content with a whitelist of trusted software; iv) runtime and location proximity, the combination of the runtime and location proximity policies. Figure 10 shows that the agent achieves a maximum throughput of 623 req/sec when verifying a default policy. A similar throughput is achieved for the policy with the location proximity extension. The throughput decreases to 521 req/sec when the agent verifies a security policy containing IMA measurements because of the overhead caused by reading new IMA measurements. An optimal latency of 100 ms is achieved for all policy variants when the throughput is below 250 req/sec.

             Remote attestation latency
Synergía     665 ms (se=2 ms)
Intel CIT    2475 ms (se=5 ms)
IBM ACS      5677 ms (se=22 ms)
Table 4: The mean remote attestation latency comparison between different integrity monitoring frameworks. In all systems, the TPM quote was signed with the RSA signing scheme; se stands for standard error.

How does Synergía's performance compare to the existing monitoring frameworks? We measured the integrity verification latency of existing integrity monitoring frameworks to check whether the presented framework can be considered practical in terms of performance. Specifically, we compared Synergía with CIT [37, 40] and ACS [35], which is a sample code for a TCG attestation application. We measured the total time taken to establish a connection with an agent, retrieve a fresh quote, and compare PCRs with a whitelist. In all experiments, the TPM had been previously commissioned. Table 4 shows that Synergía, with a mean latency of 665 ms, outperforms CIT by 3.7× and ACS by 8.5×. Synergía achieves better performance because, during the initialization, it caches the AIK, static PCRs, and dynamic PCRs that do not change during the entire agent's life cycle. The agent verifies that those values did not change by comparing them to the certified values obtained from the quote. Furthermore, unlike the others, the agent verifies the integrity of the IMA log and PCRs by recomputing a hash over the cached PCRs and IMA log and matching it against the PCR hash in the quote. This allows the agent to skip the slow process of reading PCRs and, consequently, to reduce communication with the TPM to a single recurrent quote read operation.

How much time does it take to deploy a single security policy?

Security policy content                   Deployment latency
Static and dynamic PCRs                   576 ms
+ location proximity                      626 ms
+ IMA measurements                        606 ms
+ location proximity and IMA measurements 677 ms
Table 5: The latency of the policy deployment into the agent depending on the content of the security policy. Mean values calculated from 600 independent policy deployments.

Table 5 shows the latency of the policy deployment protocol using different policy extensions. The latency is measured as the total time from establishing a TLS connection with Synergía, through the policy upload and its verification using a fresh quote, to the response retrieval. The default policy, containing the whitelist of 13 PCRs and one TPM manufacturer's CA certificate, is 4.7 kB in size. Its deployment takes 576 ms. The runtime policy, containing the whitelist of 1790 files and an IMA signing certificate, is 235 kB in size (50× the default policy). Its deployment lasts 606 ms, only a 5% increase over the default policy deployment latency. The deployment latency of a policy with the location proximity extension depends on the communication latency between Synergía and the trusted beacons. The deployment of the policy with one trusted beacon located in the same data center takes 626 ms.

Figure 11: Impact of Synergía on boot time.

How does Synergía impact the boot time of a computer? We used the systemd-analyze tool to measure the load time of initramfs and userspace in different configuration variants of Ubuntu. Figure 11 shows that native Ubuntu Linux starts in 19 sec, of which the load of the userspace takes 13 sec and the kernel with initramfs the remaining 6 sec. tboot executes after the bootloader and before the initramfs, thus not influencing the load time of the OS. Activating IMA, configured to measure all files defined by the TCG (ima_tcb boot option), increases the boot time to 158 sec, 8.3× that of the native boot. The load of userspace takes 84% of this time, which is caused by the measurement of 1790 files. The boot time could be decreased by reducing the number of services loaded by the OS. Synergía increases the boot time by 58% compared to Ubuntu Linux with tboot and by 8% compared to Ubuntu Linux with IMA. The increased boot time is mostly caused by the execution of time-consuming TPM operations in the initramfs performed by Synergía and IMA.

7.4 Formal Analysis

We propose PCR obfuscation as a resilience mechanism against the cuckoo attack. To prove this claim, we formally verified the protocol's integrity without and with obfuscation using the SAPIC tool [48]. SAPIC allows modeling security protocols in a variant of the applied pi calculus [59] that handles parallel processes with a non-monotonic global state, which is needed for a security API such as the TPM. The protocol model describes the actions of agents participating in the protocol, the adversary's specification, and the desired security properties. The adversary and the protocol interact by sending and receiving messages through the network, which changes the system state and creates traces of state transitions. Security properties are modeled as trace properties, checked against traces of the transition system, or as an observational equivalence of two transition systems. While the adversary tries to violate the security properties, she is limited by the constraints of cryptographic primitives. The SAPIC tool uses constraint solving to perform an exhaustive, symbolic search for executions with satisfying traces. Since the correctness of security protocols is an undecidable problem, the tool may not terminate on a given verification problem. If it terminates, it returns either a proof that the protocol fulfills the security property or a counterexample representing an attack that violates the stated property.

Model overview. Listing 2 shows a high-level overview of the protocol model. It has three processes: Golden, TPM, and Machine (line 9). The TPM and Machine processes execute in parallel without limiting the number of instances.

 1 /* || : parallel process, ! : replicated process
 2    in(msg), out(msg): send/receive message
 3    pk(priv): public key of key priv
 4    hash(value): one-way hash function
 5    <v1,v2,..>: concatenate values
 6    sign(value, key): signs value with key
 7    verify(value, key): checks the value's signature using key
 8    senc(value, key): symmetric encryption of value using key */
 9 Protocol = Golden; (!TPM | !Machine)

11 Golden =  // golden hashes are made available to the network
12   new CA_priv;
13   out(CA_pub = pk(CA_priv)); // asymm. key pair
14   new UEFI_golden; out(sign(UEFI_golden, CA_priv));
15   new tboot_golden; out(sign(tboot_golden, CA_priv));
16   new initramfs_golden; out(initramfs_golden);
17   new kernel_golden; out(kernel_golden)

19 TPM =  // creates new TPM, quote can be acquired
20   new TPM; new AIK_priv;
21   sPCR_extend(TPM, sPCR); dPCR_extend(TPM, dPCR);
22   out(TPM, AIK_pub = pk(AIK_priv)); // public key available
23   !create_TPM_quote // replicated

25 create_TPM_quote =
26   in(nonce); // prevents replay attacks
27   <sPCR,dPCR> = read_local_pcr(TPM);
28   quote = sign(<sPCR,dPCR,nonce>, AIK_priv)
29   out(quote, AIK_pub) // signed quote checked with AIK_pub

31 Machine =  // creates new machine
32   new Machine; new seal_key; // machine's CPU seal key
33   // attach a specific TPM to this specific machine
34   in(TPM_local); // connect to a TPM
35   in(UEFI_signed); verify(UEFI_signed, CA_pub); // trusted
36   sPCR_extend(TPM_local, hash(UEFI_signed)); // static PCR
37   in(tboot_signed); verify(tboot_signed, CA_pub); // trusted
38   in(initramfs); in(kernel); // might be malicious
39   extend_with = hash(<tboot_signed, initramfs, kernel>)
40   dPCR_extend(TPM_local, extend_with); // dynamic PCR
41   agent_initialization

43 agent_initialization =  // might be malicious
44   new nonce; out(nonce);
45   // Golden initramfs contains a driver that
46   // guarantees communication with the local TPM
47   if initramfs = initramfs_golden then
48     TPM = TPM_local; // trusted, connect to local TPM
49   else TPM = in(TPM_remote); // malicious, connect to remote TPM
50   <sPCR, dPCR, nonce> = in(quote, AIK_pub(TPM)); // read quote of the connected TPM
51   verify(quote, AIK_pub(TPM)); // check the signature
52   seal_value = senc(<sPCR, dPCR, AIK_pub(TPM)>, seal_key); // machine-specific CPU seal key
53   seal_to_disk(Machine, seal_value);
54   agent_runtime

56 agent_runtime = // might be malicious
57   <sPCR1,dPCR1,AIK_pub(TPM_sealed)> = unseal_from_disk(Machine, seal_value);
58   new nonce; out(nonce);
59   <sPCR2, dPCR2, nonce> = in(quote, AIK_pub(TPM_sealed));
60   verify(quote, AIK_pub(TPM_sealed));
61   if equal(sPCR1, sPCR_golden, sPCR2) AND
62      equal(dPCR1, dPCR_golden, dPCR2)
63      // trigger event (TPM quote represents local machine state)
64   then event MachineTrusted(TPM_sealed, TPM_local);
Listing 2: The protocol model without PCR obfuscation

The Golden process (lines 11-17) constructs a trusted image that complies with the required policy. It makes the image available to the other processes and the adversary via the public network. A trusted authority keeps a private key (CA_priv) secure, while the public part (CA_pub) is made available to the public network (line 13). The handles of the UEFI and tboot (UEFI_golden, tboot_golden) are signed with the private key of the trusted authority to mimic that hardware enforces the boot of the correct software only. The handles of all boot components are available to the public network. The TPM process (lines 19-23) models a TPM chip with static (sPCR) and dynamic (dPCR) PCRs. It repeatedly creates a signed quote using a TPM-specific private key (AIK_priv) (lines 25-29).

The Machine process models the provisioning of a physical machine (lines 31-41) that has a genuine TPM chip and a TXT- and SGX-capable CPU. The boot consists of three steps.

Step 1, the Machine process gets handles of the (trusted) signed tboot, the unsigned initramfs, and the unsigned kernel. It extends dynamic PCRs with measurements of the initramfs and kernel (lines 38-40). Due to the locality protection, only processes in this step are allowed to extend dynamic PCRs. Therefore, the PCRs of the attached TPM reflect measurements of the loaded execution environment. So far, the adversary has no control over the machine (lines 31-41).

In step 2, agent initialization executes inside the already measured and loaded initramfs. It requests access to the TPM through the previously measured TPM driver. If the TPM driver is malicious, the adversary might provide the AIK_pub of a remote TPM instead of that of the locally attached one. Next, agent initialization reads the quote signed with the TPM private key (AIK_priv). The quote is verified using AIK_pub. The AIK public key and the PCRs are then sealed to the disk using the local CPU's SGX sealing key (lines 52-53).

In step 3, the OS and the TPM driver are untrusted. The OS takes over control, the TPM driver is loaded, and agent runtime is executed (lines 56-64). After unsealing the values from the disk (line 57), agent runtime reads the quote through the untrusted TPM driver. The quote is verified with the unsealed attestation public key (line 60). Finally, agent runtime reports that the execution environment complies with the policy (line 64) if and only if both the unsealed PCRs and the quote PCRs match the golden PCR values (lines 61-62).
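The runtime decision in Listing 2 can be paraphrased as the following illustrative Python sketch (not the authors' implementation; quote verification and sealing are abstracted away), which also shows why a forwarded quote defeats it:

```python
def runtime_check_no_obfuscation(sealed: dict, quote: dict, golden: dict) -> bool:
    """Trusted iff the sealed, golden, and quoted PCRs all match
    (the Listing 2 check, without PCR obfuscation)."""
    return (sealed["sPCR"] == golden["sPCR"] == quote["sPCR"]
            and sealed["dPCR"] == golden["dPCR"] == quote["dPCR"])

golden = {"sPCR": "golden_s", "dPCR": "golden_d"}
# Cuckoo attack: both the sealed values (captured during initialization)
# and the runtime quote are forwarded from a remote, trustworthy oracle
# machine, so they carry golden PCRs even though the local machine runs
# malicious software. The check passes and the attack goes undetected.
sealed_from_oracle = {"sPCR": "golden_s", "dPCR": "golden_d"}
forwarded_quote = {"sPCR": "golden_s", "dPCR": "golden_d"}
assert runtime_check_no_obfuscation(sealed_from_oracle, forwarded_quote, golden)
```

Nothing in this check ties the quote, or the sealed credentials, to the locally attached TPM, which is exactly the gap the model checker exposes below.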

Security property. The integrity of the protocol is specified as: "if the TPM quote read by agent runtime matches the unsealed information, then its execution environment MUST correspond to the matched values". Listing 3 states this property.

1// Security property - integrity:
2// the checked state and the local state should match
3  All x y #i. MachineTrusted(x,y)@i ==> x = y
Listing 3: Security property

The SAPIC tool reported a violation of the given property for the protocol specified in Listing 2. The trace describes an adversary who owns two machines, a provisioned machine and an oracle machine, each with its own locally attached TPM. The provisioned machine runs a malicious initramfs and OS (sPCR_golden, dPCR_malicious) but uses genuine hardware (TPM and CPU). The adversary wants this machine to be verified as trustworthy. To do so, she forwards the requests to the oracle machine, which runs a trustworthy environment with untampered software and genuine hardware (sPCR_golden, dPCR_golden). Note that forwarding read requests to the oracle does not require changing its environment; however, the adversary cannot extend the PCRs of the TPM attached to the oracle without changing its initramfs and OS, which would consequently change the corresponding dPCR. During agent initialization, the malicious initramfs on the provisioned machine forwards the attestation request to the oracle, which responds with a signed quote carrying the golden PCRs that can be verified using the oracle TPM's AIK_pub. The PCR values and the AIK_pub key are sealed using the seal_key of the provisioned machine. During agent runtime, the provisioned machine unseals the values from the disk and contacts the oracle through the malicious OS to get a quote. The quote contains PCRs that match both the golden and the sealed PCRs. So, the event (line 64) is triggered with unequal TPM_sealed (the oracle's TPM) and TPM_local (the provisioned machine's TPM), which indicates the cuckoo attack. The vulnerability exists because TPM attestation does not guarantee that the received credentials (AIK_pub) belong to the attested machine.

30  Machine = // creates new machine
31  new RND; ...  
42  agent_initialization =
43  new nonce1;  out(nonce1);
44  if initramfs = initramfs_golden then
45        TPM = TPM_local; // connect to local TPM
46  else TPM = TPM_remote; // connect to remote TPM
47  <sPCR, dPCR, nonce1> = in(quote, AIK_pub(TPM));
48  verify(quote, AIK_pub(TPM)); // check the signature
49  sPCR_extend(TPM, RND); // share the secret with TPM
50  new nonce2;  out(nonce2);//read again to ensure sPCR extended
51  <sPCR_obf, dPCR, nonce2> = in(quote, AIK_pub(TPM));
52  verify(quote, AIK_pub(TPM)); // check the signature  
53  if sPCR_obf = hash(<sPCR, RND>) then 
54   seal_value = senc(<sPCR, sPCR_obf, dPCR, AIK_pub(TPM)>, seal_key);
55   seal_to_disk(Machine,seal_value);  
56   agent_runtime 
58  agent_runtime =
59  <sPCR1,sPCR1_obf,dPCR1,AIK_pub(TPM_sealed)> = unseal_from_disk(Machine,seal_value);
60  new nonce;  out(nonce);
61  <sPCR2_obf,dPCR2,nonce> = in(quote,AIK_pub(TPM_sealed));
62  verify(quote,AIK_pub(TPM_sealed));
63  if equal(dPCR1,dPCR_golden,dPCR2) AND 
64     equal(sPCR1_obf,sPCR2_obf) AND
65     equal(sPCR1,sPCR_golden)
66      // trigger event (TPM quote represents local machine state)
67  then  event MachineTrusted(TPM_sealed, TPM_local);  
Listing 4: The protocol model with PCR obfuscation

Model extension. Listing 4 shows the extended model of the protocol that implements the PCR obfuscation. It required the following changes: i) generation of a random number (RND) (line 31); ii) PCR obfuscation: the static PCRs are extended with the RND (lines 49-53); iii) agent initialization seals both the original and the obfuscated PCRs (lines 54-55); iv) agent runtime declares a machine trusted if and only if: a) the golden, sealed, and quoted dynamic PCRs match, b) the sealed and quoted obfuscated static PCRs match, and c) the golden and sealed original static PCRs match (lines 63-67).
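The effect of the obfuscation can be sketched as follows (an illustrative Python sketch, not the authors' implementation; SHA-256 stands in for the TPM's extend hash, and sealing is abstracted as a dictionary). The secret RND shared with the local TPM makes the obfuscated sPCR unforgeable by a remote oracle that never learned RND:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of all parts (stand-in for TPM extend)."""
    return hashlib.sha256(b"".join(parts)).digest()

# --- agent initialization (cf. Listing 4, lines 42-56) ---
rnd = secrets.token_bytes(32)   # fresh secret shared only with the local TPM
spcr = h(b"golden static PCR")  # measured static PCR (stand-in value)
spcr_obf = h(spcr, rnd)         # local TPM's sPCR after sPCR_extend(RND)
sealed = {"sPCR": spcr, "sPCR_obf": spcr_obf}  # sealed with the SGX seal key

# --- agent runtime (cf. Listing 4, lines 58-67) ---
def runtime_check(sealed: dict, quoted_spcr_obf: bytes, golden_spcr: bytes) -> bool:
    """Trusted iff the quoted obfuscated sPCR matches the sealed one AND the
    sealed plain sPCR matches the golden value (dPCR checks elided)."""
    return quoted_spcr_obf == sealed["sPCR_obf"] and sealed["sPCR"] == golden_spcr

# The local TPM, extended with RND, quotes the obfuscated value: accepted.
assert runtime_check(sealed, h(spcr, rnd), h(b"golden static PCR"))
# A remote oracle TPM never learned RND, so it can only quote the plain
# golden sPCR: rejected, i.e., the cuckoo attack is detected.
assert not runtime_check(sealed, spcr, h(b"golden static PCR"))
```

Forwarding quotes from an oracle no longer helps the adversary: a quote matching the sealed obfuscated sPCR can only come from the TPM that was extended with RND during initialization.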

We checked the model extended with the obfuscation (Listing 4) against the integrity property in Listing 3. The SAPIC tool terminated and reported that all traces of the protocol preserve the given property. The modification to the model, in which the agent initialization enclave shares a secret with the TPM potentially belonging to the attested machine, overcomes the previously described vulnerability.

8 Related work

Like existing monitoring systems [37, 35], Synergía relies on the TPM attestation protocol to verify a computer's integrity. Unlike them, Synergía is resilient to the cuckoo attack. Existing defenses against this attack have limited applicability to high-assurance security systems. Fink et al. proposed a timing side-channel approach [25] to detect the cuckoo attack. As confirmed by the authors, it is prone to false positives and requires stable measurement conditions, an impractical assumption in real-world scenarios. Flicker [58] accesses the local TPM from an isolated execution environment established by DRTM. However, DRTM does not attest to the computer's location, which makes its attestation untrustworthy due to simple hardware attacks [84]. Moreover, DRTM permits executing only a single process on the entire CPU at a time. This impacts the application's throughput because a single context switch to the DRTM-established environment takes tens to hundreds of milliseconds [57]. Synergía instead first verifies that the computer is in the trusted data center (thus, no hardware attacks are possible) and uses DRTM only once, when provisioning the TPM. This approach provides the better performance required by modern applications.

Other solutions to the root of trust identification problem require the verifier to solve a biometric challenge [21], observe emitted LED signals [73], verify the device state displayed on the screen [20, 51], use trusted devices to scan bar codes sealed on the device [56], or press a special-purpose button to bootstrap trust during the computer boot [64]. These approaches have limitations because i) the TPM is a passive device controlled by software which, due to the lack of trusted I/O paths to external devices, can redirect, replay, or fool the communication, and ii) they require human interaction and thus do not scale to the data-center level.

Recently, Dhar et al. proposed ProximiTEE [22] to deal with the SGX (not TPM) cuckoo attack by attaching a trusted device to the computer and detecting the cuckoo attack during the SGX attestation. This solution can verify that the SGX enclave executes on the computer with the attached trusted device thanks to the very low communication latency between the enclave and the device. Although, as noted by Parno [64], this approach cannot be used to detect the TPM cuckoo attack because of the TPM's slow speed, Synergía could use ProximiTEE as a trusted beacon implementation to prove that the computer is located in the expected data center.

Other work focuses on tolerating malware in the OS while preventing side-channel attacks on TEEs. There are three approaches to mitigating these attacks: i) static vulnerability detection [32, 62], ii) attack prevention [1, 13, 26], and iii) attack detection [61, 18]. The first consists of analyzing and modifying source code to detect gadgets [32, 62]. However, finding all gadgets is difficult or impossible because the search narrows to gadgets specific to known attacks. The second approach prevents attacks by hiding access patterns using oblivious execution and access-pattern obfuscation, resource isolation [26], or hardware changes [81]. These techniques address only specific attacks [26], require hardware changes [81], or incur a large performance overhead [1, 13]. The last approach consists of runtime attack detection [61, 18] that isolates and monitors the resources of instrumented programs. However, it targets only selected attacks and tolerates some statistical misses. Synergía aims at preventing such attacks without requiring source code or hardware modifications, with low performance overhead but a larger trusted computing base.

9 Conclusion

We responded to regulatory demands that require stronger isolation of high-assurance security systems by running them inside trusted execution environments, on top of a trustworthy operating system, and in the expected geolocation. We demonstrated that the combination of Intel SGX with TPM-based solutions meets such requirements but requires protection against the cuckoo attack. We proposed a novel deterministic defense mechanism against the cuckoo attack and formally proved it. We implemented a framework that monitors and enforces the integrity as well as the geolocation of computers running high-assurance security systems and mitigates the cuckoo attack. Our evaluation and security risk assessment show that Synergía is practical.

References

  • [1] A. Ahmad, B. Joe, Y. Xiao, Y. Zhang, I. Shin, and B. Lee (2019) Obfuscuro: a commodity obfuscation engine on intel sgx. In Network and Distributed System Security Symposium, Cited by: §8.
  • [2] Alpine Linux Development Team (accessed on July, 2021) Alpine Linux - Small. Simple. Secure.. Note: https://alpinelinux.org/about/ Cited by: §2.
  • [3] I. Anati, S. Gueron, S. Johnson, and V. Scarlata (2013) Innovative technology for cpu based attestation and sealing. In Proceedings of the 2nd international workshop on hardware and architectural support for security and privacy, Vol. 13, pp. 7. Cited by: §3.
  • [4] ARM Limited (2009) Building a secure system using trustzone technology. White Paper Note: http://infocenter.arm.com/help/topic/com.arm.doc.prd29-genc-009492c/PRD29-GENC-009492C_trustzone_security_whitepaper.pdf Cited by: §1.
  • [5] A. Armando, D. Basin, Y. Boichut, Y. Chevalier, L. Compagna, J. Cuellar, P. H. Drielsma, P. C. Heám, O. Kouchnarenko, J. Mantovani, S. Mödersheim, D. von Oheimb, M. Rusinowitch, J. Santiago, M. Turuani, L. Viganò, and L. Vigneron (2005) The avispa tool for the automated validation of internet security protocols and applications. Berlin, Heidelberg, pp. 281–285. External Links: ISBN 978-3-540-31686-2 Cited by: §3.
  • [6] A. Armando, R. Carbone, L. Compagna, J. Cuellar, and L. Tobarra (2008) Formal Analysis of SAML 2.0 Web Browser Single Sign-on: Breaking the SAML-based Single Sign-on for Google Apps. New York, NY, USA, pp. 1–10. Cited by: §3.
  • [7] S. Arnautov, B. Trach, F. Gregor, T. Knauth, A. Martin, C. Priebe, J. Lind, D. Muthukumaran, D. O’Keeffe, M. Stillwell, D. Goltzsche, D. Eyers, R. Kapitza, P. Pietzuch, and C. Fetzer (2016) SCONE: Secure linux containers with Intel SGX. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 689–703. Cited by: §5.
  • [8] W. Arthur and D. Challener (2015) A practical guide to TPM 2.0: Using the new trusted platform module in the new age of security. Springer Nature. Cited by: §5.1.
  • [9] S. Berger, M. Kayaalp, D. Pendarakis, and M. Zohar (2016) File Signatures Needed!. Linux Plumbers Conference. Cited by: §4.2.
  • [10] S. Bhatt, P. K. Manadhata, and L. Zomlot (2014) The operational role of security information and event management systems. IEEE Security and Privacy (S&P) 12 (5), pp. 35–41. Cited by: §4.1.
  • [11] B. Blanchet (2001) An efficient cryptographic protocol verifier based on prolog rules. pp. 82–96. Cited by: §3.
  • [12] M. Bortolozzo, M. Centenaro, R. Focardi, and G. Steel (2010) Attacking and fixing pkcs#11 security tokens. Cited by: §3.
  • [13] F. Brasser, S. Capkun, A. Dmitrienko, T. Frassetto, K. Kostiainen, and A. Sadeghi (2019) DR. sgx: automated and adjustable side-channel protection for sgx using data location randomization. pp. 788–800. Cited by: §8.
  • [14] J. V. Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx (2018-08) Foreshadow: extracting the keys to the intel SGX kingdom with transient out-of-order execution. Baltimore, MD, pp. 991–1008. External Links: ISBN 978-1-939133-04-5, Link Cited by: §1, §6.2, §6.3.
  • [15] M. Carvalho, J. DeMott, R. Ford, and D. A. Wheeler (2014) Heartbleed 101. IEEE security & privacy 12 (4), pp. 63–67. Cited by: §6.4.
  • [16] S. Chakrabarti, B. Baker, and M. Vij (2017) Intel SGX Enabled Key Manager Service with OpenStack Barbican. arXiv e-prints. External Links: 1712.07694 Cited by: §4.1.
  • [17] D. Chakraborty, L. Hanzlik, and S. Bugiel (2019) SimTPM: user-centric TPM for mobile devices. pp. 533–550. Cited by: §1.
  • [18] G. Chen, W. Wang, T. Chen, S. Chen, Y. Zhang, X. Wang, T. Lai, and D. Lin (2018) Racing in hyperspace: closing hyper-threading side channels on sgx with contrived data races. pp. 178–194. Cited by: §8.
  • [19] V. Costan and S. Devadas (2016) Intel sgx explained.. IACR Cryptol. ePrint Arch. 2016 (86), pp. 1–118. Cited by: §1, §3.
  • [20] J. Danisevskis, M. Peter, J. Nordholz, M. Petschick, and J. Vetter (2015) Graphical user interface for virtualized mobile handsets. IEEE S&P MoST. Cited by: §1, §8.
  • [21] I. De Oliveira Nunes, X. Ding, and G. Tsudik (2021) On the root of trust identification problem. In Proceedings of the 20th International Conference on Information Processing in Sensor Networks (Co-Located with CPS-IoT Week 2021), pp. 315–327. Cited by: §1, §8.
  • [22] A. Dhar, I. Puddu, K. Kostiainen, and S. Capkun (2020) ProximiTEE: hardened sgx attestation by proximity verification. In Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy, CODASPY ’20. Cited by: §1, §4.3, §6.2, §8.
  • [23] D. Dolev and A. Yao (1983) On the security of public key protocols. IEEE Transactions on information theory 29 (2), pp. 198–208. Cited by: §2.
  • [24] Eperi (accessed on July, 2021) Top tier bank and confidential computing. Note: https://www.intel.com/content/www/us/en/customer-spotlight/stories/eperi-sgx-customer-story.html Cited by: §1.
  • [25] R. A. Fink, A. T. Sherman, A. O. Mitchell, and D. C. Challener (2011) Catching the cuckoo: Verifying tpm proximity using a quote timing side-channel. pp. 294–301. Cited by: §1, §8.
  • [26] Q. Ge, Y. Yarom, T. Chothia, and G. Heiser (2019) Time protection: the missing os abstraction. New York, NY, USA. External Links: ISBN 9781450362818, Link, Document Cited by: §8.
  • [27] Gematik GmbH Systemspezifisches Konzept ePA. Note: https://www.vesta-gematik.de/standard/formhandler/324/gemSysL_ePA_V1_3_0.pdf Cited by: §1, §1, §6.1.
  • [28] Gematik GmbH (accessed on July, 2021) Systemspezifisches Konzept E-Rezept. Note: https://www.vesta-gematik.de/standard/formhandler/324/gemSysL_eRp_V1_0_0_CC6.pdf Cited by: §1, §1.
  • [29] V. Gligor and M. Woo (2019) Establishing software root of trust unconditionally. In Network and Distributed Systems Security (NDSS 2019), Cited by: §1.
  • [30] J. Greene (2010) Intel trusted execution technology: hardware-based technology for enhancing server platform security. Intel Corporation, Copyright 2012 (8). Cited by: §5.1.
  • [31] F. Gregor, W. Ozga, S. Vaucher, R. Pires, D. Le Quoc, S. Arnautov, A. Martin, V. Schiavoni, P. Felber, and C. Fetzer (2020) Trust management as a service: enabling trusted execution in the face of byzantine stakeholders. pp. 502–514. Cited by: §4.1, §5.4.
  • [32] M. Guarnieri, B. Köpf, J. F. Morales, J. Reineke, and A. Sánchez (2020) SPECTECTOR: principled detection of speculative information flows. pp. 1–19. Cited by: §8.
  • [33] J. A. Halderman, S. D. Schoen, N. Heninger, W. Clarkson, W. Paul, J. A. Calandrino, A. J. Feldman, J. Appelbaum, and E. W. Felten (2009) Lest we remember: cold-boot attacks on encryption keys. Communications of the ACM 52 (5), pp. 91–98. Cited by: §1, §6.1.
  • [34] S. Hallyn, D. Kasatkin, D. Safford, R. Sailer, and M. Zohar (accessed on July, 2021) Linux Integrity Measurement Architecture (IMA) - IMA appraisal. Note: https://sourceforge.net/p/linux-ima/wiki/Home/#ima-appraisal Cited by: §3.
  • [35] IBM Corporation (accessed on July, 2021) IBM TPM Attestation Client Server. Note: https://sourceforge.net/projects/ibmtpm20acs/ Cited by: §3, §7.3, §8.
  • [36] IBM (2019) IBM CEX7S / 4769 PCIe Cryptographic Coprocessor (HSM). IBM 4769 Data Sheet. Cited by: §4.3, §6.1.
  • [37] Intel and National Security Agency (accessed on July, 2021) Intel Open Cloud Intergrity Technology. Note: https://01.org/opencit Cited by: §3, §7.3, §8.
  • [38] Intel Corporation (accessed on July, 2021) Trusted Boot (tboot). Note: https://sourceforge.net/projects/tboot/ Cited by: §3.
  • [39] Intel Corportation (2008) Intel trusted execution techonology–software development guide, revision 017.0. Document. Cited by: §5.1.
  • [40] Intel (accessed on July, 2021) Intel Security Libraries for Data Center. Note: https://01.org/intel-secl Cited by: §3, §7.3.
  • [41] R. Jayaram Masti, C. Marforio, and S. Capkun (2013) An architecture for concurrent execution of secure environments in clouds. pp. 11–22. Cited by: §3.
  • [42] S. Johnson, V. Scarlata, C. Rozas, E. Brickell, and F. Mckeen (2016) Intel® software guard extensions: epid provisioning and attestation services. White Paper 1 (1-10), pp. 119. Cited by: §3, §4.4, §5.4, §6.2.
  • [43] B. Kauer (2007) OSLO: Improving the security of Trusted Computing. USENIX. Cited by: §6.1.
  • [44] M. R. Khandaker, W. Liu, A. Naser, Z. Wang, and J. Yang (2019-08) Origin-sensitive control flow integrity. In 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, pp. 195–211. External Links: ISBN 978-1-939133-06-9, Link Cited by: §2.
  • [45] G. Klein, M. Norrish, T. Sewell, H. Tuch, S. Winwood, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, and R. Kolanski (2009) seL4: formal verification of an OS kernel. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles - SOSP ’09, Big Sky, Montana, USA. Cited by: §6.1.
  • [46] T. Knauth, M. Steiner, S. Chakrabarti, L. Lei, C. Xing, and M. Vij (2018) Integrating remote attestation with transport layer security. arXiv preprint arXiv:1801.05863. Cited by: §5.4.
  • [47] K. Kostiainen, A. Dhar, and S. Capkun (2020) Dedicated security chips in the age of secure enclaves. IEEE Security and Privacy 18 (5), pp. 38–46. External Links: Document Cited by: §1, §1.
  • [48] S. Kremer and R. Kuennemann (accessed on July, 2021) SAPIC: a stateful applied pi calculus. Note: http://sapic.gforge.inria.fr/ Cited by: §3, §7.4.
  • [49] A. Kumar, A. Kashyap, V. Phegade, and J. Schrater (accessed on July, 2021) Self-Defending Key Management Service with Intel SGX. Fortranix Whitepaper. Cited by: §1, §4.1.
  • [50] M. Kurth, B. Gras, D. Andriesse, C. Giuffrida, H. Bos, and K. Razavi (2020) NetCAT: Practical Cache Attacks from the Network. pp. 20–38. Cited by: §6.4.
  • [51] M. Lange and S. Liebergeld (2013) Crossover: secure and usable user interface for mobile devices with multiple isolated os personalities. pp. 249–257. Cited by: §8.
  • [52] D. Lee, D. Kohlbrenner, S. Shinde, K. Asanović, and D. Song (2020) Keystone: an open framework for architecting trusted execution environments. In Proceedings of the Fifteenth European Conference on Computer Systems (EuroSys2́0), pp. 1–16. Cited by: §1.
  • [53] G. Lowe (1996) Breaking and fixing the needham-schroeder public-key protocol using fdr. Cited by: §3.
  • [54] A. T. Markettos, C. Rothwell, B. F. Gutstein, A. Pearce, P. G. Neumann, S. W. Moore, and R. N. M. Watson (2019) Thunderclap: exploring vulnerabilities in operating system IOMMU protection via DMA from untrustworthy peripherals. Cited by: §6.1.
  • [55] N. D. Matsakis and F. S. Klock (2014) The rust language. ACM SIGAda Ada Letters 34 (3), pp. 103–104. Cited by: §2, §5.
  • [56] J.M. McCune, A. Perrig, and M.K. Reiter (2005) Seeing-is-believing: using camera phones for human-verifiable authentication. Cited by: §8.
  • [57] J. M. McCune, Y. Li, N. Qu, Z. Zhou, A. Datta, V. Gligor, and A. Perrig (2010) TrustVisor: efficient tcb reduction and attestation. pp. 143–158. Cited by: §1, §8.
  • [58] J. M. McCune, B. J. Parno, A. Perrig, M. K. Reiter, and H. Isozaki (2008) Flicker: an execution infrastructure for tcb minimization. pp. 315–328. Cited by: §1, §8.
  • [59] R. Milner (1997) The pi calculus and its applications. In Formal Methods for Open Object-based Distributed Systems, pp. 3–4. Cited by: §7.4.
  • [60] K. Murdock, D. Oswald, F. D. Garcia, J. Van Bulck, D. Gruss, and F. Piessens (2020) Plundervolt: software-based fault injection attacks against intel sgx. Cited by: §6.1.
  • [61] O. Oleksenko, B. Trach, R. Krahn, M. Silberstein, and C. Fetzer (2018) Varys: protecting SGX enclaves from practical side-channel attacks. pp. 227–240. Cited by: §8.
  • [62] O. Oleksenko, B. Trach, M. Silberstein, and C. Fetzer (2020) SpecFuzz: bringing spectre-type vulnerabilities to the surface. pp. 1481–1498. Cited by: §8.
  • [63] W. Ozga, D. L. Quoc, and C. Fetzer (2020) A practical approach for updating an integrity-enforced operating system. pp. 311–325. Cited by: §4.2.
  • [64] B. Parno (2008) Bootstrapping trust in a "trusted" platform. Cited by: §1, §1, §3, §8, §8.
  • [65] M. Petullo (2005) Encrypt your root filesystem. Linux Journal 2005 (129), pp. 4. Cited by: footnote 1.
  • [66] J. Protzenko, B. Parno, A. Fromherz, C. Hawblitzel, M. Polubelova, K. Bhargavan, B. Beurdouche, J. Choi, A. Delignat-Lavaud, C. Fournet, et al. (2020) EverCrypt: a fast, verified, cross-platform cryptographic provider. pp. 983–1002. Cited by: §6.1.
  • [67] R. Sailer, X. Zhang, T. Jaeger, and L. Van Doorn (2004) Design and implementation of a tcg-based integrity measurement architecture.. pp. 223–238. Cited by: §3.
  • [68] V. Scarlata, S. Johnson, J. Beaney, and P. Zmijewski (2018) Supporting third party attestation for intel sgx with intel data center attestation primitives. White paper. Cited by: §6.2.
  • [69] Scontain UG (accessed on July, 2021) SCONE Docker curated images. Note: https://hub.docker.com/u/sconecuratedimages Cited by: §7.1.
  • [70] A. Seshadri, M. Luk, E. Shi, A. Perrig, L. van Doorn, and P. Khosla (2005) Pioneer: verifying code integrity and enforcing untampered code execution on legacy systems. Cited by: §1.
  • [71] J. Shin, B. Jacobs, M. Scott-Nash, J. Hammersley, M. Wiseman, R. Spiger, D. Wilkins, R. Findeisen, D. Challener, D. Desselle, S. Goodman, G. Simpson, K. Brannock, A. Nelson, M. Piwonka, C. Dailey, and R. Springfield (2013) TCG D-RTM Architecture, Document Version 1.0.0. Trusted Computing Group. Cited by: §1, §3.
  • [72] E. R. Sparks (2007) A Security Assessment of Trusted Platform Modules. Computer Science Technical Report TR2007-597. Cited by: §6.1.
  • [73] H. Sun, K. Sun, Y. Wang, J. Jing, and H. Wang (2015) TrustICE: hardware-assisted isolated computing environments on mobile devices. Cited by: §8.
  • [74] A. Tatar, R. K. Konoth, E. Athanasopoulos, C. Giuffrida, H. Bos, and K. Razavi (2018) Throwhammer: rowhammer attacks over the network and defenses. pp. 213–226. Cited by: §6.4.
  • [75] Trusted Computing Group (2006) TCG Infrastructure Working Group Architecture Part II - Integrity Management, Specification Version 1.0, Revision 1.0. Cited by: §1, §3.
  • [76] Trusted Computing Group (2012) TCG PC Client Specific Implementation Specification for Conventional BIOS, Specification Version 1.21, Revision 1.00. Cited by: §3.
  • [77] Trusted Computing Group (2016) TPM Library Specification, Family "2.0", Revision 01.38. Cited by: §1.
  • [78] Trusted Computing Group (2019) TCG PC Client Platform Firmware Profile Specification, Family 2.0, Level 00, Revision 1.04. Cited by: §3.
  • [79] Trusted Computing Group (2019) TCG Trusted Attestation Protocol (TAP) Information Model for TPM Families 1.2 and 2.0 and DICE Family 1.0. Version 1.0, Revision 0.36. Cited by: §3.
  • [80] S. van Schaik, A. Kwong, D. Genkin, and Y. Yarom (2020) SGAxe: how SGX fails in practice. Note: https://sgaxeattack.com/ Cited by: §6.2.
  • [81] O. Weisse, I. Neal, K. Loughlin, T. F. Wenisch, and B. Kasikci (2019) NDA: preventing speculative execution attacks at their source. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO '52, New York, NY, USA. External Links: ISBN 9781450369381, Link, Document Cited by: §8.
  • [82] R. Wilkins and B. Richardson (2013) UEFI secure boot in modern computer security solutions. Cited by: §3.
  • [83] J. Winter and K. Dietrich (2013) A hijacker’s guide to communication interfaces of the trusted platform module. Computers & Mathematics with Applications. Cited by: §6.1.
  • [84] J. Winter and K. Dietrich (2013) A hijacker’s guide to communication interfaces of the trusted platform module. Computers & Mathematics with Applications. Cited by: §1, §8.
  • [85] R. Wojtczuk and J. Rutkowska (2009) Attacking Intel Trusted Execution Technology. Cited by: §6.3.
  • [86] R. Wojtczuk and J. Rutkowska (accessed on July, 2021) Attacking Intel TXT via SINIT code execution hijacking. Note: https://invisiblethingslab.com/resources/2011/Attacking_Intel_TXT_via_SINIT_hijacking.pdf Cited by: §6.3.
  • [87] Y. Xu, W. Cui, and M. Peinado (2015) Controlled-channel attacks: deterministic side channels for untrusted operating systems. In Proceedings of the 2015 IEEE Symposium on Security and Privacy, SP ’15, USA, pp. 640–656. Cited by: §1.
  • [88] A. Zeller, R. Gopinath, M. Böhme, G. Fraser, and C. Holler (2019) The fuzzing book. CISPA+ Saarland University. Cited by: §2.
  • [89] J. Zinzindohoué, K. Bhargavan, J. Protzenko, and B. Beurdouche (2017) HACL*: a verified modern cryptographic library. pp. 1789–1806. Cited by: §2.