High-assurance security systems [28, 24, 49] leverage trusted execution environments (TEEs) [19, 52, 4] because TEEs offer strong integrity and confidentiality guarantees in the face of untrusted privileged software, e.g., firmware, hypervisors, operating systems, and administrators. However, applications executing in a TEE cannot exist without the OS, which manages the computing resources and controls applications’ life cycles. Thus, a trustworthy OS is an essential element of each high-assurance security system because it guarantees its safety and security. Otherwise, an untrustworthy OS might run malware that halts the victim application or steals secrets from the TEE via side-channel attacks [14, 87], as depicted in Figure 1. Germany introduced regulations requiring high-assurance security systems in the eHealth domain to execute inside a TEE on a trustworthy OS. State-of-the-art mechanisms to attest to the OS’s trustworthiness rely on the trusted platform module (TPM), a secure element storing and certifying integrity measurements of the firmware and OS. Unfortunately, the TPM is vulnerable to the cuckoo attack (a.k.a. relay attack) [64, 22], which makes TPM attestation untrustworthy. We propose a novel defense mechanism against the TPM cuckoo attack, and we implement it as part of a framework responding to the German eHealth systems regulations.
IMA (Integrity Measurement Architecture) and DRTM (Dynamic Root of Trust for Measurement) are state-of-the-art mechanisms providing OS integrity auditing and enforcement. The DRTM securely loads the kernel into memory, and IMA, which is part of that kernel, ensures that the kernel loads only software whose integrity is certified with a digital signature. Both technologies, when used together, ensure the load-time integrity of the kernel and of software loaded into memory during the OS runtime. Specifically, the DRTM, a hardware technology implemented in the CPU, stops all cores except one, disables interrupts, measures the to-be-loaded kernel, and executes the kernel with the IMA integrity enforcement mechanism. IMA restricts software loaded into memory by reading the digital signature corresponding to the given software from the file system and verifying that this software’s integrity measurement (a cryptographic hash over its binary) matches the original integrity measurement signed by a trusted party (Figure 2). Thus, only software certified by a trusted party can be loaded into memory by the kernel.
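A minimal sketch of the load-time check described above may help. Real IMA verifies a digital signature stored alongside the file; here, for brevity, we approximate the trusted party's certification with a whitelist of known-good hashes (the function and variable names are illustrative, not from the implementation):

```python
import hashlib

def appraise(file_bytes: bytes, trusted_hashes: set) -> bool:
    """Allow a file to be loaded only if its measurement (a cryptographic
    hash over its content) matches a value certified by a trusted party."""
    measurement = hashlib.sha256(file_bytes).hexdigest()
    return measurement in trusted_hashes

# A binary whose hash was certified loads; a tampered one does not.
trusted = {hashlib.sha256(b"known-good binary").hexdigest()}
assert appraise(b"known-good binary", trusted)
assert not appraise(b"tampered binary", trusted)
```

Any bit flip in the binary changes the hash, so the appraisal deterministically rejects modified software.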
The TPM enables auditing of the kernel and software integrity because DRTM and IMA store the corresponding integrity measurements in the tamper-proof TPM memory. The TPM then certifies the stored measurements to a verifier in accordance with the TPM remote attestation protocol. However, the TPM remote attestation is prone to the cuckoo attack, which is a security issue for TPM-based systems [29, 47, 17]. In this attack, an adversary certifies the software integrity of the underlying computer using certified measurements of another computer (see Figure 3). A verifier connects to the compromised computer and communicates with the TPM to check the computer's software integrity (➊). The adversary prevents the verifier from accessing the local TPM by redirecting communication to a remote TPM (➋). Consequently, the verifier reads the remote TPM, which attests to an arbitrary, trustworthy state (➌), not the state of the compromised computer accessed by the verifier.
The existing defenses against the cuckoo attack have limited application in real-world data centers. The first approach relies on a time side-channel [25, 70] in which a remote TPM is unmasked by observing increased communication latency. This approach requires the calculation of hardware-specific statistics, is prone to false positives because the high TPM communication latency (including signature generation) makes distance bounding infeasible [64, 47], and requires stable measurement conditions in which extraneous OS services are suspended during the TPM communication, impractical assumptions for real-world data centers. Flicker adopts another approach. It exploits DRTM to run an application in isolation from the untrusted OS, allowing it to communicate with the TPM directly. Flicker is insufficient for the targeted systems because i) it does not attest to the computer location, making the DRTM attestation untrustworthy because of simple hardware attacks and cold-boot attacks, and ii) while it permits splitting applications into multiple services that run isolated, it does not support systems with moderate throughput and latency requirements. In more detail, DRTM provides isolation in which the entire CPU executes only a single service at a time, and a single context switch takes 10-100s of milliseconds [58, 57]. This results in an estimated program execution throughput of about 1-10 requests per computer per second when running multiple eHealth services. A practical solution requires that hundreds of services are processed in parallel per computer. We require an improvement of at least one order of magnitude in throughput compared to Flicker. Other approaches [20, 21] fall short in the context of the TPM because i) the TPM is a passive device controlled by software that could counterfeit its communication with external devices and ii) they would require human interaction during each computer boot.
The limitations of the existing solutions motivate us to propose a new automatic defense mechanism, practical at data-center scale, that deterministically detects the cuckoo attack and allows for the processing of parallel requests. We demonstrate that despite the differences in their threat models and designs, TEE- and TPM-based techniques complement each other, allowing for mitigation of the cuckoo attack. Consequently, high-assurance security systems executing inside a TEE can attest to the OS integrity. Our solution builds trust in a remote computer starting from a piece of code executing inside the TEE and then systematically extends it to the entire OS. First, we leverage the TEE to establish a trusted piece of code on an untrusted remote computer. We use it to verify that the computer is in the correct data center and to mitigate the cuckoo attack. This allows us to extend trust to the TPM, then to the loaded kernel and its integrity-enforcement mechanism, and, finally, to software being executed during the OS runtime.
We implement this approach in an integrity monitoring and enforcement framework called Synergía, which ensures that high-assurance security applications execute on a correctly initialized and integrity-enforced OS located in the expected data center. The high-assurance security systems conform to the TEE threat model, while they gain OS integrity guarantees under a less rigorous threat model typical of TPM-based systems. We perform a security risk analysis related to the use of these techniques in §6.
Altogether, we make the following contributions:
We designed and implemented an integrity monitoring and enforcement framework called Synergía.
We assessed the security risk of Synergía (§6).
We demonstrated Synergía protecting a real-world application in the eHealth domain (§7.1).
We evaluated its security and performance (§7).
We provided the formal proof of the protocol detecting the cuckoo attack (§7.4).
2 Threat Model
We adopt the threat model of organizations, such as governments, banks, and healthcare providers, legally bound to protect the security-sensitive data they process. In particular, we assume they execute high-assurance security systems in their own data centers or in a hybrid cloud in which security-critical resources are provisioned on-premises. This implies limited and well-controlled access to the data center, allowing us to assume that an adversary, e.g., a rogue operator, cannot perform physical or hardware attacks. To ensure that a high-assurance security system executes inside the data center, we only presume that dedicated computers, called trusted beacons, are located inside that data center and cannot be physically moved outside (§4.3).
Initially, we only trust the CPU (including its hardware features TEE and DRTM) and a small piece of code (the agent). Using the TEE attestation protocol, we ensure that the legitimate agent executes inside the TEE on a genuine CPU on some computer. Then, we use the agent to verify that the computer is located in the correct DC by measuring the proximity to the trusted beacon via a round-trip time distance-bounding protocol. Once we ensure that the agent runs in the expected DC (no physical and hardware attacks), we use it to establish trust with the local TPM with the help of our protocol formally proved to be resistant to the cuckoo attack (§7.4). At this point, we use the TPM to extend the trust to the kernel and its built-in integrity-enforcement mechanism, IMA. Eventually, we use IMA to expand trust to the software loaded during the OS runtime.
High-assurance security systems executing inside the TEE follow the TEE threat model, i.e., the operating system, firmware, other software, and the system administrator are untrusted. The additional guarantees of operating system integrity follow the threat model of TPM-based systems, i.e., software whose integrity is enforced at load time behaves in a trustworthy way also during its execution. The runtime integrity of a process can be enforced using existing techniques, such as control-flow integrity enforcement, fuzzing, formal proofs, memory-safe languages, or memory corruption mitigation techniques (position-independent executables, stack-smashing protection, relocation read-only techniques). Please note that many of these techniques are applied nowadays by default during the software packaging process, as in the case of Alpine Linux.
We assume a financially or governmentally motivated adversary who might gain root access to selected computers inside a data center by exploiting network or OS misconfigurations, exploiting vulnerabilities in the OS, or using social engineering. Her goal is to extract security-sensitive or privacy-sensitive data, e.g., personal data, credentials, or cryptographic material. She can stop or halt individual computers or processes, but she cannot stop all central monitoring service instances responsible for reporting security incidents. We consider an untrusted network where an adversary can view, inject, drop, and alter messages. She can call the API with any parameters and configure the routing, forcing packets to take faster or slower routes. Our network model is consistent with the classic Dolev-Yao adversary model. We rely on the soundness of the cryptographic primitives employed within software and hardware components.
3 Design Decisions
Our objective is to provide a design that: i) enforces that only trusted software is executed on a computer; ii) monitors the remote computer's OS to verify compliance with integrity requirements; iii) allows high-assurance security systems to gain insights into the OS integrity.
We start by introducing the existing integrity monitoring systems' architecture [40, 35, 37] and adjust it to meet the security guarantees required by high-assurance security systems. Figure 4 shows the integrity monitoring architecture, where a central server pulls integrity measurements from computers by communicating with dedicated software, the agent. The agent on each computer collects data from the underlying security and auditing subsystems that measure and enforce the OS integrity. Central servers aggregate the data in databases, verify it against whitelists, and notify the security officer about integrity violations. Such an architecture relies on the TPM as a root of trust.
Enforce the load-time integrity with secure boot and OS integrity enforcement.
Secure boot is the state-of-the-art technology to enforce that only trusted software bootstraps a computer. It relies on a chain of trust where each component measures the integrity (calculates a cryptographic hash) of the next component and executes it only if the hash matches a corresponding digital signature. The measured boot [76, 78] complements it by storing hashes in the TPM, thus enabling auditing.
IMA [67, 75] extends the functionality of measured boot and secure boot to the OS level. IMA is part of the kernel and verifies all files’ integrity (e.g., executables, configuration files, dynamic libraries) before they are loaded into memory. In particular, IMA-appraisal enforces that the kernel loads only files whose hashes are certified with digital signatures stored in the file system (Figure 2). Application execution halts while a dynamic library is loaded and fails if the library does not pass the integrity check. IMA enables auditing by maintaining an IMA log, a dedicated file storing hashes of all files loaded into memory since the kernel load. It adds each file to the IMA log and stores a hash over it in the TPM before the file is loaded into memory. Any tampering with the IMA log is detectable because the IMA log’s integrity hash must match the value stored in the TPM.
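The IMA log verification above can be sketched as replaying the log and comparing the resulting hash chain against the value quoted by the TPM. This is a simplified model (real IMA log entries carry template data beyond the bare file hash):

```python
import hashlib

def verify_ima_log(event_hashes, pcr_value: bytes) -> bool:
    """Replay the IMA log: fold each event hash into a running aggregate
    the same way the TPM extends its PCR, then compare the result with
    the PCR value certified by the TPM quote."""
    aggregate = bytes(32)  # a SHA-256 PCR bank starts zeroed
    for h in event_hashes:
        aggregate = hashlib.sha256(aggregate + h).digest()
    return aggregate == pcr_value

# Simulate two files measured at load time and the matching TPM PCR.
events = [hashlib.sha256(b"/usr/bin/app").digest(),
          hashlib.sha256(b"libc.so").digest()]
pcr = bytes(32)
for h in events:
    pcr = hashlib.sha256(pcr + h).digest()

assert verify_ima_log(events, pcr)          # untampered log matches the PCR
assert not verify_ima_log(events[:1], pcr)  # a truncated log is detected
```

Because the aggregate is order-sensitive and one-way, removing, reordering, or altering any log entry breaks the match with the TPM-held value.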
Enable remote attestation to prove that secure boot and integrity enforcement are enabled.
The TPM remote attestation protocol delivers a technical assurance of the computer’s integrity. The TPM chip digitally signs a report (quote) certifying hashes recorded since the computer boot. The hashes reflect the loaded firmware and kernel and prove that integrity enforcement mechanisms are enabled. The verifier can check that the quote has not been manipulated because the TPM signs the quote with a signing key that is embedded in the TPM and linked to the certificate authority (CA) of the TPM manufacturer. However, the monitoring system cannot merely rely on the TPM attestation because it is vulnerable to the cuckoo attack. It is indistinguishable whether an untrusted OS proves its integrity by presenting a quote from a local TPM or impersonates a trustworthy OS by presenting a quote from a remote TPM.
Detect the cuckoo attack by authenticating the TPM with a secret random number.
The monitoring system must ensure that the quote originated from the local TPM, i.e., the TPM that collected integrity measurements from the software components that booted the OS on the underlying computer. We propose to extend the agent with the functionality of checking that it communicates with the local TPM. The general idea consists of sharing a randomly generated secret with the local TPM to identify it uniquely and then using the secret to authenticate the TPM (Figure 5). The main challenge is how to generate a secret and share it with the local TPM without revealing it to an adversary. Otherwise, the adversary can mount the cuckoo attack by sharing it with a remote TPM.
Protect the secret in the TPM by relying on the one-way cryptographic hash function.
The TPM contains dedicated memory registers, called platform configuration registers (PCRs), that have important properties: they cannot be written directly but can only be extended with a new value using a cryptographic one-way hash function. The operation can be expressed as: PCR_extend(n, value): pcr[n] = hash(pcr[n] || value). We propose to extend the secret on top of the existing measurements stored in the PCR to achieve the following properties: i) an adversary cannot extract the secret from the PCR value after the secret is extended to the PCR because the hash function result is not invertible; ii) an adversary cannot reproduce the PCR value in another TPM without knowing the secret or finding a collision in the hash function; iii) after extending the TPM with the secret, the secret is no longer needed to identify the TPM because the PCR value extended with the secret is unique.
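The extend operation and the properties above can be illustrated in a few lines (a sketch using SHA-256, the hash of the TPM 2.0 SHA-256 PCR bank; the measurement and secret values are illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, value: bytes) -> bytes:
    # PCR_extend(n, value): pcr[n] = hash(pcr[n] || value)
    return hashlib.sha256(pcr + value).digest()

boot_pcr = pcr_extend(bytes(32), b"kernel measurement")  # PCRs start zeroed
secret = b"randomly generated 32-byte value"
obfuscated = pcr_extend(boot_pcr, secret)

# i)  `secret` cannot be recovered from `obfuscated` (one-way hash).
# ii) Reproducing `obfuscated` in another TPM requires knowing `secret`.
# iii) `obfuscated` itself now uniquely identifies this TPM.
assert pcr_extend(boot_pcr, secret) == obfuscated
assert pcr_extend(boot_pcr, b"wrong guess") != obfuscated
```

Note that the extend is not commutative or reversible: the only way to reach `obfuscated` is to start from the same measurement history and extend the same secret.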
Leverage DRTM technology to provide a trusted and measured environment to access the local TPM.
We must ensure that the secret is shared with the local TPM securely. We do so in a trusted environment established by hardware technologies available in modern CPUs because these technologies also permit verification of the established execution environment’s integrity. Therefore, they allow detecting (post factum) any secret extraction attempt, including software side-channel attacks, because such attacks require violating the kernel or initramfs integrity.
We propose generating the secret and extending it to PCRs inside the initramfs, before the OS is loaded, because DRTM allows for later verification of the kernel and initramfs integrity. (The initramfs is a minimalistic root filesystem that provides a user space to perform initialization tasks, such as loading device drivers, mounting network file systems, or decrypting a filesystem.) Specifically, the DRTM, a hardware technology that establishes an isolated execution environment to run code on a potentially untrusted computer, can be used during the boot process (e.g., by tboot) to provide a measured load of the Linux kernel and initramfs.
The integrity measurements performed by DRTM cannot be forged because the TPM offers a dedicated range of PCRs (dynamic PCRs) that can only be reset or extended when the TPM is in a certain locality; only the code executed by DRTM can enter such a locality. Therefore, the presence of measurements in dynamic PCRs confirms that the DRTM was executed, and the comparison of PCRs with the golden values confirms that the secret was shared with the local TPM because the correct TPM driver was used.
Leverage Intel SGX to transfer the golden TPM PCR value to the OS runtime securely.
Once the secret is shared with the TPM, we must expose the unique local TPM identifier (the PCR value extended with the secret) to the agent running in the OS. To do so, we leverage Intel SGX, a hardware CPU extension that provides confidentiality and integrity guarantees to code executed in so-called enclaves in the presence of an adversary with root access to the computer. It offers a sealing property that permits storing a secret on an untrusted disk where only the same enclave running on the same CPU can read it. Sealing and its reverse operation, unsealing, use a CPU- and enclave-specific key to encrypt and sign data in untrusted storage. We propose to communicate with the TPM from inside an enclave. First, the enclave executes in the initramfs, where it shares a secret with the local TPM and seals the expected value of the TPM PCR to the disk. Then, it executes in the untrusted OS, where it authenticates the TPM using the PCR value unsealed from the disk.
Leverage the SGX local and remote attestation to expose integrity measurements to the verifiers.
SGX offers local and remote attestation protocols. While both protocols allow verifying that the expected code runs on a genuine Intel CPU, the SGX local attestation also permits two enclaves to learn that they execute on the same CPU. We rely on this property to permit high-assurance security systems to establish trust with the agent running on the same computer. In this way, high-assurance security systems gain access to integrity measurements of the surrounding OS. Similarly, central monitoring services leverage the SGX remote attestation to establish trust with agents.
Formally prove the protocol of establishing trust between the agent and the TPM.
We use formal verification techniques to prove that the Synergía protocol is resilient against the cuckoo attack because functional software testing cannot detect protocol errors, since they only appear in the presence of a malicious adversary. We rely on automated security protocol verification approaches [48, 11, 5] because they can provide guarantees of the protocol’s correctness [6, 12, 53]. Specifically, we use the SAPIC tool to implement a formal model of the Synergía protocol, verify its integrity, and prove that it is resilient against the cuckoo attack (§7.4).
4 Synergía architecture
4.1 High-level Overview
Figure 6 shows a high-level overview of the Synergía architecture, which consists of five entities. A security officer (➊) uses a controller (➋) to define security policies describing correct (trusted) OS configurations. The controller communicates with agents (➌) running on every computer to check whether high-assurance security systems (➍) are executed in a trusted environment defined in the security policies. Both the controller (➋) and the high-assurance security system executing inside SGX (➍) systematically query the agent to check if the operating system integrity conforms to the criteria defined in a security policy. Note that the integrity measurements are not aggregated or verified centrally. Instead, agents aggregate and verify them locally on computers. Agents verify their location using trusted beacons (➎), services running in a known geographical location, e.g., a specific data center.
We distinguish between two types of verifiers communicating with agents: local and remote verifiers. A local verifier is a high-assurance security system that requires strong confidentiality guarantees (➍). An example of such a service is a key management system [16, 49, 31] that executes inside an SGX enclave to protect integrity and confidentiality against privileged adversaries. The local verifier detects violations of the operating system integrity by communicating with the agent running on the same host.
A remote verifier, e.g., the controller (➋), is an application running on a different computer than the agent. It aims to verify that the remote computer is located in a specific data center and that its OS is in the expected state. Typically, a remote verifier checks the integrity of a distributed system’s deployment, e.g., various services distributed over machines, data centers, and availability zones. The controller has broader knowledge about the network load, machine failures, service migrations, and software updates. It helps the security officer manage the deployment while relying on individual services to react autonomously to integrity violations. The controller might be part of a SIEM (security information and event management) system that correlates system behavior to detect multi-faceted attacks.
The security officer defines security policies (1) to declaratively state what software and dynamic libraries are permitted to run on the computer and what the proper OS configuration is. He creates distinct security policies for each high-assurance security system. For example, a key management system has a different policy than a system processing medical data because they use different dynamic libraries, software, and OS configurations. The monitoring controller reduces the burden of creating policies by allowing the definition of templates that can be combined to build individual policies with overlapping configurations. For example, services running on the same type of OS share the same template that describes software and configuration specific to that OS.
The agent uses the security policy to verify the OS integrity. The OS is trusted if and only if the load-time integrity measurements of the kernel and the load-time integrity measurements of files loaded into memory during the OS runtime are declared on the whitelist, or their corresponding digital signatures are verifiable using the certificate declared in the policy.
In more detail, the agent uses the TPM manufacturer’s CA certificate chain to verify that the TPM chip attached to the computer is legitimate (line 1). The integrity of firmware and its configuration is represented as a whitelist of static PCRs (lines 1-1), while the integrity of the Linux kernel and the initramfs is specified as a whitelist of dynamic PCRs (lines 1-1). Trusted configuration files, executables, and dynamic libraries are defined in the form of hashes (lines 1-1) and a signing certificate (line 1). Software updates are supported via complementary solutions [63, 9] and require specification of the certificate in the policy (line 1).
4.3 Trusted Beacon
A policy might constrain the computers’ proximity to well-known trusted beacons deployed in the data center (lines 1-1). A trusted beacon is a network service that responds to agents’ requests with the current timestamp. The agent can then estimate the physical machine’s proximity by measuring the network communication’s round-trip times. The adversary cannot accelerate network packets enough to achieve the very short round-trip times achievable only between machines in the same local network.
The trusted beacon proximity verification protocol works as follows. The trusted beacon contains an asymmetric keypair with a certificate issued by a trusted authority, e.g., a data center owner. These credentials, known only to the trusted beacon, prove that the data center owner placed the trusted beacon in the data center and that the trusted beacon executes in a trusted environment. The agent establishes trust with the trusted beacon by reading timestamps signed by the trusted beacon. The agent then estimates the network latency by calculating a trimmed mean from the differences between timestamps obtained from pairs of consecutive requests. A trimmed mean allows for tolerating network latency fluctuations because it excludes outliers.
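The proximity estimate can be sketched as follows (the trim fraction and latency threshold are illustrative assumptions, not values from the paper; a deployment would calibrate them for its network):

```python
def trimmed_mean(samples, trim: float = 0.1) -> float:
    """Mean after discarding the `trim` fraction of smallest and largest
    samples, tolerating network latency fluctuations."""
    s = sorted(samples)
    k = int(len(s) * trim)
    core = s[k:len(s) - k] if k else s
    return sum(core) / len(core)

def within_data_center(rtts_ms, threshold_ms: float) -> bool:
    """Accept the computer's location only if the trimmed-mean round-trip
    time to the trusted beacon is achievable inside the local network."""
    return trimmed_mean(rtts_ms) <= threshold_ms

# A single outlier (e.g., a retransmitted packet) does not distort the estimate.
rtts = [0.4, 0.5, 0.5, 0.6, 0.5, 0.4, 0.6, 0.5, 0.5, 42.0]
assert within_data_center(rtts, threshold_ms=1.0)
```

The key physical argument stands outside the code: an adversary can always delay packets, but cannot make them travel faster than the network permits, so a consistently low trimmed-mean RTT bounds the distance to the beacon.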
Our design does not restrict what security mechanisms must protect the trusted beacon. In particular, the trusted beacon could be a network-accessible hardware security module (HSM) returning signed timestamps. An HSM is a crypto coprocessor offering the highest level of security against software and hardware attacks. It is embedded in a tamper-responsive enclosure to actively detect physical and hardware attacks and protect against side-channel attacks. A cheaper but less secure alternative might run a TEE-based application implementing the abovementioned protocol over TLS. Related work demonstrated that the network communication round-trip time between two SGX enclaves located in the same network takes on average s, a latency not achievable from outside the data center.
4.4 Policy Verification Protocol
We designed the agent to act as a facade between the verifier and the TPM to enable multiple verifiers to check the OS integrity concurrently. Figure 8 shows how a verifier uses the policy verification protocol to attest to the OS integrity. The agent regularly reads the list of new software loaded by the OS and the quote, and persists them into a cache that reduces the policy verification latency for future requests (➊). The local or remote verifier performs the SGX local or remote attestation to verify the agent’s identity and integrity and the CPU's genuineness. The local attestation also proves that the agent runs on the same CPU (➋). Once the verifier deploys the policy (➌), the agent checks that the computer complies with the policy, stores the policy, and returns the corresponding policy_id (➍). The verifier uses the policy_id to re-evaluate the policy during future health checks (➎).
5 Implementation
We implemented Synergía on top of the Linux kernel. We use existing integrity enforcement mechanisms built into the Linux kernel, e.g., IMA-appraisal, kernel module signature verification, and AppArmor. We rely on the support for secure boot built into the underlying firmware. We developed the remote attestation components, i.e., the agent in the memory-safe language Rust and the monitoring controller in Python. We implemented the cuckoo attack detection mechanism and the policy verification protocol inside the agent. The monitoring controller allows defining policies, verifying the remote computer system’s integrity, and alerting about integrity violations. We rely on the SCONE framework and the SCONE cross-compiler to run Synergía inside an SGX enclave.
5.1 Computer bootstrap
Figure 9 illustrates the bootstrap of a computer where the agent collects the information required to detect the cuckoo attack. Consecutive UEFI components execute in a chain of trust; their integrity measurements are extended into static PCRs (➊). UEFI loads the bootloader, which starts tboot (➋). tboot leverages Intel TXT [30, 39], which implements DRTM on Intel CPUs, to establish a trusted environment. tboot measures the integrity of the Linux kernel and initramfs, extends these measurements into dynamic PCRs (➌), and executes them (➍).
The initramfs has two essential properties: its integrity is reflected in dynamic PCRs, and failures during initramfs execution prevent the machine from booting. We rely on these properties to verify that the agent completed its execution. We refer to the agent's execution inside the initramfs as agent initialization (➎).
During the agent initialization, the agent requests the TPM to create a new attestation identity key (AIK), return the TPM’s endorsement key (EK) certificate, and return the quote certifying the PCRs (➏). The agent performs the activation of credential procedure (p. 109-111) to verify that the AIK was created by the TPM that possesses the private key associated with the EK certificate. The agent then obfuscates static PCRs by extending them with a random number generated inside the SGX enclave (➐). To ensure that the obfuscation succeeded before the boot process continues, the agent reads the PCRs again and compares them to the expected pre-computed hashes. Finally, the AIK, the EK certificate, the TPM clock (which includes the computer reboot counter), and the PCRs (original and obfuscated) are persisted in the file system in the SGX-sealed configuration file (➑). The initramfs hands control to the OS (➒) after the agent initialization finishes. The OS executes the agent together with startup services. We refer to the agent's execution after the OS starts as agent runtime.
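The obfuscation step (➐) and its self-check can be sketched against a simulated TPM; `read_pcr` and `extend_pcr` are hypothetical callbacks standing in for the TPM driver, and the values are illustrative:

```python
import hashlib, os

def pcr_extend(pcr: bytes, value: bytes) -> bytes:
    return hashlib.sha256(pcr + value).digest()

def obfuscate_static_pcrs(read_pcr, extend_pcr, static_indices):
    """Extend each static PCR with an enclave-generated secret, pre-compute
    the expected post-obfuscation values, and verify the TPM applied them."""
    secret = os.urandom(32)
    original = {n: read_pcr(n) for n in static_indices}
    expected = {n: pcr_extend(v, secret) for n, v in original.items()}
    for n in static_indices:
        extend_pcr(n, secret)
    # Self-check before the boot process continues.
    assert all(read_pcr(n) == expected[n] for n in static_indices)
    return original, expected  # persisted in the sealed configuration file

# Simulated TPM with one static PCR holding a firmware measurement.
tpm = {0: pcr_extend(bytes(32), b"firmware measurement")}
orig, expected = obfuscate_static_pcrs(
    read_pcr=lambda n: tpm[n],
    extend_pcr=lambda n, v: tpm.update({n: pcr_extend(tpm[n], v)}),
    static_indices=[0],
)
assert tpm[0] == expected[0] != orig[0]
```

Since the secret never leaves the enclave in plaintext, only the TPM that received this extend can later present PCRs matching `expected`.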
5.2 Establish Trust
During the agent runtime, the agent verifies that no cuckoo attack occurred during agent initialization or agent runtime by ensuring that the following conditions are fulfilled:
Condition 1: the agent is able to unseal the configuration file (➓). Relying on the properties of the SGX unseal operation, we conclude that the configuration file was created by an agent enclave running the same binary, and both enclaves were executed on the same SGX processor.
Condition 2: a successful match between the dynamic PCRs read from the TPM and the golden dynamic PCRs. It proves that, during agent initialization, the agent enclave was executed in the trusted environment (Linux kernel, initramfs, and correct TPM driver) and successfully obfuscated the TPM.
Condition 3: a successful match of the static PCRs read from the TPM with the obfuscated static PCRs read from the configuration file. It proves that the configuration file contains the information gathered earlier from the same TPM.
Condition 4: a successful match of the reboot counter stored in the configuration file with the reboot counter value read from the fresh quote. It proves that the computer did not reboot since the agent initialization.
Finally, considering conditions 1, 2, 3, 4, and what they indicate once fulfilled, we conclude that the quote was issued by the TPM that collected software measurements during the computer bootstrap. §7.4 formally proves this claim.
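The four conditions compose into a single boolean check; the sketch below assumes illustrative field names for the unsealed configuration and the fresh quote (the formal model, not this sketch, is what §7.4 proves correct):

```python
def trust_established(config, quote) -> bool:
    """Evaluate conditions 1-4: config is the SGX-unsealed configuration
    file (None if unsealing failed), quote is a fresh TPM quote."""
    if config is None:                                                 # Condition 1
        return False
    return (quote["dynamic_pcrs"] == config["golden_dynamic_pcrs"]     # Condition 2
        and quote["static_pcrs"] == config["obfuscated_static_pcrs"]   # Condition 3
        and quote["reboot_count"] == config["reboot_count"])           # Condition 4

cfg = {"golden_dynamic_pcrs": "d", "obfuscated_static_pcrs": "s", "reboot_count": 7}
ok = {"dynamic_pcrs": "d", "static_pcrs": "s", "reboot_count": 7}
assert trust_established(cfg, ok)
assert not trust_established(cfg, {**ok, "reboot_count": 8})  # rebooted since init
assert not trust_established(None, ok)                        # unsealing failed
```

Each failed condition falsifies a different link in the trust chain, so all four must hold before the quote is accepted as coming from the local TPM.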
5.3 Cache Updates
To decrease the policy verification latency, the agent starts a separate thread reading the computer state to validate it against future policy verification requests. The agent recurrently retrieves the quote, verifies that the quote certifies the PCR values read during the agent initialization, and repeatedly reads new events from the IMA log.
Hashes of all events are stored in the enclave’s memory, together with the number of bytes read and the last value of the IMA PCR. To read new events, the agent first retrieves the quote and opens the IMA log file, skipping the bytes already read. It then reads a new event from the file and recalculates the integrity hash by extending it with the event’s hash. This process is repeated for each new event and finishes when the integrity hash is equal to the hash of the IMA PCR retrieved from the quote. If the agent reaches the end of the IMA log and the integrity hash does not match the hash in the IMA PCR, it detects tampering with the IMA log, and the OS is considered compromised.
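The incremental replay can be sketched as a small cache that tracks the byte offset and running aggregate; for simplicity, events here are modeled as fixed 32-byte hashes, whereas real IMA log entries are variable-length records:

```python
import hashlib

class ImaCache:
    """Incremental IMA-log replay: only events appended since the last
    update are folded into the running aggregate."""
    def __init__(self):
        self.bytes_read = 0
        self.aggregate = bytes(32)  # mirrors the zeroed IMA PCR

    def update(self, log: bytes, quoted_pcr: bytes) -> bool:
        # Skip the bytes already read; process only new 32-byte events.
        for i in range(self.bytes_read, len(log), 32):
            self.aggregate = hashlib.sha256(self.aggregate + log[i:i + 32]).digest()
        self.bytes_read = len(log)
        return self.aggregate == quoted_pcr  # mismatch => IMA log tampering

# Two events and the IMA PCR value a quote would certify.
e1, e2 = hashlib.sha256(b"a").digest(), hashlib.sha256(b"b").digest()
pcr = hashlib.sha256(hashlib.sha256(bytes(32) + e1).digest() + e2).digest()
cache = ImaCache()
assert cache.update(e1 + e2, pcr)   # full log matches the quoted PCR
assert cache.update(e1 + e2, pcr)   # no new events: cheap re-validation
```

Keeping the offset in enclave memory means each policy check costs only the newly appended events rather than a full log replay.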
5.4 Policy Verification
The agent exposes the policy verification functionality via a TLS-protected REST API endpoint to simplify the communication interface between verifiers and agents. It is enough for verifiers to check the agent’s identity by verifying its X.509 certificate presented during a TLS handshake. Currently, TLS credentials are delivered to the agent via a key management system (KMS), but the verifier can also rely on the SGX remote attestation to ensure the agent’s identity and integrity. As future work, the agent will create a self-signed certificate via sgx-ra-tls, thus excluding the KMS from the trusted computing base.
The agent stores each deployed policy in an in-memory key-value map under a randomly generated key policy_id to permit tenants to verify the same policy again. The agent can be queried with the policy_id to verify that the OS integrity has not changed since the last verification. An adversary cannot change a deployed policy because SGX protects the agent's memory from tampering, i.e., SGX guarantees integrity, confidentiality, and freshness of data.
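A minimal sketch of this policy cache, assuming Python's `secrets` module as a stand-in for the enclave's randomness source and a 128-bit random policy_id (class and method names are illustrative, not the agent's actual interface):

```python
import secrets

class PolicyStore:
    """In-enclave map from a random policy_id to a deployed policy."""

    def __init__(self):
        self._policies = {}

    def deploy(self, policy: dict) -> str:
        # A 128-bit random key is unguessable, so only the tenant who
        # deployed the policy learns its handle.
        policy_id = secrets.token_hex(16)
        self._policies[policy_id] = policy
        return policy_id

    def lookup(self, policy_id: str) -> dict:
        # Re-verification reuses the stored policy; SGX is assumed to
        # protect this map's integrity, confidentiality, and freshness.
        return self._policies[policy_id]

store = PolicyStore()
pid = store.deploy({"dynamic_pcr": "ab" * 32})
assert store.lookup(pid)["dynamic_pcr"] == "ab" * 32
```

The map lives only in enclave memory, so the SGX integrity and freshness guarantees mentioned above carry over to the stored policies.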
6 Security Risk Assessment
Synergía combines different security techniques to build a framework providing technical assurance that applications execute inside TEEs on a trustworthy OS. However, each technique operates under a different threat model, and a careful analysis of existing attacks is required to claim security guarantees.
6.1 Preventing Physical and Hardware Attacks
First of all, the applied techniques usually do not protect against hardware and physical attacks. The TPM is vulnerable to simple hardware attacks on its communication bus with the CPU that allow an adversary to reset the TPM and replay arbitrary measurements, including measurements corresponding to the DRTM launch. Similarly, Intel SGX is vulnerable to clock speed and voltage manipulation. Direct memory access attacks or cold-boot attacks can compromise the entire operating system and applications that store data in the main memory in plaintext. To prevent these kinds of attacks, we propose to attest to the physical location of the computer. Regulators require that data centers are access-controlled and place computers inside security cages. We argue that these techniques provide enough security to consider physical and hardware attacks inside the trusted data center negligible.
We use the concept of a trusted beacon to verify that the computer is located in the trusted data center. In the real world, the trusted beacon functionality could be provided by a hardware security module or a trusted timestamping authority running on a computer with formally proven software [45, 66]. The only assumption is that trusted beacons must be securely placed inside the data center and then be protected from being moved.
6.2 Establishing Trust with the Agent
To verify that the computer is indeed located in the expected data center, we must rely on the agent executing on a potentially untrusted computer exposed to physical and hardware attacks. To authenticate the agent and verify that it executes on a genuine Intel SGX CPU, we leverage Intel SGX remote attestation. In the past, researchers managed to extract Intel SGX attestation keys [80, 14], which allowed impersonating a genuine SGX CPU. The available mitigations are: i) relying on an on-premises data center attestation mechanism, ii) checking for revoked SGX attestation keys, and iii) verifying that the agent runs in the proximity of a trusted device to ensure that it is in the correct data center composed of legitimate SGX machines. In all cases, we must trust the CPU manufacturer, the SGX design, the cryptographic primitives, and the CPU implementation. We consider these assumptions practical because they are common industry practices.
6.3 Establishing Trust with the TPM
Synergía relies on TXT, SGX, and the TPM to detect the cuckoo attack. Researchers demonstrated that malware placed in the SMM could survive the TXT late launch. To mitigate attacks on the SMM, Intel introduced the SMI Transfer Monitor that constrains the system management interrupt handler, mitigating this class of attacks entirely. Other TXT- and tboot-related vulnerabilities were caused by memory vulnerabilities in Intel's firmware and the tboot implementation.
Intel SGX is vulnerable to microarchitectural and side-channel attacks that violate SGX confidentiality guarantees. Intel constantly patches the vulnerabilities with microcode updates or hardware changes. Nonetheless, we do consider these attacks a real threat because of their severity and the multitude of variants that keep appearing.
These attacks do not impact Synergía's guarantees because they affect only SGX confidentiality, not integrity. The only security-sensitive data whose leakage might compromise Synergía is the secret shared between the agent and the TPM. However, the secret lives only during the agent initialization, where the presence of malware is detected. In more detail, an adversary could extract the secret shared between the agent and the TPM during the agent initialization and mount the cuckoo attack by sharing the secret with an arbitrary TPM. We formally proved (§7.4) that the Synergía protocol is immune to such attacks because the agent detects that the secret was leaked once it executes in agent runtime. The agent detects that malware was present during the agent initialization because both the initramfs and the kernel are measured by DRTM, and their measurements are securely transferred to agent runtime via SGX sealing. An adversary cannot tamper with the sealed data because only the same enclave running on the same CPU can seal and unseal it. Thus, the presence of malware and the secret leakage are revealed.
6.4 Establishing Trust with the OS
The agent can read the load-time integrity of the kernel stored inside the dynamic PCRs in the TPM. It can therefore ensure that the computer executes the intended kernel: even if an adversary boots a malicious kernel, she cannot tamper with the PCRs that reflect the malicious kernel's load.
An adversary who gains access to the computer by stealing credentials using social engineering or by exploiting a misconfiguration cannot run arbitrary software, because she does not possess the signing key required to issue the certificate that the integrity-enforcement mechanism (IMA) demands to authorize a file.
However, an adversary might remotely exploit memory vulnerabilities in existing code, such as the Linux kernel or software executing on the system. This is feasible because most system software is implemented in memory-unsafe languages. We assume that the operating system owner relies on the additional security mechanisms enumerated in §2 to enforce runtime process integrity. Typically, the system owner also minimizes the TCB by authorizing only crucial software to run on the computer, i.e., by digitally signing only trusted software and relying on IMA-appraisal to enforce it during the OS runtime.
An adversary who gains access to the computer can restart it and disable the security mechanisms or boot the computer into an untrusted state. In §7.2, we estimate the vulnerability window size in which the monitoring controller detects the computer integrity violation.
Other attack vectors are network side-channel attacks, such as NetCAT, and rowhammer attacks launched over the network. In these attacks, an adversary does not have to run malware on the computer but instead sends malicious network packets that modern network cards place directly in the main memory. We assign a low risk to these classes of attacks because i) they are hard to perform in a noisy production environment, ii) they are detectable by network traffic monitoring tools and firewalls because they generate high network activity, and iii) mitigation techniques exist and can be applied independently [50, 74].
7 Evaluation
We evaluate Synergía along four dimensions. In §7.1, we demonstrate Synergía protecting a real-world application from the eHealth domain. Then, in §7.2 and §7.3, we evaluate Synergía's security and performance, respectively. Finally, in §7.4, we present the formal verification of the cuckoo attack detection protocol.
Testbed. Experiments execute on a rack-based cluster of three Dell PowerEdge R330 servers connected via 10 Gb Ethernet. Each server is equipped with an Intel Xeon E3-1270 v5 CPU, 64 GiB of RAM, and an Infineon 9665 TPM 2.0, and runs Ubuntu 16.04 LTS with Linux kernel v4.4.0-135-generic. The CPUs are on microcode patch level 0xc6. The EPC is configured to reserve 128 MiB of RAM. During all experiments, the agent, the monitoring controller, and the trusted beacon run on different machines.
| | native | SCONE | Synergía |
|---|---|---|---|
| Execution time | 41 sec | 52 sec | 53 sec |
| tolerate rogue operator | ✗ | ✓ | ✓ |
| tolerate untrusted OS | ✗ | ✓ | ✓ |
| side-channel attacks | ✗ | ✗ | ✓ |
| data processed in | | | |

Table 1: The execution time of the eHealth application. Mean values calculated from 30 independent application executions. The standard deviation in all variants was 1 sec.
7.1 Protecting a Real-world eHealth Application
We leveraged Synergía to protect an eHealth application provided to us by a partner who requires protection of his intellectual property (the application’s source code) and the confidentiality of the privacy-sensitive patients’ data. This dataset contains concentrations of metabolites in cerebrospinal fluid samples from patients with bacterial meningitis, viral meningitis/encephalitis, and non-inflamed controls. The application, implemented in Python, uses a ml algorithm to understand pathophysiological networks and mechanisms as well as to identify disease-specific pathways that could serve as targets for host-directed treatments to reduce end-organ damage. We used publicly available SCONE docker images  to run the application inside a container executed inside the SGX enclave. We configured the OS to use IMA and run the Synergía’s agent. On two other machines, we deployed the trusted beacon and the monitoring controller, which was constantly querying the agent to verify the OS integrity.
We measured the execution time of the machine learning algorithm run in three different variants: in native, the application executes in the untrusted OS; in SCONE, the application executes in the untrusted OS but inside an SGX enclave provided by SCONE; in Synergía, the application executes inside the SGX enclave on an integrity-enforced OS booted with Synergía. Table 1 shows that the machine learning algorithm's execution inside the SGX enclave takes 52 sec, longer than the native execution (41 sec). Synergía further increased the application execution time by 2% compared to the SGX enclave execution. This is an acceptable performance overhead, given the higher security guarantees offered by Synergía and the compliance with the privacy regulations required by EU law.
7.2 Security Evaluation
An adversary cannot violate the computer system's integrity if all integrity enforcement mechanisms are properly configured and enabled (including mechanisms protecting runtime process integrity, §2) because the kernel rejects untrusted files from being loaded to the memory. However, an adversary can run arbitrary software if she gains enough privileges to boot the computer with disabled enforcement mechanisms. We ran a set of micro-benchmarks to estimate the size of the vulnerability window, expressed by Equation (1), during which the integrity violation remains undetected.
Equation (1) combines the following quantities: the vulnerability window size; the time to read a TPM quote; the maximum number of events that can be opened within that time; the time to read a single event from the IMA log; and the time required by the agent to verify the policy and by a verifier to send, receive, and process the verification request.
| Signing scheme | TPM quote read latency |
|---|---|
| RSA 2048 with SHA-256 | 521 ms |
| ECDSA P256 with SHA-256 | 155 ms |
| HMAC with SHA-256 | 107 ms |

Table 2: TPM quote read latency per signing scheme.
What is the latency of reading a TPM quote? Each time the agent reads the IMA log, it reads a fresh TPM quote to verify the IMA log's integrity. The TPM supports different signing schemes that have a direct impact on the TPM quote read latency. Table 2 shows that the TPM issues a quote using HMAC in 107 ms, which is 4.9× faster than when using RSA and 1.4× faster than when using ECDSA. Thus, selecting HMAC or ECDSA allows validating the IMA log's integrity faster than when using RSA. We assume the usage of ECDSA when reading a quote; thus, the quote read latency is 155 ms.
| IMA event type | Read latency of a single IMA log entry |
|---|---|
| ImaNg event | 34 µs |
| ImaSig event | 58 µs |

Table 3: Read latency of a single IMA log entry.
What is the latency of reading integrity measurements? We measured the latency of reading new measurements from the IMA log to learn how fast the agent can detect the integrity violation. During the first read of the IMA log, the agent reads all measurements collected by IMA during the OS boot, which is typically the biggest chunk of the IMA log that has to be read by the agent at once. The bootstrap of Ubuntu Linux produces approximately 1800 measurements. The agent needs 130 ms to read all events from the IMA log, recalculate the IMA log integrity hash, and compare the hash to the IMA PCR.
After the initial IMA log read, the agent reads only the IMA measurements added since the last read. The time needed to read the integrity measurements depends on the number of new events measured and added to the IMA log. Table 3 shows that the agent requires 34 µs and 58 µs to retrieve a single ImaNg and ImaSig event, respectively. ImaNg, the default IMA event format, provides the file's integrity hash; the ImaSig event extends the ImaNg format by also including the file's signature. Thus, the maximum event read time is 58 µs.
How much time does it take to detect an integrity violation? The vulnerability window for the attack consists of the time the agent takes to read a fresh quote, retrieve new events from the IMA log, and process the policy verification request. We assume that while the agent reads a quote (155 ms), an adversary can cause IMA to open no more than 3875 files (according to our measurements, opening a file takes at least 40 µs). The agent would require about 225 ms to read these events and about 100 ms to verify them against the policy (see §7.3). Therefore, using Equation (1), we estimate that the policy verification protocol has a vulnerability window of approximately 805 ms.
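The intermediate values above can be re-derived from the measurements in Tables 2 and 3. A small back-of-envelope sketch (variable names are ours, not the symbols of Equation (1)):

```python
# Back-of-envelope recomputation of the intermediate values in the text,
# using the measurements from Tables 2 and 3 (variable names are ours).
t_quote = 0.155    # ECDSA quote read latency, seconds (Table 2)
t_open  = 40e-6    # minimum time IMA needs to process an opened file
t_event = 58e-6    # worst-case read time of one IMA log entry (ImaSig, Table 3)

n_max  = round(t_quote / t_open)   # files an adversary can open per quote read
t_read = n_max * t_event           # time the agent needs to read those events

assert n_max == 3875
assert round(t_read * 1000, 2) == 224.75   # ~225 ms, as stated above
```

The remaining terms of Equation (1), including the 100 ms policy verification time, are measured rather than derived, so the 805 ms total is not recomputed here.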
7.3 Performance Evaluation
How scalable is Synergía? Can it efficiently verify policies on behalf of multiple verifiers? In our design, the agent is the security-critical component that performs local integrity attestation on behalf of high-assurance security systems, centralized monitoring services, and security officers. To assess the agent's ability to verify security policies, we measured the policy verification throughput, i.e., the rate at which the agent responds to verifiers' requests to verify the OS integrity. Our experiments compare four variants of the policy content: i) default, the policy contains only the definition of static and dynamic PCRs; ii) location proximity, the default policy content with additional constraints about proximity to a trusted beacon; iii) runtime, the default policy content with a whitelist of trusted software; iv) runtime and location proximity, the combination of the runtime and location proximity policies. Figure 10 shows that the agent achieves the maximum throughput of 623 req/sec when verifying a default policy. A similar throughput is achieved for the policy with the location proximity extension. The throughput decreases to 521 req/sec when the agent verifies a security policy containing IMA measurements because of the overhead caused by reading new IMA measurements. An optimal latency of 100 ms is achieved for all policy variants when the throughput does not exceed 250 req/sec.
| Framework | Remote attestation latency |
|---|---|
| Synergía | 665 ms (se = 2 ms) |
| Intel CIT | 2475 ms (se = 5 ms) |
| IBM ACS | 5677 ms (se = 22 ms) |

Table 4: Remote attestation latency; se stands for standard error.
How does Synergía's performance compare to existing monitoring frameworks? We measured the integrity verification latency of existing integrity monitoring frameworks to check whether the presented framework can be considered practical in terms of performance. Specifically, we compared Synergía with Intel CIT [37, 40] and IBM ACS, which is sample code for a TCG attestation application. We measured the total time taken to establish a connection with an agent, retrieve a fresh quote, and compare PCRs with a whitelist. In all experiments, the TPM had been previously commissioned. Table 4 shows that Synergía, with a mean latency of 665 ms, outperforms Intel CIT by 3.7× and IBM ACS by 8.5×. Synergía achieves better performance because, during the initialization, it caches the AIK as well as the static and dynamic PCRs that do not change during the entire agent's life cycle. The agent verifies that those values did not change by comparing them to the certified values obtained from the quote. Furthermore, unlike the others, the agent verifies the integrity of the IMA log and PCRs by recomputing a hash over the cached PCRs and IMA log and matching it against the PCR hash in the quote. This allows the agent to skip the slow process of reading PCRs and, consequently, reduce communication with the TPM to a single recurrent quote read operation.
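The caching optimization can be illustrated as follows. A TPM 2.0 quote certifies a single digest over the selected PCRs (the pcrDigest field) rather than the raw PCR values, so the agent can hash its cached copies and compare them against that digest instead of re-reading every PCR. The sketch below uses illustrative names and a SHA-256 PCR bank, and omits verification of the quote's signature:

```python
import hashlib

def pcr_digest(pcr_values: list[bytes]) -> bytes:
    """A TPM 2.0 quote certifies one digest over the selected PCRs
    (pcrDigest) rather than the raw PCR values themselves."""
    return hashlib.sha256(b"".join(pcr_values)).digest()

def verify_cached_pcrs(cached: list[bytes], quoted_digest: bytes) -> bool:
    # Instead of re-reading every PCR from the slow TPM, the agent hashes
    # its cached copies and compares them against the certified digest.
    return pcr_digest(cached) == quoted_digest

# 13 whitelisted PCRs cached at initialization (toy values).
pcrs = [hashlib.sha256(bytes([i])).digest() for i in range(13)]
quote = pcr_digest(pcrs)               # digest a fresh quote would certify
assert verify_cached_pcrs(pcrs, quote)
assert not verify_cached_pcrs([bytes(32)] * 13, quote)  # changed PCR detected
```

This is why the agent's only recurrent TPM interaction is the quote read: everything else is a hash comparison over cached state.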
How much time does it take to deploy a single security policy?
| Security policy content | Deployment latency |
|---|---|
| Static and dynamic PCRs | 576 ms |
| + location proximity | 626 ms |
| + IMA measurements | 606 ms |
| + location prox. and IMA measur. | 677 ms |

Table 5: Latency of the policy deployment protocol.
Table 5 shows the latency of the policy deployment protocol using different policy extensions. The latency is measured as the total time to establish a TLS connection with Synergía, upload a policy, verify it using a fresh quote, and retrieve a response. The default policy, containing a whitelist of 13 PCRs and one TPM manufacturer's CA certificate, is 4.7 kB in size; its deployment takes 576 ms. The runtime policy, containing a whitelist of 1790 files and an IMA signing certificate, is 235 kB (50× the default policy); its deployment lasts 606 ms, only a 5% increase over the default policy's deployment latency. The deployment latency of a policy with the location proximity extension depends on the communication latency between Synergía and the trusted beacons. The deployment of a policy with one trusted beacon located in the same data center takes 626 ms.
How does Synergía impact the boot time of a computer? We used the systemd-analyze tool to measure the load time of the initramfs and userspace in different configuration variants of Ubuntu. Figure 11 shows that native Ubuntu Linux starts in 19 sec, of which the load of the userspace takes 13 sec and the kernel with initramfs the remaining 6 sec. tboot executes after the bootloader and before the initramfs, thus not influencing the load time of the OS. The activation of IMA, configured to measure all files defined by the TCB policy (ima_tcb boot option), increases the boot time to 158 sec, 8.3× the native boot time. The load of the userspace takes 84% of this time, which is caused by the measurement of 1790 files. The boot time could be decreased by reducing the number of services loaded by the OS. Synergía increases the boot time by 58% compared to Ubuntu Linux with tboot and by 8% compared to Ubuntu Linux with IMA. The increased boot time is mostly caused by the execution of time-consuming TPM operations in the initramfs performed by Synergía and IMA.
7.4 Formal Analysis
We propose PCR obfuscation as a resilience mechanism against the cuckoo attack. To prove this claim, we formally verified the protocol's integrity with and without obfuscation using the SAPIC tool. SAPIC allows modeling security protocols in a variant of applied pi calculus that handles parallel processes with a non-monotonic global state, as needed for a security API such as a TPM. The protocol model describes the actions of agents participating in the protocol, the adversary's specification, and the desired security properties. The adversary and the protocol interact by sending and receiving messages through the network, which changes the system state and creates traces of state transitions. Security properties are modeled as trace properties, checked against traces of the transition system, or as an observational equivalence of two transition systems. While the adversary tries to violate the security properties, she is limited by the constraints of cryptographic primitives. The SAPIC tool uses constraint solving to perform an exhaustive, symbolic search for executions with satisfying traces. Since the correctness of security protocols is an undecidable problem, the tool may not terminate on a given verification problem. If it terminates, it returns either a proof that the protocol fulfills the security property or a counterexample representing an attack that violates the stated property.
Model overview. Listing 2 shows a high-level overview of the protocol model. It has three processes: Golden, TPM, and Machine. The TPM and Machine processes execute in parallel without limiting the number of instances.
The Golden process constructs a trusted image that complies with the required policy. It makes the image available to other processes and the adversary via the public network. A trusted authority keeps a private key (CA_priv) secure, while the public part (CA_pub) is made available on the public network. The handles of the UEFI and tboot (UEFI_golden, tboot_golden) are signed with the private key of the trusted authority to mimic that hardware enforces the boot of only the correct software. The handles of all boot components are available on the public network. The TPM process models a TPM chip with static (sPCR) and dynamic (dPCR) PCRs. It repeatedly creates a quote signed with a TPM-specific private key (AIK_priv).
In Step 1, the Machine process obtains handles of the (trusted) signed tboot, the unsigned initramfs, and the unsigned kernel. It extends the dynamic PCRs with measurements of the initramfs and kernel. Due to locality protection, only processes in this step are allowed to extend the dynamic PCRs. Therefore, the PCRs of the attached TPM reflect measurements of the loaded execution environment. So far, the adversary has no control over the machine.
In Step 2, agent initialization executes inside the already measured and loaded initramfs. Through the previously measured TPM driver, it requests contact with the TPM. If the TPM driver is malicious, the adversary might provide the AIK_pub of a remote TPM instead of the locally attached one. Next, agent initialization reads the quote signed with the TPM private key (AIK_priv). The quote is verified using AIK_pub. The AIK public key and PCRs are sealed to the disk using the local CPU's SGX sealing key.
In Step 3, the OS and the TPM driver are untrusted. The OS takes over control, the TPM driver is loaded, and agent runtime is executed. After unsealing the data from the disk, agent runtime reads the quote through the untrusted TPM driver. The quote is verified with the unsealed attestation public key. Finally, agent runtime reports that the execution environment complies with the policy if and only if the unsealed PCRs and the quote's PCRs both match the golden PCR values.
Security property. The integrity of the protocol is specified as: "if the TPM quote read by agent runtime matches the unsealed information, its execution environment MUST correspond to the matched values". Listing 3 states this property.
The SAPIC tool reported a violation of the given property for the protocol specified in Listing 2. The trace describes an adversary who owns two machines, provisioned and oracle, each connected to its own TPM. The provisioned machine runs a malicious initramfs and OS (sPCR_golden, dPCR_malicious) but uses genuine hardware (TPM and CPU). The adversary wants it to be verified as a trustworthy machine. To do so, she forwards requests to the oracle machine, which runs a trustworthy environment with untampered software and genuine hardware (sPCR_golden, dPCR_golden). Note that forwarding read requests to the oracle does not require a change to its environment; however, the adversary cannot extend the PCRs of the TPM attached to the provisioned machine without changing the initramfs and OS, which would consequently change the corresponding dPCR. During agent initialization, the malicious initramfs on the provisioned machine forwards the attestation request to the oracle, which responds with a quote signed by the oracle's TPM that contains the golden PCRs and can be verified using the oracle's AIK_pub. The PCR values and the AIK_pub key are sealed using the provisioned machine's sealing key. During agent runtime, the provisioned machine unseals the values from the disk and contacts the oracle through the malicious OS to get the quote. The quote contains PCRs that match both the golden and the sealed PCRs. So, the final event is triggered although the sealed TPM and the local TPM differ, which indicates the cuckoo attack. The vulnerability exists because TPM attestation does not guarantee that the received credentials (AIK_pub) belong to the attested machine.
Model extension. Listing 4 shows the extended model of the protocol implementing the PCR obfuscation. It required the following changes: i) generation of a random number (RND); ii) PCR obfuscation: the static PCRs are extended with the RND; iii) agent initialization seals both the original and obfuscated PCRs; iv) agent runtime declares a machine trusted if and only if: a) the golden, sealed, and quote-read dynamic PCRs match, b) the sealed and quote-read obfuscated static PCRs match, and c) the golden and sealed original static PCRs match.
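A toy rendition of the obfuscation idea, with Python's hashlib and secrets standing in for the TPM extend operation and the enclave's randomness source (all names illustrative): the locally attached TPM, which received RND, can reproduce the obfuscated static PCR, while a remote oracle TPM reporting only the golden value cannot.

```python
import hashlib, secrets

def extend(pcr: bytes, value: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || value)."""
    return hashlib.sha256(pcr + value).digest()

# Agent initialization: obfuscate the static PCR with a fresh random
# nonce shared only between the enclave and the locally attached TPM.
static_pcr = hashlib.sha256(b"golden boot chain").digest()
rnd = secrets.token_bytes(32)
local_obfuscated = extend(static_pcr, rnd)   # extended into the local TPM
sealed = (static_pcr, local_obfuscated)      # sealed to disk via the SGX key

# Agent runtime: a remote "oracle" TPM under a cuckoo attack can only
# report the golden static PCR; without RND its value cannot match.
remote_reported = static_pcr
orig, obf = sealed
assert obf == extend(orig, rnd)              # local TPM reproduces the value
assert remote_reported != obf                # cuckoo TPM is rejected
```

The nonce thus acts as the shared secret of the protocol: only the TPM that physically received the extend operation can present a matching quote.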
We checked the model extended with the obfuscation (Listing 4) against the integrity property in Listing 3. The SAPIC tool terminated and reported that all traces of the protocol preserve the given property. The modification to the model, in which the agent initialization enclave shares a secret with the TPM potentially belonging to the attested machine, overcomes the previously described vulnerability.
8 Related work
Like the existing monitoring systems [37, 35], Synergía relies on the TPM attestation protocol to verify the computer's integrity. Unlike them, Synergía is resilient to the cuckoo attack. Existing defenses against this attack have limited applicability to high-assurance security systems. Fink et al. proposed a timing side-channel approach to detect the cuckoo attack. As confirmed by the authors, it is prone to false positives and requires stable measurement conditions, an impractical assumption in real-world scenarios. Flicker accesses the local TPM from the isolated execution environment established by DRTM. However, DRTM does not attest to the computer's location, which makes its attestation untrustworthy due to simple hardware attacks. Moreover, DRTM permits executing only a single process on the entire CPU at a time. This impacts the application's throughput because a single context switch to a DRTM-established environment takes 10-100s of milliseconds. Synergía instead first verifies that the computer is in the trusted data center (thus, no hardware attacks are possible) and uses DRTM only once, when provisioning the TPM. This approach provides the better performance required by modern applications.
Other solutions to the root of trust identification problem require the verifier to solve a biometric challenge, observe emitted LED signals, verify the device state displayed on the screen [20, 51], use trusted devices to scan bar codes sealed on the device, or press a special-purpose button to bootstrap trust during the computer boot. These approaches have limitations because i) the TPM is a passive device controlled by software which, due to the lack of trusted I/O paths to external devices, can redirect, replay, or spoof the communication, and ii) they require human interaction and thus do not scale to the data-center level.
Recently, Dhar et al. proposed ProximiTEE to deal with the SGX (not TPM) cuckoo attack by attaching a trusted device to the computer and detecting the cuckoo attack during SGX attestation. This solution can verify that the SGX enclave executes on the computer with the attached trusted device because of the very low communication latency between the enclave and the device. Although, as noted by Parno, this approach cannot be used to detect the TPM cuckoo attack because of the slow speed of the TPM, Synergía could use ProximiTEE as a trusted beacon implementation to prove that the computer is located in the expected data center.
Other work focuses on tolerating malware in the OS while preventing side-channel attacks on TEEs. There are three approaches to mitigate these attacks: i) static vulnerability detection [32, 62], ii) attack prevention [1, 13, 26], and iii) attack detection [61, 18]. The first consists of analyzing and modifying source code to detect gadgets [32, 62]. However, finding all gadgets is difficult or impossible because the search narrows to gadgets specific to known attacks. The second approach prevents attacks by hiding access patterns using oblivious execution and access pattern obfuscation, resource isolation, or hardware changes. These techniques address only specific attacks, require hardware changes, or incur a large performance overhead [1, 13]. The last approach consists of runtime attack detection [61, 18] by isolating and monitoring the resources of instrumented programs. However, it targets only selected attacks and tolerates some statistical misses. Synergía aims at preventing such attacks without requiring source code changes or hardware modifications, with low performance overhead but a larger trusted computing base.
9 Conclusion
We responded to regulatory demands that require stronger isolation of high-assurance security systems by running them inside trusted execution environments on top of a trustworthy operating system and in the expected geolocation. We demonstrated that the combination of Intel SGX with TPM-based solutions meets such requirements but requires protection against the cuckoo attack. We proposed a novel deterministic defense mechanism against the cuckoo attack and formally proved it. We implemented a framework that monitors and enforces the integrity as well as the geolocation of computers running high-assurance security systems and mitigates the cuckoo attack. Our evaluation and security risk assessment show that Synergía is practical.
-  (2019) Obfuscuro: a commodity obfuscation engine on intel sgx. In Network and Distributed System Security Symposium, Cited by: §8.
-  (accessed on July, 2021) Alpine Linux - Small. Simple. Secure.. Note: https://alpinelinux.org/about/ Cited by: §2.
-  (2013) Innovative technology for cpu based attestation and sealing. In Proceedings of the 2nd international workshop on hardware and architectural support for security and privacy, Vol. 13, pp. 7. Cited by: §3.
-  (2009) Building a secure system using trustzone technology. White Paper Note: http://infocenter.arm.com/help/topic/com.arm.doc.prd29-genc-009492c/PRD29-GENC-009492C_trustzone_security_whitepaper.pdf Cited by: §1.
-  (2005) The avispa tool for the automated validation of internet security protocols and applications. Berlin, Heidelberg, pp. 281–285. External Links: Cited by: §3.
-  (2008) Formal Analysis of SAML 2.0 Web Browser Single Sign-on: Breaking the SAML-based Single Sign-on for Google Apps. New York, NY, USA, pp. 1–10. Cited by: §3.
-  (2016) SCONE: Secure linux containers with Intel SGX. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 689–703. Cited by: §5.
-  (2015) A practical guide to TPM 2.0: Using the new trusted platform module in the new age of security. Springer Nature. Cited by: §5.1.
-  (2016) File Signatures Needed!. Linux Plumbers Conference. Cited by: §4.2.
-  (2014) The operational role of security information and event management systems. IEEE Security and Privacy (S&P) 12 (5), pp. 35–41. Cited by: §4.1.
-  (2001) An efficient cryptographic protocol verifier based on prolog rules. pp. 82–96. Cited by: §3.
-  (2010) Attacking and fixing pkcs#11 security tokens. Cited by: §3.
-  (2019) DR. sgx: automated and adjustable side-channel protection for sgx using data location randomization. pp. 788–800. Cited by: §8.
-  (2018-08) Foreshadow: extracting the keys to the intel SGX kingdom with transient out-of-order execution. Baltimore, MD, pp. 991–1008. External Links: Cited by: §1, §6.2, §6.3.
-  (2014) Heartbleed 101. IEEE security & privacy 12 (4), pp. 63–67. Cited by: §6.4.
-  (2017) Intel SGX Enabled Key Manager Service with OpenStack Barbican. arXiv e-prints. External Links: Cited by: §4.1.
-  (2019) SimTPM: user-centric TPM for mobile devices. pp. 533–550. Cited by: §1.
-  (2018) Racing in hyperspace: closing hyper-threading side channels on SGX with contrived data races. pp. 178–194. Cited by: §8.
-  (2016) Intel SGX explained. IACR Cryptol. ePrint Arch. 2016 (86), pp. 1–118. Cited by: §1, §3.
-  (2015) Graphical user interface for virtualized mobile handsets. IEEE S&P MoST. Cited by: §1, §8.
-  (2021) On the root of trust identification problem. In Proceedings of the 20th International Conference on Information Processing in Sensor Networks (Co-Located with CPS-IoT Week 2021), pp. 315–327. Cited by: §1, §8.
-  (2020) ProximiTEE: hardened sgx attestation by proximity verification. In Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy, CODASPY ’20. Cited by: §1, §4.3, §6.2, §8.
-  (1983) On the security of public key protocols. IEEE Transactions on information theory 29 (2), pp. 198–208. Cited by: §2.
-  (accessed on July, 2021) Top tier bank and confidential computing. Note: https://www.intel.com/content/www/us/en/customer-spotlight/stories/eperi-sgx-customer-story.html Cited by: §1.
-  (2011) Catching the cuckoo: verifying TPM proximity using a quote timing side-channel. pp. 294–301. Cited by: §1, §8.
-  (2019) Time protection: the missing OS abstraction. New York, NY, USA. Cited by: §8.
-  Systemspezifisches Konzept ePA. Note: https://www.vesta-gematik.de/standard/formhandler/324/gemSysL_ePA_V1_3_0.pdf Cited by: §1, §6.1.
-  (accessed on July, 2021) Systemspezifisches Konzept E-Rezept. Note: https://www.vesta-gematik.de/standard/formhandler/324/gemSysL_eRp_V1_0_0_CC6.pdf Cited by: §1.
-  (2019) Establishing software root of trust unconditionally. In Network and Distributed Systems Security (NDSS 2019), Cited by: §1.
-  (2010) Intel trusted execution technology: hardware-based technology for enhancing server platform security. Intel Corporation, Copyright 2012 (8). Cited by: §5.1.
-  (2020) Trust management as a service: enabling trusted execution in the face of byzantine stakeholders. pp. 502–514. Cited by: §4.1, §5.4.
-  (2020) SPECTECTOR: principled detection of speculative information flows. pp. 1–19. Cited by: §8.
-  (2009) Lest we remember: cold-boot attacks on encryption keys. Communications of the ACM 52 (5), pp. 91–98. Cited by: §1, §6.1.
-  (accessed on July, 2021) Linux Integrity Measurement Architecture (IMA) - IMA appraisal. Note: https://sourceforge.net/p/linux-ima/wiki/Home/#ima-appraisal Cited by: §3.
-  (accessed on July, 2021) IBM TPM Attestation Client Server. Note: https://sourceforge.net/projects/ibmtpm20acs/ Cited by: §3, §7.3, §8.
-  (2019) IBM CEX7S / 4769 PCIe Cryptographic Coprocessor (HSM). IBM 4769 Data Sheet. Cited by: §4.3, §6.1.
-  (accessed on July, 2021) Intel Open Cloud Integrity Technology. Note: https://01.org/opencit Cited by: §3, §7.3, §8.
-  (accessed on July, 2021) Trusted Boot (tboot). Note: https://sourceforge.net/projects/tboot/ Cited by: §3.
-  (2008) Intel trusted execution technology–software development guide, revision 017.0. Document. Cited by: §5.1.
-  (accessed on July, 2021) Intel Security Libraries for Data Center. Note: https://01.org/intel-secl Cited by: §3, §7.3.
-  (2013) An architecture for concurrent execution of secure environments in clouds. pp. 11–22. Cited by: §3.
-  (2016) Intel® software guard extensions: EPID provisioning and attestation services. White Paper 1 (1-10), pp. 119. Cited by: §3, §4.4, §5.4, §6.2.
-  (2007) OSLO: Improving the security of Trusted Computing. USENIX. Cited by: §6.1.
-  (2019-08) Origin-sensitive control flow integrity. In 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, pp. 195–211. Cited by: §2.
-  (2009) seL4: formal verification of an OS kernel. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles - SOSP ’09, Big Sky, Montana, USA. Cited by: §6.1.
-  (2018) Integrating remote attestation with transport layer security. arXiv preprint arXiv:1801.05863. Cited by: §5.4.
-  (2020) Dedicated security chips in the age of secure enclaves. IEEE Security and Privacy 18 (5), pp. 38–46. Cited by: §1.
-  (accessed on July, 2021) SAPIC: a stateful applied pi calculus. Note: http://sapic.gforge.inria.fr/ Cited by: §3, §7.4.
-  (accessed on July, 2021) Self-Defending Key Management Service with Intel SGX. Fortanix Whitepaper. Cited by: §1, §4.1.
-  (2020) NetCAT: Practical Cache Attacks from the Network. pp. 20–38. Cited by: §6.4.
-  (2013) Crossover: secure and usable user interface for mobile devices with multiple isolated os personalities. pp. 249–257. Cited by: §8.
-  (2020) Keystone: an open framework for architecting trusted execution environments. In Proceedings of the Fifteenth European Conference on Computer Systems (EuroSys '20), pp. 1–16. Cited by: §1.
-  (1996) Breaking and fixing the Needham-Schroeder public-key protocol using FDR. Cited by: §3.
-  (2019) Thunderclap: exploring vulnerabilities in operating system IOMMU protection via DMA from untrustworthy peripherals. Cited by: §6.1.
-  (2014) The Rust language. ACM SIGAda Ada Letters 34 (3), pp. 103–104. Cited by: §2, §5.
-  (2005) Seeing-is-believing: using camera phones for human-verifiable authentication. Cited by: §8.
-  (2010) TrustVisor: efficient tcb reduction and attestation. pp. 143–158. Cited by: §1, §8.
-  (2008) Flicker: an execution infrastructure for tcb minimization. pp. 315–328. Cited by: §1, §8.
-  (1997) The pi calculus and its applications. In Formal Methods for Open Object-based Distributed Systems, pp. 3–4. Cited by: §7.4.
-  (2020) Plundervolt: software-based fault injection attacks against Intel SGX. Cited by: §6.1.
-  (2018) Varys: protecting SGX enclaves from practical side-channel attacks. pp. 227–240. Cited by: §8.
-  (2020) SpecFuzz: bringing spectre-type vulnerabilities to the surface. pp. 1481–1498. Cited by: §8.
-  (2020) A practical approach for updating an integrity-enforced operating system. pp. 311–325. Cited by: §4.2.
-  (2008) Bootstrapping trust in a "trusted" platform. Cited by: §1, §3, §8.
-  (2005) Encrypt your root filesystem. Linux Journal 2005 (129), pp. 4. Cited by: footnote 1.
-  (2020) EverCrypt: a fast, verified, cross-platform cryptographic provider. pp. 983–1002. Cited by: §6.1.
-  (2004) Design and implementation of a TCG-based integrity measurement architecture. pp. 223–238. Cited by: §3.
-  (2018) Supporting third party attestation for intel sgx with intel data center attestation primitives. White paper. Cited by: §6.2.
-  (accessed on July, 2021) SCONE Docker curated images. Note: https://hub.docker.com/u/sconecuratedimages Cited by: §7.1.
-  (2005) Pioneer: verifying code integrity and enforcing untampered code execution on legacy systems. Cited by: §1.
-  (2013) TCG D-RTM Architecture, Document Version 1.0.0. Trusted Computing Group. Cited by: §1, §3.
-  (2007) A Security Assessment of Trusted Platform Modules. Computer Science Technical Report TR2007-597. Cited by: §6.1.
-  (2015) TrustICE: hardware-assisted isolated computing environments on mobile devices. Cited by: §8.
-  (2018) Throwhammer: rowhammer attacks over the network and defenses. pp. 213–226. Cited by: §6.4.
-  (2006) TCG Infrastructure Working Group Architecture Part II - Integrity Management, Specification Version 1.0, Revision 1.0. Cited by: §1, §3.
-  (2012) TCG PC Client Specific Implementation Specification for Conventional BIOS, Specification Version 1.21, Revision 1.00. Cited by: §3.
-  (2016) TPM Library Specification, Family "2.0", Revision 01.38. Cited by: §1.
-  (2019) TCG PC Client Platform Firmware Profile Specification, Family 2.0, Level 00, Revision 1.04. Cited by: §3.
-  (2019) TCG Trusted Attestation Protocol (TAP) Information Model for TPM Families 1.2 and 2.0 and DICE Family 1.0. Version 1.0, Revision 0.36. Cited by: §3.
-  (2020) SGAxe: how SGX fails in practice. Note: https://sgaxeattack.com/ Cited by: §6.2.
-  (2019) NDA: preventing speculative execution attacks at their source. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO '52, New York, NY, USA. Cited by: §8.
-  (2013) UEFI secure boot in modern computer security solutions. Cited by: §3.
-  (2013) A hijacker’s guide to communication interfaces of the trusted platform module. Computers & Mathematics with Applications. Cited by: §1, §6.1, §8.
-  (2009) Attacking Intel Trusted Execution Technology. Cited by: §6.3.
-  (accessed on July, 2021) Attacking Intel TXT via SINIT code execution hijacking. Note: https://invisiblethingslab.com/resources/2011/Attacking_Intel_TXT_via_SINIT_hijacking.pdf Cited by: §6.3.
-  (2015) Controlled-channel attacks: deterministic side channels for untrusted operating systems. In Proceedings of the 2015 IEEE Symposium on Security and Privacy, SP ’15, USA, pp. 640–656. Cited by: §1.
-  (2019) The fuzzing book. CISPA+ Saarland University. Cited by: §2.
-  (2017) HACL*: a verified modern cryptographic library. pp. 1789–1806. Cited by: §2.