Code coverage analysis is commonly used throughout the software testing process (Ammann2016). Structural coverage metrics such as statement and branch coverage can inspire confidence in a program under test (PUT), or at least identify untested code (Inozemtseva2014; Gopinath2014). Additionally, coverage analysis has demonstrated its usefulness in test suite reduction (Yoo2010), fault localization (Pearson2017), and detection of compiler bugs (Le2014). Moreover, certain coverage requirements are mandated by the standards in safety-critical domains (DO178C; ISO26262:2018).
In recent years, feedback-guided fuzzing has emerged as a successful method for automatically discovering software bugs and security vulnerabilities (VUzzerRawat2017; kAFL:Schumilo2017; QSYMYun2018; Bohme2016a). Notably, AFL (ZalewskiAFLWhitePaper) has pioneered the use of code coverage as a generic and effective feedback signal. This success inspired a fuzzing “renaissance” and helped move fuzzing toward industrial-scale adoption, as in Google’s OSS-Fuzz (OSSFUZZ).
In this work, we introduce bcov, a tool for binary-level coverage analysis using static binary instrumentation. bcov works directly on x86-64 binaries in the ELF format without compiler support. It implements a trampoline-based approach where it inserts probes in targeted locations to track basic block coverage. Each probe consists of a detour that diverts control flow to a designated trampoline. The latter updates coverage data using a single pc-relative mov instruction, potentially executes relocated instructions, and then restores control flow to its original state. Making this scheme work transparently with low overhead on large and well-tested C and C++ programs required addressing several challenges:
Probe pruning (§3). Instrumenting all basic blocks (BBs) can be inefficient, or even impossible, in the x86-64 ISA due to its instruction-size variability. We adopt the probe pruning technique proposed by Agrawal (Agrawal1994) where dominator relationships between BBs are used to group them into super blocks (SBs). SBs are arranged in a super block dominator graph. Basically, covering a single BB implies that all BBs in the same SB are also covered, in addition to the SBs dominating the current SB. This allows us to significantly reduce the instrumentation overhead and the size of coverage data.
Precise CFG analysis (§4). Imprecision in the recovered control flow graph (CFG) can cause false positives in the reported coverage. It can also cause instrumentation errors which lead to crashes in a PUT. To address this challenge, we propose sliced microexecution, a precise and robust technique for jump table analysis. Also, we implement a non-return analysis which eliminates spurious CFG edges after non-return calls. Our experiments show that bcov can outperform IDA Pro, the leading industry disassembler.
Static instrumentation (§5). Given a set of BBs in an SB, we need to choose the best BB to probe based on the expected overhead of restoring control flow. We make this choice using a classification of BBs in x86-64 into 9 types. Also, some BBs can be too short to insert a detour, i.e., their size is less than 5 bytes. We address this challenge by (1) aggressively exploiting padding bytes, (2) instrumenting jump table entries, and (3) introducing a greedy strategy for detour hosting where a larger BB can host the detour of a neighboring short BB. Combining these techniques with probe pruning enables tracking the coverage of virtually all BBs.
Figure 1 depicts the workflow of bcov. Given an ELF module as input, bcov first analyzes module-level artifacts, such as the call graph, before moving to function-level analyses to build the CFG and dominator graphs. Then, bcov
will choose appropriate probe locations and estimate the required code and data sizes depending on the instrumentation policy chosen by the user. Our prototype supports two instrumentation policies. The first is a complete coverage policy where, for any test input, it is possible to precisely identify the covered BBs. The second is a heuristic coverage policy where we probe only the leaf SBs in the super block dominator graph. Running a test suite that covers all leaf SBs implies that 100% code coverage is reached. We refer to these policies as the any-node and leaf-node policies respectively. On average, the any-node policy probes 46% of BBs compared to 30% in the leaf-node policy. The average performance overheads are 14% and 8% respectively.
The patching phase can start after completing the previous analysis phase. Here, bcov first extends the ELF module by allocating two loadable segments: a code segment where trampolines are written and a data segment for storing coverage data. Then, bcov iterates over all probes identified by the chosen instrumentation policy. Each probe represents a single SB. Generally, patching a probe requires inserting a detour targeting a corresponding trampoline. We use pc-relative jmp or call detours. The trampoline first updates coverage data and then restores control flow back to its state in the original module, as depicted in Figure 2.
The data segment has a simple format consisting of a small header and a byte array that is initialized to zeros. Setting a byte to one indicates that its corresponding SB is covered. It is trivial to compress this data on disk as only the LSB of each byte is used. For example, this enables storing the complete coverage data of llc (LLVM backend) in only 65KB. (The binary has around a million BBs.) The data format also enables merging the coverage data of multiple tests using a simple bitwise OR operation.
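To illustrate why the one-byte-per-super-block format makes merging trivial, here is a minimal sketch in Python. The function name and in-memory representation are hypothetical; bcov's actual on-disk layout additionally includes the small header mentioned above.

```python
def merge_coverage(dumps):
    """Merge equally sized coverage byte arrays with a bitwise OR.

    Each dump is one byte per super block: 0 = not covered, 1 = covered.
    """
    assert dumps and all(len(d) == len(dumps[0]) for d in dumps)
    merged = bytearray(len(dumps[0]))
    for dump in dumps:
        for i, byte in enumerate(dump):
            merged[i] |= byte  # covered in any test => covered overall
    return bytes(merged)

test_a = bytes([1, 0, 0, 1])  # first test covers SB0 and SB3
test_b = bytes([0, 1, 0, 1])  # second test covers SB1 and SB3
print(merge_coverage([test_a, test_b]))  # b'\x01\x01\x00\x01'
```

Since only the LSB of each byte carries information, the same merge could equally be done on a bit-packed representation for an 8x smaller on-disk footprint.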
Dumping coverage data requires linking against bcov-rt, our small runtime library. Alternatively, bcov-rt can be injected using the LD_PRELOAD mechanism in order to avoid modifying the build system. Coverage data can be dumped on process shutdown or upon receiving a user signal. The latter enables online coverage tracking of long-running processes. Note that the data segment starts with a magic number which allows bcov-rt to identify it.
This design makes bcov achieve three main goals, namely, transparency, performance, and flexibility. Program transparency is achieved by not modifying program stack, heap, nor any general-purpose register. Also, coverage update requires a single pc-relative mov instruction which has a modest performance overhead. Finally, bcov works directly on the binary without compiler support and largely without changes to the build system. This enables users to flexibly adapt their instrumentation policy without recompilation.
To summarize, we make the following key contributions:
We are the first to bring Agrawal’s probe pruning technique to binary-level instrumentation. We show that its super blocks can be effectively leveraged to optimize probe selection and reduce coverage data.
We introduce sliced microexecution, a robust method for jump table analysis. It significantly improves CFG precision and allows us to instrument jump table entries.
We significantly push the state of the art in trampoline-based static instrumentation and show that it can be used to track code coverage efficiently and transparently.
We implemented our contributions in bcov which we make publicly available: https://github.com/abenkhadra/bcov.
We extensively experimented with bcov. In this respect, we selected 8 popular and well-tested subjects such as ffmpeg and llc. We compiled them using 4 recent major versions of gcc and clang at 3 different optimization levels each. In total, we used bcov to instrument 95 binaries and more than 1.6 million functions. Instrumented binaries did not introduce any test regressions.
There is a plethora of tools dedicated to coverage analysis. They vary widely in terms of goals and features. Therefore, we motivate the need for our approach via a comparison with a representative set of popular tools. Our discussion is based on Table 1.
We start with source-level tools supported in gcc and clang, namely, gcov and llvm-cov respectively. They track similar artifacts such as statement coverage. The key difference is in the performance of instrumented binaries. gcov relies on debug information, which is less accurate in optimized builds. In comparison, llvm-cov features a custom mapping format embedded in LLVM’s intermediate representation (IR). This allows it to cope better with compiler optimizations. In addition, this mapping format tracks source code regions with better precision compared to gcov.
The ability of a binary-level tool such as bcov to report source-level artifacts is limited by the available binary-to-source mapping. Off-the-shelf debug information can be used to report statement coverage, the most important artifact in practice (IvankovicFSE19; Gopinath2014). In this setting, bcov offers several advantages, including: (1) a detailed view of individual branch decisions regardless of the optimization level, (2) precise handling of non-local control flow such as longjmp and C++ exception handling, and (3) flexibility in instrumenting only a selected set of functions, e.g., the ones affected by recent changes, which is important for the efficiency of continuous testing (IvankovicFSE19).
The recent fuzzing renaissance has motivated the need to improve efficiency by heuristically tracking coverage. SanitizerCoverage (sancov) (SanitizeCoverageURL) is a pass built into LLVM which supports collecting various types of feedback signals including basic block coverage. It is used in prominent fuzzers like LibFuzzer (LibFuzzerWebsite) and Honggfuzz (SwieckiHonggfuzz). The performance overhead of sancov is not directly measurable as the usage model varies significantly between sancov users. Also, sancov is tightly coupled with LLVM sanitizers (e.g., ASan) which add varying overhead. Extending bcov with additional feedback signals, similar to sancov, is an interesting future work.
Hardware instruction tracing mechanisms, like Intel® PT (IPT), can also be used for coverage analysis. However, IPT can dump gigabytes of compressed trace data within seconds, which can be inefficient to store and post-process. In our experiments, IPT dumped 6.5GB of trace data for a libxerces test that lasts only 5 seconds. Post-processing and deduplication took more than 3 hours. In comparison, our tool can produce an accurate coverage report for the same test after processing a 53KB dump in a few seconds. Schumilo et al. (kAFL:Schumilo2017) propose to heuristically summarize IPT data on the fly and thus avoid storing the complete trace.
Dynamic binary instrumentation (DBI) tools can report binary-level coverage using dedicated clients (plug-ins) like drcov. DBI tools act as a process virtual machine that JIT-emits instructions into a designated code cache. This process is complex and may break binaries. Moreover, JIT optimizations add overhead to the whole program even if we are only interested in a selected part like a shared library. Our evaluation includes a comparison with the popular DBI tools Pin (IntelPinWeb) and DynamoRIO (DynamoRIOWeb).
3. Probe Pruning
We provide here the necessary background on the probe pruning techniques implemented in bcov based on Agrawal (Agrawal1994). The original work considered source-level pruning but only for C programs.
Given a function with a set of basic blocks connected in a CFG, the straightforward way to obtain complete coverage data is to probe every basic block. However, it is possible to significantly reduce the number of required probes by computing dominance relationships between basic blocks in a CFG. We say that a basic block u predominates a block v iff every path from the function entry to v goes through u. Similarly, u postdominates v iff every path from v to the function exit goes through u. We say that u dominates v iff u predominates v or u postdominates v. The predominator and postdominator relationships are represented by the predominator tree and the postdominator tree respectively. The dominator graph (DG) is a directed graph that captures all dominance relationships. It is obtained by the union of both trees, i.e., by merging the edges of both trees.
Given a dominator graph and the fact that a particular basic block b is covered, this implies that all dominators (predecessors) of b in the DG are also covered. This allows us to avoid probing basic blocks that do not increase our coverage information. However, we are interested in moving a step further by leveraging strongly-connected components (SCCs) in the DG. Each SCC represents a super block, a set of basic blocks with equivalent coverage information. The super block dominator graph (SB-DG) is constructed by merging SCCs in the DG. That is, each node in the SB-DG represents an SCC in the DG. An edge is inserted between two super blocks SB_i and SB_j iff there exist basic blocks u in SB_i and v in SB_j such that u dominates v.
Constructing a SB-DG has a number of benefits. First, it is a convenient tool to measure the coverage information gained from probing any particular basic block. Second, it enables compressing coverage data by tracking super blocks instead of individual basic blocks. Finally, it provides flexibility in choosing the best basic block to probe in a super block. We show later in section 5.1 how this flexibility can be leveraged to reduce instrumentation overhead.
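The construction described above can be sketched compactly: take the union of the two dominator-tree edge sets and collapse its strongly connected components. The following is a minimal illustration, not bcov's implementation; the diamond-shaped example CFG and its dominator-tree edges are assumptions chosen for brevity.

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Kosaraju's algorithm: strongly connected components of a digraph."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    order, seen = [], set()

    def dfs(start, graph, out):
        # iterative post-order DFS
        stack = [(start, iter(graph[start]))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            nxt = next((w for w in it if w not in seen), None)
            if nxt is None:
                stack.pop()
                out.append(node)
            else:
                seen.add(nxt)
                stack.append((nxt, iter(graph[nxt])))

    for n in nodes:
        if n not in seen:
            dfs(n, fwd, order)
    seen.clear()
    comps = []
    for n in reversed(order):       # decreasing finish time
        if n not in seen:
            comp = []
            dfs(n, rev, comp)       # DFS on the reversed graph
            comps.append(frozenset(comp))
    return comps

# Hypothetical diamond CFG: A -> B -> D and A -> C -> D.
pre_dom  = [("A", "B"), ("A", "C"), ("A", "D")]   # predominator tree edges
post_dom = [("D", "C"), ("D", "B"), ("D", "A")]   # postdominator tree edges
dg_edges = pre_dom + post_dom                     # dominator graph = union
super_blocks = sccs(["A", "B", "C", "D"], dg_edges)
# A and D end up in one super block: every execution entering A reaches D.
```

Note how the entry and exit of the diamond collapse into one super block: covering either one implies covering the other, so a single probe suffices for both.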
We implemented two instrumentation policies in bcov, namely, leaf-node and any-node. We discuss them based on the example depicted in Figure 3. In the leaf-node policy, we instrument only the leaves of the SB-DG. Covering all such leaf nodes implies that all nodes in the SB-DG are also covered, i.e., achieving 100% coverage. However, this coverage percentage is usually infeasible in practice. Nevertheless, leaf nodes still provide high coverage information, which makes the leaf-node policy useful for approximating the coverage of a test suite at relatively low overhead.
Generally, we are also interested in inferring the exact set of covered basic blocks given any test input. This is usually not possible in the leaf-node policy. For example, given an input that visits a particular path, the leaf-node policy can report as covered only the super blocks that dominate the leaf probes visited on that path. It can make no statement about the coverage of super blocks that do not dominate any visited probe. We address this problem in the any-node policy. The set of super blocks instrumented in this policy is a superset of those in the leaf-node policy: it adds the critical super blocks. A super block is critical in the sense that it can be visited by at least one path in the CFG that does not visit any of its children in the SB-DG.
It is possible to determine the critical super blocks using an algorithm that is linear in the nodes and edges of the CFG. We refer to (Agrawal1994) for further details. In Figure 3, one super block is non-critical, while another is critical and, consequently, will be probed in the any-node policy.
4. Control Flow Analysis
In this section, we first consider the definition of a function at binary level. Then, we discuss sliced microexecution, our proposed method for jump table analysis.
4.1. Function Definitions
The notion of function is important to our approach as it determines the scope of CFG and, consequently, the correctness of dominance relationships. Functions are well-defined constructs in the source code. However, compiler optimizations such as function splitting and inlining significantly change the layout of corresponding binary-level functions.
Fortunately, these optimizations are not of concern to us as long as well-formed function definitions are given to bcov. A function is defined by a pair (s, z), where s and z are its start address and byte size respectively. A function can have a set of entry and exit points where control flow enters and leaves the function respectively. We say that a function definition is well-formed if (1) it does not overlap with other function definitions, and (2) all of its basic blocks are reachable only through its entries.
Definitions source. Our tool uses linker symbols as a source of well-formed function definitions. These symbols, unlike debug symbols, are available by default in all builds. In stripped binaries, bcov can read function definitions from call-frame information (CFI) records, which can be found in the .eh_frame section. This section stores the data necessary for stack unwinding and is part of the loadable image of the binary, i.e., it is not stripped. These records must be available to enable C++ exception handling. However, they are typically available in C binaries as well since they are needed for crash reporting, among other tasks.
Note that CFI records might not contain all the functions that are defined in linker symbols. For example, developers might exclude CFI records of leaf functions to save memory. However, we empirically observed that function definitions in CFI records largely match those found in linker symbols. Additionally, in the unlikely case where CFI records are unavailable, we may still resort to function identification techniques such as (Andriesse2017; BYTEWEIGHT2014Bao).
Function entries. The main entry of a function is trivially defined by its start address. Other functions can either call or tail-call only the main entry. We have empirically validated this assumption in our dataset. That is, we have not found any instance where a (direct) function call targets an internal basic block in another function. However, non-local control transfer mechanisms, such as longjmp and exception handling, violate this assumption. We refer to possible targets of non-local control transfer as auxiliary function entries. Such entries are not dominated by, or are even unreachable from, the main function entry. Auxiliary entries of longjmp are identified during CFG construction. They are simply the successors of each basic block that calls setjmp.
The identification of auxiliary entries used in exception handling is more elaborate. The Itanium C++ ABI specifies the exception handling standard used in modern Unix-like systems. Of interest to us in this specification is the landing pad which is a code section responsible for catching, or cleaning up after, an exception. A function can have several landing pads, e.g., it can catch exceptions of different types. We consider each landing pad to be an auxiliary entry. Collecting landing pad addresses requires bcov to iterate over all CFI records in the .eh_frame section. More specifically, bcov examines all Frame Description Entry (FDE) records looking for a pointer to a language-specific data area (LSDA). If such pointer exists, then bcov would parse the corresponding LSDA to extract landing pad addresses.
Function exits. Our tool analyzes the CFG to identify the basic blocks where control flow leaves a function. We take two parameters into consideration: (1) the type of the control-transfer instruction, which can be jmp, call, or ret, and (2) whether it is a direct or indirect instruction. A jmp targeting another function is a tail-call and generally also an exit point. However, the jump table analysis in section 4.2 can determine that certain indirect jmp instructions are actually intra-procedural, i.e., local to the function. On the other hand, a call typically returns, i.e., is not an exit point, except for calls to non-return functions. The non-return analysis implemented in bcov is responsible for identifying such functions. Finally, we consider all ret instructions to be exit points.
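The exit-point decision above can be summarized as a small predicate. This is an illustrative model, not bcov's code; the tuple representation of instructions and the set arguments are assumptions for the sketch.

```python
def is_exit_point(last_insn, local_jump_tables, non_return_funcs):
    """Decide whether a basic block's last instruction leaves the function.

    last_insn: (kind, target), kind in {'jmp', 'call', 'ret'};
    local_jump_tables: indirect jmp sites proven intra-procedural;
    non_return_funcs: names of functions known not to return.
    """
    kind, target = last_insn
    if kind == "ret":
        return True  # all returns are exit points
    if kind == "jmp":
        # an indirect jmp resolved to a local jump table stays internal;
        # any other jmp out of the function is treated as a tail-call
        return target not in local_jump_tables
    if kind == "call":
        # calls normally return; calls to non-return functions do not
        return target in non_return_funcs
    return False

print(is_exit_point(("call", "abort"), set(), {"abort"}))  # True
print(is_exit_point(("jmp", 0x9f719), {0x9f719}, set()))   # False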
Our model of a function occupying a contiguous code region is simple; yet, we found it to be consistent with our large dataset. Moreover, it can be augmented with additional analyses to identify function entries and exits. This provides enough flexibility to handle special situations that might arise in practice. For example, using ret to implement indirect calls in Retpoline (RetpolineTurner).
4.2. Jump Table Analysis
Recovering the targets of indirect control transfer instructions is desirable in several applications such as control flow integrity. However, this problem is undecidable in general, which means that we can only hope for approximate solutions. Nevertheless, the switch statement in C/C++ remains amenable to precise analysis. It is commonly implemented as an indirect jmp based on a bounded variable that indexes into a lookup table.
The analysis of such jump tables enables us to (1) increase CFG precision, (2) instrument jump table data, and (3) avoid disassembly errors. The latter issue is relevant to architectures such as ARM where compilers inline jump table data in the code section. Fortunately, such data typically reside in a separate read-only section in x86-64, which enables correct disassembly using linear sweep (AndriesseUsenixSec16).
The analysis of jump tables can be challenging as compilers enjoy a lot of flexibility in implementing switch statements. A jump table can be control-bounded by checking the value of the index against a bound condition. Alternatively, should the expected values be dense, e.g., many values below 16, the compiler might prefer a data-bounded jump table, e.g., using a bitwise and with 0xf. Additionally, compilers are free to divide a switch with many case labels into multiple jump tables. Our goal in this analysis is to recover information about each individual jump table. This includes its control flow targets and total number of entries.
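The two bounding styles can be contrasted with a toy dispatch model. Real jump tables hold code addresses; the labels and table contents here are assumptions made purely for illustration.

```python
# Hypothetical 4-entry jump table for a switch over cases 0..3.
TABLE = ["case0", "case1", "case2", "case3"]

def control_bounded(index):
    # compiler emits roughly: cmp index, 3; ja default; jmp [table + index*8]
    if index > 3:               # bound condition guards the table
        return "default"
    return TABLE[index]

def data_bounded(index):
    # compiler emits roughly: and index, 0x3; jmp [table + index*8]
    return TABLE[index & 0x3]   # the bit mask itself bounds the index

print(control_bounded(7))  # 'default' -- out-of-range index never reaches the table
print(data_bounded(7))     # 'case3'   -- 7 & 0x3 == 3, index is wrapped into range
```

The key observable difference, exploited by the analysis below, is that an out-of-range index bypasses a control-bounded table entirely, whereas a data-bounded table maps every index to some in-range entry.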
To this end, we propose sliced microexecution, a novel method for jump table analysis which combines classical backward slicing with microexecution (Godefroid2014). The latter refers to the ability to emulate any code fragment without manual inputs. Basically, for each indirect jmp in a function, bcov attempts to test the sequence of hypotheses depicted in Table 2. If they are invalid, then bcov aborts the analysis and considers this jmp to be a tail-call. Otherwise, bcov proceeds with the actual recovery depending on the type of the jump table, which can generally be either control-bounded or data-bounded.
(1) Depends on a constant base address?            if yes, test (2); else abort
(2) Is constrained by a bound condition?           if yes, test (3); else assume (4)
(3) Bound condition dominates the jump table?      if yes, do recovery; else assume (4)
(4) Assume the jump table is data-bounded          do recovery and try to falsify
We discuss this method based on the example shown in Figure 4. First, bcov tests hypothesis (1) by backward slicing from 0x9f719 until it reaches the instruction at 0x9f712, which has a memory dependency. This dependency has its base address in r15. So is this base address constant? Backward slicing for r15 shows that it is in fact a constant determined at compile time. Note that a jump table should depend on a single variable used as the index.
We move now to test hypothesis (2). It is tested by spawning a condition slicer upon encountering each conditional jmp, e.g., the instruction at 0x9f707. This slicer is used to check whether the variable influencing the bound condition is also the jump table index. This is the case in our example at 0x9f6f0, where the value in r12b influences both the condition at 0x9f707 and the jump table index. Now that a bound condition is found, we need to test it against hypothesis (3).
A jump table might be preceded by multiple conditional comparisons that depend on the index. We apply heuristics in order to quickly discard the ones that cannot represent a bound condition, e.g., comparisons with zero. However, there can still be more than one candidate. Here, we leverage the fact that a bound condition should dominate the jump table. Otherwise, a path in the CFG would exist where the index value remains unbounded. We check for dominance during the backward CFG traversal needed for slicing. Basically, it should not be possible to bypass the bound condition.
Backward slicing produces a slice (code fragment) which captures the essential instructions affecting the jump table. This slice represents a univariate black-box function with the index as its input variable. Modifying the index should trigger behavioral changes, especially in the observed jump address at the output. Assuming that this slice represents a jump table, we reason about its behavior using microexecution. Also, we try to validate our assumption by widely varying the index.
Before microexecuting a slice, bcov first loads the binary using a built-in ELF loader. Then, it initializes a valid memory environment for the given code slice. For example, it allocates memory for the pointer [rsp+0x8] and assigns a valid address to rsp. It is now possible to start “fuzzing” the index. However, the expected behavior of the slice depends on the type of the jump table.
In control-bounded jump tables, a change in behavior must be observed between the in-bound interval [0, c] and the out-of-bound interval above c, where c is the bound constant. This constant is located in the first instruction that sets the flags before the bound condition. In our example, this is the instruction at 0x9f6f4. bcov tests 24 index values in total, 8 of which are sampled around the bound, including c − 1, c, and c + 1. The remaining 16 values increase exponentially, in powers of 2. We found this scheme to give us high confidence in the results.
The jump table is expected to target an instruction inside the current function for most in-bound inputs. On the other hand, the jump table should not be reachable for any out-of-bound input. That is, the bound condition should redirect control flow to the default case. Should the behavior of the code slice not match what we expect from a control-bounded jump table, we abort and assume that it is data-bounded. Note that we are not strict about the behavior at the bound value itself since the bound condition might check for equality.
Assuming that a given indirect jmp represents a data-bounded jump table, we need effective techniques to validate our assumption and explore the bound limits. To this end, bcov executes the slice 24 times, each time increasing the index exponentially while setting the least significant bits to one. This allows us to explore the bound limits in the common case of a bitwise and with a bit mask like 0xf. Other bit patterns are also tried in order to better penetrate combinations of bitwise instructions. Our key insight is that we should not have full control over the jump target. That is, an arbitrary change in the index should be reflected in a constrained change in the jump target. Additionally, jump targets need to be located in the current function, similar to the case of control-bounded jump tables. Should the slice withstand these diverse tests, then we can be highly confident that it represents a jump table.
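The two sampling schemes described above can be sketched as simple generators. The exact values bcov samples are not spelled out in full here, so the concrete choices below (e.g., including 0, c/2, and 2c among the near-bound samples) are assumptions; only the counts and the exponential-growth and low-bits-set patterns follow the text.

```python
def control_bounded_samples(c):
    """24 test indices for a suspected control-bounded table with bound c:
    8 sampled around the bound (including c-1, c, c+1), 16 growing
    exponentially in powers of 2 past the bound."""
    near = [0, 1, c // 2, c - 1, c, c + 1, c + 2, 2 * c]
    far = [(c + 1) << k for k in range(16)]
    return near + far

def data_bounded_samples():
    """24 test indices for a suspected data-bounded table: values growing
    exponentially with all lower bits set (0x1, 0x3, 0x7, ...), which
    quickly probes the limits of bit masks like 0xf."""
    return [(1 << k) - 1 for k in range(1, 25)]

samples = control_bounded_samples(16)
print(len(samples))              # 24
print(data_bounded_samples()[:4])  # [1, 3, 7, 15]
```

For a mask of 0xf, the value 15 (0xf) already reaches the highest table entry, while larger all-ones values check that the jump target stays constrained, i.e., that the slice does not grant full control over the target.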
Our evaluation shows that sliced microexecution is precise and robust against various compiler optimizations. It allowed bcov to reliably recover the jump tables in the core loop of the Python interpreter, located in the function _PyEval_EvalFrameDefault. Note that these jump tables are compiled from complex computed gotos. (Computed gotos are a gcc extension to C, also supported in clang, that gives developers full control over bound checking in a jump table.)
5. Static Instrumentation
In this section, we first consider a strategy to reduce instrumentation overhead by carefully selecting a basic block to probe in a super block. Then, we discuss handling short basic blocks by means of hosting their detours in larger neighboring basic blocks.
5.1. Optimized Probe Selection
Generally, probing a BB requires inserting a detour targeting its designated trampoline. A detour occupies 5 bytes and can be either a direct jmp or call. Consequently, one or more original instructions must be relocated to the trampoline. This relocation overhead varies due to the instruction-size variability of x86-64. Note that a pc-relative mov, which occupies 7 bytes, represents an unavoidable overhead in each trampoline in order to update coverage. Hence, our goal is to reduce the relocation overhead.
To this end, we iterate over all BBs in a super block and select the one expected to incur the lowest overhead. First, we have to establish whether a detour can be accommodated in the first place. A BB whose byte size and padding size together are smaller than the 5 bytes of a detour is considered a guest. A super block that contains only guest BBs is handled via detour hosting (§5.2). Next, we examine the type and size of the last instruction of each BB and whether the BB is targeted by a jump table. These parameters are translated into the types depicted in Table 3. These BB types are organized in a total order. This means, for example, that we strictly prefer a long-call over a long-cond should both exist in the same super block. This type order is primarily derived from empirical observation. However, we did not necessarily experiment with all possible combinations. Preferring long-call over short-call should be intuitive: the latter incurs an additional overhead for relocating at least one instruction preceding the call.
|BB type||Guest?||Relocation overhead|
|return||maybe||Can be only 1 byte depending on the padding|
|long-jump||no||Size of jmp instruction which is 5 bytes|
|long-call||no||Size of call instruction which is 5 bytes|
|jump-tab||no||Size of jmp instruction to original code (5 bytes)|
|short-call||yes||Similar to long-call but with RP overhead added|
|short-jump||yes||Similar to long-jump but with RP overhead added|
|internal||maybe||Size of relocated instruction(s) inside the BB|
|long-cond||no||Rewriting incurs a fixed 11 byte overhead|
|short-cond||yes||Similar to long-cond but with RP overhead added|
We observed that return basic blocks are usually padded (55% on average). The padding size is often more than 3 bytes, which translates to a relocation overhead of only one byte, the size of a ret instruction. Also, favoring long-jump over long-call provided around 3% improvement in both relocation and performance overheads. On the other hand, short-call had only a slight advantage over short-jump. This might be due to the fixed 2-byte size of the latter, which leads to relocating more instructions. However, our experiments were not always conclusive, e.g., between jump-tab and short-call.
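Given the total order over BB types, probe selection within a super block reduces to picking the minimum-ranked candidate. The sketch below is illustrative (the data representation and addresses are hypothetical); the type ordering follows the row order of Table 3.

```python
# Total order over BB types, most preferred first (row order of Table 3).
TYPE_ORDER = ["return", "long-jump", "long-call", "jump-tab",
              "short-call", "short-jump", "internal", "long-cond",
              "short-cond"]
RANK = {t: i for i, t in enumerate(TYPE_ORDER)}

def select_probe(super_block):
    """super_block: list of (bb_address, bb_type) pairs (guests excluded).
    Returns the BB expected to incur the lowest relocation overhead."""
    return min(super_block, key=lambda bb: RANK[bb[1]])

sb = [(0x400100, "long-cond"), (0x400180, "long-call"), (0x4001f0, "internal")]
print(hex(select_probe(sb)[0]))  # 0x400180, the long-call BB wins
```

Since guests cannot accommodate a detour at all, they are filtered out before this selection and handled by the detour hosting mechanism of §5.2.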
Relocating an instruction depends on its relation to the PC (called rip in x86-64). Position-independent instructions can simply be copied to the trampoline. However, we had to develop a custom rewriter for position-dependent instructions. The rewriter preserves the exact semantics of the original instruction whether it explicitly or implicitly depends on rip. For example, a long-cond instruction will be rewritten in the trampoline to a matching sequence consisting of a long-cond (6 bytes) and a jmp (5 bytes).
Jump table instrumentation has the unique property of preserving the original code. It is a data-only mechanism that enables us to probe even one-byte BBs. However, in order to be applicable, a BB has to be targeted by a patchable jump table. A jump table is patchable if its entries are either 32-bit offsets or absolute addresses. We observed that about 92% of the more than 46,000 jump tables in our dataset are patchable. In fact, we found that 8-bit and 16-bit offsets are used only in libopencv_core.
Finally, our probe selection strategy is effective in reducing relocation overhead. However, it is not necessarily optimal. We observed high variance in the percentage of padded return BBs, i.e., return is not always the best choice. Also, a loop-aware strategy might reduce performance overhead by avoiding loop heads. Such optimizations are left for future work.
5.2. Detour Hosting
The instruction-size variability in x86-64 means that some BBs are simply too short to insert a detour without overwriting the following BB. In our dataset, we found that about 7% of all BBs are short (size < 5 bytes). Left without a probe, we risk losing the coverage information of a particular short BB and, potentially, all of its dominators. One possible solution is to relocate the entire function to a larger memory area. However, this is costly in terms of code size and engineering effort. The latter is needed to fix relocated references. For example, throwing an exception from a relocated function without fixing its corresponding CFI record will lead to abrupt process termination.
The method adopted in bcov is detour hosting. It offers lower relocation overhead and preserves the stability of code references at basic block level. Here, the size of a guest BB needs to be at least 2 bytes which is enough to insert a short detour targeting a reachable host BB, i.e., within about ±128 bytes. The host BB must be large enough to accommodate two regular detours, i.e., at least 10 bytes. The first detour targets its own trampoline while the other detours would target the trampolines of their respective guests. Note that we can safely overwrite padding bytes of both the guest and host. Also, the host does not need to be entirely relocated. Relocating a subset of its instructions might be sufficient.
Figure 5 depicts a detour hosting example. It involves a guest consisting of an indirect call (3 bytes in size). The tricky part about a call is that the return address must be preserved. A sub instruction (5 bytes) is used to adjust the return address in the trampoline from 0xad67fd to its original value of 0xad6803. CPU flags are also clobbered, which should be safe since they are not preserved across function calls in the x86-64 ABI. Note that this is the only case where we modify CPU state.
Now we have the following allocation problem: given a guest and a set of suitable hosts, find the host whose selection incurs minimal overhead. Moreover, we are also interested in the more general formulation: given a set of guests and a set of hosts, where each host is suitable for at least one guest, find a guest-to-host mapping such that the overhead is minimal. We approach this problem using a greedy strategy where we prefer, in this order, (1) packing more guests in a single host, (2) a host already selected to be probed over an intact host, and (3) a host that is closer to the guest. Basically, for each guest, we iterate over all reachable BBs. Each such BB may offer a hosting offset. A higher offset means that more guests are packed in this host. The initial offset is 5 bytes from the start of the host. Should offered offsets be equal, we look into (2) in order to avoid, as much as possible, relocating otherwise intact BBs. Finally, should both (1) and (2) be equal, we look into (3) in order to have better code cache locality.
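The greedy strategy can be sketched as follows. The cost model and names are our simplifying assumptions (every detour is modeled as 5 bytes, reachability as roughly ±128 bytes), not bcov's exact implementation:

```cpp
#include <cstdint>
#include <cstdlib>
#include <optional>
#include <vector>

// A candidate host BB. Its first 5 bytes are reserved for its own detour,
// so guest slots start at offset 5.
struct Host {
    uint64_t addr;
    unsigned size;     // bytes usable for detours
    bool probed;       // already selected for probing anyway?
    unsigned next = 5; // next free offset inside the host
};

// Pick a host for `guest` among hosts reachable with a 2-byte short jmp.
// Prefer (1) a higher offered offset (more guests packed), then (2) an
// already-probed host, then (3) the closest host; reserve the slot on success.
std::optional<std::size_t> pick_host(uint64_t guest, std::vector<Host>& hosts) {
    std::optional<std::size_t> best;
    for (std::size_t i = 0; i < hosts.size(); ++i) {
        Host& h = hosts[i];
        int64_t dist = std::llabs(static_cast<int64_t>(h.addr - guest));
        if (dist > 128 || h.next + 5 > h.size) continue; // unreachable or full
        if (!best) { best = i; continue; }
        Host& b = hosts[*best];
        if (h.next != b.next) { if (h.next > b.next) best = i; continue; }
        if (h.probed != b.probed) { if (h.probed) best = i; continue; }
        if (dist < std::llabs(static_cast<int64_t>(b.addr - guest))) best = i;
    }
    if (best) hosts[*best].next += 5; // reserve the slot for this guest
    return best;
}
```

Iterating this per guest yields the packing behavior described above: ties on offered offset fall back to preferring probed hosts, then proximity.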
Our experiments show that this strategy provides a good balance in terms of performance and relocation overhead. It achieves a hosting ratio of 1.2 guests per host on average. Also, it was able to host up to 14 guests in a single host. Around 80% of the hosts are already probed, i.e., a relocation overhead is expected for them anyway. Additionally, bcov was able to host 94% of the guests.
We implemented our approach in the tool bcov. Our tool accepts an ELF module as input (executable or shared library). It starts with a set of module-level analyses such as reading function definitions, parsing CFI records, and building the call graph. Our non-return analysis implementation is similar to that of (Meng2016). We omit the details as they are not part of our core contribution. Then come function-level analyses such as building the CFG (including jump tables), dominator trees, and the super block dominator graph. Probes are determined based on the instrumentation policy set by the user. bcov can be used for patching or coverage reporting. The latter mode requires a data file dumped from a patched module. The instrumentation policy used for coverage reporting must match the one used for patching. We implemented the modern SEMI-NCA dominator tree algorithm (Georgiadis2005) and Tarjan's classical SCC algorithm. We used capstone (CapstoneEngine) for disassembly and implemented a wrapper around unicorn (UnicornEngine) for microexecution. In total, this required about 17,000 LoC in C++ (testing code excluded). The run-time library bcov-rt is implemented in C in ~250 LoC.
Our evaluation is guided by the following research questions:
Can bcov transparently scale to real-world binaries?
What is the instrumentation overhead in terms of performance, memory, and file size?
Have we pushed the state of the art in jump table analysis?
To what extent can bcov provide better efficiency in comparison to its direct alternatives, namely, DBI tools?
Can bcov accurately report binary-level coverage?
For our evaluation, we selected eight modules from popular open source packages offering diverse functionality. They are summarized in Table 4. We compiled each module using four compilers in three different build types. Specifically, we used the compilers gcc-5.5, gcc-7.4, clang-5.0, and clang-8.0. This gives us a representative snapshot of the past three years of development in gcc and clang respectively. The build types are debug, release, and lto. The latter refers to link-time optimizations. Compiler optimizations were disabled in debug builds and enabled in release and lto builds. Enabled optimizations depend on the default options of their respective package, which can be at levels O2 or O3.
This results in 12 versions of each module and a total of 95 binaries. (Compiling llc with gcc-5.5 in the lto build resulted in a compiler crash.) Our tool was able to patch 88 binaries without modifying the build system. However, we had to modify the linker script in 7 instances where relocating ELF program headers was not possible. We instructed the linker to leave 112 bytes, which is enough for our segment headers, after the original program headers. This change is small, affecting only one line in the linker script. The bcov-rt runtime was injected using the LD_PRELOAD mechanism. All experiments were conducted on an Ubuntu 16.04 PC with an Intel® i7-6700 CPU and 32GB of RAM.
RQ1: Scalability and transparency. Our choice of subjects directly supports our scalability claim. Figure 6 shows a comparison in terms of code size relative to objdump, a commonly used subject in binary analysis research. Note that bcov can analyze and patch llc, our largest subject, in ~30 seconds. In our experiments, we used bcov to instrument all functions available in the .text section across all 95 binaries. The policies leaf-node and any-node have been applied separately, i.e., subjects were instrumented twice.
Transparency is important in coverage instrumentation. This practically means that bcov should not introduce regressions. We evaluated this criterion by replacing original binaries with instrumented versions and re-running their test suites. Our instrumentation did not introduce any regressions despite the fact that (1) we systematically patch all functions even compiler-generated ones, and (2) our benchmark packages include extensive test suites. For example, the perl test suite runs over one million checks.
RQ2: Instrumentation overhead. Figure 7 depicts the instrumentation overhead relative to original binaries. The average performance overhead of the any-node policy is 14%. The leaf-node policy is omitted due to the lack of space. The overhead is measured based on the wall-clock time required to run individual test suites, e.g., running "make test" to completion. This covers the overhead associated with instrumentation and dumping coverage data to disk. The latter overhead varies depending on the number of processes spawned during testing. For example, opencv tests are executed within a single process which dumps coverage data only once. In contrast, unit-testing of llc spawns over 7500 processes in about 40s. This results in dumping ~4GB of coverage data which significantly contributes to the overall delay. Online merging of coverage data might reduce this disk IO overhead. To give a better intuition, we note that without online merging, llvm-cov would dump over 320GB of coverage (and profiling) data for the same benchmark.
The average memory and file size overhead introduced by bcov are 22% and 16% respectively. We measure the memory overhead relative to loadable ELF segments only since bcov does not affect run-time heap and stack. Coverage data represents only 6% of the memory overhead. It is worth noting that compiler optimizations can force bcov to relocate more instructions. This might be due to smaller basic blocks. However, our static instrumentation techniques are effective in reducing the difference in relocation overhead between debug and optimized builds as shown in Figure 6(c).
RQ3: Jump table analysis. Evaluating sliced microexecution requires comparing bcov with representative binary analysis tools. However, it was not possible to compare with BAP (BAP:CAV2011Brumley) and angr (angr:Shoshitaishvili2016), which are the leading academic tools. BAP does not have built-in support for jump table analysis, while angr (v8.18.10) crashed on opencv and llc binaries. For the remaining binaries, angr reported significantly fewer jump tables compared to IDA Pro. Therefore, we compare bcov only with IDA Pro (version 7.2). This should not affect our results since IDA Pro is the leading industry disassembler.
Next we have to establish the ground truth of jump table addresses, specifically, the addresses of their indirect jmp instructions. This is challenging as compilers do not directly emit such information. Therefore, we conduct a differential comparison. We observed that bcov and IDA Pro agree on the majority of jump tables, including their targets, so we manually examined the remaining cases where they disagree. Neither tool reported false positives, i.e., they only missed jump tables. This is expected in bcov as repeated microexecution inspires high confidence in its results. Therefore, our ground truth is the union of jump table addresses recovered by both tools. Figure 8 depicts the recovery percentages relative to this ground truth of more than 46K jump tables. We control for different factors affecting compilation. We observed that IDA Pro delivers lower accuracy on clang binaries compared to gcc binaries, and its accuracy was affected by compiler optimizations. On the other hand, bcov demonstrates high robustness across the board.
RQ4: Comparison with DBI tools. Pin and DynamoRIO (DR) are the most popular DBI tools. Both act as process virtual machines that instrument programs while JIT-emitting instructions to a code cache. This complex process creates the following sources of overhead: (1) JIT optimization, and (2) client instrumentation. For evaluating this overhead on our test suites, we installed the latest stable releases of both tools, namely, Pin v3.11 and DR v7.1. We then replaced each of our subjects with a wrapper executable. In the case of shared libraries, we replaced their test harness with our wrapper. The test system would now run the wrapper which in turn executes its corresponding original binary but under the control of a DBI tool. The wrapper reads a designated environment variable to choose the current DBI tool.
Figure 9 depicts the performance overhead of Pin and DR without client instrumentation. It also shows the overhead of DR after enabling drcov, its code coverage client. Note that Pin does not have a built-in coverage client. The overhead is measured relative to original binaries and is averaged over four different release builds. Both tools introduced regressions on perl and python. Specifically, DR caused tests to hang on perl and crashed on the python test suite. This highlights the challenges of maintaining transparency in DBI tools. Note that the DBI overhead of executable subjects is significantly higher than that of shared libraries. This can be attributed to the start-up delay which dominates in short-running tests. Our experiments show that bcov provides significantly better performance, transparency, and usability.
RQ5: Coverage report accuracy. We evaluate the coverage reported by bcov by tracing binaries that are instrumented with the any-node policy. This is necessary as comparing coverage of original binaries to instrumented ones will introduce errors that are caused by non-determinism. Initially, we obtained ground truth traces using Intel PT (IPT). To this end, we collected about 2000 sample tests from our test suites. Running these tests produces 104GB of IPT data and 444MB of bcov coverage data. We used the standard perf tracing facilities in kernel v4.15 and later kernel v5.3. We tried many IPT configurations and restricted ourselves to tests terminating in 5 seconds. Despite these efforts, we could not reliably evaluate bcov due to non-deterministic loss in IPT data (IntelPTLinuxDocs).
We then turned to drcov to obtain the ground truth. This DR client dumps the address of encountered basic blocks (BB) heads, i.e., first instruction. We leverage the fact that our instrumentation does not modify BB heads. We expect BBs reported as covered by bcov to appear in drcov’s trace. We consider these BBs to be true positives (TP). If a BB reported by bcov was not found, it will be marked as false-positive (FP). On the other hand, failing to report the address of a tracked BB is a false-negative (FN). FPs and FNs are considered errors in the reported coverage. Our evaluation method is conservative given the potential CFG overapproximation. Also, we take into account that drcov reports the heads of dynamic BBs. This means that should A and B be consecutive BBs where A is fallthrough, i.e., does not end with a branch, drcov might only report the head of A.
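The classification above can be summarized in a small sketch. For simplicity it treats every head in the drcov trace as a tracked BB, ignoring the dynamic-BB caveat; names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <set>

struct Metrics {
    std::size_t tp = 0, fp = 0, fn = 0;
    double precision() const { return tp ? double(tp) / (tp + fp) : 0.0; }
    double recall() const { return tp ? double(tp) / (tp + fn) : 0.0; }
};

// Classify BB heads reported covered by bcov against the set of executed
// heads in the drcov trace: present -> TP, absent -> FP; executed heads
// missing from bcov's report count as FN.
Metrics evaluate(const std::set<uint64_t>& bcov_covered,
                 const std::set<uint64_t>& drcov_trace) {
    Metrics m;
    for (uint64_t bb : bcov_covered)
        drcov_trace.count(bb) ? ++m.tp : ++m.fp;
    for (uint64_t bb : drcov_trace)
        if (!bcov_covered.count(bb)) ++m.fn;
    return m;
}
```

This relies on the fact that instrumentation leaves BB heads unmodified, so addresses are directly comparable between the two reports.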
Our results are shown in Table 5. They are based on running the test suites of subjects compiled with gcc-7 in release build. The results are representative of other build types. The subjects are instrumented with bcov and also run under control of DR’s drcov. For each BB category we show the minimum and maximum counts across all test processes. For example, the minimum number of FP BBs among 7862 llc
processes is 0, the maximum is 75. The average precision and recall across all subjects are 99.97% and 99.95% respectively. This suggests that the reported coverage errors are practically negligible. Nevertheless, there is still room for further improvement. Specifically, improving CFG precision and detour hosting can reduce FPs and FNs respectively.
Table 5. Per-module coverage evaluation: process count, drcov and bcov data sizes, and BB/instruction counts of TPs, FPs, and FNs.
In this section, we discuss potential issues and limitations of bcov.
RISC ISA. Inserting detours is generally easier in RISC ISAs thanks to their fixed instruction size. However, the addressing range can be significantly lower than the ±2GB offered by x86-64. Note that we patch each ELF module individually. This means that we only need an addressing range that is large enough to reach our patch code segment from the original code. For example, a range of 60MB would be sufficient for our largest subject. AArch64 offers a detour range of ±128MB which can accommodate the majority of binaries. AArch32 offers just ±32MB, in comparison. In such cases, a single detour instruction might not be sufficient. Additional options need to be investigated, such as relocating functions, literal pools, and changes to linker scripts.
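A simple range check captures this reasoning. The constants are the architectural direct-branch ranges; the function name is ours:

```cpp
#include <cstdint>

// Can a single direct-branch detour at probe_addr reach the patch code
// segment at patch_addr? Typical ranges: +/-2GB for x86-64 (jmp rel32),
// +/-128MB for AArch64 (B imm26), +/-32MB for AArch32.
bool detour_reaches(uint64_t probe_addr, uint64_t patch_addr, int64_t range) {
    int64_t delta = static_cast<int64_t>(patch_addr - probe_addr);
    return delta >= -range && delta <= range;
}
```

For instance, a 60MB reach requirement fits comfortably in the AArch64 range but exceeds the AArch32 one, which is where the fallback options above become necessary.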
In addition, we update coverage data using a single pc-relative mov which has a memory operand with a 32-bit offset. Generally, emulating the same functionality in RISC ISAs requires more instructions and clobbering register(s). However, saving and restoring the clobbered registers is not always necessary. A liveness analysis can help us acquire registers with dead values. Similar analyses are already implemented in DBI tools.
Limitations and threats to validity. The precision of the recovered CFG can affect the coverage reported by our tool. While the implemented jump table and non-return analyses significantly increase CFG precision, they are still not perfect. Our prototype might miss jump tables, albeit only in a few situations. Also, while our experiments show that the non-return analysis in bcov is comparable to IDA Pro, both tools face the challenge of may-return functions. Such functions might not return to their caller depending on their arguments. Function Perl__force_out_malformed_utf8_message in perl is particularly noteworthy. In one binary, it is called 88 times (out of 89 total) with the argument die_here set, i.e., it will not return. Developers can signal to the compiler that a particular call will not return using __builtin_unreachable(). Such information is not available in the binaries, so we simply assume that all calls to may-return functions are returning. Consequently, bcov might spuriously report BBs following a may-return call as covered.
On another note, we believe that our subjects are representative of C/C++ user-space software in Linux. However, generalizing our results to other languages and platforms requires further investigation. The simple mechanisms we use to implement detours and update coverage are also applicable to system software. However, special considerations might exist. Finally, our approach cannot be directly applied to dynamic code, e.g., self-modifying code.
9. Related Work
Instrumentation using trampolines has been known for a long time. It is typically used in limited applications such as function interception (MicrosoftDetours). We systematically use trampolines at fine granularity to instrument basic blocks. Also, we are aggressive in exploiting padding bytes and hosting detours. This allows us to avoid relocating entire functions, which is the approach followed in PEBIL (Laurenzano2010).
Recently, several works considered static instrumentation via reassembly (Wang2015; RetroWriteSP2020) and recompilation (Anand2013). In principle, they are orthogonal to our approach as they allow us to inline coverage update code in the recovered artifacts, namely, assembly and IR respectively. However, they both face the challenge of distinguishing references from scalars, an undecidable problem in general. Also, they require additional engineering effort to fix relocated references, e.g., CFI records. In comparison, trampolines allowed us to seamlessly scale to large binaries in both C and C++. Also, they make it easy to map analysis results back to original binaries.
The analysis of jump tables has been considered in several works. A combination of pattern matching and data-flow analysis has been proposed in (BenKhadra2016; Meng2016). Cifuentes et al. (Cifuentes1999) use backward slicing to produce a slice that is converted to a canonical IR expression and then checked against known jump table forms. A custom value-set analysis using SMT solving has been implemented in JTR (Cojocar2017a). It is applied after lifting instructions to LLVM IR. In contrast, our approach semantically reasons about jump tables without manual pattern matching. Also, we do not pay the performance and engineering overhead of lifting instructions to an IR. Moreover, we move beyond mere recovery to jump table instrumentation.
Tikir et al. (Tikir2002) propose an approach to binary-level coverage analysis and enhance its efficiency via probe pruning. It is the closest related work to ours. However, our approaches differ in several aspects. First, they focus on dynamic coverage analysis where they analyze, patch, and potentially restore binaries at runtime. In contrast, our approach is static, which enables us to spend more time optimizing instrumentation. Second, their work builds on Dyninst (DyninstWeb), a generic binary instrumentation tool. However, the generality of Dyninst comes at a considerable cost in terms of overhead and complexity. For example, trampolines are organized into multiple levels. In comparison, we focus on the bare minimum required for tracking code coverage. Consequently, bcov provides better performance and transparency. Finally, the probe pruning technique implemented in bcov is more effective, a fact acknowledged in their work.
In this work, we presented bcov, a tool for binary-level coverage analysis. We implemented a trampoline-based instrumentation approach and demonstrated that it can transparently scale to large real-world programs in both C and C++. Our tool works directly on x86-64 ELF binaries without compiler support. Improving efficiency required an orchestrated effort where we leverage probe pruning, improve CFG precision, and cope with the instruction-size variability in x86-64 ISA. We make our tool and dataset publicly available to foster further research in this area.