Mechanisms for achieving distributed consensus have given rise to a family of distributed protocols for building a replicated transaction log (a blockchain). These technological advances enabled the creation of decentralised cryptocurrencies, such as Bitcoin [Nakamoto:08]. Ethereum [Wood:Ethereum], one of Bitcoin’s most prominent successors, adds Turing-complete stateful computation associated with funds-exchanging transactions—so-called smart contracts—to replicated distributed storage.
Smart contracts are small programs stored on a blockchain that can be invoked by transactions initiated by parties involved in the protocol, executing some business logic as automatic and trustworthy mediators. Typical applications of smart contracts include implementations of multi-party accounting, voting and arbitration mechanisms, auctions, as well as puzzle-solving games with reward distribution. To preserve the global consistency of the blockchain, every transaction involving an interaction with a smart contract is replicated across the system. In Ethereum, replicated execution is implemented by means of a uniform execution back-end—the Ethereum Virtual Machine (EVM) [Wood:Ethereum]—a stack-based operational formalism, enriched with a number of primitives that allow contracts to call each other, refer to the global blockchain state, initiate sub-transactions, and even create new contract instances dynamically. As such, EVM provides a convenient compilation target for multiple high-level programming languages for implementing Ethereum-based smart contracts. In contrast with prior low-level languages for smart contract scripting, EVM features mutable persistent state that can be modified, during a contract’s lifetime, by parties interacting with it. Finally, in order to tackle the issue of possible denial-of-service attacks, EVM comes with a notion of gas—a cost semantics of virtual machine instructions.
All these features make EVM a very powerful execution formalism, but at the same time they make its bytecode quite difficult to formally analyse for possible inefficiencies and vulnerabilities. This challenge is exacerbated by the mission-critical nature of smart contracts, which, after having been deployed, cannot be amended or taken off the blockchain.
Contributions. In this work, we take a step further towards sound and automated reasoning about high-level properties of Ethereum smart contracts.
We do so by providing EthIR, an open-source tool for precise decompilation of EVM bytecode into a high-level representation in a rule-based form; EthIR is available via GitHub: https://github.com/costa-group/ethIR.
Our representation reconstructs high-level control and data flow for EVM bytecode from the low-level encoding provided in the CFGs generated by Oyente. It enables the application of state-of-the-art analysis tools developed for high-level languages to infer properties of the bytecode.
We showcase this application by conducting an automated resource analysis of existing contracts from the blockchain, inferring their loop bounds.
2 From EVM to a Rule-based Representation
The purpose of decompilation –as for other bytecode languages (see, e.g., the Soot analysis and optimization framework [soot])– is to make explicit, in a higher-level representation, the control flow of the program (by means of rules which indicate the continuation of the execution) and the data flow (by means of explicit variables, which represent the data stored in the stack, in contract fields, in local variables, and in the blockchain), so that an analysis or transformation tool has this information directly available.
2.1 Extension of Oyente to Generate the CFG
Given some EVM code, the Oyente tool generates a set of blocks that store the information needed to represent the CFG of that code. However, when the jump address of a block is not unique (i.e., it depends on the flow of the program), the blocks generated by Oyente sometimes only store the last value of the jump address (this suffices for the kind of symbolic execution Oyente performs). We have modified the structure of Oyente blocks to include all possible jump addresses, so that the whole CFG is reconstructed. As an example, Fig. 1 shows the Solidity source code for a fragment of a contract (left), and the CFG generated from it (right). Observe that in the CFGs generated by our extension of Oyente, the SSTORE and SLOAD instructions are annotated with an identifier of the contract field they operate on (for instance, a SSTORE operation that stores a value on contract field 0 is replaced by SSTORE 0). Similarly, the EVM instructions MSTORE and MLOAD are annotated with the memory address they operate on (such addresses will be transformed into variables in the RBR whenever possible). These annotations cannot be generated when the memory address is not statically known (for instance, when we have an array access inside a loop with a variable index). In such cases, we annotate the corresponding instructions with “?”.
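To make the extension concrete, the following is a minimal Python sketch (the class and function names are hypothetical, not Oyente's actual API) of a CFG block that records all observed jump targets rather than only the last one, and of the annotation of storage/memory opcodes with a concrete address or "?":

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BasicBlock:
    start_pc: int                                           # PC of the first bytecode in the block
    instructions: List[str] = field(default_factory=list)
    jump_targets: List[int] = field(default_factory=list)   # all possible continuations

    def add_jump_target(self, pc: int) -> None:
        # Keep every address discovered during symbolic execution,
        # not just the most recently seen one.
        if pc not in self.jump_targets:
            self.jump_targets.append(pc)

def annotate_storage_op(opcode: str, address: Optional[int]) -> str:
    # SSTORE/SLOAD (and MSTORE/MLOAD) are annotated with the concrete
    # address when it is statically known, and with "?" otherwise.
    if address is None:
        return f"{opcode} ?"
    return f"{opcode} {address}"

blk = BasicBlock(start_pc=152)
blk.add_jump_target(160)
blk.add_jump_target(192)
blk.instructions.append(annotate_storage_op("SSTORE", 0))   # "SSTORE 0"
blk.instructions.append(annotate_storage_op("MLOAD", None)) # "MLOAD ?"
```

The key change with respect to the original Oyente structure is that `jump_targets` is a list, so a block whose continuation depends on the program flow keeps every reachable successor.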
Finally, when Solidity code is available, we are able to retrieve the names of the invoked functions from their hash codes (see, e.g., Block 152, in which we have annotated the second bytecode with kingBlock, the name of the function to be invoked). This allows us to statically know the continuation block.
2.2 From the CFG to Guarded Rules
The translation from EVM into our rule-based representation is done by applying the translation of Def. 1 to each block in a CFG. The identifiers given to the rules –block_x or jump_x– use x, the PC of the first bytecode in the block being translated. We distinguish three cases: (1) if the last bytecode in the block is an unconditional jump (JUMP), we generate a single rule with an invocation to the continuation block; (2) if it is a conditional jump (JUMPI), we additionally produce two guarded rules which represent the continuations when the condition holds and when it does not; (3) otherwise, we continue the execution in block x+s (where s is the total size of the EVM bytecodes in the block being translated). As regards the variables, we distinguish the following types:
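The three cases above can be sketched in Python as follows (a simplified illustration with hypothetical names, not the actual Def. 1; the rules are rendered as strings for readability):

```python
def translate_block(x, bytecodes, jump_targets, block_size):
    """Generate RBR rule skeletons for a block starting at PC x."""
    last = bytecodes[-1]
    if last == "JUMP":
        # Case (1): unconditional jump -- a single rule invoking
        # the continuation block.
        return [f"block_{x} := call block_{jump_targets[0]}"]
    elif last == "JUMPI":
        # Case (2): conditional jump -- a rule invoking jump_x, plus two
        # guarded rules for the taken and fall-through continuations.
        then_pc = jump_targets[0]
        else_pc = x + block_size
        return [
            f"block_{x} := call jump_{x}",
            f"jump_{x} [guard holds] := call block_{then_pc}",
            f"jump_{x} [guard fails] := call block_{else_pc}",
        ]
    else:
        # Case (3): no jump -- execution continues in block x + s.
        return [f"block_{x} := call block_{x + block_size}"]
```

For instance, a block at PC 100 of size 5 ending in JUMPI with target 200 yields the rule block_100 plus two guarded jump_100 rules, continuing at block 200 or block 105.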
Stack variables: a key ingredient of the translation is that the stack is flattened into variables, i.e., the part of the stack that the block uses is represented, when the block is translated into a rule, by the explicit variables s0, s1, ..., where si+1 is above si, and so on. The initial stack variables are obtained as parameters of the rule.
Local variables: the contents of the local memory at the numeric addresses appearing in the code, which are accessed through MSTORE and MLOAD with the given address, are modelled with variables l0, l1, ..., which are passed as parameters. For the translation, we assume we are given a map lmap which associates a different local variable to every numeric memory address used in the code. When the address is not numeric, we represent it using a fresh variable, local to the rule, to indicate that we have no information on this memory location.
Contract fields: we model the fields with variables g0, g1, ..., which are passed as parameters. Since the fields are accessed through SSTORE and SLOAD using the number of the field, we associate gi with the i-th field. As for the local memory, if the number of the field is unknown (annotated as “?”), we use a fresh local variable to represent it.
Blockchain data: we model this data with variables which are either indexed, when they represent the message data, or given corresponding names when they hold precise information about the call (like the gas, accessed with the opcode GAS) or about the blockchain (like the current block number, accessed with the opcode NUMBER). All this data is accessed through dedicated opcodes, which may consume some offsets of the stack and normally place the result on top of the stack (although some of them, like CALLDATACOPY, can store information in the local memory).
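The flattening of the stack into explicit variables can be illustrated with a small Python sketch (simplified; the helper names are ours, not EthIR's): each abstract stack slot becomes a named variable si, with si+1 above si, and opcodes that grow or shrink the stack simply introduce or drop the topmost variable.

```python
def flatten_stack(depth):
    """Variable names for a stack of the given depth, bottom (s0) to top."""
    return [f"s{i}" for i in range(depth)]

def push(stack_vars):
    # A value-producing opcode (e.g. PUSH) grows the stack:
    # the new top slot gets the next free index.
    return stack_vars + [f"s{len(stack_vars)}"]

def pop(stack_vars):
    # A consuming opcode shrinks the stack by dropping the top variable.
    return stack_vars[:-1]

vars3 = flatten_stack(3)     # the block starts with three stack variables
after_push = push(vars3)     # a PUSH introduces s3 on top
after_pop = pop(after_push)  # a consuming opcode drops it again
```

In the actual translation these variables become explicit parameters of the generated rules, which is what makes the data flow directly visible to high-level analysis tools.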
The translation uses an auxiliary function that translates each bytecode into the corresponding high-level instructions (updating the size of the stack accordingly), and another that translates the guard of a conditional jump. The grammar of the resulting RBR language into which the EVM code is translated is given in Fig. 2. Optionally, we can keep in the RBR the original bytecode instructions from which the higher-level ones are obtained, by simply wrapping them within a nop functor (e.g., nop(DUPN)). This is relevant for a gas analyzer, which can then assign the precise gas consumption to the higher-level instruction into which the bytecode was transformed.
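The nop-wrapping mechanism can be sketched as follows (a simplified illustration with hypothetical helper names; the gas table below is a tiny illustrative subset, not the full EVM cost model): the originating bytecode is kept alongside its high-level translation, so a gas analyzer can later charge each rule with the cost of the original opcodes.

```python
GAS_COST = {"ADD": 3, "MUL": 5, "SSTORE": 5000}  # illustrative subset only

def translate_with_nop(highlevel_instr, opcode, keep_nops=True):
    """Pair a high-level instruction with a nop(...) record of its origin."""
    instrs = [highlevel_instr]
    if keep_nops:
        instrs.append(f"nop({opcode})")          # record originating bytecode
    return instrs

def gas_of_rule(instrs):
    # Sum the gas of the opcodes recorded in nop(...) annotations.
    total = 0
    for ins in instrs:
        if ins.startswith("nop(") and ins.endswith(")"):
            total += GAS_COST.get(ins[4:-1], 0)
    return total

rule = (translate_with_nop("s2 = s0 + s1", "ADD")
        + translate_with_nop("g0 = s2", "SSTORE"))
```

Since the nop annotations are semantically inert, analyses that do not care about gas can simply ignore them.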
Given a block B in a CFG, with its sequence of instructions starting at PC x and a local-variables map lmap, the generated rules are:
where the definitions of these translation functions for some representative bytecodes are:
The guard of the conditional jump starts at some instruction index within the block; the condition ends just before the PUSH of the jump address, which always immediately precedes the JUMPI. Since the pushed address (which we already have in the CFG) and the result of the condition are consumed by the JUMPI, we do not store them in stack variables.
A counter represents the size of the stack for the block; initially, it is the number of stack variables received as parameters.
Fresh variables, local to each rule, are used to represent the uses of SLOAD and SSTORE, and of MLOAD and MSTORE, when the given address is not a concrete number. For SLOAD and MLOAD we also use a generator of fresh variables to safely represent the unknown value of the loaded address.
As an example, an excerpt of the RBR obtained by translating the three blocks on the right-hand side of Fig. 1 is as follows (selected instructions keep, by means of nop annotations, the bytecode from which they have been obtained):