1. Introduction
Symbolic execution is a code analysis technique which reasons about sets of input values that drive the program to a specified state (King, 1976b). Certain inputs are marked as symbolic, and the analysis gathers symbolic constraints on these values by analyzing the operations along a path of a program. Satisfying solutions to these constraints are concrete values that cause the program to execute the analyzed path leading to a particular state of interest. Manipulating these constraints allows one to reason about the reachability of different paths and states, thereby serving to guide search in the execution space efficiently. Symbolic execution, especially its mixed-dynamic variant, has been widely used in computer security. Its prime application over the last decade has been in whitebox fuzzing, with the goal of discovering software vulnerabilities (Godefroid et al., 2012; Godefroid et al., 2008; Saxena et al., 2010a). More broadly, it has been used for patching (Perkins et al., 2009; Daniel et al., 2010), invariant discovery (Gupta and Rybalchenko, 2009), and verification to prove the absence of vulnerabilities (Jaffar et al., 2012; Coen-Porisini et al., 2001). Off-the-shelf symbolic execution tools targeting languages such as C/C++ (Siegel et al., 2015), JavaScript (Li et al., 2014; jal, 2018), Python (Canini et al., 2012; PyE, 2018), and executable binary code (Chipounov et al., 2011) are available.
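The constraint-gathering idea above can be sketched in a few lines. The toy below is an illustrative assumption rather than any engine's real API: it records the branch condition along one path of a two-line program as a constraint over a symbolic input, then searches a small domain for a satisfying concrete input (production engines derive constraints automatically and discharge them to SMT solvers).

```python
# Toy illustration: record the branch condition along one path of a
# tiny program as a constraint over a symbolic input, then find a
# concrete input satisfying it. Real engines use SMT solvers rather
# than enumeration; this is only a sketch of the idea.

def path_constraints_then_branch():
    # Program under analysis:
    #   y = 2 * x + 1
    #   if y > 10: ...        <- path of interest: the 'then' branch
    # Constraint collected along that path: 2*x + 1 > 10
    return [lambda x: 2 * x + 1 > 10]

def solve(constraints, domain):
    # Stand-in for a constraint solver: enumerate a small domain.
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None  # UNSAT over this domain

witness = solve(path_constraints_then_branch(), range(-100, 100))
print(witness)  # 5 is the smallest x in the domain with 2*x + 1 > 10
```

Negating the recorded branch condition and re-solving would yield an input for the 'else' path, which is exactly how path exploration proceeds.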
Symbolic analysis is a powerful technique; however, it has a number of limitations in practical applicability. First, symbolic analysis is classically designed as a deductive procedure, requiring complete modeling of the target language (e.g., C vs. x64). A set of logical rules specific to the target language describes how to construct symbolic constraints for operations in that language (Saxena et al., 2010b; Jeon et al., 2012). As new languages emerge, such symbolic analysis needs to be re-implemented for each language. More importantly, if certain functionality of a program is unavailable for analysis (either because it is implemented in a language different from the target language, or because it is accessible only as a closed, proprietary service), then such functionality cannot be analyzed.
Second, the reasoning about symbolic constraints is limited to the expressiveness of the theories supported by the underlying satisfiability checkers (e.g., SAT/SMT solvers) (Baldoni et al., 2018). Symbolic analysis typically uses quantifier-free, decidable theories in first-order logic, and satisfiability solvers have well-known limits (Ábrahám, 2015). For instance, nonlinear arithmetic over reals is not well supported in existing solvers, and string support is relatively new and still an area of active research (Zheng et al., 2013; Ganesh et al., 2011). When program functionality does not fall within the supported theories, the analysis either precludes such functionality altogether, or encodes it abstractly using supported theories (e.g., arrays, bitvectors, or uninterpreted functions).
Third, symbolic analyses often enumeratively analyze multiple paths in a program. Complex control flows and looping structures are well known to be missed by state-of-the-art implementations, which have attracted best-effort extensions to the basic technique that do not offer generality (Saxena et al., 2009). In particular, dynamic symbolic execution is known to suffer from scalability issues in long-running loops containing a large number of acyclic paths across iterations, owing to loop unrolling and path explosion (Cadar and Sen, 2013; Xie et al., 2009).
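The path-explosion arithmetic is easy to make explicit: a loop whose body contains a single two-way branch induces exponentially many acyclic paths in the number of iterations. A one-function sketch:

```python
# A loop whose body contains one two-way branch has 2^k acyclic paths
# after k iterations, so enumerative path exploration quickly becomes
# infeasible; this sketch just makes the count explicit.

def num_acyclic_paths(iterations, branches_per_iteration=2):
    return branches_per_iteration ** iterations

print(num_acyclic_paths(30))  # 1073741824: over a billion paths
```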
1.1. Neuro-Symbolic Execution
In this paper, we aim to improve the expressiveness of symbolic execution to reason about parts of the code that are not expressible in the theories supported by the symbolic language (including its SMT theories), too complex, or simply unavailable in analyzable form. We present a technique called neuro-symbolic execution, which accumulates two types of constraints: standard symbolic constraints (derived deductively) and neural constraints (learned inductively). Neural constraints capture relations between program variables of code that are not expressible directly as purely symbolic constraints. The representation of these constraints is chosen to be a neural network (or neural net) in this work. Constraints combining both symbolic and neural constraints are called neuro-symbolic.
Our procedure infers and manipulates neural constraints using only two generic interfaces, namely learn and check satisfaction. The first interface learns a neural network given concrete values of variables and an objective function to optimize. The second interface checks satisfiability: given an output value for a neural network, it determines whether some input evaluates to it. Both of these can be instantiated by many different procedures; we present a specific set of algorithms in this work for concreteness. We believe the general framework can be extended to other machine learning models that can implement such interfaces.
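As a rough sketch, the two interfaces could be captured by an abstract class like the following; the names `learn` and `check_sat` and their signatures are our illustrative assumptions, not NeuEx's actual API:

```python
# Sketch of the two generic interfaces the technique requires of a
# learned-constraint representation. Names and signatures are
# illustrative assumptions, not NeuEx's actual API.

from abc import ABC, abstractmethod
from typing import Callable, Optional, Sequence, Tuple

class LearnedConstraint(ABC):
    @abstractmethod
    def learn(self, samples: Sequence[Tuple], objective: Optional[Callable]) -> None:
        """Fit a model to concrete (input, output) samples, minimizing
        the given objective function."""

    @abstractmethod
    def check_sat(self, output):
        """Given a desired output value, return an input the learned
        model maps to it, or None if the search fails."""
```

Any model class implementing these two methods (a neural net trained by gradient descent, or even a lookup table) could in principle be plugged into the framework.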
Our choice of representation via neural networks is motivated by two observations. First, neural nets can approximate or represent a large class of functions, as implied by the universal approximation theorem (Funahashi, 1989; Hornik, 1991); in practice, a growing body of empirical results shows that they are learnable for many practical functions (Andoni et al., 2014; Godfrey and Gashler, 2015). Although specialized training algorithms continue to emerge (Qian, 1999; Kingma and Ba, 2015), we expect that neural networks will prove effective in learning approximations to many useful functions encountered in practice. Second, neural nets are a differentiable representation, often trained using optimization methods such as gradient descent (Ruder, 2016). This differentiability allows efficient analytical techniques to check the satisfiability of neural constraints and to produce satisfying assignments of values to variables (Goodfellow et al., 2015; Papernot et al., 2016), analogous to the role of SMT solvers for purely symbolic constraints. One of the core technical contributions of this work is a procedure to solve neuro-symbolic constraints: checking satisfiability and finding assignments for variables involved in neural and symbolic constraints simultaneously, with good empirical accuracy on the benchmarks tested.
Inductive synthesis of symbolic constraints usable in symbolic analyses has been attempted in prior work (Nguyen et al., 2014, 2017; Ernst et al., 2007). One notable difference is that our neural constraints are a form of unstructured learning, i.e., they approximate a large class of functions and do not aim to produce constraints in a symbolic form amenable to SMT reasoning. Prior constraint synthesis works predetermine a fixed template or structure for the symbolic constraints, for instance octagonal inequalities (Nguyen et al., 2014), low-degree polynomial equalities over integers (Ernst et al., 2007), and so on. Each such template-based learning approach comes with a specialized learning procedure, and either resorts to standard SMT solvers for solving constraints or has hand-crafted procedures specialized to each template type. As a result, these techniques have found limited applicability in widely used symbolic execution analyses. As a side note, when the code being approximated does not fall within the chosen template structure, prior works resort to brute-force enumeration of templates to fit the samples.
1.2. Applications & Results
Neuro-symbolic execution has the ability to reason about purely symbolic constraints, purely neural constraints, and mixed neuro-symbolic constraints. This approach has a number of possible future applications, including but not limited to: (a) analyzing protocol implementations without analyzable code (Cui et al., 2007); (b) analyzing code with complex dependency structures (Xie et al., 2016); and (c) analyzing systems that embed neural networks directly as subcomponents (Bojarski et al., 2016).
To anchor our proposal, we focus on the core technique of neuro-symbolic execution through the lens of one application: finding exploits for buffer overflows. In this setting, we show that neuro-symbolic execution can be used to synthesize neural constraints from parts of a program to which the analysis only has black-box executable access. The program logic can have complex dependencies and control structure, and the technique does not need to know the operational semantics of the target language. We show that for many real programs, our procedure can learn moderately accurate models, incorporate them with symbolic memory safety conditions, and solve them to uncover concrete exploits.
Tool.
We build a prototype tool (called NeuEx) to perform neuro-symbolic execution of C programs, where the analyst specifies which parts of the code they want to treat as a black box, together with a (symbolic) memory-unsafety condition capturing an exploit. NeuEx uses standard training algorithms to learn a neural net that approximates the black-box functionality and conjoins it with the other symbolic constraints. Next, NeuEx employs a new procedure to solve the symbolic and neural constraints simultaneously, yielding satisfying assignments with high probability. The tool is constructive, in that it produces concrete values for free variables in the constraints, which can be tested as candidate exploits.
Results.
Our main empirical results are twofold. First, we select a benchmark with difficult constraints, known to require specialized extensions to symbolic execution. We show that NeuEx finds exploits for of programs in the benchmark. Our results are comparable to binary-level symbolic execution tools (Saxena et al., 2009), with little knowledge of the semantics of the target code and the specific language. The second empirical experiment analyzes two benchmarks used in prior works on invariant synthesis for verification and program synthesis (Nguyen et al., 2014, 2017). They comprise programs with loops and input variables in total. Given the neuro-symbolic constraints, NeuEx successfully solves 100% of the neuro-symbolic constraints for these benchmarks.
Contributions.
We make the following contributions:

Neuro-Symbolic Constraints. NeuEx represents the relationship between variables of code as a neural net, without knowledge of the code's semantics or language, and then conjoins it with symbolic constraints.

Neuro-Symbolic Constraint Solving. NeuEx casts constraint solving as a search problem, encoding the symbolic constraints as an objective function that is optimized together with the neural net to check their satisfiability.

Evaluation. NeuEx successfully constructs exploits for out of vulnerable programs, which is comparable to binary-level symbolic execution (Saxena et al., 2009). In addition, NeuEx solves 100% of the given neuro-symbolic constraints over programs comprising input variables in total.
2. Overview
Symbolic execution is useful in a variety of security-related applications. In this work, we focus on the challenges within symbolic execution and present a solution that generalizes across various kinds of programs.
2.1. Motivation and Challenges
We outline a set of challenges posed to symbolic execution with the help of a real-world example from an HTTP server.
Motivating Example.
Consider the simplified example of parsing an HTTP request shown in Figure 1. The code extracts the fields (e.g., uri and version) from the request and constructs a new message for further processing. Our goal is to check whether there exists any buffer overflow in this program; if so, we find an exploit that triggers the overflow. As shown in Figure 1, on Lines 4-5, the function process_request takes one input input and checks whether input starts with ‘GET ’. On Lines 6-14, it extracts the URI and version from input by searching for the delimiters ‘ ’ and ‘\n’ respectively. Then, on Lines 15-16, the function checks whether the program supports the request based on the version. Finally, it concatenates the version and URI with the delimiter ‘,’ into a buffer msgbuf on Lines 17-22. There exists a buffer overflow on Lines 21-22, as the pointer ptr may exceed the boundary of msgbuf.
Challenge 1: Complex Dependency.
To discover this buffer overflow via purely symbolic analysis, the technique has to reason about a complex dependency structure between the input and the variables of interest. Assume that the analyst has some knowledge of the input format, namely that the input has two fields, URI and version, separated by ‘ ’ and ‘\n’, and knows the allocated size of msgbuf (which is 100). By analyzing the program, the analyst knows that the vulnerable condition for msgbuf is ptr ≥ 100, which leads to a buffer overflow. Note that the path executed for reaching the vulnerability point on Lines 21-22 involves updates to a number of variables (on Lines 8 and 13) which do not have a direct data dependency chain (rather a sequence of control dependencies) on the target variable ptr. Specifically, uri_len and ver_len are dependent on input, which in turn controls ptr and the iterations of the vulnerable loop. Further, the relationship between uri_len, ver_len, and ptr involves reasoning over the conditional statements on Lines 4 and 15, which may lead to early termination of the function. Therefore, without specialized heuristics (e.g., loop-extension (Saxena et al., 2009)), state-of-the-art solvers resort to enumeration (Cadar et al., 2008). For example, KLEE enumerates characters of input over ‘ ’ and ‘\n’ until the input passes the check on Line 15 and ver_len+uri_len>98.

The unavailability of source code is another challenge for capturing complex dependencies between variables, especially when functions are implemented as a remote call or a library call written in a different language. For example, symbolic execution may abort on calls to native Java methods and unmanaged code in .NET, as the symbolic values flow outside the boundary of the target code (Anand et al., 2007). To handle this challenge, symbolic execution has to hard-code models for these unknown function calls, which requires considerable manual expertise. Even though symbolic execution tools often provide hand-crafted models for analyzing system calls, they do not precisely capture all behaviors (e.g., the failure of system calls) (Cadar et al., 2008). Thus, the constraints generated by purely symbolic execution cannot capture the real behavior of such functions, which leads to failures in vulnerability detection.
Challenge 2: Lack of Expressiveness.
Additional challenges can arise in such analysis due to the complexity of the constraints and the lack of a backend theory for solving them. As shown in Figure 1, the function is equivalent to a replacement based on regular expressions: it replaces a request of the form "GET␣" URI "␣" Version "\n" with a message of the form URI "," Version "\0" on Lines 4-22 (each field matches as many characters as possible). The complex relationship between the input and the target buffer makes it infeasible for symbolic execution to capture it. Moreover, even if the regular expression is successfully extracted, the symbolic engine may not be able to solve it, as the embedded SAT/SMT solver cannot express certain theories (e.g., string replacement and nonlinear arithmetic). Although prior works have targeted these theories, current support for nonlinear real and integer arithmetic is still in its infancy (Ábrahám, 2015).
2.2. Our Approach
To address the above challenges, we propose a new approach with two main insights: (1) leveraging the high representational capacity of neural nets to learn constraints that symbolic execution cannot feasibly capture; and (2) encoding the symbolic constraints into neural form and leveraging optimization algorithms to solve the neuro-symbolic constraints as a search problem.
NeuEx departs from the purist view that all variable dependencies and relations should be expressible precisely in symbolic form. Instead, NeuEx treats the entire code from Lines 4-22 as a black box and inductively learns a neural network, an approximate representation of the logic mapping the variables of interest to target variables. The constraint represented by the neural network is termed a neural constraint. This neural constraint, say N, can represent relationships that may or may not be representable as symbolic constraints. Our approach then creates a neuro-symbolic constraint, which includes both symbolic and neural constraints. Such neural constraint learning addresses the first challenge above, as it learns constraints from test data rather than source code.
Revisiting the example in Figure 1, the neuro-symbolic constraints capturing the vulnerability at the last control location on Line 22 are as follows:

(1) uri_length = strlen(input_uri)
(2) ver_length = strlen(input_version)
(3) ptr ≥ 100
(4) N(uri_length, ver_length) = ptr
where uri_length is the length of the uri field input_uri and ver_length is the length of the version field input_version in input. (input_uri and input_version are the contents of the fields generated from input based on knowledge of the input format, which differ from URI and version in Figure 1.) The first two constraints are symbolic constraints over the input fields, uri_length and ver_length. The third symbolic constraint captures the vulnerable condition for msgbuf. The last constraint is a neural constraint capturing the relationship between the variables uri_length and ver_length and the variable ptr accessing the vulnerable buffer msgbuf.

Table 1. Syntax of the neuro-symbolic constraint language.

  NS           ::= S | N | S ∧ N
  N            ::= (a neural constraint, represented by a neural net)
  S            ::= e1 ∧ e2 | ¬e | e
  Variable     ::= StrVar | ConstStr | StrVar · StrVar
                 | NumVar | ConstNum | NumVar ⊕ NumVar
  Expression e ::= contains(StrVar, StrVar)
                 | strstr(StrVar, StrVar) ⊗ NumVar
                 | strlen(StrVar) ⊗ NumVar
                 | NumVar ⊗ NumVar
  Logical      ::= ∧ | ¬
  Conditional ⊗ ::= == | ≠ | > | < | ≥ | ≤
  Arithmetic ⊕ ::= + | − | * | /
To the best of our knowledge, our approach is the first to train a neural net as a constraint and to solve symbolic and neural constraints together. In our approach, we design an intermediate language, termed the neuro-symbolic constraint language. Table 1 presents its syntax as supported by NeuEx, which is expressive enough to model constraints arising in many real applications, such as string and arithmetic constraints.
Given the learned neuro-symbolic constraints, we seek values of the variables of interest that satisfy all the constraints. There exist multiple approaches to solve neuro-symbolic constraints. One naive way is to solve the neural and symbolic constraints separately. For example, consider the neuro-symbolic constraints in Equations 1-4. We first solve the three symbolic constraints with SAT/SMT solvers and discover an input_uri where uri_length=10, an input_version where ver_length=20, and a ptr whose value is 100. Then, we feed the values of uri_length, ver_length, and ptr to the neural constraint to check whether they satisfy the learned relationship. In this case, the neural constraint may produce an output such as 32 for ptr when uri_length=10 and ver_length=20. Although this is a valid satisfiability result for the neural constraint, ptr=100 is not satisfiable for the current input_uri and input_version. This discrepancy arises because we solve the two types of constraints individually, without considering the interdependency of variables across them. Alternatively, one could resort to enumeration over the values of these three variables; however, this can take prohibitively long to discover the exploit.
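The discrepancy can be reproduced with a stand-in model. Below, a hypothetical linear function plays the role of the trained neural constraint; solving the symbolic side in isolation picks ptr=100, which the model then contradicts:

```python
# Reproducing the discrepancy with a stand-in model: the hypothetical
# linear function below plays the role of the trained neural
# constraint N(uri_length, ver_length) -> ptr.

def neural_ptr(uri_length, ver_length):
    return uri_length + ver_length + 2  # stand-in, not a real model

# Step 1: solving the symbolic constraints alone may yield, e.g.,
# uri_length=10, ver_length=20, ptr=100.
uri_length, ver_length, ptr = 10, 20, 100

# Step 2: the neural constraint disagrees: it maps (10, 20) to 32,
# not 100, so the independently obtained solutions are inconsistent.
consistent = neural_ptr(uri_length, ver_length) == ptr
print(consistent)  # False
```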
This inspires our design of neuro-symbolic constraint solving. NeuEx's solving precedence is purely symbolic, purely neural, and mixed constraints, in that order. Solving pure constraints is straightforward (de Moura and Bjørner, 2008; Ruder, 2016). The main technical novelty in our design is that NeuEx treats mixed constraint solving as a search problem and utilizes optimization algorithms to search for satisfying solutions. To solve the mixed constraints simultaneously, NeuEx converts symbolic constraints into a loss function (or objective function) whose minimization guides the search, thereby enabling the conjunction of symbolic and neural constraints.
3. Design
NeuEx is the first tool to solve neuro-symbolic constraints. We first explain the NeuEx setup and the building blocks we use in our approach. Then, we present the core constraint solver of NeuEx along with various optimization strategies.
3.1. Overview
Symbolic execution is a generic technique to automatically construct inputs that drive a program's execution to a specific point in the code. To this end, a typical symbolic execution framework takes in a program and a vulnerable condition for which we want to test the program. The analyst using the framework also needs to mark the variables of interest as symbolic. Typically, all input variables are marked symbolic irrespective of their type. Further, environment variables, implicit inputs, user events, and storage devices can also be marked as symbolic based on the use case (Cadar et al., 2006; Cadar et al., 2008; Saxena et al., 2010b). The framework then generates a set of test inputs to exercise execution paths in the program. The analyst can aid this process by providing hints, such as the input grammar, which they know beforehand.
At each branch in the execution, the framework logs the symbolic constraints collected so far as the path conditions required to reach this code point. Specifically, the logical conjunction of all the symbolic constraints gives the path constraints that an input must satisfy to reach this code point. Solving the symbolic path constraints with a constraint solver yields a concrete input that reaches this execution point in the program. The framework may also negate some of the symbolic constraints to explore other paths, or introduce a feedback loop that uses the concrete input values returned by the constraint solver as new inputs to increase path coverage. Figure 2 shows how NeuEx interacts with one such symbolic engine. It takes in symbolic constraint formulas in conjunctive normal form and returns concrete values for each symbolic variable in the formula if the path is feasible; otherwise, it returns UNSAT, which implies that the path is infeasible. Various solvers support a wide range of theories including, but not limited to, linear and nonlinear arithmetic over integers, booleans, bitvectors, arrays, and strings.
However, no existing theory solves neural constraints in conjunction with symbolic constraints. A symbolic execution framework may need to solve neural constraints for programs that invoke a neural network for parts of the execution. For example, if a web application uses a face recognition module before granting access to a critical feature, a traditional symbolic framework will not be able to get past it. Furthermore, symbolic execution is well known to fare badly on complex pieces of code involving loops (Saxena et al., 2009). Thus, whenever the symbolic engine's default constraint solver cannot find a solution and reaches its timeout, the framework can pause its execution and automatically trigger an alternative mechanism. This is where a neural constraint solver comes into play. If the framework is armed with a neural constraint solver such as NeuEx, it can model parts of the program as a black box and invoke the neural counterpart to solve the constraints. Specifically, the framework can dispatch all the symbolic constraints it has collected so far along with the piece of code it wants to treat as a black box. NeuEx in turn first adds all the symbolic constraints to the neuro-symbolic constraints and then queries its constraint solver to produce concrete inputs or return UNSAT. In fact, any piece of code can be modeled in terms of neural constraints to leverage NeuEx. NeuEx is generic in design, as it can plug into any symbolic execution engine of choice. It only requires the symbolic execution tool to provide two interfaces: one for outputting the symbolic constraints and the other for querying the SAT/SMT solvers, as shown in Figure 2. Table 1 shows the grammar that NeuEx's constraint solver can reason about. For our example in Figure 1, we want to infer the relations between the input HTTP request and the variable indexing the vulnerable buffer msgbuf, so the symbolic framework passes the following constraints to NeuEx:

(5) uri_length = strlen(input_uri) ∧ ver_length = strlen(input_version) ∧ ptr ≥ 100 ∧ N(uri_length, ver_length) = ptr
3.2. Building Blocks
NeuEx’s core engine solves neuro-symbolic constraints such as Equation 5 using a custom constraint solver detailed in Section 3.3. It relies on two existing techniques: a SAT/SMT solver and a gradient-based neural solver. These solvers, referred to as SymSolv and NeuSolv respectively, form the basic building blocks of NeuEx.
SymSolv.
NeuEx’s symbolic constraint solver takes in first-order quantifier-free formulas over multiple theories (e.g., the empty theory, the theory of linear arithmetic, and strings) and returns UNSAT or concrete values as output. It internally employs the Z3 theorem prover (de Moura and Bjørner, 2008) as an SMT solver to solve both arithmetic and string symbolic constraints.
NeuSolv.
For solving purely neural constraints, NeuSolv takes in the neural net and the associated loss function and generates the expected values of the output variables. NeuEx treats neural constraint solving as a search problem and uses a gradient-based search algorithm to find satisfying results. The algorithm searches for a minimum of a given loss function L(x), where x is an n-dimensional vector (Ruder, 2016). The loss function can be any differentiable function that measures the error between the objective and the current prediction. Consider the example in Figure 1. The objective of NeuEx is to check whether the index ptr overruns the boundary of msgbuf. Hence, the error is the distance between the value of ptr leading to the buffer overflow and the value of ptr on Line 22 computed by process_request on the current input. By minimizing this error, NeuEx can discover the input closest to an exploit. To minimize the error, the gradient-based search algorithm starts from a random input x_0, the initial state of NeuSolv. At every iteration t, it computes the derivative ∇L(x_t) for the current input x_t and then updates the input accordingly, based on the observation that the derivative of a function points toward a local nearest valley. The updated input is defined as:

(6) x_{t+1} = x_t − α∇L(x_t)

where α is the learning rate that controls how much the input is updated. The gradient-based search algorithm keeps updating the input until it reaches a local minimum. To avoid non-termination, we set a maximum number of iterations t_max; if it is exceeded, NeuSolv stops and returns the current result. Note that gradient-based search can only find a local minimum, since it stops when the error increases. If the loss function is non-convex with multiple local minima, the local minimum found may not be the global one; moreover, different initial states may lead to different local minima. Thus, NeuEx executes the search algorithm multiple times with different initial states in order to find the global minimum of L(x).
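A minimal sketch of this search loop follows, with the gradient update rule, a maximum iteration count, and random restarts. The central-difference numeric gradient and the particular non-convex example loss are illustrative assumptions; NeuSolv would differentiate a trained network directly.

```python
import random

# Sketch of NeuSolv-style gradient search: x is updated against the
# gradient of the loss, iteration is capped, and random restarts
# mitigate local minima, mirroring the strategy described above.

def grad(loss, x, eps=1e-6):
    # Central-difference estimate of dL/dx (stand-in for backprop).
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def gradient_search(loss, lr=0.001, t_max=5000, restarts=10, tol=1e-8):
    best_x, best_val = None, float("inf")
    rng = random.Random(0)               # fixed seed for reproducibility
    for _ in range(restarts):
        x = rng.uniform(-4.0, 4.0)       # fresh random initial state
        for _ in range(t_max):
            g = grad(loss, x)
            x -= lr * g                  # gradient update step
            if abs(g) < tol:             # reached a (local) minimum
                break
        if loss(x) < best_val:           # keep the best local minimum
            best_x, best_val = x, loss(x)
    return best_x

# Non-convex loss with a spurious local minimum near -3 and the global
# minimum at exactly 3; restarts let the search escape the wrong valley.
loss = lambda x: (x * x - 9) ** 2 + 0.1 * (x - 3) ** 2
x_best = gradient_search(loss)
```

Running several seeded restarts and keeping the lowest-loss result is exactly the multiple-initial-states strategy described in the text.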
3.3. Constraint Solver
We propose a constraint solver that solves neuro-symbolic constraints with the help of SymSolv and NeuSolv. If the solver returns SAT, the neuro-symbolic constraints are guaranteed to be satisfiable; however, the solver is not guaranteed to decide satisfiability in all cases within the timeout. Algorithm 1 shows the precise algorithm for the neuro-symbolic constraint solver.
DAG Generation.
NeuEx takes the neuro-symbolic constraints and generates a directed acyclic graph (DAG) over constraints and their variables. Each vertex of the DAG represents a variable or a constraint, and an edge indicates that the variable is involved in the constraint. For example, Figure 3 shows the generated DAG for an example set of constraints, where ⊙ can be any operator.
Next, NeuEx partitions the DAG into connected components by breadth-first search (Bundy and Wallen, 1984). Consider the example shown in Figure 3. There are 5 constraints, partitioned into three connected components. NeuEx topologically sorts the components based on the type of constraints to schedule the solving sequence. Specifically, it classifies components containing only one kind of constraint as pure components and components containing both kinds as mixed components. It further subcategorizes pure components into purely symbolic and purely neural ones.
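The DAG-partitioning step can be sketched as follows; the constraint names and variable sets are hypothetical, arranged so that five constraints split into three connected components as in the example:

```python
from collections import defaultdict, deque

# Sketch of DAG generation and partitioning: constraints and variables
# become vertices, an edge links a constraint to each variable it
# mentions, and BFS yields the connected components that can be solved
# independently.

def connected_components(constraints):
    # constraints: dict mapping constraint name -> set of variable names
    adj = defaultdict(set)
    for c, vs in constraints.items():
        for v in vs:
            adj[c].add(v)
            adj[v].add(c)
    seen, components = set(), []
    for start in constraints:            # BFS from each unseen constraint
        if start in seen:
            continue
        seen.add(start)
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            comp.add(n)
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        # report only the constraint vertices of each component
        components.append({c for c in comp if c in constraints})
    return components

example = {
    "sym1": {"a"}, "sym2": {"a", "b"},       # purely symbolic component
    "sym3": {"x", "y"}, "neu1": {"y", "z"},  # mixed component
    "neu2": {"w"},                           # purely neural component
}
comps = connected_components(example)
print(len(comps))  # 3
```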
NeuEx assigns solving precedence: pure components first, then mixed components. NeuEx solves the mixed constraints last because they span different representations and are hence time-consuming to solve. Thus, in our example, NeuEx first solves the pure components and then checks the satisfiability of the mixed component.
Pure Constraint Solving.
For pure constraints, we first apply SymSolv to solve purely symbolic constraints on Line 3 and then handle purely neural constraints using NeuSolv on Line 9. Note that the order of these two kinds of constraints does not affect the result. We solve purely symbolic constraints first because SymSolv is fast, while the search algorithm for neural constraints requires numerous iterations and may not terminate. If SymSolv reports UNSAT for the purely symbolic constraints, the whole neuro-symbolic constraint system is UNSAT, as all the constraints are conjoined. Such early UNSAT detection speeds up satisfiability checking. If both solvers output SAT, NeuEx continues with the mixed constraints.
Mixed Constraint Solving I.
NeuEx obtains the symbolic constraints from the mixed components by cutting the edges between the neural constraints and their variables. Then, NeuEx invokes SymSolv to check their satisfiability on Line 20. If the solver returns UNSAT, NeuEx goes to the UNSAT state; otherwise, NeuEx collects the concrete values of the variables used in these symbolic constraints and plugs them into the neural constraints on Line 24. For example, in Figure 3, once SymSolv returns concrete values for the variables shared with the neural constraint, NeuEx partially assigns those variables in the neural constraint to the returned values. This yields a partially assigned neural constraint; all that remains is to search for values of the remaining variables that satisfy it.
To solve such a partially assigned neural constraint, NeuEx employs NeuSolv on Line 26. If NeuSolv outputs SAT, NeuEx goes to the SAT state, where it terminates and returns SAT together with the combined satisfiability results for all the constraints. If NeuSolv outputs UNSAT, NeuEx considers the satisfiability result of the symbolic constraints as a counterexample and derives a conflict clause on Line 19, i.e., a new clause blocking this assignment. NeuEx then adds this clause (Line 34) and queries SymSolv with the new symbolic constraints (Line 20). This method of adding conflict clauses is similar to backtracking in the DPLL algorithm (Davis et al., 1962). Although the conflict clause learning approach used in NeuEx is simple, NeuEx is generic enough to adopt more advanced strategies for constraint solving (Silva and Sakallah, 1997; Moskewicz et al., 2001; Liang et al., 2016).
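The counterexample-blocking loop just described can be sketched as below. Both "solvers" are stand-ins (domain enumeration instead of SMT, and a fixed arithmetic relation instead of a trained network), chosen only to show the structure of the backtracking:

```python
# Sketch of Mixed Constraint Solving I: ask the symbolic solver for a
# model, check it against the neural constraint, and on disagreement
# add a conflict clause blocking that model (DPLL-style backtracking).

def sym_solve(constraints, blocked, domain):
    # Toy SymSolv: first (u, v) satisfying all constraints and not
    # excluded by an accumulated conflict clause.
    for u in domain:
        for v in domain:
            if (u, v) not in blocked and all(c(u, v) for c in constraints):
                return (u, v)
    return None  # UNSAT

def solve_mixed(constraints, model, target, domain, max_trials=100):
    blocked = set()                      # accumulated conflict clauses
    for _ in range(max_trials):
        m = sym_solve(constraints, blocked, domain)
        if m is None:
            return None                  # symbolic side is UNSAT
        if model(*m) == target:
            return m                     # SAT: neural side agrees
        blocked.add(m)                   # block this counterexample
    return None                          # threshold hit: fall back

# Hypothetical instance: symbolically we need u + v >= 10, while the
# "neural" relation u * v must equal 24.
witness = solve_mixed([lambda u, v: u + v >= 10],
                      lambda u, v: u * v, 24, range(0, 11))
print(witness)  # (3, 8): found after blocking 7 counterexamples
```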
The above mixed constraint solving keeps executing the backtracking procedure until it finds no new counterexample. Consider the example in Figure 1. NeuEx first finds an input with uri_length=10, ver_length=30, and ptr=100. However, the result generated in this trial does not satisfy the neural constraint. NeuEx then transforms this counterexample into a conflict clause and proceeds to the next trial to discover a new result. Such trials can be very expensive: for the example in Section 2, mixed constraint solving takes a large number of trials in the worst case, even after augmenting the constraints with additional information about the value of ptr. To speed up mixed solving, NeuEx limits the number of trials to a threshold value.
Specifically, if mixed constraint solving I does not reach a SAT decision within a threshold number of iterations (users can adapt this threshold to their applications), NeuEx applies an alternative strategy that combines the symbolic constraints with the neural constraints. There are two possible strategies: transforming neural constraints into symbolic constraints, or the other way around. However, collapsing neural constraints into symbolic constraints incurs massive encoding clauses; for example, merely encoding a small binarized neural network generates millions of variables and millions of clauses
(Narodytska et al., 2017). Thus, we transform the mixed constraints into purely neural constraints and solve them together.

Mixed Constraint Solving II.
NeuEx collapses symbolic constraints to neural constraints by encoding the symbolic constraints to a loss function on Line 36. This ensures the symbolic and neural constraints are in the same form. For example, in Figure 3, NeuEx transforms the constraint and into a loss function of .
Once the symbolic constraints are encoded into neural constraints, NeuEx applies NeuSolv to minimize the loss function on Line 38. The main intuition behind this approach is to guide the search with the help of the encoded symbolic constraints. The loss function measures the distance between the current result and a satisfying result of the symbolic constraints. The search algorithm gives us a candidate value for the satisfiability check of the neural constraints. However, a candidate value generated by minimizing this distance may not always satisfy the symbolic constraints, since the search algorithm only tries to minimize the loss rather than enforce the satisfiability of the symbolic constraints. To weed out such cases, NeuEx checks the satisfiability of the symbolic constraints by plugging in the candidate value and querying SymSolv on Line 39. If the result is SAT, NeuEx goes to the SAT state. Otherwise, NeuEx continues executing Approach II with a different initial state of the search algorithm. For example, in Figure 3, NeuEx changes the initial value of for every iteration. Note that each iteration in Approach I has to execute sequentially, because the addition of the conflict clause forces serialization. In contrast, each trial in Approach II is independent and thus embarrassingly parallelizable.
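Approach II can be sketched as independent restarts of gradient descent over the combined loss, each followed by a symbolic check (a toy, one-dimensional example; all names are illustrative, not NeuEx's interface):

```python
import random

# Numerical gradient, for self-containment.
def grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def approach_II(loss, check_sat, max_trials=10, steps=2000, lr=0.1):
    rng = random.Random(0)
    for _ in range(max_trials):          # trials are independent -> parallelizable
        x = rng.uniform(-10, 10)         # fresh initial state per trial
        for _ in range(steps):           # minimize the combined loss
            x -= lr * grad(loss, x)
        if check_sat(x):                 # SymSolv-style check on the candidate
            return x
    return None

# Toy mixed constraint: neural part prefers (x-3)^2 small; the symbolic part
# x >= 2 is encoded as the hinge loss max(2 - x, 0).
loss = lambda x: (x - 3) ** 2 + max(2 - x, 0)
check = lambda x: x >= 2
x = approach_II(loss, check)
```

Minimizing the combined loss drives the candidate toward x = 3, which also satisfies the symbolic part, so the first trial already passes the check.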
To avoid the nontermination case, NeuEx sets the maximum number of trials for mixed constraint solving II to be , which can be configured independently of our constraint solver. Empirically, we notice that the mixed constraint solving II is always able to find the satisfiability result for complex constraints before hitting the threshold of 10.
3.4. Encoding Mixed Constraints
Solving mixed constraints takes up most of NeuEx's time. We reduce this overhead by transforming them into purely neural constraints. Specifically, NeuEx encodes the symbolic constraints as a loss function satisfying:
(7) 
Next, NeuEx uses this loss function along with the neural constraints and applies NeuSolv to minimize the loss of the entire set of mixed constraints. This encoding has two main advantages. First, it is straightforward to encode symbolic constraints into a loss function. Second, gradient-based search algorithms exist for minimizing the loss function, which speeds up constraint solving in NeuEx.
Generic Encoding.
As long as we have a loss function for the symbolic constraints, we can apply NeuSolv to solve the mixed constraints. Given the grammar of symbolic constraints shown in Table 1, there are six types of symbolic constraints and two kinds of combinations between two symbolic constraints, based on their logical operators. Table 2 describes the loss function for each form of symbolic constraint. Taking as an example, the loss function achieves its minimum value when , where and can be arbitrary expressions. Thus, minimizing the loss function is equivalent to solving the symbolic constraint. Similar reasoning explains the equivalence between the other kinds of symbolic constraints and their loss functions. These loss functions are not the only possible choices: any function satisfying Equation 7 can be used as a loss function, and the same encoding mechanism applies to the other constraints. Note that the encoding mechanism must meet three requirements.
NonZero Gradient Until SAT.
The derivative of the loss function should not be zero until we find a satisfying result. For example, when we encode , the derivative of the loss function should not be equal to zero when . Otherwise, NeuSolv will stop searching and return an unsatisfiable result. To guarantee this, we add a small positive value and adapt the loss function to be for the constraint , and similarly for and . Taking the motivating example shown in Section 2, the loss function is where
Fixed Lower Bound on Loss Function.
The loss function for each constraint needs a fixed lower bound; otherwise NeuSolv may minimize the loss of only one constraint within a conjunction. For instance, we should not encode as , since that loss function can go to negative infinity, where is a small real value. If the constraint is , where and can be arbitrary expressions, NeuSolv may minimize the loss function only for , because the loss function for is the sum of the loss functions for and . Thus, it may not find a satisfying result for both symbolic constraints. To avoid this, we add a lower bound and adjust the loss function to be . This lower bound ensures that the loss functions have a finite global minimum.
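These requirements suggest hinge-style encodings. The following is one plausible set (illustrative; Table 2 lists NeuEx's actual functions), where each loss is non-negative with a fixed lower bound of zero and vanishes, up to the ε margin, exactly when its constraint holds:

```python
EPS = 1e-6  # small positive value keeping the gradient non-zero until SAT

# Plausible hinge-style loss encodings for the six constraint forms and the
# two logical combinations (illustrative sketch, not NeuEx's exact Table 2).
def lt(a, b):  return max(a - b + EPS, 0.0)       # a <  b
def le(a, b):  return max(a - b, 0.0)             # a <= b
def gt(a, b):  return max(b - a + EPS, 0.0)       # a >  b
def ge(a, b):  return max(b - a, 0.0)             # a >= b
def eq(a, b):  return abs(a - b)                  # a == b
def ne(a, b):  return max(EPS - abs(a - b), 0.0)  # a != b

def AND(l1, l2): return l1 + l2                   # conjunction: sum of losses
def OR(l1, l2):  return min(l1, l2)               # disjunction: min of losses
```

With the lower bound at zero, minimizing a conjunction cannot trade one conjunct's loss against another, matching the requirement above.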
Table 2. Symbolic constraints and their corresponding loss functions.
Generality of Encoding.
NeuSolv can only be applied to differentiable loss functions, because it requires computing the derivatives of the loss function. Thus, NeuEx needs to transform the expressions and in Table 2 into differentiable functions. The encoding mechanism for expressions is generic: as long as NeuEx can transform an expression into a differentiable function, any encoding mechanism can be plugged into NeuEx for neurosymbolic constraint solving.
3.5. Optimizations
NeuEx applies five optimization strategies to reduce the computation time for neurosymbolic constraint solving.
Single Variable Update.
Given a set of input variables to a neural constraint, NeuEx updates only one variable in each enumeration of NeuSolv. To select the variable, NeuEx computes the derivative for each variable and sorts the absolute values of the derivatives; the variable with the largest absolute derivative is updated. This is because the derivative of each element only measures the influence of changing that one variable on the loss value, not the joint influence of multiple variables; updating several variables simultaneously may therefore increase the loss. Moreover, updating one variable per iteration lets the search engine perform the minimum number of mutations on the initial input, which helps keep the input from becoming invalid.
Typebased Update.
To ensure the input remains valid, NeuEx adapts the update strategy to the type of each variable. If the variable is an integer, NeuEx first binarizes the derivative and then updates the variable with the binarized value. If the variable is a float, NeuEx updates the variable with the actual derivative.
Caching.
NeuEx stores the updated results of each enumeration in NeuSolv. Since the search algorithm is deterministic, the same input, neural constraints, and loss function always produce the same final result. Thus, to avoid unnecessary recomputation, NeuEx stores the update history and checks whether the current input is cached in the history. If so, NeuEx reuses the previous result; otherwise, it keeps searching for a new input.
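A minimal sketch of this memoization (illustrative names; NeuEx keys its history on the input, the neural constraints, and the loss function):

```python
# Deterministic search: the same (input, constraint) pair always yields the
# same result, so it can be cached and reused.
cache = {}

def cached_search(x0, constraint_id, search):
    key = (tuple(x0), constraint_id)   # inputs must be hashable
    if key in cache:
        return cache[key]              # reuse the previous result
    result = search(x0)
    cache[key] = result
    return result
```

A repeated query with the same input then skips the expensive search entirely.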
SAT Checking Per Enumeration.
To speed up the solving procedure, NeuEx verifies the satisfiability of the variables after each enumeration in NeuSolv. Once the symbolic constraints are satisfied, NeuSolv terminates and returns SAT to NeuEx. The reason is that a result need not achieve the global minimum of the loss in order to satisfy the symbolic constraints. For example, any result satisfies the constraint except the one satisfying . Hence, rather than waiting until the loss function is fully minimized, NeuEx checks the updated result at every iteration.
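The per-enumeration check can be sketched as an early-exit search loop (illustrative; `step_fn` stands for one NeuSolv gradient update and `is_sat` for the symbolic check):

```python
def minimize_with_early_exit(x, step_fn, is_sat, max_enum=10000):
    """Run the search, but stop as soon as the candidate satisfies the
    symbolic constraints instead of waiting for the global minimum."""
    for _ in range(max_enum):
        if is_sat(x):           # check satisfiability every enumeration
            return ("SAT", x)
        x = step_fn(x)          # one search step
    return ("UNKNOWN", x)
```

For example, with `step_fn = lambda v: v + 1` and `is_sat = lambda v: v > 2`, the loop exits after three steps rather than exhausting the enumeration budget.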
Parallelization.
NeuEx executes NeuSolv with different initial inputs in parallel, since each loop for solving mixed constraints is independent. This parallelization reduces the time needed to find the global minimum of the loss function.
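A minimal sketch of this trial-level parallelism, here with a thread pool (the actual NeuEx implementation may differ):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_trials(initial_states, run_trial):
    """Run each independent trial concurrently and return the first
    satisfying result, if any."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_trial, initial_states))
    sats = [r for r in results if r is not None]
    return sats[0] if sats else None
```

Because trials never share state (unlike Approach I's serialized conflict clauses), this parallelization is safe and embarrassingly parallel.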
4. Neural Constraint Learning
We have described the constraint solver for neurosymbolic constraint solving; it remains to discuss how NeuEx obtains the neural constraints. In this section, we discuss the design of the neural constraint learning engine in NeuEx.
Given a program, the selection of the network architecture is key to learning any neural constraint. In this paper, we use the multi-layer perceptron (MLP) architecture, which consists of multiple layers of nodes and connects each node to all nodes in the previous layer (Rumelhart et al., 1985). Nodes within the same layer share no connections. We select this architecture because it is a suitable choice for fixed-length inputs. There are other, more efficient architectures (e.g., CNNs (Lawrence et al., 1997; Krizhevsky et al., 2012) and RNNs (Medsker and Jain, 2001; Mikolov et al., 2010)) for data with special relationships, and NeuEx gives users the flexibility to add further network architectures.

The selection of the activation function also plays a significant role in neural constraint inference. In this paper, we consider multiple activation functions (e.g., Sigmoid and Tanh) and ultimately select the rectifier function ReLU, because ReLU yields sparse representations and reduces the likelihood of vanishing gradients (Glorot et al., 2011; Maas et al., 2013). In other words, a neural network with ReLU has a higher chance of converging than one with other activation functions.

In addition, to ensure the generality of the neural constraints, we implement an early-stopping mechanism, a regularization approach that reduces overfitting (Yao et al., 2007). It stops the learning procedure when the currently learned neural constraint performs worse on unseen test executions than the previous one. As the unseen test executions are never used for learning the neural constraint, performance on them is a fair measure of the generality of the learned neural constraints.
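The early-stopping rule can be sketched as follows (illustrative; `train_step` and `valid_loss` stand for one training epoch and the loss on unseen test executions):

```python
def train_with_early_stopping(train_step, valid_loss, max_epochs=100):
    """Train until the loss on held-out test executions stops improving,
    then return the epoch at which training halted."""
    best = float("inf")
    for epoch in range(max_epochs):
        train_step()               # one epoch of constraint learning
        loss = valid_loss()        # loss on unseen test executions
        if loss >= best:           # worse than the previous constraint: stop
            return epoch
        best = loss
    return max_epochs
```

Stopping at the first regression keeps the learned constraint from overfitting the training executions, as described above.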
NeuEx can use any machine learning approach, optimization algorithm (e.g., momentum gradient descent (Qian, 1999) and AdaGrad (Duchi et al., 2011)) and regularization solution (e.g., dropout (Srivastava et al., 2014) and Tikhonov regularization (Tikhonov, 1963)) to learn the neural constraints. With the advances in machine learning, NeuEx can adopt new architectures and learning approaches in neural constraint inference.
5. Evaluation
We implement NeuEx in Python and Google TensorFlow (Abadi et al., 2016), with a total of 1808 lines of code for training the neural constraints and solving the neurosymbolic constraints. Our evaluation highlights two features of NeuEx: (a) it generates exploits for 13/14 vulnerable programs; (b) it solves 100% of the given neurosymbolic constraints for each loop.

Experimental Setup.
To evaluate NeuEx, we configure the maximum number of enumerations in NeuSolv to be 10,000, after which NeuSolv terminates (discussed in Section 3.1). The larger the maximum number of enumerations, the better NeuEx performs on neural constraint solving. Our experiments are performed on a server with 40-core Intel Xeon 2.6 GHz CPUs and 64 GB of RAM.
5.1. Effectiveness in Exploit Generation
Table 3. Benchmark programs, number of loop-dependent branches (LD), and exploit generation results.

| Program | Vulnerable Condition | LD | Exploit? |
|---|---|---|---|
| BIND1 | | 16 | Yes |
| BIND2 | | 12 | Yes |
| BIND3 | | 13 | Yes |
| BIND4 | | 52 | Yes |
| Sendmail1 | | 1 | Yes |
| Sendmail2 | | 38 | Yes |
| Sendmail3 | | 18 | Yes |
| Sendmail4 | | 2 | Yes |
| Sendmail5 | | 6 | Yes |
| Sendmail6 | | 11 | No |
| Sendmail7 | | 16 | Yes |
| WuFTP1 | | 5 | Yes |
| WuFTP2 | | 29 | Yes |
| WuFTP3 | | 7 | Yes |
To evaluate the effectiveness of NeuEx in exploit generation, we select 14 vulnerable programs with buffer overflows from open-source network servers (e.g., BIND, Sendmail and WuFTP) (Zitser et al., 2004). We choose this benchmark because it comprises multiple loops and various complex control and data dependencies, which are challenging for symbolic execution to handle (discussed in Section 2). To measure the complexity of the problems, we use the number of branches along the vulnerable path whose conditions depend on loop counts rather than input arguments. This metric is also used in (Saxena et al., 2009). Table 3 reports the complexity of each program along with the result of exploit generation.
To show the effectiveness of neurosymbolic constraint learning and solving, for each program, we mark the code from the beginning of the program to the location accessing the buffers to be represented as neural constraints. Then, we mark all inputs and all buffer lengths in the program as symbolic by default. In cases where we know the input format, we provide it as additional information in the form of program annotations (e.g., specific input field values). In our example from Section 2, to analyze the program which takes HTTP requests as input, NeuEx marks the uri and version fields as well as the lengths of all the buffers as symbolic. NeuEx randomly initializes the symbolic input arguments for each program, executes the program, and collects the values of the variables of interest. For our experiments, we collect up to samples of such executions. % of these samples are used for learning the neural constraints, while the remaining % are used for evaluating the accuracy of the learned neural constraints. To obtain the vulnerable conditions, we manually analyze the source code and set them as symbolic constraints.
Using the above steps, our experiments show that NeuEx is able to find the correct exploit for 13 out of 14 programs in the benchmark. Next, we compare the efficiency of NeuEx on buffer overflow exploit generation with an existing method, Loop-Extended Symbolic Execution (LESE) (Saxena et al., 2009), a dynamic symbolic execution based tool. It is a heuristic-based approach which hardcodes the relationship between loop counts and inputs. We reproduce LESE's machine configuration for a fair comparison. Our experiments show that NeuEx requires at most two hours to find the exploits in this setup, whereas LESE requires more than five hours. Thus, NeuEx's performance compares favorably with LESE for exploit generation.
In addition, the time NeuEx spends on exploit generation does not depend on the complexity of the target code, as NeuEx is a black-box approach for neural constraint learning. For example, the time spent analyzing program Sendmail1, with one loop-dependent branch, is the same as the time for program Sendmail3, with 18 loop-dependent branches.
Finding 1: NeuEx is able to find the correct exploit for 13 out of 14 programs.
To check whether NeuEx learns the correct constraints, we manually analyze the weights of the trained neural constraints (discussed in Appendix A.1). We find that NeuEx is able to learn neural constraints representing the correct variable relationships. For example, in program Sendmail7, NeuEx not only learns that the final length of the vulnerable buffer rrrr_u.rr_txt is controlled by the txtlen field of the DNS response, which is the element of the input, but also that the allocated size of the vulnerable buffer is determined by the size field, which is the and elements of the DNS response. For the programs for which NeuEx successfully generates exploits, we manually analyze all the neural constraints and find that they all precisely represent the variable relationships in the source code.
Finding 2: NeuEx learns the correct neural constraint to represent the variable relationships in the source code.
NeuEx reaches the timeout for exploit generation in only one program (Sendmail6), where the buffer overflow is caused by an integer overflow. NeuEx fails to generate the exploit because the neural network treats integers as real values and is not aware of the programmatic behavior that integers wrap around after exceeding the maximum value representable in their bit width. For example, to capture 32-bit integer overflow, NeuEx needs to know the wrap-around rule whereby a value becomes negative if it is larger than 0x7FFFFFFF on x86. To address this, we can explicitly add this rule as part of the symbolic constraints for all integer types and then solve the neurosymbolic constraints.
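The wrap-around rule mentioned above can be made concrete; this sketch shows the two's-complement behavior that would need to be encoded as a symbolic constraint for 32-bit integers:

```python
def wrap_i32(v):
    """Interpret an arbitrary integer value as a signed 32-bit integer:
    values above 0x7FFFFFFF wrap around to negative numbers."""
    v &= 0xFFFFFFFF                               # keep the low 32 bits
    return v - 0x100000000 if v > 0x7FFFFFFF else v
```

For instance, `wrap_i32(0x7FFFFFFF + 1)` yields the most negative 32-bit value, which is exactly the behavior the real-valued neural constraint misses in Sendmail6.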
5.2. MicroBenchmarking of NeuEx
P  Type  P  Type  P  Type  P  Type  
cohendiv  T2  1  1  dijkstra_2  T3  2    prod4br  T4  1  1  geo3  T1  1  1 
divbin_1  T2  1  1  freire1  T1  1  1  knuth  T4  1  1  ps2  T1  1  1 
divbin_2  T3  1  5  freire2  T1  1  1  fermat1  T3  1  1  ps3  T1  1  1 
mannadiv  T3  1  1  cohencu  T2  1  1  fermat2  T3  3  3  ps4  T1  1  1 
hard_1  T2  1  1  egcd  T3  1  2  lcm1  T3  1  1  ps5  T1  1  1 
hard_2  T3  1  5  egcd2  T3  1  1  lcm2  T3  1  4  ps6  T1  1  1 
sqrt1  T1  1  1  egcd3  T3  1  5  geo1  T1  1  1  
dijkstra_1  T1  1  1  prodbin  T3  1  1  geo2  T1  1  1 
P  Type  P  Type  P  Type  P  Type  
01  T1  1  1  12_1  T1  1  1  24  T1  1  1  36_2  T1  1  1 
02  T1  1  1  12_2  T2  1  1  25  T1  1  1  37  T1  1  1 
03  T1  1  1  13  T1  1  1  26  T1  1  1  38  T1  1  1 
04  T1  1  2  14  T2  3  3  27  T1  1  1  39  T3  1  1 
05  T1  1  1  15  T1  1  1  28_1  T1  1  1  40_1  T1  1  1 
06  T1  1  1  16  T3  1  2  28_2  T3  1  1  40_2  T1  1  1 
07  T1  1  1  17  T1  1  1  29  T1  1  1  41  T2  1  1 
08  T1  1  1  18  T1  1  1  31  T1  1  1  42  T1  1  1 
09_1  T1  1  1  19  T1  1  1  32  T1  1  1  43  T1  1  1 
09_2  T1  1  1  20  T1  1  1  33  T1  1  1  44  T2  2  2 
09_3  T1  1  1  21  T1  1  1  34  T1  1  1  45_1  T1  1  1 
09_4  T1  1  1  22  T1  1  1  35  T1  1  1  45_2  T1  1  1 
10  T1  1  1  23  T1  1  1  36_1  T1  1  1  46  T1  1  1 
We ask three empirical questions with our microbenchmarks:
(1) How fast does NeuEx solve a given neurosymbolic constraint?
(2) What is the accuracy of the neural constraints learned by NeuEx?
(3) What is the influence of learning and solving on the overall efficiency of NeuEx?
For this, we use two benchmarks, namely HOLA and NLA, which comprise programs with loops and input variables in total. These two benchmarks are widely used for invariant synthesis (Nguyen et al., 2014, 2017; Gupta and Rybalchenko, 2009), which is useful for formal verification. We select them because they contain various kinds of loop invariants, and capturing such invariants is known to be a challenge for symbolic execution. To this end, we evaluate NeuEx's ability to reach the postconditions of the loops in these benchmarks.
For each program, we mark the loop to be represented by neural constraints. For each loop, NeuEx needs to (1) learn the loop invariant , (2) get the symbolic invariant of the loop guard from the symbolic execution engine, and (3) solve . Consider the example in Figure 5. NeuEx first learns the neural constraint representing the loop invariant on Line 5. Then, it gets the loop guard on Line 3 from the symbolic execution engine. Finally, it solves the neurosymbolic constraint . For each loop in our benchmarks, we mark all the input arguments (e.g., and ) as well as the loop count as symbolic. If the loop count is not an explicit variable, NeuEx adds an implicit counter incremented on each iteration to capture the number of iterations of the loop. Figure 4 shows the type distribution of the negations of loop guards in the NLA and HOLA benchmarks, which covers all kinds of constraints expressed in Table 2.
Effectiveness of NeuroSymbolic Constraint Solving.
Recall that NeuSolv randomly sets an initial state when it begins the gradient-based optimization of a loss function. If it fails to find a satisfying result before the timeout, NeuEx restarts the search from a different initial state, because the search depends on the initial state (discussed in Section 3.1). We call each search attempt from a new initial state a trial. Thus, to evaluate how fast NeuEx solves a given neurosymbolic constraint, we use the number of trials that NeuSolv takes as the metric: the fewer trials NeuEx needs, the faster the neurosymbolic constraint solving. The corresponding column in Tables 4 and 5 shows the number of trials NeuEx required to solve the given neurosymbolic constraints for each loop in the NLA and HOLA benchmarks. From these results, we find that NeuEx successfully solves of the given neurosymbolic constraints with a maximum of three trials. Among loops, NeuEx solves of the neurosymbolic constraints with only one trial. This indicates that NeuEx can efficiently solve various kinds of neurosymbolic constraints.
Finding 3: NeuEx is effective in neurosymbolic constraint solving for 100% of constraints with a maximum of three trials.
NeuEx needs more than one trial for loops for two main reasons. First, our current timeout value is not enough for solving the constraints in two cases (programs and in the HOLA benchmark). To address this, we can either increase the timeout or restart the search with a new initial state. We experiment with both options and find that the latter solves the constraints faster. For example, in program , NeuEx solves the given neurosymbolic constraints within trials, but it reaches the timeout in one trial even when the timeout is increased threefold. For the remaining two loops, NeuEx fails because of the inefficiency of the gradient-based search in NeuSolv. For example, in program fermat2, NeuSolv gets stuck at a saddle point. To address this, we can apply a trust-region algorithm (Sorensen, 1982) or cubic regularization (Nesterov and Polyak, 2006), which use second-order derivatives to find and escape saddle points.
Accuracy of Neural Constraint Learning.
To measure the effectiveness of neural constraint learning, we compute the learning accuracy, defined as: , where is the number of (unseen) test executions for which the learned neural constraint predicts the right output and is the total number of test executions. The higher the accuracy, the more precise the learned neural constraints. For the loops in our benchmarks, NeuEx achieves more than accuracy for the neural constraints. For example, NeuEx achieves accuracy when learning the second loop invariant in program , which contains multiple multiplications and divisions.
Finding 4: NeuEx achieves more than learning accuracy for neural constraints.
Combined (Learning + Solving) Efficiency.
There are two steps involved in solving a neurosymbolic task (reaching the postcondition in this case): inferring the constraints and solving them. So far in our microbenchmarks, we have evaluated these two steps independently of each other. For completeness, we now present an experimental analysis of how these two steps affect the overall efficiency of NeuEx in performing a given task.
| Solving \ Learning | Success | Failure |
|---|---|---|
| Success | 61/63 | 10/15 |
| Failure | 0/3 | 0/1 |
Table 6 reports the impact of each step on NeuEx's overall efficiency. We classify constraint learning as a success when its accuracy exceeds a threshold, and a failure otherwise. We classify constraint solving as a success when NeuEx solves the given constraints in one trial, and a failure otherwise. We classify task solving as a success when the concrete values generated within one trial reach the postcondition, and a failure otherwise. Each cell reports the number of loops that succeed in task solving out of the total loops in that category.

NeuEx successfully solves 71 out of 82 end-to-end tasks in total. Table 6 shows the contribution of each step to solving a neurosymbolic task. When both steps succeed, NeuEx solves 61 of 63 tasks (top-left cell). However, when NeuEx's solving alone is unsuccessful (the 3 cases in the bottom-left cell), it always fails to complete the task. This shows that task solving directly depends on constraint solving, and it justifies our focus on improving the efficiency of neurosymbolic constraint solving in our constraint solver. Ideally, NeuEx must both learn the constraints accurately and solve them successfully in order to guarantee postcondition reachability. However, we notice that even when learning is inaccurate, NeuEx still solves 10 of the 15 tasks (top-right cell). This is because NeuEx at least learns the trend of certain variables involved in the constraints, if not the precise constraints. Consider the example in Figure 5. If the neural constraint learns , NeuEx finds the satisfying result , , , and . Even though the neural constraint does not capture the precise loop invariant , it at least captures that the value of increases as increases. This partial learning helps NeuEx solve the task and find , and . Thus, we conclude that although learning is important, it does not affect task solving as drastically as constraint solving does, which highlights the importance of effective constraint solving.
Finding 5: Constraint solving affects NeuEx’s effectiveness more significantly than constraint learning.
6. Related Work
NeuEx is a new design point in constraint synthesis and constraint solving. In this section, we discuss the problems of existing symbolic execution tools and how NeuEx handles them, and we contrast NeuEx with existing constraint synthesis approaches.
6.1. Symbolic Execution
Symbolic execution (King, 1976a) has been used for program verification (Dannenberg and Ernst, 1982), software testing (King, 1976a; Cadar et al., 2008), and program repair via specification inference (Nguyen et al., 2013). In the last decade, we have witnessed increased adoption of dynamic symbolic execution (Godefroid et al., 2005), where symbolic execution is used to partition the input space with the goal of achieving increased behavioral coverage. The input partitions computed are often defined as program paths: all inputs tracing the same path belong to the same partition. Thus, the test generation achieved by dynamic symbolic execution suffers from the path explosion problem. This problem can be exacerbated by complex control flows, including long-running loops (which may affect the scalability of dynamic symbolic execution since it involves loop unrolling) and external libraries. NeuEx, however, does not suffer from path explosion, as it learns the constraints from test executions directly.
Tackling path explosion is a major challenge in symbolic execution. Boonstoppel et al. suggest pruning redundant paths during symbolic execution tree construction (Boonstoppel et al., 2008). Veritesting alternates between dynamic symbolic execution and static symbolic execution to mitigate path explosion (Avgerinos et al., 2014b). The other predominant way of tackling path explosion is summarizing the behavior of code fragments in a program (Godefroid, 2007; Anand et al., 2008; Kuznetsov et al., 2012; Avgerinos et al., 2014a; Qi et al., 2013; Sen et al., 2015). Simply speaking, a summarization technique approximates the behavior of certain fragments of a program to keep symbolic execution scalable. Such an approximation of behavior is also useful when certain code fragments, such as remote calls and libraries written in a different language, are not available for analysis.
Among the past approaches supporting approximation of the behaviors of (parts of) a program, function summaries have been studied by Godefroid (Godefroid, 2007); such summaries can also be computed on demand (Anand et al., 2008). Kuznetsov et al. present a selective technique to merge dynamic states: it merges two dynamic symbolic execution runs based on an estimate of the difficulty of solving the resulting Satisfiability Modulo Theory (SMT) constraints (Kuznetsov et al., 2012). Veritesting supports dynamic symbolic execution with static symbolic execution, thereby alleviating path explosion due to factors such as loop unrolling (Avgerinos et al., 2014a). Other works (Qi et al., 2013; Sen et al., 2015) suggest grouping paths based on similar symbolic expressions in variables, and use such symbolic expressions as dynamic summaries to group paths.

6.2. Constraint Synthesis
To support the summarization of program behaviors, the other core technical primitive we can use is constraint synthesis. In our work, we propose a new constraint synthesis approach that uses neural networks to learn constraints that are infeasible for symbolic execution. The major difference from previous solutions is that NeuEx does not require any predefined templates of constraints and can learn any kind of relationship between variables.
Over the last decade, there have been two lines of work in constraint synthesis: white-box and black-box approaches. White-box constraint inference relies on a combination of lightweight techniques such as abstract interpretation (Beyer et al., 2007; Cousot and Cousot, 1977; Cousot and Halbwachs, 1978; Cousot et al., 2005; Miné, 2004; RodríguezCarbonell and Kapur, 2007b, a), interpolation (Chen et al., 2015; McMillan, 2003; Jhala and McMillan, 2006), or the model checking algorithm IC3 (Bradley, 2011). Although some white-box approaches can provide sound and complete constraints (Colón et al., 2003), they depend on the availability of source code and a human-specified semantics of the source language. Constructing these tools has required considerable manual expertise to achieve precision, and many of these techniques can be highly computationally intensive.

To handle the unavailability of source code, there is also a rich class of work on reverse engineering from dynamic executions (Gupta and Rybalchenko, 2009; Garg et al., 2014; Ernst et al., 2007; Sankaranarayanan et al., 2008; Nguyen et al., 2014, 2017; Padhi and Millstein, 2017). Such works generate summaries of observed behavior from test executions. These summaries are not guaranteed to be complete; on the other hand, they can be obtained from tests alone, so the source code of the summarized code fragment need not be available. Daikon (Ernst et al., 2007) is one of the earlier works proposing synthesis of potential invariants from values observed in test executions; the invariants supported by Daikon are linear relations among program variables. DIG extends Daikon to enable dynamic discovery of nonlinear polynomial invariants via a combination of techniques including equation solving and polyhedral reasoning (Nguyen et al., 2014). Krishna et al. use decision trees, a machine learning technique, to learn inductive constraints from good and bad test executions
(Krishna et al., 2015).

NeuEx devises a new gradient-based constraint solver, the first to support solving a conjunction of neural and SMT constraints. A similar gradient-based approach is used in Angora (Chen and Chen, 2018), albeit for a completely different purpose: it treats branch predicates as a black-box, non-differentiable function and computes changes in the predicates by directly mutating the value of each variable in order to find the direction for changing variables. Similarly, Li et al. use the number of satisfied primitive constraints in a path condition as the objective function and apply the RACOS algorithm (Yu et al., 2016) to optimize this non-differentiable function, complementing symbolic execution (Li et al., 2016). In contrast, NeuEx learns a differentiable function representing the program's behavior from test cases, encodes the symbolic constraints into a differentiable function, and embeds it into the neural constraints; it then computes the derivative of each variable for updating.
A recent work (Bhatia et al., 2018) suggests the combination of neural and symbolic reasoning, albeit for an entirely different purpose: the automated repair of student programming assignments. In contrast, our proposed neuro-symbolic execution solves neural and symbolic constraints together, and can be seen as a general-purpose testing and analysis engine for programs.
7. Conclusions
To our knowledge, NeuEx is the first work that uses neural networks to learn constraints from values observed in test executions without predefined templates. NeuEx offers a new design point: solving symbolic and neural constraints simultaneously and effectively, which can be used to complement symbolic execution. It achieves good performance in both neuro-symbolic constraint solving and exploit generation for buffer overflows.
Acknowledgements.
We thank Marcel Böhme for participating in the initial discussion of the project. We also thank Shruti Tople, Shin Hwei Tan and Xiang Gao for useful feedback on earlier drafts of this paper. This research is supported by a research grant from DSO, Singapore. All opinions expressed in this paper are solely those of the authors.
References
 jal (2018) 2018. Jalangi2: Dynamic analysis framework for JavaScript. https://github.com/Samsung/jalangi2. (2018).
 PyE (2018) 2018. PyExZ3: Python Exploration with Z3. https://github.com/thomasjball/PyExZ3. (2018).
 Abadi et al. (2016) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A System for Large-Scale Machine Learning. In OSDI, Vol. 16. 265–283.
 Ábrahám (2015) Erika Ábrahám. 2015. Building bridges between symbolic computation and satisfiability checking. In Proceedings of the 2015 ACM on International Symposium on Symbolic and Algebraic Computation. ACM, 1–6.
 Anand et al. (2008) S. Anand, P. Godefroid, and N. Tillmann. 2008. Demand driven compositional symbolic execution. In International Conference on Tools and Algorithms for Construction and Analysis of Systems (TACAS).
 Anand et al. (2007) Saswat Anand, Alessandro Orso, and Mary Jean Harrold. 2007. Type-dependence analysis and program transformation for symbolic execution. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 117–133.
 Andoni et al. (2014) Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. 2014. Learning polynomials with neural networks. In International Conference on Machine Learning. 1908–1916.
 Avgerinos et al. (2014a) T. Avgerinos, A. Rebert, S.K. Cha, and D. Brumley. 2014a. Enhancing Symbolic Execution with Veritesting. In Proceedings of International Conference on Software Engineering (ICSE).
 Avgerinos et al. (2014b) Thanassis Avgerinos, Alexandre Rebert, Sang Kil Cha, and David Brumley. 2014b. Enhancing symbolic execution with veritesting. In Proceedings of the 36th International Conference on Software Engineering. ACM, 1083–1094.
 Baldoni et al. (2018) Roberto Baldoni, Emilio Coppa, Daniele Cono D’Elia, Camil Demetrescu, and Irene Finocchi. 2018. A Survey of Symbolic Execution Techniques. ACM Comput. Surv. 51, 3, Article 50 (2018).
 Beyer et al. (2007) Dirk Beyer, Thomas A Henzinger, Rupak Majumdar, and Andrey Rybalchenko. 2007. Path invariants. In Acm Sigplan Notices, Vol. 42. ACM, 300–309.
 Bhatia et al. (2018) S. Bhatia, P. Kohli, and R. Singh. 2018. Neuro-Symbolic Program Corrector for Introductory Programming Assignments. In International Conference on Software Engineering (ICSE).
 Bojarski et al. (2016) Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016).
 Boonstoppel et al. (2008) P. Boonstoppel, C. Cadar, and D. Engler. 2008. RWset: Attacking path explosion in constraint-based test generation. In International Conference on Tools and Algorithms for Construction and Analysis of Systems (TACAS).
 Bradley (2011) Aaron R Bradley. 2011. SAT-based model checking without unrolling. In International Workshop on Verification, Model Checking, and Abstract Interpretation. Springer, 70–87.

 Bundy and Wallen (1984) Alan Bundy and Lincoln Wallen. 1984. Breadth-first search. In Catalogue of Artificial Intelligence Tools. Springer, 13–13.
 Cadar et al. (2008) Cristian Cadar, Daniel Dunbar, Dawson R Engler, et al. 2008. KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. In Proceedings of the USENIX Symposium on Operating System Design and Implementation 8, 209–224.
 Cadar et al. (2006) Cristian Cadar, Vijay Ganesh, Peter M. Pawlowski, David L. Dill, and Dawson R. Engler. 2006. EXE: Automatically Generating Inputs of Death. In Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS ’06). ACM, New York, NY, USA, 322–335. https://doi.org/10.1145/1180405.1180445
 Cadar and Sen (2013) Cristian Cadar and Koushik Sen. 2013. Symbolic execution for software testing: three decades later. Commun. ACM 56, 2 (2013), 82–90.
 Canini et al. (2012) Marco Canini, Daniele Venzano, Peter Peresini, Dejan Kostic, and Jennifer Rexford. 2012. A NICE way to test OpenFlow applications. In Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI).
 Chen and Chen (2018) Peng Chen and Hao Chen. 2018. Angora: Efficient Fuzzing by Principled Search. arXiv preprint arXiv:1803.01307 (2018).
 Chen et al. (2015) Yu-Fang Chen, Chih-Duo Hong, Bow-Yaw Wang, and Lijun Zhang. 2015. Counterexample-guided polynomial loop invariant generation by Lagrange interpolation. In International Conference on Computer Aided Verification. Springer, 658–674.
 Chipounov et al. (2011) Vitaly Chipounov, Volodymyr Kuznetsov, and George Candea. 2011. S2E: A platform for in-vivo multi-path analysis of software systems. ACM SIGPLAN Notices 46, 3 (2011), 265–278.
 Coen-Porisini et al. (2001) Alberto Coen-Porisini, Giovanni Denaro, Carlo Ghezzi, and Mauro Pezzé. 2001. Using symbolic execution for verifying safety-critical systems. In ACM SIGSOFT Software Engineering Notes, Vol. 26. ACM, 142–151.
 Colón et al. (2003) Michael A Colón, Sriram Sankaranarayanan, and Henny B Sipma. 2003. Linear invariant generation using nonlinear constraint solving. In International Conference on Computer Aided Verification. Springer, 420–432.
 Cousot and Cousot (1977) Patrick Cousot and Radhia Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. ACM, 238–252.
 Cousot et al. (2005) Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. 2005. The ASTRÉE analyzer. In European Symposium on Programming. Springer, 21–30.
 Cousot and Halbwachs (1978) Patrick Cousot and Nicolas Halbwachs. 1978. Automatic discovery of linear restraints among variables of a program. In Proceedings of the 5th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. ACM, 84–96.
 Cui et al. (2007) Weidong Cui, Jayanthkumar Kannan, and Helen J Wang. 2007. Discoverer: Automatic Protocol Reverse Engineering from Network Traces.. In USENIX Security Symposium. 1–14.
 Daniel et al. (2010) Brett Daniel, Tihomir Gvero, and Darko Marinov. 2010. On test repair using symbolic execution. In Proceedings of the 19th international symposium on Software testing and analysis. ACM, 207–218.
 Dannenberg and Ernst (1982) R.B. Dannenberg and G.W. Ernst. 1982. Formal Program Verification using Symbolic Execution. IEEE Transactions on Software Engineering 8 (1982). Issue 1.
 Davis et al. (1962) Martin Davis, George Logemann, and Donald Loveland. 1962. A machine program for theorem-proving. Commun. ACM 5, 7 (1962), 394–397.
 de Moura and Bjørner (2008) Leonardo de Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In Tools and Algorithms for the Construction and Analysis of Systems, C. R. Ramakrishnan and Jakob Rehof (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 337–340.
 Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12, Jul (2011), 2121–2159.
 Ernst et al. (2007) Michael D Ernst, Jeff H Perkins, Philip J Guo, Stephen McCamant, Carlos Pacheco, Matthew S Tschantz, and Chen Xiao. 2007. The Daikon system for dynamic detection of likely invariants. Science of Computer Programming 69, 1–3 (2007), 35–45.
 Funahashi (1989) Ken-Ichi Funahashi. 1989. On the approximate realization of continuous mappings by neural networks. Neural networks 2, 3 (1989), 183–192.
 Ganesh et al. (2011) Vijay Ganesh, Adam Kieżun, Shay Artzi, Philip J Guo, Pieter Hooimeijer, and Michael Ernst. 2011. HAMPI: A string solver for testing, analysis and vulnerability detection. In International Conference on Computer Aided Verification. Springer, 1–19.
 Garg et al. (2014) Pranav Garg, Christof Löding, P Madhusudan, and Daniel Neider. 2014. ICE: A robust framework for learning invariants. In International Conference on Computer Aided Verification. Springer, 69–87.
 Glorot et al. (2011) Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 315–323.
 Godefroid (2007) Patrice Godefroid. 2007. Compositional Dynamic Test Generation. In Proceedings of 34th Symposium on Principles of Programming Languages (POPL).
 Godefroid et al. (2005) Patrice Godefroid, Nils Klarlund, and Koushik Sen. 2005. DART: Directed Automated Random Testing. In Proceedings of International Symposium on Programming Language Design and Implementation (PLDI).
 Godefroid et al. (2012) Patrice Godefroid, Michael Y Levin, and David Molnar. 2012. SAGE: whitebox fuzzing for security testing. Commun. ACM 55, 3 (2012), 40–44.
 Godefroid et al. (2008) Patrice Godefroid, Michael Y Levin, David A Molnar, et al. 2008. Automated whitebox fuzz testing. In NDSS, Vol. 8. 151–166.

 Godfrey and Gashler (2015) Luke B Godfrey and Michael S Gashler. 2015. A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks. In Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), 2015 7th International Joint Conference on, Vol. 1. IEEE, 481–486.
 Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
 Gupta and Rybalchenko (2009) Ashutosh Gupta and Andrey Rybalchenko. 2009. Invgen: An efficient invariant generator. In International Conference on Computer Aided Verification. Springer, 634–640.
 Hornik (1991) Kurt Hornik. 1991. Approximation capabilities of multilayer feedforward networks. Neural networks 4, 2 (1991), 251–257.
 Jaffar et al. (2012) Joxan Jaffar, Vijayaraghavan Murali, Jorge A Navas, and Andrew E Santosa. 2012. TRACER: A symbolic execution tool for verification. In International Conference on Computer Aided Verification. Springer, 758–766.
 Jeon et al. (2012) Jinseong Jeon, Kristopher K Micinski, and Jeffrey S Foster. 2012. SymDroid: Symbolic execution for Dalvik bytecode. Technical Report.
 Jhala and McMillan (2006) Ranjit Jhala and Kenneth L McMillan. 2006. A practical and complete approach to predicate refinement. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 459–473.
 King (1976a) J.C. King. 1976a. Symbolic Execution and Program Testing. Commun. ACM 19 (1976). Issue 7.
 King (1976b) James C King. 1976b. Symbolic execution and program testing. Commun. ACM 19, 7 (1976), 385–394.
 Kingma and Ba (2015) Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
 Krishna et al. (2015) Siddharth Krishna, Christian Puhrsch, and Thomas Wies. 2015. Learning invariants using decision trees. arXiv preprint arXiv:1501.04725 (2015).
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097–1105.
 Kuznetsov et al. (2012) V. Kuznetsov, J. Kinder, S. Bucur, and G. Candea. 2012. Efficient state merging in symbolic execution. In Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
 Lawrence et al. (1997) Steve Lawrence, C Lee Giles, Ah Chung Tsoi, and Andrew D Back. 1997. Face recognition: A convolutional neural-network approach. IEEE transactions on neural networks 8, 1 (1997), 98–113.
 Li et al. (2014) Guodong Li, Esben Andreasen, and Indradeep Ghosh. 2014. SymJS: automatic symbolic testing of JavaScript web applications. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 449–459.
 Li et al. (2016) Xin Li, Yongjuan Liang, Hong Qian, Yi-Qi Hu, Lei Bu, Yang Yu, Xin Chen, and Xuandong Li. 2016. Symbolic execution of complex program driven by machine learning based constraint solving. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. ACM, 554–559.
 Liang et al. (2016) Jia Hui Liang, Vijay Ganesh, Pascal Poupart, and Krzysztof Czarnecki. 2016. Exponential Recency Weighted Average Branching Heuristic for SAT Solvers. In AAAI. 3434–3440.
 Maas et al. (2013) Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, Vol. 30. 3.
 McMillan (2003) Ken McMillan. 2003. Interpolation and SATbased Model Checking. In International Conference on Computer Aided Verification.
 Medsker and Jain (2001) LR Medsker and LC Jain. 2001. Recurrent neural networks. Design and Applications 5 (2001).
 Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
 Miné (2004) Antoine Miné. 2004. Weakly relational numerical abstract domains. Ph.D. Dissertation. Ecole Polytechnique X.
 Moskewicz et al. (2001) Matthew W Moskewicz, Conor F Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. 2001. Chaff: Engineering an efficient SAT solver. In Proceedings of the 38th annual Design Automation Conference. ACM, 530–535.
 Narodytska et al. (2017) Nina Narodytska, Shiva Prasad Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, and Toby Walsh. 2017. Verifying properties of binarized deep neural networks. arXiv preprint arXiv:1709.06662 (2017).
 Nesterov and Polyak (2006) Yurii Nesterov and Boris T Polyak. 2006. Cubic regularization of Newton method and its global performance. Mathematical Programming 108, 1 (2006), 177–205.
 Nguyen et al. (2013) Hoang Duong Thien Nguyen, Dawei Qi, Abhik Roychoudhury, and Satish Chandra. 2013. Semfix: Program repair via semantic analysis. In Proceedings of the 2013 International Conference on Software Engineering. IEEE Press, 772–781.
 Nguyen et al. (2017) Thanh-Vu Nguyen, Timos Antonopoulos, Andrew Ruef, and Michael Hicks. 2017. Counterexample-guided approach to finding numerical invariants. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ACM, 605–615.
 Nguyen et al. (2014) Thanhvu Nguyen, Deepak Kapur, Westley Weimer, and Stephanie Forrest. 2014. DIG: a dynamic invariant generator for polynomial and array invariants. ACM Transactions on Software Engineering and Methodology (TOSEM) 23, 4 (2014), 30.
 Padhi and Millstein (2017) Saswat Padhi and Todd Millstein. 2017. Data-Driven Loop Invariant Inference with Automatic Feature Synthesis. arXiv preprint arXiv:1707.02029 (2017).

 Papernot et al. (2016) Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 372–387.
 Perkins et al. (2009) Jeff H Perkins, Sunghun Kim, Sam Larsen, Saman Amarasinghe, Jonathan Bachrach, Michael Carbin, Carlos Pacheco, Frank Sherwood, Stelios Sidiroglou, Greg Sullivan, et al. 2009. Automatically patching errors in deployed software. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles. ACM, 87–102.
 Qi et al. (2013) D. Qi, H.D.T Nguyen, and A. Roychoudhury. 2013. Path Exploration using Symbolic Output. ACM Transactions on Software Engineering and Methodology (TOSEM) 22 (2013). Issue 4.
 Qian (1999) Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks 12, 1 (1999), 145–151.
 Rodríguez-Carbonell and Kapur (2007a) Enric Rodríguez-Carbonell and Deepak Kapur. 2007a. Automatic generation of polynomial invariants of bounded degree using abstract interpretation. Science of Computer Programming 64, 1 (2007), 54–75.
 Rodríguez-Carbonell and Kapur (2007b) Enric Rodríguez-Carbonell and Deepak Kapur. 2007b. Generating all polynomial invariants in simple loops. Journal of Symbolic Computation 42, 4 (2007), 443–476.
 Ruder (2016) Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747 (2016).
 Rumelhart et al. (1985) David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. Technical Report. California Univ San Diego La Jolla Inst for Cognitive Science.
 Sankaranarayanan et al. (2008) Sriram Sankaranarayanan, Swarat Chaudhuri, Franjo Ivančić, and Aarti Gupta. 2008. Dynamic inference of likely data preconditions over predicates by tree learning. In Proceedings of the 2008 international symposium on Software testing and analysis. ACM, 295–306.
 Saxena et al. (2010a) Prateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen McCamant, and Dawn Song. 2010a. A symbolic execution framework for JavaScript. In Security and Privacy (SP), 2010 IEEE Symposium on. IEEE, 513–528.
 Saxena et al. (2010b) Prateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen McCamant, and Dawn Song. 2010b. A Symbolic Execution Framework for JavaScript. In Proceedings of the 2010 IEEE Symposium on Security and Privacy (SP ’10). IEEE Computer Society, Washington, DC, USA, 513–528. https://doi.org/10.1109/SP.2010.38
 Saxena et al. (2009) Prateek Saxena, Pongsin Poosankam, Stephen McCamant, and Dawn Song. 2009. Loop-extended symbolic execution on binary programs. In Proceedings of the eighteenth international symposium on Software testing and analysis. ACM, 225–236.
 Sen et al. (2015) K. Sen, G. Necula, L. Gong, and W. Choi. 2015. MultiSE: Multi-path Symbolic Execution. In International Symposium on Foundations of Software Engineering.
 Siegel et al. (2015) Stephen F Siegel, Manchun Zheng, Ziqing Luo, Timothy K Zirkel, Andre V Marianiello, John G Edenhofner, Matthew B Dwyer, and Michael S Rogers. 2015. CIVL: the concurrency intermediate verification language. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. ACM, 61.
 Silva and Sakallah (1997) João P Marques Silva and Karem A Sakallah. 1997. GRASP—a new search algorithm for satisfiability. In Proceedings of the 1996 IEEE/ACM international conference on Computeraided design. IEEE Computer Society, 220–227.
 Sorensen (1982) Danny C Sorensen. 1982. Newton’s method with a model trust region modification. SIAM J. Numer. Anal. 19, 2 (1982), 409–426.
 Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
 Tikhonov (1963) Andrei Nikolaevich Tikhonov. 1963. On the solution of illposed problems and the method of regularization. In Doklady Akademii Nauk, Vol. 151. Russian Academy of Sciences, 501–504.
 Xie et al. (2009) Tao Xie, Nikolai Tillmann, Jonathan de Halleux, and Wolfram Schulte. 2009. Fitness-guided path exploration in dynamic symbolic execution. In Dependable Systems & Networks, 2009. DSN’09. IEEE/IFIP International Conference on. IEEE, 359–368.
 Xie et al. (2016) Xiaofei Xie, Bihuan Chen, Yang Liu, Wei Le, and Xiaohong Li. 2016. Proteus: computing disjunctive loop summary via path dependency analysis. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 61–72.
 Yao et al. (2007) Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. 2007. On early stopping in gradient descent learning. Constructive Approximation 26, 2 (2007), 289–315.
 Yu et al. (2016) Yang Yu, Hong Qian, and Yi-Qi Hu. 2016. Derivative-Free Optimization via Classification. In AAAI, Vol. 16. 2286–2292.
 Zheng et al. (2013) Yunhui Zheng, Xiangyu Zhang, and Vijay Ganesh. 2013. Z3-str: A Z3-based string solver for web application analysis. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. ACM, 114–124.
 Zitser et al. (2004) Misha Zitser, Richard Lippmann, and Tim Leek. 2004. Testing static analysis tools using exploitable buffer overflows from open source code. In ACM SIGSOFT Software Engineering Notes, Vol. 29. ACM, 97–106.
Appendix A
A.1. Neural Constraint Analysis
We analyze the learned neural constraints by inspecting the trained weights and biases of the neural network. Given a set of variables as input to the neural network, if an input variable is unrelated to the output variable, the weight connecting them is close to zero; otherwise, its magnitude is larger. For example, the length of the vulnerable buffer in the program Bind1 is controlled by the dlen field of the DNS query, because the weight for this input variable has the largest absolute value among all fields.
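The weight-inspection idea can be illustrated with a toy model. The synthetic data, the single linear neuron, and the three-input setup below are our own assumptions for the sketch, not the actual NeuEx network or the Bind1 data:

```python
# Toy illustration of weight-based relevance analysis: train a single linear
# neuron on synthetic "test executions" in which the output depends only on
# the first input, then rank the inputs by the absolute value of their
# learned weights.
import random

random.seed(0)
samples = []
for _ in range(200):
    x = (random.random(), random.random(), random.random())
    samples.append((x, 3.0 * x[0]))        # only x[0] influences the output

w = [0.0, 0.0, 0.0]
for _ in range(500):                       # plain per-sample gradient descent
    for x, y in samples:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]

# The weight of the relevant input dominates; irrelevant weights stay near 0.
most_relevant = max(range(3), key=lambda i: abs(w[i]))
print(most_relevant, [round(wi, 3) for wi in w])
```

The same ranking principle, applied to the trained NeuEx network, is what identifies dlen as the field controlling the buffer length.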