Approximate Reduction of Finite Automata for High-Speed Network Intrusion Detection (Technical Report)

10/24/2017 · by Milan Češka et al.

We consider the problem of approximate reduction of non-deterministic automata that appear in hardware-accelerated network intrusion detection systems (NIDSes). We define an error distance of a reduced automaton from the original one as the probability of packets being incorrectly classified by the reduced automaton (wrt the probabilistic distribution of packets in the network traffic). We use this notion to design an approximate reduction procedure that achieves a great size reduction (much beyond the state-of-the-art language-preserving techniques) with a controlled and small error. We have implemented our approach and evaluated it on use cases from Snort, a popular NIDS. Our results provide experimental evidence that the method can be highly efficient in practice, allowing NIDSes to follow the rapid growth in the speed of networks.


1 Introduction

The recent years have seen a boom in the number of security incidents in computer networks. In order to alleviate the impact of network attacks and intrusions, Internet providers want to detect malicious traffic at their network’s entry points and on the backbones between sub-networks. Software-based network intrusion detection systems (NIDSes), such as the popular open-source system Snort [1], are capable of detecting suspicious network traffic by testing (among others) whether a packet payload matches a regular expression (regex) describing known patterns of malicious traffic. NIDSes collect and maintain vast databases of such regexes that are typically divided into groups according to types of the attacks and target protocols.

Regex matching is the most computationally demanding task of a NIDS as its cost grows with the speed of the network traffic as well as with the number and complexity of the regexes being matched. The current software-based NIDSes cannot perform the regex matching on networks beyond 1 Gbps [2, 3], so they cannot handle the current speed of backbone networks ranging between tens and hundreds of Gbps. A promising approach to speed up NIDSes is to (partially) offload regex matching into hardware [3, 4, 5]. The hardware then serves as a pre-filter of the network traffic, discarding the majority of the packets from further processing. Such pre-filtering can easily reduce the traffic the NIDS needs to handle by two or three orders of magnitude [3].

Field-programmable gate arrays (FPGAs) are the leading technology in high-throughput regex matching. Due to their inherent parallelism, FPGAs provide an efficient way of implementing nondeterministic finite automata (NFAs), which naturally arise from the input regexes. Although the amount of available resources in FPGAs is continually increasing, the speed of networks grows even faster. Working with multi-gigabit networks requires the hardware to use many parallel packet processing branches in a single FPGA [5], each of them implementing a separate copy of the concerned NFA; reducing the size of the NFAs is therefore of the utmost importance. Various language-preserving automata reduction approaches exist, mainly based on computing (bi)simulation relations on automata states (cf. the related work). The reductions they offer, however, do not satisfy the needs of high-speed hardware-accelerated NIDSes.

Our answer to the problem is approximate reduction of NFAs, allowing for a trade-off between the achieved reduction and the precision of the regex matching. To formalise the intuitive notion of precision, we propose a novel probabilistic distance of automata. It captures the probability that a packet of the input network traffic is incorrectly accepted or rejected by the approximated NFA. The distance assumes a probabilistic model of the network traffic (we show later how such a model can be obtained).

Having formalised the notion of precision, we specify the target of our reductions as two variants of an optimization problem: (1) minimizing the NFA size given the maximum allowed error (distance from the original), or (2) minimizing the error given the maximum allowed NFA size. Finding such optimal approximations is, however, computationally hard (PSPACE-complete, the same as precise NFA minimization).

Consequently, we sacrifice the optimality and, motivated by the typical structure of NFAs that emerge from a set of regexes used by NIDSes (a union of many long “tentacles” with occasional small strongly-connected components), we limit the space of possible reductions by restricting the set of operations they can apply to the original automaton. Namely, we consider two reduction operations: (i) collapsing the future of a state into a self-loop (this reduction over-approximates the language), or (ii) removing states (such a reduction is under-approximating).

The problem of identifying the optimal sets of states on which these operations should be applied is still PSPACE-complete. The restricted problem is, however, more amenable to an approximation by a greedy algorithm. The algorithm applies the reductions state-by-state in an order determined by a precomputed error labelling of the states. The process is stopped once the given optimization goal in terms of the size or error is reached. The labelling is based on the probability of packets that may be accepted through a given state and hence over-approximates the error that may be caused by applying the reduction at that state. As our experiments show, this approach can give us high-quality reductions while ensuring formal error bounds.

Finally, it turns out that even the pre-computation of the error labelling of the states is costly (again PSPACE-complete). Therefore, we propose several ways to cheaply over-approximate it such that the strong error bound guarantees are still preserved. Particularly, we are able to exploit the typical structure of the “union of tentacles” of the hardware NFA in an algorithm that is exponential in the size of the largest “tentacle” only, which is indeed much faster in practice.

We have implemented our approach and evaluated it on regexes used to classify malicious traffic in Snort. We obtain quite encouraging experimental results demonstrating that our approach provides a much better reduction than language-preserving techniques with an almost negligible error. In particular, our experiments, going down to the level of an actual implementation of NFAs in FPGAs, confirm that we can squeeze into an up-to-date FPGA chip real-life regexes encoding malicious traffic, allowing them to be used with a negligible error for filtering at speeds of 100 Gbps (and even 400 Gbps). This is far beyond what one can achieve with current exact reduction approaches.

Related Work

Hardware acceleration for regex matching at the line rate is an intensively studied technology that uses general-purpose hardware [6, 7, 8, 9, 10, 11, 12, 13, 14] as well as FPGAs [15, 16, 17, 18, 19, 20, 3, 4, 5]. Most of the works focus on DFA implementation and optimization techniques. NFAs can be exponentially smaller than DFAs but need, in the worst case, $O(n)$ memory accesses to process each byte of the payload, where $n$ is the number of states. In most cases, this incurs an unacceptable slowdown. Several works alleviate this disadvantage of NFAs by exploiting the reconfigurability and fine-grained parallelism of FPGAs, allowing one to process one character per clock cycle (e.g. [15, 16, 19, 20, 3, 4, 5]).

In [14], which is probably the closest work to ours, the authors consider a set of regexes describing network attacks. They replace a potentially prohibitively large DFA by a tree of smaller DFAs, an alternative to using NFAs that minimizes the latency occurring in a non-FPGA-based implementation. The language of every DFA-node in the tree over-approximates the languages of its children. Packets are filtered through the tree from the root downwards until they belong to the language of the encountered nodes, and may be finally accepted at the leaves, or are rejected otherwise. The over-approximating DFAs are constructed using a similar notion of probability of an occurrence of a state as in our approach. The main differences from our work are that (1) the approach targets approximation of DFAs (not NFAs), (2) the over-approximation is based on a given traffic sample only (it cannot benefit from a probabilistic model), and (3) no probabilistic guarantees on the approximation error are provided.

Approximation of DFAs was considered in various other contexts. Hyper-minimization is an approach that is allowed to alter language membership of a finite set of words [21, 22]. A DFA with a given maximum number of states is constructed in [23], minimizing the error defined either by (i) counting the prefixes of misjudged words up to some length, or (ii) the sum of the probabilities of the misjudged words wrt the Poisson distribution over their lengths. Neither of these approaches considers the reduction of NFAs, nor do they allow controlling the expected error with respect to the real traffic.

In addition to the metrics mentioned above when discussing the works [23, 21, 22], the following metrics should also be mentioned. The Cesaro-Jaccard distance studied in [24] is, in spirit, similar to [23] and also does not reflect the probability of individual words. The edit distance of weighted automata from [25] depends on the minimum edit distance between pairs of words from the two compared languages, again regardless of their statistical significance. None of these notions is suitable for our needs.

Language-preserving minimization of NFAs is a PSPACE-complete problem [26, 27]. More feasible (polynomial-time) is language-preserving size reduction of NFAs based on (bi)simulations [28, 29, 30, 31], which does not aim for a truly minimal NFA. A number of advanced variants exist, based on multi-pebble or look-ahead simulations, or on combinations of forward and backward simulations [32, 33, 34]. The practical efficiency of these techniques is, however, often insufficient to allow them to handle the large NFAs that occur in practice, and/or they do not manage to reduce the NFAs enough. Finally, even a minimal NFA for a given set of regexes is often too big to be implemented in a given FPGA operating at the required speed (as shown even in our experiments). Our approach is capable of a much better reduction, for the price of a small change of the accepted language.

2 Preliminaries

We use $\langle a, b\rangle$ to denote the set $\{x \in \mathbb{R} \mid a \le x \le b\}$ and $[n]$ to denote the set $\{1, \dots, n\}$. Given a pair of sets $X_1$ and $X_2$, we use $X_1 \mathbin{\triangle} X_2$ to denote their symmetric difference, i.e., the set $(X_1 \setminus X_2) \cup (X_2 \setminus X_1)$. We use the notation $\boldsymbol{v} = (v_1, \dots, v_n)$ to denote a vector of $n$ elements, $\vec{1}$ to denote the all-1's vector $(1, \dots, 1)$, $\mathbf{A}$ to denote a matrix, $\mathbf{A}^\top$ for its transpose, and $\mathbf{I}$ for the identity matrix.

In the following, we fix a finite non-empty alphabet $\Sigma$. A nondeterministic finite automaton (NFA) is a quadruple $A = (Q, \delta, I, F)$ where $Q$ is a finite set of states, $\delta : Q \times \Sigma \to 2^Q$ is a transition function, $I \subseteq Q$ is a set of initial states, and $F \subseteq Q$ is a set of accepting states. We use $Q_A$, $\delta_A$, $I_A$, and $F_A$ to denote $Q$, $\delta$, $I$, and $F$, respectively, and $q \xrightarrow{a} q'$ to denote that $q' \in \delta(q, a)$. A sequence of states $\rho = q_0 \cdots q_n$ is a run of $A$ over a word $w = a_1 \cdots a_n \in \Sigma^*$ from a state $q$ to a state $q'$, denoted as $q \xrightarrow{w}_\rho q'$, if $q_0 = q$, $q_n = q'$, and $q_{i-1} \xrightarrow{a_i} q_i$ for all $1 \le i \le n$. Sometimes, we use $\rho$ in set operations where it behaves as the set of states it contains. We also use $q \xrightarrow{w} q'$ to denote that $q \xrightarrow{w}_\rho q'$ for some run $\rho$, and $q \leadsto q'$ to denote that $q \xrightarrow{w} q'$ for some word $w \in \Sigma^*$. The language of a state $q$ is defined as $L_A(q) = \{w \mid \exists q_f \in F : q \xrightarrow{w} q_f\}$ and its banguage (back-language) is defined as $L_A^{-1}(q) = \{w \mid \exists q_0 \in I : q_0 \xrightarrow{w} q\}$. Both notions can be naturally extended to a set $R \subseteq Q$: $L_A(R) = \bigcup_{q \in R} L_A(q)$ and $L_A^{-1}(R) = \bigcup_{q \in R} L_A^{-1}(q)$. We drop the subscript $A$ when the context is obvious. $A$ accepts the language defined as $L(A) = L_A(I)$. $A$ is called deterministic (DFA) if $|I| = 1$ and $|\delta(q, a)| \le 1$ for every $q \in Q$ and $a \in \Sigma$, and unambiguous (UFA) if every word $w \in L(A)$ has exactly one accepting run.

The restriction of $A$ to $S \subseteq Q$ is the NFA $A_{|S} = (S, \delta_{|S}, I \cap S, F \cap S)$ where $\delta_{|S}(q, a) = \delta(q, a) \cap S$ for every $q \in S$ and $a \in \Sigma$. We define the trim operation as $\mathrm{trim}(A) = A_{|U}$ where $U = \{q \in Q \mid \exists w_1, w_2 \in \Sigma^*\ \exists q_I \in I\ \exists q_F \in F : q_I \xrightarrow{w_1} q \xrightarrow{w_2} q_F\}$ is the set of useful states. For a set of states $R \subseteq Q$, we use $\mathrm{reach}(R)$ to denote the set of states reachable from $R$, formally, $\mathrm{reach}(R) = \{q' \mid \exists q \in R : q \leadsto q'\}$. We use the number of states as the measure of the size of $A$, i.e., $|A| = |Q|$.
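To make the notation above concrete, the following minimal Python sketch (purely illustrative and not part of Appreal; all names are ours) shows one way to represent an NFA together with the reach and trim operations used later:

```python
from collections import defaultdict

class NFA:
    """A minimal NFA: delta maps (state, symbol) to a set of successor states."""

    def __init__(self, states, delta, initial, final):
        self.states = set(states)
        self.delta = defaultdict(set, {k: set(v) for k, v in delta.items()})
        self.initial = set(initial)
        self.final = set(final)

    def reach(self, src):
        """Forward closure: all states reachable from the set src."""
        seen, work = set(src), list(src)
        while work:
            q = work.pop()
            for (p, _a), succs in list(self.delta.items()):
                if p == q:
                    for r in succs - seen:
                        seen.add(r)
                        work.append(r)
        return seen

    def trim(self):
        """Keep only useful states: reachable from I and able to reach F."""
        fwd = self.reach(self.initial)
        bwd = {q for q in self.states if self.reach({q}) & self.final}
        useful = fwd & bwd
        delta = {(p, a): succs & useful
                 for (p, a), succs in self.delta.items()
                 if p in useful and succs & useful}
        return NFA(useful, delta, self.initial & useful, self.final & useful)

    def accepts(self, word):
        """Subset simulation of the NFA on word."""
        cur = set(self.initial)
        for a in word:
            cur = {r for q in cur for r in self.delta.get((q, a), set())}
        return bool(cur & self.final)
```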

A (discrete probability) distribution over a set $X$ is a mapping $\mathrm{Pr} : X \to \langle 0, 1\rangle$ such that $\sum_{x \in X} \mathrm{Pr}(x) = 1$. An $n$-state probabilistic automaton (PA) over $\Sigma$ is a triple $P = (\boldsymbol{\alpha}, \boldsymbol{\gamma}, \{\Delta_a\}_{a \in \Sigma})$ where $\boldsymbol{\alpha} \in \langle 0,1\rangle^n$ is a vector of initial weights, $\boldsymbol{\gamma} \in \langle 0,1\rangle^n$ is a vector of final weights, and, for every $a \in \Sigma$, $\Delta_a \in \langle 0,1\rangle^{n \times n}$ is a transition matrix for symbol $a$. We abuse notation and use $Q_P$ to denote the set of states $[n]$. Moreover, the following two properties need to hold: (i) $\sum_{q \in [n]} \boldsymbol{\alpha}[q] = 1$ (the initial probability is 1) and (ii) for every state $q \in [n]$, it holds that $\sum_{q' \in [n],\, a \in \Sigma} \Delta_a[q, q'] + \boldsymbol{\gamma}[q] = 1$ (the probability of accepting or leaving a state is 1). We define the support of $P$ as the NFA $\mathrm{supp}(P) = ([n], \delta_P, I_P, F_P)$ s.t. $\delta_P(q, a) = \{q' \mid \Delta_a[q, q'] > 0\}$, $I_P = \{q \mid \boldsymbol{\alpha}[q] > 0\}$, and $F_P = \{q \mid \boldsymbol{\gamma}[q] > 0\}$.

Let us assume that every PA $P$ is such that $\mathrm{supp}(P) = \mathrm{trim}(\mathrm{supp}(P))$. For a word $w = a_1 \cdots a_k \in \Sigma^*$, we use $\Delta_w$ to denote the matrix product $\Delta_{a_1} \cdots \Delta_{a_k}$. It can be easily shown that $P$ represents a distribution over words $w \in \Sigma^*$ defined as $\Pr_P(w) = \boldsymbol{\alpha}^\top \cdot \Delta_w \cdot \boldsymbol{\gamma}$. We call $\Pr_P(w)$ the probability of $w$ in $P$. Given a language $L \subseteq \Sigma^*$, we define the probability of $L$ in $P$ as $\Pr_P(L) = \sum_{w \in L} \Pr_P(w)$.
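As a concrete illustration of the matrix form of $\Pr_P$, a small numpy sketch (hypothetical helper, not taken from the paper's artifact) that evaluates the probability of a single word in a PA:

```python
import numpy as np

def word_probability(alpha, gamma, Delta, word):
    """Pr_P(w) = alpha^T . Delta_{a1} . ... . Delta_{ak} . gamma.

    alpha, gamma : 1-D arrays of initial/final weights (length n)
    Delta        : dict mapping each symbol to its n x n transition matrix
    word         : iterable of symbols
    """
    v = alpha.copy()
    for a in word:
        v = v @ Delta[a]          # multiply the row vector by Delta_a
    return float(v @ gamma)

# Tiny 2-state PA: state 0 loops on 'a' with weight 0.5 and accepts with weight 0.5.
alpha = np.array([1.0, 0.0])
gamma = np.array([0.5, 1.0])
Delta = {'a': np.array([[0.5, 0.0], [0.0, 0.0]])}
print(word_probability(alpha, gamma, Delta, "aa"))   # 0.5 * 0.5 * 0.5 = 0.125
```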

If Conditions (i) and (ii) from the definition of PAs are dropped, we speak about a pseudo-probabilistic automaton (PPA), which may assign a word from its support a quantity that is not necessarily in the range $\langle 0, 1\rangle$, referred to below as the significance of the word. PPAs may arise during some of our operations performed on PAs.

3 Approximate Reduction of NFAs

In this section, we first introduce the key notion of our approach: a probabilistic distance of a pair of finite automata wrt a given probabilistic automaton that, intuitively, represents the significance of particular words. We discuss the complexity of computing the probabilistic distance. Finally, we formulate two problems of approximate automata reduction via probabilistic distance.

3.1 Probabilistic Distance

We start by defining our notion of a probabilistic distance of two NFAs. Assume NFAs $A_1$ and $A_2$ and a probabilistic automaton $P$ specifying the distribution $\Pr_P : \Sigma^* \to \langle 0,1\rangle$. The probabilistic distance $d_P(A_1, A_2)$ between $A_1$ and $A_2$ wrt $\Pr_P$ is defined as

$$d_P(A_1, A_2) = \Pr\nolimits_P\big(L(A_1) \mathbin{\triangle} L(A_2)\big).$$
Intuitively, the distance captures the significance of the words accepted by one of the automata only. We use the distance to drive the reduction process towards automata with small errors and to assess the quality of the resulting automata.

The value of $d_P(A_1, A_2)$ can be computed as follows. Using the facts that (1) $L_1 \mathbin{\triangle} L_2 = (L_1 \setminus L_2) \uplus (L_2 \setminus L_1)$ and (2) $L_1 \setminus L_2 = L_1 \setminus (L_1 \cap L_2)$, we get

$$d_P(A_1, A_2) = \Pr\nolimits_P(L(A_1)) + \Pr\nolimits_P(L(A_2)) - 2 \cdot \Pr\nolimits_P(L(A_1) \cap L(A_2)).$$

Hence, the key step is to compute $\Pr_P(L(A))$ for an NFA $A$ and a PA $P$. Problems similar to computing such a probability have been extensively studied in several contexts including verification of probabilistic systems [35, 36, 37]. The lemma below summarises the complexity of this step.

Lemma 1

Let $P$ be a PA and $A$ an NFA. The problem of computing $\Pr_P(L(A))$ is PSPACE-complete. For a UFA $A$, the value $\Pr_P(L(A))$ can be computed in PTIME.

In our approach, we apply the method of [37] and compute $\Pr_P(L(A))$ in the following way. We first check whether the NFA $A$ is unambiguous. This can be done by using the standard product construction (denoted as $\cap$) for computing the intersection of the NFA $A$ with itself and trimming the result, formally $B = \mathrm{trim}(A \cap A)$, followed by a check whether there is some state of $B$ of the form $(p, q)$ with $p \ne q$ [38]. If $A$ is ambiguous, we either determinise it or disambiguate it [38], leading to a DFA/UFA $A'$, respectively (in theory, disambiguation can produce smaller automata, but, in our experiments, determinisation proved to work better). Then, we construct the trimmed product of $A'$ and $P$ (this can be seen as computing $\mathrm{trim}(A' \cap \mathrm{supp}(P))$ while keeping the probabilities from $P$ on the edges of the result), yielding a PPA $R = (\boldsymbol{\alpha}_R, \boldsymbol{\gamma}_R, \{\Delta^R_a\}_{a \in \Sigma})$. ($R$ is not necessarily a PA since there might be transitions in $\mathrm{supp}(P)$ that are either removed or copied several times in the product construction.) Intuitively, $R$ represents not only the words of $L(A)$ but also their probability in $P$. Now, let $\Delta_R = \sum_{a \in \Sigma} \Delta^R_a$ be the matrix that expresses, for any pair of states $p$ and $q$ of $R$, the significance of getting from $p$ to $q$ via any $a \in \Sigma$. Further, it can be shown (cf. the proof of Lemma 1 in the Appendix) that the matrix $\Delta_R^* = \sum_{k \ge 0} \Delta_R^k$, representing the significance of going from $p$ to $q$ via any $w \in \Sigma^*$, can be computed as $(\mathbf{I} - \Delta_R)^{-1}$. Then, to get $\Pr_P(L(A))$, it suffices to take $\boldsymbol{\alpha}_R^\top \cdot \Delta_R^* \cdot \boldsymbol{\gamma}_R$. Note that, due to the determinisation/disambiguation step, the obtained value indeed is $\Pr_P(L(A))$ despite $R$ being a PPA.
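The final step of this computation is plain linear algebra. The sketch below (an illustration with our own names; it assumes the product PPA $R$ has already been built and its per-symbol matrices collected) solves the corresponding linear system with numpy instead of explicitly inverting the matrix:

```python
import numpy as np

def language_probability(alpha_R, gamma_R, Delta_R_list):
    """Evaluate alpha_R^T . (I - Delta_R)^{-1} . gamma_R for the product (P)PA R.

    alpha_R, gamma_R : 1-D arrays with the initial/final weights of R
    Delta_R_list     : list of the per-symbol transition matrices of R
    Assumes R is trimmed, so that Delta_R^* = (I - Delta_R)^{-1} exists.
    """
    Delta_R = sum(Delta_R_list)                        # sum over all symbols
    n = Delta_R.shape[0]
    x = np.linalg.solve(np.eye(n) - Delta_R, gamma_R)  # x = (I - Delta_R)^{-1} . gamma_R
    return float(alpha_R @ x)
```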

3.2 Automata Reduction using Probabilistic Distance

We now exploit the above introduced probabilistic distance to formulate the task of approximate reduction of NFAs as the following two optimisation problems. Given an NFA $A$ and a PA $P$ specifying the distribution $\Pr_P$, we define

  • size-driven reduction: for $n \ge 1$, find an NFA $A'$ such that $|A'| \le n$ and the distance $d_P(A, A')$ is minimal,

  • error-driven reduction: for $\epsilon \ge 0$, find an NFA $A'$ such that $d_P(A, A') \le \epsilon$ and the size $|A'|$ is minimal.

The following lemma shows that the natural decision problem underlying both of the above optimization problems is PSPACE-complete, which matches the complexity of computing the probabilistic distance as well as that of the exact reduction of NFAs [26].

Lemma 2

Consider an NFA $A$, a PA $P$, a bound on the number of states $n \ge 1$, and an error bound $\epsilon \ge 0$. It is PSPACE-complete to determine whether there exists an NFA $A'$ with $n$ states s.t. $d_P(A, A') \le \epsilon$.

The notions defined above do not distinguish between introducing a false positive ($A'$ accepts a word $w \notin L(A)$) and a false negative ($A'$ does not accept a word $w \in L(A)$). To this end, we define over-approximating and under-approximating reductions as reductions for which the additional conditions $L(A) \subseteq L(A')$ and $L(A') \subseteq L(A)$ hold, respectively.

A naïve solution to the reductions would enumerate all NFAs $A'$ of sizes from 0 up to $n$ (resp. $|A|$), for each of them compute $d_P(A, A')$, and take an automaton with the smallest probabilistic distance (resp. a smallest one satisfying the restriction on the distance). Obviously, this approach is computationally infeasible.

4 A Heuristic Approach to Approximate Reduction

In this section, we introduce two techniques for approximate reduction of NFAs that avoid the need to iterate over all automata of a certain size. The first approach under-approximates the automata by removing states—we call it the pruning reduction—while the second approach over-approximates the automata by adding self-loops to states and removing redundant states—we call it the self-loop reduction. Finding an optimal automaton using these reductions is also PSPACE-complete, but more amenable to heuristics like greedy algorithms. We start by introducing two high-level greedy algorithms, one for the size-driven and one for the error-driven reduction, and follow by showing their instantiations for the pruning and the self-loop reduction. A crucial role in the algorithms is played by a function that labels states of the automata with an estimate of the error that will be caused when one of the reductions is applied at a given state.

4.1 A General Algorithm for Size-Driven Reduction

Input: NFA $A = (Q, \delta, I, F)$, PA $P$, $n \ge 1$
Output: NFA $A'$, $\epsilon \in \mathbb{R}$ s.t. $|A'| \le n$ and $d_P(A, A') \le \epsilon$
1  $V \leftarrow \emptyset$;
2  for $q \in Q$ in the order $\preceq$ do
3      $V \leftarrow V \cup \{q\}$;  $A' \leftarrow \mathrm{reduce}(A, V)$;
4      if $|A'| \le n$ then break;
5
6  return $A'$, $\epsilon = \mathrm{error}(A, V, \mathrm{label}(A, P))$;

Algorithm 1: A greedy size-driven reduction

Algorithm 1 shows a general greedy method for performing the size-driven reduction. In order to use the same high-level algorithm for both directions of the reduction (over/under-approximating), it is parameterised with three functions: $\mathrm{label}$, $\mathrm{reduce}$, and $\mathrm{error}$. The real intricacy of the procedure is hidden inside these three functions. Intuitively, $\mathrm{label}(A, P)$ assigns every state of an NFA $A$ an approximation of the error that will be caused wrt the PA $P$ when a reduction is applied at this state, while the purpose of $\mathrm{reduce}(A, V)$ is to create a new NFA $A'$ obtained from $A$ by introducing some error at the states from $V$ (we emphasise that this does not mean that the states from $V$ will simply be removed from $A$—the performed operation depends on the particular reduction). Further, $\mathrm{error}(A, V, \mathrm{label}(A, P))$ estimates the error introduced by the application of $\mathrm{reduce}(A, V)$, possibly in a more precise (and costly) way than by just summing the concerned error labels: such a computation is possible outside of the main computation loop. We show instantiations of these functions later, when discussing the particular reductions. Moreover, the algorithm is also parameterised with a total order $\preceq$ that defines which states of $A$ are processed first and which are processed later. The ordering may take into account the precomputed labelling. The algorithm accepts an NFA $A$, a PA $P$, and a bound $n \ge 1$, and outputs a pair consisting of an NFA $A'$ of size $|A'| \le n$ and an error bound $\epsilon$ such that $d_P(A, A') \le \epsilon$.

The main idea of the algorithm is that it creates a set $V$ of states where an error is to be introduced. $V$ is constructed by starting from an empty set and adding states to it in the order given by $\preceq$, until the size of the result of $\mathrm{reduce}(A, V)$ has reached the desired bound $n$ (in our setting, $\mathrm{reduce}$ is always antitone, i.e., for $V \subseteq V'$, it holds that $|\mathrm{reduce}(A, V)| \ge |\mathrm{reduce}(A, V')|$). We now define the necessary condition on $\mathrm{label}$, $\mathrm{reduce}$, and $\mathrm{error}$ that makes Algorithm 1 correct.

Condition C1 holds if for every NFA $A$, PA $P$, and set $V \subseteq Q$, we have that (a) $d_P(A, \mathrm{reduce}(A, V)) \le \mathrm{error}(A, V, \mathrm{label}(A, P))$, (b) $|\mathrm{reduce}(A, Q)| \le 1$, and (c) $\mathrm{reduce}(A, \emptyset) = A$.

C1(a) ensures that the error computed by the reduction algorithm indeed over-approximates the exact probabilistic distance, C1(b) ensures that the algorithm can (in the worst case, by applying the reduction at every state of $A$) output, for any $n \ge 1$, a result $A'$ of size $|A'| \le n$, and C1(c) ensures that when no error is to be introduced at any state, we obtain the original automaton.
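The control flow of Algorithm 1 can be summarised by the following Python sketch (an illustration only; `label`, `reduce_fn`, and `error_fn` are assumed to be implementations of the functions discussed above, e.g. those of §4.3 or §4.4, and states are processed in the order of their labels, as in our evaluation):

```python
def size_driven_reduction(A, P, n, label, reduce_fn, error_fn):
    """Greedy size-driven reduction (Algorithm 1): add states to V in the order of
    their labels until the reduced automaton has at most n states."""
    labels = label(A, P)                       # precomputed error estimate per state
    V = set()
    A_red = reduce_fn(A, V)                    # equals A for V = {} by C1(c)
    for q in sorted(A.states, key=lambda s: labels[s]):
        V.add(q)
        A_red = reduce_fn(A, V)
        if len(A_red.states) <= n:             # reduce is antitone in V, so we stop here
            break
    return A_red, error_fn(A, V, labels)
```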

Lemma 3

Algorithm 1 is correct if C1 holds.

Proof

Follows straightforwardly from Condition C1. ∎

4.2 A General Algorithm for Error-Driven Reduction

Input: NFA $A = (Q, \delta, I, F)$, PA $P$, $\epsilon \ge 0$
Output: NFA $A'$ s.t. $d_P(A, A') \le \epsilon$
1  $\ell \leftarrow \mathrm{label}(A, P)$;
2  $V \leftarrow \emptyset$;
3  for $q \in Q$ in the order $\preceq$ do
4      $e \leftarrow \mathrm{error}(A, V \cup \{q\}, \ell)$;
5      if $e \le \epsilon$ then $V \leftarrow V \cup \{q\}$;
6
7  return $\mathrm{reduce}(A, V)$;
Algorithm 2: A greedy error-driven reduction.

In Algorithm 2, we provide a high-level method for computing the error-driven reduction. The algorithm is in many ways similar to Algorithm 1; it also computes a set of states $V$ where an error is to be introduced, but an important difference is that we compute an approximation of the error in each step and only add the considered state to $V$ if doing so does not raise the error over the threshold $\epsilon$. Note that the error estimate does not need to be monotone, so it may be advantageous to traverse all states from $Q$ and not terminate as soon as the threshold is reached. The correctness of Algorithm 2 also depends on C1.
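An analogous sketch of Algorithm 2 (under the same assumptions as for the size-driven sketch above):

```python
def error_driven_reduction(A, P, eps, label, reduce_fn, error_fn):
    """Greedy error-driven reduction (Algorithm 2): add a state to V only if the
    estimated error stays below eps; all states are traversed, since the error
    estimate need not be monotone in V."""
    labels = label(A, P)
    V = set()
    for q in sorted(A.states, key=lambda s: labels[s]):
        if error_fn(A, V | {q}, labels) <= eps:
            V.add(q)
    return reduce_fn(A, V)
```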

Lemma 4

Algorithm 2 is correct if C1 holds.

Proof

Follows straightforwardly from Condition C1. ∎

4.3 Pruning Reduction

The pruning reduction is based on identifying a set of states to be removed from an NFA $A$, under-approximating the language of $A$. In particular, the pruning reduction finds a set $V \subseteq Q$ and restricts $A$ to $Q \setminus V$, followed by removing useless states, to construct a reduced automaton $A' = \mathrm{trim}(A_{|Q \setminus V})$. Note that the natural decision problem corresponding to this reduction is also PSPACE-complete.
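Reusing the NFA sketch from §2, the pruning operation itself is a few lines of Python (illustrative only; the function name is ours):

```python
def reduce_prune(A, V):
    """Pruning reduction: remove the states in V and trim the result.
    The language is under-approximated, since only accepting runs avoiding V survive."""
    keep = A.states - set(V)
    delta = {(p, a): succs & keep
             for (p, a), succs in A.delta.items() if p in keep}
    restricted = NFA(keep, delta, A.initial & keep, A.final & keep)
    return restricted.trim()
```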

Lemma 5

Consider an NFA $A$, a PA $P$, a bound on the number of states $n \ge 1$, and an error bound $\epsilon \ge 0$. It is PSPACE-complete to determine whether there exists a subset of states $V \subseteq Q$ of size $|V| = n$ such that $d_P(A, \mathrm{trim}(A_{|Q \setminus V})) \le \epsilon$.

Although Lemma 5 shows that the pruning reduction is as hard as a general reduction (cf. Lemma 2), the pruning reduction is more amenable to the use of heuristics like the greedy algorithms from §4.1 and §4.2. We instantiate $\mathrm{label}$, $\mathrm{reduce}$, and $\mathrm{error}$ in these high-level algorithms in the following way (the subscript $p$ means pruning):

$$\mathrm{label}_p(A, P) \in \{\ell_1, \ell_2, \ell_3\}, \qquad \mathrm{reduce}_p(A, V) = \mathrm{trim}(A_{|Q \setminus V}), \qquad \mathrm{error}_p(A, V, \ell) = \min\Big\{\textstyle\sum_{q \in V'} \ell(q) \;\Big|\; V' \in \lfloor V \rfloor\Big\},$$

where the labellings $\ell_1$, $\ell_2$, $\ell_3$ and the set $\lfloor V \rfloor$ are defined as follows. Because of the use of $\mathrm{trim}$ in $\mathrm{reduce}_p$, for a pair of sets $V$ and $V'$ s.t. $V \subset V'$, it holds that $\mathrm{reduce}_p(A, V)$ may, in general, yield the same automaton as $\mathrm{reduce}_p(A, V')$. Hence, we define a partial order $\sqsubseteq$ on $2^Q$ as $V_1 \sqsubseteq V_2$ iff $\mathrm{reduce}_p(A, V_1) = \mathrm{reduce}_p(A, V_2)$ and $V_1 \subseteq V_2$, and use $\lfloor V \rfloor$ to denote the set of elements that are minimal wrt $\sqsubseteq$ and smaller than or equal to $V$. The value of the approximation $\mathrm{error}_p(A, V, \ell)$ is therefore the minimum of the sum of the errors over all sets from $\lfloor V \rfloor$.

Note that the size of $\lfloor V \rfloor$ can again be exponential, and thus we employ a greedy approach for guessing an optimal $V' \in \lfloor V \rfloor$. Clearly, this cannot affect the soundness of the algorithm; it can only decrease the precision of the bound on the distance. Our experiments indicate that, for automata appearing in NIDSes, this simplification typically has only a negligible impact on the precision of the bounds.

For computing the state labelling, we provide the following three functions, which differ in the precision they provide and the difficulty of their computation (naturally, more precise labellings are harder to compute): $\mathrm{label}^1_p$, $\mathrm{label}^2_p$, and $\mathrm{label}^3_p$. Given an NFA $A$ and a PA $P$, they generate the labellings $\ell_1$, $\ell_2$, and $\ell_3$, respectively, defined as

$$\ell_1(q) = \sum_{q_f \in \mathrm{reach}(\{q\}) \cap F} \Pr\nolimits_P\big(L_A^{-1}(q_f)\big), \qquad \ell_2(q) = \Pr\nolimits_P\big(L_A^{-1}(\mathrm{reach}(\{q\}) \cap F)\big), \qquad \ell_3(q) = \Pr\nolimits_P\big(L_A^{-1}(q) \cdot L_A(q)\big).$$
A state label $\ell(q)$ approximates the error of the words removed from $L(A)$ when $q$ is removed. More concretely, $\ell_1(q)$ is a rough estimate saying that the error can be bounded by the sum of the probabilities of the banguages of all final states reachable from $q$ (in the worst case, all those final states might become unreachable). Note that $\ell_1$ (1) counts the error of a word accepted in two different reachable final states twice, and (2) also considers words that are accepted in some final state reachable from $q$ without going through $q$. The labelling $\ell_2$ deals with (1) by computing the total probability of the banguage of the set of all final states reachable from $q$, and the labelling $\ell_3$ in addition also deals with (2) by only considering words that traverse through $q$ (they can still be accepted in some final state not reachable from $q$ though, so even $\ell_3$ is still imprecise). Note that if $A$ is unambiguous then $\ell_1 = \ell_2$.

When computing the label of $q$, we first modify $A$ to obtain an NFA $A_q$ accepting the language related to the particular labelling. Then, we compute the value of $\Pr_P(L(A_q))$ using the algorithm from §3.1. Recall that this step is in general costly, due to the determinisation/disambiguation of $A_q$. The key property of the labelling computation resides in the fact that if $A$ is composed of several disjoint sub-automata, the automaton $A_q$ is typically much smaller than $A$, and thus the computation of the label is considerably less demanding. Since the automata appearing in regex matching for NIDSes are composed of a union of “tentacles”, the particular $A_q$'s are very small, which enables an efficient component-wise computation of the labels.

The following lemma states the correctness of using the pruning reduction as an instantiation of Algorithms 1 and 2 and also the relation among $\ell_1$, $\ell_2$, and $\ell_3$.

Lemma 6

For every $i \in \{1, 2, 3\}$, the functions $\mathrm{label}^i_p$, $\mathrm{reduce}_p$, and $\mathrm{error}_p$ satisfy C1. Moreover, consider an NFA $A$, a PA $P$, and let $\ell_i = \mathrm{label}^i_p(A, P)$ for $i \in \{1, 2, 3\}$. Then, for each state $q \in Q$, we have $\ell_1(q) \ge \ell_2(q) \ge \ell_3(q)$.

4.4 Self-loop Reduction

The main idea of the self-loop reduction is to over-approximate the language of $A$ by adding self-loops over every symbol at selected states. This makes some states of $A$ redundant, allowing them to be removed without introducing any more error. Given an NFA $A$, the self-loop reduction searches for a set of states $V \subseteq Q$, which will have self-loops added, and removes other transitions leading out of these states, making some states unreachable. The unreachable states are then removed.

Formally, let $\mathrm{sl}(A, V)$ be the NFA $(Q, \delta', I, F \cup V)$ whose transition function $\delta'$ is defined, for all $q \in Q$ and $a \in \Sigma$, as $\delta'(q, a) = \{q\}$ if $q \in V$ and $\delta'(q, a) = \delta(q, a)$ otherwise. As with the pruning reduction, the natural decision problem corresponding to the self-loop reduction is also PSPACE-complete.
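A corresponding sketch of the self-loop operation, again reusing the NFA class from §2 (illustrative only; `alphabet` is passed explicitly since the sketched class does not store $\Sigma$):

```python
def reduce_selfloop(A, V, alphabet):
    """Self-loop reduction: every state in V gets a self-loop over each symbol, loses
    its other outgoing transitions, and is made accepting; states that become
    unreachable are trimmed away.  Every word of L(A) remains accepted, so the
    language is over-approximated."""
    V = set(V)
    delta = {(p, a): set(succs) for (p, a), succs in A.delta.items() if p not in V}
    for q in V:
        for a in alphabet:
            delta[(q, a)] = {q}
    looped = NFA(A.states, delta, A.initial, A.final | V)
    return looped.trim()
```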

Lemma 7

Consider an NFA $A$, a PA $P$, a bound on the number of states $n \ge 1$, and an error bound $\epsilon \ge 0$. It is PSPACE-complete to determine whether there exists a subset of states $V \subseteq Q$ of size $|V| = n$ such that $d_P(A, \mathrm{sl}(A, V)) \le \epsilon$.

The required functions in the error- and size-driven reduction algorithms are instantiated in the following way (the subscript $sl$ means self-loop):

$$\mathrm{label}_{sl}(A, P) \in \{\hat{\ell}_1, \hat{\ell}_2, \hat{\ell}_3\}, \qquad \mathrm{reduce}_{sl}(A, V) = \mathrm{trim}(\mathrm{sl}(A, V)), \qquad \mathrm{error}_{sl}(A, V, \ell) = \sum_{q \in V'} \ell(q),$$

where $V'$ is the minimal set with $\mathrm{reduce}_{sl}(A, V') = \mathrm{reduce}_{sl}(A, V)$ and $V' \subseteq V$, i.e., $\mathrm{error}_{sl}$ is defined in a similar manner as $\mathrm{error}_p$ in the previous section (using a partial order $\sqsubseteq_{sl}$ defined similarly to $\sqsubseteq$; in this case, the order has a single minimal element, though).

The functions $\mathrm{label}^1_{sl}$, $\mathrm{label}^2_{sl}$, and $\mathrm{label}^3_{sl}$ compute the state labellings $\hat{\ell}_1$, $\hat{\ell}_2$, and $\hat{\ell}_3$ for an NFA $A$ and a PA $P$ defined as follows:

$$\hat{\ell}_1(q) = \widehat{\Pr}\nolimits_P\big(L_A^{-1}(q)\big), \qquad \hat{\ell}_2(q) = \Pr\nolimits_P\big(L_A^{-1}(q) \cdot \Sigma^*\big), \qquad \hat{\ell}_3(q) = \Pr\nolimits_P\big(L_A^{-1}(q) \cdot \Sigma^*\big) - \Pr\nolimits_P\big(L_A^{-1}(q) \cdot L_A(q)\big).$$

Above, for a PA $P$ and a word $w \in \Sigma^*$, $\widehat{\Pr}_P(w)$ is defined as $\boldsymbol{\alpha}^\top \cdot \Delta_w \cdot \vec{1}$ (i.e., similarly as $\Pr_P(w)$ but with the final weights $\boldsymbol{\gamma}$ discarded), and for a language $L \subseteq \Sigma^*$, $\widehat{\Pr}_P(L)$ is defined as $\sum_{w \in L} \widehat{\Pr}_P(w)$.

Intuitively, the state labelling $\hat{\ell}_1(q)$ computes the probability that $q$ is reached from an initial state, so if $q$ is pumped up with all possible word endings, this is the maximum possible error introduced by the added word endings. This has the following sources of imprecision: (1) the probability of some words may be included twice, e.g., when $q$ can be reached from an initial state via both a word $w$ and its extension $ww'$, the probability of the words from $ww' \cdot \Sigma^*$ is, in effect, counted twice in $\hat{\ell}_1(q)$, and (2) $\hat{\ell}_1(q)$ can also contain the probabilities of words that are already accepted on a run traversing $q$. The state labelling $\hat{\ell}_2$ deals with (1) by considering the probability of the language $L_A^{-1}(q) \cdot \Sigma^*$, and $\hat{\ell}_3$ deals also with (2) by subtracting from the result of $\hat{\ell}_2$ the probabilities of the words that pass through $q$ and are accepted.

The computation of the state labellings for the self-loop reduction is done in a similar way as the computation of the state labellings for the pruning reduction (cf. §4.3). For the computation of $\hat{\ell}_1$, one can use the same algorithm as for the pruning labellings, only the final vector of the PA is set to $\vec{1}$. The correctness of Algorithms 1 and 2 when instantiated using the self-loop reduction is stated in the following lemma.

Lemma 8

For every $i \in \{1, 2, 3\}$, the functions $\mathrm{label}^i_{sl}$, $\mathrm{reduce}_{sl}$, and $\mathrm{error}_{sl}$ satisfy C1. Moreover, consider an NFA $A$, a PA $P$, and let $\hat{\ell}_i = \mathrm{label}^i_{sl}(A, P)$ for $i \in \{1, 2, 3\}$. Then, for each state $q \in Q$, we have $\hat{\ell}_1(q) \ge \hat{\ell}_2(q) \ge \hat{\ell}_3(q)$.

5 Reduction of NFAs in Network Intrusion Detection Systems

We have implemented our approach in a Python prototype named Appreal (APProximate REduction of Automata and Languages), available at https://github.com/vhavlena/appreal/tree/tacas18, and evaluated it on the use case of network intrusion detection using Snort [1], a popular open source NIDS. The version of Appreal used for the evaluation in the current paper is available as an artifact [39] for the TACAS'18 artifact virtual machine [40].

5.1 Network Traffic Model

The reduction we describe in this paper is driven by a probabilistic model representing a distribution over $\Sigma^*$, and the formal guarantees are also wrt this model. We use learning to obtain a model of the network traffic over the 8-bit ASCII alphabet at a given network point. Our model is created from several gigabytes of network traffic from a measuring point of the CESNET Internet provider connected to a 100 Gbps backbone link (unfortunately, we cannot provide the traffic dump since it may contain sensitive data).

Learning a PA that faithfully represents the network traffic is hard. The PA cannot be too specific—although the number of different packets that can occur is finite, it is still extremely large (a conservative estimate assuming the most common scenario Ethernet/IPv4/TCP still yields an astronomically large number of possible packets). If we assigned non-zero probabilities only to the packets from the dump (whose number is incomparably smaller), the obtained model would completely ignore virtually all packets that might appear on the network, and, moreover, the model would also be very large (millions of states), making it difficult to use in our algorithms. A generalization of the obtained traffic is therefore needed.

A natural solution is to exploit results from the area of PA learning, such as [41, 42]. Indeed, we experimented with the use of Alergia [41], a learning algorithm that constructs a PA from a prefix tree (where edges are labelled with multiplicities) by merging nodes that are “similar.” The automata that we obtained were, however, too general. In particular, the constructed automata destroyed the structure of network protocols—the merging was too permissive and the generalization merged distant states, which introduced loops over a very large substructure in the automaton (such a case usually does not correspond to the design of network protocols). As a result, the obtained PA more or less represented the Poisson distribution, having essentially no value for us.

In §5.2, we focus on the detection of malicious traffic transmitted over HTTP. We take advantage of this fact and create a PA representing the traffic while taking into account the structure of HTTP. We start by manually creating a DFA that represents the high-level structure of HTTP. Then, we proceed by feeding 34,191 HTTP packets from our sample into the DFA, at the same time taking notes about how many times every state is reached and how many times every transition is taken. The resulting PA (of 52 states) is then obtained from the DFA and the labels in the obvious way.
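The counting-and-normalisation step can be sketched as follows (hypothetical helper names; `dfa.step`, `dfa.states`, and `dfa.initial_state` are assumed interfaces of the hand-written HTTP skeleton DFA):

```python
from collections import Counter

def dfa_counts_to_pa(dfa, packets):
    """Replay packet payloads through a DFA, count how often each transition is taken
    and how often a payload ends in each state, and normalise the counts into the
    initial/final/transition weights of a PA."""
    trans_count = Counter()   # (state, symbol, next_state) -> count
    end_count = Counter()     # state -> number of payloads ending there
    for payload in packets:
        q = dfa.initial_state
        for symbol in payload:
            q_next = dfa.step(q, symbol)
            trans_count[(q, symbol, q_next)] += 1
            q = q_next
        end_count[q] += 1

    alpha = {q: (1.0 if q == dfa.initial_state else 0.0) for q in dfa.states}
    gamma, Delta = {}, {}
    for q in dfa.states:
        total = end_count[q] + sum(c for (p, _, _), c in trans_count.items() if p == q)
        gamma[q] = end_count[q] / total if total else 1.0
        for (p, symbol, r), c in trans_count.items():
            if p == q:
                Delta[(q, symbol, r)] = c / total   # outgoing weights + gamma[q] sum to 1
    return alpha, gamma, Delta
```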

The described method yields automata that are much better than those obtained using Alergia in our experiments. A disadvantage of the method is that it is only semi-automatic—the basic DFA needed to be provided by an expert. We have yet to find an algorithm that would suit our needs for learning more general network traffic.

5.2 Evaluation

We start this section by introducing the experimental setting, namely, the integration of our reduction techniques into the tool chain implementing efficient regex matching, the concrete settings of Appreal, and the evaluation environment. Afterwards, we discuss the results evaluating the quality of the obtained approximate reductions as well as of the provided error bounds. Finally, we present the performance of our approach and discuss its key aspects. Due to the lack of space, we selected the most interesting results demonstrating the potential as well as the limitations of our approach.

General setting.

Snort detects malicious network traffic based on rules that contain conditions. The conditions may take into consideration, among others, network addresses, ports, or Perl compatible regular expressions (PCREs) that the packet payload should match. In our evaluation, we always select a subset of Snort rules, extract the PCREs from them, and use Netbench [20] to transform them into a single NFA $A$. Before applying Appreal, we use the state-of-the-art NFA reduction tool Reduce [43] to decrease the size of $A$. Reduce performs a language-preserving reduction of $A$ using advanced variants of simulation [32] (in the experiment reported in Table 3, we skip the use of Reduce at this step, as discussed in the performance evaluation). The automaton obtained as the result of Reduce is the input of Appreal, which performs one of the approximate reductions from §4 wrt the traffic model $P$, yielding an NFA $A'$. After the approximate reduction, we use Reduce once more and obtain the final result $A''$.

Settings of Appreal.

In the use case of NIDS pre-filtering, it may be important to never introduce a false negative, i.e., to never drop a malicious packet. Therefore, we focus our evaluation on the self-loop reduction (§4.4). In particular, we use the state labelling function $\mathrm{label}^2_{sl}$, since it provides a good trade-off between the precision and the computational demands (recall that its computation can exploit the “tentacle” structure of the NFAs we work with). We give more attention to the size-driven reduction (§4.1) since, in our setting, a bound on the available FPGA resources is typically given and the task is to create an NFA with the smallest error that fits inside. The order $\preceq$ over states used in §4.1 and §4.2 is given by the computed state labelling, i.e., $q \preceq q'$ iff $\hat{\ell}_2(q) \le \hat{\ell}_2(q')$.

Evaluation environment.

All experiments were run on a 64-bit Linux Debian workstation with an Intel Core(TM) i5-661 CPU running at 3.33 GHz and 16 GiB of RAM.

Description of tables.

In the caption of every table, we provide the name of the input file (in the directory regexps/tacas18/ of the repository of Appreal) with the selection of Snort regexes used in the particular experiment, together with the type of the reduction (size- or error-driven). All reductions are over-approximating (self-loop reduction). We further provide the size of the input automaton $A$, its size after the initial processing by Reduce, and the time of this reduction. Finally, we list the time of computing the state labelling on the Reduce-processed automaton, the time of computing the exact probabilistic distance, and also the number of look-up tables (LUTs) consumed on the targeted FPGA (Xilinx Virtex 7 H580T) when the final automaton $A''$ was synthesized (more on this in §5.3). The meaning of the columns in the tables is the following:

  • $k$/$\epsilon$ is the parameter of the reduction. In particular, $k$ is used for the size-driven reduction and denotes the desired reduction ratio, i.e., for an input NFA $A$, the desired size of the output is $k \cdot |A|$. On the other hand, $\epsilon$ is the desired maximum error of the output for the error-driven reduction.

  • $|A'|$ shows the number of states of the automaton $A'$ obtained after the reduction by Appreal and the time the reduction took (we omit the time when it is not interesting).

  • $|A''|$ contains the number of states of the NFA $A''$ obtained after applying Reduce on $A'$ and the time used by Reduce at this step (omitted when not interesting).

  • Error bound shows the estimate of the error of $A''$ as determined by the reduction itself, i.e., the bound on the probabilistic distance computed by the function $\mathrm{error}_{sl}$ from §4.

  • Exact error contains the value of $d_P(A, A'')$ that we computed after the reduction in order to evaluate the precision of the result given in Error bound. The computation of this value is very expensive since it inherently requires determinisation of the whole automaton $A''$. We do not provide it in Table 3 (presenting the results for the http-backdoor automaton, which has 1,352 states) because the determinisation ran out of memory (the step is not required in the reduction process).

  • Traffic error shows the error that we obtained when comparing $A''$ with $A$ on an HTTP traffic sample, in particular, the ratio of the number of packets misclassified by $A''$ to the total number of packets in the sample (242,468). Comparing Exact error with Traffic error gives us feedback about the fidelity of the traffic model $P$. We note that there are no guarantees on the relationship between Exact error and Traffic error.

  • LUTs is the number of LUTs consumed by $A''$ when synthesized into the FPGA. Hardware synthesis is a costly step, so we provide this value only for selected NFAs.

k     |A'| (time)    |A''| (time)   Error bound   Exact error   Traffic error   LUTs
0.1   9 (0.65 s)     9 (0.4 s)      0.0704        0.0704        0.0685          –
0.2   19 (0.66 s)    19 (0.5 s)     0.0677        0.0677        0.0648          –
0.3   29 (0.69 s)    26 (0.9 s)     0.0279        0.0278        0.0598          154
0.4   39 (0.68 s)    36 (1.1 s)     0.0032        0.0032        0.0008          –
0.5   49 (0.68 s)    44 (1.4 s)     2.8e-05       2.8e-05       4.1e-06         –
0.6   58 (0.69 s)    49 (1.7 s)     8.7e-08       8.7e-08       0.0             224
0.8   78 (0.69 s)    75 (2.7 s)     2.4e-17       2.4e-17       0.0             297

(a) size-driven reduction

ε       |A'|   |A''|   Error bound   Exact error   Traffic error
0.08    3      3       0.0724        0.0724        0.0720
0.07    4      4       0.0700        0.0700        0.0683
0.04    35     32      0.0267        0.0212        0.0036
0.02    36     33      0.0105        0.0096        0.0032
0.001   41     38      0.0005        0.0005        0.0003
1e-04   47     41      7.7e-05       7.7e-05       1.2e-05
1e-05   51     47      6.6e-06       6.6e-06       0.0

(b) error-driven reduction
Table 1: Results for the http-malicious regexes ($A_{\mathit{mal}}$): computing the state labelling took 38.7 s and computing the exact probabilistic distance took 3.8–6.5 s.

Approximation errors

k     |A'| (time)    |A''| (time)   Error bound   Exact error   Traffic error
0.1   11 (1.1 s)     5 (0.4 s)      1.0           0.9972        0.9957
0.2   22 (1.1 s)     14 (0.6 s)     1.0           0.8341        0.2313
0.3   33 (1.1 s)     24 (0.7 s)     0.081         0.0770        0.0067
0.4   44 (1.1 s)     37 (1.6 s)     0.0005        0.0005        0.0010
0.5   56 (1.1 s)     49 (1.2 s)     3.3e-06       3.3e-06       0.0010
0.6   67 (1.1 s)     61 (1.9 s)     1.2e-09       1.2e-09       8.7e-05
0.7   78 (1.1 s)     72 (2.4 s)     4.8e-12       4.8e-12       1.2e-05
0.9   100 (1.1 s)    93 (4.7 s)     3.7e-16       1.1e-15       0.0

Table 2: Results for the http-attacks regexes ($A_{\mathit{att}}$), size-driven reduction: computing the state labelling took 28.3 min and computing the exact probabilistic distance took 14.0–16.4 min.

Table 1 presents the results of the self-loop reduction for the NFA $A_{\mathit{mal}}$ describing the regexes from http-malicious. We can observe that the differences between the upper bounds on the probabilistic distance and its real value are negligible (typically in the order of $10^{-4}$ or less). We can also see that the probabilistic distance agrees well with the traffic error. This indicates a good quality of the traffic model employed in the reduction process. Further, we can see that our approach can provide useful trade-offs between the reduction error and the reduction factor. Finally, Table 1(b) shows that a significant reduction is obtained when the error threshold $\epsilon$ is increased from 0.04 to 0.07.

Table 2 presents the results of the size-driven self-loop reduction for the NFA $A_{\mathit{att}}$ describing the http-attacks regexes. We can observe that the error bounds again provide a very good approximation of the real probabilistic distance. On the other hand, the difference between the probabilistic distance and the traffic error is larger than for $A_{\mathit{mal}}$. Since all experiments use the same probabilistic automaton and the same traffic, this discrepancy is accounted to the different set of packets that are incorrectly accepted by the reduced automaton. If the probability of these packets is adequately captured in the traffic model, the difference between the distance and the traffic error is small, and vice versa. This also explains the even larger difference in Table 3 (presenting the results for $A_{\mathit{bd}}$, constructed from the http-backdoor regexes) for $k \in \{0.3, 0.4\}$. Here, the traffic error is very small and caused by a small set of packets (approx. 70), whose probability is not correctly captured in the traffic model. Despite this problem, the results clearly show that our approach still provides significant reductions while keeping the traffic error small: about a 5-fold reduction is obtained for the traffic error 0.03 % and a 10-fold reduction is obtained for the traffic error 6.3 %. We discuss the practical impact of such a reduction in §5.3.

Performance of the approximate reduction

k     |A'| (time)     |A''| (time)    Error bound   Traffic error   LUTs
0.1   135 (1.2 m)     8 (2.6 s)       1.0           0.997           202
0.2   270 (1.2 m)     111 (5.2 s)     0.0012        0.0631          579
0.3   405 (1.2 m)     233 (9.8 s)     3.4e-08       0.0003          894
0.4   540 (1.3 m)     351 (21.7 s)    1.0e-12       0.0003          1063
0.5   676 (1.3 m)     473 (41.8 s)    1.2e-17       0.0             1249
0.7   946 (1.4 m)     739 (2.1 m)     8.3e-30       0.0             1735
0.9   1216 (1.5 m)    983 (5.6 m)     1.3e-52       0.0             2033

Table 3: Results for the http-backdoor regexes ($A_{\mathit{bd}}$), size-driven reduction: the input automaton has 1,352 states, computing the state labelling took 19.9 min, and Reduce was not applied before Appreal in this experiment.

In all our experiments (Tables 1–3), we can observe that the most time-consuming step of the reduction process is the computation of the state labellings (it takes at least 90 % of the total time). The crucial observation is that the structure of the NFAs fundamentally affects the performance of this step. Although, after Reduce, the size of $A_{\mathit{att}}$ is very similar to the size of $A_{\mathit{mal}}$, computing the state labelling takes much more time (28.3 min vs. 38.7 s). The key reason behind this slowdown is the determinisation (or, alternatively, disambiguation) process required by the product construction underlying the state labelling computation (cf. §4.4). For $A_{\mathit{att}}$, the process results in a significantly larger product when compared to the product for $A_{\mathit{mal}}$. The size of the product directly determines the time and space complexity of solving the linear equation system required for computing the state labelling.

As explained in §4, the computation of the state labelling can exploit the “tentacle” structure of the NFAs appearing in NIDSes and can thus be done component-wise. On the other hand, our experiments reveal that the use of Reduce typically breaks this structure, and thus the component-wise computation cannot be effectively used. For the NFA $A_{\mathit{mal}}$, this behaviour does not have any major performance impact as the determinisation leads to a moderate-sized automaton and the state labelling computation takes less than 40 s. On the other hand, this behaviour has a dramatic effect for the NFA $A_{\mathit{att}}$. By disabling the initial application of Reduce and thus preserving the original structure of $A_{\mathit{att}}$, we were able to speed up the state label computation from 28.3 min to 1.5 min. Note that the other steps of the approximate reduction took a similar time as before disabling Reduce, and also that the trade-offs between the error and the reduction factor were similar. Surprisingly, disabling Reduce caused the computation of the exact probabilistic distance to become computationally infeasible because the determinisation ran out of memory.

Due to the size of the NFA $A_{\mathit{bd}}$, the impact of disabling the initial application of Reduce is even more fundamental. In particular, computing the state labelling took only 19.9 min, in contrast to running out of memory when Reduce is applied in the first step (therefore, the input automaton is not processed by Reduce in Table 3; we still give the number of LUTs of its Reduce-processed version for comparison, though). Note that the size of $A_{\mathit{bd}}$ also slows down the other reduction steps (the greedy algorithm and the final Reduce reduction). We can, however, clearly see that computing the state labelling is still the most time-consuming step.

5.3 The Real Impact in an FPGA-Accelerated NIDS

Further, we also evaluated some of the obtained automata in the setting of [5] implementing a high-speed NIDS pre-filter. In that setting, the amount of resources available for the regex matching engine is 15,000 LUTs (we omit the analysis of flip-flop consumption because, in our setting, it is dominated by the LUT consumption) and the frequency of the engine is 200 MHz. We synthesized NFAs that use a 32-bit-wide data path, corresponding to processing 4 ASCII characters at once, which is—according to the analysis in [5]—the best trade-off between the utilization of the chip resources and the maximum achievable frequency. A simple analysis shows that the throughput of one automaton is then 6.4 Gbps, so in order to reach the desired link speed of 100 Gbps, 16 units are required, and 63 units are needed to handle 400 Gbps. With the given amount of LUTs, we are therefore bounded by 937 LUTs per unit for 100 Gbps and 238 LUTs per unit for 400 Gbps.
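The arithmetic behind these numbers is easy to re-check (a small sketch; the constants come from the setting described above):

```python
import math

CLOCK_HZ = 200e6               # engine frequency
BITS_PER_CYCLE = 32            # 4 ASCII characters per clock cycle
LUT_BUDGET = 15_000            # LUTs available for the matching engine

unit_gbps = CLOCK_HZ * BITS_PER_CYCLE / 1e9            # 6.4 Gbps per automaton copy
for link_gbps in (100, 400):
    units = math.ceil(link_gbps / unit_gbps)           # 16 units for 100 G, 63 for 400 G
    luts_per_unit = LUT_BUDGET // units                # 937 and 238 LUTs, respectively
    print(link_gbps, units, luts_per_unit)
```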

We focused on the consumption of LUTs by an implementation of the regex matching engines for http-backdoor ($A_{\mathit{bd}}$) and http-malicious ($A_{\mathit{mal}}$).

  • 100 Gbps: For this speed, $A_{\mathit{mal}}$ can be used without any approximate reduction as it is small enough to fit into the available space. On the other hand, $A_{\mathit{bd}}$ without the approximate reduction is way too large to fit (at most 6 units fit inside the available space, yielding a throughput of only 38.4 Gbps, which is unacceptable). The column LUTs in Table 3 shows that, using our framework, we are able to reduce $A_{\mathit{bd}}$ such that it uses 894 LUTs (for $k$ = 0.3), and so all the needed 16 units fit into the FPGA, yielding a throughput over 100 Gbps and a theoretical bound on the error of a false positive of 3.4e-08 wrt the model $P$.

  • 400 Gbps: Regex matching at this speed is extremely challenging. The only reduced version of $A_{\mathit{bd}}$ that fits in the available space is the one for $k$ = 0.1, with an error bound of almost 1. The situation is better for $A_{\mathit{mal}}$. In the exact version, at most 39 units can fit inside the FPGA, with a maximum throughput of 249.6 Gbps. On the other hand, when using our approximate reduction framework, we are able to place 63 units into the FPGA, each of the size of 224 LUTs ($k$ = 0.6), with a throughput over 400 Gbps and a theoretical bound on the error of a false positive of 8.7e-08 wrt the model $P$.

6 Conclusion

We have proposed a novel approach for approximate reduction of NFAs used in network traffic filtering. Our approach is based on a proposal of a probabilistic distance between the original and the reduced automaton using a probabilistic model of the input network traffic, which characterizes the significance of particular packets. We characterized the computational complexity of approximate reductions based on the described distance and proposed a sequence of heuristics allowing one to perform the approximate reduction in an efficient way. Our experimental results are quite encouraging and show that we can often achieve a very significant reduction for a negligible loss of precision. We showed that, using our approach, FPGA-accelerated network filtering at high speeds can be applied to regexes describing malicious traffic where it could not be applied before.

In the future, we plan to investigate other approximate reductions of the NFAs, maybe using some variant of abstraction from abstract regular model checking [44], adapted for the given probabilistic setting. Another important issue for the future is to develop better ways of learning a suitable probabilistic model of the input traffic.

Data Availability Statement and Acknowledgements.

The tool used for the experimental evaluation in the current study is available in the following figshare repository: https://doi.org/10.6084/m9.figshare.5907055. We thank Jan Kořenek, Vlastimil Košař, and Denis Matoušek for their help with translating regexes into automata and the synthesis of FPGA designs, and Martin Žádník for providing us with the backbone network traffic. We thank Stefan Kiefer for helping us prove the PSPACE part of Lemma 1 and Petr Peringer for testing our artifact. The work on this paper was supported by the Czech Science Foundation project 16-17538S, the IT4IXS: IT4Innovations Excellence in Science project (LQ1602), and the FIT BUT internal project FIT-S-17-4014.

References

  • [1] The Snort Team: Snort (http://www.snort.org).
  • [2] Becchi, M., Wiseman, C., Crowley, P.: Evaluating regular expression matching engines on network and general purpose processors. In: Proceedings of the 5th ACM/IEEE Symposium on Architectures for Networking and Communications Systems. ANCS ’09, ACM (2009) 30–39
  • [3] Kořenek, J., Kobierský, P.: Intrusion detection system intended for multigigabit networks. In: 2007 IEEE Design and Diagnostics of Electronic Circuits and Systems. (April 2007) 1–4
  • [4] Kaštil, J., Kořenek, J., Lengál, O.: Methodology for fast pattern matching by deterministic finite automaton with perfect hashing. In: 2009 12th Euromicro Conference on Digital System Design, Architectures, Methods and Tools. (2009) 823–829
  • [5] Matoušek, D., Kořenek, J., Puš, V.: High-speed regular expression matching with pipelined automata. In: 2016 International Conference on Field-Programmable Technology (FPT). (2016) 93–100
  • [6] Kumar, S., Dharmapurikar, S., Yu, F., Crowley, P., Turner, J.S.: Algorithms to accelerate multiple regular expressions matching for deep packet inspection. In: SIGCOMM’06, ACM (2006) 339–350
  • [7] Tan, L., Sherwood, T.: A high throughput string matching architecture for intrusion detection and prevention. In: ISCA’05, IEEE Computer Society (2005) 112–122
  • [8] Kumar, S., Turner, J.S., Williams, J.: Advanced algorithms for fast and scalable deep packet inspection. In: ANCS’06, ACM (2006) 81–92
  • [9] Becchi, M., Crowley, P.: A hybrid finite automaton for practical deep packet inspection. In: CoNEXT’07, ACM (2007)  1
  • [10] Becchi, M., Crowley, P.: An improved algorithm to accelerate regular expression evaluation. In: ANCS’07, ACM (2007) 145–154
  • [11] Kumar, S., Chandrasekaran, B., Turner, J.S., Varghese, G.: Curing regular expressions matching algorithms from insomnia, amnesia, and acalculia. In: ANCS’07, ACM (2007) 155–164
  • [12] Yu, F., Chen, Z., Diao, Y., Lakshman, T.V., Katz, R.H.: Fast and memory-efficient regular expression matching for deep packet inspection. In: ANCS’06, ACM (2006) 93–102
  • [13] Liu, C., Wu, J.: Fast deep packet inspection with a dual finite automata. IEEE Trans. Computers 62(2) (2013) 310–321
  • [14] Luchaup, D., De Carli, L., Jha, S., Bach, E.: Deep packet inspection with DFA-trees and parametrized language overapproximation. In: INFOCOM’14, IEEE (2014) 531–539
  • [15] Mitra, A., Najjar, W.A., Bhuyan, L.N.: Compiling PCRE to FPGA for accelerating SNORT IDS. In: ANCS’07, ACM (2007) 127–136
  • [16] Brodie, B.C., Taylor, D.E., Cytron, R.K.: A scalable architecture for high-throughput regular-expression pattern matching. In: ISCA’06, IEEE Computer Society (2006) 191–202
  • [17] Clark, C.R., Schimmel, D.E.: Efficient reconfigurable logic circuits for matching complex network intrusion detection patterns. In: FPL’03. Volume 2778 of Lecture Notes in Computer Science., Springer (2003) 956–959
  • [18] Hutchings, B.L., Franklin, R., Carver, D.: Assisting network intrusion detection with reconfigurable hardware. In: FCCM’02, IEEE Computer Society (2002) 111–120
  • [19] Sidhu, R.P.S., Prasanna, V.K.: Fast regular expression matching using FPGAs. In: FCCM'01, IEEE Computer Society (2001) 227–238
  • [20] Puš, V., Tobola, J., Košař, V., Kaštil, J., Kořenek, J.: Netbench: Framework for evaluation of packet processing algorithms. Symposium On Architecture For Networking And Communications Systems (2011) 95–96
  • [21] Maletti, A., Quernheim, D.: Optimal hyper-minimization. CoRR abs/1104.3007 (2011)
  • [22] Gawrychowski, P., Jez, A.: Hyper-minimisation made efficient. In: MFCS’09. Volume 5734 of Lecture Notes in Computer Science., Springer (2009) 356–368
  • [23] Gange, G., Ganty, P., Stuckey, P.J.: Fixing the state budget: Approximation of regular languages with small dfas. In: ATVA’17. Volume 10482 of Lecture Notes in Computer Science., Springer (2017) 67–83
  • [24] Parker, A.J., Yancey, K.B., Yancey, M.P.: Regular language distance and entropy. CoRR abs/1602.07715 (2016)
  • [25] Mohri, M.: Edit-distance of weighted automata. In: CIAA’02. Volume 2608 of Lecture Notes in Computer Science., Springer (2002) 1–23
  • [26] Jiang, T., Ravikumar, B.: Minimal NFA problems are hard. SIAM Journal on Computing 22(6) (1993) 1117–1141
  • [27] Malcher, A.: Minimizing finite automata is computationally hard. Theor. Comput. Sci. 327(3) (2004) 375–390
  • [28] Hopcroft, J.E.: An N log N algorithm for minimizing states in a finite automaton. Technical report (1971)
  • [29] Paige, R., Tarjan, R.E.: Three partition refinement algorithms. SIAM J. Comput. 16(6) (1987) 973–989
  • [30] Bustan, D., Grumberg, O.: Simulation-based minimization. ACM Trans. Comput. Log. 4(2) (2003) 181–206
  • [31] Champarnaud, J., Coulon, F.: NFA reduction algorithms by means of regular inequalities. Theor. Comput. Sci. 327(3) (2004) 241–253
  • [32] Mayr, R., Clemente, L.: Advanced automata minimization. In: POPL'13, ACM (2013) 63–74
  • [33] Etessami, K.: A hierarchy of polynomial-time computable simulations for automata. In: CONCUR 2002 - Concurrency Theory, 13th International Conference, Brno, Czech Republic, August 20-23, 2002, Proceedings. Volume 2421 of Lecture Notes in Computer Science., Springer (2002) 131–144
  • [34] Clemente, L.: Büchi automata can have smaller quotients. In: ICALP’11. Volume 6756 of Lecture Notes in Computer Science., Springer (2011) 258–270
  • [35] Vardi, M.Y.: Automatic verification of probabilistic concurrent finite state programs. In: SFCS '85, IEEE Computer Society (1985) 327–338
  • [36] Baier, C., Kiefer, S., Klein, J., Klüppelholz, S., Müller, D., Worrell, J.: Markov chains and unambiguous Büchi automata. In: CAV’16, Springer (2016) 23–42
  • [37] Baier, C., Kiefer, S., Klein, J., Klüppelholz, S., Müller, D., Worrell, J.: Markov chains and unambiguous Büchi automata. CoRR abs/1605.00950 (2016)
  • [38] Mohri, M.: A disambiguation algorithm for finite automata and functional transducers. In: CIAA’12. Springer (2012) 265–277
  • [39] Češka, M., Havlena, V., Holík, L., Lengál, O., Vojnar, T.: Approximate reduction of finite automata for high-speed network intrusion detection. In: Figshare. (2018) https://doi.org/10.6084/m9.figshare.5907055.
  • [40] Hartmanns, A., Wendler, P.: TACAS 2018 artifact evaluation VM. In: Figshare. (2018) https://doi.org/10.6084/m9.figshare.5896615.
  • [41] Carrasco, R.C., Oncina, J.: Learning stochastic regular grammars by means of a state merging method. In: Proceedings of the Second International Colloquium on Grammatical Inference and Applications. ICGI ’94, Springer-Verlag (1994) 139–152
  • [42] Thollard, F., Clark, A.: Learning stochastic deterministic regular languages. In Paliouras, G., Sakakibara, Y., eds.: Grammatical Inference: Algorithms and Applications: 7th International Colloquium, ICGI 2004, Athens, Greece, October 11-13, 2004. Proceedings, Berlin, Heidelberg, Springer Berlin Heidelberg (2004) 248–259
  • [43] Mayr, R., et al.: Reduce: A tool for minimizing nondeterministic finite-word and Büchi automata. http://languageinclusion.org/doku.php?id=tools [Online; accessed 2017-09-30].
  • [44] Bouajjani, A., Habermehl, P., Rogalewicz, A., Vojnar, T.: Abstract regular (tree) model checking. STTT 14(2) (2012) 167–191
  • [45] Csanky, L.: Fast parallel matrix inversion algorithms. In: 16th Annual Symposium on Foundations of Computer Science. (Oct 1975) 11–12
  • [46] Fortune, S., Wyllie, J.: Parallelism in random access machines. In: Proceedings of the Tenth Annual ACM Symposium on Theory of Computing. STOC '78, New York, NY, USA, ACM (1978) 114–118

  • [47] Papadimitriou, C.M.: Computational complexity. Addison-Wesley (1994)
  • [48] Hogben, L.: Handbook of Linear Algebra. 2nd edn. CRC Press (2013)
  • [49] Solodovnikov, V.I.: Upper bounds on the complexity of solving systems of linear equations. Journal of Soviet Mathematics 29(4) (May 1985) 1482–1501

Appendix 0.A Proofs of Lemmas

Some of the proofs use an auxiliary single-state PA $P_\mu$ in which every symbol of $\Sigma$ is assigned the same transition weight and the final weight is positive. $P_\mu$ models an exponential distribution over the words from $\Sigma^*$ (wrt their length); in particular, $P_\mu$ assigns every word a probability that decreases exponentially with the word's length. We use $P_\mu$ to assign every word over $\Sigma$ a non-zero probability; any other PA with the same property would work as well.

Lemma 1 (restated)

Proof

We prove the first and the second part of the lemma independently.

  1. Computing $\Pr_P(L(A))$ is PSPACE-complete for an NFA $A$. The membership in PSPACE can be shown as follows. The computation described at the end of §3.1 corresponds to solving a linear equation system. The system has an exponential size because of the blowup caused by the determinisation/disambiguation of $A$ required by the product construction. The equation system can, however, be constructed by a PSPACE transducer. Moreover, as solving linear equation systems can be done using a polylogarithmic-space transducer, one can combine these two transducers to obtain a PSPACE algorithm. Details of the construction follow:

    • First, we construct a transducer that, given an NFA $A$ and a PA $P$ on its input, constructs a system of linear equations with one unknown per state of the product of $P$ and the deterministic automaton obtained from $A$ using the standard subset construction. The system is defined as follows (cf. [37]):