Consistent SDNs through Network State Fuzzing

The conventional wisdom is that a software-defined network (SDN) operates under the premise that the logically centralized control plane has an accurate representation of the actual data plane state. Nevertheless, bugs, misconfigurations, faults, or attacks can introduce inconsistencies that undermine correct operation. Previous work in this area, however, lacks a holistic methodology to tackle this problem and thus addresses only parts of it. Yet, the consistency of the overall system is only as good as its least consistent part. Motivated by an analogy between network consistency checking and program testing, we propose to add active probe-based network state fuzzing to our consistency check repertoire. Our system, PAZZ, combines production traffic with active probes to continuously test whether the actual forwarding paths and decision elements (on the data plane) correspond to the expected ones (on the control plane). Our insight is that active traffic covers inconsistency cases beyond those identified by passive traffic. We built a PAZZ prototype and evaluated it on topologies of varying scale and complexity. Our results show that PAZZ requires minimal network resources to detect persistent data plane faults through fuzzing and to localize them quickly.






1 Introduction

The correctness of a software-defined network (SDN) crucially depends on the consistency between the management, the control, and the data plane. There are, however, many causes that may trigger inconsistencies at run time, including switch hardware failures [1, 2, 3], bit flips [4, 5], misconfigurations [6, 7, 8, 9, 10, 11], priority bugs [12, 13], and control and switch software bugs [14, 15, 16]. When an inconsistency occurs, the actual data plane state does not correspond to what the control plane expects it to be. Even worse, a malicious user may actively try to trigger inconsistencies as part of an attack vector.

Figure 1 shows a visualization inspired by the one by Heller et al. [17], highlighting where consistency checks operate. The figure illustrates the three network planes – management, control, and data plane – with their components. The management plane establishes the network-wide policy, which corresponds to the network operator's intent. To realize this policy, the control plane governs a set of logical rules over a logical topology, which yield a set of logical paths. The data plane consists of the actual physical topology, the physical rules, and the resulting physical forwarding paths.

Figure 1: Overview of consistency checks described in the literature.

Consistency checking is a complex problem. Prior work has tackled individual subpieces of the problem as highlighted by Figure 1, which we augmented with related work. Monocle [5], RuleScope [18], and RuleChecker [19] use active probing to verify whether the logical rules of the control plane are the same as the physical rules of the data plane. ATPG [3] creates test packets based on the control plane rules to verify whether the paths taken by packets on the data plane are the same as the expected paths from the high-level policy, without attention to the matched rules. VeriDP [20] uses production traffic only to verify whether the paths taken by packets on the data plane are the same as the expected paths from the control plane. NetSight [21], PathQuery [22], CherryPick [23], and PathDump [24] use production traffic, whereas SDN Traceroute [25] uses active probes, to verify the physical paths. Control plane solutions focus on verifying network-wide invariants such as reachability, forwarding loops, slicing, and black hole detection against high-level network policies, both for stateless and stateful policies. This includes tools [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42] that monitor and verify some or all of the network-wide invariants by comparing the high-level network policy with the logical rule set, which translates to the logical path set at the control plane. These systems only model the network behavior, which is insufficient to capture firmware and hardware bugs, as "modelling" and verifying control-data plane consistency are significantly different techniques.

Typically, previous approaches to consistency checking proceed "top-down," starting from what is known to the management and control planes, and subsequently checking whether the data plane is consistent. We claim that this is insufficient and underline this with several examples (§2.3) in which data plane inconsistencies would go undetected. This can be a major problem because, to use an analogy from security, the overall system consistency is only as good as the weakest link in the chain.

We argue that we need to complement existing top-down approaches with a bottom-up approach. To this end, we rely on an analogy to program testing. Programs can have a huge state space, just like networks. There are two basic approaches to test program correctness: static testing and dynamic testing using fuzz testing, or fuzzing [43, 44]. The latter is often needed because the former cannot capture the actual run-time behavior. We realize that the same holds true for network state.

Fuzz testing involves testing a program with invalid, unexpected, or random data as inputs. The art of designing an effective fuzzer lies in generating semi-valid inputs that are valid enough not to be directly rejected by the parser, yet create unexpected behaviors deeper in the program and are invalid enough to expose corner cases that have not been dealt with properly. For a network, this corresponds to checking its behavior not only with the expected production traffic but also with unexpected or abnormal packets. In networking, however, what is expected or unexpected depends not only on the input (ingress) port but also on the topology up to the exit (egress) port and on the configuration, i.e., the rules on the switches. Thus, there is a huge state space to explore. Relying only on production traffic is not sufficient because production traffic may or may not trigger inconsistencies. Moreover, faults that can be triggered at any point in time by a change in production traffic, e.g., malicious or accidental, are undesirable for a stable network. Thus, we need fuzz testing for checking network consistency. Accordingly, this paper introduces Pazz, which combines such capabilities with previous approaches to verify SDNs (such as those deployed in a campus or datacenter environment) against persistent data plane faults.

Our Contributions:

  • We identify and categorize the causes and symptoms of data plane faults that are currently unaddressed, providing useful insights into the limitations of existing approaches. Based on these insights, we make a case for a fuzz testing mechanism for campus and private datacenter SDNs (section 2);

  • We introduce a novel methodology, Pazz, which detects and later localizes faults by comparing control vs. data plane information for all three components: rules, topology, and paths. It uses production traffic as well as active probes (to fuzz test the data plane state) (section 3);

  • We develop and evaluate a Pazz prototype in multiple experimental topologies representative of multi-path/grid campus and private datacenter SDNs. Our evaluations demonstrate that fuzzing through Pazz detects and localizes data plane faults faster than a baseline approach in all experimental topologies while consuming minimal network resources (section 4);

2 Background & Motivation

This section briefly navigates the landscape of faults and reviews the symptoms and causes (section 2.1) to set the stage for the program testing analogy in networks (section 2.2). Finally, we highlight the scenarios of data plane faults manifesting as inconsistency (section 2.3).

2.1 Landscape of Faults: Causes and Symptoms

As per the survey [2], the top primary causes of abnormal network behaviour or failures, in order of their frequency of occurrence, are the following:

  • Software bugs: code errors, bugs, etc.

  • Hardware failures or bugs: bit errors or bitflips, switch failures, etc.

  • Attacks and external causes: compromised security, DoS/DDoS, etc.

  • Misconfigurations: ACL misconfigurations, protocol misconfigurations, etc.

In SDNs, the above causes still exist and are persistent [3, 4, 5, 12, 13, 14, 15, 16, 45, 46, 47]. We, however, realized that the symptoms [2] of the above causes can manifest either as functional or as performance-based problems on the data plane. To clarify further, the symptoms are either functional (reachability, security policy correctness, forwarding loops, broadcast/multicast storms) or performance-based (high router CPU utilization, congestion, latency/throughput degradation, intermittent connectivity). If we disregard the performance-based symptoms, the functional problems can be reduced to the verification of network correctness. Making the situation worse, the faults manifest as inconsistency, where the expected network state at the control plane differs from the actual state on the data plane.

A physical network or network data plane comprises devices and links. In SDNs, these devices are SDN switches connected through links. The data plane of an SDN is defined by the network behaviour when subjected to input in the form of traffic. Just as programs need different test cases as inputs to achieve code coverage, the network behaviour must be dynamically tested with traffic of different coverage [48]. Historically, network correctness or functional verification on the data plane has been either path-based (network-wide) [20, 21, 22, 23, 24, 25, 49] or rule-based (mostly switch-specific) [3, 5, 25, 19, 18, 50, 51]. Path-based verification can be end-to-end or hop-by-hop, whereas rule-based verification proceeds switch by switch. The notion of network coverage brings us to the concept of packet header space coverage.

Figure 2: Example topology with two example ingress/egress port pairs (source-destination pairs) and their packet header space coverage.

2.2 Packet Header Space Coverage: Active vs Passive

We observe that networks, just like programs, can have a huge distributed state space. Packets with their packet headers, including source IP, destination IP, port numbers, etc., are the inputs, and the state includes all forwarding equivalence classes defined by the flow rules. Note that every pair of ingress-egress ports (source-destination pair) can have different forwarding equivalence classes. Rather than using the term forwarding equivalence classes (which is tied to MPLS and QoS), we use the term covered packet header space. Our motivation is that the forwarding equivalence classes refer to parts of the packet header space. Indeed, the rules together with the topology and the available paths define which part of the header space is covered and which one is uncovered for each pair of ingress and egress ports. Therefore, for a given pair of ingress and egress ports, when receiving traffic on the egress port from the ingress port, we can check if the packet is covered by the corresponding "packet header space". If it is within the space, it is "expected"; otherwise it is "unexpected" and, thus, we have discovered an inconsistency due to the presence of a fault on the data plane.
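To make the coverage check concrete, here is a minimal Python sketch (our illustration, not Pazz code) that models the covered header space of one ingress/egress pair as a list of match entries and classifies a received packet as expected or unexpected; all prefixes, ports, and names are illustrative:

```python
import ipaddress

# Illustrative covered header space for one ingress/egress port pair:
# a list of (src_prefix, dst_prefix, dst_port_range) match entries.
COVERED = [
    (ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.2.0/24"), range(80, 81)),
    (ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.3.0/25"), range(1024, 65536)),
]

def is_expected(src, dst, dst_port, covered=COVERED):
    """Return True iff the packet header lies inside the covered space."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in sp and d in dp and dst_port in pr for sp, dp, pr in covered)

# A packet inside the covered space is "expected"; anything else signals a
# potential inconsistency for this ingress/egress pair.
```

In a real system, the covered space would be derived from the control plane's rules and topology rather than hand-listed.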

Consider the example topology in Figure 2. It consists of four switches S0, S1, S2, and S3. Let us focus on two ingress ports and one egress port. The figure also includes the possible packet header space coverage. For the first ingress/egress pair, the coverage includes matches on the source and destination IPs. For the second pair, it includes matches on the destination IP and possible destination port ranges.

When testing a network, if traffic adheres to a specific packet header space, there are multiple possible cases. If we observe a packet sent via an ingress port and received at an egress port, we need to check whether it is within the covered area; if it is not, we refer to the packet as "unexpected" and have found an inconsistency for that packet header space caused by a fault. If a packet from an ingress port is within the expected packet header space of multiple egress ports, we need to check whether the sequence of rules expected to be matched and the path(s) expected to be taken by the packet correspond to the actual output port on the data plane. This is yet another way of finding inconsistencies caused by faults.

Related work           Traffic type      Rule-based   Path-based
ATPG [3] (a)           Active            (✓)          (✓)
Monocle [5]            Active            (✓)          –
RuleScope [18]         Active            (✓)          –
RuleChecker [19] (b)   Active            (✓)          –
SDNProbe [50]          Active            (✓)          (✓)
FOCES [49]             Passive           (✓)          (✓)
VeriDP [20]            Passive           (✓)          (✓)
NetSight [21]          Passive           (✓)          (✓)
PathQuery [22]         Passive           (✓)          (✓)
CherryPick [23] (c)    Passive           (✓)          (✓)
PathDump [24]          Active            (✓)          (✓)
SDN Traceroute [25]    Active            (✓)          (✓)
TPP [51] (d)           Active            (✓)          (✓)
Pazz                   Active, Passive   ✓            ✓

(a) In this tool, if the packet is received at the expected destination from a source, the path is considered to be the same.
(b) In this tool, the authors claim that it may detect match and action faults, without guarantee.
(c) In this tool, issues in only symmetrical topologies are addressed.
(d) In this tool, end-hosts embed tiny packet programs for verification.

Table 1: Classification of related work on the data plane based on the type of verification and the packet header space coverage. ✓ denotes full capability, (✓) denotes a part of full capability, and – denotes missing capability.

To take the analogy with testing a "program" even further, programmers should not only write test cases to test or "cover" all program functions but should also write negative test cases to "fuzz test" via invalid, semi-valid, unexpected, and/or random input data. Thus, in networking, we should not only test the network state with the "expected" production traffic, but also with specially crafted probe packets that test corner cases. In principle, there are two ways of testing network forwarding: passive and active. Passive corresponds to using the existing production traffic, while active refers to sending specific probe traffic. The advantage of passive traffic is that it has low overhead and that popular forwarding paths are tested repeatedly. However, production traffic may (a) not cover all cases (it covers only faults that production traffic can trigger); (b) change rapidly; and (c) delay fault detection until the fraction of traffic triggering the fault arrives. Indeed, malicious users may be able to inject malformed traffic that triggers faults. Thus, production traffic may not cover the whole packet header space achievable by active probing.

Furthermore, we should also fuzz test the network state. This is important because we derive our network state from the controller's information, and we cannot presume that the controller state is complete and/or accurate. Thus, we propose to generate packets that are outside of the covered packet header space of an ingress/egress port pair. We suggest doing this by systematically and continuously testing the header space just outside of the covered header space. For example, if port 80 is within the covered header space, test ports 81 and 79. If x.0/17 is in the covered header space, test for x.1.0.0, which is part of the x.1/17 prefix. In addition, we propose to randomly probe the remaining packet header space continuously by generating appropriate test traffic. The goal of active traffic generation through fuzzing is to detect the faults identifiable by active traffic only.
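The boundary-probing strategy described above can be sketched as follows; this is our illustration of the idea, not Pazz's actual fuzzer, and the helper names and probe counts are assumptions:

```python
import ipaddress

def boundary_ports(covered_ports):
    """For each covered destination port, probe the ports just outside it,
    e.g., for covered port 80 probe 79 and 81 (unless themselves covered)."""
    covered = set(covered_ports)
    probes = set()
    for p in covered:
        for q in (p - 1, p + 1):
            if 0 <= q <= 65535 and q not in covered:
                probes.add(q)
    return sorted(probes)

def boundary_addresses(covered_prefix, count=4):
    """Probe the first few addresses just past the end of a covered prefix,
    i.e., inside the adjacent prefix of the same length."""
    net = ipaddress.ip_network(covered_prefix)
    first_outside = int(net.broadcast_address) + 1
    return [str(ipaddress.ip_address(first_outside + i)) for i in range(count)]
```

For example, `boundary_addresses("10.0.0.0/17")` yields addresses starting at 10.0.128.0, the beginning of the adjacent /17; random probing of the remaining header space would complement these systematic boundary probes.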

Table 1 classifies the existing data plane approaches by the type of verification or monitoring and by packet header space coverage. We see that the existing data plane verification approaches are insufficient when it comes to combining path- and rule-based verification with sufficient packet header space coverage. In this paper, our system Pazz aims to ensure packet header space coverage in addition to path- and rule-based verification to ensure network correctness on the data plane and, thus, to detect and localize persistent inconsistency.

Figure 3: Example misconfiguration with a hidden rule. Expected/actual routes are shown as blue/red arrows.

2.3 Data Plane Faults manifesting as Inconsistency

2.3.1 Faults identified by Passive Traffic: Type-p

To highlight the type of faults, consider a scenario shown in Figure 3. It has three OpenFlow switches (S1, S2, and S3) and one firewall (FW). Initially, S1 has three rules R1, R3, and R4. R4 is the least specific rule and has the lowest priority. R1 has the highest priority. Note the rules are written in the order of their priority.

Incorrect packet trajectory: We start by considering a known fault [20, 21, 24]: a hidden rule/misconfiguration. For this, rule R2 is added to S1 via the switch command-line utility. The controller remains unaware of R2: since R2 is a non-overlapping flow rule, it is installed without notification to the controller [52]. [5, 12] have hinted at this problem. As a result, traffic to IP x.1.1.31 bypasses the firewall as it uses a different path.

Priority faults [19] are another cause of such incorrect forwarding, where rule priorities either get swapped or are not taken into account. The Pronto-Pica8 3290 switch with PicOS 2.1.3 caches rules without accounting for rule priorities [13]. The HP ProCurve switch lacks rule priority support [12]. Furthermore, priority faults may manifest in many forms, e.g., they may cause trajectory changes or incorrect matches even when the trajectory remains the same. Action faults [3] can be another cause, where a bitflip in the action part of a flow rule results in a different trajectory.

Insight 1: Typically, the packet trajectory tools only monitor the path.

Correct packet trajectory, incorrect rule matching: If we add a higher-priority rule in a similar fashion such that the path does not change, i.e., the match and action remain the same as in the shadowed rule, then previous work is unable to detect it and, thus, the case is unaddressed. (We validated this via experiments. The OpenFlow specification [52] states that if a non-overlapping rule is added via the switch command-line utility, the controller is not notified.) Even if the packet trajectory is correct but the wrong rule is matched, serious damage can result. Misconfigurations, hidden rules, priority faults, and match faults (described next) may be the reason for incorrect matches. Next, we focus on match faults, where an anomaly in the match part of a forwarding flow rule on a switch causes packets to be matched incorrectly. We again highlight known as well as unaddressed cases, starting with a known scenario. In Figure 3, a bitflip (a previously unknown firmware bug in the HP 5406zl switch), e.g., due to hardware problems, changes R1 from x.1.1.0/28 to match from x.1.1.0 up to x.1.1.79. Traffic to x.1.1.17 is now forwarded based on R1 rather than R4 and thus bypasses the firewall. This may still be detectable, e.g., by observing the path of a test packet [20]. However, the bitflip in R1 also causes an overlap in the matches of R1 and R3 in switch S1, and both rules have the same action, i.e., forward to port 1. Thus, traffic to x.1.1.66 that is supposed to be matched by R3 will be matched by R1. If the network administrator later removes R3, the traffic pertaining to R3 still flows. This violates the network policy. In this paper, we categorize the data plane faults detectable by production traffic as Type-p faults.
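The effect of such a match bitflip can be reproduced numerically: clearing one bit of a TCAM mask makes the rule wildcard that bit, so a /28 that matched hosts .0 to .15 silently also matches .64 to .79, i.e., up to x.1.1.79 as in the example above. A minimal sketch over the last address octet (our illustration; the flipped bit and the encoding are assumptions):

```python
def matches(addr, value, mask):
    """TCAM-style ternary match: addr matches iff it agrees with `value`
    on every bit set in `mask` (1 = care, 0 = wildcard)."""
    return (addr & mask) == (value & mask)

def matched_set(value, mask, space):
    return {a for a in space if matches(a, value, mask)}

# Last octet only, for brevity: the rule matches x.1.1.0/28, i.e. value 0x00,
# mask 0xF0 over the final octet (addresses .0-.15).
space = range(256)
before = matched_set(0x00, 0xF0, space)

# A single bitflip clears one mask bit (bit 6): the rule now wildcards that
# bit and silently matches twice as many addresses, up to .79.
after = matched_set(0x00, 0xF0 ^ 0x40, space)
```

The newly matched range .64-.79 is exactly where the overlap with another rule (R3 in the example) can arise.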

Insight 2: Even if the packet trajectory remains the same, the matched rules need to be monitored.

Figure 4: Example misconfiguration with a hidden rule detectable by active probing only. Expected/actual routes are shown as blue/red arrows.

2.3.2 Faults identified by Active Traffic only: Type-a

To illustrate, we focus on the hidden or misconfigured rule R3 (in green) in Figure 4. This rule matches traffic to x.1.1.33 on switch S1, which reaches the confidential bank server; however, the expected production traffic does not belong to this packet header space [45, 46, 47]. Therefore, we need to generate probe packets to trigger such rules and thus detect their presence. This requires generating and sending traffic corresponding to the packet header space that is not expected by the control plane. We call this traffic fuzz traffic in the rest of the paper, since it tests the network behavior with unexpected packet header space. In this paper, we categorize the data plane faults detectable only by active or fuzz traffic as Type-a faults.

Insight 3: Tools that test rules check only rules "known" to the control plane (SDN controller) by generating active traffic for "known" flows.

Insight 4: Typically, active traffic for certain flows checks only whether the path remains the same, even when the rule(s) matched on the data plane may differ.

3 Pazz Methodology

Motivated by the insights gained in section 2.3 about the Type-p and Type-a faults on the data plane resulting in inconsistency, we aim to take consistency checks further. To this end, our novel methodology, Pazz, compares the forwarding rules, topology, and paths of the control and data planes, using top-down and bottom-up approaches, to detect and localize data plane faults.

Figure 5: Pazz Methodology.

Pazz, whose name derives from passive and active (fuzz) testing, takes into account both production and probe traffic to ensure adequate packet header space coverage. Pazz checks the matched forwarding flow rules as well as the links constituting the paths of the various packet headers (5-tuple microflows) present in the passive and active traffic. To detect faults, Pazz collects state information (in terms of reports) from the control and data planes: Pazz compares the "expected" state reported by the control plane to the "actual" state collected from the data plane. Figure 5 illustrates the Pazz methodology. It consists of four components:

  1. Control Plane Component (CPC): Uses the current controller information to proactively compute the packets that are reachable between any/every source-destination pair. It then sends the corpus of seed inputs to the Fuzzer. For any given packet header and source-destination pair, it reactively generates an expected report which encodes the paths and sequence of rules. (section 3.2)

  2. Fuzzer: Uses the information from the CPC to compute the packet header space not covered by the controller and, hence, the production traffic. It generates active traffic for fuzz testing the network. (section 3.3)

  3. Data Plane Component (DPC): For any given packet header and source-destination pair, it encodes the path and sequence of forwarding rules to generate a sampled actual report. (section 3.1)

  4. Consistency Tester: Detects and later localizes faults by comparing the expected report(s) from the CPC with the actual report(s) from the DPC. (section 3.4)

Now, we go through the components in a non-sequential manner for ease of description.

3.1 Data Plane Component (DPC)

To record the actual path of a packet and the rules matched in forwarding, we rely on tagging the packets contained in active and production traffic. In particular, we propose the use of a shim header that gives us sufficient space even for larger network diameters or larger flow rule sets. INT [53] can also be used for data plane monitoring; however, it is applicable to P4 switches [54] only. Unlike [22, 20, 23, 24, 21], we use our custom shim header for tagging, so tagging is possible without limiting forwarding capabilities. To avoid adding extra monitoring rules to the scarce TCAM, which may also affect the forwarding behavior [22, 55], we augment OpenFlow with new actions. Between any source-destination pair, the new actions are used by all rules of the switches to add/update the shim header, if necessary, to encode the sequence of inports (path) and matched rules. To remove the shim header, we use another custom OpenFlow action. To trigger the actual report to the Consistency Tester, we use sFlow [56] sampling; any other sampling tool could be used instead. Note that sFlow is a packet sampling technique: it samples packets, not flows, based on a sampling rate and polling interval. For a given source, the report contains the packet header, the shim header content, and the egress port of the exit switch (destination).

Even with a shim header (Verify), it is impractical to expect packets to have available and sufficient space to encode information about each port and rule on the path. Therefore, we rely on a combination of a bloom filter [57] and binary hash chains. For scalability, sampling is used before sending a report to the Consistency Tester.

Input : (pkt, s, i, o, r) for each incoming packet pkt at the switch with ID s; i is the inport ID and o the outport ID for packet pkt, and r is the flow rule used for forwarding it.
Output : pkt tagged, if necessary, with the Verify shim header.
// Is there already a shim header, i.e., is (s, i) an entry point (source port)?
1 if (pkt has no shim header) then
        // Add shim header with EtherType VerifyTagType; initialize the tag values: Verify_Port to the entry-point hash, Verify_Rule to 1.
2        push_verify(pkt);
// Determine the port ID from the switch ID s and the inport ID i
3 Verify_Port = Verify_Port OR hash(s, i); // Bloom filter
// Determine the rule ID from rule r and its table ID
4 Verify_Rule = hash(Verify_Rule, rule_id(s, r)); // Binary hash chain
// The shim header has to be removed if (s, o) is an exit point
5 if ((s, o) is an exit point) then
6        if (pkt has no shim header) then
                // For traffic injected between a source-destination pair
7               push_verify(pkt);
8        pop_verify(pkt);
Algorithm 1 Data Plane Tagging

Data Plane Tagging: To limit the overhead, we decided to insert the Verify shim header at layer 2. VerifyTagType is the EtherType for the Verify header, Verify_Port (a bloom filter) encodes the local inport in a switch, and Verify_Rule (a binary hash chain) encodes the local rule(s) in a switch. Thus, the encoding is done with the help of the bloom filter and the binary hash chain, respectively. To take actions on the proposed Verify shim header and to save TCAM space, we propose four new OpenFlow actions: two for adding (push_verify) and removing (pop_verify) the Verify shim header, and two for updating the Verify_Port (set_Verify_Port) and Verify_Rule (set_Verify_Rule) header fields, respectively. Since the sizes of the header fields and the tagging actions are implementation-specific, we explain them in the prototype sections 4.1.1 and 4.1.2, respectively. Algorithm 1 explains the data plane tagging between a source-destination pair. For each packet, from either the production or the active traffic (section 3.3), entering the source inport, the Verify shim header is added automatically by the switch. At each switch on the path, the Verify_Port and Verify_Rule fields in the packet are updated automatically. Figure 6 illustrates the per-switch tagging approach. Once the packet leaves the destination outport, the resulting report, known as the actual report, is sent to the Consistency Tester (section 3.4). Note that if there is no Verify header, the Verify shim header is pushed on the exit switch to ensure that any traffic injected at any switch interface between a source-destination pair gets tagged. To reduce the overhead on the Consistency Tester as well as on the switch, we employ sampling at the egress port. Note that we test the network continuously, as the data plane is dynamic due to reconfigurations, link/switch/interface failures, and topology changes.
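A rough sketch of the per-switch tag updates of Algorithm 1, with a bloom filter for Verify_Port and a binary hash chain for Verify_Rule; the field widths, hash function, and number of bloom positions are our assumptions, not the prototype's actual parameters:

```python
import hashlib

PORT_BITS = 32   # illustrative Verify_Port width
RULE_BITS = 32   # illustrative Verify_Rule width

def _h(data, bits):
    """Deterministic hash of a string into `bits` bits (illustrative)."""
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big") % (2 ** bits)

def tag_port(verify_port, switch_id, inport, k=3):
    """Bloom-filter insert: OR in k bit positions derived from (switch, inport)."""
    for i in range(k):
        verify_port |= 1 << _h(f"{switch_id}/{inport}/{i}", 5)  # positions 0..31
    return verify_port

def tag_rule(verify_rule, switch_id, rule_id):
    """Binary hash chain: fold the matched rule into the running tag."""
    return _h(f"{verify_rule}/{switch_id}/{rule_id}", RULE_BITS)

# Per-switch update along a path, as in Algorithm 1 (Verify_Rule starts at 1):
vp, vr = 0, 1
for sw, inport, rule in [("S1", 2, "R1"), ("S2", 1, "R7"), ("S3", 3, "R4")]:
    vp = tag_port(vp, sw, inport)
    vr = tag_rule(vr, sw, rule)
```

Note the design consequence: the bloom filter is order-independent (good for membership checks on traversed ports), while the hash chain is order-sensitive and therefore encodes the sequence of matched rules.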

Figure 6: Data plane tagging using bloom filters and hashing.

3.2 Control Plane Component (CPC)

In principle, we could use existing control plane mechanisms, including HSA [29], NetPlumber [30], and APVerifier [37]. In addition to the experiments in [37], our independent experiments show that Binary Decision Diagram (BDD)-based [58] solutions like [37] perform better for set operations on headers than HSA [29] and NetPlumber [30]. In the following, we therefore propose a novel BDD-based solution that supports rule verification in addition to path verification (APVerifier [37] takes only paths into account). Specifically, our Control Plane Component (CPC) performs two functions: a) proactive reachability and corpus computation, and b) reactive tag computation.

Proactive Reachability & Corpus Computation: We start by introducing an abstraction of a single switch configuration called a switch predicate. In a nutshell, a switch predicate specifies the forwarding behavior of the switch for a given set of incoming packets and is defined, in turn, by rule predicates. More formally, the general configuration abstraction of an SDN switch with ports 1 to n can be described by switch predicates P_{i,j}, where i, j ∈ {1, …, n} and n denotes the number of switch ports. The packet headers satisfying predicate P_{i,j} can be forwarded from port i to port j only. A switch predicate is defined via rule predicates, which are given by the flow rules belonging to the switch and a flow table. Each rule has an identifier that consists of a switch ID and a table ID representing the flow table in which the rule resides, an array representing the list of inports for that rule, an array representing the list of outports in the action of that rule, and the rule priority. Based on the rule priority, the match part, and the action part of a flow rule, each rule has a list of rule predicates (BDD predicates) which represent the set of packets that can be matched by the rule for the corresponding inport and forwarded to the corresponding outport.

Similar to the plumbing graph of [30], we generate a dependency graph of rules (henceforth called rule nodes), the reachability graph, based on the topology and switch configuration, which computes the set of packet headers between any source-destination pair. There exists an edge between two rules r1 and r2 if (1) an out_port of rule r1 is connected to an in_port of r2, and (2) the intersection of the rule predicates of r1 and r2 is non-empty. For computational efficiency, each rule node keeps track of the higher-priority rules in the same table in the switch. A rule node computes the match of each higher-priority rule, subtracting it from its own match. We refer to this as the same-table dependency of rules in a switch. In the following, by slightly abusing notation, we will use switch predicates and rule predicates to also denote the sets of packet headers they imply. Disregarding the ACL predicates for simplicity, the rule predicates R^k_{i,j} in each switch represent the packet header space forwarded from inport i to outport j. The switch predicates are then computed as the disjunction of the corresponding rule predicates: P_{i,j} = ⋁_k R^k_{i,j}.

More specifically, to determine the reachable packet header space (set of packet headers) between any source-destination pair in the network, we inject a fully wildcarded packet header set $h$ at the source port. If the intersection of the switch predicate and the current packet header set is non-empty, the packet set is forwarded to the next switch until we reach the destination port. Thus, we can compute reachability between any/every source-destination pair. For caching and tag computation, we simultaneously generate the inverse reachability graph to cache the traversed paths and the rules matched by a packet header between every source-destination pair. After the reachability/inverse reachability graph computation, CPC sends the current switch predicates of the entry and exit switch pertaining to a source-destination pair to Fuzzer as a corpus for fuzz traffic generation (section 3.3).
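The propagation step above can be sketched in a few lines. Header sets are toy Python sets rather than BDDs, topology names are invented, and the sketch assumes loop-free forwarding for brevity.

```python
# Illustrative reachability computation: inject a fully wildcarded header
# set at the source and intersect it with each hop's switch predicate;
# a hop is taken only if the intersection is non-empty.

def reachable_headers(topology, predicates, src, dst, universe):
    """topology: {switch: [next_switch, ...]}
    predicates: {(switch, next_switch): set of headers forwarded on that hop}
    Returns the set of headers that can travel src -> dst."""
    result = set()
    stack = [(src, set(universe))]           # fully wildcarded at the source
    while stack:
        node, headers = stack.pop()
        if node == dst:
            result |= headers
            continue
        for nxt in topology.get(node, []):
            passed = headers & predicates.get((node, nxt), set())
            if passed:                        # forward only non-empty sets
                stack.append((nxt, passed))
    return result

universe = {"h1", "h2", "h3"}
topology = {"S0": ["S1"], "S1": ["S3"]}
predicates = {("S0", "S1"): {"h1", "h2"}, ("S1", "S3"): {"h2", "h3"}}
print(reachable_headers(topology, predicates, "S0", "S3", universe))  # {'h2'}
```

Recording the `(node, headers)` pairs visited during this traversal is what yields the inverse reachability graph used later for tag computation.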

In the case of a FlowMod, the reachability/inverse reachability graph and a new corpus are re-computed. Recall that every rule node in a reachability graph keeps track of the higher-priority rules in its table in a switch. Therefore, only the affected part of the reachability/inverse reachability graph needs to be updated in the event of a rule addition/deletion. In the case of a rule addition, the same-table dependency of the rule is computed by comparing the priorities of the new and old rule/s before it is added as a new node in the reachability graph. If the priority of the new rule is higher than that of existing rule/s and there is an overlap in the match part, the switch predicate is re-computed from the new rule predicate by subtracting the new rule's match from the overlapping lower-priority rule predicates.

Similarly, if any rule is deleted, after checking its same-table dependency, the node is removed from the reachability graph and the switch/rule predicates are re-computed.

Reactive Tag Computation: For any given data plane report corresponding to a packet header between any source-destination pair, we traverse the pre-computed inverse reachability graph to generate a list of sequences of rules that can match and forward the actual packet header observed at a destination port from a source port. Note there can be multiple possible paths, e.g., due to multiple entry points and per-packet or per-flow load balancing. For a packet header, the expected Verify_Rule and Verify_Port tags are computed similarly as in Algorithm 1. The expected report is then sent to the Consistency Tester (section 3.4) for comparison. Note we can generate expected reports for any number of source-destination pairs.

3.3 Fuzzer

Inspired by code coverage-guided fuzzers such as LibFuzz [59], we design a mutation-based fuzz testing [60] component called Fuzzer. Fuzzer receives a corpus of seed inputs in the form of the switch predicates of the entry and exit switches from the CPC for a source-destination pair. In particular, the switch predicates pertaining to the inport of the entry switch (source) and the outport of the exit switch (destination) represent the expected covered packet header space, i.e., the set of packet headers satisfying those switch predicates. Fuzzer applies mutations to the corpus as per Algorithm 2.

Where Can Most Faults Hide: Before explaining Algorithm 2, we present a scenario to illustrate the packet header space area where potential faults can be present. Consider the example topology illustrated in Figure 2. Due to the huge header space of IPv6 (128-bit), we focus on the destination IPv4 header space (32-bit) in a case of destination-based routing. The switch predicates of the switches on the path represent their covered packet header spaces between the source-destination pair i-e. Note there can be multiple paths for the same packet header p. Now, assume there is only a single path; the reachable packet header space, or net covered packet header space area, is given by the intersection of the covered spaces of the switches on that path. Note this area corresponds to the control plane perspective, so there may be more or less coverage on the data plane. The production traffic is generated in an area which depends on the expected rules at an ingress port i for a packet header p destined to e. In principle, the production traffic will cover the net covered packet header space area. The active traffic should then be generated for the uncovered area, i.e., the universe of all possible packet headers minus the net covered area, where the universe spans all $2^{32}$ values of the destination IPv4 header space.

Input : Switch predicates of the entry switch ($P_{in}$) and exit switch ($P_{out}$) for a source-destination pair i-e
Output : Fuzz traffic header space
// Generate fuzz traffic in the difference of covered packet header space area between entry and exit switch
1 generate($P_{out} \setminus P_{in}$);
2 $U \leftarrow$ universe of the considered header space;
3 $R \leftarrow U \setminus (P_{in} \cup P_{out})$;
// Generate fuzz traffic in the completely uncovered packet header space area randomly
4 while $R \neq \emptyset$ do
5       $p \leftarrow$ random header in $R$; generate($p$);
6       $R \leftarrow R \setminus \{p\}$;
Algorithm 2 Fuzzer

As stated in section 2.2, we need to start active traffic generation on the boundary of the net covered packet header space area between a source-destination pair, as the possibility of faults is highest in this area. A packet will reach the destination from the source iff all of the rules on the switches along a path match it; otherwise it will be dropped either midway or at the first switch. Therefore, for end-to-end reachability, the rulesets on the entry and exit switches should match the packets contained in the production traffic belonging to the covered packet header space. This implies that we need to first generate active traffic in the difference between the exit and entry coverage and then generate it randomly in the leftover area. Traffic can, however, also be injected at any switch on any path between a source-destination pair, so the checking needs to be done for different source-destination pairs.

We now explain Algorithm 2 in the context of Figure 2. For active or fuzz traffic generation, if there is a difference in the covered packet header space areas of S0 and S3, we first generate traffic in that difference area (Line 1). Recall there is a high probability that there may be hidden rules in this area, since the header space coverage of the exit switch may be bigger than that of the entry switch. Later, we generate traffic randomly in the residual packet header space area (Lines 2-6). We generate traffic randomly in this area as it is usually a huge space and fault/s can lie anywhere. The fuzz traffic generated randomly covers the completely uncovered packet header space area. Thus, the fuzz traffic that the Fuzzer generates belongs to these two areas. It is worth noting that not all of the packets generated by the fuzz traffic are allowed in the network, due to a default drop rule in the switches. Therefore, if some packets in the fuzz traffic are matched, the reason can be attributed to the presence of faulty rule/s, wildcarded rules, or hardware/software faults that match such traffic. This also means that the fuzz traffic is unlikely to cause network congestion. As discussed previously, there is another scenario where the traffic gets injected at one of the switches on the path between a source-destination pair and may end up getting matched in the data plane. The Verify header is pushed at the exit switch if it is not already present (section 3.1, section 4.1.2), so the packets still get tagged in the data plane and are sent in the actual report. However, the CPC may generate empty Verify_Rule and Verify_Port tags as the traffic is unexpected. In such cases, the fault is still detected but may not be localized automatically (section 3.4). Furthermore, Fuzzer can be positioned to generate traffic at different inports to detect more faults in the network between any/every source-destination pair. If the production traffic does not cover all of the expected rules at the ingress or entry switch, the Fuzzer design can easily be tuned to also generate the traffic for critical flows. Our evaluations confirm that an exhaustive active traffic generator, which randomly generates traffic in the uncovered area, performs poorly compared to Pazz in real-world topologies (section 4.5).
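The two target areas of Algorithm 2 can be sketched over the 32-bit destination IPv4 space. Everything here (function names, corpus contents) is illustrative, not the Pazz implementation; covered spaces are modeled as lists of `ipaddress` networks.

```python
# Minimal sketch of Algorithm 2's two areas:
#  1. boundary area = covered at the exit switch but not at the entry switch
#  2. completely uncovered area = universe minus both covered spaces (sampled
#     randomly by rejection, since it is usually huge)
import ipaddress
import random

def in_space(addr, networks):
    return any(addr in net for net in networks)

def fuzz_targets(entry_cov, exit_cov, n_random=5, seed=0):
    rng = random.Random(seed)
    # Step 1 (Line 1): difference of exit and entry coverage.
    boundary = [ip for net in exit_cov for ip in net
                if not in_space(ip, entry_cov)]
    # Step 2 (Lines 2-6): random samples from the uncovered area.
    uncovered = []
    while len(uncovered) < n_random:
        ip = ipaddress.IPv4Address(rng.getrandbits(32))
        if not in_space(ip, entry_cov) and not in_space(ip, exit_cov):
            uncovered.append(ip)
    return boundary, uncovered

entry = [ipaddress.ip_network("10.0.0.0/30")]
exit_ = [ipaddress.ip_network("10.0.0.0/29")]  # exit covers more than entry
boundary, uncovered = fuzz_targets(entry, exit_)
print(boundary)  # the 4 addresses in 10.0.0.4-10.0.0.7
```

Prioritizing the boundary area first mirrors the observation that hidden rules tend to sit where exit coverage exceeds entry coverage.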
Note that if the network topology or configuration changes, the CPC sends a new corpus to the Fuzzer and Algorithm 2 is repeated. We thus continuously test the network with fuzz traffic across changes.

3.4 Consistency Tester

Input : Actual and expected reports containing the Verify_Rule and Verify_Port tags for a packet pertaining to a flow (5-tuple)
Output : Detected and localized faulty switch or faulty rule.
// Different rules were matched on the data plane.
1 if (Rule_dp ≠ Rule_cp) then
        // Fault is detected and reported.
2        Report fault
        // Different path was taken on the data plane.
3        if (Port_dp ≠ Port_cp) then
               // Localize the fault.
4               for hop ← source inport to destination outport do
5                      if (Port_cp[hop] AND Port_dp = Port_cp[hop]) then
6                            No problems for this switch hop
                      // Previous switch wrongly routed the packet.
7                      else
8                            Report previous switch as faulty; break
        // Path is the same even though the rules matched are different.
9        else
               // Localize a Type-p match fault.
10              if (an expected report exists) then
11                    Go through the switches hop-by-hop to find the faulty rule
               // Localize a Type-a fault.
12              else
13                     Faulty rule lies in the entry switch; else go through the switches hop-by-hop
// Different path was taken on the data plane.
14 else if (Port_dp ≠ Port_cp) then
        // Type-p action fault is detected and reported.
15        Report fault
        // Localize the Type-p action fault.
16        for hop ← source inport to destination outport do
17               if (Port_cp[hop] AND Port_dp = Port_cp[hop]) then
18                     No problems for this switch hop
               // Previous switch wrongly routed the packet.
19               else
20                     Report previous switch as faulty; break
21 else
22       No fault detected
Algorithm 3 Consistency Tester (detection, localization)

After receiving an actual report from the data plane, the Consistency Tester queries the CPC for the expected report for the packet header and the corresponding source-destination pair in the actual report. Once the Consistency Tester has received both reports, it compares them as per Algorithm 3 for fault detection and localization. To avoid confusion, we refer to the Verify_Rule and Verify_Port tags of the actual data plane report and those of the corresponding expected control plane report respectively. If the Verify_Rule tag differs for a packet header and a pair of ingress and egress ports, then a fault is detected and reported (Lines 1-2). Note that we avoid the bloom filter false positive problem by first matching the hash value of the Verify_Rule tag. Therefore, the detection accuracy is high unless a hash collision occurs in the Verify_Rule field (section 4.1.3). Once a fault is detected, the Consistency Tester uses the bloom filter for localization of faults where the actual path is different from the expected path, i.e., the actual Verify_Port bloom filter differs from the expected one (Lines 3-8). The actual Verify_Port is compared hop-by-hop with the expected per-switch value, starting from the source inport to the destination outport. This hop-by-hop walkthrough is done by traversing the reachability graph at the CPC from the source port of the entry switch to the destination port of the exit switch. As per Algorithm 3, a bitwise logical AND between the actual and the expected Verify_Port values is executed at every hop. It is, however, important to note that if the actual path was the same as the expected path even though the actual rules matched were different on the data plane (Lines 10-13), localizing the fault gets tricky, as it can be either a case of a Type-p match fault (e.g., a bitflip in the match part) (Lines 10-11) or a Type-a fault (Lines 12-13). Hereby, it is worth noting that there will be no expected report from the CPC in the case of unexpected fuzz traffic. Therefore, the Consistency Tester checks whether an expected report exists (Line 10).
If so, localization can be done through hop-by-hop manual inspection or manual polling of the expected switches (Lines 10-11); otherwise the Type-a fault may be localized to the entry switch, as it has a faulty rule that allows the unexpected fuzz traffic into the network (Lines 12-13). There is another scenario where the actual rules matched are the same as expected but the path is different (Lines 14-20). This is a case of a Type-p action fault (e.g., a bitflip in the action part of the rule). In this case, the expected and actual bloom filters can be compared and, thus, the Type-p action fault is detected and localized. Note the action fault is Type-p as it is caused by production traffic.
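The decision structure of Algorithm 3 can be condensed into a small sketch. The fault labels follow the paper's narrative; the report dictionary layout (`rule_tag`, `port_bf`, `hops`) is an assumption made for this illustration.

```python
# Illustrative sketch of Algorithm 3: compare the actual (data plane) and
# expected (control plane) reports and classify/localize the fault.

def check(actual, expected):
    """actual/expected: {'rule_tag': int, 'port_bf': int, 'hops': [int, ...]}.
    Returns (fault_detected, diagnosis)."""
    if expected is None:
        # Unexpected (fuzz) traffic: detected, localization is manual.
        return True, "Type-a fault: localize manually / at entry switch"
    if actual["rule_tag"] != expected["rule_tag"]:
        if actual["port_bf"] != expected["port_bf"]:
            # Walk hop-by-hop: the first expected hop tag missing from the
            # actual bloom filter implicates the previous switch.
            for i, hop in enumerate(expected["hops"]):
                if hop & actual["port_bf"] != hop:
                    return True, f"fault localized before hop {i}"
            return True, "fault on path, hop not isolated"
        # Same path, different rules: Type-p match or Type-a fault.
        return True, "Type-p match / Type-a fault: inspect hop-by-hop"
    if actual["port_bf"] != expected["port_bf"]:
        return True, "Type-p action fault: compare bloom filters per hop"
    return False, "no fault detected"

# Same rules, different path -> Type-p action fault.
ok, why = check({"rule_tag": 7, "port_bf": 0b1010, "hops": []},
                {"rule_tag": 7, "port_bf": 0b1100, "hops": []})
print(ok, why)
```

The per-hop membership test (`hop & port_bf == hop`) is the bitwise-AND bloom filter check the algorithm performs at every hop.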

The binary hash chain in Verify_Rule gives Pazz better accuracy; however, we lose the ability to automatically localize Type-p match faults where the path remains the same while the rules matched are different, since the bloom filter remains the same. To summarize, detection always happens, but localization can happen automatically only when two conditions hold simultaneously: a) the traffic is production traffic; and b) there is a change in path, so the bloom filter differs. In most cases, fuzz traffic is not permitted in the network. Recall that active traffic can be injected from any switch between a source-destination pair. In such cases, the fault will still be detected and can be localized by either manual polling of the switches or hop-by-hop traversal from source to destination. Blackholes [61] for critical flows can be detected as the Consistency Tester generates an alarm after a chosen timeout if it does not receive any packet pertaining to that flow (blackholes for non-critical flows can be detected and localized through polling of the switches). For localizing silent random packet drops, the MAX-COVERAGE [62] algorithm can be implemented on the Consistency Tester.

4 Pazz Prototype and Evaluation

4.1 Prototype

4.1.1 DPC: Verify Shim Header

We decided to use a 64-bit (8-Byte) shim header at layer 2: Verify. To ensure sufficient space, we limit the link layer MTU to a maximum of 8,092 Bytes for jumbo frames and 1,518 Bytes for regular frames. Verify has three fields, namely:
VerifyTagType: 16-bit EtherType to signify Verify header.
Verify_Port: 32-bit encoding the local inport in a switch.
Verify_Rule: 16-bit encoding the local rule/s in a switch.

We use a new EtherType value for Verify to ensure that any layer-2 switch on the path without our OpenFlow modifications will forward the packet based on its layer-2 information. The Verify shim header is inserted within the layer-2 header after the source MAC address, just like a VLAN header. In the presence of VLAN (802.1q), the Verify header is inserted before the VLAN header. Note the Verify header is backward compatible with legacy L2 switches and transparent to other protocols.

4.1.2 DPC: New OpenFlow Actions

The new actions ensure that there is no interference with forwarding, as no extra rules are added. To make efficient use of the shim header space, we use a bloom filter to encode path-related information in the Verify_Port field and binary hash chains [63] to encode rule-level information in the Verify_Rule field. A binary hash chain adds a new hash entry to an existing hash value by computing a hash of the existing value and the new entry and then storing the result as the new value. The Verify_Port field is a bloom filter which contains all intermediate hash results, including the first and last values. This ensures that we can test the initial value as well as the final path efficiently.

set_Verify_Port: Computes a hash of the unique identifier of the switch ID and its inport ID, and adds the result to the bloom filter in the Verify_Port field.
set_Verify_Rule: Computes a hash of the globally unique identifier of the flow rule, i.e., switch ID, rule ID (uniquely identifying a rule within a table) and flow table ID, together with the previous value of Verify_Rule to form a binary hash chain.
push_verify: Inserts a Verify header if needed, initializes the value of Verify_Port to 1 and sets Verify_Rule to the hash of its initial value. It is immediately followed by set_Verify_Rule and set_Verify_Port. If there is no Verify header, push_verify is executed at the entry and the exit switch between a source-destination pair.
pop_verify: Removes the Verify header from the tagged packet.

push_verify should be used if there is no Verify header for a) all packets entering a source inport, or b) all packets leaving the destination outport (in case traffic is injected between a source-destination pair), just before a report is generated for the Consistency Tester. For packets leaving the destination outport, pop_verify should be used only after a sampled report to the Consistency Tester has been generated.

To initiate and execute data plane tagging, the actions set_Verify_Port and set_Verify_Rule are prepended to all flow rules in the switches as the first actions in the OpenFlow "action list" [52]. On the entry and exit switch, the action push_verify is added as the first action. On the exit switch, pop_verify is added as an action once the report is generated. Recall, our actions do not change the forwarding behavior per se, as the match part remains unaffected. However, if one of the actions gets modified unintentionally or maliciously, it may have a negative impact, but this gets detected and localized later. Notably, set_Verify_Rule encodes the priority of the rule and the flow table number in the Verify_Rule field, thus providing support for rule priorities and cascaded flow tables.
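The two tagging primitives can be sketched as follows. The 16/32-bit widths match section 4.1.1, but the hash choice (Python's hashlib) is a stand-in for the CRC/Jenkins hashes of the prototype, and all function names are illustrative.

```python
# Sketch of the tagging primitives: a 32-bit bloom filter for port-level
# tags (set_Verify_Port) and a 16-bit binary hash chain for rule-level
# tags (set_Verify_Rule).
import hashlib

def _h(data: bytes, bits: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big") % (1 << bits)

def set_verify_port(port_bf: int, switch_id: int, inport: int) -> int:
    """Hash (switch ID, inport ID) and add the result to the 32-bit bloom filter."""
    idx = _h(f"{switch_id}:{inport}".encode(), 5)  # 5 bits -> one of 32 positions
    return port_bf | (1 << idx)

def set_verify_rule(rule_tag: int, switch_id: int, table_id: int, rule_id: int) -> int:
    """Binary hash chain: hash the previous tag together with the rule's
    globally unique identifier to form the new 16-bit tag."""
    return _h(f"{rule_tag}:{switch_id}:{table_id}:{rule_id}".encode(), 16)

# Tag a packet across two hops (push_verify-style initialization first).
bf, tag = 1, _h(b"init", 16)
for sw, table, rule, inport in [(1, 0, 42, 3), (2, 0, 7, 1)]:
    tag = set_verify_rule(tag, sw, table, rule)
    bf = set_verify_port(bf, sw, inport)
print(f"{bf:032b}", f"{tag:04x}")
```

Because the chain hashes the previous tag into each step, the final Verify_Rule value commits to the whole rule sequence, while the bloom filter in Verify_Port stays order-insensitive but cheap to test per hop.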

Figure 7: False positives with varying network diameter (switches in a path) and number of hash functions (HF) for a 16-bit and a 32-bit bloom filter. The 32-bit Verify_Port bloom filter on the right incurs fewer false positives and uses fewer hash functions than the 16-bit bloom filter on the left.

4.1.3 Bloom Filter & Hash Function

We use the bloom filter for the localization of detected faults. In an extreme case, from the perspective of operational networks like datacenter or campus networks, for a packet header and a pair of ingress and egress ports, if the CPC computes $N$ different paths with $n$ hops in each path, the probability of a simultaneous collision in the bloom filter and the hash value is given by:

$$P = p_{BF} \cdot p_{hash}$$

In our case, $p_{BF} = (1 - e^{-kn/m})^k$ is the bloom filter false positive probability [57], where $m$ is the size in bits of the Verify_Port field, $n$ is the network diameter (the number of switches in a path), and $k$ is the number of hash functions. $p_{hash}$ is the probability of a collision of the hash function, computed using a simple approximation of the birthday attack formula [64]:

$$p_{hash} \approx \frac{N(N-1)}{2 \cdot 2^{16}}$$

where $N$ is the number of different paths and $2^{16}$ is the output space of the 16-bit Verify_Rule hash. Figure 7 compares a 16-bit bloom filter (left) with the 32-bit bloom filter of the Verify_Port tag (right). It illustrates that our choice of a 32-bit bloom filter yields fewer false positives even with two hash functions than the 16-bit bloom filter and is thus a better choice for operational networks.

For the 16-bit hash operation, we used a Cyclic Redundancy Check (CRC) code [65]. For the 32-bit bloom filter operations, we use an approach similar to [57]. First, three hashes are computed as $g_i(x) = h_1(x) + i \cdot h_2(x)$ for $i \in \{0, 1, 2\}$, where $h_1(x)$ and $h_2(x)$ are the two halves of a 32-bit Jenkins hash [66] of $x$. Then, we use the first 5 bits of each $g_i(x)$ to set the 32-bit bloom filter.
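This double-hashing scheme can be sketched briefly. The sketch substitutes `zlib.crc32` for the Jenkins hash of the prototype; the rest follows the construction described above.

```python
# Derive three bloom filter indices g_i = h1 + i*h2 from the two 16-bit
# halves of a single 32-bit hash, keeping 5 bits of each index to address
# a 32-bit filter. zlib.crc32 stands in for the Jenkins hash.
import zlib

def bloom_bits(key: bytes):
    h = zlib.crc32(key) & 0xFFFFFFFF          # one 32-bit hash of the key
    h1, h2 = h >> 16, h & 0xFFFF              # its two 16-bit halves
    return [((h1 + i * h2) & 0xFFFF) >> 11    # top 5 bits -> index in [0, 32)
            for i in range(3)]

def bloom_insert(bf: int, key: bytes) -> int:
    for idx in bloom_bits(key):
        bf |= 1 << idx
    return bf

def bloom_maybe_contains(bf: int, key: bytes) -> bool:
    return all(bf & (1 << idx) for idx in bloom_bits(key))

bf = bloom_insert(0, b"switch1:port3")
print(bloom_maybe_contains(bf, b"switch1:port3"))  # True
```

The appeal of this construction is that one hash computation yields all three filter indices, which keeps the per-packet tagging cost low in the data plane.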

Pazz Components: We implemented the DPC on top of software switches, in particular Open vSwitch (OvS) [67] version 2.6.90. The customized OvS switches and the fuzz/production traffic generators run in Vagrant VMs [68]. Currently, the prototype supports both OpenFlow 1.0 and OpenFlow 1.1. For our prototype, we chose the Ryu [69] SDN controller. The Python-based Consistency Tester, Java-based CPC and Python-based Fuzzer communicate through Apache Thrift [70].

4.2 Experiment Setup

We evaluate Pazz on 4 topologies: a) 3 grid topologies of 4, 9 and 16 switches, respectively, with varying complexities to ensure diversity of paths; and b) 1 datacenter fat-tree (4-ary) topology of 20 switches with multipaths. Experiments were conducted on a machine with an 8-core 2.4 GHz Intel Xeon CPU and 64 GB of RAM. For scalability purposes, we modified and translated the Stanford backbone configuration files [71] to equivalent OpenFlow rules as per our topologies and installed them on the switches to allow multi-path destination-based routing. We used a custom script to generate configuration files for the four experimental topologies. The configuration files ensured the diversity of paths for the same packet header. Columns 1-4 in Table 2 list the parameters of the four experimental topologies.

Topology #Rules #Paths Path Length Reachability graph computation time Fuzzer Execution Time
4 switches (grid) ~5k ~24k 2 0.64 seconds ~1 millisecond
9 switches (grid) ~27k ~50k 4 0.91 seconds ~1.2 milliseconds
16 switches (grid) ~60k ~75k 6 1.13 seconds ~3.2 milliseconds
4-ary fat-tree (20 switches) ~100k ~75k 6 1.15 seconds ~7.5 milliseconds
Table 2: Columns 1-4 depict the parameters of four experimental topologies.
Column 5 depicts the reachability graph computation time by the CPC for the experimental topologies, computed proactively. Each value represents an average over 10 runs.
Column 6 depicts the Fuzzer execution time to compute the packet header space for generating the fuzz traffic for the corresponding experimental topologies. Each value represents an average over 10 runs.
(a) Results of detection time for all topologies
(b) Results of 4-switch grid topology
(c) Results of 9-switch grid topology
(d) Results of 16-switch grid topology
(e) Results of 4-ary fat-tree (20-switch) topology
Figure 8: a) For a source-destination pair, CDF of Type-p and Type-a fault detection time by Pazz in all 4 experimental topologies for sampling rate of 1/100 (left), 1/500 (middle) and 1/1000 (right) respectively against time in seconds. The faults connected by red, green, blue and black lines belong to 4-switch, 9-switch, 16-switch and 4-ary fat-tree topology (20-switch) respectively and represent an average over 10 runs.
b), c), d), e) For a source-destination pair, comparison of fault detection time (in seconds) by Pazz and the exhaustive packet generation approach in all 4 experimental topologies (4-switch, 9-switch, 16-switch and 4-ary fat-tree, respectively). In each figure, left to right illustrates sampling rates of 1/100 (left), 1/500 (middle) and 1/1000 (right). The blue and red lines illustrate Pazz and exhaustive packet generation respectively and represent an average over 10 runs. For a fair comparison, the exhaustive packet generation approach generates the same number of flows randomly at the same rate as Pazz.

We randomly injected faults on randomly chosen OvS switches in the data plane, where each fault belonged to a different packet header space (in the 32-bit destination IPv4 address space) in either the production or the fuzz traffic header space. In particular, we injected Type-p (match/action) and Type-a faults. The ovs-ofctl utility was used to insert the faults in the form of high-priority flow rules on random switches. Thus, we simulated a scenario where the control plane was unaware of these data plane faults. We made a pcap file of the production traffic generated by our Python-based packet-crafting script, and a pcap file of the fuzz traffic generated by the Fuzzer. The pcap files were collected using Wireshark [72] and replayed in parallel at the desired rate using Tcpreplay [73] with infinite loops to test the network continuously. For sampling, we used sFlow [56] with a polling interval of 1 second and sampling rates of 1/100, 1/500 and 1/1000. The sampling was done at the egress port of the exit switch in the data plane, so only the sampled actual report reaches the Consistency Tester, avoiding overwhelming it. Note each experiment was conducted for a randomly chosen source-destination pair and executed ten times.

Workloads: With 1 Gbps links between the switches in the 4 experimental topologies (3 grid and 1 fat-tree), the production traffic was generated at $10^6$ pps (packets per second). In parallel, fuzz traffic was generated at 1000 pps.

4.3 Evaluation Strategy

For a source-destination pair, our experiments are parameterized by: (a) size of network (4-20 switches), (b) path length (2-6), (c) configs (flow rules from 5k-100k), (d) number of paths (24k-75k), (e) number (1-30) and kind of faults (Type-p, Type-a), (f) sampling rate (1/100, 1/500, 1/1000) with a polling interval of 1 second, and (g) workloads, i.e., throughput ($10^6$ pps for production and 1000 pps for fuzz traffic). Our primary metrics of interest are fault detection with localization time, and the comparison of fault detection/localization time in Pazz against the baseline of an exhaustive traffic generation approach. In particular, we ask the following questions:
Q1. How does Pazz perform under different topologies and configs of varying scale and complexity? (section 4.4)
Q2. How does Pazz compare to the strawman case of exhaustive random packet generation? (section 4.5)
Q3. How much time does Pazz take to compute reachability graph at control plane? (section 4.6)
Q4. How much time does Pazz take to generate active traffic for a source-destination pair and how much overhead does Pazz incur on the links? (section 4.7)
Q5. How much packet processing overhead does Pazz incur on varying packet sizes? (section 4.8)

4.4 Pazz Performance

Figure 8 illustrates the cumulative distribution function (CDF) of the Type-p and Type-a faults detected in the four experimental topologies with the parameters listed in Table 2. In the 16-switch grid topology with 60k rules and 75k paths, Pazz takes only 25 seconds to detect 50% of the faults and 105 seconds to detect all of the faults for a sampling rate of 1/100 and a polling interval of 1 second (Figure 8a, left). For the same sampling rate of 1/100, in the 4-ary fat-tree topology with 20 switches containing 100k rules and 75k paths, Pazz detects 50% of the faults in 40 seconds and all faults in 160 seconds. Since the production traffic was replayed at $10^6$ pps in parallel with the fuzz traffic replayed at 1000 pps, the Type-p faults in the production traffic header space (35% of the total faults) were detected faster, within a maximum of 24 seconds across all four topologies, compared to the Type-a faults (65% of the total faults) in the fuzz traffic header space, which were detected within a maximum of 420 seconds. (Pazz is independent of topology symmetry and performs similarly in asymmetrical topologies: we removed certain links in the four experimental topologies, and the detection and localization performance remained unaffected.) As each experiment was conducted ten times, the reported time is the mean over the ten runs for detecting a fault pertaining to a packet header space. We omitted confidence intervals as they are small after 10 runs. In all cases, the detection time difference was marginal.

Localization Time: As per Algorithm 3, the production traffic-specific faults were automatically localized within 50 seconds of detection for all four experimental topologies. The localization of faults pertaining to fuzz traffic was manual, as there was no expected report from the CPC. Localization was done for two cases: a) when the fuzz traffic entered at the ingress port of the entry switch, and b) when the fuzz traffic entered between a pair of ingress and egress ports. For the first case, each fault was localized within a second of detection by the Consistency Tester, as the first switch possessed a flow rule allowing such traffic into the network. For the second case, where fuzz traffic was injected between the pair of ingress and egress ports, manual localization took approx. 2-3 minutes after detection, as the path was reconstructed via hop-by-hop inspection of the switch rules.

4.5 Comparison to Exhaustive Packet Generation

We compare the fault detection time of Pazz, which uses the Fuzzer, against an exhaustive packet generation approach. For a fair comparison, the exhaustive approach generates the same number of flows randomly and at the same rate as Pazz. Figures 8b, 8c, 8d and 8e illustrate the fault detection time CDFs in the 4-switch, 9-switch, 16-switch and 4-ary fat-tree (20-switch) experimental topologies, respectively. The three plots for each topology show the results for the three sampling rates of 1/100, 1/500 and 1/1000 (left to right). The blue line indicates Pazz and the red line the exhaustive packet generation approach. As expected, Pazz performs better than exhaustive packet generation, providing an average speedup of 2-3x. In all cases, 50% of the faults are detected by Pazz within ~50 seconds, i.e., under a minute. Note we excluded the Fuzzer execution time (section 4.7) from the plots. It is worth mentioning that Pazz would perform even better against a fully exhaustive packet generation approach generating $2^{32}$ flows to cover all possible destination IPv4 headers. Hereby, the detected faults are Type-a, as they require active probes in the uncovered packet header space. Since Pazz relies on production traffic to detect Type-p faults, it avoids the exhaustive generation of all possible packet headers. Similar results were observed for localization, as localization happens once a fault has been detected.

4.6 Reachability Graph Computation

Table 2 (Column 5) shows the reachability graph computation time in all experimental topologies by the CPC for an egress port. To observe the effect of evolving configs, we randomly added additional rules to various switches. We observe that the CPC computes the reachability graph for all topologies in at most ~1.15 seconds (Table 2).

4.7 Fuzzer Execution Time & Overhead

Execution Time: Table 2 (Column 6) shows the time taken by the Fuzzer to compute the packet header space for fuzz traffic in the four experimental topologies after receiving the covered packet header space (corpus) from the CPC. Since we consider destination-based routing, the packet header space computation was limited to the 32-bit destination IPv4 address space in the presence of wildcarded rules. When rules were added to the data plane, the CPC recomputed the corpus and sent it to the Fuzzer, which recomputed the new fuzz traffic within a maximum of 7.5 milliseconds.

Overhead: The fuzz traffic consists of 54-byte test packets at a rate of 1000 pps; on a 1 Gbps link this is ~0.04% of the link bandwidth, and therefore a minimal overhead on the data plane links. Note that most of the fuzz traffic is dropped at the first switch unless there is a flow rule matching that traffic, incurring even less overhead.

4.8 DPC Overhead

We generated packets of sizes from 64 bytes to 1500 bytes at almost 1 Gbps on switches running the DPC software of Pazz and on native OvS switches. We added flow rules on our switches to match the packets and tag them with the Verify shim header using our push_verify, set_Verify_Rule and set_Verify_Port actions. Under these conditions, we measured the average throughput over 10 runs.

We observe that the Verify shim header and the tagging mechanism incurred a 1.1% throughput drop in Pazz compared to native OvS. Thus, Pazz introduces minimal packet processing overhead atop OvS. Note that push_verify happens only at the entry/exit switch to insert the Verify shim header. Furthermore, sFlow sampling is done at the exit switch only.

5 Related Work

In addition to the related work covered in section 1, which includes the existing literature based on [17] and Table 1, we now navigate the landscape of related work and compare it to Pazz in terms of the Type-p and Type-a faults which cause inconsistency (section 2). The related work in the area of the control plane [26, 27, 28, 29, 30, 34, 36, 37, 38, 39, 40, 41, 31, 32, 33, 35, 42] either checks the controller applications or the control plane's compliance with the high-level network policy. These approaches are insufficient to check the physical data plane's compliance with the control plane. As illustrated in Table 1, we survey the data plane approaches and compare them with Pazz based on their ability to detect Type-p and Type-a faults. It is worth noting that these approaches test either the rules or the paths, whereas Pazz tests both together. In the case of Type-p match faults (section 2.3.1) where the path is the same even though a different rule is matched, path trajectory tools [20, 21, 22, 23, 24, 25, 49] fail. The approaches based on active probing [3, 5, 25, 19, 18, 50, 51] do not detect the Type-a faults (section 2.3.2) caused by hidden or misconfigured rules on the data plane which only match the fuzz traffic, as these tools only generate probes to test the rules known or synced to the controller. Such Type-a faults are detected by Pazz. The latest tools [74, 75] debug only P4-specific networks using program analysis techniques.

Overall, as illustrated in Figure 9, Pazz checks consistency between the control plane and the data plane at all levels, i.e., logical vs. physical flow rules, logical vs. physical topology, and logical vs. physical paths.

Figure 9: Pazz provides coverage on the forwarding flow rules, topology and paths.

6 Conclusion

This paper presented Pazz, a novel network verification methodology that automatically detects and localizes data plane faults manifesting as inconsistency in SDNs. Pazz continuously fuzz tests the packet header space and compares the expected control plane state with the actual data plane state. The tagging mechanism tracks the paths and rules at the data plane, while the reachability graph at the control plane tracks the expected paths and rules, allowing Pazz to verify consistency. Our evaluation of Pazz over real network topologies and configurations showed that Pazz efficiently detects and localizes the faults causing inconsistency.
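Conceptually, the verification step reduces to a set-membership check of the tag pair sampled from a packet against the tag pairs the reachability graph predicts for that packet's header space. The following Python sketch captures this idea; the tag encoding, return values, and function shape are our own illustrative assumptions, not Pazz's actual implementation:

```python
def check_consistency(expected, observed):
    """Compare the (path_tag, rule_tag) pair sampled at the exit switch
    against the pairs predicted by the control-plane reachability graph.

    expected: set of (path_tag, rule_tag) pairs for this header space
    observed: (path_tag, rule_tag) pair carried by the sampled packet
    """
    if observed in expected:
        return "consistent"
    expected_paths = {path for path, _ in expected}
    if observed[0] in expected_paths:
        # Expected path but unexpected rule: hints at a match fault.
        return "rule mismatch on an expected path"
    return "unexpected path"


# One expected path/rule combination for this header space:
expected = {(0b0110, 0b1010)}
print(check_consistency(expected, (0b0110, 0b1010)))  # consistent
```

Distinguishing the two mismatch cases is what lets a system localize the fault: a rule mismatch points at the matching stage of a switch on the expected path, while an unexpected path points at the forwarding decisions themselves.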

In the future, we would like to verify control-data plane consistency in the more challenging setting of P4-based SDNs.


  • [1] France seeks influence on telcos after outage.
  • [2] Hongyi Zeng, Peyman Kazemian, George Varghese, and Nick McKeown. A Survey Of Network Troubleshooting. Technical Report TR12-HPNG-061012, Stanford University, 2012.
  • [3] Hongyi Zeng, Peyman Kazemian, George Varghese, and Nick McKeown. Automatic test packet generation. In Proceedings of the 8th international conference on Emerging networking experiments and technologies, pages 241–252. ACM, 2012.
  • [4] Piotr Gawkowski and Janusz Sosnowski. Experiences with software implemented fault injection. In Architecture of Computing Systems (ARCS), 2007.
  • [5] Peter Perešíni, Maciej Kuźniar, and Dejan Kostić. Monocle: Dynamic, Fine-Grained Data Plane Monitoring. In Proc. ACM CoNEXT, 2015.
  • [6] The anatomy of a leak: As 9121.
  • [7] Ratul Mahajan, David Wetherall, and Tom Anderson. Understanding bgp misconfiguration. In ACM SIGCOMM Computer Communication Review, volume 32, pages 3–16. ACM, 2002.
  • [8] Cloud leak: How a verizon partner exposed millions of customer accounts.
  • [9] Cloud leak: Wsj parent company dow jones exposed customer data.
  • [10] Con-ed steals the ’net’.
  • [11] D. Madory. Renesys blog: Large outage in pakistan. Blog, 2011.
  • [12] Maciej Kuźniar, Peter Perešíni, and Dejan Kostić. What you need to know about sdn flow tables. In Proc. PAM, 2015.
  • [13] Naga Katta, Omid Alipourfard, Jennifer Rexford, and David Walker. Cacheflow: Dependency-aware rule-caching for software-defined networks. In Proceedings of the Symposium on SDN Research, page 6. ACM, 2016.
  • [14] Jennifer Rexford. SDN Applications. In Dagstuhl Seminar 15071, 2015.
  • [15] Maciej Kuzniar, Peter Peresini, Marco Canini, Daniele Venzano, and Dejan Kostic. A SOFT Way for Openflow Switch Interoperability Testing. In Proc. ACM CoNEXT, 2012.
  • [16] Maciej Kuzniar, Peter Peresini, and Dejan Kostić. Providing reliable fib update acknowledgments in sdn. In Proc. ACM CoNEXT, pages 415–422, 2014.
  • [17] Brandon Heller, James McCauley, Kyriakos Zarifis, Peyman Kazemian, Colin Scott, Nick McKeown, Scott Shenker, Andreas Wundsam, Hongyi Zeng, Sam Whitlock, Vimalkumar Jeyakumar, and Nikhil Handigol. Leveraging SDN layering to systematically troubleshoot networks. Proc. SIGCOMM Workshop HotSDN, page 37, 2013.
  • [18] Kai Bu, Xitao Wen, Bo Yang, Yan Chen, Li Erran Li, and Xiaolin Chen. Is every flow on the right track?: Inspect sdn forwarding with rulescope. In IEEE INFOCOM, pages 1–9, 2016.
  • [19] Peng Zhang, Cheng Zhang, and Chengchen Hu. Fast testing network data plane with rulechecker. In Network Protocols (ICNP), 2017 IEEE 25th International Conference on, pages 1–10. IEEE, 2017.
  • [20] Peng Zhang, Hao Li, Chengchen Hu, Liujia Hu, Lei Xiong, Ruilong Wang, and Yuemei Zhang. Mind the gap: Monitoring the control-data plane consistency in software defined networks. In Proceedings of the 12th International on Conference on emerging Networking EXperiments and Technologies, pages 19–33. ACM, 2016.
  • [21] Nikhil Handigol, Brandon Heller, Vimalkumar Jeyakumar, David Mazières, and Nick McKeown. I Know What Your Packet Did Last Hop: Using Packet Histories to Troubleshoot Networks. Proc. USENIX NSDI, pages 71–85, 2014.
  • [22] Srinivas Narayana, Mina Tahmasbi, Jennifer Rexford, and David Walker. Compiling Path Queries. In Proc. USENIX NSDI, 2016.
  • [23] Praveen Tammana, Rachit Agarwal, and Myungjin Lee. CherryPick: Tracing Packet Trajectory in Software-Defined Datacenter Networks. In SOSR, 2015.
  • [24] Praveen Tammana, Rachit Agarwal, and Myungjin Lee. Simplifying datacenter network debugging with pathdump. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pages 233–248, 2016.
  • [25] Kanak Agarwal, John Carter, and Colin Dixon. SDN traceroute : Tracing SDN Forwarding without Changing Network Behavior. HotSDN 2014, pages 145–150, 2014.
  • [26] Seyed Kaveh Fayaz, Tianlong Yu, Yoshiaki Tobioka, Sagar Chaki, and Vyas Sekar. Buzz: Testing context-dependent policies in stateful networks. In Proc. USENIX NSDI, pages 275–289, 2016.
  • [27] Seyed K Fayaz and Vyas Sekar. Testing stateful and dynamic data planes with flowtest. In Proc. SIGCOMM Workshop HotSDN, pages 79–84. ACM, 2014.
  • [28] Haohui Mai, Ahmed Khurshid, Rachit Agarwal, Matthew Caesar, P. Brighten Godfrey, and Samuel Talmadge King. Debugging the Data Plane with Anteater. In SIGCOMM, 2011.
  • [29] Peyman Kazemian, George Varghese, and Nick McKeown. Header Space Analysis: Static Checking for Networks. In Proc. USENIX NSDI, 2012.
  • [30] Peyman Kazemian, Michael Chang, Hongyi Zeng, George Varghese, Nick Mckeown, and Scott Whyte. Real Time Network Policy Checking Using Header Space Analysis. In Proc. USENIX NSDI, 2013.
  • [31] Hongyi Zeng, Shidong Zhang, Fei Ye, Vimalkumar Jeyakumar, Mickey Ju, Junda Liu, Nick McKeown, and Amin Vahdat. Libra: Divide and Conquer to Verify Forwarding Tables in Huge Networks. In NSDI, volume 14, 2014.
  • [32] Marco Canini, Daniele Venzano, Peter Perešíni, Dejan Kostić, and Jennifer Rexford. A NICE Way to Test Openflow Applications. In Proc. USENIX NSDI, 2012.
  • [33] Colin Scott, Andreas Wundsam, Barath Raghavan, Aurojit Panda, Andrew Or, Jefferson Lai, Eugene Huang, Zhi Liu, Ahmed El-Hassany, Sam Whitlock, et al. Troubleshooting blackbox sdn control software with minimal causal sequences. ACM SIGCOMM Computer Communication Review, 44(4):395–406, 2015.
  • [34] Ahmed Khurshid, Xuan Zou, Wenxuan Zhou, Matthew Caesar, and P. Brighten Godfrey. VeriFlow: Verifying Network-Wide Invariants in Real Time. In NSDI, 2013.
  • [35] Ari Fogel, Stanley Fung, Luis Pedrosa, Meg Walraed-Sullivan, Ramesh Govindan, Ratul Mahajan, and Todd D Millstein. A general approach to network configuration analysis. In Proc. USENIX NSDI, pages 469–483, 2015.
  • [36] Nuno P. Lopes, Nikolaj Bjørner, Patrice Godefroid, Karthick Jayaraman, and George Varghese. Checking Beliefs in Dynamic Networks. In Proc. USENIX NSDI, 2015.
  • [37] H. Yang and S. S. Lam. Real-Time Verification of Network Properties Using Atomic Predicates. IEEE/ACM Transactions on Networking, 24(2):887–900, April 2016.
  • [38] Seyed Kaveh Fayaz, Tushar Sharma, Ari Fogel, Ratul Mahajan, Todd D Millstein, Vyas Sekar, and George Varghese. Efficient network reachability analysis using a succinct control plane representation. In OSDI, pages 217–232, 2016.
  • [39] Aaron Gember-Jacobson, Raajay Viswanathan, Aditya Akella, and Ratul Mahajan. Fast control plane analysis using an abstract representation. In Proceedings of the 2016 ACM SIGCOMM Conference, pages 300–313. ACM, 2016.
  • [40] Ryan Beckett, Aarti Gupta, Ratul Mahajan, and David Walker. A general approach to network configuration verification. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pages 155–168. ACM, 2017.
  • [41] Alex Horn, Ali Kheradmand, and Mukul R Prasad. Delta-net: Real-time network verification using atoms. In NSDI, pages 735–749, 2017.
  • [42] Yu-Wei Eric Sung, Xiaozheng Tie, Starsky HY Wong, and Hongyi Zeng. Robotron: Top-down network management at facebook scale. In Proceedings of the 2016 ACM SIGCOMM Conference, pages 426–439. ACM, 2016.
  • [43] Patrice Godefroid, Michael Y. Levin, and David Molnar. Sage: Whitebox fuzzing for security testing. Queue, 10(1):20–27, January 2012.
  • [44] Barton P Miller, Louis Fredriksen, and Bryan So. An empirical study of the reliability of unix utilities. Communications of the ACM, 33(12):32–44, 1990.
  • [45] Po-Wen Chi, Chien-Ting Kuo, Jing-Wei Guo, and Chin-Laung Lei. How to detect a compromised sdn switch. In Network Softwarization (NetSoft), 2015 1st IEEE Conference on, pages 1–6. IEEE, 2015.
  • [46] Markku Antikainen, Tuomas Aura, and Mikko Särelä. Spook in your network: Attacking an sdn with a compromised openflow switch. In Nordic Conference on Secure IT Systems, pages 229–244. Springer, 2014.
  • [47] G Pickett. Staying persistent in software defined networks. Black Hat Briefings, 2015.
  • [48] George Varghese. Technical perspective: Treating networks like programs. Commun. ACM, 58(11):112–112, October 2015.
  • [49] Peng Zhang, Shimin Xu, Zuoru Yang, Hao Li, Qi Li, Huanzhao Wang, and Chengchen Hu. Foces: Detecting forwarding anomalies in software defined networks. 2018.
  • [50] Yu-Ming Ke, Hsu-Chun Hsiao, and Tiffany Hyun-Jin Kim. Sdnprobe: Lightweight fault localization in the error-prone environment. pages 489–499, 2018.
  • [51] Vimalkumar Jeyakumar, Mohammad Alizadeh, Yilong Geng, Changhoon Kim, and David Mazières. Millions of little minions: Using packets for low latency network programming and visibility. In ACM SIGCOMM Computer Communication Review, volume 44, pages 3–14. ACM, 2014.
  • [52] OpenFlow Spec., 2015.
  • [53] Changhoon Kim, Anirudh Sivaraman, Naga Katta, Antonin Bas, Advait Dixit, and Lawrence J Wobker. In-band network telemetry via programmable dataplanes. In ACM SIGCOMM SOSR, 2015.
  • [54] Pat Bosshart, Dan Daly, Glen Gibb, Martin Izzard, Nick McKeown, Jennifer Rexford, Cole Schlesinger, Dan Talayco, Amin Vahdat, George Varghese, and David Walker. P4: Programming Protocol-Independent Packet Processors. arXiv:1312.1719v3 [cs.NI], 44(3):8, 2014.
  • [55] Hui Zhang, Cristian Lumezanu, Junghwan Rhee, Nipun Arora, Qiang Xu, and Guofei Jiang. Enabling Layer 2 Pathlet Tracing through Context Encoding in Software-Defined Networking. In HotSDN, 2014.
  • [56] Sonia Panchen, Peter Phaal, and Neil McKee. Inmon corporation’s sflow: A method for monitoring traffic in switched and routed networks. 2001.
  • [57] Adam Kirsch and Michael Mitzenmacher. Less hashing, same performance: building a better bloom filter. In ESA, volume 6, pages 456–467. Springer, 2006.
  • [58] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Trans. Comput., 35(8):677–691, August 1986.
  • [59] Llvm compiler infrastructure: libfuzzer: a library for coverage-guided fuzz testing.
  • [60] Zhao Zhang, Qiao-Yan Wen, and Wen Tang. An efficient mutation-based fuzz testing approach for detecting flaws of network protocol. In Computer Science & Service System (CSSS), 2012 International Conference on, pages 814–817. IEEE, 2012.
  • [61] Yibo Zhu, Nanxi Kang, Jiaxin Cao, Albert Greenberg, Guohan Lu, Ratul Mahajan, Dave Maltz, Lihua Yuan, Ming Zhang, Ben Y Zhao, et al. Packet-level telemetry in large datacenter networks. In ACM SIGCOMM Computer Communication Review, volume 45, pages 479–491. ACM, 2015.
  • [62] Ramana Rao Kompella, Jennifer Yates, Albert Greenberg, and Alex C Snoeren. Detection and localization of network black holes. In INFOCOM 2007. 26th IEEE International Conference on Computer Communications. IEEE, pages 2180–2188. IEEE, 2007.
  • [63] Leslie Lamport. Password authentication with insecure communication. Communications of the ACM, 24(11):770–772, 1981.
  • [64] Birthday attack.
  • [65] Guy Castagnoli, Jürg Ganz, and Patrick Graber. Optimum cycle redundancy-check codes with 16-bit redundancy. IEEE Transactions on Communications, 38(1):111–114, 1990.
  • [66] Bob Jenkins. Hash functions. Dr Dobbs Journal, 22(9):107, 1997.
  • [67] Open vSwitch, 2016.
  • [68] Mitchell Hashimoto. Vagrant: Up and Running: Create and Manage Virtualized Development Environments. O'Reilly Media, Inc., 2013.
  • [69] Ryu component-based software defined networking framework, 2016.
  • [70] Apache thrift.
  • [71] Hassel-public.
  • [72] Angela Orebaugh, Gilbert Ramirez, and Jay Beale. Wireshark & Ethereal network protocol analyzer toolkit. Elsevier, 2006.
  • [73] Aaron Turner and M Bing. Tcpreplay: Pcap editing and replay tools for *nix. http://tcpreplay.sourceforge.net, 2005.
  • [74] Radu Stoenescu, Dragos Dumitrescu, Matei Popovici, Lorina Negreanu, and Costin Raiciu. Debugging p4 programs with vera. In Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication, pages 518–532. ACM, 2018.
  • [75] Jed Liu, William Hallahan, Cole Schlesinger, Milad Sharif, Jeongkeun Lee, Robert Soulé, Han Wang, Călin Caşcaval, Nick McKeown, and Nate Foster. p4v: practical verification for programmable data planes. In Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication, pages 490–503. ACM, 2018.