REMOTEGATE: Incentive-Compatible Remote Configuration of Security Gateways

09/14/2017
by   Abhinav Aggarwal, et al.

Imagine that a malicious hacker is trying to attack a server over the Internet and the server wants to block the attack packets as close to their point of origin as possible. However, the security gateway ahead of the source of attack is untrusted. How can the server block the attack packets through this gateway? In this paper, we introduce REMOTEGATE, a trustworthy mechanism for allowing any party (server) on the Internet to configure a security gateway owned by a second party, for an agreed-upon reward that the former pays to the latter for its service. We take an interactive incentive-compatible approach, for the case when both the server and the gateway are rational, to devise a protocol that will allow the server to help the security gateway generate and deploy a policy rule that filters the attack packets before they reach the server. The server will reward the gateway only when the latter can successfully demonstrate that it has generated and deployed the correct rule for the issue. This mechanism will enable an Internet-scale approach to improving security and privacy, backed by digital payment incentives.


1 Introduction

Enterprise and home networks are nowadays connected to the Internet via security gateways, such as firewalls, intrusion detection systems, and data-leakage protection systems, designed to protect the nodes on the local network from attacks originating on the Internet. This network architecture is widely prevalent, to the point of every home-network router having a built-in firewall. We propose to add another purpose to these security gateways, by tasking them with protecting the Internet at large from attacks originating on the local network. The mechanism presented in this paper will allow any Internet party (henceforth referred to as the server) to configure a security gateway for a local network not under its administrative control.

Current gateways perform some filtering of outgoing traffic, primarily with the goals of preventing sensitive-data exfiltration and of detecting infected local nodes. For example, such filtering applies to outgoing traffic destined to or originated on certain ports (e.g., blocking known malware-related ports, blocking broadcast traffic) or with certain contents (e.g., blocking HTTP requests to known-bad servers). The administrator of the local network can manage such outgoing filtering, and does so with the goal of improving the security of the local network. Thus, it is often the case that the security of the rest of the Internet is not considered, with the common result of networked computers remaining infected with bot malware for long periods of time because the bot code does not interfere with the computer owner's use.

Our goal in this work is to enable a server to configure any gateway to block attack traffic that reaches the server after passing through the gateway. We introduce RemoteGate, a protocol that enables this remote configuration of the gateway's filtering mechanism by the server under attack. This gives any gateway on the Internet a dual purpose: (1) it will protect some local network from external attacks by blocking unwanted packets originating on the Internet; and (2) it will protect the Internet from attacks by blocking unwanted packets originating from the local network. A significant feature of this design is that the parties most interested in being protected from attack are the ones capable of setting the corresponding policy on the gateway.

We envision RemoteGate to enable the creation of a distributed, Internet-scale packet filtering system. For each server, there would be one or more (security) gateways that mediate its interactions with the rest of the Internet. These gateways enforce a policy that was jointly specified by Internet parties interested in traffic to and from this server. From the point of view of an attack victim (the server in this case), this system offloads the defense mechanism from the local security gateway (closest to the victim) to a security gateway located closer to the source of the attack, reducing the processing load on the victim's security gateway and the traffic load on the core of the Internet (by stopping attack traffic closer to its source). An additional advantage is the increase in accuracy at each security gateway, which now filters the traffic outgoing from its local network in addition to monitoring the incoming traffic (which can be arbitrary).

For RemoteGate to be practical as a system that allows the policy of any security gateway to be manipulated, several significant security challenges have to be addressed. First and foremost, an adversary could weaken or even eliminate the current filtering policy of the gateway through careful attack packet design, leaving the local network open to external attacks. In other words, it may seem possible for the server to make the gateway learn a model that affects the filtering already provided by the existing policies in its firewall. However, we emphasize that with RemoteGate in place, this scenario is completely prevented (see Appendix A.2 for details).

Second, a malicious update to the policy could prevent the local network from accessing Internet services it needs, effectively installing a denial-of-service policy. Third, if the server offloads its security enforcement to security gateways elsewhere on the Internet, a malicious security gateway could pretend to enforce the new policy while still allowing attacks to pass through, exposing the now-vulnerable server to exploitation. Fourth, an attacker with access to many gateways could manipulate them to block benign traffic destined for a server, again installing a denial-of-service policy, this time against the server.

Beyond the security issues potentially introduced by the mutual lack of trust, moving the security policy onto a local network’s security gateway presents new challenges related to computational requirements and to deployment. If, say, Amazon.com decides to offload its security policy from its security firewall to the security firewalls of its top-100 million customers, then each of those customers will presumably have to prepare their firewalls to handle the increased traffic filtering task. Furthermore, if only some small percentage of those top-100 million customers are willing to participate in this RemoteGate system, Amazon.com may decide that the effort of offloading only a small part of their security firewall is not worth the small benefit to be gained. Thus, in our system we have to design incentives for both parties to participate, in addition to addressing the newly introduced security problems.

Our approach to building this RemoteGate system is three-fold. First, policy updates must apply only to traffic destined for the server requesting the policy change. Second, servers will pay gateways for updating their policies. Third, servers will check through an interactive protocol that gateways enforce the new policy. Underlying these requirements is a common negotiation substrate through which the server conveys to the gateway the desired update to the security policy, the gateway demonstrates to the server that it deployed the policy update, and payment from server to gateway is exchanged. These steps are not trivial as they rely on a number of assumptions that we aim to discard through careful design. We overcome these challenges by proposing a RemoteGate architecture, in which any Internet-connected security gateway can offer policy-configuration services for a fee. For future work, we plan to present techniques to construct a secure realization of this architecture and analyze possible deployments in the contexts of some widely publicized attacks.

Figure 1: In our remote-configuration scenario, the server wishes to have the gateway block malicious traffic coming from attack source.

1.1 Motivation

The greatest challenge in designing such a system as RemoteGate is understanding the benefit–risk tradeoffs seen by each party involved. In this section we describe this tradeoff for each type of party. For now we assume that there are three parties (see Figure 1): the attack source, the victim of the attack (the server), and the security gateway (a system that can filter traffic between the attacker and the server). The main mechanism considered is for the server to send a request to the security gateway, and for the security gateway to change its security policy to block attacks originated from the attacker and destined for the server.

Motivation for the server

The main goal for the server is to improve its network security, both in terms of accuracy and cost. In terms of accuracy, remotely configuring the policy of a security gateway allows the server to achieve a security policy that is customized for the attacker’s network. This implies that the security policy resulting from the remote configuration takes into account both the attack traffic that is afflicting the server and the normal traffic that passes through the security gateway. As a simple example, denial-of-service attack traffic is best blocked by filtering it as close to the attacker as possible, as filtering it later in the network becomes almost impossible due to bandwidth constraints and due to similarity to normal traffic.

In terms of cost, remote policy configuration effectively offloads some of the security enforcement typically done at the server's own firewall to the security gateway. This is particularly useful for servers that run popular Internet-facing services and would need extremely powerful firewalls to handle all of the incoming traffic at a single ingress point. In fact, highly scalable Internet services are often operated in a distributed fashion from multiple data centers, complicating the security management of all the firewalls involved. Offloading the security policy to remote security gateways would allow for cheaper and easier management of firewalls closer to the server, while making use of the large number of security gateways sitting mostly idle in front of the users accessing those servers.

Underlying these goals of improved accuracy and lowered cost is the key need to maintain the same security guarantees as in the local firewall scenario. Because the enforcement of the security policy is now shifted to a remote security gateway not controlled by the server, our system needs to explicitly account for the potential loss of trust.

Motivation for the Security Gateway

The security gateway operates under one requirement: to allow all desired traffic sent by the network it protects to reach the Internet. In general, the definition of desired traffic is not known, and thus the gateway mostly operates by allowing traffic that it has seen before or that was explicitly whitelisted. In this context, the request to change the security policy sent by the server (which, from the point of view of the security gateway, is an arbitrary node on the Internet) introduces two problems. First, the gateway has to account for the additional resources to be spent configuring and deploying a new security policy. Second, the gateway has to ensure that the new policy still allows all desired traffic to reach the Internet.

To motivate the security gateway, we propose that it is paid for the resources it spends (e.g., additional computation time, additional memory, additional network latency) to handle the security policy requested by the server. To ensure that the new policy still satisfies the existing needs of the security gateway’s protected network, our approach will use the existing security policy as a starting point.

Deriving Security Policies from Examples

A significant operational roadblock for the security gateway is handling the format and semantics of the policy-change request. Different gateways have completely distinct filtering semantics, distinct languages for specifying the filtering policies, and distinct levels of expressiveness for capturing certain filtering functions. Exposing this information to anyone on the Internet, beyond the challenge of actually representing it in a common format that everyone agrees upon, is also a security risk for the security gateway, as attackers can study the filtering language, its expressiveness, and its semantics to find weaknesses that would allow evasion attacks.

In our approach, we decouple the policy engine of the security gateway from the server's request by having the server provide examples of attacks (i.e., network packets that form an attack) for the security gateway to analyze and create a security policy from. This restricts the communication between RemoteGate and the gateway to exchanging information about the inputs to the policy and the expected results ("block" or "allow"), without requiring a particular policy type or a particular enforcement engine design. The security gateway can use a simple firewall-style, rule-based system that only takes into account IP addresses and ports, or it can use a machine learning classifier that uses the whole contents of a network packet to make decisions.
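To make this example-driven exchange concrete, the following Python sketch shows one way a gateway might derive a filtering rule from labeled packet examples supplied by the server. The packet representation, the feature extraction, and the choice of a shallow decision tree are illustrative assumptions only; the paper deliberately leaves both the features and the learning algorithm up to the gateway.

# Illustrative sketch (not the paper's prescribed method): a gateway derives a
# filtering rule from labeled packet examples supplied by the server.
# The packet fields and the classifier choice are assumptions for illustration.
from sklearn.tree import DecisionTreeClassifier

def packet_features(pkt):
    # Map a packet (here, a dict) to a fixed-length numeric feature vector.
    # A real gateway could use only IP/port fields, or full payload contents.
    return [
        pkt["src_ip_as_int"],
        pkt["dst_port"],
        pkt["payload_len"],
        pkt["payload"].count(b"/admin"),   # toy content-based feature
    ]

def learn_rule(train_packets, train_labels):
    # train_labels[i] is 1 for "block" (attack) and 0 for "allow".
    X = [packet_features(p) for p in train_packets]
    clf = DecisionTreeClassifier(max_depth=4)   # shallow tree ~ firewall-style rule
    clf.fit(X, train_labels)
    return clf

def classify(clf, test_packets):
    # The "block"/"allow" decisions the gateway sends back to the server.
    return clf.predict([packet_features(p) for p in test_packets]).tolist()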

1.2 Model

Our model consists of a server that can receive packets from clients over the Internet. Specifically, consider a network of clients connected to the Internet via a network gateway. We are interested in the communication of these clients with the server. We assume there are multiple gateways (at least two: the server's firewall and a gateway closer to the clients) on the way from each client to the server. Each gateway is equipped with a firewall that can control the flow of traffic sent to the Internet via a security policy, which identifies specific characteristics of data packets passing through the gateway. As mentioned previously, these policies usually filter packets incoming to the local network behind the gateway. However, RemoteGate will focus on filtering the outgoing packets instead.

We consider an adversary who controls some or all of the clients behind the gateway to attack the server by sending attack packets. We assume that this adversary does not collude with the gateway. (In Section 3.3, we present a gateway discovery algorithm with spoof checking that helps ensure that there is no such collusion. However, we leave it for future work to find such a gateway, one that controls the traffic between the attacker and the server without colluding with the former, with higher efficiency and precision.) The server wants to remotely configure the gateway's policy to impede the attack traffic targeted at itself by these adversarially controlled clients.

We model the server and the gateway as rational players by provisioning incentives for both to participate in this protocol through carefully designed utility functions. In particular, we incentivize the gateway to participate by providing a payment from the server upon successful deployment (and maintenance for a specified period of time) of a rule that prevents the attack. This can be done via any secure digital payment mechanism that allows both the server and the gateway to contest the payment in a fair manner if a conflict arises in the future. Such a mechanism is independent of our protocol and can be realized using a payment processing network such as Visa [visa] or any equivalent (even a digital currency like Bitcoin [bitcoin]).

1.3 Technical Challenges

Several technical challenges arise when dealing with a problem at the scale of the Internet, especially given the anonymity of remote users and their unknown intents. In this paper, we attempt to tackle the two challenges that we think are most important to our problem statement and our model. First, we answer the question of how one can design a reward model that is incentive-compatible and fair to both the server and the gateway at the same time. Our solution achieves this by making the server pay the gateway a reward that is proportional to the accuracy of the rule that the gateway deploys in its firewall. This incentivizes the gateway to participate in the protocol and, more importantly, our design of the reward mechanism makes honest behavior the gateway's dominant strategy. Similarly, we model the server's incentive by exploiting its need to get the attack prevented as soon as possible. This allows us to make the server's dominant strategy incentive-compatible as well.
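The exact reward function is part of the negotiation described later in the paper; the sketch below only illustrates the accuracy-proportional idea stated above, with base_reward and error_tolerance as assumed, negotiated parameters.

# Hypothetical reward rule for illustration only; the paper's actual mechanism
# is negotiated between server and gateway and may differ.
def reward(base_reward, gateway_labels, true_labels, error_tolerance):
    # Pay proportionally to test accuracy; pay nothing if the error exceeds
    # the agreed tolerance, so extra honest effort is never penalized.
    assert len(gateway_labels) == len(true_labels) and true_labels
    correct = sum(g == t for g, t in zip(gateway_labels, true_labels))
    accuracy = correct / len(true_labels)
    if (1.0 - accuracy) > error_tolerance:
        return 0.0
    return base_reward * accuracy

# Example: 75% accuracy within a 30% error tolerance earns 0.75 * base_reward.
print(reward(100.0, [1, 1, 0, 1] * 5, [1, 1, 0, 0] * 5, error_tolerance=0.30))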

Another important challenge we tackle is for the server to ensure that the gateway deploys the correct model and continues filtering packets based on it. We refer to this as the deployment verification problem. It matters because the server should not come to regret paying the gateway upon later realizing that the gateway maliciously deployed the wrong model (or no model at all). The problem is tricky because, for obvious privacy reasons, we never require the server to explicitly ask the gateway for any information about its firewall or the rule generated (apart from what is leaked by the verification during rule learning). In this paper we tackle this problem by making the server specify the period for which it requires the gateway's service and pay a periodic fee for that period. The idea is that this fee will incentivize the gateway to retain the model in its firewall. Further, any breach, if encountered, is assumed to be resolved by a trusted third party, so if the attack is still ongoing later, the server can ask this third party to resolve the conflict. This assumption of a trusted third party may seem to weaken our model at first, but we emphasize later in the paper that it is certainly not the only solution to this problem.

1.4 Our Contributions

This paper introduces RemoteGate, an efficient protocol for remote configuration of security gateways. The key contributions of RemoteGate are as follows:

  1. An incentive compatible mechanism for a server under attack to convince a remote gateway to change its firewall settings in order to prevent the attack.

  2. A light-weight interactive mechanism for some party (server) to help a remote party (gateway) learn a classifier for the data that the former provides, without revealing any more information than the accuracy of the classifier on the specified test set.

  3. A spoof checking mechanism to verify the source address mentioned in the attack packets.

Apart from these main contributions, our vision in designing RemoteGate is to provide a protocol that can be autonomously run by devices that are connected to the Internet. Although we restrict the discussion in the paper to a server and a gateway, the protocol as designed can be used with any device capable of sending and receiving messages over the Internet. With advancing technology in hand-held devices and personal computers, and the advent of pervasive computing and the Internet of Things (IoT), RemoteGate can be used by any smart device to protect itself against incoming attack packets.

Moreover, several other aspects of RemoteGate are deliberately kept open to adapt to emerging technologies. Be it the learning algorithm used by the gateway to obtain the rule to be deployed, or the kind of attack packets against which the defense is sought, RemoteGate allows for a wide variety of security services to be provided, not only on a local scale but on a global scale. Even for the digital payment system that will be used for the reward or the service fee, upcoming trends like blockchains and cryptocurrencies can play a vital role in providing the required security guarantees. We emphasize that RemoteGate aims at fostering the idea that while firewalls provide safety against incoming traffic, they can also be efficiently used to stop attack packets from entering the Internet in the first place.

1.4.1 Properties of RemoteGate

With the objectives as above and the vision of global software defined networking in mind, our design of RemoteGate has the following properties.

  1. Filtering Accuracy: The accuracy of the rule deployed at the gateway’s firewall is highly dependent on the way that rule is derived. RemoteGate aims to achieve a higher filtering accuracy by outsourcing the learning to the gateway and not generating the rule itself. This helps the gateway take into consideration the existing rules in its firewall to learn an optimal model that uses the limited computational resources judiciously and benefits from the rules for other outgoing traffic from the gateway. Moreover, this approach prevents the server from accidentally (or not) blocking other outgoing traffic through its firewall.

  2. Decentralized Attack Prevention: A common practice for attack prevention these days is to inform companies like Cisco, Netgear, Fortinet, etc. about an ongoing attack, upon which they observe the traffic and design rules or patches to be installed in the existing routers/firewalls. This service is often provided in exchange for a fee, which is decided by the service providers themselves. All seems good until we realize that this approach relies on a critical assumption: that the attack packets have not originated out of collusion with the prevention mechanisms. RemoteGate removes this assumption by directly contacting the gateway to learn a rule and deploy it in its firewall without involving any third party. This removal of trust in external entities helps protect against certain types of attacks that are designed with a financial motive in mind.

  3. Incentive-Compatible Reward Mechanism: RemoteGate uses concepts from game-theoretic mechanism design, specifically principles from bilateral trading, to devise negotiation schemes and an auction mechanism for the reward that is agreed upon by both the server and the gateway. Since the two parties involved are modeled as rational and mutually untrusting, this design strategy guarantees that the reward chosen is fair with respect to the true valuations of both the server and the gateway. Acting honestly is shown to be the dominant strategy in the RemoteGate protocol.

  4. Light-Weight Protection: By offloading the filtering of attack packets, as well as the learning of these filters, to the gateway, the server now runs a lightweight RemoteGate client that is able to carry out our protocol without the need for sophisticated hardware or infrastructure in general.

  5. Monetary Fairness: An obvious question that comes to mind when thinking about RemoteGate's approach to attack prevention is how one can be sure that the gateway will eventually deploy the rule in its firewall and not just abort after the payment has gone through. If one tries to address this issue by paying the gateway at the very end of its service, then the same question arises on the server's end, where there is no guarantee that the server will not abort the protocol just before the payment begins. RemoteGate deals with this issue through (1) a periodic payment scheme designed to reward the gateway for its services so far, and (2) a conflict resolution protocol, which can either involve a trusted third party or use a decentralized solution like Ethereum smart contracts to ensure that any breach of the agreement is caught and dealt with fairly. This way, RemoteGate ensures that neither the server nor the gateway can gain a monetary advantage by simply aborting the protocol at an opportune time.

  6. Smart Protection: We have designed RemoteGate to be as automated as possible. All we assume it needs to initiate the communication is the ability to differentiate attack packets from good packets and to set some initial parameters. An interesting question that comes to mind here is how the server verifies the identity or source of the attack packets, given that it is relatively easy to spoof addresses on the Internet. We handle this using a spoof-check subroutine built into the RemoteGate protocol, which first tries to find a gateway to interact with and proceeds only when it is convinced that the gateway is routing traffic from the attacker.

  7. Small number of firewall installations: Consider a situation in which thousands of computers around the world are under a distributed DoS attack originating from somewhere behind, say, the Tor network. If each of these computers were to install an expensive update to their respective firewalls that can curb the attack, the combined cost of this installation would far exceed the cost incurred in deploying a rule in a gateway (or two) that are closest to the source. The idea here is that once a rule is deployed in a gateway, it can prevent the attack packets from being routed to any victim of the attack over the Internet.

  8. Wide Applicability: The vision of RemoteGate is to be a lightweight client that can be installed widely in any device, ranging from heavy-duty routers and servers to handy hand-held devices like cell phones and other IoT-compatible devices. This way, attack packets are stopped for devices that are not firewall-compatible, or even those that are not sophisticated enough to run antivirus functionality.

1.4.2 Limitations of RemoteGate

There are several limitations of RemoteGate and areas of improvement that future research can look into. The solutions provided in the upcoming sections are far from optimal, given the novel attacks that appear on the Internet every day. Nonetheless, we hope to achieve security in the situations where our assumptions hold, and we mention some important limitations of our protocol, as we see them, below.

  1. Slow Responsiveness under DoS Attack: RemoteGate, with its current algorithm, can be less responsive when the server is under a severe DoS attack. More concretely, if the attack packet frequency is too high, the server may not be able to communicate with the gateway at all in order to carry out the required message exchange. In this situation, the server should resort to conventional methods to stop the attack.

  2. Dependence on Learning Algorithm: An important assumption that RemoteGate makes is that the gateway is using a learning algorithm that will be able to classify the packets as required. For this reason, RemoteGate does not explicitly endorse the use of any specific learning technique for the gateway to use. Rather, it allows the gateway to choose one by itself, keeping in mind its rationality with respect to the reward model. Of course, a (set of) particular algorithm(s) can always be hard-coded into the system if the situation so demands.

  3. Classification of Packets: One might challenge the accuracy of the classification of training and test examples that the server provides to the gateway. In other words, does the server have an incentive to lie about the classification of the packets? If not, how does it obtain the information about these packets in the first place? For the first question, we assume that when trying to stop the attack, any false information provided by the server only delays the time it takes for the gateway to produce the right model. Thus, if the server chooses to lie about the classifications in some round, only to report the correct classification later, the gateway will be able to identify this behavior in the later stages of the algorithm and abort the protocol. This takes away the server's incentive to lie and hence encourages it to play honestly. For the second question, we assume that the server is informed about the classification either through some threshold scheme (based on the frequency of packet arrival) or through a human-assisted mechanism. Of course, this question aligns with the broader concern about not performing the filtering at the server's end. Our answer is that (1) the server needs to know what it does not want to receive, and (2) the gateway is a better candidate for rule deployment for certain types of attacks, as explained next.

  4. Single Point Protection vs. RemoteGate: Is it better to configure the server's firewall or hunt for an external gateway? The answer depends on the type of attack faced by the server. If the server receives attack packets from multiple sources but containing similar content, then an easier, and probably more economical, solution would be to deploy a rule in its own filtering service and stop the packets. However, if the source is suspected to be local to a small group of users, then RemoteGate offers a promising solution, both in terms of preventing the packets from entering the Internet in the first place and possibly obtaining a higher filtering accuracy (as explained previously). The exact decision boundary, however, is to some extent subjective and thus open to exploration and further research.

  5. Requirement for Good Packets: Any model generation algorithm that will filter attack packets from good packets will require samples of both types to create a sufficiently accurate boundary that does not suffer from overfitting or underfitting. Assuming that the attack packets are known, it is entirely possible that the set of good packets is much wider, and hence the choice of an optimal set of good packets to use for training becomes crucial. For example, if the attack packets all contain a particular pattern string, then every packet that does not contain that pattern string is a good packet. The important question here is how to represent the packets in a way that does not make the server send too many packets to the gateway before a correct model is built. We acknowledge that RemoteGate leaves this problem open for future work and, for now, assumes that the server has an optimal set of example packets for the gateway to train on.

  6. One Model per Server: For every server that contacts a given gateway for an attack, does that gateway deploy a different rule for each of these requests? If yes, then this is an indirect DoS attack on the gateway itself and its firewall, which will likely not be able to scale up to so many requests. However, if not, then how does the gateway decide the number of rules to deploy in order to maximize the filtering accuracy as well as the reward collected? This is one of the reasons why the choice of learning algorithm is kept open for the gateway to decide, since it can choose to build rules that, instead of serving the request by only one server, cater to a set of servers together. This is equivalent to developing rules that perform multi-class classification over the space of network traffic, instead of the binary case of separating attack from non-attack. As mentioned earlier, this is open to future work for further exploration and optimization.

  7. Collusion: Another important assumption RemoteGate makes is that the gateway and the server are not colluding with the attacker. Of course, in reality, this collusion is an easily conceivable possibility, which must be dealt with in extensions of RemoteGate to malicious server/gateway settings. However, we emphasize that the current assumptions about the rationality of the two parties involved still make sense, since there must exist at least one (non-colluding) gateway between the attacker and the victim to have any hope of attack prevention at all. The challenge then lies in discovering this good gateway and carrying out the protocol with it. Our algorithm for gateway discovery aims to achieve this, although its current version may report false positives in certain scenarios.

1.5 Paper Organization

This paper is organized as follows. Section 2 discusses related work in this area. Section 3 presents the RemoteGate protocol and provides details about its various steps. Section 4 discusses some technical aspects of the protocol and sheds more light on the decisions made during specific stages of the protocol design. Section 5 discusses some interesting open problems and opportunities for future work related to the improvement of RemoteGate as well as the general idea of remote configurability. Finally, Section 6 concludes our paper and highlights our main results and ideas. The reader is also encouraged to refer to the Appendix for some interesting FAQs (frequently asked questions) that we encountered during the design of RemoteGate and our answers to those questions.

2 Related Work

The main task in our protocol is for the gateway to learn a rule that correctly separates the attack packets from the safe packets, using information provided by the server. As mentioned before, this can be done either using some static approach, like identifying the IP address or information about the ports, or using a more dynamic machine learning approach that is applicable to even the most sophisticated attack packets. We use the latter in our paper to make RemoteGate applicable to a wider variety of applications.

Machine learning is emerging as a popular area for both the research community and the service industry. Companies like Amazon [amazonML] and Microsoft [team2016azureml], along with academic research groups [potluru2014cometcloudcare], are already providing users around the world with online machine learning visualization and training infrastructure, while promising privacy of user data. However, this emerging trend of machine learning as a service [ribeiro2015mlaas] comes with its own challenges. One of the first that comes to mind is incentivizing people to provide computational assistance with the training. Numeraire [numeraire] uses an auction mechanism to reward participants with the Numeraire token, while any mechanism for digital payment can be used as a financial incentive.

The bigger picture, however, does not arise from financial incentives. Malicious intentions and privacy breaches pose problems for the use of any service that aims to be publicly available and relies on user data. This source of disruption can come either from the provider of data points, whose intention is to bias the output of the classifier towards incorrect classifications, or from the model generator, where the client does not trust the service provider with respect to its services. For the former, techniques that handle adversarially generated examples and aim to create robust models [kurakin2016adversarial, goodfellow2014explaining, wang2016theoretical, dalvi2004adversarial, laskov2010machine, zhou2012adversarial] come in handy. While most of these works focus on recovery of the classifier after an adversary injects infected examples, there is also research, like [lowd2005adversarial], that models the problem of constructing adversarial attacks by learning sufficient information about the classifier. The idea here is to better understand and determine the loopholes and weaknesses in the learning algorithm in order to make it more robust to clever attacks in the future.

The scope of this paper, however, is along the lines of an untrusted service provider (the security gateway in this case). One particular example in this setting is the execution of neural networks on an untrusted cloud, where the client who requires this computation is not sure whether the cloud performed the inference correctly. An approach here is to require some form of proof from the cloud that convinces the client of the computation performed. There are many ways that this proof can work. SafetyNets [ghodsi2017safetynets] take an approach that is very specific to the underlying classifier (a neural network in this case) by exploiting some mathematical properties of the learning algorithm. However, to generalize the proof technique, several other options come in handy. Some of the popular ones are listed as follows.

Verifiable Computation

[wahby2016verifiable, wahby2015efficient, braun2013verifying, parno2013pinocchio] This approach aims at answering the fundamental question of how a local computer can verify some computation that it asks a remote server to perform, without explicitly performing the computation itself. There are many ways people have attacked this problem, ranging from expressing the computation in some high-level language and then using the features of that language to generate a verifiable proof, to exploiting some mathematical properties of the underlying function to generate the proof. The idea is for the verifier to perform minimal computation in order to validate the prover's claims. In our setting, we perform this verification of the model generated by the gateway interactively during the training phase itself, without the server having to explicitly run a learning algorithm locally. Since we are doing this over a resource-limited network, we perform the verification keeping in mind both the computational resources at the server and the limitations on latency posed by the network and the severity of the attack.

Interactive Proofs

[goldwasser2008delegating, reingold2016constant, goldreich2017simple, thaler2013time, cormode2012practical, ghodsi2017safetynets] Yet another frequently used approach for verifying computations is an interactive technique in which the prover and the verifier participate in a sequence of message exchanges that allows the verifier to reach a verdict on the prover's claims. Often these exchanges happen in a way that the verifier queries the prover at certain checkpoints of the computation and, if it receives convincing responses, accepts that the prover is performing the computation as required. The crucial property to ensure is that if the prover performs the computation correctly, then it can always convince the verifier of the same, but if not, then the probability of the verifier being fooled by the prover can be bounded by any desired value. Often this comes with a tradeoff in the number of bits exchanged as the error tolerance goes down. In our approach, as mentioned previously, we use an interactive scheme as well, in which the server and the gateway exchange messages until the former is convinced that the latter has generated the correct model for classifying the attack packets. Additionally, we use this interactive scheme in a clever way by using the progress of the gateway as a metric for the server to decide what reward to offer in future rounds. This incentivizes the gateway to strive for best-effort model generation as the rounds progress and collect a high reward at the end.

Zero-Knowledge Proofs

[goldreich1994definitions, chiesa2015cluster, duan2008practical, pathak2012privacy, yampolskiy2011ai] In many applications, privacy of the model learnt or the underlying data is of utmost concern when two or more parties are involved. Consider a setting where a server knows some private database and a client wants to perform a query over the database, whose result must not reveal any more information about the database than what is revealed by the result itself. Zero-knowledge proofs are cryptographic tools that are able to offer this security in a succinct and non-interactive manner. For our application, we could have used this technique since the gateway’s firewall settings are its private bits, but RemoteGate offers a much simpler solution in which all the server needs is to be convinced that the gateway has the right model without requiring explicit knowledge of the model itself. In fact, the server will never query the gateway in the future about this model once it has been correctly deployed and is also willing to offer some tolerance to the accuracy obtained. All these relaxations allow for a simpler solution using an interactive scheme rather than using something as sophisticated as a zero-knowledge proof, for which we would have had to make assumptions regarding the gateway being powerful enough to generate such a proof and the server being equipped with zkSNARK(s) or equivalent to be able to verify such a proof.

Secure MPC

[mohassel2017secureml, lindell2009secure, ohrimenko2016oblivious, lindell2000privacy, chaudhuri2009privacy, ccatak2015secure] The problem in this paper can also be studied under the general umbrella of secure multi-party computation (MPC), in which a computation task is performed among multiple parties in a way that satisfies some security properties specific to the application. One way we could have cast our problem in this framework is to have the server and the gateway collectively compute the classifier for the attack packets and then deploy this model at the gateway's firewall. However, MPC is usually a time-intensive process, and with the learning problem being potentially arbitrarily complex, it can take much longer than our simple interactive scheme. We emphasize that during attack prevention the server may only have limited time before it can no longer communicate with the gateway, so we need a solution that runs in as little time and with as little computation as possible.

Probabilistically Checkable Proofs

[setty2012making, setty2012taking, setty2013resolving, ishai2007efficient] There is yet another technique used for proof generation in which the prover computes a bit string that the verifier can query at a small number of fixed locations, verifying only that the bits at those positions are correctly computed. This provides probabilistic guarantees on the computation task performed, since the verifier gets to choose the bit locations it wants to query. However, these strings are very long (often exponential in the input length) and hence require both the verifier and the prover to have rich computational resources to participate in such a proof system. Again, in our setting, this would result in a heavyweight solution that would strain the resources available at the server's and the gateway's ends.

Incentive Compatibility.

In order to make our protocol force the gateway to act honestly, we use the popular approach of incentive compatibility [nisan2007algorithmic] from game theory. Incentive compatibility is a property by which participants in a mechanism achieve the best outcomes for themselves by acting according to their true preferences. We use a special form of this compatibility, called dominant strategy incentive compatibility (DSIC), in which we guarantee that truthfully presenting one's preferences is weakly dominant, in the sense that no deviation from this strategy will increase the payoff (the reward in this case). We claim that the DSIC strategy for the gateway, with respect to the algorithm we provide for the server, is to act honestly. We achieve this by using ideas from some popular number guessing games [nisan2007algorithmic, mendes2014guessing].
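As a textbook illustration of DSIC (not RemoteGate's own mechanism, which is developed in the following sections), the short sketch below checks that in a second-price auction truthful bidding is weakly dominant: no deviating bid ever beats bidding one's true value, whatever the other bid happens to be. The values and bid grid are arbitrary.

# Textbook DSIC illustration (second-price auction), not RemoteGate's mechanism:
# whatever the other bid is, bidding one's true value is never worse.
def payoff(my_bid, my_value, other_bid):
    # The higher bidder wins and pays the second-highest bid.
    if my_bid > other_bid:
        return my_value - other_bid   # win, pay the other bid
    return 0.0                        # lose, pay nothing

my_value = 10.0
for other_bid in [3.0, 9.0, 10.0, 15.0]:
    truthful = payoff(my_value, my_value, other_bid)
    best_deviation = max(payoff(b, my_value, other_bid) for b in range(0, 21))
    assert truthful >= best_deviation   # truth-telling is weakly dominant
    print(other_bid, truthful, best_deviation)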

3 The RemoteGate Protocol

In this section, we present a general overview of the RemoteGate protocol, which enables a server to contact a remote gateway and help it deploy a rule in its firewall that will prevent the attack packets from reaching the server. We provide details of the individual steps of the protocol subsequently.

3.1 Protocol Overview

Figure 2: Schematic of the communication between the server and the gateway during a typical run of RemoteGate protocol.

Our protocol consists of three main steps: Discovery, Learning, and Service. In the Discovery phase, the server uses the information in the attack packets to discover a list of gateways that are forwarding the traffic from the attack source. Upon obtaining this list, the protocol enters the Learning phase, where the server initiates communication with the farthest gateway in this list (based on the round trip time, or some other metric, provided by the discovery algorithm; see Algorithm 3 for our approach) and helps it learn the required model (policy) that will filter the attack packets. Once this learning is complete and the server pays the gateway for the deployment of this model, the Service phase begins, in which the server periodically pays the gateway an agreed-upon fee to retain the model in the gateway's firewall. This resembles the establishment of a service agreement between the server and the gateway which, in case of a dispute, can be resolved by an external trusted third party.

A schematic in Fig. 2 provides a high-level view of the major steps of our protocol. As mentioned before, the server, upon receiving attack packets, initiates an exchange with the gateway by providing the latter with positive and negative examples of the attack packets that it has received. The gateway uses these training examples to create a model that classifies packets, and provides evidence of this construction by classifying the test examples that the server provided. This way, the gateway reveals only as much information about the model as is obtainable from the classifications it just provided. Upon receiving these classifications on the test examples, the server checks whether the classification error is within a pre-decided tolerance limit. If satisfied, the server pays the gateway the promised reward and the gateway deploys the model in its firewall for the packet filtering to begin. Otherwise, the server issues another round of training and testing with the gateway, on fresh examples, until it receives classifications that are sufficiently accurate.
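The round structure just described can be sketched from the server's side as follows. The channel abstraction, the message tags, the sampling, and the round limit are assumptions for illustration; they stand in for the actual RemoteGate message exchange.

# Sketch of the server-side training/testing rounds described above.
# channel.send()/recv(), the message tags, and max_rounds are illustrative
# stand-ins for the actual RemoteGate message exchange.
import random

def error_rate(predicted, actual):
    return sum(p != a for p, a in zip(predicted, actual)) / len(actual)

def learning_phase(channel, labeled_packets, tolerance, max_rounds=10):
    # labeled_packets: list of (packet, label) pairs collected by the server.
    # Returns True once the gateway's classifications meet the tolerance.
    for _ in range(max_rounds):
        random.shuffle(labeled_packets)
        half = len(labeled_packets) // 2
        train, test = labeled_packets[:half], labeled_packets[half:]

        channel.send(("TRAIN", train))                   # packets with labels
        channel.send(("TEST", [p for p, _ in test]))     # packets only
        predicted = channel.recv()                       # gateway's labels

        if error_rate(predicted, [label for _, label in test]) <= tolerance:
            channel.send(("ACCEPT", None))               # proceed to payment
            return True
        channel.send(("RETRY", None))                    # another round, fresh examples
    return False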

The RemoteGate timeline is depicted in Fig. 3, which marks the various events at different points in time (labeled t_0 through t_9). At t_0, the server starts receiving attack packets from the attacker, but has not yet identified that it is under attack. After receiving these packets for some time, the server detects the attack at time t_1 and starts collecting samples for the RemoteGate learning protocol. After a sufficient number of samples have been collected, the server begins the RemoteGate protocol at time t_2, starting by discovering the list of gateways to interact with. Once the candidate gateways have been discovered at time t_3, the server runs a spoof check with these gateways to determine which gateway to interact with. At time t_4, this gateway is determined and the server starts negotiating the initial reward. Upon successful negotiation, the learning phase begins at time t_5 and the server engages in rounds of training and testing with the gateway. After the rounds of learning (which satisfy the server's error tolerance) have completed at time t_6, the server pays the gateway the promised reward. At time t_7, this payment is successful and the gateway prepares to deploy the model in its firewall. Once the model is successfully deployed by time t_8, the RemoteGate service period starts, during which the filtering of attack packets happens through the gateway's updated firewall. This service ends at time t_9, after which both the server and the gateway terminate the protocol.

We provide details of each of these steps in the upcoming subsections. We also refer the reader to Appendix A.2 for some frequently asked questions that we encountered with respect to our approach.

procedure RemoteGate-Server(attack packets)
    GList <- GatewayDiscovery(attack packets)        (GList ordered by farthest gateway first)
    FList <- GList with spoofed gateways removed     (via the spoof check)
    for each gateway G in FList do
        Reset the server log                         (initialize server log)
        Initialize the error tolerance, reward, and service duration to negotiate with G
        Run the learning and service phases with G, recording all messages in the server log
        if the run with G did not complete successfully then
            if the server log contains "BREACH" then
                Issue a call to Conflict-Resolution()
        else
            Output the server log and return
Algorithm 1 RemoteGate protocol for the server.
procedure RemoteGate-Gateway()
    while true do
        if a PING packet is received from a server, containing an attack packet and its inter-packet time, then
            if the spoof check confirms that the attack originates from the local network then
                Wait for the INIT packet from the server; break if timed out
                Reset the gateway's log                 (initialize gateway's log)
                Run the learning and service phases with the server, recording all messages in the gateway's log
                if the gateway's log contains "BREACH" then
                    Issue a call to Conflict-Resolution()
                Output the gateway's log
Algorithm 2 RemoteGate protocol for the gateway.

3.2 Initialization

The RemoteGate system is installed as an application on both the server and the gateway for them to engage in the sequence of communications as required by our protocol. We provide a high level description of this application in Algorithms 1 and 2 (for the server and the gateway, respectively).

The server starts RemoteGate by determining the average time interval between two attack packets in the set of all attack packets received so far. Once this information is obtained, the server launches a discovery protocol (Algorithm 3) to obtain a list, denoted GList, of gateways that may be forwarding these attack packets. The gateways in this list are marked with some additional information about how far each gateway is from the server (for example, using round trip times). However, this list may contain gateways whose identities have been spoofed or that were wrongly discovered. The server removes such false positives by running a spoof check (Algorithm 4) with each of the gateways in the list and obtains a filtered list, FList, of gateways, ordered as in GList. Starting from the first gateway in this list, the server begins by resetting the protocol communication log and, based on the gateway, decides what error tolerance, reward value, and duration of service to negotiate. With these parameters, the server then begins the RemoteGate protocol with the gateway. If the rule was successfully deployed and serviced for the negotiated time period, the server terminates the protocol with success. However, if at any time during the protocol the server's log contains the string "BREACH" (due to detection of malicious behavior from the gateway), the server resorts to the third party for appropriate conflict resolution. Otherwise, it restarts this process with the other gateways in FList.
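As a minimal sketch of this first initialization step, assuming the server keeps (timestamp, payload) pairs for the attack packets it has collected:

# Minimal sketch of the server's first step: estimating the average
# inter-packet time of the attack traffic collected so far.
# attack_packets is assumed to be a list of (timestamp_seconds, raw_bytes) pairs.
def average_inter_packet_time(attack_packets):
    times = sorted(t for t, _ in attack_packets)
    if len(times) < 2:
        raise ValueError("need at least two attack packets")
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)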

Figure 3: RemoteGate timeline diagram showing what events happen at various times during a run of the protocol. The plot is not drawn to scale. With respect to this plot, the total time taken by RemoteGate to prevent the attack is the span from the start of the protocol (t_2) to the deployment of the model (t_8).

Unlike the server, the application installed at the gateway's end (Algorithm 2) polls until some server contacts it with a spoof-check packet, called PING (note that this PING packet is different from the standard ping packet in network routing protocols). At this point, the gateway recognizes that the server is trying to verify whether the attack packets (with the reported inter-packet time) actually came from the local network behind the gateway. The gateway engages in this spoof check to ensure that its address was not spoofed by the attacker (Algorithm 5). If the spoof check returns with the indication that the attacker belongs to the local network, the gateway begins the learning phase of the RemoteGate protocol with the server. Similar to the server, the gateway also maintains a log of all the communication it has with the server. If at any point during the protocol the gateway detects malicious behavior from the server, it records a "BREACH" string in its log and resorts to the conflict resolution protocol.

3.3 Gateway Discovery

The first step for the server, once it is in possession of attack packets, is to discover the potential gateways that can protect it by blocking any future attack packets. The list includes any gateway that is capable of blocking or filtering traffic from the attack source on its way to the server, and that is willing to participate in the RemoteGate protocol.

The challenge in enumerating the gateways on the path between two Internet nodes lies in the fact that there may be multiple paths between such nodes, each one used at different times or for different types of messages or for different directions of communication. The Internet is designed as a potentially directed communication graph, where routing policies can move traffic along unexpected routes given the observable topology. Commercial agreements driven by cost and traffic volumes, load balancing, privacy requirements, regulatory constraints, and availability and reliability needs ensure that routing policies at all levels (from local networks to large autonomous systems) are complex, often hard to infer, and uncertain to rely upon. We do not solve this problem here, rather we focus on the simple case of symmetric, stable paths, and then we discuss the corner cases left to solve in future work.

Symmetric, Stable Internet Paths

Our approach is inspired by the traceroute diagnostic tool, which measures transit delays of packets between two nodes while also recording the latencies to each intermediate router on the path. Our Algorithm 3 makes use of ICMP, just like traceroute, and could be perceived as an extension of ICMP messages in order to provide the additional information. (It is worth mentioning that some ICMP message types, such as Router Solicitation and Router Advertisement, appear relevant to our gateway-discovery step, but we are not aware of ICMP messages or extensions that completely address our needs.) The basic mechanism is to send packets to the (apparent) attack source and to use time-to-live values to address the intermediate routers and request that they return their managed IP range, if any. Let us denote such a request packet as DISCOVER. By "managed IP range" of a router we refer to the set of IPv4 addresses the router can reach on interfaces other than the one on which the DISCOVER packet was received, and for which the router is the sole entry–exit node. In other words, all traffic to and from the managed IP range passes through that router. While some routers on the Internet do manage IP ranges (as defined here), others do not, and thus we do not expect all gateways on a path between two Internet nodes to respond to a DISCOVER ICMP request. At a minimum, in typical configurations, there is a firewall on the server's node and a firewall on the attack-source's node that have their own managed IP ranges (respectively, the whole IPv4 address space except for the server's IP address, and the IPv4 address of the attack-source's node).

The server now holds a list of gateways, their corresponding latencies, and their managed IP ranges. To proceed, the server sorts the list by latency, validates that each managed IP range contains the attack-source IP address, and selects the farthest gateway. We note that the managed IP ranges need not increase or decrease monotonically in this ordered list; the only value they share is the IP address of the attack source. This implies that only the latency information can be used to determine the relative position of the gateways on the path between the server and the attack source.

1:procedure GatewayDiscovery(, ) is the set of attack packets.
2:     .
3:     . TTL value for the current DISCOVER packet.
4:     . Assume this is a singleton set.
5:     while  do is the maximum hops to explore.
6:         Increment .
7:         Send ICMP DISCOVER packet, with TTL , to IP address .
8:         Wait for ICMP response packet. break if timed out.
9:         if ICMP GATEWAY-RESPONSE packet received from IP address  then
10:               packet contents.
11:              if  then
12:                  Add to GList.                        
13:         if ICMP ECHO-REPLY packet received then
14:              break               
15:     Sort GList by latencies.
16:     return .
Algorithm 3 Gateway discovery algorithm for the server.
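To make the discovery loop concrete, the following sketch mirrors Algorithm 3 using standard traceroute-style probing in Python. It is illustrative only: the DISCOVER/GATEWAY-RESPONSE messages are a proposed ICMP extension that does not exist today, so the sketch falls back to ordinary echo requests, leaves the managed-range query as a hypothetical helper (query_managed_range), and assumes the scapy library plus raw-socket privileges.

import time
import ipaddress
from scapy.all import IP, ICMP, sr1   # scapy assumed installed; raw ICMP needs root

MAX_HOPS = 30        # maximum hops to explore
TIMEOUT_S = 2.0      # per-probe timeout in seconds

def query_managed_range(router_ip):
    # Hypothetical helper: ask the router for the IP range it solely manages.
    # This is where the proposed DISCOVER/GATEWAY-RESPONSE extension would go;
    # we return None to indicate "no answer" on today's Internet.
    return None

def gateway_discovery(attack_source_ip):
    glist = []   # entries of the form (latency_seconds, router_ip, managed_range)
    for ttl in range(1, MAX_HOPS + 1):
        probe = IP(dst=attack_source_ip, ttl=ttl) / ICMP()
        start = time.monotonic()
        reply = sr1(probe, timeout=TIMEOUT_S, verbose=0)
        if reply is None:
            continue                          # silent hop; keep probing
        latency = time.monotonic() - start
        managed = query_managed_range(reply.src)
        in_range = (
            managed is not None
            and ipaddress.ip_address(attack_source_ip) in ipaddress.ip_network(managed)
        )
        if in_range:
            glist.append((latency, reply.src, managed))
        if reply.haslayer(ICMP) and reply[ICMP].type == 0:
            break                             # echo-reply: we reached the attack source
    glist.sort(key=lambda entry: entry[0])    # order gateways by latency
    return glist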
Asymmetric or Dynamic Internet Paths

We leave for future work the question of discovering the gateways on paths that are asymmetric (i.e., source-to-server uses a different path than server-to-source), or where the paths change rapidly as can be the case for some mobile nodes.

We note that the above Algorithm 3, together with the rest of the system, does not assume that each gateway is honest in any of its interactions with the server. At this point, we need to consider that some gateways may claim to control larger IP ranges than the ones they really manage, in order to be more likely to participate in the protocol and earn money. This challenge is addressed by authenticating the claims of the chosen gateway and ensuring that the attack does truly originate from a network managed by this gateway.

3.4 Authentication

Having discussed the mechanism for gateway discovery, we now discuss how the server and the gateway determine whether the attack is really coming from the local network behind the gateway. Note that even though the attacker may have incentives to spoof the gateway’s address in the attack packets, spoofing the server’s address instead is of no use to him: in that case, the gateway’s replies during the spoof check would go to the spoofed address and the protocol would not be able to proceed. Hence, we only discuss spoof detection with respect to the source address in the attack packets.

1:procedure Server-SpoofCheck() is the gateway’s IP address
2:     Ping with an attack packet and . This is the PING packet.
3:     Wait for the response from . return true if timed out.
4:     if VERIFIED packet received from  then
5:         return false      
6:     if CHECK packet received from  then
7:         Wait for time steps for an attack packet to be received. return false if timed-out.
8:         if  contains stamp then
9:              Send to .
10:              Wait for the response from . return true if timed out.
11:              if VERIFIED packet received from  then
12:                  return false                             
13:     return true
Algorithm 4 Spoof checking algorithm by the server. Checks if the attack is actually coming from behind the gateway. Returns true if was spoofed.

Our algorithms for spoof check are described in Algorithms 4 and 5. The server prepares a special packet, called PING, which contains an attack packet and the average inter-packet time as observed by the server. When the gateway receives this packet, it may be able to immediately recognize that the attack packet is from its local network, in which case the gateway sends a special packet, called VERIFIED, to the server. If this is not the case, the gateway generates a stamp, which is essentially a string known privately to the gateway, and sends the server a CHECK packet that contains the stamp. If the server receives a VERIFIED packet, it knows that the gateway’s address was not spoofed and its spoof check is complete. Otherwise, the server observes all attack packets for a number of time steps and looks for the string in them. The idea is that the gateway is now attaching the stamp to each packet that it forwards to the server during these time steps. Thus, if the attack really came from behind the gateway, the server must receive an attack packet with this stamp attached.

If the server intercepts a packet with the stamp, it immediately sends the packet to the gateway and the gateway replies with a VERIFIED packet. The learning phase can now start. However, if no such packet is received, the server concludes that the attack did not come from the gateway’s local network and removes that gateway from GList. Our assumption here is that one of the factors distinguishing an attack packet from a good packet is the frequency of packet arrival. If the attacker did not send any attack packets during these time steps, then essentially the frequency of attack packets is reduced fivefold. This number can hence be set to whatever value seems sufficient for the server to stop classifying the packets as attack packets.

1:procedure Gateway-SpoofCheck()
2:     if source of identified to be from local network then Through trivial checks
3:         Send VERIFIED to the server.
4:         return false
5:     else
6:         Generate stamp.
7:         Send to the server.
8:         Attach stamp to all outgoing packets to the server for time steps.
9:         If no response from the server in this time, return true.
10:         if invalid stamp in the packet then
11:              return true
12:         else
13:              Send VERIFIED to the server.
14:              return false               
Algorithm 5 Spoof checking algorithm by the gateway. Checks if the packet is actually coming from its local network.

Another important assumption we make here is that the stamp is irreproducible by the attacker. This can be achieved by establishing an authenticated channel between the server and the gateway prior to the spoof check, or through any mechanism that prevents the attacker from obtaining this string. For now, we leave the details of establishing this stamp efficiently as future work and refer readers to existing solutions for sending private information over the communication link.
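As one illustration of how such an irreproducible stamp could be realized, the sketch below uses a keyed MAC over a fresh nonce, assuming the server and gateway already share a secret key (how that key is established is exactly the detail deferred above); the construction and names are ours, not the paper's.

import hmac
import hashlib
import os

def make_stamp(key: bytes) -> bytes:
    # Gateway side: a fresh random nonce bound to the shared key via HMAC,
    # so an attacker who cannot read the key cannot forge a valid stamp.
    nonce = os.urandom(16)
    tag = hmac.new(key, nonce, hashlib.sha256).digest()
    return nonce + tag

def verify_stamp(key: bytes, stamp: bytes) -> bool:
    # Server side: recompute the tag from the nonce and compare in constant time.
    nonce, tag = stamp[:16], stamp[16:]
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)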

3.5 Initial Reward Selection

We now discuss how the server and the gateway decide on the initial reward for the RemoteGate protocol. The selection of this reward is critical to the incentive-compatible nature of our protocol, since this is the reward that incentivizes the gateway to participate in the protocol with the server at the cost of using its computational resources and modifying its firewall. Since the server and the gateway are both modeled as rational players, we design a negotiation scheme which is dominant-strategy incentive compatible. In other words, we design a scheme in which the dominant strategy for both the server and the gateway is to bid their true valuations. Algorithms 6 and 7 provide detailed steps for this negotiation.

The problem of deciding the initial reward falls under the broader category of bilateral trade, introduced in the seminal work of Myerson and Satterthwaite [myerson1983efficient]. In this setting, a single seller owns an indivisible item which can be traded with a single buyer. In our setting, the gateway is the seller, the model it learns is the item being sold, and the server is the buyer. Apart from (dominant-strategy) incentive compatibility (DSIC), bilateral trade problems typically also try to ensure two more properties: (1) individual rationality, which in our case translates to the server and gateway participating voluntarily in the protocol and being able to leave at any time if incentivized to do so, and (2) budget balance, which in our case means that neither the server nor the gateway suffers losses in terms of the rewards paid or received. A recent result by [colini2016approximately] shows that no dominant-strategy mechanism which is also individually rational and budget balanced can guarantee more than a certain constant fraction of the optimal welfare. Since we do not assume any shared randomness between the server and the gateway or any shared knowledge of the distribution of initial rewards, we use a deterministic take-it-or-leave-it strategy, also called fixed-price bilateral trade, as devised in [blumrosen2016almost], which achieves a constant fraction of the optimal welfare that is the best possible for any deterministic strategy (see the corresponding proposition in [blumrosen2016almost]).

Let the server’s true valuation be its belief of how much the gateway should be paid per correctly classified example in the first round, based on factors such as how important the prevention of the attack is as well as an estimate of the computational effort the gateway must spend to generate a sufficiently accurate model. Similarly, let the gateway’s true valuation be the reward it believes it should receive. We refer the reader to Appendix A.1 for details on how these valuations can be chosen in practice. Let the initial reward be the value chosen in this negotiation phase. This reward, once agreed upon, will be paid for the accuracy reported by the gateway on the test examples that the server sends in the first round of the learning phase. We define the utility of the server to decrease in the reward paid, since it is trying to negotiate as small a reward as it can, and the utility of the gateway to increase in the reward received, since it is trying to maximize the reward it collects. Based on this setting, we choose the reward to be the gateway’s valuation, assuming the server values the service at least as much. (If the server’s valuation is smaller than the gateway’s, the negotiation cannot happen, since the server will not pay more than its valuation and the gateway will not accept less than its own.) By the corresponding theorem in [blumrosen2016almost], this mechanism guarantees DSIC, individual rationality, and budget balance, and achieves a constant fraction of the optimal social welfare. (The result in [blumrosen2016almost] holds for any distributions from which the two valuations are drawn; in particular, it holds for point distributions.)
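Concretely, the negotiation described above can be read as a take-it-or-leave-it rule: trade happens only when the server values the service at least as much as the gateway asks, and the agreed per-example reward is then the gateway's ask. The sketch below captures that reading; the function and variable names are ours, and the exact tie-breaking is an assumption rather than a restatement of [blumrosen2016almost].

from typing import Optional

def negotiate_initial_reward(v_server: float, v_gateway: float) -> Optional[float]:
    # v_server: server's true per-example valuation (what it is willing to pay).
    # v_gateway: gateway's true per-example valuation (what it wants to be paid).
    if v_server < v_gateway:
        return None      # no trade: the server will not overpay, the gateway will not undersell
    return v_gateway     # trade at the gateway's ask, as in the fixed-price mechanism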

1:procedure Server-StartService()
2:     Sample at random.
3:     Send to the gateway. This is the INIT packet.
4:     Wait for the response from the gateway. return false if timed out.
5:     if  or fee or not acceptable then
6:         return false
7:     else
8:         
9:         if  then
10:              return false
11:         else
12:              Send .
13:              return               
Algorithm 6 Initialization for the server. Called when Server-SpoofCheck succeeds.

Having understood the initial reward selection mechanism, we now discuss the steps in Algorithms 6 and 7 in detail. The server begins by sampling an attack packet at random from the set of all attack packets. Using a commitment scheme, it commits to its own valuation and sends along the packet, the value of the requested service duration, and a commitment of the error tolerance. The two commitments are made so that the gateway can later verify the server’s computation of the agreed reward and of the total reward. Upon receiving these values from the server, the gateway decides on the length of the period it can actually provide service for, the fee it wishes to charge the server for the service once the model is deployed in the firewall, the number of post-paid installments it is willing to accept for the payment during the service period, and its own valuation based on estimates of the computation and resources that will be used to learn the model. Having decided these parameters, the gateway sends them to the server and waits for a response.
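The commitments used in this exchange can be instantiated with any standard commitment scheme; as a sketch (one common construction, not a prescription of the authors' choice), a salted hash commitment suffices for illustration.

import hashlib
import hmac
import os

def commit(value: bytes):
    # Commit to a value now; keep the salt secret until the commitment is opened.
    salt = os.urandom(32)
    commitment = hashlib.sha256(salt + value).digest()
    return commitment, salt

def verify_open(commitment: bytes, value: bytes, salt: bytes) -> bool:
    # Check that (value, salt) opens the earlier commitment.
    recomputed = hashlib.sha256(salt + value).digest()
    return hmac.compare_digest(commitment, recomputed)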

1:procedure Gateway-StartService()
2:     Initialize and based on and . fee is (post)paid in installments.
3:     Send .
4:     Wait for the response from the server. return if timed out.
5:     if  (or computed incorrectly) or does not verify then return      
6:     
Algorithm 7 Initialization for the gateway.

If the server determines that the service period proposed by the gateway is unacceptable (perhaps too short or too long), that the fee is too high, or that the number of installments is unreasonable (typically too low), then it can discontinue the protocol with this gateway and move to the next gateway, if available, in FList. Otherwise, the server computes the reward as the value that the gateway proposed (for the reasons mentioned above) and verifies that the payment is not more than its budget. The server informs the gateway of the selected reward and opens its commitment so that the gateway can verify the computation of the reward. It then proceeds to begin the learning phase with the gateway. Upon receiving the server’s message, if the gateway is able to verify the computation of the reward and that it gets paid at least its valuation, it proceeds to begin the learning phase with the server.

3.6 Rule Learning

We now discuss the learning phase of the RemoteGate protocol that allows the server to stop the flow of attack packets it is receiving by helping the gateway come up with a classification rule that can be deployed in the gateway’s firewall. This rule (or model) must filter out the attack packets (up to the server’s error tolerance) and prevent the attacker from sending more such packets to the server.

We present the details of the learning phase in Algorithms 8 and 9. These algorithms proceed in rounds. In each round, the server sends the gateway some training examples for the gateway to train its model, and also some test examples for the gateway to apply its model to and report back to the server the classifications it obtains. (The server imposes a hard limit, unknown to the gateway, on the number of rounds it is willing to engage in with the gateway before giving up and switching over to a different gateway, namely the next one in FList, if any. This limit can be decided based on the number of attack packets available to the server as well as the severity of the attack and the urgency of its prevention as seen fit by the server. It can also depend on the total reward the server is willing to offer the gateway as well as the maximum time the server is willing to engage in this protocol.) The gateway uses these training examples to train a model that correctly classifies the attack packets (best effort) and then provides the server with the model’s predictions on the examples in the test set. Since it can take some time for the gateway to train its model and it is not in the server’s best interest to wait indefinitely for the response, we can assume that the server and the gateway agree on the maximum time the gateway will spend training the model, similar to the negotiation of the initial reward. With respect to this, we assume that the gateway, if playing honestly, performs best-effort training using whatever learning algorithm it deems fit for the classification. We leave the details of deciding a good time limit for training and the choice of the learning algorithm to future work.

The server verifies the response from the gateway against its knowledge of the true labels on the test set. It begins by computing the accuracy of the labels that the gateway has provided. If this accuracy is within the server’s error tolerance, it pays the gateway the reward and terminates the protocol. The gateway, upon receipt of the reward, deploys the model in its firewall. However, if the server is not satisfied with the gateway’s accuracy, it sends more training examples (and a test set) in the next round for the gateway to retrain the model on more labeled examples, with the hope that this retraining will improve the classification accuracy. The server keeps accumulating the rewards for the rounds until it is satisfied with the gateway’s accuracy (or the round limit is reached, in which case it terminates the protocol without paying the gateway any reward) and then pays the gateway the total reward accumulated so far. Note that even after the payment has gone through, the server must ensure that the gateway deployed the correct model as promised. We discuss the details of this deployment verification in the next subsection.

The scheme described above works well when both the server and the gateway are honest. However, since we assume that both are merely rational, we update the rewards in each round in a way that forces both the server and the gateway to act honestly. For the gateway, we do this by ensuring that any deviation from the honest strategy only decreases the reward the gateway can expect to receive. For the server, we achieve our goal by providing the gateway with the actual labels of the test packets in the round following the one they were sent in, so the server is disincentivized to lie about the accuracy and, hence, the reward. The result is a carefully crafted protocol that both the server and the gateway will run in order to incentivize honest model generation to filter the attack packets. (Recall that in this paper we do not allow the gateway to simultaneously collude with the attacker and behave rationally; rather, we assume that it indeed wants to help the server prevent the attack while being greedy about the reward.)

3.6.1 Server’s algorithm

1:procedure Server-Learning()
2:     
3:     
4:     
5:     
6:     
7:     while  do
8:         Initialize and from .
9:         if  then
10:                        
11:         Send to the gateway.
12:         Wait for the response from the gateway. return false if timed out.
13:         
14:         Send to the gateway.
15:         Wait for the response from the gateway. return false if timed out.
16:         
17:         
18:         if  then
19:              Send and initiate payment of to the gateway.
20:              if payment was successful then
21:                  return Server-Deployment()               
22:         else
23:                             
24:     return false
Algorithm 8 Server’s algorithm.

The server’s algorithm is presented in detail in Algorithm 8. It begins by initializing some boundary conditions: the a priori accuracy on the test set to zero, the set of labels on this test set to the empty set, and the cumulative reward so far to zero. The rounds then progress, and at the beginning of each round the server makes sure that the maximum round index has not been exceeded.

In each round, the server begins by arranging the attack packets, along with some good packets, into two sets: one that it wishes to send to the gateway for training the model, and the other for testing. We typically assume that both the training and the testing set are obtained by independent sampling without replacement from the set of all attack and good packets. This can even be done in a way that the training (testing) sets “appear” to the gateway to be chosen uniformly at random, by local coin flipping at the server’s end (put in an attack packet if heads and a good packet if tails). This construction is necessary to maintain the overall quality of the function constructed by the learning algorithm (expected loss with respect to the underlying distribution) [dekel2010incentive].

After the selection of training and test packets, the server computes the reward it wishes to pay the gateway per correctly classified example in the test set for this round. The reward is chosen via a bidding mechanism: both the server and the gateway provide their bids for the reward, and the smaller of the two is chosen as the reward for this round. This is commensurate with the bilateral trading scheme discussed during the initial reward selection phase and provides a DSIC strategy for the server and the gateway to act honestly during each round of the protocol. A commitment scheme is used here so that the gateway and server can carry out this blind negotiation and also verify the computation of the round’s reward.

The server also sends the training set and the labels for the previous round’s test set. The latter assist the gateway in further training the model on more labeled examples, with the hope that this will increase the model’s accuracy. Once the classifications on the test set are obtained from the gateway, the server computes the accuracy by comparing them against the true labels. If the accuracy exceeds the server’s threshold, the server opens its commitment for the round’s reward and initiates the payment protocol to pay the reward to the gateway. The total reward is calculated by multiplying the per-example reward in the last round of the protocol by the total number of correctly classified test examples so far. If this payment is successful, the server assumes the beginning of the service period as negotiated previously. The details of what happens during this service period are explained in later subsections.
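Read concretely (with hypothetical names, since the original symbols did not survive extraction), the per-round accuracy check and the total-reward computation described above could look as follows.

def round_accuracy(true_labels, reported_labels):
    # Compare the gateway's reported labels on this round's test set with the
    # server's ground truth; return the count of correct labels and the accuracy.
    correct = sum(1 for t, r in zip(true_labels, reported_labels) if t == r)
    return correct, correct / len(true_labels)

def total_reward(reward_per_example_last_round, correct_so_far):
    # Per the text: the per-example reward of the last round times the total
    # number of correctly classified test examples accumulated so far.
    return reward_per_example_last_round * correct_so_far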

If, however, the accuracy in the current round is unsatisfactory, the server proceeds to the next round, where it starts by adjusting the reward. The server’s bid for the reward changes based on the accuracy reported by the gateway in the preceding rounds. However, to prevent the gateway from stretching out the number of rounds in order to collect more reward, the server reduces its bid for the reward by half in every round according to the following equation:

Here, the 2 in the denominator can be replaced by any constant greater than 1. Rearranging the terms of this equation shows that the total reward the gateway collects is reduced to half every time it is unable to learn the model to sufficient accuracy. To prevent the server from underpaying an honest gateway, we assume that the initial reward was chosen keeping this reduction in the total reward in mind.
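Since the equation itself did not survive the formatting here, the following display is a hedged reconstruction consistent with the surrounding description (per-round halving of the accumulated reward); the symbols $s_i$ (server's per-example bid in round $i$) and $N_i$ (cumulative number of correctly classified test examples up to round $i$) are our own notation, not necessarily the authors'.

    s_{i+1} \;=\; \frac{s_i \, N_i}{2\, N_{i+1}}
    \qquad\Longleftrightarrow\qquad
    s_{i+1} N_{i+1} \;=\; \frac{s_i N_i}{2}

Under this reading, the accumulated reward $s_i N_i$ halves with every additional round the gateway needs, and the constant 2 in the denominator can be replaced by any constant greater than 1.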

The gateway uses a similar scheme to update its bid for the reward and reports it to the server. The two bids must match in every round for both the server and the gateway.

3.6.2 Gateway’s algorithm

1:procedure Gateway-Learning()
2:     
3:     
4:     
5:     
6:     while true do
7:         if  then
8:              
9:                        
10:         Send to the server.
11:         Wait for the response from the server. return false if timed out.
12:         Verify the commitment for and the computation of . return false if not.
13:          trained on and .
14:         
15:         Send to the server.
16:         Wait for the response from the server. return false if timed out.
17:         if payment initiated by the server then
18:              if  fails to verify then
19:                  return false
20:              else
21:                  for  do Check if server did not lie on the accuracies.
22:                       if  differs from by more than  then
23:                           return false                                                        
24:              if  received from the server then
25:                  Deploy in firewall.
26:                  return Gateway-Deployment(). refers to the server’s identity.               
27:         else
28:              
29:              Wait for the response from the server. return false if timed out.               
30:     return false
Algorithm 9 Gateway’s algorithm.

We now discuss the details of Algorithm 9 to describe the steps that the gateway performs during the learning phase. After initializing the protocol parameters similarly to the server, the gateway submits its bid to the server. Note that this bid is the same as the initial reward chosen previously. Upon receiving the server’s response, the gateway verifies that the commitment was correctly made and that the server correctly computed the round’s reward. If this check passes, the gateway (re)trains the model on the training examples from the current round and the test examples from the previous round (along with their labels received in this round). It then computes the classifications using the model and sends them to the server.

If the server replies by opening its commitment, the gateway verifies that the reported accuracies (and hence the accumulated reward) are commensurate with the classifications its model would have produced in all the previous rounds. In other words, since the server is trying to minimize the reward it gives to the gateway, lying about the accuracy of the test examples seems advantageous to the server, since that would allow it to understate the accumulated reward. To counter this, once the final round is reached and the server agrees that the model is sufficiently accurate, the gateway can run all the previous test examples against the model and check whether the labels provided by the server were consistent with what this model now reports (within a tolerance parameter, which the server is required to commit to at the beginning of the protocol and open in the last round, just before payment happens). If the gateway finds any inconsistencies, it immediately recognizes treachery on the part of the server and aborts the protocol. Furthermore, if the server lies consistently, then the gateway may not be able to learn the model at all, and the server would have wasted its time without getting the attack packets stopped. Even if the gateway somehow learnt a model, it is likely to have overfitted, which makes it likely that some good packets are blocked as false positives after deployment or that some attack packets reach the server as false negatives. This is detrimental to the server, since the attack will not be prevented (and if the gateway aborts, no model will be deployed in the firewall at all). Thus, it is not in the server’s best interest to lie to the gateway about the reported accuracies and, hence, the accumulated reward.

Once the gateway receives the reward, it deploys the model in its firewall and proceeds to the service period. However, if the server replies with more training and test examples, the gateway updates its bid for the reward based on the labels provided by the server. It then retrains its model and provides classifications on the new test examples. The rounds progress until the server is satisfied with the gateway’s model.

An interesting way for the gateway to cheat during the per-round bidding is to carefully manipulate the classifications on the test sets so that the total reward collected is more than what it would have been under honest play. The gateway can try to achieve this by temporarily misreporting the classifications so that the calculated accuracy is low. The motivation is to force the server into more and more rounds, thus collecting a higher total reward than the smaller reward it would get in fewer rounds. However, it still has to do this in a way that does not exhaust the server’s maximum round limit.

We show that it is not possible for the gateway to obtain a reward higher than what it would have collected under honest play. In fact, a stronger statement holds about the gateway’s strategy: not only does following the honest strategy maximize the overall reward the gateway can collect from the server, but this holds in each round as well. The following lemma establishes this.

Lemma 1.

Let be the number of rounds that the gateway takes to train the model if it plays honestly. Then, the maximum reward the gateway can collect from the server is at most .

Proof.

Let be the actual number of rounds taken by the gateway to perform the training. Clearly, , since otherwise, the honest learning would finish in rounds and not . Let for some . Then, we have the following.
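The chain of inequalities did not survive extraction; under the reconstructed halving rule above (again with our assumed notation, writing $R_j$ for the total reward accumulated if the protocol ends in round $j$, and $j^*$ for the number of rounds under honest play), one way the argument could run is:

    R_j \;=\; \frac{R_{j^*}}{2^{\,j - j^*}} \;\le\; R_{j^*}
    \qquad \text{for every } j \ge j^*,

so stretching the learning beyond $j^*$ rounds only shrinks the total reward the gateway can collect.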

Hence, the reward up to round can never exceed the reward collected by optimal play. ∎

This lemma shows that the dominant strategy for the gateway is to play honestly in each round of the learning phase. Combining this with our discussion of the initial reward selection, and with how the logs maintained by the gateway and the commitment schemes force the server to act honestly, we have shown that the RemoteGate protocol is incentive compatible. Note that, although we do not mention this explicitly, the server’s utility function contains an additive term representing the need for the attack to be prevented. This disincentivizes the server from taking any action that would make the gateway abort the protocol and not deploy the model in its firewall, since then the attack would not be prevented. We assume that the total reward and fee paid by the server are small compared to what the server gains when the attack is successfully prevented.

3.7 Payment Protocol

Once the learning phase is over, the server pays the gateway its promised reward. In addition, during the service phase, the server periodically pays the gateway a fee for the retention of the model in the firewall. In this section, we abstract the underlying payment scheme used for all these payments. The exact details are out of scope for this paper, although specialized payment schemes for this protocol pose interesting open problems for future work.

Figure 4: Secure digital payment protocol blackbox. This is used to send units of money from the Sender to the Receiver.

Our algorithm is flexible in the use of any trusted digital payment protocol, as long as it is secure and provable. By trusted, we mean that when the server pays the gateway its reward, the payment protocol must ensure that the money goes from the server’s account to an account that belongs to the gateway and no one else. Moreover, we want this protocol to be secure, so that any data passed between the server or gateway and the merchant facilitating the payment is protected against tampering by malicious third parties. We also require that the payment protocol be provable, which means that at any point after the protocol terminates, both the server and the gateway must be able to provide a proof of the success or failure of the payment along with the amount of the reward in question. If an invalid or false proof is provided, the payment protocol must be able to identify this and report accordingly. Some examples of payment protocols that can be used here are Visa Checkout [visa], PayPal [paypal], Google Wallet [google], and Apple Pay [apple].
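To make the trusted, secure, and provable requirements concrete, the following sketch shows one minimal shape such a payment blackbox could take; the interface and the signed-receipt idea are illustrative assumptions on our part, not a description of any of the payment services cited above.

from dataclasses import dataclass

@dataclass
class PaymentReceipt:
    payer: str       # account identifier of the sender (here, the server)
    payee: str       # account identifier of the receiver (here, the gateway)
    amount: float    # reward or fee transferred
    success: bool    # outcome of the transfer
    proof: bytes     # e.g., a signature by the payment provider over the fields above

class PaymentProvider:
    def pay(self, payer: str, payee: str, amount: float) -> PaymentReceipt:
        # Transfer 'amount' from payer to payee and return a verifiable receipt.
        raise NotImplementedError

    def verify(self, receipt: PaymentReceipt) -> bool:
        # Check the provider's proof; forged or invalid receipts must be rejected.
        raise NotImplementedError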

3.8 Deployment Verification

Once the gateway has convinced the server of an accuracy that is at least the agreed threshold, the server initiates the payment protocol (line 24 of Algorithm 8). The important thing now is for the gateway to deliver on its promise of deploying the model, by actually installing it in the firewall and making sure no future attack packets reach the server. However, since the gateway is rational, it needs to be incentivized to do this. As mentioned previously, the incentive is a periodic fee that the server pays the gateway to maintain the model in its firewall. The details of the deployment verification phase are provided in Algorithms 10 and 11.

1:procedure Server-Deployment()
2:     for  do
3:         Observe traffic for time steps.
4:         if attack packets received with average interval  then
5:              
6:              
7:              if  then
8:                  if  then
9:                        "BREACH"
10:                       return false.                   
11:                  Monitor for taking preventive measures in the future, if required.                        
12:         Pay fee to gateway .      
13:     return true
Algorithm 10 Server’s protocol for deployment.

In Algorithm 10, the server begins the deployment verification phase, also referred to as service, by observing the traffic for a certain number of time steps and accordingly paying the gateway for that time period. More concretely, given the agreed total service duration and the number of installments in which the payment will be made, the server observes the traffic in time slots whose length is the service duration divided by the number of installments. In each slot, the server checks whether an attack packet was received. Since the server allowed the gateway’s model a certain error tolerance, it now expects the attack frequency to be smaller by the corresponding factor. Thus, if the server receives attack packets that are sufficiently far apart relative to the original inter-packet time, it is convinced that the model is correctly installed in the gateway’s firewall and hence pays the gateway the agreed-upon fee.
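Read concretely (with hypothetical names for the negotiated quantities), one installment slot of this check could look like the sketch below: observe the slot, compare the observed attack inter-arrival times against the agreed threshold, and pay the fee only if the threshold is respected.

def slot_is_clean(attack_arrival_times, min_gap):
    # attack_arrival_times: timestamps of attack packets observed in this slot.
    # min_gap: minimum acceptable interval between attack packets, derived from
    #          the original inter-packet time and the agreed error tolerance.
    gaps = [b - a for a, b in zip(attack_arrival_times, attack_arrival_times[1:])]
    return all(gap >= min_gap for gap in gaps)

def run_installment(attack_arrival_times, min_gap, pay_fee, escalate):
    # pay_fee / escalate: callbacks for paying the installment fee or for
    # re-running gateway discovery and spoof check, as described in the text.
    if slot_is_clean(attack_arrival_times, min_gap):
        pay_fee()
        return True
    escalate()
    return False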

(Note 1: The discussion above suggests a way for the server to decide what value of the tolerance to set. Since the only value known to the server at the beginning of the algorithm is the original inter-packet time, the server can set this value accordingly as a good estimate. Another way would be to modify Algorithm 6 slightly so that the commitment of the error tolerance is made after the other parameters are decided. This way, the server can choose the value such that, during the service period, no attack packets (at most one, to be precise) should be received.)

If, however, the server receives attack packets that arrive more frequently, then either the attacker is spoofing the gateway’s address in the new attack packets or the gateway never deployed the model (or deployed a different one) in its firewall. To find out which case it is, the server takes steps similar to those in Algorithm 1. It begins by running the gateway discovery protocol of Algorithm 3 to determine the list of gateways that are closer to the attack source. If the gateway currently providing the service is not part of this list, the server knows that the attacker has changed its source of attack while the gateway is fulfilling its promised protection, and hence it pays the fee as promised (in this case, ideally, the server should also take more pressing measures to stop the attack). If the gateway was indeed part of the list, the server runs the spoof-check protocol of Algorithm 4 with it to ensure that its address was not spoofed. If it was spoofed, the server pays the gateway its promised fee; otherwise, it concludes that the gateway is acting maliciously by not fulfilling its promise and immediately terminates any future payments to the gateway. In addition, the server initiates conflict resolution with the trusted third party to claim back its reward and the already-paid fee.

(Note 2: Similar to the discussion in Note 1, Algorithm 10 also suggests a way for the server to decide what service parameters to agree upon. Ideally, the values suggested by the gateway in Algorithm 6 are such that each installment slot is longer than the time it takes the server to run the spoof check, so that during this deployment verification phase the server can safely run the spoof check within one installment slot. Thus, the server accepts the proposal only when this condition holds; combining this with Note 1 above yields the corresponding bound. Moreover, since the gateway also has a good estimate of this time, it is in its best interest to make its suggestion accordingly, or else it risks the server terminating the protocol and switching to a different gateway.)

1:procedure Gateway-Deployment()
2:     for  do
3:         if  time steps have passed then
4:              if fee not received then
5:                   "BREACH"
6:                  return false.                             
7:     return true
Algorithm 11 Gateway’s protocol for deployment.

Having discussed the server’s view of deployment verification, we describe the gateway’s protocol in Algorithm 11. In each time slot that passes, the gateway either receives the fee payment from the server (at the end of the slot) or it does not. If it does, all is well; if the server does not pay the fee for any reason (an attack packet was received, or otherwise), the gateway construes this as a breach of contract by the server and resorts to the trusted third party for conflict resolution to demand its money from the server.

We claim that this mechanism provides both the server and the gateway the required incentive to participate, engage, and act honestly for the entire duration of the RemoteGate protocol. Specifically, the gateway now has no incentive to not deploy the model, to deploy the wrong model, or to discontinue the deployment after some time (which might otherwise happen immediately after the payment).

3.9 Conflict Resolution

Figure 5: Schematic of the conflict resolution blackbox, where a trusted third party uses the logs maintained by the server and the gateway to determine a verdict on who acted maliciously and takes an appropriate action.

The final piece of our protocol is the conflict resolution phase in which a trusted third party is approached by either the server or the gateway to resolve the breach of contract as viewed from either or both parties’ perspective(s). The logs of the communication maintained by both the server and the gateway will help this third party to determine who caused the breach and take appropriate actions. The schematic in Fig. 5 depicts this resolution process. For the purpose of our algorithm, the exact details of this conflict resolution are not important and hence, we omit this discussion here. However, we do emphasize that the existence of a trusted third party is only one of the many ways to resolve conflicts. We leave it as an interesting open problem to design an efficient resolution mechanism that does not involve any third party and is scalable as well as trusted.

4 Discussion

Our algorithm, as described above, works under certain critical assumptions, some of which are described in detail below, in addition to the limitations discussed in Section 1.4.2. These assumptions concern the attacker itself rather than the attack packets per se. We discuss them here to emphasize the interesting open problems they give rise to. For other areas of improvement, the reader is encouraged to refer to our discussion of future work in the next section and the answers to frequently encountered questions in the Appendix.

DoS from the Server during learning

The gateway can itself be subject to a denial-of-service attack from a server that ostensibly follows the protocol but keeps sending more and more examples for training (based on fake or false attack packets). Setting the protocol parameters adversarially (e.g., a very high round limit together with a very low error tolerance) will accomplish this without any regard to the reward (since the server can always terminate the protocol whenever it wishes). Currently, our protocol assumes that the server only runs this protocol when it is truly under attack and wants to stop the attack packets as soon as possible. In other words, we model the server as rational with respect to attack prevention, not malicious. Dealing with a malicious server or gateway is yet another interesting direction to explore.

Same Gateway Multiple Attacks

What happens if another attack comes from behind the same gateway? Should the server run the protocol again, only this time with different attack packets? If yes, how many times should this be allowed? One solution is to adopt a strategy in which, if a gateway appears to be the source of too many attacks, the server blocks all communication from that gateway. For example, the server gives the gateway only a fixed number of chances before it stops receiving any packets from it at all. This seems to be a practical solution, but if the gateway is honest, the attacker can deliberately make the gateway fall victim to the server’s blacklisting. We believe that our spoof-checking mechanism from Section 3 will be able to prevent such a scenario if we give only a limited number of chances to the gateway before blacklisting it. However, there is considerable room for improvement here in handling the different ways the attacker can act in this case.

Multiple Gateways Same Attack

In contrast to the attack above, yet another strategy for the attacker is to launch the same attack from multiple sources, so that the server runs our protocol with multiple gateways and ends up paying more than it would to a single gateway. One way to deal with this is for the server to locate a gateway that aggregates packets from these different source gateways and to run the protocol with it. Currently, this problem seems non-trivial if the gateway to be found is required to be different from the server itself. More specifically, if we view the incoming connections to the server as a graph, whose nodes are all nodes that can route packets to the server (or to other nodes that can), then this problem involves finding the farthest node from the server that receives packets from each of these attack sources. The challenge is to do this without any a priori access to this graph. We believe that this is in itself an interesting open problem.

5 Future Work

In this section, we highlight some interesting open problems, beyond those mentioned in the previous section and in Section 1.4.2, that can further improve RemoteGate and provide Internet-scale security in the future.

Limited attack packets

We assume that the number of attack packets and good packets available to the server is large enough that the model learnt by the gateway is sufficiently accurate (with respect to the error tolerance fixed by the server). Further, we assume that the server’s choice of training and test examples, along with the order in which they are sent to the gateway, supports the learning in the manner described above. We make these assumptions because the server has no good estimate of how complex the learning problem is and how many rounds it will need to achieve its tolerance setting. We expect that results in statistical learning theory may provide useful insights for optimizing these decisions.

Bound on gateway’s computation

As mentioned before, we assume that the gateway can only perform a polynomial number of computation steps (in the number of example packets sent by the server) in every round. However, similar to the above, this raises the question of how one can guarantee that the gateway will be able to learn a model within the round limit, given the examples from the server. Again, we seek the expertise of statistical learning theorists in providing good estimates of the parameter values for which this assumption holds with high probability.

Single source attack

Our protocol, in its current form, allows the server to pay a gateway that provides this attack-packet filtering service. However, one may question the feasibility of this solution when multiple sources exist for the same type of attack, each behind a different gateway. Such an attack may be coordinated or coincidental. In any case, if we allow the server to pay each gateway separately by running the protocol individually with each gateway, the server ends up paying a very high price for curbing the attack. At this point, we require the server to decide whether it is more cost-effective to change its own firewall settings or to engage in multiple remote services. For now, we assume that the server takes this decision wisely and leave it to future research to handle this problem more efficiently.

Integration with Distributed Ledger Technologies

One way to remove the assumption of a trusted third party for conflict resolution is to leverage the strong security guarantees that blockchains provide. The smart-contract model offered by Ethereum and other ledger technologies is an ideal candidate for this. A RemoteGate smart contract that handles the initial reward negotiation, aggregates rewards over the learning phase, and provides a built-in escrow service for the entire duration of attack prevention seems a promising route toward completely decentralized attack prevention, one that also takes advantage of the ongoing research into making cryptocurrencies more secure and scalable. We envision an implementation of such a smart-contract-based global SDN in our future work.

6 Conclusion

In this paper, we introduced the high-level idea of global software-defined networking to help prevent attacks closer to their point of origin, as opposed to the conventional approach of installing firewalls and antivirus software at the victim’s end. We presented a candidate algorithm for this, which we call RemoteGate, through which we envision enabling a server under attack to help configure the firewall of a remote gateway suspected to be the source of the attack packets. We designed RemoteGate to be an incentive-compatible protocol in which the server interactively helps the gateway learn a model that, when deployed in the gateway’s firewall, will filter out the attack packets and prevent them from reaching the server. We also highlighted some challenges and assumptions of our work and provided ideas for future research in this direction.

Appendix A Frequently Asked Questions

a.1 Parameter setting

  1. How are and selected for RemoteGate?

  2. We assume human assistance in setting these parameters for now. In the longer vision of an automated system, these values can be set to appropriate functions of the attack detection mechanisms deployed at the server’s end and the resource-usage monitoring systems deployed at the gateway’s end.

  3. How can the server make sure the gateway will be able to learn a model of the required accuracy within the round limit?

  4. Technically speaking, it cannot. The best the server can do is hope that such learning happens. However, one way to handle this is to be flexible in the choice of the tolerance. Once the server has completed some number of rounds with the gateway, the accuracy at the end of these rounds dictates what fraction of the attack packets would be filtered if the model were deployed as is. If the server determines that it is not in its best interest to continue further given the nature of the attack, it can preempt the learning phase and ask for deployment. Of course, taking such a decision in an automated manner may be challenging, which we leave for future work.

  5. How to set and ?

  6. The service period sought by the server depends on the type of attack. An ideal decision would be to stop the attack for a period long enough for the server to take more affirmative action in the meantime (such as giving an external investigation enough time to identify the real source of the attack and act on it). It can also depend on the amount of funds available to the server, along with some human-provided parameters. For the gateway, this period can be a function of the computational resources it expects the filtering to incur, along with its existing service commitments to other servers on the Internet. In either case, optimizing the service time based on the type of attack is a topic for further investigation and research.

a.2 Other discussion

  1. Where does the server get the money to pay the gateway?

  2. This is similar to how payments are currently proposed to happen through IoT devices. The user can register a credit card or some digital payment mechanism securely on the server, which the server uses to issue rewards to the gateways it interacts with. Although the exact implementation of such a system is beyond the scope of this paper, the server will be able to access the funds of the user operating it through any secure currency interchange (including modern cryptocurrencies).

  3. How can the gateway determine if the attack is invalid (not a real attack)?

  4. Philosophically, there is no such way apart from matching the attack packets against previously well-known types of attacks. However, RemoteGate is more general in the sense that any packet the server wishes not to receive can be labelled as an attack. Thus, our use of the word attack here is in a much broader sense, which in a way prevents the gateway from questioning the authenticity of the labelling provided by the server.

  5. Can we protect against a server who fakes attack packets (e.g., to deny someone access to, say, Google)?

  6. The current approach for RemoteGate is to entertain only requests from a server to block traffic destined for itself, not for some other destination. Of course, in the future this will be extended to general attack prevention, at which point this problem must be looked into carefully before learning begins at the gateway’s end. As a third possibility, if someone else spoofs the server’s IP address to get traffic blocked, the gateway’s responses will go to the server’s IP address and not to the spoofer. Because the protocol is interactive, the spoofer will not be able to carry RemoteGate through to the end.

  7. Can RemoteGate deal with man-in-the-middle attack?

  8. A possible man-in-the-middle attack on RemoteGate is when some adversary interacts with the server and the gateway, making each believe that it is interacting with the other. One reason for such an intervention might be to obtain the reward the server has to pay the gateway. From the server’s perspective, if the money goes to anybody other than the gateway, the server can appeal to the authenticity of the secure payment channel that we assume exists in this case. Our proposed use of cryptocurrency here can help prevent this problem, owing to ledger transparency. Moreover, if the middle man fails to stop the attack packets after the payment is made, he exposes himself to the conflict resolution that the server will resort to in this scenario. Of course, RemoteGate is still at a very nascent stage and clever attacks may be designed to break the system; we leave hardening against these cases to future work.

  9. Does RemoteGate introduce new attacks?

  10. Short answer: yes. Most notably, among many other possibilities, the attacker can now collude with the gateway, offering it money in ways that incentivize the gateway to side with the adversary. Such behavior is hard to detect, let alone completely avoid. A real-life analogy may help explain our philosophy here: upon calling the police during an emergency, we place faith in the fact that the police are not colluding with the attacker and are determined to help us. If we assumed collusion by default, no trust could be placed in such forces. Similarly, our approach is to trust the gateway by default and to take appropriate action when this trust is broken.

  11. Is it possible for an adversary to tamper with the existing filtering of the gateway through careful design of the attack packets?

  12. The answer to this question depends not on the design of the attack packets but on the classification of these packets as attacks by the server. Even if the adversary spends a lot of resources carefully designing the attack packets, it needs to convince the server that these packets are indeed attack packets for which RemoteGate must be launched. An interesting question here is to compare the reward that the adversary collects (assuming RemoteGate is launched) against the computational resources it spent designing and transmitting those packets. If the latter is higher, it becomes interesting to understand the situations under which the adversary is still incentivized to do so. We leave a resource-competitive analysis of simulating this attack on the system as interesting future work.

Appendix B List of notations

Fig. 6 provides a detailed list of notations.

Set of attack packets with the server.
Average delay between two attack packets received by the server.
Server’s error tolerance parameter in for the model learned by the gateway.
A complete log of server’s interactions and internal computations during a run of the RemoteGate protocol. An entry (containing the state change and timestamp) is made in this log every time the server’s internal state changes.
A complete log of gateway’s interactions and internal computations during a run of the RemoteGate protocol. An entry (containing the state change and timestamp) is made in this log every time the gateway’s internal state changes.
Server’s true valuation of how much it should pay the gateway per correctly classified example during the learning phase.
Gateway’s true valuation of how much it should be paid by the server per correctly classified example during the learning phase.
Server’s bid for the reward per correctly classified example during round of the learning phase.
Gateway’s bid for the reward per correctly classified example during round of the learning phase.
Reward chosen (per correctly classified example) for the round of the learning phase.
Server’s requirement for the period of time it requires the gateway to maintain the model in its firewall for.
Gateway’s guarantee for the period of time it will maintain the model in its firewall.
Number of installments in which the total service fee will be paid after deployment.
fee Fees paid per installment by the server after the model has been deployed.

Training set of examples issued by the server for the gateway’s (supervised) learning in round .
Set of examples issued by the server to test the model learned by the gateway at the end of round .
The number of test examples that were correctly classified by the gateway using its model, compared to the true labels with the server.
The set of true labels for the examples.
The set of labels for the examples as obtained by the gateway using its model.
The total reward collected up to round that is to be paid to the gateway after the learning phase is complete.
The (best effort polynomial-time) model learned by the gateway in round using all the labelled examples it has received from the server so far.
Figure 6: List of notations