Privacy-Preserving Detection of IoT Devices Connected Behind a NAT in a Smart Home Setup

05/31/2019 · Yair Meidan, et al. · Singapore University of Technology and Design · Ben-Gurion University of the Negev

Today, telecommunication service providers (telcos) are exposed to cyber-attacks executed by compromised IoT devices connected to their customers' networks. Such attacks might have severe effects not only on the target of attacks but also on the telcos themselves. To mitigate those risks we propose a machine learning based method that can detect devices of specific vulnerable IoT models connected behind a domestic NAT, thereby identifying home networks that pose a risk to the telco's infrastructure and availability of services. As part of the effort to preserve the domestic customers' privacy, our method relies on NetFlow data solely, refraining from inspecting the payload. To promote future research in this domain we share our novel dataset, collected in our lab from numerous and various commercial IoT devices.

1 Introduction

The ability to launch massive distributed denial of service (DDoS) attacks via a botnet of compromised devices is an exponentially growing risk in the Internet of Things (IoT) [7, 23]. Such massive attacks, possibly emerging from IoT devices in home networks [12], hit not only the target of the attacks but also the infrastructure of telecommunication service providers (telcos) along the attack path. Given the large bandwidth assigned to customers nowadays, the combined traffic surge from infected IoT devices could eventually overload the telco's infrastructure. This might cause episodic downtime and a serious backlash in the form of widespread customer dissatisfaction.

Typically, IoT-based DDoS attacks rely on exploiting vulnerabilities of specific models of IoT devices [12]. In such cases, most domestic customers who connect IoT devices to their home networks do not have the knowledge or means to handle ongoing attacks, and the burden of preventing them falls on the telco. To effectively scale and defend against IoT-based attacks launched from customers' premises, telcos can continuously monitor their customers' traffic. Based on the monitored traffic, telcos can detect exploitation attempts, infections, and executed attacks on third parties, and then block these activities. However, detection at that stage might come too late, after services have already malfunctioned and the telco's reputation has been hurt. For these reasons, we propose a method for detecting connected vulnerable IoT models before they are compromised. Thus, in the case of DDoS attacks, our method can facilitate offloading the huge traffic amounts generated by an abundance of infected domestic IoT devices. In turn, this can prevent the combined traffic surge from hitting the telco's infrastructure, reduce the likelihood of service disruption, and ensure continued service availability.

In this paper we propose a novel method for telcos to mitigate the above IoT-related risks posed by their domestic customers. It relies on monitoring the traffic of each smart home separately in order to answer the following question: is an IoT model known to be vulnerable to a DDoS attack connected to this network? The method relies on NetFlow records, and it does not violate the customers' privacy since it does not analyze traffic payloads. A telco using our proposed method can (1) detect vulnerable IoT devices connected behind a NAT, and (2) use this information to take action. We empirically evaluate our method on genuine NetFlow records collected in our lab over a period of ten days from numerous commercial smart home IoT devices. We also compare our NetFlow-based method to two existing deNATing methods: (1) a domain-based method [11] and (2) a method based on the DNS IP-ID [19]; we evaluate them empirically on packet-level data collected simultaneously from the same network. Unlike some past studies, which applied their methods to partially, questionably, or completely unlabeled datasets, our datasets are explicitly labeled with the device model. We share all of our datasets with the scientific community to promote future reproducible research, so given the ground truth labeling, both our study and future studies can be truthfully evaluated in terms of classification performance.

2 Background

2.1 NATing, deNATing and IoT Identification Behind a NAT

In home networks it is common [10, 16] to use NAT-enabled Wi-Fi routers. As part of NATing the outbound traffic, the NAT routers effectively 'hide' the internal IP addresses of individual connected devices by replacing them with the router's external IP address. Once the traffic is NATed, it becomes difficult to correlate each packet with its packet stream from the outside. As described by [19], deNATing is the reverse of NATing, and it aims at re-identifying the communication flowing through a NAT. In Section 4, we survey existing deNATing methods and illustrate their shortcomings with regard to our use case.

In the IoT identification literature it is typically required to analyze chronological sequences of packets [18, 14] or sessions [17] for device (type) fingerprinting, or even for the basic calculation of inter-arrival times [15, 22]. When the traffic is NATed, separating the packets into distinct, time-ordered sequences becomes a real challenge, and thus the validity of existing IoT identification methods is undermined. To overcome this, we propose to use the popular NetFlow protocol (discussed next), which already aggregates packets internally into (Net)Flows. Nonetheless, although NetFlow effectively performs deNATing internally, detecting specific IoT models based on a single NetFlow record remains a significant challenge.

2.2 NetFlow as a Basis for Privacy-Preserving Traffic Analysis

In the domain of network traffic analysis, several levels of data granularity are typically used to define an entity, such as a packet, a transaction, a session, a flow, a conversation window, etc. [4, 3]. Among them, packet-level traffic analysis is a common approach, wherein deep packet inspection (DPI) is an accepted practice [8]. This approach requires the capturing and analysis of highly detailed network traffic data, including the payload of each packet. Although potentially informative for IoT model detection, there are some known disadvantages [13] to using DPI in terms of efficiency and privacy preservation, as follows.

  • Efficiency: The collection and analysis of the entire raw traffic (including the payload) in networks with high traffic rates is technically challenging. For a telco, the traffic volume can reach multiple gigabits per second, and it is far from trivial to capture and analyze such tremendous amounts of data.

  • Privacy preservation: DPI allows the telco access to its customers' personal information. Moreover, if this data is leaked, the communicating parties are exposed to a privacy risk, as the information might reveal private data (e.g., video captures transmitted over HTTP).

To address the above disadvantages we propose to use NetFlow [6] instead. NetFlow was integrated as a feature in Cisco's routers and provides the capability to summarize IP network traffic passing through an interface. By relying only on traffic statistics and metadata aggregated by NetFlow, we preserve the privacy of the communicating parties: we do not access the actual payload and we do not use this information at all in our method. In addition, collecting and analyzing only NetFlow's statistical aggregations instead of the raw data requires significantly less computation and storage, thus making the analysis more efficient. It is also worth mentioning that NetFlow is a common solution natively supported by most routers [26], and it is acknowledged as the de facto standard for compact representation of large traffic rates. NetFlow has also been used in the past for security-related tasks such as botnet detection and deNATing, and it has already been introduced as a privacy-preserving solution [26, 1, 11].

2.3 System and Threat Model

Our system model is a typical network setup found in smart homes. The IoT devices are connected to a gateway router providing an interface for connecting IP-enabled devices to the Internet. We assume that during the initial setup, when the IoT devices connect to the network, they may have security vulnerabilities that have not yet been exploited. In our threat model, we assume that (1) our network setup is likely to be targeted with attacks (e.g., DDoS) which could be carried out by botnets such as Mirai; (2) the telco would like to observe the traffic emerging from the customers' premises, which is NATed; (3) the telco is a passive listener that wishes to block IoT devices which are vulnerable and susceptible to cyberattacks; and (4) the telco does not know which applications are running and must preserve privacy while monitoring the traffic.

3 Research Goals and Contributions

We focus on the following questions:

  1. Can we detect IoT models connected behind a NAT by analyzing NetFlows?

  2. Can we perform this detection without compromising the users’ privacy?

  3. Can we provide satisfactory detection performance for a large variety of IoT models and validate that our approach outperforms the state of the art?

We summarize our contributions as follows:

  1. To the best of our knowledge, we are the first to apply machine learning techniques to NATed network traffic for IoT model detection.

  2. The current state of the art relies on data sources and features that necessitate the analysis of highly detailed data and thus compromise users' privacy. Our approach preserves privacy, mostly by using only common meta-features extracted by NetFlow.

  3. We evaluate our method using genuine traffic data collected from various IoT devices, and demonstrate that we can detect IoT models behind a NAT. We also share both our NetFlow and pcap datasets with the scientific community.

4 Related Work

4.1 Scope and Orientation

Several prior NAT-related studies (for example, [10]) focused on identifying the presence of a NAT device in a network. We refer to them as non-deNATing, since they do not perform classical deNATing, i.e., they do not divide NATed traffic (all with the same source IP of the NAT router) into distinct packet streams for further analysis. As can be seen in Table 1, most of the other studies aimed at uncovering the identity and/or the quantity of the devices connected behind such NAT devices, or the people who use them. Some motivations for doing so are:

  • Security: Detecting attackers [20] or devices that are vulnerable [21, 11] or infected by a malware [19].

  • Traffic management: Applying policies such as parental browsing control or per device communication limits [19], as well as traffic interception.

  • Commerce: Traffic profiling for targeted advertising [19].

  • Privacy violation: Inferring user connectivity [26] and behavior [2].

Our motivation is also security-oriented; it reflects the viewpoint of a telco wishing to defend against IoT botnets that might severely impact the communication availability. Unlike prior studies whose subject of interest was (NATed) operating systems [19] or people and their behavior [2, 26, 20, 24], our method is designed to detect connected IoT device models. Additionally, in contrast to some NAT-related studies performed in the past which addressed large-scale environments like smart cities [26, 21, 24, 11] and smart manufacturing [9, 10, 21, 11], we tailored our method to smart homes. We evaluated it empirically using a wide range of popular home (consumer) IoT devices like smart light bulbs, sockets, and webcams, as well as laptops and smartphones.

Table 1: The scope and orientation of previously-conducted deNAT-related studies

4.2 Methods and Evaluation

As summarized in Table 2, a variety of data sources and related methods have been proposed for deNATing. Most of them rely on features extracted directly from the TCP/IP stack; however, in some cases they fail to address our use case. For instance, features from the application layer like the HTTP user agent [10], MSN transaction ID [20], and DNS domains [2] might not be available to the telco when the traffic is encrypted. In contrast, we use NetFlow, which extracts data from different layers (such as the network and transport layers, which are usually not encrypted), as well as metadata like traffic rates and volumes. The TCP timestamp is another commonly used feature in deNATing [19, 25]; however, its analysis requires a minimal number of packets, and the option might even be disabled. Moreover, many IoT devices use the UDP transport protocol, so the robustness of this feature is questionable. In contrast, our NetFlow-based method can handle both TCP and UDP, so it covers a wider range of IoT models. Other approaches relied on the server(s) the IoT devices communicate with [2] and the related DNS domains [11]. With NetFlow, we can analyze all kinds of traffic and not just specific types such as DNS. Some works relied on the IP TTL [16], based on the assumption that a NAT device decrements its value by one; however, this behavior might differ among NAT devices. Others proposed using the open ports [9] to deduce the traffic's origin; however, these studies aimed at roughly distinguishing between SCADA and non-SCADA traffic. Our method is better suited to the given use case, as it offers fine-grained classification. We have empirically shown that our method is effective at distinguishing between multiple IoT models, even among specific models of the same make (e.g., different D-Link webcams).

Table 2: The methods used by previous deNAT-related studies

For empirical evaluation, most prior studies used tcpdump log files, Shodan scans [9, 21], or simulated data [24]. In some cases, the dataset is not available for research reproduction [10]; in some cases, it is questionably labeled [11] or not labeled at all [16]; and in most cases, it does not represent smart homes. Moreover, even when labeling is present, the data comes only from non-IoT hosts [20], or the class does not reflect the device model [19]. In contrast, in our research we use NetFlow records collected in our controlled, home-like lab over a period of ten days from various IoT and non-IoT devices. For benchmarking we also simultaneously collected pcap files, and most importantly, we explicitly labeled our data with the ground truth regarding the device models. We believe that the novelty, authenticity, diversity, scope, and reliability of our publicly available datasets will facilitate future research and may serve as a benchmark for deNATing algorithms, specifically in the context of smart homes.

5 Proposed Method

Whenever a harmful IoT exploit is discovered (step 1 in Figure 1), and a related vulnerability is identified (step 2) in a certain IoT model, a telco might want to mitigate the associated risk to its network. To accomplish this, the first step is to detect the presence of such IoT devices among the telco’s customers.

Figure 1: An overview of the key steps in our proposed method

In order to detect IoT device models connected behind domestic NATs, we propose a method consisting of central training (steps 3-4), which maximizes efficiency and control, followed by local deployment (steps 5-7) of the trained classifier. We use the following notation for describing our proposed method, which is reproduced for every IoT model separately:

  • Model m of an IoT device, defined by the combination of its type, make, and version. For example, webcam.D_Link.DCS_933L and webcam.D_Link.DCS_942L are two separate IoT models which share the same type and make.

  • Devices d_m, instances of IoT model m, each defined by a unique MAC address.

  • Lab L, which is used for collecting and analyzing traffic data from d_m. This central lab can be operated by the telco itself or by a third party.

  • Flow f, an aggregation of the communication between a client and a server, produced using NetFlow and defined by the (1) ingress interface, (2) source IP address, (3) destination IP address, (4) IP protocol, (5) source port, (6) destination port, and (7) IP type of service (a minimal sketch of this flow key appears after this list).

  • Flow-level dataset D_m, collected in L from d_m using NetFlow and nProbe.

  • Training and validation sets D_m^train and D_m^val for m, containing only flows generated by IoT devices of model m.

  • Test set D^test, containing flows from various IoT models as well as non-IoT devices such as PCs, smartphones, etc.

  • One-class classifier C_m for m, trained in L using D_m^train and D_m^val.

  • Homes h, the monitored networks of the telco's customers.

  • Local detectors LD_h. Each LD_h is an agent which monitors the NetFlows emerging from the respective home h in order to decide whether an IoT device of model m is connected behind h's NAT or not.

  • Anomaly score s_f, assigned to a flow f by the one-class classifier C_m. The higher this score, the more likely it is that f originated from a device of model m.

  • Threshold tr, used to determine whether a flow originated from m or not. Flows with scores below tr are classified as originating from non-m devices.
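
As an illustration, the seven-field flow key defined above could be represented as in the following minimal Python sketch; the class and field names are illustrative and not taken verbatim from the NetFlow specification.

```python
from typing import NamedTuple

class FlowKey(NamedTuple):
    """The seven NetFlow key fields listed above (illustrative field names)."""
    ingress_interface: int
    src_ip: str
    dst_ip: str
    ip_protocol: int   # e.g., 6 = TCP, 17 = UDP
    src_port: int
    dst_port: int
    ip_tos: int        # IP type of service

# A NATed flow as seen from the telco side: the source IP is the router's external address
example = FlowKey(0, "203.0.113.7", "198.51.100.20", 17, 51000, 53, 0)
```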

5.1 Central Training

For a potentially vulnerable IoT model m, devices d_m are connected to L's internal network behind a NAT. Upon normal usage of d_m, network traffic data is generated and subsequently processed by NetFlow, which is installed on the NAT router. The produced flows are continuously collected (step 3) using nProbe into a designated storage server, thus accumulating the raw flow-level dataset D_m. Having gathered a sufficient amount of flows in D_m, preprocessing steps are undertaken, followed by the application of machine learning techniques for training and fine-tuning a classifier (step 4). Instead of conventional binary or multi-class supervised algorithms, we propose to train a one-class classifier C_m for each m separately. With this approach, (1) D_m is quickly collected, (2) C_m is trained independently of any non-m device (IoT or not), and (3) C_m can be shared among telcos or other organizations as a standalone classifier.

5.2 Local Deployment

In order to preserve the privacy of end users, a telco can deploy a traffic monitoring solution only from outside its customers' premises. This solution cannot be implemented on the home router (location 1 in Figure 2) because end users are not obligated to use any particular type of router. Instead, we propose to place our solution on a hardware agent situated outside the customers' premises, between the home router and the ONT (location 2 in Figure 2). This local detector monitors the (NATed) traffic data emerging from the home network and applies the pre-trained classifier C_m in order to detect connected IoT devices of model m.

Figure 2: Possible locations along the network to deploy the IoT model detection method

In step 5 the centrally trained classifier C_m is distributed among the local detectors LD_h. Each LD_h can be implemented using a low-cost thin computer (such as a Raspberry Pi), and should have the following software components: (1) nProbe for collecting flows from h, (2) a software environment (e.g., Python) for preprocessing the flows, (3) the trained classifier C_m, and (4) a software component capable of decision making and executing actions based on the classification results. As part of continuous monitoring (step 6), each flow f collected by LD_h from h is preprocessed exactly as it was in L, in order to achieve the same data structure and scale. Then, f is assigned an anomaly score s_f by C_m. If s_f ≥ tr, f is marked as generated by an IoT device of model m (step 7).
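
As an illustration of steps 5-7, the following minimal Python sketch shows how a local detector LD_h could apply the distributed classifier C_m to an incoming flow. The file names, the example threshold value, and the higher-is-normal score convention (which follows Scikit-Learn's IsolationForest.decision_function) are assumptions made for illustration rather than a description of our actual implementation; numeric scaling is omitted for brevity.

```python
import joblib
import pandas as pd

# Assumed artifacts received from the central lab L in step 5 (file names are illustrative)
clf = joblib.load("C_m.joblib")                    # the trained one-class classifier C_m
train_columns = joblib.load("C_m_columns.joblib")  # dummy-variable columns used during training
tr = -0.05                                         # classification threshold (illustrative value)

def preprocess(flow: dict) -> pd.DataFrame:
    """Encode a single NetFlow record into the training feature space (step 6)."""
    encoded = pd.get_dummies(pd.DataFrame([flow]))
    # Align with the columns seen in central training; unseen categories become all-zero columns
    return encoded.reindex(columns=train_columns, fill_value=0)

def detect(flow: dict) -> bool:
    """Return True if the flow is attributed to an IoT device of model m (step 7)."""
    s_f = clf.decision_function(preprocess(flow))[0]   # higher score = more likely model m
    return s_f >= tr
```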

5.3 Actions to be Triggered by IoT Model Detection

A positive classification by the local detector LD_h can trigger various automated reactions, including:

Traffic blocking: This is probably the most severe reaction, and it is not advised to take it immediately. The reason is that in the case of a false positive detection, traffic blocking means denial of service to a legitimate device, followed by customer dissatisfaction and damage to the telco's reputation.

Email notification: A more moderate (and perhaps more productive) reaction is to notify the customer that a vulnerable IoT device might be connected to the network, so that a software update or a password change is advised.

Additional verification: Although privacy-preserving, detection that is based only on metadata in a single NetFlow cannot guarantee perfect results (i.e., without any false positives). Therefore, a cascading verification process, in which an additional classifier confirms the detection, can be considered.

We are aware that typically a telco can already see all packet contents, including application layer data. Still, we propose that the telco rely solely on the local detectors for data collection, analysis, and automated reaction. The main reason is that a central monitoring solution would probably end up with a table that holds information on the IoT devices owned by specific customers. On top of the privacy violation, this table could become a valuable target for attackers.

6 Evaluation Method

6.1 Lab Setup

In order to collect representative data, imitating a real-world scenario of various IoT devices connected behind a NAT, we set up a dedicated network as illustrated in Figure 3. First, we partitioned a switch into two VLANs, representing the home network and the telco side respectively. Then, we connected a variety of commercial IoT devices, as well as laptops and smartphones (details are provided in Table 3), to the home VLAN via a wireless access point. We also connected the home VLAN to a NAT router on which NetFlow was installed. In turn, we connected the router to the telco VLAN, which was connected to the Internet. To imitate the stages of central training and local deployment, we installed nProbe on a server and on a Raspberry Pi, respectively, to collect the NetFlow records from the router and analyze them accordingly. In addition, to enable the comparison of our method with previous studies, we performed port mirroring on both VLANs and captured pcap files using Wireshark.

Figure 3: Our evaluation setup, imitating a customer’s smart home

6.2 Data Acquisition

Data collection. We operated the devices routinely over a period of approximately ten days to collect genuine traffic data. For instance, we made the webcams transmit video, turned the sockets and the light bulb on and off, surfed the Web via the laptops and smartphones, etc. The resultant traffic was captured simultaneously using NetFlow and Wireshark.

Ground truth labeling. In our lab we recorded flows both behind and in front of the NAT, and we also made sure to configure static IPs in the internal network. This way, we were able to match each external outbound flow (labeled with the source IP address of the router) with its internal twin, which is correctly labeled with the device's source IP. By matching these with a table of IP/MAC/device model, we labeled the NetFlows for analysis. We repeated the same matching procedure with the pcap files, so the ground truth labels are available for them as well.
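
For illustration, this matching and labeling procedure could be sketched as follows; the file and column names are hypothetical placeholders, assuming the internal and external flow records were exported to CSV files.

```python
import pandas as pd

# Assumed CSV exports (file and column names are illustrative, not the actual lab artifacts)
internal = pd.read_csv("flows_internal.csv")      # flows recorded behind the NAT (true source IPs)
external = pd.read_csv("flows_external.csv")      # flows recorded in front of the NAT (router's IP)
ip_to_model = pd.read_csv("ip_mac_model.csv")     # columns: internal_ip, mac, device_model

# Match each external outbound flow with its internal twin on everything except the source IP
keys = ["dst_ip", "protocol", "src_port", "dst_port", "flow_start_ms"]
twins = internal[keys + ["src_ip"]].rename(columns={"src_ip": "internal_ip"})
labeled = external.merge(twins, on=keys, how="left")

# Attach the ground-truth device model via the static internal IP
labeled = labeled.merge(ip_to_model[["internal_ip", "device_model"]], on="internal_ip", how="left")
```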

Feature extraction. We used the following NetFlow features [5] as a feature set that is minimal yet potentially informative for IoT model detection (a short extraction sketch follows the list):

  1. IN_BYTES: The number of incoming bytes associated with an IP Flow

  2. OUT_BYTES: The number of outgoing bytes associated with an IP Flow

  3. DST_TOS: Type of Service byte setting when exiting outgoing interface

  4. SRC_TOS: Type of Service byte setting when entering incoming interface

  5. PROTOCOL: IP protocol byte

  6. L4_DST_PORT: TCP/UDP destination port number

  7. L7_PROTO_NAME: Layer 7 protocol name

  8. flow_duration: Extracted from NetFlow by subtracting FLOW_START_MILLISECONDS from FLOW_END_MILLISECONDS
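
Assuming the nProbe export is available as a DataFrame with the field names above, the minimal feature set (including the derived flow_duration) could be extracted as in this sketch:

```python
import pandas as pd

FEATURES = ["IN_BYTES", "OUT_BYTES", "DST_TOS", "SRC_TOS", "PROTOCOL",
            "L4_DST_PORT", "L7_PROTO_NAME", "flow_duration"]

def extract_features(netflows: pd.DataFrame) -> pd.DataFrame:
    """Keep only the minimal feature set and derive flow_duration (in milliseconds)."""
    flows = netflows.copy()
    flows["flow_duration"] = flows["FLOW_END_MILLISECONDS"] - flows["FLOW_START_MILLISECONDS"]
    return flows[FEATURES]
```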

6.3 Preprocessing and Experimentation

First, we partitioned the flow-level dataset chronologically, such that the earliest 70% of each device's flows (identified by the MAC address) is included in the training set, the next 10% in the validation set, and the remaining (latest) 20% in the test set D^test. Then, using Python and Scikit-Learn, we repeated the following experiment 13 times, once for each IoT model m (a sketch of this pipeline is given after the list):

  1. We filtered out all of the non-m flows from both the training and the validation sets (yielding D_m^train and D_m^val), because we chose the technique of one-class classification.

  2. In D_m^train we scaled the numeric features to the range of [0,1] and encoded the categorical features into dummy variables of the same range.

  3. While preprocessing D_m^val and D^test we performed the same scaling and encoding, specific to the current m. Consequently, the number of dummy variables differs among IoT models, depending on the number of unique values found in D_m^train for each categorical feature (e.g., L4_DST_PORT).

  4. We trained C_m on D_m^train and saved it to disk to enable distribution to the local detectors. In preliminary experimentation we found that the Isolation Forest algorithm performs much better than One-Class SVM and Local Outlier Factor (LOF), so we chose it as the sole algorithm in our study.

  5. We applied C_m to D^test to evaluate the method's classification performance.
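
The following Python/Scikit-Learn sketch outlines the above pipeline under simplifying assumptions; column names such as mac, device_model, and FLOW_START_MILLISECONDS are placeholders for our labeled flow records, and the validation set (used later for threshold calibration) is omitted for brevity.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

NUMERIC = ["IN_BYTES", "OUT_BYTES", "flow_duration"]
CATEGORICAL = ["DST_TOS", "SRC_TOS", "PROTOCOL", "L4_DST_PORT", "L7_PROTO_NAME"]

def chronological_split(flows: pd.DataFrame) -> dict:
    """Earliest 70% of each device's flows -> train, next 10% -> validation, last 20% -> test."""
    parts = {"train": [], "val": [], "test": []}
    for _, dev in flows.sort_values("FLOW_START_MILLISECONDS").groupby("mac"):
        n = len(dev)
        parts["train"].append(dev.iloc[: int(0.7 * n)])
        parts["val"].append(dev.iloc[int(0.7 * n): int(0.8 * n)])
        parts["test"].append(dev.iloc[int(0.8 * n):])
    return {k: pd.concat(v) for k, v in parts.items()}

def preprocess(part: pd.DataFrame, train_min=None, train_max=None, train_cols=None):
    """Min-max scale numeric features to [0,1] and one-hot encode categorical ones,
    using the statistics and dummy columns derived from the training set."""
    x = pd.get_dummies(part[NUMERIC + CATEGORICAL], columns=CATEGORICAL)
    if train_cols is None:  # fitting on the training set
        train_min, train_max, train_cols = x[NUMERIC].min(), x[NUMERIC].max(), x.columns
    x = x.reindex(columns=train_cols, fill_value=0)
    x[NUMERIC] = (x[NUMERIC] - train_min) / (train_max - train_min + 1e-9)
    return x, (train_min, train_max, train_cols)

def run_experiment(all_flows: pd.DataFrame, m: str):
    """One-class experiment for IoT model m over the labeled flow dataset."""
    splits = chronological_split(all_flows)
    train_m = splits["train"][splits["train"].device_model == m]   # step 1: keep only model-m flows
    x_train, stats = preprocess(train_m)                           # step 2: scale + encode
    clf = IsolationForest(random_state=0).fit(x_train)             # step 4: train C_m
    x_test, _ = preprocess(splits["test"], *stats)                 # step 3: same preprocessing
    return clf, clf.decision_function(x_test), splits["test"]      # step 5: scores on D^test
```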

6.4 Performance Metrics

We used the following widely accepted metrics to evaluate our detection method (a short computation sketch follows the list):

  1. True Positive Rate (TPR): The ratio of cases where a flow f was generated by a device of model m and correctly detected as such by C_m.

  2. False Positive Rate (FPR): The ratio of cases where a flow f was not generated by a device of model m but was falsely classified as such by C_m.

  3. The area under the ROC curve (ROC AUC): Class discrimination capability for differing threshold (tr) values.

  4. Time to detect: Depends on the inter-arrival time (IAT) of the flow f, as well as its duration, preprocessing time, and classification time.
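
A minimal sketch of how the first three metrics could be computed from the anomaly scores, assuming a boolean ground-truth vector and the higher-is-normal score convention used above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(scores: np.ndarray, is_model_m: np.ndarray, tr: float):
    """TPR, FPR, and ROC AUC for the one-class detector of IoT model m.
    scores: anomaly scores s_f (higher = more likely model m); is_model_m: boolean ground truth."""
    predicted_m = scores >= tr
    tpr = (predicted_m & is_model_m).sum() / max(is_model_m.sum(), 1)
    fpr = (predicted_m & ~is_model_m).sum() / max((~is_model_m).sum(), 1)
    return tpr, fpr, roc_auc_score(is_model_m, scores)
```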

7 Results and Discussion

Table 3: The IoT models in our experiments, their datasets, and the classification performance using C_m

7.1 Experimental Results

Our experiments are summarized in Table 3, sorted by the number of NetFlows. Excluding the bottom seven rows (non-IoT devices) leads to the following conclusions:

  • Training time. The mean (standard deviation) of the time it takes to train C_m is only 3.15 (2.81) seconds. Thus, frequent retraining for performance improvements is highly feasible.

  • Classifier size. C_m requires very little disk space, just 1,350.62 (280.39) KB. Thus, deploying it on thin local detectors is practical.

  • Time to detect. The time it takes to preprocess a given flow f is on the order of microseconds, and so is the classification time; thus, both are negligible. The IAT and duration of f are much more significant, and their sum (shown in Table 3) varies between approximately 4 and 18 minutes.

  • ROC AUC. For most IoT models, reasonable values of 0.85 (0.05) are attained using C_m. Only webcam.Sricam.SP017 performs substantially worse; a closer look revealed that in about 19% of the cases it is confused with webcam.Amcrest.IPM_723S. Apparently, the reason for this confusion is the substantial overlap in their communicated domains, including Amazon, HTTP.Amazon, NTP, NTP.Amazon, and SSL.Amazon. Figure 4(a) shows how almost all of the ROC curves (one for each IoT model) share the same shape.

  • Default TPR and FPR. When C_m's classification is determined by comparing s_f to Scikit-Learn's default threshold, a TPR of 0.76 (0.04) is obtained. This means that, on average, when an IoT device of model m is connected behind a NAT, a telco can detect it in 76% of the cases based only on the metadata captured in a single NetFlow. Consequent actions (see Subsection 5.3) can then substantially mitigate the risk to the telco's infrastructure and service. Unfortunately, this TPR is accompanied by an FPR of 0.13 (0.20), meaning that in too many cases false alarms are generated. Hence, we looked for solutions that reduce the FPR while preserving satisfactory TPR levels.

  • Percentile-based TPR and FPR. While searching for methods to overcome the challenge of a high FPR, we noticed that using D_m^val for calibration can improve the classification performance on D^test. That is, we (1) trained the classifier C_m using D_m^train, (2) applied C_m to D_m^val, (3) calculated a predefined percentile of the resultant distribution of the anomaly score s_f, and (4) used this value as the new threshold tr when classifying D^test (see the sketch below). So, for varying percentiles in [0, 30] of s_f on D_m^val we recalculated the TPR and the FPR on D^test, and we found that the percentile annotated in Table 3 provides the best results: a decrease of the FPR for almost all the IoT models, and most substantially for webcam.Edimax.IC_3116W, webcam.Amcrest.IPM_HX1B, and socket.TP_Link.HS110. Actually, for two IoT models the FPR decreased to absolute zero, and for five others the FPR was found to be 0.02 or less. Overall, the FPR decreased to 0.11 (0.21) at the cost of reducing the TPR to 0.73 (0.05). We note that this level of performance has yet to be improved in order to support actual deployment by a telco, and we discuss this in Subsection 7.3.
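
A minimal sketch of this calibration step, reusing the evaluate() helper sketched in Section 6.4 and assuming the higher-is-normal score convention:

```python
import numpy as np

def calibrate_threshold(clf, x_val, percentile: float) -> float:
    """Derive the classification threshold tr from the anomaly-score distribution
    on the validation set D_m^val, which contains only flows of model m."""
    return float(np.percentile(clf.decision_function(x_val), percentile))

# Illustrative sweep: recompute TPR/FPR on the test set for candidate percentiles in [0, 30]
# for p in range(0, 31, 5):
#     tr = calibrate_threshold(clf, x_val, p)
#     tpr, fpr, _ = evaluate(test_scores, y_test, tr)
```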

Figure 4: Experimental results. (a) ROC curves for multiple IoT models, based on classifying NetFlows. (b) Distribution of the increment in IP-ID values of DNS requests for the IoT models in our experiment. (c) DNS IP-ID of four Windows- and Android-based hosts over time (adapted from [19]). (d) DNS IP-ID of two Linux-based IoT models over time (gathered from our experiments), motivating the "slope-matching" idea.

7.2 Benchmarking

In order to evaluate our proposed method more comprehensively, we decided to empirically compare it with two current deNATing methods. We implemented them in our lab with a few necessary adjustments, and because they use packet-level traffic data we tested them on pcap files that had been collected simultaneously by Wireshark.

7.2.1 DNS IP-ID-Based deNATing

The method proposed in [19] aimed at deNATing traffic of devices using the same OS, rather than detecting different IoT models. It relies on the fact that the IP-ID field in some OSs is consistently incremented for successive packets sent to the same destination IP. Since DNS requests to the same DNS resolver are sent to the same destination IP, the value of the IP-ID field is monotonically increasing between DNS requests. Thus, tracking the IP-ID values of multiple DNS requests coming out of a NAT router to the same resolver may assist in correlating subsequent packets of distinct devices behind the NAT.

The researchers experimented with Windows 8 and 10, and Android. They showed that for successive DNS requests the difference in IP-ID values is very stable (see Fig. 4(c)) and almost always equals one. This makes deNATing rather simple and effective with these OSs, unlike our Linux-based IoTs, where the difference was found to be highly variable (see Fig. 4(b)). Still, as can be seen in Fig. 4(d), different IoT models may increment their respective DNS IP-ID values in a consistent and characteristic way, such that robust linear regression models can be trained. In turn, the trained slopes can be compared with the ones found in a test set. A good match between trained and observed slopes can be the basis for an IoT model detection technique. We leave this to future work.
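
A minimal sketch of how such a slope could be estimated per traffic stream; the choice of a Theil-Sen robust regressor and the unwrapping of the 16-bit IP-ID counter are our assumptions, not part of [19].

```python
import numpy as np
from sklearn.linear_model import TheilSenRegressor

def ip_id_slope(timestamps: np.ndarray, ip_ids: np.ndarray) -> float:
    """Fit a robust linear model to the DNS IP-ID values of a single traffic stream
    and return its slope (IP-ID increments per second), as motivated by Fig. 4(d)."""
    # Unwrap the 16-bit IP-ID counter so wrap-arounds do not break the linear trend
    unwrapped = np.unwrap(ip_ids.astype(float), period=65536)
    model = TheilSenRegressor().fit(timestamps.reshape(-1, 1), unwrapped)
    return float(model.coef_[0])

# Slope matching: compare the slope of an observed stream to slopes learned per IoT model
# and attribute the stream to the model with the closest slope (within a tolerance).
```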

7.2.2 DNS Domain-Based deNATing

In [11] the objective was to identify the type of NATed IoT devices, similar to this paper. For each IoT device type the authors tracked a list of communicated device-facing server names. Then, during a test period, if the number of communicated server names from the related list surpassed a threshold, they inferred that the device type is present (a simplified sketch follows the constraint list below). Their method is somewhat limited, as it poses the following constraints on the type and quantity of the communicated servers:

  • It excludes third party and human-facing server names to minimize the FPR.

  • It excludes device types which communicate with fewer than three servers.

  • If an overlap exists among the server names of multiple device types, the method cannot guarantee that the devices are distinguishable, and it reports that at least one of them was detected.
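
For reference, a simplified sketch of this counting scheme follows; it reflects our own reading of [11], and the server names in the usage example are hypothetical placeholders.

```python
from typing import Dict, Iterable, List, Set

def detect_by_domains(observed: Set[str],
                      device_servers: Dict[str, Iterable[str]],
                      threshold: int = 3) -> List[str]:
    """Simplified, per-window check in the spirit of [11]: report a device type as present
    if at least `threshold` of its known server names were contacted during the window."""
    return [device_type for device_type, servers in device_servers.items()
            if len(observed & set(servers)) >= threshold]

# Illustrative usage with hypothetical (placeholder) server names and a 10-minute window
known = {"webcam.VendorA.ModelX": ["s1.vendor-a.example", "s2.vendor-a.example", "ntp.vendor-a.example"]}
print(detect_by_domains({"s1.vendor-a.example", "s2.vendor-a.example", "ntp.vendor-a.example"}, known))
```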

Figure 5: The domains (green nodes) requested by our IoT devices (blue). The red node represents webcam.Sricam.SP017, which had the lowest ROC AUC.

Fig. 5 illustrates the results of applying the method proposed in [11] to our test set. It presents the DNS domains (green nodes) requested by the IoT devices (blue nodes) in our experiment; it can be seen that some of the domains are human-facing, some belong to third parties, and an overlap of requested domains clearly exists. In practice, meeting all of the above constraints would eliminate all the devices we experimented with, so we implemented the DNS domain-based method without limiting the types of communicated servers. We also used the server names rather than their resolved IP addresses in order to promote efficiency, relying on the fact that Linux-based IoT devices do not support DNS caching. Also, in the original paper the detection performance was reported on a time window of days; to compare the performance of their method with ours more fairly, we examined it using a time window of 10 minutes.

Altogether, only three of our IoT models met the criterion of at least three device-facing communicated servers, and they performed well: a TPR of 1.00 for all of them, an FPR of 0.00 for socket.TP_Link.HS110 and webcam.Amcrest.IPM_HX1B, and an FPR of 0.56 for webcam.Edimax.IC_3116W. However, a coverage of three out of 13 IoT models seems far from sufficient for a telco to implement.

7.3 Limitations

We are aware of some shortcomings of our proposed method, as follows. First, upon deployment, a telco can train C_m only after it purchases devices of model m, configures them, and collects a sufficient amount of NetFlow records. This process might take a few days to complete. However, we are not aware of any other traffic-based detection method that can skip the time-consuming data acquisition stage. To shorten this stage, it is advised to connect multiple devices simultaneously. Second, firmware updates to IoT models might make the classifiers obsolete. Again, any other data-driven classifier would probably face the same challenge.

7.4 Future Research

The scope of this paper was limited to developing a method which is capable of detecting vulnerable IoT models behind a NAT. However, detection is only a first stage, to be complemented with locally-installed tools such as vulnerability scanning (to check if the vulnerability has been patched) or virtual patching.

Additional challenges to address are (1) improving our method in terms of the TPR and FPR, possibly using the cascading detection approach (discussed in Subsection 5.3), (2) looking for solutions that are less costly than local detectors, can support a multitude of households and are still privacy preserving, and (3) exploring the potential of the DNS IP-ID ”slope matching” idea.

8 Conclusion

In this paper, we demonstrated that our proposed method enables a telco to detect NATed devices of any IoT model of interest with a TPR of about 73%. This is a first step towards dramatically mitigating the risk posed to the telco's infrastructure by domestic IoT devices that might be recruited to botnets. The detection takes only a few minutes and is performed while preserving customers' privacy.

References

  • [1] Abt, S., Baier, H.: Towards efficient and privacy-preserving network-based botnet detection using netflow data. In: Ninth International Network COnference (INC). pp. 37–50 (2012)
  • [2] Apthorpe, N., Reisman, D., Feamster, N.: A Smart Home is No Castle: Privacy Vulnerabilities of Encrypted IoT Traffic. In: Workshop on Data and Algorithmic Transparency (2017)
  • [3] Bekerman, D., Shapira, B., Rokach, L., Bar, A.: Unknown malware detection using network traffic classification. In: Communications and Network Security (CNS), 2015 IEEE Conference on. pp. 134–142 (2015)
  • [4] Callado, A.C., Kamienski, C.A., Szabó, G., Gero, B.P., Kelner, J., Fernandes, S.F., Sadok, D.F.H.: A survey on internet traffic identification. IEEE Communications Surveys and Tutorials 11(3), 37–52 (2009)
  • [5] cisco.com: NetFlow Version 9 Flow-Record Format (2011), https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html
  • [6] cisco.com: Cisco - NetFlow (2017), https://www.cisco.com/c/en/us/tech/quality-of-service-qos/netflow/index.html?dtid=osscdc000283
  • [7] CNA: DDoS attack on StarHub first of its kind on Singapore's telco infrastructure: CSA, IMDA (2016), https://www.channelnewsasia.com/news/singapore/ddos-attack-on-starhub-first-of-its-kind-on-singapore-s-telco-in-7770046
  • [8] El-Maghraby, R.T., Elazim, N.M.A., Bahaa-Eldin, A.M.: A survey on deep packet inspection. In: 2017 12th International Conference on Computer Engineering and Systems (ICCES). pp. 188–197. IEEE (2017)
  • [9] Ercolani, V.J., Patton, M.W., Chen, H.: Shodan visualized. In: 2016 IEEE Conference on Intelligence and Security Informatics (ISI). pp. 193–195. IEEE (9 2016)
  • [10] Gokcen, Y., Foroushani, V.A., Heywood, A.N.Z.: Can We Identify NAT Behavior by Analyzing Traffic Flows? In: 2014 IEEE Security and Privacy Workshops. pp. 132–139. IEEE (5 2014)
  • [11] Guo, H., Heidemann, J.: IP-Based IoT Device Detection. In: Proceedings of the 2018 Workshop on IoT Security and Privacy. ACM (2018)
  • [12] Kambourakis, G., Kolias, C., Stavrou, A.: The Mirai botnet and the IoT Zombie Armies. In: Proceedings - IEEE Military Communications Conference MILCOM. vol. 2017-October (2017). https://doi.org/10.1109/MILCOM.2017.8170867
  • [13] Li, B., Springer, J., Bebis, G., Gunes, M.H.: A survey of network flow applications. Journal of Network and Computer Applications 36(2), 567–581 (2013)
  • [14] Lopez-Martin, M., Carro, B., Sanchez-Esguevillas, A., Lloret, J.: Network Traffic Classifier With Convolutional and Recurrent Neural Networks for Internet of Things. IEEE Access 5, 18042–18050 (2017)
  • [15] Mahalle, P.N.: Object Classification based Context Management for Identity Management in Internet of Things. International Journal of Computer Applications 63(12), 1–6 (2013). https://doi.org/10.5120/10515-5486
  • [16] Maier, G., Schneider, F., Feldmann, A.: NAT Usage in Residential Broadband Networks. In: International Conference on Passive and Active Network Measurement. pp. 32–41. Springer, Berlin, Heidelberg (2011)
  • [17] Meidan, Y., Bohadana, M., Shabtai, A., Guarnizo, J.D., Ochoa, M., Tippenhauer, N.O., Elovici, Y.: ProfilIoT: A Machine Learning Approach for IoT Device Identification Based on Network Traffic Analysis. In: Proceedings of the Symposium on Applied Computing - SAC ’17. pp. 506–509. ACM Press (4 2017)
  • [18] Miettinen, M., Marchal, S., Hafeez, I., Frassetto, T., Asokan, N., Sadeghi, A.R., Tarkoma, S.: IoT Sentinel Demo: Automated Device-Type Identification for Security Enforcement in IoT. In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). pp. 2511–2514. IEEE (6 2017)
  • [19] Orevi, L., Herzberg, A., Zlatokrilov, H., Sigron, D.: DNS-DNS: DNS-based De-NAT Scheme. In: NDSS DNS Privacy Workshop (2017)
  • [20] Ori, Z., Levi, M., Elovici, Y., Rockach, L., Shafrir, N., Sinter, G., Pen, O.: Identifying computers hidden behind a NAT using machine learning techniques. In: ECIW2008 - 7th European Conference on Information Warfare and Security. p. 335. Academic Conferences Limited (2008)
  • [21] Patton, M., Gross, E., Chinn, R., Forbis, S., Walker, L., Chen, H.: Uninvited connections: A study of vulnerable devices on the internet of things (IoT). In: Proceedings - 2014 IEEE Joint Intelligence and Security Informatics Conference, JISIC 2014 (2014). https://doi.org/10.1109/JISIC.2014.43
  • [22] Radhakrishnan, S.V., Uluagac, A.S., Beyah, R.: GTID: A technique for physical device and device type fingerprinting. IEEE Transactions on Dependable and Secure Computing 12(5), 519–532 (2015)
  • [23] Rayome, A.D.: DDoS attacks increased 91% in 2017 thanks to IoT (2017), https://www.techrepublic.com/article/ddos-attacks-increased-91-in-2017-thanks-to-iot/
  • [24] Savage, S., Wetherall, D., Karlin, A., Anderson, T.: Practical network support for IP traceback. In: Proceedings of the conference on Applications, Technologies, Architectures, and Protocols for Computer Communication - SIGCOMM ’00. vol. 30, pp. 295–306. ACM Press (2000)
  • [25] Tekeoglu, A., Altiparmak, N., Tosun, A.S.: Approximating the Number of Active Nodes Behind a NAT Device. In: 2011 Proceedings of 20th International Conference on Computer Communications and Networks (ICCCN). pp. 1–7. IEEE (7 2011)
  • [26] Verde, N.V., Ateniese, G., Gabrielli, E., Mancini, L.V., Spognardi, A.: No NAT’d User Left Behind: Fingerprinting Users behind NAT from NetFlow Records Alone. In: 2014 IEEE 34th International Conference on Distributed Computing Systems. pp. 218–227. IEEE (6 2014)