A wrinkle in time: A case study in DNS poisoning

06/26/2019 · by Harel Berger, et al.

The Domain Name System (DNS) provides a translation between readable domain names and IP addresses. The DNS is a key infrastructure component of the Internet and a prime target for a variety of attacks. One of the most significant threats to the DNS's well-being is a DNS poisoning attack, in which the DNS responses are maliciously replaced, or poisoned, by an attacker. To identify this kind of attack, we start with an analysis of different kinds of response times. We present an analysis of typical and atypical response times, while differentiating between the different levels of DNS servers' response times, from root servers down to internal caching servers. We successfully identify empirical DNS poisoning attacks based on a novel method for DNS response timing analysis. We then present a system we developed to validate our technique that does not require any changes to the DNS protocol or any existing network equipment. Our validation system tested data from different architectures including LAN and cloud environments and real data from an Internet Service Provider (ISP). Our method and system differ from most other DNS poisoning detection methods and achieved high detection rates exceeding 99%. When combined with other detection methods, they can considerably enhance the accuracy of those methods.


I Introduction

The Domain Name System (DNS) [1, 2] is one of the best-known protocols on the Internet. Its main function is to translate human-readable domain names into their corresponding IP addresses. Its importance for the Internet stems from the fact that virtually all day-to-day network applications use DNS. The translation process is carried out through DNS queries between the client and the DNS server (resolver). The ordering of the DNS tree, from the root down, is called the "DNS hierarchy" and is depicted in Fig. 1.

Fig. 1: DNS hierarchy example

There are two types of DNS resolvers, termed Authoritative and Recursive. Authoritative name servers answer queries about IP addresses only for the domains for which they have been configured to provide answers. Recursive resolvers provide the IP address requested by the client: they carry out the translation process themselves and return the final response to the client. In this paper, we focus on recursive DNS resolvers.

Generally, clients issue their queries using DNS messages to a DNS resolver which maps each query to a matching Resource Record (RR) set and returns it in the response DNS message. Each record is associated with a Time-To-Live (TTL) value. Resolvers are allowed to store (cache) the response in their memory until the TTL value expires. When this time period has elapsed, the RR is evicted from the cache.

Given a query to resolve, a recursive resolver looks up a matching record in its cache. If one exists, it is returned as the response. If not, the recursor uses the DNS resolution process to obtain a matching record by implementing the following steps. First, it determines the closest zone (level in the hierarchy) that encloses the query and has its information cached. If no such zone is cached, the enclosing zone is the root zone. In this case, the recursive resolver resorts to contacting the DNS root servers. The root server it contacts returns an authoritative response, which redirects the recursive resolver to a Top Level Domain (TLD) server. Then, the recursor requests a response from the TLD server. The TLD server sends the full resolution response, or a redirection to a Second Level Domain (SLD) server. A process similar to the one for the TLD is carried out by the SLD server and by the DNS hierarchy levels beneath it (in the DNS hierarchy tree).
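For concreteness, the referral walk described above can be sketched in a few lines of Python with dnspython [48]. This is only an illustrative sketch, not our measurement code: the root server IP and the query name are examples, and a real resolver adds caching, retries, TCP fallback, and CNAME handling.

```python
# Minimal sketch of iterative resolution: follow referrals from a root server
# down to an answer. 198.41.0.4 is a.root-servers.net; the domain is arbitrary.
import dns.message
import dns.query
import dns.rdatatype

def iterate_resolve(name, server_ip="198.41.0.4"):
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server_ip, timeout=3.0)
        if response.answer:                       # full resolution reached
            return [rr.address for rrset in response.answer
                    for rr in rrset if rr.rdtype == dns.rdatatype.A]
        # Otherwise this is a referral to the next level (TLD, SLD, ...):
        # follow a glue A record of one of the listed name servers.
        glue = [rr.address for rrset in response.additional
                for rr in rrset if rr.rdtype == dns.rdatatype.A]
        if not glue:
            raise RuntimeError("referral without glue; a real resolver "
                               "would resolve the NS name separately")
        server_ip = glue[0]

print(iterate_resolve("www.wikipedia.org"))
```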

Various studies have examined DNS attacks[3, 4, 5, 6, 7, 8, 9] designed to prevent clients from resolving RRs. One of the most common attacks is DNS cache poisoning, where the attacker provides spoofed records in the responses to redirect the victims to malicious hosts. This kind of attack can facilitate credentials theft, malware distribution, censorship and others.

Over the years, a number of improvements have been suggested to secure the DNS. One of the main advances is known as Domain Name System Security Extensions (DNSSEC) [10, 11, 12]. It consists of a set of extensions to the DNS which provides DNS resolvers with origin authentication of DNS data, authenticated denial of existence, and data integrity. All responses that use DNSSEC are digitally signed. By checking the digital signature, a DNS resolver is able to verify whether the information is identical (i.e. unmodified and complete) to the information published by the zone’s owner and served on an authoritative DNS server. In so doing, DNSSEC protects against cache poisoning [13].

DNSSEC appears to be efficient, but it does not guarantee availability or confidentiality. Its deployment rate is less than 20% of all name servers globally, as updated monthly in the Internet Society's statistics [14]. In addition, many DNSSEC keys are vulnerable [15]. Thus, shielding against cache poisoning remains a crucial unresolved issue, as described in [16].

Contribution of This Work: Our hypothesis, illustrated in Fig. 2, is that the time elapsed from a DNS query to the corresponding DNS response is best fit by a group of Poisson distributions. A Poisson distribution is a statistical distribution of the likelihood that a given number of events will occur within a time interval; the average number of events in an interval is designated by λ. The Poisson distribution is a typical modeling choice in the field of network attacks [17]. In this paper, each Poisson corresponds to a level in the DNS resolution process. We posited that the gaps between the Poissons could indicate anomalies, and that such anomalies point to attacks with high probability.
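For concreteness, the hypothesis can be stated as a simple mixture model. The notation below is ours and is only a sketch of the assumption, not a fitted model.

```latex
% Hypothesized RTT model (our notation): a mixture of per-level Poisson
% components, separated by gaps that carry almost no benign probability mass.
\[
  \Pr[\mathrm{RTT}=t] \;=\;
  \sum_{\ell \in \{\mathrm{cache,\,root,\,TLD,\,SLD}\}}
     w_\ell \,\frac{\lambda_\ell^{\,t} e^{-\lambda_\ell}}{t!},
  \qquad \sum_{\ell} w_\ell = 1,
\]
% where $\lambda_\ell$ is the mean (binned) RTT of level $\ell$ and $w_\ell$
% is the fraction of responses answered at that level; a response whose RTT
% falls where $\Pr[\mathrm{RTT}=t]$ is close to zero is flagged as anomalous.
```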

To test this assumption, we chose a sample of Alexa's top sites [18]. We acquired data from simulations and from a real ISP. We inspected the distribution of the data over all domains together and for each domain separately. Some of the cases we examined exhibited a distribution consistent with our assumption. We created attack data based on a third-party attack tool, and constructed a detection system for the attacks based on the simulation data.

Our key contribution is the innovative notion that attack detection can be based solely on the time value. This simple concept is also efficient in terms of running time and memory. Our simulations were run at different times and with different network parameters and environments; therefore, our data include jitter and packet loss. In our analysis we do not model this noise explicitly. Furthermore, the level detection in Section IV-C is compared to real data from an ISP.

Fig. 2: We hypothesized that the distribution of the DNS time value would consist of a series of distributions. To simplify the presentation, we assume they are a union of Poisson distributions. Between each two Poissons there is a gap with no values. Therefore an attack that generates a time value located in one of the gaps or proximal to it can be pinpointed with high probability.

The other novel insights we present here are as follows:

  • Timing analysis of DNS authoritative servers - We measured the response times of the root and TLD levels from two vantage points, one local (Ariel University, Israel) and the other in the Google cloud (St. Ghislain, Belgium). Our method of time measurement is configuration dependent, but can easily be generalized to other settings. We analyzed each server separately to define its time value distribution; although each server had its own time distribution, a Poisson-like distribution was identified for each of them. We report our analysis of both the local and the cloud environments alongside real data from the Israeli universities' ISP [19].

  • The development of a detection system for DNS poisoning attacks - We describe a detection system based on observations and their interpretation. The system does not require any changes to the protocol or any existing network equipment. It can be used as a standalone in any network to obtain a classification of DNS responses, and better ensure the identification/prevention of DNS attacks.

  • Testing the accuracy of the system based on simulation and empirical analyses - We tested an actual cache poisoning tool built by a third party on our system. Our accuracy rate exceeded 98%.

  • Mitigation of success rate - We compared the success rate of the attack with and without our system. We were able to reduce the attacker’s success rate by 70%-80%.

The remainder of this paper is organized as follows. We discuss previous works on DNS measurement and cache poisoning attacks in Section II. Then, we present our model in Section III. In Section IV we describe how the method was implemented to analyze DNS packets in a local network and in the cloud. We differentiate between cache and non-cache responses and use them to show ways to differentiate between DNS levels with high accuracy. In Section V we describe a novel poisoning attack identification method. We discuss several learning machines we used for the methodology and the identification of the attack in Section VI. The technical specifications appear in Section VII and the results in Section VIII. Section IX concludes this paper.

II Related Works

In this section, we summarize previous studies related to DNS measurement and cache poisoning attack identification.

Van Rijswijk-Deij et al. [20] surveyed a large variety of TLD servers. Their overview spanned features of DNS measurements such as duration, goals, number of vantage points, etc. However, their test cases only examined cloud email services.

Ager et al. [21] evaluated the response times of ISP DNS resolvers, Google Public DNS [22], and OpenDNS [23]. This study analyzed the time values of DNS resolvers but disregarded the DNS hierarchy levels. None of these works on DNS server measurement considered measuring the DNS levels from the root down to the internal caching resolvers, or the distribution of DNS time values.

Wang et al. [24] proposed an associative feature analysis approach based on statistical models to track the anomalous behavior of DNS servers. In collaboration with a major commercial ISP in China, they captured and analyzed real DNS traffic in this large-scale network environment. They used an outlier function to flag malicious responses, with parameters such as queries/responses per client, per server, and per specific server. The authors detected various attacks in the wild, but could not determine the real volume of the attacks; in other words, they could not evaluate their accuracy.

Yamada et al. [25] focused on an anomaly detection system for DNS servers. Normally, dealing with a large number of hosts can consume vast amounts of computational resources and make real-time analysis difficult due to traffic overload. They proposed anomaly detection for DNS servers that frequently invokes host selection, in which only potential hosts are selected. They used a FIFO (First In First Out) based method for frequent host selection, along with other statistics. They categorized packets by type, such as DNS mail records (MX), regular DNS packets (A), error rate, etc. They proposed several heuristics, such as the number of queries and requests, where a slightly higher rate of queries/requests was considered to indicate an attack. They identified attacks such as spam backscatter, which consists of incorrectly automated bounce messages sent by mail servers, typically as a side effect of incoming spam (unsolicited messages). Their attack identification system achieved 68% accuracy.

Klein et al. [26] focused on cache poisoning attacks. They investigated a new indirect attack in which they injected the victim's cache with a poisonous record that does not immediately impact the resolution process, but rather becomes effective after an authentic record expires; in that case, the next resolution request for that name returns the spoofed record. A Canonical Name record (CNAME) is a type of resource record in the DNS used to specify that a domain name is an alias for another, "canonical" domain. A Delegation Name record (DNAME) creates an alias for an entire subtree of the domain name tree. They injected CNAME and DNAME responses in a cache poisoning attack.

Celik and Oktug [27] researched the Fast-Flux Service Networks (FFSN) method used by bots. An Internet bot is a software application that runs automated tasks (scripts) over the Internet; typically, bots perform tasks that are both simple and structurally repetitive at a much higher rate than would be possible for a human alone. Fast flux is a DNS technique used by bots to conceal their actions behind an ever-changing network of proxies; bots use FFSNs to hide phishing and malware behind networks of proxies and servers. The authors used a variety of features for machine learning and achieved 98% accuracy in FFSN identification. Some of the features were taken from the DNS packets themselves, such as the number of unique A records and NS records. They also mapped the A and NS records to their geographical identifiers to better understand their spatial entropy. They counted the ASes related to each IP and inspected the RTT values, but could not generate a proper analysis because of processing and delays they could not dissect.

An Intrusion Detection System (IDS) that is not specific to DNS but is related to our work was presented by Ertoz et al. [28]. They used outliers as identifiers of anomalies to detect attacks such as port scans, worms, etc. (no list is provided in the article). They also explored the non-authorized use of protocols (without inspecting the payload). The features were divided into three parts: the packet header, the time-window statistics, and the connection statistics. The anomaly detection module was the Local Outlier Factor (LOF). The outlier factor of a data point was local in the sense that it measured the extent to which the point was an outlier with respect to its nearest data points: each new data point's distance was compared to the density of the class data points, and if its distance was smaller, it was not considered an outlier. A pattern-matching method was also applied to the top 10% most suspicious connections from the previous method to identify future attacks.

Overall, these studies used statistical methods such as outliers and LOF to identify attacks, and some used pattern analysis based on prior knowledge. However, none implemented time-stamp/time analysis as the main detection strategy. Nevertheless, time-based analysis is easy to use and efficient in terms of memory and number of calculations. Recording RTT values (along with domain names) is simple, which makes this identification method easy to deploy in a large number of systems.

III The Model

Our model consists of a client, a recursive resolver, an attacker and a defender. The defender possesses an additional offline dataset of DNS benign responses. Fig. 3 depicts this model. The attacker is described in Section III-A, and the defender in III-B.

Fig. 3: Our model is composed of the client, recursor, attacker and defender. The defender and attacker are located inside the LAN.

III-A Attacker Model

Our attacker module is an eavesdropper that can also inject malicious packets [29]. It cannot drop a packet it sees. It can respond with a fake packet to a query it sees. In addition, the attacker cannot access the offline data of the defender.

Typically, the attacker in DNS poisoning attacks is an off-path attacker [30, 31, 32], which is considered a more realistic attack model: getting access to a LAN or attacking on-path between a recursive resolver and an authoritative name server is much harder than spoofing response packets. The success rate of an attacker that spoofs packets is low, since it needs to guess certain random parameters, such as the query ID and port number. However, it can acquire these with an additional puppet [30], through flaws in the resolver software design [33], etc. BGP hijack attacks [34, 35] enable the interception of traffic, including DNS queries; in this frequent type of attack, an attacker with this capability can counterfeit DNS responses effortlessly.

Our attacker is placed inside the LAN. This defines our attack as DNS poisoning against the client, rather than the notorious cache poisoning attack. However, our identification method does not need modification to enhance its ability to detect a cache poisoning attack. The only difference is where the attacker and defender are placed.

The attacker’s success is defined as the case where the attacker’s response is categorized by the defender as a benign response.
The attacker’s failure consists of sending a counterfeit response that the defender correctly labels as an anomaly/malicious.

III-B Defender Model

Our defender module is a sniffer. It classifies any new sample as either a benign or a malicious response. It has an offline dataset of domain names and RTT values, and classifies new samples according to this dataset. We assume that it sniffs the domain name of each response as well as other network parameters such as the RTT value. We use the time value as the main feature of this study to map the resolve levels and their time differences, thereby reducing analysis time.

The defender’s success is defined as the case where the attacker’s response is labeled as a malicious packet.
The defender’s failure is the case where the attacker’s response is erroneously labeled as a benign packet.

IV Methodology

This section presents the data analysis methodology. We had two sources of data in this study. The first was the experiments we conducted in local and cloud environments; our experimental setup is fully described in Section VII. We aimed to confirm our hypothesis not only on simulation data but also on real data. Therefore, we acquired a second data source from the Inter-University Computation Center (IUCC). The IUCC serves a vast number of faculty members and students at Israeli universities and regional colleges, as well as researchers in numerous R&D organizations in Israel; it effectively acts as the ISP of the Israeli universities.

The data for our experiments were recorded at both a stub resolver and a recursive resolver. We mapped each query from the client to the correlative query that arrives at the recursive resolver by the ID and domain parameters; the combination of these two parameters produced a one-to-one mapping between the client's queries and the recursor's responses. We did the same for the fully resolved response from the recursive resolver to the client. Therefore, we could inspect the resolution process as a whole. The data from the IUCC, in contrast, were recorded on a single link between the recursive resolver and the authoritative name servers. Because of uneven routing, some queries were answered over another link, and some of the responses we saw belonged to queries that had been sent over an alternative link; hence we could not reconstruct the resolution process as a whole. We also had no control over which queries and domains were requested during the recording session, so we could not correlate the domains from the IUCC with the domains from our experiments. As a result, these data are mainly discussed in the Specific Level Analysis (Section IV-C).
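The pairing of the two recording points can be illustrated with a short Python sketch. The record layout (field names 'time', 'id', 'domain', 'is_response') is an assumption of ours for illustration; our actual logs were tshark exports.

```python
# Join client-side and resolver-side log entries on the (query ID, domain)
# pair; the client's RTT is the gap between its query and the matching
# response from the recursive resolver.
from collections import defaultdict

def pair_queries_and_responses(records):
    """records: dicts with 'time', 'id', 'domain', 'is_response' (bool)."""
    pending = {}
    rtts = defaultdict(list)
    for rec in sorted(records, key=lambda r: r["time"]):
        key = (rec["id"], rec["domain"].lower())
        if not rec["is_response"]:
            pending[key] = rec["time"]           # remember when the query left
        elif key in pending:
            rtts[rec["domain"].lower()].append(rec["time"] - pending.pop(key))
    return rtts                                   # domain -> list of RTTs
```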

The first step was to separate the cache responses from the resolve responses in the data, i.e., to identify which responses were answered from the recursive resolver's memory and which underwent a full or partial resolution process. This step is important for two reasons. First, because our methodology maps time values to resolution processes, we must also map the time interval in which no resolution takes place. Second, separating the cache responses from the fastest resolution process exposes a time interval in which no responses arrive; an attacker's packet that arrives during this interval can thus be easily distinguished from both cache and resolved responses.

After separating the cache and resolve levels, we broke down the DNS resolve levels. The DNS level of a query sent to the recursive resolver is the highest level in the DNS hierarchy to which any of the resolution queries were sent by the resolver. For example, if a query about www.wikipedia.org was sent to the recursive resolver (as in Fig. 4), and it sent a resolution query to the root server (which is the highest level in the hierarchy), the query was tagged as root level.

Fig. 4: DNS resolve levels for www.wikipedia.org. The resolving process is carried out by the DNS recursor (recursive resolver) for www.wikipedia.org. The resolution takes place in steps 1-3 via the root, org., and wikipedia.org. name servers.

Then, we analyzed each DNS level separately to better understand the RTT distribution. We also inspected each domain individually by separating the data into domains, which provided a clearer view of each domain's distribution. Each domain was inspected in terms of the DNS levels, yielding a probability table for each domain and for the specific time intervals in each.

We analyzed the data in four steps:

IV-1 Cache or Resolve Analysis

This analysis identified which response was from the cache and which one had received a full/partial resolution process.

IV-2 Hierarchy Levels Analysis

This analysis separated the DNS resolve levels over the entire dataset.

IV-3 Specific Level Analysis

This analysis focused on the distribution of each DNS level.

IV-4 Specific Domain Inspection

This analysis grouped the data into domains to get a clearer view.

As mentioned in Section III, our attacker attacks a client. This analysis allowed us to determine, from the client's point of view, how the data were distributed for each DNS level. From the client's point of view, every response is received from the same IP, namely the recursive resolver's IP; therefore, we need to distinguish between DNS levels by timing. Note that in a cache poisoning attack there is no need for this part of our methodology: in that case the data for each DNS level are already separate, since the recursive resolver sees the authoritative servers' IPs and can track each DNS level directly. Therefore, this section is not relevant for a cache poisoning attack, whereas Section V applies both to our attack and to a cache poisoning attack.

IV-A Cache or Resolve Analysis

First, the cached responses were separated from the resolved responses. We assumed this would be feasible since the resolve process takes time. We found a considerable difference between the cached responses and the resolved responses, as a function of the components and the communication system. Fig. 5 depicts these differences: the cached responses are on the far left and the resolved responses are on the right, with a gap of 50 ms between them.

The cached responses were identified by the fact that the query from the client and the response from the recursive resolver were successive. As a further confirmation method, we tested the ping between the client and the recursive resolver, and compared the average value of the ping to the actual response values. The response RTT values that were close to the average ping value were from the cache.
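A minimal sketch of this separation rule is given below, assuming the average ping between client and recursive resolver has already been measured; the threshold factor is our own illustrative choice, not a value from the experiments.

```python
# Split RTT samples into cache hits and resolved responses: responses whose
# RTT stays close to the client<->resolver ping are tagged as cache.
def split_cache_and_resolve(rtts_ms, avg_ping_ms, factor=3.0):
    """Return (cache, resolved) RTT lists, in milliseconds."""
    threshold = factor * avg_ping_ms     # e.g. ~6 ms for a 2 ms LAN ping
    cache = [t for t in rtts_ms if t <= threshold]
    resolved = [t for t in rtts_ms if t > threshold]
    return cache, resolved
```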

Fig. 5 depicts the distribution of the RTT values between 0-96 ms, acquired from one of our experiments. The figure shows a wide gap between the cache responses on the left side (blue) of the histogram and the beginning of the resolve responses on the right side (green).

Fig. 5: Classification of recursive resolver responses, between 0-100 ms, at 1 ms intervals. The estimated ping value is 2 ms, because the recursive resolver in this experiment was on the same LAN as the client. Most of the smooth (cache) responses were lower than the ping value; all the hatched circle responses were higher.

This analysis led us to the conclusion that the cached responses could be easily identified. The next step was to determine whether we could identify each level of the DNS hierarchy.

IV-B Hierarchy Levels Analysis

We inspected the behavior of the DNS levels. Each resolution query from the recursive resolver was inspected individually, which yielded detailed data about the client query it was connected to. By identifying the source IP of each response, it was possible to identify the current resolve level of the resolution query: each response's source IP was resolved in a reverse DNS query to obtain its DNS level. We then mapped each client query to the highest DNS level of any resolution query that was performed for it. It was impossible to extract the DNS levels' distributions from the data as a whole, since most of the time intervals mixed a number of DNS levels; we had to devise another way to approach the problem.

As a result, our next step was to separate the levels and inspect them individually. In each level, we took the relevant authoritative servers and verified the RTT values based on the responses acquired from them.
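The reverse-lookup tagging step can be sketched as follows with dnspython [48]. The suffix rules are a simplification of ours: ccTLD, SLD, and host-level servers need a richer mapping, for example via the IANA root database [47].

```python
# Map the source IP of a resolution response back to a host name with a
# reverse (PTR) lookup, and decide the DNS level from the name suffix.
import dns.exception
import dns.resolver
import dns.reversename

def dns_level(ip):
    try:
        ptr = str(dns.resolver.resolve(dns.reversename.from_address(ip),
                                       "PTR")[0]).rstrip(".")
    except dns.exception.DNSException:
        return "unknown"
    if ptr.endswith("root-servers.net"):
        return "root"
    if ptr.endswith("gtld-servers.net"):
        return "gTLD"
    return "other"       # ccTLD / SLD / host level, resolved by other means
```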

IV-C Specific Level Analysis

For this analysis, we examined both our data sources. We separated out each resolve query to map each DNS level. We generated histograms for each authoritative server at the specific level included in our data to fully examine their RTT distributions. These histograms can vary between vantage points. To test whether our hypothesis on the RTT distribution would hold for each vantage point, we ran our experiment from two vantage points; the third vantage point was the IUCC.

First, we looked at the behavior of the root level. We created histograms for the 13 root servers[36], as depicted in Fig. 6(a-c). Except for root server j in the local experiment, most of the responses arrived within an interval of 40-80 ms, with differences of 40 ms between them. Root server j was inspected separately and was found to be much closer to the recursive resolver than the other root servers. The average ping value of root server j was ms, whereas the minimal ping value of other root servers was ms. In addition, a traceroute check showed that it was the only root server whose route included IIX ISP, which was located near our local experiment site. These features confirmed that root server j was the most efficient root server for the local experiment.
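The per-server timing behind these histograms can be reproduced with a few lines of Python; the sketch below times a direct query to each root server. The server subset and query name are illustrative (the full list is at [36]), and recent dnspython versions also expose the elapsed time as response.time.

```python
# Time direct queries to individual root servers and collect RTT samples (ms).
import time
import dns.message
import dns.query
import dns.rdatatype

ROOT_SERVERS = {                      # subset; full list at [36]
    "a": "198.41.0.4",
    "j": "192.58.128.30",
}

def measure_root_rtts(qname="example.com", repeats=5):
    rtts = {}
    for letter, ip in ROOT_SERVERS.items():
        samples = []
        for _ in range(repeats):
            q = dns.message.make_query(qname, dns.rdatatype.NS)
            start = time.monotonic()
            dns.query.udp(q, ip, timeout=3.0)
            samples.append((time.monotonic() - start) * 1000.0)
        rtts[letter] = samples
    return rtts
```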

[Fig. 6 panels (a-i): root, gTLD, and ccTLD server distributions from the local experiment, the cloud experiment, and the IUCC data source.]

Fig. 6: The local and cloud experiments had no data from root server b; the IUCC data did not include root server j or gTLD server b. The majority of RTT values from the root servers were between 80-160 ms in the local experiment and between 0-40 ms in the cloud experiment; most of the responses from root servers in the IUCC data source arrived within 40-80 ms. As for the gTLD servers, the majority of the RTT values in the local experiment were between 80-120 ms (except for server b), and in the cloud experiment between 0-40 ms; most of the responses from gTLD servers in the IUCC data source arrived within 40-80 ms. We picked 6 countries for the ccTLD servers: China, France, Mexico, Russia, Taiwan, and the UK. Each country had a unique distribution; China's was the most heterogeneous.

Second, we examined the behavior of the TLD level to determine the differences between the behavior of the general TLD (gTLD) level servers, and the country code TLD (ccTLD). The country code TLDs are reserved for a specific country or state. We separated the country code TLDs from the other TLDs to determine whether there was a difference in their response times. We generated histograms for both types of TLD servers as shown in Fig. 6(d-i).

Because there were too many gTLD servers to depict in one histogram, we chose one of the most frequently used gTLD server sets (a.gtld-servers.net through m.gtld-servers.net), which serves the .com and .net domains. As depicted in Fig. 6, the histograms were similar to those of the root servers, especially in terms of the mean and the distribution of the RTT values. As with the root servers, there was one server in the local experiment that behaved differently: the ping value of b.gtld-servers.net was ms, whereas the ping value of the other gTLD servers was at least ms. Furthermore, the traceroute command showed a result similar to that of root server j.

Country code TLD servers are scattered all over the globe; therefore, their RTT values depend on the country and on the distance from the recursive resolver. We took 6 countries as an example and created three histograms, one per vantage point, as depicted in Fig. 6(g-i). Clearly, the ccTLD RTT values differed across countries, and the distribution within each country was different as well. Most of China's RTT values were between 50-90 ms in the local experiment and the IUCC data, whereas Russia's were 120-160 ms in the local experiment and 80-120 ms in the IUCC data.

The results showed that every level had its own RTT distribution. The synthetic data we obtained from our experiments showed similarities to the actual data from the IUCC in Fig. 6(c) and 6(f), with small shifts between them. Thus, the distribution of the synthetic data proved to be equivalent to that of the real data. The distributions of the root and gTLD servers were similar to a Poisson distribution, whereas the ccTLDs were diverse (due to geographical distances).

We next tested our predictions on individual domains, to make the attack identification more precise. By separating the data into domains we expected to find more gaps in the RTT distribution, and we could also examine each part of the Poisson distribution. Thus, in the next stage we separated the data into domains.

IV-D Specific Domain Inspection

The data in this inspection were separated by domain. We inspected the top 500 domains from alexa.com. Each domain was inspected individually, resulting in smaller distributions.

[Fig. 7 panels (a-f): RTT distributions and resolve levels for Youtube.com, Facebook.com, and Baidu.com.]

Fig. 7: Distribution of RTT values for the top 3 sites in the local experiment. Each histogram is depicted at 10 ms intervals, between 0-400 ms. We treated bins with fewer than 30 responses (out of a total of 1500 packets for each domain) as noise. Sub-figures (a-c) support the assumption (see Fig. 2) that the RTT value distribution is properly described as a collection of distributions with gaps between them. Sub-figures (d-f) depict each DNS resolve level; they confirm the hypothesis of a correlation between the Poissons and the resolve levels.

Because of the recursive process, the value measured at each level also contributes to the value measured at the level above it. For example, a TLD's RTT value was assigned to its resolution query and summed with the root RTT value to create the accumulated RTT value of the root level; the processing time between the query for the TLD and the query for the root was added as well, to obtain the actual RTT value. Each domain was inspected in two ways:

  1. Level: Each DNS level was clustered in a different set.

  2. Probabilistic: This inspection mapped the common use of the recursive resolver. This inspection was divided into two parts: domain and interval.

    1. Domain: For each domain, the data were grouped by DNS level to obtain a percentile division. For example, for the domain quora.com, we obtained 98.8713% cache, 0.2257% TLD, and 0.9029% SLD.

    2. Interval: This inspection was specific to a time interval. For example, for the 60-70 ms interval of abc.com, there were 150 responses out of a total of 50,000 for the domain, with the following breakdown: Root - 10 responses, TLD - 40 responses, SLD - 100 responses. In other words, the probability of obtaining a response in this interval was 0.3%, and the probabilities corresponding to the number of responses by each level in this interval were: Root - 6.66%, TLD - 26.66%, SLD - 66.66%. A sketch of this computation appears below.
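The following Python sketch shows how such an interval table can be built for a single domain; the (rtt, level) sample layout is an assumption for illustration.

```python
# For one domain, count responses per 10 ms bin and per DNS level, then derive
# (i) the share of all responses falling in the bin and (ii) the per-level
# probabilities inside the bin, as in the abc.com example above.
from collections import Counter, defaultdict

def interval_table(samples, bin_ms=10):
    """samples: list of (rtt_ms, level) tuples for a single domain."""
    total = len(samples)
    per_bin = defaultdict(Counter)
    for rtt, level in samples:
        per_bin[int(rtt // bin_ms)][level] += 1
    table = {}
    for b, levels in per_bin.items():
        n = sum(levels.values())
        table[(b * bin_ms, (b + 1) * bin_ms)] = {
            "p_interval": n / total,                          # share of all responses
            "p_level": {lv: c / n for lv, c in levels.items()},  # within the bin
        }
    return table
```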

The RTT distributions of 3 of the top sites are depicted as an example in Fig. 7(a-c). The figures represent sites from the local experiment between 0-400 ms, at 10 ms intervals. The histograms resemble Poissons for each site separately. Fig. 7(d-f) depicts the same sites with each resolve level identified. Each Poisson is dominated by a different color, i.e., a different resolve level. This correlation confirms the hypothesis for each domain. The gaps that can be seen between the Poissons are the baseline for our attack identification method.

V Identification of an attack

As stated above, we used our methodology to generate distributions of the data. These distributions were then used to identify DNS poisoning attacks. In the following section, we discuss the identification method, which is identical for DNS poisoning and DNS cache poisoning except for the placement of the attacker and the defender. In our model, which is associated with DNS poisoning, both the attacker and the defender reside in the LAN. In a cache poisoning attack, the attacker is located somewhere on the Internet and the defender is located beside the resolver.

In Sections V and VI, we use the domain Yahoo.com as an example, since Yahoo.com is one of the top 5 most searched domains on alexa.com. Based on the distribution of our data, we evaluated our hypothesis on attack identification. We used a heuristic function to obtain clear insights into the distribution of the data, and then used a DNS attack tool to generate attack packets. The notation is given in Table I.

N: Total number of packets (for the domain).
t: Specific time interval.
n_t: Number of responses from the resolver in interval t.
p_t: Amount of data in a specific bin (interval t) out of the entire data.
f_α: Heuristic function applied to a bin's share of the data.
V: Binary vector describing the impact of f_α on our data bins.
V_i: The i'th bit of V.
TABLE I: List of symbols

V-A Measurements

To assess the effectiveness of the attack, we assumed that whenever a query is sent to the recursive resolver, a single attack response is sent in parallel to the response from the recursor. The attacker's goal is to answer any query it sees as fast as it can; therefore, it produces a single packet per query and proceeds to the next query. There were no duplicate responses or response failures. Each response thus entailed a race condition: the attack response vs. the recursive resolver's response. We describe a number of measurements below, and then present our theoretical method for assessing the attack mitigation rate.

  1. Probability of RTT value: p_t represents the probability of getting an RTT value in a specific time interval t. We denote by n_t the number of packets in that interval and by N the total number of packets for the specific domain. We used the formula:

    p_t = n_t / N    (1)

    An example of this probability distribution is presented in Fig. 8. It depicts the probability of getting an RTT value in intervals of 10 ms. It can be seen that the probability of obtaining a response in 0-10 ms is above 50%. The probability of obtaining a response between 60-70 ms is . Each domain was described by this kind of histogram.

    Fig. 8: An example of a probability distribution: Yahoo.com, which is one of the top 5 domains on alexa.com, at 10 ms intervals. More than 50% of the values are from the cache, in the first bin between 0-10 ms. The remaining values are centered around 70-90 ms and 130-200 ms.
  2. Probability of attack success: As mentioned in Section III, we used an eavesdropping attacker that can inject counterfeit packets. To simplify our calculations, we assumed that the attacker can fake a response to any query it sees; therefore, the attack's success probability without our system is 100%, and a change in the number of packets the attacker sends does not alter its success rate.

  3. Heuristic function: To document the success rate of the attack with and without our system, we took the percentile distribution from Section V-A(1). Our purpose was to remove negligible amounts of RTT values to obtain a clearer picture of each domain. Thus, we mapped each bin with the function f_α(p_t), where p_t is the percentage of the data in the specific bin.

  4. New attack success rate: Applying f_α to our data produced a 1D binary vector, which we denote V. Its length is 100, since every bin covers 10 ms of the 0-1000 ms interval in which we received responses. We summed all the bins and averaged over the complete interval by dividing the sum by 100. Therefore, the attack success rate with our system is:

    (Σ_{i=1..100} V_i) / 100    (2)

We tried multiple alpha values between 0%-8%. Fig. 9 depicts the influence of the alpha value on the attack's success rate under our system. A higher alpha value results in a higher mitigation rate and a lower attack success rate; however, it also increases the false positive (FP) rate, i.e., more benign packets are falsely flagged as attacks. An extreme case is an alpha value that erases all the data and marks every packet as an attack; in this case, the false negative (FN) rate is 0% but the FP rate is 100%. For an alpha value exceeding 0.5%, the attack's success rate drops by 70%-80%.
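A minimal sketch of this alpha-threshold computation is given below, using the notation of Table I; the bin layout (100 bins of 10 ms) follows the text, while the example alpha sweep is illustrative.

```python
# Build the binary vector V from the per-bin shares of benign responses and
# compute the attack success rate of Eq. (2): an attack landing in a bin whose
# benign share is below alpha is flagged, so only bins at or above the
# threshold still let the attacker through.
def attack_success_rate(bin_shares, alpha):
    """bin_shares: 100 percentages (10 ms bins over 0-1000 ms); alpha in %."""
    v = [1 if share >= alpha else 0 for share in bin_shares]   # vector V
    return sum(v) / len(v)       # fraction of the 0-1000 ms range left open

# e.g. sweeping alpha as in Fig. 9:
# for alpha in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0):
#     print(alpha, attack_success_rate(shares, alpha))
```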

Fig. 9: Success rate of the attack as a function of the alpha value, for the data of our two experiments (fully described in Section VII). We used alpha values between 0-8%. The identification is based on the bins in which the data exceeded the current alpha value; each alpha value therefore produced a different attack success rate in the Local experiment and in the Google experiment.

V-B Experimental attack

We also tested an experimental attack, created with our DNS poisoning tool [37]. It was executed as an inside-the-LAN attack between the client and the recursive resolver. The RTT distribution of the attack was between 0-20 ms, since we designed it to outrun the recursor and its cached responses. Because there was a considerable difference between the RTT values of the attack responses and those of the responses from the resolution process, we used only the responses tagged as cache as the benign data for our identification method.

We first tested the intuitive notion of a threshold for the cache by generating histograms of the data from our experiments (Fig. 10 for the local environment and Fig. 11 for the cloud environment).

Fig. 10: Attack and cache distribution from the local environment, between 0-20 ms at 0.5 ms intervals. The intersection of the attack and the cache responses is in the second and third bins. Thus tagging these bins as cache produces a 0.66% error rate.
Fig. 11: Attack and cache distribution from the cloud environment, between 0-20 ms at 0.5 ms intervals. The intersection of the attack and the cache responses is in the first and second bins. Thus tagging these bins as cache produces a 1.4% error rate.

Intuitively, the bins where the cache and attack responses intersect, and where the cache constitutes the majority of the bin, should be tagged as cache. The cache and attack responses intersected in the second and third bins in the local environment, and in the first and second bins in the cloud environment. In the local environment, this intuitive rule yielded a 0.66% error rate; in the cloud, it yielded a 1.4% error rate. The results are presented in Section VIII-B.

VI Machine Learning

Our methodology and attack identification in Sections IV and V consisted of classifying packets into two or more classes, which made it possible to apply learning machines. The success rate of classifying between intervals within a domain depends on the information acquired about that specific domain.

In intervals that mixed a high number of diverse DNS levels, the classification rate was lower. In intervals with a large number of benign responses, the identification rate of the attack was lower. The gaps with no responses at all were the places where the identification rate of the attack was highest.

Supervised learning [38, 39] involves learning a function from labeled training data, namely a set of training examples, each of which pairs an input object with an output value. A supervised learning algorithm analyzes the training data and produces a function that can be used to map new samples to their corresponding output values.

Random Forest [40, 41] consists of an ensemble of k untrained decision trees (trees with only a root node). The following steps are carried out on these roots: at the current node, randomly select p features out of the available features, where p is usually much smaller than the total number of features; compute the best split point for tree k and split the current node into two child nodes, reducing the set of available features from this node on; repeat these steps until either a maximum tree depth l has been reached or the splitting metric reaches an extremum; repeat all of the above for each tree k in the forest; and aggregate the outputs of all the trees in the forest.

The KNN classification algorithm [42, 43, 44] classifies a test sample according to the K training samples nearest to it. Each training sample has a label; the algorithm takes the label of the majority of the K nearest samples and assigns it to the test sample.

We chose these methods because Random Forest can find a set of simple rules that accommodate the data distribution, and K Nearest Neighbors is an algorithm that finds outliers with respect to a main distribution. KNN also turned out to be more stable than the other anomaly detection algorithms we tried.

We used these learning methods to obtain the best identification rate. As a ground rule for these learning machines, we used a ratio of 80/20 between the training and test samples.
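A minimal sketch of this setup with scikit-learn [50] is shown below: RTT (ms) as the single feature, an 80/20 train/test split, and the two learners named above. The label values and hyperparameters are illustrative assumptions, not the exact configuration used in our runs.

```python
# Train/evaluate Random Forest and KNN on a single RTT feature.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def classify_rtts(rtts_ms, labels):
    X = [[t] for t in rtts_ms]                 # one feature per sample
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
    for clf in (RandomForestClassifier(n_estimators=100),
                KNeighborsClassifier(n_neighbors=5)):
        clf.fit(X_tr, y_tr)
        print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```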

VII Experiments

In this section, we present the technical specifications of the data collection for Sections IV and V. We ran experiments in two environments: a local one and the cloud. The cloud provider we used was Google's cloud engine. We used local machines for the lab experiment and VMs for the cloud experiment, with a network bandwidth of 100 Mb/s. Each experiment ran for 4-7 days. We took the top 500 sites from alexa.com as a sample to simulate user access to different sites.

The software we used to emulate a DNS recursive resolver was BIND [45]. We used tshark [46] to log the communication at both the client and the resolver. We analyzed the domain, the RTT value, the query/response flag, the source/destination IPs, and the answer itself. The level identification process was based on the IANA TLD database [47]. For the specific level inspection, we also used the dnspython DNS tool for Python [48] and the nslookup software [49], querying the DNS tool first and falling back to an nslookup query if the former failed. We used a dns-cache-poisoning tool [37] as the attack software; this is a simple open-source tool that we easily customized to our experiment, adding a randomized sleep to randomize the attack's RTT values. The attack was detected using the scikit-learn Python machine learning library [50]. Our data are available in a Google drive [51].

Fig. 12: In Exp. 1 a client queries a recursive resolver, and the communication is recorded in log files (log 1 and log 2). In Exp. 2 the attacker spoofs responses to the client.

The experiments are depicted in Fig. 12. Experiment 1, shown in Fig. 12(a), involved recording benign DNS packets between a stub resolver and a recursive resolver. Each query or response within the LAN between the client and the recursor was recorded in one log file (log 1), and each resolve query/response sent by the recursive resolver outside the LAN was recorded in another log file (log 2). We correlated the two log files by the domain+ID parameters to determine the DNS level from which the client received each response.

In this way, the stub resolver's RTT value was mapped onto a DNS level derived from the recursive resolver's log of traffic outside the LAN. We needed to obtain the recursor's responses both from the cache and from the resolution process. This was made possible by changing the TTL handling in the recursive resolver: to get cache responses, we set the TTL value to 3 hours; to collect more resolution responses, we set the TTL value to 0. In this way, we mapped the behavior of the recursive resolver in both cases.

Experiment 2, depicted in Fig. 12(b), involved an attacker between the client and the resolver. Each experiment produced about response packets.

We also obtained real data from the IUCC [19]. These data were acquired on a line between Tel Aviv and Frankfurt and were recorded over about 2 days. Congestion on this link caused packet drops, and uneven routing caused some of the responses to be delivered over another link, so some of the queries appeared unanswered. Overall, we used only about 25% of the packets, or approximately 1.3 million. We assumed that these data did not contain cache poisoning attempts. As stated in Section IV, we analyzed the IUCC data only in the specific level analysis (Section IV-C); therefore, we analyzed the IP sources rather than the domains of each packet.

To control for data reliability, we split the data from each source into an 80/20 train/test ratio and assessed whether the test data were distributed similarly to the training data. We found that the distributions were indeed similar: in the IUCC data, there was less than a 5% difference per time interval between the train and test data; in the data from our experiments, most of the domains likewise showed less than a 5% difference per time interval, and fewer than 10% of the domains had 1-3 intervals with a 5%-10% difference between the train and test data.
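This reliability check can be expressed compactly as below; the 10 ms binning follows the rest of the paper, and the function simply reports the largest per-interval difference so it can be compared against the 5% yardstick used in the text.

```python
# Compare the share of responses per time interval between the train and test
# splits and return the largest per-interval difference, in percent.
from collections import Counter

def max_interval_difference(train_ms, test_ms, bin_ms=10):
    def shares(values):
        bins = Counter(int(v // bin_ms) for v in values)
        total = len(values)
        return {b: c / total for b, c in bins.items()}
    tr, te = shares(train_ms), shares(test_ms)
    return max(abs(tr.get(b, 0.0) - te.get(b, 0.0))
               for b in set(tr) | set(te)) * 100.0
```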

VIII Results

In the following section, we describe the results for Sections IV and V obtained with the learning machines described in Section VI. A list of abbreviations is presented in Table II.

Acc Accuracy
Dev Deviation
Var Variance
RF Random Forest
KNN K Nearest Neighbors
Att Attack’s identification rate
FP False Positives
FN False Negatives
L Local experiment
C Cloud experiment
sub subdomain
TABLE II: List of Abbreviations for result tables

VIII-A Methodology

As presented in Section IV, we separated the data into specific domains, creating sub-datasets. We report the deviation and variance for each indicator. We used Random Forest (RF) and K Nearest Neighbors (KNN) to identify the origin of the responses, as stated in Section VI. The average proportion of the cache and of each DNS resolve level in the data is presented in Table III.

      Cache   Root    gTLD    ccTLD   sub     host
L     41.651  23.257  13.432  1.644   19.87   0.13
C     16.76   32.88   19.38   2.872   28.072  0.07
TABLE III: Proportion of packets in the data (%)

Tables IV and V present the cache/non-cache identification. This classification was straightforward, as shown in Section IV-A, since the RTT values of the cache and the resolve levels differ considerably; we achieved an accuracy rate of about 98%. This step was carried out to better characterize the distribution of the DNS packets and to test our ability to distinguish between the DNS levels.

Acc Dev Var
RF 0.983 0.02 5*
KNN 0.985 0.02 4*
TABLE IV: Cache/no cache differentiation in the cloud environment.
Acc Dev Var
RF 0.983 0.02 5*
KNN 0.985 0.02 5*
TABLE V: Cache/no cache differentiation in the local environment.

Tables VI and VII indicate the identification rates of the DNS hierarchy levels, as described in Section IV-D. We analyzed the DNS level data together with the cache data and obtained a correct identification rate of approximately 75% in the cloud and 84% in the local data. This analysis proved more difficult because each domain exhibits a slightly different histogram; some produced a dense distribution with a number of different DNS levels in the center, which made it hard to differentiate between the levels. Furthermore, due to the cloud's resources, the RTT value was small for most of the responses, so dissecting the levels in the cloud data was less successful.

Acc Dev Var
RF 0.75 0.1 0.01
KNN 0.69 0.1 0.01
TABLE VI: Identifying DNS levels including cache in the cloud environment.
Acc Dev Var
RF 0.81 0.06 0.004
KNN 0.84 0.06 0.003
TABLE VII: Identifying DNS levels including cache in the local environment.

Tables VIII and IX show the identification rates of the DNS hierarchy levels from the root level down, excluding the cache data. We obtained a correct identification rate of about 70% in the cloud and 76% in the local data. Tables VI and VII include the cache data, which are more easily distinguishable; therefore, the identification rate differs in that case.

Acc Dev Var
RF 0.64 0.1 0.01
KNN 0.7 0.1 0.01
TABLE VIII: Identifying DNS levels without cache in the cloud environment.
Acc Dev Var
RF 0.71 0.08 0.007
KNN 0.76 0.08 0.007
TABLE IX: Identifying DNS levels without cache in the local environment.

As can be seen in the tables, the identification rate at almost every stage differed slightly between the environments. This can be attributed to disparities in Google's resources: each packet goes through a smaller number of hops in the resolution process in the cloud than with the local resolver, and Google has considerably shorter routes to most of the root/gTLD/ccTLD servers, which gives it a low RTT value for most requests. Thus, it is more difficult to separate the levels in the data from Google.

VIII-B Empirical Attack

We ran an experimental attack, as described in Section V-B. The attack packets were received between 0-20 ms, so that they arrived before the cache responses. An eavesdrop/inject attacker between the stub resolver and the recursive resolver was used to generate these attack packets; since the attacker was located inside the LAN, as described in Section III, the attack packets arrived fast. We used these data along with the data from Section VIII-A. The results are presented in Tables X and XI.

Acc FP FN
RF 0.987 0.005 0.006
KNN 0.990 0.003 0.006
TABLE X: Experimental attack identification rate from the cloud environment. The deviation in all the cases was below 4%.
Acc FP FN
RF 0.997 0.001 0.001
KNN 0.998 0.001 0.0006
TABLE XI: Experimental attack identification rate from the local environment. The deviation in all the cases was below 1%.

As can be seen in Tables X and XI, we achieved a high correct identification rate. As mentioned in Section VII, we generated attack packets in both environments: to be precise, we identified 355,606 out of 359,197 attack packets in the Google data, and 131,415 out of 131,810 in our local simulation data. Although the identification rate of the learning machines for the empirical attack was superior to the naive threshold calculation, the difference was not pronounced, since each learning machine had only one feature and its power was therefore comparable to the naive threshold; our identification rate exceeded the naive threshold by 0.2%-0.4%.

IX Conclusion

In this paper we presented an innovative method to detect DNS poisoning attacks. We assumed that the behavior of the RTT value could be described as a number of Poisson distributions, where each Poisson corresponds to a resolve level; analyzing the gaps between the Poissons can then serve to detect attacks. We confirmed our hypothesis in various environments and used it as the basis for identifying the attack.

This study presented our method for an experimental local network attack. In the future, we aim to apply this identification method to other kinds of attacks. For example, it could be used to identify cache poisoning attacks against recursive resolvers: each level would be inspected separately to detect anomalies from the recursor's point of view. This appears to be easier, since no classification process is needed, given that the levels do not intersect, and the time analysis may be more precise, since the distribution of a single DNS level is more distinguishable.

Future work will also concentrate on the precision of the analysis. In this paper, we analyzed the data at millisecond intervals, and we saw that this granularity did not fit the Google cloud data well. In the future, we will attempt to analyze the data at microsecond intervals to achieve higher accuracy, while determining the level of precision that yields a good fit and avoids overfitting.

Adding packet drops, multiple responses, and other kinds of failures may change our model. These cases are intriguing topics for further investigation and may make the results reported here more resilient; such models may lead to a prototype of a more realistic defense system.

References

  • [1] P Mockapetris. RFC 1034 Domain Names - Concepts and Facilities, 1987. "http://tools.ietf.org/html/rfc1034".
  • [2] P. Mockapetris. Domain names - implementation and specification. STD 13, RFC Editor, November 1987. http://www.rfc-editor.org/rfc/rfc1035.txt.
  • [3] BT Global Services. DNS Security Survey Report. BT Global Services, 2017. https://www.globalservices.bt.com/static/assets/pdf/products/diamond_ip/DNS-Security-Survey-Report-2017.pdf.
  • [4] Georgios Kambourakis, Tassos Moschos, Dimitris Geneiatakis, and Stefanos Gritzalis. Detecting dns amplification attacks. Critical information infrastructures security, pages 185–196, 2008.
  • [5] Collin Jackson, Adam Barth, Andrew Bortz, Weidong Shao, and Dan Boneh. Protecting browsers from dns rebinding attacks. ACM Transactions on the Web (TWEB), 3(1):2, 2009.
  • [6] S. Cheshire and M. Krochmal. Dns-based service discovery. RFC 6763, RFC Editor, February 2013. http://www.rfc-editor.org/rfc/rfc6763.txt.
  • [7] Hitesh Ballani and Paul Francis. Mitigating dns dos attacks. In Proceedings of the 15th ACM conference on Computer and communications security, pages 189–198. ACM, 2008.
  • [8] Sooel Son and Vitaly Shmatikov. The hitchhiker’s guide to dns cache poisoning. Security and Privacy in Communication Networks, pages 466–483, 2010.
  • [9] M Nazreen Banu and S Munawara Banu. A comprehensive study of phishing attacks. International Journal of Computer Science and Information Technologies, 4(6):783–786, 2013.
  • [10] Scott Rose, Matt Larson, Dan Massey, Rob Austein, and Roy Arends. DNS Security Introduction and Requirements. RFC 4033, March 2005.
  • [11] R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose. Resource records for the dns security extensions. Internet Requests for Comments, March 2005. http://www.rfc-editor.org/rfc/rfc4034.txt.
  • [12] R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose. Protocol modifications for the dns security extensions. Internet Requests for Comments, March 2005. http://www.rfc-editor.org/rfc/rfc4035.txt.
  • [13] Peter Silva. DNSSEC: The antidote to DNS cache poisoning and other DNS attacks. White paper, F5, 2009.
  • [14] stats.labs.apnic.net. DNSSEC deployment rate. stats.labs.apnic.net, 2018. https://stats.labs.apnic.net/dnssec/XA?c=XA&x=1&g=1&r=1&w=7&g=0.
  • [15] Tianxiang Dai, Haya Shulman, and Michael Waidner. Dnssec misconfigurations in popular domains. In International Conference on Cryptology and Network Security, pages 651–660. Springer, 2016.
  • [16] P Anu and S Vimala. A survey on sniffing attacks on computer networks. In 2017 International Conference on Intelligent Computing and Control (I2C2), pages 1–5. IEEE, 2017.
  • [17] Palvinder Singh Mann and Dinesh Kumar. A reactive defense mechanism based on an analytical approach to mitigate ddos attacks and improve network performance. International Journal of Computer Applications, 12(12):975–987, 2011.
  • [18] Amazon. Alexa top sites. Alexa top sites, 2018. https://www.alexa.com/topsites.
  • [19] IUCC. Inter-University Computation Center. IUCC website, 2018. https://www.iucc.ac.il/.
  • [20] Roland van Rijswijk-Deij, Mattijs Jonker, Anna Sperotto, and Aiko Pras. A high-performance, scalable infrastructure for large-scale active dns measurements. IEEE Journal on Selected Areas in Communications, 34(6):1877–1888, 2016.
  • [21] Bernhard Ager, Wolfgang Mühlbauer, Georgios Smaragdakis, and Steve Uhlig. Comparing dns resolvers in the wild. In Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, pages 15–21. ACM, 2010.
  • [22] Google. Google Public DNS. Google website, 2018. https://developers.google.com/speed/public-dns/.
  • [23] OpenDNS. OpenDNS. Cisco website, 2018. https://www.opendns.com/.
  • [24] Yao Wang, Ming-zeng Hu, Bin Li, and Bo-ru Yan. Tracking anomalous behaviors of name servers by mining dns traffic. In Frontiers of High Performance Computing and Networking–ISPA 2006 Workshops, pages 351–357. Springer, 2006.
  • [25] Akira Yamada, Yutaka Miyake, Masahiro Terabe, Kazuo Hashimoto, and Nei Kato. Anomaly detection for dns servers using frequent host selection. In Advanced Information Networking and Applications, 2009. AINA’09. International Conference on, pages 853–860. IEEE, 2009.
  • [26] Amit Klein, Haya Shulman, and Michael Waidner. Internet-wide study of DNS cache injections. In IEEE International Conference on Computer Communications (INFOCOM) 2017, Atlanta, GA, USA, 2017. IEEE.
  • [27] Z Berkay Celik and Sema Oktug. Detection of fast-flux networks using various dns feature sets. In Computers and Communications (ISCC), 2013 IEEE Symposium on, pages 868–873. IEEE, 2013.
  • [28] Levent Ertoz, Eric Eilertson, Aleksandar Lazarevic, Pang-Ning Tan, Vipin Kumar, Jaideep Srivastava, and Paul Dokas. Minds-minnesota intrusion detection system. Next generation data mining, pages 199–218, 2004.
  • [29] Hongyi Yao, Danilo Silva, Sidharth Jaggi, and Michael Langberg. Network codes resilient to jamming and eavesdropping. IEEE/ACM Transactions on networking, 22(6):1978–1987, 2014.
  • [30] Amir Herzberg and Haya Shulman. Socket overloading for fun and cache-poisoning. In Proceedings of the 29th Annual Computer Security Applications Conference, pages 189–198. ACM, 2013.
  • [31] Amir Herzberg and Haya Shulman. Dnssec: Security and availability challenges. In 2013 IEEE Conference on Communications and Network Security (CNS), pages 365–366. IEEE, 2013.
  • [32] D. Eastlake, E. Brunner-Williams, and B. Manning. RFC 2929: Domain Name System (DNS) IANA considerations, 2000.
  • [33] Amit Klein. Bind 9 dns cache poisoning. Report, Trusteer, Ltd, 3, 2007.
  • [34] Bahaa Al-Musawi, Philip Branch, and Grenville Armitage. Bgp anomaly detection techniques: A survey. IEEE Communications Surveys & Tutorials, 19(1):377–396, 2016.
  • [35] Geoff Huston, Mattia Rossi, and Grenville Armitage. Securing bgp—a literature survey. IEEE Communications Surveys & Tutorials, 13(2):199–222, 2010.
  • [36] IANA. IANA roots table. IANA website, 2018. https://www.iana.org/domains/root/servers.
  • [37] gPrado. dnspoof attack tool. github, 2011. "https://github.com/maurotfilho/dns-spoof".
  • [38] Sotiris B Kotsiantis, I Zaharakis, and P Pintelas. Supervised machine learning: A review of classification techniques, 2007.
  • [39] R Gentleman, Wolfgang Huber, and VJ Carey. Supervised machine learning. In Bioconductor Case Studies, pages 121–136. Springer, 2008.
  • [40] Tin Kam Ho. Random decision forests. In Document Analysis and Recognition, 1995., Proceedings of the Third International Conference on, volume 1, pages 278–282. IEEE, 1995.
  • [41] Tin Kam Ho. The random subspace method for constructing decision forests. IEEE transactions on pattern analysis and machine intelligence, 20(8):832–844, 1998.
  • [42] Thomas Cover and Peter Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21–27, 1967.
  • [43] Yang Lihua, Dai Qi, and Guo Yanjun. Study on knn text categorization algorithm. Micro Computer Information, 21:269–271, 2006.
  • [44] Belur V Dasarathy. Nearest neighbor (NN) norms: NN pattern classification techniques. 1991.
  • [45] Internet Systems consortium. Bind. BIND, 2018. "https://www.isc.org/downloads/bind/".
  • [46] wireshark. Tshark. wireshark command line tool, 2018. "https://www.wireshark.org/docs/man-pages/tshark.html".
  • [47] IANA. IANA database. IANA website, 2018. "https://www.iana.org/domains/root/db".
  • [48] dnspython. DNSpython tool. DNS python tool, 2018. "http://www.dnspython.org/".
  • [49] Randall J Boyle. Applied Networking Labs. Prentice Hall, 2013.
  • [50] scikit learn. scikit learn python library. scikit learn webpage, 2018. "http://scikit-learn.org/stable/".
  • [51] Google. archive google drive. Google drive, 2018. "https://drive.google.com/file/d/16dwFZHmu94wsJGA5MePhr8MPRnP3LNjM/view?usp=sharing".