From IP ID to Device ID and KASLR Bypass (Extended Version)

06/25/2019
by   Amit Klein, et al.

IP headers include a 16-bit ID field. Our work examines the generation of this field in Windows (versions 8 and higher), Linux and Android, and shows that the IP ID field enables remote servers to assign a unique ID to each device and thus identify subsequent transmissions sent from that device. This identification works across all browsers and over network changes. In modern Linux and Android versions, this field leaks a kernel address, thus we also break KASLR. Our work includes reverse-engineering of the Windows IP ID generation code, and a cryptanalysis of this code and of the Linux kernel IP ID generation code. It provides practical techniques to partially extract the key used by each of these algorithms, overcoming different implementation issues, and observes that this key can identify individual devices. We deployed a demo (for Windows) showing that key extraction and machine fingerprinting work in the wild, and tested it from networks around the world.

1 Introduction

Online browser-based user tracking is prevalent. Tracking is used to identify users and track them across many sessions and websites on the Internet. Tracking is often performed in order to personalize advertisements or for surveillance purposes. It can either be done by sites that are visited by users, or by third-party companies which track users across multiple web sites and applications. [2] specifically lists motivations for web-based fingerprinting as “fraud detection, protection against account hijacking, anti-bot and anti-scraping services, enterprise security management, protection against DDOS attacks, real-time targeted marketing, campaign measurement, reaching customers across devices, and limiting number of access to services”.

Tracking methods

Existing tracking mechanisms are usually based on either tagging or fingerprinting. With tagging, the tracking party stores at the user’s device some information, such as a cookie, which can later be tracked. Modern web standards and norms, however, enable users to opt out of tagging. Furthermore, tagging is often specific to one application or browser, and therefore a tag that was stored in one browser cannot be identified when the user is using a different browser on the same machine, or when the user uses the private browsing feature of the browser. Fingerprinting is implemented by having the tracking party measure features of the user’s machine (for example the set of installed fonts). Corporations, however, often install a single “golden image” (standard set of software packages) on many identical (hardware-wise) machines, and therefore it is hard to obtain fingerprints that distinguish among such machines.

In this work we present a new tracking mechanism which is based on extracting data used by the IP ID generator (see Section 1.1). It is the first tracking technique that is able to simultaneously (a) cross the private browsing boundary (i.e. compute the same tracking ID for a private mode tab/window of a browser as for a regular tab/window of the browser); (b) work across different browsers; (c) address the “golden image” problem; and (d) work across multiple networks; all this while maintaining a very good coverage of the platforms involved. To our knowledge, no other tracking method (or a combination of several tracking techniques) achieves all these goals simultaneously. Moreover, the Windows variant of this technique also survives Windows shutdown+startup (but not restart).

Our techniques are realistic: for Windows we only need to have control over 8-30 IP addresses (in 3-13 class B networks), and for Linux/Android, we only need to control 300-400 IP addresses (can be in a single class B network). The Windows technique was successfully tested in the wild.

1.1 Introduction to IP ID

The IP ID field is a 16 bit IP header field, defined in RFC 791 [22]. It is used to facilitate de-fragmentation, by marking IP fragments that belong to the same IP datagram. The IP protocol assembles fragments into a datagram based on the fragment source IP, destination IP, protocol (e.g. TCP or UDP) and IP ID. Thus, it is desirable to ensure that given the same source address, destination address and protocol, the IP ID does not repeat itself in short time intervals. Simultaneously, the IP ID should not be predictable (across different destination IP addresses) since “[IP ID] predictability allows traffic analysis, idle scanning, and even packet injection in specific cases” [53].

Designing an IP ID generation algorithm that meets both requirements is not straightforward. Since IPv4 was standardized, several schemes have emerged:

  • Global counter – This approach was used in the early IPv4 days due to its simplicity and its non-repetition period of 65536 global packets. However it is extremely predictable and thus insecure, hence abandoned.

  • Counter/bucket based algorithms – This family of algorithms, suggested by RFC 7739 [18, Section 5.3] (while RFC 7739 focuses on IPv6, its proposed algorithms and discussions are also applicable to IPv4), is the focus of our work. It uses a table of counters, and a hash function that maps a combination of a source IP address, destination IP address, key and sometimes other elements into an index of an entry in the table. IP ID is generated by choosing the counter pointed to by the hash function, possibly adding to it an offset (which may depend on the IP endpoints, key, etc.), and finally incrementing the counter. The non-repetition period in this family is 65536 global packets. At the same time, knowing IP ID values for one pair of source and destination IP addresses does not reveal anything about the IP IDs of other endpoints (except those that use the same bucket) – i.e. it fulfills the non-predictability requirement for almost all other IP destination addresses. This family of algorithms is, therefore, a trade-off between security and functionality; a schematic sketch of this family appears right after this list.

  • Searchable queue-based algorithm – This algorithm maintains a queue of the last several thousand IP IDs that were used. The algorithm draws random IDs until one is found that is not in the queue. Then this ID is used as the next IP ID, pushed to the queue, and the least-recently used value is popped from the queue. This algorithm ensures high unpredictability, and guarantees a non-repetition period as long as the queue.
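For concreteness, the following Python sketch illustrates the counter/bucket family in the abstract. It is not the code of any particular operating system; the hash function, table size and offset derivation are illustrative placeholders only.

```python
import hashlib
import random

TABLE_SIZE = 2048                                  # illustrative; real implementations pick their own size
KEY = random.getrandbits(64).to_bytes(8, "big")    # per-boot secret key
counters = [random.getrandbits(16) for _ in range(TABLE_SIZE)]

def _keyed_hash(src_ip: str, dst_ip: str) -> int:
    # Placeholder keyed hash; real systems use their own (often non-cryptographic) function.
    digest = hashlib.sha256(KEY + src_ip.encode() + dst_ip.encode()).digest()
    return int.from_bytes(digest[:4], "big")

def generate_ipid(src_ip: str, dst_ip: str) -> int:
    h = _keyed_hash(src_ip, dst_ip)
    bucket = h % TABLE_SIZE            # which counter ("bucket") to use for this endpoint pair
    offset = (h >> 16) & 0xFFFF        # per-pair offset added to the counter (illustrative)
    ipid = (counters[bucket] + offset) & 0xFFFF
    counters[bucket] += 1              # incrementing gives the non-repetition property per bucket
    return ipid

if __name__ == "__main__":
    # Example: IP IDs for packets from one source to several destinations.
    print([generate_ipid("10.0.0.1", "192.0.2.%d" % i) for i in range(5)])
```

The trade-off is visible directly: two destinations that hash to the same bucket share a counter (bounded repetition period), while destinations in different buckets reveal nothing useful about each other's IP ID sequence.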

Windows (version 8 and later) and Linux/Android implement variants of the counter-based algorithm. macOS and iOS implement a searchable queue algorithm.

1.2 Introduction to KASLR

KASLR (Kernel Address Space Layout Randomization [56]) is a security mechanism designed to defeat attack techniques such as ROP (Return-Oriented Programming [47]) that rely on the predictability of kernel code addresses. KASLR-enabled kernels randomize the kernel image load address during boot, so that kernel code addresses become unpredictable. While, e.g. in the Linux x64 kernel, the entropy of the load address is 9 bits, a brute force attack is deemed irrelevant since each failure usually ends in a system freeze (“kernel panic”). A typical KASLR bypass enables the attacker to obtain a kernel address (from which, addresses to useful kernel code gadgets can be calculated as offsets) without de-stabilizing the system.

1.3 Our Approach

The IP ID generation mechanisms in Windows and in Linux (UDP only) both compute the IP ID as a function of the source IP address, the destination IP address, and a key which is generated when the source machine is restarted and is never changed afterwards. We run a cryptanalysis attack which analyzes the IP ID values that are sent by a device and extracts this key. The key can then be used to identify the source device, because subsequent attacks will yield the same key value (until the device is restarted).

In more detail, IP ID generation in both systems maintains a table of counters and uses a hash function to choose which counter is used for each connection. It seems hard to deploy an attack based on the value of the counter, since each IP ID might depend on a different counter. Instead, our attack techniques rely on identifying and exploiting collisions which map two destination IP addresses to the same counter. This enables us to extract information about the key that caused the hash values to collide (Linux), or (in Windows) extract information about the offset of the IP ID from the counter. These values depend on the key, and therefore enable us to learn it and identify the machine.

Our approach does not rely on a-priori knowledge of the counter values. Moreover, after we reconstruct the key, we can reconstruct the current counter values (in full or in part) by sending traffic to specially chosen IP addresses, obtaining their IP ID values and, with the knowledge of the key, working back the counter values that were used to generate them.

Linux/Android KASLR bypass

Support for network namespaces (part of container technology) was introduced in Linux kernel 4.1. With this change, the key was extended to include 32 bits of a kernel address (the address of the net structure for the current namespace). Thus, reconstructing the key also reveals 32 bits of a kernel address, which suffices to reconstruct the full address and bypass KASLR. (Through our IP ID attack we were also able to achieve a partial KASLR bypass, and obtain a partial list of loaded drivers, for Windows 10 RedStone 4. That attack was based on an additional initialization bug in Windows; however, the bug was fixed in the October 2018 security update, and the corresponding KASLR bypass is no longer effective.)

Conclusion

In general, our work demonstrates that the use of a non-cryptographic algorithm for the generation of attacker-observable values such as the IP ID may be a security vulnerability even if the values themselves are not security-sensitive. This is due to an attacker’s ability to extract the key used by the algorithm generating the values, and to use this key to track or attack the system.

1.4 Advantages of our Technique

Tracking machines based on the key that is used for generating the IP ID has multiple advantages:

Browser Privacy Mode: Since our technique exploits the behavior of the IP packet generator, it is not affected if the browser runs in privacy mode.

Cross-Browser: Since our technique exploits the behavior of the IP packet generator, it yields the same device ID regardless of the browser used. It should be noted that browsers (like Tor browser) that relay transport protocols through other servers are not affected by our technique.

Network change: Tracking works across different networks since our technique uses bits of the key as a device ID, and does not depend on the device’s IP address or network.

The “Golden Image” Challenge: Since each device generates its own key in a random fashion at O/S restart, even devices with identical software and hardware will most likely have different key values and thus different device IDs.

Not easily turned off: IP ID generation is built into the kernel, and cannot be modified or switched off by the user. Furthermore, the Windows attack can use simple HTTP traffic. The Linux/Android attack requires WebRTC, which cannot be turned off in mobile Chrome and Firefox.

VPN resistant: The device ID remains the same when the device uses an IP-layer VPN.

Windows shutdown+startup vs. restart: The Fast Startup feature of Windows 8 and later (https://blogs.msdn.microsoft.com/olivnie/2012/12/14/windows-8-fast-boot), which is enabled by default, saves the kernel to disk on shutdown, and reloads it from disk on system startup. Therefore, the key is not re-initialized on startup, and keeps its pre-shutdown value. This means that the tracking technique for Windows survives system shutdown+startup. On restart, in contrast, the kernel is initialized from scratch, a new key is generated, and the old device ID is no longer in effect.

Scalability:

Our technique can support billions of devices (Windows, Linux, newer Androids), as the device ID is random, and thus ID collisions are only expected due to the birthday paradox. Thus the probability that a single device does not have a unique ID is very low.



It should be noted that in the Linux/Android case, due to the use of 300-400 IP addresses, the need to “dwell” on the page for 8-9 seconds, and (in newer Android devices) the excessive attack time, there are use cases in which the technique may be considered invasive and/or inapplicable.

1.5 Additional Contributions

In addition to the cross-browser tracking technique for Windows and Linux, our work introduces multiple additional contributions.

With respect to Windows, we also show

  • The first full public documentation of the IP ID generation algorithm in Windows 8 and later versions, obtained via reverse-engineering of the relevant parts of Windows kernel tcpip.sys driver.

  • Cryptanalysis of said algorithm, resulting in a practical technique to extract 40-45 bits of its key. This analysis is applicable to all Windows 8 and later operating systems.

  • A scaled down demo implementation of the Windows tracking technique, using only 15 IP addresses (+2 IP addresses for verification) and providing a 40-bit device ID for a field experiment. We provide results from an extensive in-the-wild experiment spanning 75 networks in 18 countries, demonstrating the practicality and applicability of the technique, and also demonstrating that IP IDs are rarely modified in transit.

With respect to Linux/Android, in addition to a cross-browser tracking technique we also show a full kernel address disclosure (KASLR bypass), based on revealing a kernel address which is in the .data segment of the kernel image.

We disclosed the vulnerabilities to Microsoft and Linux. Microsoft fixed the issue in the Windows April 2019 Security Update (CVE-2019-0688, https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0688). The old (vulnerable) logic is still available for Windows 10 versions below 1903 via a registry setting: if during system startup the registry key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters contains a value named EnableToeplitzHashForIPID of type DWORD with data 0x00000001, then the Toeplitz-based logic is used to generate the IP ID. By default, this value is absent, hence the new (fixed) logic is used. This registry flag is not in effect for Windows 10 versions 1903 and above. Linux fixed the kernel address disclosure (CVE-2019-10639), together with partially addressing the key-based tracking technique (by extending the key to 64 bits), in a patch (“netns: provide pure entropy for net_hash_mix()”, https://github.com/torvalds/linux/commit/355b98553789b646ed97ad801a619ff898471b92) applied to Linux kernel versions 5.1-rc4, 5.0.8, 4.19.35, 4.14.112, 4.9.169 and 4.4.179. For 3.18.139 and 3.16.67, Linux applied a patch we developed (“inet: update the IP ID generation algorithm to higher standards”, https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=55f0fc7a02de8f12757f4937143d8d5091b2e40b) that extends the key to 64 bits. The key-based tracking technique (CVE-2019-10638) is fully addressed in a patch (“inet: switch IP ID generator to siphash”, https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=df453700e8d81b1bdafdf684365ee2b9431fb702) which is part of kernel versions 5.2-rc1, 5.1.7, 5.0.21, 4.19.48 and 4.14.124.

2 The Setting

We assume that device tracking is carried out over the web, using an HTML snippet (which can be embedded by a third-party site/page). The snippet forces the browser to send TCP or UDP traffic (one packet per destination IP suffices) to multiple IP addresses under the tracker’s control (8-30 addresses for Windows, 300-400 for Linux/Android). Ideally, such transmission would be rapid; in our experiments, it can be done in a few seconds or less.

For the Windows attack, the tracker needs to choose the IP addresses according to some trivial constraints (the Linux IP addresses are not subject to any constraints). A discussion of the exact constraints and their trade-offs can be found in the following sections. At the server side, the tracker collects the IP ID values sent by the client to each of the IPs, and computes a device ID consisting of bits of the key in the device’s kernel data that is used to calculate the IP ID.

Additional scenarios (KASLR bypass and internal IP disclosure) for Linux/Android attacks are described in Appendix G.

3 Related Work

Many tracking techniques were suggested in prior research. By and large, proposals can be categorized by their passive/active nature. We use the terminology defined in [59]:

  • A fingerprinting technique measures properties already existing in the browser or operating system, collecting a combination of data that ideally uniquely identifies the browser/device without altering its state.

  • A tagging technique, in contrast, stores data in the browser/device, which uniquely identifies it. Further access to the browser can “read” the data and identify the device.

As described in Section 1, fingerprinting techniques typically cannot guarantee the uniqueness of the device ID, in particular with respect to corporate machines cloned from “golden images”. Tagging techniques store data on the device, and as such they are more easily monitored and evaded. A comprehensive discussion of tracking methods can be found in Google Chromium’s web page “Technical analysis of client identification mechanisms” [23].

3.1 Fingerprinting

There is a major drawback to fingerprinting techniques, which is that typically they cannot guarantee the uniqueness of the device ID. This problem becomes acute when considering organizations wherein desktops and laptops are cloned from “golden images”, thus making those devices practically indistinguishable for passive techniques. (Furthermore, since fingerprinting techniques are known and understood nowadays, countermeasures are already deployed against some of these techniques.) For example, font-based fingerprinting, User-Agent header fingerprinting, WebGL (canvas) fingerprinting, browser plugin/extension fingerprinting, and CPU/GPU performance fingerprinting are all methods that cannot distinguish between systems that are based on the exact same hardware and software. Recently, [10] improved the accuracy of some of these techniques, and introduced new variants, and [58] measured the longevity of various fingerprinting techniques, and found it limited (and provided suggestions to increase their longevity). However, none of these works addressed the above fundamental shortcoming. Below we discuss the few fingerprinting techniques that do cover the “golden image” scenario:
DNS-based fingerprinting methods: [3] suggests using the DNS resolver IP address as a fingerprint. However, in an enterprise (or an ISP, or a campus), a multitude of clients use the same resolver, and as such, this DNS-based fingerprinting method does not contribute toward distinguishing among these clients.
Clock skew: [31] describes how to remotely measure an endpoint’s TCP timestamp clock skew. However, nowadays the risk of enabling TCP timestamps is well understood, and in Windows 7 and later, this feature is disabled by default.

[50] describes how to measure the CPU clock skew using Javascript, however they do not compute a unique ID, but rather attempt to match a signature of a previous measurement, which, even with 300 devices, resulted in at least 20% multiple matches.
Using the Javascript Math.random() seed: This attack ([27]) was addressed by browser vendors in 2008-2010 and is no longer effective.
Ephemeral source ports in outgoing requests: This technique does not work behind a firewall/NAT, as oftentimes the firewall/NAT replaces the original client source port with a port from its own pool. Furthermore, the ports are drawn from a small space (at most 2^16 values), which leads to collisions, and moreover, it requires constant monitoring to track devices.
Motion sensors (in mobile devices): it is possible to fingerprint mobile devices in the browser using deviations in their motion sensors. In general, these techniques are limited in their coverage to such mobile devices that have the required motion sensors. According to [8], deviations in the accelerometer readouts can be used as a fingerprint, but this requires the mobile device not to move while the measurement takes place. [14] suggests using the accelerometer and gyroscope, but their calibration process is time consuming, and the accuracy reported (93%) is insufficient for large scale deployments.
User action history: Except for DNS (see below), privacy mode renders this technique ineffective.

3.2 Tagging

In general, tagging methods are a well understood privacy threat. Therefore, one of the goals of the privacy modes that are implemented in major browsers is to make the tagging methods identify a private browsing session as a distinct instance, which has a different tag than that of the “regular” browser. Private browsing sessions also typically clear their residual data upon termination and start with an empty set of data when launched.

A summary of the privacy mode boundary-crossing status of many tagging techniques appears in [9, Table IV]. As is depicted in that table, almost all tagging techniques do not cross the private browsing boundary for most browsers. In general, nowadays tagging attempts are blocked by the relevant software vendors. For example, Flash cookies do not cross the privacy mode boundary ([60]), and anyway, Flash nowadays requires user interaction in order to run.

There are some advanced tagging techniques that are not covered in [9], and yet do not cross the privacy mode boundary:
TLS-based: The TLS token binding protocol specifically requires browsers to separate privacy mode tokens from the regular browser tokens ([45, Section 7.3]). Firefox provides a separation between the regular browser and privacy mode with respect to TLS session identifiers and session tickets ([42]), and likewise Chrome ([13]). Recently, a similar technique, using TLS 1.3’s session resumption (and session tickets) was described by [55], but it suffers from the above same drawbacks.
HSTS-based: HSTS data does not cross the privacy mode boundary in Chrome[23], Firefox[54] and Safari[54].
HTTPS Public Key Pinning (HPKP): [62] describes a tagging technique based on HPKP. However, HPKP is being deprecated - it is only supported nowadays by Firefox.
DNS cache fingerprinting/tagging (timing based): A DNS-based fingerprinting method is proposed in [16], which can reveal elements of the user’s browsing history. This fingerprinting method could in theory be converted to a tagging method. However the “read tag” operation is destructive as it changes the data (the tag).
DNS cache-based tagging: Recent work [29] describes a tagging technique based on client side caching of resolved DNS names, where the resolution contains random elements which provide statistic uniqueness. This technique does not work across different networks (as clients typically flush their DNS cache when connecting to a new network), and its longevity is limited by the TTL cap imposed by resolvers and stub resolvers.

3.3 IP ID Research

Device tracking via IP ID: Using IP ID is proposed in [6] (2002) to detect multiple devices behind a NAT, assuming an IP ID implementation using a global counter. But nowadays none of the modern operating systems implements IP ID as a global counter. A similar concept is presented by [43] for a single destination IP (the DNS resolver), which theoretically works for devices that have a per-IP counter (Windows, to some extent). However, this technique does not scale beyond a few dozen devices, due to IP ID collisions (the IP ID field provides at most 2^16 values), and requires ongoing access to the traffic arriving at the DNS resolver.
Predictable IP ID: The predictability of IP ID may theoretically be used in some conditions to track devices. [17] describes a technique to predict the IP ID of a target, but requires the adversary to have a fully controlled device alongside it behind the same NAT. Also this technique only handles sequential increments (e.g. not time-based). As such, it is inapplicable to the more general scenarios handled in this paper. This technique is then used in [20] to poison DNS records.
OS Fingerprinting: [61] suggests using IP ID behavior as a fingerprint for some operating systems.
Measuring traffic: [52] samples IP ID values from servers whose IP ID is a global counter, to estimate their outbound traffic.


IP ID Algorithm Categorization: [49] provides practical classification of IP ID generation algorithms and measurements in the wild.
Fragmentation attacks: While not directly related to the properties of the IP ID field, it should be noted that attack techniques abusing fragmentation are known. RFC 1858 [46] lists several such attacks, e.g. the “tiny fragment” attack and the “overlapping fragment” attack.

Windows IP ID research: In parallel to our research, Ran Menscher published on Twitter his research on Windows IP ID [39]. That research reverse-engineered part of the Windows IP ID generation algorithm (without revealing how the index to the counter array is calculated). The analysis of this algorithm is based on two assumptions: (1) that the technique is applied shortly after restart, when the relevant memory buffer contains zeroes in a large part of its cells; and (2) that the attacker controls or monitors traffic to pairs of IP addresses which differ in a single, specific bit position (including positions in the left half of the address). Based on these extreme assumptions, the attacker can extract the key easily, and use it to expose kernel 31-bit data quantities (though without learning where in the array this data resides).

The uninitialized memory issue exploited by this attack was fixed in Microsoft’s October 2018 Security Update [41], which invalidated assumption (1), rendering Menscher’s attack completely ineffective. Our attack and our demo, on the other hand, still work against systems that were patched with this update. Our work has multiple contributions over Menscher’s attack: (1) We provide the full details of the IP ID algorithm. (2) Our analysis does not rely on the array data, and is thus still in effect after applying the October 2018 Security Update, which initializes the array with random data. (3) Our analysis does not impose extreme requirements on the relations between the addresses of the controlled/monitored IP addresses. (4) Our kernel data exposure provides the positions of the data, not just the data quantities (though our kernel exposure technique, too, was eliminated by the October 2018 Security Update). (5) Unlike our attack, Menscher’s technique could not be used for tracking, since once the array cells become non-zero (as they are incremented), the attack becomes ineffective.

3.4 PRNG seed/key extraction

Our approach involves breaking the random number generator algorithm used by operating systems to generate the IP ID value and obtain the seed/key used by the algorithm. Similar strategies were used to different ends. For example, [32] broke the PRNG of the Witty worm to obtain the seed, from which they learned the infection time of the Internet nodes. [27] broke the Javascript Math.Random() PRNG of several browsers, obtained the seed and used it as a browser instance tracking ID. [28] broke the Math.Random() PRNG of Adobe Flash, obtained the seed and used it to extract the machine clock speed.

4 Tracking Windows 8 (and Later) Devices

In this section we first present the algorithm that is used for generating the IP ID in Windows 8 (and later) devices. The input to this algorithm includes a key which is generated at system restart. We then describe how a remote server can identify 45 bits of this key; these bits enable the server to remotely and uniquely identify machines.

4.1 IP ID Generation

IP ID prior to Windows 8

In versions of Microsoft Windows up to and including Windows 7, the IP ID was generated sequentially and globally. That is, for each outgoing IP packet, a global counter would be incremented by 1 and the result (truncated to 16 bits) would be used [43]. These older Windows versions are out of scope for this paper.

The source code of the algorithm that is used for generating IP ID values in Windows is not public. However, we recovered the exact algorithm using reverse engineering, and verified its correctness by comparing its output to IP ID values generated by live Windows systems.

Technical details

The algorithm was obtained by reverse-engineering parts of the tcpip.sys driver of 64-bit Windows 10 RedStone 4 (April 2018 Update, Build 1803). Apparently this algorithm is in use starting with Windows 8 and Windows Server 2012. It was positively tested with Windows 8.1 (64-bit), Windows 10 (64-bit), Windows 2012, Windows 2012 R2 and Windows 2016. The algorithm was verified with TCP and UDP over IPv4. The 32 trailing zero bits used in the calculation of the offset (line 3 of Algorithm 1) are hard-wired in the code. Notice that the code is not specific to IPv4, and can be used with IPv6, which is why the key is defined as 320 bits – more than required to support IPv4. (Our tracking technique can probably be adapted to IPv6, but since IPv6 is out of scope for this paper, we did not test this.) For IPv4 pre RedStone 5, only 106 key bits are used: 78 bits of the 320-bit key, plus 15 bits of one of the 32-bit keys and 13 bits of the other (both described below).

Henceforth, the term “IP” is used as a synonym for “IPv4”. Also, in order to simplify the discussion, it is assumed that the IP ID is taken modulo 2^15 even with Windows 10 RedStone 5 (October 2018 Update, Build 1809), i.e. the most significant bit of the IP ID is simply discarded in this case.

Toeplitz hash

The IP ID generation is based on the Toeplitz hash function defined in [21]. Let us first define the Toeplitz hash, T(K, I), which is a bilinear transformation from a binary key vector K in {0,1}^320 and an input I, which is a binary string of length n (where n ≤ 289), to the output space {0,1}^32. For a binary vector V, denote by V[i] the i-th bit in the vector, with bit numbering starting from 0. The j-th bit of T(K, I) (0 ≤ j ≤ 31) is defined as the inner product between I and a substring of K starting in location j. Namely

    T(K, I)[j] = ⊕_{l=0}^{n−1} K[j+l] · I[l]                (1)
IP ID generation

The IP ID generation algorithm itself uses a key K (tcpip!TcpToeplitzHashKey), which is a 320-bit vector, and two additional keys which are 32 bits each. All these keys are generated once during Windows kernel initialization (using SystemPrng and BCryptGenRandom).

In addition to these constant keys, the algorithm uses a dynamic array of counters, denoted β, whose size M is a power of 2 (specifically M = 8192).

Algorithm 1 describes how Windows 8 (and later) generates an IP ID for a packet delivered from IPSRC to IPDST, while updating a counter in β.

The algorithm uses the keys, and the source and destination IP addresses, to pick a pseudo-random index i for a counter in β, and an offset. The algorithm outputs the sum of the counter and the offset, and increments the counter.

1: procedure Generate-IPID(IPSRC, IPDST)
2:     i ← an index into the counter array β, derived via T from the key material and IPDST
3:     offset ← an offset value, derived via T from the key material, IPSRC and IPDST (followed by 32 zero bits)
4:     IPID ← (β[i] + offset) mod 2^15; β[i] ← β[i] + 1
5:     return IPID        (for Windows 10 RedStone 5, the modulus in line 4 is 2^16)
Algorithm 1 Windows 8 (and later) IP ID Generation
Notation

For a vector V, we denote by V[a, …, b] the sub-vector consisting of the bits V[a], …, V[b]; when such a sub-vector is interpreted as a number, it is the number represented in binary by these bits, namely V[a]·2^(b−a) + … + V[b]. (Network byte order is used throughout the paper for representing IP addresses as bit vectors, e.g. 127.0.0.1 is 01111111.00000000.00000000.00000001.)

Properties of the Toeplitz hash

Our attack uses the following properties of T, which follow from the linearity of this transformation:

    T(K, I || 0^m) = T(K, I)                                  (2)

Therefore the trailing zeros in the input of T in the computation of the offset on line 3 of Algorithm 1 have no effect on the output. Also,

    T(K, I1 || I2) = T(K, I1 || 0^{|I2|}) ⊕ T(K, 0^{|I1|} || I2)        (3)

Therefore it is possible to decompose the second input of T into two parts, and rephrase the computation as the XOR of two separate expressions.
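To make definition (1) and properties (2) and (3) concrete, here is a small Python sketch of a bit-level Toeplitz hash with a 320-bit key and 32-bit output, as above; the specific test inputs are arbitrary.

```python
import random

KEY_BITS, OUT_BITS = 320, 32

def toeplitz(key, inp):
    """T(K, I): output bit j is the inner product (XOR of ANDs) of I with K[j:j+len(I)]."""
    assert len(key) == KEY_BITS and len(inp) <= KEY_BITS - OUT_BITS + 1
    out = []
    for j in range(OUT_BITS):
        bit = 0
        for l, i_bit in enumerate(inp):
            bit ^= key[j + l] & i_bit
        out.append(bit)
    return out

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

if __name__ == "__main__":
    key = [random.randint(0, 1) for _ in range(KEY_BITS)]
    i1 = [random.randint(0, 1) for _ in range(64)]
    i2 = [random.randint(0, 1) for _ in range(32)]

    # Property (2): trailing zeros do not change the output.
    assert toeplitz(key, i1 + [0] * 32) == toeplitz(key, i1)

    # Property (3): the input can be split into two parts whose hashes are XORed.
    lhs = toeplitz(key, i1 + i2)
    rhs = xor(toeplitz(key, i1 + [0] * len(i2)), toeplitz(key, [0] * len(i1) + i2))
    assert lhs == rhs
    print("properties (2) and (3) hold")
```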

4.2 Reconstructing the Key K

To reconstruct the key, the device needs to be measured. The measurements only take a few seconds, and are thus assumed to take place from the same network, i.e., the device’s source IP address IPSRC is fixed (though possibly unknown). A first set of measurements directs the client device to a number of IP addresses, all in the same class B network. A second set of measurements directs the client device to pairs of IP addresses, each pair within one class B network, with a different class B network for each pair in the set.

Once the device is measured, the attack proceeds in two phases. The first phase of the attack recovers 30 bits of the key using the first set of measurements. The second phase of the attack reveals an additional 15 bits of the key using the 30 bits recovered in the first phase and the second set of measurements. Overall, the measurements reveal 45 bits of the key, which suffice to uniquely identify machines from a large population, with high probability.

Section 4.5 describes how to optimally choose the sizes of the two measurement sets, given limits on the number of IP addresses that are available and on the processing time that is allowed. For a typical low-budget limit of around 30 IP addresses, and a run time budget of under a second on a single Azure B1s machine (the platform used in Section 5.2), suitable parameter values can be derived.

4.3 Extracting Bits of K - Phase 1

Denote by IPDST^{j,k}, IPID^{j,k} and β^{j,k} the values of the destination IP address, the IP ID, and the counter β[i] (prior to increment), respectively, with respect to the k-th packet in the j-th class B network that is used in the attack (j and k are counted 0-based). The first phase of the attack uses only a single class B network, and therefore j is set to 0 in this phase; we thus use the shorthand notation IPDST^k, IPID^k and β^k.

A major observation is that only the first half of IPDST is used to calculate the index i in Algorithm 1. Therefore packets that are sent to different IP addresses in the same class B network have an identical index into the counter table, and use the same counter. Denote the common index used for the j-th class B network as i_j.

If these packets are sent in rapid succession (i.e. when no other packet with the same index i_0 is sent in-between), then β^k = β^0 + k, and therefore the output in line 5 of the algorithm is calculated with this counter value (for simplicity, in Windows 10 RedStone 5, we discard the most significant bit of the IP ID).

We focus in this phase on the first class B network (j = 0), which contains the first set of destination IP addresses. Note that the offset that is calculated in line 3 is the difference between the IP ID and the counter prior to its increment.

The attack enumerates over the 2^15 possible values of the counter β^0. (Actually, we show in Appendix C.2 that one bit of this value is canceled out by the algorithm, and therefore the attack enumerates over only 14 bits; we ignore this fact for now to simplify the exposition.) For each possible value it calculates the differences between the observed IP IDs and the corresponding values of the counter, arriving at the offsets calculated in line 3. By observing pairs of IP IDs, it is possible to identify the correct value of β^0 as well as 30 bits of the key.

In more detail, for each possible value of β^0 the attack calculates, for the k-th packet, the difference

    IPID^k − (β^0 + k)   (mod 2^15)

which, for the right value of the counter, should be equal to the offset that is calculated in line 3. Applying eq. (2) (the trailing zeros have no effect) and eq. (3) (the input can be split), this offset can be expressed as the XOR of a term that depends only on the key and IPSRC, and a term that depends only on the key and IPDST^k.

The attack takes two different packets k1 and k2 and computes the XOR of the two corresponding such quantities. The term that depends on IPSRC cancels out, and the result depends only on the key and on IPDST^{k1} ⊕ IPDST^{k2}.

By eq. (1), each output bit of T is a known linear combination of key bits, so this yields 15 linear equations (one per bit of the truncated offset) over bits of K. Since all the destination addresses belong to the same class B network, IPDST^{k1} ⊕ IPDST^{k2} always has 0 for its first 16 bits, and therefore only the 16 input positions corresponding to the last 16 bits of the destination address contribute; consequently, the equations involve 30 consecutive bits of K. Due to obvious linear dependencies, only pairs that share a common packet are useful (e.g. all pairs (0, k) for k ≥ 1), each contributing 15 linear equations over these 30 key bits, with coefficients determined by the (known) destination IP addresses; we collectively denote this system of linear equations by eq. (4).
Speeding up the computation using preprocessing

The coefficients of the key bits in eq. (4) are controlled by the server and are known at setup time. Therefore it is possible to preprocess the computation of Gaussian elimination. Namely, compute a matrix that, when multiplied by the vector of observed values, reveals the bits of the key. This preprocessing is important for efficiency, and we describe in Appendix A.1 how it can be done. Readers who are only interested in the feasibility of the attack and not in its details can skip that appendix.
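The following Python sketch shows the general shape of such preprocessing, under the simplifying assumption that the GF(2) coefficient matrix A of eq. (4) (rows: equations; columns: unknown key bits) has already been built from the chosen destination addresses; it is not the actual procedure of Appendix A.1. Gauss-Jordan elimination over an augmented identity yields a matrix B such that multiplying B by the vector of observed values returns the unknown key bits, followed by rows that must be zero (the consistency check used in step 5 of the attack summary below).

```python
import random

def gf2_preprocess(A, n_unknowns):
    """Given GF(2) rows A (each a list of 0/1 of length n_unknowns), return a matrix B
    (rows over the m observations) such that, for y = A.x, the first n_unknowns entries
    of B.y equal x and the remaining entries are 0 (consistency check)."""
    m = len(A)
    # Augment each row of A with the corresponding row of the m x m identity.
    rows = [list(A[i]) + [1 if j == i else 0 for j in range(m)] for i in range(m)]
    pivot_row = 0
    for col in range(n_unknowns):
        pivot = next((r for r in range(pivot_row, m) if rows[r][col]), None)
        if pivot is None:
            raise ValueError("matrix does not have full column rank")
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(m):
            if r != pivot_row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # The left block is now [I; 0]; the right block records the row combinations, i.e. B.
    return [row[n_unknowns:] for row in rows]

def apply(B, y):
    return [sum(b & v for b, v in zip(row, y)) & 1 for row in B]

if __name__ == "__main__":
    n, m = 30, 45  # e.g. 30 unknown key bits, 45 observed equation values
    A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]  # full rank w.h.p.
    x = [random.randint(0, 1) for _ in range(n)]
    y = [sum(a & b for a, b in zip(row, x)) & 1 for row in A]
    B = gf2_preprocess(A, n)
    out = apply(B, y)
    assert out[:n] == x and all(v == 0 for v in out[n:])
    print("recovered key bits:", out[:n])
```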

Attack summary
  1. The tracker needs to control a set of IP addresses, all in the same class B network.

  2. During setup time, the tracker calculates, using Gaussian elimination, a preprocessing matrix based on the values of these IP addresses.

  3. In real time, the tracker gets IP ID values from the device, from packets sent to the destination IP addresses under the tracker’s control.

  4. The tracker then guesses 14 bits of the counter that is used for these IP addresses (the most significant bit of the counter cancels itself in eq. (4)), calculates vectors that are defined as differences of functions of the observed IP IDs (details in Appendix A.1), and performs a matrix-by-vector multiplication of the preprocessing matrix and the resulting vector.

    For the correct value of the counter, this computation results in a bit vector whose first 30 bits are the key bits and whose remaining bits are zero.

  5. The attacker identifies the right value of the counter by comparing to zero the bits starting at position 31: if they are all zero, this verification statistically guarantees the correctness of the solution (up to a flipped most significant bit of the counter, see Appendix C.2).

Overall this process reveals 30 bits of the key as well as the value of the counter.

The attack's work consists of enumerating over the possible counter values and, for each guess, performing a matrix-by-vector multiplication. (Run time can be improved by conducting the comparison to zero on a bit-by-bit basis, eliminating a guess on the first non-zero bit encountered.) The memory requirement is modest, and, as explained in Section 4.5, the parameters are chosen so that this overhead is very small.

The tracker obtains the (correct) value of the counter, which will be used in the next phase. While it is guaranteed that the correct counter value and key bits will be found, the algorithm may emit additional candidates (with incorrect counter values). The false positive probability of both phases of the attack is analyzed in Appendix C. (Note: throughout the paper, we assume that the system of equations determines the unknown key bits, which results in a single key vector per guessed counter value. We discuss the conditions for meeting this assumption in Appendix B. If the system is slightly under-determined, then each guess of the counter yields a few possible keys; thus a small deficiency is acceptable.)

4.4 Extracting Bits of K - Phase 2

Given the 30 key bits and the counter value β^0 recovered in Phase 1, the attack can be extended to learn a total of up to 45 key bits. This is done in the following way. The offset computed in line 3 of Algorithm 1 can, using eq. (2) and eq. (3), be decomposed into the XOR of a term that depends on the key and IPSRC, a term that depends on the key and the class B prefix of the destination address, and a term that depends on the key and the last 16 bits of the destination address.

The tracker looks at pairs of IP addresses in the remaining class B networks, each pair in a different class B network. Denote each such pair as (IPDST^{j,0}, IPDST^{j,1}), with the order inside the pair conforming to the order of packet transmission, and with the packets being transmitted in rapid succession. Substituting the decomposition above into the expressions for the IP IDs of the two packets yields, using the linearity of T, a relation between the two IP IDs, the counter used by the j-th class B network, the known destination addresses, and the key. (5)

Subtracting the IP IDs of the two consecutive packets in the same class B network cancels the value of that counter, and yields a relation whose left side is the difference of the two observed IP IDs, and whose right side depends only on key bits and the (known) destination addresses. (6)

The left side of eq. (6) is observed by the tracker. The right side can be computed from the 30 key bits recovered in Phase 1 together with 15 additional key bits. (As explained in Appendix A.2, one further bit does not affect the computation, and the same holds for the most significant bit of the counter, so knowing the bits listed above suffices for the attack.) The tracker already knows the Phase 1 bits, and therefore only needs to enumerate over the 2^15 possible values of the additional key bits and eliminate all values which do not agree with the equation. We discuss this procedure in depth in Appendix A.2.

Attack summary:
  1. The tracker needs to control additional pairs of IPs (each pair in its own class B network).

  2. Given IP IDs for these pairs, the tracker enumerates over the additional 15 key bits, and then, for each pair of IP addresses, calculates both sides of eq. (6) and compares them (see the schematic sketch after this list). For this calculation the tracker can choose arbitrary values for the bits noted above as not affecting the computation, since they cancel themselves.

  3. In theory, each IP pair should eliminate the vast majority of incorrect key guesses, but see Appendix C for a more accurate analysis.

  4. In the calculation, the leading term (in terms of run time) is the computation of the Toeplitz hash terms on the right side of eq. (6), each of which is used twice. Thus, the run time is roughly proportional to the 2^15 enumerated values (there is no multiplication by the number of pairs, since the first pair is likely to eliminate almost all false guesses).
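Schematically, the Phase 2 elimination has the following structure (Python). The helper consistent_with_pair() is a hypothetical placeholder standing in for the comparison of both sides of eq. (6) for one pair; its real implementation depends on the details deferred to Appendix A.2.

```python
def consistent_with_pair(candidate_bits, pair_observation):
    """Hypothetical placeholder: returns True iff the two sides of eq. (6), computed
    from the candidate key bits and the observed IP ID difference of this pair, agree."""
    raise NotImplementedError

def phase2(known_30_bits, pair_observations):
    survivors = []
    for guess in range(1 << 15):          # enumerate the 15 additional key bits
        candidate = (guess, known_30_bits)
        # all() short-circuits: the first failing pair eliminates the guess, so the
        # expected work per guess is dominated by a single pair check.
        if all(consistent_with_pair(candidate, p) for p in pair_observations):
            survivors.append(candidate)
    return survivors                      # ideally a single surviving candidate
```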

At the end of Phase 2, the tracker obtains:

  • A partial key vector (or a few candidates) of 45 bits, which is specific to the device since it was set during kernel initialization, and does not depend on the device’s IP address or network. These bits serve as a device ID.

  • A value which allows the tracker to calculate (assuming the relevant key bits are known) the value of the counter for any destination IP address whose IP ID is known (provided the source IP is the same). This is useful for reconstructing the table of counters (Appendix D).

4.5 Choosing the Optimal Parameters

For Windows, we assume budget-oriented constraints, namely available IP addresses and CPU time per measurement. We need to set the number of IP addresses from the same class B network to which the client is directed in the first set of measurements, and the number of pairs of IP addresses, each pair in the same class B network, used in the second set of measurements.

Our goal is to optimize for minimum false positives. The first constraint bounds the total number of IP addresses used by the two measurement sets. The second constraint requires that the leading term of the attack’s run time (Appendix A.3.2), scaled by a factor expressing the computing platform’s strength, stays within the allowed processing time. Additionally, there are inherent constraints: the first measurement set must be large enough to let Phase 1 suggest a single key candidate to Phase 2 (most of the time), and the second set must be large enough to let Phase 2 provide a single final key (most of the time).

Given these constraints, we want to minimize the leading term of the false positive probability (Appendix C). Since we “pay” two IP addresses for each additional pair in the second measurement set and only one IP address for each additional address in the first set, we should make the first set as large as the run-time constraint allows, and spend the remaining IP address budget on pairs for the second set.

(This optimization yields the concrete parameter combination referred to in Section 4.2.)

4.6 Practical Considerations

We discuss in Appendix A.3 different issues that appear when deploying the attack. These issues include ways to emit the needed traffic from the browser, handling packet loss and out-of-order packet transmission, handling interfering packets, and limiting the false-positive and false-negative error probabilities.

The run time of the key extraction attack is less than a second even on a very modest machine. The dwell time (time duration in which the page needs to be loaded in the browser) is 1-2 seconds for a WebSocket implementation. It is possible to minimize the dwell time by moving to WebRTC (STUN).

Longevity: the device ID is valid until the machine restarts (mere shutdown+start does not invalidate the device ID due to Windows’ Fast Start feature). A typical user needs to restart his/her Windows machine only for some Windows updates, i.e. with a frequency of less than once per month.

The attack is scalable: with 41 bits, the probability that a device has a unique ID is very high, even for a billion-device population; false positives are also rare (Table 3), and false negatives can be made negligible (Appendix A.3.4). From a resource perspective, the attack uses a fixed number of servers, a fixed amount of RAM/disk, and a fixed set of IP addresses. The required CPU power is linear in the number of devices measured per time unit, and in the Windows case is negligible. Network consumption per test is also negligible (assuming a WebRTC/STUN implementation, a single STUN binding request is 48 IP-level bytes, thus the total traffic over all destination addresses is less than 1.5KB at the IP layer).
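As a back-of-the-envelope illustration of the scalability claim (assuming the 41-bit IDs are uniformly distributed; this is a rough estimate of ours, not taken from Table 3):

```latex
\Pr[\text{a given device's ID is not unique}]
  = 1 - \left(1 - 2^{-41}\right)^{N-1}
  \approx \frac{N}{2^{41}}
  = \frac{10^{9}}{2.2\times 10^{12}}
  \approx 4.5\times 10^{-4}
```

i.e. well over 99.9% of devices obtain a unique ID even at a billion-device scale.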

4.7 Attack Improvements and Variants

A fast-track identification of already-seen keys can be obtained in the following way: once the bits of a key are extracted, they are stored for comparison against future connections. When a device is to be measured, the tracker first goes through all stored bit strings, and tests the measured data for compatibility with each one of them. This amounts to guessing the unknown bits one by one, starting from the least significant, and eliminating wrong guesses via eq. (6) using the bits guessed so far. The CPU work per stored key is thus almost negligible.

The original attack can also be sped up using incremental evaluation. The details are in Appendix A.5.

4.8 Environment Factors

We demonstrate here that the tracking attack can be deployed in almost every setting that can be reasonably expected.

HTTPS: In essence, there should be no problem in having the snippet use WebSocket over HTTPS (wss:// URL scheme) for TCP packets.

NAT: Typically NAT (Network Address Translator) devices do not alter IP IDs, and thus do not affect the attack.

Transparent HTTP Proxy / Web Gateway: Such devices may terminate the TCP connection and establish their own connections (with IP ID from their own network stack) and thus render our technique completely ineffective. However, typically these devices do not interfere with HTTPS (TCP port 443) traffic, and UDP traffic, so these alternatives can be used by the tracker.

Forward HTTP proxy: When a browser is configured to use a forward proxy server, even HTTPS traffic is routed to it by the browser. However, it may still be the case that UDP traffic (which is not handled by HTTP forward proxies) can be used by the technique.

Tor-based browsers and similar browsers: Browsers that forward TCP traffic to proxy servers (and disallow or forward UDP requests) are incompatible with the tracking technique as they do not expose IP header data generated on the device. Since “Tor transports TCP streams, not IP packets” (https://www.torproject.org/docs/faq.html.en#RemotePhysicalDeviceFingerprinting), this applies to all Tor-based products, such as the Tor browser and Brave’s “Private Tabs with Tor”, and therefore they are not covered by our technique.

Windows Defender Application Guard (WDAG): This new technology in Windows 10 enables the user to launch the Edge browser in a virtual environment. While the device ID in this virtual environment is independent of the device ID of the main operating system, it is consistent among all WDAG Edge instances. Furthermore, unlike the “main” Windows device ID, the WDAG device ID does not change with operating system restart, hence the WDAG device ID lives longer than the main Windows device ID. It should be noted that WDAG is only available for Edge browser in Windows 10 Enterprise/Pro edition, and requires high-end hardware.

IP-Level VPN: We experimented with F-Secure FreeDome (www.f-secure.com/en/web/home_global/freedome) and PureVPN (www.purevpn.com/). Both VPNs supported our technique.

IPv6 and IPsec: We do not know whether IPv6 or IPsec packets use the same IP ID generation mechanism. This requires further research.

Javascript disabled: Tracking can also work when Javascript (or any client side scripting) is not available, e.g. with the NoScript browser extension [36]. We discuss this in Appendix A.4.

4.9 Possible Countermeasures

We list here some obvious ways of modifying Algorithm 1 and their impact:

  • Increasing M (the size of the table of counters) – surprisingly, this has very little effect on the basic tracking technique, since no assumptions were made on M in the first place. It does affect the counter reconstruction technique.

  • Changing T into a cryptographically strong keyed-hash function – while this change eliminates the original attack, it is still possible to mount a weaker attack that only tracks a device while its source IP address does not change. In fact, this applies to the entire abstract scheme proposed in [18, Section 5.3]. See Appendix E.

  • Changing the algorithm altogether (this is our recommendation). A robust algorithm relies on industrial-strength cryptography, large enough key space, and strong entropy source for the key, and uses them to generate IP IDs which (a) have guaranteed non-repetition period; (b) are difficult to predict; and (c) do not leak useful data. The algorithm used in macOS/iOS [53] is a good example. This eliminates the attack altogether.

5 Field Experiment – Attacking Windows Machines in the Wild

We set up a fully operational system to test the IP ID behavior in the wild, as well as to verify that the technique for extracting device IDs for Windows machine works as expected.

5.1 Setup

As explained in Appendices A.3.3 and C, in order to avoid false positives (which almost always happen due to false keys that differ from the true key in a few most significant bits), we need to trim the most significant bits from the key – i.e. use the key’s tail. For the full production setup (30 IP addresses), we calculated that a tail of 41 bits will suffice. Due to logistic and budgetary constraints, in our experiment we used only 15 IP addresses (rather than 30) for the key extraction (and 2 more IPs for verification), with correspondingly smaller measurement sets. Thus we lowered the tail length to 40, and used these 40 bits as a device ID. That is, for this experiment, we traded the size of the device ID space for a smaller probability of false positives.

We then used WebSocket traffic to the additional pair of IP addresses (from a class B network that is different than those in the initial set of 15 IPs) to verify the correctness of the key bits extracted. In this experiment we can only compute the least significant 9 bits of the IP ID difference for the verification pair, so we use eq. (6) restricted to its 9 least significant bits.

(We need to allow both orderings, since we cannot know the order of packet generation; thus, given knowledge of the key bits, we have two candidates for this 9-bit value, out of a space of 2^9 values.) A random choice of two values yields a success rate of 2/2^9 ≈ 0.4%. We deem our algorithm to be valid if it consistently yields the correct value (as one of the two candidates) in all tests.

Both regular web traffic (e.g. snippet download) and WebSocket traffic were carried out in the clear, over HTTP ports 80 and 8080 respectively.

We asked “Friends and Family” to browse to the demo site using Windows 8 or later, from various networks.

5.2 Results

Network distribution

The experiment was conducted from July 22, 2018 to October 20, 2018. We collected data on 75 different class B networks. (Except for 3 tests in the same class B network: in two of these tests, the scenario was a user roaming from abroad having the same class B as a local cellular network access; in one case the subnet ownership was different.) The networks are well dispersed across 18 countries and 4 continents, with representatives from Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Hong Kong, Israel, India, Japan, The Netherlands, Poland, Sweden, Switzerland, the UK and the USA. The networks are also usage-diverse (home networks, SMB networks, corporate networks, university networks, public hotspots and cellular networks). We asked the users who connected to our demo site to use multiple regular browsers (indeed the snippet was accessed with all the common Windows browsers) and networks, and to connect at different times, and verified that the device ID remained the same in all these connections.

Failures to extract a key – IP ID modification

In only 6 networks out of 75 (8%) we could not extract the key and therefore concluded that the IP ID was not preserved by the network. These six networks did not include any major ISP and seem to be used by relatively few users: they included an airport WiFi network, a government office, and a Windows machine connecting through one cellular hotspot (hotspots that we tested in other cellular networks did not change the IP ID). Of those six networks, in 3 networks we had clear indication (via an HTTP request header - Via or X-BlueCoat-Via) that a transparent proxy (in two cases - Squid 3.1.23) or a web security gateway (in one case - BlueCoat ProxySG) was in path. In such cases, moving to WebSocket over HTTPS, or to UDP would probably have addressed the issue. Another case was a forward proxy (moving to UDP would have possibly addressed it). In the two final cases, the exact nature of interference was not identified, however one case clearly exhibited Linux characteristics at the IP and TCP level (hence, it is very likely to be a Linux-based TCP gateway), and the other exhibited non-Windows TCP artifacts (TCP timestamps), thus most likely another TCP gateway. We can say then that optimistically, only 2 networks out of 75 (2.7%) are incompatible with the tracking technique, maybe even less (as it is still quite possible these two TCP gateways are actually transparent proxies).

Positive results

In the remaining 69 networks, for 4 networks we did not keep traffic for the additional two IPs, and thus we could not verify the key extraction. For the remaining 65 networks, our algorithm extracted a single 40-bit key, and correctly predicted the least significant 9 bits of the IP ID of the second IP in the last pair (i.e. the correct value was one of the two candidates computed by the algorithm). This verifies the correctness of the algorithm and the key bits it extracts.

Lab verification

We tested a machine in the lab with the above test setup to obtain 40 bits of the key. Then, using WinDbg in local kernel mode, we obtained tcpip!TcpToeplitzHashKey, extracted the corresponding 40 bits from it and compared them to the 40 bits calculated by the snippet – as expected, they came out identical.

Actual run time

Our demo system was implemented on the least powerful (and cheapest) Azure VM (B1s class, 1 vCPU [40]). Based on the run time measured for Phase 1 with the scaled-down demo parameters (0.12 seconds), and given how the attack’s run time scales with the parameters, we can compute the CPU speed factor of this platform, and using it we estimate the run time for Phase 1 with the full production parameters to be approx. 0.72 seconds. We estimate the run time of Phase 2 to be 0.01 seconds or less, thus the overall run time would be 0.73 seconds. Extrapolated to 10,000 Azure B1s machines, the run time would be 0.000073 seconds.

Packet loss and false negatives

We analyzed 79 valid tests (some networks were tested multiple times) with a Windows 8+ operating system and no IP ID modification, and found only 3 cases wherein the analysis logic failed to provide a device ID (additional tests from the same devices succeeded in extracting a key). In all such cases a manual analysis indicates that this is due to packet loss. All three cases were from locations where Internet connectivity is not ideal, and which are also geographically remote from our server (the tested networks were in Asia, whereas our server is in the US). In two cases (both belonging to the same user, in a cellular network), there were 3 missing packets, and in one case – 4. Appendix A.3.4 describes additional logic (not implemented in the experiment) that can be used to reduce false negatives to a negligible level.

6 Linux and Android

The scope of our research is Linux kernel 3.0 and above. Also, we only investigated the x64 (typical desktop Linux) and ARM64 (Android) CPU architectures, although almost all of the analysis is not architecture-specific.

6.1 Attack Outline

In order to track a Linux/Android device, the tracker needs to control several hundred IP addresses. The tracking snippet forces the browser to rapidly emit UDP packets to each such IP (using WebRTC and specifically the STUN protocol, which enables sending bursts of packets closely spaced in time to controlled destination addresses). It also collects the device’s source IP address (using WebRTC as well – [48] or alternatively Appendix G.2.)

The tracker collects IP IDs from all IP addresses, and identifies bucket collisions by looking for IP pairs whose IP IDs are in close proximity. Recall that the choice of the bucket is a function of the source and destination IP addresses, and a device key. The tracker enumerates over the key space to find the (correct) key which generates collisions for the same pairs for which collisions were observed. The key that is found is the device ID.

6.2 IP ID Generation in Linux

The Linux kernel implementation of IP ID differs between TCP and UDP [30]. The TCP implementation has always used a counter per TCP connection (initialized with a hash of the connection endpoints and a secret key, combined with a high resolution timer) and as such is not interesting to us (collisions are meaningless). The implementation of IP ID for stateless over-IP protocols (e.g. UDP) has gone through an interesting evolution. (This also covers the IP ID of TCP RST packets which do not belong to an established TCP connection, e.g. a RST for a SYN received on a non-open port, or a RST for a SYN+ACK received with no matching SYN previously sent.) We focus on short datagrams, i.e. datagrams shorter than the MTU (maximum transmission unit), which do not undergo fragmentation. We designate the IP ID generation algorithms as A0 to A3, in their order of evolution.

A0: In early Linux kernels, the IP ID for short datagrams was simply set to 0.

A1 and A2: In Linux kernel 3.16.0 (released August 2014), the IP ID for short datagrams became dynamic (just as it has always been for long UDP datagrams); see the function __ip_select_ident in https://elixir.bootlin.com/linux/v3.16/source/net/ipv4/route.c. This was back-ported to various active Linux 3.x branches (see Table 2). The generation algorithm maintains an array of M buckets, each containing a value (the implementation uses 32-bit quantities, but only the least significant 16 bits are used) and a time-stamp of the last time the bucket was used (taken at the kernel's tick resolution, which depends on the version of the OS – see below). The bucket array is initialized at boot time with random data (using a PRNG). The algorithm also uses the following parameters:

  • key – a 32-bit key (ip_idents_hashrnd) which is initialized upon the first IP transmission with random data.

  • h – a hash function. The details of the hash function are not important for understanding the attack. There are two versions of the hash function: the old one is used in A1, and the new one is used in A2 and A3. (In A1, h is a modified Jenkins lookup3 hash function [25], except that the initialization steps are taken from the Jenkins lookup2 hash function [24], but using a different constant (0xdeadbeef instead of 0x9e3779b9). This IP ID algorithm was back-ported to several earlier kernel branches. In Linux 4.1 (released June 2015), the hash function was corrected to fully comply with the Jenkins lookup3 hash function; this change was back-ported to Linux 3.18 (ver. 3.18.17), the active development 3.x kernel branch, and to several earlier kernel branches. This corrected version is used in A2 and A3.)

  • prot – the IP “next level” protocol number (for UDP, this value is 17). Nominally an 8-bit field, extended to 32 bits by zero-filling the most significant bits.

  • RANDOM(x) – a PRNG (a 96/128-bit Tausworthe generator) which receives x as a parameter and returns a uniformly distributed random integer between 0 and x (we define RANDOM(0) = 0).

The IP ID generation algorithm is defined in Algorithm 2. The procedure picks an index i into the bucket array as a function of the source and destination IP addresses, the protocol and the key. It then picks a random value which is smaller than or equal to the time that has passed (measured in ticks, at the kernel tick frequency, denoted hz, in ticks per second) since the last use of this bucket's counter, increments the counter by this value, and outputs the result.

1: procedure Generate-IPID(IPSRC, IPDST, prot)
2:     i ← h(IPSRC, IPDST, prot, key) mod M
3:     t ← current time, in ticks
4:     hop ← RANDOM(t − τ[i])            ▷ τ[i] is the last-use time-stamp of bucket i
5:     β[i] ← β[i] + hop ;  τ[i] ← t      ▷ β[i] is the counter of bucket i
6:     return β[i] mod 2^16
Algorithm 2 Linux IP ID Generation (A1/A2)
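To make the mechanics concrete, the following Python sketch simulates Algorithm 2. It is illustrative only: the bucket-selection hash is a keyed stand-in (BLAKE2s) rather than the kernel's Jenkins-based h, the key and HZ values are arbitrary, and M = 2048 mirrors the size of the kernel's bucket array (IP_IDENTS_SZ).

import random
import time
from hashlib import blake2s

M = 2048   # number of counter buckets (matches IP_IDENTS_SZ in the Linux sources)
HZ = 100   # tick frequency hz (CONFIG_HZ); 100, 250 and 300 are common values

class IpIdGenerator:
    def __init__(self, key: bytes):
        self.key = key                                     # models ip_idents_hashrnd
        self.counters = [random.getrandbits(32) for _ in range(M)]
        self.stamps = [0] * M                              # last-use time, in ticks

    def _bucket(self, saddr: str, daddr: str, prot: int) -> int:
        # Stand-in for h(saddr, daddr, prot, key) mod M -- NOT the kernel's Jenkins hash.
        digest = blake2s(f"{saddr}|{daddr}|{prot}".encode(), key=self.key).digest()
        return int.from_bytes(digest[:4], "little") % M

    def generate(self, saddr: str, daddr: str, prot: int = 17) -> int:
        i = self._bucket(saddr, daddr, prot)
        now = int(time.monotonic() * HZ)                   # current time, in ticks
        hop = random.randint(0, max(now - self.stamps[i], 0))
        self.counters[i] += hop                            # counter += RANDOM(elapsed ticks)
        self.stamps[i] = now
        return self.counters[i] & 0xFFFF                   # only the low 16 bits are emitted

gen = IpIdGenerator(key=bytes(16))                         # fixed key, for reproducibility only
print(gen.generate("10.0.0.5", "203.0.113.7"))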

A3: Starting with Linux 4.1, net, the network namespace of the kernel context (a 64-bit pointer in kernel space), is included in the hash calculation, conditional on the compilation flag CONFIG_NET_NS (which is on by default for Linux 4.1 and later, and for Android kernel 4.4 and later). The modification is to step 2, which now reads:

    i ← h(IPSRC, IPDST, prot ⊕ F_n(net), key) mod M

where F_n is a right-shift (by n bits) and truncation function that returns 32 bits of net. We designate this algorithm as A3.

To summarize, there are four flavors of IP ID generation (for short stateless-protocol datagrams) in Linux:

  1. A0 – the IP ID is always 0 (in ancient kernel versions).

  2. A1, A2 – both use Algorithm 2, with the two different implementations of h.

  3. A3 – Algorithm 2, with h being a correctly implemented Jenkins lookup3 hash function, and with the net namespace included in the hash.

Of interest to us are algorithms A1 to A3. We focus mostly on UDP, as this is a stateless protocol which can be emitted by browsers.

The resolution of the timer used in the algorithm (the tick frequency hz) is determined by the kernel compile-time constant CONFIG_HZ. A common value for older Android Linux kernels is 100 (Hz). Newer Android Linux kernels (4.4 and above) use 300 or 100 (or, rarely, 250). The default for Linux is 250 (see https://elixir.bootlin.com/linux/v4.19/source/kernel/Kconfig.hz). In general, for tracking purposes, a lower value of hz is better.

Note that the key and the initial bucket values are generated during operating-system initialization which, unlike Windows, happens both during a restart and during a (shutdown+)start.

6.3 Setting the Stage

Our technique for tracking Android (and Linux) devices uses HTML5’s WebRTC[1] both to discover the internal IP address of the device [48], and to send multiple UDP packets. It works best when the WebRTC STUN [37] traffic is bursty. In order to analyze the effectiveness of the technique we investigated the following features, focusing on Android devices.

6.3.1 Android Versions and Linux Kernel Versions

The Android operating system is based on the Linux kernel. However, Android versions do not map 1:1 to Linux kernel versions. The same Android version may be built with different Linux kernel versions by different vendors, and sometimes by the same vendor. Moreover, when an Android device updates its Android operating system, typically its Linux kernel remains on the same branch (e.g. 3.18.x). Android vendors also typically use somewhat old Linux kernels. Therefore, many Android devices in the wild still have Linux 3.x kernels, i.e. use algorithm or .

6.3.2 Sending Short UDP Datagrams to Arbitrary Destinations, or “Set Your Browsers to STUN”

The technique requires sending UDP datagrams from the browser to multiple destinations. The content of the datagrams is immaterial, as the tracker is interested only in the IP ID field. We use WebRTC (specifically – STUN) to send short UDP datagrams (with no control over their content) to arbitrary hosts. The RTCPeerConnection interface can be used to instruct the browser's WebRTC engine to use a list of purported STUN servers, and even allows setting the UDP destination port for each host. The browser then sends a STUN “Binding Request” (a short UDP datagram) to each destination host and port.

To send STUN requests to multiple servers (in Javascript), create an array A of strings in the form stun:host:port, then invoke the constructor RTCPeerConnection({iceServers: A}, …) in a regular WebRTC flow e.g. [26] (applying the fix from [19]).

Another option (specific to Google Chrome) is to send requests over gQUIC (Google QUIC) protocol, which uses UDP as its transport. This is less ideal since the traffic is less bursty, its transmission order isn’t deterministic, and there is an overhead in HTTPS requests and in gQUIC packets.

6.3.3 Browser Distribution in Android

We want to estimate the browser market share of “supportive” browsers (Chrome-like and Firefox) on Android. Based on April 2018 figures for operating systems (https://netmarketshare.com/operating-system-market-share.aspx), combined with the mobile browser distribution for April 2018 (https://netmarketshare.com/browser-market-share.aspx), we conclude that Chrome-like browsers (Google Chrome, Opera Mini, Baidu, Opera) comprise 90% of browser usage on Android. Adding Firefox (whose STUN traffic is less bursty, but which can still be tracked in at least some configurations) brings this figure up to 92%.

6.3.4 Chrome’s STUN Traffic Shape

Chrome sends the STUN requests to the list of supposed STUN servers in bursts. A single burst may contain the full list of the requested STUN servers (in ascending order of destination IP address), or a subset of the ordered list (typically with a missing range of destination hosts). We measured 1014 bursts (to our destination IP addresses) emitted by a Google Pixel 2 mobile phone (Android 8.1.0, kernel 4.4.88) running the Google Chrome 67 browser. The vast majority of bursts last between 0.1 and 0.2 seconds, and the maximal burst duration was 0.548 seconds. We therefore use an upper bound δ (slightly above this maximum) for the duration of a single burst.

Chrome emits up to 9 bursts with increasing time delays, at the following offsets (in seconds, where 0 is the first burst): 0, 0.25, 0.75, 1.75, 3.75, 7.75, 15.75, 23.75, 31.75 (see https://chromium.googlesource.com/external/webrtc/+/master/p2p/base/stunrequest.cc). We label these bursts B1, …, B9 respectively, and we will mainly be interested in B5 and B6, as they are sufficiently far from their neighbors. Thus, we are only interested in the first 8–9 seconds of the STUN traffic.
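The sketch below illustrates how observed STUN arrival times can be assigned to this burst schedule. BURST_OFFSETS follows the schedule above; the delta (burst-duration bound) and jitter values are assumptions chosen for illustration.

# the burst schedule from Section 6.3.4; delta and jitter below are assumed
BURST_OFFSETS = [0, 0.25, 0.75, 1.75, 3.75, 7.75, 15.75, 23.75, 31.75]  # seconds

def split_into_bursts(arrival_times, delta=0.6, jitter=0.3):
    """Map packet indices to burst labels B1..B9 by their arrival time relative
    to the first packet. delta bounds the burst duration, jitter the network noise."""
    t0 = min(arrival_times)
    bursts = {}
    for pkt, t in enumerate(arrival_times):
        rel = t - t0
        for label, offset in enumerate(BURST_OFFSETS, start=1):
            if offset - jitter <= rel <= offset + delta + jitter:
                bursts.setdefault(label, []).append(pkt)
                break
    return bursts

print(split_into_bursts([0.02, 0.11, 3.80, 3.95, 7.81]))   # -> bursts 1, 5 and 6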

6.3.5 UDP Latency Distribution

While WebRTC traffic is emitted by the browser in well-defined, ordered bursts, one cannot assume the traffic will retain this “shape” when arriving at the destination servers. Indeed, even the order among packets within a burst is not guaranteed at the destination. Understanding the latency distribution of short UDP datagrams is therefore needed in order to simulate the in-the-wild behavior, and consequently the efficacy of various tracking techniques. The latency of UDP datagrams is gamma-distributed [34, 35]. However, for simplicity, we use a normal distribution to approximate the in-the-wild latency distribution. On May 1–6, 2018, we measured the latency of connections to a server in Microsoft Azure's “East-US” location (in Virginia, USA) from 8 different networks located in Israel, almost 10,000 km away. The maximum standard deviation was 0.081 seconds. Hereinafter, we use a slightly larger standard deviation, denoted σ, as a worst-case value for the UDP jitter.

6.3.6 Packet Loss

We identified two different packet loss scenarios:

  • Packet loss during generation: the WebRTC packet stream (in Chrome-like browsers) is bursty in nature. In some bursts, we noticed large chunks of missing packets. These are quite rare (in the experiment described in Section 6.3.4 we observed 29 such cases out of 1014 bursts – 2.9%, though they are more common on Android devices with 4.x kernels) and are easily identified. We can safely ignore them, because the tracker can detect a burst with many missing packets, reject the sample, and run the sampling logic again, or use more sophisticated logic incorporating information from more than two bursts. Additionally, with a lower hz there are far fewer false pairs, which helps the analysis.

  • Network packet loss: the UDP protocol does not guarantee delivery, and indeed packets do get lost over the Internet. The loss rate is not high, however, and we estimate it at roughly 1% or less (see http://www.verizonenterprise.com/about/network/latency/, and [5]).

6.4 The Tracking Technique

The technique that we use differs from prior-art techniques in that it focuses on bucket collisions, i.e. on cases wherein UDP datagrams sent to two different destination IP addresses have their IP IDs generated using the same counter.

The tracker needs to control N Internet-routable IPv4 addresses, such that the IP-level traffic to these addresses (and particularly the IP ID field) is available to the tracker. Ideally the IPs are all in the same network, so that they are all subject to the same jitter distribution. The tracker should be able to monitor the traffic to these IP addresses with a time-synchronization resolution of about 10 milliseconds (or less) – e.g. by having all the IPs bound to a single host.

With N different destination IP addresses and M buckets (M = 2048 in Algorithm 2), there are N(N−1)/(2M) expected collisions (unordered pairs of IP addresses which fall into the same bucket), assuming no packet loss. In reality, the tracker can only obtain an approximation of this set. The goal is to reduce the false negatives and false positives to levels which allow assigning meaningful tracking IDs.

The basic property that enables the attacker to construct the approximate list is that in an IP ID generation step the counter is updated by a random number which is smaller than 1 plus the product of the tick frequency hz and the time that has passed since the last use of that counter. Therefore, for a true pair, where the IP ID generation for d1 and d2 (emitted at times t1 ≤ t2) used the same bucket (counter), the following inequality almost always holds:

    0 ≤ (IPID(d2) − IPID(d1)) mod 2^16 ≤ hz · (t2 − t1) + 10

(We use 10 instead of 1 in order to support up to 10 IPs colliding into the same bucket: each intermediate collision may increment the counter by up to 1 more than hz times the time elapsed since the previous collision, so in total the counter increments by no more than hz · ΔT + 10, where ΔT is the time between the first and the last collision in the burst.)

Since we are looking at datagrams from the same burst, we have an upper bound δ on the burst duration, such that t2 − t1 ≤ δ, and therefore:

    0 ≤ (IPID(d2) − IPID(d1)) mod 2^16 ≤ hz · δ + 10

For two IP addresses which are not mapped to the same counter, the likelihood of this inequality holding is only about (hz·δ + 11)/2^16, i.e. a fraction of a percent even in the worst case in our setting (hz = 300). The key extraction algorithm (Section 6.6) examines IP ID values in two different communication bursts, which further reduces the likelihood of a false positive: a false-positive pair in a given burst survives into the next burst only with roughly the same small probability, whereas a true pair collides in every burst. Thus, for the intersection of 2 consecutive bursts, the number of false positives (appearing in both bursts) is only a small fraction of their number in a single burst.

Another requirement is that the set of IP addresses is large enough, so that the number of colliding pairs is sufficiently high in most tracking attempts, rather than merely on average (the expected number of colliding pairs is N(N−1)/(2M)).
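As a back-of-the-envelope illustration of these quantities: M = 2048 matches the kernel's bucket count, while the N, hz and δ values below are assumptions chosen only for illustration.

# Back-of-the-envelope numbers for Section 6.4. M = 2048 matches the kernel's
# bucket array; N, hz and delta are illustrative assumptions.
N, M = 400, 2048          # destination IP addresses (assumed) and counter buckets
hz, delta = 300, 0.6      # worst-case tick rate and assumed burst-duration bound

expected_true_pairs = N * (N - 1) / (2 * M)      # birthday-style bucket collisions
false_pair_prob = (hz * delta + 11) / 2 ** 16    # chance an unrelated pair satisfies
                                                 # the IP ID inequality in one burst

print(f"expected true colliding pairs: {expected_true_pairs:.1f}")
print(f"per-pair false-positive probability (single burst): {false_pair_prob:.4f}")
print(f"...and when intersecting two bursts: {false_pair_prob ** 2:.2e}")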

6.5 Attack Phase 1 – Collecting Collisions

The tracking snippet needs to be rendered for at least 8.5 seconds – enough time for the browser to send the first 6 STUN bursts (B1–B6), see Section 6.3.4. The tracking server splits the STUN traffic into bursts, based on the datagrams' time of arrival and on the expected burst time offsets (see Section 6.3.4). For simplicity and ease of analysis, we henceforth only use traffic from bursts B5 and B6, which can be easily and unambiguously identified (since they are well separated in time from the other bursts). We note that in some cases requests in B5 or B6 may be unsent, and in such cases we may need to resort to other burst combinations; as long as these are “late” bursts (i.e. separated from their neighboring bursts by enough multiples of the UDP jitter σ, see above), they can be separated without errors (or almost without errors) and the following analysis remains valid. If there are too many missing requests in a burst, the Tracking Server communicates with the Tracking Snippet, instructing it to retest the device.

Assuming no (or few) missing requests in B5 and B6, the Tracking Server analyzes the data per burst. For each burst, it calculates a set of pair candidates by collecting pairs of destination IP addresses (d1, d2) with d1 < d2 (Chrome-like browsers send the STUN requests ordered by IP address, thus for a true pair the higher IP will have the higher IP ID) for which 0 ≤ (IPID(d2) − IPID(d1)) mod 2^16 ≤ hz·δ + 10. It then identifies pairs which appear in the candidate sets of both bursts, and adds them to a set P of full candidates. This set forms a single measurement of a device. The tracker calculates the tracking ID from P in Phase 2.
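A minimal Python sketch of this pair-candidate collection follows. It assumes each burst is given as a mapping from destination IP (as an integer, probed in ascending order) to the observed IP ID; the hz, δ and slack parameters are those of the inequality above.

from itertools import combinations

def candidate_pairs(burst, hz=300, delta=0.6, slack=10):
    """burst: {destination IP (as int): observed IP ID}. Returns pairs (d1, d2),
    d1 < d2, whose IP IDs satisfy the collision inequality of Section 6.4."""
    bound = hz * delta + slack
    result = set()
    for d1, d2 in combinations(sorted(burst), 2):
        if (burst[d2] - burst[d1]) % 2 ** 16 <= bound:
            result.add((d1, d2))
    return result

def full_candidates(burst_b5, burst_b6, **kw):
    """The set P: pairs that look like bucket collisions in both bursts."""
    return candidate_pairs(burst_b5, **kw) & candidate_pairs(burst_b6, **kw)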

6.6 Attack Phase 2 – Exhaustive Key Search

In the second phase, the tracking server runs an exhaustive search over the key space of size 2^W, where W = 32 for algorithms A1 and A2, W = 41 for A3 on Linux (x64), and W = 48 for A3 on Android (ARM64). For each candidate key, the algorithm counts how many IP pairs in P are predicted by that key (i.e. mapped to the same bucket). It is expected that only for one key – the correct one – will this count exceed a threshold T, in which case this key is returned as the correct key (and the device ID). See Algorithm 3 for details (for A3, a candidate key is the concatenation of the 32-bit hash key and the extracted net-namespace bits).

1: procedure Generate-ID(P)                          ▷ P is defined in Section 6.5
2:     if |P| < T then
3:         return ERROR
4:     found ← ⊥
5:     for all candidate keys K in the key space do
6:         c ← |{(d1, d2) ∈ P : K maps d1 and d2 to the same bucket}|
7:         if c ≥ T then
8:             found ← K
9:     if found ≠ ⊥ then
10:        return found                               ▷ Needs special treatment if more than one key passes
11:    else
12:        return ERROR
Algorithm 3 Exhaustive key search

We assume here knowledge of the version of the algorithm in use – A1, A2 or A3. For A1 and A2, the key space size is 2^32; for A3, it is 2^41 for the x64 architecture and 2^48 for the ARM64 architecture (see Section 6.7).
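The following Python sketch mirrors Algorithm 3. The bucket() function is only a placeholder for the kernel's keyed mapping – a real implementation must reproduce the exact Jenkins-based hash (and, for A3, the net-namespace mixing) of the targeted kernel; saddr is the device's source IP collected by the snippet.

M = 2048

def bucket(saddr: int, daddr: int, key: int) -> int:
    # Placeholder keyed mapping; replace with the target kernel's hash.
    return hash((saddr, daddr, 17, key)) % M

def exhaustive_key_search(P, saddr: int, key_bits: int, threshold: int):
    """Algorithm 3: return the unique key that maps at least `threshold` of the
    observed pairs in P to a common bucket, or None (ERROR) otherwise."""
    if len(P) < threshold:
        return None
    found = None
    for key in range(2 ** key_bits):
        matches = sum(1 for d1, d2 in P
                      if bucket(saddr, d1, key) == bucket(saddr, d2, key))
        if matches >= threshold:
            if found is not None:      # more than one key passed: needs special treatment
                return None
            found = key
    return found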

As explained in Section 6.11, false positives are very rare – they can be handled, but since this complicates the analysis logic, it is left out of the paper.

Attack run time

With |P| candidate pairs, the run time of Algorithm 3 is proportional to |P| · 2^W. The distribution of |P| depends on hz; Table 1 summarizes its expectancy and standard deviation for common hz values, approximated by a computer simulation (100 million iterations).

hz [Hz]   E(|P|)   σ(|P|)   σ/E
100       50.59    7.39     0.146
250       65.47    8.60     0.131
300       70.45    8.79     0.125
Table 1: Approximated distribution of |P| (expectancy and standard deviation) for common hz values
Time/memory optimization

When the number of devices to be measured is much smaller than the key space size, it is possible to optimize the technique for repeat visits. The optimization simply amounts to keeping a list of already-encountered keys (device IDs) and trying them first. If a match is found (i.e., this is a repeat visit), there is clearly no need to continue searching the rest of the key space. Otherwise, the algorithm needs to go through the remaining key space.

Targeted tracking

Even if the key space is too large to make it economically efficient to run large scale device tracking, it is still possible to use it for targeted tracking. The use case is the following: The tracking snippet is invoked for a specific target (device), e.g. when a suspect browses to a honeypot website. At this point, the tracker (e.g. law enforcement body) extracts the key, possibly using a very expensive array of processors, and not necessarily in real time. Once the tracker has the target’s key, it is easy to test any invocation of the tracking snippet against this particular key and determine whether the connecting device is the targeted device. Moreover, if the attacker targets a single device (or very few devices), it is possible to reduce the number of IP addresses used for re-identifying the device, by using only IP addresses which are part of pairs that collide (into the same counter bucket) under the known device key. Thus we can use a single burst with as few as 5 IP pairs (10 addresses altogether) per device to re-identify the device. The dwell time in this case drops to near-zero.

6.7 The Effective Key Space in Attacking Algorithm A3

In Algorithm A3, 32 bits derived from the net namespace pointer are extracted by the function F_n (Section 6.2) and added to the calculation of the hash value. The attack run time therefore depends on the effective key space size, i.e. on how many of these bits are unknown to the attacker.

A detailed analysis for Linux kernel versions 4.8 and above (on x64), and 4.6 and above (on ARM64, i.e. Android) appears in Appendix F. The conclusion is that if KASLR is turned off then the effective key space size is 32 bits in both x64 and ARM64. If KASLR is turned on, then the effective key space size is 41 bits in x64 and 48 bits in ARM64.

6.8 KASLR Bypass for Algorithm A3

By obtaining F_n(net) as part of Attack Phase 2 (Section 6.6), the attacker gains 32 bits of the address of the net structure. In single-container systems such as desktops and mobile devices, this net structure (init_net) resides in the .data segment of the kernel image, and thus has a fixed offset from the kernel image load address. In the default x64 and ARM64 configurations, these 32 bits completely reveal the random KASLR displacement of net. This suffices to reconstruct the kernel image load address and thus fully bypass KASLR. See Appendix F for more details.

6.9 Optimal Selection of N

Since IP addresses are at a premium, we choose the minimal number N of IP addresses such that, at the threshold T for which the combined error is minimal, both the false-negative probability (FN) and the false-positive probability (FP) are acceptably small. We assume hz = 300 (the worst case scenario). For simplicity, at this stage we neglect packet loss. For false negatives, we use the Poisson approximation of birthday collisions [4]: the number of true colliding pairs is approximately Poisson-distributed with mean N(N−1)/(2M), so FN is approximately the probability that such a Poisson variable falls below T.

For false positives, we assume that a burst contains the average number of false pairs and true pairs. A false (incorrect) key maps any given pair to the same bucket with probability 1/M, so the number of pairs it matches is approximately binomially distributed; from this we obtain the probability that a single false key matches at least T pairs, and hence (via a union bound over the roughly 2^W false keys) the probability that at least one false key passes the threshold.

Assuming W = 48 (the worst case – Android), we enumerated over all values of T for each candidate N to find the optimal T (per N). We plot the results in Fig. 1.

Figure 1: FP and FN probabilities for candidate values of N (each at its optimal threshold T)

As can be seen, a “round” N of a few hundred addresses is the minimal value satisfying our error targets at its optimal T; we use this N in the rest of the analysis.
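A rough sketch of this kind of error estimate follows, under the Poisson/binomial approximations described above; the N and T values are illustrative assumptions, and scipy is used only for the distribution tails.

from scipy.stats import binom, poisson

M = 2048          # counter buckets
N = 400           # destination IPs (assumed, for illustration)
W = 48            # key bits (worst case -- Android A3)
T = 10            # acceptance threshold (assumed, for illustration)

lam = N * (N - 1) / (2 * M)                  # expected number of true colliding pairs
fn = poisson.cdf(T - 1, lam)                 # FN: fewer than T true pairs are observed

observed_pairs = round(lam)                  # size of P, ignoring surviving false pairs
q = binom.sf(T - 1, observed_pairs, 1 / M)   # one wrong key matching >= T pairs
fp = min(1.0, 2 ** W * q)                    # union bound over the whole key space

print(f"FN ~ {fn:.2e}, FP ~ {fp:.2e}")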

6.10 A More Accurate Treatment

Using a computer simulation, we approximated the distributions of the number of all collisions and of true collisions (using a large number of simulation runs). The simulations took into account 1% packet loss. With these distributions, we can calculate more accurate approximations of FP and FN.

We enumerated over values of T for hz = 300 (the worst case – Android) and plotted the FP and FN probabilities in Fig. 2 (omitting negligible values).

Figure 2: FP and FN probabilities vs. the threshold T (hz = 300)

As can be seen, the minimal combined error is attained at a particular threshold T, where both FP and FN are very small. We obtain the same optimal value as in Section 6.9, which indicates that the approximation steps taken there are reasonable.

6.11 Practical Considerations

Controlling packets from the browser

As explained in Section 6.3.2, it is possible to emit UDP traffic to arbitrary hosts and ports using WebRTC. The packet payload is not controlled. The tracker can use the UDP destination port in order to associate STUN traffic to the same measurement.

Synchronization and packet transmission/arrival order

Unlike the Windows technique, in the Linux/Android tracking technique there is no need to know the exact transmission order of the packets within a single burst.

False positives and false negatives

Using a computer simulation with N destination IP addresses, a burst length of δ seconds, and a packet loss rate of 0.01, we calculated approximations of the false-negative and false-positive rates; both are very small. These approximations were computed assuming hz = 300 (the worst case – Android). See Section 6.10 for more details.

Device ID collisions

The expected number of pairs of devices with colliding IDs, due to the birthday paradox, given D devices and a key space of size 2^W, is approximately D(D−1)/2^(W+1). For Algorithms A1 and A2 the key space size is 2^32, and device ID collisions will start to occur once there are several tens of thousands of devices; even with on the order of a million devices, collisions affect only about 0.00023 of the population (2 out of every 10,000 devices). For Alg. A3, the key space size (with KASLR) is at least 2^41, so collisions start showing up only when the number of devices is in the millions, and even then they affect only about 0.00006 of the population.
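The following worked example reproduces this birthday-paradox arithmetic; the device population D = 10^6 is an illustrative assumption.

def collision_stats(D: int, W: int):
    """Birthday-paradox figures for D devices and a 2^W key space."""
    expected_colliding_pairs = D * (D - 1) / 2 ** (W + 1)
    fraction_affected = (D - 1) / 2 ** W     # chance a given device shares its ID
    return expected_colliding_pairs, fraction_affected

for W in (32, 41, 48):
    pairs, frac = collision_stats(1_000_000, W)      # one million devices (illustrative)
    print(f"W={W}: ~{pairs:,.0f} colliding pairs, fraction affected ~{frac:.6f}")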

Dwell time

In order to record the required bursts, the snippet page needs to be loaded in the browser for 8–9 seconds. Navigating away from the page immediately terminates the STUN traffic.

Environment factors

All the UDP-related topics in Section 4.8 apply as environment factors to the Linux/Android tracking technique.

Longevity

The device ID remains valid as long as the device is not shut down or restarted. Mobile devices are rarely shut down, and are typically restarted only for updates, which happen once every several months or even less frequently.

Scalability

The attack is scalable. Device ID collisions are rare even with many millions of devices (see above). False positives and false negatives are also rare (see above). From a resource perspective, the attack uses a fixed number of IPs and servers, and a fixed amount of RAM/disk. The required CPU power is proportional to the number of devices measured per time unit. Network consumption per test is negligible – approx. 13.5 KB/s (at the IP level) during a measurement.

6.12 Possible Countermeasures

Increasing M

Changing the algorithm to use a larger number of counters M will reduce the likelihood of pairs of IP addresses using the same counter. In response to such a change, the tracker can increase the number N of IP addresses that it uses. The expected number of collisions is N(N−1)/(2M) ≈ N²/(2M), and therefore increasing M by a factor of c requires the attacker to increase N by only a factor of √c.

On the other hand, the kernel memory consumed by the counter array also grows (linearly in M), and only when M becomes extremely large is practically no information revealed to the tracker. It is probably safe to assume that the tracker can handle a substantial increase in N, which means that in order to stop the attacker the IP ID generation algorithm must increase M by the square of that factor, making it too memory-expensive to be practical. (Decreasing M only makes it easier for the tracker, except for M = 1, in which case the system degenerates into a global counter (with random hops depending on the time since the last transmission); this renders the attack ineffective, yet has several other security issues, such as those described for example in [51].)
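A quick sanity check of the square-root scaling; the target number of colliding pairs below is an arbitrary illustrative value.

import math

M0, target_pairs = 2048, 40          # current bucket count; desired colliding pairs (assumed)

def ips_needed(M: int, pairs: float) -> int:
    # expected colliding pairs ~ N^2 / (2M)  =>  N ~ sqrt(2 * M * pairs)
    return math.ceil(math.sqrt(2 * M * pairs))

print(ips_needed(M0, target_pairs))              # ~405 IPs today
for c in (4, 100, 10_000):
    print(c, ips_needed(M0 * c, target_pairs))   # N grows only like sqrt(c)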

Increasing the key size (W)

This can be an effective countermeasure against the exhaustive search phase, though the pair-collection phase is unaffected by it. Still, some choices of the hash function might allow fast cryptanalysis.

Strengthening h

Our analysis does not rely on any property of the hash function h, except that it is more-or-less uniform. Thus, changing h (while keeping the same key space size) will not affect our results.

Replacing the algorithm

See the last item in Section 4.9.

7 Experiment – Attacking Linux and Android Devices in the Lab

In order to verify that we can extract the key used by Linux and Android devices, we need to control hundreds of IP addresses. Controlling such a magnitude of Internet-routable IP addresses was logistically out of scope for this research. Therefore we had to settle for an in-the-lab setup, which naturally limited the number of devices we could test.

7.1 Setup

We connected the tested devices to our own WiFi access point, which advertised our laptop as a network gateway. Then we launched a Chrome-like browser inside the Linux/Android device, and navigated to a page containing a tracking snippet. The tracking snippet used WebRTC to force UDP traffic to a list of hosts, and this traffic passed through our laptop (as a gateway) and was recorded.

We then ran the collision-collection logic (Phase 1), and fed its output (IP pairs whose IP IDs collide) into the exhaustive key search logic (Phase 2). For KASLR-enabled devices, we also provided the algorithm with the offset (relative to the kernel image) of init_net, which we extracted from the kernel image file given the build ID (which can be inferred e.g. from the User-Agent HTTP request header). We expected the algorithm to output a single key, matching a large fraction of the collisions.

7.2 Results

We tested 2 Linux laptops and 6 Android devices, together covering the vast majority of operating-system and hardware parameters that govern IP ID generation. The results of all tests were positive: our technique extracted a single key and, where applicable, a kernel address of init_net (which was identical to the address in /proc/kallsyms). Note that due to hardware availability constraints, in the Pixel 2 XL case we provided the algorithm with the correct 16-bit kernel displacement, reducing the key search space from 2^48 to 2^32. Table 2 provides information about the common kernel versions, their parameter combinations and the tested devices.

The Attack Time column is the extrapolated attack time in seconds with 10,000 Azure B1s machines, based on E(|P|) from Table 1: the average attack time is E(|P|) · 2^W · κ, where κ is the time it takes a single B1s machine to test a single key against a single pair, divided by 10,000. The standard deviation of the attack time for a given hz is the σ/E ratio in Table 1 times the average attack run time in Table 2. From a calibration run (a single B1s machine, 10 pairs, 294.83 seconds run time) we calculated κ, and populated the Attack Time column in Table 2 accordingly.

O/S             | Kernel version                           | Alg. | hz [Hz] | KASLR | NET_NS | n          | W [bits] | Tested system                               | Attack time [s]
Linux (x64)     | 4.19+                                    | A3   | 250     | Yes   | Yes    | 12         | 41       | Dell Latitude E7450 laptop                  | 99
Linux (x64)     | 4.8–4.18.x                               | A3   | 250     | Yes   | Yes    | 6          | 41       | Dell Latitude E7450 laptop                  | 99
Android (ARM64) | 4.4.56+, 4.9, 4.14                       | A3   | 300/100 | Yes   | Yes    | 6/7        | 48       | Pixel 2 XL                                  | 13,612/9,775
Android (ARM64) | 3.18.17+, 3.4.109+                       | A2   | 100     | No    | No     | Don't care | 32       | Redmi Note 4, Xiaomi Mi4                    | 0.15
Android (ARM64) | 3.18.0–3.18.6, 3.10.53+, 3.4.103–3.4.108 | A1   | 100     | No    | No     | Don't care | 32       | Samsung J7 prime, Samsung S7, Meizu M2 Note | 0.15
Table 2: Common Linux/Android Kernels and Their Parameter Combinations (n is the right-shift used by F_n; W is the effective key size in bits)
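The extrapolation behind the Attack Time column can be reproduced as follows. The size of the calibration key space (CALIB_KEYS) is an assumption chosen to be consistent with the quoted calibration figures, not a value stated in the text.

CALIB_SECONDS, CALIB_PAIRS = 294.83, 10
CALIB_KEYS = 2 ** 32                 # assumed size of the calibration key space
MACHINES = 10_000

kappa = CALIB_SECONDS / (CALIB_PAIRS * CALIB_KEYS * MACHINES)  # s per (key, pair), cluster-wide

def attack_time(expected_pairs: float, key_bits: int) -> float:
    """Average Phase-2 run time, in seconds, on the 10,000-machine cluster."""
    return expected_pairs * 2 ** key_bits * kappa

print(f"{attack_time(65.47, 41):.0f} s")    # Linux x64, hz = 250 -> ~99 s
print(f"{attack_time(70.45, 48):.0f} s")    # Android,  hz = 300 -> ~13,600 s
print(f"{attack_time(50.59, 32):.2f} s")    # A1/A2,    hz = 100 -> ~0.15 s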
Applicability in-the-wild

While our tests were carried out in the lab, we argue that the results are representative of an in-the-wild experiment with the same devices. We list the following potential differences between in-the-lab and in-the-wild experiment, and for each difference, we note why our experiment can be projected to an in-the-wild scenario.

  • Packet loss: our technique is not sensitive to packet loss. We ran false positive/negative computer simulations (assuming 1% packet loss) supporting this fact.

  • Network latency: our technique is not sensitive to network latency (which is just a constant time-shift, from our perspective).

  • UDP jitter: this only affects correctly splitting the traffic into bursts. Our technique uses the “late” bursts, thus assuring that the bursts are well separated time-wise and that a jitter on the order of σ does not affect tracking.

  • Network interference (IPID modification): this issue was already evaluated in-the-wild in the Windows experiment, and the Windows results can be applied to the Linux/Android use case.

  • Packet reordering (within a burst): Our technique does not rely on packet order within a burst.

Thus we conclude that our results (and henceforth, the practicality of our technique) are applicable in-the-wild.

8 Conclusions

Our work demonstrates that using non-cryptographic random number generation of attacker-observable values (even if the values themselves are not security sensitive), may be a security vulnerability in itself, due to an attacker’s ability to extract the key/seed used by the algorithm, and use it as a fingerprint of the system.

Specifically, we find that the IP ID generation scheme proposed in RFC 7739 [18, Section 5.3], and implemented in Windows 8 and later is vulnerable to an attack of this type, and that the Linux/Android IP ID generation algorithm is also vulnerable to such an attack. We demonstrate these attacks and are able to extract device IDs for systems running Windows, Linux and Android.

We stress that any replacement cryptographic algorithm must not be hampered by using a key that is too short, in order to avoid a key enumeration attack. Also, as a security measure, we strongly recommend generating unique keys for such cryptographic usage, without resorting to using secret data that is used for other purposes (which – in case of a cryptographic weakness in the algorithm – can leak out).

9 Acknowledgements

This work was supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Directorate in the Prime Minister’s Office.

We would like to thank the anonymous USENIX 2019 reviewers for their feedback, Assi Barak for his help to the project, as well as Avi Rosen, Sharon Oz, Oshri Asher and the Kaymera Team for their help with obtaining a rooted Android device.

References