Auto-tuning Distributed Stream Processing Systems using Reinforcement Learning

09/14/2018 ∙ by Luis M. Vaquero, et al.

Fine tuning distributed systems is considered to be a craftsmanship, relying on intuition and experience. This becomes even more challenging when the systems need to react in near real time, as streaming engines have to do to maintain pre-agreed service quality metrics. In this article, we present an automated approach that builds on a combination of supervised and reinforcement learning methods to recommend the most appropriate lever configurations based on previous load. With this, streaming engines can be automatically tuned without requiring a human to determine the right way and proper time to deploy them. This opens the door to new configurations that are not being applied today since the complexity of managing these systems has surpassed the abilities of human experts. We show how reinforcement learning systems can find substantially better configurations in less time than their human counterparts and adapt to changing workloads.


1. Introduction

Processing newly generated data and reacting to changes in real time has become a key service differentiator for companies, leading to a proliferation of distributed stream processing systems in recent years (see Apache Storm (Toshniwal2014, ), Spark Streaming (zaharia2012discretized, ), Twitter’s Heron (Kulkarni2015, ), or LinkedIn’s Samza (noghabi2017samza, )).

Operators must carefully tune these systems to balance competing objectives such as resource utilisation and performance (throughput or latency). Streaming workloads are also uncertain, with operators having to account for large and unpredictable load spikes during provisioning, and be on call to react to failures and service degradations. There is no principled way to fully determine a sufficiently good configuration and how to adapt it to workload changes (or available resources).

Data engineers typically try several configurations and pick the one that best matches their service level objectives (SLOs) (Floratou2017, ). However, even a single system has a daunting number of configuration options. The situation is worse in distributed environments, where remote interactions, networks and remote storage come into play. Finding optimal configurations is NP-hard (Sullivan2004, ), making it difficult for humans to understand the impact of one configuration change, let alone the interactions between multiple ones.

This difficulty in tuning systems impacts costs, especially those related to finding highly specialised administrators. Personnel is estimated to account for almost 50% of the total ownership cost of a large-scale data system (Bara2008, ), and many data engineers and database administrators spend nearly 25% of their time on tuning (Debnath2008, ). With increasing complexity, the goal of finding a working configuration in reasonable time has surpassed the abilities of humans. Indeed, Xu et al. (Xu2015, ) report that developers tend to ignore over 80% of configuration options, leaving considerable optimisation potential untapped.

The configuration problem requires exploring a vast potential space while adapting to changes in order to preserve pre-established SLOs (critical in latency-sensitive applications). This context seems well suited for adaptive machine learning techniques, such as Reinforcement Learning (RL). RL systems are already adopted in other domains, such as self-driving cars; they take the best decision based on prior experience, while also allowing pseudo-random exploration so that the system can adapt to changes. However, the adoption of RL in distributed data management systems is in its early stages due to two main factors: 1) too many potential actions to explore (the famous DeepMind papers cope with tens of actions at most (silver2016, )), and 2) the difficulty of learning which of the many monitoring metrics affect our SLO (Thalheim2017, ).

In this paper we address these limitations and present a system that applies RL for automatically configuring stream processing systems. We use a combination of machine learning methods that 1) identify the most relevant metrics for our SLO (processing latency), 2) select for each metric the levers that have the highest impact, and 3) discretise numeric configuration parameters into a limited set of actions.

After training our system with a variety of workloads, we show that the obtained configurations significantly improve on the results obtained by human engineers. The system requires little time to suggest configuration actions, and it automatically adapts to changes in the streaming workload.

The remainder of this paper is organised as follows. Section 2 presents the most relevant techniques used in this work. Next, in Section 3 we illustrate how these techniques have been architected into a more systematic and reproducible approach. In Section 4 we show how the system converges to a better configuration in tens of minutes, while adapting to changing workloads. Section 5 highlights the most related work, while we discuss our main findings and future work in Section 6. We finalise with a summary of the main findings of this work in Section 7.

2. Methodology

We detail in this section the techniques underpinning our automated configuration adaptation system. First, we present how we generated our training data from tens of thousands of clusters running with random configuration values. We then detail the process that automatically selects a subset of metrics and configuration levers. Finally, we present our Reinforcement Learning model for the problem of automated systems configuration.

2.1. Training Data Generation

We ran Spark Streaming clusters under various workloads and configurations (see below), in order to collect runtime performance metrics from the application as well as the infrastructure it runs on.

We used a range of synthetic and real workloads to avoid over-optimising the model beforehand for individual scenarios. Synthetic workloads were modelled with Poisson distributions for event arrival with different rate (λ) values, as well as with classic trapezoidal loads (ramp-up, stable, and ramp-down periods). We also used a subset of the benchmark described in (Chintapalli2016, ), as well as a proprietary dataset coming from a major manufacturer of end-consumer connected devices. Lists of valid values or ranges were generated for continuous variables based on the configuration of the underlying virtual machines.
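As an illustration, Poissonian and trapezoidal arrival patterns like those described could be generated along these lines (function names and parameters are our own, not the paper's workload harness):

```python
import numpy as np

def poisson_arrivals(lam, minutes, seed=0):
    """Events per minute drawn from a Poisson distribution with rate `lam`."""
    rng = np.random.default_rng(seed)
    return rng.poisson(lam, size=minutes)

def trapezoidal_load(peak, ramp_up, stable, ramp_down):
    """Classic trapezoidal load: linear ramp up, plateau, linear ramp down."""
    up = np.linspace(0, peak, ramp_up, endpoint=False)
    flat = np.full(stable, peak)
    down = np.linspace(peak, 0, ramp_down)
    return np.concatenate([up, flat, down])
```

Feeding these per-minute rates into an event generator yields the two synthetic workload families used for training.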

We instrumented our Spark clusters with standard monitoring collection techniques¹. We store per-minute events forming time series of 90 metrics across all the nodes in the cluster (for each of the 80 clusters we have 9 Spark worker nodes and a driver node).

¹Following Spark recommendations for advanced monitoring settings, see https://spark.apache.org/docs/latest/monitoring.html#metrics. OS profiling tools such as dstat, iostat, and iotop provide fine-grained profiling on individual nodes. JVM utilities include jstack for stack traces, jmap for heap dumps, and jstat for time-series statistics. We also used perf, systemtap, gprof, and systemd as profiling tools accounting for hardware and software events. We used a total of 90 metrics provided by these tools, together with the latency and throughput of the Spark processing.

Figure 1. Left: Process to generate workloads and gather monitoring data and associated lever configuration values. Right: Extraction of the most relevant metrics and their associated levers.

We deployed 80 Spark clusters of 10 nodes (64 GB RAM, 8 vCPUs each) and ran a variety of workloads on them (2 types of Poissonian, (Chintapalli2016, ), trapezoidal, and proprietary workloads were run on 16 clusters each). Every 15 min we randomly changed the configuration of these clusters. We selected a total of 109 levers from those available in Spark 2.3², and changed one of them each time. Some configurations were disallowed (e.g. too little memory in the driver node) to make sure all configurations resulted in runnable conditions. In total we generated approximately 100,000 different configurations. The outcome of this process for each cluster is a matrix of infrastructure and application metrics where one of the dimensions is time, as shown on the left-hand side of Figure 1.

²https://spark.apache.org/docs/latest/configuration.html

2.2. Metrics Selection

In order to limit the processing effort and improve clustering results, we filter out metrics showing a constant trend or low variance (< 0.002) (Thalheim2017, ). This step dropped 10% of the metrics.
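A minimal sketch of this pruning step, using the 0.002 variance threshold from the text (the function name is our own):

```python
import numpy as np

def filter_metrics(samples, names, threshold=0.002):
    """Drop metrics whose time series are near-constant, i.e. whose
    variance falls below `threshold` (0.002 in our experiments)."""
    keep = [i for i in range(samples.shape[1])
            if np.var(samples[:, i]) >= threshold]
    return samples[:, keep], [names[i] for i in keep]
```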

As shown on the right-hand side of Figure 1, we use two classic techniques for selecting the most relevant metrics: 1) Factor Analysis (FA), which transforms the high-dimensional streaming monitoring data into lower-dimensional data, and 2) k-means, to cluster this lower-dimensional data into meaningful groups (VanAken2017, ). The data obtained as described in the previous subsection were normalised (standardised) before doing FA. For every sample (one every 15 minutes), we took the average over 4 minutes.

FA assumes that the information gained about the interdependencies between variables (plus an error element) can be used later to reduce the set of variables in a dataset, in an aim to find independent latent variables. A factor (or component) is retained if the associated eigenvalue is bigger than the 95th percentile of the distribution of eigenvalues derived from random data. We found that only the initial factors are significant for our Spark metrics, as most of the variability is captured by the first couple of factors.

To reconstruct missing data (e.g. due to network issues or transient failures), we use third-order (cubic) spline interpolation to minimise distortion to the characteristics of the metric time series (Liu2009, ).
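Assuming SciPy is available, the gap-filling step might look like the following sketch (the function name is ours):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(t, values):
    """Reconstruct missing samples (NaNs) in a metric time series with a
    third-order (cubic) spline fitted on the observed points."""
    values = np.asarray(values, dtype=float)
    observed = ~np.isnan(values)
    spline = CubicSpline(t[observed], values[observed])
    filled = values.copy()
    filled[~observed] = spline(t[~observed])
    return filled
```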

The FA algorithm takes as input a matrix X with metrics as rows and lever values as columns; entry X[i, j] is the value of metric i on lever value j. FA returns a lower-dimension matrix U, with metrics as rows and factors as columns, where entry U[i, j] is the coefficient of metric i in factor j. The metrics are scatter-plotted using the elements of row i of U as coordinates for metric i.

The results of the FA yield coefficients for metrics in each of the top two factors. Closely correlated metrics can then be pruned and the remaining metrics are then clustered using the factor coefficients as coordinates.

As metrics will be close together if they have similar factor coefficients, we clustered the metrics via k-means, using each metric's row of factor coefficients as its coordinates. We keep a single metric for each cluster, namely the one closest to the centre of the cluster. We iterated over several values of k and took the one that minimised the cost function (the sum of distances between data points and their cluster centre) (VanAken2017, ).
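The selection pipeline described above — standardise, factor-analyse, cluster the factor coefficients, keep the metric nearest each centroid — can be sketched with scikit-learn (a simplified sketch: the iteration over k and the pruning of closely correlated metrics are omitted, and names are ours):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def select_metrics(X, names, n_factors=2, k=7, seed=0):
    """Project standardised metrics onto the top factors, cluster the
    factor coefficients with k-means, and keep one metric per cluster
    (the one closest to the cluster centre)."""
    X = StandardScaler().fit_transform(X)            # samples x metrics
    fa = FactorAnalysis(n_components=n_factors, random_state=seed).fit(X)
    coords = fa.components_.T                        # metrics x factors
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(coords)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(coords[members] - km.cluster_centers_[c], axis=1)
        selected.append(names[members[np.argmin(dists)]])
    return selected
```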

Figure 2 (left) shows a two-dimensional projection of the scatter-plot and the metric clusters. This process identifies a total of 7 clusters, which correspond to distinct aspects of a system’s performance. Our results show some expected relationships (cache performance counters are in the same group as metrics related to JVM performance, like garbage collection). We can also see that some of these metrics are well-organised (e.g. metrics related to overall input-output performance are close to memory metrics, but they fall under different categories). From the original 90 metrics, we were able to reduce the number of metrics by 92% (this result varies slightly for different runs on the same data due to the random initialisation of the clusters, shown in Figure 2 (right)). The FA plus clustering analysis is run separately in two batches: 1) the Spark driver node and 2) all the Spark worker nodes, in order to adequately assess the metrics that are exclusive to the driver (e.g. Spark driver memory).

Figure 2. Clusters of metrics resulting from the FA + k-means analysis

2.3. Ranking Most Actionable Metrics per Lever

Having reduced the metric space, we then try to find the subset of configuration levers with the highest impact on the target objective function. We use a feature selection technique for linear regression (including polynomial features), the Lasso (Tibshirani2011, ). The process can be observed in Figure 3 (top).

Figure 3. Top: selection of levers with the strongest correlation with performance. Bottom: Reinforcement learning configuration feedback loop

We represented the configuration levers to be tuned as the set of independent variables X, whereas the dependent variables y of the linear regression represented the preselected metrics. We convert categorically valued levers into continuous variables by numbering the categories. These variables are then normalised (value minus mean, divided by standard deviation). The Lasso adds an L1 penalty, equal to a constant λ times the sum of absolute weights, to the cost function. Because each non-zero weight contributes to the penalty term, Lasso effectively shrinks some weights and forces others to zero, thus automatically selecting features (non-zero weights) and discarding others (zero weights) (VanAken2017, ).

As indicated by (Hastie2001, ), we start with all weights set to zero (no features selected). We then decrease the penalty in small increments, recompute the regression, and track which features are added back to the model at each step. The order in which the levers first appear in the regression determines how much of an impact they have on the target metric (VanAken2017, ).

In our experiments, each invocation of Lasso takes around 30 min and consumes 20 GB of memory. The Lasso path algorithm guarantees that the selected levers are ordered by the strength of statistical evidence and that they are relevant (assuming that the data are well approximated by a linear model).
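This lever-ranking step can be approximated with scikit-learn's `lasso_path`, which computes the whole regularisation path; a hedged sketch (function and lever names are illustrative, not the paper's code):

```python
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

def rank_levers(X, y, names):
    """Order levers by the penalty level at which they first enter the
    Lasso path (earlier entry = stronger statistical evidence)."""
    X = StandardScaler().fit_transform(X)
    alphas, coefs, _ = lasso_path(X, y)   # alphas are returned in decreasing order
    first_entry = []
    for j in range(coefs.shape[0]):       # coefs: n_features x n_alphas
        nonzero = np.nonzero(coefs[j])[0]
        first_entry.append(nonzero[0] if nonzero.size else len(alphas))
    order = np.argsort(first_entry)
    return [names[j] for j in order]
```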

2.4. Automated Tuning

2.4.1. Dynamic Lever Discretisation

Before the configurator can modify any of the levers, it needs to discretise them. We follow a process for dynamically categorising continuous variables as described in (Subiros2015, ). Briefly, each lever corresponding to a continuous variable is manually marked with the min and max values from the sampled data. The initial bin size is set to (max − min)/10. If the RL configurator assigns the maximum bin a number of times, the max is increased by one bin (a new bin is added so that the new max equals the old max plus one bin size). The bin size is dynamically updated as follows: if the same bin is assigned a configurable number of times, the bin size is halved. If this happened for the first time, we would have 20 bins after this initial halving. The algorithm can also merge bins, as described in (Subiros2015, ).

The central value of the bin is taken as the value for the configuration parameter. We also add a smaller ridge term for each configuration the RL configurator selects. The ridge term adds or subtracts a small value to the central value of the bin, which is helpful in “noisy” cloud environments. This ridge factor means that the configuration chosen by the configurator for some of the levers is modulated towards the top or the bottom of the discretisation (binning) interval.
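A rough sketch of this dynamic binning scheme; the hit-count threshold (`patience`) is our own assumption, since the text only says "a configurable number of times", and bin merging and the ridge term are omitted:

```python
class DynamicBins:
    """Dynamic lever discretisation: a continuous lever starts with 10 bins
    between `lo` and `hi`; repeatedly hitting the top bin extends the range
    by one bin, and repeatedly hitting any other bin halves the bin size
    (10 -> 20 bins on the first halving)."""

    def __init__(self, lo, hi, n_bins=10, patience=3):
        self.lo, self.hi = lo, hi
        self.size = (hi - lo) / n_bins
        self.patience = patience          # hits before adapting (assumed)
        self.hits = {}

    def centre(self, bin_idx):
        """Configuration value used for a bin: its central value."""
        return self.lo + (bin_idx + 0.5) * self.size

    def record(self, bin_idx):
        self.hits[bin_idx] = self.hits.get(bin_idx, 0) + 1
        n_bins = round((self.hi - self.lo) / self.size)
        if bin_idx == n_bins - 1 and self.hits[bin_idx] >= self.patience:
            self.hi += self.size          # grow the range by one bin
            self.hits[bin_idx] = 0
        elif self.hits[bin_idx] >= self.patience:
            self.size /= 2                # refine: halve the bin size
            self.hits[bin_idx] = 0
```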

2.4.2. Reinforcement Learning Configurator

At this point the system has (1) the set of non-redundant metrics; (2) the set of most impactful configuration levers; and (3) a mechanism to dynamically discretise continuous levers. The configurator now needs to:

  • learn a mapping from non-redundant metrics to impactful configuration levers (this mapping has been massively pruned by Lasso)

  • select a lever, and decide whether to increase or decrease the value

At each time step t, the agent observes some state s_t (values for the 109 levers and 90 metrics) and is asked to choose an action a_t (a change in the value of one of the levers) that triggers a state transition to s_{t+1}, for which the configurator receives reward r_t. We used a delay-dependent reward (see Section 3 below).

Both transitions and rewards are stochastic Markov processes (probabilities and rewards depend only on the state of the environment s_t and the action a_t taken by the configurator). The configurator has no a priori knowledge of the state the system will transition to, nor, consequently, of the reward it will obtain. The interaction with the configured system and the collection of rewards is what drives learning. The goal of learning is to maximise the expected cumulative discounted reward E[∑_t γ^t r_t], where γ ∈ (0, 1] is a factor discounting future rewards.

The configurator picks actions based on a policy, defined as a probability distribution over actions: π(s, a) → [0, 1], where π(s, a) is the probability of picking action a (lever value) in state s.
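Such a policy can be illustrated with a simple linear softmax over per-action state features — a stand-in for the policy network introduced later, with all names and shapes our own assumptions:

```python
import numpy as np

def softmax_policy(theta, state_features):
    """pi(s, a): a probability distribution over actions, obtained by
    linearly scoring each action's state features with parameters theta."""
    scores = state_features @ theta      # one score per action
    scores -= scores.max()               # numerical stability
    p = np.exp(scores)
    return p / p.sum()
```

Sampling an action from the returned distribution realises the stochastic policy described above.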

In every state, the configurator can choose from several possible actions, and it will take one or another based on prior experience (rewards). This is similar to a big lookup table, represented by a state-action value function Q(s, a).

The Bellman equation, Q(s, a) = r + γ max_{a'} Q(s', a'), in essence means that the value of a state depends on the immediate reward and a discounted future reward modulated by γ.

As there are too many states (109 possible levers can be acted on by increasing or decreasing them, plus 90 metrics with continuous values), it is common to use function approximators for Q (Mao2016, ). A function approximator, Q(s, a; θ), has a manageable number of adjustable parameters, θ; we refer to these as the policy parameters and represent the policy as π_θ(s, a). Note that we use a stochastic policy (uniform random) to select among the set of filtered levers.

Deep Neural Networks (DNNs) have recently been used successfully as function approximators to solve large-scale RL tasks (silver2016, ). An advantage of DNNs is that they do not need hand-crafted features.

While the goal of Q-learning is to approximate the Q function and use it to infer the optimal policy (i.e. π(s) = argmax_a Q(s, a)), Policy Gradient methods use a neural network (or other function approximators) to directly model the action probabilities.

Each time the configurator interacts with the environment (generating a data point (s_t, a_t, r_t, s_{t+1})), the neural network parameters θ are tuned using gradient descent so that “good” tunings of the configuration levers will be more likely in the future. The gradient of the objective function above is ∇_θ E[∑_t γ^t r_t] = E[∇_θ log π_θ(s, a) Q^{π_θ}(s, a)], where Q^{π_θ}(s, a) is the expected cumulative discounted reward from (deterministically) choosing action a in state s, and subsequently following policy π_θ.

Using Monte Carlo methods (hastings70, ), the agent samples multiple trajectories and uses the empirically computed cumulative discounted reward, v_t, as an unbiased estimate of Q^{π_θ}(s_t, a_t). It then updates the policy parameters θ via gradient descent: θ ← θ + α ∑_t ∇_θ log π_θ(s_t, a_t) v_t, where α is the step size. We implemented the well-known REINFORCE algorithm (Sutton1999, ). The direction of ∇_θ log π_θ(s_t, a_t) gives how to change the policy parameters in order to increase π_θ(s_t, a_t), the probability of action a_t at state s_t. The size of the step depends on the size of the return v_t; this means that actions that empirically lead to better returns are reinforced. In order to decrease the variance of gradient estimates based on a few local samples, we subtract a baseline value b_t from each return v_t.

The most relevant levers are preferentially used by our RL algorithm (the top lever is used a fraction ε of the time), but the other levers will also be used occasionally (with probability 1 − ε) to keep a good trade-off between exploration and exploitation (see next section).

3. Design

We represent the state of the system (the current monitoring metrics and the key actionable levers) as distinct images: one for each of the monitoring metrics, and another showing the discretised configuration values.

As in (Mao2016, ), we keep a grid per metric, where each cell represents a node in the cluster. There is a matrix for each resource showing the average utilisation of the resources during the running of that configuration.

State (configuration plus metric values) could be represented as a heatmap of utilisation across nodes in the cluster. See Figure 4 for a specific example of the input to the neural network.

Figure 4. An example of a state representation

We craft the reward signal to guide the agent towards good solutions for our objective: minimising event-processing latency. For each tuning, we set the reward to r = −∑_{e ∈ E} l_e, where l_e is the latency for event e and E is the set of events that arrive at the streaming engine.
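Under this definition, the reward computation is a one-liner (a sketch; the negative sign makes reward maximisation equivalent to latency minimisation):

```python
def reward(event_latencies):
    """Reward for a tuning phase: the negative sum of per-event latencies,
    so that maximising the reward minimises total (and hence average)
    event-processing latency."""
    return -float(sum(event_latencies))
```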

We represent the policy as a neural network (called policy network) which takes as input the collection of heatmaps described above, and approximates a probability distribution over all possible actions (as restricted by the output of the Lasso). Each tuning is based on a fixed number of events (with upcoming events being temporarily held until the phase completes).

The tuning phase terminates when all the events have been processed. Each tuning phase can apply several configurations (one change at a time) in order to reach an acceptable latency. The number of configurations within a tuning phase defines an episode. The value of this parameter is determined empirically. Higher values mean slower learning but a better chance of finding a more complex solution (or getting worse performance); lower values yield a more consistent behaviour.

Rewards are only applied at the end of each episode. We also set the discount factor γ to 1, so that the cumulative reward over time coincides with the (negative) sum of the event latencies; hence maximising the cumulative reward mimics minimising the average latency.

State, action, and reward information for all configuration steps of each episode is recorded in order to compute the (discounted) cumulative reward, v_t, at each timestep t of each configuration step, and to train the neural network using a variant of the REINFORCE algorithm (shown in Algorithm 1).

while (hasConverged != true AND iter < maxNumIter) do
     run N episodes with the current policy π_θ, recording (s_t, a_t, r_t) at each step
     for i : 1 .. N do
          for t : 1 .. L do
               v_t^(i) := Σ_{t'=t..L} γ^(t'−t) r_{t'}^(i)   // compute returns
          end for
     end for
     for t : 1 .. L do
          b_t := (1/N) Σ_{i=1..N} v_t^(i)   // compute baseline
          for i : 1 .. N do
               θ := θ + α ∇_θ log π_θ(s_t^(i), a_t^(i)) (v_t^(i) − b_t)
          end for
     end for
end while
return θ
Algorithm 1 Adapted REINFORCE algorithm, based on (Sutton1999, ; Mao2016, )

To reduce the variance, it is common to subtract a baseline value b_t from the returns v_t. The baseline is calculated as the average of the return values, b_t = (1/N) Σ_i v_t^(i), where the average is taken at the same configuration step t across all episodes.
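One policy update of this kind can be sketched in NumPy as follows (the `grad_log_pi` callable and the episode dictionary layout are our own assumptions, not the paper's implementation):

```python
import numpy as np

def reinforce_update(theta, episodes, grad_log_pi, alpha=0.001, gamma=1.0):
    """One REINFORCE-with-baseline update: compute per-step returns v_t for
    each of N episodes, subtract the per-step baseline b_t (average of v_t
    across episodes), and ascend the policy gradient."""
    N = len(episodes)
    L = len(episodes[0]["rewards"])
    returns = np.zeros((N, L))
    for i, ep in enumerate(episodes):
        g = 0.0
        for t in reversed(range(L)):          # v_t = r_t + gamma * v_{t+1}
            g = ep["rewards"][t] + gamma * g
            returns[i, t] = g
    baseline = returns.mean(axis=0)           # b_t across episodes
    for i, ep in enumerate(episodes):
        for t in range(L):
            adv = returns[i, t] - baseline[t]
            theta = theta + alpha * grad_log_pi(theta, ep["states"][t],
                                                ep["actions"][t]) * adv
    return theta
```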

We employed a neural network with a fully connected hidden layer with 20 neurons. The heatmaps are blended to amalgamate 40 configuration steps, and each episode lasts for 250 configuration steps. We used a setup similar to the one reported by (Chintapalli2016, ) (26 nodes creating 17,000 events per second). We update the policy network parameters using the rmsprop (Mao2016, ) algorithm with a learning rate of 0.001. Unless otherwise specified, the results below are from 1000 training iterations.

4. Experimental Evaluation

To evaluate our work, we implemented our techniques using Google TensorFlow and Python’s scikit-learn (Abadi2016, ; Pedregosa2007, ). We loaded Spark with the well-known Yahoo streaming benchmark, which simulates an advertisement analytics pipeline (Chintapalli2016, ). We also tested the performance of the system on real-world production workloads from a top consumer IoT vendor.

We conducted all of our deployment experiments on Amazon EC2. We deployed our system controller together with the workload generator clients. These services are deployed on m4.large instances with 4 vCPUs and 16 GB RAM. The Spark streaming deployment was deployed over 10 m3.xlarge instances with 4 vCPUs and 15 GB RAM. We deployed our tuning manager and repository on a local server with 20 cores and 256 GB RAM.

We first performed a preliminary evaluation to determine:

  • How long does it take to train the policy network?

  • Do changes make sense? How is latency affected?

  • Can we adapt to workload changes?

  • How does it compare to the performance obtained by two expert big data engineers for a production workload?

In the following subsections, we provide answers to all of these questions.

4.1. Training Time

Every 5 minutes, the network tries a new configuration (changing just one lever at a time). Lasso is re-evaluated after each training phase, hence its output remains constant for the duration of the results shown in Figure 5. After 50 minutes, latency was reduced by more than 70%. Figure 5 shows how latency decreases as the policy network selects new configuration values. As can be observed, the first few iterations (departing from the default Spark configuration) are very productive in terms of latency reduction. Two of these changes were exploratory and resulted in no change in performance (sometimes transient performance decreases are observed during exploration). The remaining 7 changes were done in exploitation mode; three of these had minimal impact on performance. The training fully converged after 11 h, with increasingly smaller improvements. Better training times would have been achieved with GPU-boosted hardware, but reacting to changes in tens of minutes can be sufficient for a wide variety of streaming applications.

Figure 5. Reduction in latency as training progresses.

4.2. Execution Breakdown

To better understand what happens when computing a new configuration at the end of an episode, we logged the RL configuration output to record the amount of time spent in the different parts of the tuning process when running a workload similar to (Tibshirani2011, ).

The workload executes continuously, but Kafka buffers new incoming events during Configuration Loading and Preparation in case of node unavailability. This is possible since we designed our Spark jobs to behave idempotently, sinking their processed data into a set of partitioned tables.

The execution of an episode of the RL module can be broken down into:

  • Configuration Generation: time to calculate the best change to the current configuration.

  • Configuration Loading and Preparation: the time it takes the system to install the new configuration and prepare Spark for the new tuning phase (incoming events are buffered by Kafka).

  • Workload Stabilisation: we allow some time for the changes to take effect on the workload. Stabilisation usually occurs within 3 min, but we dynamically detect it by computing trends on the variance of the latency and the most relevant metrics, as defined above.

  • Network Reward and Adaptation: time to apply the reward and update the deep neural network parameters.

Figure 6. Execution Time Breakdown – The average amount of time spent in the parts of the system during an episode.

The results in Figure 6 show the breakdown of the average time spent during an episode. As shown in the figure, the time it takes to run an episode is dominated by two main factors: loading the new configuration and allowing the configuration effects to reach a stationary state (a constant synthetic workload with 100K events throughput was used in this experiment). Depending on the suggested changes, the configuration loading is done without rebooting the nodes in the cluster, unless this is strictly needed for the configuration to take effect. The time it takes to apply the reward, update the network, and create the new configuration is negligible in comparison.

4.3. Quality of the Suggested Changes

We started with a batch interval setting of 10 s, where the system can barely cope (and hence latency increases; see Figure 7A). The network then automatically suggested reducing the batch interval to 2.5 s, resulting in a notable improvement in latency at the highest throughput (Figure 7B). This may seem a negligible difference to a human administrator, but the effect on performance is quite significant.

Figure 7. CDF of the end-to-end latency for different configurations (Left: 10 s Spark batch size vs. Right: 2.5 s Spark batch size).

This is just one example of a suggested configuration. The network starts with a default batch size of 10 s, which in our initial discretisation corresponded to the smallest bin that could be assigned (disregarding ridge effects). The dynamic discretisation mechanism described above enables the RL configurator to dynamically segment a bin into smaller ‘sub-bins’ in order to find the most appropriate configuration.

4.4. Adaptation to Workload Changes

In this subsection, we show how the RL configurator is capable of adapting to radically different workloads. In this example, we use a synthetic benchmark that models a Poisson distribution of event arrivals. λ is the number of events entering the Kafka queue used by the Spark cluster to consume events during a given interval. We assume that for a short interval the rate of arrivals to this queue is fixed and that the distribution of this variable is Poisson, i.e. X ~ Pois(λ). In this case, we modelled two distributions, λ1 and λ2, corresponding to arrival throughputs of 10,000 and 100,000 events, respectively.

We also model the size of the events for each of the two distributions above, as two Gaussian distributions with similar standard deviation (0.3) but different means (0.5 and 5 MB, respectively). Thus, we have distribution 1, with a low rate and small events, and distribution 2, with a high rate and large events.

Figure 8. Adaptation to drastic changes in workload.

As can be observed in Figure 8, the workload is changed from distribution 1 to distribution 2 around minute 65, resulting in a spike in the latency value that nearly doubles the previous baseline. The RL algorithm is capable of improving the situation, but it cannot return to the previous baseline, as the larger events of distribution 2 take longer to process.

4.5. Exploration vs Exploitation

As described above, the best lever (according to our RL configurator) is used a fraction ε of the time. This is referred to as exploitation, since the configurator “exploits” prior knowledge. This section explores the right value of ε depending on how frequently our workload changes.

We alternate between distributions 1 and 2, either 1, 3, or 6 times per hour. We then measure the time it takes for the RL configurator to reach 80% of its previous baseline value. By baseline, we mean the stationary latency that is reached when neither workload nor resources change. These baseline values for distribution 1 and distribution 2 are shown in Figure 8.

ε \ rate      1/60            3/60             6/60
0.9           18.9 min / 1    19.1 min / 1.26  10 min / 1.5
0.8           18.1 min / 1    18.8 min / 1.12  10 min / 1.4
0.7           16.5 min / 1    17.1 min / 1.05  10 min / 1.2
Table 1. Convergence times and baseline multiples (second number in each cell) for different values of the exploration vs. exploitation factor ε (rows) at different rates of workload change (columns).

In Table 1, we show the time to reach a stationary value (the first number in each cell) and the level above the initial baseline that the RL configurator was capable of attaining (the second number). As the frequency of change increases, the RL configurator does not have enough time to find the right configuration, and the experiment terminates with a latency value higher than the reported baseline.

As can be observed from the convergence times in the table, more exploration (lower ε) implies faster adaptation (measured as the time to reach 1.2 times the original baseline) to changes in workload, even at higher frequencies.

Higher values of ε result in worse baselines (the second number in each cell) for the same frequencies, since the RL configurator has a more restricted ability to explore new configurations. The downside of lower values is increased variability (the standard deviations of the mean values reported in Table 1 are 0.15, 0.26, and 0.34 for ε = 0.9, 0.8, and 0.7, respectively).

4.6. Comparison to Human Configurators

We tested how different mechanisms can be employed to configure the cluster. The results in this section are not meant to be exhaustive and should be taken as a qualitative indication only.

We took two expert data engineers, each with more than 10 years of industrial experience, and gave them one day to tweak the cluster configuration. We also recruited nine students from a Computer Science MSc programme, all of whom had previously taken a unit with several lectures and assignments on configuring Spark. Students were given a full week to deliver their best configuration. We compared these two cohorts with our algorithm.

As can be observed in Figure 9, the RL configurator is more efficient than its human counterparts. Unsurprisingly, the experts appeared to do better than the students, but the small sample size prevents us from making any strong claims.

Figure 9. Comparison of the results obtained in tuning the cluster by several different methods.

These results were obtained with large differences in the time it took each method to accomplish the reduction in latency: the RL method reaches much better configurations in a fraction of the time it takes humans. Note that the results reported for the RL method are those obtained after 50 min of running. As previously shown in Figure 5, further improvements would have been possible simply by letting the RL run for 10 h (still significantly less time than its human counterparts needed).

5. Related Work

5.1. Configuration Sampling

Performance prediction techniques can either: 1) compile all possible configurations and record the associated performance scores (maximal sampling), which can be impractically slow and expensive (Weiss2008, ); or 2) intelligently select and execute “enough” configurations to build a predictive model (minimal sampling). For example, Zhang et al. (Zhang2015, ) approximate the configuration space as a Fourier series to derive an expression for how many configurations must be studied to build predictive models with a given error. Continuous learning techniques have also been applied to ensure adaptability (Thalheim2017, ). Our work falls closer to this latter set of continuous learning techniques, but it also relies on gathering a large number of samples (minimal sampling), as an exhaustive screening of the full configuration space is simply infeasible.

One of the problems with massive configuration spaces stems from the fact that many configuration parameters are continuous in nature and can, hence, take an infinite number of values. We built on previous work (Subiros2015, ) to dynamically discretise continuous configuration variables.
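
As a simplified stand-in for that dynamic discretisation (the cited work's actual scheme is more elaborate; the bin count and lever below are illustrative), a continuous lever can be bucketed into equal-width bins:

```python
import numpy as np

def discretise(value, low, high, n_bins):
    """Map a continuous lever value in [low, high] to a bin index
    in 0..n_bins-1 using equal-width bins."""
    edges = np.linspace(low, high, n_bins + 1)
    # np.digitize returns 1..n_bins for in-range values; clip the endpoints.
    return int(np.clip(np.digitize(value, edges), 1, n_bins)) - 1

# e.g. a fractional lever such as spark.memory.fraction in [0, 1], 10 bins
bin_idx = discretise(0.66, 0.0, 1.0, 10)  # -> bin 6
```

Once every continuous lever is mapped to a finite set of bins, the configurator can treat each bin as a discrete action.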

5.2. Metric Dimensionality Reduction Techniques

Modern monitoring frameworks have created an opportunity to capture many aspects of the performance of a system and the virtualised environment it tends to run on. This has resulted in an information crosstalk and overload problem, where many metrics are non-linearly interdependent.

Reducing the size and dimensionality of the bulk of metric data exposed by complex distributed systems is essential. Common techniques include sampling, to enable a systematic trade-off between accuracy and the efficiency of collecting and computing on the metrics (Zhou2000, ; Kollios2003, ; Krishnan2016, ; Quoc2017, ), and data clustering via k-means and k-medoids (Ng1994, ; Ding2015, ).

Classic approaches such as principal component analysis (PCA) (pearson1901lap, ) and random projections (Papadimitriou1998, ) can also be used for dimensionality reduction. However, these approaches either produce results that are not easily interpreted by developers (PCA) or sacrifice accuracy to achieve performance, producing different results across runs (random projections). On the other hand, clustering results can be visually inspected by developers, who can also use application-level knowledge to validate their correctness. Additionally, clustering can uncover hidden relationships which might not otherwise have been obvious.

Thalheim et al. (Thalheim2017, ) focus on analysing interdependencies across metrics by building a call graph. Similar to (VanAken2017, ), we rely on factor analysis to determine the most relevant metrics, hence reducing the problem of metric dimensionality. Like these authors, we also rely on Lasso to find the strongest associations between metrics and configuration levers.
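
The Lasso selection step can be sketched in a few lines of numpy. This is a toy coordinate-descent implementation on synthetic data, not the pipeline used in the paper: the metric/lever setup, the regularisation strength, and the data are all made up for illustration.

```python
import numpy as np

def lasso_cd(X, y, alpha=0.5, n_iter=200):
    """Minimal coordinate-descent Lasso: minimises
    (1/2n)||y - Xw||^2 + alpha * ||w||_1 and returns the weight vector."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            # residual with feature j's own contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            # soft-thresholding drives weak associations to exactly zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w

# Synthetic runs: 5 candidate metrics, only metric 0 actually drives the lever.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # observed metrics per run
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # lever value to explain
w = lasso_cd(X, y)
# Non-zero weights flag the metrics most strongly associated with the lever.
relevant = [j for j in range(5) if abs(w[j]) > 1e-8]
```

Because the L1 penalty zeroes out weak coefficients exactly, the surviving non-zero weights directly name the strongest metric-lever associations, which is what makes Lasso attractive for this step.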

5.3. Machine Learning for System Configuration Optimisation

The usage of machine learning methods to tweak system configuration is not novel. For instance, Zhang et al. (Zhang2018, ) review the usage of deep learning for networking configuration.

Previous work on self-tuning databases focused on standalone tools that target only a single aspect of the database, such as indexes (Skelley2000, ) or partitioning schemes (Agrawal2004, ). Other tools are workload-specific (Debnath2008, ). They all require laborious preparation of benchmark workloads, spare hardware, and expertise in the database internals, which (VanAken2017, ) does not require. OtterTune uses GP regression to learn workload mappings (VanAken2017, ). GP offers several advantages, such as a dynamic trade-off between exploration and exploitation, but it relies on an explicit process of performance prediction.

More recent efforts have focused on optimising the configuration of in-memory databases (pavlo17, ).

STREAM (Arasu2003, ), Aurora (Balakrishnan2004, ), and Borealis (abadi2005, ) were the precursors of a Cambrian explosion in the variety of streaming engines (Akidau2013, ; Kulkarni2015, ; Neumeyer2010, ), many of which (Twitter’s Heron, Storm, Samza, Flink, or Spark Streaming) have been open-sourced. Despite all their sophistication and performance, none of the existing streaming systems are truly self-regulating.

Floratou and Agrawal (Floratou2017, ) presented an architecture enabling streaming engines to self-regulate. They presented policies that adjust the topology configuration so that performance objectives are met even in the presence of slow machines/containers, similar to (Fu2015, ; Gedik2014, ), but lacking the ability to automatically scale based on the input load.

Herodotou et al. (Herodotou11, ) present self-tuning techniques for MapReduce systems based on an optimisable workload graph. Dalibard et al. (Dalibard2017, ) applied Bayesian optimisation techniques to garbage collection. Recent work proposed self-driving relational database systems that predict future workloads and proactively adjust the database physical design (pavlo17, ).

In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. Early works used RL for decentralised packet routing in a switch at small scales (Boyan1993, ). Congestion protocols have also been optimised online and offline using RL (Dong2015, ; Winstein2013, ). Cluster scheduling has also been widely studied recently (Chen2017, ; Grandl2014, ; Isard2009, ; Mao2016, ; Zaharia2010, ).

Unlike prior efforts, our system does not focus on topology configuration, job scheduling, or routing configuration, but on finding which configuration parameters in a streaming engine (Spark Streaming in our case) make a difference in maintaining predefined latency/throughput SLOs.

5.4. Machine Learning for Workload Prediction

Large configuration spaces are a common theme in the literature. As mentioned above, sampling has been the gold standard for tackling this problem (Siegmund2012, ; Guo2013, ; Sarkar2015, ). These solutions tend to require manual configuration, while subjecting the learning systems to very large variance (Nair2017, ). For instance, regression tree techniques for performance prediction require thousands of specific system configurations (Guo2013, ), even when the authors used a progressive random sampling approach, which samples the configuration space in steps of the number of features of the software system in question. Sarkar et al. (Sarkar2015, ) randomly sampled configurations and used a heuristic based on feature frequencies as a termination criterion. The samples are then used to train a regression tree.

Nair et al. (Nair2017, ) used the eigenvalues of the distance matrix between configurations to perform dimensionality reduction, minimising configuration sampling by dropping close configurations while measuring only a few samples.

While these works are related to our approach, we do not intend to build a performance predictor from metrics and configurations. Our approach uses several intermediate steps: 1) selecting relevant metrics, 2) associating metrics with the right configuration levers, and 3) learning metric-to-lever associations in order to improve the performance of the system. Our system uses techniques similar to (VanAken2017, ) for steps (1) and (2). In our system, learning is confined to the third phase, which requires no prior configuration sampling.

Gaussian process models (GPMs) are often the surrogate model of choice in the machine learning literature. A GPM is a probabilistic regression model which, instead of returning a scalar prediction, returns the mean and variance associated with it. Building GPMs can be very challenging: they can be very sensitive to their parameters, they do not scale well to high-dimensional data or large datasets (software systems with large configuration spaces) (Shen2005, ), and they can be limited to models with around ten decisions (Wang2016, ).
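
To make the mean-and-variance point concrete, here is a minimal numpy sketch of GP regression with a squared-exponential kernel on toy 1-D data (an illustration of the general technique, not any of the GPM implementations cited above):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between row-vector inputs a and b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls**2)

def gp_predict(X, y, X_star, noise=1e-6):
    """GP posterior at test points X_star: returns mean AND variance,
    not just a scalar prediction."""
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_star)
    K_ss = rbf(X_star, X_star)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y
    var = np.diag(K_ss - K_s.T @ K_inv @ K_s)
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])     # observed configurations
y = np.array([0.0, 1.0, 0.0])           # observed performance
mean, var = gp_predict(X, y, np.array([[1.0], [5.0]]))
# var is near zero at the observed point x=1 and large far away at x=5
```

The variance output is what enables the dynamic exploration/exploitation trade-off mentioned above: points with high posterior variance are natural exploration candidates.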

6. Discussion

Our system explores this massive configuration space by using reinforcement learning. We have shown how the system selects obvious configuration levers in the first few episodes (e.g. increasing the memory of the driver node), resulting in substantial performance gains. The system can be tweaked with a single parameter (the exploitation factor), allowing data engineers to balance configuration exploration against exploitation. Higher exploration rates have been found to obtain better solutions, although the higher variance increases the likelihood of “faulty configurations” (ones where the cluster cannot keep running). Data engineers do not need to take care of machine boundaries (e.g. configuring worker nodes consistently), as the system does this on their behalf.

The learned behaviour, including the right exploration/exploitation balance, is specific to the workload, job/analysis, and cluster type, requiring an abundance of data to tweak the RL configurator. As future work, we plan to explore transfer learning techniques to minimise this need (Yang2017, ; Jamshidi2017, ), opening the applicability of this technique to more heterogeneous scenarios.

Stream processing systems can be fine-tuned to accommodate different workloads in a variety of ways. However, the ability to remain performant under changing workloads is fundamental. A range of techniques has been suggested to address this challenge, including scaling the number of virtual machines used in the streaming engine (Cervino2012, ), using smarter mechanisms to allocate workloads to dynamic cores (Micolet2016, ) or cluster nodes (Mao2016, ), dynamic load balancing (Martins2014, ), or tuning specific configuration aspects (like batch size (Das2014, )). We have shown in this paper how our approach can react automatically to workload changes and preserve low streaming latencies through automated learning and exploration of the vast configuration space.

Our system is based on a set of algorithms that operate optimally under clear domain restrictions. The effectiveness of our system depends on having a linear relationship between central metrics and levers (Tibshirani2011, ). We found this to hold in our experiments, but that might not be the case in other systems with different behaviour, configuration levers, and obtained metrics.

We have taken advantage of the lower intrinsic dimensionality of configuration spaces (Nair2017, ) by using random sampling techniques to help reduce the dimensionality of our model. This approach allowed us to apply RL methods to a problem where they would otherwise seem infeasible.

Our system uses a near time horizon (to compute the baseline in Algorithm 1), whereas the underlying optimisation problem has a far time horizon (data streams are infinite in theory). Value networks estimating average return values could be used to overcome this limitation in the future (Sutton1998, ).
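
A finite-horizon baseline of this kind can be sketched as a sliding-window mean over recent latencies. This is an illustrative stand-in only: the class, the window length, and the sample values below are made up, and Algorithm 1's actual baseline computation is defined in the paper.

```python
from collections import deque

class WindowBaseline:
    """Finite-horizon baseline: the mean latency over the last
    `horizon` micro-batches."""
    def __init__(self, horizon=20):
        self.window = deque(maxlen=horizon)  # old samples fall off the left

    def update(self, latency_ms):
        """Record a new latency sample and return the current baseline."""
        self.window.append(latency_ms)
        return sum(self.window) / len(self.window)

b = WindowBaseline(horizon=3)
for x in (100, 110, 120, 130):
    baseline = b.update(x)
# After the 4th sample only the last 3 (110, 120, 130) count: baseline == 120.0
```

A value network would replace this windowed mean with a learned estimate of the long-run average return, removing the dependence on the horizon length.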

The main overhead of our system is the machine reboot time after a new configuration is applied. As part of our future work, we want to explore how the system performs when restricted to configurations that do not require rebooting the cluster. Another potential approach to minimise downtime is a green-blue deployment setting, where changes are applied to a new cluster. Both the blue and the green clusters read events from our Kafka topics and idempotently dump the data into a database. Tuning the time to move the workload entirely to the new cluster requires job-dependent techniques that we are starting to explore.

7. Conclusions

We presented the first stream processing system that uses RL to adapt to a variety of workloads in a dynamic manner. Our system converges to better solutions than human operators in much less time, resulting in significant latency reductions (60-70%) within a few tens of minutes. The system adapts well to different workloads while requiring minimal human intervention.

8. Acknowledgments

References

  • (1) Abadi, D., Ahmad, Y., Balazinska, M., Çetintemel, U., Cherniack, M., Hwang, J., Lindner, W., Maskey, A., Rasin, A., Ryvkina, E., Tatbul, N., Xing, Y., and Zdonik, S. The design of the Borealis stream processing engine. 2005, pp. 277–289.
  • (2) Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (Berkeley, CA, USA, 2016), OSDI’16, USENIX Association, pp. 265–283.
  • (3) Agrawal, S., Narasayya, V., and Yang, B. Integrating vertical and horizontal partitioning into automated physical database design. In Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data (New York, NY, USA, 2004), SIGMOD ’04, ACM, pp. 359–370.
  • (4) Akidau, T., Balikov, A., Bekiroğlu, K., Chernyak, S., Haberman, J., Lax, R., McVeety, S., Mills, D., Nordstrom, P., and Whittle, S. Millwheel: Fault-tolerant stream processing at internet scale. Proc. VLDB Endow. 6, 11 (Aug. 2013), 1033–1044.
  • (5) Arasu, A., Babcock, B., Babu, S., Datar, M., Ito, K., Nishizawa, I., Rosenstein, J., and Widom, J. Stream: The stanford stream data manager (demonstration description). In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data (New York, NY, USA, 2003), SIGMOD ’03, ACM, pp. 665–665.
  • (6) Balakrishnan, H., Balazinska, M., Carney, D., Çetintemel, U., Cherniack, M., Convey, C., Galvez, E., Salz, J., Stonebraker, M., Tatbul, N., Tibbetts, R., and Zdonik, S. Retrospective on aurora. The VLDB Journal 13, 4 (Dec. 2004), 370–383.
  • (7) Bâra, A., Lungu, I., Velicanu, M., Diaconita, V., and Botha, I. Improving query performance in virtual data warehouses. WSEAS Trans. Info. Sci. and App. 5, 5 (May 2008), 632–641.
  • (8) Boyan, J. A., and Littman, M. L. Packet routing in dynamically changing networks: A reinforcement learning approach. In Proceedings of the 6th International Conference on Neural Information Processing Systems (San Francisco, CA, USA, 1993), NIPS’93, Morgan Kaufmann Publishers Inc., pp. 671–678.
  • (9) Cervino, J., Kalyvianaki, E., Salvachua, J., and Pietzuch, P. Adaptive provisioning of stream processing systems in the cloud. In 2012 IEEE 28th International Conference on Data Engineering Workshops (April 2012), pp. 295–301.
  • (10) Chen, W., Xu, Y., and Wu, X. Deep reinforcement learning for multi-resource multi-machine job scheduling. CoRR abs/1711.07440 (2017).
  • (11) Chintapalli, S., Dagit, D., Evans, B., Farivar, R., Graves, T., Holderbaugh, M., Liu, Z., Nusbaum, K., Patil, K., Peng, B. J., and Poulosky, P. Benchmarking streaming computation engines: Storm, flink and spark streaming. In 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (May 2016), pp. 1789–1792.
  • (12) Dalibard, V., Schaarschmidt, M., and Yoneki, E. Boat: Building auto-tuners with structured bayesian optimization. In Proceedings of the 26th International Conference on World Wide Web (Republic and Canton of Geneva, Switzerland, 2017), WWW ’17, International World Wide Web Conferences Steering Committee, pp. 479–488.
  • (13) Das, T., Zhong, Y., Stoica, I., and Shenker, S. Adaptive stream processing using dynamic batch sizing. In Proceedings of the ACM Symposium on Cloud Computing (New York, NY, USA, 2014), SOCC ’14, ACM, pp. 16:1–16:13.
  • (14) Debnath, B., Lilja, D., and Mokbel, M. Sard: A statistical approach for ranking database tuning parameters. In Proceedings of the 2008 - IEEE 24th International Conference on Data Engineering Workshop, ICDE’08 (9 2008), pp. 11–18.
  • (15) Ding, R., Wang, Q., Dang, Y., Fu, Q., Zhang, H., and Zhang, D. Yading: Fast clustering of large-scale time series data. Proc. VLDB Endow. 8, 5 (Jan. 2015), 473–484.
  • (16) Dong, M., Li, Q., Zarchy, D., Godfrey, P. B., and Schapira, M. PCC: Re-architecting congestion control for consistent high performance. In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15) (Oakland, CA, 2015), USENIX Association, pp. 395–408.
  • (17) Floratou, A., and Agrawal, A. Self-regulating streaming systems: Challenges and opportunities. In Proceedings of the International Workshop on Real-Time Business Intelligence and Analytics (New York, NY, USA, 2017), BIRTE ’17, ACM, pp. 1:1–1:5.
  • (18) Fu, T., Ding, J., Ma, R., Winslett, M., Yang, Y., and Zhang, Z. DRS: Dynamic Resource Scheduling for Real-Time Analytics over Fast Streams, vol. 2015-July. Institute of Electrical and Electronics Engineers Inc., 7 2015, pp. 411–420.
  • (19) Gedik, B., Schneider, S., Hirzel, M., and Wu, K.-L. Elastic scaling for data stream processing. IEEE Trans. Parallel Distrib. Syst. 25, 6 (June 2014), 1447–1463.
  • (20) Grandl, R., Ananthanarayanan, G., Kandula, S., Rao, S., and Akella, A. Multi-resource packing for cluster schedulers. SIGCOMM Comput. Commun. Rev. 44, 4 (Aug. 2014), 455–466.
  • (21) Guo, J., Czarnecki, K., Apel, S., Siegmund, N., and Wąsowski, A. Variability-aware performance prediction: A statistical learning approach. In 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE) (Nov 2013), pp. 301–311.
  • (22) Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning. Springer, 2001.
  • (23) Hastings, W. K. Monte carlo sampling methods using markov chains and their applications. Biometrika 57, 1 (1970), 97–109.
  • (24) Herodotou, H., Lim, H., Luo, G., Borisov, N., Dong, L., Cetin, F. B., and Babu, S. Starfish: A self-tuning system for big data analytics. In In CIDR (2011), pp. 261–272.
  • (25) Isard, M., Prabhakaran, V., Currey, J., Wieder, U., Talwar, K., and Goldberg, A. Quincy: Fair scheduling for distributed computing clusters. In Proceedings of the ACM SIGOPS 22Nd Symposium on Operating Systems Principles (New York, NY, USA, 2009), SOSP ’09, ACM, pp. 261–276.
  • (26) Jamshidi, P., Velez, M., Kästner, C., Siegmund, N., and Kawthekar, P. Transfer learning for improving model predictions in highly configurable software. In Proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (Piscataway, NJ, USA, 2017), SEAMS ’17, IEEE Press, pp. 31–41.
  • (27) Kollios, G., Gunopulos, D., Koudas, N., and Berchtold, S. Efficient biased sampling for approximate clustering and outlier detection in large data sets. IEEE Transactions on Knowledge and Data Engineering 15, 5 (Sept 2003), 1170–1187.
  • (28) Krishnan, D. R., Quoc, D. L., Bhatotia, P., Fetzer, C., and Rodrigues, R. Incapprox: A data analytics system for incremental approximate computing. In Proceedings of the 25th International Conference on World Wide Web (Republic and Canton of Geneva, Switzerland, 2016), WWW ’16, International World Wide Web Conferences Steering Committee, pp. 1133–1144.
  • (29) Kulkarni, S., Bhagat, N., Fu, M., Kedigehalli, V., Kellogg, C., Mittal, S., Patel, J. M., Ramasamy, K., and Taneja, S. Twitter heron: Stream processing at scale. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (New York, NY, USA, 2015), SIGMOD ’15, ACM, pp. 239–250.
  • (30) Liu, J. M. Nonlinear time series modeling using spline-based nonparametric models.
  • (31) Mao, H., Alizadeh, M., Menache, I., and Kandula, S. Resource management with deep reinforcement learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks (New York, NY, USA, 2016), HotNets ’16, ACM, pp. 50–56.
  • (32) Martins, P., Abbasi, M., and Furtado, P. Audy: Automatic dynamic least-weight balancing for stream workloads scalability. In 2014 IEEE International Congress on Big Data (June 2014), pp. 176–183.
  • (33) Micolet, P.-J., Smith, A., and Dubach, C. A machine learning approach to mapping streaming workloads to dynamic multicore processors. In Proceedings of the 17th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, Tools, and Theory for Embedded Systems (New York, NY, USA, 2016), LCTES 2016, ACM, pp. 113–122.
  • (34) Nair, V., Menzies, T., Siegmund, N., and Apel, S. Faster discovery of faster system configurations with spectral learning. Automated Software Engineering (Aug 2017).
  • (35) Neumeyer, L., Robbins, B., Nair, A., and Kesari, A. S4: Distributed stream computing platform. In 2010 IEEE International Conference on Data Mining Workshops (Dec 2010), pp. 170–177.
  • (36) Ng, R. T., and Han, J. Efficient and effective clustering methods for spatial data mining. In Proceedings of the 20th International Conference on Very Large Data Bases (San Francisco, CA, USA, 1994), VLDB ’94, Morgan Kaufmann Publishers Inc., pp. 144–155.
  • (37) Noghabi, S. A., Paramasivam, K., Pan, Y., Ramesh, N., Bringhurst, J., Gupta, I., and Campbell, R. H. Samza: stateful scalable stream processing at linkedin. Proceedings of the VLDB Endowment 10, 12 (2017), 1634–1645.
  • (38) Papadimitriou, C. H., Tamaki, H., Raghavan, P., and Vempala, S. Latent semantic indexing: A probabilistic analysis. In Proceedings of the Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (New York, NY, USA, 1998), PODS ’98, ACM, pp. 159–168.
  • (39) Pavlo, A., Angulo, G., Arulraj, J., Lin, H., Lin, J., Ma, L., Menon, P., Mowry, T., Perron, M., Quah, I., Santurkar, S., Tomasic, A., Toor, S., Aken, D. V., Wang, Z., Wu, Y., Xian, R., and Zhang, T. Self-driving database management systems. In CIDR 2017, Conference on Innovative Data Systems Research (2017).
  • (40) Pearson, K. On lines and planes of closest fit to systems of points in space. Philosophical Magazine 2, 6 (1901), 559–572.
  • (41) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., VanderPlas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in python. CoRR abs/1201.0490 (2012).
  • (42) Quoc, D. L., Chen, R., Bhatotia, P., Fetzer, C., Hilt, V., and Strufe, T. Streamapprox: Approximate computing for stream analytics. In Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference (New York, NY, USA, 2017), Middleware ’17, ACM, pp. 185–197.
  • (43) Sarkar, A., Guo, J., Siegmund, N., Apel, S., and Czarnecki, K. Cost-efficient sampling for performance prediction of configurable systems (t). In Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE) (Washington, DC, USA, 2015), ASE ’15, IEEE Computer Society, pp. 342–352.
  • (44) Shen, Y., Ng, A. Y., and Seeger, M. Fast gaussian process regression using kd-trees. In Proceedings of the 18th International Conference on Neural Information Processing Systems (Cambridge, MA, USA, 2005), NIPS’05, MIT Press, pp. 1225–1232.
  • (45) Siegmund, N., Kolesnikov, S. S., Kästner, C., Apel, S., Batory, D., Rosenmüller, M., and Saake, G. Predicting performance via automated feature-interaction detection. In Proceedings of the 34th International Conference on Software Engineering (Piscataway, NJ, USA, 2012), ICSE ’12, IEEE Press, pp. 167–177.
  • (46) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (Jan. 2016), 484–489.
  • (47) Skelley, A. Db2 advisor: An optimizer smart enough to recommend its own indexes. In Proceedings of the 16th International Conference on Data Engineering (Washington, DC, USA, 2000), ICDE ’00, IEEE Computer Society, pp. 101–.
  • (48) Sullivan, D. G., Seltzer, M. I., and Pfeffer, A. Using probabilistic reasoning to automate software tuning. SIGMETRICS Perform. Eval. Rev. 32, 1 (June 2004), 404–405.
  • (49) Sutton, R. S., and Barto, A. G. Introduction to Reinforcement Learning, 1st ed. MIT Press, Cambridge, MA, USA, 1998.
  • (50) Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems (Cambridge, MA, USA, 1999), NIPS’99, MIT Press, pp. 1057–1063.
  • (51) Thalheim, J., Rodrigues, A., Akkus, I. E., Bhatotia, P., Chen, R., Viswanath, B., Jiao, L., and Fetzer, C. Sieve: Actionable insights from monitored metrics in distributed systems. In Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference (New York, NY, USA, 2017), Middleware ’17, ACM, pp. 14–27.
  • (52) Tibshirani, R. Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal Statistical Society Series B 73, 3 (2011), 273–282.
  • (53) Toshniwal, A., Taneja, S., Shukla, A., Ramasamy, K., Patel, J. M., Kulkarni, S., Jackson, J., Gade, K., Fu, M., Donham, J., Bhagat, N., Mittal, S., and Ryaboy, D. Storm@twitter. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data (New York, NY, USA, 2014), SIGMOD ’14, ACM, pp. 147–156.
  • (54) Van Aken, D., Pavlo, A., Gordon, G. J., and Zhang, B. Automatic database management system tuning through large-scale machine learning. In Proceedings of the 2017 ACM International Conference on Management of Data (New York, NY, USA, 2017), SIGMOD ’17, ACM, pp. 1009–1024.
  • (55) Vaquero, L. M., and Subiros, D. Hypershapes for rules with dimensions defined by conditions, 2016. International Application No.: PCT/US2016/030087.
  • (56) Wang, Z., Hutter, F., Zoghi, M., Matheson, D., and De Freitas, N. Bayesian optimization in a billion dimensions via random embeddings. J. Artif. Int. Res. 55, 1 (Jan. 2016), 361–387.
  • (57) Weiss, G. M., and Tian, Y. Maximizing classifier utility when there are data acquisition and modeling costs. Data Mining and Knowledge Discovery 17, 2 (Oct 2008), 253–282.
  • (58) Winstein, K., and Balakrishnan, H. Tcp ex machina: Computer-generated congestion control. In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM (New York, NY, USA, 2013), SIGCOMM ’13, ACM, pp. 123–134.
  • (59) Xu, T., Jin, L., Fan, X., Zhou, Y., Pasupathy, S., and Talwadker, R. Hey, you have given me too many knobs!: Understanding and dealing with over-designed configuration in system software. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (New York, NY, USA, 2015), ESEC/FSE 2015, ACM, pp. 307–319.
  • (60) Yang, Q. When deep learning meets transfer learning. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (New York, NY, USA, 2017), CIKM ’17, ACM, pp. 5–5.
  • (61) Zaharia, M., Borthakur, D., Sen Sarma, J., Elmeleegy, K., Shenker, S., and Stoica, I. Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling. In Proceedings of the 5th European Conference on Computer Systems (New York, NY, USA, 2010), EuroSys ’10, ACM, pp. 265–278.
  • (62) Zaharia, M., Das, T., Li, H., Shenker, S., and Stoica, I. Discretized streams: An efficient and fault-tolerant model for stream processing on large clusters. HotCloud 12 (2012).
  • (63) Zhang, C., Patras, P., and Haddadi, H. Deep learning in mobile and wireless networking: A survey. CoRR abs/1803.04311 (2018).
  • (64) Zhang, Y., Guo, J., Blais, E., and Czarnecki, K. Performance prediction of configurable software systems by fourier learning (t). In Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE) (Washington, DC, USA, 2015), ASE ’15, IEEE Computer Society, pp. 365–373.
  • (65) Zhou, S., Zhou, A., Cao, J., Wen, J., Fan, Y., and Hu, Y. Combining sampling technique with dbscan algorithm for clustering large spatial databases. In Proceedings of the 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Current Issues and New Applications (London, UK, UK, 2000), PADKK ’00, Springer-Verlag, pp. 169–172.