Metareasoning in Modular Software Systems: On-the-Fly Configuration using Reinforcement Learning with Rich Contextual Representations

05/12/2019 · by Aditya Modi, et al.

Assemblies of modular subsystems are being pressed into service to perform sensing, reasoning, and decision making in high-stakes, time-critical tasks in such areas as transportation, healthcare, and industrial automation. We address the opportunity to maximize the utility of an overall computing system by employing reinforcement learning to guide the configuration of the set of interacting modules that comprise the system. The challenge of doing system-wide optimization is a combinatorial problem. Local attempts to boost the performance of a specific module by modifying its configuration often leads to losses in overall utility of the system's performance as the distribution of inputs to downstream modules changes drastically. We present metareasoning techniques which consider a rich representation of the input, monitor the state of the entire pipeline, and adjust the configuration of modules on-the-fly so as to maximize the utility of a system's operation. We show significant improvement in both real-world and synthetic pipelines across a variety of reinforcement learning techniques.


1 Introduction

Figure 1: Face detection and landmark detection modular system. The input is an image stream to the face detection module, which outputs the locations of faces in the image; these are then input to the face landmark detection module, which outputs the locations of eyes, nose, lips, brows, etc. on each detected face. The metareasoning module receives the input stream of images along with intermediate outputs of the face detector to dynamically decide the configuration of the pipeline such that it optimizes the end system loss.

The lives of a large segment of the world’s population are greatly influenced by complex software systems, be it the software that returns search results, enables the purchase of an airplane ticket, or runs a self-driving car. Software systems are inherently modular, i.e., they are composed of numerous distinct modules working together. As an example, a self-driving car has modules for sensors such as cameras and lidars, which poll the sensors and output sensor messages, and a mapping module that consumes sensor messages and creates a high-resolution map of the immediate environment. The output of the mapping module is then input to a planning module whose job is to create safe trajectories for the vehicle. These distinct modules often operate at different frequencies; the camera module may produce images at one rate while the GPS module produces vehicle position readings at another. Furthermore, they may each have their own set of free parameters which are set by reading a configuration file at startup. For example, the software serving as the driver of a camera in the self-driving pipeline may have a parameter setting for the rate at which images are polled from the camera and another for the resolution of the images. Similarly, the function of the mapping module may be controlled by a parameter that specifies the maximum amount of memory it is allowed to consume, leading to the continual removal of information about more distant and thus less relevant map content.

Large software systems are typically composed of a set of distinct modular components. The operating characteristics of each component are usually manually configured to achieve system performance targets or constraints, such as accuracy and/or latency of output. Configurations of parameters may result from tedious, long-term tuning of one parameter at a time. Once such nominal configurations have been produced, they are held constant during system execution. The reliance on such fixed policies in a dynamic world can often be suboptimal. As an example, modules may take different amounts of time depending on the specific contents of the inputs they receive.

As a running example, we illustrate a pipeline for extracting faces with keypoint annotations from images in Figure 1. A natural performance metric for the pipeline might blend prediction latency and accuracy, where the latency of a face-detection module may vary dramatically based on the number of people in the camera view. In this case, one might prefer switching to a parameter setting which allows the face detector to sacrifice some accuracy but run much faster, raising the overall utility of the entire pipeline. Moreover, modules upstream of the face detector, such as the camera driver, might ideally throttle back the rate at which they produce images, since most of these images will not be processed anyway due to a bottleneck at the face detector module. Attempts to separately optimize distinct modules can often lead to losses in utility Bradley (2010) because of unaccounted shifts in the distribution of outputs produced by upstream modules.

Revisiting the self-driving car example, a basic utility function rewards navigating passengers to their destination safely and in a reasonable amount of time. Highlighting the contextuality again, the emphasis on driving time might be higher when trying to get to an important meeting or a flight than when going grocery shopping. Furthermore, the utility function will typically be deeply personal to the user and has to be inferred over time. Importantly, this is complex pipeline-level feedback which is hard to attribute to individual components.

Optimizing the configuration of large modular systems is challenging for the following reasons:

  1. Changing the parameters of an upstream module can drastically change the distribution of inputs to downstream modules. Jointly choosing a configuration for each module leads to a combinatorial optimization problem where the space of assignments is the cross product of the parameter spaces of the individual modules.

  2. Even if we solved the combinatorial optimization problem, no fixed configuration is good across all inputs. Hence, we need to choose the configuration in an input-adaptive manner, and the decision about a particular module’s parameter assignment has to be made before input is passed through it.

  3. There is a challenge of credit assignment: how much did each particular parameter assignment, for each module along the way, contribute to the final utility? For non-additive utility functions, this is especially challenging Daumé III et al. (2018).

  4. Finally, the metareasoning process itself should add negligible latency to the original system. If the cost of metareasoning is significant, it may be better to run the original pipeline with different configurations and select the best performing assignment.

In this work, we leverage advances in representation and reinforcement learning (RL) to develop metareasoning machinery that can optimize the configuration of modular software systems under changing inputs and compute environments. Specifically, we demonstrate that by having a metareasoner continuously monitor the entire system, we can switch the parameters of each module on-the-fly to adapt to changing inputs and optimize a desired objective. We also study the distinction in attainable performance between choosing the best configuration for the entire pipeline as a function of just the initial input, versus further choosing the configuration of each module based on all the preceding actions and outputs. We experiment with a synthetic pipeline designed to require adaptivity to the inputs, and find that by adapting at each module, we improve by roughly 50% or more over the best constant assignment, and typically by a similar margin over choosing a configuration just as a function of the initial input. For the face and landmark detection pipeline of Figure 1, we use the activations of a pretrained neural network model as a contextual signal and leverage this rich representation of context in decisions about the configuration of each module before the module operates on its inputs. We characterize the boosts in utility provided by this contextual information, improving over the best static configuration of the system across different utility functions. Overall, our experiments demonstrate the importance of online, adaptive configuration of each module.

2 Related Work

RL to control software pipelines: Decisions about computation under uncertainties in time and context have been described in Horvitz and Lengyel (1997), which presented the use of metareasoning to guide graphics rendering under changing computational resources, considering probabilistic models of human attention so as to maximize the perceived quality of rendered content. The metareasoning guided tradeoffs in rendering quality under shifting content and time constraints in accordance with preferences encoded in a utility function. Principles for guiding proactive computation were formalized in Horvitz (2001). Raman et al. (2013) characterize a tradeoff between computation and performance in data processing and ML pipelines, and provide a message-passing algorithm (derived by viewing pipelines as graphical models) that allows a human operator to manually navigate this tradeoff. Our work focuses on using metareasoning to replace the operator, setting the best operating point for any pipeline automatically.

Bradley (2010) proposed using subgradient descent, coupled with loss functions developed in imitation learning, to jointly optimize modular robotics software pipelines (which often involve planning modules) when the modules are differentiable with respect to the overall utility function. This is not suited to most real-world pipelines, whose modules are described not by differentiable parameters but by lines of code. In this work we instead develop fully general methods which only assume the ability to evaluate the pipeline. Another form of pipeline optimization is to pick or configure the machine on which each module should be executed. Methods in this ambit (Mirhoseini et al. (2017)) are complementary to this work in that optimizing the pipeline configuration per se remains a problem even with optimal device placement.

RL in distributed system optimization: The use of machine learning for optimizing resource allocation in distributed systems for data center and cluster management has been very well studied (Lorido-Botran et al. (2014); Demirci (2015); Delimitrou and Kozyrakis (2013, 2014)). Many of these techniques use supervised learning as well as collaborative filtering for resource assignment, which rely on the assumption of having a rich set of processes in the training data and might as a result suffer from data bias on new workloads. Most recently, the use of reinforcement learning to learn policies that dynamically optimize resources so that service-level agreements can be better satisfied has received a lot of attention, especially with the rise of reinforcement learning with neural networks as function approximators, colloquially termed ‘deep reinforcement learning’ (Li (2017); Arulkumaran et al. (2017)). Model-free methods Mao et al. (2016) based on policy gradients Williams (1992); Sutton et al. (2000) and Q-learning Watkins (1989); Xu et al. (2012) have shown promise, as modeling such large-scale distributed systems is a challenge in itself. Similarly, RL has found impressive success in energy optimization for data centers (Gao (2014); Memeti et al. (2018)).

RL for scheduling in operating systems: Even at the single-machine level, RL has found promise for thread scheduling and resource allocation in operating systems. For example, Fedorova et al. (2007) and Hanus (2013) use RL-based methods to learn adaptive policies which outperform both the best statically optimal policy (found by solving a queuing model) and myopic reactive policies which greedily optimize for short-term outcomes. The problem of scheduling in operating systems, however, differs from pipeline optimization in two fundamental ways. First, the operating system (as well as the scheduler) is oblivious to accuracy dependencies between different processes or threads. Second, due to either architectural or generality constraints, schedulers do not optimize process-level parameters but mainly focus on machine configuration.

3 Problem Definition

3.1 Formal Setting and Notation

A pipeline of M modules can be viewed as a directed graph where each node is a module and an edge from u to v represents module v consuming the output of u as its input. We assume the graph does not have any cycles. Without loss of generality, let the modules be numbered according to their topological sort; i.e., i ∈ {1, …, M} refers to the index of a module in a linear ordering of the DAG. For each module i, we have a set of possible configurations—these are the actions that are available for the metareasoner to choose from. We denote this set by A_i. A module can then be viewed as a mapping from its inputs X_i to outputs Y_i, and each configuration a ∈ A_i implies a different mapping. As a running example, we will consider the face detection pipeline of Figure 1. The pipeline contains two modules, with module 1 having |A_1| choices and module 2 having |A_2| choices. The input space X_1 to the first module is the space of images (possibly in a feature space). The output space Y_1 is the same as X_2 and can encode the image, the locations of faces in the image, and the latency induced by the first module.
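To make this abstraction concrete, here is a minimal Python sketch of a pipeline as a topologically ordered list of modules, each with a discrete action set. All names (Module, run_pipeline, the toy two-module chain) are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Module:
    name: str
    actions: List[str]              # the action set A_i
    run: Callable[[Any, str], Any]  # maps (input, action) -> output

def run_pipeline(modules: List[Module], x: Any, config: Dict[str, str]) -> Any:
    """Execute a linear chain under a fixed per-module configuration."""
    for m in modules:
        x = m.run(x, config[m.name])  # each module's output feeds the next
    return x

# Toy two-module chain mirroring the face/landmark example of Figure 1.
detect = Module("detector", ["fast", "accurate"],
                lambda img, a: {"img": img, "mode": a})
landmark = Module("landmarks", ["5pt", "27pt", "87pt"],
                  lambda d, a: (d["mode"], a))

out = run_pipeline([detect, landmark], "frame",
                   {"detector": "fast", "landmarks": "27pt"})
```

A metareasoner, in this view, is whatever chooses the `config` dictionary, possibly per input rather than once at startup.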

The quality of a pipeline’s operation is measured using a loss function, denoted ℓ. In the example pipeline of Figure 1, the outputs from the landmark detector can be labeled by human evaluators to assess accuracy, and ℓ can encode a complex trade-off between the latency incurred by the overall pipeline in processing an image and the accuracy of the detected landmarks. If labels are not available, accuracy might be inferred from proxies such as an incorrect denial of authentication for a user based on the landmark detector output, which can be observed when the user authenticates via alternative means such as a password. Crucially, we only observe the value of this loss function for the specific outputs that the pipeline generates based on a certain configuration of actions at each module in response to an input. We highlight that the loss function can be any function mapping the pipeline’s final output and system state to a scalar value, such as a passenger’s satisfaction with a ride in a self-driving car as discussed in Section 1.

A metareasoner can be represented as a collection of (possibly randomized) policies π_1, …, π_M, where π_i : X_i → Δ(A_i) specifies a context-dependent configuration of the i-th module and Δ(A_i) is the set of distributions over the action set A_i. We abuse the notation X_i here to denote any succinct representation of the preceding pipeline components’ outputs, actions, and system state variables which are needed to choose the appropriate action for module i. The pipeline receives a stream of inputs and we use t to index the inputs. At time t, the pipeline receives an initial input x_1^t, based on which an action a_1^t ∼ π_1(x_1^t) is picked at the first module and it produces an intermediate output y_1^t. This induces the next input x_2^t at the second module, at which point the policy π_2 is used to pick the next action, and so on. At each intermediate module i, the input x_i^t depends on the outputs of all its parents in the DAG corresponding to the pipeline, and we assume that the input spaces X_i are chosen appropriately so that a good metareasoner policy for module i can depend solely on x_i^t instead of having to depend explicitly on the outputs of its predecessors. Proceeding this way, the interaction between the metareasoner and the environment can be summarized as follows:

  1. x_1^t is fed as input to the pipeline.

  2. The metareasoner chooses actions a_i^t ∼ π_i(x_i^t) for each module i based on the output of its predecessors, inducing a trajectory x_1^t, a_1^t, y_1^t, x_2^t, a_2^t, y_2^t, …, y_M^t; the eventual output of the pipeline is y_M^t.

  3. Observe loss ℓ^t = ℓ(y_M^t).
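The interaction protocol above can be sketched in a few lines of code. The `episode` helper and the toy modules below are illustrative assumptions; the sketch shows only how the action chosen at each module feeds the next, and how the loss is observed a single time at the end of the chain.

```python
def episode(modules, x, loss_fn):
    """One episode: route input x through the chain; loss observed only at the end."""
    trajectory = []
    for policy, run in modules:
        a = policy(x)              # a_i ~ pi_i(x_i); deterministic toy policies here
        y = run(x, a)              # module output under the chosen configuration
        trajectory.append((x, a, y))
        x = y                      # output becomes the next module's input
    return trajectory, loss_fn(x)

# Toy usage: two modules whose (numeric) action is added to the input.
mods = [(lambda x: 1, lambda x, a: x + a),
        (lambda x: 2, lambda x, a: x + a)]
traj, loss = episode(mods, 0, lambda y: abs(y - 3))
```

Note that only the final loss is returned; no intermediate rewards exist, which is exactly the sparse-feedback structure discussed below.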

Formulated this way, the task of the metareasoner can be viewed as an episodic fixed-horizon reinforcement learning problem, where the state transitions are deterministic (although the initial input can be highly stochastic, such as an image in the face detection example). Each input processed by the pipeline is an episode, the horizon is M, and actions chosen by policies for the upstream modules affect the state distribution seen by downstream policies. The feedback is extremely sparse, with the only loss being observed at the end of the pipeline. The goal of the metareasoner is to minimize its average loss (1/T) Σ_{t=1}^T ℓ^t over T inputs, and the ideal metareasoner can be described as:

(π_1^*, …, π_M^*) = argmin_{π_1, …, π_M} lim_{T→∞} (1/T) Σ_{t=1}^T E[ℓ^t]    (1)

Our goal is to learn a metareasoner during the live operation of the pipeline. Since we only observe pipeline losses for the current choices of the metareasoner’s policies, we must balance exploration, to discover new pipeline configurations, with exploitation of previously found performant configurations. In such explore-exploit problems, we measure the average loss accumulated by our adaptive learning strategy over the input stream; a lower loss is better. A better learning strategy will quickly identify good context-dependent configurations and hence have lower average loss as T increases.

3.2 Challenges

In this section we highlight the important challenges that a metareasoner needs to address.

Combinatorial action space: Viewing the entire pipeline as a monolithic entity, with an aim to find the best fixed assignment for each module with no input dependence, leaves the metareasoner with combinatorially many choices (every possible combination of module configurations) to consider. This can quickly become intractable even for modest pipelines (e.g., see Figure 3), despite the use of the simplest possible static policy class.

Adaptivity to inputs: Having a static action assignment per module is overly simplistic in general and we typically need a policy for manipulating configurations that is context-sensitive. For example, in Figure 4, we observe that the number of faces in the input image implies a fundamentally different trade-off between latency and accuracy; implying a different optimal choice for the image processing algorithm.

Credit Assignment: Since we only observe delayed episodic reward, we do not know which module was to blame for a bad pipeline loss.

Exploration: Pipeline optimization offers a fundamentally challenging domain for exploration. Though we employ ideas from contextual bandits here, we anticipate future directions that explore by using pipeline structure to derive better learning strategies.

4 Methods

The methods we outline now each address some of the challenges in Section 3.2. The simplest strategy is a non-adaptive (i.e., context-insensitive) approach that can nevertheless effectively handle combinatorial action spaces by searching for a locally optimal static assignment (Section 4.1). A simple context-sensitive strategy views the pipeline optimization problem as a monolithic contextual bandit, but is vulnerable to a combinatorial scaling of complexity with pipeline size (Section 4.2). Finally, the most sophisticated strategy we develop produces a context-adaptive policy that exploits pipeline structure to learn per-module policies and uses policy-gradient algorithms to quickly reach a locally optimal configuration policy (Section 4.3).

4.1 Greedy Hill Climbing

The simplest (infeasible) strategy for pipeline optimization with N input examples is to brute-force through every possible configuration on each of the N inputs and pick the configuration that accumulates the lowest loss. This strategy will identify the best non-adaptive (i.e., context-insensitive) configuration, but needs N · |A_1| ⋯ |A_M| executions of the pipeline to find it. Since this is typically intractable even for modest values of N and |A_i| (especially in real time), we now describe a tractable alternative that finds an approximately good configuration via random coordinate descent.

Rather than identifying the best configuration, suppose we aim to find a “locally optimal” configuration – that is, for every module, if we held all other module configurations fixed, then deviating from the current configuration can only worsen the pipeline loss. To achieve this, we begin by randomly picking an initial configuration for each module in the pipeline. In each epoch, we first draw n out of the N examples uniformly with replacement from the dataset, where n is a hyperparameter that can be set based on the available computational budget. We then choose one of the M modules uniformly at random and keep the configurations of all other modules fixed. We cycle through every possible action for that module (using, for instance, n/|A_i| examples for each choice of action at this module) and pick the configuration that achieves the lowest accumulated loss. We repeat this process until our training budget of examples is exhausted, or we have cycled through every module without making a configuration change (which means we are at a local optimum). This is akin to a greedy hill-climbing strategy, and has been used as an approximate heuristic in many diverse applications of combinatorial optimization, for instance in page layout optimization (Hill et al., 2017). More sophisticated variants of this approach can use best-arm identification techniques during each epoch, but fundamentally, this strategy finds an approximately optimal context-insensitive policy.
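A compact sketch of this hill-climbing search follows, under the simplifying assumption that `eval_loss` averages the pipeline loss over a freshly sampled minibatch. For clarity, the sketch sweeps the modules in a random order each epoch rather than sampling one module at a time, but it stops, as described above, after a full pass with no configuration change. All names are illustrative.

```python
import random

def hill_climb(action_sets, eval_loss, max_passes=50, seed=0):
    """Coordinate-descent search for a locally optimal static assignment."""
    rng = random.Random(seed)
    config = [rng.choice(acts) for acts in action_sets]  # random initial assignment
    for _ in range(max_passes):
        changed = False
        for i in rng.sample(range(len(action_sets)), len(action_sets)):
            # Hold all other modules fixed; sweep module i's actions.
            best = min(action_sets[i],
                       key=lambda a: eval_loss(config[:i] + [a] + config[i + 1:]))
            if best != config[i]:
                config[i], changed = best, True
        if not changed:          # a full pass with no change: local optimum
            break
    return config

# Toy separable objective: loss is each module's distance from action 2.
cfg = hill_climb([[0, 1, 2, 3]] * 3, lambda c: sum(abs(a - 2) for a in c))
```

For a separable objective like the toy one, a single pass already reaches the optimum; for coupled losses, multiple passes may be needed and only local optimality is guaranteed.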

4.2 Global Bandit From Initial Input

For many real-world pipelines, the modules’ operating characteristics are sensitive to the initial input, meaning that a context-insensitive configuration policy can be very sub-optimal w.r.t. the pipeline loss. This motivates our approach to find a context-adaptive policy using contextual bandit (henceforth CB) algorithms.

A CB algorithm receives a context x_t in each round t, takes an action a_t, and receives a reward r_t. The algorithm learns a policy that is context-sensitive and adaptively trades off exploration and exploitation to maximize the cumulative reward Σ_t r_t. In our setting, x_t is the input example to the pipeline, the action set is the Cartesian product A_1 × ⋯ × A_M of all module-specific configurations, and the reward is simply the negative of the observed pipeline loss.

In our experiments, we use a simple CB algorithm that uses Boltzmann exploration (see e.g. (Kaelbling et al., 1996)). Concretely, the policy is represented by a parametrized scoring function f(x, a). The score for each global configuration a is computed and the policy is a softmax distribution over these scores:

π(a | x) ∝ exp(γ f(x, a)),    (2)

where γ is a hyperparameter that governs the trade-off between exploration and exploitation. The score function is typically updated using importance-weighted regression (Bietti et al., 2018) (henceforth IWR); that is, if we observe a reward r_t after configuring the pipeline with action a_t, then the score function is optimized to minimize (f(x_t, a_t) − r_t)².
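As an illustration, the following sketch implements Boltzmann exploration with an (assumed) linear scoring function f(x, a) = w_a · x and the IWR squared-error update on the chosen action only. The class and parameter names are ours, not the paper's.

```python
import math, random

class SoftmaxBandit:
    def __init__(self, n_actions, dim, gamma=5.0, lr=0.1, seed=0):
        self.w = [[0.0] * dim for _ in range(n_actions)]  # one weight vector per action
        self.gamma, self.lr = gamma, lr
        self.rng = random.Random(seed)

    def scores(self, x):
        return [sum(wi * xi for wi, xi in zip(w, x)) for w in self.w]

    def act(self, x):
        """Sample an action from the softmax over gamma-scaled scores."""
        s = self.scores(x)
        m = max(self.gamma * si for si in s)              # for numerical stability
        p = [math.exp(self.gamma * si - m) for si in s]
        r, acc = self.rng.random() * sum(p), 0.0
        for a, pa in enumerate(p):
            acc += pa
            if r <= acc:
                return a
        return len(p) - 1

    def update(self, x, a, reward):
        """IWR step: gradient of (f(x, a) - r)^2 on the chosen action only."""
        err = self.scores(x)[a] - reward
        self.w[a] = [wi - self.lr * err * xi for wi, xi in zip(self.w[a], x)]

# Toy stream with a fixed context; action 1 always pays more than action 0.
bandit = SoftmaxBandit(n_actions=2, dim=1)
for _ in range(200):
    a = bandit.act([1.0])
    bandit.update([1.0], a, 1.0 if a == 1 else 0.0)
```

After a couple hundred rounds the score of the better action dominates, so the softmax concentrates on it while γ controls how quickly exploration dies out.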

These contextual bandit algorithms can very effectively find context-sensitive policies and adaptively explore promising configurations. However, by viewing the entire pipeline as one monolithic object with combinatorially many actions, they cannot tractably scale to even moderate-sized pipelines.

4.3 Per-Module Bandit: Using Intermediate Observations

The contextual bandit approach of Section 4.2 does not scale well with the size of the pipeline, but it does guarantee (under mild assumptions, like an appropriate annealing schedule for γ; see e.g. Singh et al. (2000)) that we will eventually find the best context-adaptive policy expressible by our scoring function. It also does not capture the outputs of prior modules when choosing the configuration at a successor, which can be vital, for example when a previous module incurs a large latency. Suppose we again relax the goal to finding an approximately good “locally optimal” policy. Our key insight is to employ a CB algorithm for each module, so that the algorithm for module i only needs to reason about |A_i| actions. Moreover, as inputs are processed by the pipeline, the metareasoner can use up-to-date information (e.g., about latencies introduced by upstream modules) as part of the context for the downstream bandit algorithm.

One can again perform a variant of randomized coordinate descent as in Section 4.1, holding all but one module fixed and running a CB algorithm for that module. This ensures that each bandit algorithm faces a stationary environment and can reliably identify a good context-sensitive policy quickly. However, it can be very data-inefficient; we next sketch an actor-critic reinforcement learning algorithm that applies simultaneous updates to all modules.

Suppose we consider stochastic policies of the form (2) for each module i, but where the scoring function f_i now maps X_i × A_i to scores, so that π_i(a | x_i) ∝ exp(γ f_i(x_i, a)). A common approach to optimize the policy parameters θ_i is to directly perform stochastic gradient descent on the average loss, which results in the policy gradient algorithm. Specialized to our setting, an unbiased estimate of the gradient of the objective (1) with respect to the parameters θ_i of π_i is given by ℓ^t ∇_{θ_i} log π_i(a_i^t | x_i^t), since the loss is only incurred at the end. Typically, policy gradient techniques use an additional trained critic as a baseline to reduce the variance of the gradients (Konda and Tsitsiklis, 2000). We train the critic to minimize the mean squared error between the observed and predicted reward.
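A tabular (context-free) toy version of this per-module actor-critic update can be sketched as follows. The reward structure and all names are illustrative assumptions: each module keeps a softmax policy over its actions plus a scalar critic used as a baseline, the reward is the fraction of modules that picked action 0, and it is observed only once at the end of each episode.

```python
import math, random

class ModuleLearner:
    """One module's softmax policy (actor) and scalar baseline (critic)."""
    def __init__(self, n_actions, lr=0.2, seed=0):
        self.logits = [0.0] * n_actions
        self.baseline = 0.0                # critic V_i, a scalar here
        self.lr = lr
        self.rng = random.Random(seed)

    def probs(self):
        m = max(self.logits)
        e = [math.exp(l - m) for l in self.logits]
        z = sum(e)
        return [x / z for x in e]

    def act(self):
        p = self.probs()
        r, acc = self.rng.random(), 0.0
        for a, pa in enumerate(p):
            acc += pa
            if r <= acc:
                return a
        return len(p) - 1

    def update(self, a, reward):
        adv = reward - self.baseline                     # advantage vs. critic
        p = self.probs()
        for j in range(len(self.logits)):                # grad of log softmax
            g = (1.0 if j == a else 0.0) - p[j]
            self.logits[j] += self.lr * adv * g          # REINFORCE step
        self.baseline += self.lr * (reward - self.baseline)  # critic MSE step

# Two modules updated simultaneously from a single end-of-episode reward.
mods = [ModuleLearner(2, seed=1), ModuleLearner(2, seed=2)]
for _ in range(500):
    acts = [m.act() for m in mods]
    r = acts.count(0) / len(acts)          # reward favors action 0 everywhere
    for m, a in zip(mods, acts):
        m.update(a, r)
```

Even though each module sees only the shared episodic reward, the baseline-corrected gradient steers every module toward its individually better action.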

5 Experiments

The algorithms discussed in the previous section are tested on two sets of pipelines: a synthetic pipeline with strong context dependence and a real-world perception pipeline. Our results show performance improvements from adaptively choosing the configuration of the pipeline. For all our experiments, we use a PyTorch-based implementation (Paszke et al., 2017) with RMSProp (Hinton et al., 2012) as the optimizer. For hyperparameter tuning, we perform a grid search over the possible choices. The common hyperparameters for both methods are:


  • Learning rate

  • Minibatch size

  • ℓ2 weight decay factor

All our plots include 5 runs with randomly chosen random seeds, with standard error regions. The specific details for each algorithm are as follows:

Greedy hill-climbing: To find the greedy step in each iteration, we use a fixed-size minibatch of samples per action. The procedure is run until it converges to a fixed assignment. In the plots, we show this as the non-adaptive baseline with which each method is compared. The final assignment obtained by the procedure is evaluated using Monte Carlo runs with a sufficiently large number of samples from the input distribution (synthetic pipeline) or using samples present in a holdout set (face detection pipeline).

Global contextual bandit: The metareasoner consists of a single policy that maps the input to a configuration for the entire pipeline; the policy class is a neural network with a single hidden layer. The inverse temperature coefficient for Boltzmann exploration, γ, is treated as a hyperparameter. We use the IWR loss with minibatches to update the policy. For hyperparameter tuning, we choose the setting with the minimum cumulative loss for the pipeline across the input stream.

Per-module contextual bandit: The policy function at each module is a single-hidden-layer neural network with a softmax layer at the end. We use the policy gradient update rule as discussed in Section 4.3. The context for each module is the concatenation of the sequence of actions chosen for previous modules, the current latency, and the initial input to the system. Additionally, for each module, we implement a critic which predicts the final loss of the pipeline for the given context, as described in Section 4.3. The critic is again a single-hidden-layer neural network with a single output node and is trained using the squared loss between the observed and predicted loss. We use the same learning rate for both networks. We use minibatches for training the networks for each module, and these are updated concurrently for each minibatch. In addition, we use entropy regularization weighted by ent_wt with the policy gradient loss function (Haarnoja et al., 2018). We tune hyperparameters using the best cumulative pipeline loss across the input stream.

Method        | Hyperparameter choices
Global CB     |
Per-module CB | ent_wt
Table 1: Algorithm-specific hyperparameter choices

At a high level, our experiments seek to uncover the importance of adaptivity to the inputs in configuring the pipeline. To capture practical trade-offs, we consider loss functions which combine the latency incurred while processing an input with the accuracy of the final prediction compared to ground-truth annotations.

5.1 Synthetic Pipelines

We begin with an illustrative synthetic pipeline designed to highlight: (1) benefits of adaptivity to the input over a static assignment, and (2) infeasibility of the global CB approach for even modestly long pipelines.

Figure 2: Synthetic pipeline

The synthetic pipeline with M modules is a linear chain of length M, as shown in Figure 2. Each module has two possible actions, a cheap one and an expensive one, which incur latency costs c_cheap and c_exp respectively. Inputs to the pipeline consist of binary strings sampled uniformly from {0, 1}^M, with the i-th bit encoding the preferred action for module i: if the bit is 0, both actions give an accurate output, and if it is 1, only the expensive action gives an accurate output. If we make an incorrect prediction at any module, then the final prediction at the end of the pipeline is always incorrect. At each episode t, we provide an input to the pipeline by first sampling a random binary string as above, and then adding uniform noise to each entry; this perturbed string constitutes the initial context for the pipeline. The loss for the final output of the pipeline is a sum of a latency term and an error term.

We center the latency term at the latency of the optimal policy, which routes each input perfectly to the cheapest action that makes the correct prediction for it, and a normalization keeps this term in [0, 1]. The second term measures the error in the eventual prediction, which requires every module to make an accurate prediction; its value is 1 for an incorrect output and 0 otherwise. While the initial input encodes the optimal configuration, which suits global CB, there is further room to adapt: when some module makes an error in prediction, all subsequent modules should pick the cheap action.
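One possible rendering of this synthetic benchmark in code is below; the latency constants and noise width are illustrative assumptions standing in for the paper's exact values, and the function names are ours.

```python
import random

# Illustrative constants: cheap/expensive latencies and the noise half-width.
C_CHEAP, C_EXP, NOISE = 1.0, 2.0, 0.3

def sample_input(m, rng):
    """Bits encode per-module needs; the noisy version is the observed context."""
    bits = [rng.randint(0, 1) for _ in range(m)]
    context = [b + rng.uniform(-NOISE, NOISE) for b in bits]
    return bits, context

def pipeline_loss(bits, actions):
    """actions[i] in {0 (cheap), 1 (expensive)}; latency term + error term."""
    latency = sum(C_EXP if a else C_CHEAP for a in actions)
    optimal = sum(C_EXP if b else C_CHEAP for b in bits)   # perfect routing cost
    max_lat = len(bits) * C_EXP                            # normalization
    # Any module that needed the expensive action but got the cheap one
    # corrupts the final prediction.
    err = any(b == 1 and a == 0 for b, a in zip(bits, actions))
    return (latency - optimal) / max_lat + float(err)

rng = random.Random(0)
bits, ctx = sample_input(4, rng)
perfect = pipeline_loss(bits, bits)   # routing each module optimally gives 0 loss
```

Note how a configuration that under-spends (all cheap on an input that needed an expensive action) gets a negative latency term but pays the full error penalty, matching the adaptivity argument above.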

Figure 3: Average loss as a function of the number of examples for the synthetic pipeline. The flat line corresponds to the expected loss of the best constant assignment. The shaded region represents one standard error over the 5 runs.

We show results of our algorithms for three values of the pipeline length M. For static assignments, we compute both the solution of the greedy hill-climbing strategy and a brute-force search over all assignments, which result in similar average losses under the input distribution. The context for each module in per-module CB contains the pipeline’s input, a binary string denoting the upstream actions, and the current latency. We use ReLU activations, with the hidden layer size of each network set to the average of its input and output dimensions.

We show the evolution of the average loss as a function of the number of examples for the different values of M in Figure 3. Our results show significant gains for being adaptive over the constant-assignment baseline in all the plots. For the shortest pipeline, the total number of assignments (2^M) is small, and global CB is clearly effective compared to its per-module counterpart, though slower to converge. For the intermediate length, the difference between the two is more pronounced, as the per-module CB method converges rapidly. For the longest pipeline, the number of assignments is large enough that global CB completely fails to learn a better adaptive policy. Per-module CB converges more slowly in this harder case, but still improves upon the best constant assignment extremely quickly.

5.2 Face and Landmark Detection

Pipeline and dataset: We use a two-module production-grade real-world perception pipeline service to empirically study the efficacy of our proposed methods (Figure 1). The first module is a face detection module that takes an image stream as input and outputs the locations of faces in each image as a list of bounding-box rectangles. This module offers four different algorithms for detecting faces. The exact details of the algorithms are proprietary, so we have only black-box access to them. We benchmarked the latency and accuracy of the algorithms on 2689 images from the validation set of the 2017 keypoint detection task of the open-source COCO dataset (Lin et al., 2014). COCO has ground-truth annotations of up to 17 visible keypoints per person in an image. We notice that not only does each algorithm choice differ substantially from the others in average latency and accuracy, but, more crucially, their latencies and accuracies vary drastically with the number of true faces in the incoming images, i.e., they are context dependent. Specifically, we observe that latency increases drastically with the number of faces in the image. Figure 4 shows the latencies of all four detection algorithm choices vs. the number of true faces in the image. Note that the algorithms differ in average latency, with Algorithm 0 being the fastest ( seconds) and Algorithm 3 the slowest ( seconds).
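A benchmarking loop of this kind can be sketched as below; the detector interface and helper names are hypothetical, since the actual algorithms are proprietary black boxes.

```python
import time
from collections import defaultdict

def benchmark(detectors, images, count_faces):
    """Record per-image wall-clock latency for each black-box detector,
    bucketed by the number of ground-truth faces in the image.
    `detectors` is a list of callables; `count_faces` returns the
    ground-truth face count for an image (both names are illustrative)."""
    stats = defaultdict(list)  # (algorithm index, #faces) -> list of latencies
    for img in images:
        n_faces = count_faces(img)
        for i, detect in enumerate(detectors):
            start = time.perf_counter()
            detect(img)
            stats[(i, n_faces)].append(time.perf_counter() - start)
    return stats
```

Averaging each bucket yields the latency-vs-face-count curves of the kind shown in Figure 4.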

Figure 4: Face detection algorithm choices vs. latency in seconds as a function of the true number of faces in the image. Algorithms 0 and 2 are much faster than Algorithms 1 and 3. All algorithms exhibit increasing latency as the number of faces in the image grows.

The second module is a face landmark detection module that takes as input the original image and the predicted face rectangles output by the face detection module, and computes the locations of facial landmarks such as the nose, eyes, ears, and mouth. There are three landmark detector choices: a 5-point, a 27-point, or an 87-point landmark detector. Again, our benchmarking shows that the 87-point detector takes the most time, at ms per image on average, vs. ms per image for the 27-point algorithm and ms per image for the 5-point one. Since the landmark detectors are applied to each face rectangle returned by the face detector, the computation time grows in proportion to the number of faces. Figure 5 shows example face detections and landmarks detected on images from the validation set of the COCO dataset.

Figure 5: Example face and landmark detections from the COCO validation set. (Left) Face detected (blue rectangle) and landmarks detected within the face (blue dots). The red dots represent ground-truth face landmarks that were not detected. (Right) False face detections (blue rectangles) and wrong landmarks within the rectangles.

Accuracy calculation: To evaluate whether a prediction by the face detection module is a true/false positive/negative, we closely follow the scheme laid out on the COCO Keypoints evaluation page (Lin et al., 2014). Specifically, a rectangle location on the image is considered a true positive if it is within pixels of a ground-truth face annotation, which is quite conservative since the images we use are all resized to a constant size of pixels. Otherwise, it is marked as a false positive. Ground-truth faces that are not “covered” by any predicted face are counted as false negatives. If an image contains no faces and the face detection module also predicts no faces, we count the image as a true negative.
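The counting scheme above can be sketched as follows; we use a greedy per-coordinate pixel-threshold matching purely for illustration, since the exact matching procedure and threshold value are not reproduced here.

```python
def evaluate_detections(predicted, ground_truth, threshold):
    """Count TP/FP/FN/TN for one image. `predicted` and `ground_truth`
    are lists of (x, y) locations; a prediction is a true positive if it
    lies within `threshold` pixels of an unmatched ground-truth face.
    The greedy matching here is an illustrative simplification."""
    unmatched_gt = list(ground_truth)
    tp = fp = 0
    for (px, py) in predicted:
        match = next((g for g in unmatched_gt
                      if abs(g[0] - px) <= threshold and abs(g[1] - py) <= threshold),
                     None)
        if match is not None:
            unmatched_gt.remove(match)
            tp += 1
        else:
            fp += 1
    fn = len(unmatched_gt)  # ground-truth faces covered by no prediction
    tn = 1 if not predicted and not ground_truth else 0  # empty image, nothing predicted
    return tp, fp, fn, tn
```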

Similarly, for the face landmark module, we mark a prediction as a true positive if it is within pixels of the ground-truth landmark annotation, and as a false positive otherwise. All landmarks not “covered” by any predicted face are counted as false negatives. Note that since the COCO keypoint annotations cover the entire human body and include only face landmarks, we do not penalize predictions of the or landmark detection algorithms that are not within the threshold distance of any ground-truth landmark, as that would unfairly count them as false positives (due to the lack of ground-truth annotations).¹ (Since we find an optimal matching between predicted and true keypoints, each false negative also results in a false negative.)
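Since the evaluation relies on an optimal matching between predicted and true keypoints, here is a brute-force sketch of such a matching for small sets; in practice a Hungarian-algorithm routine such as `scipy.optimize.linear_sum_assignment` would be the standard choice, and the L1 distance and function name here are our own illustrative assumptions.

```python
from itertools import permutations

def optimal_match_cost(pred, true):
    """Minimum total L1 distance over all one-to-one matchings of
    predicted to true keypoints. Brute force for illustration only;
    an assignment-problem solver scales far better."""
    if len(pred) > len(true):
        pred, true = true, pred  # always match the smaller set into the larger
    best = float("inf")
    for perm in permutations(range(len(true)), len(pred)):
        cost = sum(abs(pred[i][0] - true[j][0]) + abs(pred[i][1] - true[j][1])
                   for i, j in enumerate(perm))
        best = min(best, cost)
    return best
```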

Figure 6: Test performance curves for the Face Detection and Landmark pipeline with and . The -axis is the performance percentage improvement over static global policy after every 200 episodes of learning on held-out examples. The plots use a latency-based loss (left), latency and false negative rate (middle) and latency, false negative rate and false discovery rate (right). The adaptive approaches significantly improve over the best fixed configuration in all the cases. Shading represents standard error across 5 runs.
Figure 7: Action counts in module 2 for per-module CB

Results: The dataset of 2689 images is divided into train and test sets of sizes 2139 and 550, respectively. For training, we use minibatches randomly sampled from the training set, and test curves are plotted using the average loss over the complete test set. (Unlike the synthetic pipeline, we do not use the average loss over the run of the algorithm here: the number of episodes is much larger than the size of our dataset, so algorithms can overfit to it, whereas in the synthetic case we effectively have an infinite dataset. We therefore evaluate a proxy for the average loss, namely the average test performance on held-out examples, following standard methodology.) We use the embedding from the penultimate layer of ResNet-50 (He et al., 2016) as the contextual representation of each image for both adaptive methods. Thus, the context is a 1000-dimensional real-valued vector. For per-module CB, the first module’s policy network receives the embedding as input, whereas the second additionally receives, concatenated to the embedding, the number of faces detected by module 1 and its latency. All networks here have a hidden layer with 256 units and ReLU activations. To evaluate the final loss function of the pipeline, we consider a combination of three metrics:

  • Pure latency: squared loss between the pipeline’s latency and a threshold : .

  • Latency and accuracy: in addition to the squared distance, we now consider the false negative rate (FNR) of the pipeline for the landmarks detected in each image. Since the false negative rate is always in , it is robust to the varying number of landmarks across images, as well as to the differing numbers of landmarks predicted by the different actions (5, 27, and 87), unlike a direct classification error in landmark prediction. In this case, .

  • Latency, accuracy, and false-detection penalty: for the face detection module, many cases yield non-zero false positives. This further increases the number of false-positive landmarks in those cases, so we add a further penalty equal to the false discovery rate of the face detection.
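The three loss variants can be sketched as below; the weights `w`, `w1`, `w2` and the target latency argument are illustrative placeholders for constants elided from the text.

```python
def latency_loss(latency, target):
    """Bell-shaped latency penalty: squared distance from a target latency."""
    return (latency - target) ** 2

def loss_v1(latency, target):
    """Variant 1: pure latency."""
    return latency_loss(latency, target)

def loss_v2(latency, target, fnr, w=1.0):
    """Variant 2: latency plus the false negative rate (already in [0, 1])."""
    return latency_loss(latency, target) + w * fnr

def loss_v3(latency, target, fnr, fdr, w1=1.0, w2=1.0):
    """Variant 3: variant 2 plus the face detector's false discovery rate."""
    return loss_v2(latency, target, fnr, w1) + w2 * fdr
```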

#Faces #Images Global CB Per-module CB
3 124 11.82% 15.23%
4 83 18.58% 22.51%
5 63 23.04% 24.12%
Table 2: Performance percentage improvement over static global policy for global contextual bandit and per-module contextual bandit, broken down by the number of true faces in the image. Numbers shown here are for latency and accuracy loss with .

In our experiments, we choose a value of and for all three loss functions for the pipeline. Note that if one tries to optimize the total latency of the pipeline, then the non-adaptive solution of choosing the cheapest action for both modules works well. We therefore choose the bell-shaped squared loss for latency, which reflects the specification of aiming for a target latency. Figure 6 shows the observed improvement of the adaptive methods over the static global policy for and . Per-module CB and global CB improve on all loss functions over the constant-assignment baseline found by greedy hill climbing. The numbers in Table 2 show the context dependency of the pipeline: the benefit of algorithms that can effectively utilize context (global CB and per-module CB) is highlighted on the parts of the dataset containing more than 3, 4, or 5 faces. As the number of faces in an image increases, the percentage gain increases as well. The observed gains of approximately 15, 22, and 24 percent in respecting the utility function are arguably significant for sensitive mission-critical applications. Although the two methods are hard to distinguish on average, we attribute this to the short length of the pipeline and to the intermediate context for the second module’s policy not being very informative. To show that adaptivity to the final loss function influences the chosen actions, we compare the counts of actions and for the second module under the first two loss functions. We show these counts for per-module CB: Figure 7 shows that changing the loss function changes the actions chosen on the test set.
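For reference, the greedy hill-climbing baseline used to find a strong static assignment can be sketched as coordinate-wise improvement over module choices; `avg_loss` is a hypothetical oracle returning the average loss of a fixed assignment under the input distribution.

```python
def greedy_hill_climb(choices_per_module, avg_loss):
    """Search for a static assignment by coordinate-wise improvement:
    repeatedly sweep the modules, keeping the best choice for each while
    the others are held fixed, until a full sweep yields no improvement."""
    assignment = [0] * len(choices_per_module)
    improved = True
    while improved:
        improved = False
        for m, k in enumerate(choices_per_module):
            best_a = assignment[m]
            best_l = avg_loss(assignment)
            for a in range(k):
                assignment[m] = a
                loss = avg_loss(assignment)
                if loss < best_l:
                    best_l, best_a, improved = loss, a, True
            assignment[m] = best_a
    return assignment
```

Unlike brute force, this visits only a linear number of assignments per sweep, which is why it remains usable when the number of joint assignments is combinatorially large.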

5.3 Discussion

We observe that contextual optimization of software pipelines can provide drastic improvements in the average performance of the pipeline for any chosen loss function. Our experiments show that for small pipelines, both global CB and per-module CB can improve over a constant assignment. However, these experiments should be considered only a controlled study of the power of contextual optimization, and there are additional caveats that we defer to future work:

Computational overhead: The loss functions we consider for the pipeline involve a combination of latency and accuracy. In addition to the pipeline’s latency, any metareasoning module adds its own cost. In our experiments, the total time for inference and updates is less than 5-7 ms per input, which is orders of magnitude less than the pipeline’s latency. Moreover, making the pipeline configurable in real time might incur further communication and data re-configuration costs. We focus on the potential improvements from adaptivity in this paper and leave the engineering constraints to future work.

Non-stationarity during learning: For the per-module CB algorithm, the input given to each network is ideally the input to the corresponding module. Reconfiguring the pipeline can drastically shift the distribution of inputs to these modules, and a change in one module’s action changes the inputs to all downstream modules. The pipelines in our experiments do not exhibit this issue, so we set this aspect aside in the current exposition and leave a more involved study to future work.

6 Conclusion

We presented the use of reinforcement learning to perform real-time control of the configuration of a modular system so as to maximize the system’s overall utility. We employed contextual bandits and provided them with a holistic representation of the visual scene, along with the ability to both sense and control the parameters of each module. We showed significant improvements from the metareasoning methodology on both the face detection and synthetic pipelines. Future directions include scaling up the mechanisms we have presented to more general systems of interacting modules, and studying different forms of contextual signals and their analyses, including more flexible neural network inference methods.

Acknowledgements

This work was done while AM was at Microsoft Research. AM acknowledges the concurrent support in part by a grant from the Open Philanthropy Project to the Center for Human-Compatible AI, and in part by NSF grant CAREER IIS-1452099.

References

  • Arulkumaran et al. (2017) Arulkumaran, K., Deisenroth, M. P., Brundage, M., and Bharath, A. A. (2017). A brief survey of deep reinforcement learning. arXiv preprint arXiv:1708.05866.
  • Bietti et al. (2018) Bietti, A., Agarwal, A., and Langford, J. (2018). A contextual bandit bake-off. arXiv preprint arXiv:1802.04064.
  • Bradley (2010) Bradley, D. M. (2010). Learning in modular systems. Technical report, Carnegie Mellon University, Robotics Institute, Pittsburgh, PA.
  • Daumé III et al. (2018) Daumé III, H., Langford, J., and Sharaf, A. (2018). Residual loss prediction: Reinforcement learning with no incremental feedback.
  • Delimitrou and Kozyrakis (2013) Delimitrou, C. and Kozyrakis, C. (2013). Paragon: Qos-aware scheduling for heterogeneous datacenters. In ACM SIGPLAN Notices, volume 48, pages 77–88. ACM.
  • Delimitrou and Kozyrakis (2014) Delimitrou, C. and Kozyrakis, C. (2014). Quasar: resource-efficient and qos-aware cluster management. In ACM SIGARCH Computer Architecture News, volume 42, pages 127–144. ACM.
  • Demirci (2015) Demirci, M. (2015). A survey of machine learning applications for energy-efficient resource management in cloud computing environments. In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), pages 1185–1190. IEEE.
  • Fedorova et al. (2007) Fedorova, A., Vengerov, D., and Doucette, D. (2007). Operating system scheduling on heterogeneous core systems. In Proceedings of the Workshop on Operating System Support for Heterogeneous Multicore Architectures.
  • Gao (2014) Gao, J. (2014). Machine learning applications for data center optimization.
  • Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1856–1865.
  • Hanus (2013) Hanus, D. (2013). Smart scheduling: optimizing Tilera’s process scheduling via reinforcement learning. PhD thesis, Massachusetts Institute of Technology.
  • He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778.
  • Hill et al. (2017) Hill, D. N., Nassif, H., Liu, Y., Iyer, A., and Vishwanathan, S. (2017). An efficient bandit algorithm for realtime multivariate optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1813–1821.
  • Hinton et al. (2012) Hinton, G., Srivastava, N., and Swersky, K. (2012). Neural networks for machine learning lecture 6a overview of mini-batch gradient descent.
  • Horvitz (2001) Horvitz, E. (2001). Principles and applications of continual computation. Artificial Intelligence, 126(1-2):159–196.
  • Horvitz and Lengyel (1997) Horvitz, E. and Lengyel, J. (1997). Perception, attention, and resources: A decision-theoretic approach to graphics rendering. In Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence, pages 238–249. Morgan Kaufmann Publishers Inc.
  • Kaelbling et al. (1996) Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237–285.
  • Konda and Tsitsiklis (2000) Konda, V. R. and Tsitsiklis, J. N. (2000). Actor-critic algorithms. In Advances in neural information processing systems, pages 1008–1014.
  • Li (2017) Li, Y. (2017). Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274.
  • Lin et al. (2014) Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer.
  • Lorido-Botran et al. (2014) Lorido-Botran, T., Miguel-Alonso, J., and Lozano, J. A. (2014). A review of auto-scaling techniques for elastic applications in cloud environments. Journal of grid computing, 12(4):559–592.
  • Mao et al. (2016) Mao, H., Alizadeh, M., Menache, I., and Kandula, S. (2016). Resource management with deep reinforcement learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks, pages 50–56. ACM.
  • Memeti et al. (2018) Memeti, S., Pllana, S., Binotto, A., Kołodziej, J., and Brandic, I. (2018). Using meta-heuristics and machine learning for software optimization of parallel computing systems: a systematic literature review. Computing, pages 1–44.
  • Mirhoseini et al. (2017) Mirhoseini, A., Pham, H., Le, Q. V., Steiner, B., Larsen, R., Zhou, Y., Kumar, N., Norouzi, M., Bengio, S., and Dean, J. (2017). Device placement optimization with reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2430–2439. JMLR. org.
  • Paszke et al. (2017) Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in pytorch. In NIPS-W.
  • Raman et al. (2013) Raman, K., Swaminathan, A., Gehrke, J., and Joachims, T. (2013). Beyond myopic inference in big data pipelines. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 86–94.
  • Singh et al. (2000) Singh, S., Jaakkola, T., Littman, M. L., and Szepesvári, C. (2000). Convergence results for single-step on-policy reinforcement-learning algorithms. Machine learning, 38(3):287–308.
  • Sutton et al. (2000) Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063.
  • Watkins (1989) Watkins, C. J. C. H. (1989). Learning from delayed rewards. PhD thesis, King’s College, Cambridge.
  • Williams (1992) Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256.
  • Xu et al. (2012) Xu, C.-Z., Rao, J., and Bu, X. (2012). Url: A unified reinforcement learning approach for autonomic cloud management. Journal of Parallel and Distributed Computing, 72(2):95–105.