Distributed Submodular Maximization

11/03/2014 ∙ by Baharan Mirzasoleiman et al. ∙ ETH Zurich, Yale University

Many large-scale machine learning problems--clustering, non-parametric learning, kernel machines, etc.--require selecting a small yet representative subset from a large dataset. Such problems can often be reduced to maximizing a submodular set function subject to various constraints. Classical approaches to submodular optimization require centralized access to the full dataset, which is impractical for truly large-scale problems. In this paper, we consider the problem of submodular function maximization in a distributed fashion. We develop a simple, two-stage protocol, GreeDi, that is easily implemented using MapReduce style computations. We theoretically analyze our approach, and show that under certain natural conditions, performance close to the centralized approach can be achieved. We begin with monotone submodular maximization subject to a cardinality constraint, and then extend this approach to obtain approximation guarantees for (not necessarily monotone) submodular maximization subject to more general constraints including matroid or knapsack constraints. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse Gaussian process inference and exemplar based clustering on tens of millions of examples using Hadoop.


1 Introduction

Numerous machine learning algorithms require selecting representative subsets of manageable size out of large datasets. Examples range from exemplar based clustering (Dueck and Frey, 2007) to active set selection for non-parametric learning (Rasmussen and Williams, 2006), to viral marketing (Kempe et al., 2003), and data subset selection for the purpose of training complex models (Lin and Bilmes, 2011a). Many such problems can be reduced to the problem of maximizing a submodular set function subject to cardinality or other feasibility constraints such as matroid or knapsack constraints (Gomes and Krause, 2010; Krause and Golovin, 2012; Lee et al., 2009a).

Submodular functions exhibit a natural diminishing returns property common in many well known objectives: the marginal benefit of any given element decreases as we select more and more elements. Functions such as maximum weighted matching and maximum coverage are typical examples of functions with diminishing returns. As a result, submodular function optimization has numerous applications in machine learning and social networks: viral marketing (Kempe et al., 2003; Babaei et al., 2013; Mirzasoleiman et al., 2012), information gathering (Krause and Guestrin, 2011), document summarization (Lin and Bilmes, 2011a), and active learning (Golovin and Krause, 2011; Guillory and Bilmes, 2011).

Although maximizing a submodular function is NP-hard in general, a seminal result of Nemhauser et al. (1978) states that a simple greedy algorithm produces solutions competitive with the optimal (intractable) solution (Nemhauser and Wolsey, 1978; Feige, 1998). However, such greedy algorithms do not scale well when the dataset is massive. As data volumes in modern applications increase faster than the ability of individual computers to process them, we need to adapt our computations to exploit parallelism.

MapReduce (Dean and Ghemawat, 2004) is arguably one of the most successful programming models for reliable and efficient parallel computing. It works by distributing the data to independent machines: map tasks redistribute the data for appropriate parallel processing and the output then gets sorted and processed in parallel by reduce tasks.

To perform submodular optimization in MapReduce, we need to design suitable parallel algorithms. The greedy algorithms that work well for centralized submodular optimization do not translate easily to parallel environments: they are inherently sequential, since the marginal gain of each element depends on the elements picked in previous iterations. This mismatch makes it inefficient to apply classical algorithms directly to parallel setups.

In this paper, we develop a distributed procedure for maximizing submodular functions that can be easily implemented in MapReduce. Our strategy is to use simple random selections to partition and process data in parallel. In particular:

  • We present a simple, parallel protocol, called GreeDi, for distributed submodular maximization subject to cardinality constraints. It requires minimal communication, and can be easily implemented in MapReduce style parallel computation models.

  • We show that under some natural conditions, for large datasets the quality of the obtained solution is provably competitive with the best centralized solution.

  • We discuss extensions of our approach to obtain approximation algorithms for (not-necessarily monotone) submodular maximization subject to more general types of constraints, including matroid and knapsack constraints.

  • We implement our approach for exemplar based clustering and active set selection in Hadoop, and show how it allows us to scale exemplar based clustering and sparse Gaussian process inference to datasets containing tens of millions of points.

  • We extensively evaluate our algorithm on several machine learning problems, including exemplar based clustering, active set selection, and finding cuts in graphs, and show that our approach leads to parallel solutions that are very competitive with those obtained via centralized methods.

This paper is organized as follows. We begin in Section 2 by discussing background and related work. In Section 3, we formalize the distributed submodular maximization problem under cardinality constraints, and introduce example applications as well as naive approaches toward solving the problem. We subsequently present our GreeDi algorithm in Section 4, and prove its approximation guarantees. We then consider maximizing a submodular function subject to more general constraints in Section 5. We also present computational experiments on very large datasets in Section 6, showing that in addition to its provable approximation guarantees, our algorithm provides results close to the centralized greedy algorithm. We conclude in Section 7.

2 Background and Related Work

Distributed Data Analysis and MapReduce

Due to the rapid increase in dataset sizes, and the relatively slow advances in sequential processing capabilities of modern CPUs, parallel computing paradigms have received much interest. Inhabiting a sweet spot of resiliency, expressivity and programming ease, the MapReduce style computing model (Dean and Ghemawat, 2004) has emerged as a prominent foundation for large scale machine learning and data mining algorithms (Chu et al., 2006; Ekanayake et al., 2008). A MapReduce job takes the input data as a set of ⟨key, value⟩ pairs. Each job consists of three stages: the map stage, the shuffle stage, and the reduce stage. The map stage partitions the data randomly across a number of machines by associating each element with a key, producing a set of ⟨key, value⟩ pairs. In the shuffle stage, the values of all elements sharing the same key are merged and sent to the same machine. Each reducer then processes the values associated with a single key and outputs a set of new ⟨key, value⟩ pairs with the same key. The reducers’ output can serve as input to another MapReduce job, and a program in the MapReduce paradigm may consist of multiple rounds of map and reduce stages (Karloff et al., 2010).
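As an illustrative sketch (not tied to any particular Hadoop API; the function names and toy job are our own), the three stages can be emulated in plain Python:

```python
from collections import defaultdict

def map_stage(records, mapper):
    """Apply the mapper to every input record, emitting (key, value) pairs."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle_stage(pairs):
    """Group the values of all elements sharing a key, as the shuffle stage does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_stage(groups, reducer):
    """Apply the reducer to each key's merged values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy word-count job; in Hadoop the stages would run across machines.
lines = ["the quick fox", "the lazy dog"]
pairs = map_stage(lines, lambda line: [(w, 1) for w in line.split()])
counts = reduce_stage(shuffle_stage(pairs), lambda k, vs: sum(vs))
print(counts["the"])  # → 2
```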

Centralized and Streaming Submodular Maximization

The problem of centralized maximization of submodular functions has received much interest, starting with the seminal work of Nemhauser et al. (1978). Recent work has focused on providing approximation guarantees for more complex constraints (for a more detailed account, see the recent survey by Krause and Golovin, 2012). Golovin et al. (2010) consider an algorithm for online distributed submodular maximization with an application to sensor selection. However, their approach requires k stages of communication, which is unrealistic for large k in a MapReduce style model. Gomes and Krause (2010) consider the problem of submodular maximization in a streaming model; however, their approach makes strong assumptions about the way the data stream is generated and is not applicable to the general distributed setting. Very recently, Badanidiyuru et al. (2014) provided a single pass streaming algorithm for cardinality-constrained submodular maximization with a (1/2 − ε) approximation guarantee to the optimum solution that makes no assumptions on the data stream.

There have also been improvements in the running time of the standard greedy solution for solving SET-COVER (a special case of submodular maximization) when the data is large and disk resident (Cormode et al., 2010). More generally, Badanidiyuru and Vondrák (2014) improve the running time of the greedy algorithm for maximizing a monotone submodular function by reducing the number of oracle calls to the objective function. Following the same line of thought, Wei et al. (2014) propose a multi-stage framework for submodular maximization. In order to reduce the memory and execution complexity, they apply an approximate greedy procedure to maximize surrogate (proxy) submodular functions instead of optimizing the target function at each stage. Both approaches are sequential in nature and it is not clear how to parallelize them. However, they can be readily integrated into our distributed framework for further optimization.

Scaling Up: Distributed Algorithms

Recent work has focused on specific instances of submodular optimization in distributed settings. Such scenarios often occur in large-scale graph mining problems where the data itself is too large to be stored on one machine. In particular, Chierichetti et al. (2010) address the MAX-COVER problem and provide a (1 − 1/e) approximation to the centralized algorithm, at the cost of passing over the dataset many times. Their result is further improved by Blelloch et al. (2011). Lattanzi et al. (2011) address more general graph problems by introducing the idea of filtering, namely, reducing the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. This idea is, in spirit, similar to our distributed method GreeDi. In contrast, we provide a more general framework, and characterize settings where performance competitive with the centralized setting can be obtained. The present version is a significant extension of our previous conference paper (Mirzasoleiman et al., 2013), providing theoretical guarantees for both monotone and non-monotone submodular maximization problems subject to more general types of constraints, including matroid and knapsack constraints (described in Section 5), and additional empirical results (Section 6). In parallel to our efforts (Mirzasoleiman et al., 2013), Kumar et al. (2013) have taken the approach of adapting the sequential greedy algorithm to distributed settings. However, their method requires knowledge of the ratio between the largest and smallest marginal gains of the elements, and generally requires a non-constant (logarithmic) number of rounds. We provide empirical comparisons in Section 6.4.

3 Submodular Maximization

In this section, we first review submodular functions and how to greedily maximize them. We then describe the distributed submodular maximization problem, the focus of this paper. Finally, we discuss two naive approaches towards solving this problem.

3.1 Greedy Submodular Maximization

Suppose that we have a large dataset of images, e.g. the set of all images on the Web or an online image hosting website such as Flickr, and we wish to retrieve a subset of images that best represents the visual appearance of the dataset. Collectively, these images can be considered as exemplars that summarize the visual categories of the dataset as shown in Fig. 1.

Figure 1: Cluster exemplars (left column) discovered by our distributed algorithm GreeDi described in Section 4 applied to the Tiny Images dataset (Torralba et al., 2008), and a set of representatives from each cluster.

One way to approach this problem is to formalize it as the k-medoid problem. Given a set V of images (called the ground set) associated with a (not necessarily symmetric) dissimilarity function, we seek to select a subset S ⊆ V of at most k exemplars or cluster centers, and then assign each image in the dataset to its least dissimilar exemplar. If an element e is assigned to exemplar v, then the cost associated with e is the dissimilarity between e and v. The goal of the k-medoid problem is to choose exemplars that minimize the sum of dissimilarities between every data point and its assigned cluster center.

Solving the k-medoid problem optimally is NP-hard. However, as we discuss in Section 3.4, we can transform this problem, and many other summarization tasks, into the problem of maximizing a monotone submodular function f subject to a cardinality constraint:

A^c[k] = argmax{ f(S) : S ⊆ V, |S| ≤ k }.   (1)

Submodular functions are set functions which satisfy the following natural diminishing returns property.

(Nemhauser et al., 1978) A set function f : 2^V → ℝ satisfies submodularity if for every A ⊆ B ⊆ V and e ∈ V \ B,

f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B).

Furthermore, f is called monotone if and only if for all A ⊆ B ⊆ V it holds that f(A) ≤ f(B), and non-monotone otherwise. We will generally require that f is nonnegative, i.e., f(S) ≥ 0 for all sets S ⊆ V.

Problem (1) is NP-hard for many classes of submodular functions (Feige, 1998). A fundamental result by Nemhauser et al. (1978) establishes that a simple greedy algorithm, which starts with the empty set and iteratively augments the current solution S with the element of maximum incremental value

Δ(e | S) = f(S ∪ {e}) − f(S),   (2)

continuing until k elements have been selected, is guaranteed to provide a constant factor approximation.

(Nemhauser et al., 1978) For any non-negative and monotone submodular function f, the greedy heuristic always produces a solution A^gc[k] of size k that achieves at least a constant factor (1 − 1/e) of the optimal solution:

f(A^gc[k]) ≥ (1 − 1/e) f(A^c[k]).

This result can be easily extended to f(A^gc[ℓ]) ≥ (1 − e^(−ℓ/k)) f(A^c[k]), where ℓ and k are two positive integers (see Krause and Golovin, 2012).
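The greedy rule (2) is short enough to sketch directly; below is a minimal Python version, exercised on a toy coverage function (the data and function names are illustrative, not from the paper):

```python
def greedy_max(ground_set, f, k):
    """Greedy rule (2): repeatedly add the element of maximum marginal gain
    f(S + e) - f(S) until k elements are selected."""
    S = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set:
            if e in S:
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        S.append(best)
    return S

# Toy coverage function (hypothetical data): f(S) = size of the covered union.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_max(list(sets), f, 2))  # → [1, 2]
```

Coverage functions like this one are monotone submodular, so the (1 − 1/e) guarantee above applies.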

3.2 Distributed Submodular Maximization

In many of today’s applications, the size of the ground set V is very large and cannot be stored on a single computer, so running the standard greedy algorithm or its variants (e.g., with lazy evaluations; Minoux, 1978; Leskovec et al., 2007) in a centralized manner is infeasible. Hence, we seek a solution that is suitable for large-scale parallel computation. The greedy method described above is in general difficult to parallelize, since it is inherently sequential: at each step, only the element with the highest marginal gain is chosen, and every subsequent step depends on the preceding ones.

Concretely, we consider the setting where the ground set V is very large and cannot be handled on a single machine, and thus must be distributed among a set of m machines. While there are several approaches towards parallel computation, in this paper we consider the following model, which can be naturally implemented in MapReduce. The computation proceeds in a sequence of rounds. In each round, the dataset is distributed to the m machines. Each machine carries out computations independently in parallel on its local data. After all machines finish, they synchronize by exchanging a limited amount of data (of size polynomial in m and k, but independent of n). Hence, any distributed algorithm in this model must specify: 1) how to distribute V among the m machines, 2) which algorithm should run on each machine, and 3) how to communicate and merge the resulting solutions.

In particular, the distributed submodular maximization problem requires specifying each of the above steps. More precisely, given a monotone submodular function f, a cardinality constraint k, and a number of machines m, we wish to produce a solution S of size k such that f(S) is competitive with the optimal centralized solution f(A^c[k]).

3.3 Naive Approaches Towards Distributed Submodular Maximization

One way to solve problem (1) in a distributed fashion is as follows. The dataset is first partitioned (randomly, or using some other strategy) onto the m machines, with V_i representing the data allocated to machine i. We then proceed in k rounds. In each round, all machines – in parallel – compute the marginal gains of all elements in their sets V_i. Next, they communicate their best candidate to a central processor, which identifies the globally best element; this element is in turn communicated to the machines and taken into account when computing the marginal gains and selecting subsequent elements. This algorithm (up to decisions on how to break ties) implements exactly the centralized greedy algorithm, and hence provides the same approximation guarantees on the quality of the solution. Unfortunately, this approach requires synchronization after each of the k rounds. In many applications, k is quite large (e.g., tens of thousands), rendering this approach impractical for MapReduce style computations.
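A sketch of this first naive protocol, with the per-round synchronization made explicit (the partitions and coverage objective are toy examples of our own):

```python
def distributed_greedy(partitions, f, k):
    """Exact emulation of centralized greedy: one synchronization round per
    selected element (k rounds total)."""
    S = []
    for _ in range(k):
        # Each machine proposes its best local candidate (done in parallel).
        candidates = []
        for P in partitions:
            best = max((e for e in P if e not in S),
                       key=lambda e: f(S + [e]) - f(S), default=None)
            if best is not None:
                candidates.append(best)
        # The central processor picks the global best and broadcasts it.
        S.append(max(candidates, key=lambda e: f(S + [e]) - f(S)))
    return S

# Toy coverage objective (hypothetical data): element i covers sets[i].
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"d"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(distributed_greedy([[1, 2], [3, 4]], f, 2))  # → [1, 3]
```

The result matches centralized greedy, but at the cost of k communication rounds.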

An alternative approach for large k would be to greedily select k/m elements independently on each machine (without synchronization), and then merge them to obtain a solution of size k. This approach requires only two rounds (as opposed to k), is much more communication efficient, and can be easily implemented using a single MapReduce stage. Unfortunately, many machines may select redundant elements, and thus the merged solution may suffer from diminishing returns. It is not hard to construct examples for which this approach produces solutions that are a factor of min(m, k) worse than the centralized solution.

In Section 4, we introduce an alternative protocol GreeDi, which requires little communication, while at the same time yielding a solution competitive with the centralized one, under certain natural additional assumptions.

3.4 Applications of Distributed Submodular Maximization

In this part, we discuss two concrete problem instances, with their corresponding submodular objective functions f, where the size of the datasets often requires a distributed solution for the underlying submodular maximization.

3.4.1 Large-scale nonparametric learning

Nonparametric learning (i.e., learning of models whose complexity may depend on the dataset size n) is notoriously hard to scale to large datasets. A concrete instance of this problem arises when training Gaussian processes or performing MAP inference in Determinantal Point Processes, as considered below. Similar challenges arise in many related learning methods, such as training kernel machines, when attempting to scale them to large datasets.

Active Set Selection in Sparse Gaussian Processes (GPs)

Formally, a GP is a joint probability distribution over a (possibly infinite) set of random variables X_V, indexed by the ground set V, such that every (finite) subset X_S for S = {e_1, …, e_s} is distributed according to a multivariate normal distribution. More precisely, we have

P(X_S = x_S) = N(x_S; μ_S, Σ_{S,S}),

where μ_S and Σ_{S,S} are the prior mean vector and covariance matrix, respectively. The covariance matrix is parametrized via a positive definite kernel K. As a concrete example, when elements of the ground set V are embedded in a Euclidean space, a commonly used kernel in practice is the squared exponential kernel, defined as follows:

K(e_i, e_j) = exp(−‖e_i − e_j‖² / h²).

Gaussian processes are commonly used as priors for nonparametric regression. In GP regression, each data point is considered a random variable. Upon observations y_A = x_A + n_A at a set A ⊆ V (where n_A is a vector of independent Gaussian noise with variance σ²), the predictive distribution of a new data point e ∈ V is a normal distribution P(X_e | y_A) = N(μ_{e|A}, σ²_{e|A}), where mean and variance are given by

μ_{e|A} = μ_e + Σ_{e,A}(Σ_{A,A} + σ²I)^{−1}(y_A − μ_A),   (3)
σ²_{e|A} = σ²_e − Σ_{e,A}(Σ_{A,A} + σ²I)^{−1}Σ_{A,e}.   (4)

Evaluating (3) and (4) is computationally expensive, as it requires solving a linear system in |A| variables. Instead, most efficient approaches for making predictions in GPs rely on choosing a small – so called active – set of data points. For instance, in the Informative Vector Machine (IVM) one seeks a set S such that the information gain,

F(S) = I(Y_S; X_V) = H(X_V) − H(X_V | Y_S) = ½ log det(I + σ^{−2} Σ_{S,S}),

is maximized. It can be shown that this choice of F is monotone submodular (Krause and Guestrin, 2005a). For medium-scale problems, the standard greedy algorithms provide good solutions. For massive data, however, we need to resort to distributed algorithms. In Section 6, we will show how GreeDi can choose near-optimal subsets out of a dataset of 45 million vectors.
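A small self-contained sketch of the IVM objective F(S) = ½ log det(I + σ⁻²Σ_{S,S}); the squared exponential kernel (with unit bandwidth) and the data points are hypothetical, and the final check illustrates the diminishing returns property:

```python
import math

def logdet(M):
    """Log-determinant via Gaussian elimination (fine for small SPD matrices)."""
    A = [row[:] for row in M]
    ld = 0.0
    for i in range(len(A)):
        ld += math.log(A[i][i])          # pivots are positive for SPD matrices
        for j in range(i + 1, len(A)):
            r = A[j][i] / A[i][i]
            for c in range(i, len(A)):
                A[j][c] -= r * A[i][c]
    return ld

def info_gain(S, kernel, sigma2):
    """F(S) = 1/2 * log det(I + sigma^-2 * Sigma_{S,S}) for an active set S."""
    if not S:
        return 0.0
    M = [[(1.0 if i == j else 0.0) + kernel(a, b) / sigma2
          for j, b in enumerate(S)] for i, a in enumerate(S)]
    return 0.5 * logdet(M)

# Squared exponential kernel on the real line (bandwidth h = 1, hypothetical).
se = lambda x, y: math.exp(-(x - y) ** 2)
# Diminishing returns: the gain of adding 2.0 shrinks once 1.9 is selected.
g1 = info_gain([0.0, 2.0], se, 0.5) - info_gain([0.0], se, 0.5)
g2 = info_gain([0.0, 1.9, 2.0], se, 0.5) - info_gain([0.0, 1.9], se, 0.5)
assert g2 < g1
```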

Inference for Determinantal Point Processes

A very similar problem arises when performing inference in Determinantal Point Processes (DPPs). DPPs (Macchi, 1975) are distributions over subsets with a preference for diversity, i.e., there is a higher probability associated with sets containing dissimilar elements. Formally, a point process P on a set of items V = {1, 2, …, N} is a probability measure on 2^V (the set of all subsets of V). P is called a determinantal point process if for every S ⊆ V we have

P(S) ∝ det(K_S),

where K is a positive semidefinite kernel matrix, and K_S ≡ [K_{i,j}]_{i,j ∈ S} is the restriction of K to the entries indexed by elements of S (we adopt det(K_∅) = 1). The normalization constant can be computed explicitly from the following equation:

∑_{S ⊆ V} det(K_S) = det(I + K),

where I is the N × N identity matrix. Intuitively, the kernel matrix K determines which items are similar and therefore less likely to appear together.

In order to find the most diverse and informative subset of size at most k, we need to find argmax_{|S| ≤ k} det(K_S), which is NP-hard, as the total number of possible subsets is exponential (Ko et al., 1995). However, the objective function is log-submodular, i.e., f(S) = log det(K_S) is a submodular function (Kulesza and Taskar, 2012). Hence, MAP inference in large DPPs is another potential application of distributed submodular maximization.
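The diversity preference is easy to see numerically: for a 2 × 2 restriction, det(K_S) shrinks as the off-diagonal similarity grows. The kernel values below are made up for illustration:

```python
# det(K_S) for 2x2 restrictions of a kernel matrix (values are hypothetical).
det2 = lambda a, b, c, d: a * d - b * c
K = {(1, 1): 1.0, (2, 2): 1.0, (3, 3): 1.0,
     (1, 2): 0.9,                 # items 1 and 2 are similar
     (1, 3): 0.1, (2, 3): 0.1}    # item 3 is dissimilar to both
sim = lambda i, j: K[(min(i, j), max(i, j))]
det_S = lambda i, j: det2(sim(i, i), sim(i, j), sim(i, j), sim(j, j))
# The diverse pair {1, 3} gets higher unnormalized probability than {1, 2}.
assert det_S(1, 3) > det_S(1, 2)   # 0.99 vs. ~0.19
```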

3.4.2 Large-scale Exemplar Based Clustering

Suppose we wish to select a set of k exemplars that best represent a massive dataset. One approach for finding such exemplars is solving the k-medoid problem (Kaufman and Rousseeuw, 2009), which aims to minimize the sum of pairwise dissimilarities between exemplars and elements of the dataset. More precisely, let us assume that for the dataset V we are given a nonnegative function d (not necessarily assumed symmetric, nor obeying the triangle inequality) such that d(e, e′) encodes dissimilarity between elements of the underlying set V. Then, the cost function for the k-medoid problem is:

L(S) = (1/|V|) ∑_{e ∈ V} min_{v ∈ S} d(e, v).   (5)

Finding the subset S of cardinality at most k that minimizes the cost function (5) is NP-hard. However, by introducing an auxiliary element e_0, a.k.a. phantom exemplar, we can turn L into a monotone submodular function (Krause and Gomes, 2010):

f(S) = L({e_0}) − L(S ∪ {e_0}).   (6)

In words, f measures the decrease in the loss associated with the set S versus the loss associated with just the auxiliary element. We begin with a phantom exemplar and try to find the active set that, together with the phantom exemplar, reduces the value of our loss function more than any other set. Technically, any point e_0 that satisfies the following condition can be used as a phantom exemplar:

d(e, e_0) ≥ max_{e′, e″ ∈ V} d(e′, e″) for all e ∈ V.

This condition ensures that once the distance between any e and e_0 is greater than the maximum distance between elements in the dataset, e_0 never serves as the closest exemplar of any point once S is nonempty. As a result, maximizing f (a monotone submodular function) is equivalent to minimizing the cost function L. This problem becomes especially computationally challenging when we have a large dataset and we wish to extract a manageable-size set of exemplars, further motivating our distributed approach.
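A minimal sketch of the surrogate objective (6), with a hypothetical 1-D dataset and a phantom exemplar placed far outside the data, as the condition above requires:

```python
def loss(S, data, d):
    """L(S): average dissimilarity of each point to its closest exemplar in S."""
    return sum(min(d(e, v) for v in S) for e in data) / len(data)

def f(S, data, d, e0):
    """Monotone submodular surrogate (6): reduction in loss relative to the
    phantom exemplar e0 alone."""
    return loss([e0], data, d) - loss(list(S) + [e0], data, d)

# Hypothetical 1-D dataset with squared-distance dissimilarity; the phantom
# exemplar e0 = 100.0 lies far from every data point.
data = [0.0, 0.1, 5.0, 5.2]
d = lambda a, b: (a - b) ** 2
# Monotone: adding a second exemplar only improves the surrogate objective.
assert f([0.0, 5.0], data, d, 100.0) > f([0.0], data, d, 100.0) > 0.0
```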

3.4.3 Other Examples

Numerous other real world problems in machine learning can be modeled as maximizing a monotone submodular function subject to appropriate constraints (e.g., cardinality, matroid, knapsack). To name a few, specific applications that have been considered range from efficient content discovery for web crawlers and multi topic blog-watch (Chierichetti et al., 2010), through document summarization (Lin and Bilmes, 2011b) and speech data subset selection (Wei et al., 2013), to outbreak detection in social networks (Leskovec et al., 2007), online advertising and network routing (De Vries and Vohra, 2003), revenue maximization in social networks (Hartline et al., 2008), and inferring networks of influence (Gomez Rodriguez et al., 2010). In all such examples, the size of the dataset (e.g., the number of webpages, the size of the corpus, the number of blogs in the blogosphere, the number of nodes in social networks) is massive, so GreeDi offers a scalable alternative to the standard greedy algorithm for such problems.

4 The GreeDi Approach for Distributed Submodular Maximization

In this section we present our main results. We first provide our distributed solution GreeDi for maximizing submodular functions under cardinality constraints. We then show how we can make use of the geometry of data inherent in many practical settings in order to obtain strong data-dependent bounds on the performance of our distributed algorithm.

4.1 An Intractable, yet Communication Efficient Approach

Before we introduce GreeDi, we first consider an intractable, but communication-efficient, two-round parallel protocol to illustrate the ideas. This approach, shown in Algorithm 1, first distributes the ground set V to m machines. Each machine i then finds the optimal solution A_i^c[k], i.e., a set of cardinality at most k that maximizes the value of f in its partition. These solutions are then merged into B = ∪_i A_i^c[k], and the optimal subset of cardinality k is found in the combined set B. We denote this distributed solution by A^d[m, k].

As the optimum centralized solution A^c[k] achieves the maximum value of the submodular function, it is clear that f(A^c[k]) ≥ f(A^d[m, k]). For the special case of selecting a single element, i.e., k = 1, we have f(A^c[1]) = f(A^d[m, 1]). Furthermore, for modular functions (i.e., those f for which both f and −f are submodular), it is easy to see that the distributed scheme in fact returns the optimal centralized solution as well. In general, however, there can be a gap between the distributed and the centralized solution. Nonetheless, as the following theorem shows, this gap cannot be more than a factor of min(√k, m). Furthermore, this is the best result one can hope for under the two-round model.

Theorem 4.1. Let f be a monotone submodular function and let k > 0. Then f(A^d[m, k]) ≥ f(A^c[k]) / min(√k, m). In contrast, for any value of m and k, there is a monotone submodular function f and a partition of V such that f(A^d[m, k]) = f(A^c[k]) / min(√k, m).

The proofs of all theorems can be found in the appendix. The above theorem fully characterizes the performance of two-round distributed algorithms in terms of the best centralized solution. In practice, we cannot run Algorithm 1, since there is no efficient way to identify the optimum subset A_i^c[k] in set V_i, unless P = NP. In the following, we introduce an efficient distributed approximation – GreeDi.

0:  Input: set V, number of partitions m, cardinality constraint k.
0:  Output: set A^d[m, k].
1:  Partition V into m sets V_1, V_2, …, V_m.
2:  In each partition V_i find the optimum set A_i^c[k] of cardinality k.
3:  Merge the resulting sets: B = ∪_{i=1}^{m} A_i^c[k].
4:  Find the optimum set of cardinality k in B. Output this solution A^d[m, k].
Algorithm 1 Intractable Distributed Submodular Maximization

4.2 Our GreeDi Approximation

Our efficient distributed method GreeDi is shown in Algorithm 2. It parallels the intractable Algorithm 1, but replaces the selection of optimal subsets, i.e., A_i^c[k], by greedy solutions A_i^gc[κ]. Due to the approximate nature of the greedy algorithm, we allow it to pick sets slightly larger than k. More precisely, GreeDi is a two-round algorithm that takes the ground set V, the number of partitions m, and the cardinality constraints κ and k. It first distributes the ground set over m machines. Then each machine separately runs the standard greedy algorithm by sequentially finding an element that maximizes the discrete derivative (2). Each machine – in parallel – continues adding elements to its set until it reaches κ elements. We define A_max to be the set with the maximum value among {A_1^gc[κ], …, A_m^gc[κ]}. Then the solutions are merged, i.e., B = ∪_i A_i^gc[κ], and another round of greedy selection is performed over B until k elements are selected; we denote this solution by B^gc[k]. The final distributed solution with parameters κ and k, denoted by A^gd[m, κ, k], is the set with the higher value between B^gc[k] and A_max (Figure 2 shows GreeDi schematically). The following result parallels Theorem 4.1.

0:  Input: set V, number of partitions m, cardinality constraints κ and k.
0:  Output: set A^gd[m, κ, k].
1:  Partition V into m sets V_1, V_2, …, V_m.
2:  Run the standard greedy algorithm on each set V_i to find a solution A_i^gc[κ].
3:  Find A_max = argmax{ f(A) : A ∈ {A_1^gc[κ], …, A_m^gc[κ]} }.
4:  Merge the resulting sets: B = ∪_{i=1}^{m} A_i^gc[κ].
5:  Run the standard greedy algorithm on B to find a solution B^gc[k].
6:  Return A^gd[m, κ, k] = argmax{ f(A) : A ∈ {A_max, B^gc[k]} }.
Algorithm 2 Greedy Distributed Submodular Maximization (GreeDi)
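For concreteness, a compact single-process Python sketch of Algorithm 2 with κ = k; the toy coverage instance is our own, and a real deployment would run the first round as parallel map tasks:

```python
import random

def greedy(V, f, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain."""
    S = []
    for _ in range(min(k, len(V))):
        S.append(max((x for x in V if x not in S),
                     key=lambda x: f(S + [x]) - f(S)))
    return S

def greedi(V, f, k, m, seed=0):
    """Two-round GreeDi sketch (kappa = k): random partition, local greedy runs,
    greedy over the merged local solutions, return the better of the two."""
    rng = random.Random(seed)
    parts = [[] for _ in range(m)]
    for e in V:                                   # distribute V over m "machines"
        parts[rng.randrange(m)].append(e)
    local = [greedy(P, f, k) for P in parts]      # round 1 (parallel in practice)
    B = greedy([e for sol in local for e in sol], f, k)   # round 2 on merged set
    A_max = max(local, key=f)                     # best single-machine solution
    return max([B, A_max], key=f)

# Toy coverage instance (hypothetical data): element i covers {i, i+1}.
sets = {i: {i, i + 1} for i in range(10)}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
sol = greedi(list(sets), f, k=3, m=2)
print(len(sol), f(sol))  # → 3 6
```

Only one synchronization point (the merge) is needed, versus k rounds for the naive exact emulation of Section 3.3.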

Theorem 4.2. Let f be a monotone submodular function and let κ ≥ k > 0. Then

f(A^gd[m, κ, k]) ≥ ((1 − e^(−κ/k)) / min(√k, m)) · f(A^c[k]).

For the special case of κ = k, the result of Theorem 4.2 simplifies to f(A^gd[m, k, k]) ≥ ((1 − 1/e) / min(√k, m)) · f(A^c[k]). Moreover, it is straightforward to generalize GreeDi to multiple rounds (i.e., more than two) for very large datasets.

Figure 2: Illustration of our two-round algorithm GreeDi

In light of Theorem 4.1, it is clear that in general one cannot hope to eliminate the dependency of the distributed solution on min(√k, m). However, as we show in the sequel, in many practical settings the ground set exhibits rich geometric structure that can be used to obtain improved analytical results.

4.3 Performance on Datasets with Geometric Structure

In practice, we can hope to do much better than the worst case bounds shown previously by exploiting underlying structure often present in real data and important set functions. In this part, we assume that a metric d exists on the data elements, and analyze performance of the algorithm on functions that change gracefully with changes in their input. We refer to these as Lipschitz functions: Let λ > 0. A set function f is λ-Lipschitz w.r.t. metric d if, for any integer k, any equal sized sets S = {e_1, …, e_k} ⊆ V and S′ = {e′_1, …, e′_k} ⊆ V, and any matching of their elements M = {(e_1, e′_1), …, (e_k, e′_k)}, the difference between f(S) and f(S′) is bounded by:

|f(S) − f(S′)| ≤ λ ∑_i d(e_i, e′_i).   (7)

We can show that the objective functions from both examples in Section 3.4 are -Lipschitz for suitable kernels/distance functions:

Suppose that the covariance matrix of a Gaussian process is parametrized via a positive definite kernel K which is Lipschitz continuous with respect to metric d with constant L, i.e., for any triple of points e_1, e_2, e_3 ∈ V we have |K(e_1, e_3) − K(e_2, e_3)| ≤ L·d(e_1, e_2). Then the mutual information F for the Gaussian process is λ-Lipschitz, with a constant λ that grows with L and with k, the number of elements in the selected subset S.

Let d be a metric on the elements of the dataset, and let ℓ encode the dissimilarity between elements of the underlying set V. Then for ℓ = d or ℓ = d², the loss function L (and hence also the corresponding submodular utility function f) is λ-Lipschitz, with a constant λ that depends on the diameter R of the ball encompassing the elements of the dataset in the metric space. In particular, for the k-medoid problem, which minimizes the loss function over all clusters with respect to ℓ = d, we have λ = 1, and for the k-means problem, which minimizes the loss function over all clusters with respect to ℓ = d², we have λ = 2R.

Beyond Lipschitz-continuity, many practical instances of submodular maximization can be expected to satisfy a natural density condition. Concretely, whenever we consider a representative set (i.e., an optimal solution to the submodular maximization problem), we expect that any of its constituent elements has many potential candidates for replacement in the ground set. For example, in our exemplar-based clustering application, we expect that cluster centers are not isolated points, but have many almost equally representative points close by. Formally, for any element v ∈ V, we define its α-neighborhood as the set of elements in V within distance α from v (i.e., α-close to v):

N_α(v) = {w ∈ V : d(v, w) ≤ α}.

By λ-Lipschitz-continuity, if we replace an element v in set S by an α-close element v′ (i.e., d(v, v′) ≤ α) to get a new set S′ of equal size, it must hold that |f(S) − f(S′)| ≤ αλ.
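As a quick numeric sanity check of this replacement bound, consider the modular function f(S) = sum of the elements of S, which is 1-Lipschitz w.r.t. d(x, y) = |x − y| (a toy example of our own, not one of the paper's objectives):

```python
# f(S) = sum(S) is modular, hence 1-Lipschitz w.r.t. d(x, y) = |x - y|:
# matching S to S' element-wise bounds |f(S) - f(S')| by the total moved distance.
f = lambda S: sum(S)
S = [1.0, 4.0, 9.0]
alpha, lam = 0.05, 1.0
S_prime = [1.0, 4.0 + alpha, 9.0]   # replace one element by an alpha-close one
assert abs(f(S) - f(S_prime)) <= lam * alpha + 1e-12
```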

As described earlier, our algorithm GreeDi partitions V into m sets for parallel processing. If in addition we assume that the elements are assigned uniformly at random to the different machines, that α-neighborhoods are sufficiently dense, and that the submodular function is λ-Lipschitz continuous, then GreeDi is guaranteed to produce a solution close to the centralized one. More formally, we have the following theorem. Under the conditions that 1) elements are assigned uniformly at random to the m machines, 2) for each element e_i of the optimal solution A^c[k] the neighborhood N_α(e_i) is sufficiently large (its size growing as km log(k/δ)), and 3) f is λ-Lipschitz continuous, with probability at least 1 − δ the following holds:

f(A^gd[m, κ, k]) ≥ (1 − e^(−κ/k)) (f(A^c[k]) − λαk).

Note that once the above conditions are satisfied for small values of ε (meaning that there is a high density of data points within a small distance from each element of the optimal solution), the distributed solution will be close to the optimal centralized one. In particular, as ε goes to zero, the distributed solution is guaranteed to be within a constant factor of the optimal centralized solution. This situation naturally corresponds to very large datasets. In the following, we discuss this important scenario more thoroughly.

4.4 Performance Guarantees for Very Large Datasets

Suppose that our dataset V is a finite sample drawn i.i.d. from an underlying infinite set, according to some (unknown) probability distribution. Let A* be an optimal solution in the infinite set, such that around each element of A* there is a neighborhood of radius at least ε* in which the probability density is at least ρ at all points (for some constants ε* and ρ). This implies that the solution consists of elements coming from reasonably dense, and therefore representative, regions of the dataset.

Let g denote the growth function of the metric: g(r) is defined as the volume of a ball of radius r centered at a point in the metric space. This means that for ε ≤ ε*, the probability of a random element falling in the ε-neighborhood of an element of A* is at least ρ · g(ε), and the expected number of ε-neighbors of that element is at least n · ρ · g(ε). As a concrete example, Euclidean metrics of dimension D have g(r) ∝ r^D. Note that for simplicity we are assuming the metric to be homogeneous, so that the growth function is the same at every point. For heterogeneous spaces, we instead require a uniform lower bound on the growth function at every point.

In these circumstances, the following theorem guarantees that if the dataset is sufficiently large and f is λ-Lipschitz, then GreeDi produces a solution close to the centralized one.

If the dataset is sufficiently large (with the required size n depending on k, m, ρ, g, and the failure probability δ) and the algorithm GreeDi assigns elements uniformly at random to the m processors, then with probability at least 1 − δ the distributed solution is within a constant factor of the optimal centralized solution.

The above theorem shows that for very large datasets, GreeDi provides a solution that is within a constant factor of the optimal centralized solution. This result is based on the fact that for sufficiently large datasets, there is a suitably dense neighborhood around each member of the optimal solution. Thus, if the elements of the dataset are partitioned uniformly at random across the m processors, at least one partition contains a set whose elements are very close to the elements of the optimal centralized solution, and this set provides a constant factor approximation of the optimal centralized solution.

4.5 Handling Decomposable Functions

So far, we have assumed that the objective function f is given to us as a black box, which we can evaluate for any given set S independently of the dataset V. In many settings, however, the objective itself depends on the entire dataset. In such a setting, we cannot use GreeDi as presented above, since we cannot evaluate f on the individual machines without access to the full set V. Fortunately, many such functions have a simple structure which we call decomposable. More precisely, we call a submodular function f decomposable if it can be written as a sum of submodular functions f_i as follows (Krause and Gomes, 2010): f(S) = (1/|V|) Σ_{i ∈ V} f_i(S).

In other words, there is a separate submodular function f_i associated with every data point i ∈ V. We require that each f_i can be evaluated without access to the full set V. Note that the exemplar based clustering application we discussed in Section 3.4 is an instance of this framework, among many others. Let us define the evaluation of f restricted to a subset D ⊆ V as follows: f_D(S) = (1/|D|) Σ_{i ∈ D} f_i(S).
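The restricted evaluation above can be sketched in a few lines. The per-point functions f_i used here (nearest-center similarity on hypothetical 1-D data) are illustrative assumptions, not the paper's clustering objective; the point is only the shape of f_D(S) = (1/|D|) Σ_{i ∈ D} f_i(S).

```python
# Sketch: evaluating a decomposable objective restricted to a subset D.
# Per-point functions and data are illustrative assumptions.

def f_i(x_i, S):
    # Per-point utility in [0, 1]: how well the closest element of S
    # represents x_i (for data and centers in [0, 1]).
    return max(1.0 - abs(x_i - s) for s in S) if S else 0.0

def f_restricted(D, S):
    # f_D(S) = (1/|D|) * sum_{i in D} f_i(S): the objective evaluated
    # using only the data points in D.
    return sum(f_i(x, S) for x in D) / len(D)

data = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
S = {0.0, 1.0}
full = f_restricted(data, S)       # evaluation on the full dataset
local = f_restricted(data[:2], S)  # evaluation on one machine's share
```

A machine holding only `data[:2]` can compute its restricted value without ever seeing the full dataset, which is exactly what makes the distributed scheme below possible.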

In the remainder of this section, we show that assigning each element of the dataset randomly to a machine and running GreeDi will, with high probability, provide a solution close to the optimal solution. For this, let us assume that the f_i are bounded and, without loss of generality, take values in [0, 1]. As in Section 4.3, we assume that GreeDi performs the partition by assigning elements uniformly at random to the m machines. These machines then each greedily optimize the restriction of f to their local data. The second stage of GreeDi optimizes the restriction of f to a set chosen uniformly at random from V, of size n/m.

Then, we can show the following result. First, for any fixed ε > 0, let us define n₀ to be the smallest integer such that, for all n ≥ n₀, the sampling error of the locally evaluated objectives is at most ε.

For sufficiently large n (at least n₀, and large relative to m, k, and 1/ε), and under the assumptions of Theorem 4.4, with probability at least 1 − δ the solution obtained by running GreeDi on the locally evaluated objectives is within an additive error depending on ε of the guarantee of Theorem 4.4.

The above result demonstrates why GreeDi performs well on decomposable submodular functions over massive datasets, even when the objective is evaluated locally on each machine. We report our experimental results on exemplar-based clustering in the next section.

Figure 3: Illustration of our two-round algorithm GreeDi for decomposable functions

5 (Non-monotone) Submodular Functions with General Constraints

In this section we show how GreeDi can be extended to handle 1) more general constraints, and 2) non-monotone submodular functions. We begin by introducing the classes of constraints we consider.

5.1 Matroid Constraints

A matroid is a pair M = (V, I), where V is a finite set (called the ground set) and I is a family of subsets of V (called the independent sets) satisfying the following two properties:

  • Heredity property: A ⊆ B and B ∈ I implies A ∈ I, i.e., every subset of an independent set is independent.

  • Augmentation property: If A, B ∈ I and |A| < |B|, there is an element e ∈ B \ A such that A ∪ {e} ∈ I.

Maximizing a submodular function subject to matroid constraints has found several applications in machine learning and data mining, ranging from content aggregation on the web (Abbassi et al., 2013) to viral marketing (Narayanam and Nanavati, 2012) and online advertising (Streeter et al., 2009).

One way to approximately maximize a monotone submodular function f subject to the constraint that the selected set be independent, i.e., S ∈ I, is to use a generalization of the greedy algorithm. This algorithm starts with the empty set and in each iteration picks the feasible element with maximum marginal benefit, until no element e remains such that S ∪ {e} ∈ I; it is guaranteed to provide a 1/2-approximation of the optimal solution (Fisher et al., 1978). Recently, this bound has been improved to (1 − 1/e) using the continuous greedy algorithm (Calinescu et al., 2011). For non-negative and non-monotone submodular functions with matroid constraints, the best known result is a 0.325-approximation based on simulated annealing (Gharan and Vondrák, 2011).
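A minimal sketch of this greedy generalization follows, instantiated with a partition matroid (at most one element per group) and a modular objective; both the matroid and the weights are illustrative assumptions introduced here, not the paper's setting.

```python
# Sketch: greedy maximization subject to a matroid constraint.

def independent(S, groups, capacity):
    # Partition-matroid independence: at most `capacity` elements per group.
    counts = {}
    for e in S:
        g = groups[e]
        counts[g] = counts.get(g, 0) + 1
        if counts[g] > capacity:
            return False
    return True

def greedy_matroid(elements, f, groups, capacity):
    # Repeatedly add the feasible element with the largest marginal gain;
    # a 1/2-approximation for monotone submodular f (Fisher et al., 1978).
    S = set()
    while True:
        candidates = [e for e in elements
                      if e not in S and independent(S | {e}, groups, capacity)]
        if not candidates:
            return S
        best = max(candidates, key=lambda e: f(S | {e}) - f(S))
        if f(S | {best}) - f(S) <= 0:
            return S
        S.add(best)

# Toy instance: elements a..d in two groups, at most one element per group.
weights = {'a': 3.0, 'b': 2.0, 'c': 5.0, 'd': 1.0}
groups = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
f = lambda S: sum(weights[e] for e in S)   # modular, hence submodular
solution = greedy_matroid(list(weights), f, groups, capacity=1)
```

With capacity one per group, greedy picks the heaviest element of each group, which here is also optimal; for general submodular f the guarantee degrades gracefully to the 1/2 factor cited above.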

Intersection of Matroids:

A more general case arises when we have p matroids on the same ground set V and want to maximize the submodular function over the intersection of the p matroids. That is, I consists of all subsets of V that are independent in all p matroids. This constraint arises, e.g., when optimizing over rankings (which can be modeled as intersections of two partition matroids). Another recent application is finding the influential set of users in viral marketing when multiple products need to be advertised and each user can tolerate only a small number of recommendations (Du et al., 2013). For p matroid constraints, the 1/(p + 1)-approximation provided by the greedy algorithm (Fisher et al., 1978) has been improved to a 1/(p + ε)-approximation for p ≥ 2 by Lee et al. (2009b). For the non-monotone case, a local-search-based approximation is also given by Lee et al. (2009b).

p-systems:

p-independence systems generalize constraints given by the intersection of p matroids. Given an independence family I and a set S ⊆ V, let B(S) denote the set of maximal independent sets of I included in S. Then we call (V, I) a p-system if, for all S ⊆ V, the sizes of any two sets in B(S) differ by at most a factor of p, i.e., max_{A ∈ B(S)} |A| ≤ p · min_{A ∈ B(S)} |A|.

Similar to matroid constraints, the greedy algorithm provides a 1/(p + 1)-approximation guarantee for maximizing a monotone submodular function subject to a p-system constraint (Fisher et al., 1978). For the non-monotone case, a constant-factor approximation (depending on p) can be achieved by combining an algorithm of Gupta et al. (2010) with the result for unconstrained submodular maximization of Buchbinder et al. (2012).

5.2 Knapsack Constraints

In many applications, including feature and variable selection in probabilistic models (Krause and Guestrin, 2005a) and document summarization (Lin and Bilmes, 2011b), elements e have non-uniform costs c(e) ≥ 0, and we wish to find a collection of elements S that maximizes f subject to the constraint that the total cost of the elements in S does not exceed a given budget B, i.e., Σ_{e ∈ S} c(e) ≤ B.

Since the simple greedy algorithm ignores costs while iteratively adding elements with maximum marginal gain according to Eq. 2 until the budget is exhausted, it can perform arbitrarily poorly. However, it has been shown that taking the better of the solution returned by the greedy algorithm that works according to Eq. 2 and the solution returned by the modified greedy algorithm that optimizes the cost-benefit ratio (the marginal gain of an element divided by its cost c(e))

provides a (1 − 1/e)/2-approximation of the optimal solution (Krause and Guestrin, 2005b). Furthermore, a more computationally expensive algorithm, which starts with all feasible solutions of cardinality 3 and augments them using the cost-benefit greedy algorithm to find the set with maximum objective value, provides a (1 − 1/e)-approximation (Sviridenko, 2004). For maximizing non-monotone submodular functions subject to knapsack constraints, a (1/5 − ε)-approximation algorithm based on local search was given by Lee et al. (2009a).
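The two greedy rules can be sketched side by side on a toy coverage instance; the sets, costs, and budget below are illustrative assumptions, and the code simply takes the better of the two solutions as described above.

```python
# Sketch: uniform greedy vs. cost-benefit greedy under a knapsack budget,
# keeping the better of the two solutions. Toy instance is an assumption.

def coverage(S, sets):
    # f(S) = size of the union of the chosen sets (monotone submodular).
    return len(set().union(*(sets[i] for i in S))) if S else 0

def greedy_knapsack(sets, cost, budget, by_ratio):
    S, spent = set(), 0.0
    while True:
        feasible = [i for i in sets if i not in S and spent + cost[i] <= budget]
        if not feasible:
            return S
        def score(i):
            gain = coverage(S | {i}, sets) - coverage(S, sets)
            return gain / cost[i] if by_ratio else gain
        best = max(feasible, key=score)
        if coverage(S | {best}, sets) - coverage(S, sets) <= 0:
            return S
        S.add(best)
        spent += cost[best]

sets = {0: {1, 2, 3, 4}, 1: {1, 2}, 2: {5}}
cost = {0: 2.0, 1: 0.5, 2: 0.5}
budget = 2.0
uniform = greedy_knapsack(sets, cost, budget, by_ratio=False)
ratio = greedy_knapsack(sets, cost, budget, by_ratio=True)
best = max([uniform, ratio], key=lambda S: coverage(S, sets))
```

On this instance the two rules return different solutions, and neither dominates in general, which is precisely why the guarantee is stated for the maximum of the two.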

Multiple Knapsack Constraints:

In some applications, such as procurement auctions (Garg et al., 2001), video-on-demand systems, and e-commerce (Kulik et al., 2009), we have a d-dimensional budget vector B and a set of elements, each associated with a d-dimensional cost vector. In this setting, we seek a subset of elements with a total cost of at most B that maximizes a non-decreasing submodular function f. Kulik et al. (2009) proposed a two-phase algorithm that provides a (1 − 1/e − ε)-approximation for the problem by first guessing a constant number of elements of highest value, and then approximately solving the residual problem with respect to the guessed subset. For the non-monotone case, Lee et al. (2009a) provided a (1/5 − ε)-approximation based on local search.

p-system and knapsack constraints:

A more general type of constraint, which has recently found interesting applications in viral marketing (Du et al., 2013), can be constructed by combining a p-system with d knapsack constraints; it includes the intersection of p matroids and multiple knapsacks as special cases. Badanidiyuru and Vondrák (2014) proposed a modified version of the greedy algorithm that guarantees a constant-factor approximation (depending on p and d) for maximizing a monotone submodular function subject to a p-system and d knapsack constraints.

Table 1 summarizes the approximation guarantees for monotone and non-monotone submodular maximization under different constraints.

Constraint                Monotone                           Non-monotone
Cardinality               1 − 1/e (Fisher et al., 1978)      0.325 (Gharan and Vondrák, 2011)
1 matroid                 1 − 1/e (Calinescu et al., 2011)   0.325 (Gharan and Vondrák, 2011)
p matroids                1/(p + ε) (Lee et al., 2009b)      local search (Lee et al., 2009b)
1 knapsack                1 − 1/e (Sviridenko, 2004)         1/5 − ε (Lee et al., 2009a)
d knapsacks               1 − 1/e − ε (Kulik et al., 2009)   1/5 − ε (Lee et al., 2009a)
p-system                  1/(p + 1) (Fisher et al., 1978)    (Gupta et al., 2010)
p-system + d knapsacks    (Badanidiyuru and Vondrák, 2014)
Table 1: Approximation guarantees for monotone and non-monotone submodular maximization under different constraints.

5.3 GreeDi Approximation Guarantee under More General Constraints

Assume that we have a set of constraints ζ and access to a "black box" algorithm X that gives us a constant factor approximation guarantee for maximizing a non-negative (but not necessarily monotone) submodular function subject to ζ, i.e., the value of the solution returned by X is at least a constant fraction β of the optimal value. (8)

We can modify GreeDi to use any such approximation algorithm X as a black box, and provide theoretical guarantees about the solution. In order to process a large dataset, GreeDi first distributes the ground set over m machines. Then, instead of greedily selecting elements, each machine, in parallel, separately runs the black box algorithm X on its local data in order to produce a feasible set meeting the constraints ζ. Among these local solutions, we keep the one with maximum value. Next, the local solutions are merged into their union, and the black box algorithm X is applied once more to this union to produce another feasible solution. The distributed solution, for parameter m and constraints ζ, is then the best among the local solutions and the solution obtained from the merged set. This procedure is given in more detail in Algorithm 3.

0:  Input: set V, number of partitions m, constraints ζ, submodular function f.
0:  Output: the distributed solution.
1:  Partition V into m sets V_1, …, V_m.
2:  In parallel: run the approximation algorithm X on each set V_i to find a local solution A_i.
3:  Find the local solution A_max with maximum value.
4:  Merge the resulting sets: B = A_1 ∪ ⋯ ∪ A_m.
5:  Run the approximation algorithm X on B to find a solution A_B.
6:  Return the better of A_max and A_B.
Algorithm 3 GreeDi under General Constraints
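The flow of Algorithm 3 can be sketched with the black box as a pluggable function. The partitioning, the modular objective, and the choice of plain greedy as the stand-in black box X are all illustrative assumptions; any constant-factor approximation algorithm could be substituted.

```python
# Sketch of Algorithm 3: GreeDi with a pluggable black-box algorithm X.

import random

def greedi_general(V, m, f, black_box, seed=0):
    # Two-round scheme: run X on each random partition, run X again on the
    # union of local solutions, return the best set found.
    rng = random.Random(seed)
    parts = [[] for _ in range(m)]
    for v in V:
        parts[rng.randrange(m)].append(v)      # random partition of V
    local = [black_box(p, f) for p in parts]   # round 1 (parallel in practice)
    merged = [v for sol in local for v in sol]
    final = black_box(merged, f)               # round 2 on the union
    return max(local + [final], key=lambda S: f(S))

# Stand-in black box X: plain greedy under a cardinality constraint k.
def make_greedy(k):
    def X(ground, f):
        S = []
        while len(S) < k:
            gains = [(f(S + [v]) - f(S), v) for v in ground if v not in S]
            if not gains or max(gains)[0] <= 0:
                break
            S.append(max(gains)[1])
        return S
    return X

weights = {v: float(v) for v in range(10)}
f = lambda S: sum(weights[v] for v in set(S))
solution = greedi_general(list(range(10)), m=2, f=f, black_box=make_greedy(2))
```

Because the objective here is modular, the union of local solutions always contains the two globally best elements, so the second round recovers the optimal set; for general submodular f only the factor stated in the theorem below is guaranteed.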

The following result generalizes Theorem 4.2 to maximizing a submodular function subject to more general constraints. Let f be a non-negative submodular function and let X be a black box algorithm that provides a β-approximation guarantee for submodular maximization subject to a set of constraints ζ. Then the value of the distributed solution produced by Algorithm 3 is at least a constant fraction of f(A^c), where A^c is the optimum centralized solution; the fraction depends on β, the number of machines m, and a structural parameter of the constraint family.

Specifically, for submodular maximization subject to a matroid constraint, the relevant structural parameter is the rank r of the matroid (i.e., the maximum size of any independent set in the system). For submodular maximization subject to a knapsack constraint, it can be bounded by the capacity B of the knapsack divided by the smallest cost of any element.

Performance on Datasets with Geometric Structure.

If the objective function is Lipschitz continuous with constant λ with respect to the underlying metric d, GreeDi provides much better approximation guarantees than the worst-case bounds shown above. The following result generalizes Theorem 4.3 to maximizing non-negative submodular functions subject to the constraints discussed above.

Under the conditions that 1) elements are assigned uniformly at random to the m machines, 2) the ε-neighborhood of each element of the optimal centralized solution is sufficiently dense, and 3) f is λ-Lipschitz, with probability at least 1 − δ the distributed solution is within an additive error proportional to λε times the size of the optimal solution of the guarantee achieved in the centralized setting.

Performance Guarantee for Very Large Datasets.

Similarly, we can generalize Theorem 4.4 to maximizing non-negative submodular functions subject to more general constraints. If the underlying large but finite dataset is sampled from an unknown probability distribution such that there is a high density of points around the elements of the optimal centralized solution, then for sufficiently large n (with the required size depending on the density and the failure probability δ), if GreeDi assigns elements uniformly at random to the m processors, with probability at least 1 − δ the distributed solution is within a constant factor of the optimal centralized solution.

Performance Guarantee for Decomposable Functions.

For the case of decomposable functions described in Section 4.5, the following generalization of Theorem 4.5 holds for maximizing a non-negative submodular function subject to more general constraints. Let us define n₀ to be the smallest integer such that, for all n ≥ n₀, the sampling error of the locally evaluated objectives is at most ε. Then, for sufficiently large n ≥ n₀ and under the assumptions of Theorem 5.3, with probability at least 1 − δ, the solution obtained by running GreeDi on the locally evaluated objectives is within an additive error depending on ε of the guarantee of Theorem 5.3.

6 Experiments

In our experimental evaluation we wish to address the following questions: 1) how well does GreeDi perform compared to the centralized solution, 2) how good is the performance of GreeDi when using decomposable objective functions (see Section 4.5), and finally 3) how well does GreeDi scale in the context of massive datasets. To this end, we run GreeDi on three scenarios: exemplar based clustering, active set selection in GPs, and finding maximum cuts in graphs.

We compare the performance of our GreeDi method to the following naive approaches:

  • random/random: in the first round, each machine simply outputs k randomly chosen elements from its local data points; in the second round, k of the merged elements are again randomly chosen as the final output.

  • random/greedy: each machine outputs k randomly chosen elements from its local data points; then the standard greedy algorithm is run over the merged elements to find a solution of size k.

  • greedy/merge: in the first round, elements are chosen greedily from each machine; in the second round, they are merged to output a solution of size k.

  • greedy/max: in the first round, each machine greedily finds a solution of size k; in the second round, the local solution with the maximum value is reported.

For GreeDi, we let each of the m machines select a candidate set whose size is a multiple of k, and select a final solution of size k among the union of these local solutions. We present the performance of GreeDi for different values of this oversampling parameter. For datasets where we are able to find the centralized solution, we report the ratio of the distributed to the centralized objective value, where the distributed solution is in particular the one returned by GreeDi.

6.1 Exemplar Based Clustering

Our exemplar based clustering experiment involves GreeDi applied to the clustering utility (see Sec. 3.4), with the squared Euclidean distance as the dissimilarity measure. We performed our experiments on a set of 10,000 Tiny Images (Torralba et al., 2008). Each 32-by-32 RGB image was represented by a 3,072-dimensional vector. We subtracted from each vector the mean value, normalized it to unit norm, and used the origin as the auxiliary exemplar. Fig. 4(a) compares the performance of our approach to the benchmarks with a fixed number of exemplars k and varying number of partitions m. It can be seen that GreeDi significantly outperforms the benchmarks and provides a solution that is very close to the centralized one. Interestingly, GreeDi performs very well even for small numbers of partitions. Since the exemplar based clustering utility function is decomposable, we repeated the experiment for the more realistic case where the function evaluation on each machine was restricted to the local elements of the dataset on that particular machine (rather than the entire dataset). Fig. 4(b) shows similar qualitative behavior for decomposable objective functions.
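The clustering utility used in this experiment has the form f(S) = L({e0}) − L(S ∪ {e0}), where L is the average distance of each point to its nearest chosen exemplar and e0 is the auxiliary exemplar. The sketch below illustrates this on hypothetical 1-D data with the absolute-distance loss; the data and the choice of e0 are assumptions for illustration only.

```python
# Sketch of the exemplar-based clustering utility:
# f(S) = L({e0}) - L(S + {e0}), with L the average nearest-exemplar loss.

def loss(data, S, d):
    # k-medoid-style loss: average distance to the nearest exemplar in S.
    return sum(min(d(x, s) for s in S) for x in data) / len(data)

def clustering_utility(data, S, d, e0):
    # Monotone submodular utility derived from the loss reduction.
    return loss(data, [e0], d) - loss(data, list(S) + [e0], d)

d = lambda a, b: abs(a - b)
data = [0.0, 0.1, 0.9, 1.0]
e0 = 0.5                       # auxiliary exemplar (an assumption here)
gain_one = clustering_utility(data, {0.0}, d, e0)
gain_two = clustering_utility(data, {0.0, 1.0}, d, e0)
```

Adding a second exemplar near the other cluster strictly increases the utility, reflecting the monotonicity that the greedy selection exploits.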

Figure 4: Performance of GreeDi compared to the other benchmarks on a set of 10,000 Tiny Images. a) and b) show the mean and standard deviation of the ratio of distributed vs. centralized solution for the global and local objective functions, respectively, with a fixed budget and varying number of partitions. c) and d) show the same ratio for the global and local objective functions with a fixed number of partitions and varying budget k.

Large scale experiments with Hadoop. As our first large scale experiment, we applied GreeDi to the whole dataset of 80,000,000 Tiny Images (Torralba et al., 2008) in order to select a set of 64 exemplars. Our experimental infrastructure was a cluster of 10 quad-core machines running Hadoop, with the number of reducers set to 8,000; each machine carried out a set of reduce tasks in sequence. We first partitioned the images uniformly at random among the reducers. Each reducer separately performed the lazy greedy algorithm on its own set of 10,000 images (123MB) to extract 64 images with the highest marginal gains w.r.t. the local elements of the dataset in that particular partition. We then merged the results and performed another round of lazy greedy selection on the merged results to extract the final 64 exemplars. Function evaluation in the second stage was performed w.r.t. a randomly selected subset of 10,000 images from the entire dataset. The maximum running time per reduce task was 2.5 hours. As Fig. 5(a) shows, GreeDi significantly outperforms the other distributed benchmarks and scales well to very large datasets. Fig. 5(b) shows a set of cluster exemplars discovered by GreeDi, while Fig. 5(c) and Fig. 5(d) show the 100 nearest images to exemplars 26 and 63 (shown with red borders) in Fig. 5(b).

Figure 5: Performance of GreeDi compared to the other benchmarks on the whole dataset of 80,000,000 Tiny Images. a) shows the distributed solution for local objective functions with a fixed budget and varying number of partitions. b) shows a set of cluster exemplars discovered by GreeDi; c) shows the 100 images nearest to exemplar 26, and d) shows the 100 images nearest to exemplar 63 in b).

6.2 Active Set Selection

Our active set selection experiment involves GreeDi applied to the information gain objective (see Sec. 3.4) with a Gaussian kernel. We used the Parkinsons Telemonitoring dataset (Tsanas et al., 2010), consisting of 5,875 bio-medical voice measurements with 22 attributes from people with early-stage Parkinson's disease. We normalized the vectors to zero mean and unit norm. Fig. 6(b) compares the performance of GreeDi to the benchmarks with a fixed budget k and varying number of partitions m. Similarly, Fig. 6(a) shows the results for a fixed number of partitions and varying budget. We find that GreeDi significantly outperforms the benchmarks.
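The information-gain objective for a GP has the standard form F(S) = 0.5 · log det(I + σ⁻²K_{S,S}), which can be sketched in pure Python for small sets. The 1-D data points, kernel width, and noise level below are illustrative assumptions, not the experimental settings of the paper.

```python
# Sketch of the active-set-selection objective: GP information gain
# F(S) = 0.5 * log det(I + K_{S,S} / sigma2) with a Gaussian kernel.

import math

def rbf(x, y, h=0.75):
    # Gaussian (RBF) kernel; width h is an illustrative assumption.
    return math.exp(-(x - y) ** 2 / h ** 2)

def log_det(M):
    # Log-determinant of a small positive-definite matrix via Gaussian
    # elimination (no pivoting; adequate for well-conditioned M).
    n = len(M)
    A = [row[:] for row in M]
    acc = 0.0
    for i in range(n):
        acc += math.log(A[i][i])        # pivot after previous eliminations
        for j in range(i + 1, n):
            factor = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= factor * A[i][k]
    return acc

def info_gain(S, points, sigma2=1.0):
    # F(S) = 0.5 * log det(I + K_{S,S}/sigma2); monotone submodular.
    idx = sorted(S)
    M = [[(1.0 if i == j else 0.0) + rbf(points[i], points[j]) / sigma2
          for j in idx] for i in idx]
    return 0.5 * log_det(M)

points = [0.0, 0.1, 1.0]
g_close = info_gain({0, 1}, points)   # two nearly identical points
g_far = info_gain({0, 2}, points)     # two well-separated points
```

Well-separated points yield higher information gain than near-duplicates, which is the diversity-seeking behavior that makes greedy active set selection effective.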

Figure 6: Performance of GreeDi compared to the other benchmarks on the Parkinsons Telemonitoring dataset. a) shows the ratio of distributed vs. centralized solution with a fixed number of partitions and varying budget k. b) shows the same ratio with a fixed budget and varying number of partitions m.

Large scale experiments with Hadoop. Our second large scale experiment consists of 45,811,883 user visits from the Featured Tab of the Today Module on the Yahoo! Front Page (web, 2012). For each visit, both the user and each of the candidate articles are associated with a feature vector of dimension 6. Here, we used the normalized user features. Our experimental setup was a cluster of 5 quad-core machines running Hadoop, with the number of reducers set to 32. Each reducer performed the lazy greedy algorithm on its own set of 1,431,621 vectors (34MB) in order to extract 128 elements with the highest marginal gains w.r.t. the local elements of the dataset in that particular partition. We then merged the results and performed another round of lazy greedy selection on the merged results to extract the final active set of size 128. The maximum running time per reduce task was 2.5 hours. Fig. 8 shows the performance of GreeDi compared to the benchmarks. We note again that GreeDi significantly outperforms the other distributed benchmarks and scales well to very large datasets.

Figure 7: Yahoo! front page
Figure 8: Performance of GreeDi with a fixed number of partitions and varying budget k, compared to the other benchmarks, on the Yahoo! Webscope data.

6.3 Non-Monotone Submodular Function (Finding Maximum Cuts)

We also applied GreeDi to the problem of finding maximum cuts in graphs. In our setting, we used a Facebook-like social network (Opsahl and Panzarasa, 2009). This dataset includes the users that have sent or received at least one message in an online student community at the University of California, Irvine, and consists of 1,899 users and 20,296 directed ties. Figs. 9(a) and 9(b) show the performance of GreeDi applied to the cut function on this graph. We evaluated the objective function locally on each partition; thus, the links between partitions are disconnected. Since finding the maximum cut in a graph is a non-monotone submodular maximization problem, we applied the RandomGreedy algorithm proposed by Buchbinder et al. (2014) to find a near-optimal solution in each partition.
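The cut objective, and a simplified version of the RandomGreedy rule (pick uniformly at random among the k best remaining candidates at each step), can be sketched as follows; the toy 4-cycle graph is an illustrative assumption, and this sketch omits the dummy-element bookkeeping of the full algorithm of Buchbinder et al. (2014).

```python
# Sketch: the (non-monotone submodular) cut function and a simplified
# RandomGreedy selection rule on an assumed toy graph.

import random

def cut_value(S, edges):
    # Number of edges with exactly one endpoint in S.
    return sum(1 for u, v in edges if (u in S) != (v in S))

def random_greedy(nodes, edges, k, seed=0):
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        gains = sorted(((cut_value(S | {v}, edges) - cut_value(S, edges), v)
                        for v in nodes if v not in S), reverse=True)
        top = gains[:k]                # the k best remaining candidates
        gain, v = top[rng.randrange(len(top))]
        if gain > 0:                   # never add a non-improving element
            S.add(v)
    return S

# 4-cycle: a maximum cut takes two opposite vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
S = random_greedy([0, 1, 2, 3], edges, k=2)
```

Randomizing over the top candidates, rather than always taking the single best one, is what yields the approximation guarantee for non-monotone objectives such as the cut function.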

Although the cut function does not decompose additively over individual data points, perhaps surprisingly, GreeDi still performs very well, and significantly outperforms the benchmarks. This suggests that our approach is quite robust, and may be more generally applicable.

Figure 9: Performance of GreeDi compared to the other benchmarks on the Facebook-like social network. a) shows the mean and standard deviation of the ratio of distributed to centralized solution for a fixed budget with varying number of machines m, and b) shows the same ratio for varying budget k with a fixed number of machines.

6.4 Comparison with GreedyScaling

Kumar et al. (2013) recently proposed an alternative approach, GreedyScaling, for parallel maximization of submodular functions. GreedyScaling is a randomized algorithm that carries out a number of rounds of MapReduce computations.

We applied GreeDi to the submodular coverage problem, in which, given a collection of sets, we would like to pick at most k sets in order to maximize the size of their union. We compared the performance of our GreeDi algorithm to the reported performance of GreedyScaling on the same datasets, namely Accidents (Geurts et al., 2003) and Kosarak (Bodon, 2012). As Figs. 10(a) and 10(b) show, GreeDi outperforms GreedyScaling on the Accidents dataset, and its performance is comparable to that of GreedyScaling on the Kosarak dataset.
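For reference, the coverage objective of this comparison, with the plain greedy rule that both methods build on, can be sketched as follows; the toy collection of sets is an illustrative assumption.

```python
# Sketch: greedy maximization of the coverage objective (pick at most k
# sets to maximize the size of their union). Toy instance is assumed.

def union_size(chosen, collection):
    return len(set().union(*(collection[i] for i in chosen))) if chosen else 0

def greedy_cover(collection, k):
    chosen = []
    for _ in range(k):
        rest = [i for i in collection if i not in chosen]
        if not rest:
            break
        best = max(rest, key=lambda i: union_size(chosen + [i], collection))
        if union_size(chosen + [best], collection) == union_size(chosen, collection):
            break                        # no remaining set adds coverage
        chosen.append(best)
    return chosen

collection = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}, 3: {1, 2}}
picked = greedy_cover(collection, k=2)
```

Coverage is monotone submodular, so this greedy rule enjoys the usual (1 − 1/e) guarantee, which is why it serves as the centralized reference point in the comparison.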

Figure 10: Performance of GreeDi compared to the GreedyScaling algorithm of Kumar et al. (2013) (as reported in their paper). a) shows the ratio of distributed to centralized solution on the Accidents dataset with 340,183 elements, and b) shows the same ratio for the Kosarak dataset with 990,002 elements. The results are reported for varying budget k, with the number of machines chosen as a function of the dataset size n, following the setup of Kumar et al. (2013). Note that the results presented by Kumar et al. (2013) indicate that GreedyScaling generally requires a substantially larger number of MapReduce rounds than GreeDi.

7 Conclusion

We have developed an efficient distributed protocol, GreeDi, for constrained submodular function maximization. We have theoretically analyzed the performance of our method and shown that, under certain natural conditions, it performs very close to the centralized (albeit impractical for massive datasets) solution. We have also demonstrated the effectiveness of our approach through extensive experiments, including active set selection in GPs on a dataset of 45 million examples, and exemplar based summarization of a collection of 80 million images using Hadoop. We believe our results provide an important step towards solving submodular optimization problems in very large-scale, real applications.

This research was supported by SNF 200021-137971, DARPA MSEE FA8650-11-1-7156, ERC StG 307036, a Microsoft Faculty Fellowship, an ETH Fellowship, and the Scottish Informatics and Computer Science Alliance.

Appendix A Proofs

This section presents the complete proofs of theorems presented in the article.

Proof of Theorem 4.1

First direction:
The proof follows from the lemmas below.

. Let be the elements in that are contained in the optimal solution, . Then we have:

Using submodularity of , for each , we have

and thus,

Since, , we have

Therefore,

. Let . Using submodularity of , we have

Thus, where . Suppose that the element with the highest marginal gain (i.e., ) is in . Then the maximum value of on would be greater than or equal to the marginal gain of , i.e., and since , we can conclude that

Since , from the two lemmas above we have

Second direction:
Let us consider a set of unbiased and independent Bernoulli random variables for and , i.e., and if or . Let us also define for . Now assume that