Non-monotone Submodular Maximization with Nearly Optimal Adaptivity Complexity

08/19/2018
by Matthew Fahrbach, et al.

As a generalization of many classic problems in combinatorial optimization, submodular optimization has found a wide range of applications in machine learning (e.g., in feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds where polynomially-many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by such applications, we study the adaptivity and query complexity of non-monotone submodular optimization. We provide the first constant approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint k that has nearly-optimal adaptivity complexity O(log n). Furthermore, our algorithm makes only O(log k) calls per element to the function evaluation oracle in expectation.
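To make the problem setup concrete, the sketch below (not the paper's algorithm) maximizes a toy non-monotone submodular function, the cut function of a small hypothetical graph, under a cardinality constraint k using only value-oracle calls. It runs a simplified random-greedy-style baseline, whose k sequential rounds illustrate exactly the adaptivity cost that low-adaptivity algorithms such as the one in this paper aim to avoid; all names and data here are illustrative assumptions.

```python
import random

# Toy undirected graph given as a dict of neighbor sets (hypothetical data).
graph = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
    3: {1, 4}, 4: {2, 3, 5}, 5: {4},
}

oracle_calls = 0  # counts value-oracle queries (relates to query complexity)

def cut_value(S):
    """Value oracle f(S) = number of edges with exactly one endpoint in S.
    Graph cuts are a standard non-monotone submodular example."""
    global oracle_calls
    oracle_calls += 1
    S = set(S)
    return sum(1 for u in S for v in graph[u] if v not in S)

def random_greedy(k):
    """Simplified random-greedy baseline: k adaptive rounds, one per element.
    Within a round all marginal gains could be queried in parallel, but the
    k rounds themselves are sequential, so the adaptivity is O(k)."""
    S = set()
    ground = set(graph)
    for _ in range(k):
        gains = {e: cut_value(S | {e}) - cut_value(S) for e in ground - S}
        top = sorted(gains, key=gains.get, reverse=True)[:k]
        e = random.choice(top)          # randomization helps in the non-monotone case
        if gains[e] > 0:
            S.add(e)
    return S

S = random_greedy(k=3)
print("solution:", S, "value:", cut_value(S), "oracle calls:", oracle_calls)
```

In contrast to this baseline's O(k) adaptive rounds, the paper's result achieves a constant-factor approximation in only O(log n) rounds with O(log k) expected oracle calls per element.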
