Continuous Non-monotone DR-submodular Maximization with Down-closed Convex Constraint
We investigate the problem of maximizing a continuous non-monotone DR-submodular function subject to a down-closed, solvable convex constraint. Our first contribution is an example demonstrating that (first-order) stationary points can have arbitrarily bad approximation ratios, and that they typically lie on the boundary of the feasible domain. These findings contrast with the monotone case, where any stationary point yields a 1/2-approximation (<cit.>). Moreover, the example offers insight into how to design improved algorithms that avoid bad stationary points, such as the restricted continuous local search algorithm (<cit.>) and the aided measured continuous greedy (<cit.>). However, the existing analyses of these two algorithms apply only to the discrete domain, because both invoke the inequality that the multilinear extension of any submodular set function is bounded from below by its Lovász extension. Our second contribution is therefore to remove this restriction: we show that both algorithms extend to the continuous domain while retaining their approximation ratios, thereby improving on the ratios obtained in <cit.> for the same problem. Finally, we present numerical experiments that evaluate our algorithms on problems arising from machine learning and artificial intelligence.
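To make the measured-continuous-greedy template referenced above concrete, the following minimal sketch runs its discretized dynamics for non-monotone DR-submodular maximization over a down-closed polytope. This is an illustrative sketch, not the paper's algorithm: the gradient oracle `grad_F`, the step count `T`, the example budget polytope {v in [0,1]^n : sum(v) <= b}, and the toy quadratic objective are all assumptions chosen for the demo.

```python
# Sketch (assumed, not the authors' exact method): discretized measured
# continuous greedy, dx/dt = v(t) * (1 - x(t)), where v(t) maximizes a
# linear objective over the down-closed polytope P.
import numpy as np
from scipy.optimize import linprog

def measured_continuous_greedy(grad_F, n, lp_oracle, T=100):
    """Run T discretized steps of the measured continuous greedy dynamic."""
    x = np.zeros(n)
    dt = 1.0 / T
    for _ in range(T):
        # Damp coordinates that are already close to 1.
        w = grad_F(x) * (1.0 - x)
        v = lp_oracle(w)               # argmax_{v in P} <w, v>
        x = x + dt * v * (1.0 - x)     # coordinate-wise x stays in [0, 1];
                                       # down-closedness keeps x in P
    return x

def budget_box_oracle(b):
    """LP oracle for the example polytope P = {v in [0,1]^n : sum(v) <= b}."""
    def oracle(w):
        n = len(w)
        res = linprog(-w, A_ub=np.ones((1, n)), b_ub=[b],
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x
    return oracle

# Toy objective (an assumption for the demo): F(x) = <h, x> - 0.5 x^T A x
# with A entrywise nonnegative has nonpositive cross-derivatives, so it is
# continuous DR-submodular, and it is non-monotone in general.
rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)); A = (A + A.T) / 2
h = rng.random(n) * A.sum(axis=1)
grad_F = lambda x: h - A @ x

x_out = measured_continuous_greedy(grad_F, n, budget_box_oracle(b=2.0), T=200)
print(np.round(x_out, 3))
```

Replacing `budget_box_oracle` with a linear-optimization oracle for any other down-closed solvable polytope leaves the rest of the sketch unchanged, which is why such constraints are the natural setting for this family of Frank-Wolfe-style methods.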