
Log-concave polynomials, entropy, and a deterministic approximation algorithm for counting bases of matroids
We give a deterministic polynomial time 2^{O(r)}-approximation algorithm f...

An Improved Approximation Algorithm for TSP in the Half Integral Case
We design a 1.49993-approximation algorithm for the metric traveling sal...

Optimal Budget-Feasible Mechanisms for Additive Valuations
In this paper, we obtain a tight approximation guarantee for budget-feasib...

Fractional Decomposition Tree Algorithm: A tool for studying the integrality gap of Integer Programs
We present a new algorithm, Fractional Decomposition Tree (FDT) for find...

An Optimal Monotone Contention Resolution Scheme for Uniform and Partition Matroids
A common approach to solve a combinatorial optimization problem is to fi...

A 12/7-approximation algorithm for the discrete Bamboo Garden Trimming problem
We study the discrete Bamboo Garden Trimming problem (BGT), where we are...

Approximation Algorithms for the Bottleneck Asymmetric Traveling Salesman Problem
We present the first nontrivial approximation algorithm for the bottlene...
Maximizing Determinants under Matroid Constraints
Given vectors v_1,…,v_n ∈ ℝ^d and a matroid M=([n],I), we study the problem of finding a basis S of M such that det(∑_{i ∈ S} v_i v_i^⊤) is maximized. This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning. The current best results include an e^{2k}-estimation for any matroid of rank k and a (1+ϵ)^d-approximation for a uniform matroid of rank k ≥ d + d/ϵ, where the rank k ≥ d denotes the desired size of the optimal set. Our main result is a new approximation algorithm with an approximation guarantee that depends only on the dimension d of the vectors and not on the size k of the output set. In particular, we show an (O(d))^d-estimation and an (O(d))^{d^3}-approximation for any matroid, giving a significant improvement over prior work when k ≫ d. Our result relies on the existence of an optimal solution to a convex programming relaxation for the problem which has sparse support; in particular, no more than O(d^2) variables of the solution have fractional values. The sparsity results rely on the interplay between the first-order optimality conditions for the convex program and matroid theory. We believe that the techniques introduced to show sparsity of optimal solutions to convex programs will be of independent interest. We also give a randomized algorithm that rounds a sparse fractional solution to a feasible integral solution to the original problem. To show the approximation guarantee, we utilize recent works on strongly log-concave polynomials and show new relationships between different convex programs studied for the problem. Finally, we use the estimation algorithm and sparsity results to give an efficient deterministic approximation algorithm with an approximation guarantee that depends solely on the dimension d.
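To make the objective det(∑_{i ∈ S} v_i v_i^⊤) concrete, here is a minimal greedy sketch for the special case of a uniform matroid (pick any k of the n vectors). This is an illustration of the objective only, not the paper's algorithm, which instead rounds a sparse optimal solution of a convex relaxation; the function name and the small ridge term are choices made for this sketch.

```python
import numpy as np

def greedy_det_max(V, k, reg=1e-9):
    """Greedy heuristic for the uniform-matroid case: repeatedly add the
    vector that most increases det(sum_{i in S} v_i v_i^T).
    Illustrative sketch only -- NOT the rounding algorithm from the paper.

    V: (n, d) array of row vectors; k: number of vectors to select.
    Returns the selected index set and det of the resulting matrix.
    """
    n, d = V.shape
    S = []
    # Small ridge so the matrix is invertible before d vectors are chosen.
    A = reg * np.eye(d)
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in S:
                continue
            # Matrix determinant lemma:
            # det(A + v v^T) = det(A) * (1 + v^T A^{-1} v),
            # so maximizing the multiplicative gain suffices.
            gain = 1.0 + V[i] @ np.linalg.solve(A, V[i])
            if gain > best_gain:
                best, best_gain = i, gain
        S.append(best)
        A += np.outer(V[best], V[best])
    return S, np.linalg.det(A)
```

On a small instance with a duplicated direction, the greedy rule prefers a spanning (diverse) pair, mirroring the intuition that the determinant rewards volume rather than repeated directions.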