Hardness Amplification of Optimization Problems

08/27/2019
by Elazar Goldenberg, et al.

In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the large instance, we can efficiently recover optimal feasible solutions to all k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationship between α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of this theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
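To illustrate what direct product feasibility means operationally, the following is a minimal Python sketch for Max-SAT, one of the problems the theorem applies to. It aggregates k CNF instances by placing their variable blocks side by side, so no clause crosses a block boundary and the number of satisfied clauses decomposes as a sum over blocks; an optimal assignment to the aggregate therefore restricts to an optimal assignment of every original instance. The function names (aggregate, split_solution) and the instance encoding are illustrative assumptions, not notation from the paper.

```python
def aggregate(instances):
    """Aggregate k Max-SAT instances into one larger instance.

    Each instance is (num_vars, clauses), where a clause is a tuple of
    nonzero ints: literal v means variable v appears positively, -v
    negatively (variables are 1-indexed within each instance).

    The aggregate places the k variable blocks side by side, so clauses
    never cross block boundaries and the objective is the sum of the
    per-block objectives.
    """
    offsets = []       # starting offset of each instance's variable block
    total_vars = 0
    big_clauses = []
    for num_vars, clauses in instances:
        offsets.append(total_vars)
        for clause in clauses:
            # Shift every literal into this instance's variable block.
            big_clauses.append(tuple(
                lit + total_vars if lit > 0 else lit - total_vars
                for lit in clause
            ))
        total_vars += num_vars
    return (total_vars, big_clauses), offsets


def split_solution(assignment, instances, offsets):
    """Restrict an assignment of the aggregate (a sequence of bools,
    indexed by variable - 1) to one assignment per original instance."""
    return [assignment[off:off + num_vars]
            for (num_vars, _), off in zip(instances, offsets)]


if __name__ == "__main__":
    inst_a = (2, [(1, 2), (-1,)])   # 2 variables, 2 clauses
    inst_b = (3, [(1, -2, 3)])      # 3 variables, 1 clause
    big, offsets = aggregate([inst_a, inst_b])
    print(big)                      # (5, [(1, 2), (-1,), (3, -4, 5)])
    x = [False, True, True, False, True]  # an assignment to the aggregate
    print(split_solution(x, [inst_a, inst_b], offsets))
    # [[False, True], [True, False, True]]
```

Note that the right aggregation depends on the problem: a disjoint union works for Max-SAT because the objective splits additively across blocks, but for Max-Clique, say, a clique lives entirely inside one part of a disjoint union, so a different combining operation is needed there.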


Related research

10/12/2020 · Nearly Optimal Average-Case Complexity of Counting Bicliques Under SETH
In this paper, we seek a natural problem and a natural distribution of i...

12/10/2020 · Learning from Survey Propagation: a Neural Network for MAX-E-3-SAT
Many natural optimization problems are NP-hard, which implies that they ...

05/30/2020 · Grover Mixers for QAOA: Shifting Complexity from Mixer Design to State Preparation
We propose GM-QAOA, a variation of the Quantum Alternating Operator Ansa...

04/07/2022 · Learning to Solve Travelling Salesman Problem with Hardness-adaptive Curriculum
Various neural network models have been proposed to tackle combinatorial...

08/18/2023 · On Lifting Integrality Gaps to SSEH Hardness for Globally Constrained CSPs
A μ-constrained Boolean Max-CSP(ψ) instance is a Boolean Max-CSP instanc...

07/17/2020 · Integer factorization and Riemann's hypothesis: Why two-item joint replenishment is hard
Distribution networks with periodically repeating events often hold grea...
