Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size

05/28/2020
by Christoph Hertrich, et al.

In view of the undisputed success of neural networks and the remarkable recent improvements in their ability to solve a huge variety of practical problems, developing a satisfying and rigorous mathematical understanding of their performance is one of the main challenges in learning theory. Against this background, we study the expressive power of neural networks through the example of the classical NP-hard Knapsack Problem.

Our main contribution is a class of recurrent neural networks (RNNs) with rectified linear units that are iteratively applied to each item of a Knapsack instance and thereby compute optimal or provably good solution values. To find optimum Knapsack solutions, an RNN of depth four and width depending quadratically on the profit of an optimum Knapsack solution suffices. We also prove the following tradeoff between the size of an RNN and the quality of the computed Knapsack solution: for Knapsack instances consisting of n items, an RNN of depth five and width w computes a solution of value at least 1-𝒪(n^2/√(w)) times the optimum solution value.

Our results build upon a dynamic programming formulation of the Knapsack Problem as well as a careful rounding of profit values that is also at the core of the well-known fully polynomial-time approximation scheme for the Knapsack Problem. Finally, we point out that similar results can be achieved for other optimization problems solvable by dynamic programming, such as various Shortest Path Problems and the Longest Common Subsequence Problem.
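To make the two ingredients named in the abstract concrete, the following sketch implements the profit-indexed dynamic program for Knapsack (the formulation the RNN construction mirrors, with one pass per item) and the classical FPTAS rounding of profit values. This is an illustration of the underlying classical algorithms, not of the paper's network construction itself; function names and the recovery bookkeeping are our own.

```python
def solve_knapsack(profits, weights, capacity):
    """Exact 0/1 Knapsack via the profit-indexed DP.

    dp[p] holds the minimum total weight needed to reach profit exactly p;
    items are processed one at a time, matching the recurrent, per-item
    application described in the abstract. Returns (optimal value, item set).
    """
    n, p_max = len(profits), sum(profits)
    INF = float("inf")
    dp = [INF] * (p_max + 1)
    dp[0] = 0  # profit 0 requires no weight
    take = []  # take[i][p]: item i updated state p during its pass
    for profit, weight in zip(profits, weights):
        taken = [False] * (p_max + 1)
        # iterate profits downward so each item is used at most once
        for p in range(p_max, profit - 1, -1):
            if dp[p - profit] + weight < dp[p]:
                dp[p] = dp[p - profit] + weight
                taken[p] = True
        take.append(taken)
    # best profit whose minimum required weight fits the capacity
    best = max(p for p in range(p_max + 1) if dp[p] <= capacity)
    # walk the items backwards to recover one optimal item set
    items, p = [], best
    for i in range(n - 1, -1, -1):
        if take[i][p]:
            items.append(i)
            p -= profits[i]
    return best, sorted(items)


def knapsack_fptas(profits, weights, capacity, eps):
    """Classical FPTAS rounding, as referenced in the abstract.

    Profits are rounded down to multiples of K = eps * max(profits) / n and
    the rounded instance is solved exactly; the chosen items' true profit is
    at least (1 - eps) times the optimum. The paper's depth-five RNN uses
    the same rounding idea to trade network width against solution quality.
    """
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [int(p // K) for p in profits]
    _, items = solve_knapsack(scaled, weights, capacity)
    return sum(profits[i] for i in items)


# Example: with capacity 9 the optimum packs items {1, 2, 3} for profit 12.
# print(solve_knapsack([6, 5, 4, 3], [5, 4, 3, 2], 9))  # (12, [1, 2, 3])
```

Indexing the DP by profit rather than by weight is what makes the exact network's width depend on the optimum profit, and why rounding profits (shrinking the DP table) directly shrinks the network in the approximate construction.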


