Neural Models for Output-Space Invariance in Combinatorial Problems

02/07/2022
by Yatin Nandwani, et al.

Recently, many neural models have been proposed to solve combinatorial puzzles, such as sudoku or the graph coloring problem (GCP), by implicitly learning the underlying constraints from solved instances. One drawback of the proposed architectures, which are often based on Graph Neural Networks (GNNs), is that they cannot generalize across the size of the output space from which variables are assigned a value, for example, the set of colors in GCP, or the board size in sudoku. We refer to this output space for the variables as the 'value-set'. While many works have demonstrated generalization of GNNs across graph size, there has been no study on how to design a GNN that achieves value-set invariance for problems from the same domain, for example, learning to solve 16 x 16 sudoku after being trained only on 9 x 9 sudokus. In this work, we propose novel methods to extend GNN-based architectures to achieve value-set invariance. Specifically, our model builds on the recently proposed Recurrent Relational Networks. Our first approach exploits the graph-size invariance of GNNs by converting a multi-class node classification problem into a binary node classification problem. Our second approach works directly with multiple classes by adding nodes corresponding to the values in the value-set, and then connecting variable nodes to value nodes depending on the problem initialization. Our experimental evaluation on three different combinatorial problems demonstrates that both our models perform well on this novel problem, compared to a generic neural reasoner. Between our two models, we observe an inherent trade-off: while the binarized model gives better performance when trained on smaller value-sets, the multi-valued model is much more memory-efficient, resulting in improved performance when trained on larger value-sets, where the binarized model fails to train.
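To make the second approach concrete, the sketch below builds the kind of graph the abstract describes for sudoku: one node per cell (variable), one node per digit (value), constraint edges between mutually exclusive cells, and variable-value edges pruned by the initial givens. This is a minimal illustration of the graph construction only, not the authors' implementation (which builds on Recurrent Relational Networks); the helper names `sudoku_constraint_edges` and `variable_value_edges` are hypothetical.

```python
import math

def sudoku_constraint_edges(n):
    """Edges between cell (variable) nodes that must take distinct values:
    cells sharing a row, a column, or a sqrt(n) x sqrt(n) block."""
    b = int(math.isqrt(n))  # block side length, e.g. 3 for 9x9 sudoku
    edges = set()
    for i in range(n * n):
        for j in range(i + 1, n * n):
            ri, ci = divmod(i, n)
            rj, cj = divmod(j, n)
            if ri == rj or ci == cj or (ri // b == rj // b and ci // b == cj // b):
                edges.add((i, j))
    return edges

def variable_value_edges(givens, n):
    """Connect each cell node to its candidate value nodes. A given cell
    links only to its assigned value; an empty cell (0) links to every
    value node, since any value is still possible for it."""
    edges = []
    for cell, v in enumerate(givens):
        candidates = [v] if v != 0 else range(1, n + 1)
        for val in candidates:
            edges.append((cell, n * n + (val - 1)))  # value nodes follow cell nodes
    return edges
```

Because the number of value nodes grows with the value-set, the same construction works unchanged for a 16 x 16 board at test time after training on 9 x 9 boards: only `n` changes, not the architecture.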
