Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks

04/04/2022
by   Steffen Jung, et al.
Max Planck Society

The minimum cost multicut problem is the NP-hard/APX-hard combinatorial optimization problem of partitioning a real-valued edge-weighted graph such as to minimize the total cost of the partition. While graph convolutional neural networks (GNN) have proven to be promising in the context of combinatorial optimization, most of them are only tailored to or tested on positive-valued edge weights, i.e. they do not comply to the nature of the multicut problem. We therefore adapt various GNN architectures including Graph Convolutional Networks, Signed Graph Convolutional Networks and Graph Isomorphic Networks to facilitate the efficient encoding of real-valued edge costs. Moreover, we employ a reformulation of the multicut ILP constraints to a polynomial program as loss function that allows to learn feasible multicut solutions in a scalable way. Thus, we provide the first approach towards end-to-end trainable multicuts. Our findings support that GNN approaches can produce good solutions in practice while providing lower computation times and largely improved scalability compared to LP solvers and optimized heuristics, especially when considering large instances.


1 Introduction

Recent years have shown great advances of neural network-based approaches in various application domains, from image classification [35] and natural language processing [52] up to very recent advances in decision logics [3]. While these successes indicate the importance and potential benefit of learning from data distributions, other domains such as symbolic reasoning or combinatorial optimization are still dominated by classical approaches. Recently, first attempts have been made to address specific NP-hard combinatorial problems in a learning-based setup [46, 12, 38, 44]. Specifically, such papers employ (variants of) message passing neural networks (MPNN) [20], defined on graphs [45, 41, 33], in order to model, for example, the boolean satisfiability of conjunctive normal form formulas (SAT) [46] or to address the travelling salesman problem [44], both highly important NP-complete combinatorial problems. These first advances employ the ability of graph convolutional networks to efficiently learn representations of entities in graphs and demonstrate the potential to solve hard combinatorial problems.

(a) Clustered nodes. (b) Node emb. PCA.
Figure 1: (a) Node clustering of the proposed GCN_W_BN on a complete graph from IrisMP and the ordered cosine similarity between all learned node embeddings. (b) The first two principal components of each node embedding from (a). The highlighted node is part of the green cluster in the optimal solution. The closeness of both solutions is reflected in the embedding.

In this paper, we address the minimum cost multicut problem [13, 5], also known as the weighted correlation clustering problem (see Fig. 1). This grouping problem is substantially different from the aforementioned examples as it aims to assign binary edge labels based on a signed edge cost function. Such graph partitioning problems are ubiquitous in practical applications such as image segmentation [47, 4, 1, 2, 27], motion segmentation [29], stereo matching [25], inpainting [25], object tracking [28], pose tracking [23], or entity matching [43]. The minimum cost multicut problem is NP-hard as well as APX-hard [5], which makes it a particularly challenging problem. Its main difficulty lies in the exponentially growing number of constraints that define feasible solutions, especially whenever non-complete graphs are considered. Established methods solve its binary linear program formulation or linear program relaxations [25]. However, deriving optimal solutions is oftentimes intractable for large problem instances. In such cases, heuristic, iterative solvers are used as a remedy [27]. A significant disadvantage of such methods is that they cannot provide gradients that would allow training downstream tasks in an end-to-end way.

To address this issue, we propose a formulation of the minimum cost multicut problem as an MPNN. While the formulation of the multicut problem as a graph neural network seems natural, most existing GNN approaches are designed to aggregate node features potentially under edge constraints [49]. In contrast, instances of the multicut problem are purely defined through their edge weights. Graph Convolutional Networks (GCN) [33] rely on diverse node embeddings normalized by the graph Laplacian and an isotropic aggregation function. Yet, edge weights in general and signed edge weights in particular are not modeled in standard GCNs. In this paper, we propose a simple extension of GCNs and show that the signed graph Laplacian can provide sufficiently strong initial node embeddings from signed edge information. This, in conjunction with an anisotropic update function which takes into account signed edge weights, facilitates GCNs to outperform more recent models such as Signed Graph Convolutional Networks (SGCN) [14], Graph Isomorphic Networks (GIN) [54] as well as models that inherently handle real-valued edge weights such as Residual Gated Graph Convolutional Networks (RGGCN) [24] and Graph Transformer Networks (GTN) [48] on the multicut problem.

To facilitate effective training, we consider a polynomial programming formulation of the minimum cost multicut problem to derive a loss function that encourages the network to issue valid solutions. Since currently available benchmarks for the minimum cost multicut problem are notoriously small, we propose two synthetic datasets with different statistics, for example w.r.t. the graph connectivity, which we use for training and analysis. We further evaluate our models on the public benchmarks BSDS300 [40], CREMI [8], and Knott3D [2].

In the following, we first briefly review the minimum cost multicut problem and commonly employed solvers. Then, we provide an overview on GNN approaches and their application in combinatorial optimization. In subsection 3.1, we present the proposed approach for solving the minimum cost multicut problem with GNNs including model adaptations and the derivation of the proposed loss function. Sec. 4 provides an empirical evaluation of the proposed approach.

2 The Minimum Cost Multicut Problem

The minimum cost multicut problem [11, 15] is a binary edge labeling problem defined on a graph $G=(V,E)$, where the connectivity is defined by the edges $E$, i.e. $G$ is not necessarily complete. It allows for the definition of real-valued edge costs $c\colon E \to \mathbb{R}$. Its solutions decompose $G$ such as to minimize the overall cost. Specifically, the multicut problem (MP) can be defined by the following ILP [11]:

Definition 1

For a simple, connected graph $G=(V,E)$ and an associated cost function $c\colon E \to \mathbb{R}$, written below is an instance of the multicut problem

\min_{x \in \{0,1\}^{|E|}} \; \sum_{e \in E} c_e x_e \qquad (1)

with $x$ subject to the linear constraints

\forall C \in \mathrm{cycles}(G)\;\; \forall e \in C:\quad x_e \le \sum_{e' \in C \setminus \{e\}} x_{e'} \qquad (2)

where $\mathrm{cycles}(G)$ enumerates all cycles in graph $G$. The resulting $x$ is a vector of binary decision variables, one for each edge. Eq. (2) defines the cycle inequality constraints and ensures that, if an edge is cut between two nodes, there cannot be another path in the graph connecting them. Chopra and Rao [11] further showed that the facets of the MP can be sufficiently described by cycle inequalities on all chordless cycles of $G$. The problem in Eq. (1)-(2) can be reformulated in a more compact way as a polynomial program (PP):

\min_{x \in \{0,1\}^{|E|}} \; \sum_{e \in E} c_e x_e \;+\; p \sum_{C \in \mathrm{cycles}(G)} \sum_{e \in C} x_e \prod_{e' \in C \setminus \{e\}} (1 - x_{e'}) \qquad (3)

for a sufficiently large penalty $p$. The above problem is well behaved for complete graphs, where it suffices to consider all cycles of length three and Eq. (3) becomes a third-order polynomial program. For sparse graphs, sufficient constraints may have arbitrary length and their enumeration might be practically infeasible. Finding an optimal solution is NP-hard and APX-hard [5]. Therefore, exact solvers are intractable for large problem instances. Linear program relaxations as well as primal feasible heuristics have been proposed to overcome this issue, which we briefly review in the following.
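To make the role of Eqs. (1)-(3) concrete, the following minimal sketch (our own illustration, not code from the paper) evaluates the cost of an edge labeling and checks the triangle case of the cycle inequalities; it assumes edge costs stored in a dict keyed by node pairs (u, v) with u < v and a labeling x with x[e] = 1 if edge e is cut.

```python
from itertools import combinations

def multicut_objective(costs, x):
    """Objective of Eq. (1): sum of edge costs over all cut edges."""
    return sum(c for e, c in costs.items() if x[e] == 1)

def violates_triangle_inequalities(costs, x):
    """True if some triangle of the graph is cut exactly once, violating Eq. (2)."""
    nodes = sorted({v for e in costs for v in e})
    for u, v, w in combinations(nodes, 3):
        tri = [(u, v), (u, w), (v, w)]
        if all(e in costs for e in tri) and sum(x[e] for e in tri) == 1:
            return True
    return False

# Toy instance: the pair (0, 1) prefers to be cut (negative cost), the others to be joined.
costs = {(0, 1): -2.0, (0, 2): 1.5, (1, 2): 1.0}
x_bad = {(0, 1): 1, (0, 2): 0, (1, 2): 0}   # only one triangle edge cut: infeasible
x_ok = {(0, 1): 1, (0, 2): 1, (1, 2): 0}    # isolates node 0: feasible
print(multicut_objective(costs, x_ok), violates_triangle_inequalities(costs, x_bad))
```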

Related work on Multicut Solvers.

To solve the ILP from Definition (1), one can use general purpose LP solvers, like Gurobi [21] or CPLEX, such that optimal solutions might be in reach for small instances if the enumeration of constraints is tractable. However, no guarantees on the runtime can be provided. To mitigate the exponentially growing number of constraints, various cutting-plane [26, 30, 31] or branch-and-bound [2, 25] algorithms exist. For example, [26] employ a relaxed version of the ILP in Eq. (1) without cycle constraints. In each iteration, violated constraints are searched and added to the ILP. This approach provides optimal solutions to formerly intractable instances, yet without any runtime guarantees. Linear program relaxations [30, 25, 51] increase the tightness of the relaxation, for example using additional constraints, and provide optimality bounds. While such approaches can yield solutions within optimality bounds, their computation time can be very high and the proposed solution can be arbitrarily poor in practice. In contrast, heuristic solvers can provide runtime guarantees and have shown good results in many practical applications. The primal feasible heuristic KLj [27] iterates over pairs of partitions and computes local moves which allow to escape local optima. Competing approaches have been proposed, for example by [7] or [6]. The highly efficient Greedy Additive Edge Contraction (GAEC) [27] approach aggregates nodes in a greedy procedure with a guaranteed worst-case complexity. While such primal feasible heuristics are highly efficient in practice, they share one important drawback with ILP solvers that becomes relevant in the learning era: they cannot provide gradients that would allow for backpropagation, for example to learn edge weights.

In contrast, [50] have proposed a third order conditional random field based on the PP in Eq. (3), which can be optimized in an end-to-end fashion using mean field iterations. This approach is strictly limited to the optimization on complete graphs. Our approach employs graph neural networks to overcome this limitation and provides a general purpose end-to-end trainable multicut approach.

3 Message Passing Neural Networks for Multicuts

[20] provide a general framework to describe convolutions on graph data spatially as a message-passing scheme. In each convolutional layer, each node propagates its current node features via edges to all of its neighboring nodes and updates its own features based on the messages it receives. The update is commonly described by an update function

h_v^{(l+1)} = \phi\Big(h_v^{(l)}, \bigoplus_{u \in \mathcal{N}(v)} \psi\big(h_v^{(l)}, h_u^{(l)}, e_{u,v}\big)\Big) \qquad (4)

where $h_v^{(l)}$ is the feature representation of node $v$ in layer $l$ with dimensionality $d^{(l)}$, and $e_{u,v}$ are edge features. Here, $\phi$ and $\psi$ are differentiable functions, and $\bigoplus$ is a differentiable, permutation invariant aggregation function, mostly sum, max, or mean. Commonly, the message function $\psi$ and the update function $\phi$ are parameterized, and apply the same parameters at each location in the graph.
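As a concrete, non-authoritative reading of Eq. (4), the sketch below performs one message-passing update with sum aggregation on an undirected graph; the message function psi and the update function phi are hand-coded placeholders for the learned, parameterized functions.

```python
import numpy as np

def mpnn_layer(h, edges, psi, phi):
    """One update as in Eq. (4); h maps node -> feature vector,
    edges is a list of undirected (u, v, edge_feature) triples."""
    messages = {v: np.zeros_like(feat) for v, feat in h.items()}
    for u, v, e_uv in edges:
        messages[v] += psi(h[v], h[u], e_uv)   # message sent from u to v
        messages[u] += psi(h[u], h[v], e_uv)   # and from v to u
    return {v: phi(h[v], messages[v]) for v in h}

# Example placeholder functions: scale the neighbor state by the (signed) edge weight,
# then apply a residual-style nonlinear update.
psi = lambda h_v, h_u, w: w * h_u
phi = lambda h_v, m: np.tanh(h_v + m)
h0 = {0: np.ones(2), 1: np.ones(2), 2: np.ones(2)}
h1 = mpnn_layer(h0, [(0, 1, 1.0), (1, 2, -0.5)], psi, phi)
```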

Various formulations have been proposed to define these functions. Graph Convolutional Network (GCN) [33] normalizes messages with the graph Laplacian and linearly transforms their sum to update node representations. Signed Graph Convolutional Network (SGCN) [14] aggregates messages depending on the sign of the connectivity and keeps two representations per node, one for balanced paths and one for unbalanced paths. Graph Isomorphic Network (GIN) [54] learns an injective function by defining message aggregation as a sum and learning the update function as an MLP. Residual Gated Graph Convolutional Network (RGGCN) [24] computes edge gates to aggregate messages in an anisotropic manner and learns to compute the residuals to the previous representations. Edge conditioned GCNs [49] aggregate node features using dynamic weights computed from high-dimensional edge features. Graph Transformer Network (GTN) [48] also aggregates messages anisotropically by learning a self-attention model based on the transformer models in NLP [52]. While the latter three can directly handle real-valued edge weights, all are tailored towards aggregating meaningful node features. In the following, we review recent approaches that employ such models in the context of combinatorial optimization.

MPNNs and Combinatorial Optimization

Recently, MPNNs have been applied to several hard combinatorial optimization problems, such as the minimum vertex cover [38], maximal clique [38], maximal independent set [38], the satisfiability problem [38], and the travelling salesman problem [24]. Their objective is either to learn heuristics such as branch-and-bound variable selection policies for exact or approximate inference [19, 16], or to use attention [53], reinforcement learning [9, 12], or both [42, 34, 39] in an iterative, autoregressive procedure. [24] address the 2D Euclidean travelling salesman problem using the RGGCN model to learn edge representations. Other recent approaches address combinatorial problems by decoding, using supervised training such as [10]. The proposed approach is related to the work of [24], since we cast the minimum cost multicut problem as a binary edge classification problem that we address using MPNN approaches, including RGGCN. We train our model in a supervised way, yet employing a dedicated loss function which encourages feasible solutions w.r.t. Eq. (2).

Figure 2: Message aggregation in an undirected, weighted graph where node features are initialized with 1. (a) Standard message aggregation in an isotropic fashion leads to no meaningful node embeddings. (b) Our proposed method takes edge weights into account, leading to anisotropic message aggregation and meaningful node embeddings. A simple decision boundary can now partition the graph.

3.1 Multicut Neural Network

We cast the multicut problem into a binary edge classification task, where label 1 is assigned to an edge if it is cut, and 0 otherwise. The task of the model is to learn a probability distribution over the edges of a given graph, inferring how likely it is that an edge is cut. Based on these probabilities, we derive a configuration of edge labels. In contrast to existing autoregressive MPNN-based models in combinatorial optimization, we derive a solution after a single forward pass over the graph to achieve an efficient bound on the runtime of the model. In this scenario, our model can be defined as the composition of three functions. First, the edge representation mapping assigns meaningful embeddings to each edge in the graph given a multicut problem instance. This function is learned by an MPNN. Second, the edge classification function assigns to every edge its probability to be cut. This function is learned by an MLP. Last, a rounding function translates the resulting configuration of edge probabilities to a feasible configuration of edge labels, hence, computes a feasible solution.
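The following short sketch only illustrates this composition in a single forward pass; the function names are placeholders rather than the authors' API.

```python
def solve_multicut(graph, mpnn_embed, edge_mlp, round_to_feasible):
    edge_embeddings = mpnn_embed(graph)              # edge representation mapping (MPNN)
    cut_probabilities = edge_mlp(edge_embeddings)    # per-edge probability of being cut (MLP)
    preliminary = {e: int(p > 0.5) for e, p in cut_probabilities.items()}
    return round_to_feasible(graph, preliminary)     # enforce cycle consistency
```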

3.1.1 Edge Representation Mapping

Given a multicut problem instance $(G, c)$, the edge representation mapping learns to assign meaningful edge embeddings via MPNNs. One specific case of MPNN is GCN [33], where the node representation update function is defined as follows:

h_v^{(l+1)} = \sigma\Big( \Theta^{(l)} \sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{h_u^{(l)}}{\sqrt{\hat{d}_u \hat{d}_v}} \Big) \qquad (5)

where $h_v^{(l)}$ denotes the feature representation of node $v$ in layer $l$ with channel size $d^{(l)}$ and $\Theta^{(l)}$ are the layer parameters. In each layer, node representations of all neighbors of $v$ are aggregated and normalized by $\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}}$, which derives from the normalized graph Laplacian with additional self-loops in the adjacency matrix $\hat{A} = A + I$ and degree matrix $\hat{D}$. Conventionally, $h_v^{(0)}$ is initialized with node features $x_v$. Intuitively, we expect normalization with the graph Laplacian to be beneficial in the MP setting, since i) its eigenvectors encode similarities of nodes within a graph [47] and ii) even sparsely connected nodes can be assigned meaningful representations [4]. However, MP instances consist of real-valued edge-weighted graphs and the normalized graph Laplacian is not defined for negative node degrees. To the best of our knowledge, there is no prior work enabling GCN to incorporate real-valued edge weights.

Real-valued Edge Weights

Hence, our first task is to enable negative-valued edge weights in GCN. We can achieve this via the signed normalized graph Laplacian [22, 36]:

\bar{L} = I - \bar{D}^{-\frac{1}{2}} W \bar{D}^{-\frac{1}{2}} \qquad (6)

where $W$ is the weighted adjacency matrix and $\bar{D}$ is the signed node degree matrix with $\bar{D}_{vv} = \sum_{u} |w_{u,v}|$. [18] shows that this formulation preserves the desired properties of the graph Laplacian w.r.t. encoding pairwise similarities as well as representation learning on sparsely connected nodes (see i) and ii) above).

Incorporating Eq. (6) into Eq. (5), we get

h_v^{(l+1)} = \sigma\Big( \Theta^{(l)} \sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{w_{u,v}\, h_u^{(l)}}{\sqrt{\bar{d}_u \bar{d}_v}} \Big) \qquad (7)

Here, we can observe two new terms. First, each message is weighted by the edge weight $w_{u,v}$ between two nodes, enabling an anisotropic message-passing scheme. Fig. 2 motivates why this is necessary. While [54] show that GNNs with mean aggregation have theoretical limitations, they also note that these limitations vanish in scenarios where node features are diverse. Additionally, [54] only consider the case where neighboring nodes are aggregated in an isotropic fashion. As we show here, diverse node features are not necessary when messages are aggregated in the anisotropic fashion we propose. The resulting node representations make it possible to distinguish nodes in the graph despite the lack of meaningful node features. This is important in our case, since the multicut problem does not provide node features. Second, we are now able to normalize messages via the Laplacian in real-valued graphs. The normalization acts more strongly on messages that are sent to or from nodes whose adjacent edges have weights with large magnitudes. Large magnitudes on the edges usually indicate a confident decision towards joining (for positive weights) or cutting (negative weights). Thus, the normalization allows nodes with less confident edge cues to converge to a meaningful embedding, while, without such normalization, the network would predominantly focus on embedding nodes with strong edge cues, i.e. on easy decisions.

Node Features

Conventionally, node representations at timestep $0$, i.e. $h_v^{(0)}$, are initialized with node features $x_v$. However, multicut instances describe the magnitude of similarity or dissimilarity between two items via edge weights and provide no node features. Therefore, we initialize node representations with a two-dimensional vector of node degrees as:

h_v^{(0)} = \big[\, |\mathcal{N}^+(v)|,\; |\mathcal{N}^-(v)| \,\big] \qquad (8)

where $\mathcal{N}^+(v)$ is the set of neighboring nodes of $v$ connected via positive edges, and $\mathcal{N}^-(v)$ is the set of neighboring nodes of $v$ connected via negative edges.
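The NumPy sketch below is our own rendering of one layer in the spirit of Eq. (7) together with a degree-based initialization in the spirit of Eq. (8); the count-based initialization, the handling of the node's own contribution, and all shapes are simplifying assumptions, not the released implementation.

```python
import numpy as np

def init_node_features(n_nodes, edges):
    """Two-dimensional init: counts of positive and negative incident edges
    (our reading of Eq. (8))."""
    h = np.zeros((n_nodes, 2))
    for u, v, w in edges:
        col = 0 if w > 0 else 1
        h[u, col] += 1
        h[v, col] += 1
    return h

def signed_gcn_layer(h, edges, theta):
    """One update in the spirit of Eq. (7): messages are scaled by the signed edge
    weight and normalized by the signed degrees d_bar[v] = sum_u |w_uv|."""
    n = h.shape[0]
    d_bar = np.full(n, 1e-8)                 # signed degrees; epsilon avoids division by zero
    for u, v, w in edges:
        d_bar[u] += abs(w)
        d_bar[v] += abs(w)
    agg = h / d_bar[:, None]                 # contribution of the node itself
    for u, v, w in edges:
        norm = np.sqrt(d_bar[u] * d_bar[v])
        agg[v] += w * h[u] / norm            # anisotropic: sign and magnitude of w matter
        agg[u] += w * h[v] / norm
    return np.maximum(agg @ theta, 0.0)      # linear transform + ReLU

edges = [(0, 1, 2.0), (1, 2, -1.5), (0, 2, 0.5)]
h = init_node_features(3, edges)
h = signed_gcn_layer(h, edges, 0.1 * np.random.default_rng(0).standard_normal((2, 16)))
```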

Node-to-Edge Representation Mapping

To map two node representations to an edge representation, we use concatenation, i.e. $h_{(u,v)} = h_u \,\|\, h_v \in \mathbb{R}^{2d}$, where $h_{(u,v)}$ is the representation of edge $(u,v)$ and $d$ the dimension of the node embeddings. Since we consider undirected graphs, the order of the concatenation is ambiguous. Therefore, we generate two representations for each edge, one for each direction. This doubles the number of edges to be classified in the next step. The final classification result is the average computed from both representations.

Edge Classification

We learn the edge classification function via an MLP that computes a likelihood $p_e$ for each edge $e$ in graph $G$, expressing the confidence whether an edge should be cut. A binary solution is retrieved by thresholding the likelihoods at $0.5$. Since there is no strict guarantee that the resulting edge label configuration is feasible w.r.t. Eq. (2), we postprocess it and round it to a feasible solution. To this end, we compute a connected component labeling on $G$ after removing the cut edges and reinstate removed edges for which both corresponding nodes remain within the same component. For efficiency, we implement the connected component labeling as a message-passing layer and can therefore assign cluster identifications to each node efficiently on the GPU.
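The paper implements the connected component labeling as a message-passing layer on the GPU; the sketch below only conveys the same rounding logic with a simple union-find over the edges predicted as joined (data layout assumed, not the paper's code).

```python
def round_to_feasible(n_nodes, edges, prelim):
    """Keep an edge cut only if its endpoints lie in different connected
    components of the graph restricted to edges predicted as joined."""
    parent = list(range(n_nodes))            # union-find over the join edges

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a

    for u, v in edges:
        if prelim[(u, v)] == 0:              # predicted join: merge the two components
            parent[find(u)] = find(v)

    return {(u, v): int(find(u) != find(v)) for u, v in edges}

edges = [(0, 1), (1, 2), (0, 2)]
prelim = {(0, 1): 1, (1, 2): 0, (0, 2): 0}   # a single cut inside a joined triangle
print(round_to_feasible(3, edges, prelim))   # the inconsistent cut is reinstated as a join
```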

3.1.2 Training

Since we cast the multicut problem as a binary edge labelling problem, we can formulate a supervised training process that minimizes the Binary Cross Entropy (BCE) loss w.r.t. the optimal solution, which we denote $\mathcal{L}_{\mathrm{BCE}}$.

Cycle Consistency Loss

The BCE loss encodes feasibility only implicitly by comparison to the optimal solution. To explicitly learn feasible solutions, we take recourse to the PP formulation of the multicut problem in Eq. (3) and formulate a feasibility loss that we denote Cycle Consistency Loss (CCL):

\mathcal{L}_{\mathrm{CCL}} = \sum_{C \in \mathrm{cc}(G, K)} \sum_{e \in C} p_e \prod_{e' \in C \setminus \{e\}} (1 - p_{e'}) \qquad (9)

where $\mathrm{cc}(G, K)$ is a function that returns all chordless cycles in $G$ of length at most $K$, and a hyperparameter $\lambda$ balances BCE and CCL in the total loss. The CCL term effectively penalizes infeasible edge label configurations during training; it adds a penalty of at most $\lambda$ to the total loss for each chordless cycle that is only cut once. In practice, we only consider chordless cycles up to a small maximum length $K$, and we only consider a cycle if at least one of its edges is predicted to be cut. This is necessary to ensure practicable training runtimes. The total training loss is given by $\mathcal{L} = \mathcal{L}_{\mathrm{BCE}} + \lambda\, \mathcal{L}_{\mathrm{CCL}}$. For best results, we train all models using batch normalization.
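A simplified, PyTorch-style sketch of this training loss is given below, restricted to chordless cycles of length three; the variable names, the triangle restriction, and the fixed lam value are illustrative assumptions and do not reproduce the paper's cycle enumeration or λ schedule.

```python
import torch

def multicut_loss(cut_probs, optimal_labels, triangles, lam=1.0):
    """cut_probs: per-edge cut probabilities; triangles: triplets of edge indices
    that form chordless 3-cycles. Returns BCE plus a differentiable cycle penalty."""
    bce = torch.nn.functional.binary_cross_entropy(cut_probs, optimal_labels)
    ccl = cut_probs.new_zeros(())
    for i, j, k in triangles:
        p = cut_probs[[i, j, k]]
        q = 1.0 - p
        # probability mass of "exactly one edge of this cycle is cut" (the infeasible case)
        ccl = ccl + (p * torch.roll(q, 1) * torch.roll(q, 2)).sum()
    return bce + lam * ccl

cut_probs = torch.tensor([0.9, 0.1, 0.2], requires_grad=True)
target = torch.tensor([0.0, 0.0, 0.0])
loss = multicut_loss(cut_probs, target, triangles=[(0, 1, 2)])
loss.backward()                              # gradients flow through both loss terms
```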

(a) Graph 1. (b) Graph 2. (c) Graph 3.
(d) OPT 1. (e) OPT 2. (f) OPT 3.
Figure 3: Samples of the IrisMP Graph Dataset. (a)-(c) depict problem instances. Red edges have negative weights. (d)-(f) depict optimal solutions. Gray edges are cut.
   
(a) Graph 1. (b) Graph 2. (c) Graph 3.
(d) OPT 1. (e) OPT 2. (f) OPT 3.
Figure 4: Samples of the RandomMP dataset. (a)-(c) depict problem instances. Red edges have negative weights. (d)-(f) depict optimal solutions. Gray edges are cut.
Training Datasets

While the multicut problem is ubiquitous in many real world applications, the amount of available annotated problem instances is scarce and domain specific. Therefore, in order to train and test a general purpose model, we generated two synthetic datasets of multicut instances, IrisMP and RandomMP, with complementary connectivity statistics, which we use for training and analysis.

The first dataset, IrisMP, consists of multicut instances on complete graphs based on the Iris flower dataset [17]. The generation procedure is described in the Appendix. Each problem instance consists of a limited number of edges. Three graphs with their respective optimal solutions are depicted in Fig. 3. To complement the IrisMP dataset, we generated a second dataset that contains sparse but larger problem instances, called RandomMP. The generation procedure is described in the Appendix. Examples are depicted in Fig. 4.

4 Experiments

We evaluate all models trained on IrisMP and RandomMP and provide runtime as well as objective value evaluations, where we compare the proposed GCN to GIN and SGCN-based, edge-weight enabled models (see appendix for details) as well as to RGGCN [24] and GTN [48]. Then, we provide an ablation study on the proposed GCN-based edge representation mapping and the multicut loss.

4.1 Evaluation on Test Data

We evaluate our models on three segmentation benchmarks: a graph-based image segmentation dataset [1] based on the Berkeley Segmentation Dataset (BSDS300) [40], a graph-based volume segmentation dataset [2] (Knott3D), and additional test instances based on the challenge on Circuit Reconstruction from Electron Microscopy Images (CREMI) [8], which provides volumes of electron microscopy images of fly brains. BSDS300 and Knott3D instances are available as part of a benchmark of discrete energy minimization problems, called OpenGM [25].

Implementation Details  We train the proposed MPNN-based solvers with (adapted) GCN, GIN, SGCN, RGGCN and GTN backbones in different settings, using a uniform node representation dimensionality and an MPNN depth chosen separately for IrisMP and RandomMP. CCL is applied with a fixed $\lambda$ and a bounded chordless cycle length. All of our experiments are conducted on MEGWARE Gigabyte G291-Z20 servers with NVIDIA Quadro RTX 8000 GPUs. If not stated otherwise, we consider the achieved optimal objective ratio as performance metric.

Results

We show the results on all test datasets of the best models, selected based on the evaluation objective value after rounding, and thereby compare models trained on IrisMP and models trained on RandomMP. In general, sparser problems (RandomMP and the established test datasets) are harder for the solvers to generalize to. This is likely due to the longer chordless cycles that the model needs to consider to ensure feasibility. Overall, our GCN-based model provides the best generalizability over all test datasets, both when trained on IrisMP and on RandomMP. We compare the GNN-based solvers to different baselines. First, we train logistic regression (LR) and MLPs as edge classifiers directly on the training data (concatenation of node features and edge weights). All our learned models outperform these baselines significantly. This indicates that MPNNs provide meaningful topological information to the edge classifier that facilitates solving MP instances. Second, we compare against Branch & Cut LP and ILP solvers as well as GAEC. In terms of objective value, GCN-based solvers are on par with heuristics and LP solvers on complete graphs, even when trained on sparse graphs. On general graphs, ILP solvers and GAEC issue lower energies, and, as expected, training on complete graphs does not generalize well to sparse graphs. However, the wall-clock runtime comparison shows that GCN-based solvers are faster by an order of magnitude than ILP and LP solvers. They are also significantly faster than the fast and greedy GAEC heuristic. We further compared to a time-constrained version of GAEC, where we set the available time budget to the runtime of the GCN-based solver. The result shows that the trade-off between smaller energies and smaller runtime is in favor of the GCN-based solver. In the Appendix, we report additional experiments for our proposed GCN-based model on domain specific training and show that task specific priors can be learned efficiently from only few training samples.

Table 1: Wall-clock runtime [ms] and objective values per number of nodes for GAEC, LP, ILP, and the MPNN-based solver (GCN_W_BN) on a growing, randomly-generated graph. OOT indicates no termination within 24hrs.

Next, we conduct a scalability study on random graphs with an increasing number of nodes, generated according to the RandomMP procedure. Results are shown in Tab. 1. While GAEC is fast for small graphs, the GCN-based solver scales better and returns solutions significantly faster for larger graphs. LP and ILP solvers are not able to provide solutions within 24hrs for larger instances. It is noteworthy that GNN-based solvers spend a large fraction of their runtime rounding the solutions. Hence, GNN-based solvers are already more scalable and still have a large potential for improvement in this regard, while GAEC and LP/ILP solvers are already highly optimized for runtime.

Next, we ablate on the GCN aggregation functions, loss and network depths.

Edge-weighted GCNs

First, we determine the impact of each adjustment to the GCN update function. In Tab. 2 we show the results of this ablation study. While the vanilla GCN is not applicable in the MP setting, simply removing the Laplacian from Eq. (5) provides a first baseline. We observe that adding edge weights to Eq. (5) improves the performance on the test split of the training data substantially. However, the model is not able to generalize to different graph statistics. By adding the signed normalization term we arrive at Eq. (7), achieving improved generalizability. Removing edge weights from Eq. (7) deteriorates performance and generalizability. Thus, both changes are necessary to enable GCN in the MP setting.

Additionally, we compare GCNs with edge weights and signed graph Laplacian normalization, trained with batch normalization, to the plain GCN model [33]. To this end, we train on the IrisMP dataset with fixed model width and depth. Here, we set $\lambda = 0$, hence we do not apply CCL. Fig. 5 shows the results of this experiment. The corresponding plots for SGCN and GIN are given in the Appendix. The variants with edge features achieve lower losses than those without edge features, and batch normalization improves the loss further. In fact, the original GCN is not able to provide any meaningful features for the edge classification network. The proposed extensions enable these networks to find meaningful node representations for the multicut problem.

(a) Training loss. (b) Evaluation loss. (c) Network Depth.
Figure 5: (a) Training and (b) evaluation loss while training variants of GCN on IrisMP. Each plot compares the variants with (GCN_W) and without (GCN) edge weights in the aggregation, and GCN_W with batch normalization (GCN_W_BN). (c) Results in terms of optimal objective ratio on the evaluation data when training GCN_W_BN with varying depths.
(a) Feasibility. (b) Optimality. (c) Objective.
Figure 6: (a) Ratio of feasible solutions before repairing, (b) Ratio of optimal solutions, and (c) Optimal objective ratio, for GCN_W_BN on RandomMP, applying CCL after instances.

4.2 Ablation Study

Variant IrisMP RandMP BSDS300 CREMI Knott
GCN Not applicable: Laplacian may not exist.
- Laplacian 0.41 0.18 0.00 0.49 0.00
+ edge weights 0.95 0.18 0.40 0.57 0.19
+ signed norm. 0.96 0.67 0.75 0.74 0.68
= GCN_W 0.96 0.67 0.75 0.74 0.68
- edge weights 0.64 0.05 0.00 0.48 0.00
GIN0 0.41 0.04 0.07 0.48 0.00
MPNN 0.93 0.45 0.48 0.49 0.06
Table 2: Ablation study with GCN [33] trained on IrisMP without CCL. Additional comparison to vanilla versions of GIN0 [54] and MPNN [20]. We report the performance on the test data in terms of optimal objective ratio.
Number of Convolutional Layers

Next, we evaluate the effect of the depth of the GCN model when trained on the IrisMP dataset and evaluated on IrisMP, RandomMP as well as BSDS300. Fig. 5(c) shows the results after varying the depth in increasing step sizes. The results suggest that increasing the depth improves the objective value up to a certain point. In the case of IrisMP graphs, which are complete and therefore have diameter 1 and chordless cycles of length at most 3, increasing the depth further has no obvious effect. This is an important observation, because [37] raise concerns that GCN models can suffer from over-smoothing such that learned representations might become indistinguishable.

Cycle Consistency Loss

Here, we evaluate the effect of applying the cycle consistency loss from Eq. (9) by comparing models where CCL is applied after a fixed number of training instances to models trained without CCL. Figures 6(a) and (b) show the progress of the ratio of feasible solutions and the ratio of optimal solutions found during training. As soon as CCL is applied, the ratio of feasible solutions increases while the ratio of optimal solutions decreases. Hence, CCL induces a trade-off between finding feasible and optimal solutions, where the model is forced to find feasible solutions to avoid the penalty and, as a consequence, settles for suboptimal relaxed solutions. However, the objective value after rounding improves, which is most relevant because these values correspond to feasible solutions. This indicates that the model's upper bound on the optimal energy is higher while the relaxation is tighter when CCL is employed. See the Appendix for an ablation on $\lambda$ in Eq. (9).

Meaningful Embeddings

In Fig. 1 we visualize the node embedding space given by our best performing model on an IrisMP instance. Plotting the cosine similarity between all nodes reflects the resulting clusters. This shows that the model is able to distinguish nodes based on their connectivity. We show further examples in the Appendix.

5 Conclusion

In this paper, we address the minimum cost multicut problem using feed forward MPNNs. To this end, we provide appropriate model and training loss modifications. Our experiments on two synthetic and three real datasets with various GCN architectures show that the proposed approach provides highly efficient solutions even to large instances and scales better than highly optimized primal feasible heuristics (GAEC), while providing competitive energies. Another significant advantage of our learning-based approach is the ability to provide gradients for downstream tasks, which we assume will inherently improve inferred solutions.

References

  • [1] B. Andres, J. H. Kappes, T. Beier, U. Köthe, and F. A. Hamprecht (2011) Probabilistic Image Segmentation with Closedness Constraints. In ICCV, Cited by: §1, §4.1.
  • [2] B. Andres, T. Kroeger, K. L. Briggman, W. Denk, N. Korogod, G. Knott, U. Koethe, and F. A. Hamprecht (2012) Globally Optimal Closed-Surface Segmentation for Connectomics. In ECCV, Cited by: §1, §1, §2, §4.1.
  • [3] E. Arakelyan, D. Daza, P. Minervini, and M. Cochez (2021) Complex Query Answering with Neural Link Predictors. External Links: 2011.03459 Cited by: §1.
  • [4] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik (2011) Contour Detection and Hierarchical Image Segmentation. TPAMI 33 (5), pp. 898–916. Cited by: §1, §3.1.1.
  • [5] N. Bansal, A. Blum, and S. Chawla (2004) Correlation clustering. Machine Learning 56 (1–3), pp. 89–113. Cited by: §1, §2.
  • [6] T. Beier, B. Andres, U. Köthe, and F. A. Hamprecht (2016) An efficient fusion move algorithm for the minimum cost lifted multicut problem. In ECCV, Cited by: §2.
  • [7] T. Beier, T. Kroeger, J.H. Kappes, U. Köthe, and F.A. Hamprecht (2014) Cut, glue, & cut: a fast, approximate solver for multicut partitioning. In CVPR, Cited by: §2.
  • [8] T. Beier, C. Pape, N. Rahaman, T. Prange, S. Berg, D. D. Bock, A. Cardona, G. W. Knott, S. M. Plaza, L. K. Scheffer, U. Koethe, A. Kreshuk, and F. A. Hamprecht (2017) Multicut brings automated neurite segmentation closer to human performance. Nature methods 14 (2), pp. 101—102. Cited by: §1, §4.1.
  • [9] I. Bello, H. Pham, Q. V. Le, M. Norouzi, and S. Bengio (2017) Neural Combinatorial Optimization with Reinforcement Learning. In ICLR Workshop, Cited by: §3.
  • [10] Y. Chen and B. Zhang (2020) Learning to solve network flow problems via neural decoding. External Links: 2002.04091 Cited by: §3.
  • [11] S. Chopra and M. R. Rao (1993) The partition problem. Mathematical Programming 59 (1), pp. 87–115. Cited by: §2, §2.
  • [12] H. Dai, E. B. Khalil, Y. Zhang, B. Dilkina, and L. Song (2017) Learning Combinatorial Optimization Algorithms over Graphs. In NIPS, Cited by: §1, §3.
  • [13] E. D. Demaine, D. Emanuel, A. Fiat, and N. Immorlica (2006) Correlation clustering in general weighted graphs. Theoretical Computer Science 361 (2–3), pp. 172–187. Cited by: §1.
  • [14] T. Derr, Y. Ma, and J. Tang (2018) Signed Graph Convolutional Networks. In ICDM, Cited by: Appendix 0.E, §1, §3.
  • [15] M. M. Deza and M. Laurent (1997) Geometry of cuts and metrics. Springer. Cited by: §2.
  • [16] J. Ding, C. Zhang, L. Shen, S. Li, B. Wang, Y. Xu, and L. Song (2020-Apr.) Accelerating primal solution findings for mixed integer programs based on solution prediction. AAAI 34 (02), pp. 1452–1459. Cited by: §3.
  • [17] R. A. Fisher (1936) The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (2), pp. 179–188. Cited by: §0.C.1, §3.1.2.
  • [18] J. Gallier (2016) Spectral theory of unsigned and signed graphs. applications to graph clustering: a survey. External Links: 1601.04692 Cited by: §3.1.1.
  • [19] M. Gasse, D. Chételat, N. Ferroni, L. Charlin, and A. Lodi (2019) Exact combinatorial optimization with graph convolutional neural networks. In NeurIPS, Cited by: §3.
  • [20] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural Message Passing for Quantum Chemistry. In ICML, Cited by: §1, §3, Table 2.
  • [21] L. Gurobi Optimization (2020) Gurobi optimizer reference manual. External Links: Link Cited by: §2.
  • [22] Y. P. Hou (2005) Bounds for the Least Laplacian Eigenvalue of a Signed Graph. Acta Math Sinica 21 (4), pp. 955–960. Cited by: §3.1.1.
  • [23] E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele (2016) DeeperCut: a deeper, stronger, and faster multi-person pose estimation model. In ECCV, Cited by: §1.
  • [24] C. K. Joshi, T. Laurent, and X. Bresson (2019) An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem. External Links: 1906.01227 Cited by: §1, §3, §3, §4.
  • [25] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother (2015) A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems. IJCV 115 (2), pp. 155–184. Cited by: §1, §2, §4.1.
  • [26] J. H. Kappes, M. Speth, B. Andres, G. Reinelt, and C. Schnörr (2011) Globally optimal image partitioning by multicuts. In EMMCVPR, Cited by: §2.
  • [27] M. Keuper, E. Levinkov, N. Bonneel, G. Lavoué, T. Brox, and B. Andres (2015) Efficient Decomposition of Image and Mesh Graphs by Lifted Multicuts. In ICCV, Cited by: §1, §2.
  • [28] M. Keuper, S. Tang, B. Andres, T. Brox, and B. Schiele (2020) Motion Segmentation & Multiple Object Tracking by Correlation Co-Clustering. TPAMI 42 (1), pp. 140–153. Cited by: §1.
  • [29] M. Keuper (2017-10) Higher-order minimum cost lifted multicuts for motion segmentation. In ICCV, Cited by: §1.
  • [30] S. Kim, S. Nowozin, P. Kohli, and C. D. Yoo (2011) Higher-Order Correlation Clustering for Image Segmentation. In NIPS, Cited by: §2.
  • [31] S. Kim, C. Yoo, S. Nowozin, and P. Kohli (2014) Image segmentation using higher-order correlation clustering. TPAMI 36, pp. 1761–1774. Cited by: §2.
  • [32] D. P. Kingma and J. Ba (2015) Adam: A Method for Stochastic Optimization. In ICLR, Cited by: Appendix 0.E, Appendix 0.H.
  • [33] T. N. Kipf and M. Welling (2017) Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, Cited by: Appendix 0.E, §1, §1, §3.1.1, §3, §4.1, Table 2.
  • [34] W. Kool, H. V. Hoof, and M. Welling (2019) Attention, Learn to Solve Routing Problems!. In ICLR, Cited by: §3.
  • [35] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, Cited by: §1.
  • [36] J. Kunegis, S. Schmidt, A. Lommatzsch, J. Lerner, E. W. D. Luca, and S. Albayrak (2010) Spectral Analysis of Signed Graphs for Clustering, Prediction and Visualization. In SDM, Cited by: §3.1.1.
  • [37] Q. Li, Z. Han, and X.-M. Wu (2018) Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning. In AAAI, Cited by: §4.2.
  • [38] Z. Li, Q. Chen, and V. Koltun (2018) Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search. In NIPS, Cited by: §1, §3.
  • [39] Q. Ma, S. Ge, D. He, D. Thaker, and I. Drori (2020) Combinatorial Optimization by Graph Pointer Networks and Hierarchical Reinforcement Learning. In AAAI Workshop on Deep Learning on Graphs: Methodologies and Applications, Cited by: §3.
  • [40] D. Martin, C. Fowlkes, D. Tal, and J. Malik (2001) A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In ICCV, Cited by: Appendix 0.B, §1, §4.1.
  • [41] A. Micheli (2009) Neural network for graphs: a contextual constructive approach. IEEE Transactions on Neural Networks 20 (3), pp. 498–511. Cited by: §1.
  • [42] M. Nazari, A. Oroojlooy, L. Snyder, and M. Takac (2018) Reinforcement Learning for Solving the Vehicle Routing Problem. In NIPS, Cited by: §3.
  • [43] Y. Oulabi and C. Bizer (2019) Extending cross-domain knowledge bases with long tail entities using web table data. In Advances in Database Technology, pp. 385–396. Cited by: §1.
  • [44] M. O. R. Prates, P. H. C. Avelar, H. Lemos, L. Lamb, and M. Vardi (2019) Learning to Solve NP-Complete Problems - A Graph Neural Network for the Decision TSP. In AAAI, Cited by: §1.
  • [45] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2009) The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80. Cited by: §1.
  • [46] D. Selsam, M. Lamm, B. Bünz, P. Liang, L. de Moura, and D. L. Dill (2019) Learning a sat solver from single-bit supervision. External Links: 1802.03685 Cited by: §1.
  • [47] J. Shi and J. Malik (2000) Normalized cuts and image segmentation. TPAMI 22 (8), pp. 888–905. Cited by: §1, §3.1.1.
  • [48] Y. Shi, Z. Huang, W. Wang, H. Zhong, S. Feng, and Y. Sun (2020) Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification. External Links: 2009.03509 Cited by: §1, §3, §4.
  • [49] M. Simonovsky and N. Komodakis (2017) Dynamic edge-conditioned filters in convolutional neural networks on graphs. In CVPR, Cited by: §1, §3.
  • [50] J. Song, B. Andres, M. Black, O. Hilliges, and S. Tang (2019) End-to-end Learning for Graph Decomposition. In ICCV, Cited by: §2.
  • [51] P. Swoboda and B. Andres (2017) A message passing algorithm for the minimum cost multicut problem. In CVPR, Cited by: §2.
  • [52] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017) Attention is All You Need. In NIPS, Cited by: §1, §3.
  • [53] O. Vinyals, M. Fortunato, and N. Jaitly (2015) Pointer Networks. In NIPS, Cited by: §3.
  • [54] K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2019) How Powerful are Graph Neural Networks?. In ICLR, Cited by: Appendix 0.E, §1, §3.1.1, §3, Table 2.

Appendix 0.A Appendix

In this supplementary material, we provide several additional details, ablations and visualizations. We provide

  • an example graph from an image segmentation problem, providing some intuition on the practical quality of results;

  • details on the generation of the contributed training datasets IrisMP and RandomMP;

  • adapted update functions for GCN, GIN and SGCN that we used for the evaluation in Table 1 of the main paper, as well as training and evaluation losses for these models with and without the proposed modifications and batch norm;

  • experiments for domain specific finetuning of our models. While the domain specific training data is very scarce, these experiments show the promise of learnable multicut solvers;

  • an ablation study on the choice of the hyperparameter $\lambda$ that balances the two terms of the training loss;

  • additional embedding space visualizations, similar to Fig. 1 in the main paper;

  • additional details to our training settings.

Appendix 0.B Multicut Segmentation Example

For visualization purposes, we generate a small graph based on image segmentation of a training sample from the Berkeley Segmentation Dataset, BSDS300 [40], given in Fig. 7(a). First, the gradients of the original image are computed using a Sobel filter. Then, the watershed transformation is computed with a desired number of markers and a fixed compactness. This results in the image being partitioned into segments as shown in Fig. 7(b). Then the image is superpixelated by computing the mean color of each resulting segment, i.e., each superpixel is assigned the mean color of the pixels it contains, yielding Fig. 7(c). From these superpixels, a region adjacency graph is constructed by connecting superpixels whose squared spatial distance is below a threshold. A positive weight between two superpixels is calculated from their color similarity by applying a Gaussian kernel (see https://scikit-image.org/docs/dev/api/skimage.future.graph.html#skimage.future.graph.rag_mean_color) to their color distance. The resulting graph is shown in Fig. 8. From positive weights that represent superpixel similarity, positive and negative edge weights for the multicut problem can be derived using the logit function.
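As a small numeric illustration of this weight construction (the kernel bandwidth below is a placeholder, not the value used for Fig. 8), a Gaussian kernel maps the color distance of adjacent superpixels to a similarity in (0, 1), and the logit turns it into a signed multicut cost:

```python
import numpy as np

def edge_cost(color_u, color_v, sigma=30.0):
    """Map a color distance to a signed multicut edge cost; sigma is a placeholder."""
    diff = np.asarray(color_u, dtype=float) - np.asarray(color_v, dtype=float)
    similarity = np.exp(-np.dot(diff, diff) / sigma)       # Gaussian kernel, in (0, 1]
    similarity = np.clip(similarity, 1e-6, 1.0 - 1e-6)     # keep the logit finite
    return float(np.log(similarity / (1.0 - similarity)))  # log-odds: positive favors joining

print(edge_cost([120, 80, 60], [121, 81, 60]))             # similar colors -> positive weight
print(edge_cost([120, 80, 60], [10, 200, 240]))            # dissimilar colors -> negative weight
```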

(a) Original image. (b) Watershed transformed.
(c) Superpixel image. (d) Region adjacency graph.
Figure 7: Expressing image segmentation by superpixelization as a multicut problem instance.
(a) Similarities. (b) Real-valued weights.
Figure 8: Example graph. (a) Weights are similarities based on a Gaussian kernel. (b) Weights are log-odds of (a) and define a multicut problem instance.

Since the graph is small, we can compute its optimal solution using an ILP solver (see Fig. 9(a)). GAEC performs slightly worse than the ILP solution in terms of objective value, thus providing a slightly suboptimal optimality ratio.

(a) ILP solution. (b) GAEC solution.
(c) ILP segmentation. (d) GAEC segmentation.
Figure 9: Multicuts of the graph using different solvers. (a) shows the optimal cut, and (b) shows the cut computed with GAEC. Dotted, gray lines indicate that the edge is cut, and solid, black lines indicate otherwise. (c) and (d) depict the resulting segmentations.
(a) GCN_W_BN trained on IrisMP.
(b) GCN_W_BN trained on RandomMP.
Figure 10: Multicuts and resulting segmentations of the example graph by two of our trained models. Highlighted edges are removed by the model without partitioning the corresponding nodes. Hence, those cuts violate cycle consistencies and the edges are reinstated after rounding.

For comparison, we show multicuts computed by two of our models in Fig. 10. While the resulting optimality ratios are lower, the results are still of comparable practical quality, especially when considering the model trained on RandomMP.

Appendix 0.C Training Datasets

0.C.1 IrisMP

The first dataset we generated, IrisMP, consists of multicut problem instances on complete graphs based on the Iris flower dataset [17]. This dataset is a well-known multivariate dataset containing four different measurements, namely the width and length of sepal and petal, for three different species of flowers. For each species the dataset contains 50 samples. For each graph, we drew a subset of dimensions uniformly at random, and then uniformly drew a small set of data points that we used as nodes. We connected all the nodes and computed edge weights for each connection. Edge weights are computed in three steps. First, we computed the distance between two nodes. Then we used a Gaussian kernel to convert the distances into similarity measures. Since the resulting similarity lies in $(0, 1)$, we applied the logit function to retrieve positive as well as negative edge weights. Since all graphs are fully connected, it is sufficient for this dataset to only consider triangles to ensure cycle consistency. Since the number of edges increases quadratically with the number of nodes in such complete graphs, we kept the number of drawn nodes small. The resulting dataset contains a training split as well as additional graphs for validation and test splits.
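The sketch below mimics this generation procedure; the number of sampled dimensions and data points as well as the kernel bandwidth are placeholders, since the exact values are not reproduced here.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris

def iris_instance(rng, n_points=8, n_dims=2, sigma=1.0):
    """Sample a complete multicut instance over a few Iris data points."""
    X = load_iris().data
    dims = rng.choice(X.shape[1], size=n_dims, replace=False)    # random feature subset
    rows = rng.choice(X.shape[0], size=n_points, replace=False)  # random data points
    pts = X[np.ix_(rows, dims)]
    costs = {}
    for u, v in combinations(range(n_points), 2):                # complete graph
        sim = np.exp(-np.sum((pts[u] - pts[v]) ** 2) / sigma)    # distance -> similarity
        sim = np.clip(sim, 1e-6, 1.0 - 1e-6)
        costs[(u, v)] = float(np.log(sim / (1.0 - sim)))         # logit -> signed edge weight
    return costs

instance = iris_instance(np.random.default_rng(0))
```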

Table 3: IrisMP statistics (graph statistics, edge weights, and objective values for the train/eval/test splits).

0.C.2 RandomMP

To complement the IrisMP dataset, we generated a second dataset that contains sparse but larger problem instances in terms of the number of nodes, called RandomMP. To generate this dataset, we employed the following procedure. First, we sampled the number of nodes from a normal distribution. Then, for each node, we sampled its coordinates on a 2D plane uniformly for each coordinate. We connected nodes based on the $k$ nearest neighbors, where $k$ is itself drawn from a normal distribution. However, we constrained the minimum number of neighbors of each node so that the graph is connected. We computed edge weights based on the distance on the plane. Then, we subtracted the median from all edge weights to achieve an approximately balanced distribution of positive as well as negative connections. Again, we generated a training split as well as separate splits for validation and testing.

Table 4: RandomMP statistics (graph statistics, edge weights, and objective values for the train/eval/test splits).

Appendix 0.D Test Datasets

Table 5: BSDS300 statistics (graph statistics, edge weights, and objective values).
Table 6: CREMI statistics (graph statistics, edge weights, and objective values).

Appendix 0.E Modified Update Functions

In the main paper, we extend the Graph Convolutional Network (GCN) [33] such that it is able to process weighted graphs. In addition to that, we extend Signed Graph Convolutional Network (SGCN) [14] and Graph Isomorphic Network (GIN) [54] in a similar way to be able to produce the results in Table 1 of the main paper. Tab. 7 shows our modifications to the update functions of those networks.

Table 7: Modified update functions for GCN, SGCN, and GIN; the modifications to account for signed edge weights are highlighted in blue.

In Fig. 11 we show the training and evaluation losses of the modified update functions in comparison to their vanilla versions. We trained on IrisMP and performed edge classification with an MLP consisting of fully-connected hidden layers. No CCL was applied. Optimization was performed with Adam [32] and a fixed batch size.

(a) GCN Training Loss. (b) SGCN Training Loss. (c) GIN Training Loss.
(d) GCN Evaluation Loss. (e) SGCN Evaluation Loss. (f) GIN Evaluation Loss.
Figure 11: Comparison of the training (a)-(c) and evaluation (d)-(f) losses of the networks with (indicated by _W) and without modified update functions, as well as with batch normalization (indicated by _W_BN).
Figure 12: Training progress of the optimal objective ratio for different values of $\lambda$ when training GCN_W_BN on RandomMP and applying CCL after a fixed number of training instances.

Appendix 0.F Finetuning Experiments

Originally, generating synthetic training data for our method was mainly driven by the lack of suitable training data. The benchmark datasets mentioned consist of only a few problem instances (BSDS300: 100, CREMI: 3, Knott: 24). Although BSDS300 consists of 100 training and 100 testing images, the multicut problem instances provided by the OpenGM benchmark are solely based on the test images. This is because the training images were used to train a model that derives edge weights for the testing images and were discarded afterwards. Nevertheless, we consider it interesting to see how the model behaves in this environment of scarce training data, and whether finetuning can help to boost model performance on a specific task. Therefore, we ran the following additional experiments:

  1. we trained GCN_W_BN from scratch on training splits of these datasets and

  2. we finetuned the best performing model of Table 1 (GCN_W_BN trained on RandomMP - referred to as RMP-GCN in the following) on training splits of these datasets.

We split BSDS300 into 70/20/10 (train/eval/test), CREMI into 2/1 (train/eval) and Knott into 18/6 (train/eval). The performance after training is evaluated on the eval split in terms of optimality ratio and compared to the performance of the RandomMP-trained GCN_W_BN (RMP) on the same split. Results are given in Tab. 8. Please note that these results cannot be directly compared to Table 1, since the whole datasets were evaluated there.

Table 8: Domain specific training of GCN_W_BN, from scratch and finetuned, on the scarce available training data for BSDS300, Knott and CREMI, compared to the general purpose model trained on RandomMP (RMP). Results show that domain specific properties can be learned from few samples, but pre-training can help in general.

Still, the results indicate that when trained on BSDS300 from scratch, the model improves on the eval split from 0.8818 to 0.8834. We were not able to find models that outperform RMP-GCN trained on Knott and CREMI. For finetuning we tried three different settings: i) retraining all parameters (GCN+edge classifier), ii) only retraining edge classifier parameters and iii) only retraining the last layer of the edge classifier. We found that setting iii) worked best overall. As shown in Tab. 8, we found models that improved on Knott and CREMI. Yet, finetuning did not help to improve performance on BSDS300.

Appendix 0.G Embedding Space Visualizations

Fig. 13 visualizes the results of our model (GCN_W_BN) on an IrisMP graph (#0).

(a) Graph cut by model. (b) Optimal solution.
(c) Node clustering by model. (d) Optimal solution.
(e) Node clustering by model. (f) Node embeddings.
Figure 13: (a) Graph cut solution computed by the model. (b) Graph cut of this graph according to the optimal solution. (c) Clustering of nodes according to the model's graph cut. (d) Clustering of nodes according to the optimal solution. (e) Node embeddings projected into a 2D space using PCA. Node colors according to the model prediction. (f) Cosine similarity between all node embeddings, ordered by similarity.

Fig. 14 visualizes the results of our model (GCN_W_BN) on an IrisMP graph (#1).

(a) Graph cut by model. (b) Optimal solution.
(c) Node clustering by model. (d) Optimal solution.
(e) Node clustering by model. (f) Node embeddings.
Figure 14: (a) Graph cut solution computed by the model. (b) Graph cut of this graph according to the optimal solution. (c) Clustering of nodes according to the model's graph cut. (d) Clustering of nodes according to the optimal solution. (e) Node embeddings projected into a 2D space using PCA. Node colors according to the model prediction. (f) Cosine similarity between all node embeddings, ordered by similarity.

Fig. 15 visualizes the results of our model (GCN_W_BN) on an IrisMP graph (#16).

(a) Graph cut by model. (b) Optimal solution.
(c) Node clustering by model. (d) Optimal solution.
(e) Node clustering by model. (f) Node embeddings.
Figure 15: (a) Graph cut solution computed by the model. (b) Graph cut of this graph according to the optimal solution. (c) Clustering of nodes according to the model's graph cut. (d) Clustering of nodes according to the optimal solution. (e) Node embeddings projected into a 2D space using PCA. Node colors according to the model prediction. (f) Cosine similarity between all node embeddings, ordered by similarity.

Fig. 16 visualizes the results of our model (GCN_W_BN) on an IrisMP graph (#17).

(a) Graph cut by model. (b) Optimal solution.
(c) Node clustering by model. (d) Optimal solution.
(e) Node clustering by model. (f) Node embeddings.
Figure 16: (a) Graph cut solution computed by the model. (b) Graph cut of this graph according to the optimal solution. (c) Clustering of nodes according to the model's graph cut. (d) Clustering of nodes according to the optimal solution. (e) Node embeddings projected into a 2D space using PCA. Node colors according to the model prediction. (f) Cosine similarity between all node embeddings, ordered by similarity.

Appendix 0.H Training settings

Fig. 17 shows mean evaluation plots of five training runs of GCN_W_BN on RandomMP. No CCL was applied during an initial phase of training. Then, $\lambda$ was linearly increased to its final value, after which the training continued with this final $\lambda$. The node embedding dimensionality and the number of GCN layers were kept fixed across runs. The MLP edge classifier consists of fully-connected hidden layers. Optimization was performed with Adam [32] and a fixed batch size. Each training was performed on a MEGWARE Gigabyte G291-Z20 server on one NVIDIA Quadro RTX 8000 GPU and took several hours on average, including the final (CCL) training phase.

(a) BCE. (b) CCL. (c) Objective.
Figure 17: Mean and standard deviation of (a) the BCE evaluation loss, (b) the CCL evaluation loss, and (c) the evaluation optimal objective ratio over five training runs of GCN_W_BN on RandomMP.