Hybrid Classification and Reasoning for Image-based Constraint Solving

by   Maxime Mulamba, et al.
Vrije Universiteit Brussel

There is an increased interest in solving complex constrained problems where part of the input is not given as facts but received as raw sensor data such as images or speech. We will use "visual sudoku" as a prototype problem, where the given cell digits are handwritten and provided as an image thereof. In this case, one first has to train and use a classifier to label the images, so that the labels can be used for solving the problem. In this paper, we explore the hybridization of classifying the images with the reasoning of a constraint solver. We show that pure constraint reasoning on predictions does not give satisfactory results. Instead, we explore the possibilities of a tighter integration, by exposing the probabilistic estimates of the classifier to the constraint solver. This allows joint inference on these probabilistic estimates, where we use the solver to find the maximum likelihood solution. We explore the trade-off between the power of the classifier and the power of the constraint reasoning, as well as further integration through the additional use of structural knowledge. Furthermore, we investigate the effect of calibration of the probabilistic estimates on the reasoning. Our results show that such hybrid approaches vastly outperform a separate approach, which encourages a further integration of prediction (probabilities) and constraint solving.




1 Introduction

Artificial intelligence (AI) is defined as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals” [27]. In that regard, recent advancements in deep neural network (DNN) architectures have achieved highly accurate performance in object and speech recognition and classification. However, many real-life problems are relational, where inference on one instance is related to another through various constraints and logical reasoning. Attaining good performance in tasks that require reasoning over constraints and relations remains elusive. DNN architectures rely heavily on learning latent representations from the training data [29].

The main reason deep architectures struggle with constraint reasoning is that the nuances of the relationships between entities are often lost in the latent representation. For instance, when solving a sudoku, a DNN model would take the partially filled sudoku as input and would then be expected to produce the solved sudoku as output. In this process, the model fails to comprehend the interactions among the different cells.

Moreover, the high-quality performance of DNNs on complex tasks comes at a cost. As DNN models fail to comprehend logical reasoning, they have to adjust to the gradual feedback of error signals. As a consequence, to become proficient at even a simple task, a DNN needs an enormous amount of data. For example, to become an efficient video-gamer, a DNN model has to play a game for more than 900 hours [9]. Motivated by such deficiencies, integrating logical and relational reasoning into DNN architectures has gained increasing attention.

In trying to bridge deep learning and logical reasoning, Wang et al. [30] propose SATNet, a differentiable satisfiability solver that can be used to learn both constraints and image classification through backpropagation. Internally, it uses a quadratic SDP relaxation of a MaxSAT model, and hence learns a relaxed representation of the constraints. We argue that in many cases, there is no need to learn everything end-to-end. Indeed, in a visual sudoku setting, while the constraints are easy to specify in a formal language, the image classification task is difficult for a machine to capture. Hence, we seek to bridge deep learning and logical reasoning by directly plugging the (probabilistic) output of the deep learning model into a constraint solver that reasons over the relevant hard constraints.

In this work, we present a framework in which we perform joint inference [24, 23, 25] over the different predictions, by integrating machine learning inference with first- and second-order logic. Specifically, instead of solving a constraint programming (CP) problem over a set of independently predicted values, we use CP to do joint inference over a set of probability vectors. The training of the DNN happens on individual image instances, as is typically done. Effectively, our framework can be considered a forward-only layer on top of the predictions of a pre-trained network.

Specifically, we consider the “visual sudoku” problem, where images of the digits of some cells in the sudoku grid are fed as input. We first predict the digits using a DNN model and then use a CP solver to solve the sudoku puzzle. A conventional approach would use the predictions of the DNN as inputs to the CP solver. As the DNN model is not aware of the constraints of the sudoku problem, it misses the opportunity to improve its predictions by taking the constraints into account. When the predictions of the DNN are directly fed into the CP solver, any error leaves the CP model bound to fail. Note that in this case, even a single prediction error will result in the failure of the whole problem.

We improve the process by considering the predicted class probabilities instead of directly using the arg max prediction. The advantage of our approach is that by avoiding hard assignments prior to the CP solver, we enable the CP solver to correct the errors of the DNN model. In this way, we use CP to do joint inference, which ensures that the predictions will respect the constraints of the problem.

The contributions of the paper are as follows:

  • We explore hybridisation of classification and constraint reasoning on the visual sudoku problem;

  • We show that constraint reasoning over the probabilistic predictions outperforms a pure reasoning approach, and that we can further improve by taking higher-order relations into account;

  • We investigate the increased computational cost of reasoning over the probabilities, and the trade-offs possible when limiting the reasoning to the top-k probabilities;

  • We experimentally explore the interaction of predictive power with the power of discrete reasoning, showing correction factors of 10% and more, as well as the effect of using calibrated probabilistic classifiers.

2 Related work

2.0.1 Predict-and-optimize

Our work is closely related to the growing body of research at the intersection of machine learning (ML) and combinatorial optimization [7, 8, 17], where the predictions of an ML model are fed into a downstream optimization oracle. In most applications, feeding machine learning predictions directly into a combinatorial optimization problem may not be the most suitable approach. Bengio [2] compared two ML approaches for optimizing stock returns: one uses a neural network model for predicting financial prices, and the second makes use of a task-based loss function. Experimental results show that the second model delivers better optimized returns. The results also suggest a closer integration of ML and optimization.

In this regard, Wilder et al. [32] propose a framework which trains the weights of the ML model directly from the task-loss of the downstream combinatorial problem, using its continuous relaxation. The end-to-end model of [30] learns the constraints of a satisfiability problem by considering a differentiable SDP relaxation of the problem. A similar work [14] trains an ML model by considering a convex surrogate of the task-loss.

Our work differs from these as we do not focus on end-to-end learning. Rather, we enhance the predictions of an ML model by using CP to do joint inference over the raw probability vectors. In this way, we are taking the constraint interaction of the combinatorial problem into account.

2.0.2 Joint inference

Our work is also aligned with the research on joint inference. For example, Poon and Domingos [23] have shown its advantage for information extraction in the context of citation matching. Recent work in linguistic semantic analysis by Wang et al. [31] forms a factor graph from the DNN output by encoding it into logical predicates and performs joint inference over the factor graph. Several other works [11, 12, 3] focus on leveraging joint inference in DNN architectures for relation extraction from natural language. Our work differs from these, as we perform probabilistic inference on a combinatorial constraint solving problem where one inference is linked to another by hard constraints.

2.0.3 Training with Constraints

Various works have introduced methods to enforce constraints on the outputs of an NN. One of the earlier works [21] does this by optimizing the Lagrangian coefficients of the constraints at every parameter update of the network. However, this is not feasible in the context of deep neural networks, as very large matrices must be numerically solved for each parameter update [16]. Pathak et al. [19] introduce CCNN for image segmentation with size constraints, where they introduce latent probability distributions over the labels and impose constraints on the latent distribution, enabling efficient Lagrangian dual optimization. One drawback, however, is that this involves solving an optimization problem at each iteration. Márquez-Neila et al. [16] use a Lagrangian-based Krylov subspace approach to enforce linear equality constraints on the output of an NN, but this approach has not been found to scale to large problem instances. The proposed framework of [13] quantifies inconsistencies of the NN output with respect to the logic constraints and is able to significantly reduce inconsistent, constraint-violating outcomes by training the model to minimize an inconsistency loss.

The closest work to ours is [24], where Punyakanok et al. train a multiclass classifier to identify the label of an argument in the context of semantic role labeling and then feed the prediction scores of each argument to an Integer Linear Programming solver so that the final inferences abide by predefined linguistic constraints.

3 Preliminaries

3.0.1 CSP and COP

The concept of a constraint satisfaction problem (CSP) is fundamental in constraint programming [26]. A CSP is formulated as a triplet (X, D, C), where X is a set of decision variables, D is the set of domains containing the possible values of each variable, and C is a set of constraints that need to be satisfied over the variables in X. In most cases, we are not only interested in knowing whether a constrained problem is solvable; we want the best possible solution according to an objective.

A Constraint Optimization Problem (COP) finds a feasible solution of optimum value with respect to an objective function f over the variables. In the case of a minimisation problem, an assignment x* is optimal iff x* satisfies C and f(x*) ≤ f(x) for every x satisfying C.

3.0.2 Sudoku

In our work we consider a prototype CSP, namely the sudoku. Sudoku is a number puzzle, played on a partially filled 9x9 grid. The goal is to find the unique solution by filling in the empty grid cells with numbers from 1 to 9 in such a way that each row, each column and each of the nine 3x3 subgrids contains all the numbers from 1 to 9 once and only once.
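These rules can be checked mechanically. Below is a minimal sketch, assuming a complete 9x9 grid given as a list of 9 lists of 9 digits (the helper name is ours, not the paper's):

```python
def is_valid_sudoku(grid):
    """True iff every row, column and 3x3 subgrid of the complete
    9x9 grid contains each of the numbers 1..9 exactly once."""
    rows = grid
    cols = [[grid[i][j] for i in range(9)] for j in range(9)]
    subgrids = [[grid[3 * bi + i][3 * bj + j]
                 for i in range(3) for j in range(3)]
                for bi in range(3) for bj in range(3)]
    # each group of 9 cells must be exactly the set {1, ..., 9}
    return all(set(group) == set(range(1, 10))
               for group in rows + cols + subgrids)
```

Because every group has exactly nine cells, the set comparison simultaneously enforces "all numbers present" and "no number repeated".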

Formally, the sudoku is a CSP (X, D, C) where X = {x[i,j]} contains a variable for every cell in the grid, and D(x[i,j]) = {1, …, 9} for each variable. We separate the sudoku constraints into two parts: the set C_given of constraints defining the assignment of numbers in the filled cells (hereinafter referred to as the givens) of the grid, and the set C_rules of constraints defined by the rules of sudoku.

Formally, C_rules consists of the following constraints:

    alldifferent(x[i,1], …, x[i,9])   for every row i          (1)
    alldifferent(x[1,j], …, x[9,j])   for every column j
    alldifferent over the nine cells of each 3x3 subgrid

For the given cells, C_given is simply an assignment: x[i,j] = d[i,j], where the values d[i,j] are known. Because X and D are obvious from the constraints, we will write Sol(C_rules ∧ C_given), or alternatively Sol(C_rules, C_given), to represent a solution of a sudoku specification.

Sudoku has one additional property, namely that for a set of givens, the solution is unique: |Sol(C_rules ∧ C_given)| = 1.

3.0.3 ML Classifier

We will consider the visual sudoku problem, where the given cells are not provided as facts, but each given cell will be an image of a handwritten digit. We will hence first use Machine Learning (ML) to classify what digit each of the images represents.

Given a dataset {(x_i, y_i)} of size n, with each x_i a feature vector of real numbers and y_i the corresponding class label, the goal of an ML classifier is to learn a function approximator f(x; θ) (with θ the trainable parameters of the learning function), such that f(x_i; θ) ≈ y_i for all pairs. In the case of a probabilistic classifier, the predicted class label is argmax_c p(c | x), with p(c | x) the predicted probability that x belongs to class c [4].

Formally, the goal of training is to compute θ* = argmin_θ Σ_i L(f(x_i; θ), y_i), where L is a loss function measuring how well the function approximates the target. An example of a loss function for probabilistic classifiers with m possible classes is the cross-entropy loss, defined as:

    L(x, y) = − Σ_{c=1}^{m} 1[y = c] log p(c | x)

where 1[y = c] is the indicator function having the value 1 only when y has value c, i.e., x belongs to class c.
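Since the indicator selects only the true class, for a single sample the cross-entropy reduces to the negative log-probability of that class. A minimal sketch (function name is ours):

```python
import math

def cross_entropy(probs, true_class):
    """Cross-entropy loss for one sample: -sum_c 1[y = c] * log p(c | x),
    which reduces to the negative log-probability of the true class.
    probs: predicted class probabilities; true_class: index of y."""
    return -math.log(probs[true_class])
```

A confident correct prediction gives a loss near 0; the loss grows without bound as the probability assigned to the true class shrinks.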

4 Visual sudoku and solution methods

We first introduce the visual sudoku problem as an example of an image-based constraint solving problem, and then propose three different approaches to solving it by combining classification and reasoning.

4.0.1 Visual sudoku

In visual sudoku, the given cells of the sudoku are provided as unlabeled images of handwritten digits. We are also given a large dataset of labeled handwritten digits (the MNIST dataset [10]). The problem is inspired by an experiment in [30], although we consider the case where the constraints are known and can be used for reasoning.

Formally, a visual sudoku instance consists of the rules of sudoku (Eq 1) and a set of given images, each one a pixel representation of a handwritten digit. The goal is to use a classifier on the images such that the predicted labels lead to the true solution of the sudoku, that is, to the same solution that would be obtained from the true labels of the given images, if known.

4.1 Separate classification and reasoning

Figure 1: Architecture of separate classification and reasoning approach

The most straightforward approach to solving the visual sudoku problem is to consider the classification and reasoning problems separately. In this approach, the most likely digit for each of the given cells is first predicted, after which the puzzle is solved using the resulting grid. This will be our baseline approach.

The baseline approach, shown in Fig. 1, is composed of a separate convolutional neural network and a CP solver. The process begins with training the DNN on the MNIST training set to obtain a handwritten digit classifier. Then, for each visual sudoku instance, we use the classifier to predict the value of each given cell’s image. This takes us from a visual to a purely digital representation of the problem, which is then fed into the CP sudoku solver. Note that training is independent of the concept of sudoku, and is done on individual images, as is standard in image recognition tasks.

Once the model is trained, we use it to solve the visual sudoku. For that, we first predict the digit for each of the given images. For each given, the trained DNN computes a class probability p(d | image) for each digit d and predicts the value with the highest probability:

    ŷ = argmax_d p(d | image)
Once all the given images are predicted, the CP component finds a solution as visualised in Fig. 1.

From an inference standpoint, the above approach commits to the independent predictions made by the classifier and tries to use them as best as possible.
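The baseline's decoding step can be sketched as follows, taking one probability vector per given cell (the helper name is illustrative, not the paper's code):

```python
def predict_digit(prob_vector):
    """Baseline decoding: commit to the most probable digit and discard
    the rest of the distribution. prob_vector[c] is the classifier's
    probability that the image shows digit c + 1."""
    best_index = max(range(len(prob_vector)), key=lambda c: prob_vector[c])
    return best_index + 1  # map index 0..8 to digit 1..9
```

All downstream reasoning then sees only these hard labels, which is exactly what the hybrid approaches below avoid.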

4.2 Hybrid1: reasoning over class probabilities

In this approach, we use the same DNN architecture for digit classification as before. However, instead of using the hard labels from the DNN model, we make use of the class probabilities for each of the given cells. Hence the outputs of the DNN, i.e., the inputs to the CP solver for each of the given cells, are probabilities, one for each digit that can appear in a sudoku cell. The idea is to completely solve a sudoku grid by solving a COP. See Fig 2 for a visual representation of the architecture.

Figure 2: Architecture of class-probability reasoning approach

Note that here, we make a joint inference over all the predictions, including their effect and relation to the a-priori empty cells. In the resulting solution, the digits of both given and non-given cells are obtained at once, while satisfying all the sudoku constraints.

First, the DNN is trained on images of single handwritten digits as before. After training, we store the DNN-computed probabilities for each of the given images. We wish to make the CP solver reason (do inference) over these probabilities directly, hence the sudoku problem formulation of Eq 1 needs to be modified to accommodate the probabilities. Instead of only satisfying the regular sudoku constraints of Eq 1, we seek a solution which maximizes the likelihood of the solution, given the probabilities obtained from the classifier.

More specifically, as each image is predicted on its own, we assume each to be an observation of an independent random variable, and hence the most likely solution is the one that maximizes the joint probability over the given images. We would like to find the most likely solution that also satisfies all constraints. After a log-transform, we can write the joint probability as a weighted-sum objective function as follows:

    maximize  Σ_{givens (i,j)}  Σ_{d=1}^{9}  1[x[i,j] = d] · log p(d | image[i,j])

subject to the sudoku constraints of Eq 1.
Treating the class probabilities of a given cell as a 9-dimensional vector, one can see that the inner sum can be formulated with a traditional element constraint in a CP solver. We must emphasize that the log-likelihood is maximized only over the given cells and not over the whole grid, because the classifier provides probability vectors only for the cells with given images.
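For illustration, the weighted-sum objective can be evaluated for a fixed candidate assignment in plain Python; in an actual CP model the per-cell lookup would be an element constraint over a decision variable (all names here are hypothetical):

```python
import math

def log_likelihood(assignment, prob_vectors):
    """Score a candidate assignment of the given cells.
    assignment[g] is the digit assigned to given cell g;
    prob_vectors[g][d - 1] is the classifier's probability that the
    image in cell g shows digit d. The CP solver maximizes this sum
    subject to the sudoku constraints."""
    return sum(math.log(prob_vectors[g][assignment[g] - 1])
               for g in assignment)
```

The solver may thus deliberately pick a lower-probability digit for one cell if that makes a much higher-probability, constraint-consistent assignment possible elsewhere.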

Note that in this approach, the CP solver has to solve a more complex problem with larger domains for the given cells, and hence a larger search space. Contrary to the approach in section 4.1, where the problem was a CSP, here the problem is a COP. The advantage of this approach is that it makes use of the constraint relationships of the sudoku problem. Moreover, it improves the predictions of the ML classifier by reasoning over these constraint relationships.

4.3 Hybrid2: Higher-order knowledge exploitation

As mentioned before, a sudoku must have a unique solution for a set of givens. For traditional sudoku puzzles this is the case by construction, as otherwise, a human solver would be faced with having to choose among two or more options, rather than reasoning up to a full solution.

In the approach of section 4.2, we simply find one solution and treat it as the solution, without verifying whether it is unique with respect to the set of givens. When projecting the solution of the entire sudoku back to only the assignment of the ‘given’ cells, i.e. those for which an image is given, this assignment to the givens should admit one and only one solution. If not, this assignment to the givens, and hence the entire sudoku solution, cannot be the intended solution.

Therefore, we can use the (non-)existence of a unique solution as an additional relational property to steer the joint inference. The pseudo-code of this approach is shown in Algorithm 1. We start by finding the most likely solution, as in the hybrid1 approach described in the previous section, and then consider its projection, that is, only the part of the assignment covering the cells with a given image.

Instead of counting all solutions given this projected assignment, it is sufficient (and computationally cheaper) to only check whether any other solution exists. Hence, we search for any sudoku solution with the same givens that is different from the solution we already know exists.

If no such other solution exists, then the solution is unique and there is nothing more we can infer. If there is another solution, we reject the projected assignment for not being unique. That is, we add a nogood ensuring that no completion of this assignment of the givens will be found anymore, and repeat the procedure.

This use of a nogood, or a blocking clause, is common in solving such second-order logic problems. It can be seen as an instantiation of solution dominance [5].

1: s ← most likely solution                        // as in hybrid1
2: forbid the full solution s                      // temporarily forbid this solution
3: s′ ← any other solution with the same givens    // check for other solutions having these givens
4: while s′ exists do
5:     add nogood on the givens of s               // add nogood on givens
6:     s ← next most likely solution               // as in hybrid1
7:     forbid the full solution s                  // temporarily forbid this solution
8:     s′ ← any other solution with the same givens
9: end while
Algorithm 1: Higher-order COP using a trained DNN
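The control flow of this uniqueness-checking loop can be sketched with the two solver calls abstracted behind caller-supplied functions (all names here are illustrative, not the paper's implementation):

```python
def most_likely_unique(solve_most_likely, other_solution_exists, max_iters=100):
    """Sketch of the hybrid2 loop.
    solve_most_likely(nogoods): the hybrid1 COP -- returns the most likely
        solution whose givens avoid every forbidden assignment in nogoods,
        or None if none remains.
    other_solution_exists(givens): True iff a second, different solution
        completes the same assignment of the given cells."""
    nogoods = []
    for _ in range(max_iters):
        solution = solve_most_likely(nogoods)
        if solution is None:
            return None
        givens = solution["givens"]           # projection onto the given cells
        if not other_solution_exists(givens):
            return solution                   # unique w.r.t. its givens: accept
        nogoods.append(givens)                # reject: forbid these givens, retry
    return None
```

Each rejected candidate adds one nogood, so the loop terminates once a candidate with a unique completion is found (or the candidates are exhausted).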

5 Class probability calibration

In a machine learning context, calibration is the process of modifying the predicted probabilities so that they match the expected distribution of probabilities for each class [6]. We will investigate the effect of calibration on our joint inference approach. Our method reasons over all 9 probability estimates and actively trades off the probability of a prediction for one image against the prediction for another image in its objective function. Hence, it is not just a matter of getting the top-predicted value right, but of getting all predicted probabilities right. Our reasoning approach hence assumes real (calibrated) probabilities and could be hampered by over- or under-confident class probability estimates.

In a multi-class setting, for a given handwritten digit x, a neural probabilistic classifier computes a vector z containing raw scores for each class (i.e. a digit value), with z_c the score assigned to class c. The SoftMax function is then applied to convert these raw scores into probabilities:

    p(c | x) = exp(z_c) / Σ_j exp(z_j)

where z is the output of the neural network.

While this output is normalized across classes to sum up to 1, the values are not real probabilities. More specifically, it has been shown that especially neural networks tend to overestimate the probability that an item belongs to its maximum likelihood class [6].

Post-processing methods such as Platt scaling [22] aim at calibrating the probabilistic output of a pre-trained classifier. Guo et al. [6] describe three variants of Platt scaling in the multi-class setting. In matrix scaling, a weight matrix W and a bias vector b apply a linear transform to the input vector z of the softmax layer, such that the calibrated probabilities become:

    q = softmax(W z + b)

where W and b are parameters learned by minimizing the Negative Log Likelihood loss on a validation set. Vector scaling applies the same linear transform, except that W is a diagonal matrix, that is, only the diagonal is non-zero. Finally, temperature scaling considers a single scalar value T to calibrate the probability, such that:

    q = softmax(z / T)
To calibrate the predictions, we learn the calibration parameters on a validation set. More specifically, we do calibration on top of a pre-trained neural network, so the network parameters θ are fixed and the calibration only learns the best scaling parameters.
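Temperature scaling in particular can be folded directly into the SoftMax. A minimal sketch (T = 1 recovers the uncalibrated output; in practice T is learned on the validation set):

```python
import math

def softmax(scores, temperature=1.0):
    """SoftMax over raw scores z_c with temperature scaling: T > 1
    softens an over-confident distribution, T < 1 sharpens it."""
    scaled = [z / temperature for z in scores]
    m = max(scaled)                       # subtract the max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Note that dividing all scores by the same T never changes the argmax, so temperature scaling changes the probabilities the solver reasons over without changing the baseline's hard labels.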

We will evaluate whether better calibrated probabilities lead to better joint inference reasoning in the experiments.

6 Experiments

Numerical experiments were done on a subset of the Visual Sudoku dataset used in [30]. The subset contains 3000 sudoku boards whose givens are represented by MNIST digits. Unless stated otherwise, the MNIST training data was split into a training and a validation set.

The DNN architecture for the digit classification task is the LeNet architecture [10], which uses two convolutional layers followed by two fully connected layers. The network is trained for 10 epochs to minimize the cross-entropy loss, and is optimized via Adam. Once trained on the MNIST training data, we use the same model for both the separate and the hybrid approaches. The neural network and CP model were implemented using PyTorch 1.3.0 [18] and OR-tools 7.4.7247 [20], respectively. All experiments were run on a laptop with 8 Intel® Core™ i7-8565U CPUs @ 1.80GHz and 16 GB of RAM.

To test the performance of our proposed frameworks, we define the following evaluation measures:


  • img accuracy = percentage of givens correctly labeled by the classifier;

  • cell accuracy = percentage of cells matching the true solution;

  • grid accuracy = percentage of correctly solved sudokus; a sudoku is correctly solved if the solution found equals its true solution;

  • failure rate grid = percentage of sudokus without a solution, i.e., for which the predicted givens admit no feasible completion.
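The img and cell measures are both simple percentage-agreement scores; a minimal sketch (the function name is ours):

```python
def accuracy_pct(predicted, truth):
    """Percentage of positions where the prediction matches the ground
    truth: img accuracy over the given cells, cell accuracy over all
    81 cells of the grid."""
    matches = sum(p == t for p, t in zip(predicted, truth))
    return 100.0 * matches / len(truth)
```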

In the subsequent experiments, we denote as baseline the separate classification and reasoning approach, whereas we refer to our proposed approaches as hybrid1 and hybrid2.

6.1 Separate vs Hybrid Approaches

First we compare the results of the three approaches described in section 4. As displayed in Table 1, the ability of the baseline approach to handle the image classification task with an accuracy of 94.75% translates to a meagre success rate of only 14.67% at the level of sudoku grids correctly solved. This is because the constraint relationships are not communicated to the DNN model. As a consequence, there is no way to ensure that the predictions respect the constraints. Even a single mistake among the predictions for the given images may result in an unsolvable puzzle. For example, if one prediction error makes the same number appear twice in a row, then the whole puzzle becomes unsolvable, even if the rest of the predictions are accurate.

accuracy failure rate time
img cell grid grid average (s)
baseline 94.75% 15.51% 14.67% 84.43% 0.01
hybrid1 99.69% 99.38% 92.33% 0% 0.79
hybrid2 99.72% 99.44% 92.93% 0% 0.83
Table 1: Comparison of hybrid solving approaches

On the other hand, the hybrid approaches do not consider the model predictions as final; by using the constraint relationships, hybrid2, for instance, brings the classifier to correctly label 5361 additional images. As a result, we observe an increase in the overall accuracy of the predictions. The advantage of our frameworks is more prominent from the grid perspective, where we can see that more than 92% of the sudokus are now correctly solved. This is a huge improvement over the baseline approach, which solves only 14.67% of the grids.

In terms of final performance, hybrid2 is more accurate as it exploits one more sudoku property, namely that a sudoku must have a unique solution. By this mechanism we are able to rectify further predictions, and 18 additional puzzles are solved accurately.

However, from a computational standpoint, our hybrid approaches solve a COP instead of the CSP of the pure reasoning case. Hence they are almost 100 times more time-consuming (only the average per sudoku is shown). The average computation time is slightly higher for hybrid2, as we need to prove that the predicted givens have a unique solution, or optimize again with a forbidden assignment if that is not the case; this situation occurs 18 times in our experiments.

6.2 Reasoning Over Top-k Probable Digits

rank-0 rank-1 rank-2 rank-3 rank-4 rank-5 rank-6 rank-7 rank-8
hybrid1 94.85% 3.68% 0.93% 0.32% 0.12% 0.07% 0.02% 0.01% 0.01%
hybrid2 94.84% 3.68% 0.92% 0.33% 0.12% 0.06% 0.02% 0.01% 0.01%
Table 2: Rank distribution for cell values in correctly solved sudokus

We are curious to know how the hybrid approaches outperform the separate approach. We therefore investigate, when a digit is chosen by the hybrid approaches, how it is ranked, on average, by the ML classifier when ranking by probability.

Table 2 reveals that, among the instances where we find the correct solution, the top-ranked value is chosen in most cases, with a quick decline in how often the other values are chosen. Remarkably, in 42 cases hybrid2 actually uses a digit which is ranked 8th or lower by the classifier.

From a combinatorial optimisation perspective, one can also trade off the size of the search space against the accuracy of the resulting solutions, by only taking the most probable digits into account and removing the others from the domains. In this regard, the experiment in the previous section considered two extremes: the baseline uses only the maximum-probability digit, and the hybrid approaches use all digits.
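Pruning a given cell's domain to its k most probable digits can be sketched as follows (the helper name is ours):

```python
def top_k_domain(prob_vector, k):
    """Restrict a given cell's CP domain to its k most probable digits
    (digit d corresponds to index d - 1 in the probability vector)."""
    ranked = sorted(range(len(prob_vector)),
                    key=lambda c: prob_vector[c], reverse=True)
    return sorted(i + 1 for i in ranked[:k])
```

With k = 1 this degenerates to the baseline's hard label; with k = 9 the full hybrid domain is kept.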

Therefore, we investigate the effect of considering only the top-k probability-ranked digits on computational time and accuracy. Table 3 shows the effect of reasoning over only the top-k predicted values of the classifier:

accuracy failure rate time
top-k img cell grid grid average (s)
top-1 94.75% 15.36% 14.67% 84.60% 0.03
top-2 96.15% 63.63% 55.43% 34.20% 0.03
top-3 96.63% 94.73% 77.17% 0.20% 0.06
top-4 98.78% 98.04% 86.33% 0% 0.12
top-5 99.35% 98.86% 89.67% 0% 0.26
top-6 99.57% 99.21% 91.60% 0% 0.38
top-7 99.67% 99.36% 92.33% 0% 0.55
top-8 99.69% 99.40% 92.63% 0% 0.66
top-9 99.71% 99.43% 92.90% 0% 0.80
Table 3: Rank experiment using hybrid2 for joint inference

When considering top-1 to top-4 values, we see that the image accuracy steadily goes up, as does the grid correctness, and the grid failure rate reaches 0 for top-4. As we consider 4 or more digits, both grid and image accuracies slowly increase, with the best results obtained using all possible values, which makes the difference for 8 sudoku instances when using hybrid2.

This shows that there is indeed a trade-off between the computational time of the joint inference and the accuracy of the result, with runtime performance gains possible at a low accuracy cost if needed.

6.3 Classifier strength versus reasoning strength

(a) img-correct
(b) cell-correct
(c) grid-correct
Figure 3: Strength of hybrid with less accurate predictions
accuracy failure rate
img cell grid grid
baseline 99.384% 80.380% 80.100% 19.6%
hybrid1 99.984% 99.966% 99.500% 0%
hybrid2 99.986% 99.972% 99.600% 0%
Table 4: Comparison of separate and hybrid approach with a stronger classifier

So far, we have used a fairly accurate model. We have also seen that joint inference by constraint solving could indeed correct many of the wrong predictions. In this experiment, we investigate the limits of this ‘correcting’ power of the reasoning. That is, for increasingly worse predictive models, we compare the accuracy of the baseline with our hybrid approaches.

Results in Figure 3 show that even after only 2 epochs, with an accuracy of approximately 88%, the reasoning is able to correct this to 98%, i.e., a correction factor of 10%. Hence, with weaker predictive models, the reasoning has even more potential for correction.

Results in Table 4 show that this trend remains true even with a stronger classifier, obtained by considering a different learning rate. In the stronger-classifier case, hybrid2 correctly classifies more images than the baseline.

Also noteworthy is that the average runtime goes up by a significant factor, e.g., it is 10 times slower as the predictions become less accurate. Further investigation shows that the predicted values are less skewed at lower accuracy levels, i.e., the softmax probabilities are more similar, and hence the branch-and-bound search takes more time finding the optimum and proving optimality.

6.4 Effect of calibration

Figure 4: Calibration curve, mean of probabilities over 15 equally-sized intervals
            uncalibrated   Temp. scaling   Vector scaling   Matrix scaling
NLL         12.07          11.61           11.38            10.12
test acc.   96.75%         96.75%          96.70%           96.93%
Table 5: NLL loss on validation set and test accuracy for Platt scaling variants
Figure 5: Performance measures for joint inference from calibrated classifier and comparison with uncalibrated counterpart

As the joint inference reasons over the probabilities, we investigate the effect of calibration on the reasoning. The first step towards that goal is to compare the different calibration methods presented in Section 5, namely Matrix scaling, Vector scaling, and Temperature scaling. As described earlier, for each of these methods, calibration parameters are learned by minimizing the Negative Log Likelihood (NLL) loss on the validation set (while the remaining parameters of the network are kept fixed). Table 5 shows the validation NLL and the test accuracy before and after calibrating the network. This table suggests that Matrix scaling produces the most calibrated classifier. Figure 4 shows how the classifier, although already quite well calibrated, is brought closer to a perfectly calibrated model.
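
For the simplest of these variants, Temperature scaling, the calibration step can be sketched as follows: a single scalar T divides the logits, and T is chosen to minimise the validation NLL. The grid search below is a stand-in for the gradient-based fitting used in practice; the function names and the toy validation set are our own illustration, not the paper's code.

```python
import numpy as np

def nll(logits, labels, T):
    """Average negative log-likelihood of the temperature-scaled softmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the single temperature T minimising validation NLL (grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# toy overconfident validation set: one of three samples is misclassified,
# so softening the logits (T > 1) lowers the NLL
val_logits = np.array([[5., 0., 0.], [0., 5., 0.], [5., 0., 0.]])
val_labels = np.array([0, 1, 2])  # last label disagrees with its logits
T = fit_temperature(val_logits, val_labels)
```

Because T only rescales the logits, the argmax (and thus test accuracy) is unchanged, which matches the Temperature scaling column of Table 5; Vector and Matrix scaling learn more parameters and can also shift predictions.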

Figure 5 displays the effect of using a more calibrated model, by running the top-k experiment with the hybrid2 framework using both calibrated and uncalibrated classifiers. It shows that calibration improves the accuracy of our framework. This holds not only for less accurate but also for more accurate neural networks: when reasoning over all 9 probabilities, the calibrated classifier within the hybrid2 framework improves the img, cell, and grid rates over its uncalibrated counterpart.

7 Conclusions

In this paper we study a prototype application of hybrid prediction and constraint optimisation, namely the visual sudoku. Although deep neural networks have achieved unprecedented success in classification and reinforcement learning, they still fail at directly predicting the result of a combinatorial optimisation problem, due to the hard constraints and combinatorial optimisation aspect.

We propose a framework for solving challenging combinatorial problems like this, by adding a constraint programming layer on top of a neural network, which does joint inference over a set of predictions. We argue that reasoning over the actual predictions is limited as it ignores the probabilistic nature of the classification task, as confirmed by the experimental results. Instead, we can optimize the most likely joint solution over the classification probabilities which respects the hard constraints. Higher-order relations, such as that a solution must be unique, can also be taken into account to further improve the results.
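
The joint inference idea can be sketched on a toy 2×2 Latin square: among all grids that satisfy the all-different constraints, pick the one maximising the sum of log-probabilities. The brute-force enumeration below is a stand-in for the branch-and-bound search of an actual CP solver (the paper uses OR-Tools); the helper name and toy probabilities are illustrative only.

```python
import itertools
import math

def joint_inference(probs, n=2):
    """Brute-force MAP inference for an n x n Latin square (illustrative).

    probs[i][j][v] is the classifier's probability that cell (i, j) holds
    value v (values 0..n-1). Returns the grid maximising the sum of
    log-probabilities among grids whose rows and columns are permutations.
    """
    best, best_ll = None, -math.inf
    cells = [(i, j) for i in range(n) for j in range(n)]
    for assign in itertools.product(range(n), repeat=n * n):
        grid = [list(assign[i * n:(i + 1) * n]) for i in range(n)]
        rows_ok = all(len(set(r)) == n for r in grid)
        cols_ok = all(len({grid[i][j] for i in range(n)}) == n for j in range(n))
        if not (rows_ok and cols_ok):
            continue  # violates the all-different constraints
        ll = sum(math.log(probs[i][j][grid[i][j]]) for i, j in cells)
        if ll > best_ll:
            best, best_ll = grid, ll
    return best

# per-cell argmax gives [[0, 0], [1, 1]], which violates the constraints;
# joint inference trades the weak (0, 1) prediction for a feasible grid
probs = [[[0.9, 0.1], [0.6, 0.4]],
         [[0.1, 0.9], [0.2, 0.8]]]
grid = joint_inference(probs)
```

This is exactly the contrast with the baseline: taking each prediction independently can yield an infeasible grid, whereas maximising the joint likelihood under the constraints always returns a feasible one.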

Our proposed approach always finds a solution that satisfies the constraints, and corrects the underlying neural network output up to 10% in accuracy, for example transforming the output of a 94.8% accurate classifier into a 99.7% accurate joint inference classifier.

More broadly, we believe that this work offers a promising path for incorporating domain-specific expertise into ML models. Practitioners often feel that they can make an ML model better by infusing their expertise into it. However, incorporating such structured knowledge is often not feasible in a DNN setting. Our work proposes one way to impart human knowledge, namely on top of the neural network architecture and independent of the learning.

An interesting direction for future work is to look at differentiable classification+optimisation techniques, such as OptNet [1], and investigate whether it is possible to train better models end-to-end for this kind of hard constrained problem. In this respect, there is also a link with probabilistic programming techniques, which often use knowledge compilation to embed (typically simpler) constraints in a satisfaction setting [15]. Finally, we are keen to apply this technique to applications involving classification tasks, such as manhole maintenance [28] and more.


This research received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme.


  • [1] B. Amos and J. Z. Kolter (2017) Optnet: differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 136–145. Cited by: §7.
  • [2] Y. Bengio (1997) Using a financial training criterion rather than a prediction criterion. International Journal of Neural Systems 8 (04), pp. 433–443. Cited by: §2.0.1.
  • [3] L. Chen, Y. Feng, S. Huang, B. Luo, and D. Zhao (2018) Encoding implicit relation requirements for relation extraction: a joint inference approach. Artificial Intelligence 265, pp. 45–66. Cited by: §2.0.2.
  • [4] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Cited by: §3.0.3.
  • [5] T. Guns, P. J. Stuckey, and G. Tack (2018) Solution dominance over constraint satisfaction problems. CoRR abs/1812.09207. External Links: 1812.09207 Cited by: §4.3.
  • [6] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger (2017) On calibration of modern neural networks. External Links: 1706.04599 Cited by: §5, §5, §5.
  • [7] G. Ifrim, B. O’Sullivan, and H. Simonis (2012) Properties of energy-price forecasts for scheduling. In International Conference on Principles and Practice of Constraint Programming, pp. 957–972. Cited by: §2.0.1.
  • [8] W. Kool, H. van Hoof, and M. Welling (2019) Attention, learn to solve routing problems!. In ICLR 2019 : 7th International Conference on Learning Representations, Cited by: §2.0.1.
  • [9] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman (2017) Building machines that learn and think like people. Behavioral and brain sciences 40. Cited by: §1.
  • [10] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §4.0.1, §6.
  • [11] Q. Li, S. Anzaroot, W. Lin, X. Li, and H. Ji (2011) Joint inference for cross-document information extraction. In Proceedings of the 20th ACM international conference on Information and knowledge management, pp. 2225–2228. Cited by: §2.0.2.
  • [12] Q. Li, H. Ji, and L. Huang (2013) Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 73–82. Cited by: §2.0.2.
  • [13] T. Li, V. Gupta, M. Mehta, and V. Srikumar (2019) A logic-driven framework for consistency of neural models. arXiv preprint arXiv:1909.00126. Cited by: §2.0.3.
  • [14] J. Mandi, E. Demirović, P. Stuckey, T. Guns, et al. (2019) Smart predict-and-optimize for hard combinatorial optimization problems. arXiv preprint arXiv:1911.10092. Cited by: §2.0.1.
  • [15] R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, and L. De Raedt (2018) DeepProbLog: neural probabilistic logic programming. In Advances in Neural Information Processing Systems, pp. 3749–3759. Cited by: §7.
  • [16] P. Márquez-Neila, M. Salzmann, and P. Fua (2017) Imposing hard constraints on deep networks: promises and limitations. arXiv preprint arXiv:1706.02025. Cited by: §2.0.3.
  • [17] A. Mukhopadhyay, Y. Vorobeychik, A. Dubey, and G. Biswas (2017) Prioritized allocation of emergency responders based on a continuous-time incident prediction model. adaptive agents and multi agents systems, pp. 168–177. Cited by: §2.0.1.
  • [18] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop, Cited by: §6.
  • [19] D. Pathak, P. Krahenbuhl, and T. Darrell (2015) Constrained convolutional neural networks for weakly supervised segmentation. In Proceedings of the IEEE international conference on computer vision, pp. 1796–1804. Cited by: §2.0.3.
  • [20] L. Perron and team (2019) Google’s or-tools. Google. Cited by: §6.
  • [21] J. C. Platt and A. H. Barr (1988) Constrained differential optimization. In Neural Information Processing Systems, pp. 612–621. Cited by: §2.0.3.
  • [22] J. C. Platt (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pp. 61–74. Cited by: §5.
  • [23] H. Poon and P. Domingos (2007) Joint inference in information extraction. In AAAI, Vol. 7, pp. 913–918. Cited by: §1, §2.0.2.
  • [24] V. Punyakanok, D. Roth, W. Yih, and D. Zimak (2004) Semantic role labeling via integer linear programming inference. In Proceedings of the 20th International Conference on Computational Linguistics, COLING ’04, USA, pp. 1346–es. Cited by: §1, §2.0.3.
  • [25] S. Riedel (2012) Improving the accuracy and efficiency of map inference for markov logic. arXiv preprint arXiv:1206.3282. Cited by: §1.
  • [26] F. Rossi, P. Van Beek, and T. Walsh (2006) Handbook of constraint programming. Elsevier. Cited by: §3.0.1.
  • [27] The High-Level Expert Group on Artificial Intelligence (AI HLEG) (2017) A definition of ai. Cited by: §1.
  • [28] T. Tulabandhula and C. Rudin (2013) Machine learning with operational costs. The Journal of Machine Learning Research 14 (1), pp. 1989–2028. Cited by: §7.
  • [29] A. van den Oord, O. Vinyals, et al. (2017) Neural discrete representation learning. In Advances in Neural Information Processing Systems, pp. 6306–6315. Cited by: §1.
  • [30] P. Wang, P. Donti, B. Wilder, and Z. Kolter (2019) SATNet: bridging deep learning and logical reasoning using a differentiable satisfiability solver. In ICML 2019 : Thirty-sixth International Conference on Machine Learning, pp. 6545–6554. Cited by: §1, §2.0.1, §4.0.1, §6.
  • [31] Y. Wang, Q. Chen, M. Ahmed, Z. Li, W. Pan, and H. Liu (2019) Joint inference for aspect-level sentiment analysis by deep neural networks and linguistic hints. IEEE Transactions on Knowledge and Data Engineering. Cited by: §2.0.2.
  • [32] B. Wilder (2019) Melding the data-decisions pipeline: decision-focused learning for combinatorial optimization. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Cited by: §2.0.1.