1 Introduction
Computer vision and machine learning give rise to a number of powerful computational models. Typically, inference in these models reduces to nontrivial combinatorial optimization problems. For some of the models, such as conditional random fields (CRFs), powerful specialized solvers like [46, 47, 11, 50] were developed. In general, however, one has to resort to off-the-shelf integer linear program (ILP) solvers like CPLEX [2] or Gurobi [35]. Although these solvers have made tremendous progress in the past decade, the size of the problems they can tackle remains a limiting factor for many potential applications, as the running time scales superlinearly in the problem size. The goal of this work is to partially fill this gap between practical requirements and existing computational methods.
It is an old observation that many important ILPs can be efficiently decomposed into easily solvable combinatorial subproblems [31]. The convex relaxation consisting of these subproblems coupled by linear constraints is known as Lagrangean or dual decomposition [30, 48]. Although this technique can be efficiently used in various scenarios to find approximate solutions of combinatorial problems, it has a major drawback: in the most general setting, only slow (sub)gradient-based techniques [49, 55, 48, 40, 59] can be used to optimize the corresponding convex relaxation.
In the area of conditional random fields, however, it is well-known [39] that message passing or dual (block-coordinate) ascent algorithms (like e.g. TRWS [46]) significantly outperform (sub)gradient-based methods. Similar observations were made much earlier in [57] for a constrained shortest path problem.
Although dual ascent algorithms were proposed for a number of combinatorial problems (see the related work overview below), there is no general framework that would (i) give a generalized view on the properties of such algorithms and, more importantly, (ii) provide tools to easily construct such algorithms for new problems. Our work provides such a framework.
Related Work
Dual ascent algorithms optimize a dual problem and guarantee monotonic improvement (non-deterioration) of the dual objective. The most famous examples in computer vision are block-coordinate ascent (also known as message passing) algorithms like TRWS [46] or MPLP [27] for maximum a posteriori inference in conditional random fields [39].
To the best of our knowledge, the first dual ascent algorithm addressing integer linear programs belongs to Bilde and Krarup [10] (the corresponding technical report in Danish appeared earlier). In that work an uncapacitated facility location problem was addressed. A similar problem (simple plant location) was addressed with an algorithm of the same class in [29]. Fisher and Hochbaum [21] constructed a dual-ascent-based algorithm for a problem of database location in computer networks, which was used to optimize the topology of Arpanet [1], the predecessor of the Internet. The generalized linear assignment problem was addressed by the same type of algorithms in [22]. The authors considered a Lagrangean decomposition of this problem into multiple knapsack problems, which were solved in each iteration of the method. An improved version of this algorithm was proposed in [33]. Efficient dual ascent based solvers were also proposed for the min-cost flow problem in [24], for the set covering and set partitioning problems in [23], and for the resource-constrained minimum weighted arborescence problem in [34]. The work [32] describes basic principles for constructing dual ascent algorithms. Although the authors provide several examples, they do not go beyond that and maintain that these methods are structure-dependent and problem-specific.
The work [17] suggests applying max-product belief propagation [71] to decomposable optimization problems. However, their algorithm is neither monotone nor even convergent in general.
In computer vision, dual block-coordinate ascent algorithms for Lagrangean decompositions of combinatorial problems were proposed for multiple target tracking [7], the graph matching (quadratic assignment) problem [76] and inference in conditional random fields [46, 47, 27, 72, 73, 60, 36, 54, 70]. Among the latter, the TRWS algorithm [46] is among the most efficient ones for pairwise conditional random fields according to [39]. The SRMP algorithm [47] generalizes TRWS to conditional random fields of arbitrary order. In a certain sense, our framework can be seen as a generalization of SRMP to a broad class of combinatorial problems.
Contribution.
We propose a new dual ascent based computational framework for combinatorial optimization.
To this end we:
(i) Define the class of problems our framework can be used for, called integer-relaxed pairwise-separable linear programs (IRPSLP). Our definition captures Lagrangean decompositions of many known discrete optimization problems (Section 2).
(ii) Give a general monotonically convergent message-passing algorithm for solving IRPSLP, which in particular subsumes several known solvers for conditional random fields (Section 4).
(iii) Give a characterization of the fixed points of our algorithm, which subsumes such well-known fixed point characterizations as weak tree agreement [46] and arc consistency [72] (Section 5).
We demonstrate the efficiency of our method by outperforming state-of-the-art solvers on two famous special cases of IRPSLP which are widely used in computer vision: the multicut and graph matching problems (Section 6).
A C++ framework containing the above-mentioned solvers and the datasets used in the experiments are available at http://github.com/pawelswoboda/LP_MP.
We give all proofs in the supplementary material.
Notation.
Undirected graphs will be denoted by G = (V, E), where V is a finite node set and E is the edge set. The set of neighboring nodes of v ∈ V w.r.t. graph G is denoted by N_G(v). The convex hull of a set X is denoted by conv(X). Disjoint union is denoted by ∪̇.
2 Integer-Relaxed Pairwise-Separable Linear Programs (IRPSLP)
Combinatorial problems having the objective to minimize some cost over a set of binary vectors often have a decomposable representation as a sum over sets of binary vectors, typically corresponding to subsets of the coordinates of the original vector. This decomposed problem is equivalent to the original one under a set of linear constraints, which guarantee the mutual consistency of the considered components. Replacing each set by its convex hull, and therefore switching from binary to real-valued vectors, one obtains a convex relaxation (more precisely, a linear programming relaxation, since the convex hull of a finite set can be represented in terms of linear inequalities) of the problem, which reads:
(1)
(2)
Here are called factors of the decomposition and are called coupling constraints. The undirected graph is called factor graph. We will use variable names whenever we want to emphasize and whenever , .
Definition 1 (IRPSLP).
Assume that for each edge the matrices of the coupling constraints are such that and for some , and analogously for . The problem is called an Integer-Relaxed Pairwise-Separable Linear Program, abbreviated IRPSLP.
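To make the pairwise-separable structure concrete, here is a minimal toy sketch. All configuration sets and costs below are invented for illustration, and the coupling constraint is the special case where two factors must agree on one shared binary coordinate:

```python
from itertools import product

# Toy decomposed problem: two factors i, j with binary configuration sets
# X_i, X_j and assumed costs. The coupling constraint (a special case of
# the pairwise coupling matrices in the definition above) demands that a
# shared coordinate agrees.
X_i = [(0, 0), (0, 1), (1, 0)]   # configurations of (u, s) for factor i
X_j = [(0, 0), (1, 1)]           # configurations of (s, v) for factor j
theta_i = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 2.0}
theta_j = {(0, 0): 0.0, (1, 1): 0.3}

def coupled(xi, xj):
    # shared coordinate s: second entry of x_i equals first entry of x_j
    return xi[1] == xj[0]

best = min(theta_i[xi] + theta_j[xj]
           for xi, xj in product(X_i, X_j) if coupled(xi, xj))
print(best)
```

Enumerating coupled configurations jointly is of course only viable at toy sizes; the point of the framework is exactly to avoid such joint enumeration.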
In the following, we give several examples of IRPSLP. To distinguish the notation for the factor graph of an IRPSLP, where we stick to bold letters, from the graphs occurring in the examples, we will use a straight font for the latter.
Example 1 (MAP-inference for CRFs).
A conditional random field is given by a graph , a discrete label space , unary and pairwise costs for , . We also denote . The associated maximum a posteriori (MAP)inference problem reads
(3) 
where and denote the components corresponding to node and edge respectively. The well-known local polytope relaxation [72] can be seen as an IRPSLP by setting , that is, associating a factor to each node and each edge, and introducing two coupling constraints for each edge of the graphical model, i.e. . For the sake of notation, we will assume that each label is associated with a unit vector whose dimensionality equals the total number of labels and which has a 1 in the -th position. Therefore, the notation makes sense as the convex hull of all such vectors. Denoting the -dimensional simplex by , the resulting relaxation reads
(4) 
in the overcomplete representation [69] and is defined as
(5) 
Here and denote those coordinates of vectors and , which correspond to the label and the pair of labels respectively.
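As a concrete (toy) instance of the MAP-inference problem (3), the following sketch brute-forces the MAP labeling of a small chain CRF; all unary and pairwise costs are assumed for illustration:

```python
from itertools import product

# Toy pairwise CRF: a chain of 3 nodes with 2 labels and assumed
# unary/pairwise costs (Potts-like edges). MAP by brute force.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
theta_v = {0: [0.0, 1.0], 1: [0.5, 0.0], 2: [1.0, 0.2]}
theta_e = {(0, 1): [[0.0, 2.0], [2.0, 0.0]],
           (1, 2): [[0.0, 2.0], [2.0, 0.0]]}

def energy(x):
    return (sum(theta_v[v][x[v]] for v in V)
            + sum(theta_e[(a, b)][x[a]][x[b]] for (a, b) in E))

x_map = min(product(range(2), repeat=len(V)), key=energy)
print(x_map, energy(x_map))
```

Brute force is exponential in the number of nodes; the local polytope relaxation above replaces it with a polynomially sized LP.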
Example 2 (Graph Matching).
The graph matching problem, also known as quadratic assignment [12] or feature matching, can be seen as a MAPinference problem for CRFs (as in Example 1) equipped with additional constraints: The label set of belongs to a universe , i.e. and each label can be assigned at most once. The overall problem reads
(6) 
Graph matching is a key step in many computer vision applications, among them tracking and image registration, whose aim is to find a one-to-one correspondence between image points. For this reason, a large number of solvers have been proposed in the computer vision community [17, 74, 76, 68, 51, 67, 61, 28, 77, 37, 52, 15]. Among them, two recent methods [68, 76] based on Lagrangean decomposition show superior performance and provide lower bounds for their solutions. The decomposition we describe below, however, differs from those proposed in [68, 76].
Our IRPSLP representation for graph matching consists of two blocks: (i) the CRF itself (which further decomposes into node and edge subproblems with variables ) and (ii) additional label-factors keeping track of the nodes assigned the label . We introduce such a label-factor for each label . The set of possible configurations of this factor consists of those nodes which can be assigned the label and an additional dummy node . The dummy node denotes non-assignment of the label and is necessary, as not every label needs to be taken. As in Example 1, we associate a unit binary vector with each element of the set , and denotes the convex hull of such vectors. The set of factors becomes , with the set of the factor-graph edges. The resulting IRPSLP formulation reads
(7)  
Here we introduced (i) auxiliary variables for all variables and (ii) auxiliary node costs , which may take other values in the course of optimization. Factors associated with the vectors and correspond to the nodes and edges of the graph (node- and edge-factors), as in Example 1, and are coupled in the same way. Additionally, factors associated with the vectors ensure that each label can be taken at most once. These label-factors are coupled with node-factors (last line in (7)).
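The uniqueness constraints enforced by the label-factors can be illustrated on a toy quadratic assignment instance (all sizes and costs are assumed); a brute-force solver simply restricts the enumeration to injective assignments:

```python
from itertools import permutations

# Toy quadratic assignment / graph matching instance: 2 CRF nodes matched
# injectively into a universe of 3 labels. All costs are assumed.
nodes = [0, 1]
labels = [0, 1, 2]
unary = [[0.0, 1.0, 2.0],       # unary[v][l]: cost of giving label l to node v
         [1.0, 0.0, 0.5]]
# pairwise[la][lb]: cost on the CRF edge (0, 1) for the label pair (la, lb)
pairwise = [[0.0, 0.2, 0.9],
            [0.2, 0.0, 0.1],
            [0.9, 0.1, 0.0]]

def cost(assign):
    return (unary[0][assign[0]] + unary[1][assign[1]]
            + pairwise[assign[0]][assign[1]])

# uniqueness constraint (each label taken at most once): enumerate
# injective assignments only
best = min(permutations(labels, len(nodes)), key=cost)
print(best, cost(best))
```

In the IRPSLP formulation (7), this injectivity is not enumerated but enforced softly through the coupling between node- and label-factors.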
Example 3 (Multicut).
The multicut problem (also known as correlation clustering) for an undirected weighted graph is to find a partition , , of the graph vertices such that the total cost of edges connecting different components is minimized. The number of components is not fixed but is determined by the algorithm. See Fig. 1 for an illustration. Although the problem has numerous applications in computer vision [3, 4, 5, 75] and beyond [6, 58, 13, 14], there is no scalable solver that provides optimality bounds. Existing methods are either efficient primal heuristics [64, 56, 26, 18, 19, 8, 9] or combinatorial branch-and-bound/branch-and-cut/column generation algorithms based on off-the-shelf LP solvers [41, 42, 45, 75]. Move-making algorithms do not provide lower bounds; hence, one cannot judge their solution quality or employ them in branch-and-bound procedures. Off-the-shelf LP solvers, on the other hand, scale superlinearly, limiting their application to large-scale problems.
Instead of directly optimizing over partitions (which have many symmetries, making optimization difficult in a linear programming setting), we follow [16] and formulate the problem in the edge domain. Let , denote the cost of the graph edges and let be the set of all cycles of the graph . Each edge connecting different components is called a cut edge. The multicut problem reads
(8) 
Here signifies a cut edge and the inequalities force each cycle to have either none or at least two cut edges. The formulation (8) has exponentially many constraints. However, it is well-known that it is sufficient to consider only chordless cycles [16] in place of the set in (8). Moreover, the graph can be triangulated by adding additional edges with zero weights, after which the set of chordless cycles reduces to edge triples. Such a triangulation is referred to as chordal completion in the literature [25]. The number of triples is cubic, which is still too large for practical efficiency, and therefore violated constraints are typically added to the problem iteratively in a cutting plane manner [41, 42]. To simplify the description, we will ignore this fact below and consider all these cycles at once. Assuming a triangulated graph and redefining as the set of all chordless cycles (triples), we consider the following IRPSLP relaxation of the multicut problem (one can show that this relaxation coincides with the standard LP relaxation of the multicut problem [16]):
(9)  
(10) 
For the sake of notation we shortened a feasible set definition to . Here is the relaxed (potentially non-integer) variable corresponding to . Variable is a copy of which corresponds to the cycle . Therefore, each gets as many copies as there are chordless cycles containing the edge . For each cycle , the set of binary vectors satisfying the cycle inequality is considered. For a cycle with edges this set can be written explicitly as in (10). Along with copies of , we copy the corresponding cost and create auxiliary costs for each cycle containing the edge . During optimization, the cost will be redistributed between itself and its copies , . The factors of the IRPSLP are associated with each edge (variable ) and each chordless cycle (variable ). Coupling constraints connect edge-factors with those cycle-factors which contain the corresponding edge (see the last constraint in (10)). An in-depth discussion of message passing for the multicut problem with tighter relaxations can be found in [66].
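On a single triangle, formulation (8) and the cycle inequalities can be checked exhaustively. The sketch below (with assumed edge costs) encodes feasibility as "no cycle has exactly one cut edge":

```python
from itertools import product

# Multicut on a triangle with assumed edge costs; y_e = 1 marks a cut
# edge. The only (chordless) cycle is the triangle itself, so feasibility
# reduces to: not exactly one cut edge in the cycle.
edges = [(0, 1), (1, 2), (0, 2)]
theta = {(0, 1): -2.0, (1, 2): 1.5, (0, 2): 0.5}

def feasible(y):
    return sum(y.values()) != 1   # cycle inequality: 0 or >= 2 cut edges

best = None
for bits in product([0, 1], repeat=len(edges)):
    y = dict(zip(edges, bits))
    if feasible(y):
        c = sum(theta[e] * y[e] for e in edges)
        if best is None or c < best[1]:
            best = (y, c)
print(best)
```

Here the optimum cuts the two edges incident to node 0, separating it from the component {1, 2}; cutting only the profitable edge (0, 1) alone would violate the cycle inequality.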
3 Dual Problem and Admissible Messages
Since our technique can be seen as a dual ascent, we will not optimize the primal problem (1) directly, but instead maximize its dual lower bound.
Dual IRPSLP
The Lagrangean dual to (1) w.r.t. the coupling constraints reads
(11) 
Here for for some . The function is called lower bound and is concave in . The modified primal costs are called reparametrizations of the potentials . We have duplicated the dual variables by introducing to symmetrize notation. In practice, only one copy is stored and the other is computed on the fly. Note that in this doubled notation the reparametrized node and edge potentials of the CRF from Example 1 read
It is well-known for CRFs that the costs of feasible solutions are invariant under reparametrization. We generalize this to the IRPSLP case.
Proposition 1.
, whenever obey the coupling constraints.
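Proposition 1 can be checked numerically on a two-node toy CRF: subtracting a message vector phi from a node potential and adding it to the incident edge potential is a reparametrization, and every labeling keeps its cost (all numbers below are assumed):

```python
# Numerical check of Proposition 1 on a one-edge CRF: moving a message
# phi from the node potential theta_u onto the edge potential theta_uw
# leaves the cost of every labeling unchanged.
theta_u = [0.0, 1.0]
theta_w = [0.5, 0.0]
theta_uw = [[0.0, 2.0], [2.0, 0.0]]
phi = [0.7, -0.3]   # an arbitrary dual vector on the coupling (u, uw)

def cost(th_u, th_w, th_uw, xu, xw):
    return th_u[xu] + th_w[xw] + th_uw[xu][xw]

th_u2 = [theta_u[l] - phi[l] for l in range(2)]
th_uw2 = [[theta_uw[l][k] + phi[l] for k in range(2)] for l in range(2)]

for xu in range(2):
    for xw in range(2):
        assert abs(cost(theta_u, theta_w, theta_uw, xu, xw)
                   - cost(th_u2, theta_w, th_uw2, xu, xw)) < 1e-12
print("costs invariant under reparametrization")
```

While the primal costs are invariant, the independent per-factor minima (and hence the dual lower bound) do change with phi, which is exactly what the following algorithm exploits.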
Admissible Messages
While Proposition 1 guarantees that the primal problem is invariant under reparametrizations, the dual lower bound is not. Our goal is to find such that is maximal. By linear programming duality, will then be equal to the optimal value of the primal (1).
First, we will consider an elementary step of our future algorithm and show that it is non-decreasing in the dual objective. This property will ensure the monotonicity of the whole algorithm. Let be any reparametrization of the problem and be the corresponding dual value. Let us consider changing the reparametrization of a factor by a vector whose only nonzero components are and . This will change the reparametrization of the coupled factors (such that ) due to . The lemma below states properties of which are sufficient to guarantee improvement of the corresponding dual value :
Lemma 1 (Monotonicity Condition).
Let be a pair of factors related by the coupling constraints and be a corresponding dual vector. Let and satisfy
(12) 
Then implies .
Example 4.
Let us apply Lemma 1 to Example 1. Let correspond to , where is some node and is any of its incident edges. Then corresponds to a locally optimal label and . Therefore we may assign to any value from . This assures that (20) is fulfilled and remains a locally optimal label after reparametrization even if there are multiple optima in .
Lemma 1 can be straightforwardly generalized to the case when more than two factors must be reparametrized simultaneously. In terms of Example 1, this may correspond to the situation when a graph node sends messages to several incident edges at once:
Definition 2.
Let be a factor and be a subset of its neighbors. Let satisfy (20) for all , and let all other coordinates of be zero. If there exists such that , the dual vector is called admissible. The set of admissible vectors is denoted by .
Lemma 2.
Let then .
(13) 
(14) 
(15) 
MessagePassing Update Step
To maximize , we will iteratively visit all factors and adjust the messages connected to them, monotonically increasing the lower bound (11). Such an elementary step is defined by Procedure 1.
Procedure 1 is defined up to the vector satisfying (14). Usually, is a good choice. Although different choices of may result in different efficiency of our framework, fulfillment of (14) is sufficient to prove its convergence properties.
The reparametrization adjustment problem (15) serves the intuitive goal of moving as much slack as possible from the factor to its neighbors . For example, in the setting of Example 4 its solution reads . Depending on the selected , it might correspond to maximization of the dual objective in the direction defined by admissible reparametrizations. Although maximization in (15) is not necessary to prove convergence of our method (as we show below, only a feasible solution of (15) is required for the proof), (i) it leads to faster convergence and (ii) for the case of CRFs (as in Example 1) it makes our method equivalent to well-established techniques like TRWS [46] and SRMP [47], as shown in Section 4.1.
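The effect of such an adjustment step on the dual value can be seen in a minimal sketch: for one node factor coupled to one edge factor, pushing the node's slack into the edge never decreases the sum of the factorwise minima, our stand-in here for the lower bound (11). All costs are assumed:

```python
# One elementary update in the spirit of Procedure 1, on a node factor u
# coupled to an edge factor uw (all costs assumed). The dual lower bound
# is taken as the sum of the independent minima of the two factors.
theta_u = [2.0, 0.0]
theta_uw = [[0.0, 5.0], [5.0, 1.0]]

def bound(th_u, th_uw):
    return min(th_u) + min(min(row) for row in th_uw)

before = bound(theta_u, theta_uw)   # = 0.0

# admissible move: shift the whole node potential onto the edge rows
new_uw = [[theta_uw[l][m] + theta_u[l] for m in range(2)] for l in range(2)]
new_u = [0.0, 0.0]

after = bound(new_u, new_uw)        # = 1.0, the bound improved
print(before, after)
assert after >= before
```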
4 Message Passing Algorithm
Now we combine message passing updates into Algorithm 2. It visits every node of the factor graph and performs the following two operations: (i) Receive Messages, when messages are received from a subset of neighboring factors, and (ii) Send Messages, when messages to some neighboring factors are computed and reweighted by . The distribution of weights may influence the efficiency of Algorithm 2 just like it influences the efficiency of message passing for CRFs (see [47]). We provide typical settings in Section 4.1. Usually, factors are traversed in some a priori given order, alternately in forward and backward direction, as done in TRWS [46] and SRMP [47]. We refer to [47] for a motivation of such a schedule of computations.
We will discuss parameters of Algorithm 2 (factor partitioning , weights ) right after the theorem stating monotonicity for any choice of parameters.
4.1 Parameter Selection for Algorithm 2
There are the following free parameters in Algorithm 2: (i) The order of traversing factors of ; (ii) for each factor the neighboring factors from which to receive messages ; (iii) the partition of factors to send messages to and (iv) the associated weights for messages.
Although Algorithm 2 monotonically increases the dual lower bound for any choice of these parameters (as stated by Theorem 1), its efficiency may significantly depend on their values. Below, we describe the parameters for Examples 1–3 which we found the most efficient empirically. Additionally, in the supplement we discuss parameters which turn our algorithm into existing message passing solvers for CRFs (as in Example 1).
Sending a message by some factor automatically implies receiving this message by another, coupled factor. Therefore, there is usually no need to go over all factors in Algorithm 2; it is usually sufficient to guarantee that all coupling constraints are updated by Procedure 1. Formally, we can always exclude some factors from processing by setting and to the empty set. Instead, we will explicitly specify which factors are processed in the loop of Algorithm 2 in the examples below.
Parameters for Example 1, MAP-inference in CRFs.
Pairwise CRFs have the specific feature that node factors are coupled with edge factors only. This implies that processing only node factors in Algorithm 2 is sufficient. Below, we describe parameters which turn Algorithm 2 into SRMP [47] (which is, up to implementation details, equivalent to TRWS [46] for pairwise CRFs). Other settings, given in the supplement, may turn it into other popular message passing techniques like MPLP [27] or min-sum diffusion [62].
We order the node factors and process them according to this ordering. The ordering naturally defines the sets of incoming and outgoing edges for each node : an edge is incoming for if and outgoing if . Each node receives messages from all incoming edges, i.e. . Messages are sent to all outgoing edges; each edge in the partition in line 2 of Algorithm 2 is represented by a separate set, that is, the partition reads . Weights are distributed uniformly and equal , . After each outer iteration, when all nodes have been processed, the ordering is reversed and the process repeats. We refer to [47] for a substantiation of these parameters.
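A stripped-down version of this forward sweep on a toy 3-node chain (a simplified receive/send pair in place of Procedure 1, full weight on the single outgoing edge, assumed costs) shows the monotone growth of the lower bound:

```python
# Toy forward sweep on a 3-node chain CRF with 2 labels (costs assumed):
# each node pulls the min-marginals of its incoming edge into its
# potential, then pushes all of its slack onto the outgoing edge. The
# lower bound (sum of factorwise minima) never decreases.
L = 2
th_v = [[0.0, 1.0], [0.5, 0.0], [1.0, 0.2]]
th_e = [[[0.0, 2.0], [2.0, 0.0]],   # edge (0, 1)
        [[0.0, 2.0], [2.0, 0.0]]]   # edge (1, 2)

def bound():
    return (sum(min(t) for t in th_v)
            + sum(min(min(row) for row in e) for e in th_e))

def receive(v, e):
    # node v is the right endpoint of edge e: pull column min-marginals
    for l in range(L):
        m = min(th_e[e][k][l] for k in range(L))
        th_v[v][l] += m
        for k in range(L):
            th_e[e][k][l] -= m

def send(v, e):
    # node v is the left endpoint of edge e: push all node slack to rows
    for l in range(L):
        d = th_v[v][l]
        th_v[v][l] = 0.0
        for k in range(L):
            th_e[e][l][k] += d

b0 = bound()
send(0, 0)                 # forward pass in the node ordering 0, 1, 2
receive(1, 0); send(1, 1)
receive(2, 1)
b1 = bound()
assert b1 >= b0 - 1e-12    # monotone, as guaranteed by Theorem 1
print(b0, b1)
```

On this instance a single forward pass already raises the bound from 0.2 to 1.2; with the backward pass the weights and schedule of SRMP/TRWS refine this further.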
Parameters for Example 2, Graph Matching.
In addition to the node and edge factors, the corresponding IRPSLP also has label factors (7). All node factors are ordered, as in Example 1. Each node factor receives messages from all incoming edge factors and label factors and sends them to all outgoing edges and label factors. The corresponding partition reads . The weights are distributed uniformly with . The label factors are processed after all node factors have been visited. Each label factor receives messages from all connected node factors and sends messages back as well: . We use the same single set for sending messages, i.e. . After each iteration we reverse the factor order.
Parameters for Example 3, Multicut.
Similarly to Example 1, it is sufficient to visit only the edge factors in the loop of Algorithm 2, since each coupling constraint contains exactly one cycle and one edge factor. Each edge factor receives messages from all coupled cycle factors and sends them back to the same factors. As in Example 1, each cycle factor forms a trivial set in the partition in line 2 of Algorithm 2; the partition reads . Weights are distributed uniformly with . After each iteration the processing order of the factors is reversed.
4.2 Obtaining Integer Solution
Eventually we want to obtain a primal solution of (1), not a reparametrization . We are not aware of any rounding technique which would work equally well for all possible instances of the IRPSLP problem. In our experience, the most efficient rounding is problem specific. Below, we describe our choices for Examples 1–3.
Rounding for Example 1
Rounding for Example 2
is the same, except that we select the best label among those that have not been assigned yet, to satisfy the uniqueness constraints:
(17) 
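As a sketch of such problem-specific rounding, the following greedily fixes the labels of a toy chain CRF (Example 1 style) in order, each conditioned on the already-fixed predecessor. All costs are assumed, and the exact rule used in our implementation may differ:

```python
# Greedy sequential rounding on a toy chain CRF (assumed costs): fix
# nodes in order, each taking the label that is cheapest given the
# already-fixed predecessor. This yields a feasible integer labeling,
# not necessarily an optimal one.
theta_v = {0: [0.0, 1.0], 1: [0.5, 0.0], 2: [1.0, 0.2]}
theta_e = {(0, 1): [[0.0, 2.0], [2.0, 0.0]],
           (1, 2): [[0.0, 2.0], [2.0, 0.0]]}

x = {}
for v in [0, 1, 2]:
    def local(l):
        c = theta_v[v][l]
        for (a, b), th in theta_e.items():
            if b == v and a in x:   # neighbor already fixed
                c += th[x[a]][l]
        return c
    x[v] = min(range(2), key=local)
print(x)
```

In practice the rounding is run on the reparametrized potentials produced by Algorithm 2, where the redistributed slack makes such local decisions far more reliable than on the original costs.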
Rounding for Example 3.
5 Fixed Points and Comparison to Subgradient Method
Algorithm 2 does not necessarily converge to the optimum of (1). Instead, it may get stuck in suboptimal points, similar to those corresponding to “weak tree agreement” [46] or “arc consistency” [72] in CRFs from Example 1. Below we characterize these fixed points precisely.
Definition 3 (Marginal Consistency).
Given a reparametrization , assume that for each factor a nonempty set , is given. Define . We call the reparametrization marginally consistent for on if
(18) 
If is marginally consistent for on all , we call marginally consistent for .
Note that marginal consistency is necessary, but not sufficient, for optimality of the relaxation (1). This can be seen in the case of CRFs (Example 1), where it exactly corresponds to arc consistency. The latter is only necessary, but not sufficient, for optimality [72].
Theorem 2.
If is marginally consistent, the dual lower bound cannot be improved by Algorithm 2.
Comparison to Subgradient Method.
Decompositions of the IRPSLP type and more general ones can be solved via the subgradient method [48]. Similar to Algorithm 2, it operates on dual variables and manipulates them by visiting each factor sequentially. Contrary to Algorithm 2, subgradient algorithms converge to the optimum. Moreover, on a per-iteration basis, computing subgradients is cheaper than using Algorithm 2, as only (13) needs to be computed, while Algorithm 2 additionally needs to solve (15). However, for MAP-inference, the study [39] has shown that subgradient-based algorithms converge much slower than message passing algorithms like TRWS [46]. In Section 6 we confirm this for the graph matching problem as well.
The reason for this large empirical difference is that one iteration of the subgradient algorithm only updates those coordinates of the dual variables that are affected by the current minimal labeling (i.e. coordinates ), while Algorithm 2 takes all coordinates of into account. Also, message passing implicitly chooses the step size so as to achieve monotone convergence in Algorithm 1, while subgradient-based algorithms must rely on some step size rule that may make either too large or too small changes to the dual variables .
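The subgradient baseline is easy to sketch on the smallest possible decomposition: one binary variable split into two copies coupled by an equality constraint. The dual is concave piecewise-linear and the method steps along the subgradient x1* − x2*; all costs and the step size below are assumed:

```python
# Subgradient method on a minimal decomposition: a binary variable x
# duplicated into x1, x2 with coupling x1 = x2. Dual:
#   L(lam) = min_x1 [f1(x1) + lam*x1] + min_x2 [f2(x2) - lam*x2],
# a concave piecewise-linear function maximized by subgradient ascent.
f1 = {0: 0.0, 1: -2.0}
f2 = {0: 0.0, 1: 3.0}

def dual(lam):
    x1 = min((0, 1), key=lambda x: f1[x] + lam * x)
    x2 = min((0, 1), key=lambda x: f2[x] - lam * x)
    return f1[x1] + lam * x1 + f2[x2] - lam * x2, x1, x2

lam, step = 0.0, 0.5
for _ in range(40):
    _, x1, x2 = dual(lam)
    lam += step * (x1 - x2)     # subgradient of L at lam is x1 - x2

bound = dual(lam)[0]
primal_opt = min(f1[x] + f2[x] for x in (0, 1))
print(bound, primal_opt)
```

On this tiny instance the fixed step size happens to land exactly on the dual optimum; on realistic instances the choice of step size rule dominates the behavior, which is precisely the weakness discussed above.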
6 Experimental Evaluation
The goal of our experiments is to illustrate the applicability of the proposed technique; they are not an exhaustive evaluation. The presented algorithms are only basic variants, which can be further improved and tuned to the considered problems. Both issues are addressed in the specialized studies [65, 66], which are appended. Still, we show that the presented basic variants are already able to surpass state-of-the-art specialized solvers on challenging datasets. All experiments were run on a computer with a 2.2 GHz i5-5200U CPU and 8 GB RAM.
6.1 Graph Matching
Solvers.
We compare against two state-of-the-art algorithms: (i) the subgradient-based dual decomposition solver [68], abbreviated DD, and (ii) the recent “Hungarian belief propagation” message passing algorithm [76], abbreviated HBP. While the authors of [76] embedded their solver in a branch-and-bound routine to produce exact solutions, we reimplemented their message passing component and, to make the comparison fair, did not use branch-and-bound. Both DD and HBP outperformed alternative solvers at the time of their publication, hence we have not tested against [17, 74, 51, 67, 61, 28, 77, 37, 52, 15]. We call our solver AMP.
Datasets.
We selected three challenging datasets. The first two are the standard benchmark datasets car and motor, both used in [53], containing 30 pairs of cars and 20 pairs of motorbikes with keypoints to be matched 1:1. The images are taken from the PASCAL VOC 2007 challenge [20]. Costs are computed from features as in [53]. Instances are densely connected graphs with – nodes. The third is the novel worms dataset [38], containing 30 problem instances from bioimaging. The problems consist of sparsely connected graphs with up to nodes and up to labels. To our knowledge, the worms dataset contains the largest graph matching instances ever considered in the literature. For runtime plots showing the averaged logarithmic primal/dual gap over all instances of each dataset, see Fig. 2.
Results.
Our solver AMP consistently outperforms HBP and DD w.r.t. primal/dual gap and anytime performance. Most markedly, on the largest worms dataset the subgradient-based algorithm DD struggles hard to decrease the primal/dual gap, while AMP gives reasonable results.
6.2 Multicuts
Solvers.
We compare against state-of-the-art multicut algorithms implemented in the OpenGM [39] library, namely (i) the branch-and-cut based solver MC-ILP [42] utilizing the ILP solver CPLEX [2], (ii) the heuristic primal “fusion move” algorithm CC-Fusion [8] with random hierarchical clustering and random watershed proposal generators, denoted by the suffixes RHC and RWS, and (iii) the heuristic primal “Cut, Glue & Cut” solver CGC [9]. These solvers were shown to outperform other multicut algorithms [8]. Algorithm MC-ILP provides both upper and lower bounds, while CC-Fusion and CGC are purely primal algorithms. We call our message passing solver with cycle constraints added in a cutting plane fashion MPC.
Datasets.
A source of large-scale problems is electron microscopy of brain tissue, for which we wish to obtain a neuron segmentation. We have selected three datasets knott-3d-{150,300,450} of increasingly large size [39], each consisting of 8 instances. Instances have , and nodes and , , and edges respectively.
Results.
For plots showing dual bounds and primal solution objectives over time, see Figure 3. Our algorithm MPC combines the advantages of LP-based techniques with those of primal heuristics: it delivers high dual lower bounds faster than MC-ILP, has fast primal convergence, and delivers primal solutions comparable or superior to those of CGC and CC-Fusion.
7 Acknowledgments
The authors would like to thank Vladimir Kolmogorov for helpful discussions. This work is partially funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 616160.
References
 [1] https://en.wikipedia.org/wiki/ARPANET.
 [2] IBM ILOG CPLEX Optimizer. http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/.
 [3] A. Alush and J. Goldberger. Break and conquer: Efficient correlation clustering for image segmentation. In E. R. Hancock and M. Pelillo, editors, SIMBAD, volume 7953 of Lecture Notes in Computer Science, pages 134–147. Springer, 2013.
 [4] B. Andres, J. H. Kappes, T. Beier, U. Köthe, and F. A. Hamprecht. Probabilistic image segmentation with closedness constraints. In D. N. Metaxas, L. Quan, A. Sanfeliu, and L. J. V. Gool, editors, ICCV, pages 2611–2618. IEEE Computer Society, 2011.
 [5] B. Andres, T. Kröger, K. L. Briggman, W. Denk, N. Korogod, G. Knott, U. Köthe, and F. A. Hamprecht. Globally optimal closedsurface segmentation for connectomics. In A. W. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid, editors, ECCV (3), volume 7574 of Lecture Notes in Computer Science, pages 778–791. Springer, 2012.
 [6] A. Arasu, C. Ré, and D. Suciu. Largescale deduplication with constraints using dedupalog. In Y. E. Ioannidis, D. L. Lee, and R. T. Ng, editors, ICDE, pages 952–963. IEEE Computer Society, 2009.
 [7] C. Arora and A. Globerson. Higher order matching for consistent multiple target tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 177–184, 2013.
 [8] T. Beier, F. A. Hamprecht, and J. H. Kappes. Fusion moves for correlation clustering. In CVPR, pages 3507–3516. IEEE Computer Society, 2015.
 [9] T. Beier, T. Kröger, J. H. Kappes, U. Köthe, and F. A. Hamprecht. Cut, glue & cut: A fast, approximate solver for multicut partitioning. In CVPR. Proceedings, 2014.
 [10] O. Bilde and J. Krarup. Sharp lower bounds and efficient algorithms for the simple plant location problem. Annals of Discrete Mathematics, 1:79–97, 1977.
 [11] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.
 [12] R. E. Burkard, E. Çela, P. M. Pardalos, and L. S. Pitsoulis. The Quadratic Assignment Problem, pages 1713–1809. Springer US, Boston, MA, 1999.
 [13] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In P. L. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, NIPS, pages 2213–2221, 2012.
 [14] F. Chierichetti, N. Dalvi, and R. Kumar. Correlation clustering in mapreduce. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 641–650, New York, NY, USA, 2014. ACM.
 [15] M. Cho, J. Lee, and K. M. Lee. Reweighted random walks for graph matching. In K. Daniilidis, P. Maragos, and N. Paragios, editors, ECCV (5), volume 6315 of Lecture Notes in Computer Science, pages 492–505. Springer, 2010.
 [16] S. Chopra and M. R. Rao. The partition problem. Mathematical Programming, 59(1):87–115, 1993.
 [17] J. Duchi, D. Tarlow, G. Elidan, and D. Koller. Using combinatorial optimization within max-product belief propagation. In NIPS, pages 369–376. MIT Press, 2006.
 [18] M. Elsner and E. Charniak. You talking to me? a corpus and algorithm for conversation disentanglement. In K. McKeown, J. D. Moore, S. Teufel, J. Allan, and S. Furui, editors, ACL, pages 834–842. The Association for Computer Linguistics, 2008.
 [19] M. Elsner and W. Schudy. Bounding and comparing methods for correlation clustering beyond ILP. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, ILP ’09, pages 19–27, Stroudsburg, PA, USA, 2009. Association for Computational Linguistics.
 [20] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, June 2010.
 [21] M. L. Fisher and D. S. Hochbaum. Database location in computer networks. Journal of the ACM (JACM), 27(4):718–735, 1980.
 [22] M. L. Fisher, R. Jaikumar, and L. N. Van Wassenhove. A multiplier adjustment method for the generalized assignment problem. Management Science, 32(9):1095–1103, 1986.
 [23] M. L. Fisher and P. Kedia. Optimal solution of set covering/partitioning problems using dual heuristics. Management science, 36(6):674–688, 1990.
 [24] D. Gamarnik, D. Shah, and Y. Wei. Belief propagation for min-cost network flow: Convergence & correctness. In SODA, pages 279–292. SIAM, 2010.
 [25] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.
 [26] A. Gionis, H. Mannila, and P. Tsaparas. Clustering aggregation. ACM Trans. Knowl. Discov. Data, 1(1):4, 2007.
 [27] A. Globerson and T. S. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, pages 553–560, 2007.
 [28] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. IEEE Trans. Pattern Anal. Mach. Intell., 18(4):377–388, 1996.
 [29] M. Guignard. A Lagrangean dual ascent algorithm for simple plant location problems. European Journal of Operational Research, 35(2):193–200, 1988.
 [30] M. Guignard and S. Kim. Lagrangean decomposition: A model yielding stronger Lagrangean bounds. Mathematical programming, 39(2):215–228, 1987.
 [31] M. Guignard and S. Kim. Lagrangean decomposition for integer programming: theory and applications. Revue française d’automatique, d’informatique et de recherche opérationnelle. Recherche opérationnelle, 21(4):307–323, 1987.
 [32] M. Guignard and M. B. Rosenwein. An application-oriented guide for designing Lagrangean dual ascent algorithms. European Journal of Operational Research, 43(2):197–205, 1989.
 [33] M. Guignard and M. B. Rosenwein. Technical note: An improved dual-based algorithm for the generalized assignment problem. Operations Research, 37(4):658–663, 1989.
 [34] M. Guignard and M. B. Rosenwein. An application of Lagrangean decomposition to the resource-constrained minimum weighted arborescence problem. Networks, 20(3):345–359, 1990.
 [35] Gurobi Optimization, Inc., 2015. http://www.gurobi.com.
 [36] J. Jancsary and G. Matz. Convergent decomposition solvers for tree-reweighted free energies. In AISTATS, 2011.

 [37] B. Jiang, J. Tang, C. Ding, and B. Luo. A local sparse model for matching problem. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, pages 3790–3796. AAAI Press, 2015.
 [38] D. Kainmueller, F. Jug, C. Rother, and G. Myers. Active graph matching for automatic joint segmentation and annotation of C. elegans. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 81–88. Springer, 2014.
 [39] J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother. A comparative study of modern inference techniques for structured discrete energy minimization problems. International Journal of Computer Vision, 115(2):155–184, 2015.

 [40] J. H. Kappes, B. Savchynskyy, and C. Schnörr. A bundle approach to efficient MAP-inference by Lagrangian relaxation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1688–1695. IEEE, 2012.
 [41] J. H. Kappes, M. Speth, B. Andres, G. Reinelt, and C. Schnörr. Globally optimal image partitioning by multicuts. In EMMCVPR. Springer, 2011.
 [42] J. H. Kappes, M. Speth, G. Reinelt, and C. Schnörr. Higherorder segmentation via multicuts. CoRR, abs/1305.6387, 2013.
 [43] B. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal, 49(2), 1970.
 [44] M. Keuper, E. Levinkov, N. Bonneel, G. Lavoué, T. Brox, and B. Andres. Efficient decomposition of image and mesh graphs by lifted multicuts. In ICCV, 2015.
 [45] S. Kim, S. Nowozin, P. Kohli, and C. D. Yoo. Higherorder correlation clustering for image segmentation. In J. ShaweTaylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, NIPS, pages 1530–1538, 2011.
 [46] V. Kolmogorov. Convergent treereweighted message passing for energy minimization. IEEE Trans. Pattern Anal. Mach. Intell., 28(10):1568–1583, 2006.
 [47] V. Kolmogorov. A new look at reweighted message passing. IEEE Trans. Pattern Anal. Mach. Intell., 37(5):919–930, 2015.
 [48] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3):531–552, March 2011.
 [49] C. Lemaréchal. Lagrangian decomposition and nonsmooth optimization: Bundle algorithm, prox iteration, augmented Lagrangian. Nonsmooth Optimization: Methods and Applications, pages 201–216, 1992.
 [50] V. Lempitsky, C. Rother, S. Roth, and A. Blake. Fusion moves for Markov random field optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1392–1405, 2010.
 [51] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In ICCV, pages 1482–1489. IEEE Computer Society, 2005.
 [52] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching and MAP inference. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, NIPS, pages 1114–1122. Curran Associates, Inc., 2009.
 [53] M. Leordeanu, R. Sukthankar, and M. Hebert. Unsupervised learning for graph matching. International Journal of Computer Vision, 96(1):28–45, 2012.
 [54] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In UAI, pages 393–401. AUAI Press, 2009.
 [55] I. Necoara and J. A. Suykens. Application of a smoothing technique to decomposition in convex optimization. IEEE Transactions on Automatic Control, 53(11):2674–2679, 2008.
 [56] V. Ng and C. Cardie. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL ’02, 2002.
 [57] C. Ribeiro and M. Minoux. Solving hard constrained shortest path problems by Lagrangean relaxation and branch-and-bound algorithms. Methods of Operations Research, 53:303–316, 1986.
 [58] E. Sadikov, J. Madhavan, L. Wang, and A. Halevy. Clustering query refinements by user intent. In World Wide Web Conference (WWW). ACM Press, April 2010.
 [59] B. Savchynskyy, J. Kappes, S. Schmidt, and C. Schnörr. A study of Nesterov’s scheme for Lagrangian decomposition and MAP labeling. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1817–1823. IEEE, 2011.
 [60] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI, pages 746–755. AUAI Press, 2012.
 [61] C. Schellewald and C. Schnörr. Probabilistic subgraph matching based on convex relaxation. In EMMCVPR, volume 3757 of Lecture Notes in Computer Science, pages 171–186. Springer, 2005.
 [62] M. I. Schlesinger and K. V. Antoniuk. Diffusion algorithms and structural recognition optimization problems. Cybernetics and Systems Analysis, 47(2):175–192, 2011.
 [63] D. Sontag and T. S. Jaakkola. Tree block coordinate descent for MAP in graphical models. In AISTATS, volume 5 of JMLR Proceedings, pages 544–551, 2009.
 [64] W. M. Soon, H. T. Ng, and D. C. Y. Lim. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544, 2001.
 [65] P. Swoboda, C. Rother, H. Abu Alhaija, D. Kainmüller, and B. Savchynskyy. A study of Lagrangean decomposition and dual ascent solvers for graph matching.
 [66] P. Swoboda and B. Andres. A message passing algorithm for the minimum cost multicut problem.
 [67] P. H. S. Torr. Solving Markov random fields using semidefinite programming. In AISTATS, 2003.
 [68] L. Torresani, V. Kolmogorov, and C. Rother. A dual decomposition approach to feature correspondence. IEEE Trans. Pattern Anal. Mach. Intell., 35(2):259–271, 2013.
 [69] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
 [70] H. Wang and D. Koller. Subproblem-tree calibration: A unified approach to max-product message passing. In 30th International Conference on Machine Learning (ICML-13), pages 190–198, 2013.
 [71] Y. Weiss and W. T. Freeman. On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47(2):736–744, 2001.
 [72] T. Werner. A linear programming approach to max-sum problem: A review. IEEE Trans. Pattern Analysis and Machine Intelligence, 29(7):1165–1179, 2007.
 [73] T. Werner. Revisiting the linear programming relaxation approach to Gibbs energy minimization and weighted constraint satisfaction. IEEE Trans. Pattern Anal. Mach. Intell., 32(8):1474–1488, 2010.
 [74] J. Yarkony, C. C. Fowlkes, and A. T. Ihler. Covering trees and lower-bounds on quadratic assignment. In CVPR, pages 887–894. IEEE Computer Society, 2010.
 [75] J. Yarkony, A. Ihler, and C. C. Fowlkes. Fast Planar Correlation Clustering for Image Segmentation, pages 568–581. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
 [76] Z. Zhang, Q. Shi, J. McAuley, W. Wei, Y. Zhang, and A. van den Hengel. Pairwise matching through maxweight bipartite belief propagation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
 [77] F. Zhou and F. De la Torre. Factorized graph matching. In CVPR, pages 127–134. IEEE Computer Society, 2012.
8 Supplementary Material
Proof of Proposition 1
Proposition.
, whenever obey the coupling constraints.
Proof.
where due to and . ∎
Proof of Proposition 2
Proposition.
Proof of Lemma 1 and Lemma 2
Lemma.
Let be a pair of factors related by the coupling constraints and be a corresponding dual vector. Let and satisfy
(20) 
Then implies .
Proof.
Let be a solution of (13) at which the dual lower bound (11) is attained before the update and be an integral solution at which the dual lower bound is attained after has been updated. Variable as chosen in (13) is optimal for and for by construction. We need to prove
(21) 
We move all terms containing the variables to the right-hand side and all other terms to the left-hand side.