1 Introduction
1.1 Model
Consider a connected undirected graph $G = (V, E)$ and denote $n = |V|$ and $m = |E|$. For a node $v \in V$, we stick to the convention that $N(v)$ denotes the set of $v$'s neighbors in $G$. An edge is said to be incident on $v$ if it connects $v$ and one of its neighbors.
In the realm of distributed graph algorithms, the nodes of graph $G$ are associated with processing units that operate in a decentralized fashion. We assume that node $v$ distinguishes between its incident edges by means of port numbers, i.e., a bijection between the set of edges incident on $v$ and the integers in $\{1, \dots, \deg(v)\}$. Additional graph attributes, such as node ids, edge orientation, and edge and node weights, are passed to the nodes by means of an input assignment $i$ that assigns to each node $v$ a bit string $i(v)$, referred to as $v$'s local input, that encodes the additional attributes of $v$ and its incident edges. The nodes return their output by means of an output assignment $o$ that assigns to each node $v$ a bit string $o(v)$, referred to as $v$'s local output. We often denote the tuples $(G, i, o)$ and $(G, i)$ and refer to them as an input-output (IO) graph and an input graph, respectively. (Refer to Table 1 for a full list of the abbreviations used in this paper.)
A distributed graph problem (DGP) is a collection $\Pi$ of IO graphs. In the context of a DGP $\Pi$, an input graph $(G, i)$ is said to be legal (and the graph $G$ and input assignment $i$ are said to be co-legal) if there exists an output assignment $o$ such that $(G, i, o) \in \Pi$, in which case we say that $o$ is a feasible solution for $(G, i)$ (or simply for $G$ and $i$). Given a DGP $\Pi$, we may slightly abuse the notation and write $(G, i) \in \Pi$ to denote that $(G, i)$ is legal.
A distributed graph minimization problem (MinDGP) (resp., distributed graph maximization problem (MaxDGP)) is a pair $\Psi = (\Pi, f)$, where $\Pi$ is a DGP and $f$ is a function, referred to as the objective function of $\Psi$, that maps each IO graph $(G, i, o)$ to an integer value $f(G, i, o)$. (We assume for simplicity that the images of the objective functions used in the context of this paper are integral; lifting this assumption and allowing for real numerical values would complicate some of the arguments, but it does not affect the validity of our results.) Given a co-legal graph $G$ and input assignment $i$, define
$\mathrm{OPT}(G, i) = \min \{ f(G, i, o) : o \text{ is a feasible solution for } (G, i) \}$ if $\Psi$ is a MinDGP; and
$\mathrm{OPT}(G, i) = \max \{ f(G, i, o) : o \text{ is a feasible solution for } (G, i) \}$ if $\Psi$ is a MaxDGP. We often use the general term distributed graph optimization problem (OptDGP) to refer to MinDGPs as well as MaxDGPs. Given an OptDGP $\Psi$ and co-legal graph $G$ and input assignment $i$, the output assignment $o$ is said to be an optimal solution for $(G, i)$ (or simply for $G$ and $i$) if $o$ is a feasible solution for $(G, i)$ and $f(G, i, o) = \mathrm{OPT}(G, i)$.
Let us demonstrate our definitions through the example of the maximum weight matching problem in bipartite graphs, i.e., explaining how it fits into the framework of a MaxDGP. Given a graph $G$ and an input assignment $i$, the input graph $(G, i)$ is legal if $G$ is bipartite and $i$ encodes an edge weight function $w$. Formally, for every node $v$, the local input $i(v)$ is set to be a vector, indexed by the port numbers of $v$, defined so that if edge $e = \{u, v\}$ corresponds to ports $p$ and $q$ at nodes $u$ and $v$, respectively, then both the $p$-th entry in $i(u)$ and the $q$-th entry in $i(v)$ hold the value $w(e)$. Given a legal input graph $(G, i)$, the output assignment $o$ is a feasible solution if it encodes a matching $M \subseteq E$ in $G$. Formally, the local output $o(v)$ is set to the port number corresponding to $e$ if there exists an edge $e \in M$ incident on $v$; and to a designated null value otherwise. The objective function $f$ is defined so that for an IO graph $(G, i, o)$ with corresponding edge weight function $w$ and matching $M$, the value of $f(G, i, o)$ is set to $\sum_{e \in M} w(e)$. Following this notation, a feasible solution $o$ for co-legal $G$ and $i$ is optimal if and only if $M$ is a maximum weight matching in $G$ with respect to the edge weight function $w$.

While the formulation introduced in the current section is necessary for the general definitions presented in Section 1.1.1 and the generic methods developed in Section 3, in Section 5, when considering IO graphs in the context of specific DGPs and OptDGPs, we often do not explicitly describe the input and output assignments, but rather take a more natural high-level approach. For example, in the context of the aforementioned maximum weight matching problem in bipartite graphs, we may address the input edge weight function and output matching directly, without explaining how they are encoded in the input and output assignments, respectively. The missing details should be clear from the context and can be easily completed by the reader.
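To make the port-numbered encoding concrete, the following sketch (using hypothetical dictionary-based data structures rather than the paper's bit-level encoding) shows the node-local consistency check behind such a matching output: a matched edge must be claimed symmetrically by both of its endpoints.

```python
# Hypothetical sketch: ports maps each port number of a node to the pair
# (neighbor, neighbor's port for the shared edge); output_port is the port of
# the edge the node claims is matched (or None if the node is unmatched);
# neighbor_outputs maps each neighbor to its own claimed matched port.

def local_matching_check(output_port, ports, neighbor_outputs):
    if output_port is None:
        return True  # an unmatched node imposes no local constraint
    if output_port not in ports:
        return False  # the claimed port does not exist
    neighbor, neighbor_port = ports[output_port]
    # the matched edge must be claimed by both endpoints, via matching ports
    return neighbor_outputs.get(neighbor) == neighbor_port
```

Running this check at every node accepts exactly the outputs that encode a matching, which is the feasibility condition described above.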
1.1.1 Proof Labeling Schemes
In this section we present the notions of proof labeling schemes [KKP10] and approximate proof labeling schemes [CPP20] for OptDGPs and their decision variants. To unify the definitions of these notions, we start by introducing the notion of gap proof labeling schemes based on the following definition.
A configuration graph is a pair $(G, c)$ consisting of a graph $G$ and a function $c$ assigning a bit string $c(v)$ to each node $v$. In particular, an input graph $(G, i)$ is a configuration graph, where $c = i$, and an IO graph $(G, i, o)$ is a configuration graph, where $c(v) = (i(v), o(v))$.
Fix some universe $\mathcal{U}$ of configuration graphs. A gap proof labeling scheme (GPLS) is a mechanism designed to distinguish the configuration graphs in a yes-family $\mathcal{F}_Y$ from the configuration graphs in a no-family $\mathcal{F}_N$, where $\mathcal{F}_Y, \mathcal{F}_N \subseteq \mathcal{U}$ and $\mathcal{F}_Y \cap \mathcal{F}_N = \emptyset$. This is done by means of a (centralized) prover and a (distributed) verifier that play the following roles: given a configuration graph $(G, c)$, if $(G, c) \in \mathcal{F}_Y$, then the prover assigns a bit string $L(v)$, called the label of $v$, to each node $v$. Let $\bar{L}(v)$ be the vector of labels assigned to $v$'s neighbors. The verifier at node $v$ is provided with the tuple $(c(v), L(v), \bar{L}(v))$ and returns a Boolean value.
We say that the verifier accepts $(G, c)$ if it returns true at all nodes; and that the verifier rejects $(G, c)$ if it returns false at (at least) one node. The GPLS is said to be correct if the following requirements hold for every configuration graph $(G, c)$:
R1.
If $(G, c) \in \mathcal{F}_Y$, then the prover produces a label assignment $L$ such that the verifier accepts $(G, c)$.
R2.
If $(G, c) \in \mathcal{F}_N$, then for any label assignment $L$, the verifier rejects $(G, c)$.
We emphasize that no requirements are made for configuration graphs $(G, c) \notin \mathcal{F}_Y \cup \mathcal{F}_N$; in particular, the verifier may either accept or reject these configuration graphs (the same holds for configuration graphs that do not belong to the universe $\mathcal{U}$).
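The global accept/reject semantics above can be sketched as follows; `gpls_accepts` and `local_verifier` are hypothetical names for illustration, with the per-node predicate seeing only the node's string, its label, and its neighbors' labels.

```python
# A minimal sketch of the GPLS acceptance rule: the (distributed) verifier
# accepts a configuration graph iff every node's local verdict is True.

def gpls_accepts(nodes, adjacency, strings, labels, local_verifier):
    for v in nodes:
        neighbor_labels = [labels[u] for u in adjacency[v]]
        if not local_verifier(strings[v], labels[v], neighbor_labels):
            return False  # one rejecting node suffices to reject globally
    return True
```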
The performance of a GPLS is measured by means of its proof size, defined to be the maximum length of a label assigned by the prover to the nodes, assuming that $(G, c) \in \mathcal{F}_Y$. We say that a GPLS admits a sequentially efficient prover if, for any configuration graph $(G, c) \in \mathcal{F}_Y$, the sequential runtime of the prover is polynomial in the number of bits used to encode $(G, c)$; and that it admits a sequentially efficient verifier if the sequential runtime of the verifier at node $v$ is polynomial in the number of bits used to encode $c(v)$, $L(v)$, and $\bar{L}(v)$. The GPLS is called sequentially efficient if both its prover and verifier are sequentially efficient.
Proof Labeling Schemes for OptDGPs.
Consider some OptDGP $\Psi = (\Pi, f)$ and let $\mathcal{U}$ be the universe of IO graphs. A proof labeling scheme (PLS) for $\Psi$ is defined as a GPLS over $\mathcal{U}$ by setting the yes-family to be $\mathcal{F}_Y = \{(G, i, o) \in \mathcal{U} : o \text{ is an optimal solution for } G \text{ and } i\}$ and the no-family to be $\mathcal{F}_N = \mathcal{U} \setminus \mathcal{F}_Y$. In other words, a PLS for $\Psi$ determines for a given IO graph $(G, i, o)$ whether the output assignment $o$ is an optimal solution (which means in particular that it is a feasible solution) for the co-legal graph $G$ and input assignment $i$.
In the realm of OptDGPs, it is natural to relax the definition of a PLS so that it may also accept feasible solutions that only approximate the optimal ones. Specifically, given an approximation parameter $\alpha \ge 1$, an approximate proof labeling scheme (APLS) for an OptDGP $\Psi$ is defined in the same way as a PLS for $\Psi$, with the sole difference that the no-family is defined by setting $\mathcal{F}_N$ to be the set of IO graphs $(G, i, o) \in \mathcal{U}$ in which $o$ is either infeasible or satisfies $f(G, i, o) > \alpha \cdot \mathrm{OPT}(G, i)$ if $\Psi$ is a MinDGP (resp., $f(G, i, o) < \mathrm{OPT}(G, i) / \alpha$ if $\Psi$ is a MaxDGP).
Decision Proof Labeling Schemes for OptDGPs.
Consider some MinDGP (resp., MaxDGP) $\Psi = (\Pi, f)$ and let $\mathcal{U}$ be the universe of input graphs. A decision proof labeling scheme (DPLS) for $\Psi$ and a parameter $k$ is defined as a GPLS over $\mathcal{U}$ by setting the yes-family to be $\mathcal{F}_Y = \{(G, i) \in \mathcal{U} : \mathrm{OPT}(G, i) \ge k\}$ (resp., $\mathrm{OPT}(G, i) \le k$) and the no-family to be $\mathcal{F}_N = \mathcal{U} \setminus \mathcal{F}_Y$. In other words, given an input graph $(G, i)$, a DPLS for $\Psi$ and $k$ decides if $f(G, i, o) \ge k$ (resp., $f(G, i, o) \le k$) for every feasible output assignment $o$. Notice that while PLSs address the task of verifying the optimality of a given output assignment $o$, that is, verifying that no output assignment admits an objective value smaller (resp., larger) than $f(G, i, o)$, in DPLSs the output assignment is not specified and the task is to verify that no output assignment admits an objective value smaller (resp., larger) than the parameter $k$, provided as part of the DPLS task.
Similarly to PLSs, the definition of DPLS admits a natural relaxation. Given an approximation parameter $\alpha \ge 1$, an approximate decision proof labeling scheme (ADPLS) for an OptDGP $\Psi$ and a parameter $k$ is defined in the same way as a DPLS for $\Psi$ and $k$, with the sole difference that the no-family is defined by setting $\mathcal{F}_N = \{(G, i) \in \mathcal{U} : \mathrm{OPT}(G, i) < k / \alpha\}$ (resp., $\mathrm{OPT}(G, i) > \alpha \cdot k$).
We often refer to an ADPLS without explicitly mentioning its associated parameter ; this should be interpreted with a universal quantifier over all parameters .
1.2 Related Work and Discussion
Distributed verification is the task of locally verifying a global property of a given configuration graph by means of a centralized prover and a distributed verifier. Various models for distributed verification have been introduced in the literature including the PLS model [KKP10] as defined in Section 1.1.1, the locally checkable proofs (LCP) model [GS16], and the distributed complexity class nondeterministic local decision (NLD) [FKP11, BDFO18]. Refer to [FF16] for a comprehensive survey on the topic of distributed verification.
The current paper focuses on the PLS (and DPLS) model. This model was introduced by Korman, Kutten, and Peleg in [KKP10] and has been extensively studied since then, see, e.g., [KK07, BFPS14, OPR17, FF17, PP17, FFH18, Feu19]. A specific family of tasks that attracted a lot of attention in this regard is that of designing PLSs for classic optimization problems. Papers on this topic include [KK07], where a PLS for minimum spanning tree is shown to have a proof size of , where is the maximum weight, and [GS16], where a PLS for maximum weight matching in bipartite graphs is shown to have a proof size of .
In parallel, numerous researchers focused on establishing impossibility results for PLSs and DPLSs, usually derived from nondeterministic communication complexity lower bounds [KN06]. Such results are provided, e.g., in [BCHD19], where a proof size of is shown to be required for many classic optimization problems, and in [GS16], where an lower bound is established on the proof size of DPLSs for the problem of deciding if the chromatic number is larger than . For the minimum spanning tree problem, the authors of [KK07] proved that their upper bound on the proof size is asymptotically optimal, relying on direct combinatorial arguments.
The lower bounds on the proof size of PLSs (and DPLSs) for some optimization problems have motivated the authors of [CPP20] to introduce the APLS (and ADPLS) notion as a natural relaxation thereof. This motivation is demonstrated by the task of verifying that the unweighted diameter of a given graph is at most : As shown in [CPP20], the diameter task admits a large gap between the required proof size of a DPLS, shown to be , and the proof sizes of ADPLS and ADPLS shown to be and , respectively. To the best of our knowledge, APLSs (and ADPLSs) have not been studied otherwise until the current paper.
One of the generic methods developed in the current paper for the design of APLSs for an abstract OptDGP relies on a primal-dual approach applied to the linear program that encodes the OptDGP, after relaxing its integrality constraints (see Section 3.1). This can be viewed as a generalization of a similar approach used in the literature for concrete OptDGPs. Specifically, this primal-dual approach is employed in [GS16] to obtain their PLS for maximum weight matching in bipartite graphs; a similar technique is used by the authors of [CPP20] to achieve an APLS for maximum weight matching in general graphs with the same proof size.

While most of the PLS literature (including the current work) focuses on deterministic schemes, an interesting angle that has been studied recently is randomization in distributed proofs, i.e., allowing the verifier to reach its decision in a randomized fashion. The notion of randomized proof labeling schemes was introduced in [FPP19], where the strength of randomization in the PLS model is demonstrated by a universal scheme that enables one to reduce the amount of communication required in a PLS exponentially by allowing a (probabilistic) one-sided error. Another interesting generalization of PLSs is the distributed interactive proof model, introduced recently in [KOS18] and studied further in [NPY20, CFP19, FMO19].
On Sequential Efficiency.
In this paper, we focus on sequentially efficient schemes, restricting the prover and verifier to "reasonable computations". We argue that beyond the interesting theoretical implications of this restriction (see Section 1.3), it also carries practical justifications: a natural application of PLSs is found in local checking for self-stabilizing algorithms [APV91], where the verifier's role is played by the detection module and the prover is part of the correction module [KKP10]. Any attempt to implement these modules in practice clearly requires sequential efficiency on behalf of both the verifier and the prover (although, for the latter, the sequential efficiency condition alone is not sufficient as the correction module is also distributed).
While most of the PLSs presented in previous papers are naturally sequentially efficient, there are a few exceptions. One example of a scheme that may require intractable computations on the verifier's side is the universal PLS presented in [KKP10], which enables the verification of any decidable graph property simply by encoding the entire structure of the graph within the label. A PLS that inherently relies on a sequentially inefficient prover can be found, e.g., in [GS16], where a scheme is constructed to decide if the graph contains a Hamiltonian cycle.
1.3 Our Contribution
Our goal in this paper is to explore the power and limitations of APLSs and ADPLSs for OptDGPs. We start by developing two generic methods: a primal-dual method for the design of sequentially efficient APLSs that expands and generalizes techniques used by Göös and Suomela [GS16] and Censor-Hillel, Paz, and Perry [CPP20]; and a method that exploits the local properties of centralized approximation algorithms for the design of sequentially efficient ADPLSs. Next, we establish black-box reductions between APLSs and ADPLSs for certain families of OptDGPs. Based (mainly) on these generic methods and reductions, we design a total of twenty-two new sequentially efficient APLSs and ADPLSs for various classic optimization problems; refer to Tables 2 and 3 for a summary of these results.
On the negative side, we establish a lower bound on the proof size of an APLS for maximum weight matching (in fact, this lower bound applies even for the simpler case of unweighted maximum matching) and minimum edge cover in graphs of large odd-girth; and a lower bound on the proof size of a PLS for minimum edge cover in odd rings. These lower bounds, which rely on combinatorial arguments and hold regardless of sequential efficiency, match the proof sizes established in our corresponding APLSs for these OptDGPs, thus proving their optimality.

Additional lower bounds are established under the restriction of the verifier and/or prover to sequentially efficient computations, based on hardness assumptions in (sequential) computational complexity theory. Consider an OptDGP $\Psi$ that corresponds to an optimization problem that is NP-hard to approximate within a factor $\alpha$. We first note that under the assumption that $\mathrm{P} \neq \mathrm{NP}$, the yes-families of both an APLS for $\Psi$ and an ADPLS for $\Psi$ (with some parameter $k$) are languages that cannot be decided in polynomial time. Therefore, restricting the verifier to sequentially efficient computations implies that $\Psi$ admits neither an APLS nor an ADPLS with a polynomial proof size. This provides additional motivation for the study of APLSs and ADPLSs over their exact counterparts.
Furthermore, a (weaker) hardness assumption suffices to rule out the existence of an ADPLS for $\Psi$ when both the verifier and prover are required to be sequentially efficient. This is due to the fact that the yes-family of an ADPLS for $\Psi$ (with some parameter $k$) is a coNP-complete language, combined with the trivial observation that any sequentially efficient GPLS can be simulated by a centralized algorithm in polynomial time. We note that most of the OptDGPs considered in this paper correspond to NP-hard optimization problems; refer to Table 4 for their known inapproximability results with and without the unique games conjecture [Kho02].
1.4 Paper’s Organization
The rest of the paper is organized as follows. Following some preliminaries presented in Section 2, our generic methods for the design of APLSs and ADPLSs are developed in Section 3. The reductions between APLSs and ADPLSs are presented in Section 4. Finally, our bounds for concrete OptDGPs are established in Section 5.
2 Preliminaries
Linear Programming and Duality.
A linear program (LP) consists of a linear objective function that one wishes to optimize (i.e., minimize or maximize) subject to linear inequality constraints. The standard form of a minimization (resp., maximization) LP is $\min \{ c^T x : Ax \ge b, x \ge 0 \}$ (resp., $\max \{ c^T x : Ax \le b, x \ge 0 \}$), where $x$ is a vector of variables and $A$, $b$, and $c$ are a matrix and vectors of coefficients. An integer linear program (ILP) is an LP augmented with integrality constraints. In Section 5, we formulate OptDGPs as LPs and ILPs. In the latter case, we often turn to an LP relaxation of the problem, i.e., an LP obtained from an ILP by relaxing its integrality constraints.
Every LP admits a corresponding dual program (in this context, we refer to the original LP as the primal program). Specifically, for a minimization (resp., maximization) LP in standard form, its dual is a maximization (resp., minimization) LP, formulated as $\max \{ b^T y : A^T y \le c, y \ge 0 \}$ (resp., $\min \{ b^T y : A^T y \ge c, y \ge 0 \}$).
LP duality has the following useful properties. Let $x$ and $y$ be feasible solutions to the primal and dual programs, respectively. The weak duality theorem states that $c^T x \ge b^T y$ (resp., $c^T x \le b^T y$). The strong duality theorem states that $x$ and $y$ are optimal solutions to the primal and dual programs, respectively, if and only if $c^T x = b^T y$. The relaxed complementary slackness conditions are stated as follows, for given parameters $\alpha, \beta \ge 1$.

Primal relaxed complementary slackness:
For every primal variable $x_j$, if $x_j > 0$, then $\sum_i a_{ij} y_i \ge c_j / \alpha$ (resp., $\sum_i a_{ij} y_i \le \alpha \cdot c_j$). 
Dual relaxed complementary slackness:
For every dual variable $y_i$, if $y_i > 0$, then $\sum_j a_{ij} x_j \le \beta \cdot b_i$ (resp., $\sum_j a_{ij} x_j \ge b_i / \beta$).
If the (primal and dual) relaxed complementary slackness conditions hold, then it is guaranteed that $c^T x \le \alpha \beta \cdot b^T y$ (resp., $c^T x \ge \frac{1}{\alpha \beta} \cdot b^T y$) which, combined with the aforementioned weak duality theorem, implies that $x$ approximates an optimal primal solution by a multiplicative factor of $\alpha \beta$. Moreover, the relaxed complementary slackness conditions with parameters $\alpha = \beta = 1$, often referred to simply as the complementary slackness conditions, hold if and only if $x$ and $y$ are optimal.
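The $\alpha\beta$ guarantee can be traced through the following short derivation for the minimization case (with standard-form primal $\min\{c^T x : Ax \ge b,\ x \ge 0\}$ and its dual; the maximization case is symmetric):

```latex
% The first inequality uses primal relaxed slackness (x_j > 0 implies
% \sum_i a_{ij} y_i \ge c_j / \alpha); the second uses dual relaxed
% slackness (y_i > 0 implies \sum_j a_{ij} x_j \le \beta b_i).
c^{T}x \;=\; \sum_{j} c_{j}x_{j}
       \;\le\; \alpha \sum_{j} \Big(\sum_{i} a_{ij}y_{i}\Big) x_{j}
       \;=\; \alpha \sum_{i} y_{i} \sum_{j} a_{ij}x_{j}
       \;\le\; \alpha\beta \sum_{i} b_{i}y_{i}
       \;=\; \alpha\beta \, b^{T}y .
```

Combined with weak duality, $b^T y$ lower-bounds the optimal primal value, so $x$ is an $\alpha\beta$-approximation.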
Let $\Psi$ be an OptDGP that can be represented as an ILP, let $P$ be its LP relaxation, and let $D$ be the dual LP of $P$. Given parameters $\alpha, \beta \ge 1$, we say that $\Psi$ is $(\alpha, \beta)$-fitted if for any optimal (integral) solution $x$ for the ILP corresponding to $\Psi$, there exists a feasible solution $y$ for $D$ such that the relaxed primal and dual complementary slackness conditions hold for $x$ and $y$ with parameters $\alpha$ and $\beta$, respectively.
Comparison Schemes.
Let $\mathcal{U}$ be the universe of IO graphs $(G, i, o)$ where $i$ is an input assignment that encodes a unique id for each node (possibly among other input components). For a function $f$ and a parameter $t$, an $(f, t)$-comparison scheme is a mechanism designed to decide if $f(G, i, o) \ge t$ for a given IO graph $(G, i, o) \in \mathcal{U}$. Formally, an $(f, t)$-comparison scheme is defined as a GPLS over $\mathcal{U}$ by setting the yes-family to be $\{(G, i, o) \in \mathcal{U} : f(G, i, o) \ge t\}$ and the no-family to be $\{(G, i, o) \in \mathcal{U} : f(G, i, o) < t\}$. Notice that the task of deciding if $f(G, i, o) \le t$ can be achieved by an $(f', t')$-comparison scheme, where $f'$ is defined by setting $f'(G, i, o) = -f(G, i, o)$ for every $(G, i, o) \in \mathcal{U}$ (and $t' = -t$).
The following lemma has been established by Korman et al. [KKP10, Lemma 4.4].
Lemma 2.1.
Given a function that is computable in polynomial time and an integer , there exists a sequentially efficient comparison scheme with proof size , where is the maximal number of bits required to represent for any .
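To give a flavor of how such a comparison scheme can operate, the following sketch assumes a correctly certified rooted spanning tree (a standard ingredient in the schemes of [KKP10]; certifying the tree itself is handled separately) and labels each node with its subtree sum. The names `subtree_sum_labels` and `verify_at_node` are illustrative, not part of the cited scheme.

```python
# Prover side: label each node with the sum of f-values in its subtree.
def subtree_sum_labels(tree_children, f_values, root):
    labels = {}
    def fill(v):
        labels[v] = f_values[v] + sum(fill(c) for c in tree_children.get(v, []))
        return labels[v]
    fill(root)
    return labels

# Verifier side: each node re-checks its label against its children's labels;
# the root additionally compares the total against the threshold t.
def verify_at_node(v, labels, tree_children, f_values, root, t):
    expected = f_values[v] + sum(labels[c] for c in tree_children.get(v, []))
    if labels[v] != expected:
        return False
    return labels[v] >= t if v == root else True  # deciding "sum >= t"
```

If every node's check passes, the root's label necessarily equals the global sum, so the scheme decides the comparison correctly.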
Additional Definitions.
A feasibility scheme for a DGP $\Pi$ is a GPLS over the universe of IO graphs with the yes-family consisting of the IO graphs $(G, i, o) \in \Pi$ and the no-family consisting of all other IO graphs. The odd-girth of a graph $G$ is the length of the shortest odd cycle contained in $G$.
3 Methods
In this section, we present two generic methods that facilitate the design of sequentially efficient APLSs and ADPLSs with small proof sizes for many OptDGPs. These methods are used in most of the results established later on in Section 5.
3.1 The Primal-Dual Method
LP duality theory can be a useful tool in the design of an APLS for a fitted OptDGP (as shown in [CPP20, GS16]). The main idea of this approach is to use the relaxed complementary slackness conditions to verify that the output assignment $o$ of a given IO graph $(G, i, o)$ is approximately optimal for $G$ and $i$. Specifically, the prover provides the verifier with a proof that there exists a feasible dual solution $y$ within a multiplicative factor of $\alpha \beta$ from the primal solution $x$ derived from the output assignment $o$; the verifier then verifies the primal and dual feasibility of $x$ and $y$, respectively, as well as their relaxed complementary slackness conditions.
We take a particular interest in the following family of OptDGPs. Consider an OptDGP $\Psi$ that can be represented by an ILP that admits an LP relaxation whose matrix form is given by the variable vector $x$, the coefficient matrix $A$, and the vectors $b$ and $c$. We say that $\Psi$ is locally verifiable if for every IO graph, there exist mappings of the rows (primal constraints) to nodes and of the columns (primal variables) to edges of $G$ that satisfy the following conditions: (1) $a_{ij} = 0$ for every row $i$ mapped to a node $v$ and column $j$ mapped to an edge that is not incident on $v$; (2) the variable $x_j$ is encoded in the local output of node $v$ for every column $j$ mapped to an edge incident on $v$; and (3) the coefficients $a_{ij}$, $b_i$, and $c_j$ are either universal constants or encoded in the local input of node $v$ for every row $i$ mapped to $v$ and column $j$ mapped to an edge incident on $v$.
The primal-dual method facilitates the design of an APLS for a fitted and locally verifiable OptDGP $\Psi$, whose goal is to determine for a given IO graph $(G, i, o)$ whether the output assignment $o$ is an optimal (feasible) solution for the co-legal $G$ and $i$ or far from being an optimal solution. Let $x$ be the primal variable vector encoded in the output assignment $o$. If $o$ is an optimal solution for $G$ and $i$, then the prover uses a sequential algorithm to generate a feasible dual variable vector $y$ such that $x$ and $y$ meet the relaxed complementary slackness conditions with parameters $\alpha$ and $\beta$ (such a dual solution exists as $\Psi$ is fitted). The label assignment constructed by the prover assigns to each node $v$ a label that encodes the vector of dual variables mapped to $v$ in the dual variable vector $y$.
Consider some node $v$ of the given IO graph $(G, i, o)$. The verifier at node $v$ extracts (i) the vector of primal variables mapped to edges incident on $v$ from the local output $o(v)$; (ii) the vector of dual variables mapped to $v$ from the label $L(v)$; (iii) the vector of dual variables mapped to $v$'s neighbors from the label vector $\bar{L}(v)$; and (iv) the vectors of coefficients mapped to $v$ and the edges incident on $v$ from the local input $i(v)$.
The verifier at node $v$ then proceeds as follows: (1) it verifies that the primal constraints corresponding to the rows mapped to $v$ are satisfied; (2) it verifies that the dual constraints corresponding to the columns mapped to edges incident on $v$ are satisfied; (3) it verifies that the primal relaxed complementary slackness conditions corresponding to the primal variables mapped to edges incident on $v$ hold with parameter $\alpha$; and (4) it verifies that the dual relaxed complementary slackness conditions corresponding to the dual variables mapped to $v$ hold with parameter $\beta$. If all four conditions are satisfied, then the verifier at node $v$ returns true; otherwise, it returns false. Put together, the verifier accepts the IO graph if and only if $x$ and $y$ are feasible primal and dual solutions that satisfy the primal and dual relaxed complementary slackness conditions with parameters $\alpha$ and $\beta$, respectively.
To establish the correctness of the APLS, notice first that the primal constraints are satisfied if and only if $o$ is a feasible solution for $G$ and $i$. Assuming that the primal constraints are satisfied, if $o$ is an optimal solution for $G$ and $i$, then the fact that $\Psi$ is fitted implies that the prover generates a feasible dual solution $y$ such that the primal and dual relaxed complementary slackness conditions are satisfied with parameters $\alpha$ and $\beta$. Conversely, if $y$ is a feasible dual solution and the primal and dual relaxed complementary slackness conditions are satisfied with parameters $\alpha$ and $\beta$, then $x$ approximates the optimal primal (fractional) solution within an approximation bound of $\alpha \beta$, hence $o$ approximates the optimum within the same approximation bound.
The proof size of an APLS for a fitted and locally verifiable OptDGP $\Psi$, designed by means of the primal-dual method, is the maximum number of bits required to encode the vector of dual variables mapped to a node $v$. We aim for schemes that minimize the range of possible values assigned by the prover to a dual variable. In particular, for OptDGPs where the number of primal constraints mapped to each node is bounded by a constant, this results in an APLS whose proof size is logarithmic in the size of that range.
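As an illustration of the four node-local checks, the following hedged sketch specializes them to the LP relaxation of minimum vertex cover (primal variables $x_v$ on nodes, dual variables $y_e$ on edges); this concrete LP is our choice for exposition, not the paper's general formulation.

```python
# Node v sees exactly the constraints and variables mapped to it and to its
# incident edges, so all four checks of the primal-dual method are local.
# x maps nodes to primal values, y maps edge tuples to dual values,
# incident_edges lists the edges (u, w) incident on v.

def primal_dual_local_check(v, x, y, incident_edges, alpha, beta):
    # (1) primal feasibility of the covering constraints visible at v
    for (u, w) in incident_edges:
        if x[u] + x[w] < 1:
            return False
    # (2) dual feasibility of v's packing constraint
    if sum(y[e] for e in incident_edges) > 1:
        return False
    # (3) primal relaxed slackness: x_v > 0 forces near-tight dual mass at v
    if x[v] > 0 and sum(y[e] for e in incident_edges) < 1 / alpha:
        return False
    # (4) dual relaxed slackness: y_e > 0 forces a near-tight primal edge
    for (u, w) in incident_edges:
        if y[(u, w)] > 0 and x[u] + x[w] > beta:
            return False
    return True
```

If this check passes at every node, the global primal and dual solutions are feasible and satisfy the relaxed slackness conditions, which is exactly the acceptance condition described above.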
In Section 5, we present APLSs that are obtained using the primal-dual method. We note that for all these APLSs, both the prover and the verifier run in polynomial sequential time, thus yielding sequentially efficient APLSs.
3.2 The Verifiable Centralized Approximation Method
Consider some OptDGP . We say that is identified if the input assignment encodes a unique id represented using bits at each node (possibly among other input components) for every IO graph .
We say that $\Psi$ is decomposable if there exists a function $g$, often referred to as a decomposition function, such that $f(G, i, o) = \sum_{v \in V} g(i(v), o(v))$ for every IO graph $(G, i, o)$ (cf. the notion of semigroup functions in [KKP10]). Given input and output assignments $i$ and $o$, let $g(i, o)$ denote the sum of the decomposition function values over all nodes. Notice that the decomposition function is well defined for all bit string pairs; in particular, the definition of $g$ does not require that the output assignment $o$ is a feasible solution for the graph $G$ and the input assignment $i$.
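As a toy illustration (our example, not taken from the paper), a decomposition function for a node-weighted selection objective, where each node contributes its weight exactly when it is selected:

```python
# Decomposition function: a node's contribution depends only on its own local
# input (here, its weight) and local output (here, a 0/1 selection flag).
def g(local_input, local_output):
    return local_input * local_output

# The global objective is recovered as the sum of the node-local values.
def decomposed_objective(inputs, outputs):
    return sum(g(inputs[v], outputs[v]) for v in inputs)
```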
Let $\Psi$ be a decomposable MinDGP (resp., MaxDGP) with a decomposition function $g$. Given a legal input graph $(G, i)$ and a parameter $\rho \ge 1$, we say that a (not necessarily feasible) output assignment $o$ is a $\rho$-decomposable approximation for $G$ and $i$ if $\mathrm{OPT}(G, i) \le g(i, o) \le \rho \cdot \mathrm{OPT}(G, i)$ (resp., $\mathrm{OPT}(G, i) / \rho \le g(i, o) \le \mathrm{OPT}(G, i)$).
Fix some identified decomposable MinDGP (resp., MaxDGP) $\Psi$ with a decomposition function $g$. The verifiable centralized approximation (VCA) method facilitates the design of an ADPLS for $\Psi$ whose goal is to determine, for a given legal input graph $(G, i)$ and some parameter $k$, whether every output assignment yields an objective value of at least $k$ (resp., at most $k$) or there exists an output assignment whose objective value falls below (resp., above) the relaxed threshold. The ADPLSs designed by means of the VCA method are composed of two verification tasks, namely, the approximation task and the comparison task, so that the verifier accepts if and only if both verification tasks accept. The label assigned by the prover to each node is composed of two fields, serving the approximation task and the comparison task, respectively.
In the approximation task, the prover runs a centralized algorithm $\mathcal{A}$ that is guaranteed to produce a decomposable approximation $o'$ for the graph $G$ and input assignment $i$. The first field of the label assigned by the prover to each node $v$ consists of both $o'(v)$ and a proof that the output assignment $o'$ is indeed the outcome of the centralized algorithm $\mathcal{A}$. The correctness requirement for this task is defined so that the verifier accepts if and only if this field encodes an output assignment that can be obtained using $\mathcal{A}$.
The purpose of the comparison task is to verify that $g(i, o') \ge k$ (resp., $g(i, o') \le k$), where $g$ is the decomposition function associated with the (decomposable) MinDGP (resp., MaxDGP) and $o'$ is the output assignment encoded in the first fields of the labels assigned to the nodes. This is done by means of the comparison schemes presented in Section 2.
The correctness of the ADPLS for the MinDGP (resp., MaxDGP) $\Psi$ and the integer $k$ is established as follows. If $\mathrm{OPT}(G, i) \ge k$ (resp., $\mathrm{OPT}(G, i) \le k$), then the first field of the label assigned by the prover to each node encodes the output assignment $o'$ generated by the algorithm $\mathcal{A}$. This means that $o'$ is a decomposable approximation, thus $g(i, o') \ge k$ (resp., $g(i, o') \le k$) and the verifier accepts. On the other hand, if $\mathrm{OPT}(G, i)$ falls below (resp., above) the relaxed threshold, then for any decomposable approximation $o'$, it holds that $g(i, o') < k$ (resp., $g(i, o') > k$), hence the verifier rejects for any label assignment.
The proof size of the ADPLS designed via the VCA method is the maximum size of a label assigned by the prover for a given input graph such that (resp., ). As discussed in Section 2, it is guaranteed that the fields are represented using bits, where is an upper bound on the number of bits required to represent a value for any , and is the decomposable approximation generated by the prover in the approximation task. In Section 5, we develop ADPLSs whose fields are also represented using bits. Moreover, the OptDGPs we consider admit some fixed parameter (typically an upper bound on the weights in the graph) such that which results in a proof size of .
A desirable feature of the ADPLSs we develop in Section 5 is that the centralized algorithms employed in the approximation task are efficient, hence the prover runs in polynomial time. Since the (sequential) runtime of the verifier is also polynomial, it follows that all our ADPLSs are sequentially efficient.
4 Reductions Between APLSs and ADPLSs
4.1 From an ADPLS to an APLS
Consider an identified decomposable MinDGP (resp., MaxDGP) with a decomposition function . Let and be the proof sizes of a feasibility scheme for and an ADPLS for , respectively. We establish the following lemma.
Lemma 4.1.
There exists an APLS for with a proof size of , where is the maximal number of bits required to represent for any .
Proof.
Observe that if $o$ is known to be a feasible solution for $G$ and $i$, then the correctness requirements of an APLS for the MinDGP (resp., MaxDGP) $\Psi$ are equivalent to those of an ADPLS for $\Psi$ and $k = f(G, i, o)$. That is, for a given IO graph $(G, i, o)$, if $\mathrm{OPT}(G, i) = f(G, i, o)$, then $o$ is an optimal solution for $G$ and $i$, which requires the verifier of an APLS to accept; if $\mathrm{OPT}(G, i)$ is smaller (resp., larger) than $f(G, i, o)$ by more than the approximation factor, then $o$ is far from being optimal for $G$ and $i$, which requires the verifier of an APLS to reject.
The design of an APLS for is thus enabled by taking the label assigned by the prover to each node to be , where is the bit label assigned to by the prover of the feasibility scheme for ; (note that all nodes are assigned with the same field); is the label constructed in the comparison scheme (resp., the comparison scheme) presented in Section 2; and is the bit label of an ADPLS for and . This label assignment allows the verifier to verify that (1) is a feasible solution for and ; (2) (resp., ) for each ; and (3) the verifier of an ADPLS for and accepts the input graph . ∎
Consider the OptDGPs presented in Section 5 in the context of an ADPLS with a proof size of . We note that these OptDGPs admit sequentially efficient feasibility schemes with a proof size of . Specifically, for minimum weight vertex cover and minimum weight dominating set a proof size of bit suffices; for metric traveling salesperson, a feasibility scheme requires verifying that a given solution is a Hamiltonian cycle which can be done efficiently with a proof size of [GS16]; and the feasibility scheme for minimum metric Steiner tree requires verifying that a given solution is a tree that spans all nodes of a given set which can be done efficiently with a proof size of [KKP10]. Since their objective functions are simply sums of weights, these OptDGPs also admit natural decomposition functions whose images can be represented using bits assuming that is a feasible output assignment. Put together with Lemma 4.1, we get that for each sequentially efficient ADPLS presented in Section 5, there exists a corresponding sequentially efficient APLS with a proof size of .
4.2 From an APLS to an ADPLS
Consider an identified, locally verifiable, and fitted OptDGP with the mappings and that are associated with its LP relaxation whose matrix form is given by the variable vector and coefficient matrix and vectors , , and . Define for each and let . Let be the maximal number of bits required to represent for any . Let and let be the proof size of an APLS for produced by the primal-dual method. We obtain the following lemma.
Lemma 4.2.
There exists an ADPLS for with a proof size of .
Proof.
We construct an ADPLS for the MinDGP (resp., MaxDGP) by means of the VCA method. Recall that an APLS for established by means of the primal-dual method is defined so that the labels encode a feasible dual solution that satisfies (resp., ). Define and (resp., ) for each . The prover sets the sublabel associated with the approximation task for each node , which allows the verifier to verify that is a feasible dual solution.
For the correctness of this scheme, it suffices to show that is a decomposable approximation for and (with respect to the decomposition function ). Note that is defined so that it satisfies (resp., ); and weak duality implies that (resp., ). It follows that is a decomposable approximation for and since (resp., ). ∎
Observe that for the minimum edge cover problem presented in Section 5.1 it holds that , ; and for the maximum matching problem presented in Section 5.2 it holds that , . These allow us to obtain the following results: (1) an ADPLS for minimum edge cover in graphs of odd-girth with a proof size of based on Theorem 5.3; (2) a DPLS for minimum edge cover in bipartite graphs with a proof size of based on Theorem 5.11; (3) an ADPLS for maximum matching in graphs of odd-girth with a proof size of based on Theorem 5.14; and (4) a DPLS for maximum matching in bipartite graphs with a proof size of based on Theorem 5.17.
5 Bounds for Concrete OptDGPs
5.1 Minimum Edge Cover
Given a graph , an edge cover is a subset of edges such that every node is incident on at least one edge in . A minimum edge cover is an edge cover of minimal size.
Given an edge cover in graph , a node is said to be tight if it is incident on exactly one edge ; otherwise it is said to be loose. An interchanging path is a simple path between a loose node and a node that satisfies (1) ; and (2) for all . We define to be the length of a shortest interchanging path ending in , defined to be if no such path exists, for each . In particular, if and only if is loose.
Lemma 5.1.
Given an edge cover and a node , if is odd, then for any node , it holds that .
Proof.
Let be an interchanging path of length ending in . Clearly, if a node precedes in , then it follows that ; otherwise (since is tight) which means that the path is an interchanging path and thus . ∎
An inflating path is an interchanging path between two loose nodes , , such that .
Lemma 5.2.
If is a minimum edge cover in a graph , then there are no inflating paths in .
Proof.
Assume towards a contradiction that is a minimum edge cover and there exists an inflating path between two loose nodes . Let and let . The edge set is an edge cover (since and are loose in ) that satisfies which contradicts being a minimum edge cover. ∎
Theorem 5.3.
For every , there exists a sequentially efficient APLS for minimum edge cover in graphs of odd-girth at least with a proof size of bits.
Proof.
We provide an APLS by means of the primal-dual method. Consider the following LP relaxation for the minimum edge cover problem:
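As a reading aid, the standard LP relaxation of minimum edge cover (our assumption about the program intended here) is:

```latex
% One variable x_e per edge, one covering constraint per node.
\begin{aligned}
\text{minimize}   \quad & \sum_{e \in E} x_{e} \\
\text{subject to} \quad & \sum_{e \ni v} x_{e} \;\ge\; 1 && \forall\, v \in V \\
                        & x_{e} \;\ge\; 0                && \forall\, e \in E
\end{aligned}
```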