1 Introduction
Greedy algorithms are among the first classes of algorithms studied in an undergraduate computer science curriculum. They are among the simplest and fastest algorithms for a given optimization problem, often achieving a reasonably good approximation ratio, even when the problem is NP-hard. In spite of their importance, the notion of a greedy algorithm is not well defined. This might be satisfactory for studying upper bounds: when an algorithm is suggested, it does not matter much whether everyone agrees that it is greedy or not. However, lower bounds (inapproximation results) require a precise definition. Perhaps giving a precise definition capturing all greedy algorithms is not possible, since one can provide examples that seem to be outside the scope of any given model.
Setting this philosophical question aside, we follow the model of greedy-like algorithms due to Borodin, Nielsen, and Rackoff [8]. The fixed priority model captures the observation that many greedy algorithms work by first sorting the input items according to some priority function and then, during a single pass over the sorted input, making online irrevocable decisions for each input item. This model is similar to the online algorithm model with an additional preprocessing step that sorts the inputs. Of course, if any sorting function were allowed, this would trivialize the model for most applications. Instead, a total ordering on the universe of all possible input items is specified before any input is seen, and the sorting is done according to this ordering, after which the algorithm proceeds as an online algorithm. This model has been adopted with respect to a broad array of topics [20, 2, 16, 12, 19, 5, 7, 3]. In spite of the appeal of the model, there are relatively few lower bounds in this model. There does not seem to be a general method for proving lower bounds; that is, the adversary arguments tend to be ad hoc. In addition, the basic priority model does not capture the notion of side information. The assumption that an algorithm does not know anything about the input is quite pessimistic in practice. This issue has been addressed recently in the area of online algorithms by considering models with advice (see [9] for an overview). In these models, side information, such as the number of input items or the maximum weight of an item, is computed by an all-powerful oracle and is available to an algorithm before it sees any of the input. This information is then used to make better online decisions. The goal is to study tradeoffs between the advice length and the competitive ratio.
We introduce a general technique for establishing lower bounds on priority algorithms with advice. These algorithms are a simultaneous generalization of priority algorithms and online algorithms with advice. Our technique is inspired by the recent success of the binary string guessing problem and reductions in the area of online algorithms with advice. We identify a difficult problem (Pair Matching) that can be thought of as a sorting-resistant version of the binary string guessing problem. Then, we describe a template for gadget reductions from Pair Matching to other problems in the world of priority algorithms with advice. This part turns out to be challenging, mostly because one has to ensure that priorities are respected by the reduction. We then apply the template to a number of classic optimization problems. We restrict our attention to the fixed priority model. We also note that we consider deterministic algorithms unless otherwise specified.
Related model. Fixed priority algorithms with advice can be viewed in terms of the fixed priority backtracking model of Alekhnovich et al. [1]. That model starts by ordering the inputs using a fixed priority function and then executes a computation tree where different decisions can be tried for the same input item by branching in the tree, choosing the best result in the end. The lower bound results generally consider how much width (the maximum number of nodes at any fixed depth in the tree) is necessary to obtain optimality. In contrast, our results give a parameterized tradeoff between the number of advice bits and the competitive ratio. However, given an algorithm in the fixed priority backtracking model, the logarithm of the width gives an upper bound on the number of bits of advice needed for the same approximation ratio. Similarly, a lower bound on the advice complexity gives a lower bound on the width.
Organization. We give a formal description of the models in Section 2. We motivate the study of the priority model with advice in Section 3. We introduce and analyze the Pair Matching problem in Section 4. We describe the reduction framework for obtaining lower bounds in Section 5 and apply it to classic problems in Section 6. We conclude in Section 7.
2 Preliminaries
We consider optimization problems for which we are given an objective function to minimize or maximize, and measure our success relative to an optimal offline algorithm.
Online Algorithms with Advice. In an online setting, the input is revealed one item at a time by an adversary. An algorithm makes an irrevocable decision about the current item before the next item is revealed. For more background on online algorithms, we refer the reader to the texts by Borodin and El-Yaniv [6] and Komm [15].
The assumption that an online algorithm does not know anything about the input is quite often too pessimistic in practice. Depending on the application domain, the algorithm designer may have access to knowledge about the number of input items, the largest weight of an input item, some partial solution based on historical data, etc. The advice tape model for online algorithms captures the notion of side information in a purely information-theoretic way as follows. An all-powerful oracle that sees the entire input prepares an infinite advice tape of bits, which are available to the algorithm during the entire process. The oracle and the algorithm work in a cooperative mode: the oracle knows how the algorithm will use the bits and is trying to maximize the usefulness of the advice with regard to optimizing the given objective function. The advice complexity of an algorithm is a function of the input length and is the number of bits read by the algorithm in the worst case for inputs of a given size. For more background on online algorithms with advice, see the survey by Boyar et al. [9].
Fixed Priority Model with Advice. Fixed priority algorithms can be formulated as follows. Let $U$ be a universe of all possible input items. An input to the problem consists of a finite set of items from $U$ satisfying some consistency conditions. The algorithm specifies a total order on $U$ before seeing the input. Then, the input items are revealed according to the total order specified by the algorithm. The algorithm makes irrevocable decisions about the items as they arrive. (In the adaptive priority model, the algorithm is allowed to specify a new ordering, depending on previous items and decisions, before each new input item is presented.) The overall set of decisions is then evaluated according to some objective function. The performance of the algorithm is measured by the asymptotic approximation ratio with respect to the value obtained by an optimal offline algorithm. The notion of advice is added to the model as follows. After the algorithm has chosen a total order on $U$, an all-powerful oracle that has access to the entire input creates a tape of infinitely many bits. The algorithm knows how the advice bits are created and has access to them during the online decision phase. Our interest is in how many bits of advice the algorithm uses compared with the result it obtains.
We consider only countable universes $U$. In this case, having a total order on the elements of $U$ is equivalent (via a simple inductive argument) to having a priority function $P \colon U \to \mathbb{R}$. The assumption that the universe is countable is natural, but also necessary for this equivalence: there are uncountably many totally ordered sets that do not embed into the reals with the standard order.
Definition 2.1
Let $U$ be the universe of input items and let $P \colon U \to \mathbb{R}$ be a priority function. For $u, v \in U$, we write $u \succ v$ to mean $P(u) > P(v)$. We say that larger priority means that the item appears earlier in the input, i.e., $u \succ v$ means that $u$ appears before $v$ when the input is ordered according to $P$.
Example. Kruskal’s optimal algorithm for the minimum spanning tree problem is a fixed priority algorithm without advice. The universe of items consists of triples $(u, v, w)$, where such an item represents an edge of weight $w$ between a vertex $u$ and a vertex $v$. The consistency condition on the input is that each edge can be present at most once in the input. The total order on the universe is specified by all items of smaller weight having higher priority than all items of larger weight, breaking ties, say, by lexicographic order on the names of the vertices. Kruskal’s algorithm processes input items in the given order and greedily accepts those items that do not create cycles.
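The example above can be sketched in code. The following is a minimal illustration (not from the paper) of Kruskal's algorithm phrased as a fixed priority algorithm: items are edges represented as triples, the priority function is "smaller weight first" with lexicographic tie-breaking, and the single online pass makes an irrevocable accept/reject decision per edge.

```python
def kruskal_fixed_priority(items):
    """items: iterable of (w, u, v) edges; returns the total MST weight.

    Sorting by (w, u, v) realizes the fixed priority order: smaller weight
    first, ties broken lexicographically on vertex names.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(items):          # the fixed priority order
        ru, rv = find(u), find(v)
        if ru != rv:                       # irrevocable "accept": no cycle created
            parent[ru] = rv
            total += w
    return total
```

No advice is consulted; the decision for each item depends only on previously accepted items, as the model requires.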
In this paper, we shall only consider the following input model for graph problems in the priority setting:
Vertex arrival, vertex adjacency: an input item consists of the name of a vertex together with a set of names of adjacent vertices. There is a consistency condition on the entire input: if $u$ appears as a neighbor of $v$, then $v$ must appear as a neighbor of $u$.
Binary String Guessing Problem. Later we introduce the Pair Matching problem that can be viewed as a priority model analogue of the following online binary string guessing problem.
Definition 2.2
The Binary String Guessing Problem [4] with known history (2-SGKH) is the following online problem. The input consists of $n$ bits $b_1, \ldots, b_n$, where $b_i \in \{0, 1\}$. Upon seeing $b_1, \ldots, b_{i-1}$, an algorithm guesses the value of $b_i$. The actual value of $b_i$ is revealed after the guess. The goal is to maximize the number of correct guesses.
Böckenhauer et al. [4] provide a tradeoff between the number of advice bits and the approximation ratio for the binary string guessing problem.
Theorem 2.3
[Böckenhauer et al. [4]] For the 2-SGKH problem and any $0 < \alpha \le \frac{1}{2}$, no online algorithm reading fewer than $(1 - H(\alpha))n$ advice bits can make fewer than $\alpha n$ mistakes for large enough $n$, where $H$ is the binary entropy function.
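Using the standard entropy form of this tradeoff, $(1 - H(\alpha))n$ (our reconstruction of the elided formula, not code from the paper), a small script can evaluate how many advice bits the bound demands for a given mistake fraction:

```python
import math

def binary_entropy(a):
    """H(a) = -a*log2(a) - (1-a)*log2(1-a), with the convention H(0) = H(1) = 0."""
    if a in (0.0, 1.0):
        return 0.0
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

def advice_lower_bound(alpha, n):
    """Bits of advice required to make fewer than alpha*n mistakes,
    per the entropy-form bound (0 < alpha <= 1/2)."""
    return (1 - binary_entropy(alpha)) * n
```

For example, allowing up to a quarter of the guesses to be wrong on a 100-bit string still requires roughly 19 advice bits, while at $\alpha = \frac{1}{2}$ the bound degenerates to zero, matching the fact that random guessing errs on half the bits.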
Competitive and Approximation Ratios. The performance of online algorithms is measured by their competitive ratios. For a minimization problem, an online algorithm $\mathrm{ALG}$ is said to be $c$-competitive if there exists a constant $b$ such that for all input sequences $I$ we have $\mathrm{ALG}(I) \le c \cdot \mathrm{OPT}(I) + b$, where $\mathrm{ALG}(I)$ denotes the cost of the algorithm on $I$ and $\mathrm{OPT}(I)$ is the value achieved by an offline optimal algorithm. The infimum of all $c$ such that $\mathrm{ALG}$ is $c$-competitive is $\mathrm{ALG}$'s competitive ratio. For a maximization problem, $\mathrm{ALG}(I)$ is referred to as profit, and we require that $\mathrm{OPT}(I) \le c \cdot \mathrm{ALG}(I) + b$. In this way, we always have $c \ge 1$, and the closer $c$ is to $1$, the better. Priority algorithms are thought of as approximation algorithms and the term (asymptotic) approximation ratio is used (but the definition is the same).
3 Motivation
In this section we present a motivating example for studying the priority model with advice. We present a problem that is difficult in the pure priority setting or in the online setting with advice, but easy in the priority model with advice. Furthermore, the advice is easily computed by an offline algorithm.
The problem of interest is called Greater Than Mean (GTM). In the GTM problem, the input is a sequence $x_1, \ldots, x_n$ of rational numbers. Let $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ denote the sample mean of the sequence. The goal of an algorithm is to decide for each $x_i$ whether $x_i$ is greater than the mean or not, answering $1$ or $0$, respectively. We also assume that the length $n$ of the sequence is known to the algorithm in advance. We start by noting that there is a trivial optimal priority algorithm with little advice for this problem.
Theorem 3.1
For Greater Than Mean, there exists a fixed priority algorithm reading at most $\lceil \log(n+1) \rceil$ advice bits, solving the problem optimally.

Proof The priority order is such that larger values have higher priority. Thus, the items arrive in order from largest to smallest. The advice specifies the earliest index, in this arrival order, of an item that is not greater than the mean; the algorithm answers $1$ before this index and $0$ from this index on.
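The proof above can be made concrete with a short sketch (notation assumed, not taken from the paper): the oracle computes the cutoff index offline, and the online phase answers by comparing each position to the cutoff.

```python
from fractions import Fraction

def gtm_with_advice(xs):
    """Sketch of the fixed priority algorithm for Greater Than Mean:
    items arrive from largest to smallest, and the advice is the earliest
    position, in that order, holding a value not greater than the mean.
    Returns a list of (item, answer) pairs in arrival order."""
    order = sorted(xs, reverse=True)               # fixed priority: larger values first
    mean = sum(Fraction(x) for x in xs) / len(xs)  # exact rational mean
    # Offline oracle computes the advice: first position with value <= mean.
    advice = next((i for i, x in enumerate(order) if Fraction(x) <= mean),
                  len(order))
    # Online phase: answer 1 ("greater than the mean") exactly before the cutoff.
    return [(x, 1 if i < advice else 0) for i, x in enumerate(order)]
```

The advice is an index in $\{0, \ldots, n\}$, so it fits in $\lceil \log(n+1) \rceil$ bits, and every answer is correct because the items arrive in non-increasing order.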
Next, we show that a priority algorithm without advice has to make many errors. (In Theorem 3.2 and in all of our lower bound advice results, we state the result so as to include the boundary value of the parameter, in which case the conditions of the form “fewer than … advice bits” and “fewer than … mistakes” make the statements vacuously true.)
Theorem 3.2
For Greater Than Mean, no fixed priority algorithm without advice can make fewer than $\frac{n-1}{2}$ mistakes for large enough $n$.

Proof Let $\mathcal{A}$ be a fixed priority algorithm without advice for the GTM problem, and let $P$ be the corresponding priority function. For simplicity, we assume that repeated items must occur consecutively when ordered according to $P$. We show how to remove this assumption in the remark immediately following this proof. Consider the integers in a sufficiently large interval. One of the following two cases must occur:
Case 1: there exists such that and . Consider the behavior of the algorithm on the input where is presented times first. If the algorithm answers on the majority of these requests, then the last element is set to , ensuring that all the answers were incorrect. If the algorithm answers on the majority, then the last element is set to , ensuring that all the answers were incorrect. In either case, the algorithm makes at least mistakes.
Case 2: the priority function on the interval is . Consider the behavior of the algorithm on the input where the first item is and the following items are set to . If an algorithm answers on the majority of the items, then the last item is . Thus, the mean is , ensuring that all the answers on the items with value are incorrect. If an algorithm answers on the majority of the items, then the last item is . Thus, the mean is strictly smaller than , ensuring that all the answers of the algorithm on the items are incorrect. In either case, the algorithm can be made to produce errors on items, which is at least for .
Remark 3.3
Suppose that we allow repeated input items to appear nonconsecutively when ordered according to . Formally, this can be modeled by the universe . The input item consists of a rational number , called the value of an item, and its identification number . Input to the GTM problem is a subset of . The GTM problem is defined entirely in terms of values of input items, and repeated values are distinguished by their . Fix a priority function and choose different items of value , i.e., . Suppose that we have an item of value that is of higher priority than any of the and an item of value that is of lower priority than any of the . Then we can repeat the argument of Case 2 from the proof above.
Otherwise, pick distinct items of value . Call them in the decreasing order of priorities. For items either (a) there is no item of value of higher priority than all of them, or (b) there is no item of value of lower priority than all of them (otherwise, it is covered by the previous case). To handle (a), pick an arbitrary item of value . This item has lower priority than , and, in particular, lower priority than all of . This can be handled similarly to Case 1 in the proof above. Thus, the only scenario left is (b) when there is no item of value of lower priority than all of . Pick arbitrary items of value – they all have priority higher than . Thus, this can again be handled similarly to Case 1 in the proof above.
Finally, we show that an online algorithm requires a lot of advice to achieve good performance for the GTM problem. The proof is a minor modification of a reduction from 2-SGKH to the Binary Separation Problem (see [10] for details). We present the proof in its entirety for completeness.
Theorem 3.4
For the Greater Than Mean problem and any $0 < \alpha \le \frac{1}{2}$, no online algorithm reading fewer than $(1 - H(\alpha))(n-1)$ advice bits can make fewer than $\alpha(n-1)$ mistakes for large enough $n$.

Proof We present a reduction from the 2-SGKH problem to the GTM problem. Let $\mathcal{A}$ be an online algorithm with advice for the GTM problem. Our reduction is presented in Algorithm 1. In the course of the reduction, an online input of length $n$ for the 2-SGKH problem is converted into an online input of length $n+1$ for the GTM problem with the following properties: the number of advice bits is preserved, and for each $i$, our algorithm for 2-SGKH makes a mistake on $b_i$ if and only if $\mathcal{A}$ makes a mistake on the corresponding item. This establishes the theorem.
The reduction uses a technique similar to binary search to make sure that the items corresponding to $1$s are all larger than the items corresponding to $0$s. Then the final item is chosen to make sure that the mean of the entire stream lies between the smallest item corresponding to a $1$ and the largest item corresponding to a $0$. This implies that an item is greater than the mean if and only if the corresponding bit is $1$.
The following invariants are easy to see and are left to the reader.
The required properties of the reduction follow immediately from the invariants. Finally, observe that the last item is chosen so that the mean correctly separates the items corresponding to $1$s from the items corresponding to $0$s.
4 Pair Matching Problem
We introduce an online problem called Pair Matching. The input consists of a sequence $x_1, \ldots, x_n$ of distinct rational numbers strictly between $0$ and $1$. After the arrival of $x_i$, an algorithm has to answer whether there is a $j \neq i$ such that $x_i + x_j = 1$, in which case we refer to $x_i$ and $x_j$ as forming a pair and say that $x_i$ has a matching value, $1 - x_i$. The answer “accept” is correct if such a $j$ exists, and “reject” is correct if it does not. Note that since the $x_i$ are all distinct, if $x_i = \frac{1}{2}$, the correct answer is “reject”, since $x_i$ cannot have a matching value.
We let $k$ denote the number of pairs in the input $x_1, \ldots, x_n$.
4.1 Online Setting
Analyzing Pair Matching in the online setting is relatively straightforward for both deterministic and randomized algorithms.
We start with a simple upper bound achieved by a deterministic online algorithm.
Theorem 4.1
For Pair Matching, there exists a $2$-competitive algorithm, answering correctly on $n - k$ input items, where $n$ is the input length and $k$ is the number of pairs.

Proof The algorithm works as follows: suppose the algorithm has already given answers for items $x_1, \ldots, x_{i-1}$, and a new item $x_i$ arrives. If there is a $j < i$ such that $x_j = 1 - x_i$, then the algorithm answers “accept”. Otherwise, the algorithm answers “reject”. Observe that the algorithm always answers correctly on all items that do not come from pairs. There are $n - 2k$ such items. Moreover, it always answers correctly on exactly half of all items that form pairs; namely, it answers incorrectly on the first item from a given pair and correctly on the second item from the given pair. Thus, the algorithm gives $k$ correct answers in addition to the answers given correctly on items not forming pairs. The total number of correct answers is $n - 2k + k = n - k$. Observe that $k \le \frac{n}{2}$. Thus, this simple online algorithm gives correct answers on at least $\frac{n}{2}$ items, achieving a competitive ratio of at most $2$.
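The simple algorithm from this proof can be sketched as follows (a minimal illustration with assumed notation; exact rationals are used so that the test $x_i + x_j = 1$ is not subject to rounding):

```python
from fractions import Fraction

def pair_matching_online(xs):
    """Deterministic online algorithm for Pair Matching: accept x exactly
    when its matching value 1 - x has already been seen; otherwise reject.
    Returns the number of correct answers."""
    xs = [Fraction(x) for x in xs]
    values = set(xs)
    seen = set()
    correct = 0
    for x in xs:
        answer_accept = (1 - x) in seen                  # the online decision
        truth_accept = (1 - x) in values and 1 - x != x  # x really is part of a pair
        correct += (answer_accept == truth_accept)
        seen.add(x)
    return correct
```

On an input with $n$ items and $k$ pairs, the count returned is exactly $n - k$: every unpaired item is answered correctly, and for each pair only the first element is missed.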
Next, we show that the above upper bound is actually tight.
Theorem 4.2
For Pair Matching, no deterministic online algorithm can achieve a competitive ratio less than $2$.

Proof Let $\mathcal{A}$ be a hypothetical deterministic algorithm for Pair Matching. An adversary keeps track of a current pool of available values $S \subseteq (0,1)$. Initially, $S = (0,1) \setminus \{\frac{1}{2}\}$. The adversary picks an arbitrary number $x \in S$ as the first input item. Depending on how $\mathcal{A}$ answers on $x$, there are two cases.
Case 1: If the algorithm answers “reject” on the adversary’s item $x$, then the adversary picks $1 - x$ as the next input item. One can assume that the algorithm answers correctly on $1 - x$. Then, the adversary removes $x$ and $1 - x$ from the pool $S$ and proceeds.
Case 2: If the algorithm answers “accept” on $x$, then the adversary removes $x$ and $1 - x$ from the pool $S$ (thus, the matching value $1 - x$ is never given) and proceeds.
Observe that in Case 1 the algorithm makes mistakes on half of the subinput corresponding to that case. In Case 2, removing $x$ and $1 - x$ from the pool ensures that $x$ is not part of a pair in the input. Thus, the algorithm makes mistakes on the entire subinput corresponding to Case 2.
Next, we analyze randomized online algorithms for Pair Matching. A modification of the simple deterministic algorithm results in a better competitive ratio.
Theorem 4.3
For Pair Matching, there exists a randomized online algorithm that in expectation answers correctly on at least $\frac{2}{3}n$ input items.

Proof Let $q \in [0, 1]$ be a parameter to be specified later. Intuitively, $q$ denotes the probability with which our algorithm is going to answer “reject” on input items that are not obviously part of a pair. More specifically, suppose that the algorithm has already given answers for items $x_1, \ldots, x_{i-1}$, and a new item $x_i$ arrives. If there is a $j < i$ such that $x_j = 1 - x_i$, then the algorithm answers “accept”. Otherwise, the algorithm answers “reject” with probability $q$. We can analyze the performance of the algorithm by analyzing the following three groups of input items:

Input items that are not part of a pair: there are $n - 2k$ such input items, and the algorithm answers correctly on $q(n - 2k)$ of them in expectation.

Input items that are the first of a pair: there are $k$ such input items, and the algorithm answers correctly on $(1 - q)k$ of them in expectation.

Input items that are the last of a pair: there are $k$ such input items, and the algorithm answers correctly on all of them.

Thus, in expectation the algorithm gives correct answers on $q(n - 2k) + (1 - q)k + k = qn + (2 - 3q)k$ items. Observe that as long as $q \ge \frac{2}{3}$, we can use the bound $k \le \frac{n}{2}$ to derive a lower bound of $\left(1 - \frac{q}{2}\right)n$ on the number of correct answers, and the largest value, $\frac{2}{3}n$, is attained for $q = \frac{2}{3}$. Values of $q$ less than $\frac{2}{3}$ give poorer results for the case when there are no pairs.
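The case analysis above reduces to a one-line expectation, which can be checked mechanically (the symbol $q$ for the rejection probability is our reconstruction of the elided notation):

```python
from fractions import Fraction

def expected_correct(n, k, q):
    """Expected number of correct answers for the randomized algorithm that
    rejects a not-yet-matched item with probability q, on an input with n
    items of which k form pairs: q*(n - 2k) unpaired items, (1 - q)*k first
    elements of pairs, plus k second elements, which are always correct."""
    return q * (n - 2 * k) + (1 - q) * k + k
```

At $q = \frac{2}{3}$ the coefficient of $k$ vanishes, so the expectation is $\frac{2}{3}n$ regardless of how many pairs the adversary includes; this is exactly why that value of $q$ is optimal.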
Next, we show that the above algorithm is an optimal randomized algorithm for Pair Matching.
Theorem 4.4
For Pair Matching, no randomized online algorithm can achieve a competitive ratio less than $\frac{3}{2}$.

Proof Let $\mathcal{A}$ be a hypothetical randomized algorithm for Pair Matching. An adversary keeps track of a current pool of available values $S \subseteq (0,1)$. Initially, $S = (0,1) \setminus \{\frac{1}{2}\}$. The adversary picks an arbitrary number $x \in S$ as the first input item. Let $p$ be the probability that $\mathcal{A}$ answers “reject” on $x$. Depending on the value of $p$, there are two cases.
Case 1: if $p \ge \frac{2}{3}$, then the adversary picks $1 - x$ as the next input item. One can assume that the algorithm answers correctly on $1 - x$. Then, the adversary removes $x$ and $1 - x$ from the pool and proceeds.
Case 2: if $p < \frac{2}{3}$, then the adversary removes $x$ and $1 - x$ from the pool and proceeds.
Observe that in Case 1, the algorithm is given two input items and it answers correctly on $(1 - p) + 1 = 2 - p \le \frac{4}{3}$ input items in expectation. Thus, the expected fraction of correct answers is at most $\frac{2}{3}$.
In Case 2, removing $x$ and $1 - x$ from the pool ensures that $x$ is not part of a pair in the input. Thus, the algorithm answers correctly on less than $\frac{2}{3}$ of the subinput corresponding to this case in expectation.
Lastly, we prove that online algorithms need a lot of advice in order to achieve a competitive ratio approaching $1$ for Pair Matching.
Theorem 4.5
For Pair Matching and any $0 < \alpha \le \frac{1}{2}$, no deterministic online algorithm reading fewer than $(1 - H(\alpha))\frac{n}{2}$ advice bits can make fewer than $\alpha \frac{n}{2}$ mistakes for large enough $n$.

Proof We prove the statement by a reduction from the 2-SGKH problem. Let $\mathcal{A}$ be an online algorithm solving Pair Matching. Fix an arbitrary infinite sequence $y_1, y_2, \ldots$ of distinct numbers from $(0, \frac{1}{2})$.
Let $b_1, \ldots, b_n$ be the input to 2-SGKH. The online reduction works as follows. Suppose that we have already processed $b_1, \ldots, b_{t-1}$ and we have to guess the value of $b_t$. We query the Pair Matching algorithm $\mathcal{A}$ on $y_t$, the $t$-th number of the fixed sequence. If $\mathcal{A}$ answers that $y_t$ is part of a pair, then the reduction algorithm predicts $1$; otherwise, it predicts $0$. Then the actual value of $b_t$ is revealed. If the actual value is $1$, then the reduction algorithm feeds $1 - y_t$ as the next input item to $\mathcal{A}$. We assume that $\mathcal{A}$ answers correctly on $1 - y_t$ in this case. If the actual value of $b_t$ is $0$, the algorithm proceeds to the next step.
Note that the number of mistakes that the reduction algorithm makes is exactly equal to the number of mistakes that the Pair Matching algorithm makes. The statement of the theorem follows by observing that the input to the Pair Matching algorithm is of length at most $2n$.
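The reduction can be sketched directly (our reconstruction of its bookkeeping, with illustrative choices of the fresh numbers $y_t$; the Pair Matching algorithm is passed in as a callable returning accept/reject):

```python
from fractions import Fraction

def guess_bits_via_pair_matching(bits, pm_algorithm):
    """Sketch of the online reduction from 2-SGKH to Pair Matching: to guess
    bit b_t, feed a fresh number y_t to the Pair Matching algorithm and
    predict 1 iff it answers accept (True); if b_t actually is 1, also feed
    the matching value 1 - y_t, on which the algorithm is assumed to answer
    correctly. Returns the number of wrong guesses."""
    ys = [Fraction(1, t + 3) for t in range(len(bits))]  # distinct numbers in (0, 1/2)
    mistakes = 0
    for b, y in zip(bits, ys):
        prediction = 1 if pm_algorithm(y) else 0
        mistakes += (prediction != b)
        if b == 1:
            pm_algorithm(1 - y)  # second element of the pair
    return mistakes
```

For instance, driving the naive "accept only after the match was seen" algorithm through this reduction makes it predict $0$ on every bit, so the reduction errs exactly on the $1$s, mirroring the mistake-preservation argument above.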
4.2 Priority Setting
In this section, we show that Theorem 4.5 also holds in the priority setting. The proof becomes a bit more subtle, so we give it in full detail.
Theorem 4.6
For Pair Matching and any $0 < \alpha \le \frac{1}{2}$, no fixed priority algorithm reading fewer than $(1 - H(\alpha))\frac{n}{2}$ advice bits can make fewer than $\alpha \frac{n}{2}$ mistakes for large enough $n$.

Proof We prove the statement by a reduction from the online problem 2-SGKH. Let $\mathcal{A}$ be a priority algorithm solving Pair Matching, and let $P$ be the corresponding priority function. (Note that we assume that the reduction knows $P$; this is the case in all of our priority algorithm reductions.) The reduction follows the proof of Theorem 4.5 closely. The idea is to transform the online input to 2-SGKH into an input to Pair Matching. The difficulty arises from having to present the transformed input in an online fashion while respecting the priority function $P$.
Let $b_1, \ldots, b_n$ be the input to 2-SGKH. The online reduction works as follows. The reduction picks $n$ distinct numbers $y_1, \ldots, y_n$ from $(0, \frac{1}{2})$ and creates a list $L$ consisting of the $y_t$ and their matching values $1 - y_t$, sorted according to $P$. The reduction keeps a (max-heap ordered) priority queue $Q$ of elements from $L$ as well as a subsequence $R$ of $L$. The reduction always queries the first element of $R$; we maintain the invariant that the matching value of this element appears later in $L$ according to $P$. If needed, the reduction algorithm will enter the matching value into $Q$ to be simulated as an input to the Pair Matching algorithm at the right time later on.
Initialization. Initially, the priority queue $Q$ is empty and the subsequence $R$ is the entire sorted list $L$. Before the value $b_1$ arrives, the reduction feeds the first element $z_1$ of $R$ to the Pair Matching algorithm. If that algorithm answers that $z_1$ is part of a pair, then the reduction predicts $1$; otherwise it predicts $0$. Then the reduction updates $R$ by deleting $z_1$ and its matching value $1 - z_1$. Then $b_1$ is revealed. If the actual value of $b_1$ is $1$, the reduction inserts $1 - z_1$ into $Q$; otherwise it does not modify $Q$.
Middle step. Suppose that the reduction has processed $b_1, \ldots, b_{t-1}$ and has to guess the value of $b_t$. The reduction picks the first element $z_t$ of the subsequence $R$. While the top element of the priority queue $Q$ has higher priority than $z_t$ according to $P$, the reduction deletes that element from the priority queue and feeds it to the Pair Matching algorithm. Then, the reduction feeds $z_t$ to that algorithm. The next steps are similar to the initialization case. If the algorithm answers that $z_t$ is part of a pair, then the reduction predicts $1$; otherwise it predicts $0$. The reduction updates $R$ by deleting $z_t$ and its matching value $1 - z_t$. Then $b_t$ is revealed. If the actual value of $b_t$ is $1$, the reduction inserts $1 - z_t$ into $Q$; otherwise it does not modify $Q$.
Post-processing. After the reduction finishes processing $b_n$, it feeds the remaining elements of the priority queue (in priority order) to the Pair Matching algorithm.
It is easy to see that the reduction feeds a subsequence of $L$ to the Pair Matching algorithm in the correct order according to $P$. In addition, the reduction makes exactly the same number of mistakes as that algorithm (assuming that it always answers correctly on the second element of a pair). The statement of the theorem follows since the size of the input to the Pair Matching algorithm is at most $2n$.
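One possible concrete realization of this queue-based bookkeeping is sketched below (our reconstruction; the priority function and the choice of the numbers are illustrative assumptions). The key point the sketch demonstrates is that the items reach the Pair Matching algorithm in priority order even though the bits arrive online:

```python
import heapq
from fractions import Fraction

def feed_in_priority_order(bits, priority, feed):
    """Sketch of the priority-respecting reduction: pair up (y, 1 - y),
    query the higher-priority element of each pair in priority order, and
    hold the partner in a max-heap until every higher-priority query has
    been made. `feed` receives the Pair Matching items in priority order."""
    ys = [Fraction(1, t + 3) for t in range(len(bits))]  # distinct, in (0, 1/2)
    # For each pair, the higher-priority element is the query; its partner waits.
    pairs = {max(y, 1 - y, key=priority): min(y, 1 - y, key=priority) for y in ys}
    queries = sorted(pairs, key=priority, reverse=True)  # the subsequence R
    heap = []                                            # the priority queue Q
    for b, q in zip(bits, queries):
        while heap and -heap[0][0] > priority(q):        # flush higher-priority partners
            feed(heapq.heappop(heap)[1])
        feed(q)                                          # the query element itself
        if b == 1:                                       # bit 1: partner must arrive later
            heapq.heappush(heap, (-priority(pairs[q]), pairs[q]))
    while heap:                                          # post-processing
        feed(heapq.heappop(heap)[1])
```

Since Python's `heapq` is a min-heap, the priority is negated to obtain max-heap behavior; with the identity priority, the fed sequence comes out in strictly decreasing order, as the invariant requires.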
5 Reduction Template
Our template is restricted to binary decision problems, since the goal is to derive inapproximation results based on the Pair Matching problem. (See also the discussion in Section 6.2.) In reducing from Pair Matching to a problem $\Pi$, we assume that we have a priority algorithm with advice for $\Pi$, with priorities defined by a priority function $P_\Pi$. Based on this algorithm and $P_\Pi$, we define a priority algorithm with advice and a priority function for the Pair Matching problem. Input items to Pair Matching arrive in an order specified by the priority function we define, based on $P_\Pi$. We assume that we are informed when the input ends and can take steps at that point to complete our computation. Knowing the size of the input, which one naturally would in many situations after the initial sorting, would of course be sufficient.
Based on the input to the Pair Matching problem, we create input items for the target problem, and they have to be presented to the assumed algorithm for that problem in the order determined by its priority function. Responses from that algorithm are then used to help our algorithm answer “accept” or “reject” for its current item $x_i$. Actually, our algorithm will always answer correctly for a request $x_i$ with $x_i \ge \frac{1}{2}$, so the responses are only used when this is not the case. The main challenge is to ensure that the constructed input items are presented in the order determined by the target problem’s priority function, because the decision as to whether or not they are presented needs to be made in time, without knowing whether or not the matching value $1 - x_i$ will arrive.
Here, we give a high-level description of a specific kind of gadget reduction. A gadget for the target problem is simply some constant-sized instance, i.e., a collection of input items that satisfy the consistency condition for that problem. For example, for a graph problem in the vertex arrival, vertex adjacency model, a gadget could be a constant-sized graph, and the universe then contains all possible pairs of the form: a vertex name coupled with a list of possible neighboring vertex names. Note that each possible vertex name appears in many different input items, because it can be coupled with many different possible lists of vertex names. The consistency condition must apply to the actual input chosen, so for each vertex name $u$ which is listed as a neighbor of $v$, it must be the case that $v$ is listed as a neighbor of $u$.
The gadgets used in a reduction will be created in pairs (the gadgets in a pair may be isomorphic to each other, i.e., the same up to renaming), one pair for each input item $x_i \le \frac{1}{2}$ (for $x_i = \frac{1}{2}$, the gadget will only be used to assign a priority to $x_i$). One gadget from the pair is presented when the matching value $1 - x_i$ appears later in the input, and the other gadget when it does not. Using fresh names in the constructed input items, we ensure that each input item $x_i \le \frac{1}{2}$ for the Pair Matching problem has its own collection of input items for its gadgets. The pair of gadgets associated with an input item $x_i$ can be written as a pair of constant-sized instances sharing the same universe of input items.
For a gadget pair, we write its first item to mean the first item according to the target problem’s priority function from the universe of input items for that pair, i.e., the highest priority item. For now, assume that the algorithm for the target problem responds “accept” or “reject” to any possible input item. This captures problems such as vertex cover, independent set, clique, etc.
For each $x_i$, the gadget pair satisfies two conditions: the first item condition and the distinguishing decision condition. The first item condition says that the first input item, according to the target problem’s priority function, gives no information about which gadget of the pair it belongs to. To accomplish this, the two gadgets of a pair share the same universe of input items, and hence the same first item, and the priority assigned to $x_i$ in the Pair Matching problem is defined to be the priority of that first item. The distinguishing decision condition says that the decision with regard to the first item that results in the optimal value of the objective function in one gadget is different from the decision that results in the optimal value of the objective function in the other gadget. This explains why the one gadget is presented when the matching value $1 - x_i$ appears later in the input sequence and the other when it does not.
Now that the first item of the gadget pair associated with $x_i$ is defined, the remaining actual input items in the gadget pair for $x_i$ must be completely defined according to the distinguishing decision condition. This gives two sets of input items, overlapping at least in the first item. The item with highest priority among all of the items in the actual gadget pair, ignoring the first item, serves to assign the priority of $1 - x_i$ in the Pair Matching problem. Thus, we guarantee the following properties: $x_i$ will arrive before $1 - x_i$ in the input sequence for Pair Matching; the corresponding gadget items arrive for the target algorithm at the corresponding times; the target algorithm’s response for the first item can define the response to $x_i$; and the decision as to which gadget in the pair is presented can be made at the time $1 - x_i$ arrives, or when it can be determined that it will not arrive (because either the input sequence ended or an item with lower priority than $1 - x_i$ arrived).
To warm up, we start with an example reduction from Pair Matching to a somewhat artificial problem. This reduction then serves as a model for the general reduction template.
5.1 Example: Triangle Finding
Consider the following priority problem in the vertex arrival, vertex adjacency model: for each vertex $v$, decide whether or not $v$ belongs to some triangle (a cycle of length $3$) in the entire input graph. The answer “accept” is correct if $v$ belongs to some triangle, and otherwise the answer should be “reject”. We refer to this problem as Triangle Finding. This problem might look artificial, and it is optimally solvable offline in polynomial time, but as mentioned above, advice-preserving reductions between priority problems require subtle manipulations of a priority function. The Triangle Finding problem allows us to highlight this issue in a relatively simple setting.
Theorem 5.1
For Triangle Finding on inputs of length $n$ and any $\varepsilon \in (0, \tfrac{1}{2}]$, no fixed priority algorithm reading at most $(1 - H(\varepsilon))\frac{n}{4}$ advice bits can make fewer than $\varepsilon \frac{n}{4}$ mistakes, where $H$ denotes the binary entropy function.

Proof We prove this theorem by a reduction from the Pair Matching problem. Let $\mathcal{A}$ be an algorithm for the Triangle Finding problem, and let $\pi$ be the corresponding priority function. Let $x_1, \ldots, x_n$ be the input to Pair Matching. We define a priority function $\rho$ and a valid input sequence to Triangle Finding. When $x_1, \ldots, x_n$ is presented according to $\rho$ to our priority algorithm $\mathcal{B}$ for Pair Matching, $\mathcal{B}$ is able to construct an input for $\mathcal{A}$, respecting the priority function $\pi$. Moreover, $\mathcal{B}$ will be able to use the answers of $\mathcal{A}$ to answer the queries about $x_1, \ldots, x_n$.
Now, we discuss how to define $\rho$. With each number $i$, we associate four unique vertices $a_i$, $b_i$, $c_i$, and $d_i$. The universe consists of all input items of the form $(u, (v, w))$ with $u$, $v$, and $w$ distinct vertices among $a_i, b_i, c_i, d_i$; there are $24$ input items for each $i$: $4$ possibilities for the vertex $u$, and for each of these, $6$ possibilities for the ordered pair $(v, w)$ of neighbors. Let $f_i$ be the first item according to $\pi$ among these $24$ items. Using only the input items from the $24$ items we are currently considering, we extend this item in two ways, to a 3-cycle $T_i$ and to a 4-cycle $S_i$. When we write $T_i$ or $S_i$, we mean the set of items forming the 3-cycle or the 4-cycle, respectively. Now, $\rho$ is defined as follows:
$$\rho(i) = \pi(f_i) \quad \text{and} \quad \rho(\bar i) = \pi(s_i).$$
In other words, we set $s_i$ to be the first element other than $f_i$ in $T_i \cup S_i$ according to $\pi$. In terms of our high level description given at the beginning of this section, $(T_i, S_i)$ form the pair of gadgets – a triangle and a square. By construction, this pair of gadgets satisfies the first item condition. By the definition of the problem, the optimal decision for all vertices in $T_i$ is “accept” (each belongs to a triangle) and the optimal decision for all vertices in $S_i$ is “reject” (none belongs to a triangle). Thus, these gadgets also satisfy the distinguishing decision condition.
Let $x_1, x_2, \ldots$ denote the order in which input items are presented to our algorithm as specified by $\rho$. Our algorithm constructs an input to $\mathcal{A}$ which is consistent with $\pi$ along the following lines: for each number $i$ that appears in the input, the algorithm constructs either a three-cycle or a four-cycle (disjoint from the rest of the graph). Thus, each $i$ is associated with one connected component. During the course of the algorithm, each connected component will be in one of the following three states: undecided, committed, or finished. When $i$ arrives, the algorithm initializes the construction with the item $f_i$ and sets the component status to undecided. It answers “accept” (there will be a matching pair) for $i$ if $\mathcal{A}$ responds “accept” (triangle) for $f_i$, and it answers “reject” if $\mathcal{A}$ responds “reject” (square).
Note that for any $i$, $\rho(i) > \rho(\bar i)$, so if $\bar i$ arrives and $i$ has not appeared earlier, $\mathcal{B}$ can simply reject $\bar i$ and does not need to present anything to $\mathcal{A}$. If $i$ has arrived and at some point $\bar i$ arrives, the algorithm commits to constructing the 3-cycle $T_i$. If $\mathcal{B}$ had guessed correctly that $\bar i$ would arrive, it is because $\mathcal{A}$ responded “accept” for $f_i$, and $\mathcal{A}$ also guessed correctly. If $\mathcal{B}$ had guessed that $\bar i$ would not arrive, it is because $\mathcal{A}$ guessed that a square would arrive, and both guessed incorrectly. If some number $j$ arrives with $\rho(j) < \rho(\bar i)$ for some $i$ such that $i$ has arrived earlier but $\bar i$ has not, then $\mathcal{B}$ can be certain that $\bar i$ will not arrive. It commits to constructing the 4-cycle $S_i$. Thus, if $\mathcal{A}$ answered “reject” for $f_i$, it answered correctly, and a square makes $\mathcal{B}$’s decision for $i$ correct. Similarly, if $\mathcal{A}$ answered “accept” for $f_i$, it answered incorrectly, so a square makes $\mathcal{B}$’s decision incorrect.
At the end of the input, $\mathcal{B}$ finishes off by checking which numbers $i$ have arrived without $\bar i$ arriving and without some number of lower priority than $\bar i$ arriving, and again commits to the 4-cycle, as in the other case where $\bar i$ does not arrive.
Throughout the algorithm, there are several connected components, each of which can be undecided, committed, or finished. Note that an undecided component corresponding to input $i$ consists of the single item $f_i$. Upon receiving an item $x$, the algorithm first checks whether some undecided components have turned into committed ones: namely, if an undecided component consisting of $f_i$ satisfies $\rho(\bar i) > \rho(x)$, it switches the status to committed according to the rules described above. Then, the algorithm feeds input items corresponding to committed yet unfinished connected components to $\mathcal{A}$, and does so in the order of $\pi$, up until the priority of such items falls below $\rho(x)$ (this can be done by maintaining a priority queue). Finally, the algorithm processes the item $x$ itself, by either creating a new component or by turning an undecided component into a committed one. Then, the algorithm moves on to the next item. Due to our definition of $\rho$ and this entire process, the input constructed for $\mathcal{A}$ is valid and consistent with $\pi$. Observe that the input to $\mathcal{A}$ is of size at most $4n$, so the number of advice bits must be divided by four relative to Theorem 4.6, and the theorem follows.
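The bookkeeping described above can be made concrete. The following toy simulation is our own illustrative sketch, not the listing from the proof: gadget items are strings, the priority function $\pi$ is an explicit dictionary (larger value means higher priority), pair $i$ and its partner $\bar i$ are encoded as the numbers $2i-1$ and $2i$, and the Triangle Finding algorithm is stubbed. The invariant to observe is that items reach the simulated algorithm in decreasing $\pi$-priority order:

```python
import heapq

def make_items(i):
    # Hypothetical gadget items for pair i (names are illustrative):
    # f is the shared first item; T and S are the remaining items of the
    # triangle gadget T_i and the square gadget S_i, respectively.
    return f"f{i}", [f"t{i}a", f"t{i}b"], [f"s{i}a", f"s{i}b", f"s{i}c"]

def rho(x, pi):
    # Priority of Pair Matching number x: pi(f_i) for the first number of
    # pair i, and the best remaining gadget priority for its partner.
    i = (x + 1) // 2
    f, T, S = make_items(i)
    return pi[f] if x % 2 == 1 else max(pi[t] for t in T + S)

def run_reduction(numbers, pi, decide):
    """numbers: Pair Matching input in decreasing rho-priority order.
    decide: stub for the Triangle Finding algorithm (item -> bool).
    Returns B's answers and the items presented to the algorithm."""
    answers, presented, heap, undecided = {}, [], [], {}

    def flush(limit=None):  # emit queued items with priority above `limit`
        while heap and (limit is None or -heap[0][0] > limit):
            presented.append(heapq.heappop(heap)[1])

    def commit(items):      # rest of a gadget goes into the priority queue
        for it in items:
            heapq.heappush(heap, (-pi[it], it))

    for x in numbers:
        cur = rho(x, pi)
        # Partners that can no longer arrive force a commit to the square.
        for j in [j for j in undecided if 2 * j != x and rho(2 * j, pi) > cur]:
            commit(undecided.pop(j)[1])
        i = (x + 1) // 2
        if x % 2 == 1:                    # first number of pair i arrives
            f, T, S = make_items(i)
            flush(cur)                    # keep the pi-order ahead of f
            presented.append(f)
            undecided[i] = (T, S)
            answers[x] = decide(f)        # accept iff a triangle is guessed
        elif i in undecided:              # the partner arrived: triangle
            commit(undecided.pop(i)[0])
            answers[x] = True
        else:
            answers[x] = False            # partner's pair never started
    for j in list(undecided):             # unmatched pairs become squares
        commit(undecided.pop(j)[1])
    flush()
    return answers, presented
```

With a priority dictionary in which all items of pair 1 outrank those of pair 2, presenting `[1, 2, 3]` commits pair 1 to the triangle and pair 2 to the square, and the presented sequence respects $\pi$ throughout; omitting 2 from the input switches pair 1 to the square as well.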
5.2 General Template
In this subsection, we establish two theorems that give general templates for gadget reductions from Pair Matching – one for minimization problems and one for maximization problems. The high level overview has been given at the beginning of this section.
We let $P(I)$ denote the objective function value for problem $P$ on input $I$. The size of a gadget $G$, denoted by $|G|$, is the number of input items specifying the gadget. We write $\mathrm{opt}(G)$ to denote the best value of the objective function on $G$. Recall that we focus on problems where a solution is specified by making an accept/reject decision for each input item. We write $\mathrm{wrong}(G)$ to denote the best value of the objective function attainable on $G$ after making the wrong decision for the first item (the item with highest priority), i.e., if there is an optimal solution that accepts (rejects) the first item of $G$, then $\mathrm{wrong}(G)$ denotes the best value of the objective function given that the first item was rejected (accepted). We say that the objective function for a problem $P$ is additive if, for any two instances $I_1$ and $I_2$ of $P$ such that $I_1 \cap I_2 = \emptyset$, we have $P(I_1 \cup I_2) = P(I_1) + P(I_2)$.
Theorem 5.2
Let $P$ be a minimization problem with an additive objective function. Let $\mathcal{A}$ be a fixed priority algorithm with advice for $P$ with a priority function $\pi$. Suppose that for each $i$ one can construct a pair of gadgets $(G_i^1, G_i^2)$ satisfying the following conditions:
 The first item condition:

$\mathrm{first}_\pi(G_i^1) = \mathrm{first}_\pi(G_i^2)$; we denote this common highest priority item by $f_i$.
 The distinguishing decision condition:

the optimal decision for $f_i$ in $G_i^1$ is different from the optimal decision for $f_i$ in $G_i^2$ (in particular, the optimal decision is unique for each gadget). Without loss of generality, we assume $f_i$ is accepted in an optimal solution in $G_i^1$.
 The size condition:

the gadgets have finite sizes, and we let $s = \max_i \max\left(|G_i^1|, |G_i^2|\right)$, where the cardinality of a gadget is the number of input items it consists of.
 The disjoint copies condition:

for $i \neq j$ and $k, l \in \{1, 2\}$, the input items making up $G_i^k$ and $G_j^l$ are disjoint.
 The gadget $\mathrm{opt}$ and $\mathrm{wrong}$ condition:

the values $\mathrm{opt}(G_i^1)$ and $\mathrm{opt}(G_i^2)$, as well as $\mathrm{wrong}(G_i^1)$ and $\mathrm{wrong}(G_i^2)$, are independent of $i$, and we denote them by $o_1$, $o_2$, $w_1$, and $w_2$; we assume that $w_1 > o_1$ and $w_2 > o_2$.
Define $M = \max(o_1, o_2)$. Then for any $\varepsilon \in (0, \tfrac{1}{2}]$, no fixed priority algorithm reading fewer than $(1 - H(\varepsilon))\frac{n}{s}$ advice bits on inputs of length $n$ can achieve an approximation ratio smaller than
$$\min_{j \in \{1,2\}} \frac{\varepsilon\, w_j + (1 - \varepsilon)\, M}{\varepsilon\, o_j + (1 - \varepsilon)\, M}.$$
Proof The proof proceeds by constructing a reduction algorithm $\mathcal{B}$ (fixed priority with advice) for Pair Matching that uses $\mathcal{A}$ to make decisions about input items. We start by defining a priority function $\rho$ for the reduction algorithm.
Define $s_i$ to be the highest priority input item in $G_i^1$ or $G_i^2$ different from $f_i$, i.e.,
$$s_i = \mathrm{first}_\pi\left( (G_i^1 \cup G_i^2) \setminus \{f_i\} \right).$$
We define a priority function $\rho$ as follows:
$$\rho(i) = \pi(f_i) \quad \text{and} \quad \rho(\bar i) = \pi(s_i).$$
For the Pair Matching problem, we denote the given input sequence ordered by $\rho$ as $x_1, x_2, \ldots, x_n$. We have to give an overall strategy for how the reduction algorithm $\mathcal{B}$ for Pair Matching handles an input item and which input items it presents to $\mathcal{A}$. In order to do this, we use a priority queue $Q$, which is a max-heap ordered based on the priority of input items to problem $P$, with the purpose of presenting these input items in the correct order (respecting $\pi$; highest priority items appear first). When $\mathcal{B}$ commits to a particular gadget in a pair, the remainder of that gadget (all inputs except $f_i$, which has already been presented) is inserted into $Q$.
By definition, $\pi(f_i) \geq \pi(x)$ for all $x \in G_i^1 \cup G_i^2$. Thus, $f_i$ is presented to $\mathcal{A}$ in Line 14 before the remaining parts of the same gadget associated with $i$ are inserted into $Q$ in one of Lines 6, 10, or 17.
Since the priority of any $i$ is defined to be the priority of $f_i$, the $f_i$s are presented in the correct relative order.
Clearly, input items entered into the priority queue, $Q$, are extracted and presented to $\mathcal{A}$ in the correct relative order, and before any $f_i$ is presented, higher priority items are presented first in Line 12. The remaining issues are whether the remainder of the gadget associated with some $i$ is entered into $Q$ early enough relative to some $f_j$ from another gadget, and whether all gadgets are eventually completely presented to $\mathcal{A}$.
By the definition of $\rho(\bar i) = \pi(s_i)$, the priority of $\bar i$ is at least the priority of any remaining input item in the gadget associated with $i$.
Consider the point in time when some $x_j$ arrives. If $\bar i$ arrived earlier, or $\rho(x_j)$ is greater than $\rho(\bar i)$, the gadget associated with $i$ would have been processed correctly or have been inserted into $Q$ earlier. Before $x_j$ is presented to $\mathcal{B}$, a check is made to see if $\rho(\bar i) > \rho(x_j)$ for some undecided $i$. If the check in the if-statement is positive, the entire remaining part of the gadget for $i$ is inserted into $Q$ at this point in Line 10.
If some $i$ arrives, but $\bar i$ never arrives, and no input item with priority lower than $\rho(\bar i)$ arrives either, this is discovered in Line 16, and the remainder of $G_i^2$ is presented to $\mathcal{A}$ in Line 17.
Thus, input items are presented to $\mathcal{A}$ in the order defined by its priority function $\pi$.
Now we turn to the approximation ratio obtained. We want to lower bound the number of incorrect decisions by $\mathcal{A}$. We focus on the input items which are $f_i$ for some number $i$ in the input to the Pair Matching problem, and assume that $\mathcal{A}$ answers correctly on anything else.
When $\mathcal{B}$ receives an $i$, in Line 15 it answers the same for $i$ as $\mathcal{A}$ does for $f_i$. By considering the four cases where the gadget associated with $i$ is later inserted into $Q$, we can see that this answer for $i$ was correct if and only if the answer $\mathcal{A}$ gave for $f_i$ could lead to the optimal result for the gadget associated with $i$.

If $\bar i$ arrives, then $G_i^1$ is committed to, and the remainder of $G_i^1$ is inserted into $Q$ in Line 6. If $\mathcal{A}$ answered “accept” to $f_i$, then $\mathcal{A}$ has accepted $f_i$ and could obtain the optimal result on $G_i^1$, by the definition of these gadget pairs. If $\mathcal{A}$ answered “reject” to $f_i$, then $\mathcal{A}$ has rejected $f_i$ and cannot obtain the optimal result on $G_i^1$, again by the definition of these gadget pairs.

If $\bar i$ does not arrive, then $G_i^2$ is committed to, and the remainder of $G_i^2$ is inserted into $Q$ in Line 10 or 17. If $\mathcal{A}$ answered “reject” to $f_i$, then $\mathcal{A}$ has rejected $f_i$ and could obtain the optimal result on $G_i^2$, by the definition of these gadget pairs. If $\mathcal{A}$ answered “accept” to $f_i$, then $\mathcal{A}$ has accepted $f_i$ and cannot obtain the optimal result on $G_i^2$, again by the definition of these gadget pairs.
We know from Theorem 4.6 that for any $\varepsilon \in (0, \tfrac{1}{2}]$, any priority algorithm with advice length less than $(1 - H(\varepsilon))\, n'$ makes at least $\varepsilon n'$ mistakes, where $n'$ denotes the length of the Pair Matching input. Since we want to lower bound the performance ratio of $\mathcal{A}$, and since a ratio larger than one decreases when increasing the numerator and denominator by equal quantities, we can assume that when $\mathcal{A}$ answers correctly, it is on the gadget with the larger value, $M = \max(o_1, o_2)$. For the same reason, we can assume that the “at least $\varepsilon n'$” incorrect answers are in fact exactly $\varepsilon n'$, since classifying some of the incorrect answers as correct just lowers the ratio. For the incorrect answers, assume that the gadget $G^1$ is presented $k$ times, and thus the gadget $G^2$ is presented $\varepsilon n' - k$ times. Denoting the input created by $\mathcal{B}$ for $\mathcal{A}$ by $I$, we obtain the following, where we use that the objective function is additive:
$$\frac{\mathcal{A}(I)}{\mathrm{opt}(I)} \geq \frac{k\, w_1 + (\varepsilon n' - k)\, w_2 + (1 - \varepsilon) n' M}{k\, o_1 + (\varepsilon n' - k)\, o_2 + (1 - \varepsilon) n' M} =: R(k).$$
Taking the derivative of $R(k)$ with respect to $k$ and setting it equal to zero gives no solutions unless $R$ is constant, so the extreme values must be found at the endpoints of the range for $k$, which is $[0, \varepsilon n']$.
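The endpoint claim is the standard fact about linear-fractional functions: writing the ratio as a function of $k$, with constants $a$, $b$, $c$, $d$ collecting the coefficients,
$$R(k) = \frac{ak + b}{ck + d}, \qquad R'(k) = \frac{ad - bc}{(ck + d)^2},$$
so $R'$ has no zeros unless $ad = bc$, in which case $R$ is constant; either way, the extreme values over an interval are attained at its endpoints.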
Inserting $k = 0$, we get $\frac{\varepsilon n' w_2 + (1 - \varepsilon) n' M}{\varepsilon n' o_2 + (1 - \varepsilon) n' M}$, while $k = \varepsilon n'$ gives $\frac{\varepsilon n' w_1 + (1 - \varepsilon) n' M}{\varepsilon n' o_1 + (1 - \varepsilon) n' M}$.
The smaller of these two ratios is the lower bound we can provide.
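As a numeric sanity check with hypothetical gadget values (a minimization problem, so the value after a wrong first decision exceeds the optimal one), the ratio as a function of the split of the incorrect answers attains its extremes at the endpoints:

```python
def R(k, m, n, o1, o2, w1, w2):
    # Ratio achieved when k of the m incorrect answers fall on the first
    # gadget; the n - m correct answers sit on the larger-valued gadget.
    M = max(o1, o2)
    num = k * w1 + (m - k) * w2 + (n - m) * M
    den = k * o1 + (m - k) * o2 + (n - m) * M
    return num / den

n, m = 100, 20                   # hypothetical counts of pairs and mistakes
o1, w1, o2, w2 = 1, 3, 2, 5      # hypothetical gadget values (w_j > o_j)
ratios = [R(k, m, n, o1, o2, w1, w2) for k in range(m + 1)]
assert min(ratios) in (ratios[0], ratios[-1])
assert max(ratios) in (ratios[0], ratios[-1])
```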

The following theorem for maximization problems is proved analogously.
Theorem 5.3
Let $P$ be a maximization problem with an additive objective function. Let $\mathcal{A}$ be a fixed priority algorithm with advice for $P$ with a priority function $\pi$. Suppose that for each $i$ one can construct a pair of gadgets $(G_i^1, G_i^2)$ satisfying the conditions in Theorem 5.2, with $w_1 < o_1$ and $w_2 < o_2$ since $P$ is a maximization problem. Set $M = \max(o_1, o_2)$. Then for any $\varepsilon \in (0, \tfrac{1}{2}]$, no fixed priority algorithm reading fewer than $(1 - H(\varepsilon))\frac{n}{s}$ advice bits can achieve an approximation ratio smaller than
$$\min_{j \in \{1,2\}} \frac{\varepsilon\, o_j + (1 - \varepsilon)\, M}{\varepsilon\, w_j + (1 - \varepsilon)\, M}.$$

Proof The proof proceeds as for the minimization case in Theorem 5.2 until the calculation of the lower bound on the ratio. We continue from that point, using the inverse ratio $\mathrm{opt}(I)/\mathcal{A}(I)$ to get values larger than one. We use that $w_j < o_j$ for a maximization problem, so the same endpoint argument applies to the inverted ratios.