DeepAI

# Black-box Methods for Restoring Monotonicity

In many practical applications, heuristic or approximation algorithms are used to efficiently solve the task at hand. However their solutions frequently do not satisfy natural monotonicity properties of optimal solutions. In this work we develop algorithms that are able to restore monotonicity in the parameters of interest. Specifically, given oracle access to a (possibly non-monotone) multi-dimensional real-valued function f, we provide an algorithm that restores monotonicity while degrading the expected value of the function by at most ε. The number of queries required is at most logarithmic in 1/ε and exponential in the number of parameters. We also give a lower bound showing that this exponential dependence is necessary. Finally, we obtain improved query complexity bounds for restoring the weaker property of k-marginal monotonicity. Under this property, every k-dimensional projection of the function f is required to be monotone. The query complexity we obtain only scales exponentially with k.

• 8 publications
• 23 publications
• 46 publications
04/04/2022

01/19/2022

### The Query Complexity of Certification

We study the problem of certification: given queries to a function f : {...
11/15/2019

### New Query Lower Bounds for Submodular Function MInimization

We consider submodular function minimization in the oracle model: given ...
11/18/2020

### Isoperimetric Inequalities for Real-Valued Functions with Applications to Monotonicity Testing

We generalize the celebrated isoperimetric inequality of Khot, Minzer, a...
11/20/2016

### Dealing with Range Anxiety in Mean Estimation via Statistical Queries

We give algorithms for estimating the expectation of a given real-valued...
11/01/2017

### Optimizing quantum optimization algorithms via faster quantum gradient computation

We consider a generic framework of optimization algorithms based on grad...
04/14/2019

### Approximating the noise sensitivity of a monotone Boolean function

The noise sensitivity of a Boolean function f: {0,1}^n →{0,1} is one of ...

## 1 Introduction

“You are preparing a paper for an upcoming deadline and try to fit the content within the page limit. You identified a redundant sentence and remove it but to your surprise, the page count of your paper increases!”

This is an example where a natural monotonicity property expected from the output fails to hold, resulting in unintuitive behavior. As another example, consider the knapsack problem where given a collection of items with values and weights the goal is to identify the most valuable subset of items with total weight less than . Now if the capacity of the knapsack increases, the new set of items about to be selected are expected to be at least as valuable as before. While such a monotonicity property holds under the optimal solution, when heuristic or approximate algorithms are used, monotonicity can often fail.

In this work we develop tools to restore monotonicity in a black-box way. Our goal is to create a meta-algorithm that is guaranteed to be monotone, while querying a provided (possibly non-monotone) algorithm behind the scenes. For example, such a meta-algorithm might query the provided oracle at many different inputs in an attempt to smooth out non-monotonicities.

More precisely, we can describe the output of a (possibly non-monotone) algorithm using a function that measures the quality of the solution at any given point . We assume that inputs are drawn from a known product distribution, and that there is an (unknown) feasibility condition being satisfied by the function . Since may not initially be monotone, we want to correct it through our meta-algorithm while additionally maintaining the following three properties: it needs to be query efficient, feasible and comparable to the original algorithm in terms of expected solution quality. To ensure feasibility, at any given point we allow outputting any solution that corresponds to some smaller input , thus achieving quality at input . The exact constraints of the initial algorithm are unknown, and we have to infer them using , in a black-box way. We want our meta-algorithm to do this in a way that the resulting quality function is monotone but also have expected quality that is comparable to the expected quality of the original algorithm, with respect to the input distribution. We finally require that our meta-algorithm is query efficient, meaning that it does not require querying too many points of the original function in order to return an answer for a given point.

One natural way of solving this problem is to pick for every the best answer that corresponds to any input , resulting in . While this ensures that solutions are monotone and that the expected quality is always better than before, it requires a large number of queries to identify the best possible answer among inputs . A more query efficient strategy would be to always output a constant solution that corresponds to the smallest possible solution. While this is monotone, feasible and uses few queries, it has very low expected quality.

### 1.1 Results

Our main result is stated informally below:

###### Theorem 1.

There is a meta-algorithm that corrects the monotonicity of any given function through queries. The meta-algorithm is feasible and has expected quality loss under a given product distribution of inputs. The total (expected) number of queries required for every input is at most .

Our meta-algorithm starts by discretizing the space to a fine grid of points. We show that this step incurs at most an penalty in expected function value. Given this discretization it is straightforward to correct monotonicity by querying all the points in the grid at a cost of and for any returning the best solution for any smaller input.

We remark that our meta-algorithm is significantly more efficient than this naive approach, achieving number of queries that is at most logarithmic in . It is able to obtain this speedup by searching over a hierarchical partition of the space to efficiently determine which value to assign to the given input at query time, rather than preprocessing all the answers. Additionally, our algorithm is local and computes answers on the fly without requiring any precomputation and without the need to remember any prior queries to answer the future ones.

We also note that our algorithm depends exponentially on the number of input parameters . This means that while the algorithm is extremely efficient for small , the savings become less significant when grows. Such an exponential dependence in the number of parameters, however, is unavoidable for correcting monotonicity. As we show, for any black-box scheme that aims to correct monotonicity, there are always some problems where either it would fail to be feasible or monotone, or it would require exponentially many queries to calculate the appropriate answers.

###### Theorem 2.

Let be any feasible meta-algorithm for monotonicity with query complexity . Then, there exists an input function with such that with .

Theorem 2 shows that ensuring monotonicity when is large can be quite costly, either in queries or the quality of the solution. To better understand this tradeoff, we consider a weakening of monotonicity, namely -marginal monotonicity, where we only require monotonicity of the -dimensional projections of the function to be monotone. That is, when , we want that for any coordinate , the function to be monotone with respect to . For larger , we want that for any subset of size , the function is monotone with respect to . For this setting, we show that:

###### Theorem 4.

There is a meta-algorithm that corrects the -marginal monotonicity of any given function through queries. The meta-algorithm is feasible and has expected quality loss under a given product distribution of inputs. The total (expected) number of queries required for every input is at most .

Note that when requiring marginal monotonicity, the dependence on is improved from exponential to polynomial. Instead, the query complexity is only exponential in .

### 1.2 Related work

A very related line of work to our paper considers the problem of online property-preserving data reconstruction. In this framework introduced by Ailon et al. [1] there is a set of unreliable data that should satisfy a certain structural property, like monotonicity or convexity. The reconstruction algorithm acts as a filter for this data, such that whatever query the user makes on them is answered in a way consistent with the property they should satisfy. Ailon at al. [1] proposed the first algorithm for monotonicity reconstruction, and in the follow-up work, Saks and Seshadhri [24] designed a more efficient and local algorithm for the same problem. The main focus of this work is to compute a function that is not different than the original function in a large number of inputs. In comparison, we allow our algorithm to output arbitrary solutions at any point subject to a feasibility criterion and consider the expected quality of the solution as a measure of performance.

In addition to these works on upper bounds on monotonicity reconstruction, Bhattacharyya et al. in [5] proved a lower bound for local monotonicity reconstruction of Saks and Seshadhri using transitive-closure spanners. Several reconstruction algorithms have also been proposed for reconstructing Lipschitzness [17], convexity [9], connectivity of directed or undirected graphs [6], or a given hypergraph property [3], while [7] focuses on lower bounds.

Our work can also be viewed as a special case of the Local Computation Algorithms framework introduced by [23] and [2]. In this model the algorithm is required to answer queries that arrive online such that the answers given are always consistent with a specific solution. The algorithms presented also do not need to remember old answers to remain consistent, exactly as our algorithms do. Such local algorithms have been designed for many problems like maximal independent set maximal matching, approximate maximum matching, set cover, vertex coloring, hypergraph coloring [2, 21, 18, 11, 19, 20, 22, 13, 14].

Finally, our work is also closely related to the black-box reductions literature in algorithmic mechanism design, where we are given access to an algorithmic solution - an oracle- and the goal is to implement an incentive compatible mechanism with similar performance. Ensuring incentive compatibility amounts to preserving a similar monotonicity property, like cyclic monotonicity. This line of work was initiated by Hartline and Lucier [16], and later several reductions were given under different solution concepts, approximate or exact Bayesian Incentive Compatibility [4, 15, 10, 12] along with a lower bound for Dominant Strategy Incentive Compatibility [8].

## 2 Preliminaries

We are given oracle access to a function , and a stream of input points for which we need to evaluate . Our goal is to give an answer for every point such that satisfies some motion of monotonicity and it also has the following properties

1. Feasibility:

2. Close in expectation to the initial function:

#### Evaluation

We evaluate our algorithms on their query complexity. Query complexity is defined as the maximum number of times we invoke the oracle in order to give us an answer and the goal is to invoke the oracle as few times as possible.

#### Distributional assumptions

We assume that each coordinate of every point in the input sequence is drawn according to a distribution . We denote the product of these by . We want to be close to the expectation of the transformed which we denote by . From now on, we will omit the when it is clear from the context.

#### Monotonicity

We proceed by defining the various notions of monotonicity we will be using throughout this work. For we say that when for all . We say that the function is monotone if implies for every .

A relaxation of the monotonicity requirement is that of marginal monotonicity. We say that is marginally monotone if is monotone. Note that this is weaker than monotonicity since even if all marginals are monotone, this does not imply that is also monotone.

A further relaxation of the marginal monotonicity is the -marginal monotonicity. Similarly to -wise independent variables, we say that a function is -marginally monotone when for every such that the function

 fI(xI)≜Ex−I[fI(xI,x−I)]

is monotone, where we denote by (or ) the size

-vector of all the coordinates

that also belong (or do not belong) to the set , and by the marginal of the coordinates that gets as input a vector of size with the coordinates .

#### Discretization of the Domain

In all the algorithms described in the following sections, we assume the domain is discrete. To do that we use the discretization process described below. Intuitively, we split the domain into smaller intervals with equal probability and then “shift” every coordinate’s distribution downward by sampling a point from the lower interval. This is made more formal here, we first describe the process for the single dimensional case (

), and then for general .

We convert the oracle to an oracle for a piecewise constant function with pieces. The function is such that , and .

For this purpose, we split the support of the distribution into intervals such that each has probability , i.e. for every , . Then, for each interval we draw a random according to the conditional distribution . For any , we set

 ˜f(x)={0i=0f(xi−1)i>0.

It is easy to see that since is either or equal to where . To bound the expectation, we note that

 Ex∼D[˜f(x)] =m−1∑i=1εExi∼D|Ii[f(xi)] =m∑i=1εExi∼D|Ii[f(xi)]−εExm∼D|Im[f(xm)] =Ex∼D[f(x)]−εExm∼D|Im[f(xm)] ≥Ex∼D[f(x)]−ε

For dimensions the above process is essentially the same for every coordinate with the difference that now we choose points and every interval for of coordinate has probability . Let the input vector be . For every coordinate , we draw a random from the conditional and finally evaluate at these points . Note that this is feasible, similarly to before, as each input returns an outcome generated by on a pointwise smaller input. This perturbation effectively shifts each coordinate’s distribution downward, removing the range , therefore removing from the expectation any input that had at least one coordinate in the last interval. This occurs with probability at most , and therefore the new expectation is not far from the old.

Assuming a discrete domain, where every coordinate , we define the low -neighborhood of a point as the set of points that are smaller in exactly coordinate. Formally, the low -neighborhood of a point is .

## 3 Monotonicity

In this section we show how to design an algorithm to correct a non monotone function while preserving its expectation, using queries.

###### Theorem 1.

There exists a meta-algotithm that corrects the monotonicity for any function . is feasible, has query complexity , and has , where the first expectation is taken over the input distribution and the randomness in the meta-algorithm.

To establish Theorem 1 we first show how to solve the problem for the single-dimensional case, . Observe initially that the trivial meta-algorithm that simply queries the function at every point in the (sufficiently discretized) domain has complexity . To get the improvement to we first consider the following “greedy” meta-algorithm: query the (discretized) domain in a uniformly random order, and for each point sequentially, define the transformed output to be the value closest to that maintains monotonicity with the values of constructed so far. We prove that this random meta-algorithm preserves the function value in expectation over the random query order and then that this transformation can be implemented locally, with only logarithmically many queries per input. The details of these steps are presented in subsection 3.1 below.

### 3.1 Single-dimensional case (d=1)

Using the discretization process described in section 2, we may assume that is supported on points and the underlying distribution is uniform. We now provide an algorithm to obtain a monotone function that has the same expectation as and performs queries to with probability . We later choose and cap the queries to and show that it is possible to do this while maintaining feasibility, monotonicity and incurring error at most .

#### Construction of the Oracle

The function that the algorithm outputs is a function for a uniformly random permutation . Given a permutation of , we define the function by setting for every ,

 Mπf(πi)=⎧⎨⎩Hif(πi)>HiLif(πi)

where is the value of the lowest point with or if such a point does not exist. Similarly, is the value placed at the highest point with or if such a point does not exist. That is, the function visits all points according to the permutation and greedily assigns values consistent with the choices made for the previous points visited so far to preserve monotonicity. Equivalently, one can write that .

We now show that this choice of satisfies the properties of Theorem 1 with the following two claims. Their proofs are deferred to section A.1 of the Appendix.

###### Claim 1.

For any permutation , the function is monotone and satisfies for all , .

###### Claim 2.

.

It remains to show that one can evaluate at any point without querying the oracle for at all points. To do this, we make the following observation: once we have computed , in order to calculate the values of for any , we don’t need to know the values of or of at any . Similarly, to calculate the values of for any , we don’t need to know the values of or of at any .

Building on this idea, we use the following algorithm to evaluate .

#### Description of Oracle for Mπf

At any point in time , we keep track of a range of relevant points starting with and . We also keep track of a lower bound, , and an upper bound, , on the value of starting with and .

For any , if , then it is relevant and must be evaluated. Its value is then according to the definition , which is equal to for the upper and lower bounds we have so far. Once the value is computed, if , we set while keeping and the relevant interval now becomes . Similarly, if , we update , and set and . In contrast, if , it is irrelevant and is not evaluated. The interval and upper and lower-bounds are then kept the same for the next iteration.

The following claim shows that for a uniformly random permutation , the above process only queries the oracle at a few points. The proof is deferred to section A.1 of the appendix.

###### Claim 3.

With probability , the oracle can be evaluated at any point using at most queries to oracle .

We now argue that it is possible to perform the transformation so that the algorithm always makes at most while maintaining feasibility, monotonicity and incurring error at most . Our construction of the oracle maintains for every interval a high and a low value that points in the interval may take. The interval shrinks with good probability by a constant factor at every step which gives the high probability result. To ensure that no more than , we need to ensure that every interval shrinks at most times. Indeed, we can enforce that after these many rounds, if the interval has not shrunk to a single point every point in the interval is allocated the lowest value. This ensures monotonicity while it incurs a decrease in the expected value. As this decrease happens with probability at most and the decrease is bounded by the total error is at most .

### 3.2 Extending to many dimensions

In order to generalize to many dimensions, we apply our construction for this “single-dimensional case” to fix monotonicity in each direction separately starting with the first. The key property we use is that when given oracle access to a function that is monotone in the first coordinates, our construction of the meta-algorithm will fix the monotonicity in the -th coordinate while preserving monotonicity in the first coordinates. This allows us to obtain a chain of oracles where is monotone in the first coordinates. Evaluating requires only queries to oracle and gets error at most . Thus, to evaluate , queries to oracle are sufficient to get error . Details are deferred to section A.2 of the appendix.

## 4 Lower bound

Having designed the meta-algorithm to “monotonize” a function, in this section we show that the exponential dependence on the dimension our previous algorithm exhibits, as shown in Theorem 1, is actually necessary even when the domain is the boolean hypercube and the distribution of values is uniform. The idea for this lower bound is to show that there exists a function such that any monotone and feasible meta-algorithm with subexponential query complexity will have very low expectation. This is made formal in Theorem 2 below.

###### Theorem 2.

Let be any feasible meta-algorithm that fixes monotonicity with query complexity . Then, there exists an input function with such that with .

If the meta-algorithm is infeasible or non-monotone with probability , then .

To prove the statement, we construct a distribution over functions on and show that any monotone and feasible meta-algorithm must have very low expectation with high probability.

We consider the following family of functions parametrized by where and so that . We define for any

We also define the function

 f1(X)=I{|X|≥4d10}

Observe that even though these functions are defined over subsets of it is straightforward to view each of these subsets as a point in .

We define a distribution over the family of functions by selecting uniformly at random. The sets and

will also be random variables with

including each element with probability and including each element with probability . Since , this means that given , the set contains each element outside of with probability . Similarly given , the set contains each element of with probability . We also define random variable that is a uniformly random subset of .

What we are trying to achieve with these functions is while has high expectation, the function does not, but we will not be able to tell them apart. The idea is that in both functions there is the “low” set where the function outputs , but inside it there is a hidden set where the function is either or . The two claims shown below, will prove that first we cannot distinguish between the function that gives either or , and then this function from the “high” function that gives to all large inputs. This is shown in figure 1 below111The figure serves as a simplified example of the structure of the functions, the sizes of the sets are not on scale.

###### Claim 5.

The proofs of both the claims are deferred to section B of the appendix. It is now easy to complete the proof of Theorem 2 by setting .

 E[Mf1(x)] =E[Mf1(S)] ≤E[Mf1S,T(S)]+q2−d10 ≤δ+E[Mf1S,T(T)]+q2−d10 ≤δ+E[Mf0S,T(T)]+q2−d10+q2−d450 ≤2δ+q2−d10+q2−d450 =2δ+2−Ω(d)

where the first line follows since is chosen uniformly at random, then we use Claim 5 and then we use that satisfies monotonicity with probability . Following this, the third line follows from Claim 4 and then we use the fact that with probability . Finally we get the result for any .

In contrast, for the initial function we get .

## 5 Marginal Monotonicity

In this section, we switch gears towards the relaxations of monotonicity defined in section 2. We start by considering marginal monotonicity. In this case, we want to guarantee that each of the marginals of the function will be monotone, and not loosing much in expectation. As it turns out, we can achieve this in time polynomial in . The formal statement follows.

###### Theorem 3.

There exists a feasible meta-algorithm that fixes marginal monotonicity for any function . is feasible, has query complexity , and satisfies , where the first expectation is taken over the input distribution and the randomness of the meta-algorithm.

The discretization process described in section 2 will also be used here. Therefore we can safely assume that the domain is discretized and supported on different values which we denote by .

We start by assuming we are given query access directly to the marginal distribution in every dimension and showing that we can achieve the theorem using queries. Then in subsection 5.2, we show how to achieve the same result by only querying the initial function

in order to estimate the marginals.

### 5.1 Transformation Using Exact Marginals

In this case we assume that we know exactly each one of the marginals .

Consider meta-algorithms of the following form: in each dimension there will be a mapping , with for all . We will write . We will then define . Observe that any such satisfies feasibility, since .

We will build the mapping iteratively, starting with the identity mapping, which we will call . Since the distribution over values is discrete, it suffices to define each mapping on the finitely many values in the support of the distribution (for each one of the dimensions).

Suppose our current mapping is , for some and let . If , the ’th marginal function is monotone for every , then we will choose .

Otherwise, there is some and some such that . In this case, we will define as follows: , and on all other inputs. That is, whenever is given as input, we will instead invoke as though we have gotten . Observe that this modification chains: if on some previous iteration we had mapped input to , then after this iteration we will ultimately be passing the input to the original function . As argued above, this modified function will be feasible. Moreover,

 Ex∼D[fr+1(x)]=Exi[fr+1i(xi)]≥Exi[fri(xi)]=Ex∼D[fr(x)]

so has only weakly greater expected value than .

We can think of as acting on a reduced domain, where the possible input is removed from the support of ’s distribution and instead its probability mass is added to that of some lower value, . Under this interpretation, each iteration reduces the total number of possible input values in the support of by . This process must therefore stop at or before iteration , since a marginal over a single input value is always monotone. Thus, after at most iterations, this process will terminate at an function that is feasible, monotone, and has .

### 5.2 Sampling to Estimate Marginals

The meta-algorithm above assumes direct access to the marginal distributions even after the modifications we make at each step. We will show how to remove these assumptions, at the cost of a loss of on the expectation of . This loss is due to sampling error, and can be made as small as desired with additional sampling.

Prior to viewing the input, our meta-algorithm will estimate each one of the marginals. For each , take samples and observe . Let for the empirical mean of the observed samples. By Hoeffding inequality and a union bound over all coordinates and all possible input values, we will have that for all and all , with high probability.

We will then apply our meta-algorithm from above using the marginals as an oracle. This generates mappings , such that the new “monotonized” marginals have weakly increased expectation relative to . If , then we also have for all and all as well. Monotonicity of therefore implies -approximate monotonicity of , and that .

#### From Approximate to Exact Monotonicity

From the above steps, we can assume access to a function that is -approximately monotone. We will implement a new function such that, on input , say with for each , returns . Then is monotone, as whenever we have that either or . Moreover, this modification reduces the expected allocation of by at most . So as long as , its expected allocation is within of .

## 6 k-Marginal Monotonicity

Having designed an algorithm for marginal monotonicity, we move on to a generalization that guarantees -marginal monotonicity. In this case, given oracle access to a function we want to guarantee that all -marginals of will be monotone.

###### Theorem 4.

There exists a meta-algorithm to fix -marginal monotonicity (whp) for any function . is feasible, has query complexity and satisfies , where the first expectation is taken over the input distribution and the randomness of the meta-algorithm.

In order to prove the theorem, we start by using the discretization method described in section 2. After having a discrete domain, our meta-algorithm estimates in each step all222Note that there is no way to avoid fixing all the marginals; even monotone marginals cannot guarantee that the ’th is also monotone. the marginals and whenever it detects a non-monotonicity in one of them, it fixes it within by defining a set of replacement rules. Then for an input , we sequentially try to apply the rules, bu starting from the first, and trying to match the given point with one of the patterns of the rules. After we applied the possible rules, we reached some other point and then we output . An example of the set of replacement rules is shown in Table 1.

We describe the algorithm in three steps; first we describe the replacement process, in order to fix the non-monotonicities, given that we have access to the exact margninals, and then describe the sampling process used to estimate the marginals. Using the first two steps, we are guaranteed only approximate monotonicity, so as the third step we show how to further modify our function to achieve exact monotonicity.

### 6.1 Transformation Using Exact Marginals

Assuming now that we have access to all the -marginals exactly, we describe an iterative process, in order to correct the non-monotonicities in all the -marginals. The answer we give is the transformed function .

As a direct extension from the previous marginals case, consider transformations of the form: such that , for all . We define .

We denote by , for some , with , the projection of the function in the coordinates that are in , where the variables for are treated as constants and remain the same. Recall that by we denote all the coordinates such that is in the set with .

Intuitively, what the process does is when we detect a non-monotonicity between some input and its neighbor , where is larger in at least one coordinate333We can assume without loss of generality that is higher in exactly one coordinate., from then on we always map to . This is reflected in the function that replaces with to correct the non-monotonicities. Since this process is done iteratively, when we map to and to some other input , it means is ultimately mapped to .

More formally, we start from the identity function , and iteratively define . In this case we define a non-monotonicity as the case when there exists a set , with , such that we have that . Observe that in this case, we only ensure that the function is monotone. In iteration , one of the following cases can happen

1. There is no such set : we terminate the process and return

2. There exists such a set : we set

 ϕr+1I(yI)=maxz∈N(y)ϕrI(zI)

where recall from section 2 that is the low -neighborhood of .

Using this process, and exactly how we argued in the previous section, the output function is feasible and that .

Observe that every time the replacement described above happens, the expectation of the transformed function increases by at least , meaning that this process cannot happen more than times in total since . When the process halts, the transformed function is feasible and approximately -monotone, with .

### 6.2 Sampling to Estimate Marginals

In the process described above, we assumed that the exact marginals were known. In reality, in every step we estimate the marginals again before checking for non-monotonicities, by sampling points and observing .

This step differs from the estimation step in the marginals case in that we do not draw samples from each specific marginal , but directly from the function and then we estimate each marginal.

Recall from the discretization process, that now the distribution over the different values can take is uniform, which means that by drawing samples from , we need samples in expectation to get a sample from a specific marginal. Using this fact, Hoeffding’s inequality and a union bound over all different marginals, values and all rounds this sampling process is happening, we need samples.

### 6.3 From Approximate to Exact Monotonicity

This part is exactly the same as the previous section with the only difference that we have access to a function that is -approximately monotone. This difference is due to the fact that we only guaranteed monotonicity when we knew the exact marginals compared to exact monotonicity. Therefore, we now return a new function that on input with for each , returns , which as before is guaranteed to be monotone.

This modification reduces the expected allocation of by at most in this case, so when , its expected allocation is within of .

#### Query Complexity

In order to calculate the query complexity of the meta-algorithm, recall that there are rounds, where for each round we use queries for the marginal estimation.

Using that and that from the discretization process, we get that the query complexity is .

## References

• [1] Nir Ailon, Bernard Chazelle, Seshadhri Comandur, and Ding Liu. Property-preserving data reconstruction. Algorithmica, 51(2):160–182, 2008.
• [2] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1132–1139, 2012.
• [3] Tim Austin and Terence Tao. Testability and repair of hereditary hypergraph properties. Random Struct. Algorithms, 36(4):373–463, 2010.
• [4] Xiaohui Bei and Zhiyi Huang. Bayesian incentive compatibility via fractional assignments. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pages 720–733. Society for Industrial and Applied Mathematics, 2011.
• [5] Arnab Bhattacharyya, Elena Grigorescu, Madhav Jha, Kyomin Jung, Sofya Raskhodnikova, and David P. Woodruff. Lower bounds for local monotonicity reconstruction from transitive-closure spanners. SIAM J. Discrete Math., 26(2):618–646, 2012.
• [6] Andrea Campagna, Alan Guo, and Ronitt Rubinfeld. Local reconstructors and tolerant testers for connectivity and diameter. In Prasad Raghavendra, Sofya Raskhodnikova, Klaus Jansen, and José D. P. Rolim, editors,

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21-23, 2013. Proceedings

, volume 8096 of Lecture Notes in Computer Science, pages 411–424. Springer, 2013.
• [7] Sourav Chakraborty, Eldar Fischer, and Arie Matsliah. Query complexity lower bounds for reconstruction of codes. Theory of Computing, 10:515–533, 2014.
• [8] Shuchi Chawla, Nicole Immorlica, and Brendan Lucier. On the limits of black-box reductions in mechanism design. In Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, pages 435–448, 2012.
• [9] Bernard Chazelle and C. Seshadhri. Online geometric reconstruction. J. ACM, 58(4):14:1–14:32, 2011.
• [10] Shaddin Dughmi, Jason D Hartline, Robert Kleinberg, and Rad Niazadeh. Bernoulli factories and black-box reductions in mechanism design. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 158–169. ACM, 2017.
• [11] Guy Even, Moti Medina, and Dana Ron. Deterministic stateless centralized local algorithms for bounded degree graphs. In Andreas S. Schulz and Dorothea Wagner, editors, Algorithms - ESA 2014 - 22th Annual European Symposium, Wroclaw, Poland, September 8-10, 2014. Proceedings, volume 8737 of Lecture Notes in Computer Science, pages 394–405. Springer, 2014.
• [12] Evangelia Gergatsouli, Brendan Lucier, and Christos Tzamos. The complexity of black-box mechanism design with priors. In Proceedings of the 2019 ACM Conference on Economics and Computation, EC 2019, Phoenix, AZ, USA, June 24-28, 2019, pages 869–883, 2019.
• [13] Mohsen Ghaffari and Jara Uitto. Sparsifying distributed algorithms with ramifications in massively parallel computation and centralized local computation. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1636–1653. SIAM, 2019.
• [14] Christoph Grunau, Slobodan Mitrovic, Ronitt Rubinfeld, and Ali Vakilian. Improved local computation algorithm for set cover via sparsification. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 2993–3011. SIAM, 2020.
• [15] Jason D. Hartline, Robert Kleinberg, and Azarakhsh Malekian. Bayesian incentive compatibility via matchings. Games and Economic Behavior, 92:401 – 429, 2015.
• [16] Jason D. Hartline and Brendan Lucier. Non-optimal mechanism design. American Economic Review, 105(10):3102–24, October 2015.
• [17] Madhav Jha and Sofya Raskhodnikova. Testing and reconstruction of lipschitz functions with applications to data privacy. SIAM J. Comput., 42(2):700–731, 2013.
• [18] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Local algorithms for sparse spanning graphs. In Klaus Jansen, José D. P. Rolim, Nikhil R. Devanur, and Cristopher Moore, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, volume 28 of LIPIcs, pages 826–842. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014.
• [19] Reut Levi, Dana Ron, and Ronitt Rubinfeld. A local algorithm for constructing spanners in minor-free graphs. In Klaus Jansen, Claire Mathieu, José D. P. Rolim, and Chris Umans, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2016, September 7-9, 2016, Paris, France, volume 60 of LIPIcs, pages 38:1–38:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
• [20] Reut Levi, Ronitt Rubinfeld, and Anak Yodpinyanee. Local computation algorithms for graphs of non-constant degrees. Algorithmica, 77(4):971–994, 2017.
• [21] Yishay Mansour and Shai Vardi. A local computation approximation scheme to maximum matching. In Prasad Raghavendra, Sofya Raskhodnikova, Klaus Jansen, and José D. P. Rolim, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21-23, 2013. Proceedings, volume 8096 of Lecture Notes in Computer Science, pages 260–273. Springer, 2013.
• [22] Merav Parter, Ronitt Rubinfeld, Ali Vakilian, and Anak Yodpinyanee. Local computation algorithms for spanners. In Avrim Blum, editor, 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, volume 124 of LIPIcs, pages 58:1–58:21. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2019.
• [23] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 223–238, 2011.
• [24] Michael E. Saks and C. Seshadhri. Local monotonicity reconstruction. SIAM J. Comput., 39(7):2897–2926, 2010.

## Appendix A Missing proofs from Section 3

### a.1 Single-dimensional case

###### Proof of Claim 1.

It is easy to see that the function is monotone by induction. Assuming that for any , if then , we show that this property also holds for any . Indeed, by definition for any with . Similarly, for any with .

To see that for any , notice that if this is trivially true. We thus only need to argue that this is true when . In this case, there is some with , such that . Again by induction, and thus . ∎

###### Proof of Claim 2.

We first argue that it suffices to show the statement for functions that take values in instead of .

For a function , denote by the indicator function that . One can check that as the definition of only involves comparisons. Since the value , we get that .

We have . Therefore if for any boolean-valued function we show that then we obtain the required statement, as we can use for any .

We move on to prove the statement for functions taking values in . The description of the constructed becomes much simpler in this case. Starting from the interval the algorithm first selects an element uniformly at random. If , it sets for any and recursively solves the problem in the interval . If , it sets for any and recursively solves the problem in the interval .

Let be the interval at the current iteration. Let be the value of this set defined as

 V(l,r)=r∑x=lf(x)−r∑x=lE[f′(x)]

where the expectation is taken over the randomness of on the interval .

We will prove by induction on that . In the case that we will select the only point with probability 1, so . We assume that for any with .

Let be the uniformly random chosen point from . We distinguish two cases for . If we obtain

 r∑x=lf(x)−r∑x=lE[f′(x)|y]=y∑x=lf(x)=r∑x=l1{f(x)=0,f(y)=1,x≥y}

where the first equality follows, since conditional on , for and by the induction hypothesis . Similarly, if we obtain

 r∑x=lf(x)−r∑x=lE[f