## 1 Introduction

We recently hypothesized [2] that an efficient form of computational learning underlies general-purpose, non-local, noise-tolerant optimization in genetic algorithms with uniform crossover (UGAs). The hypothesized computational efficiency, *implicit concurrent multivariate effect evaluation*—*implicit concurrency* for short—is broad and versatile, and carries significant implications for efficient large-scale, general-purpose global optimization in the presence of noise, and, in turn, for large-scale machine learning. In this paper, we describe implicit concurrency and explain how it can power general-purpose, non-local, noise-tolerant optimization. We then establish that implicit concurrency is a bona fide form of efficient computational learning by using it to obtain close to optimal bounds on the query and time complexity of an algorithm that solves a constrained version of a problem from the computational learning literature: learning parities with a noisy membership query oracle

[18, 6].

## 2 Implicit Concurrent Multivariate Effect Evaluation

First, a brief primer on schemata and schema partitions [13]: Let $S=\{0,1\}^\ell$ be a search space consisting of binary strings of length $\ell$, and let $I$ be some set of indices between $1$ and $\ell$, i.e. $I\subseteq\{1,\ldots,\ell\}$. Then $I$ represents a partition of $S$ into $2^{|I|}$ subsets called schemata (singular schema) as in the following example: Suppose $\ell=5$ and $I=\{1,2,4\}$. Then $I$ partitions $S$ into eight schemata:

| 00*0* | 00*1* | 01*0* | 01*1* | 10*0* | 10*1* | 11*0* | 11*1* |
|---|---|---|---|---|---|---|---|
| 00000 | 00010 | 01000 | 01010 | 10000 | 10010 | 11000 | 11010 |
| 00001 | 00011 | 01001 | 01011 | 10001 | 10011 | 11001 | 11011 |
| 00100 | 00110 | 01100 | 01110 | 10100 | 10110 | 11100 | 11110 |
| 00101 | 00111 | 01101 | 01111 | 10101 | 10111 | 11101 | 11111 |

where the symbol $*$ stands for 'wildcard'. Partitions of this type are called schema partitions. As we've already seen, schemata can be expressed using templates, for example, $10{*}1{*}$. The same goes for schema partitions. For example, $\#\#{*}\#{*}$ denotes the schema partition represented by the index set $\{1,2,4\}$; the one shown above. Here the symbol $\#$ stands for 'defined bit'. The *fineness order* of a schema partition is simply the cardinality of the index set that defines the partition, which is equivalent to the number of $\#$ symbols in the schema partition's template (in our running example, the fineness order is 3). Clearly, schema partitions of lower fineness order are coarser than schema partitions of higher fineness order.
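
The correspondence between an index set and the schemata of its partition is mechanical, and can be sketched in a few lines of Python (the function name is ours, for illustration):

```python
from itertools import product

def schemata(index_set, length):
    """Enumerate the templates of the schema partition given by index_set.

    Positions in index_set (1-based) are 'defined bits'; every other
    position is a wildcard, written '*'.
    """
    defined = sorted(index_set)
    templates = []
    for bits in product("01", repeat=len(defined)):
        template = ["*"] * length
        for pos, bit in zip(defined, bits):
            template[pos - 1] = bit
        templates.append("".join(template))
    return templates

# The running example: length-5 strings, I = {1, 2, 4}
print(schemata({1, 2, 4}, 5))
# → ['00*0*', '00*1*', '01*0*', '01*1*', '10*0*', '10*1*', '11*0*', '11*1*']
```

An index set of cardinality $o$ always yields $2^o$ templates, matching the fineness-order discussion above.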

We define the *effect* of a schema partition to be the variance of the average fitness values of the constituent schemata under sampling from the uniform distribution over each schema in the partition (we use variance because it is a well-known measure of dispersion; other measures of dispersion may well be substituted here without affecting the discussion). So, for example, the effect of the schema partition $\#\#{*}\#{*}$ is

$$\frac{1}{8}\sum_{i=0}^{1}\sum_{j=0}^{1}\sum_{k=0}^{1}\Big(F(ij{*}k{*})-F({*}{*}{*}{*}{*})\Big)^2$$

where the operator $F$ gives the average fitness of a schema under sampling from the uniform distribution.
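
For small $\ell$ the effect can be computed by exhaustive enumeration. A minimal Python sketch, using an arbitrary stand-in fitness function (all names here are ours):

```python
from itertools import product
from statistics import mean, pvariance

def effect(index_set, length, fitness):
    """Variance of the schemata's average fitnesses under uniform sampling.

    index_set uses 1-based positions, matching the running example.
    """
    # Group every point of the search space by its schema, i.e. by the
    # values of the bits at the defined positions.
    defined = sorted(i - 1 for i in index_set)
    buckets = {}
    for point in product((0, 1), repeat=length):
        key = tuple(point[i] for i in defined)
        buckets.setdefault(key, []).append(fitness(point))
    # The effect is the (population) variance of the per-schema averages;
    # since all schemata have equal size, their grand mean equals the
    # average fitness of the whole search space.
    return pvariance([mean(vals) for vals in buckets.values()])

# Toy fitness: number of ones in the string.
f = lambda x: sum(x)
print(effect({1, 2, 4}, 5, f))  # → 0.75
```

Because every one-bit counts equally toward this toy fitness, each defined bit shifts its schema's average by exactly one, which is what the variance above picks up.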

Let $[S]_I$ denote the schema partition represented by some index set $I$. Consider the following intuition pump [4] that illuminates how effects change with the coarseness of schema partitions. For some large $\ell$, consider a search space $S=\{0,1\}^\ell$, and let $I=\{1,\ldots,\ell\}$. Then $[S]_I$ is the finest possible partition of $S$; one where each schema in the partition has just one point. Consider what happens to the effect of $[S]_I$ as we remove elements from $I$. It is easily seen that the effect of $[S]_I$ decreases monotonically. Why? Because we are averaging over points that used to be in separate partitions. Secondly, observe that the number of schema partitions of fineness order $o$ is $\binom{\ell}{o}$. Thus, when $o\ll\ell$, the number of schema partitions with fineness order $o$ will grow very fast with $o$ (sub-exponentially to be sure, but still very fast). For example, when $\ell=10^6$, the number of schema partitions of fineness order $4$ and $5$ are on the order of $10^{22}$ and $10^{27}$ respectively.

The point of this exercise is to develop the following intuition: when $\ell$ is large, a search space will have vast numbers of coarse schema partitions, but most of them will have negligible effects due to averaging. In other words, while coarse schema partitions are numerous, ones with non-negligible effects are rare. Implicit concurrent multivariate effect evaluation is a capacity for scalably learning (with respect to $\ell$) small numbers of coarse schema partitions with non-negligible effects. It amounts to a capacity for efficiently performing vast numbers of concurrent effect/no-effect multivariate analyses to identify small numbers of interacting loci.

### 2.1 Use (and Abuse) of Implicit Concurrency

Assuming implicit concurrency is possible, how can it be used to power efficient general-purpose, non-local, noise-tolerant optimization? Consider the following heuristic: use implicit concurrency to identify a coarse schema partition $[S]_I$ with a significant effect. Now limit future search to the schema in this partition with the highest average sampling fitness. Limiting future search in this way amounts to permanently setting the bits at each locus whose index is in $I$ to some fixed value, and performing search over the remaining loci. In other words, picking a schema effectively yields a new, lower-dimensional search space. Importantly, coarse schema partitions in the new search space that had negligible effects in the old search space may have detectable effects in the new space. Use implicit concurrency to pick one and limit future search to a schema in this partition with the highest average sampling fitness. Recurse.

Such a heuristic is *non-local* because it does not make use of neighborhood information of any sort. It is *noise-tolerant* because it is sensitive only to the *average* fitness values of coarse schemata. We claim it is *general-purpose*, firstly, because it relies on a very weak assumption about the distribution of fitness over a search space—the existence of *staggered conditional effects* [2]; and secondly, because it is an example of a *decimation heuristic*, and as such is in good company. Decimation heuristics such as Survey Propagation [12, 10] and Belief Propagation [11], when used in concert with local search heuristics (e.g. *WalkSat* [17]), are state-of-the-art methods for solving large instances of a number of NP-hard combinatorial optimization problems close to their solvability/unsolvability thresholds.
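
As a rough illustration (not the UGA mechanism itself), the heuristic above can be sketched in Python, with the effect-identification step that implicit concurrency is hypothesized to supply crudely approximated by brute-force search over candidate index sets:

```python
import random
from itertools import combinations
from statistics import mean, pvariance

def decimate(fitness, length, order=2, samples=200, threshold=0.01, rng=random):
    """Greedy decimation: repeatedly fix the bits of the best schema in a
    high-effect coarse partition, then recurse over the remaining loci."""
    fixed = {}                                  # locus index -> fixed bit
    free = list(range(length))
    while len(free) >= order:
        best = None
        for idx in combinations(free, order):   # candidate coarse partitions
            buckets = {}
            for _ in range(samples):            # estimate schema averages
                x = dict(fixed)
                for i in free:
                    x[i] = rng.randint(0, 1)
                key = tuple(x[i] for i in idx)
                buckets.setdefault(key, []).append(
                    fitness([x[i] for i in range(length)]))
            avgs = {k: mean(v) for k, v in buckets.items()}
            eff = pvariance(list(avgs.values()))
            if eff > threshold and (best is None or eff > best[0]):
                best = (eff, idx, max(avgs, key=avgs.get))
        if best is None:                        # no detectable effect remains
            break
        _, idx, schema = best
        for i, b in zip(idx, schema):           # fix the winning schema's bits
            fixed[i] = b
        free = [i for i in free if i not in idx]
    return fixed
```

The brute-force loop over `combinations(free, order)` is exactly the cost that implicit concurrency is conjectured to avoid; the sketch only makes the recursion itself concrete.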

The hyperclimbing hypothesis [2] posits that by and large, the heuristic described above *is* the abstract heuristic that UGAs implement—or as is the case more often than not, *misimplement* (it stands to reason that a “secret sauce” computational efficiency that stays secret will not be harnessed fully). One difference is that in each “recursive step”, a UGA is capable of identifying *multiple* (some small number greater than one) coarse schema partitions with non-negligible effects; another difference is that for each coarse schema partition identified, the UGA does not always pick the schema with the highest average sampling fitness.

### 2.2 Needed: The Scientific Method, Practiced With Rigor

Unfortunately, several aspects of the description above are not crisp. (How small is small? What constitutes a “negligible” effect?) Indeed, a formal statement of the above, much less a formal proof, is difficult to provide. Evolutionary Algorithms are typically constructed with biomimicry, not formal analyzability, in mind. This makes it difficult to formally state/prove complexity theoretic results about them without making simplifying assumptions that effectively neuter the algorithm or the fitness function used in the analysis.

We have argued previously [2] that the adoption of the scientific method [15] is a necessary and appropriate response to this hurdle. Science, *rigorously practiced*, is, after all, the foundation of many a useful field of engineering. A hallmark of a rigorous science is the ongoing making and testing of predictions. Predictions found to be true lend credence to the hypotheses that entail them. The more unexpected a prediction (in the absence of the hypothesis), the greater the credence owed the hypothesis if the prediction is validated [15, 14].

The work that follows validates a prediction that is straightforwardly entailed by the hyperclimbing hypothesis, namely that a UGA that uses a noisy membership query oracle as a fitness function should be able to efficiently solve the learning parities problem for small values of $k$, the number of essential attributes, and non-trivial values of $\eta$, where $\eta$ is the probability that the oracle makes a classification error (returns a 1 instead of a 0, or vice versa). Such a result is completely unexpected in the absence of the hypothesis.

### 2.3 Implicit Concurrency ≠ Implicit Parallelism

Given its name and description in terms of concepts from schema theory, implicit concurrency bears a superficial resemblance to *implicit parallelism*, the hypothetical “engine” presumed, under the beleaguered *building block hypothesis* [7, 16], to power optimization in genetic algorithms with strong linkage between genetic loci. The two hypothetical phenomena are emphatically not the same. We preface a comparison between the two with the observation that, strictly speaking, implicit concurrency and implicit parallelism pertain to different kinds of genetic algorithms—ones with tight linkage between genetic loci, and ones with no linkage at all. This difference makes these hypothetical engines of optimization non-competing from a scientific perspective. Nevertheless, a comparison between the two is instructive for what it reveals about the power of implicit concurrency.

The *unit of implicit parallel evaluation* is a schema $\gamma$ belonging to a coarse schema partition $[S]_I$ that satisfies the following *adjacency constraint*: the elements of $I$ are adjacent, or close to adjacent. The *evaluated characteristic* is the average fitness of samples drawn from $\gamma$, and the *outcome of evaluation* is as follows: the frequency of $\gamma$ rises if its evaluated characteristic is greater than the evaluated characteristics of the other schemata in $[S]_I$.

The *unit of implicit concurrent evaluation*, on the other hand, is the coarse schema partition $[S]_I$ itself, where the elements of $I$ are unconstrained. The *evaluated characteristic* is the effect of $[S]_I$, and the *outcome of evaluation* is as follows: if the effect of $[S]_I$ is non-negligible, then a schema in this partition with an above average sampling fitness goes to fixation, i.e. the frequency of this schema in the population goes to 1.

Implicit concurrency derives its superior power vis-à-vis implicit parallelism from the absence of an adjacency constraint. For example, for chromosomes of length $\ell$, the number of schema partitions with fineness order 7 is $\binom{\ell}{7}\in\Omega(\ell^7)$ [3]. The enforcement of the adjacency constraint brings this number down to $O(\ell)$. Schema partitions of fineness order 7 contain a constant number of schemata ($2^7=128$ to be exact). Thus for fineness order 7, the number of units of evaluation under implicit parallelism is in $O(\ell)$, whereas the number of units of evaluation under implicit concurrency is in $\Omega(\ell^7)$.
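
The gap between the two counts is easy to check numerically (treating the adjacency-constrained count, for concreteness, as the number of windows of seven consecutive loci):

```python
from math import comb

l = 10**6                    # chromosome length
unconstrained = comb(l, 7)   # any 7 of the l loci may be chosen
adjacent = l - 7 + 1         # windows of 7 consecutive loci

print(f"unconstrained: {unconstrained:.3e}")  # on the order of 10^38
print(f"adjacent:      {adjacent}")           # on the order of 10^6
```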

## 3 The Learning Model

For any positive integer $m$, let $[m]$ denote the set $\{1,\ldots,m\}$. For any set $I\subseteq[m]$ such that $|I|=k$ and any binary string $x\in\{0,1\}^m$, let $x|_I$ denote the string $y\in\{0,1\}^k$ such that for any $j\in[k]$, $y_j=x_i$ iff $i$ is the $j^{\text{th}}$ smallest element of $I$ (i.e. $x|_I$ strips out the bits of $x$ whose indices are not in $I$). An *essential attribute oracle with random classification error* is a tuple $\langle m,k,I,f,\eta\rangle$ where $m$ and $k$ are positive integers such that $k\le m$ and $I\subseteq[m]$ with $|I|=k$, $f$ is a boolean function over $\{0,1\}^k$ (i.e. $f:\{0,1\}^k\to\{0,1\}$), and $\eta$, the random classification error parameter, obeys $0\le\eta<\frac{1}{2}$. For any input bitstring $x\in\{0,1\}^m$, the oracle returns

$$\begin{cases}f(x|_I)&\text{with probability }1-\eta\\1-f(x|_I)&\text{with probability }\eta\end{cases}$$

Clearly, the value returned by the oracle depends only on the bits of the attributes whose indices are given by the elements of $I$. These attributes are said to be *essential*. All other attributes are said to be *non-essential*. The *concept space* of the learning model is the set $\{0,1\}^m$, and the *target concept* given the oracle $\langle m,k,I,f,\eta\rangle$ is the element $c\in\{0,1\}^m$ such that for any $i\in[m]$, $c_i=1$ iff $i\in I$. The hypothesis space is the same as the concept space, i.e. $\{0,1\}^m$.
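
The oracle is straightforward to simulate. A minimal sketch (class and method names are ours):

```python
import random

class NoisyEssentialAttributeOracle:
    """Applies f to the essential bits of x; flips the answer with prob. eta."""

    def __init__(self, m, index_set, f, eta, rng=None):
        assert 0 <= eta < 0.5
        self.m, self.I, self.f, self.eta = m, sorted(index_set), f, eta
        self.rng = rng or random.Random()

    def query(self, x):
        assert len(x) == self.m
        value = self.f(tuple(x[i - 1] for i in self.I))  # the bits x|_I (1-based)
        if self.rng.random() < self.eta:                 # random classification error
            return 1 - value
        return value

# Parity over the essential attributes {2, 5, 6} of 8-bit inputs
parity = lambda bits: sum(bits) % 2
oracle = NoisyEssentialAttributeOracle(8, {2, 5, 6}, parity, eta=0.1)
```

Only the bits at the essential indices influence the (noise-free) answer, which is what makes the non-essential attributes invisible to the learner.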

###### Definition 1 (Approximately Correct Learning).

Given some positive integer $k$, some boolean function $f:\{0,1\}^k\to\{0,1\}$, and some random classification error parameter $\eta\in[0,\frac{1}{2})$, we say that the learning problem $\langle k,f,\eta\rangle$ can be *approximately correctly solved* if there exists an algorithm $\mathcal{A}$ such that for any oracle $\langle m,k,I,f,\eta\rangle$ and any $\delta\in(0,\frac{1}{2}]$, $\mathcal{A}$ returns a hypothesis $h$ such that $\Pr[h\neq c]\le\delta$, where $c$ is the target concept.

## 4 Our Result and Approach

Let $\oplus_k$ denote the parity function over $k$ bits. For constrained values of $k$ and $\eta$, we give an algorithm that approximately correctly learns $\langle k,\oplus_k,\eta\rangle$ with query and time complexity close to optimal. Our argument relies on the use of hypothesis testing to reject two null hypotheses, each at a Bonferroni adjusted significance level of $\alpha/2$. In other words, we rely on a hypothesis testing based rejection of a global null hypothesis at the $\alpha$ level of significance. In layman's terms, our result is based on conclusions that have a $1$ in $1/\alpha$ chance of being false.
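
The statistical step can be illustrated with exact binomial tail tests; the observation counts below are placeholders for illustration, not the paper's data:

```python
from math import comb

def binom_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p): a one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def reject_global_null(observations, alpha=0.01):
    """Reject the global null iff every individual null is rejected at the
    Bonferroni adjusted level alpha / (number of tests)."""
    adjusted = alpha / len(observations)
    return all(binom_tail(n, k, p_null) < adjusted
               for (n, k, p_null) in observations)

# Two tests: 80/100 and 75/100 successes against a null success rate of 0.5
print(reject_global_null([(100, 80, 0.5), (100, 75, 0.5)]))  # → True
```

Rejecting each of two nulls at level $\alpha/2$ bounds the probability that the global conclusion is false by $\alpha$, by the union bound.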
