On the Bounds of Function Approximations

08/26/2019 · Adrian de Wynter · Amazon

Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. For this, we first reformulate the function approximation problem in terms of sequences of functions, and we call it the Function Approximation (FA) problem; then we show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We also show that such an error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and further describe a stronger version of this approach: the Approximate Architecture Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time.


1 Introduction

The typical machine learning task can be abstracted as the problem of finding the set of parameters of a computable function such that it approximates an underlying probability distribution over seen and unseen examples [Goodfellow-et-al-2016]. Said function is often hand-designed, and is the subject of the great majority of current machine learning research. It is well established that the choice of function heavily influences its approximation capability [ArchitecturesBengio, NFLWolpert, XinYao], and considerable work has gone into automating the process of finding such a function for a given task [CarpenterAndGrossberg, Carvalho, Golovin2017GoogleVA]. In the context of neural networks, this task is known as Neural Architecture Search (NAS), and it involves searching for the best-performing combination of neural network components and parameters from a set, also known as the search space. Although promising, little work has been done on the analysis of its viability with respect to its computation-theoretical bounds [Elsken2018NeuralAS]. Since NAS strategies tend to be expensive in terms of their hardware requirements [Jin2018AutoKerasEN, Real2017LargeScaleEO], research emphasis has been placed on optimizing search algorithms [Elsken2018NeuralAS, MetaDesignOfFFNNs], even though the search space is still manually designed [Elsken2018NeuralAS, liu2018hierarchical, liu2019darts, Zoph2016NeuralAS]. Without a better understanding of the mathematical confines governing NAS, it is unlikely that these strategies will efficiently solve new problems or present reliably high performance, thus leading to complex systems that still rely on manually engineered architectures and search spaces.

Theoretically, learning has been formulated as a function approximation problem where the approximation is done through the optimization of the parameters of a given function [CybenkoSigmoids, Goodfellow-et-al-2016, PoggioTheoryOfNets, PoggioNetworks, Valiant1984ATO], with strong results in the area of neural networks in particular [CybenkoSigmoids, FunahashiApprox, HornikApprox2, ShaferRNNs]. On the other hand, NAS is often regarded as a search problem with an optimality criterion [Carvalho, Elsken2018NeuralAS, Real2017LargeScaleEO, SunEtAl, XinYao], within a given search space. The choice of such a search space is critical, yet strongly heuristic [Elsken2018NeuralAS]. Since we aim to obtain better insight into how the process of finding an optimal architecture can be improved in relation to the search space, we hypothesize that NAS can be enunciated as a function approximation problem. The key observation that motivates our work is that all computable functions can be expressed in terms of combinations of members of certain sets, better known as models of computation. Examples of these are the μ-recursive functions, Turing Machines, and, of relevance to this paper, a particular set of neural network architectures [Neto1997TuringUO].

Thus, in this study we reformulate the function approximation problem as the task of, for a given search space, finding the procedure that outputs the computable sequence of functions, along with their parameters, that best approximates any given input function. We refer to this reformulation as the Function Approximation (FA) problem, and regard it as a very general computational problem, akin to building a fully automated machine learning pipeline where the user provides a series of tasks and the algorithm returns trained models for each input (throughout this paper, the problem of data selection is not considered, and is simply assumed to be an input to our solution strategies). This approach yields promising results in terms of the conditions under which the FA problem has optimal solutions, and about the ability of both machine learning and NAS to solve the FA problem.

1.1 Technical Contributions

The main contribution of this paper is a reformulation of the function approximation problem in terms of sequences of functions, and a framework within the context of the theory of computation to analyze it. Said framework is quite flexible, as it does not rely on a particular model of computation and can be applied to any Turing-equivalent model. We leverage its results, along with well-known results of computer science, to prove that it is not possible to devise a procedure that approximates all functions everywhere to zero error. However, we also show that, if the smallest class of functions along with the operators for the chosen model of computation are present in the search space, it is possible to attain an error that is globally minimal.

Additionally, we tie said framework to the field of machine learning, and analyze in a formal manner three solution strategies for FA: the Machine Learning (ML) problem, the Architecture Search problem (ASP), and the less-strict version of ASP, the Approximate Architecture Search problem (a-ASP). We analyze the feasibility of all three approaches in terms of the bounds described for FA, and their ability to solve it. In particular, we demonstrate that ML is an ineffective solution strategy for FA, and point out that ASP is the best approach in terms of generalizability, although it is intractable in terms of time complexity. Finally, by relating the results from this paper, along with the existing work in the literature, we describe the conditions under which a-ASP is able to solve the FA problem as well as ASP.

1.2 Outline

We begin by reviewing the existing literature in Section 2. In Section 3 we introduce FA, and analyze the general properties of this problem in terms of its search space. Then, in Section 4 we relate the framework to machine learning as a mathematical problem, and show that it is a weak solution strategy for FA, before defining a stronger approach (ASP) and its computationally tractable version (a-ASP). We conclude in Section 5 with a discussion of our work.

2 Related Work

The problem of approximating functions and its relation to neural networks can be found formulated explicitly in [PoggioNetworks], and it is also mentioned often when defining machine learning as a task, for example in [Bartlett, bendavidzfc, ArchitecturesBengio, Goodfellow-et-al-2016, Valiant1984ATO]. However, it is defined as a parameter optimization problem for a predetermined function. This perspective is also covered in our paper, yet it is much closer to the ML approach than to FA. For FA, as defined in this paper, it is central to find the sequence of functions which minimizes the approximation error.

Neural networks as function approximators are well understood, and there is a trove of literature available on the subject. A non-exhaustive list of examples includes the studies found in [CybenkoSigmoids, FunahashiApprox, HornikApprox2, HornikApprox, LeshnoMLP, ParkAndSandberg, pmlr-v80-pham18a, PoggioNetworks, ShaferRNNs, SiegelAndXu, SunEtAl]. It is important to point out that the objective of this paper is not to prove that neural networks are function approximators, but rather to provide a theoretical framework from which to understand NAS in the contexts of machine learning, and computation in general. However, neural networks were shown to be Turing-equivalent in [Neto1997TuringUO, Siegelmann1991TuringCW, Siegelmann1995OnTC], and thus they are extremely relevant to this study.

NAS as a metaheuristic is also well explored in the literature, and its application to deep learning has grown rapidly thanks to the widespread availability of powerful computers and the interest in end-to-end machine learning pipelines. There is, however, a long-standing body of research in this area, and the list of works presented here is by no means complete. Some papers that deal with NAS in an applied fashion are the works found in [AngelineetAl, CarpenterAndGrossberg, Carvalho, Luo2018NeuralAO, SchafferCaruana, stanley:naturemi19, StanleyEvolvingNNs, NIPS1988_149], while explorations in a formal fashion of NAS and metaheuristics in general can also be found in [Baxter, Carvalho, SiegelAndXu, MetaOptimizationAlgoAnalysis, XinYao]. There is also interest in the problem of creating an end-to-end machine learning pipeline, also known as AutoML. Some examples are studies such as the ones in [NIPS2015_5872, he2018amc, Jin2018AutoKerasEN, Wong:2018:TLN:3327757.3327928]. The FA problem is similar to AutoML, but it does not include the data preprocessing step commonly associated with such systems. Additionally, the formal analysis of NAS tends to treat it as a search problem, rather than as a function approximation problem.

The complexity theory of learning and neural networks has been explored as well. The reader is referred to the recent survey from [ComplexityTheoryNNs], and to [Bartlett, BlumerLearnabilityVC, CybenkoTheory, RegularizationAndANN, Vapnik1995TheNO]. Leveraging the group-like structure of models of computation is done in [RabinComputability], and the Blum axioms [blum] are a well-known framework for the theory of computation in a model-agnostic setting. It was also shown in [Bshouty] that, under certain conditions, it is possible to compose some learning algorithms to obtain more complex procedures. Bounds in terms of the generalization error were proven for convolutional neural networks in [cnnsgoogle]. None of the papers mentioned, however, apply directly to FA and NAS in a setting agnostic to models of computation, and the key insights of our work, drawn from the analysis of FA and its solution strategies, are, to the best of our knowledge, not covered in the literature. Finally, the Probably Approximately Correct (PAC) learning framework [Valiant1984ATO] is a powerful theory for the study of learning problems. It is a slightly different problem than FA, as the former has the search space abstracted out, while the latter concerns itself with finding a sequence that minimizes the error by searching through combinations of explicitly defined members of the search space.

3 A Formulation of the Function Approximation Problem

In this section we define the FA problem as a mathematical task whose goal is–informally–to find a sequence of functions whose behavior is closest to an input function. We then perform a short analysis of the computational bounds of FA, and show that it is computationally infeasible to design a solution strategy that approximates all functions everywhere to zero error.

3.1 Preliminaries on Notation

Let $\mathcal{C}$ be the set of all total computable functions. Across this paper we will refer to the finite set of elementary functions, $F_E$, as the smallest class of functions, along with their operators, of some Turing-equivalent model of computation.

Let $F$ be a set of functions $f_i\colon X_i \to Y_i$ defined over some sets $X_i, Y_i$, such that $F$ is indexed by a set $I$, and that $F \subseteq \mathcal{C}$. Also let $f_{i_k} \circ \dots \circ f_{i_1}$ be a sequence of elements of $F$ applied successively and such that $Y_{i_j} \subseteq X_{i_{j+1}}$ for some $i_1, \dots, i_k \in I$. We will utilize the abbreviated notation $f_{i_1, \dots, i_k}$ to denote such a sequence; and we will use $F_k$ to describe the set of all $k$-or-less long possible sequences of functions drawn from said $F$, such that $F \subseteq F_k$.
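To make the notation concrete, the following is a minimal Python sketch, offered for illustration only: it enumerates a toy $F_k$ and applies a sequence to an input, under the assumptions that functions are represented as unary callables and that a sequence is a tuple applied left to right. The names `sequences_up_to` and `apply_sequence` are ours and not part of the formal framework.

```python
from itertools import product

def sequences_up_to(F, k):
    """Enumerate F_k: every tuple of at most k functions drawn from F.

    Each tuple (f_1, ..., f_j) stands for the functions applied
    successively, left to right.
    """
    for j in range(1, k + 1):
        for seq in product(F, repeat=j):
            yield seq

def apply_sequence(seq, x):
    """Apply a sequence of functions successively to the input x."""
    for f in seq:
        x = f(x)
    return x

# Toy search space: three elementary real-valued functions.
F = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
print(sum(1 for _ in sequences_up_to(F, 3)))   # 3 + 9 + 27 = 39 sequences
print(apply_sequence((F[0], F[1]), 5))         # (5 + 1) * 2 = 12
```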

For consistency purposes, throughout this paper we will be using Zermelo-Fraenkel with the Axiom of Choice (ZFC) set theory. Finally, for simplicity of our analysis we will only consider continuous, real-valued functions, and beginning in Section 3.3, only computable functions.

3.2 The FA Problem

Prior to formally defining the FA problem, we must be able to quantify the behavioral similarity of two functions. This is done through the approximation error of a function:

Definition 1 (The approximation error)

Let $f\colon X \to Y$ and $g\colon X' \to Y'$ be two functions. Given a nonempty subset $D \subseteq X \cap X'$, the approximation error of a function $g$ to a function $f$ is a procedure $\epsilon_D(f, g)$ which outputs $0$ if $g$ is equal to $f$ with respect to some metric $m$ across all of $D$, and a positive number otherwise:

$$\epsilon_D(f, g) = \begin{cases} 0 & \text{if } m(f(x), g(x)) = 0 \text{ for all } x \in D, \\ c > 0 & \text{otherwise,} \end{cases} \qquad (1)$$

where we assume that, for the case where $g$ is undefined at some $x \in D$, $m(f(x), g(x)) > 0$.
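As an illustration of Definition 1, the sketch below evaluates the approximation error over a finite sample $D$; it assumes, for the example only, that the metric $m$ is the worst-case absolute difference and that points where $g$ is undefined contribute an infinite (hence positive) error.

```python
def approximation_error(f, g, D, metric=lambda a, b: abs(a - b)):
    """Approximation error of g to f over a finite, nonempty subset D.

    Returns 0 when g matches f under the metric everywhere on D, and a
    positive number (here, the worst-case discrepancy) otherwise.
    """
    worst = 0.0
    for x in D:
        try:
            worst = max(worst, metric(f(x), g(x)))
        except Exception:        # g undefined at x: positive (infinite) error
            return float("inf")
    return worst

# Example: approximating the squaring function with a linear one on D = {0, 1, 2}.
f = lambda x: x * x
g = lambda x: 2 * x - 1
print(approximation_error(f, g, [0, 1, 2]))   # max(|0-(-1)|, |1-1|, |4-3|) = 1
```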

Definition 2 (The FA Problem)

For any input function $h$, given a function set (the search space) $F$, an integer $k \geq 1$, and a nonempty set $D \subseteq \operatorname{dom}(h)$, find the sequence of functions $f_{i_1, \dots, i_j} \in F_k$, such that $\epsilon_D(h, f_{i_1, \dots, i_j})$ is minimal among all members of $F_k$ and all $j \leq k$.

The FA problem, as stated in Definition 2, makes no assumptions regarding the characterization of the search space, and closely follows the definition in terms of parameter optimization from [PoggioTheoryOfNets, PoggioNetworks]. However, it emphasizes that the approximation of a function should be given by a sequence of functions.

If the input function were continuous and multivariate, we know from [Kolmogorov, Ostrand] that there exists at least one exact (i.e., zero approximation error) representation in terms of a sequence of single-variable, continuous functions. If such single-variable, continuous functions were present in the search space, one would expect that the FA problem could be solved to zero error for all continuous multivariate inputs, by simply comparing and returning the right representation (with the possible exception of the results from [Vitushkin]). However, it is infeasible to devise a generalized algorithmic procedure that outputs such a representation:

Theorem 3.1

There is no computable procedure for FA that approximates all continuous, real-valued functions to zero error, across their entire domain.

Proof

Solution strategies for FA are parametrized by the sequence length $k$, the subset of the domain $D$, and the search space $F$.

Assume $F$ is infinite (in the limiting case, $F = \mathcal{C}$). The input function $h$ may be either computable or uncomputable. If $h$ is uncomputable, by definition it can only be estimated to within its computable range, and hence its approximation error is nonzero. If $h$ is a computable function, we are guaranteed the existence of at least one function within $F_k$ which has zero approximation error: $h$ itself. Nonetheless, determining the existence of such a function is an undecidable problem. To show this, it suffices to note that it reduces to the problem of determining the equivalence of two halting Turing Machines by asking whether they accept the same language, which is undecidable.

When $D$ or $k$ are infinite, there is no guarantee that a procedure solving FA will terminate for all inputs.

When $k$, $D$, or $F$ are finite, there will always be functions outside of the scope of the procedure that can only be approximated to a nonzero error.

Therefore, there cannot be a procedure for FA that approximates all functions, let alone all computable functions, to zero error for their entire domain. ∎

It is a well-known result of computer science that neural networks [CybenkoSigmoids, FunahashiApprox, Goodfellow-et-al-2016, HornikApprox2, HornikApprox], and PAC learning algorithms [Valiant1984ATO], are able to approximate a large class of functions to an arbitrary, non-zero error. However, Theorem 3.1 does not make any assumptions regarding the model of computation used, and thus it serves as a more generalized statement of these results.

For the rest of this paper we will limit ourselves to the case where $k$, $D$, and $F$ are finite, and the elements of $F$ are computable functions.

3.3 A Brief Analysis of the Search Space

We have shown that solutions to FA can only be found in terms of finite sequences built from a finite search space, and that their error with respect to the input function is, in general, nonzero. It is worth analyzing under which conditions these sequences will present the smallest possible error.

For this, we note that any solution strategy for FA will have to first construct at least one sequence $f_{i_1, \dots, i_j}$, and then compute its error against the input function $h$. It could be argued that this “bottom-up” approach is not the most efficient, and one could attempt to “factor” a function in a given model of computation that has explicit reduction formulas, such as the Lambda calculus. This, unfortunately, is not possible, as the problem of determining the reduction of a function in terms of its elementary functions is well known to be undecidable [ChurchAnUP]. However, the idea of “factoring” a function can still be leveraged to show that, if the set of elementary functions $F_E$ is present in the search space $F$, any sufficiently clever procedure will be able to attain the smallest possible theoretical error over $F_k$, for any given input function $h$:

Theorem 3.2

Let $F$ be a search space such that it contains the set of elementary functions, $F_E \subseteq F$. Then, for any input function $h$, there exists at least one sequence $f_{i_1, \dots, i_j} \in F_k$ with the smallest approximation error among all possible computable functions of sequence length up to and including $k$.

Proof

By definition, $F_E$ can generate all possible computable functions. If $F_E \not\subseteq F$, then $F_k \subsetneq (F \cup F_E)_k$, and so there exist input functions whose sequence with the smallest approximation error, $f_{i_1, \dots, i_j} \in (F \cup F_E)_k$, is not contained in $F_k$. ∎

In practice, constructing a space that contains $F_E$, and subsequently performing a search over it, can become a time-consuming task given that the number of possible members of $F_k$ grows exponentially with $k$. On the other hand, constructing a more “efficient” space that already contains the best possible sequence requires prior knowledge of the structure of a function relating to $h$; that is, the problem we are trying to solve in the first place. That being said, Theorem 3.2 implies that there must be a way to quantify the ability of a search space to generalize to any given function, without the need to explicitly include $F_E$. To achieve this, we first look at the ability of every sequence to approximate a function, by defining the information capacity of a sequence:

Definition 3 (The Information Capacity)

Let $f_{i_1, \dots, i_k}$ be a finite sequence, where every $f_{i_j}$ has associated a finite set of possible parameters $W_{i_j}$, and a restriction set in its domain, $D_{i_j} \subseteq X_{i_j}$, so that the next element in the sequence is a function with $f_{i_j}(D_{i_j}) \subseteq D_{i_{j+1}}$.

Then the information capacity of a sequence is given by the Cartesian product of the domain, parameters, and range of each $f_{i_j}$:

$$C(f_{i_1, \dots, i_k}) = \prod_{j=1}^{k} D_{i_j} \times W_{i_j} \times f_{i_j}(D_{i_j}) \qquad (2)$$

Note that the information capacity of a function is quite similar to its graph, but it makes an explicit relationship with its parameters. Specifically, in the case where $|W_{i_j}| = 1$ for every $f_{i_j}$ in some $f_{i_1, \dots, i_k}$, $C(f_{i_1, \dots, i_k})$ is determined solely by the restricted domains and ranges of its members.
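The following sketch is one possible, illustrative reading of Definition 3: each step is assumed to be a callable taking an input and a parameter, with explicitly given finite restriction and parameter sets, and the capacity is computed as the Cartesian product of the (domain, parameter, image) triples. The helper name `information_capacity` and the toy two-step sequence are ours.

```python
from itertools import product

def information_capacity(sequence):
    """Information capacity of a finite sequence, sketched as the Cartesian
    product of (restricted domain, parameter, image) triples of each step.

    `sequence` is a list of (f, D, W) entries: a two-argument callable f(x, w),
    its finite restricted domain D, and its finite parameter set W.
    """
    per_step = []
    for f, D, W in sequence:
        image = {f(x, w) for x in D for w in W}
        per_step.append(list(product(D, W, image)))
    return list(product(*per_step))

# Toy two-step sequence: an affine shift followed by a scaling. The second
# step's restriction set {1, 2, 3} contains the image of the first step.
step1 = (lambda x, w: x + w, [0, 1], [1, 2])      # image: {1, 2, 3}
step2 = (lambda x, w: w * x, [1, 2, 3], [0, 1])   # image: {0, 1, 2, 3}
capacity = information_capacity([step1, step2])
print(len(capacity))   # (2*2*3) * (3*2*4) = 12 * 24 = 288 tuples
```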

At first glance, Definition 3 could be seen as a variant of the VC dimension [BlumerLearnabilityVC, Vapnik1995TheNO], since both quantities attempt to measure the ability of a given function to generalize. However, the latter is designed to work on a fixed function, while our focus is on the problem of building such a function. A more in-depth discussion of this distinction, along with its application to the framework from this paper, is given in Section 4.1 and in Appendix 0.B.

A search space is comprised of one or more functions, and algorithmically we are more interested in the quantifiable ability of the search space to approximate any input function. Therefore, we define the information potential of a search space as follows:

Definition 4 (The Information Potential)

The information potential $P_k(F)$ of a search space $F$ is given by all the possible values its members can take for a given sequence length $k$:

$$P_k(F) = \bigcup_{f_{i_1, \dots, i_j} \in F_k} C(f_{i_1, \dots, i_j}) \qquad (3)$$

The definition of the information potential allows us to make the important distinction between comparing two search spaces $F$ and $F'$ containing the same function $f$, but defined over different parameter sets $W \neq W'$; and comparing $F$ and $F'$ with another space, $F''$, containing a different function $g$: the information potentials will be equivalent in the first case, $P_k(F) \simeq P_k(F')$, but not in the second, $P_k(F) \not\simeq P_k(F'')$.
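The sketch below illustrates Definition 4 on a toy search space, gathering the capacities of every sequence in $F_k$. It assumes, as one of several ways to satisfy Definition 3, that each step's restriction set is exactly the image of the previous step; the helper names are again illustrative rather than part of the framework.

```python
from itertools import product

def sequences_up_to(F, k):
    """All sequences of at most k steps drawn from the search space F."""
    for j in range(1, k + 1):
        yield from product(F, repeat=j)

def capacity(seq, D0):
    """Capacity of one sequence: (domain, parameter, image) triples per step,
    propagating the restricted domain D0 through the sequence."""
    triples, D = [], set(D0)
    for f, W in seq:
        image = {f(x, w) for x in D for w in W}
        triples.append(frozenset(product(D, W, image)))
        D = image                  # next step is restricted to this image
    return tuple(triples)

def information_potential(F, k, D0):
    """Information potential of F for sequence length k: all values the
    members of F_k can take, gathered as a set of capacities."""
    return {capacity(seq, D0) for seq in sequences_up_to(F, k)}

# Two parameterized members: a shift and a scaling, with tiny parameter sets.
F = [(lambda x, w: x + w, (0, 1)), (lambda x, w: w * x, (1, 2))]
print(len(information_potential(F, 2, D0=[0, 1])))   # 2 one-step + 4 two-step = 6
```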

For a given space $F$, as the sequence length $k$ grows to infinity, and if the search space includes the set of elementary functions, $F_E \subseteq F$, its information potential encompasses all computable functions:

$$\lim_{k \to \infty} P_k(F) = \bigcup_{g \in \mathcal{C}} C(g) \qquad (4)$$

In other words, the information potential of such an $F$ approaches the information capacity of a universal approximator, which, depending on the model of computation chosen, might be a universal Turing machine, or the universal function from [RogersComputability], to name a few.

In the next section, we leverage the results shown so far to evaluate three different procedures to solve FA, and show that there exists a best possible solution strategy.

4 The FA Problem in the Context of Machine Learning

In this section we relate the results from analyzing FA to the field of machine learning. First, we show that the machine learning task can be seen as a solution strategy for FA. We then introduce the Architecture Search Problem (ASP) as a theoretical procedure, and note that it is the best possible solution strategy for FA. Finally, we note that ASP is unviable in an applied setting, and define a more relaxed version of this approach: the Approximate Architecture Search Problem (a-ASP), which is analogous to the NAS task commonly seen in the literature.

4.1 Machine Learning as a Solver for FA

The Machine Learning (ML) problem, informally, is the task of approximating an input function through repeated sampling and the parameter search of a predetermined function. This definition is a simplified, abstracted out version of the typical machine learning task. It is, however, not new, and a brief search in the literature ([bendavidzfc, ArchitecturesBengio, Goodfellow-et-al-2016, PoggioTheoryOfNets]) can attest to the existence of several equivalent formulations. We reproduce it here for notational purposes, and constrain it to computable functions:

Definition 5 (The ML Problem)

For an unknown, continuous function $h$ defined over some domain $X$, given a finite subset $D \subset X$, a function $f$ with parameters drawn from some finite set $W$, and a metric function $m$, find a $w \in W$ such that $\epsilon_D(h, f_w)$ is minimal among all members of $W$.

As defined in Definition 2, any procedure solving FA is required to return the sequence that best approximates any given function. In the ML problem, however, such a sequence is already given to us. Even so, we can still reformulate ML as a solution strategy for FA. For this, let the search space be a singleton of the form $F = \{f\}$; set $m$ to be the metric function in the approximation error; and leave $D$ as it is. We then carry out a “search” over this space by simply picking $f$, and then optimizing the parameters $w \in W$ of $f$ with respect to the approximation error $\epsilon_D(h, f_w)$. We then return the function $f$ along with the parameters that minimize the error.
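The following sketch carries out this reformulation on a toy example: since the search space is a singleton, the only work left is a grid search over the parameter set $W$. The linear model, the grid, and the function names are assumptions made purely for illustration.

```python
def approximation_error(h, g, D, metric=lambda a, b: abs(a - b)):
    """Worst-case discrepancy between h and g over the finite subset D."""
    return max(metric(h(x), g(x)) for x in D)

def ml_solve(h, f, W, D):
    """ML as a solution strategy for FA: the search space is the singleton {f},
    so the 'search' reduces to optimizing the parameter w of f over W."""
    best_w = min(W, key=lambda w: approximation_error(h, lambda x: f(x, w), D))
    return best_w, approximation_error(h, lambda x: f(x, best_w), D)

# Toy example: fit a slope to h(x) = 3x + 1 on D = {0, 1, 2} with a linear model.
h = lambda x: 3 * x + 1
f = lambda x, w: w * x            # fixed architecture, parameterized by w
W = [1, 2, 3, 4]                  # finite parameter grid
print(ml_solve(h, f, W, D=[0, 1, 2]))   # returns w = 3, but the error stays at 1
                                        # because the missing intercept cannot be
                                        # recovered by tuning w alone
```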

Given that the search is performed over a single element of the search space, this is not an effective procedure in terms of generalizability. To see this, note that the procedure acts as intended, and “finds” the function that minimizes the approximation error between $h$ and any other function in the search space $F = \{f\}$. However, being able to approximate an input function in a single-element search space tells us nothing about the ability of ML to approximate other input functions, or even whether such an $f$ is the best function approximation for $h$ in the first place. In fact, we know by Theorem 3.2 that for a given sequence length $k$ and every $h$ there exists an optimal sequence, which may not be present in $F_k$.

Since we are constrained to a singleton search space, one could be tempted to build a search space with one single function that maximizes the information potential, such as the one described in Equation 4, say, by choosing $f$ to be a universal Turing Machine. There is one problem with this approach: it would mean that we need to take in as an input the encoding of the input function $h$, along with the subset of the domain $D$. If we were able to take the encoding of $h$ as part of the input, we would already know the function, and this would not be a function approximation problem in the first place. Additionally, we would only be able to evaluate the set of computable functions which take in as an argument their own encoding, as it, by definition, needs to be present in $D$.

In terms of the framework from this paper we can see that, no matter how we optimize the parameters of $f$ to fit new input functions, the information potential $P_k(\{f\})$ remains unchanged, and the error will remain bounded. This leads us to conclude that measuring a function’s ability to learn through its number of parameters [Goodfellow-et-al-2016, SontagVC, Vapnik1995TheNO] is a good approach for a fixed $f$ and a single input $h$, but incomplete in terms of describing its ability to generalize to other problems. This is of critical importance because, in an applied setting, even though nobody would attempt to use the same architecture for all possible learning problems, the choice of $f$ remains a crucial, and mostly heuristic, step in the machine learning pipeline.

The statements regarding the information potential of the search space are in accordance with the results in [NFLWolpert], where it was shown that–in the terminology of this paper–two predetermined sequences $f_{i_1, \dots, i_k}$ and $g_{i_1, \dots, i_k}$, when averaging their approximation error across all possible input functions, will have equivalent performance. We have seen that ML is unable to generalize well to any other possible input function, and is unable to determine whether the given sequence is the best for the given input. This leads us to conclude that, although ML is a computationally tractable solution strategy for FA, it is a weak approach in terms of generalizability.

4.2 The Architecture Search Problem (ASP)

We have shown that ML is a solution strategy for FA, although the nature of its search space makes it ineffective in a generalized setting. It is only natural to assume that a stronger formulation of a procedure to solve FA would involve a more complex search space.

Similar to Definition 5, we are given the task of approximating an unknown function $h$ through repeated sampling. Unlike ML, however, we are now able to select the sequence of functions (i.e., the architecture) that best fits the given input function $h$:

Definition 6 (The Architecture Search Problem (ASP))

For an unknown, continuous function $h$ defined over some domain $X$, given a finite subset $D \subset X$, a sequence length $k$, a search space $F$, and a metric function $m$, find the sequence $f_{i_1, \dots, i_j} \in F_k$, such that $\epsilon_D(h, f_{i_1, \dots, i_j})$ is minimal among all members of $F_k$, and all $j \leq k$.

Note that we have left the parameter optimization problem implicit in this formulation since, as pointed out in Section 4.1, a single-function search space would be ineffective for dealing with multiple input functions $h$, no matter how well the optimizer performed for a given subset of these inputs.

At first glance, ASP looks similar to the PAC learning framework [Valiant1984ATO]. However, FA is the task of finding the right sequence of computable functions for all possible input functions, while PAC is a generalized, tractable formulation of learning problems, with the search space abstracted out. A more precise analysis of the relationship between FA and PAC is given in Appendix 0.A.

As a solution strategy for FA, ASP is also subject to the results from Section 3. The key difference between ML and ASP is that ASP has access to a richer search space, which allows it to have a better approximation capability. In particular, ASP could be seen as a generalized version of the former, since for any $k$-sized sequence present in ML's singleton search space, one could construct a space with a bigger information potential in ASP, but with the same constraints on sequence length. For example, we could use $\{f\} \cup F_E$ as our search space, choose the same sequence length $k$, and so $P_k(\{f\}) \subseteq P_k(\{f\} \cup F_E)$.

Since ASP has no explicit constraints on time and space, this procedure essentially performs an exhaustive search. Theorem 3.2 implies that, for a fixed $k$ and any input $h$, ASP will always return the best possible sequence within that space, as long as the search space contains the set of elementary functions, $F_E$.
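A minimal sketch of ASP as an exhaustive search follows; it scores every sequence of length at most $k$ against the input on a finite $D$, which makes the exponential cost in $k$ explicit. The toy search space and target function are illustrative choices only.

```python
from itertools import product

def approximation_error(h, seq, D, metric=lambda a, b: abs(a - b)):
    """Worst-case discrepancy on D between h and the composition of seq."""
    def run(x):
        for f in seq:
            x = f(x)
        return x
    return max(metric(h(x), run(x)) for x in D)

def asp_solve(h, F, k, D):
    """ASP as an exhaustive search: score every sequence in F_k against h on D
    and return the best one. Exponential in k, hence intractable in practice."""
    best_seq, best_err = None, float("inf")
    for j in range(1, k + 1):
        for seq in product(F, repeat=j):
            err = approximation_error(h, seq, D)
            if err < best_err:
                best_seq, best_err = seq, err
    return best_seq, best_err

# Toy search space containing a shift and a doubling; target h(x) = 2x + 2.
F = [lambda x: x + 1, lambda x: 2 * x]
h = lambda x: 2 * x + 2
seq, err = asp_solve(h, F, k=3, D=[0, 1, 2, 3])
print(len(seq), err)   # the sequence (x+1, then 2x) reproduces h exactly: 2 0
```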

On the other hand, it is a cornerstone of the theory and practice of machine learning that learning algorithms must be tractable–that is, they must run in polynomial time. Given that the search space for ASP grows exponentially with the sequence length, this approach is an interesting theoretical tool, but not very practical. We will still use ASP as a performance target for the evaluation of more applicable procedures. However, it is desirable to formulate a solution strategy for FA that can be used in an applied setting, but can also be analyzed within the framework of this paper.

To achieve this, we first note that any solution strategy for FA which terminates in polynomial time will have to avoid verifying every possible function in the search space. In other words, such a procedure requires a function that is able to choose a nonempty subset of the search space. We denote this function by $\sigma$, such that for a search space $F$, $\emptyset \neq \sigma(F_k) \subseteq F_k$. We can now define the Approximate Architecture Search Problem (a-ASP) as the formulation of NAS in terms of the FA framework:

Definition 7 (The Approximate ASP (a-ASP))

If $h$ is an unknown, continuous function defined over some domain $X$, given a finite subset $D \subset X$, a sequence length $k$, a search space $F$, a metric function $m$, and a set builder function $\sigma$, find the sequence $f_{i_1, \dots, i_j} \in \sigma(F_k)$, such that $\epsilon_D(h, f_{i_1, \dots, i_j})$ is minimal among all members of $\sigma(F_k)$ and all $j \leq k$.

Just like the previous two procedures we defined, a-ASP is also a solution strategy for FA. The only difference between Definition 6 and Definition 7 is the inclusion of the set builder function $\sigma$ to traverse the space in a more efficient manner. Due to the inclusion of this function, however, a-ASP is weaker than ASP, since it is not guaranteed to find the sequence that globally minimizes the approximation error for every given $h$. Additionally, the fact that this function must be included in the parameters of a-ASP implies that such a procedure requires some design choices. Given that everything else in the definition of a-ASP is equivalent to ASP, it can be stated that the set builder function is the only deciding factor when attempting to match the performance of ASP with a-ASP.

It has been shown [Wolpert2005CoevolutionaryFL] that certain set builder functions perform better than others in a generalized setting. This can also be seen from the perspective of the FA framework, where we have available at our disposal the sequences that make up a given function. In particular, if $F$ is a search space, and $\sigma$ is a function that selects elements from $F_k$, a-ASP not only has access to the performance of all the sequences chosen so far, but also to the encoding (the configurations from [Wolpert2005CoevolutionaryFL]) of their composition. This means that, given enough samples, when testing against a subset of the input, $D$, such an algorithm would be able to learn the expected output of the functions in $F$, and their behavior if included in the current sequence $f_{i_1, \dots, i_j}$, for $j < k$. Including such information in a set builder function could allow the procedure to make better decisions at every step, and this approach has been used in applied settings with success [MillerAndHedge, liu2018hierarchical], as illustrated by the sketch below.
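The sketch assumes a deliberately simple greedy rule: the set builder keeps only the sequences that extend the current best prefix by one member of $F$, reducing the number of evaluated sequences from exponential to linear in $k$. This greedy $\sigma$ is one possible choice made for illustration, not a method taken from the references above.

```python
def approximation_error(h, seq, D, metric=lambda a, b: abs(a - b)):
    """Worst-case discrepancy on D between h and the composition of seq."""
    def run(x):
        for f in seq:
            x = f(x)
        return x
    return max(metric(h(x), run(x)) for x in D)

def greedy_set_builder(F, prefix):
    """A set builder sigma: from all of F_k, keep only the sequences that
    extend the current best prefix by a single member of F."""
    return [prefix + (f,) for f in F]

def a_asp_solve(h, F, k, D, sigma=greedy_set_builder):
    """a-ASP: traverse the search space through sigma instead of exhaustively.
    Polynomial in k and |F|, but not guaranteed to reach the global minimum."""
    best_seq, best_err = (), float("inf")
    prefix = ()
    for _ in range(k):
        candidates = sigma(F, prefix)
        prefix = min(candidates, key=lambda s: approximation_error(h, s, D))
        err = approximation_error(h, prefix, D)
        if err < best_err:
            best_seq, best_err = prefix, err
    return best_seq, best_err

F = [lambda x: x + 1, lambda x: 2 * x]
h = lambda x: 2 * x + 2
print(a_asp_solve(h, F, k=3, D=[0, 1, 2, 3])[1])   # 0: the greedy path happens
                                                   # to recover h exactly here
```

In this toy run the greedy choices happen to recover the target exactly, but in general such a $\sigma$ may discard the globally optimal sequence, which is precisely the gap between a-ASP and ASP discussed next.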

It can be seen that these design choices are not necessarily problem-dependent, and, from the results of Theorem 3.2, they can be made in a theoretically motivated manner. Specifically, we note that the information potential of the search space remains unchanged between a-ASP and ASP, and so, by including $F_E$, a-ASP could have the ability to perform as well as ASP.

5 Conclusion

The FA problem is a reformulation of the problem of approximating any given function, but with finding a sequence of functions as a central aspect of the task. In this paper, we analyzed its properties in terms of the search space, and its applications to machine learning and NAS. In particular, we showed that it is impossible to write a procedure that solves FA for any given function and domain with zero error, but described the conditions under which such an error can be minimal. We leveraged the results from this paper to analyze three solution strategies for FA: ML, ASP, and a-ASP. Specifically, we showed that ML is a weak solution strategy for FA, as it is unable to generalize or to determine whether the sequence used is the best fit for the input function. We also pointed out that ASP, although the best possible algorithm for solving FA, is intractable in an applied setting.

We finished by formulating a solution strategy that merged the best of both ML and ASP, a-ASP, and pointed out, through existing work in the literature, complemented with the results from this framework, that it has the ability to solve FA as well as ASP in terms of approximation error.

One area that was not discussed in this paper was whether it would be possible to select a priori a good subset of the input function’s domain. This problem is important since a good representative of the input will greatly influence a procedure’s capability to solve FA. It is tied to the data selection process, which was not dealt with in this paper. Further research on this topic is likely to bear great influence on machine learning as a whole.

Acknowledgments

The author is grateful to the anonymous reviewers for their helpful feedback on this paper, and also thanks Y. Goren, Q. Wang, N. Strom, C. Bejjani, Y. Xu, and B. d’Iverno for their comments and suggestions on the early stages of this project.

References

Appendices

Appendix 0.A PAC Is a Solver for FA

PAC learning, as defined by Valiant [Valiant1984ATO], is a slightly different problem than FA, as it concerns itself with whether a concept class $C$ can be described, with high probability, by a member of a hypothesis class $H$. It also establishes bounds in terms of the number of samples of members $c \in C$ that are needed to learn $C$. On the other hand, FA and its solution strategies concern themselves with finding a solution that minimizes the error, by searching through sequences of explicitly defined members drawn from a search space.

Regardless of these differences, PAC learning as a procedure can still be formulated as a solution strategy for FA. To do this, let $H$ be our search space. Then note that the PAC error function is equivalent to computing $\epsilon_D(c, h)$ for some subset $D$ of sampled inputs, choosing the frequentist difference between the images of the functions as the metric $m$. Our objective would be to return the $h \in H$ that minimizes the approximation error for a given subset $D$. Note that we do not search through the expanded search space $H_k$.
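A sketch of this reduction is given below, under the assumptions of a finite hypothesis class of threshold functions, a uniform sampling distribution, and the frequentist disagreement as the metric; the helper names and the toy concept are illustrative only.

```python
import random

def empirical_error(c, h, samples):
    """Frequentist disagreement between the concept c and the hypothesis h
    over a finite list of sampled inputs (the metric m discussed above)."""
    return sum(1 for x in samples if c(x) != h(x)) / len(samples)

def pac_as_fa_solver(c, H, sample, n):
    """PAC as a solution strategy for FA: search the hypothesis class H itself
    (not the expanded space H_k) for the member with minimal empirical error."""
    D = [sample() for _ in range(n)]
    best = min(H, key=lambda h: empirical_error(c, h, D))
    return best, empirical_error(c, best, D)

# Toy concept: threshold at 0.6 on [0, 1]; hypotheses: a small grid of thresholds.
concept = lambda x: x >= 0.6
H = [lambda x, t=t: x >= t for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
random.seed(0)
h, err = pac_as_fa_solver(concept, H, sample=lambda: random.random(), n=200)
print(err)   # the closest thresholds (0.5 or 0.75) leave a small residual error
```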

Finding the right distribution for a specific class $C$ may be NP-hard [BlumerLearnabilityVC], and so PAC requires us to make certain assumptions about the distribution of the input values. Additionally, any optimizer for PAC is required to run in polynomial time. Due to all of this, PAC is a weaker approach than ASP for solving FA, but a stronger one than ML, since this solution strategy is tied to the design of the search space, and not to the choice of a single function. Nonetheless, it must be stressed that the bounds and paradigms provided by PAC and FA are not mutually exclusive, either: the most prominent example being that PAC learning provides conditions under which the choice of the subset $D$ is optimal.

With the polynomial constraint for PAC learning lifted, and letting the sample and search space sizes grow infinitely, PAC is effectively equivalent to ASP. However, that defeats the purpose of the PAC framework, as its success relies on being a tractable learning theory.

Appendix 0.B The VC Dimension and the Information Potential

There is a natural correspondence between the VC dimension [BlumerLearnabilityVC, Vapnik1995TheNO] of a hypothesis space, and the information capacity of a sequence.

To see this, note that the VC dimension is usually defined in terms of the set of concepts (i.e., the input functions $h$) that can be shattered by a predetermined function $f$ with parameters from $W$. It is frequently used to quantify the ability of a procedure to learn the input function $h$.

In the FA framework we are more interested in whether the search space–also a set–of a given solution strategy is able to generalize well to multiple, unseen input functions. Therefore, for a fixed $f$ and $W$, the VC dimension and its variants provide powerful insight into the ability of an algorithm to learn. When $f$ is not fixed, it is still possible to utilize this quantity to measure the capacity of a search space $F$, by simply taking the union of the hypothesis classes induced by every member of $F$ for a given $k$. However, when the input functions are not fixed either, we are unable to use the definition of the VC dimension in this context, as the set of input concepts is unknown to us. We thus need a more flexible way to model generalizability, and that is where we leverage the information potential of a search space.