Level set learning with pseudo-reversible neural networks for nonlinear dimension reduction in function approximation

Due to the curse of dimensionality and the limitation on training data, approximating high-dimensional functions is a very challenging task even for powerful deep neural networks. Inspired by the Nonlinear Level set Learning (NLL) method that uses the reversible residual network (RevNet), in this paper we propose a new method of Dimension Reduction via Learning Level Sets (DRiLLS) for function approximation. Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables, and the other is the synthesized regression module for approximating function values based on the transformed data in the low-dimensional space. The PRNN not only relaxes the invertibility constraint of the nonlinear transformation present in the NLL method due to the use of RevNet, but also adaptively weights the influence of each sample and controls the sensitivity of the function to the learned active variables. The synthesized regression uses Euclidean distance in the input space to select neighboring samples, whose projections on the space of active variables are used to perform local least-squares polynomial fitting. This helps to resolve numerical oscillation issues present in traditional local and global regressions. Extensive experimental results demonstrate that our DRiLLS method outperforms both the NLL and Active Subspace methods, especially when the target function possesses critical points in the interior of its input domain.

1 Introduction

High-dimensional function approximation plays an important role in building predictive models for a variety of scientific and engineering problems. It is typical for scientists to build an accurate and fast-to-evaluate surrogate model to replace a computationally expensive physical model, in order to reduce the overall cost of a large set of model executions. However, when the dimension of the target function’s input space becomes large, the data fitting becomes a computationally challenging task. Due to the curse of dimensionality, an accurate function approximation would require the number of samples in the training dataset to increase exponentially with respect to the dimension of input variables. On the other hand, given the complexity of the underlying physical model, the amount of observational data is often very limited. This causes classical approximation methods such as sparse polynomial approximation (e.g. sparse grids) to fail on high-dimensional problems outside of some special situations. One way to alleviate the challenge is to reduce the input dimension of the target function by finding intrinsically low-dimensional structures.

The existing methods for dimension reduction in function approximation can be divided into two main categories. The first exploits the dependence between input variables to build low-dimensional manifolds in the input space. For example, principal component analysis [1] is widely used, due to its simplicity, to compress the input space to a low-dimensional manifold. Isometric feature mapping [30] computes a globally nonlinear low-dimensional embedding of high-dimensional data, and the related locally linear embedding [28, 11] provides solutions to more general cases. However, in practice there are often no dependencies between input variables to exploit, so the dimension of the input space cannot be effectively reduced by methods reliant on this assumption. This raises a challenging research question for function approximation, namely, how to effectively reduce the dimension of a function with independent input variables.

To answer this question, the second category of dimension reduction methods aims at reducing the input dimension by exploiting the relationship between the input and the output, i.e., learning the geometry of a function’s level sets. This includes methods such as sufficient dimension reduction (SDR) [8, 2, 20, 26], the active subspace (AS) method [7, 6], and neural network based methods [34, 31, 17, 3]. This type of method first identifies a linear/nonlinear transformation that maps the input variables to a handful of active variables (or coordinates), then projects the observational data onto the subspace spanned by the active variables, and finally performs the data fitting in the low-dimensional subspace to determine the function approximation.

The SDR method [8, 2, 20] provides a general framework for finding reduced variables in statistical regression. Given the predictor (input) x and its associated scalar response (output) y, SDR seeks a low-dimensional subspace, e.g. the column space of a matrix B, such that y depends on x only through B^T x. Various algorithms have been developed to determine B, including sliced inverse regression [22, 25, 9], sliced average variance estimation [10], and principal Hessian directions [23], in which the population moment matrices of the inverse regression are approximated from the given regression data. These methods can be extended to the nonlinear setting by introducing kernel approaches, as done in [21, 19, 32, 33].

The AS method [7, 6] is a popular dimension reduction approach that seeks a set of directions in the input space, named active components, that affect the function value most significantly on average. Given the values of the function and its gradient at a set of sample points, this method first evaluates the uncentered covariance matrix of the gradient, C = E[∇f ∇f^T], typically by a Monte Carlo average over the samples. The eigenvectors associated with the leading eigenvalues of C, collected in a matrix W_1, define the active components z = W_1^T x, a linear transformation of the input x. The subspace spanned by the active components is a low-dimensional linear subspace embedded in the original input space that captures most of the variation in the model output. A regression surface g is then constructed from the data projected onto the active subspace, i.e., f(x) ≈ g(W_1^T x).
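For concreteness, the AS construction can be sketched in a few lines of NumPy. The snippet below is only an illustration of the procedure described above (the function and variable names are ours, not from [7, 6]); it assumes the gradient samples are stacked as rows of an array.

import numpy as np

def active_subspace(grads, k):
    """Estimate a k-dimensional active subspace from sampled gradients.

    grads : (N, d) array whose rows are gradients of f at the sample points.
    Returns W1 of shape (d, k): the eigenvectors of the Monte Carlo estimate
    of the uncentered gradient covariance C = E[grad f grad f^T] associated
    with its k largest eigenvalues.
    """
    C = grads.T @ grads / grads.shape[0]        # Monte Carlo estimate of C
    eigvals, eigvecs = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # reorder to descending
    return eigvecs[:, order[:k]]

# Active components z = W1^T x; a regression surface g(z) ~ f(x) is then fit
# to the projected data, e.g. z_train = X_train @ W1.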

Recently, neural network based approaches [34, 31, 17, 3] were developed to extract low-dimensional structures from high-dimensional functions. For instance, a feature map was built in [3] by aligning its Jacobian with the target function's gradient field, and the function was approximated by solving a gradient-enhanced least-squares problem. The Nonlinear Level set Learning (NLL) method [34] finds a bijective nonlinear transformation g that maps an input point x to a new point z = g(x) of the same dimension as x, with g modeled by a reversible residual neural network (RevNet) [12, 5]. In this approach, the transformed variables are expected to split into two sets, a set of active variables (coordinates) and a set of inactive variables, so that the function value is insensitive to perturbations in the inactive variables. That is, a small perturbation of the inactive variables within a neighborhood of z = g(x) would lead to almost no change in the function value. Based on this fact, the NLL method employs a loss function that encourages the gradient vector field ∇f to be orthogonal to the derivative of the inverse transformation with respect to each inactive variable. Therefore, after a successful training, the NLL method provides a manifold that captures the low-dimensional structure of the function's level sets. Similar to the AS method, once g is determined, a regression surface can be built using the data projected onto the subspace of active variables. It has been shown in [34] that NLL outperforms AS when the level sets of the function have nontrivial curvature. An improved algorithm for the NLL method was studied in [14]. However, there remain even simple cases in which NLL fails to effectively extract low-dimensional manifolds, as shown later in this paper.

In this paper, we introduce a new Dimension Reduction via Learning Level Sets (DRiLLS) method for function approximation that improves upon existing level set learning methods in the following aspects: (1) To enhance the model’s capability, we propose a novel pseudo-reversible neural network (PRNN) to model the nonlinear transformation for extracting active variables. (2) The learning process is driven by geometric features of the unknown function, which is reflected in a loss function consisting of three terms: the pseudo-reversibility loss, the active direction fitting loss, and the bounded derivative loss. (3) A novel synthesized regression on the manifold spanned by the learned active variables is also proposed, which helps to resolve numerical oscillation issues and provides accuracy benefits over traditional local and global regressions. Extensive numerical experiments demonstrate that the proposed DRiLLS method leads to significant improvements on high-dimensional function approximations with limited or sparse data.

The rest of the paper is organized as follows. In Section 2 the setting of the function approximation problem is introduced and the DRiLLS method is proposed and discussed; the PRNN module is described in Section 2.1 and the synthesized regression module in Section 2.2. We then numerically investigate the performance of our DRiLLS method in Section 3, including ablation studies in Section 3.1, high-dimensional function approximations with limited/sparse data in Section 3.2, and a PDE-related application in Section 3.3. Finally, some concluding remarks are drawn in Section 4.

2 The proposed DRiLLS method

We consider a scalar target function that is continuously differentiable on a bounded Lipschitz domain Ω in ℝ^d:

y = f(x),  x = (x_1, …, x_d) ∈ Ω ⊂ ℝ^d.    (1)

The input variables are assumed to be independent of each other, which implies that the input space itself does not possess a low-dimensional structure. The goal is to find an approximation of the target function, given the values of f and its gradient ∇f on a set of training samples in Ω. We denote the training dataset by

D = {(x_i, f(x_i), ∇f(x_i))}_{i=1}^N,

which contains the input, the output, and the gradient information at the samples. When the number of dimensions d is large, taking even a handful of samples in each coordinate direction would result in a huge amount of data, which is infeasible in many application scenarios. Therefore, the sample dataset is usually sparse for high-dimensional problems.
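As a minimal sketch of how such a training dataset might be assembled in practice, assuming the gradient of the target is available analytically (or through automatic differentiation of the underlying model), one can use the Latin hypercube design that is also employed later in Section 3. The helper and the illustrative target below are ours, not the paper's.

import numpy as np
from scipy.stats import qmc

def make_dataset(f, grad_f, d, N, lb=-1.0, ub=1.0, seed=0):
    """Assemble the training set {(x_i, f(x_i), grad f(x_i))}_{i=1..N}."""
    sampler = qmc.LatinHypercube(d=d, seed=seed)
    X = qmc.scale(sampler.random(N), [lb] * d, [ub] * d)   # (N, d) inputs
    y = np.array([f(x) for x in X])                        # (N,)   values
    G = np.array([grad_f(x) for x in X])                   # (N, d) gradients
    return X, y, G

# Illustrative target with an analytic gradient (not one of the paper's test functions):
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x
X, y, G = make_dataset(f, grad_f, d=20, N=500)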

The NLL method has achieved success in high-dimensional function approximation on sparse data for real-world applications such as composite material design problems [34]; however, it has difficulties in learning the level sets of certain functions. In particular, NLL struggles on functions with critical points in the interior of the domain Ω, such as the functions discussed later in Section 3.1. One reason for this drawback is that the RevNet employed by NLL enforces invertibility as a hard constraint, which limits its capability to learn the structure of functions whose level sets are not homeomorphic to hyperplanes in the input space. Another reason is that the rate of change of the target function with respect to the inactive variables is always zero at any interior critical point, since the gradient of the function vanishes there. Hence, the training process tends to ignore samples lying in a small neighborhood of the critical points, as they contribute little to the training loss.

To overcome these issues and improve the performance of level set learning based function approximation, the proposed DRiLLS method consists of two major components: (1) the PRNN module that identifies active variables and reduces the dimension of input space, and (2) the synthesized regression module that selects neighboring sample points according to their Euclidean distances in the original input space and performs a local least-squares fitting based on the learned active variables to approximate the target function. A schematic diagram of the proposed method is shown in Figure 1.

Figure 1: The overall structure of the proposed DRiLLS method, which consists of two major components: the PRNN module and the synthesized regression module.

2.1 The pseudo-reversible neural network

To construct the PRNN, we first define a nonlinear mapping g from the input x to a new point z of the same dimension. In contrast to the RevNet used by the NLL method, the invertibility of this transformation is relaxed by defining another mapping h from z back to x and encouraging h(g(x)) to be close to x. Thus, the reversibility is imposed as a soft constraint on the PRNN model. Specifically, the two nonlinear transformations are denoted by

z = g(x; θ_g)  and  x̃ = h(z; θ_h),    (2)

respectively, where θ_g and θ_h are their learnable parameters. Since g is not exactly invertible by definition, h can be viewed as a pseudo-inverse function to g. Both g and h are represented by fully connected neural networks (FCNNs), as displayed in Figure 2. The PRNN structure is reminiscent of an autoencoder [13], but the dimension of the latent space (i.e., the dimension of z) remains the same as the dimension of x. While there are no theoretical restrictions on the structures of g and h, the experiments in Section 3 use the same FCNN architecture for both mappings.

Figure 2: The pseudo-reversible neural network (PRNN) consists of two FCNNs representing g and h, respectively, which possess the same number of hidden layers (3 layers for illustration) and the same number of neurons at each layer.
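A minimal PyTorch sketch of this structure is given below. The width, depth, and activation are placeholders rather than the paper's exact default settings, and the class and helper names are ours.

import torch
import torch.nn as nn

def fcnn(d, width=64, depth=4):
    """Fully connected network mapping R^d to R^d with `depth` hidden layers."""
    layers, n_in = [], d
    for _ in range(depth):
        layers += [nn.Linear(n_in, width), nn.Tanh()]
        n_in = width
    layers.append(nn.Linear(n_in, d))
    return nn.Sequential(*layers)

class PRNN(nn.Module):
    """Pseudo-reversible pair: g maps x to z, h maps z back to an approximation of x."""
    def __init__(self, d, width=64, depth=4):
        super().__init__()
        self.g = fcnn(d, width, depth)   # forward transform x -> z
        self.h = fcnn(d, width, depth)   # learned pseudo-inverse z -> x_tilde

    def forward(self, x):
        z = self.g(x)
        x_tilde = self.h(z)
        return z, x_tilde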

2.1.1 The loss function

The learnable parameters θ_g and θ_h are updated synchronously during the training process by minimizing the following total loss function:

L = L_PR + λ_1 L_AD + λ_2 L_BD.    (3)

Here, L_PR is the pseudo-reversibility loss, which measures the difference between x and the PRNN output h(g(x)); L_AD is the active direction fitting loss, which enforces tangency between the directions associated with the inactive transformed variables and the level sets of f; and L_BD is the bounded derivative loss, which regularizes the sensitivity of f with respect to the active variables. The weights λ_1 and λ_2 are hyper-parameters that balance the three loss terms. Each term of L is discussed in detail below.

The pseudo-reversibility loss

In order to train h to be a pseudo-inverse of g, the pseudo-reversibility condition h(g(x)) ≈ x is simply enforced in the least-squares sense:

L_PR = (1/N) Σ_{i=1}^N ‖x_i − h(g(x_i))‖²,    (4)

which takes the same form as the standard loss used to train autoencoders.
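In code, this term is simply a mean-squared reconstruction error. The sketch below assumes the PRNN class given earlier and uses one common normalization (the mean over samples of the squared Euclidean mismatch).

import torch

def pseudo_reversibility_loss(x, x_tilde):
    # L_PR: mean squared difference between x and h(g(x)), as in a standard autoencoder
    return torch.mean(torch.sum((x - x_tilde) ** 2, dim=1))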

The active direction fitting loss

This loss is defined based on the fact that, if the i-th transformed variable is inactive, a small perturbation of z_i in a neighborhood of z = g(x) should move the input only along directions tangent to the level sets of the target function. Specifically, we consider the Jacobian matrix of the pseudo-inverse transformation h evaluated at z = g(x),

J(x) = [J_1(x), …, J_d(x)],

with J_i(x) = ∂h/∂z_i denoting its i-th column. In the ideal case, if z_i is completely inactive, then the gradient vector field ∇f is orthogonal to J_i, that is, ⟨∇f(x), J_i(x)⟩ = 0, with ⟨·,·⟩ denoting the inner product. Thus the active direction fitting loss is defined to encourage this orthogonality, i.e.,

L_AD = (1/N) Σ_{n=1}^N s_n Σ_{i=1}^d ω_i ⟨∇f(x_n), J_i(x_n)⟩²,    (5)

where the scaling factors s_n contain a hyper-parameter and the ω_i are weight hyper-parameters determining how strictly the orthogonality condition is enforced for each of the transformed variables. A typical choice is

ω_i = 0 for i = 1, …, k  and  ω_i = 1 for i = k+1, …, d,    (6)

where k denotes the dimension of the active variables/coordinates. An ideal case would be k = 1, which implies that there exists only one active variable and the intrinsic dimension of f is exactly one when the loss can be driven to zero. The scaling factor s_n distinguishes this loss from the one used in [34], and its value changes according to the magnitude of the gradient: it becomes large when ‖∇f(x_n)‖ gets close to zero and stays close to one otherwise. Therefore, it serves as a rescaling factor designed to overcome the situation where the contributions of samples near interior critical points are ignored by the optimization due to their small gradients.
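The sketch below illustrates one way to implement this loss in PyTorch. It assumes the Jacobian in question is that of the pseudo-inverse h evaluated at z = g(x), and the concrete form of the sample rescaling s_n (the term involving beta) is an illustrative choice consistent with the description above, not necessarily the paper's exact formula.

import torch

def batch_jacobian_h(model, z):
    """Batch Jacobian of the pseudo-inverse h: J[n, i, j] = d h_i / d z_j at z_n.
    h acts sample-wise, so summing over the batch before autograd.grad recovers
    the per-sample derivatives."""
    x_tilde = model.h(z)
    rows = [torch.autograd.grad(x_tilde[:, i].sum(), z, create_graph=True)[0]
            for i in range(x_tilde.shape[1])]
    return torch.stack(rows, dim=1)                       # (N, d, d)

def active_direction_loss(model, x, grad_f, omega, beta=25.0):
    """Sketch of L_AD: penalize non-orthogonality between grad f(x) and the
    Jacobian columns of h associated with the (near-)inactive variables.
    omega is a length-d weight vector (0 for active, 1 for inactive entries);
    the rescaling s is an illustrative stand-in for the paper's scaling factor."""
    z = model.g(x)                                        # (N, d), stays in the autograd graph
    J = batch_jacobian_h(model, z)                        # (N, d, d)
    inner = torch.einsum('ni,nij->nj', grad_f, J)         # <grad f(x_n), dh/dz_j>
    s = 1.0 + beta * torch.exp(-grad_f.pow(2).sum(dim=1)) # upweight near-critical samples
    return torch.mean(s * ((inner ** 2) @ omega))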

The bounded derivative loss

Existing methods such as NLL do not place any restrictions on the active variables, because the RevNet they use already imposes sufficient regularization on those variables. On the other hand, using a PRNN without regularization may cause the network to learn active variables that change too fast, producing undesirable oscillations of the target function with respect to the active variables. To address this issue, we introduce a regularization term into the loss as

(7)

where a positive rescaling hyper-parameter is involved. The purpose is to keep the magnitude of the derivative of f with respect to the active variables from growing much beyond one. In the practical implementation, this derivative is approximated through the pseudo-inverse h via the chain rule, by exploiting the pseudo-reversibility of the PRNN.
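A possible realization of this term is sketched below, reusing the Jacobian helper from the previous sketch. The hinge form penalizing derivative magnitudes above one is our guess at a formula matching the description, not the paper's exact definition of (7).

import torch
import torch.nn.functional as F

def bounded_derivative_loss(model, x, grad_f, active_idx=(0,)):
    """Sketch of L_BD: discourage |df/dz_i| for the active variables from growing
    much beyond one. df/dz is approximated via the pseudo-inverse h by the chain
    rule, df/dz ~ (grad f)^T dh/dz, as mentioned in the text."""
    z = model.g(x)
    J = batch_jacobian_h(model, z)                        # (N, d, d), helper defined above
    df_dz = torch.einsum('ni,nij->nj', grad_f, J)         # approximate df/dz_j
    excess = F.relu(df_dz[:, list(active_idx)].abs() - 1.0)
    return torch.mean(excess ** 2)

# The total PRNN training loss of (3) is then
#   L = pseudo_reversibility_loss(...) + lam1 * active_direction_loss(...) + lam2 * bounded_derivative_loss(...)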

2.2 The synthesized regression

The active variables (coordinates) are naturally identified based on the preset values of the weights ω_i. Once the PRNN training is completed, the sample points can be nonlinearly projected through the PRNN onto a much lower-dimensional space spanned by the active variables. Ideally, approximating the high-dimensional function f can then be achieved by approximating the low-dimensional function ĝ defined by

f(x) ≈ ĝ(z_A),    (8)

where z_A denotes the vector of active components of z = g(x). Many existing methods could be used for this purpose, including classic polynomial interpolation, least-squares polynomial fitting [15], and regression by deep neural networks. However, because the control on g through the PRNN is quite loose, the projected data could be very oscillatory with respect to z_A, or even fail to define a function of z_A. For example, there could exist two sample points that are well separated in the input space, with different function values, but are mapped close together in the transformed space of active variables. This is often the case for functions with interior critical points. The top row of Figure 3 presents an illustration of such a case, where z_1 is taken as the active variable and z_2 as the inactive one. Consequently, general global or local regression approaches based solely on the projected information in the space of active variables cannot effectively handle this case, due to large numerical oscillations.

Figure 3: Top row: An example illustration of the sample points and their active variables learned by the PRNN, where z_1 is taken as the active variable and z_2 as the inactive one. Since the projected data are very oscillatory with respect to z_1, and may not even define a function of z_1, this case is not suited to general local or global regression approaches based solely on the projected information in the space of active variables. Bottom row: The proposed synthesized regression first selects the local neighboring sample points for each of the five new inputs (i.e., the five marked points whose function values are to be predicted) from the original input space, and then performs a separate least-squares polynomial fit for each.

We develop a synthesized regression method to address this type of numerical oscillation problem. The method uses local least-squares polynomial fitting in the space of active variables, but selects neighboring sample points based on the Euclidean distance in the original input space, which helps keep track of the original neighborhood relationships. Our synthesized regression algorithm can be described as follows:

  1. Given an unseen input point x*, we select the set of training samples closest to x* in Euclidean distance (a prescribed number of nearest neighbors).

  2. These selected samples are fed into the trained PRNN to generate the corresponding values of the active variables.

  3. We perform least-squares polynomial fitting using this subset of the training data in the space of active variables. The approximation of f(x*) is defined as the value of the resulting polynomial at the active variables of x*.

Note that when the graph of the projected data over the active variables has several branches, the first two steps of the proposed synthesized regression encourage localization of the polynomial data fitting to only one of the branches. Indeed, the selected neighbors of x* usually stay on the same branch or intersecting region without much oscillation, as shown in the bottom row of Figure 3.
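The three steps above translate into a short NumPy routine. The sketch below assumes a single active variable, treats the neighborhood size and polynomial degree as illustrative defaults, and lets g denote a (NumPy-wrapped) map from a batch of inputs to their active variable, produced by the trained PRNN.

import numpy as np

def synthesized_regression(x_star, X_train, y_train, g, n_neighbors=20, degree=3):
    """Predict f(x_star) by a local least-squares polynomial fit in the active variable,
    with neighbors selected by Euclidean distance in the ORIGINAL input space."""
    # 1. nearest training samples to x_star in the original input space
    dists = np.linalg.norm(X_train - x_star, axis=1)
    idx = np.argsort(dists)[:n_neighbors]
    # 2. project the neighbors and x_star to the active variable via the PRNN
    z_nb = g(X_train[idx])
    z_star = g(x_star[None, :])[0]
    # 3. local least-squares polynomial fit in z, evaluated at z_star
    coeffs = np.polyfit(z_nb, y_train[idx], deg=degree)
    return np.polyval(coeffs, z_star)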

3 Experimental results

The goal of this section is two-fold: the first is to test the influence of each ingredient of the proposed DRiLLS method on its overall performance, and the second is to investigate the numerical performance of the method in approximating high-dimensional functions. In particular, an ablation study is presented in Section 3.1, covering PRNN vs. RevNet and the effect of the scaling hyper-parameter in Section 3.1.1, the effect of the bounded derivative loss in Section 3.1.2, and the synthesized regression vs. some existing regression methods in Section 3.1.3. Then, through extensive comparisons with the AS and the NLL methods, we demonstrate the effectiveness and accuracy of the proposed DRiLLS method under limited/sparse data: high-dimensional example functions are considered in Section 3.2 and a PDE-related application is given in Section 3.3.

The training dataset of size N is randomly generated using the Latin hypercube sampling (LHS) method [29]. To measure the approximation accuracy, we use the normalized root-mean-square error (NRMSE) and the relative error over a test set of randomly selected input points from the domain:

(9)

where f(x_i) are the exact function values and f̂(x_i) are the corresponding approximate values. In the experiments, the size of the test set is chosen separately for the low-dimensional and high-dimensional problems. This procedure is replicated 10 times, and the average values are reported as the final NRMSE and relative errors for the function approximation.
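Since the exact normalization in (9) is not preserved in the extracted text, the sketch below uses one common convention for each metric: the RMSE normalized by the range of the exact values, and the relative error measured in the discrete l2 norm.

import numpy as np

def nrmse(y_true, y_pred):
    # root-mean-square error normalized by the range of the exact values
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def rel_l2_error(y_true, y_pred):
    # relative error in the discrete l2 norm
    return np.linalg.norm(y_true - y_pred) / np.linalg.norm(y_true)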

Our DRiLLS method is implemented using PyTorch. If not specified otherwise, we choose the following default model setting: g and h in the PRNN are constructed by FCNNs that contain 4 hidden layers with a fixed number of hidden neurons per layer; a fixed activation function is used; the hyper-parameters λ_1, λ_2, and the scaling hyper-parameter in the loss function are fixed; and cubic polynomials are used for the local least-squares fitting in the synthesized regression. For the training of the PRNN, we use a combination of the Adam optimizer [18] and the L-BFGS optimizer [24]. The Adam iteration [18] is first applied with an initial learning rate of 0.001, and the learning rate decays periodically by a constant factor for a prescribed maximum number of steps. Then the L-BFGS iteration is applied, up to a maximum number of steps, to accelerate the convergence. The training process is stopped immediately once the training error drops below a prescribed tolerance. Both the AS and the NLL methods used for comparison are implemented in ATHENA [27] (codes available at https://github.com/mathLab/ATHENA), a Python package for parameter space dimension reduction in the context of numerical analysis. All experiments reported in this work were performed on an Ubuntu 20.04.2 LTS desktop with a 3.6 GHz AMD Ryzen 7 3700X CPU, 32 GB of DDR4 memory, and an NVIDIA RTX 2080Ti GPU.
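The two-stage optimization can be sketched as follows. The learning-rate schedule, iteration counts, and stopping tolerance below are placeholders (the extracted text does not preserve the paper's exact values), and the three loss functions are the sketches from Section 2.1.

import torch

def train_prnn(model, x, grad_f, omega, lam1=1.0, lam2=1.0,
               adam_steps=20000, lbfgs_steps=2000, tol=1e-6):
    """Adam with a decaying learning rate, followed by L-BFGS to accelerate convergence."""
    def total_loss():
        z, x_tilde = model(x)
        return (pseudo_reversibility_loss(x, x_tilde)
                + lam1 * active_direction_loss(model, x, grad_f, omega)
                + lam2 * bounded_derivative_loss(model, x, grad_f))

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=5000, gamma=0.5)  # placeholder schedule
    for _ in range(adam_steps):
        opt.zero_grad()
        loss = total_loss()
        loss.backward()
        opt.step()
        sched.step()
        if loss.item() < tol:                 # early stop at a prescribed tolerance
            return model

    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=lbfgs_steps)
    def closure():
        lbfgs.zero_grad()
        loss = total_loss()
        loss.backward()
        return loss
    lbfgs.step(closure)
    return model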

3.1 Ablation studies

We first numerically investigate the effect of the major components of the proposed DRiLLS method, including the PRNN, the loss terms, the hyper-parameters, and the synthesized regression. Several two-dimensional functions are considered. Since the dimension is d = 2, it is natural to use a single active variable, i.e., to take ω_1 = 0 and ω_2 = 1 in (6), with z_1 being the active variable and z_2 the inactive one in the transformed space. For the same reason, two hidden layers are used for each of the FCNNs representing g and h, differing from the default settings. From the tests reported in Sections 3.1.1 and 3.1.2, we observe that the Adam optimization during PRNN training terminated within 20000 steps in all cases, while the tests in Section 3.1.3 required up to 60000 steps to meet the stopping criterion due to the more complicated geometric structure of the target function.

To visually evaluate the function approximation, we present two types of plots. The quiver plot shows the gradient field of f (blue arrows) and the vector field corresponding to the second Jacobian column, i.e., the direction associated with the inactive variable z_2 (red arrows), on a uniform grid; increased orthogonality between the red and blue arrows indicates a more accurate network mapping. The regression plot draws the approximated function values (red circles) at 400 randomly generated points in the domain together with the associated exact function values (blue stars); good performance is indicated by a thin regression curve and a large degree of overlap between the blue stars and the red circles.
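For reference, a quiver plot of this kind can be produced with a few lines of Matplotlib. The sketch below assumes d = 2 with the second transformed variable treated as inactive, takes the analytic gradient f_grad of the target, and reuses the batch_jacobian_h helper from Section 2.1; it is an illustration, not the paper's plotting code.

import numpy as np
import matplotlib.pyplot as plt
import torch

def quiver_check(model, f_grad, lb=-1.0, ub=1.0, n=20):
    """Plot grad f (blue) against the Jacobian column of the inactive variable z_2 (red)
    on a uniform grid; nearly orthogonal arrows indicate an accurate mapping."""
    xs = np.linspace(lb, ub, n)
    X = np.array([[a, b] for a in xs for b in xs], dtype=np.float32)
    G = np.array([f_grad(p) for p in X], dtype=np.float32)       # exact gradients
    z = model.g(torch.tensor(X))
    J = batch_jacobian_h(model, z).detach().numpy()              # (n*n, 2, 2)
    plt.quiver(X[:, 0], X[:, 1], G[:, 0], G[:, 1], color='blue')
    plt.quiver(X[:, 0], X[:, 1], J[:, 0, 1], J[:, 1, 1], color='red')  # column dh/dz_2
    plt.gca().set_aspect('equal')
    plt.show()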

3.1.1 PRNN vs. RevNet and the effect of the scaling hyper-parameter

One of the main differences between the proposed PRNN and the RevNet is their treatment of reversibility: the former imposes it as a soft constraint while the latter imposes it as a hard constraint (realized by a special network structure). Furthermore, the special structure of the RevNet requires an equal partition of the inputs into two groups; thus, if the input space has an odd dimension, it has to be padded with an auxiliary variable (e.g., a column of zeros). On the other hand, the PRNN represents a larger class of functions than the RevNet [5], so that a better nonlinear transformation can be found when explicit invertibility is not required. To compare these two neural network structures, the following two functions are considered for testing:

(10)

where each function is considered on two domains. Note that both functions attain their minimum at the origin, which lies in the interior of one of the domains but only on the boundary of the other. Since we focus on the influence of reversibility in this subsection, we temporarily drop the bounded derivative term from the loss. The total loss (3) for our DRiLLS method then reduces to the pseudo-reversibility and active direction fitting terms for the PRNN, and to the active direction fitting term alone for the RevNet, because the pseudo-reversibility loss is automatically zero in the case of the RevNet. In the following tests, the RevNet uses 10 RevNet blocks with 2 neurons each, since the input space has dimension two, together with a fixed step size (see [34] for details of the RevNet structure used). The training dataset has size 500, and both the PRNN and the RevNet are trained on the same dataset.

The testing results of the first function are presented in Figure 4 for the first domain and in Figure 5 for the second domain, where several choices of the scaling hyper-parameter are considered: 0 in the first column, 25 in the second, and 50 in the third. It is observed that both network structures, the PRNN and the RevNet, work well on the first domain, as shown in Figure 4. It is worth noting that the behavior of the function on this domain is, in a sense, monotonic. However, when the domain is changed, the behavior is no longer monotonic and the RevNet encounters difficulties in finding an appropriate active variable. Indeed, as shown in the third row of Figure 5, the gradient is not orthogonal to the corresponding Jacobian column at many points, regardless of the value of the scaling hyper-parameter, which indicates that the function value is still sensitive to the supposedly inactive variable. This further leads to larger errors in the regression process and in the function approximation, as seen in the fourth row of Figure 5.

The testing results of the second function on its two domains are displayed in Figures 6 and 7, respectively. We remark that the behavior of this function is not monotonic at all on either domain. It is observed from Figures 6 and 7 that the PRNN achieves superior performance on both domains: the quiver plots indicate that the RevNet has difficulty ensuring the function value is insensitive to the inactive variable in both cases, and the associated regression plots show that the RevNet produces a more erroneous function approximation. The PRNN, on the contrary, still works well on both domains, which further leads to more accurate function approximations.

Meanwhile, we also observe that the value of the scaling hyper-parameter does not have much impact on the performance of the RevNet. For the PRNN, its effect also seems negligible in the first case, but becomes significant in the second case. As the hyper-parameter increases from 0 to 25 and then 50, the learned level sets and the function approximations become more and more accurate. As shown in the first rows of Figures 5 and 7, the red arrows are nearly perpendicular to the blue arrows in the quiver plots for the two larger values, indicating more effective dimension reduction. Moreover, fewer blue markers are visible in the regression plots in the third column than in the first two columns, which indicates less discrepancy between the predicted values and the exact function values.

Figure 4: Level set learning and function approximation results produced by our DRiLLS method with the PRNN (quiver plot in Row 1, regression plot in Row 2) or the RevNet (quiver plot in Row 3, regression plot in Row 4) for the first test function in (10) on the first domain, at three values of the scaling hyper-parameter (0, 25, 50). There is no critical point in the interior of this domain, and both the PRNN and the RevNet successfully learn the level sets of the target function.
Figure 5: Level set learning and function approximation results produced by our DRiLLS method with the PRNN (quiver plot in Row 1, regression plot in Row 2) or the RevNet (quiver plot in Row 3, regression plot in Row 4) for the first test function in (10) on the second domain, at three values of the scaling hyper-parameter (0, 25, 50). The RevNet fails to learn the level sets of the target function because it cannot handle the interior critical point at the origin. In comparison, the PRNN successfully learns these level sets, partly because it does not enforce hard reversibility around the critical point.
Figure 6: Level set learning and function approximation results produced by our DRiLLS method with the PRNN (quiver plot in Row 1, regression plot in Row 2) or the RevNet (quiver plot in Row 3, regression plot in Row 4) for the second test function in (10), at three values of the scaling hyper-parameter (0, 25, 50). The PRNN successfully learns the level sets of the target function, while the RevNet does not.
Figure 7: Level set learning and function approximation results produced by our DRiLLS method with the PRNN (quiver plot in Row 1, regression plot in Row 2) or the RevNet (quiver plot in Row 3, regression plot in Row 4) for the second test function in (10) on the other domain, at three values of the scaling hyper-parameter (0, 25, 50). The PRNN successfully learns the level sets of the target function, while the RevNet does not.

3.1.2 The effectiveness of the bounded derivative loss

We use one of the test functions from Section 3.1.1 to investigate the effect of the bounded derivative loss L_BD, which is a new loss term compared with those used in the NLL method. The purpose of L_BD is to reduce the oscillation of the function values after they are projected onto the space of active variables; thus, it can mainly be regarded as a regularization term.

To check whether the proposed bounded derivative loss helps the training process of the PRNN in our DRiLLS method, we vary the weight λ_2 of the bounded derivative term from zero to two positive values while fixing the other experimental settings. The training dataset again has size 500. The evolution of the total loss L, the pseudo-reversibility loss L_PR, and the active direction fitting loss L_AD during the training process is presented in Figure 8. It is observed that the pseudo-reversibility loss is not affected by the choice of λ_2, but both the total training loss and the active direction fitting loss decay faster when the bounded derivative term is included than when it is absent; the even larger value of λ_2, however, does not further accelerate the training process.

(a) Total Loss
(b) Pseudo-reversibility Loss
(c) Active direction fitting Loss
Figure 8: Evolution of the total loss (left), the pseudo-reversibility loss (middle), and the active direction fitting loss (right) during the training process of the PRNN, for three different values of the bounded derivative loss weight λ_2.

3.1.3 The synthesized regression vs. other regression methods

Once the transformation to the active variable is obtained through the PRNN, we apply the proposed synthesized regression to approximate the target function. To better demonstrate the advantage of our synthesized regression, we consider the following example, featured in Figure 3:

(11)

Due to the complicated behavior of this function over its domain, a larger training dataset is used for the PRNN; the associated quiver and regression plots produced by our method are presented in Figure 9. The former demonstrates the efficacy of the PRNN dimension reduction, as the direction associated with the inactive variable is tangent to the level sets, and the latter indicates that accurate regressions have been obtained, since almost all of the blue stars and red circles coincide even though the graph of the projected data has several branches.

Figure 9: Level set learning and function approximation results produced by our DRiLLS method for the function in (11): the quiver plot (left) and the regression plot (right). The synthesized regression approach successfully overcomes the numerical oscillation issue illustrated in the top row of Figure 3.
                  Synthesized Regression   Direct Local Fitting   Global Fitting   Neural Network
NRMSE                      0.86                    20.49               20.16            19.93
Relative error             1.32                    91.11               93.89            91.27
Table 1: Approximation errors produced by different regression methods on the same PRNN-transformed data for the function in (11).

The performance of the proposed synthesized regression is also compared with other popular regression methods based on the same PRNN-transformed data, including polynomial regression in local and global fashions and nonlinear regression by neural networks. In particular, cubic polynomial fitting is applied, and the neural network regression uses an FCNN with 3 hidden layers of 20 neurons each. The function approximation errors are summarized in Table 1, which shows that the direct local fitting, the global fitting, and the neural network regression all fail to provide accurate predictions, while the synthesized regression performs very well.

3.2 High-dimensional function approximation with limited data

Here we compare our DRiLLS method with two popular dimension reduction methods, AS and NLL, for function approximation with limited/sparse data. To ensure a fair comparison, the proposed synthesized regression is applied to all compared methods after their active subspaces/variables are identified. In particular, the dimension of the active variables takes two values for DRiLLS and NLL (and similarly for AS), which are typical choices in practice and correspond to the two rows reported for each method in the tables below. We consider the following four functions:

(12)

The functions in (12) are considered on domains of different sizes and input dimensions, as indicated in the tables below. Obviously, the behaviors of these functions are much more complicated on the larger domains than on the smaller ones. We observed that, for all the tests reported in this section, the training processes again terminated within 60000 Adam optimization steps.

in
N = 500 | N = 2500 | N = 10000
DRiLLS () 0.60 0.80 0.22 0.31 0.20 0.26
DRiLLS () 0.61 0.82 0.22 0.31 0.20 0.26
NLL () 0.32 0.43 0.30 0.43 0.32 0.45
NLL () 0.37 0.50 0.40 0.56 0.37 0.51
AS () 3.52 5.51 3.20 4.88 2.68 4.19
AS () 3.81 5.84 3.49 5.22 2.92 4.39
in
N = 500 | N = 2500 | N = 10000
DRiLLS () 9.74 11.18 0.57 0.62 0.39 0.40
DRiLLS () 11.25 12.68 0.68 0.75 0.73 0.71
NLL () 0.31 0.33 0.28 0.31 0.37 0.38
NLL () 0.39 0.40 0.35 0.36 0.40 0.38
AS () 3.83 4.52 3.74 4.35 3.68 4.24
AS () 4.27 4.84 4.07 4.60 3.97 4.46
in
N = 2500 | N = 10000 | N = 40000
DRiLLS () 4.26 3.19 2.25 1.51 1.53 1.17
DRiLLS () 2.63 2.52 1.55 1.39 1.05 0.87
AS () 11.42 19.14 9.73 15.49 7.53 12.14
AS () 12.66 20.13 9.96 16.25 8.04 13.62
in
N = 2500 | N = 10000 | N = 40000
DRiLLS () 13.71 19.66 3.59 2.40 2.22 1.59
DRiLLS () 13.88 19.69 2.93 2.47 1.86 1.80
AS () 12.96 19.38 12.17 17.45 10.98 15.22
AS () 14.35 20.12 12.70 18.14 11.55 15.96



Table 2: Numerical approximation errors produced by DRiLLS, NLL and AS for one of the test functions in (12) on various domains (N denotes the training set size; each method row reports two errors per value of N).
in
N = 500 | N = 2500 | N = 10000
DRiLLS () 1.54 3.56 0.71 1.63 0.51 1.18
DRiLLS () 1.58 3.68 0.73 1.60 0.54 1.21
NLL () 1.19 2.27 1.08 2.52 1.13 2.76
NLL () 1.42 2.56 1.03 2.33 0.84 1.95
AS () 8.95 22.95 8.07 20.66 6.91 17.55
AS () 10.02 24.97 8.58 21.80 7.23 18.38
in
N = 500 | N = 2500 | N = 10000
DRiLLS () 28.73 79.29 3.75 7.72 2.81 5.73
DRiLLS () 33.28 87.72 4.06 7.72 3.08 5.82
NLL () 5.09 7.46 3.48 5.78 2.58 4.92
NLL () 5.92 8.13 4.17 6.22 3.07 5.02
AS () 13.89 32.84 12.96 30.99 12.60 29.90
AS () 15.54 35.73 14.04 32.87 13.42 31.44


in
N = 2500 | N = 10000 | N = 40000
DRiLLS () 9.18 15.90 6.34 9.96 4.56 6.66
DRiLLS () 8.06 13.56 5.10 8.19 3.11 4.90
AS () 25.21 62.51 21.29 52.42 17.29 42.63
AS () 26.61 64.48 22.42 54.13 18.31 44.15
in
N = 2500 | N = 10000 | N = 40000
DRiLLS () 26.86 74.17 17.17 30.54 11.95 20.81
DRiLLS () 22.54 49.60 15.55 27.14 9.62 15.90
AS () 26.82 73.84 24.58 67.50 22.15 60.43
AS () 28.66 76.43 26.76 70.45 23.78 62.81
Table 3: Numerical approximation errors produced by DRiLLS, NLL and AS for another of the test functions in (12) on various domains (N denotes the training set size; each method row reports two errors per value of N).
in
N = 500 | N = 2500 | N = 10000
DRiLLS () 0.32 1.44 0.12 0.39 0.09 0.27
DRiLLS () 0.29 1.31 0.11 0.37 0.09 0.27
NLL () 0.73 3.29 0.59 2.42 0.32 1.41
NLL () 0.95 2.76 0.51 2.12 0.57 2.31
AS () 2.19 9.91 2.03 8.73 1.76 7.58
AS () 2.58 10.73 2.09 9.30 1.97 7.99
in
N = 500 | N = 2500 | N = 10000
DRiLLS () 1.28 13.35 0.27 2.48 0.14 1.27
DRiLLS () 1.34 13.63 0.23 1.97 0.14 1.23
NLL () 0.67 5.11 1.76 16.86 0.96 8.23
NLL () 0.85 6.88 0.46 4.20 0.54 4.83
AS () 1.98 17.25 1.55 15.93 1.96 15.73
AS () 2.37 19.84 1.90 17.48 1.99 16.93


in
N = 2500 | N = 10000 | N = 40000
DRiLLS () 4.31 11.76 2.56 6.36 1.74 3.95
DRiLLS () 3.38 8.15 2.09 5.05 1.06 2.66
AS () 8.43 36.86 6.43 27.92 4.94 20.31
AS () 8.96 38.55 6.90 29.48 5.17 21.98
in
N = 2500 | N = 10000 | N = 40000
DRiLLS (