Robust and Active Learning for Deep Neural Network Regression

07/28/2021
by   Xi Li, et al.
Penn State University
Imperial College London

We describe a gradient-based method to discover local error maximizers of a deep neural network (DNN) used for regression, assuming the availability of an "oracle" capable of providing real-valued supervision (a regression target) for samples. For example, the oracle could be a numerical solver which, operationally, is much slower than the DNN. Given a discovered set of local error maximizers, the DNN is either fine-tuned or retrained in the manner of active learning.



1 Introduction

We describe a heuristic method for active learning of a regression model, in particular of a deep neural network (DNN) model, e.g., [6, 2, 7, 3, 4]. In the following, an automated "oracle" capable of providing real-valued supervision (a regression target) for samples, for purposes of training, is assumed available. A parameterized model for regression, which could be a DNN, is typically trained to approximate the oracle by minimizing the mean squared error (MSE) training objective, i.e.,

\[
\frac{1}{|\mathcal{T}|}\sum_{(x,y)\in\mathcal{T}}\bigl(\hat{g}(x;\theta)-y\bigr)^2, \qquad (1)
\]

where $\mathcal{T}$ is the training set of supervised input samples $x$, with $y = g(x)$ the supervising target provided by the oracle $g$ for sample $x$, and $\hat{g}(\cdot\,;\theta)$ the model with parameters $\theta$. The benefit of a DNN model is that it can perform inference at much higher speed than the oracle. This benefit is herein assumed to greatly outweigh the cost of invoking the oracle during training. Moreover, a DNN model is preferable to simpler alternatives when the oracle is a complicated (highly nonconvex) function of the input and the inputs belong to a high-dimensional sample space.
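For concreteness, a minimal PyTorch sketch of training under objective (1) against a stand-in oracle follows. The toy oracle toy_oracle, the network width, the sample counts, and the optimizer settings are placeholders for illustration only, not the configuration used in Section 4.

import torch
import torch.nn as nn

def toy_oracle(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for the (slow) numerical solver: any deterministic map from
    # inputs to a real-valued target; here a contrived smooth function.
    return x.sin().sum(dim=1, keepdim=True) + 0.5 * x.pow(2).sum(dim=1, keepdim=True)

dim = 5  # illustrative input dimension (matches the option-pricing example later)
dnn = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

# Initial training set: random samples of the input space, supervised by the oracle.
X_train = torch.rand(1024, dim)
y_train = toy_oracle(X_train)

opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
mse = nn.MSELoss()  # objective (1)
for epoch in range(200):
    opt.zero_grad()
    loss = mse(dnn(X_train), y_train)
    loss.backward()
    opt.step()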

The following describes an active learning approach to this training problem. The approach iteratively enriches the training set with new supervised examples, which inform the learning and improve the accuracy of the (re-)learned model.

2 Overview of the Method

Initially, the DNN $\hat{g}(\cdot\,;\theta)$ is trained on a random sampling of the input space (where each sample is supervised by the oracle $g$). Seeded by the initial training samples that exhibit the largest absolute errors $|\hat{g}(x;\theta)-g(x)|$, gradient-ascent search is then used to identify a set $\mathcal{M}$ of local maximizers of the squared error $e(x) = (\hat{g}(x;\theta)-g(x))^2$, where a finite-difference approximation is used for the gradient of the oracle $g$ and the gradient of the DNN $\hat{g}$ is directly computed by back-propagation with respect to the input variables, e.g., [1]. As is often done in gradient-based optimization for training (deep learning), the step size can be periodically reduced by a fixed proportion. Once the set of local maximizers is identified, only one member of any subset of maximizers that are highly proximal to each other need be retained.
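A minimal sketch of this pruning of near-duplicate maximizers, assuming the candidates are stored as rows of a tensor and using a simple greedy pass with an arbitrary Euclidean-distance tolerance:

import torch

def deduplicate(candidates: torch.Tensor, tol: float = 1e-3) -> torch.Tensor:
    """Greedily retain one representative from any group of candidate
    maximizers that lie within Euclidean distance `tol` of each other."""
    kept = []
    for x in candidates:
        if all(torch.norm(x - y) >= tol for y in kept):
            kept.append(x)
    return torch.stack(kept) if kept else candidates[:0]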

In summary, a gradient-ascent sequence with index $k$, seeking a local maximizer of the squared error $e(x) = (\hat{g}(x;\theta)-g(x))^2$, is

\[
x_{k+1} = x_k + \alpha_k \nabla_x e(x_k), \qquad k = 0, 1, 2, \ldots,
\]

where $x_0$ is the initialization, the step size $\alpha_k > 0$ is non-increasing with $k$, a finite-difference approximation is used for the gradient of the oracle $g$, and the gradient of the neural network model $\hat{g}$ is computed by back-propagation.
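A sketch of this search in PyTorch is given below; the step-size schedule, the finite-difference parameter, and the iteration budget are illustrative assumptions, with dnn a PyTorch module and oracle a batched callable standing in for the numerical solver.

import torch

def oracle_grad_fd(oracle, x: torch.Tensor, h: float = 1e-4) -> torch.Tensor:
    """First-order (forward) finite-difference approximation of the oracle's gradient at x."""
    g0 = oracle(x.unsqueeze(0)).squeeze()
    grad = torch.zeros_like(x)
    for i in range(x.numel()):
        xp = x.clone()
        xp[i] += h
        grad[i] = (oracle(xp.unsqueeze(0)).squeeze() - g0) / h
    return grad

def ascend_to_error_maximizer(dnn, oracle, x0: torch.Tensor,
                              step0: float = 1e-2, decay: float = 0.5,
                              decay_every: int = 20, n_steps: int = 100) -> torch.Tensor:
    """Gradient ascent x_{k+1} = x_k + alpha_k * grad e(x_k), where
    e(x) = (dnn(x) - oracle(x))^2; the DNN gradient is obtained by
    back-propagation w.r.t. the input, the oracle gradient by finite differences."""
    x = x0.clone()
    for k in range(n_steps):
        alpha = step0 * (decay ** (k // decay_every))   # periodically reduced step size
        xr = x.clone().requires_grad_(True)
        out = dnn(xr.unsqueeze(0)).squeeze()
        out.backward()                                   # d dnn / d x by back-propagation
        dnn_grad = xr.grad
        with torch.no_grad():
            resid = out.detach() - oracle(x.unsqueeze(0)).squeeze()
            grad_e = 2.0 * resid * (dnn_grad - oracle_grad_fd(oracle, x))
            x = x + alpha * grad_e
    return x.detach()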

Using the augmented training set $\mathcal{T} \cup \mathcal{M}$, the DNN is then retrained in the manner of active learning. Depending on the application, a differently weighted combination of error terms can be used as the training objective, e.g.,

\[
\frac{\gamma}{|\mathcal{T}|}\sum_{(x,y)\in\mathcal{T}}\bigl(\hat{g}(x;\theta)-y\bigr)^2
\;+\;
\frac{1-\gamma}{|\mathcal{M}|}\sum_{(x,y)\in\mathcal{M}}\bigl(\hat{g}(x;\theta)-y\bigr)^2, \qquad (2)
\]

for some $\gamma \in [0,1]$. If $\gamma = 1$, so that the maximizer term vanishes, then this objective is just (1) with training set $\mathcal{T}$. Depending on the application, one might want to give greater weight to the local maximizers in subsequent learning iterations, i.e., by taking $\gamma < 1$.
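Under the reconstruction of objective (2) above, a weighted loss of this kind might be computed as follows (the value of $\gamma$ is arbitrary here):

import torch

def weighted_mse(dnn, X_train, y_train, X_max, y_max, gamma: float = 0.95) -> torch.Tensor:
    """Weighted objective in the spirit of (2): weight gamma on the mean squared
    error over the original training set T and weight (1 - gamma) on the mean
    squared error over the set M of discovered local error maximizers."""
    err_T = (dnn(X_train) - y_train).pow(2).mean()
    err_M = (dnn(X_max) - y_max).pow(2).mean()
    return gamma * err_T + (1.0 - gamma) * err_M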

The foregoing process is iteratively repeated in the manner of classical iterated gradient-based hill-climbing exploration/exploitation optimization methods, more recently exemplified by Cuckoo search [8]: at each step, gradient-ascent search of the error is seeded with the elements of the current training set having the largest absolute errors, and uses a smaller initial step size and a tighter stopping condition than the previous step.
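Putting the pieces together, one round of the procedure might be organized as in the sketch below, which reuses the hypothetical helpers ascend_to_error_maximizer, deduplicate, and weighted_mse from the earlier sketches; the seeding fraction, retraining budget, and other settings are placeholders rather than the paper's choices.

import torch

def active_learning_round(dnn, oracle, X_train, y_train, seed_frac=0.05,
                          gamma=0.95, epochs=200, lr=1e-3, **ascent_kwargs):
    """One round: seed gradient ascent at the highest-error training samples,
    collect deduplicated local error maximizers, supervise them with the oracle,
    and retrain the DNN under the weighted objective."""
    with torch.no_grad():
        abs_err = (dnn(X_train) - y_train).abs().squeeze(1)
    n_seeds = max(1, int(seed_frac * len(X_train)))
    seeds = X_train[abs_err.argsort(descending=True)[:n_seeds]]

    maximizers = torch.stack([ascend_to_error_maximizer(dnn, oracle, s, **ascent_kwargs)
                              for s in seeds])
    X_max = deduplicate(maximizers)
    y_max = oracle(X_max)                      # oracle supervision for the new points

    opt = torch.optim.Adam(dnn.parameters(), lr=lr)
    for _ in range(epochs):                    # retrain (could instead fine-tune)
        opt.zero_grad()
        loss = weighted_mse(dnn, X_train, y_train, X_max, y_max, gamma)
        loss.backward()
        opt.step()
    return X_max, y_max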

Many obvious variations of the foregoing approach are possible. Though we assume the frequent use of the oracle is justified for training, the computational cost of training becomes more significant for more complex oracles. Given this, note the likely significant computational advantage of gradient-ascent search over "pure" iterated random sampling of regions of the sample space with higher error, i.e., a kind of iterative importance-sampling-based Monte Carlo.

3 Discussion: Overfitting and Regularization

Typically, one has no idea a priori how large a model is needed for a given application domain. So a particular DNN may or may not be (initially) overparameterized with respect to its training set. Low accuracy on the training set may indicate too few parameters (insufficient DNN capacity to learn). On the other hand, low accuracy on a supervised validation set held out from training (poor generalization performance) with high accuracy on the training set may indicate too many parameters (overfitting to the training set).

Suppose that in each iteration of the training method described in Section 2, a feed-forward neural network with five fully connected internal layers and 256 neurons per layer (alternatively, we could use "convolutional" layers with far fewer parameters per layer) is initially used. Note that as local maximizers of error are identified, the resulting regression problem may become more complex, particularly if the number and density of local extrema increases. Thus, this DNN may need to be "regularized" initially to avoid overfitting (e.g., using dropout when training), while in later iterations the DNN may not have sufficient capacity, and so the number of layers/neurons may need to be increased to achieve the required accuracy on the validation set.
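As an illustration, such a network, with dropout as the optional regularizer, could be defined as follows; the dropout rate and the capacity arguments are left as knobs to be adjusted across iterations.

import torch.nn as nn

def make_regressor(in_dim: int = 5, width: int = 256, n_hidden: int = 5,
                   dropout: float = 0.0) -> nn.Sequential:
    """Feed-forward regressor with n_hidden fully connected internal layers of
    `width` ReLU units; set dropout > 0 to regularize early iterations, and
    increase `width`/`n_hidden` if later iterations need more capacity."""
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, width), nn.ReLU()]
        if dropout > 0:
            layers.append(nn.Dropout(dropout))
        d = width
    layers.append(nn.Linear(d, 1))          # scalar regression output
    return nn.Sequential(*layers)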

4 Experimental Results

Some experimental results are now given for a simple oracle used to value a single-barrier option. The input is five-dimensional: barrier over spot price, strike over spot price, time to maturity, volatility, and interest rate. Each input was confined to a "realistic" interval and normalized for training.

The initial training set $\mathcal{T}$ and a separate test set, each containing on the order of thousands of samples, were used for training and for evaluating accuracy, respectively. These sets were formed by selecting sample 5-tuples uniformly at random and discarding those which were extraneous.

Under PyTorch, we used a feed-forward DNN with five fully connected internal layers, each having 512 ReLU neurons. The learning rate (step size) was 0.01 initially and was divided by 10 every 50 epochs. Training of the DNN halted when the normalized change in training MSE over 10 epochs fell below a small threshold. Dropout [5] was not employed when training.
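A sketch of this training configuration, reusing the make_regressor helper from the earlier sketch, is given below; the choice of SGD, the stopping tolerance, and the data are assumptions for illustration, since only the learning-rate schedule and the form of the stopping rule are specified above.

import torch
import torch.nn as nn

dnn = make_regressor(in_dim=5, width=512, n_hidden=5, dropout=0.0)
opt = torch.optim.SGD(dnn.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)  # /10 every 50 epochs
loss_fn = nn.MSELoss()
tol = 1e-4                                   # placeholder stopping tolerance

X_train = torch.rand(1024, 5)                # placeholder data
y_train = X_train.sum(dim=1, keepdim=True)   # placeholder targets

history = []
for epoch in range(1000):
    opt.zero_grad()
    loss = loss_fn(dnn(X_train), y_train)
    loss.backward()
    opt.step()
    sched.step()
    history.append(loss.item())
    # Stop when the normalized change in training MSE over 10 epochs is small.
    if len(history) > 10 and abs(history[-11] - history[-1]) / history[-11] < tol:
        break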

To identify the local maximizers of the squared error of the DNN trained on $\mathcal{T}$, gradient ascent was performed on the squared error starting from initial points with large error, with the step size periodically divided by a fixed factor, a small initial step size, and termination once the change in squared error became sufficiently small. To approximate the gradient of the oracle $g$, a first-order finite difference was used. Local maximizers were deemed identical when the Euclidean distance between them fell below a small tolerance. On the order of a thousand unique local maximizers $\mathcal{M}$ were thus found by seeding gradient ascent with the small percentage of samples of $\mathcal{T}$ having the highest absolute errors.

From Table 1, consider the first column, $\gamma = 1$ (the DNN trained on $\mathcal{T}$ alone). The training set MSE is 0.0646 and the test set MSE is 0.0688, i.e., the performance on the training set "generalizes" well to the test set. Also, the mean absolute error (MAE) on the test set is 0.1734 (dollars), while the MAE on the set $\mathcal{M}$ of local maximizers of squared error is 1.168. That is, this DNN is not extremely accurate, particularly on the local maximizers.

$\gamma$        1        0.99     0.98     0.97     0.96     0.95     0.94     0.93     0.92     0.91     0.9
Training MSE    0.0646   0.0701   0.0463   0.0425   0.0506   0.0359   0.0448   0.0422   0.0411   0.0435   0.0475
Test MSE        0.0688   0.0700   0.0501   0.0495   0.0513   0.0381   0.0437   0.0486   0.0452   0.0521   0.0460
Maximizer MSE   1.557    2.916    2.981    0.8214   2.196    0.7501   1.012    0.8615   3.262    1.153    1.493
Training MAE    0.1686   0.1777   0.1451   0.1413   0.1524   0.1298   0.1436   0.1422   0.1373   0.1437   0.1435
Test MAE        0.1734   0.1696   0.1419   0.1408   0.1501   0.1301   0.1425   0.1417   0.1375   0.1457   0.1412
Maximizer MAE   1.168    0.9941   0.8234   0.7524   0.8511   0.6821   0.7345   0.7287   0.7557   0.7477   0.8271
Table 1: Results for the DNN trained on $\mathcal{T}$ ($\gamma = 1$) and retrained on $\mathcal{T} \cup \mathcal{M}$ ($\gamma < 1$).

Retraining the DNN using $\mathcal{T} \cup \mathcal{M}$ (alternatively, the model trained on $\mathcal{T}$ could instead be fine-tuned [3]) was terminated when the normalized change in the training MSE over 10 iterations fell below a small threshold. To find local maximizers of its squared error, gradient ascent was again used, with the step size periodically reduced by a fixed factor, a smaller initial step size, and a stopping condition on the normalized change in the squared error. The seeds for gradient ascent were the small percentage of samples of $\mathcal{T} \cup \mathcal{M}$ having the highest absolute errors.

From Table 1, for $\gamma < 1$ (the DNNs retrained on $\mathcal{T} \cup \mathcal{M}$), note that under objective (2) all samples (those in $\mathcal{T}$ and $\mathcal{M}$) are weighted equally when $\gamma = |\mathcal{T}|/(|\mathcal{T}|+|\mathcal{M}|)$. Compared to the original DNN, the retrained DNNs generally have lower training and test MSE and lower test and maximizer MAEs, even though the learning task is more difficult (given the additional training samples and the fact that the DNN architecture, i.e., its model size, has not been changed).

We notice that, compared with the $\gamma = 1$ case, the maximizers in some cases (e.g., $\gamma = 0.92$) have higher MSE but lower MAE. In such cases, if the few maximizers having the highest absolute errors are removed, the recomputed maximizer MSE and MAE both fall below those for $\gamma = 1$. Hence, the inconsistency between maximizer MSE and MAE is caused by a small number of extreme instances, which significantly lift the MSE.

The foregoing retraining process can be iterated, terminating when no new local maximizers are found and when the improvement in generalization performance, on both the test set and the set of local maximizers, levels off without indication that the learning capacity of the DNN has been reached.

References