We describe a heuristic method for active learning of a regression model, in particular of a deep neural network model, e.g., [6, 2, 7, 3, 4]. In the following, an automated "oracle" $g$, capable of providing real-valued supervision (a regression target) for samples for purposes of training, is assumed available. A parameterized model for regression $f_\theta$, which could be a deep neural network (DNN), is typically trained to approximate the oracle based on the mean squared error (MSE) training objective, i.e.,
$$\min_{\theta}\ \frac{1}{|\mathcal{T}|}\sum_{(x_i,\,y_i)\in\mathcal{T}} \big(f_\theta(x_i) - y_i\big)^2, \qquad (1)$$
where $\mathcal{T}$ is the training set of supervised input samples $x_i$, with $y_i = g(x_i)$ the supervising target for $x_i$. The benefit of a DNN model is that it can perform inference at much higher speed than the oracle; this benefit is herein assumed to greatly outweigh the cost of invoking the oracle during training. Moreover, a DNN model is preferable to simpler alternatives when the oracle is a complicated (highly nonconvex) function of the input and the inputs belong to a high-dimensional sample space.
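As a minimal sketch of the training objective (with a hypothetical toy `model` and `oracle`, both stand-ins for illustration only), the MSE over a set of oracle-supervised samples can be computed as:

```python
import numpy as np

def mse(model, training_set):
    """Mean squared error of `model` over (input, target) pairs,
    where each target was supplied by the oracle."""
    errors = [model(x) - y for x, y in training_set]
    return float(np.mean(np.square(errors)))

# Toy example: the oracle is x -> x**2; the (poor) model is x -> x.
oracle = lambda x: x ** 2
data = [(x, oracle(x)) for x in [0.0, 1.0, 2.0]]
model = lambda x: x
```

Here `mse(model, data)` evaluates objective (1) at a fixed model; training then amounts to minimizing this quantity over the model parameters.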
The following describes an active learning approach to this training problem, which iteratively enriches the training set with new supervising examples; these examples inform the learning and improve the accuracy of the (re-)learned model.
2 Overview of the Method
Initially, the DNN is trained on a random sampling of the input space (where each sample is supervised by the oracle). Seeded by the initial training samples that exhibit the largest absolute errors $|g(x) - f_\theta(x)|$, gradient-ascent search is then used to identify a set of local maximizers, $\mathcal{M}$, of the squared error $(g(x) - f_\theta(x))^2$, where a finite-difference approximation is used for the gradient of $g$, and the gradient of $f_\theta$ is directly computed by back-propagation with respect to the input variables. As is often done in gradient-based optimization for training (deep learning), the step size here can be periodically reduced by a fixed proportion. Once the set of local maximizers is identified, among any subset whose members are all very close to one another, only one need be retained.
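The retention rule for near-coincident maximizers can be sketched as follows (the tolerance `tol` is an application-dependent assumption, not a value given in the text):

```python
import numpy as np

def dedupe_maximizers(points, tol):
    """Among any subset of local maximizers lying within `tol` of each
    other in Euclidean distance, retain only one representative."""
    kept = []
    for p in points:
        if all(np.linalg.norm(np.asarray(p) - np.asarray(q)) >= tol
               for q in kept):
            kept.append(p)
    return kept
```

A greedy first-come pass like this suffices because any cluster of mutually proximal maximizers contributes essentially the same supervising example.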
In summary, a gradient-ascent sequence with index $k$, seeking a local maximizer of the squared error $(g(x) - f_\theta(x))^2$, is:
$$x^{(k+1)} = x^{(k)} + \gamma_k\, \nabla_x \big(g(x) - f_\theta(x)\big)^2 \Big|_{x = x^{(k)}},$$
where $x^{(0)}$ is the initialization, the step size $\gamma_k > 0$ is non-increasing with $k$, a finite-difference approximation is used for the gradient of the oracle $g$, and the gradient of the neural network model $f_\theta$ is computed by back-propagation.
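A one-dimensional sketch of this ascent follows, assuming a scalar input; the oracle's derivative is approximated by a central finite difference, and `grad_f` stands in for back-propagation of the model's gradient with respect to the input. All names and parameter values here are illustrative assumptions:

```python
import math

def ascend_squared_error(g, f, grad_f, x0, step0=0.1, decay=0.99,
                         eps=1e-4, iters=200):
    """Gradient ascent on the squared error (g(x) - f(x))**2 with a
    geometrically decaying step size."""
    x = x0
    for k in range(iters):
        # Finite-difference approximation of the oracle gradient g'(x).
        g_grad = (g(x + eps) - g(x - eps)) / (2.0 * eps)
        err = g(x) - f(x)
        # d/dx (g - f)^2 = 2 (g - f) (g' - f')
        x += step0 * (decay ** k) * 2.0 * err * (g_grad - grad_f(x))
    return x

# Toy oracle whose error against the trivial model f = 0 peaks at x = 0.
g = lambda x: math.exp(-x * x)
f = lambda x: 0.0
grad_f = lambda x: 0.0
```

Starting from a seed with large error, the iterate climbs toward the nearby local maximizer of the squared error.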
Using the augmented training set $\mathcal{T} \cup \mathcal{M}$ (the original samples together with the oracle-supervised local maximizers), the DNN is then retrained in the manner of active learning. Depending on the application, a differently weighted combination of error terms can be used as the training objective, e.g.,
$$\min_{\theta}\ \alpha \sum_{(x_i,\,y_i)\in\mathcal{T}} \big(f_\theta(x_i)-y_i\big)^2 \;+\; \beta \sum_{(x_i,\,y_i)\in\mathcal{M}} \big(f_\theta(x_i)-y_i\big)^2 \qquad (2)$$
for some $\alpha, \beta > 0$. If $\alpha = \beta = 1/|\mathcal{T}\cup\mathcal{M}|$, then this objective is just (1) with $\mathcal{T}$ replaced by $\mathcal{T}\cup\mathcal{M}$. Depending on the application, one might want to give greater weight to the local maximizers in subsequent learning iterations, i.e., by taking $\beta > \alpha$.
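The weighted objective can be sketched as below, with `alpha` weighting the original supervised samples and `beta` the local-maximizer samples (the function name and toy values are illustrative assumptions):

```python
def weighted_squared_error(model, base_set, maximizer_set,
                           alpha=1.0, beta=1.0):
    """Sum of squared errors with the original training samples weighted
    by `alpha` and the local-maximizer samples weighted by `beta`; with
    equal weights this reduces to a plain (unnormalized) squared-error
    sum over the union of the two sets."""
    sq = lambda pairs: sum((model(x) - y) ** 2 for x, y in pairs)
    return alpha * sq(base_set) + beta * sq(maximizer_set)
```

Taking `beta > alpha` emphasizes fitting the model well in the regions where it previously erred most.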
The foregoing process is iteratively repeated in the manner of classical iterated gradient-based hill-climbing exploration/exploitation optimization methods, more recently called Cuckoo search [8]: at step $n$, gradient-ascent search of the error is seeded with the current training samples having the largest absolute errors, and uses a smaller initial step size and a tighter stopping condition than at step $n-1$.
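The overall iteration can be sketched as a skeleton with application-supplied callables; `train`, `find_maximizers`, the round count, and the shrink factors are all assumptions for illustration, not specifics from the text:

```python
def active_learn(oracle, train, find_maximizers, init_inputs,
                 rounds=3, step0=0.1, tol0=1e-2, shrink=0.5, n_seeds=3):
    """Each round retrains on the accumulated samples, then seeds a
    maximizer search with the current worst-fit inputs, using a smaller
    initial step size and a tighter stopping tolerance than the round
    before."""
    inputs = list(init_inputs)
    model = train([(x, oracle(x)) for x in inputs])
    for t in range(rounds):
        step, tol = step0 * shrink ** t, tol0 * shrink ** t
        seeds = sorted(inputs, key=lambda x: abs(oracle(x) - model(x)),
                       reverse=True)[:n_seeds]
        inputs += find_maximizers(model, seeds, step, tol)
        model = train([(x, oracle(x)) for x in inputs])
    return model, inputs

# Toy illustration with trivial stand-in callables (all hypothetical):
toy_oracle = lambda x: x
toy_train = lambda data: (lambda x: 0.0)
toy_search = lambda model, seeds, step, tol: [s + step for s in seeds]
toy_model, toy_inputs = active_learn(toy_oracle, toy_train, toy_search,
                                     [1.0, 2.0, 3.0, 4.0])
```

Each round thus adds a batch of oracle-supervised maximizers before retraining.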
Many obvious variations of the foregoing approach are possible. Though we assume the frequent use of the oracle is justified for training, the computational cost of training could be much more significant for more complex oracles. Given this, note the likely significant computational advantage of gradient-ascent search over "pure" iterated random sampling of regions of the sample space with higher error, i.e., a kind of iterative importance-sampling-based Monte Carlo.
3 Discussion: Overfitting and Regularization
Typically, one has no idea a priori how large a model is needed for a given application domain. So a particular DNN may or may not be (initially) overparameterized with respect to its training set. Low accuracy on the training set may indicate too few parameters (insufficient DNN capacity to learn). On the other hand, low accuracy on a supervised validation set held out from training (poor generalization performance) with high accuracy on the training set may indicate too many parameters (overfitting to the training set).
Suppose that in each iteration of the training method described in Section 2, a feed-forward neural network with five fully connected internal layers and 256 neurons per layer is initially used (alternatively, "convolutional" layers with far fewer parameters per layer could be used). Note that as local maximizers of error are identified, the resulting regression problem may become more complex, particularly if the number and density of local extrema increases. Thus, this DNN may need to be "regularized" initially to avoid overfitting (e.g., using dropout when training), while in later iterations the DNN may not have sufficient capacity, and so the number of layers/neurons may need to be increased to achieve the required accuracy on the validation set.
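For reference, the dropout mechanism mentioned above can be sketched as follows ("inverted" dropout; the keep/drop probability `p` and the NumPy-based formulation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the survivors by 1/(1-p) so the expected
    activation is unchanged; at inference time it is the identity."""
    if not training or p == 0.0:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)
```

Applied to a layer's activations during training, this randomly thins the network each step, discouraging co-adaptation of units and hence overfitting.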
4 Experimental Results
Some experimental results are now given for a simple oracle used to value a single-barrier option. The input is five-dimensional: barrier over spot price, strike over spot price, time to maturity, volatility, and interest rate. Each input sample was confined to a "realistic" interval and normalized for training.
The initial training set had size k, and there is a test set of k samples, the latter used to evaluate accuracy. These sets were formed by selecting sample 5-tuples uniformly at random and discarding those which were extraneous.
Under PyTorch, we used a feed-forward DNN with five fully connected internal layers, each having 512 ReLU neurons. The learning rate (step size) was 0.01 initially and was divided by 10 every 50 epochs. Training of the DNN was halted when the normalized change in the training MSE over 10 epochs fell below a fixed tolerance. Dropout was not employed when training.
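A PyTorch sketch of this setup follows; the architecture, initial learning rate, and decay schedule are taken from the description above, while the function and variable names are our own:

```python
import torch
from torch import nn, optim

def make_model(in_dim=5, width=512, depth=5):
    """Feed-forward DNN: five fully connected hidden layers of 512 ReLU
    units; 5-d input (the option parameters) and a scalar output."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

model = make_model()
opt = optim.SGD(model.parameters(), lr=0.01)
# Divide the learning rate by 10 every 50 epochs, as in the text.
sched = optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)
```

A standard training loop would call `opt.step()` per batch and `sched.step()` once per epoch.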
To identify the local maximizers of the squared error of the DNN trained on the initial training set, gradient ascent was performed on the squared error starting from initial points with large error, with the step size periodically divided by a fixed factor from its initial value, and with termination when the change in squared error fell below a threshold. To approximate the gradient of the oracle, a first-order finite difference was used. Local maximizers were deemed identical when the Euclidean distance between them fell below a threshold. k unique local maximizers were thus found by seeding gradient ascent with the small percentage of training samples having the highest absolute errors.
From Table 1, consider the DNN trained on the initial training set. Its training set MSE (in dollars) and its test set MSE are comparable, i.e., the performance on the training set "generalizes" well to the test set. Also, the mean absolute error (MAE) on the test set is 0.1734 (dollars), while the MAE on the set of local maximizers of squared error is 1.168. That is, this DNN is not extremely accurate, particularly on the local maximizers.
Retraining the DNN (alternatively, the previously trained model could be fine-tuned) using the augmented training set was terminated when the normalized change in the training MSE over 10 iterations fell below a fixed tolerance. To find k of its local maximizers with respect to squared error, gradient ascent was used, periodically reducing the step size by a fixed factor from its initial value, and stopping when the normalized change in the squared error fell below a threshold. The seeds for gradient ascent were the samples of the augmented training set having the highest absolute errors.
From Table 1, for the DNN trained on the augmented training set, note that equal weighting means that all samples (those of the original training set and the local maximizers) are weighted equally. Compared to the original DNN, this DNN has lower training and test MSE and lower test and maximizer MAEs, even though the learning task is more difficult (given the additional training samples and the fact that the DNN architecture, i.e., its model size, has not been changed).
We notice that, compared with the equally weighted case, the maximizers in some cases have higher MSE but lower MAE. With the maximizers having the highest absolute errors removed, both the MSE and MAE are smaller than in the equally weighted case. Hence, the inconsistency between maximizer MSE and MAE is caused by a few extreme instances, which significantly lift the MSE.
The foregoing retraining process can be repeated, terminating when no new local maximizers are found and the improvement in generalization performance on both the test set and the set of local maximizers levels off, without indication that the learning capacity of the DNN has been reached.
- [1] D.T. Davis and J.-N. Hwang. Solving Inverse Problems by Bayesian Neural Network Iterative Inversion with Ground Truth Incorporation. IEEE Trans. Sig. Proc., 45(11), 1997.
- [2] C. Kading, E. Rodner, A. Freytag, O. Mothes, B. Barz, and J. Denzler. Active Learning for Regression Tasks with Expected Model Output Changes. In Proc. British Machine Vision Conference, 2018.
- [3] S. Lathuiliere, P. Mesejo, X. Alameda-Pineda, and R. Horaud. A Comprehensive Analysis of Deep Regression. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 2019.
- [4] P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, X. Chen, and X. Wang. A Survey of Deep Active Learning. https://arxiv.org/abs/2009.00236, 16 Sep 2020.
- [5] S. Ruder. An Overview of Gradient Descent Optimization Algorithms. https://ruder.io/optimizing-gradient-descent/, 19 Jan 2018.
- [6] E. Tsymbalov, M. Panov, and A. Shapeev. Dropout-based Active Learning for Regression. https://arxiv.org/abs/1806.09856v2, 5 Jul 2018.
- [7] D. Wu, C.-T. Lin, and J. Huang. Active Learning for Regression Using Greedy Sampling. https://arxiv.org/abs/1808.04245v1, 8 Aug 2018.
- [8] X.-S. Yang and S. Deb. Cuckoo Search via Lévy Flights. In Proc. World Congress on Nature & Biologically Inspired Computing, 2009.