The presence of noise in data is an issue recurrently approached in the machine learning field. Noisy data can strongly influence the performance of machine learning techniques, leading to overfitting and poor generalization (Nettleton et al., 2010). We define noise as anything that obscures the relationship between the predictor variables and the target variable of a problem (Hickey, 1996). In classification and regression problems, noise can be found in the input (predictor) variables, in the output (target) variable, or in both, and is usually the result of non-systematic errors during the process of data generation.
In the context of regression problems, robust regression methods have been proposed to address noisy data points or outliers (we consider both noisy points and outliers to be out-of-pattern instances that should be identified, without going into the merit of whether a noisy point may actually be useful to the task), and also to deal with other data assumptions most regression methods do not respect (Rousseeuw and Leroy, 2005), such as the independence between the input variables. Although not very popular for some time due to its computational cost, robust regression provides an alternative to deal with noise. When modeling Genetic Programming (GP) to solve symbolic regression problems, only a few studies have looked at the impact of noise on the results of data generalization and overfitting (Borrelli et al., 2006; Sivapragasam et al., 2007; De Falco et al., 2007; Imada and Ross, 2008).
Instead, the community has given great focus to the relations between complexity, overfitting and generalization, and their connection to bloat and parsimony (Fitzgerald and Ryan, 2014; Vanneschi et al., 2010). These are indeed closely related issues in GP, but they do not account for problems that are not inherent to the GP search itself but intrinsic to the input data. A few works have also investigated this matter by considering the behavior of GP when additive noise is added to the input data (Borrelli et al., 2006; Sivapragasam et al., 2007; De Falco et al., 2007; Imada and Ross, 2008).
The main objective of this work is not to look at how canonical GP deals with noise, but rather to investigate how GP methods that take semantics into account deal with the problem when compared to canonical GP. A few papers in the literature have claimed Geometric Semantic Genetic Programming (GSGP) to be more robust to overfitting (which can be caused by noisy data points) when compared to canonical GP techniques (Vanneschi et al., 2013; Castelli et al., 2012; Castelli et al., 2013; Vanneschi et al., 2014; Vanneschi, 2014). At first, this might even seem counter-intuitive, as the exponential growth of solutions caused by GSGP might worsen the effects of overfitting. However, no systematic study has been performed to assess whether and in which situations this might be true, and how noisy data points affect the performance of GSGP.
We are particularly interested in noise found in the output variable of symbolic regression problems. This is because GSGP operates in a semantic space, guided by the vector of outputs defined by the training set. Hence, noise in the output has a much bigger impact on the search process of GSGP than noise in the predictor variables.
In order to investigate the impacts of noise, we systematically introduced additive noise to a set of 15 artificial datasets from the literature. We evaluated how increasing noise affected the error of the methods in both the training and test sets, using a total of 165 versions of the datasets. We also adapted two measures from the classification literature that capture the noise robustness of a method, and present results for these measures on the datasets considered.
In general, our results show that, although GSGP performs better at low levels of noise, as we increase the percentage of noisy instances the performances of GSGP and GP tend to become equivalent.
2. Geometric Semantic Genetic Programming
Previous GP works have shown that the evolutionary search can be improved through the inclusion of semantic information (Vanneschi et al., 2014). Among them, the Geometric Semantic GP (GSGP) introduces geometric semantic crossover and mutation operators that, acting on the syntax of the parent programs, produce offspring with known semantic properties (Moraglio et al., 2012).
From the symbolic regression perspective, given a training set T = {(x_i, y_i)}, i = 1, 2, ..., n, where x_i ∈ R^d and y_i ∈ R, the semantics of an individual representing a program p, denoted by s(p), is defined as the vector of outputs it produces when applied to the set of inputs defined by T, i.e., s(p) = [p(x_1), p(x_2), ..., p(x_n)]. This definition allows the semantics of any program to be straightforwardly represented in an n-dimensional semantic space S, where n is the size of the training set. Notice that the target output vector defined by the training set, given by t = [y_1, y_2, ..., y_n], is also representable in S.
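As a concrete illustration, the semantics of a program is simply its output vector over the training inputs. A minimal sketch in Python (names and data are illustrative, not from the paper):

```python
# Sketch: the semantics of a program is its vector of outputs on the
# training inputs; it is a point in an n-dimensional space, n = len(X).
def semantics(program, X):
    return [program(x) for x in X]

X = [0.0, 1.0, 2.0, 3.0]          # training inputs (illustrative)
target = [0.0, 1.0, 4.0, 9.0]     # target output vector, also a point in S

square = lambda x: x ** 2
assert semantics(square, X) == target  # this program hits the target exactly
```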
The Geometric Semantic Crossover (GSX) operator combines two parent individuals, p_1 and p_2, generating an offspring placed in the metric segment in S connecting both parents:

p_xo = r · p_1 + (1 − r) · p_2,

where r is a random real constant in [0, 1] (for fitness functions based on Euclidean distance) or a random real function with codomain [0, 1] (for fitness functions based on Manhattan distance).
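At the semantic level, the effect of GSX is a convex combination of the parents' semantics. The following Python sketch (illustrative, not the authors' implementation) verifies that each coordinate of the offspring lies between the corresponding coordinates of the parents:

```python
import random

# Sketch of geometric semantic crossover at the semantic level: with a
# constant r in [0, 1], the offspring semantics r*s(p1) + (1-r)*s(p2)
# lies on the segment connecting the parents' semantics.
def gsx_semantics(s1, s2, r):
    return [r * a + (1 - r) * b for a, b in zip(s1, s2)]

s1 = [1.0, 2.0, 3.0]
s2 = [3.0, 6.0, 9.0]
r = random.random()
child = gsx_semantics(s1, s2, r)
# Each coordinate of the child is bounded by the parents' coordinates.
assert all(min(a, b) <= c <= max(a, b) for a, b, c in zip(s1, s2, child))
```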
The Geometric Semantic Mutation (GSM) operator, in turn, produces semantic perturbations of a given individual p, such that its resulting semantics is placed in a ball of radius ε, proportional to the mutation step ms, as given by:

p_mut = p + ms · (r_1 − r_2),

where r_1 and r_2 are randomly generated real functions.
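The effect of GSM on the semantics can be sketched similarly; assuming (for illustration) that the random functions r_1 and r_2 have outputs bounded in [0, 1], each coordinate of the semantics is perturbed by at most the mutation step:

```python
# Sketch of geometric semantic mutation at the semantic level: the
# offspring semantics is s(p) + ms * (s(r1) - s(r2)); with r1, r2
# bounded in [0, 1], each coordinate moves by at most ms.
def gsm_semantics(s, sr1, sr2, ms):
    return [v + ms * (a - b) for v, a, b in zip(s, sr1, sr2)]

s = [1.0, 2.0, 3.0]
sr1 = [0.2, 0.9, 0.5]   # semantics of random function r1, in [0, 1]
sr2 = [0.7, 0.1, 0.5]   # semantics of random function r2, in [0, 1]
ms = 0.1
mutant = gsm_semantics(s, sr1, sr2, ms)
assert all(abs(m - v) <= ms for m, v in zip(mutant, s))
```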
Figure 1 shows the representation of the geometric semantic operators in the semantic space with a Manhattan-based fitness function. The resulting offspring of these operators applied to the semantics of the parent individuals are placed in the grey area.
3. Related Work
As previously mentioned, the GP community has given a lot of attention to the relations between complexity, overfitting and generalization, and their association to bloat and parsimony (Fitzgerald and Ryan, 2014; Vanneschi et al., 2010). This section focuses specifically on works performed to analyze and minimize the effects of noisy data in GP. In addition, to the best of our knowledge, so far there are no measures to quantify the impact of noise in GP-induced models for symbolic regression problems. Thus, we also present an overview of techniques to measure the impact of noisy data on the performance of classification techniques, which we adapted to the regression domain.
3.1. Genetic Programming with Noisy Data
Different strategies have been proposed in symbolic regression to investigate and minimize the impact of noisy data on the search performed by GP. On the one hand, one can try to filter out the noise before performing the regression. On the other hand, one can improve the methods so that they deal with the problem directly, a much more common approach.
Following the first strategy, Sivapragasam et al. (Sivapragasam et al., 2007) use Singular Spectrum Analysis (SSA) to filter out the noise components before performing the symbolic regression of a short time series of fortnight river flow. The experimental study indicates that when the stochastic (noise) components are removed from short and noisy time-series, the short-lead forecasts can be improved.
Regarding methods that try to deal with the problem, Borrelli et al. (Borrelli et al., 2006) employ a Pareto multi-objective GP for symbolic regression of time series with additive and multiplicative noise. The authors adopt two different configurations employing statistical metrics as the fitness objectives: (1) the Mean Squared Error (MSE) combined with the first two momenta and (2) the MSE with the skewness added to the kurtosis, all measures computed with regard to the desired and evaluated outputs. An experimental analysis considering time series generated from 50 functions from the literature shows that, although reducing overfitting and bloat, the multi-objective approach does not perform well when the noise level is too high. However, for moderate noise levels, the approach can successfully discover the trend of the series.
De Falco et al. (De Falco et al., 2007), in turn, present two GP methods guided by context-free grammars with different fitness functions that take parsimony and the simplicity of the solutions into account. The Parsimony-based Fitness Algorithm (PFA) and the Solomonoff-based Fitness Algorithm (SFA) adopt fitness functions based, respectively, on parsimony ideas and on Solomonoff probability induction concepts. These methods are compared on four datasets generated from known functions, with five different levels of additive noise. The experimental analysis indicates that SFA achieves smaller error than PFA for all the datasets and levels of noise.
Imada and Ross (Imada and Ross, 2008) also present a fitness function, alternative to functions based on the sum of errors, where the scores are determined by the sum of the normalized differences between the target and evaluated values, regarding different statistical features. The experimental analysis in two datasets with two levels of additive noise shows that the proposed fitness function outperforms the fitness based on the sum of errors.
Although the above works handle noise in the symbolic regression context, there is a lack of studies aimed at quantifying the impact of noise on GP-based regression methods. The next section presents measures adopted in the machine learning literature to quantify the influence of noise on classification algorithms. In Section 4 we select, and adapt, these metrics to apply them to our regression test bed.
3.2. Quantifying Noise Robustness
When a machine learning method is capable of inducing models that are not influenced by the presence of noise in data, we say it is robust to noise—i.e., the more robust a method is to noise, the more similar are the models it induces from data with and without noise (Sáez et al., 2016).
Following this premise, works in the classification literature adopt measures that compare the performance of models induced in the presence and absence of noise in the dataset, in order to evaluate the robustness of the learner. Here we introduce three of these metrics: relative risk bias, relative loss of accuracy and equalized loss of accuracy.
The Relative Risk Bias (RRB) (Kharin and Zhuk, 1994) measures the robustness of an optimal decision rule, i.e., the Bayesian decision rule providing the minimal risk when the training data has no “contaminations”. Sáez et al. (Sáez et al., 2016) extend the measure to any classifier, given by:

RRB_x% = (E_x% − E_bay) / E_bay,

where E_x% is the classification error rate obtained by the classifier in a dataset with noise level given by x% and E_bay is the classification error rate of the Bayesian decision rule without noise (a theoretical decision rule that is not learned from the data and depends only on the data-generating process), which is by definition the minimum expected error that can be achieved by any decision rule.
The Relative Loss of Accuracy (RLA) (Sáez et al., 2011), in turn, quantifies the impact of increasing levels of noise on the accuracy of the classification model when compared to the case with no noise. The RLA measure with a level of noise equal to x% is defined by:

RLA_x% = (A_0% − A_x%) / A_0%,

where A_0% and A_x% are the accuracies of the classifier with a noise level of 0% and x%, respectively. RLA is considered more intuitive than RRB, as methods obtaining high values of accuracy without noise (A_0%) will have a low RLA value.
Finally, the Equalized Loss of Accuracy (ELA) (Sáez et al., 2016) was proposed as a correction of RLA inspired by the measure from (Kharin and Zhuk, 1994), and overcomes the limitations of RRB and RLA. The initial performance (A_0%) has a very low influence on the RLA equation, which can negatively bias the loss of accuracy of methods with high A_0% when compared to methods with low initial accuracy. E.g., consider a method with very high initial accuracy that loses a small amount of accuracy under x% of noise, and a second method with lower initial accuracy that loses none. Although the first method has a very low loss of accuracy for x% of noise, the second classifier obtains a better RLA_x%, equal to 0. The ELA measure is given by:

ELA_x% = (100 − A_x%) / A_0%,

where A_0% and A_x% are defined as in Equation 4. ELA_x% is equivalent to RLA_x% + E_0% (see (Sáez et al., 2016) for the derivation), where the factor E_0% is equivalent to (100 − A_0%) / A_0% and depends only on the initial accuracy A_0%. Thus, the ELA value of a method is based on its robustness, measured by RLA_x%, and on its behavior for clean data, i.e., without controlled noise, measured by E_0%.
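A small sketch of the two accuracy-based measures (using accuracies in [0, 1] rather than percentages) makes the decomposition of ELA into RLA plus an initial-error term explicit:

```python
# Sketch of the classification robustness measures, with accuracies
# a0 (no noise) and ax (noise level x%) given in [0, 1]:
#   RLA_x = (a0 - ax) / a0       relative loss of accuracy
#   ELA_x = (1 - ax) / a0        equalized loss of accuracy
def rla(a0, ax):
    return (a0 - ax) / a0

def ela(a0, ax):
    return (1.0 - ax) / a0

# ELA decomposes as RLA plus a term depending only on a0.
a0, ax = 0.9, 0.75
assert abs(ela(a0, ax) - (rla(a0, ax) + (1.0 - a0) / a0)) < 1e-12
```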
4. Methodology
This section presents the methodology followed to analyze how GSGP performs in symbolic regression problems with different levels of noise when compared to GP. We present the datasets considered in our study, along with the strategy to incrementally add noise to the data, and the measures we adopt to assess the impact of different levels of noise on the performance of GSGP and GP.
4.1. Test Bed
Since real-world problems have intrinsic noise inserted when the data is acquired and pre-processed from the environment (Nettleton et al., 2010), we adopt a test bed composed of synthetic data, generated from 15 known functions selected from the list of benchmark candidates for symbolic regression GP presented in (McDermott et al., 2012). Table 1 presents the function set, the sampling strategy adopted to build the dataset, the input domain, the number of instances and the source from the literature.
| Dataset | Objective function | # of variables | Sampling strategy (training / test) | # of instances (training / test) | Src. |
|---|---|---|---|---|---|
| Keijzer-4 | | 1 | E[0, 10, 0.1] / E[0.05, 10.05, 0.1] | 101 / 101 | (Keijzer, 2003) |
| Vladislavleva-1 | | 2 | | 100 / 2025 | (Vladislavleva et al., 2009) |
| Vladislavleva-2 | | 1 | | 100 / 221 | (Vladislavleva et al., 2009) |
| Vladislavleva-3 | | 2 | | 600 / 5083 | (Vladislavleva et al., 2009) |
| Vladislavleva-4 | | 5 | | 1024 / 5000 | (Vladislavleva et al., 2009) |
| Vladislavleva-5 | | 3 | | 300 / 2700 | (Vladislavleva et al., 2009) |
| Vladislavleva-7 | | 2 | | 300 / 1000 | (Vladislavleva et al., 2009) |
| Vladislavleva-8 | | 2 | | 50 / 1089 | (Vladislavleva et al., 2009) |
The training and test sets are sampled independently, according to the two strategies presented in Table 1: U[a, b, n] indicates a uniform random sample of n points drawn from the interval [a, b], and E[a, b, s] indicates a grid of points evenly spaced with an interval s, from a to b, inclusive. For the former strategy, we generated five sets of samples; for the latter, since the procedure is deterministic, we generated only one sample.
In order to evaluate the impact of noise on GSGP and GP performances, the response variable (desired output) of the training instances was perturbed by additive Gaussian noise with zero mean and unit standard deviation, applied with probability given by p. We generated datasets with p varying from 0 to 0.2 in steps of 0.02, resulting in 11 different levels of noise, in a total of 165 datasets analyzed.
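The noise-injection protocol described above can be sketched as follows (an illustrative reconstruction, not the authors' code):

```python
import random

# Sketch of the noise-injection protocol: each training target is
# perturbed by zero-mean, unit-variance Gaussian noise with
# probability p; the remaining targets are left untouched.
def add_noise(y, p, rng):
    return [yi + rng.gauss(0.0, 1.0) if rng.random() < p else yi
            for yi in y]

rng = random.Random(42)
y = [1.0] * 1000
noisy = add_noise(y, 0.1, rng)
changed = sum(1 for a, b in zip(y, noisy) if a != b)
assert 50 < changed < 200   # roughly 10% of the instances perturbed
```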
The performance of the methods on the datasets was measured using the Normalized Root Mean Squared Error (NRMSE) (Keijzer, 2003; De Falco et al., 2007), given by (the equation below regards the training set, but the formula is easily extensible to the test set):

NRMSE = (1 / σ_y) · sqrt( (1/n) · Σ_{i=1}^{n} (p(x_i) − y_i)² ),

where ȳ and σ_y are, respectively, the mean and standard deviation of the target output vector, and p is the model (function) induced by the regression method. NRMSE is equal to 1 when the model performs equivalently to predicting ȳ and equal to 0 when the model perfectly fits the data. We used the normalized version of RMSE to be able to compare results from different levels of noise and datasets in a fair way, as described in the next section.
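A minimal sketch of the NRMSE computation, checking the two boundary cases mentioned above (perfect fit and mean predictor):

```python
import math

# Sketch of the NRMSE: the RMSE of the model's predictions divided by
# the standard deviation of the target vector.
def nrmse(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    std = math.sqrt(sum((yi - mean) ** 2 for yi in y_true) / n)
    rmse = math.sqrt(sum((yi - pi) ** 2
                         for yi, pi in zip(y_true, y_pred)) / n)
    return rmse / std

y = [1.0, 2.0, 3.0, 4.0]
assert nrmse(y, y) == 0.0                         # perfect fit -> 0
mean = sum(y) / len(y)
assert abs(nrmse(y, [mean] * len(y)) - 1.0) < 1e-12   # mean predictor -> 1
```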
4.2. Noise Robustness in Regression
The performance of GSGP and GP in the same datasets with different levels of noise is assessed by the robustness measures presented in Section 3.2, namely RLA and ELA, adapted to the regression domain. Instead of using the accuracy—a performance measure for classification methods—we adopted the NRMSE.
Notice that the accuracy is defined in [0, 100] (or [0, 1]), with higher values meaning better accuracy and, consequently, smaller error. Thus, the larger the measured RLA or ELA values, the less robust the method is to the respective noise level. The NRMSE, on the other hand, is defined in [0, +∞), and higher values mean greater error.
In this context, we introduce the Relative Increase in Error (RIE) and the Equalized Increase in Error (EIE) measures as alternatives to RLA and ELA, respectively, to quantify noise robustness in the regression domain. RIE and EIE are given by Equations 7 and 8, respectively:

RIE_x% = (E_x% − E_0%) / (1 + E_0%),        (7)
EIE_x% = E_x% / (1 + E_0%),        (8)

where E_x% is the NRMSE obtained by the model in the dataset with x% of noise, E_0% is the NRMSE obtained by the model in the dataset with no noise, and a plus-one term is added to both denominators in order to avoid division by zero. The higher the values of both measures, the more sensitive the model is to the respective noise level.
Similarly to ELA, we can derive EIE according to Equation 9, such that EIE_x% is equal to RIE_x% plus a term depending only on the model NRMSE with no noise:

EIE_x% = RIE_x% + E_0% / (1 + E_0%).        (9)
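A small sketch of the two regression measures, verifying the decomposition of Equation 9 numerically:

```python
# Sketch of the regression robustness measures, with e0 the NRMSE on
# clean data and ex the NRMSE at noise level x%; the "+1" in the
# denominators avoids division by zero when e0 = 0.
def rie(e0, ex):
    return (ex - e0) / (1.0 + e0)

def eie(e0, ex):
    return ex / (1.0 + e0)

# EIE equals RIE plus a term depending only on the noise-free error e0.
e0, ex = 0.2, 0.5
assert abs(eie(e0, ex) - (rie(e0, ex) + e0 / (1.0 + e0))) < 1e-12
```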
5. Experimental Analysis
Table 2. p-values of the Wilcoxon tests for each percentage of training instances affected by noise. Symbols in the table indicate whether the null hypothesis was not discarded, or whether GSGP is statistically better (worse) than GP with 95% confidence.
This section presents the experimental analysis of the performance of GSGP in symbolic regression problems with noisy data. We compare the results with a canonical GP (Banzhaf et al., 1998), using the noise robustness measures introduced in Section 4.2 and the 15 datasets presented in Table 1 with 11 different noise levels. Given the non-deterministic nature of GSGP and GP, each experiment was repeated 50 times. As explained in Section 4.1, we generated five samples of the data obtained by the uniform random strategy; in datasets with this sampling strategy, the experiments were repeated 10 times for each sample, resulting in a total of 50 repetitions.
Both GP and GSGP were run with a population of 1000 individuals evolved for 2000 generations with tournament selection of size 10. The grow method (Koza, 1992) was adopted to generate the random functions inside the geometric semantic crossover and mutation operators, and the ramped half-and-half method (Koza, 1992) to generate the initial population, both with a maximum individual depth equal to 6. The function set included three binary arithmetic operators (+, −, ×) and the analytic quotient (AQ) (Ni et al., 2013), an alternative to the arithmetic division with similar properties, but without discontinuity, given by:

AQ(a, b) = a / sqrt(1 + b²).
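The AQ operator is straightforward to implement; a quick sketch:

```python
import math

# Analytic quotient (Ni et al., 2013): a division surrogate,
# AQ(a, b) = a / sqrt(1 + b^2), with no discontinuity at b = 0.
def aq(a, b):
    return a / math.sqrt(1.0 + b * b)

assert aq(1.0, 0.0) == 1.0                          # behaves like a/1 at b = 0
assert abs(aq(3.0, 4.0) - 3.0 / math.sqrt(17.0)) < 1e-12
```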
The terminal set comprised the variables of the problem and uniformly drawn random constants. The GP method employed the canonical crossover and mutation operators (Koza, 1992). GSGP employed the geometric semantic crossover for fitness functions based on Manhattan distance and the geometric semantic mutation operator, as presented in (Castelli et al., 2015). The mutation step adopted by the geometric semantic mutation operator was defined as 10% of the standard deviation of the target vector given by the training data.
Figure 2 shows how the median training and test NRMSE are affected by increasing the percentage of noisy instances. Regarding the results for data with no noise, GSGP presents better median test NRMSE in all but two datasets, Keijzer-6 and Vladislavleva-5. However, the opposite behavior is observed for higher noise levels in Keijzer-1, Keijzer-9, Vladislavleva-1 and Vladislavleva-4. Moreover, the GSGP test NRMSE approaches that of GP as the noise level increases in the datasets Keijzer-2, Keijzer-3, Keijzer-4, Keijzer-7, Keijzer-8, Vladislavleva-2 and Vladislavleva-8. This behavior may indicate that, although GSGP outperforms GP at low levels of noise in most of the datasets, its performance deteriorates faster than that of GP as the level of noise increases. Notice that in all experiments the median training NRMSE of GSGP is smaller than the one obtained by GP, regardless of the behavior of both methods on the test data, which may indicate that GSGP has a greater tendency to overfit noisy data than GP.
Figure 3, in turn, shows the median values for the EIE and RIE measures presented in Section 4.2, obtained by GSGP and GP methods for different noise levels considering only the test set. When analyzing RIE values, we verify that GSGP is less robust to noise than GP for all noise levels in 10 datasets—Keijzer-2, Keijzer-3, Keijzer-4, Keijzer-6, Keijzer-7, Keijzer-9, Vladislavleva-1, Vladislavleva-2, Vladislavleva-4 and Vladislavleva-7—and for noise levels greater than or equal to 4% for Keijzer-1 and Keijzer-8 and 6% in the dataset Vladislavleva-8.
However, this scenario changes when we look at the values of EIE. GSGP is more robust than GP at all noise levels in six datasets (Keijzer-4, Keijzer-7, Keijzer-8, Vladislavleva-2, Vladislavleva-3 and Vladislavleva-7), and the opposite happens in only two datasets (Keijzer-6 and Vladislavleva-1). Besides, we can observe that GSGP obtains smaller EIE values than GP at the lower noise levels in the datasets Keijzer-1 and Keijzer-3. On the other hand, GP outperforms GSGP in terms of EIE at the higher noise levels in the datasets Keijzer-9 and Vladislavleva-4. These analyses indicate that, overall, GSGP is more robust to noise than GP according to the EIE measure.
The main reason for these contradicting results lies in what each measure regards as important when quantifying noise robustness. As presented in Section 3.2, the performance of the method in the dataset with no noise has very low influence on the RLA measure, and consequently on its regression counterpart (RIE). ELA and EIE, on the other hand, add a term to their respective equations to represent the behavior of the model on the data without controlled noise. As GSGP performs better than GP in the majority of scenarios when no noise is present, it is natural that EIE considers it more robust to noise than RIE does.
In order to compare the results presented in Figures 2 and 3, we conducted three paired one-tailed Wilcoxon tests comparing GP and GSGP under the null hypothesis that their median performances, measured by their median test NRMSE, RIE and EIE over all datasets, are equal. The adopted alternative hypotheses differ according to the overall results presented in Figures 2 and 3: GSGP outperforms GP in terms of NRMSE and EIE, and GP outperforms GSGP in terms of RIE. The p-values reported by the tests are presented in Table 2, where the symbols indicate, with a confidence level of 95%, whether the null hypothesis was not discarded or whether GSGP is statistically better (worse) than GP. For the NRMSE measure, GSGP outperforms GP at the lower noise levels. However, there are no statistical differences at the higher noise levels, which indicates that the GSGP performance approaches that of GP. When analyzing the robustness measures, RIE indicates that GP is more robust than GSGP at all noise levels. However, the same is not true for the EIE measure, which indicates that GSGP is more robust than GP at the lowest noise levels and shows no significant differences at the higher noise levels.
6. Conclusions
This paper presented an analytic study of the impact of noisy data on the performance of GSGP when compared to GP in symbolic regression problems. The performance of both methods was measured by the normalized RMSE and by two robustness measures adapted from the classification literature to the regression domain, namely the Relative Increase in Error (RIE) and the Equalized Increase in Error (EIE), in a test bed composed of 15 synthetic datasets, each of them with 11 different, equally spaced levels of noise.
Results indicated that GP is more robust than GSGP at all levels of noise when the RIE measure is employed to analyze the outcomes. However, when the NRMSE or EIE values were analyzed, GSGP outperformed GP in terms of robustness at lower levels of noise and presented no significant differences with respect to GP at higher levels of noise. Overall, these outcomes indicate that, although GSGP performs better than GP at low levels of noise, the methods tend to perform equivalently at larger levels of noise. Given these conclusions, potential future developments include investigating techniques to identify the noisy instances in order to remove them or minimize their importance during the search.
Acknowledgements. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was partially supported by the following Brazilian Research Support Agencies: CNPq, FAPEMIG, and CAPES.
- Banzhaf et al. (1998) W. Banzhaf, P. Nordin, R.E. Keller, and F.D. Francone. 1998. Genetic Programming — an Introduction: on the Automatic Evolution of Computer Programs and Its Applications. Morgan Kaufmann Publishers.
- Borrelli et al. (2006) A. Borrelli, I. De Falco, A. Della Cioppa, M. Nicodemi, and G. Trautteur. 2006. Performance of genetic programming to extract the trend in noisy data series. Physica A: Statistical Mechanics and its Applications 370, 1 (2006), 104–108.
- Castelli et al. (2013) Mauro Castelli, Davide Castaldi, Ilaria Giordani, Sara Silva, Leonardo Vanneschi, Francesco Archetti, and Daniele Maccagnola. 2013. An efficient implementation of geometric semantic genetic programming for anticoagulation level prediction. In 16th Portuguese Conference on Artificial Intelligence, EPIA 2013 (LNCS), Luís Correia, Luís Paulo Reis, and José Cascalho (Eds.), Vol. 8154. Springer Berlin Heidelberg, 78–89.
- Castelli et al. (2012) Mauro Castelli, Luca Manzoni, and Leonardo Vanneschi. 2012. An efficient genetic programming system with geometric semantic operators and its application to human oral bioavailability prediction. arXiv preprint arXiv:1208.2437 (2012).
- Castelli et al. (2015) M. Castelli, S. Silva, and L. Vanneschi. 2015. A C++ framework for geometric semantic genetic programming. Genetic Prog. and Evolvable Machines 16, 1 (Mar 2015), 73–81.
- De Falco et al. (2007) I. De Falco, A. Della Cioppa, D. Maisto, U. Scafuri, and E. Tarantino. 2007. Parsimony doesn’t mean simplicity: Genetic programming for inductive inference on noisy data. In Proceedings of the 10th European Conference, EuroGP’07 (LNCS), M. Ebner, M. O’Neill, A. Ekárt, L. Vanneschi, and A. Esparcia-Alcázar (Eds.), Vol. 4445. Springer, 351–360.
- Fitzgerald and Ryan (2014) Jeannie Fitzgerald and Conor Ryan. 2014. On Size, Complexity and Generalisation Error in GP. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO ’14). ACM, New York, NY, USA, 903–910. DOI:http://dx.doi.org/10.1145/2576768.2598346
- Hickey (1996) Ray J Hickey. 1996. Noise modelling and evaluating learning from examples. Artificial Intelligence 82, 1 (1996), 157–179.
- Imada and Ross (2008) J. H Imada and B. J Ross. 2008. Using feature-based fitness evaluation in symbolic regression with added noise. In Proceedings of the 10th annual conference companion on Genetic and evolutionary computation. ACM, 2153–2158.
- Keijzer (2003) M. Keijzer. 2003. Improving Symbolic Regression with Interval Arithmetic and Linear Scaling. In 6th European Conference, EuroGP 2003, Conor Ryan, Terence Soule, Maarten Keijzer, Edward Tsang, Riccardo Poli, and Ernesto Costa (Eds.), Vol. 2610. Springer Berlin Heidelberg, 70–82. DOI:http://dx.doi.org/10.1007/3-540-36599-0_7
- Kharin and Zhuk (1994) Yu. Kharin and E. Zhuk. 1994. Robustness in statistical pattern recognition under “contaminations” of training samples. In Pattern Recognition, 1994, Vol. 2, Conference B: Computer Vision & Image Processing, Proceedings of the 12th IAPR International Conference on. IEEE, 504–506.
- Koza (1992) J. R. Koza. 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection. Vol. 1. MIT Press.
- McDermott et al. (2012) J. McDermott, D. R. White, S. Luke, L. Manzoni, M. Castelli, L. Vanneschi, W. Jaskowski, K. Krawiec, R. Harper, K. De Jong, and U. O’Reilly. 2012. Genetic Programming Needs Better Benchmarks. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation (GECCO ’12). ACM, New York, NY, USA, 791–798. DOI:http://dx.doi.org/10.1145/2330163.2330273
- Moraglio et al. (2012) A. Moraglio, K. Krawiec, and C. G. Johnson. 2012. Geometric Semantic Genetic Programming. Springer Berlin Heidelberg, Berlin, Heidelberg, 21–31. DOI:http://dx.doi.org/10.1007/978-3-642-32937-1_3
- Nettleton et al. (2010) F. Nettleton, D, A. Orriols-Puig, and A. Fornells. 2010. A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial Intelligence Review 33, 4 (2010), 275–306.
- Ni et al. (2013) J. Ni, R. H. Drieberg, and P. I. Rockett. 2013. The use of an analytic quotient operator in genetic programming. Evolutionary Computation, IEEE Trans. on 17, 1 (Apr 2013), 146–152.
- Rousseeuw and Leroy (2005) Peter J. Rousseeuw and Annick M. Leroy. 2005. Robust Regression and Outlier Detection. Vol. 589. John Wiley & Sons.
- Sáez et al. (2011) J. A. Sáez, J. Luengo, and F. Herrera. 2011. Fuzzy rule based classification systems versus crisp robust learners trained in presence of class noise’s effects: a case of study. In Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference on. IEEE, 1229–1234.
- Sáez et al. (2016) J. A. Sáez, J. Luengo, and F. Herrera. 2016. Evaluating the classifier behavior with noisy data considering performance and robustness: the Equalized Loss of Accuracy measure. Neurocomputing 176 (2016), 26–35. DOI:http://dx.doi.org/10.1016/j.neucom.2014.11.086
- Sivapragasam et al. (2007) C. Sivapragasam, P. Vincent, and G. Vasudevan. 2007. Genetic programming model for forecast of short and noisy data. Hydrological processes 21, 2 (2007), 266–272.
- Vanneschi (2014) Leonardo Vanneschi. 2014. Improving genetic programming for the prediction of pharmacokinetic parameters. Memetic Computing 6, 4 (2014), 255–262.
- Vanneschi et al. (2013) Leonardo Vanneschi, Mauro Castelli, Luca Manzoni, and Sara Silva. 2013. A new implementation of geometric semantic GP and its application to problems in pharmacokinetics. In 16th European Conference, EuroGP 2013 (LNCS), Krzysztof Krawiec, Alberto Moraglio, Ting Hu, A. Şima Etaner-Uyar, and Bin Hu (Eds.), Vol. 7831. Springer Berlin Heidelberg, 205–216.
- Vanneschi et al. (2010) Leonardo Vanneschi, Mauro Castelli, and Sara Silva. 2010. Measuring Bloat, Overfitting and Functional Complexity in Genetic Programming. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO ’10). ACM, New York, NY, USA, 877–884. DOI:http://dx.doi.org/10.1145/1830483.1830643
- Vanneschi et al. (2014) Leonardo Vanneschi, Mauro Castelli, and Sara Silva. 2014. A survey of semantic methods in genetic programming. Genetic Programming and Evolvable Machines 15, 2 (2014), 195–214. DOI:http://dx.doi.org/10.1007/s10710-013-9210-0
- Vanneschi et al. (2014) Leonardo Vanneschi, Sara Silva, Mauro Castelli, and Luca Manzoni. 2014. Geometric Semantic Genetic Programming for Real Life Applications. In Genetic Programming Theory and Practice XI, Rick Riolo, Jason H. Moore, and Mark Kotanchek (Eds.). Springer New York, 191–209.
- Vladislavleva et al. (2009) E. J. Vladislavleva, G. F. Smits, and D. Den Hertog. 2009. Order of Nonlinearity As a Complexity Measure for Models Generated by Symbolic Regression via Pareto Genetic Programming. Trans. Evol. Comp 13, 2 (April 2009), 333–349. DOI:http://dx.doi.org/10.1109/TEVC.2008.926486