Regularization is all you Need: Simple Neural Nets can Excel on Tabular Data

06/21/2021 ∙ by Arlind Kadra, et al. ∙ University of Freiburg ∙ Leibniz University Hannover

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset, using a joint optimization over the decision of which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.


1 Introduction

In contrast to the mainstream in deep learning (DL), in this paper, we focus on tabular data, a domain that we feel is understudied in DL. Nevertheless, it is of great relevance for many practical applications, such as climate science, medicine, manufacturing, finance, recommender systems, etc. During the last decade, traditional machine learning methods, such as Gradient-Boosted Decision Trees (GBDT) Chen:2016:XST:2939672.2939785 , dominated tabular data applications due to their superior performance, and the success story DL has had for raw data (e.g., images, speech, and text) stopped short of tabular data.

Even in recent years, the existing literature still gives mixed messages on the state-of-the-art status of deep learning for tabular data. While some recent neural network methods (49698, ; Popov2020Neural, ) claim to outperform GBDT, others confirm that GBDT are still the most accurate method on tabular data (10.5555/3326943.3327070, ; katzir2021netdnf, ). The extensive experiments on 40 datasets we report indeed confirm that recent neural networks (49698, ; Popov2020Neural, ; DBLP:journals/corr/abs-2003-06505, ) do not outperform GBDT when the hyperparameters of all methods are thoroughly tuned.

We hypothesize that the key to improving the performance of neural networks on tabular data lies in exploiting the recent DL advances on regularization techniques (reviewed in Section 3), such as data augmentation, residual blocks, model averaging (e.g. dropout, or snapshot ensembles), or on learning dynamics (e.g. look-ahead optimizer, or stochastic weight averaging). Indeed, we find that even plain Multilayer Perceptrons (MLPs) achieve state-of-the-art results when regularized by multiple modern regularization techniques applied jointly and simultaneously.

Applying multiple regularizers jointly is already common practice, as practitioners routinely mix regularization techniques (e.g., Dropout with early stopping and weight decay). However, the deeper question of “Which subset of regularizers gives the largest generalization performance on a particular dataset among dozens of available methods?” remains unanswered, as practitioners currently combine regularizers via inefficient trial-and-error procedures. In this paper, we provide a simple, yet principled answer to that question by posing the selection of the optimal subset of regularization techniques and their inherent hyperparameters as a joint search for the best combination of MLP regularizers for each dataset, among a pool of 13 modern regularization techniques and their subsidiary hyperparameters (Section 4).

From an empirical perspective, this paper is the first to provide compelling evidence that well-regularized neural networks (even simple MLPs!) indeed surpass the current state-of-the-art models in tabular datasets, including recent neural network architectures and GBDT (Section 6). In fact, the performance improvements are quite pronounced and highly significant. We believe this finding to potentially have far-reaching implications, and to open up a garden of delights of new applications on tabular datasets for DL.

Our contributions are as follows:

  1. We demonstrate that modern DL regularizers (developed for DL applications on raw data, such as images, speech, or text) also substantially improve the performance of deep multi-layer perceptrons on tabular data.

  2. We propose a simple, yet principled, paradigm for selecting the optimal subset of regularization techniques and their subsidiary hyperparameters (so-called regularization cocktails).

  3. We demonstrate that these regularization cocktails enable even simple MLPs to outperform both recent neural network architectures, as well as traditional strong ML methods, such as GBDT, on tabular data. Specifically, we are the first to show neural networks to significantly (and substantially) outperform XGBoost in a fair, large-scale experimental study.

2 Related Work on Deep Learning for Tabular Data

Recently, various neural architectures have been proposed for improving the performance of neural networks on tabular data. TabNet (49698, ) introduced a sequential attention mechanism for capturing salient features. Neural oblivious decision ensembles (NODE (Popov2020Neural, )) blend the concept of hierarchical decisions into neural networks. Regularization learning networks learn a regularization strength for every neural weight by posing the problem as a large-scale hyperparameter tuning scheme (10.5555/3326943.3327070, ). The recent NET-DNF technique introduces a novel inductive bias in the neural structure corresponding to logical Boolean formulas in disjunctive normal form (katzir2021netdnf, ). An approach that is often mistaken as deep learning for tabular data is AutoGluon Tabular (DBLP:journals/corr/abs-2003-06505, ). It builds ensembles of basic neural networks together with other traditional ML techniques, with its key contribution being a strong stacking approach. We emphasize that some of these publications claim to outperform Gradient-Boosted Decision Trees (GBDT) (49698, ; Popov2020Neural, ), while other papers explicitly stress that their neural networks do not outperform GBDT on tabular datasets (10.5555/3326943.3327070, ; katzir2021netdnf, ). In contrast, we do not propose a new kind of neural architecture, but a novel paradigm for learning a combination of regularization methods.

3 An Overview of Regularization Methods for Deep Learning

Weight decay: The most classical approaches to regularization minimize the norms of the parameter values, e.g., either the L1 norm (tibshirani96regression, ), the L2 norm (Tikhonov1943OnTS, ), or a combination of the two known as the Elastic Net (zou2005regularization, ). A recent work fixes the malpractice of adding the decay penalty term before the momentum-based adaptive learning rate steps (e.g., in common implementations of Adam (kingma:adam, )) by decoupling the regularization from the loss and applying it after the learning rate computation (loshchilov2018decoupled, ).
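As a concrete illustration, the following sketch contrasts L2 regularization folded into the adaptive optimizer update with the decoupled weight decay of AdamW in PyTorch. The layer sizes, learning rate, and decay factor are illustrative placeholders, not the values used in the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Classical L2 regularization with Adam: the penalty is added to the gradient
# before the adaptive update, so it gets entangled with the per-parameter
# learning-rate scaling.
adam_l2 = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Decoupled weight decay (AdamW): the decay is applied directly to the weights
# after the adaptive gradient step, as proposed by Loshchilov & Hutter.
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
```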

Data Augmentation: Among the augmentation regularizers, Cut-Out (Devries2017ImprovedRO, ) proposes to mask a subset of input features (e.g., pixel patches for images) for ensuring that the predictions remain invariant to distortions in the input space. Along similar lines, Mix-Up (zhang2018mixup, ) generates new instances as a linear span of pairs of training examples, while Cut-Mix (yun2019cutmix, ) suggests super-positions of instance pairs with mutually-exclusive pixel masks. A recent technique, called Aug-Mix (hendrycks*2020augmix, ), generates instances by sampling chains of augmentation operations. On the other hand, the direction of reinforcement learning (RL) for augmentation policies was elaborated by Auto-Augment (Cubuk_2019_CVPR, ), followed by a technique that speeds up the training of the RL policy (NIPS2019_8892, ). Last but not least, adversarial attack strategies (e.g., FGSM (43405, )) generate synthetic examples with minimal perturbations, which are employed in training robust models (madry2018towards, ).
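To make two of these augmentations concrete for tabular inputs, here is a minimal sketch of Mix-Up and a Cut-Out-style feature mask applied to a mini-batch of feature vectors; it is an illustrative adaptation, not the exact implementation used in the paper.

```python
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Mix-Up: convex combinations of instance pairs and of their (one-hot) targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

def cutout_batch(x, patch_ratio=0.2):
    """Cut-Out for tabular data: zero out a random subset of features per instance."""
    mask = (torch.rand_like(x) > patch_ratio).float()
    return x * mask

x = torch.randn(32, 10)  # mini-batch of 32 instances with 10 features
y = torch.nn.functional.one_hot(torch.randint(0, 2, (32,)), 2).float()
x_mix, y_mix = mixup_batch(x, y)
x_cut = cutout_batch(x)
```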

Model Averaging: Ensembled machine learning models have been shown to reduce variance and act as regularizers (polikar_ensemble_2012, ). A popular ensemble neural network with shared weights among its base models is Dropout (10.5555/2627435.2670313, ), which was extended to a variational version with a Gaussian posterior of the model parameters (10.5555/2969442.2969527, ). As a follow-up, Mix-Out (Lee2020Mixout:, ) extends Dropout by statistically fusing the parameters of two base models. Furthermore, so-called “snapshot ensembles” (huang_snapshot_2016, ) can be created using models from intermediate convergence points of stochastic gradient descent with restarts (loshchilov-ICLR17SGDR, ).
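The snapshot-ensembling idea can be sketched in a few lines: save a copy of the weights at the end of every cosine-annealing restart cycle and average the member predictions at test time. The tiny model, random data, and cycle lengths below are illustrative placeholders, not the paper's setup.

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
# Warm restarts: the end of each cosine cycle yields one "snapshot" ensemble member.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=15, T_mult=2)

x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
snapshots = []
restart_ends = {14, 44, 104}  # last epoch of each cycle (15, 30, and 60 epochs)
for epoch in range(105):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
    if epoch in restart_ends:  # store one snapshot per restart
        snapshots.append(copy.deepcopy(model.state_dict()))

def ensemble_predict(x_test):
    """Average the softmax predictions of all snapshots at test time."""
    probs = []
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            probs.append(torch.softmax(model(x_test), dim=1))
    return torch.stack(probs).mean(dim=0)
```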

Structural and Linearization: In terms of structural regularization, ResNet adds skip connections across layers (7780459, ), while the Inception model computes latent representations by aggregating diverse convolutional filter sizes (szegedy2017inception, ). A recent trend adds a dosage of linearization to deep models, where skip connections transfer embeddings from previous less non-linear layers (7780459, ; huang2017densely, ). Along similar lines, the Shake-Shake regularization deploys skip connections in parallel convolutional blocks and aggregates the parallel representations through affine combinations (DBLP:conf/iclr/Gastaldi17, ), while Shake-Drop extends this mechanism to a larger number of CNN architectures (yamada2018shakedrop, ).
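As an example of such structural regularization in the feed-forward setting, a skip connection around a fully connected block can be written as follows (a generic sketch, not the paper's exact block design).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A fully connected block with an identity skip connection."""
    def __init__(self, width: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        # The shortcut preserves a more linear path from input to output.
        return torch.relu(x + self.body(x))

block = ResidualBlock(512)
out = block(torch.randn(8, 512))  # shape preserved: (8, 512)
```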

Implicit: The last family of regularizers broadly encapsulates methods that do not directly propose novel regularization techniques but have an implicit regularization effect as a virtue of their ‘modus operandi’ (NIPS2019_8960, ). For instance, Batch Normalization improves generalization by reducing the internal covariate shift (pmlr-v37-ioffe15, ), while early stopping of the optimization procedure also yields a similar generalization effect (Yao2007, ). On the other hand, stabilizing the convergence of the training routine is another implicit regularization, for instance by introducing learning rate scheduling schemes (loshchilov-ICLR17SGDR, ). The recent strategy of stochastic weight averaging relies on averaging parameter values from the local optima encountered along the sequence of optimization steps (izmailov2018averaging, ), while another approach conducts updates in the direction of a few ‘lookahead’ steps (DBLP:conf/nips/ZhangLBH19, ).
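For instance, stochastic weight averaging is available directly in PyTorch; the following sketch (with placeholder data and schedule) averages the weights visited late in training.

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
swa_model = AveragedModel(model)               # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.01)  # learning rate used during the averaging phase

x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
loader = [(x, y)]
for epoch in range(100):
    for xb, yb in loader:
        optimizer.zero_grad()
        nn.functional.cross_entropy(model(xb), yb).backward()
        optimizer.step()
    if epoch >= 75:                            # start averaging after 75% of the budget
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)                   # recompute BatchNorm statistics, if any
```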

4 Regularization Cocktails for Multilayer Perceptrons

4.1 Problem Definition

A training set $D^{\text{train}} = \{(x_i, y_i)\}_{i=1}^{N}$ is composed of features $x_i$ and targets $y_i$, while the test dataset is denoted by $D^{\text{test}}$. A parametrized function $f(x; w)$, i.e., a neural network with parameters $w$, approximates the targets as $\hat{y} = f(x; w)$, where the parameters $w$ are trained to minimize a differentiable loss function $\mathcal{L}$ as $w^{*} \in \arg\min_{w} \mathcal{L}\left(f\left(x^{\text{train}}; w\right), y^{\text{train}}\right)$. To generalize, i.e., to also minimize $\mathcal{L}\left(f\left(x^{\text{test}}; w\right), y^{\text{test}}\right)$, the parameters of $f$ are controlled with a regularization technique $\Omega$ that avoids overfitting to the peculiarities of the training data. With a slight abuse of notation we denote $f\left(x; w^{(\lambda)}\right)$ to be the predictions of the model whose parameters $w^{(\lambda)}$ are optimized under the regime of the regularization method $\Omega(\cdot; \lambda)$, where $\lambda$ represents the hyperparameters of $\Omega$. The training data is further divided into two subsets as training and validation splits, the latter denoted by $D^{\text{val}}$, such that $\lambda$ can be tuned on the validation loss via the following hyperparameter optimization objective:

$$\lambda^{*} \in \arg\min_{\lambda \in \Lambda} \; \mathcal{L}\left(f\left(x^{\text{val}}; w^{(\lambda)}\right), y^{\text{val}}\right), \quad \text{s.t.} \quad w^{(\lambda)} \in \arg\min_{w} \; \mathcal{L}\left(f\left(x^{\text{train}}; w\right), y^{\text{train}}\right) + \Omega(w; \lambda) \qquad (1)$$

After finding the optimal (or in practice at least a well-performing) configuration $\lambda^{*}$, we re-fit $f$ on the entire training dataset, i.e., on $x^{\text{train}} \cup x^{\text{val}}$ and $y^{\text{train}} \cup y^{\text{val}}$.

While the search for optimal hyperparameters $\lambda$ is an active field of research in the realm of AutoML (automl_book, ), the choice of the regularizer itself still mostly remains an ad-hoc practice, where practitioners select a few combinations among popular regularizers (Dropout, L2, Batch Normalization, etc.). In contrast to prior studies, we hypothesize that the optimal regularizer is a cocktail mixture of a large set of regularization methods, all being simultaneously applied with different strengths (i.e., dataset-specific hyperparameters). Given a set of $K$ regularizers $\{\Omega_k(\cdot; \lambda_k)\}_{k=1}^{K}$, each with its own hyperparameters $\lambda_k \in \Lambda_k$, the problem of finding the optimal cocktail of regularizers is:

$$\lambda_{1}^{*}, \dots, \lambda_{K}^{*} \in \arg\min_{\lambda_1 \in \Lambda_1, \dots, \lambda_K \in \Lambda_K} \; \mathcal{L}\left(f\left(x^{\text{val}}; w^{(\lambda_{1:K})}\right), y^{\text{val}}\right), \quad \text{s.t.} \quad w^{(\lambda_{1:K})} \in \arg\min_{w} \; \mathcal{L}\left(f\left(x^{\text{train}}; w\right), y^{\text{train}}\right) + \sum_{k=1}^{K} \Omega_k(w; \lambda_k) \qquad (2)$$

The intuitive interpretation of Equation 2 is searching for the optimal hyperparameters (i.e., strengths) $\lambda_1, \dots, \lambda_K$ of the cocktail’s regularizers using the validation set, given that the optimal prediction model parameters $w^{(\lambda_{1:K})}$ are trained under the regime of all the regularizers being applied jointly. We stress that, for each regularizer, the hyperparameters $\lambda_k$ include a conditional hyperparameter controlling whether the $k$-th regularizer is applied at all or skipped. The best cocktail might therefore comprise only a subset of the regularizers.
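Operationally, Equations 1 and 2 amount to a nested search: an outer loop proposes which regularizers are active and how strong they are, an inner training run fits the network under those regularizers jointly, and the validation loss scores the proposal. The toy sketch below makes this control flow explicit with scikit-learn; it uses only two stand-in regularizers (L2 and early stopping) and random search instead of BOHB, so it is an illustration of the bilevel structure, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train = X[:400], y[:400]
X_val, y_val = X[400:500], y[400:500]

def sample_config(rng):
    # Toy "cocktail": activation flags plus strengths for two stand-in
    # regularizers (the paper searches over 13 regularizers, see Table 1).
    return {
        "alpha": 10 ** rng.uniform(-6, -1) if rng.random() < 0.5 else 0.0,  # L2 on/off
        "early_stopping": bool(rng.random() < 0.5),                          # ES on/off
    }

best_cfg, best_err = None, np.inf
for _ in range(20):                                   # outer level (Equation 2)
    cfg = sample_config(rng)
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, **cfg)
    model.fit(X_train, y_train)                       # inner level (training constraint)
    err = 1 - balanced_accuracy_score(y_val, model.predict(X_val))
    if err < best_err:
        best_cfg, best_err = cfg, err

# Re-fit the best cocktail on the union of training and validation data.
final = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, **best_cfg).fit(X[:500], y[:500])
```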

4.2 Cocktail Search Space

To build our regularization cocktails we combine the 13 regularization methods listed in Table 1, which are selected among the categories of regularizers covered in Section 3. The regularization cocktail’s search space with the exact ranges for the selected regularizers’ hyperparameters is given in the same table. In total, the optimal cocktail is searched in a space of 19 hyperparameters.

While we can in principle use any hyperparameter optimization method, we decided to use the multi-fidelity Bayesian optimization method BOHB (falkner-icml-18, ) since it achieves strong performance across a wide range of computing budgets by combining Hyperband (10.5555/3122009.3242042, ) and Bayesian Optimization (DBLP:journals/jgo/Mockus94, ) and still has the convergence guarantees of Hyperband. Furthermore, BOHB can deal with the categorical hyperparameters we use for enabling or disabling regularization techniques and the corresponding conditional structures. In Appendix A we provide a brief description of how BOHB works. Some of the regularization methods cannot be combined, and we, therefore, introduce the following constraints to the proposed search space: (i) Shake-Shake and Shake-Drop are not simultaneously active since the latter builds on the former; (ii) Only one data augmentation technique out of Mix-Up, Cut-Mix, Cut-Out, and FGSM adversarial learning can be active at once due to a technical limitation of the base library we use (DBLP:journals/corr/abs-2006-13799, ).

Group Regularizer Hyperparameter Type Range Conditionality
Implicit BN BN-active Boolean
SWA SWA-active Boolean -
LA LA-active Boolean
Step size Continuous LA-active
Num. steps Integer LA-active
W. Decay WD WD-active Boolean
Decay factor Continuous WD-active
M. Averaging DO DO-active Boolean
Dropout shape Nominal DO-active
Drop rate Continuous DO-active
SE SE-active Boolean -
Structural SC SC-active Boolean
MB choice Nominal SC-active
SD Max. probability Continuous
SS - - -
Augmentation Augment Nominal
MU Mix. magnitude Continuous
CM Probability Continuous
CO Probability Continuous
Patch ratio Continuous
AT - - -
Table 1: The configuration space for the regularization cocktail regarding the explicit regularization hyperparameters of the methods and the conditional constraints enabling or disabling them. (BN: Batch Normalization, SWA: Stochastic Weight Averaging, LA: Lookahead Optimizer, WD: Weight Decay, DO: Dropout, SE: Snapshot Ensembles, SC: Skip Connection, MB: Multi-branch choice, SD: Shake-Drop, SS: Shake-Shake, MU: Mix-Up, CM: Cut-Mix, CO: Cut-Out, and AT: FGSM Adversarial Learning)
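A small ConfigSpace snippet illustrates how such a space can be encoded: Boolean activation flags per regularizer, conditional child hyperparameters, and a forbidden clause for the Shake-Shake/Shake-Drop exclusion described above. ConfigSpace is the library BOHB's implementation builds on; the hyperparameter names follow the acronyms of Table 1, but the numeric ranges shown are illustrative placeholders, and the exact API spelling varies across ConfigSpace versions.

```python
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import (
    CategoricalHyperparameter, UniformFloatHyperparameter, UniformIntegerHyperparameter,
)
from ConfigSpace.conditions import EqualsCondition
from ConfigSpace.forbidden import ForbiddenAndConjunction, ForbiddenEqualsClause

cs = ConfigurationSpace(seed=1)

# Activation flags: one conditional "use it or skip it" switch per regularizer.
use_wd = CategoricalHyperparameter("WD-active", [True, False])
use_la = CategoricalHyperparameter("LA-active", [True, False])
use_ss = CategoricalHyperparameter("SS-active", [True, False])
use_sd = CategoricalHyperparameter("SD-active", [True, False])

# Child hyperparameters, only active when the parent flag is True
# (ranges here are placeholders, not the ranges of Table 1).
wd_factor = UniformFloatHyperparameter("WD:decay_factor", 1e-5, 1e-1, log=True)
la_steps = UniformIntegerHyperparameter("LA:num_steps", 1, 10)

cs.add_hyperparameters([use_wd, use_la, use_ss, use_sd, wd_factor, la_steps])
cs.add_conditions([
    EqualsCondition(wd_factor, use_wd, True),
    EqualsCondition(la_steps, use_la, True),
])
# Constraint (i): Shake-Shake and Shake-Drop must not be active simultaneously.
cs.add_forbidden_clause(ForbiddenAndConjunction(
    ForbiddenEqualsClause(use_ss, True), ForbiddenEqualsClause(use_sd, True),
))

print(cs.sample_configuration())
```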

5 Experimental Protocol

5.1 Experimental Setup and Datasets

We use a large collection of 40 tabular datasets (listed in Table 7 of Appendix D). This includes 31 datasets from the recent open-source OpenML AutoML Benchmark (amlb2019, ); the remaining 8 datasets from that benchmark were too large to run effectively on our cluster. In addition, we added 9 popular datasets from UCI (asuncion2007uci, ) and Kaggle that contain roughly 100K+ instances. Our resulting benchmark of 40 datasets includes tabular datasets that represent diverse classification problems, containing between 452 and 416 188 data points, and between 4 and 2 001 features, varying in terms of the number of numerical and categorical features. The datasets are retrieved from the OpenML repository (vanschoren2014openml, ) and split as training, validation, and testing sets. The data is standardized to have zero mean and unit variance, where the statistics for the standardization are calculated on the training split.
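For reference, standardization with statistics computed only on the training split can be done as follows with scikit-learn (a generic sketch with placeholder arrays).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(100, 5)
X_val, X_test = np.random.rand(20, 5), np.random.rand(20, 5)

scaler = StandardScaler().fit(X_train)   # mean/variance estimated on the training split only
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)          # validation and test reuse the training statistics
X_test = scaler.transform(X_test)
```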

We ran all experiments on a CPU cluster, each node of which contains two Intel Xeon E5-2630v4 at 2.2GHz with 20 CPU cores and a total memory of 128GB. We chose the PyTorch library (paszke2019pytorch, ) as a deep learning framework and extended the AutoDL framework Auto-PyTorch (mendoza-automlbook18a, ; DBLP:journals/corr/abs-2006-13799, ) with our implementations for the regularizers of Table 1.

To optimally utilize resources, we ran BOHB with 10 workers in parallel, where each worker had access to 2 CPU cores and 12GB of memory, executing one configuration at a time. Taking into account the dimensions of the considered configuration spaces, we ran BOHB for at most 4 days or a maximum number of hyperparameter configurations, whichever came first. During the training phase, each configuration was run for 105 epochs, in accordance with the cosine learning rate annealing with restarts (described in the following subsection). For the sake of studying the effect on more datasets, we only evaluated a single train-val-test split. After the training phase is completed, we report the results of the best hyperparameter configuration found, retrained on the joint train and validation set.

5.2 Fixed Architecture and Optimization Hyperparameters

In order to focus exclusively on investigating the effect of regularization, we fix the neural architecture to a simple multilayer perceptron (MLP) and also fix some hyperparameters of the general training procedure. These fixed hyperparameter values, as specified in Table 3 of Appendix B.1, have been tuned for maximizing the performance of an unregularized neural network on our dataset collection (see Table 7 in Appendix D). We use a 9-layer feed-forward neural network with 512 units for each layer, a choice motivated by previous work (orhan2017skip, ).

Moreover, we set a low learning rate after performing a grid search for finding the best value across datasets. We use AdamW (loshchilov2018decoupled, ), which implements decoupled weight decay, and cosine annealing with restarts (loshchilov-ICLR17SGDR, ) as a learning rate scheduler. Using a learning rate scheduler with restarts helps in our case because we keep a fixed initial learning rate. For the restarts, we use an initial budget of 15 epochs, with a budget multiplier of 2, following published practices (DBLP:journals/corr/abs-2006-13799, ). Additionally, since our benchmark includes imbalanced datasets, we use a weighted version of categorical cross-entropy as the loss and balanced accuracy (brodersen2010balanced, ) as the evaluation metric.
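Putting these fixed training choices together, a minimal PyTorch sketch of the pipeline looks as follows; the data, output dimensionality, and learning rate are placeholders (the exact learning rate is tuned as described above).

```python
import torch
import torch.nn as nn
from sklearn.metrics import balanced_accuracy_score

# 9-layer feed-forward network with 512 units per hidden layer, plus an output head.
layers, in_dim = [], 20
for _ in range(9):
    layers += [nn.Linear(in_dim, 512), nn.ReLU()]
    in_dim = 512
model = nn.Sequential(*layers, nn.Linear(512, 2))

x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))

# Class-weighted cross-entropy for imbalanced datasets.
class_counts = torch.bincount(y).float()
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)   # placeholder learning rate
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=15, T_mult=2)

for epoch in range(105):                  # 15 + 30 + 60 epochs = three cosine cycles
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()

with torch.no_grad():
    preds = model(x).argmax(dim=1)
print(balanced_accuracy_score(y.numpy(), preds.numpy()))
```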

5.3 Research Hypotheses and Associated Experiments

Hypothesis 1:

Regularization cocktails outperform state-of-the-art deep learning architectures on tabular datasets.

Experiment 1:

We compare our well-regularized MLPs against the recently proposed deep learning architectures Node (Popov2020Neural, ) and TabNet (49698, ). We also compare against AutoGluon Tabular (DBLP:journals/corr/abs-2003-06505, ) and add an unregularized version of our MLP for reference, as well as a version of our MLP regularized with Dropout (where the dropout hyperparameters are tuned on every dataset).

Hypothesis 2:

Regularization cocktails outperform Gradient-Boosted Decision Trees, as the most commonly used traditional ML method for tabular data.

Experiment 2:

We compare against state-of-the-art classifiers for tabular data. In particular, we compare against Gradient Boosted Decision Trees (GBDT), the de-facto state-of-the-art in tabular datasets. We use two different implementations of GBDT: an implementation from scikit-learn (scikit-learn, ), optimized by Auto-sklearn (autosklearn, ), and the popular XGBoost (Chen:2016:XST:2939672.2939785, ).

5.4 Experimental Setup for the Baselines

All baselines use the same train, validation, and test splits, the same seed, and the same HPO resources and constraints as for our automatically constructed regularization cocktails (4 days on 20 CPU cores with 128GB of memory). After finding the best incumbent configuration, the baselines are refitted on the union of the training and validation sets and evaluated on the test set. The baselines consist of two recent neural architectures, AutoGluon Tabular with neural networks, and two implementations of GBDT, as follows:

TabNet:

This library does not provide an HPO algorithm by default; therefore, we also used BOHB for this search space, with the hyperparameter value ranges recommended by the authors (49698, ).

Node:

This library does not offer an HPO algorithm by default. We performed a grid search among the hyperparameter value ranges as proposed by the authors (Popov2020Neural, ); however, we faced multiple memory and runtime issues in running the code. To overcome these issues, we used the default hyperparameters the authors used in their public implementation (https://github.com/Qwicen/node/blob/master/).

AutoGluon Tabular:

This library constructs stacked ensembles with bagging among diverse neural network architectures having various kinds of regularization (DBLP:journals/corr/abs-2003-06505, ). The training of the stacking ensemble of neural networks and its hyperparameter tuning are integrated into the library. While AutoGluon Tabular by default uses a broad range of traditional ML techniques, here, in order to study it as a “pure” deep learning method, we restrict it to only use neural networks as base learners.

ASK-GBDT:

The GBDT implementation of scikit-learn offered by Auto-sklearn (autosklearn, ) uses SMAC for HPO, and we used the default hyperparameter search space given by the library.

XGBoost:

The original library (Chen:2016:XST:2939672.2939785, ) does not incorporate an HPO algorithm by default, so we used BOHB for its HPO. We defined a search space for XGBoost’s hyperparameters following best practices from the community; we describe it in Appendix B.2.

For in-depth details about the different baseline configurations with the exact hyperparameter search spaces, please refer to Appendix B.2.

Dataset | #Ins./#Feat. | MLP | MLP+D | XGB. | ASK-G. | TabN. | Node | AutoGL. | MLP+C
anneal | 898 / 39 | 84.131 | 86.916 | 85.416 | 90.000 | 84.248 | 20.000 | 80.000 | 89.270
kr-vs-kp | 3196 / 37 | 99.701 | 99.850 | 99.850 | 99.850 | 93.250 | 97.264 | 99.687 | 99.850
arrhythmia | 452 / 280 | 37.991 | 38.704 | 48.779 | 46.850 | 43.562 | N/A | 48.934 | 61.461
mfeat. | 2000 / 217 | 97.750 | 98.000 | 98.000 | 97.500 | 97.250 | 97.250 | 98.000 | 98.000
credit-g | 1000 / 21 | 69.405 | 68.095 | 68.929 | 71.191 | 61.190 | 73.095 | 69.643 | 74.643
vehicle | 846 / 19 | 83.766 | 82.603 | 74.973 | 80.165 | 79.654 | 75.541 | 83.793 | 82.576
kc1 | 2109 / 22 | 70.274 | 72.980 | 66.846 | 63.353 | 52.517 | 55.803 | 67.270 | 74.381
adult | 48842 / 15 | 76.893 | 78.520 | 79.824 | 79.830 | 77.155 | 78.168 | 80.557 | 82.443
walking. | 149332 / 5 | 60.997 | 63.754 | 61.616 | 62.764 | 56.801 | N/A | 60.800 | 63.923
phoneme | 5404 / 6 | 87.514 | 88.387 | 87.972 | 88.341 | 86.824 | 82.720 | 83.943 | 86.619
skin-seg. | 245057 / 4 | 99.971 | 99.962 | 99.968 | 99.967 | 99.961 | N/A | 99.973 | 99.953
ldpa | 164860 / 8 | 62.831 | 67.035 | 99.008 | 68.947 | 54.815 | N/A | 53.023 | 68.107
nomao | 34465 / 119 | 95.917 | 96.232 | 96.872 | 97.217 | 95.425 | 96.217 | 96.420 | 96.826
cnae | 1080 / 857 | 87.500 | 90.741 | 94.907 | 93.519 | 89.352 | 96.759 | 92.593 | 95.833
blood. | 748 / 5 | 67.836 | 68.421 | 62.281 | 64.985 | 64.327 | 50.000 | 67.251 | 67.617
bank. | 45211 / 17 | 78.076 | 83.145 | 72.658 | 72.283 | 70.639 | 74.607 | 79.483 | 85.993
connect. | 67557 / 43 | 73.627 | 76.345 | 72.374 | 72.645 | 72.045 | N/A | 75.622 | 80.073
shuttle | 58000 / 10 | 99.475 | 99.892 | 98.563 | 98.571 | 88.017 | 42.805 | 83.433 | 99.948
higgs | 98050 / 29 | 67.752 | 66.873 | 72.944 | 72.926 | 72.036 | N/A | 73.798 | 73.546
australian | 690 / 15 | 86.268 | 86.268 | 89.717 | 88.589 | 85.278 | 83.468 | 88.248 | 87.088
car | 1728 / 7 | 97.442 | 99.690 | 92.376 | 100.000 | 98.701 | 46.119 | 99.675 | 99.587
segment | 2310 / 20 | 94.805 | 94.589 | 93.723 | 93.074 | 91.775 | 90.043 | 91.991 | 93.723
fashion. | 70000 / 785 | 90.464 | 90.507 | 91.243 | 90.457 | 89.793 | N/A | 91.336 | 91.950
jungle. | 44819 / 7 | 97.061 | 97.237 | 87.325 | 83.070 | 73.425 | N/A | 93.017 | 97.471
numerai | 96320 / 22 | 50.262 | 50.301 | 52.363 | 52.421 | 51.599 | 52.364 | 51.706 | 52.668
devnagari | 92000 / 1025 | 96.125 | 97.000 | 93.310 | 77.897 | 94.179 | N/A | 97.734 | 98.370
helena | 65196 / 28 | 16.836 | 23.983 | 21.994 | 21.144 | 19.032 | N/A | 27.115 | 27.701
jannis | 83733 / 55 | 51.505 | 55.118 | 55.225 | 55.593 | 56.214 | N/A | 58.526 | 65.287
volkert | 58310 / 181 | 65.081 | 66.996 | 64.170 | 63.428 | 59.409 | N/A | 70.195 | 71.667
miniboone | 130064 / 51 | 90.639 | 94.099 | 94.024 | 94.137 | 62.173 | N/A | 94.978 | 94.015
apsfailure | 76000 / 171 | 87.759 | 91.194 | 88.825 | 91.797 | 51.444 | N/A | 88.890 | 92.535
christine | 5418 / 1637 | 70.941 | 70.756 | 74.815 | 74.447 | 69.649 | 73.247 | 74.170 | 74.262
dilbert | 10000 / 2001 | 96.930 | 96.733 | 99.106 | 98.704 | 97.608 | N/A | 98.758 | 99.049
fabert | 8237 / 801 | 63.707 | 64.814 | 70.098 | 70.120 | 62.277 | 66.097 | 68.142 | 69.183
jasmine | 2984 / 145 | 78.048 | 76.211 | 80.546 | 78.878 | 76.690 | 80.053 | 80.046 | 79.217
sylvine | 5124 / 21 | 93.070 | 93.363 | 95.509 | 95.119 | 83.595 | 93.852 | 93.753 | 94.045
dionis | 416188 / 61 | 91.905 | 92.724 | 91.222 | 74.620 | 83.960 | N/A | 94.127 | 94.010
aloi | 108000 / 129 | 92.331 | 93.852 | 95.338 | 13.534 | 93.589 | N/A | 97.423 | 97.175
ccfraud | 284807 / 31 | 50.000 | 50.000 | 90.303 | 92.514 | 85.705 | N/A | 91.831 | 92.531
clickpred. | 399482 / 12 | 63.125 | 64.367 | 58.361 | 58.201 | 50.163 | N/A | 54.410 | 64.280
Wins/Losses/Ties (MLP+C vs.) | | 35/5/0 | 30/8/2 | 26/12/2 | 29/11/0 | 38/2/0 | 19/2/0 | 30/9/1 | -
Wilcoxon p-value (MLP+C vs.) | | | | | | | | | -

Table 2: Comparison of well-regularized MLPs vs. rivals in terms of balanced accuracy. N/A values indicate a failure due to exceeding the cluster’s memory (24GB per process) or runtime limits (4 days). The acronyms stand for MLP+D: MLP with Dropout, XGB.: XGBoost, ASK-G.: GBDT by Auto-sklearn, AutoGL.: Autogluon, TabN.: TabNet and MLP+C: our MLP regularized by cocktails.

6 Experimental Results

Figure 1: Comparison of our proposed dataset-specific cocktail (MLP+C) against the top three baselines. Each dot in the plot represents a dataset, the y-axis our method’s errors and the x-axis the baselines’ errors.

Table 2 presents the comparative results of our MLPs regularized with the proposed regularization cocktails against seven baselines: two state-of-the-art architectures, AutoGluon Tabular with neural networks, two Gradient-Boosted Decision Tree (GBDT) implementations, as well as two reference MLPs (unregularized and regularized only with Dropout). It is worth re-emphasizing that the hyperparameters of all the presented baselines (except the unregularized MLP, which has no hyperparameters) are carefully tuned on a validation set as detailed in Section 5 and the appendices referenced therein. The table entries represent the test sets’ balanced accuracies achieved over the described large-scale collection of 40 datasets. Figure 1 visualizes these results, showing substantial improvements for our method.

(a) CD of MLP+C vs. neural networks
(b) CD of MLP+C vs. GBDT
(c) CD of MLP+C vs. all baselines
Figure 2: Critical difference diagrams with a Wilcoxon significance analysis on 40 datasets. Connected ranks via a bold bar indicate that performances are not significantly different.

To assess the statistical significance, we analyze the ranks of the classification accuracies across the 40 datasets. We use the Critical Difference (CD) diagram of the ranks based on the Wilcoxon significance test, a standard method for comparing classifiers across multiple datasets (10.5555/1248547.1248548, ). The overall empirical comparison of the elaborated methods is given in Figure 2. The analysis of neural network baselines in Subplot 1(a) reveals a clear statistical significance of the regularization cocktails against the other methods. Apart from AutoGluon, the other neural architectures are not competitive even against an MLP regularized only with Dropout and optimized with our standard, fixed training pipeline of Adam with cosine annealing. To be even fairer to the weaker baselines (TabNet and Node), we tried boosting them by adding early stopping (indicated with "+ES"), but their rank did not improve. Overall, the large-scale experimental analysis validates Hypothesis 1 in Section 5.3: well-regularized simple deep MLPs outperform specialized neural architectures.
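The per-pair significance behind these diagrams can be computed with a Wilcoxon signed-rank test over the paired per-dataset accuracies, e.g. (with placeholder values):

```python
import numpy as np
from scipy.stats import wilcoxon

# Balanced accuracies of two methods on the same datasets (placeholder values).
acc_cocktail = np.array([89.3, 99.8, 61.5, 98.1, 74.6])
acc_baseline = np.array([85.4, 99.9, 48.8, 98.0, 68.9])

stat, p_value = wilcoxon(acc_cocktail, acc_baseline)
print(f"Wilcoxon statistic={stat:.3f}, p-value={p_value:.4f}")
```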

Next, we analyze the empirical significance of our well-regularized MLPs against the GBDT implementations in Figure 1(b). The results show that our MLPs outperform both GBDT variants (XGBoost and auto-sklearn) with a statistically significant margin. We added early stopping ("+ES") to XGBoost, but it did not improve its performance. Among the GBDT implementations, XGBoost has a non-significant margin over auto-sklearn. We conclude that well-regularized simple deep MLPs outperform GBDT, which validates Hypothesis 2 in Section 5.3.

The final cumulative comparison in Figure 1(c) provides a further result: none of the specialized previous deep learning methods (TabNet, NODE, AutoGluon Tabular) outperforms GBDT significantly. To the best of our knowledge, this paper is therefore the first to demonstrate that neural networks beat GBDT with a statistically significant margin over a large-scale experimental protocol that conducts a thorough hyperparameter optimization for all methods.

Lastly, Figure 3 provides a further analysis on the most prominent regularizers of the MLP cocktails, based on the frequency of regularization methods that our HPO procedure selected for each dataset’s cocktail. In the left plot, we show the frequent individual regularizers, while in the right plot the frequencies are grouped by types of regularizers. The grouping reveals that a cocktail for each dataset often has at least one ingredient from every regularization family (detailed in Section 3), highlighting the need for jointly applying diverse regularization methods.

Figure 3: Left: Cocktail ingredients occurring in at least 30% of the datasets. Right: Clustered histogram (union of member occurrences) with the acronyms from Table 1. Implicit: {BN, LA, SWA}, M. Averaging: {DO, SE}, Structural: {SC, SS, SD}, D. Augmentation: {MU, CM, CO, AT}.

7 Conclusion

Summary. Focusing on the important domain of tabular datasets, this paper studied improvements to deep learning (DL) by better regularization techniques. We presented regularization cocktails, per-dataset-optimized combinations of many regularization techniques, and demonstrated that these improve the performance of even simple neural networks enough to substantially and significantly surpass XGBoost, the current state-of-the-art method for tabular datasets. We conducted a large-scale experiment involving 13 regularization methods and 40 datasets and empirically showed that (i) modern DL regularization methods developed in the context of raw data (e.g., vision, speech, text) substantially improve the performance of deep neural networks on tabular data; (ii) regularization cocktails significantly outperform recent neural network architectures; and, most importantly, (iii) regularization cocktails outperform GBDT on tabular datasets.

Limitations.

Compared to traditional machine learning methods, such as XGBoost, fitting deep neural networks is slow, and our regularization cocktails require per-dataset hyperparameter optimization on top. Therefore, in many data science applications, practitioners may currently still prefer the cheaper, albeit less accurate, traditional methods. To comprehensively study basic principles, we have also chosen an empirical evaluation that has many limitations. We only studied classification, not regression. We only used somewhat balanced datasets (the ratio of the minority class and the majority class is above 0.05). We did not study the regime of extremely few data points (our smallest data set contained 452 data points, our largest 416 188 data points). We also did not study datasets with extreme outliers, missing labels, semi-supervised data, streaming data, and many more modalities in which tabular data arises. An important point worth noting is that the recent neural network architectures (Section 5.4) could also benefit from our regularization cocktails; however, integrating the regularizers into the baseline libraries requires considerable coding effort.

Future Work. This work opens up the door for a wealth of exciting follow-up research. Firstly, the per-dataset optimization of regularization cocktails may be substantially sped up by using meta-learning across datasets (metalearning_vanschoren, ). Secondly, as we have used a fixed neural architecture, our method’s performance may be further improved by using joint architecture and hyperparameter optimization. Thirdly, regularization cocktails should also be tested under all the data modalities under “Limitations” above. In addition, it is interesting to validate the gain of integrating our well-regularized MLPs into modern AutoML libraries, by combining them with enhanced feature preprocessing and ensembling.

Take-away. Even simple neural networks can achieve competitive classification accuracies on tabular datasets when they are well regularized, using dataset-specific regularization cocktails found via standard hyperparameter optimization.

Acknowledgements. The authors acknowledge funding by the Robert Bosch GmbH and Eva Mayr-Stihl foundation. A part of this work was supported by the German Federal Ministry of Education and Research (BMBF, grant RenormalizedFlows 01IS19077C). The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG.

References

  • (1) S. Arik and T. Pfister. Tabnet: Attentive interpretable tabular learning. In AAAI Conference on Artificial Intelligence, 2021.
  • (2) S. Arora, N. Cohen, W. Hu, and Y. Luo. Implicit regularization in deep matrix factorization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alche Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 7413–7424. Curran Associates, Inc., 2019.
  • (3) A. Asuncion and D. Newman. Uci machine learning repository, 2007.
  • (4) K. H. Brodersen, C. S. Ong, K. E. Stephan, and J. M. Buhmann. The balanced accuracy and its posterior distribution. In 2010 20th International Conference on Pattern Recognition, pages 3121–3124. IEEE, 2010.
  • (5) T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.
  • (6) E. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. Le. Autoaugment: Learning augmentation strategies from data. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • (7) J. Demšar. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res., 7:1–30, December 2006.
  • (8) T. Devries and G. Taylor. Improved regularization of convolutional neural networks with cutout. ArXiv, abs/1708.04552, 2017.
  • (9) N. Erickson, J. Mueller, A. Shirkov, H. Zhang, P. Larroy, M. Li, and A. Smola. Autogluon-tabular: Robust and accurate automl for structured data. CoRR, abs/2003.06505, 2020.
  • (10) S. Falkner, A. Klein, and F. Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), pages 1436–1445, July 2018.
  • (11) M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, page 2755–2763. MIT Press, 2015.
  • (12) X. Gastaldi. Shake-shake regularization of 3-branch residual networks. In 5th International Conference on Learning Representations, ICLR. OpenReview.net, 2017.
  • (13) P. Gijsbers, E. LeDell, S. Poirier, J. Thomas, B. Bischl, and J. Vanschoren. An open source automl benchmark. arXiv preprint arXiv:1907.00909 [cs.LG], 2019. Accepted at AutoML Workshop at ICML 2019.
  • (14) I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR, 2015.
  • (15) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
  • (16) D. Hendrycks, N. Mu, E. Cubuk, B. Zoph, J. Gilmer, and B. Lakshminarayanan. Augmix: A simple method to improve robustness and uncertainty under data shift. In International Conference on Learning Representations, 2020.
  • (17) G. Huang, Y. Li, G. Pleiss, Z. Liu, J. Hopcroft, and K. Weinberger. Snapshot Ensembles: Train 1, Get M for Free. International Conference on Learning Representations, November 2017.
  • (18) G. Huang, Z. Liu, L. van der Maaten, and K. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • (19) F. Hutter, L. Kotthoff, and J. Vanschoren, editors. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019. In press, available at http://automl.org/book.
  • (20) S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In F. Bach and D. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 448–456. PMLR, 07–09 Jul 2015.
  • (21) P. Izmailov, D. Podoprikhin, T. Garipov, D. Vetrov, and A. Wilson. Averaging weights leads to wider optima and better generalization. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI, pages 876–885. AUAI Press, 2018.
  • (22) L. Katzir, G. Elidan, and R. El-Yaniv. Net-{dnf}: Effective deep modeling of tabular data. In International Conference on Learning Representations, 2021.
  • (23) D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
  • (24) D. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS’15, page 2575–2583. MIT Press, 2015.
  • (25) C. Lee, K. Cho, and W. Kang. Mixout: Effective regularization to finetune large-scale pretrained language models. In International Conference on Learning Representations, 2020.
  • (26) L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res., 18(1):6765–6816, January 2017.
  • (27) S. Lim, I. Kim, T. Kim, C. Kim, and S. Kim. Fast autoaugment. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alche-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 6665–6675. Curran Associates, Inc., 2019.
  • (28) I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR) 2017 Conference Track, April 2017.
  • (29) I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
  • (30) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  • (31) H. Mendoza, A. Klein, M. Feurer, J. Tobias Springenberg, M. Urban, M. Burkart, M. Dippel, M. Lindauer, and F. Hutter. Towards automatically-tuned deep neural networks. In F. Hutter, L. Kotthoff, and J. Vanschoren, editors, AutoML: Methods, Systems, Challenges, chapter 7, pages 141–156. Springer, December 2019.
  • (32) J. Mockus. Application of bayesian approach to numerical methods of global and stochastic optimization. J. Glob. Optim., 4(4):347–365, 1994.
  • (33) A. Emin Orhan and X. Pitkow. Skip connections eliminate singularities. arXiv preprint arXiv:1701.09175, 2017.
  • (34) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8026–8037, 2019.
  • (35) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • (36) R. Polikar. Ensemble Learning. In C. Zhang and Y. Ma, editors, Ensemble Machine Learning: Methods and Applications, pages 1–34. Springer US, 2012.
  • (37) S. Popov, S. Morozov, and A. Babenko. Neural oblivious decision ensembles for deep learning on tabular data. In International Conference on Learning Representations, 2020.
  • (38) I. Shavitt and E. Segal. Regularization learning networks: Deep learning for tabular datasets. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, page 1386–1396. Curran Associates Inc., 2018.
  • (39) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014.
  • (40) C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-first AAAI Conference on Artificial Intelligence, 2017.
  • (41) R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288, 1996.
  • (42) A. Tikhonov. On the stability of inverse problems. In Doklady Akademii Nauk SSSR, 1943.
  • (43) J. Vanschoren. Meta-learning. In F. Hutter, L. Kotthoff, and J. Vanschoren, editors, Automated Machine Learning - Methods, Systems, Challenges, The Springer Series on Challenges in Machine Learning, pages 35–61. Springer, 2019.
  • (44) J. Vanschoren, J. Van Rijn, B. Bischl, and L. Torgo. Openml: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49–60, 2014.
  • (45) Y. Yamada, M. Iwamura, and K. Kise. Shakedrop regularization, 2018.
  • (46) Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, August 2007.
  • (47) S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision (ICCV), 2019.
  • (48) H. Zhang, M. Cisse, Y. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
  • (49) M. Zhang, J. Lucas, J. Ba, and G. Hinton. Lookahead optimizer: k steps forward, 1 step back. In H. Wallach, H. Larochelle, A. Beygelzimer, F. Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, pages 9593–9604, 2019.
  • (50) L. Zimmer, M. Lindauer, and F. Hutter. Auto-pytorch tabular: Multi-fidelity metalearning for efficient and robust autodl. IEEE TPAMI, 2021. IEEE Early Access.
  • (51) H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the royal statistical society: series B (statistical methodology), 67(2):301–320, 2005.

Appendix A Description of BOHB

BOHB [10] is a hyperparameter optimization algorithm that extends Hyperband [26] by sampling from a model instead of sampling randomly from the hyperparameter search space.

Initially, BOHB performs random search and favors exploration. As it iterates and gets more observations, it builds models over different fidelities and trades off exploration with exploitation to avoid converging to bad regions of the search space. BOHB samples from the model of the highest fidelity with a certain probability and samples at random otherwise. A model is built for a fidelity only when enough observations exist for that fidelity; by default, this limit is set to $d + 1$ observations, where $d$ is the dimensionality of the search space.
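For orientation, a minimal BOHB run with the hpbandster library looks roughly as follows; the worker objective, budgets, and one-dimensional search space are placeholders, not the setup used in our experiments.

```python
import ConfigSpace as CS
import hpbandster.core.nameserver as hpns
from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB

class MyWorker(Worker):
    def compute(self, config, budget, **kwargs):
        # Placeholder objective: in practice, train for `budget` epochs with
        # `config` and return the validation loss to be minimized.
        val_loss = (config["lr"] - 0.001) ** 2 + 1.0 / budget
        return {"loss": val_loss, "info": {}}

cs = CS.ConfigurationSpace()
cs.add_hyperparameter(CS.UniformFloatHyperparameter("lr", 1e-5, 1e-1, log=True))

ns = hpns.NameServer(run_id="demo", host="127.0.0.1", port=None)
ns.start()
worker = MyWorker(nameserver="127.0.0.1", run_id="demo")
worker.run(background=True)

bohb = BOHB(configspace=cs, run_id="demo", nameserver="127.0.0.1",
            min_budget=15, max_budget=105)
result = bohb.run(n_iterations=4)
bohb.shutdown(shutdown_workers=True)
ns.shutdown()

print(result.get_incumbent_id())
```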

Appendix B Configuration Spaces

b.1 Method implicit search space

Category Hyperparameter Type Range
Cosine Annealing Iterations multiplier Continuous
Max. iterations Integer
Network Activation Nominal
Bias initialization Nominal
Blocks in a group Integer
Embeddings Nominal
Number of groups Integer
Resnet shape Nominal
Type Nominal
Units in a layer Integer
Preprocessing Preprocessor Nominal
Training Batch size Integer
Imputation Nominal
Initialization method Nominal
Learning rate Continuous
Loss module Nominal
Normalization strategy Nominal
Optimizer Nominal
Scheduler Nominal
Seed Integer
Table 3: The configuration space of the training and model architecture hyperparameters. All these hyperparameters only have one value in their range, meaning they are fixed.

Table 3 presents the network architecture and the training pipeline choices used in all our experiments for the individual regularizers and for the regularization cocktails.

b.2 Benchmark search space

For Experiment 3, we set up the search space and the individual configurations of the state-of-the-art competitors used for the comparison as follows:

Auto-Sklearn.

The estimator is restricted to only include GBDT, for the sake of fully comparing against the algorithm as a baseline. We do not activate any preprocessing, since our regularization cocktails do not make use of preprocessing algorithms in the pipeline. The time budget is always selected based on the time it took BOHB to find the hyperparameter configuration with the best validation accuracy from the start of the hyperparameter optimization phase. The ensemble size is kept to one, since our method only uses models from one training run, not multiple ones. The seed is set to the same value as in the experiments with the regularization cocktails, to obtain the same data splits. To keep the comparison fair, there is no warm start for the initial configurations with meta-learning, since our method also does not make use of meta-learning. Lastly, the number of workers in parallel is set to 10, to match the parallel resources that were given to the experiment with the regularization cocktails. The search space of the hyperparameters is left to the default search space offered by Auto-Sklearn, which is shown in Table 4.

Hyperparameter Type Range
Nominal
Continuous
Continuous
Integer
Integer
Integer
Continuous
Table 4: The search space of the training and model hyperparameters for the gradient boosting estimator of the Auto-Sklearn tool.

XGBoost.

To have a well-performing configuration space for XGBoost, we augmented the default configuration space previously used in Auto-sklearn (https://github.com/automl/auto-sklearn/blob/v.0.4.2/autosklearn/pipeline/components/classification/xgradient_boosting.py) with further recommended hyperparameters and ranges from Amazon (https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html); we reduced the size of some ranges since the ranges given at that website were too broad and resulted in poor performance. In Table 5 we present a refined version of the configuration space that achieves a better performance on the benchmark. We would like to note that we did not apply one-hot encoding to the categorical features for this experiment, since we observed better overall results when the categorical features were label encoded.

Hyperparameter Type Range Log scale
Continuous
Continuous
Continuous
Integer -
Continuous
Continuous -
Continuous -
Continuous -
Integer -
Integer -
Continuous
Continuous -
Table 5: The search space of the training and model hyperparameters for the gradient boosting algorithm from the XGBoost library.
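As an illustration of the kind of configuration such a search produces, the snippet below instantiates XGBoost with hyperparameters commonly tuned in this fashion; the concrete values are placeholders, not a configuration or range taken from Table 5.

```python
from xgboost import XGBClassifier

# One sampled configuration over commonly tuned XGBoost hyperparameters
# (illustrative values only).
model = XGBClassifier(
    n_estimators=500,
    learning_rate=0.05,       # eta
    max_depth=8,
    min_child_weight=2,
    subsample=0.8,            # row subsampling per tree
    colsample_bytree=0.8,     # feature subsampling per tree
    gamma=0.1,                # minimum loss reduction required to split
    reg_alpha=1e-3,           # L1 regularization on leaf weights
    reg_lambda=1.0,           # L2 regularization on leaf weights
)
# model.fit(X_train, y_train) would then be scored on the validation split.
```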

TabNet.

For the search space of the TabNet model, we used the default hyperparameter ranges suggested by the authors which were found to perform the best in their experiments.

Hyperparameter Type Values
Integer
Continuous
Continuous
Integer
Continuous
Integer
Integer
Continuous
Integer
Continuous
Table 6: The search space of the training and model hyperparameters for TabNet.

For our experiments with the TabNet and XGBoost models, we also used BOHB for hyperparameter tuning, using the same parallel resources and limiting conditions as for our regularization cocktails.

In the above search spaces for the experiments with the XGBoost and TabNet models, we have not included early stopping, although we did run experiments where both models had early stopping activated. For both models, the results were not better than their counterparts without early stopping. Lastly, for both experiments, we imputed the missing values with the most frequent strategy. The reason behind our choice was that the implementation we used did not accept the median strategy for categorical value imputation.
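The preprocessing described above corresponds to standard scikit-learn components, e.g. (with placeholder data):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder

X_cat = np.array([["a", "x"], ["b", np.nan], [np.nan, "x"]], dtype=object)

# Impute missing values with the most frequent value per column,
# then label-encode the categorical features (instead of one-hot encoding).
X_imputed = SimpleImputer(strategy="most_frequent").fit_transform(X_cat)
X_encoded = OrdinalEncoder().fit_transform(X_imputed)
```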

AutoGluon.

The library is configured to construct stacked ensembles with bagging among diverse neural network architectures having various kinds of regularization, aiming to achieve the best predictive accuracy. Furthermore, we used the same seed as for our MLPs with regularization cocktails to obtain the same dataset splits. We allowed AutoGluon to make use of early stopping and, additionally, we allowed feature preprocessing, since different feature preprocessing techniques are embedded in different model types, to allow for better overall performance. For all the other training and hyperparameter settings, we use the library’s defaults (we used version 0.2.0 of the autogluon library, following the explicit recommendation of the authors on the efficacy of their proposed stacking without needing any HPO [9]).

Node.

For our experiments with NODE, we used the official implementation (https://github.com/Qwicen/node). In our initial experiment iterations, we used the search space that is proposed by the authors [37]. However, evaluating the proposed search space is infeasible, since the memory and run-time requirements of the experiments are very high and cannot be satisfied within our cluster constraints. The high run-time and memory issues are noted by the authors in the official implementation.

To alleviate these problems, we used the default configuration suggested by the authors in their examples. Lastly, we used the same seed as for our experiments with the regularization cocktails to obtain the same data splits.

Appendix C Plots

c.1 Regularization Cocktail Performance

Figure 4: Pairwise statistical significance and comparison. For every entry, the first row showcases the wins, draws and losses of the horizontal method with the vertical method on all datasets, calculated on the test set; the second row presents the p-value for the statistical significance test.

To investigate the performance of our formulation, we compare plain MLPs regularized with only one individual regularization technique at a time against the dataset-specific regularization cocktails. The hyperparameters for all methods are tuned on the validation set and the best configuration is refitted on the full training set. In Figure 4, we present the results of each pairwise comparison. The results presented are calculated on the test set after the refit phase is completed on the best hyperparameter configuration. The p-value is generated by performing a Wilcoxon signed-rank test. As can be seen from the results, the regularization cocktail is the only method that has statistically significant improvements compared to all other methods (with a significant p-value in all cases). The detailed results for all methods on every dataset are shown in Table 9.

c.2 Dataset-dependent optimal cocktails

To verify the necessity for dataset-specific regularization cocktails, we initially investigate the best-found hyperparameter configurations to observe the occurrences of individual regularization techniques. In Figure 5, we present the occurrences of every regularization method over all datasets. The occurrences are calculated by analyzing the best-found hyperparameter configuration for each dataset and observing the number of times the regularization method was chosen to be activated by BOHB. As can be seen from Figure 5 there is no regularization method or combination that is always chosen for every dataset.

Additionally, we compare our regularization cocktails against the top-5 frequently chosen regularization techniques and the top-5 best performing regularization techniques. For the top-5 baselines, the regularization techniques are activated and their hyperparameters are tuned on the validation set. The results of the comparison as shown in Table 8 show that the cocktail outperforms both top-5 variants, indicating the need for dataset-specific regularization cocktails.

Figure 5: Frequency of the regularization techniques. The occurrences of the individual regularization techniques in the best hyperparameter configurations found by the cocktail across 40 datasets.

c.3 Learning rate as a hyperparameter

In the majority of our experiments, we keep a fixed initial learning rate to investigate in detail the effect of the individual regularization techniques and the regularization cocktails. The learning rate is set to a fixed value that achieves the best results across the chosen benchmark of datasets. To investigate the role and importance of the learning rate in the regularization cocktail performance, we perform an additional experiment where the learning rate is a hyperparameter that is optimized individually for every dataset. The results, as shown in Table 10, indicate that regularization cocktails with a dynamic learning rate outperform the regularization cocktails with a fixed learning rate in 21 out of 40 datasets, tie in 1, and lose in 18. However, the results are not statistically significant and do not indicate a clear region where the dynamic learning rate helps.

Appendix D Tables

In Table 7 we provide information about the datasets that are considered in our experiments. Concretely, we provide descriptive statistics and the identifiers for every dataset. The identifier (the task id) can be used to download the datasets from OpenML (http://www.openml.org).

Task Id | Dataset Name | Number of Instances | Number of Features | Majority Class Percentage | Minority Class Percentage
233090 | anneal | 898 | 39 | 76.17 | 0.89
233091 | kr-vs-kp | 3196 | 37 | 52.22 | 47.78
233092 | arrhythmia | 452 | 280 | 54.20 | 0.44
233093 | mfeat-factors | 2000 | 217 | 10.00 | 10.00
233088 | credit-g | 1000 | 21 | 70.00 | 30.00
233094 | vehicle | 846 | 19 | 25.77 | 23.52
233096 | kc1 | 2109 | 22 | 84.54 | 15.46
233099 | adult | 48842 | 15 | 76.07 | 23.93
233102 | walking-activity | 149332 | 5 | 14.73 | 0.61
233103 | phoneme | 5404 | 6 | 70.65 | 29.35
233104 | skin-segmentation | 245057 | 4 | 79.25 | 20.75
233106 | ldpa | 164860 | 8 | 33.05 | 0.84
233107 | nomao | 34465 | 119 | 71.44 | 28.56
233108 | cnae-9 | 1080 | 857 | 11.11 | 11.11
233109 | blood-transfusion | 748 | 5 | 76.20 | 23.80
233110 | bank-marketing | 45211 | 17 | 88.30 | 11.70
233112 | connect-4 | 67557 | 43 | 65.83 | 9.55
233113 | shuttle | 58000 | 10 | 78.60 | 0.02
233114 | higgs | 98050 | 29 | 52.86 | 47.14
233115 | Australian | 690 | 15 | 55.51 | 44.49
233116 | car | 1728 | 7 | 70.02 | 3.76
233117 | segment | 2310 | 20 | 14.29 | 14.29
233118 | Fashion-MNIST | 70000 | 785 | 10.00 | 10.00
233119 | Jungle-Chess-2pcs | 44819 | 7 | 51.46 | 9.67
233120 | numerai28.6 | 96320 | 22 | 50.52 | 49.48
233121 | Devnagari-Script | 92000 | 1025 | 2.17 | 2.17
233122 | helena | 65196 | 28 | 6.14 | 0.17
233123 | jannis | 83733 | 55 | 46.01 | 2.01
233124 | volkert | 58310 | 181 | 21.96 | 2.33
233126 | MiniBooNE | 130064 | 51 | 71.94 | 28.06
233130 | APSFailure | 76000 | 171 | 98.19 | 1.81
233131 | christine | 5418 | 1637 | 50.00 | 50.00
233132 | dilbert | 10000 | 2001 | 20.49 | 19.13
233133 | fabert | 8237 | 801 | 23.39 | 6.09
233134 | jasmine | 2984 | 145 | 50.00 | 50.00
233135 | sylvine | 5124 | 21 | 50.00 | 50.00
233137 | dionis | 416188 | 61 | 0.59 | 0.21
233142 | aloi | 108000 | 129 | 0.10 | 0.10
233143 | C.C.FraudD. | 284807 | 31 | 99.83 | 0.17
233146 | Click prediction | 399482 | 12 | 83.21 | 16.79

Table 7: Datasets. The collection of datasets used in our experiments, combined with detailed information for each dataset.

Table 8 shows the results of the comparison between the Regularization Cocktail and the two Top-5 baselines. All results are calculated on the test set, after retraining with the best dataset-specific hyperparameter configuration.

Task Id | Cocktail | Top-5 F | Top-5 R
233090 | 89.27 | 89.71 | 88.54
233091 | 99.85 | 99.85 | 98.20
233092 | 61.46 | 59.94 | 57.21
233093 | 98.00 | 98.75 | 98.75
233088 | 74.64 | 71.43 | 74.76
233094 | 82.58 | 82.01 | 80.33
233096 | 74.38 | 78.03 | 73.96
233099 | 82.44 | 82.35 | 82.24
233102 | 63.92 | 62.21 | 54.10
233103 | 86.62 | 85.90 | 82.33
233104 | 99.95 | 99.96 | 99.85
233106 | 68.11 | 68.81 | 55.45
233107 | 96.83 | 96.67 | 96.59
233108 | 95.83 | 95.83 | 95.83
233109 | 67.62 | 67.32 | 68.20
233110 | 85.99 | 86.35 | 86.06
233112 | 80.07 | 79.57 | 77.49
233113 | 99.95 | 97.95 | 85.34
233114 | 73.55 | 73.25 | 72.06
233115 | 87.09 | 88.11 | 87.60
233116 | 99.59 | 100.00 | 98.20
233117 | 93.72 | 93.94 | 90.69
233118 | 91.95 | 91.83 | 91.59
233119 | 97.47 | 92.66 | 85.53
233120 | 52.67 | 52.49 | 51.70
233121 | 98.37 | 98.41 | 96.93
233122 | 27.70 | 28.82 | 28.09
233123 | 65.29 | 65.13 | 62.11
233124 | 71.67 | 70.87 | 66.06
233126 | 94.02 | 88.13 | 93.16
233130 | 92.53 | 96.24 | 95.89
233131 | 74.26 | 71.86 | 74.63
233132 | 99.05 | 98.95 | 98.55
233133 | 69.18 | 68.75 | 69.03
233134 | 79.22 | 78.21 | 77.71
233135 | 94.05 | 94.43 | 93.95
233137 | 94.01 | 94.33 | 92.43
233142 | 97.17 | 97.06 | 96.06
233146 | 64.28 | 64.53 | 63.28
233143 | 92.53 | 92.13 | 92.59

Table 8: Top-5 baselines. The test set performance for the Regularization Cocktail against the Top-5 Most Frequent (Top-5 F) and the Top-5 Highest Ranks (Top-5 R) baselines.

Table 9 provides the results of all our experiments for the baseline, the individual regularization methods, and the regularization cocktail. All results are calculated on the test set, after retraining with the best-found hyperparameter configurations. The evaluation metric is balanced accuracy.
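
Balanced accuracy is the unweighted mean of the per-class recalls, so minority classes count as much as majority classes. A minimal scikit-learn sketch with toy labels (the tables report the score in percent):

```python
from sklearn.metrics import balanced_accuracy_score

# Class 0 recall = 3/4, class 1 recall = 1/2, so the balanced accuracy
# is (0.75 + 0.5) / 2 = 0.625, i.e. 62.5 when expressed in percent.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]
print(100 * balanced_accuracy_score(y_true, y_pred))  # 62.5
```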

Task Id | PN | BN | LA | SE | SWA | SC | AT | SS | SD | MU | CO | CM | WD | DO | Cocktail
233090 | 84.13 | 86.78 | 83.99 | 86.48 | 87.96 | 87.21 | 86.92 | 84.28 | 87.21 | 89.27 | 85.60 | 86.77 | 87.06 | 86.92 | 89.27
233091 | 99.70 | 99.85 | 99.70 | 99.70 | 99.55 | 100.00 | 99.85 | 99.85 | 99.69 | 99.85 | 99.55 | 99.85 | 99.85 | 99.85 | 99.85
233092 | 37.99 | 41.91 | 36.14 | 37.31 | 25.94 | 53.42 | 38.79 | 55.61 | 53.26 | 42.19 | 32.48 | 42.22 | 35.76 | 38.70 | 61.46
233093 | 97.75 | 98.50 | 96.00 | 97.75 | 69.25 | 98.25 | 97.25 | 97.25 | 98.25 | 98.00 | 98.00 | 97.75 | 98.00 | 98.00 | 98.00
233088 | 69.40 | 68.69 | 70.83 | 69.76 | 69.40 | 66.43 | 69.29 | 66.43 | 67.14 | 70.00 | 70.36 | 64.29 | 69.29 | 68.10 | 74.64
233094 | 83.77 | 83.17 | 84.36 | 84.39 | 83.36 | 80.82 | 83.17 | 83.20 | 81.98 | 83.77 | 81.47 | 78.65 | 83.20 | 82.60 | 82.58
233096 | 70.27 | 66.56 | 71.95 | 76.43 | 75.44 | 77.40 | 71.95 | 65.31 | 78.31 | 72.43 | 76.84 | 74.94 | 67.33 | 72.98 | 74.38
233099 | 76.89 | 77.92 | 75.95 | 78.23 | 76.38 | 78.38 | 76.75 | 75.56 | 78.61 | 78.67 | 82.56 | 82.23 | 76.99 | 78.52 | 82.44
233102 | 61.00 | 62.89 | 61.32 | 63.57 | 56.67 | 60.79 | 59.99 | 43.04 | 60.77 | 61.95 | 63.30 | 63.49 | 64.03 | 63.75 | 63.92
233103 | 87.51 | 87.02 | 88.25 | 87.03 | 87.22 | 85.90 | 87.99 | 87.64 | 85.90 | 87.12 | 87.26 | 86.59 | 86.74 | 88.39 | 86.62
233104 | 99.97 | 99.96 | 99.96 | 99.94 | 2.57 | 99.97 | 99.95 | 92.77 | 99.97 | 99.95 | 99.96 | 99.97 | 99.96 | 99.96 | 99.95
233106 | 62.83 | 68.90 | 62.46 | 65.70 | 62.16 | 61.85 | 61.89 | 44.63 | 62.05 | 66.29 | 65.43 | 64.99 | 66.50 | 67.04 | 68.11
233107 | 95.92 | 95.93 | 96.01 | 96.36 | 95.23 | 95.76 | 95.77 | 95.37 | 96.22 | 96.52 | 96.10 | 96.55 | 95.98 | 96.23 | 96.83
233108 | 87.50 | 91.20 | 85.65 | 87.96 | 50.00 | 93.98 | 92.59 | 94.91 | 94.44 | 94.44 | 93.06 | 95.37 | 91.67 | 90.74 | 95.83
233109 | 67.84 | 73.68 | 66.52 | 68.20 | 66.45 | 65.20 | 66.89 | 66.74 | 67.03 | 68.64 | 67.32 | 70.18 | 66.23 | 68.42 | 67.62
233110 | 78.08 | 72.58 | 72.70 | 83.40 | 66.93 | 72.74 | 74.12 | 70.16 | 74.76 | 74.09 | 85.71 | 85.76 | 72.34 | 83.14 | 85.99
233112 | 73.63 | 74.68 | 73.37 | 74.33 | 77.36 | 73.86 | 72.91 | 72.06 | 74.35 | 72.08 | 76.23 | 75.74 | 72.48 | 76.35 | 80.07
233113 | 99.47 | 99.89 | 99.92 | 99.87 | 55.86 | 98.11 | 99.46 | 90.60 | 98.11 | 99.94 | 99.92 | 99.91 | 99.88 | 99.89 | 99.95
233114 | 67.75 | 68.90 | 68.81 | 69.11 | 67.36 | 68.08 | 67.44 | 67.70 | 68.56 | 68.59 | 71.93 | 73.13 | 67.80 | 66.87 | 73.55
233115 | 86.27 | 85.79 | 88.73 | 86.44 | 87.26 | 87.74 | 88.39 | 87.74 | 88.39 | 88.73 | 88.25 | 88.90 | 87.91 | 86.27 | 87.09
233116 | 97.44 | 100.00 | 96.79 | 97.44 | 87.35 | 99.47 | 99.14 | 97.46 | 99.69 | 99.37 | 97.64 | 99.04 | 97.44 | 99.69 | 99.59
233117 | 94.81 | 92.86 | 93.51 | 93.51 | 90.48 | 93.72 | 92.86 | 92.64 | 93.72 | 93.51 | 93.07 | 93.72 | 93.94 | 94.59 | 93.72
233118 | 90.46 | 90.86 | 90.73 | 90.75 | 81.72 | 89.91 | 90.69 | 86.69 | 90.06 | 91.11 | 91.09 | 91.88 | 90.70 | 90.51 | 91.95
233119 | 97.06 | 93.76 | 97.79 | 96.08 | 92.15 | 87.83 | 97.16 | 87.08 | 87.68 | 98.14 | 96.50 | 97.51 | 97.33 | 97.24 | 97.47
233120 | 50.26 | 50.95 | 51.29 | 50.50 | 51.63 | 50.92 | 50.17 | 50.23 | 51.00 | 50.72 | 52.35 | 52.10 | 50.41 | 50.30 | 52.67
233121 | 96.12 | 97.83 | 96.45 | 96.74 | 92.40 | 95.31 | 96.34 | 91.38 | 95.15 | 97.52 | 97.88 | 97.80 | 96.88 | 97.00 | 98.37
233122 | 16.84 | 22.26 | 17.20 | 19.65 | 20.90 | 24.53 | 16.77 | 18.71 | 24.35 | 23.62 | 23.43 | 24.10 | 17.52 | 23.98 | 27.70
233123 | 51.51 | 51.74 | 50.86 | 53.16 | 56.11 | 53.58 | 49.65 | 49.88 | 51.94 | 51.22 | 60.98 | 61.67 | 51.13 | 55.12 | 65.29
233124 | 65.08 | 66.82 | 65.57 | 66.56 | 66.15 | 57.71 | 65.26 | 64.97 | 58.04 | 67.24 | 70.03 | 68.84 | 66.86 | 67.00 | 71.67
233126 | 90.64 | 58.17 | 90.42 | 92.94 | 92.60 | 93.99 | 90.45 | 88.55 | 93.98 | 93.58 | 93.86 | 93.87 | 92.97 | 94.10 | 94.02
233130 | 87.76 | 87.81 | 88.98 | 88.99 | 70.72 | 87.99 | 50.00 | 85.25 | 88.35 | 92.43 | 50.00 | 95.81 | 94.92 | 91.19 | 92.53
233131 | 70.94 | 69.28 | 71.59 | 70.94 | 71.31 | 72.14 | 71.59 | 71.59 | 72.32 | 70.94 | 72.69 | 72.42 | 70.76 | 70.76 | 74.26
233132 | 96.93 | 98.62 | 97.52 | 97.14 | 94.58 | 96.85 | 97.00 | 97.27 | 96.90 | 98.66 | 98.14 | 99.15 | 96.81 | 96.73 | 99.05
233133 | 63.71 | 65.11 | 65.00 | 66.05 | 64.57 | 66.21 | 62.82 | 64.33 | 65.98 | 68.75 | 66.58 | 66.28 | 64.36 | 64.81 | 69.18
233134 | 78.05 | 75.87 | 79.05 | 78.22 | 80.38 | 78.38 | 76.88 | 78.38 | 78.38 | 76.88 | 77.38 | 76.54 | 76.88 | 76.21 | 79.22
233135 | 93.07 | 92.49 | 92.10 | 93.17 | 93.17 | 92.10 | 93.17 | 93.27 | 92.10 | 92.58 | 92.68 | 94.53 | 93.75 | 93.36 | 94.05
233137 | 91.91 | 93.71 | 92.16 | 92.56 | 90.38 | 91.58 | 91.36 | 88.09 | 91.60 | 92.72 | 92.48 | 92.39 | 92.95 | 92.72 | 94.01
233142 | 92.33 | 96.70 | 92.90 | 92.35 | 63.59 | 95.47 | 91.43 | 93.60 | 95.56 | 93.47 | 93.81 | 93.25 | 92.60 | 93.85 | 97.17
233143 | 50.00 | 92.30 | 92.76 | 50.00 | 70.81 | 90.28 | 50.00 | 50.31 | 89.26 | 50.00 | 50.00 | 50.00 | 92.26 | 50.00 | 92.53
233146 | 63.12 | 60.06 | 62.79 | 64.16 | 63.39 | 64.42 | 63.52 | 54.64 | 64.21 | 64.26 | 64.05 | 64.57 | 64.41 | 64.37 | 64.28

Table 9: Detailed Table of Results. The test set performance for the plain network (PN), the individual regularization methods, and the regularization cocktail.

Additionally, Table 10 provides the results of the regularization cocktails with a fixed learning rate and with the learning rate treated as a hyperparameter optimized for every dataset.

Task Id | Fixed LR Cocktail | Dynamic LR Cocktail
233090 | 89.270 | 90.000
233091 | 99.850 | 99.850
233092 | 61.461 | 56.518
233093 | 98.000 | 98.250
233088 | 74.643 | 64.881
233094 | 82.576 | 79.654
233096 | 74.381 | 70.058
233099 | 82.443 | 82.551
233102 | 63.923 | 63.884
233103 | 86.619 | 87.854
233104 | 99.953 | 99.967
233106 | 68.107 | 69.081
233107 | 96.826 | 96.446
233108 | 95.833 | 95.370
233109 | 67.617 | 67.836
233110 | 85.993 | 86.596
233112 | 80.073 | 78.985
233113 | 99.948 | 83.263
233114 | 73.546 | 73.276
233115 | 87.088 | 88.077
233116 | 99.587 | 99.690
233117 | 93.723 | 93.939
233118 | 91.950 | 91.964
233119 | 97.471 | 98.039
233120 | 52.668 | 52.204
233121 | 98.370 | 98.522
233122 | 27.701 | 28.008
233123 | 65.287 | 63.293
233124 | 71.667 | 72.243
233126 | 94.015 | 93.930
233130 | 92.535 | 94.894
233131 | 74.262 | 72.140
233132 | 99.049 | 99.404
233133 | 69.183 | 68.877
233134 | 79.217 | 78.887
233135 | 94.045 | 94.435
233137 | 94.010 | 93.961
233142 | 97.175 | 97.106
233143 | 92.531 | 92.592
233146 | 64.280 | 64.362

Table 10: The test set performance of the regularization cocktails with a fixed initial learning rate and with a dynamic learning rate chosen by BOHB.