1 Introduction
Since its inception, machine learning has moved from inserting expert knowledge as explicit inductive bias toward general-purpose methods that learn these biases from data, a trend significantly accelerated by the advent of deep learning (Krizhevsky et al., 2012). This trend is pronounced in areas such as computer vision (He et al., 2016b), speech recognition (Chan et al., 2016) and computational biology (Senior et al., 2020). In computer vision, for example, tools such as deformable part models (Felzenszwalb et al., 2009), SIFT features (Lowe, 1999) and Gabor filters (Mehrotra et al., 1992) have all been replaced with deep convolutional architectures. Crucially, in certain cases, these general-purpose models learn biases similar to the ones present in the traditional, expert-designed tools. Gabor filters in convolutional nets are a prime example of this phenomenon: convolutional networks learn Gabor-like filters in their first layer (Zeiler and Fergus, 2014). However, simply replacing the first convolutional layer by a Gabor filter worsens performance, showing that the learned parameters differ from the Gabor filter in a helpful way. A natural analogy can then be drawn between Gabor filters and the use of convolution itself: convolution is a ‘hand-designed’ bias which expresses local connectivity and weight sharing. Is it possible to learn these convolutional biases from scratch, the same way convolutional networks learn to express Gabor filters?
Reducing inductive bias requires more data, more computation, and larger models—consider replacing a convolutional network with a fully-connected one of the same expressive capacity. Therefore, it is important to reduce the bias in a way that does not damage efficiency significantly, keeping only the core bias that enables high performance. Is there a core inductive bias that gives rise to local connectivity and weight sharing when applied to images, enabling the success of convolutions? The answer to this question would enable the design of algorithms that can learn, e.g., to apply convolutions to images, and to apply more appropriate biases to other types of data.
Current research in architecture search exemplifies efforts to reduce inductive bias, though often without explicitly substituting the removed bias with a simpler one (Zoph and Le, 2017; Real et al., 2019; Tan and Le, 2019). However, searching without guidance makes the search computationally expensive. Consequently, current architecture search techniques are not able to find convolutional networks from scratch; instead, they take the convolution layer as a given building block and focus on learning the interactions between blocks.
1.1 Related Work
Previous work attempted to understand or narrow the gap between convolutional and fully-connected networks. Perhaps the most related work is Urban et al. (2017), titled “Do deep convolutional nets really need to be deep and convolutional?” (the abstract begins “Yes, they do.”). Urban et al. (2017) demonstrate empirically that even when trained with distillation techniques, fully-connected networks achieve subpar performance on CIFAR-10, with a best-reported accuracy of 74.3%. Our work, however, suggests a different view, with results that significantly bridge this gap between fully-connected and convolutional networks. In another attempt, Mocanu et al. (2018) proposed sparse evolutionary training, achieving 74.84% accuracy on CIFAR-10. Fernando et al. (2016) also propose an evolution-based approach, but they only evaluate it on the MNIST dataset. To the best of our knowledge, the highest accuracy achieved by fully-connected networks on CIFAR-10 is 78.62% (Lin et al., 2016), achieved through heavy data augmentation and pretraining with a zero-bias autoencoder.
In order to understand the inductive bias of convolutional networks, d’Ascoli et al. (2019) embedded convolutional networks in fully-connected architectures, finding that combinations of parameters expressing convolutions comprise regions which are difficult to arrive at using SGD. Other works have studied simplified versions of convolutions from a theoretical perspective (Gunasekar et al., 2018; Cohen and Shashua, 2017; Novak et al., 2019). Relatedly, motivated by compression, many recent works (Frankle and Carbin, 2019; Lee et al., 2019; Evci et al., 2020; Dettmers and Zettlemoyer, 2019) study sparse neural networks; studying their effectiveness in learning architectural bias from data would be an interesting direction.
Other recent related work shows that variants of transformers are capable of succeeding in vision tasks and learning locally connected patterns (Cordonnier et al., 2020; Chen et al., 2020). In order to do so, one needs to provide the pixel location as input, which enables the attention mechanism to learn locality. Furthermore, it is not clear that such a complex architecture is required in order to learn convolutions. Finally, in parallel work, Zhou et al. (2020) proposed a method for meta-learning symmetries from data and showed that it is possible to use such a method to meta-learn convolutional architectures from synthetic data.
Contributions:
Our contributions in this paper are as follows:

We introduce shallow (s-conv) and deep (d-conv) all-convolutional networks (Springenberg et al., 2015) with desirable properties for studying convolutions. Through systematic experiments on s-conv and d-conv and their locally-connected and fully-connected counterparts, we make several observations about the role of depth, local connectivity and weight sharing (Section 2):

Local connectivity appears to have the greatest influence on performance.

The main benefit of depth appears to be efficiency in terms of memory and computation. Consequently, training shallow architectures with many more parameters for a long time compensates for most of the performance lost due to lack of depth.

The benefit of depth diminishes even further without weight sharing.
Showing that MDL can be bounded by the number of parameters, we argue and demonstrate empirically that architecture families that need fewer parameters to fit a training set to a certain degree tend to generalize better in the overparameterized regime.
We prove an MDL-based generalization bound for architecture search which suggests that the sparsity of the found architecture has a great effect on generalization. Weight sharing, however, is only effective if it has a simple structure.
β-lasso achieves state-of-the-art results in training fully-connected networks on CIFAR-10, CIFAR-100 and SVHN tasks. The results are on par with the reported performance of multi-layer convolutional networks around the year 2013 (Hinton et al., 2012; Zeiler and Fergus, 2013). Moreover, unlike convolutional networks, these results are invariant to permuting pixels.
We show that the learned networks have fewer parameters than their locally-connected counterparts. By visualizing the filters, we observe that β-lasso has indeed learned local connectivity, but it has also learned to sample more sparsely in a local neighborhood to increase the receptive field while keeping the number of parameters low.
Somewhat tangential to the main goal of the paper, we trained ResNet18 with different kernel sizes using β-lasso and observed that for all kernel sizes, β-lasso improves over SGD on the CIFAR-10, CIFAR-100 and SVHN datasets.
Figure 1: The left panel shows the architectures. Each convolution or fully-connected layer (except the last one) has batch normalization followed by ReLU activation. The right panel indicates how the number of parameters in each architecture and its corresponding locally-connected and fully-connected versions scales with respect to the number of base channels (indicated in the left panel) and the image dimension. The dotted black line is the maximum model size for training using 16 bits on a V100 GPU.

2 Disentangling Depth, Weight Sharing and Local Connectivity
To study the inductive bias of convolutions, we take an approach similar to d’Ascoli et al. (2019), examining two larger hypothesis classes that encompass convolutional networks: locally-connected and fully-connected networks. Given a convolutional network, its corresponding locally-connected version features the same connectivity but eliminates weight sharing. The corresponding fully-connected network then adds connections between all nodes in adjacent layers. Functions expressible by fully-connected networks therefore encompass those of locally-connected networks, which in turn encompass those of convolutional networks.
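This containment can be made concrete in one dimension: a convolution is a fully-connected layer whose weight matrix is banded (local connectivity) with repeated entries along the band (weight sharing). The pure-Python sketch below is our own illustration with toy sizes; dropping the sharing yields a locally-connected layer, and dropping the banding yields a fully-connected one.

```python
def conv1d(x, k):
    """'Valid' 1-D convolution (cross-correlation) of signal x with kernel k."""
    n, m = len(x), len(k)
    return [sum(k[j] * x[i + j] for j in range(m)) for i in range(n - m + 1)]

def matvec(W, x):
    """Apply a fully-connected layer: plain matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def conv_as_matrix(k, n):
    """Embed kernel k as the banded, weight-shared matrix of a conv layer.

    Row i carries a shifted copy of k; zeros elsewhere express local
    connectivity, and the repeated copies of k express weight sharing.
    """
    m = len(k)
    return [[k[j - i] if 0 <= j - i < m else 0.0 for j in range(n)]
            for i in range(n - m + 1)]

x = [1.0, 2.0, -1.0, 0.5, 3.0]
k = [0.2, -0.5, 1.0]

# The convolution and its fully-connected embedding compute the same function.
assert conv1d(x, k) == matvec(conv_as_matrix(k, len(x)), x)
```

A locally-connected layer keeps the band structure of `conv_as_matrix` but lets each row's three entries differ; a fully-connected layer frees all entries.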
One challenge in studying the inductive bias of convolutions is that the existence of other components, such as pooling and residual connections, makes it difficult to isolate the effect of convolutions in modern architectures. One solution is to simply remove those components from current architectures. However, that would result in a considerable performance loss, since the other design choices of the architecture family were optimized together with those components. Alternatively, one can construct an all-convolutional network with the desired performance.
Springenberg et al. (2015) proposed several all-convolutional architectures with favorable performance. Unfortunately, these models cannot be used for studying convolution, since the fully-connected networks that can represent such architectures are too large. Another way to resolve this is by scaling down the number of parameters in a conventional architecture, which unfortunately degrades performance significantly. To this end, below we propose two all-convolutional networks that overcome the discussed issues.

2.1 Introducing d-conv and s-conv for Studying Convolutions
In this work, we propose d-conv and s-conv, two all-convolutional networks that perform relatively well on image classification tasks while also enjoying desirable scaling with respect to the number of channels and the input image size. As shown in the left panel of Figure 1, d-conv has 8 convolutional layers followed by two fully-connected layers, whereas s-conv has only one convolutional layer followed by two fully-connected layers. The right panel of Figure 1 shows how the number of parameters of these models and of their corresponding locally-connected (d-local, s-local) and fully-connected (d-fc, s-fc) networks scales with respect to the number of base channels (channels in the first convolutional layer) and the input image size. d-fc has the highest number of parameters given the same number of base channels, which means the largest d-conv that can be studied cannot have many base channels. On the other hand, s-fc has better scaling, which allows more base channels in the corresponding s-conv. Another interesting observation is that the number of parameters in fully-connected networks depends on the fourth power of the input image dimension (e.g. 32 for CIFAR-10), whereas the locally-connected and convolutional networks have only a quadratic dependency on the image dimension. Note that the quadratic dependency of d-conv and s-conv on the image size is due to the lack of global pooling.
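The quartic-versus-quadratic scaling can be sketched with simple first-layer parameter counts (bias terms ignored; the function names and sizes below are illustrative, not taken from the paper):

```python
# First-layer parameter counts for a d x d image with c_in input channels,
# c_out output channels and kernel size k, keeping spatial resolution.

def params_conv(d, c_in, c_out, k):
    return c_out * c_in * k * k            # one shared k x k kernel per channel pair

def params_local(d, c_in, c_out, k):
    return d * d * c_out * c_in * k * k    # a separate k x k kernel per output location

def params_fc(d, c_in, c_out):
    return (d * d * c_in) * (d * d * c_out)  # dense: every input pixel to every unit

# Doubling the image side leaves conv unchanged, multiplies the locally-
# connected count by 4 (quadratic in d), and the fully-connected count
# by 16 (quartic in d).
assert params_conv(64, 3, 64, 3) == params_conv(32, 3, 64, 3)
assert params_local(64, 3, 64, 3) == 4 * params_local(32, 3, 64, 3)
assert params_fc(64, 3, 64) == 16 * params_fc(32, 3, 64)
```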
Figure 2 compares the performance of d-conv and s-conv against ResNet18 (He et al., 2016a) and a 3-layer (2 hidden layers) fully-connected network with an equal number of hidden units in each hidden layer, denoted by 3-fc. On all tasks, the performance of d-conv is comparable to but slightly worse than ResNet18, which is expected. Moreover, s-conv performs significantly better than the fully-connected networks but worse than d-conv.
Table 1: Test accuracy (%) after training for 400 and 4000 epochs. “#param (M)” lists the parameter count, in millions, of the original model (orig.) and of the fully-connected network that embeds it (fc-emb.).

Model    | #param (M)       | CIFAR-10          | CIFAR-100         | SVHN
         | orig.    fc-emb. | 400 ep.  4000 ep. | 400 ep.  4000 ep. | 400 ep.  4000 ep.
d-conv   | 1.45     256     | 88.84    89.79    | 63.73    62.26    | 95.65    95.86
d-local  | 3.42     256     | 86.13    86.07    | 58.58    55.71    | 95.71    95.85
d-fc     | 256      256     | 64.78    63.62    | 36.61    35.51    | 92.52    90.97
s-conv   | 138      256     | 84.14    87.05    | 59.48    62.51    | 92.34    93.38
s-local  | 147      256     | 81.52    85.86    | 56.64    62.03    | 92.51    93.98
s-fc     | 256      256     | 72.77    78.63    | 47.72    51.43    | 88.64    91.80
3-fc     | 256      256     | 69.19    75.12    | 44.95    50.75    | 85.98    86.02
2.2 Empirical Investigation
In this section, we disentangle the effects of depth, weight sharing and local connectivity. Table 1 reports the test accuracy of d-conv, s-conv, their counterparts and 3-fc on three datasets. For each architecture, the number of base channels is chosen such that the corresponding fully-connected network has roughly 256 million parameters. First note that, under this constraint, the deep convolutional and locally-connected architectures have many fewer parameters than the others. Moreover, for both deep and shallow architectures, there is no considerable difference between the number of parameters of the convolutional and locally-connected networks. Please refer to Figure 1 for the scaling of each architecture. For each experiment, the results of training for 400 and 4000 epochs are reported. Several observations can be made from Table 1:
1. Importance of locality: For both deep and shallow architectures and across all three datasets, the gap between a locally-connected network and its corresponding fully-connected version is much larger than that between the convolutional and locally-connected networks. This suggests that the main benefit of convolutions comes from local connectivity.
2. Shallow architectures eventually catch up to deep ones (mostly): While training longer does not seem to improve the performance of deep architectures, it does so significantly for shallow architectures across all datasets. As a result, the gap between deep and shallow architectures shrinks considerably after training for 4000 epochs.
3. Without weight sharing, the benefit of depth disappears: s-fc outperforms d-fc in all experiments. Moreover, when training for 4000 epochs, neither d-local nor s-local has a clear advantage over the other.
4. Structure of the fully-connected network matters: s-fc outperforms 3-fc and d-fc in all experiments by a considerable margin. Even more interesting, s-fc and 3-fc have the same number of parameters and the same depth, but s-fc has many more hidden units in its first layer than 3-fc. Please see the appendix for the exact details.
The results in Table 1 suggest that s-conv performs comparably to d-conv, and that the gap between s-conv and s-local is negligible compared to the gap between s-local and s-fc. Therefore, in the rest of the paper, we try to bridge the gap between the performance of s-fc and s-local.
3 Minimum Description Length as a Guiding Principle
In this section, we take a short break from experiments and look at the minimum description length (MDL) as a way to explain the differences in the performance of architectures, and as a guiding principle for finding models that generalize well. Consider the supervised learning setup where, given an input $x \in \mathcal{X}$, we want to predict a label $y \in \mathcal{Y}$ under the common assumption that the pair $(x, y)$ is generated from a distribution $\mathcal{D}$. Let $S = \{(x_1, y_1), \dots, (x_m, y_m)\}$ be a training set sampled i.i.d. from $\mathcal{D}$ and let $\ell$ be a loss function. The task is then to learn a function/hypothesis $h: \mathcal{X} \to \mathcal{Y}$ with low population loss $L_{\mathcal{D}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}[\ell(h(x), y)]$, also known as the generalization error. In order to find the hypothesis $h$, we start by picking a hypothesis class $\mathcal{H}$ (e.g. linear classifiers, neural networks of a certain family, etc.) and a learning algorithm $\mathcal{A}$. Finally, $\mathcal{A}(S)$ returns a hypothesis $h \in \mathcal{H}$ which will be used for prediction. The learning algorithm usually finds a hypothesis among the ones that have low sample loss $L_S(h) = \frac{1}{m}\sum_{i=1}^m \ell(h(x_i), y_i)$. A fundamental question in learning concerns the generalization gap between the training error and the generalization error. One way to think about generalization is to think of a hypothesis as an explanation for the association of label $y$ to input $x$ (e.g. given a picture and the label “dog”, the hypothesis is an explanation of what makes this picture a picture of a dog). Occam’s razor provides an intuitive way of thinking about the generalization of a hypothesis (Shalev-Shwartz and Ben-David, 2014):
A short explanation tends to be more valid than a long explanation.
The above philosophical message can indeed be formalized as follows:
Theorem 1.
(Shalev-Shwartz and Ben-David, 2014, Theorem 7.7) Let $\mathcal{H}$ be a hypothesis class and $d: \mathcal{H} \to \{0,1\}^*$ be a prefix-free description language for $\mathcal{H}$. Then for any distribution $\mathcal{D}$, sample size $m$, and probability $\delta > 0$, with probability $1 - \delta$ over the choice of $S$ we have that for any $h \in \mathcal{H}$,

$$L_{\mathcal{D}}(h) \le L_S(h) + \sqrt{\frac{|d(h)| + \ln(2/\delta)}{2m}} \qquad (1)$$

where $|d(h)|$ is the length of $d(h)$.
The above theorem connects the generalization gap of a hypothesis to its description length under a prefix-free description language, i.e. one where for any two distinct $h, h' \in \mathcal{H}$, $d(h)$ is not a prefix of $d(h')$. Importantly, the description language must be chosen before observing the training set. The simplest prefix-free description language is the bit representation of the parameters: if a model has $n$ parameters, each stored in $b$ bits, then the total number of bits needed to describe the model is $nb$. This language is prefix-free because all hypotheses have the same description length. Under this language, the generalization gap is simply controlled by the number of parameters.
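The bits-per-parameter language makes the bound easy to evaluate numerically. The sketch below assumes the standard form of the Theorem 7.7 bound from Shalev-Shwartz and Ben-David (2014); all concrete sizes are illustrative, not taken from the paper's experiments.

```python
import math

def mdl_gap(desc_len_bits, m, delta):
    """Bound on the generalization gap L_D(h) - L_S(h) for a hypothesis
    described with desc_len_bits bits, given m samples and confidence delta."""
    return math.sqrt((desc_len_bits + math.log(2.0 / delta)) / (2.0 * m))

n, b = 10_000, 16          # parameters and bits per parameter (illustrative)
m, delta = 50_000, 0.01    # sample size and failure probability

gap = mdl_gap(n * b, m, delta)

# The bound shrinks when the description gets shorter or the sample grows,
# so under this language fewer parameters directly means a smaller gap.
assert mdl_gap(n * b // 2, m, delta) < gap
assert mdl_gap(n * b, 2 * m, delta) < gap
```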
Empirical investigations of the generalization of overparameterized models suggest that the number of parameters is a very loose upper bound on the capacity (Lawrence et al., 1998; Neyshabur et al., 2015; Zhang et al., 2017; Dziugaite and Roy, 2017; Neyshabur et al., 2017) (or, in MDL language, it is not a compact encoding). However, it is possible that the number of parameters is a compact encoding in the underparameterized regime, while some other encoding becomes optimal in the overparameterized regime. One empirical observation about many neural network architectures is that, with a well-chosen approach to scaling the number of parameters, one can expect that overparameterization does not make a superior architecture inferior. While this is not always the case, it might be enough to distinguish architectures that have very different inductive biases. For example, the left two panels of Figure 3 show that the performance plots of different architectures do not cross each other across the scale. This was also the central observation used in architecture search by Tan and Le (2019).
When the ordering of architectures by test performance at a given parameter count is the same for every parameter count (as in the left two panels of Figure 3), the performance in the underparameterized regime can be indicative of the performance in the overparameterized regime. Hence, if an architecture can fit the training data with fewer parameters, that most likely translates to superiority of the architecture in the overparameterized regime. The right panel of Figure 3 shows that this is indeed the case for all 4 architectures and 3 datasets studied in this work: ResNet18 requires the fewest parameters to fit any dataset, followed by d-conv, s-conv and 3-fc respectively, and this is the exact order of generalization performance in the overparameterized regime, where models have around 1 billion parameters.
MDLbased generalization bound for architecture search
Theorem 1 bounds the generalization gap of a hypothesis by its description length, and we discussed how the description length can be bounded by the number of parameters. What if the learning algorithm searches over many architectures and finds one with a small number of parameters? In this case, the active parameters can be taken to be the parameters with nonzero values, and parameter sharing can be modeled as parameters with the same value. Does the performance then depend only on the number of parameters of the final architecture? How does the search space come into the picture, and how does weight sharing affect the performance? Let $b$ be the number of bits used to represent each parameter (usually 16 or 32) and let $n$ be the maximum number of parameters allowed in the architectures considered by the search. Also, let $n' \le n$ be the number of nonzero parameters in the architecture found by the architecture search algorithm. The next theorem shows how the generalization gap can be bounded in this case.
Theorem 2.
Let $B$ be a $b$-bit representation of the real numbers ($|B| = 2^b$), let $\Pi$ be the set of all mappings from parameter indices $\{1, \dots, n\}$ to $B \cup \{0\}$ for any $n$, and let $d$ be a prefix-free description language for $\Pi$. For any distribution $\mathcal{D}$, sample size $m$, and $\delta > 0$, with probability $1 - \delta$ over the choice of $S$, for any parameterized function $f_\pi$ where $\pi \in \Pi$:

$$L_{\mathcal{D}}(f_\pi) \le L_S(f_\pi) + \sqrt{\frac{bn' + |d(\pi)| + \ln(2/\delta)}{2m}} \qquad (2)$$

where $|d(\pi)|$ is the length of $d(\pi)$ and $n' \le n$ is the number of nonzero parameters of $\pi$. Moreover, there exists a prefix-free language $d$ such that for any $\pi$, $|d(\pi)| = O(n' \log n)$.
The proof is given in the appendix. The above theorem shows that the capacity of the learned model is bounded by two terms: the number of bits needed to represent the parameters of the learned architecture ($bn'$) and the description length of the weight sharing ($|d(\pi)|$). This is rewarding for finding architectures with a small number of nonzero weights, because in that case the generalization gap would mostly depend on the number of nonzero parameters. In the case of weight sharing, however, the picture is very different. The bound suggests that the generalization gap depends on the description length of the weight sharing, which in the worst case, when there is no structure, could itself depend on the number of nonzero parameters. Otherwise, when the weight sharing is structured, it can be encoded more efficiently, and the generalization could potentially depend only on the number of parameters of the final model found by the architecture search. This suggests that simply encouraging parameter values to be close to each other does not improve generalization unless there is structure in place. We therefore leave weight sharing for future work and, in the rest of the paper, focus on learning networks with a small number of nonzero weights, which seemed to be the most important element based on our empirical investigation in Section 2.
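The efficiency gain from sparsity can be seen with a back-of-envelope encoding: list each surviving parameter as an (index, value) pair. This sketch is our own illustration of the idea, not the encoding used in the proof, and all sizes are hypothetical.

```python
import math

def dense_bits(N, b):
    """Bits to describe a dense vector of N parameters, b bits each."""
    return N * b

def sparse_bits(N, n_nonzero, b):
    """Bits to describe only the nonzero entries as (index, value) pairs:
    ceil(log2 N) bits to name each index plus b bits for each value."""
    return n_nonzero * (math.ceil(math.log2(N)) + b)

N, b = 10**8, 16        # candidate parameters in the search space, bits per value
n_nonzero = 10**5       # parameters surviving the search

# The sparse description is orders of magnitude shorter, so by Theorem 2 the
# gap is driven by n_nonzero, with only a logarithmic dependence on N.
assert sparse_bits(N, n_nonzero, b) < dense_bits(N, b)
```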
Algorithm 1 β-lasso. Inputs: $\theta_0$: the initial parameter vector; $\lambda$: coefficient of the $\ell_1$ regularizer; $\beta$: threshold coefficient; $\eta$: learning rate; $\tau$: number of updates. For $t = 1, \dots, \tau$: take a gradient step on the $\ell_1$-regularized loss, $\theta_t \leftarrow \theta_{t-1} - \eta \nabla_\theta \big( L(\theta_{t-1}) + \lambda \|\theta_{t-1}\|_1 \big)$, then zero out every parameter below the threshold, $\theta_t \leftarrow \theta_t \odot \mathbb{1}\big[ |\theta_t| \ge \beta\lambda \big]$. Return $\theta_\tau$.

4 β-lasso: Learning Local Connectivity from Scratch
In the previous section, we discussed that if a learning algorithm can find an architecture with a small number of nonzero weights, then the generalization gap of that architecture would mostly depend on the number of nonzero weights, with only a logarithmic dependence on the original number of parameters. This is very encouraging. On the other hand, we know from our empirical investigation in Section 2 that locally-connected networks perform considerably better than fully-connected ones. Is it possible to find locally-connected networks by simply encouraging sparse connections when training on images? Could such networks bridge the gap between the performance of fully-connected and convolutional networks?
We are interested in architectures with a small number of parameters. To achieve this, we try the simplest way of encouraging sparsity: adding an $\ell_1$ regularizer. In particular, we propose β-lasso, a simple algorithm that is very similar to lasso (Tibshirani, 1996) except that it has an extra parameter $\beta$ that allows for more aggressive soft thresholding. The algorithm is shown in Algorithm 1.

4.1 Training Fully-Connected Networks

Table 2 compares the performance of s-fc trained with β-lasso to state-of-the-art methods for training fully-connected networks. The results show a significant improvement over previous work, even over complex methods such as distillation or pretraining. Moreover, there is only a very small gap between the performance of s-fc trained with β-lasso and its locally-connected and convolutional counterparts. Note, however, that unlike s-conv and s-local, the performance of s-fc is invariant to permuting the pixels, which is a considerable advantage when little is known about the structure of the data. To put these results in perspective, these accuracies are on par with the best results for convolutional networks in 2013 (Hinton et al., 2012; Zeiler and Fergus, 2013). We fix $\beta$ in the experiments but tune the regularization parameter $\lambda$ for each dataset. We also observed that using a higher $\lambda$ for the layer that corresponds to the convolution improves performance, which is aligned with our understanding that such a layer benefits the most from sparsity.
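The update described above, an SGD step on the $\ell_1$-regularized loss followed by a harder threshold at $\beta\lambda$, can be sketched in pure Python on a toy sparse-regression problem. All sizes and hyperparameters here are illustrative, not the settings used in the experiments.

```python
import random

def sign(v):
    """Subgradient of |v|: -1, 0 or +1."""
    return (v > 0) - (v < 0)

def beta_lasso(grad, theta, lam, beta, eta, tau):
    """Gradient step on the l1-regularized loss, then hard-threshold at beta*lam."""
    for _ in range(tau):
        g = grad(theta)
        theta = [t - eta * (gi + lam * sign(t)) for t, gi in zip(theta, g)]
        theta = [t if abs(t) >= beta * lam else 0.0 for t in theta]  # prune small weights
    return theta

# Toy objective: 0.5 * ||theta - target||^2 with a sparse target vector.
target = [1.0, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0, 3.0]
grad = lambda th: [t - s for t, s in zip(th, target)]

random.seed(0)
theta = [random.uniform(-0.1, 0.1) for _ in target]
theta = beta_lasso(grad, theta, lam=0.01, beta=5.0, eta=0.1, tau=500)

# The recovered vector is sparse: coordinates with zero target are pruned.
assert [i for i, t in enumerate(theta) if t != 0.0] == [0, 3, 7]
```

Once a coordinate is thresholded to zero and its gradient vanishes, it stays zero, which is how the aggressive threshold produces sparse connectivity.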
Table 2: Comparing fully-connected (“mlp”) networks trained with β-lasso against s-conv, s-local and previously reported fully-connected results.

Model                           | Training Method                     | CIFAR-10 | CIFAR-100 | SVHN
s-conv                          | SGD                                 | 87.05    | 62.51     | 93.38
s-local                         | SGD                                 | 85.86    | 62.03     | 93.98
mlp (Neyshabur et al., 2019)    | SGD (no augmentation)               | 58.1     | –         | 84.3
mlp (Mukkamala and Hein, 2017)  | Adam/RMSProp                        | 72.2     | 39.3      | –
mlp (Mocanu et al., 2018)       | SET (Sparse Evolutionary Training)  | 74.84    | –         | –
mlp (Urban et al., 2017)        | deep convolutional teacher          | 74.3     | –         | –
mlp (Lin et al., 2016)          | unsupervised pretraining with ZAE   | 78.62    | –         | –
mlp (3-fc)                      | SGD                                 | 75.12    | 50.75     | 86.02
mlp (s-fc)                      | SGD                                 | 78.63    | 51.43     | 91.80
mlp (s-fc)                      | β-lasso ()                          | 82.45    | 55.58     | 93.80
mlp (s-fc)                      | β-lasso ()                          | 82.52    | 55.96     | 93.66
mlp (s-fc)                      | β-lasso ()                          | 85.19    | 59.56     | 94.07
Furthermore, to see whether β-lasso succeeds in learning architectures that are as sparse as s-local, we measure the number of nonzero weights in each layer separately. The results are shown in Figure 4. The left panel corresponds to the first layer, which is convolutional in s-conv, locally connected in s-local, but fully connected in s-fc. As can be seen, the solution found by β-lasso has fewer nonzero parameters than the number of parameters in s-local, and only slightly more parameters than s-conv, even though s-conv benefits from weight sharing. The middle and right panels show that in the other layers the solution found by β-lasso is still sparse, though less so in the final layer.
Our further investigation into the learned filters produced the surprising results shown in Figure 5. s-fc trained with SGD learned filters that are very dense but with locally correlated values. The same network trained with β-lasso, however, learns a completely different set of filters. The filters learned by β-lasso are sparse and locally connected in a fashion similar to s-local filters. Moreover, it appears that networks trained with β-lasso have learned that immediately neighboring pixels carry little new information. In order to benefit from larger receptive fields while remaining local and sparse, these networks have learned filters that look like a sparse sampling of a local neighborhood in the image. This encouraging finding confirms that, with a learning algorithm that has a better inductive bias, one can start from an architecture without much bias and learn the bias from data.
4.2 Training Convolutional Networks with Larger Kernel Size
Slightly deviating from our main goal, we trained ResNet18 with different kernel sizes using β-lasso and compared it with SGD. Figure 6 shows that β-lasso improves over SGD for all kernel sizes across all datasets. The improvement is, predictably, larger when the kernel size is large, since β-lasso can learn to adjust the effective kernel size automatically. These results again confirm that β-lasso can be used in many different settings to adaptively learn structure.
5 Discussion and Future Work
In this work, we studied the inductive bias of convolutional networks through empirical investigation and MDL theory. We proposed a simple algorithm called β-lasso that significantly improves the training of fully-connected networks. β-lasso does not have any image-specific inductive bias; for example, permuting all pixels in our experiments does not change its performance. It would therefore be interesting to see whether β-lasso can be used in other contexts, such as natural language processing, or in domains such as computational biology, where the data is structured but our knowledge of that structure is much more limited compared to computer vision. Another promising direction is to improve the efficiency of β-lasso so that it benefits from sparsity through faster computation and lower memory usage; the current implementation does not exploit sparsity in either respect, and removing this barrier is necessary to scale β-lasso to larger networks and input dimensions. Finally, we want to emphasize that general-purpose algorithms such as β-lasso that are able to learn structure become even more promising as training larger models becomes more accessible.
Acknowledgement
We thank Sanjeev Arora, Ethan Dyer, Guy GurAri, Aitor Lewkowycz and Yuhuai Wu for many fruitful discussions and Vaishnavh Nagarajan and Vinay Ramasesh for their useful comments on this draft.
References
 Chan et al. (2016) Chan, W., Jaitly, N., Le, Q., and Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. IEEE.
 Chen et al. (2020) Chen, M., Radford, A., Child, R., Wu, J., Jun, H., Dhariwal, P., Luan, D., and Sutskever, I. (2020). Generative pretraining from pixels. In Proceedings of the 37th International Conference on Machine Learning.
 Cohen and Shashua (2017) Cohen, N. and Shashua, A. (2017). Inductive bias of deep convolutional networks through pooling geometry. In Proceeding of the International Conference on Learning Representations.
 Cordonnier et al. (2020) Cordonnier, J.B., Loukas, A., and Jaggi, M. (2020). On the relationship between selfattention and convolutional layers. In International Conference on Learning Representations.
 d’Ascoli et al. (2019) d’Ascoli, S., Sagun, L., Biroli, G., and Bruna, J. (2019). Finding the needle in the haystack with convolutions: on the benefits of architectural bias. In Advances in Neural Information Processing Systems, pages 9330–9340.
 Dettmers and Zettlemoyer (2019) Dettmers, T. and Zettlemoyer, L. (2019). Sparse networks from scratch: Faster training without losing performance. arXiv preprint arXiv:1907.04840.

 Dziugaite and Roy (2017) Dziugaite, G. K. and Roy, D. M. (2017). Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Conference on Uncertainty in Artificial Intelligence.
 Evci et al. (2020) Evci, U., Gale, T., Menick, J., Castro, P. S., and Elsen, E. (2020). Rigging the lottery: Making all tickets winners. In Proceedings of the 37th International Conference on Machine Learning.
 Felzenszwalb et al. (2009) Felzenszwalb, P. F., Girshick, R. B., McAllester, D., and Ramanan, D. (2009). Object detection with discriminatively trained partbased models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627–1645.

 Fernando et al. (2016) Fernando, C., Banarse, D., Reynolds, M., Besse, F., Pfau, D., Jaderberg, M., Lanctot, M., and Wierstra, D. (2016). Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, pages 109–116.
 Frankle and Carbin (2019) Frankle, J. and Carbin, M. (2019). The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceeding of the International Conference on Learning Representations.
 Gunasekar et al. (2018) Gunasekar, S., Lee, J. D., Soudry, D., and Srebro, N. (2018). Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pages 9461–9471.

 He et al. (2016a) He, K., Zhang, X., Ren, S., and Sun, J. (2016a). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778.
 He et al. (2016b) He, K., Zhang, X., Ren, S., and Sun, J. (2016b). Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer.
 Hinton et al. (2012) Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. (2012). Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580.
 Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105.

 Lawrence et al. (1998) Lawrence, S., Giles, C. L., and Tsoi, A. C. (1998). What size neural network gives optimal generalization? Convergence properties of backpropagation. Technical report.
 Lee et al. (2019) Lee, N., Ajanthan, T., and Torr, P. H. (2019). SNIP: Single-shot network pruning based on connection sensitivity. In Proceeding of the International Conference on Learning Representations.
 Lim et al. (2019) Lim, S., Kim, I., Kim, T., Kim, C., and Kim, S. (2019). Fast autoaugment. In Advances in Neural Information Processing Systems, pages 6665–6675.
 Lin et al. (2016) Lin, Z., Memisevic, R., and Konda, K. (2016). How far can we go without convolution: Improving fully-connected networks. In Proceedings of the International Conference on Learning Representations, workshop track.
 Lowe (1999) Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE.
 Mehrotra et al. (1992) Mehrotra, R., Namuduri, K. R., and Ranganathan, N. (1992). Gabor filter-based edge detection. Pattern Recognition, 25(12):1479–1494.
 Mocanu et al. (2018) Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M., and Liotta, A. (2018). Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications, 9(1):1–12.
 Mukkamala and Hein (2017) Mukkamala, M. C. and Hein, M. (2017). Variants of RMSProp and Adagrad with logarithmic regret bounds. In Proceedings of the 34th International Conference on Machine Learning, pages 2545–2553. JMLR.org.
 Neyshabur et al. (2017) Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. (2017). Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5947–5956.
 Neyshabur et al. (2019) Neyshabur, B., Li, Z., Bhojanapalli, S., LeCun, Y., and Srebro, N. (2019). Towards understanding the role of overparametrization in generalization of neural networks. In Proceedings of the International Conference on Learning Representations.
 Neyshabur et al. (2015) Neyshabur, B., Tomioka, R., and Srebro, N. (2015). In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations, workshop track.
 Novak et al. (2019) Novak, R., Xiao, L., Lee, J., Bahri, Y., Yang, G., Hron, J., Abolafia, D. A., Pennington, J., and Sohl-Dickstein, J. (2019). Bayesian deep convolutional networks with many channels are Gaussian processes. In Proceedings of the International Conference on Learning Representations.
 Real et al. (2019) Real, E., Aggarwal, A., Huang, Y., and Le, Q. V. (2019). Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780–4789.
 Ritchie et al. (2020) Ritchie, S., Slone, A., and Ramasesh, V. (2020). Caliban: Docker-based job manager for reproducible workflows.
 Senior et al. (2020) Senior, A. W., Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., Qin, C., Žídek, A., Nelson, A. W., Bridgland, A., et al. (2020). Improved protein structure prediction using potentials from deep learning. Nature, pages 1–5.
 Shalev-Shwartz and Ben-David (2014) Shalev-Shwartz, S. and Ben-David, S. (2014). Understanding machine learning: From theory to algorithms. Cambridge University Press.
 Springenberg et al. (2015) Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2015). Striving for simplicity: The all convolutional net. In International Conference on Learning Representations.
 Tan and Le (2019) Tan, M. and Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning.
 Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288.
 Urban et al. (2017) Urban, G., Geras, K. J., Kahou, S. E., Aslan, O., Wang, S., Caruana, R., Mohamed, A., Philipose, M., and Richardson, M. (2017). Do deep convolutional nets really need to be deep and convolutional? In International Conference on Learning Representations.
 Zeiler and Fergus (2013) Zeiler, M. D. and Fergus, R. (2013). Stochastic pooling for regularization of deep convolutional neural networks. In International Conference on Learning Representations.
 Zeiler and Fergus (2014) Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer.
 Zhang et al. (2017) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations.
 Zhou et al. (2020) Zhou, A., Knowles, T., and Finn, C. (2020). Metalearning symmetries by reparameterization. arXiv preprint arXiv:2007.02933.

 Zoph and Le (2017) Zoph, B. and Le, Q. V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations.
Appendix A Experimental Setup
We used Caliban (Ritchie et al., 2020) to manage all experiments in a reproducible environment on Google Cloud's AI Platform. In all experiments, we used 16-bit operations on V100 GPUs, except for Batch Normalization, which was kept at 32-bit precision.
a.1 Architectures
The ResNet18 architecture used in our experiments is the same as its original version introduced in He et al. (2016a) for training on the ImageNet dataset, except that the first convolutional layer has a 3x3 kernel and is not followed by a max-pooling layer. These changes adapt the architecture to the smaller image sizes used in our experiments. The 3fc architecture is a fully connected network with two hidden layers of equal width, with ReLU activations and Batch Normalization. Finally, see the tables below for the details of dconv, sconv and their locally connected and fully connected counterparts.

Layer  Description  #param (dconv / dlocal / dfc)

conv1  conv3x3
conv2  conv3x3
conv3  conv3x3
conv4  conv3x3
conv5  conv3x3
conv6  conv3x3
conv7  conv3x3
conv8  conv3x3
fc1  fc
fc2  fc(c)
Total
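The per-layer parameter counts in the table above follow the standard formulas for convolutional and fully connected layers. As a minimal sketch (the channel widths in the example are illustrative placeholders, not the paper's actual values):

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Parameters in a k x k convolution: one k x k filter per
    (input channel, output channel) pair, plus an optional per-channel bias."""
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

def fc_params(in_features, out_features, bias=True):
    """Parameters in a fully connected layer: a dense weight matrix
    plus an optional per-output bias."""
    return out_features * in_features + (out_features if bias else 0)

# Example: a 3x3 convolution from 3 input channels to 64 output channels.
print(conv2d_params(3, 64, 3))   # 1792
# Example: a fully connected layer from 512 features to c = 10 classes.
print(fc_params(512, 10))        # 5130
```

For the locally connected and fully connected counterparts, the counts scale up because weight sharing (dlocal) and then locality (dfc) are removed.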
a.2 Hyperparameters
All architectures in this paper are trained with a cosine annealing learning rate schedule with initial learning rate 0.1, a fixed batch size, and the data augmentation proposed in Lim et al. (2019). We also looked at learning rates 0.01 and 1, but observed that 0.1 works well across experiments. For models trained with SGD, we tried the values 0 and 0.9 for the momentum, and 0 along with several nonzero values for the weight decay. For lasso, we did not use momentum and tried several values of the regularization coefficient, chosen separately for the layers that correspond to convolutional layers and to fully connected layers. Moreover, for all models, we tried both adding dropout to the last two fully connected layers and not adding dropout to any layer. One set of models is trained for 4000 epochs, as noted in the corresponding table; all other models in this paper are trained for 400 epochs unless mentioned otherwise. For each experiment, we picked the model with the highest validation accuracy among these choices of hyperparameters.
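The cosine annealing schedule mentioned above decays the learning rate from its initial value to zero over the course of training following half a cosine wave. A small sketch (the 400-epoch horizon mirrors the setting described; the helper itself is illustrative, not the paper's training code):

```python
import math

def cosine_annealing_lr(epoch, total_epochs, initial_lr=0.1):
    """Cosine annealing: learning rate at a given epoch, decaying from
    initial_lr at epoch 0 to 0 at total_epochs along half a cosine wave."""
    return initial_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

# The rate starts at initial_lr and ends at zero.
print(cosine_annealing_lr(0, 400))    # 0.1
print(cosine_annealing_lr(400, 400))  # 0.0
```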
Layer  Description  #param (sconv / slocal / sfc)

conv1  conv9x9
fc1  fc
fc2  fc(c)
Total
Appendix B Proof of Theorem 2
In this section, we restate and prove Theorem 2.
Proof.
We build our proof on the MDL-based generalization bound of Shalev-Shwartz and Ben-David (2014) and construct a prefix-free description language for the hypothesis class. We use the first block of bits to encode d, the dimension of the parameter vector, and the next block of bits to store the values of the parameters. Note that so far the encoding has variable length depending on d, but it is prefix-free because for any two dimensions d1 ≠ d2, the first block of bits differs. Finally, we append the encoding of the hypothesis, whose length is given by the theorem statement. After this addition the encoding remains prefix-free. The reason is that for any two hypotheses, if they have a different number of parameters, the first block of bits differs; otherwise, if they have exactly the same number of parameters and the same parameter values, the lengths of the encodings before the final block are equal, and since the final encoding is itself prefix-free, the whole encoding remains prefix-free.
Next, we construct a prefix-free description whose length, for any hypothesis, is bounded as in the theorem statement. Instead of assigning all weights to parameters, it is enough to specify only the weights that are assigned to nonzero parameters; the remaining weights are then assigned to the parameter with value zero. Similar to before, we use the first block of bits to specify the number of nonzero weights k and the next block to specify the number of parameters d, which again keeps the encoding prefix-free. Finally, for each nonzero weight, we use ceil(log2 n) bits to identify its index among the n weights and ceil(log2 d) bits to specify the index of its corresponding parameter. The total length is therefore the header length plus k(ceil(log2 n) + ceil(log2 d)) bits, and this completes the proof. ∎
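The bit-counting in the sparse description above can be illustrated numerically. The sketch below computes the per-weight portion of the encoding, k(ceil(log2 n) + ceil(log2 d)), for illustrative sizes; it is a sanity check of the arithmetic, not part of the proof:

```python
import math

def sparse_description_bits(n_weights, n_params, n_nonzero):
    """Bits needed to list, for each of the n_nonzero nonzero weights,
    its index among n_weights together with the index of the parameter
    it is tied to among n_params. The fixed-length header that stores
    the two counts is ignored here."""
    bits_per_weight = (math.ceil(math.log2(n_weights))
                       + math.ceil(math.log2(n_params)))
    return n_nonzero * bits_per_weight

# A layer with 1024 weights tied to 16 shared parameters, 32 of them nonzero:
print(sparse_description_bits(1024, 16, 32))  # 32 * (10 + 4) = 448
```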