1 Introduction
Although many well-known computer vision benchmarks come with hundreds or more fully annotated images, many real-world applications come with few labels, and it is impossible or very costly to collect more labels or ground truth.
Working with limited labels and annotations is also the default for many inverse problems. For instance, partial-differential-equation (PDE) constrained optimization for parameter estimation aims to estimate physical model parameters that predict the observations (labels). Examples include acoustic velocity estimation from observed seismic waves, or conductivity estimation from electromagnetic fields. Most inverse problems cannot be ‘solved’ using just the partially observed fields and a physical model that connects input and output. Prior knowledge about the underlying mathematical structure of the quantity to be estimated is typically necessary, i.e., regularization.
In this work, we focus on PDE-constrained optimization problems and regularization, describe their deep connection to learning from limited labels using neural networks, and show that there are subtle differences between the two tasks. Despite these differences, we show how to transfer regularization ideas from PDE-constrained parameter estimation to the training of neural networks in the case of limited labels.
Recently, researchers established a connection between time-stepping methods for solving differential equations and deep neural networks [8, 21, 3]. This includes the recognition of the standard ResNet [10] as a time-dependent heat equation, deep neural networks based on the reaction-diffusion (advection) equation [7, 25], as well as second-order equations like the nonlinear telegraph equation [2]. These contributions concern various discretizations of the forward problem in the context of inverse problems.
Regarding optimization, [13] showed that the backpropagation algorithm is equivalent to an implementation of the method of Lagrangian multipliers for equality-constrained nonlinear optimization. Stating the training of a network as constrained optimization opens the door to methods other than backpropagation: for instance, [22, 6] increase parallelism, and [5, 15, 20] exploit the connection to PDE-constrained optimization to reduce memory requirements. In this work, we

show that regularization on the model parameters in PDE-constrained optimization translates to regularization of the output of a neural network;

derive the Lagrangian and backpropagation algorithm for this type of regularization, which reveals no additional computational complications;

show that network-output regularization based on minimal prior knowledge can boost the prediction accuracy when training on few labels.
First, we establish the correspondence between the terminologies of deep learning and PDE-constrained optimization. Next, we state the problems and derive an optimization algorithm for both regularized PDE-constrained optimization and training deep neural networks. An example shows that basic assumptions about the output of a network for a given application lead to a simple yet effective method to improve semantic segmentation results in the label-sparse training regime.
2 Terminology differences
In PDE-constrained optimization, the model is a physical differential equation, such as the heat equation or Maxwell's equations. The model parameters are the coefficients of the equation, e.g., the spatial distribution of acoustic velocity or electrical conductivity. The input to the model is the initial conditions or source functions. The output of the physical system is compared to the observed data. When training a neural network, we commonly refer to the input of a network as the data. The output (prediction) of the network is compared to the labels. The model is the structure of the neural network; examples of model parameters are biases and convolutional kernels. The goal of image segmentation is to estimate model parameters that transform the input data (images) into the labels (segmentation).
3 Output-regularized neural network training as PDE-constrained optimization
Rather than standard tensor notation, we employ matrix-vector product descriptions to stay close to the PDE-constrained optimization literature. We block-vectorize states and Lagrangian multipliers, while weight/parameter tensors are flattened into block matrices, see [21, 18, 4]. To keep the notation compact, we focus on the ResNet [10] with time step $h$. The network state $\mathbf{y}_{j+1}$ at layer $j+1$ is given by

(3.1) $\mathbf{y}_{j+1} = \mathbf{y}_j + h\,\sigma(\mathbf{K}_j \mathbf{y}_j).$

The block matrix $\mathbf{K}_j$ denotes the network parameters at layer $j$; the number of block rows is equal to the number of output channels, and the number of block columns is equal to the number of input channels. The pointwise nonlinearity $\sigma$ is the activation function. Given data $\mathbf{y}_0$ and labels $\mathbf{c}$, we want to minimize a loss function $\ell$ subject to one equality constraint per time step, i.e.,

(3.2) $\min_{\{\mathbf{K}_j\}} \; \ell(\mathbf{P}\mathbf{y}_n, \mathbf{c}) + \alpha R(\mathbf{y}_n) \quad \text{s.t.} \quad \mathbf{y}_{j+1} = \mathbf{y}_j + h\,\sigma(\mathbf{K}_j \mathbf{y}_j), \quad j = 0, \ldots, n-1.$
There are $n$ time steps (network layers). The matrix $\mathbf{P}$ selects from the prediction $\mathbf{y}_n$ the pixel indices where we have labels. This is analogous to restricting physical fields to sensor locations [17, 19]. Compared to earlier regularized PDE-constrained optimization formulations [8], we propose to apply the regularization $R$ to the network output (and not to the parameters, as in weight decay or parameter smoothness in time [8]), such that we can explicitly control the properties of the predicted probability maps. This is similar in spirit to [1]; see also [12] for context regarding other types of implicit/explicit regularization. While we could apply automatic differentiation to the above problem, we need to look at the Lagrangian to see the effects of output regularization on the subsequent optimization procedures:

(3.3) $\mathcal{L}(\{\mathbf{K}_j\}, \{\mathbf{y}_j\}, \{\mathbf{p}_j\}) = \ell(\mathbf{P}\mathbf{y}_n, \mathbf{c}) + \alpha R(\mathbf{y}_n) + \sum_{j=0}^{n-1} \mathbf{p}_{j+1}^\top \big( \mathbf{y}_{j+1} - \mathbf{y}_j - h\,\sigma(\mathbf{K}_j \mathbf{y}_j) \big),$
where the $\mathbf{p}_j$ are the Lagrangian multipliers. To construct an algorithm for problem (3.2), we need the partial derivatives of $\mathcal{L}$ w.r.t. the variables,
(3.4) $\nabla_{\mathbf{p}_{j+1}} \mathcal{L} = \mathbf{y}_{j+1} - \mathbf{y}_j - h\,\sigma(\mathbf{K}_j \mathbf{y}_j), \quad j = 0, \ldots, n-1,$

(3.5) $\nabla_{\mathbf{y}_n} \mathcal{L} = \mathbf{P}^\top \nabla \ell(\mathbf{P}\mathbf{y}_n, \mathbf{c}) + \alpha \nabla R(\mathbf{y}_n) + \mathbf{p}_n,$

(3.6) $\nabla_{\mathbf{y}_j} \mathcal{L} = \mathbf{p}_j - \mathbf{p}_{j+1} - h\,\mathbf{K}_j^\top \operatorname{diag}\!\big(\sigma'(\mathbf{K}_j \mathbf{y}_j)\big) \mathbf{p}_{j+1}, \quad j = 1, \ldots, n-1,$

(3.7) $\nabla_{\mathbf{K}_j} \mathcal{L} = -h \operatorname{diag}\!\big(\sigma'(\mathbf{K}_j \mathbf{y}_j)\big) \mathbf{p}_{j+1} \mathbf{y}_j^\top, \quad j = 0, \ldots, n-1,$
where we left out the derivatives related to the first layer, as they concern the initial condition that we set equal to the input data while training. $\sigma'$ denotes the derivative of $\sigma$. The above shows that the derivatives of the regularization term appear only in the partial derivative of the Lagrangian with respect to the last state ($\mathbf{y}_n$). To compute a gradient, we use the ‘standard’ tools: the adjoint-state method/backpropagation; see [13, 5] for details about their equivalence. Note that to satisfy the first-order optimality conditions for (3.2), each of the partial derivatives above needs to vanish: 1) forward propagate through the network so that all constraints in (3.2) hold and (3.4) vanishes; 2) propagate backward to obtain all Lagrangian multipliers $\mathbf{p}_j$; 3) compute the gradients w.r.t. the network parameters $\mathbf{K}_j$ for every layer. In Algorithm 1 we show a slightly different version that computes the gradients w.r.t. the parameters on the fly. This procedure avoids storing more Lagrangian multipliers than the length of the recursion of the differential-equation discretization, instead of storing the multipliers for all layers. Note that it is also possible to avoid storing all states by using reversible networks that recompute the states on the fly during backpropagation [15].
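To make the procedure above concrete, the following NumPy sketch implements the forward recursion (3.1) and the backward multiplier sweep for a toy, fully connected ResNet with an output-regularized objective. The tanh activation, the layer sizes, and the callables `grad_loss_yn` and `grad_reg_yn` (gradients of the loss and the regularizer w.r.t. the final state) are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

def sigma(x):
    """Activation function (tanh is an illustrative choice)."""
    return np.tanh(x)

def dsigma(x):
    """Derivative of the activation."""
    return 1.0 - np.tanh(x) ** 2

def forward(K, y0, h):
    """Forward propagation y_{j+1} = y_j + h*sigma(K_j y_j); returns all states."""
    ys = [y0]
    for Kj in K:
        ys.append(ys[-1] + h * sigma(Kj @ ys[-1]))
    return ys

def grad_via_adjoint(K, y0, h, grad_loss_yn, alpha, grad_reg_yn):
    """Backward sweep over the first-order conditions: the output regularizer
    enters only through the final multiplier p_n; gradients w.r.t. each K_j
    are formed on the fly, so only one multiplier is stored at a time."""
    ys = forward(K, y0, h)
    # p_n from the derivative w.r.t. the last state: loss + alpha * regularizer
    p = -(grad_loss_yn(ys[-1]) + alpha * grad_reg_yn(ys[-1]))
    grads = [None] * len(K)
    for j in reversed(range(len(K))):
        z = K[j] @ ys[j]
        # gradient w.r.t. K_j uses the multiplier p_{j+1}
        grads[j] = -h * np.outer(dsigma(z) * p, ys[j])
        # multiplier recursion: p_j = p_{j+1} + h K_j^T (sigma'(z) * p_{j+1})
        p = p + h * K[j].T @ (dsigma(z) * p)
    return ys[-1], grads
```

For a small random network, these on-the-fly gradients can be verified against finite differences of the regularized objective.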
4 Examples of simple explicit regularizers
So far, we showed that PDE-constrained optimization and training neural networks are similar processes. However, the goals are different. PDE-constrained optimization often estimates material properties (parameters) with the help of an additive, scaled regularization function $\alpha R$. In contrast, when training networks, we primarily care about the prediction (network output). Despite these different objectives, a successful technique to prevent overfitting is regularization of the network parameters, just as in the PDE-constrained optimization case. This is a form of implicit regularization, as it is not trivial to see how the regularization relates to the visual appearance of the output for applications like semantic segmentation and other image-to-image tasks. In the PDE-constrained optimization setting, regularizing the parameters is a form of explicit regularization because the regularization acts directly on the quantity of interest.
The second contribution of this work is the recognition that we can carry over the explicit-regularization nature of PDE-constrained optimization to training neural networks by adding a regularization (penalty) term on the network output (the final state $\mathbf{y}_n$).
Consider semantic segmentation in the case of limited data and even more limited supervision. Specifically, consider applications with a single (possibly large-scale) example with partial labeling or point annotations [17] where it is difficult or impossible to collect additional examples. In the next section, we present results for such an application: time-lapse hyperspectral imaging with partial labeling. Insufficient annotation leads to high-frequency/oscillatory artifacts in the final network state, or probability map, $\mathbf{y}_n$. Using the prior knowledge that the true thresholded prediction is piecewise constant and that there are not many isolated pixels or line segments with a class different from their surroundings, a reasonable choice for the regularization is the quadratic smoother
(4.8) $R(\mathbf{y}_n) = \tfrac{1}{2} \big( \|\mathbf{D}_1 \mathbf{y}_n\|_2^2 + \|\mathbf{D}_2 \mathbf{y}_n\|_2^2 \big)$

with gradient

(4.9) $\nabla R(\mathbf{y}_n) = \big( \mathbf{D}_1^\top \mathbf{D}_1 + \mathbf{D}_2^\top \mathbf{D}_2 \big) \mathbf{y}_n.$
The discrete gradient matrices $\mathbf{D}_1$ and $\mathbf{D}_2$ take the derivatives of an image in the first and second coordinate, respectively. This regularizer adds to the final Lagrangian multiplier $\mathbf{p}_n$. Subsequently, this information backpropagates and influences the gradients w.r.t. the network parameters $\mathbf{K}_j$. In the above example, the quadratic smoothing makes sure that the final output state transitions smoothly across class boundaries.
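A minimal NumPy sketch of (4.8) and (4.9) on a 2D probability map, with forward differences standing in for the matrices $\mathbf{D}_1$ and $\mathbf{D}_2$ (the boundary handling shown here is one possible assumption):

```python
import numpy as np

def smoothness_and_grad(y):
    """Quadratic smoother R(y) = 0.5*(||D1 y||^2 + ||D2 y||^2) on a 2D map y,
    with D1 and D2 as forward differences in the two coordinates, and its
    gradient (D1^T D1 + D2^T D2) y formed via the adjoint differences."""
    d1 = np.diff(y, axis=0)  # D1 y
    d2 = np.diff(y, axis=1)  # D2 y
    R = 0.5 * (np.sum(d1 ** 2) + np.sum(d2 ** 2))
    g = np.zeros_like(y)
    g[:-1, :] -= d1  # adjoint of D1 ...
    g[1:, :] += d1
    g[:, :-1] -= d2  # ... and of D2
    g[:, 1:] += d2
    return R, g
```

The gradient follows the adjoint pattern $\mathbf{D}^\top \mathbf{D}\mathbf{y}$: apply the forward difference, then scatter the result back with opposite signs. A constant map yields $R = 0$ and a zero gradient, consistent with the piecewise-constant prior.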
5 Choice of $\alpha$ and numerical example
This example shows that 1) while the overall optimization problem is still non-convex, cross-validation to select the regularization parameter $\alpha$ results in the expected behavior; 2) the proposed explicit regularization can improve predictions. The task is land-use change detection from time-lapse (4D) hyperspectral data. The input data [9] are two 3D hyperspectral datasets collected over the same location, with two spatial axes and one frequency axis, see Figure 1.
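The cross-validation over the regularization parameter mentioned above can be sketched as follows; `train_and_predict` is a hypothetical stand-in for the full training pipeline, which retrains the network for each candidate regularization weight and returns a predicted class map for the validation pixels:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union over classes, evaluated on the class
    values at the annotated pixels contained in `pred` and `target`."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

def select_alpha(alphas, train_and_predict, val_labels, n_classes):
    """Retrain for every candidate alpha and keep the one with the best
    validation mIoU; `train_and_predict` maps alpha -> predicted class map."""
    scores = {a: mean_iou(train_and_predict(a), val_labels, n_classes)
              for a in alphas}
    return max(scores, key=scores.get), scores
```

This simply repeats the training per candidate value and keeps the weight with the highest validation mIoU, mirroring the sweep over $\alpha$ described below.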
The goal is to output a 2D map of the earth’s surface that shows land-use change (Figure 1). For training, there is a small set of randomly selected annotated pixels, and a separate set of annotated pixels is held out for validation. The available labels were annotated by an expert. The default mode of operation for many hyperspectral tasks is interpolation/extrapolation of the labels to the full map, see, e.g., [14, 11, 24, 16, 23]; generalization to new locations is not a concern. The difficulty of annotation causes the limited availability of annotated hyperspectral data: the person who creates the labels needs to know the application and the nature of hyperspectral data. Alternatively, the labels come from costly ground-truth observations. We train a fully reversible network [15]
with 10 layers and up to 32 channels for 250 stochastic gradient descent iterations with a decaying learning rate. This is enough iterations that the validation loss no longer decreases. Training includes basic data augmentation via random flips and rotations. We repeat the training for a range of $\alpha$ values and assess the results using the mean of the intersection over union per class (mIoU, Figure 2). Figure 3 shows the results and errors using no regularization, the optimal regularization parameter, and a very large $\alpha$. While the best result is not perfect, the simple regularization on the network output yields a significant increase in mIoU.

6 Conclusions
We extended the deep connections between partial-differential-equation (PDE) constrained optimization and training neural networks on few data with partial annotation. An insufficient number of annotated pixels leads to oscillatory artifacts in image segmentations. In PDE-constrained optimization, regularization of the model parameters mitigates problems related to insufficient labeling (observations). This carries over directly to the neural network setting as, for example, weight decay. While weight decay regularizes the parameters directly, it affects the prediction of interest only indirectly. We showed that explicit regularization of the output of a network serves the same purpose as parameter regularization in PDE-constrained optimization. A brief derivation of the optimization problem and algorithm shows that this causes no computational complications. A hyperspectral example illustrates that simple cross-validation for selecting the regularization strength can improve prediction quality while making minimal assumptions about prior knowledge. Future work should explore more sophisticated regularizers and methods to adapt the regularization strength during training.
References
[1] Aïcha BenTaieb and Ghassan Hamarneh. Topology aware fully convolutional networks for histology gland segmentation. In Sebastien Ourselin, Leo Joskowicz, Mert R. Sabuncu, Gozde Unal, and William Wells, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, pages 460–468, Cham, 2016. Springer International Publishing.
[2] Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

[3] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 6571–6583. Curran Associates, Inc., 2018.
[4] J. Ephrath, M. Eliasof, L. Ruthotto, E. Haber, and E. Treister. LeanConvNets: Low-cost yet effective convolutional neural networks. IEEE Journal of Selected Topics in Signal Processing, pages 1–1, 2020.
[5] Amir Gholaminejad, Kurt Keutzer, and George Biros. ANODE: Unconditionally accurate memory-efficient gradients for neural ODEs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 730–736. International Joint Conferences on Artificial Intelligence Organization, July 2019.
[6] Stefanie Günther, Lars Ruthotto, Jacob B. Schroder, Eric C. Cyr, and Nicolas R. Gauger. Layer-parallel training of deep residual neural networks. SIAM Journal on Mathematics of Data Science, 2(1):1–23, 2020.
[7] Eldad Haber, Keegan Lensink, Eran Treister, and Lars Ruthotto. IMEXnet: a forward stable deep neural network. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2525–2534, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
[8] Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, Dec 2017.
 [9] Mahdi Hasanlou and Seyd Teymoor Seydi. Hyperspectral change detection: an experimental comparative study. International Journal of Remote Sensing, 39(20):7029–7083, 2018.

[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[11] M. He, B. Li, and H. Chen. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3904–3908, Sep. 2017.
 [12] Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy. arXiv preprint arXiv:1710.10686, 2017.
[13] Yann LeCun. A theoretical framework for back-propagation. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, CMU, Pittsburgh, PA, pages 21–28. Morgan Kaufmann, 1988.
[14] H. Lee and H. Kwon. Contextual deep CNN based hyperspectral classification. In 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 3322–3325, July 2016.
 [15] Keegan Lensink, Eldad Haber, and Bas Peters. Fully hyperbolic convolutional neural networks. arXiv preprint arXiv:1905.10484, 2019.
[16] Ying Li, Haokui Zhang, and Qiang Shen. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sensing, 9(1), 2017.
 [17] Bas Peters, Justin Granek, and Eldad Haber. Multiresolution neural networks for tracking seismic horizons from few training images. Interpretation, 7(3):SE201–SE213, 2019.
[18] Bas Peters, Eldad Haber, and Justin Granek. Does shallow geological knowledge help neural networks to predict deep units? In SEG Technical Program Expanded Abstracts 2019, pages 2268–2272, 2019.
 [19] Bas Peters, Eldad Haber, and Justin Granek. Neural networks for geophysicists and their application to seismic data interpretation. The Leading Edge, 38(7):534–540, 2019.
 [20] Bas Peters, Eldad Haber, and Keegan Lensink. Symmetric blocklowrank layers for fully reversible multilevel neural networks. arXiv preprint arXiv:1912.12137, 2019.
 [21] Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial differential equations. Journal of Mathematical Imaging and Vision, pages 1–13, 2018.
 [22] Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training neural networks without gradients: A scalable admm approach. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2722–2731, New York, New York, USA, 20–22 Jun 2016. PMLR.
[23] Qin Xu, Yong Xiao, Dongyue Wang, and Bin Luo. CSA-MSO3DCNN: Multiscale octave 3D CNN with channel and spatial attention for hyperspectral image classification. Remote Sensing, 12(1), 2020.
[24] Zhixiang Xue. A general generative adversarial capsule network for hyperspectral image spectral–spatial classification. Remote Sensing Letters, 11(1):19–28, 2020.
 [25] Tianjun Zhang, Zhewei Yao, Amir Gholami, Kurt Keutzer, Joseph Gonzalez, George Biros, and Michael Mahoney. Anodev2: A coupled neural ode evolution framework. arXiv preprint arXiv:1906.04596, 2019.