1 Introduction
Most state-of-the-art neural networks [14, 29, 10] rely on some variant of Robbins–Monro [28] based stochastic optimization. Applying this algorithm requires, among other assumptions, that the gradient of the objective be Lipschitz continuous. In this work we study approximate gradient pathways that allow arbitrary non-linear functions to be used as submodules of neural networks. We do so by introducing a smooth neural network approximation (DAB) to the non-differentiable function and using its gradients during training. At inference we drop the DAB network entirely, so it requires no extra memory or compute.
2 Related Work
| Method | Objective | Handles non-differentiable functions | Scales to large problems | Allows dimension change |
|---|---|---|---|---|
| DNI [17] / DPG [16] / DGL [2] | Asynchronous network updates. | no | yes | yes |
| Backprop alternatives [31, 19, 9, 1, 5, 6, 23] | Optimize arbitrary functions. | yes | no | yes |
| Score Function Estimator [8, 22] | Differentiate non-differentiable functions. | yes | no | yes |
| Straight-Through Estimator [3] | Ignore non-differentiable functions. | yes | yes | no |
| DAB (ours) | Differentiate non-differentiable functions. | yes | yes | yes |
Traditional Solutions: Traditional solutions to handling non-differentiable functions in machine learning tend to cluster around the Score Function Estimator (SFE) [8, 22] (also known as REINFORCE [35]) or the Straight-Through Estimator (STE) [3]. While the SFE is an unbiased estimator of the gradient, it generally suffers from high variance [11] and needs to be augmented with control variates [7] that require manual tuning and domain knowledge. The STE, on the other hand, simply copies gradients back, skipping the non-differentiable portion (i.e. treating it as an identity operation); see the sketch below. Furthermore, the STE does not allow for operators that change dimensionality, i.e. $f: \mathbb{R}^{n} \mapsto \mathbb{R}^{m}$ with $m \neq n$, since it is unclear how the gradients of the larger/smaller output would be copied back.
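To make the STE concrete, here is a minimal PyTorch sketch (ours, not from any of the cited papers): the forward pass applies the non-differentiable sign function, while the backward pass copies the gradient through as if the op were the identity.

```python
import torch

class SignSTE(torch.autograd.Function):
    """Straight-through sign: hard forward, identity backward."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)  # non-differentiable forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # copy gradients back, skipping the sign

x = torch.randn(4, requires_grad=True)
SignSTE.apply(x).sum().backward()
print(x.grad)  # all ones: the sign op was treated as an identity
```

Note that the input and output shapes must match, which is exactly why the STE cannot handle dimension-changing operators.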
Backpropagation Alternatives: Machine learning has a rich history of backpropagation alternatives, ranging from Simulated Annealing [31], Particle Swarm Optimization [19], and Genetic Algorithms [9] to Evolutionary Strategies [1] and Bayesian approaches such as MCMC-based sampling algorithms [5, 6]. These algorithms have generally been shown not to scale to the complex, high-dimensional optimization problems [27] embodied in large neural network models. More recent work in the analysis of backpropagation alternatives [23] has demonstrated that it is possible to learn weight updates through the use of random matrices, although no statement is made about training / convergence time.

Asynchronous Neural Network Updates: Recent work such as Decoupled Neural Interfaces (DNI) [17] and Decoupled Parallel Backpropagation (DPG) [16] introduced an auxiliary network to approximate gradients in RNN models. Similar approximation techniques have been introduced to allow for greedy layer-wise CNN training (DGL) [2]. The central objective of these models is to enable asynchronous updates that speed up training. Our work differs from all of these solutions in that our objective is not to improve training speed / parallelism, but to learn a function approximator of a non-differentiable function such that it provides a meaningful training signal for the preceding layers in the network. This allows us to utilize complex, non-differentiable functions such as k-means, sort, signum, etc., as intermediary layers in neural network pipelines.
3 Preliminaries
In their seminal work [28], Robbins and Monro developed a framework of optimization to solve for the roots of a function $M(\theta)$, under the assumption of the existence of a unique solution. They characterized the iterative update rule as:

$\theta_{t+1} = \theta_t - \alpha_t H(\theta_t, X_t)$   (1)

Given an observable random variable $X$, parametrized by $\theta$, the objective is defined as solving $M(\theta) = \mathbb{E}_X[H(\theta, X)] = c$, wherein $c$ is some constant. If we assume that the gradient of the objective $f$ is Lipschitz continuous, we can replace $H$ with the gradient $\nabla f$; this is due to the fact that we can bound the difference between iterative updates of $\nabla f$ with and without the application of Equation 1:

$\|\nabla f(\theta_{t+1}) - \nabla f(\theta_t)\|_2 \leq K \|\theta_{t+1} - \theta_t\|_2$   (2)
Given that we can upper bound the normed gradient difference by the normed parameter difference, and under the assumption of small iterates in parameter space ($K < 1$), repeated application of this update rule converges to a fixed point. This derives from the Banach Fixed Point Theorem, which states:

Theorem (Banach). Given a non-empty complete metric space $(X, d)$ and a contractive mapping $T: X \rightarrow X$, $T$ admits a unique fixed point $x^* = T(x^*)$.

In the specific case of Equation 2, the metric is induced by the L2-norm. Note that a norm is a more rigid constraint than a metric: a norm additionally satisfies the scaling (homogeneity) property, and the metric it induces is translation invariant, on top of all the requirements of a metric.
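As a concrete illustration of the contraction argument (our example, not from the paper), gradient descent on the quadratic $f(\theta) = \frac{1}{2}\theta^2$ with a fixed step size $\alpha < 2$ is a contraction, so the iterates converge to the unique fixed point $\theta^* = 0$:

```python
# Robbins-Monro style iterate: theta_{t+1} = theta_t - alpha * grad f(theta_t).
theta, alpha = 5.0, 0.5
for t in range(20):
    grad = theta                  # d/dtheta [0.5 * theta^2] = theta
    theta = theta - alpha * grad  # each step contracts |theta| by (1 - alpha)
print(theta)                      # ~4.8e-06, approaching the fixed point 0
```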
4 Model
A graphical model depicting a generic version of our framework is shown in Figure 1. Given some true input data distribution, $x \sim P(x)$, and a set of $F$ (as in Figure 1) functional approximators, $\{f_1, \ldots, f_F\}$, our learning objective is defined as maximizing the log-likelihood, coupled with a new regularizer introduced in this work:

$\max_{\theta} \ \log p_\theta(x) = \log \mathbb{E}_{p(z_{1:F})}[\, p_\theta(x \mid z_{1:F}) \,]$   (3)

$p(z_i \mid z_{i-1}) = \delta(z_i - f_i(z_{i-1}))$   (4)

$\max_{\theta, \phi} \ \log p_\theta(x \mid \hat{z}_{1:F}) - \gamma \, \| f_i(\hat{z}_{i-1}) - \text{DAB}_\phi(\hat{z}_{i-1}) \|_2^2$   (5)

Since the latent representations are simple functional transformations, we can represent the distributions $p(z_i \mid z_{i-1})$ (Equation 4) by Dirac distributions centered around their functional evaluations: $\hat{z}_i = f_i(\hat{z}_{i-1})$, with $\hat{z}_0 = x$. This allows us to rewrite our objective as shown in Equation 5, where $f_i$ is the non-differentiable function, $\text{DAB}_\phi$ is its smooth approximator, and $\gamma$ is a problem-specific hyper-parameter. A key point is that during the forward pass of the model we use the non-differentiable function, $f_i$; only the backward pass is routed through the DAB, as sketched below.
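The following PyTorch sketch shows one way to wire this up; it is our reading of Equation 5 rather than the authors' released code, and `hard_fn` and `dab` are placeholder names. The forward pass emits the hard function's value, the backward pass routes gradients through the smooth DAB network, and the L2 regularizer pulls the DAB toward the hard function.

```python
import torch
import torch.nn as nn

def dab_forward(h, hard_fn, dab):
    hard_out = hard_fn(h).detach()  # non-differentiable forward value
    soft_out = dab(h)               # smooth, differentiable approximation
    # Value equals hard_out; gradient equals d(soft_out)/dh.
    out = soft_out + (hard_out - soft_out).detach()
    # Regularizer from Equation 5: pull the DAB toward the hard function.
    dab_loss = (hard_out - soft_out).pow(2).mean()
    return out, dab_loss

dab = nn.Sequential(nn.Linear(8, 32), nn.ELU(), nn.Linear(32, 8))
h = torch.randn(16, 8, requires_grad=True)
out, dab_loss = dab_forward(h, torch.sign, dab)
(out.sum() + 1.0 * dab_loss).backward()  # gamma = 1.0 is illustrative
```

Because `soft_out` depends on `h`, the regularizer also shapes the preceding layer, an effect we return to in Section 5.3.1.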
4.1 Choice of metric under simplifying assumptions
In this section we analyze the regularizer introduced in Equation 5 in the special case where the non-differentiable function output, $y = f(h)$, is a (differentiable) linear transformation of the previous layer $h$ coupled with additive Gaussian noise (aleatoric uncertainty):

$y = W^{*} h + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I)$   (6)

$\log p(y \mid h, W) = \log \mathcal{N}(y \mid W h, \sigma^2 I)$   (7)

Under these simplifying assumptions our model induces a Gaussian log-likelihood as shown in Equation 7, where $W$ are the weights of the (here linear) DAB. At this point we can directly maximize the above likelihood using maximum likelihood estimation. Alternatively, if we have a-priori knowledge we can introduce it as a prior, $p(W)$, over the weights $W$, and maximize the log of the likelihood times the prior, i.e. the MAP estimate of the posterior. If we make a conjugate (Gaussian) prior assumption, $p(W) = \mathcal{N}(0, \lambda^{-1} I)$, then:

$\max_{W} \ \log [\, p(y \mid h, W) \, p(W) \,]$   (8)

$= \max_{W} \ \log \mathcal{N}(y \mid W h, \sigma^2 I) + \log \mathcal{N}(W \mid 0, \lambda^{-1} I)$   (9)

$\propto \min_{W} \ \| y - W h \|_2^2 + \tilde{\lambda} \| W \|_2^2$   (10)
This analysis leads us to the well-known result that a linear transformation with aleatoric Gaussian noise results in a loss proportional to the L2 loss (Equation 10). However, what can we say about the case where $f(h)$ is a non-linear, non-differentiable output? In practice we observe that using the L2 loss, coupled with a non-linear neural network transformation, produces strong results. To understand why, we appeal to the central limit theorem, which states that the scaled mean of a random variable converges to a Gaussian distribution as the sample size increases. Furthermore, if we can assume a zero mean, positive variance, and finite absolute third moment, it can be shown that the rate of convergence to a Gaussian distribution is proportional to $1/\sqrt{n}$, where $n$ is the number of samples [4]. We explored alternatives such as the Huber loss [15], cosine loss, L1 loss and cross-entropy loss, but found the L2 loss to consistently produce strong results; we utilize it for all presented experiments.
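As a small numeric sanity check on the derivation above (our example), the MAP objective of the linear-Gaussian model can be written directly; up to additive constants it is an L2 reconstruction term plus L2 weight decay:

```python
import torch

torch.manual_seed(0)
h = torch.randn(128, 16)                    # layer input
W_true = torch.randn(16, 8)
y = h @ W_true + 0.1 * torch.randn(128, 8)  # linear map + Gaussian noise (Eq. 6)

W = torch.randn(16, 8, requires_grad=True)
sigma2, lam = 0.1 ** 2, 1e-2

nll = ((y - h @ W) ** 2).sum() / (2 * sigma2)  # Gaussian NLL (Eq. 7)
neg_log_prior = lam * (W ** 2).sum() / 2       # conjugate Gaussian prior (Eq. 8)
(nll + neg_log_prior).backward()               # MAP loss: L2 + weight decay (Eq. 10)
```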
5 Experiments

We quantify our proposed algorithm on three different benchmarks: sequence sorting, unsupervised representation learning, and image classification. For a full list of hyper-parameters, model specifications, and example PyTorch [25] code, see the Appendix.

5.1 Sequence Sorting
Table 1: All-or-none sorting test accuracy; entries marked (×5) report mean ± std over five trials.

| Length (T) | ELU-Dense | Ptr-Net [33] | Read-Process-Write [32] | Signum-RNN (ours) | Signum-Dense (ours) |
|---|---|---|---|---|---|
| T=5 | 86.46 ± 4.7% (×5) | 90% | 94% | 99.3 ± 0.09% (×5) | 99.3 ± 0.25% (×5) |
| T=10 | 0 ± 0% (×5) | 28% | 57% | 92.4 ± 0.36% (×5) | 94.2 ± 0.1% (×5) |
| T=15 | 0 ± 0% (×5) | 4% | 10% | 87.2 ± 0.3% (×5) | 79.8 ± 0.8% (×5) |
In this experiment, we analyze sequence sorting with neural networks. Input sequences of length $T$ are generated by sampling a uniform distribution, $x_i \sim U(0, 1)$. The objective of the model is to predict a categorical output distribution corresponding to the indices of the sorted input sequence. We follow [32] and evaluate the all-or-none (called out-of-sequence in [32]) accuracy for all presented models. This metric penalizes an output for not predicting the entire sequence in the correct order (no partial credit).

We develop two novel models to address the sorting problem: a simple feed-forward neural network (Figure 2, left) and a sequential RNN model (Figure 2, right). The central difference between a traditional model and the ones in Figure 2 is the incorporation of a non-differentiable (hard) function, shown in red in both model diagrams. During the forward pass of the model, we directly use the (hard) non-differentiable function's output for the subsequent layers. The DAB network receives the same input as the non-differentiable function and caches its output. This cached output is used in the regularizer presented in Section 4.1 in order to allow the DAB to approximate the non-differentiable function. During the backward pass (dashed lines), the gradients are routed through the DAB instead of the non-differentiable function. While it is possible to utilize any non-differentiable function, in this experiment we use the following margin signum function (sketched in code below):

$\text{sgn}_{\epsilon}(x) = \begin{cases} +1 & x > \epsilon \\ 0 & |x| \leq \epsilon \\ -1 & x < -\epsilon \end{cases}$   (11)
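A minimal PyTorch sketch of Equation 11; the margin $\epsilon$ is a hyper-parameter, and the value 0.5 below is illustrative rather than taken from the paper:

```python
import torch

def margin_signum(x, eps=0.5):
    """Bin values into {-1, 0, +1} with a dead-zone of width 2 * eps."""
    return torch.where(x > eps, torch.ones_like(x),
                       torch.where(x < -eps, -torch.ones_like(x),
                                   torch.zeros_like(x)))

print(margin_signum(torch.tensor([-1.2, -0.1, 0.3, 2.0])))  # [-1., 0., 0., 1.]
```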
We contrast our models with the state of the art for sequence sorting ([33, 32]) and a baseline ELU-Dense multi-layer neural network, and demonstrate (Table 1) that our models outperform all baselines (in some cases by over 75%). These gains can be attributed to the choice of (non-differentiable) non-linearity that we use in our model. We believe that the logic of sequence sorting can be simplified using a function that directly allows binning of intermediary model outputs into $\{-1, 0, +1\}$, which in turn simplifies implementing a swap operation.
5.1.1 Effect of Pondering
The model presented in [32] evaluates the effect of pondering, in which an LSTM is iterated with no further inputs. This pondering allows the model to learn to sort its internal representation. Traditional sorting algorithms run $O(T \log T)$ operations on the $T$-dimensional input sequence; iterating the LSTM attempts to parallel this. We introduce a similar pondering loop into our model (sketched below) and show the performance benefit in Figure 3; we observe a similar performance gain, but notice that the benefits decrease after five pondering iterations.
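A pondering loop of this form can be sketched as follows (our reconstruction, assuming an LSTM cell; the sizes are illustrative):

```python
import torch
import torch.nn as nn

lstm_cell = nn.LSTMCell(input_size=32, hidden_size=256)
x = torch.randn(8, 32)             # last real input of the sequence (batch of 8)
h, c = lstm_cell(x)
zero_input = torch.zeros(8, 32)
for _ in range(5):                 # iterate with no new inputs; gains plateau ~5
    h, c = lstm_cell(zero_input, (h, c))
```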
5.2 Unsupervised Representations
In this experiment, we study the usefulness of unsupervised representations learnt by traditional latent-variable models such as the Variational Autoencoder (VAE) [20]. Variational Autoencoders, coupled with discrete reparameterization methods [24, 18], enable learning of compact binary latent representations. Given an input random variable $x$, VAEs posit an approximate posterior, $q_\phi(z \mid x)$, over a latent variable, $z$, and maximize the Evidence Lower BOund (ELBO). We contrast the VAE ELBO with our optimization objective below (the backward pass for the DAB follows the same logic as presented earlier):

VAE: $\max_{\theta, \phi} \ \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - KL(q_\phi(z \mid x) \,\|\, p(z))$

DAB: $\max_{\theta, \phi} \ \log p_\theta(x \mid \hat{z}) - \gamma \, \| f(h) - \text{DAB}_\phi(h) \|_2^2, \quad \hat{z} = f(h)$
We posit that good latent representations should not only be compact (in terms of bits-per-pixel, BPP), but should also linearly disentangle a complex input space and reconstruct the original sample well. Simple, disentangled latent representations are the ultimate goal of unsupervised learning, and we demonstrate the usefulness that non-differentiable functions bring to this goal. We do so through the use of two metrics: the MS-SSIM [34] and linear classification of posterior samples. The MS-SSIM is a metric typically used in compression-related studies and allows us to get a sense of how similar (in structure) the reconstructed sample is to the original. Linear classification of posterior samples provides us with an evaluation of disentangled latent representations: a quintessential feature of a good unsupervised representation. Importantly, we do not specifically train the model to induce better linear separability, as that would necessitate the use of supervision.

In Figures 4 and 5 we contrast our models (dab) against traditional Bernoulli and discrete Gumbel-reparameterized models [24, 18]
and a naive downsample, binary-threshold and classify solution (threshold). We summarize the variants below:

| Variant | Functional Form |
|---|---|
| dab-bernoulli | Sample from the non-reparameterized distribution: $z \sim \text{Bern}(p)$. |
| dab-binary | Binary threshold: $z = \mathbb{1}[x > 0]$. |
| dab-signum | Equation 11; BPP is scaled by $\log_2 3$ due to the trinary representation. |
| threshold | bilinear(x, BPP), threshold(x, $\tau$) and linearly classify for the best $\tau$. |
We begin by utilizing the training sets of Fashion MNIST, CIFAR10, and ImageNet to train the baseline Bernoulli and discrete VAEs as well as the models with the non-differentiable functions (dab) presented above. We train five models per level of BPP for Fashion MNIST and CIFAR10 and evaluate the MS-SSIM and linear classification accuracy at each point. We repeat the same for ImageNet, but only at BPP = 0.00097 due to computational restrictions. The linear classifier (sketched below) is trained after the completion of training the main model, on the same training dataset, using the encoded posterior representation as input. We present the mean and standard deviation results in Figures 4 and 5 for all three datasets. We observe that our models perform better in terms of test reconstruction (MS-SSIM) and also provide a more disentangled latent representation (in terms of linear test accuracy), with either dab-signum or dab-binary performing best across all datasets. Since only the activation is changed, the benefit can be directly attributed to the non-differentiable functions used as activations.
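The linear probe can be sketched as follows (assumed setup, not the authors' code): a single linear layer is trained on frozen posterior samples, so no supervision reaches the encoder, and its test accuracy measures how linearly separable the latent representation is.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encode, loader, z_dim, n_classes, epochs=10):
    clf = nn.Linear(z_dim, n_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                z = encode(x)  # frozen encoder: gradients never reach it
            loss = F.cross_entropy(clf(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```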

5.3 Image Classification

Table 2: CIFAR10 test accuracy of a Resnet18 model under different (non-differentiable) output activations.

| Model | Mean | ± Std | Functional Form |
|---|---|---|---|
| Baseline | 92.87% | 0.06% | Identity(x) |
| Signum | 91.95% | 0.07% | Equation 11 |
| Sort | 92.93% | 0.1% | sortrow(x) ∘ sortcol(x) |
| Top-k | 92.21% | 0.14% | (sortrow(x) ∘ sortcol(x))[0:k] |
| KMeans | 91.97% | 0.16% | kmeans(x, k=10) |
In this experiment we evaluate how well our model performs in classifying images of CIFAR10, using a Resnet18 model tailored to operate on 32×32 images. We evaluate a variety of non-differentiable functions (the sort variant is sketched below) and present their test accuracy and standard deviation in Table 2. We observe that utilizing a sort as the final activation in the Resnet18 model improves upon the vanilla model (Baseline) by 0.06%. While these results are statistically significant, the difference is rather small. In contrast, when we used the same non-differentiable function in a simpler model for the same problem, we observed a larger difference (10%) between the test accuracies. We attribute this to the regularization effect induced by the choice of non-differentiable activation.
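For concreteness, a sketch of the sort activation from Table 2 as we read the functional form (sort each row of the feature map, then each column); the DAB supplies the gradients for it during training:

```python
import torch

def sort_row_col(x):
    """x: (batch, channels, H, W) feature maps."""
    x = torch.sort(x, dim=-1).values  # sort along rows
    x = torch.sort(x, dim=-2).values  # then along columns
    return x

print(sort_row_col(torch.randn(1, 1, 4, 4)))
```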
5.3.1 Ablation / Case Studies
Layer Placement: In order to determine where to place the non-differentiable function within the Resnet18 architecture, we perform an ablation study wherein we train each model five times (Figure 6, left). Since the Resnet18 model has four residual blocks, we place the non-differentiable function at the output of each block. We observe that the network remains stable throughout training when the non-differentiable function is placed at the fourth block, and we use this placement for all experiments presented in Table 2. We posit that this is because networks typically learn low-level Haar-like filters at initial layers, and enacting a complex, non-differentiable function at an initial layer destroys coherence during the learning process.
Conditioning of Preceding Layer: We utilize the sort non-differentiable function from Table 2 to explore the effect of the regularizer introduced in Equation 5. We calculate the empirical earth-mover's distance (EMD) between the input layer to the non-differentiable function ($\hat{z}_{i-1}$ in Figure 1) and its output ($\hat{z}_i$ in Figure 1). We repeat the experiment five times and report the mean and standard deviation in Figure 6, middle. We observe that the regularizer conditions the input layer into being more amenable to sorting, as demonstrated by the decrease in the test EMD over time.
Contrasting with STE: We evaluate the test accuracy of the Straight-Through Estimator (STE) in contrast to DAB. The STE was originally utilized to bypass differentiating through a simple argmax operator [3]; here we analyze how well it performs when handling a more complex operation such as sorting. Since the STE cannot operate over transformations that change dimensionality, we use a simplified version of the sort operator from the previous experiment: instead of sorting the rows and columns as in Table 2, we simply flatten the feature map and run a single sort operation (see the sketch below). This allows us to utilize the STE in this scenario. We observe in Figure 6, right, that DAB clearly outperforms the STE.
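A sketch of this STE baseline (our reconstruction): flatten the feature map, apply a single sort, and copy gradients straight through. The input and output shapes match, which is what makes the STE applicable here.

```python
import torch

def ste_sort(x):
    flat = x.flatten(1)                           # (batch, C * H * W)
    sorted_flat = torch.sort(flat, dim=-1).values
    # Forward: sorted values; backward: identity to the unsorted input.
    return (flat + (sorted_flat - flat).detach()).view_as(x)
```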
6 Discussion
Extensive research in machine learning has focused on discovering new (sub-)differentiable non-linearities to use within neural networks [13, 21, 26]. In this work, we demonstrate a novel method that allows the incorporation of generic, non-differentiable functions within neural networks, and we empirically demonstrate their benefit through a variety of experiments using a handful of non-differentiable operators such as k-means, sort and signum. Rather than manually deriving sub-differentiable solutions (e.g. [12]), using the Straight-Through Estimator (e.g. [30]) or relying on REINFORCE, we directly use a neural network to learn a smooth approximation to the non-differentiable function. This work opens up the use of much more complex non-differentiable operators within neural network pipelines.
References
 [1] T. Asselmeyer, W. Ebeling, and H. Rosé. Evolutionary strategies of optimization. Physical Review E, 56(1):1171, 1997.
 [2] E. Belilovsky, M. Eickenberg, and E. Oyallon. Decoupled greedy learning of CNNs. arXiv preprint arXiv:1901.08164, 2019.
 [3] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
 [4] A. C. Berry. The accuracy of the Gaussian approximation to the sum of independent variates. Transactions of the American Mathematical Society, 49(1):122–136, 1941.
 [5] A. E. Gelfand and A. F. Smith. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410):398–409, 1990.
 [6] W. R. Gilks, S. Richardson, and D. Spiegelhalter. Markov chain Monte Carlo in practice. Chapman and Hall/CRC, 1995.
 [7] P. Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2013.
 [8] P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
 [9] D. E. Goldberg and J. H. Holland. Genetic algorithms and machine learning. Machine learning, 3(2):95–99, 1988.
 [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [11] W. Grathwohl, D. Choi, Y. Wu, G. Roeder, and D. Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. ICLR, 2018.
 [12] E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828–1836, 2015.
 [13] R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947, 2000.

 [14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 [15] P. J. Huber. Robust estimation of a location parameter. In Breakthroughs in Statistics, pages 492–518. Springer, 1992.
 [16] Z. Huo, B. Gu, and H. Huang. Training neural networks using features replay. In Advances in Neural Information Processing Systems, pages 6660–6669, 2018.
 [17] M. Jaderberg, W. M. Czarnecki, S. Osindero, O. Vinyals, A. Graves, D. Silver, and K. Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1627–1635. JMLR.org, 2017.
 [18] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. ICLR, 2017.
 [19] J. Kennedy. Particle swarm optimization. Encyclopedia of machine learning, pages 760–766, 2010.
 [20] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. ICLR, 2014.
 [21] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pages 971–980, 2017.
 [22] J. P. Kleijnen and R. Y. Rubinstein. Optimization and sensitivity analysis of computer simulation models by the score function method. European Journal of Operational Research, 88(3):413–427, 1996.

 [23] T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, 2016.
 [24] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. ICLR, 2017.
 [25] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS-W, 2017.
 [26] P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
 [27] L. M. Rios and N. V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, 2013.
 [28] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
 [29] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
 [30] A. van den Oord, O. Vinyals, et al. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306–6315, 2017.
 [31] P. J. Van Laarhoven and E. H. Aarts. Simulated annealing. In Simulated annealing: Theory and applications, pages 7–15. Springer, 1987.
 [32] O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. ICLR, 2016.
 [33] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700, 2015.
 [34] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, volume 2, pages 1398–1402. IEEE, 2003.

 [35] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
7 Appendix
7.1 Simple PyTorch Implementation
We provide an example of the base class for any hard function, along with an example of the margin signum operand (Equation 11), below. The BaseHardFn accepts the input tensor x along with the DAB approximation (soft_y). Coupling this with the DAB loss (Section 4.1) provides a basic interface for using DABs with any model.
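The original listing did not survive extraction; below is a minimal sketch consistent with the description above. The class name and the `x` / `soft_y` arguments follow the text, while the internals are our reconstruction.

```python
import torch

class BaseHardFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, soft_y, hard_fn):
        # Forward pass: directly use the non-differentiable function's output.
        return hard_fn(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass: send the gradient into soft_y's graph (the DAB) and
        # block it on the hard path, so preceding layers receive d(soft_y)/dx.
        return None, grad_output, None

def margin_signum(x, eps=0.5):
    """Margin signum operand (Equation 11): bins values into {-1, 0, +1}."""
    return torch.where(x > eps, torch.ones_like(x),
                       torch.where(x < -eps, -torch.ones_like(x),
                                   torch.zeros_like(x)))

# Usage: soft_y = dab(x) is the cached DAB approximation; the layer output is
# BaseHardFn.apply(x, soft_y, margin_signum), and the DAB loss from
# Section 4.1 pulls soft_y toward the hard output.
```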
7.2 Model Hyper-Parameters

| Hyper-parameter | Fashion MNIST | CIFAR10 | ImageNet | Sorting | Classification |
|---|---|---|---|---|---|
| Optimizer | Adam | RMSProp | RMSProp | Adam | Adam |
| LR | 1e-3 | 1e-4 | 1e-4 | 1e-4 | 1e-4 |
| Batch size | 128 | 128 | 192 | 1024 | 128 |
| Activation | ELU | ReLU | ELU | Tanh | ELU |
| Normalization | Batchnorm | Batchnorm | Batchnorm | None | Batchnorm |
| Layer type | Similar to U-Net | Similar to U-Net | Similar to U-Net | LSTM (grad-clip 5) + Dense(256) | CifarResnet18 |
| DAB | 10 | 70 | 2 | 10 | 10 |