Most state-of-the-art neural networks [14, 29, 10] rely on some variant of Robbins-Monro based stochastic optimization. A requirement for using this algorithm is that the gradients of the objective be Lipschitz continuous. In this work we study approximate gradient pathways that allow arbitrary non-linear functions to be used as sub-modules of neural networks. We do so by introducing a smooth neural network approximation (DAB) of the non-differentiable function and using its gradients during training. At inference we drop the DAB network entirely, thus requiring no extra memory or compute.
2 Related Work
| Method | Objective | Non-differentiable | Scalable | Dimension change |
|---|---|---|---|---|
| DNI / DPG / DGL | Asynchronous network updates. | no | yes | yes |
| Backprop Alternatives [31, 19, 9, 1, 5, 6, 23] | Optimize arbitrary functions. | yes | no | yes |
| Score Function Estimator [8, 22] | Differentiate non-differentiable functions. | yes | no | yes |
| Straight-Through Estimator | Ignore non-differentiable functions. | yes | yes | no |
| DAB (ours) | Differentiate non-differentiable functions. | yes | yes | yes |
Gradient Estimators: Traditional solutions to handling non-differentiable functions in machine learning cluster around the Score Function Estimator (SFE) [8, 22] (also known as REINFORCE [35]) and the Straight-Through Estimator (STE) [3]. The SFE is known to suffer from high variance [11] and needs to be augmented with Control Variates that require manual tuning and domain knowledge. The STE, on the other hand, simply copies gradients back, skipping the non-differentiable portion (i.e. treating it as an identity operation). Furthermore, the STE does not allow for operators that change dimensionality (e.g. $f: \mathbb{R}^n \to \mathbb{R}^m$ with $m \neq n$), since it is unclear how the gradients of the larger or smaller output would be copied back.
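To make the distinction concrete, the following is a minimal PyTorch sketch (ours, not from the paper) of the two estimators applied to an illustrative hard sign operation; the `hard_sign` operator and the Bernoulli reward are placeholder choices.

```python
import torch

# Illustrative non-differentiable operator (placeholder choice).
def hard_sign(x):
    return torch.sign(x)

x = torch.randn(4, 8, requires_grad=True)

# Straight-Through Estimator (STE): the forward pass uses the hard output,
# while the backward pass copies gradients through as if it were the identity.
hard = hard_sign(x)
ste_out = x + (hard - x).detach()       # value == hard_sign(x), gradient == identity
ste_out.sum().backward()
print(x.grad.unique())                  # all ones: the hard op was skipped in backward

# Score Function Estimator (SFE / REINFORCE): differentiate an expectation over
# a discrete sample via grad-log-prob times reward; high variance without control variates.
logits = torch.zeros(4, 8, requires_grad=True)
dist = torch.distributions.Bernoulli(logits=logits)
z = dist.sample()
reward = (2 * z - 1).sum(dim=-1)                                  # placeholder reward
sfe_loss = -(dist.log_prob(z).sum(dim=-1) * reward.detach()).mean()
sfe_loss.backward()                                               # logits.grad is now populated
```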
Backpropagation Alternatives: Machine learning has a rich history of backpropagation alternatives, ranging from Simulated Annealing [31] and Genetic Algorithms [9] to Evolutionary Strategies [1] and Bayesian approaches such as MCMC based sampling algorithms [5, 6]. These algorithms have generally been shown not to scale to the complex, high-dimensional optimization problems [27] embodied in large neural network models. More recent analysis of backpropagation alternatives [23] has demonstrated that it is possible to learn weight updates through the use of random matrices, although no statement is made about training or convergence time.
Asynchronous Neural Network Updates: Recent work such as Decoupled Neural Interfaces (DNI)  and Decoupled Parallel Backpropagation (DPG)  introduced an auxiliary network to approximate gradients in RNN models. Similar approximation techniques have been introduced  (DGL) to allow for greedy layerwise CNN based training. The central objective with these models is to enable asynchronous updates to speed up training time. Our work differs from all these solutions in that our objective is not to improve training speed / parallelism, but to learn a function approximator of a non-differentiable function such that it provides a meaningful training signal for the preceding layers in the network. This approach allows us to utilize complex, non-differentiable functions such as kmeans, sort, signum, etc, as intermediary layers in neural network pipelines.
In their seminal work [28], Robbins and Monro developed a stochastic optimization framework to solve for the roots of a function $h(\theta)$, under the assumption of the existence of a unique solution. They characterized the iterative update rule as:

$$\theta_{t+1} = \theta_t - \alpha_t H(\theta_t, X_t) \qquad (1)$$
Given an observable random variable $H(\theta, X)$, parametrized by $\theta$, the objective is defined as solving for $h(\theta) = \mathbb{E}_X[H(\theta, X)] = c$, wherein $c$ is some constant. If we assume that the gradient of the objective $f$ is $K$-Lipschitz continuous, we can replace $h(\theta)$ with the gradient $\nabla_\theta f(\theta)$; this is due to the fact that we can bound the difference between iterative updates of $\theta$ with and without the application of $\nabla_\theta f$:

$$\| \nabla_\theta f(\theta_{t+1}) - \nabla_\theta f(\theta_t) \|_2 \le K \, \| \theta_{t+1} - \theta_t \|_2 \qquad (2)$$
Given that we can upper bound the normed parameter difference by the normed functional gradients, and through the assumption of small iterates in parameter space ($K < 1$), repeated application of this update rule converges to a fixed point. This derives from the Banach Fixed Point Theorem, which states:
Theorem (Banach Fixed Point): Given a complete metric space $(X, d)$ and a contractive mapping $T : X \to X$, $T$ admits a unique fixed point $x^* \in X$ such that $T(x^*) = x^*$.
In the specific case of Equation 2, the metric is the L2-norm. Note that a norm is a more rigid constraint than a metric, since norms require translation invariance and the scaling property in addition to all the requirements of a metric.
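As a concrete illustration of the update rule and its fixed-point behaviour, the following is a small sketch (ours, with an assumed toy objective) of a Robbins-Monro iteration that recovers the root of $h(\theta) = \mathbb{E}[\theta - X]$ from noisy observations only.

```python
import numpy as np

# Toy Robbins-Monro iteration (assumed example): solve h(theta) = E[theta - X] = 0,
# i.e. recover mu = E[X], using only noisy evaluations H(theta, X_t) = theta - X_t
# and a decreasing step size a_t = 1 / t.
rng = np.random.default_rng(0)
mu, theta = 3.0, 0.0
for t in range(1, 10_001):
    x_t = mu + rng.normal()                    # noisy observation of the unknown mean
    theta = theta - (1.0 / t) * (theta - x_t)  # theta_{t+1} = theta_t - a_t * H(theta_t, X_t)
print(theta)                                   # ~3.0: the iterates contract to the fixed point
```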
A graphical model depicting a generic version of our framework is shown in Figure 1. Given some true input data distribution and a set of functional approximators (Figure 1), our learning objective is defined as maximizing the log likelihood, coupled with a new regularizer introduced in this work.
Since the latent representations are simple functional transformations, we can represent their distributions (Equation 4) by Dirac distributions centered around their functional evaluations. This allows us to rewrite our objective as shown in Equation 5, where the regularizer weight is a problem-specific hyper-parameter. A key point is that during the forward pass of the model we use the non-differentiable function itself.
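The following is a minimal PyTorch sketch (ours, not the authors' released code) of the wiring this objective implies: the forward pass uses the hard function, the DAB network approximates it from the same input, and an L2 penalty (the form analyzed in Section 4.1) ties the two together. All module sizes and the `torch.sign` hard function are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes / modules (assumptions, not the paper's architecture).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ELU())
dab     = nn.Sequential(nn.Linear(64, 64), nn.Tanh())   # smooth DAB approximator
head    = nn.Linear(64, 10)
hard_fn = torch.sign                                     # any non-differentiable op

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
h    = encoder(x)
soft = dab(h)                                  # DAB approximation of hard_fn(h)
hard = hard_fn(h)                              # used directly in the forward pass
out  = head(soft + (hard - soft).detach())     # forward value == hard, grads flow via DAB

task_loss = F.cross_entropy(out, y)
dab_loss  = (soft - hard.detach()).pow(2).mean()   # regularizer (problem-specific weight omitted)
(task_loss + dab_loss).backward()              # encoder receives gradients through the DAB
```

At inference time the `dab` module and `soft` branch can be dropped and `hard_fn(h)` used directly, matching the no-extra-cost claim above.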
4.1 Choice of metric under simplifying assumptions
In this section we analyze the regularizer introduced in Equation 5 in the special case where the output of the non-differentiable function is a (differentiable) linear transformation of the previous layer, $h$, coupled with additive Gaussian noise (aleatoric uncertainty):

$$ z = W h + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2 I) \qquad (6)$$
Under these simplifying assumptions our model induces a Gaussian log-likelihood as shown in Equation 7. At this point we can directly maximize this likelihood using maximum likelihood estimation. Alternatively, if we have a priori knowledge we can introduce it as a prior over the weights and maximize the likelihood times the prior (equivalently, minimize the negative log-posterior), i.e. the MAP estimate. Making a conjugate prior assumption yields the objective shown in Equation 10.
This analysis leads us to the well known result that a linear transformation with aleatoric Gaussian noise results in a loss proportional to the L2 loss (Equation 10). However, what can we say about the case where the output is a non-linear, non-differentiable transformation? In practice we observe that using the L2 loss, coupled with a non-linear neural network transformation, produces strong results. To understand why, we appeal to the central limit theorem, which states that the scaled mean of a random variable converges to a Gaussian distribution as the sample size increases. Furthermore, if we can assume a zero mean, positive variance, and finite absolute third moment, it can be shown that the rate of convergence to a Gaussian distribution is proportional to $1/\sqrt{n}$, where $n$ is the number of samples [4]. We explored alternatives such as the Huber loss [15], cosine loss, L1 loss and cross-entropy loss, but found the L2 loss to consistently produce strong results and utilize it for all presented experiments.
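As a quick, hedged illustration of that convergence-rate claim (our own simulation, not from the paper), the distance between the standardized mean of skewed non-Gaussian samples and a standard normal shrinks roughly as $1/\sqrt{n}$:

```python
import numpy as np
from scipy import stats

# Empirical check of the ~1/sqrt(n) rate: the KS distance between the standardized
# mean of Exponential(1) samples and N(0, 1) shrinks roughly as 1/sqrt(n),
# so ks * sqrt(n) stays roughly constant across sample sizes.
rng = np.random.default_rng(0)
for n in (4, 16, 64, 256):
    means = rng.exponential(size=(20_000, n)).mean(axis=1)
    z = (means - 1.0) * np.sqrt(n)               # Exponential(1): mean 1, std 1
    ks = stats.kstest(z, "norm").statistic       # sup-norm distance to N(0, 1)
    print(n, round(ks, 4), round(ks * np.sqrt(n), 3))
```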
We quantify our proposed algorithm on three different benchmarks: sequence sorting, unsupervised representation learning, and image classification. For a full list of hyper-parameters, model specifications, and example PyTorch code, see the Appendix.
5.1 Sequence Sorting
| | ELU-Dense | [33] | [32] | Signum-RNN (ours) | Signum-Dense (ours) |
|---|---|---|---|---|---|
| T=5 | 86.46 ± 4.7% (x5) | 90% | 94% | 99.3 ± 0.09% (x5) | 99.3 ± 0.25% (x5) |
| T=10 | 0 ± 0% (x5) | 28% | 57% | 92.4 ± 0.36% (x5) | 94.2 ± 0.1% (x5) |
| T=15 | 0 ± 0% (x5) | 4% | 10% | 87.2 ± 0.3% (x5) | 79.8 ± 0.8% (x5) |
In this experiment, we analyze sequence sorting with neural networks. Input sequences of length $T$ are generated by sampling a uniform distribution. The objective of the model is to predict a categorical output distribution corresponding to the indices of the sorted input sequence. We follow prior work and evaluate the all-or-none (also called out-of-sequence) accuracy for all presented models. This metric penalizes an output for not predicting the entire sequence in the correct order (no partial credit).
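For clarity, a minimal sketch (ours) of the all-or-none metric computed on predicted sort indices:

```python
import torch

def all_or_none_accuracy(pred_idx, true_idx):
    """A prediction only counts if every position of the permutation is correct."""
    return (pred_idx == true_idx).all(dim=-1).float().mean()

x = torch.rand(128, 10)                     # 128 sequences of length T = 10
true_idx = torch.argsort(x, dim=-1)         # target: indices that sort each sequence
pred_idx = true_idx.clone()
pred_idx[0, :2] = pred_idx[0, [1, 0]]       # corrupt a single sequence
print(all_or_none_accuracy(pred_idx, true_idx))   # 127 / 128 ~= 0.9922
```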
We develop two novel models to address the sorting problem: a simple feed-forward neural network (Figure 2-left) and a sequential RNN model (Figure 2-right). The central difference between a traditional model and the ones in Figure 2 is the incorporation of a non-differentiable (hard) function, shown in red in both model diagrams. During the forward pass of the model, we directly use the (hard) non-differentiable function's output for the subsequent layers. The DAB network receives the same input as the non-differentiable function and caches its output. This cached output is used in the regularizer presented in Section 4.1 in order to allow the DAB to approximate the non-differentiable function (Figure 2). During the backward pass (dashed lines), the gradients are routed through the DAB instead of the non-differentiable function. While it is possible to utilize any non-differentiable function, in this experiment we use the following $\varepsilon$-margin signum function:

$$\text{sgn}_{\varepsilon}(x) = \begin{cases} +1 & x > \varepsilon \\ 0 & |x| \le \varepsilon \\ -1 & x < -\varepsilon \end{cases} \qquad (11)$$
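A small sketch (ours) of this hard activation; the margin value `eps = 0.5` is an assumed setting, not necessarily the one used in the experiments:

```python
import torch

def signum_margin(x, eps=0.5):
    """Epsilon-margin signum: trinary output in {-1, 0, +1}; zero inside the margin."""
    out = torch.zeros_like(x)
    out[x >  eps] =  1.0
    out[x < -eps] = -1.0
    return out

print(signum_margin(torch.tensor([-0.9, -0.2, 0.1, 0.7])))
# tensor([-1.,  0.,  0.,  1.])
```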
We contrast our models with the state of the art for sequence sorting [33, 32] and a baseline ELU-Dense multilayer neural network, and demonstrate (Table 1) that our model outperforms all baselines (in some cases by over 75%). These gains can be attributed to the choice of (non-differentiable) non-linearity used in our model. We believe that the logic of sequence sorting is simplified by a function that directly bins intermediary model outputs into $\{-1, 0, +1\}$, which in turn simplifies implementing a swap operation.
5.1.1 Effect of Pondering
The model presented in [32] evaluates the effect of pondering, in which an LSTM is iterated with no further inputs. This pondering allows the model to learn to sort its internal representation. Traditional sorting algorithms run $O(T \log T)$ operations on the $T$-dimensional input sequence; iterating the LSTM attempts to parallel this. We introduce a similar pondering loop into our model and show the performance benefit in Figure 3; we observe a similar performance gain, but notice that the benefits diminish after five pondering iterations.
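A sketch (ours, with assumed sizes) of such a pondering loop around a recurrent cell: after reading the sequence, the cell is stepped a few more times with a zero input so it can keep refining its internal representation.

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=1, hidden_size=64)   # sizes are illustrative assumptions
x = torch.rand(32, 10, 1)                          # batch of T = 10 scalar sequences
h = torch.zeros(32, 64)
c = torch.zeros(32, 64)

for t in range(x.shape[1]):                        # consume the input sequence
    h, c = cell(x[:, t], (h, c))

for _ in range(5):                                 # ponder: 5 extra steps, no new input
    h, c = cell(torch.zeros(32, 1), (h, c))
# h is then decoded into the predicted sort indices by the rest of the model.
```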
5.2 Unsupervised Representations
In this experiment, we study the usefulness of unsupervised representations learnt by traditional latent variable models such as the Variational Autoencoder (VAE) [20]. Variational Autoencoders, coupled with discrete reparameterization methods [24, 18], enable learning of compact binary latent representations. Given an input random variable $x$, VAEs posit an approximate posterior, $q_\phi(z|x)$, over a latent variable, $z$, and maximize the Evidence Lower BOund (ELBO), $\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \mathrm{KL}(q_\phi(z|x)\,\|\,p(z))$. We contrast the VAE ELBO with our optimization objective; note that the backward pass for the DAB follows the same logic as presented earlier.
We posit that good latent representations should not only be compact (in terms of bits-per-pixel), but should also linearly disentangle a complex input space and reconstruct the original sample well. Simple, disentangled latent representations are the ultimate goal of unsupervised learning, and we demonstrate the usefulness that non-differentiable functions bring to this goal. We do so through the use of two metrics: the MS-SSIM [34] and linear classification of posterior samples. The MS-SSIM is a metric typically used in compression related studies and gives a sense of how similar (in structure) the reconstructed sample is to the original. Linear classification of posterior samples provides an evaluation of how disentangled the latent representation is: a quintessential feature of a good unsupervised representation. Importantly, we do not specifically train the model to induce better linear separability, as that would necessitate the use of supervision.
We compare against baseline Bernoulli and discrete VAEs, as well as a naive downsample, binary-threshold and classify solution (threshold). We summarize the variants we utilize below:
| Variant | Description |
|---|---|
| dab-bernoulli | Sample from the non-reparameterized Bernoulli distribution. |
| dab-signum | Equation 11; BPP is scaled by $\log_2(3)$ due to the trinary representation. |
| threshold | bilinear(x, BPP), threshold(x, $\tau$) and linearly classify for the best $\tau$. |
We begin by utilizing the training sets of Fashion MNIST, CIFAR10, and ImageNet to train the baseline Bernoulli and discrete VAEs as well as the models with the non-differentiable functions (dab-) presented above. We train five models per level of BPP for Fashion MNIST and CIFAR10 and evaluate the MS-SSIM and linear classification accuracy at each point. We repeat the same for ImageNet, but only at BPP = 0.00097 due to computational restrictions. The linear classifier is trained on the same training dataset (using the encoded posterior representation as input) after the main model has finished training. We present the mean and standard deviation in Figures 4 and 5 for all three datasets. Our models perform better in terms of test reconstruction (MS-SSIM) and also provide a more disentangled latent representation (in terms of linear test accuracy), with either dab-signum or dab-binary performing better than all other variants across all datasets. Since only the activation is changed, the benefit can be directly attributed to the non-differentiable functions used as activations.
5.3 Image Classification
Table 2: Test accuracy (mean ± std) and functional form for each non-differentiable activation evaluated on CIFAR10.
In this experiment we evaluate how well our model performs in classifying CIFAR10 images using a Resnet18 model tailored to operate on $32 \times 32$ images. We evaluate a variety of non-differentiable functions and present their test accuracy and standard deviation in Table 2. We observe that utilizing a sort operation as the final activation in the Resnet18 model improves upon the vanilla model (Baseline) by 0.1%. While this result is statistically significant, the difference is rather small. In contrast, when we used the same non-differentiable function in a simpler model for the same problem, we observed a larger difference (10%) between the test accuracies. We attribute this to the regularization effect induced by the choice of non-differentiable activation.
5.3.1 Ablation / Case Studies
Layer Placement: In order to determine where to place the non-differentiable function within the Resnet18 architecture, we perform an ablation study wherein we train each model 5 times (Figure 6-left). Since the Resnet18 model has four residual blocks, we place the non-differentiable function at the output of each block. We observe that the network remains stable throughout training when the non-differentiable function is placed after the fourth block, and we use this placement for all experiments presented in Table 2. We posit that this is due to the fact that networks typically learn low-level Haar-like filters at initial layers, and enacting a complex, non-differentiable function at an initial layer destroys the coherence of these early representations during training.
Conditioning of Preceding Layer: We utilize the sort non-differentiable function shown in Table 2 to explore the effect of the regularizer introduced in Equation 5. We calculate the empirical earth mover distance (EMD) between the input to the non-differentiable function and its output (Figure 1). We repeat the experiment five times and report the mean and standard deviation in Figure 6-middle. We observe that the regularizer conditions the preceding layer into being more amenable to sorting, as demonstrated by the decrease in the test EMD over time.
Contrasting with STE: We evaluate the test accuracy of the Straight-Through Estimator (STE) in contrast to DAB. The STE was originally utilized to bypass differentiating through a simple argmax operation; here we analyze how well it performs when handling a more complex operator such as sorting. Since the STE cannot operate over transformations that change dimensionality, we use a simplified version of the sort operator from the previous experiment. Instead of sorting the rows and columns as in Table 2, we simply flatten the feature map and run a single sort operation, which allows us to utilize the STE in this scenario. We observe in Figure 6-right that DAB clearly outperforms the STE.
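For reference, a sketch (ours) of the shape-preserving flatten-and-sort operator this comparison relies on, together with the STE wiring; the feature-map shape is an illustrative assumption.

```python
import torch

def flat_sort(x):
    """Flatten each feature map, sort once, and restore the original shape,
    so input and output dimensions match and the STE copy-through is well defined."""
    b = x.shape[0]
    return torch.sort(x.view(b, -1), dim=-1).values.view_as(x)

x = torch.randn(2, 4, 3, 3, requires_grad=True)    # illustrative feature map
hard = flat_sort(x)
ste_out = x + (hard - x).detach()                  # STE: forward sorted, backward identity
ste_out.sum().backward()
```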
Extensive research in machine learning has focused on discovering new (sub-)differentiable non-linearities to use within neural networks [13, 21, 26]. In this work, we demonstrate a novel method for incorporating generic, non-differentiable functions within neural networks and empirically demonstrate their benefit through a variety of experiments using non-differentiable operators such as kmeans, sort and signum. Rather than manually deriving sub-differentiable relaxations, using the Straight-Through Estimator, or relying on REINFORCE, we directly use a neural network to learn a smooth approximation to the non-differentiable function. This work opens up the use of much more complex non-differentiable operators within neural network pipelines.
-  T. Asselmeyer, W. Ebeling, and H. Rosé. Evolutionary strategies of optimization. Physical Review E, 56(1):1171, 1997.
-  E. Belilovsky, M. Eickenberg, and E. Oyallon. Decoupled greedy learning of cnns. arXiv preprint arXiv:1901.08164, 2019.
-  Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
-  A. C. Berry. The accuracy of the gaussian approximation to the sum of independent variates. Transactions of the american mathematical society, 49(1):122–136, 1941.
-  A. E. Gelfand and A. F. Smith. Sampling-based approaches to calculating marginal densities. Journal of the American statistical association, 85(410):398–409, 1990.
-  W. R. Gilks, S. Richardson, and D. Spiegelhalter. Markov chain Monte Carlo in practice. Chapman and Hall/CRC, 1995.
-  P. Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2013.
-  P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
-  D. E. Goldberg and J. H. Holland. Genetic algorithms and machine learning. Machine learning, 3(2):95–99, 1988.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  W. Grathwohl, D. Choi, Y. Wu, G. Roeder, and D. Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. ICLR, 2018.
-  E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In Advances in neural information processing systems, pages 1828–1836, 2015.
-  R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947, 2000.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  P. J. Huber. Robust estimation of a location parameter. In Breakthroughs in statistics, pages 492–518. Springer, 1992.
-  Z. Huo, B. Gu, and H. Huang. Training neural networks using features replay. In Advances in Neural Information Processing Systems, pages 6660–6669, 2018.
-  M. Jaderberg, W. M. Czarnecki, S. Osindero, O. Vinyals, A. Graves, D. Silver, and K. Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1627–1635. JMLR. org, 2017.
-  E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. ICLR, 2017.
-  J. Kennedy. Particle swarm optimization. Encyclopedia of machine learning, pages 760–766, 2010.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
-  G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. In Advances in neural information processing systems, pages 971–980, 2017.
-  J. P. Kleijnen and R. Y. Rubinstein. Optimization and sensitivity analysis of computer simulation models by the score function method. European Journal of Operational Research, 88(3):413–427, 1996.
-  T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, 2016.
-  C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. ICLR, 2017.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
-  P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
-  L. M. Rios and N. V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, 2013.
-  H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
-  A. van den Oord, O. Vinyals, et al. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306–6315, 2017.
-  P. J. Van Laarhoven and E. H. Aarts. Simulated annealing. In Simulated annealing: Theory and applications, pages 7–15. Springer, 1987.
-  O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. ICLR, 2016.
-  O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700, 2015.
-  Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398–1402. IEEE, 2003.
-  R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
7.1 Simple Pytorch Implementation
We provide an example of the base class for any hard function along with an example of the $\varepsilon$-margin signum operand (Equation 11) below. The BaseHardFn accepts the input tensor x along with the DAB approximation soft_y (the logits of the DAB approximator network). Coupling this with the DAB loss (Section 4.1) provides a basic interface for using DABs with any model.
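The original listing is not reproduced here; the following is a hedged sketch (ours) of one way to realize the described interface with a custom `torch.autograd.Function`, reusing the $\varepsilon$-margin signum as the example hard function. The routing choice (all upstream gradient to soft_y, none to x) reflects the description above and assumes soft_y has the same shape as the hard output.

```python
import torch

class BaseHardFn(torch.autograd.Function):
    """Forward: return hard_fn(x). Backward: route the upstream gradient to soft_y,
    the (logits) output of the DAB approximator network, and none to x."""

    @staticmethod
    def forward(ctx, x, soft_y, hard_fn, *args):
        return hard_fn(x, *args)

    @staticmethod
    def backward(ctx, grad_output):
        # Inputs were (x, soft_y, hard_fn, *args): only soft_y receives grad_output.
        return (None, grad_output) + (None,) * (len(ctx.needs_input_grad) - 2)


class SignumMargin(torch.nn.Module):
    """Epsilon-margin signum wrapped with BaseHardFn (eps = 0.5 is an assumed value)."""

    def __init__(self, eps=0.5):
        super().__init__()
        self.eps = eps

    @staticmethod
    def hard_fn(x, eps):
        out = torch.zeros_like(x)
        out[x >  eps] =  1.0
        out[x < -eps] = -1.0
        return out

    def forward(self, x, soft_y):
        # soft_y is produced by the DAB network from the same input x.
        return BaseHardFn.apply(x, soft_y, self.hard_fn, self.eps)
```

A DAB network producing soft_y from the same input, together with the L2 regularizer from Section 4.1, completes the interface described above.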
7.2 Model Hyper-Parameters
| | Sequence Sorting | Unsupervised Representations | Image Classification |
|---|---|---|---|
| Layer-Type | LSTM (gradclip 5) + Dense(256) | Similar to U-Net | CifarResnet18 |