On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks

03/14/2018 · by Yanzhi Wang, et al.

Large-scale deep neural networks are both memory-intensive and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware accelerations of deep neural networks have been extensively investigated in both industry and academia. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy and universal applicability. It is also important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. In order to address these concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior). The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove for BNNs. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption as the network size grows. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable for hardware.


1 Introduction

Large-scale neural networks are both memory-intensive and computation-intensive, thereby posing stringent requirements on computing platforms when deploying large-scale neural network models on memory-constrained and energy-constrained embedded devices. In order to overcome these limitations, hardware accelerations of deep neural networks have been extensively investigated in both industry and academia [1, 2, 3, 4, 5, 6, 7, 8]. These hardware accelerations are based on FPGA and ASIC devices and can achieve a significant improvement in energy efficiency, along with a small form factor, compared with traditional CPU- or GPU-based computing of deep neural networks. Both characteristics are critical for battery-powered embedded and autonomous systems.

Hardware systems, including FPGAs and ASICs, have much higher peak performance for binary operations than for floating-point ones. Besides, it is also desirable to reduce the model size of a deep neural network such that the whole model can be stored in on-chip memory, thereby reducing the timing and energy overheads of off-chip storage and communication. As a result, Binary Neural Networks (BNNs), proposed by [9], are particularly appealing since they can be implemented almost entirely with binary operations, with the potential to attain performance in the tera-operations per second (TOPS) range on FPGAs or ASICs.

Besides BNNs, prior work [10, 11, 12, 13, 14, 15, 16] has also proposed to utilize the hardware-oriented Stochastic Computing (SC) technique for developing (large-scale) deep neural networks, i.e., SCNNs. The SC technique represents a number using the portion of 1's in a bit sequence. Many key operations in neural networks, such as multiplications and additions, can be implemented with a single gate in SC. For example, the multiplication of two stochastic numbers can be implemented using a single AND gate or XNOR gate (depending on unipolar or bipolar representation). This enables the efficient implementation of deep neural networks with an extremely small hardware footprint.

BNNs and SCNNs are essentially alike: both rely on binary operations and very simple hardware calculations such as AND gates, XNOR gates, multiplexers, and counters. The distinction is that SCNNs "stretch" in the temporal domain and use a bit sequence (stochastic number) to approximate a real number, whereas BNNs "span" in the spatial domain and require more input and hidden neurons to maintain the desired accuracy.

Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy. Will SCNNs and BNNs be accurate for all types of neural networks and applications? More specifically, conventional neural networks with at least one hidden layer satisfy the universal approximation property [17], in that they can approximate an arbitrary continuous or measurable function given enough neurons in the hidden layer. Do SCNNs and BNNs satisfy this property as well? Finally, what are the relative pros and cons of SCNNs and BNNs in theory and at the hardware level?

In this paper we aim to answer the above questions. We consider the "ideal" SCNNs and BNNs that are independent of specific hardware implementations. As the key contribution of this paper, we prove that SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior in these networks). The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove for BNNs. This is because it is difficult to directly prove the property for BNNs, as BNNs represent functions with discrete (binary) input values instead of continuous ones.

Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption as the network size grows. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable for hardware.

2 Background and Related Work

2.1 Stochastic Computing and SCNNs

Stochastic computing (SC) is a paradigm that represents a number, called a stochastic number, by counting the number of ones in a bit-stream. For example, the bit-stream 0100110100 contains four ones in a ten-bit stream, thus it represents $P(X=1) = 4/10 = 0.4$. In the bit-stream, each bit is independent and identically distributed (i.i.d.) and can be generated in hardware using stochastic number generators (SNGs). Obviously, the length of the bit-stream can significantly affect the calculation accuracy in SC [18]. In addition to this unipolar encoding format, SC can also represent numbers in the range of $[-1, 1]$ using the bipolar encoding format. In this scenario, a real number $x$ is represented by a bit-stream with $P(X=1) = (x+1)/2$. Thus 0.4 can be represented by 1011011101, as $P(X=1) = 7/10$ and $2 \times 0.7 - 1 = 0.4$.
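To make the two encodings concrete, the following minimal sketch (our own illustration, not part of the original work; helper names such as unipolar_encode are ours) generates unipolar and bipolar bit-streams for a target value and decodes them back by counting ones.

import numpy as np

rng = np.random.default_rng(0)

def unipolar_encode(x, m):
    """Encode x in [0, 1] as an m-bit stream with P(bit = 1) = x."""
    return (rng.random(m) < x).astype(np.uint8)

def unipolar_decode(bits):
    """Decode a unipolar stream: value = fraction of ones."""
    return bits.mean()

def bipolar_encode(x, m):
    """Encode x in [-1, 1] as an m-bit stream with P(bit = 1) = (x + 1) / 2."""
    return (rng.random(m) < (x + 1) / 2).astype(np.uint8)

def bipolar_decode(bits):
    """Decode a bipolar stream: value = 2 * (fraction of ones) - 1."""
    return 2 * bits.mean() - 1

m = 10_000
print(unipolar_decode(unipolar_encode(0.4, m)))  # close to 0.4
print(bipolar_decode(bipolar_encode(0.4, m)))    # close to 0.4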

Fig. 1: (a) Unipolar encoding format and (b) bipolar encoding format. (c) AND gate for unipolar multiplication. (d) XNOR gate for bipolar multiplication. (e) MUX gate for addition.

Compared to conventional computing, the major advantage of stochastic computing is the significantly lower hardware cost for a large category of arithmetic calculations. A summary of the basic computing components in SC, such as multiplication and addition, is shown in Figure 1. As an illustrative example, a unipolar multiplication can be performed by a single AND gate since $P(A \wedge B = 1) = P(A=1)\,P(B=1)$ (assuming independence), i.e., $c = a \cdot b$; and a bipolar multiplication is performed by a single XNOR gate since $2\big(P(A=1)P(B=1) + P(A=0)P(B=0)\big) - 1 = \big(2P(A=1)-1\big)\big(2P(B=1)-1\big)$, i.e., $c = a \cdot b$ as well.
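As a sanity check of the gate-level rules summarized in Figure 1, the short sketch below (our own illustration under the standard SC assumption of independent streams; all names are ours) multiplies streams with a bitwise AND (unipolar) and XNOR (bipolar), and performs scaled addition with a 2-to-1 multiplexer.

import numpy as np

rng = np.random.default_rng(1)
m = 100_000  # bit-stream length

def unipolar(x):  # P(bit = 1) = x
    return (rng.random(m) < x).astype(np.uint8)

def bipolar(x):   # P(bit = 1) = (x + 1) / 2
    return (rng.random(m) < (x + 1) / 2).astype(np.uint8)

# Unipolar multiplication: bitwise AND, since P(A AND B = 1) = P(A=1) P(B=1).
a, b = unipolar(0.6), unipolar(0.5)
print((a & b).mean())               # ~0.30

# Bipolar multiplication: bitwise XNOR.
c, d = bipolar(0.8), bipolar(-0.5)
xnor = 1 - (c ^ d)
print(2 * xnor.mean() - 1)          # ~-0.40

# Scaled addition with a 2-to-1 MUX: the output stream represents (x + y) / 2.
x, y = bipolar(0.8), bipolar(-0.5)
sel = rng.random(m) < 0.5           # uniform select stream
mux_out = np.where(sel, x, y)
print(2 * mux_out.mean() - 1)       # ~(0.8 - 0.5) / 2 = 0.15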

Besides multiplications and additions, SC-based activation functions have also been developed [19, 20]. As a result, SC has become an interesting and promising approach for implementing large-scale neural networks [21, 11, 22, 12] with high performance/energy efficiency and minor accuracy degradation.

2.2 Binary Neural Networks (BNNs)

BNNs use binary weights, i.e., weights that are constrained to only two possible values (not necessarily 0 and 1) [9]. BNNs also have great potential to facilitate consumer applications on low-power devices and embedded systems. [2, 3] have implemented BNNs in FPGAs with high performance and modest power consumption.

BNNs constrain the weights to either $+1$ or $-1$ during the forward propagation process. As a result, many multiply-accumulate operations are replaced by simple additions (and subtractions) using single gates. This results in a huge gain in hardware resource efficiency, as fixed-point adders/accumulators are much less expensive in both area and energy than fixed-point multiply-accumulators [23].
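The replacement of multiply-accumulate operations by additions and subtractions can be seen in a few lines (a minimal sketch of our own, not the referenced hardware implementation): with weights constrained to {-1, +1}, a dot product reduces to adding the inputs with positive weights and subtracting the rest.

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(8)            # real-valued activations
w = rng.choice([-1.0, 1.0], size=8)   # binary weights

# Conventional multiply-accumulate.
mac = np.dot(w, x)

# Equivalent add/subtract form usable by BNN hardware: no multipliers needed.
add_sub = x[w > 0].sum() - x[w < 0].sum()

print(np.isclose(mac, add_sub))       # True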

The real-valued weights are transformed into the two possible values through the following stochastic binarization operation:

$$w_b = \begin{cases} +1 & \text{with probability } p = \sigma(w), \\ -1 & \text{with probability } 1 - p, \end{cases} \qquad (1)$$

where $\sigma$ is the hard sigmoid function:

$$\sigma(x) = \mathrm{clip}\!\left(\frac{x+1}{2},\, 0,\, 1\right) = \max\!\left(0, \min\!\left(1, \frac{x+1}{2}\right)\right). \qquad (2)$$

A hard sigmoid rather than the soft version is used because it is far less computationally expensive.

At training time, BNNs randomly pick one of the two values for each weight, for each minibatch, for both the forward and backward propagation phases of backpropagation. However, the stochastic gradient descent (SGD) update is accumulated in a real-valued variable storing the parameter, in order to average out the noise and keep sufficient resolution. Moreover, the binarization process adds some noise to the model, which provides a form of regularization that helps address the over-fitting problem.
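A minimal sketch of the stochastic binarization in Eqns. (1)-(2), in the style of BinaryConnect as we understand it (function names are ours, not from [9]):

import numpy as np

rng = np.random.default_rng(3)

def hard_sigmoid(x):
    """sigma(x) = clip((x + 1) / 2, 0, 1), as in Eqn. (2)."""
    return np.clip((x + 1) / 2, 0.0, 1.0)

def stochastic_binarize(w):
    """Draw w_b = +1 with probability hard_sigmoid(w), else -1, as in Eqn. (1)."""
    p = hard_sigmoid(w)
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)

# The real-valued weights are kept and updated by SGD; only the forward and
# backward passes of each minibatch use a freshly sampled binary copy.
w_real = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(stochastic_binarize(w_real))   # e.g. [-1. -1.  1.  1.  1.]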

2.3 Universal Approximation Property

For feedforward neural networks with one hidden layer, [24] and [25] have separately proved the universal approximation property, which guarantees that for any given continuous or measurable function $f$ and any error bound $\epsilon > 0$, there always exists a single-hidden-layer neural network that approximates the function within $\epsilon$ integrated error. Besides the approximation property itself, it is also desirable to place a limit on the maximum number of neurons. In this direction, [26] showed that feedforward networks with one layer of sigmoidal nonlinearities achieve an integrated squared error of order $O(1/n)$, where $n$ is the number of neurons.

More recently, several interesting results were published on the approximation capabilities of deep neural networks or neural networks using structured matrices. [27] have shown that there exist certain functions that can be approximated by three-layer neural networks with a polynomial number of neurons, while two-layer neural networks require exponentially more neurons to achieve the same error. [28] and [29] have shown the exponential increase in the number of linear regions as neural networks grow deeper. [30] proved that a neural network with $\Theta(\log(1/\epsilon))$ layers and $O(\mathrm{poly}\log(1/\epsilon))$ parameters in each layer can achieve an error bound of $\epsilon$ for sufficiently smooth continuous functions. Recently, [31] have proved that neural networks represented with structured, low displacement rank matrices preserve the universal approximation property. These recent results have sparked interest in the theoretical properties of neural networks with simplifications/approximations that are suitable for high-efficiency hardware implementations.

3 Neural Network of Interests and SCNNs

Our problem statement follows the flow of reference work [31] for investigating the universal approximation property. Let $I_n$ denote the $n$-dimensional unit cube, $[0,1]^n$. The space of continuous functions on $I_n$ is denoted by $C(I_n)$. A feedforward neural network with $N$ units of neurons arranged in a single hidden layer is denoted by a function $G: I_n \rightarrow \mathbb{R}$, satisfying the form

$$G(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \sigma\big(\mathbf{w}_i^T \mathbf{x} + b_i\big), \qquad (3)$$

where $\mathbf{w}_i \in \mathbb{R}^n$, $\alpha_i, b_i \in \mathbb{R}$, $\mathbf{x} \in I_n$, and $\sigma$ is a nonlinear sigmoidal activation function. $\mathbf{w}_i$ denotes the weights associated with hidden neuron $i$ and is applied to the input $\mathbf{x}$. $\alpha_i$ denotes the $i$-th weight of the output neuron and is applied to the output of the $i$-th neuron in the hidden layer. $b_i$ is the bias of unit $i$.

Definition 1.

A sigmoidal activation function $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ satisfies

$$\sigma(t) \rightarrow \begin{cases} 1 & \text{as } t \rightarrow +\infty, \\ 0 & \text{as } t \rightarrow -\infty. \end{cases}$$

Definition 2.

Starting from the neural network of interest, we define an SCNN satisfying the form:

$$G_m(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \sigma\big(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}\big), \qquad (4)$$

where each element in $\mathbf{x}_m$ is denoted by $x_{j,m}$, and each element in $\mathbf{w}_{i,m}$ is denoted by $w_{ij,m}$. $\mathbf{x}_m$, $\mathbf{w}_{i,m}$, and $b_{i,m}$ are stochastic numbers represented by $m$-bit streams, as approximations of $\mathbf{x}$, $\mathbf{w}_i$, and $b_i$, respectively. These bit-streams are independent in each bit, and $\mathbf{x}_m$, $\mathbf{w}_{i,m}$, and $b_{i,m}$ converge to $\mathbf{x}$, $\mathbf{w}_i$, and $b_i$ as $m \rightarrow \infty$, respectively. The computation in Eqn. (4) follows the SC rules described in Section 2.1.

In the above definitions we focus on an "ideal" SCNN that assumes accurate activation and output layer calculation (which is reasonable because the output layer size is typically very small). The SCNN of interest, as illustrated in Figure 2, does not depend on specific hardware implementations that may be different in practice. We also do not specify any limitation on the weight and input ranges because they can be effectively dealt with by pre-scaling techniques.
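To illustrate Definitions 1-2, the sketch below (our own construction under the stated "ideal" assumptions: bipolar XNOR multiplication for the hidden-layer products, exact accumulation, activation, and output layer; tanh is used as the activation and all names are ours) builds a small single-hidden-layer network G and its SCNN counterpart G_m, and compares the two as the bit-stream length m grows.

import numpy as np

rng = np.random.default_rng(4)

def bipolar_stream(value, m):
    """m-bit stream whose fraction of ones encodes value in [-1, 1]."""
    return (rng.random(m) < (value + 1) / 2).astype(np.uint8)

def decode(bits):
    return 2 * bits.mean() - 1

def G(x, W, alpha, b):
    """Deterministic single-hidden-layer network of Eqn. (3), with tanh."""
    return alpha @ np.tanh(W @ x + b)

def G_m(x, W, alpha, b, m):
    """'Ideal' SCNN of Eqn. (4): products via XNOR on bipolar streams;
    accumulation, activation, and the output layer are computed exactly."""
    N, n = W.shape
    out = 0.0
    for i in range(N):
        acc = 0.0
        for j in range(n):
            xs = bipolar_stream(x[j], m)
            ws = bipolar_stream(W[i, j], m)
            prod = 1 - (xs ^ ws)          # XNOR = bipolar multiplication
            acc += decode(prod)
        acc += decode(bipolar_stream(b[i], m))
        out += alpha[i] * np.tanh(acc)
    return out

n, N = 4, 8
x = rng.uniform(-1, 1, n)
W = rng.uniform(-1, 1, (N, n))
alpha = rng.uniform(-1, 1, N)
b = rng.uniform(-1, 1, N)

for m in (10, 100, 1000, 10000):
    print(m, abs(G_m(x, W, alpha, b, m) - G(x, W, alpha, b)))

The approximation error shrinks (stochastically) as m grows, which is exactly the behavior formalized in the next section.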

4 The Universal Approximation Property of SCNNs and BNNs

Fig. 2: The structure of SCNN of interest.

In this section, we prove that SCNNs and BNNs satisfy the universal approximation property with probability 1. More specifically, we first prove the property for SCNNs and then use SCNNs as a "bridge" to prove for BNNs. This two-step proof is due to the fact that directly proving the property for BNNs is difficult, as BNNs represent functions with binary input values.

4.1 Universal Approximation Property of SCNN

In this section we will prove a lemma on the closeness of stochastic approximation for the inputs of each neuron, a lemma on the closeness of approximations for the outputs, and finally extend the universal approximation theorem from [24] to SCNNs.

Lemma 1.

As the bit-stream length $m \rightarrow \infty$, the stochastic number $\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}$ converges to $\mathbf{w}_i^T \mathbf{x} + b_i$ almost surely.

Proof.

Let $\Omega$ be the sample space of all bit-streams generated to represent elements in $\mathbf{x}$, $\mathbf{w}_i$, and $b_i$. For each instance $\omega \in \Omega$, use the notations $\mathbf{x}_m(\omega)$, $\mathbf{w}_{i,m}(\omega)$, and $b_{i,m}(\omega)$ to represent the stochastic numbers (or vectors) calculated from the corresponding $m$-bit streams associated with $\omega$. Moreover, define three constant random variables representing the target real values, namely for each $\omega \in \Omega$,

$$\mathbf{x}(\omega) = \mathbf{x}, \quad \mathbf{w}_i(\omega) = \mathbf{w}_i, \quad b_i(\omega) = b_i. \qquad (5)$$

We shall prove that for almost every $\omega \in \Omega$:

$$\lim_{m \rightarrow \infty} \big( \mathbf{w}_{i,m}(\omega)^T \mathbf{x}_m(\omega) + b_{i,m}(\omega) \big) = \mathbf{w}_i^T \mathbf{x} + b_i. \qquad (6)$$

From the construction of the random variables and the strong law of large numbers, we have that for each such $\omega$ and each coordinate $j$, $x_{j,m}(\omega) \rightarrow x_j$, $w_{ij,m}(\omega) \rightarrow w_{ij}$, and $b_{i,m}(\omega) \rightarrow b_i$. Therefore, for any $\delta > 0$ there exists $M$ such that for all $m > M$ and all $j$, we have

$$|x_{j,m}(\omega) - x_j| < \delta, \quad |w_{ij,m}(\omega) - w_{ij}| < \delta, \quad |b_{i,m}(\omega) - b_i| < \delta.$$

Use an argument of triangle inequality to show

$$\big| \mathbf{w}_{i,m}(\omega)^T \mathbf{x}_m(\omega) + b_{i,m}(\omega) - \big( \mathbf{w}_i^T \mathbf{x} + b_i \big) \big| \leq C \delta, \qquad (7)$$

where $C$ is a constant depending only on $n$ and the magnitudes of $\mathbf{w}_i$ and $\mathbf{x}$. Since $\delta$ can be arbitrarily small, it implies

$$\lim_{m \rightarrow \infty} \big( \mathbf{w}_{i,m}(\omega)^T \mathbf{x}_m(\omega) + b_{i,m}(\omega) \big) = \mathbf{w}_i^T \mathbf{x} + b_i. \qquad (8)$$

Since this is true for almost every $\omega$, we conclude that

$$\Pr\Big( \lim_{m \rightarrow \infty} \big( \mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m} \big) = \mathbf{w}_i^T \mathbf{x} + b_i \Big) = 1. \qquad (9)$$

In other words, we have proved that as $m \rightarrow \infty$, the stochastic number $\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}$ almost surely converges to $\mathbf{w}_i^T \mathbf{x} + b_i$. ∎

Lemma 2.

If the sigmoidal function $\sigma$ has a bounded derivative, then the stochastic number $\sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m})$ almost surely converges to the real value $\sigma(\mathbf{w}_i^T \mathbf{x} + b_i)$ as the bit-stream length $m \rightarrow \infty$.

Proof.

We have the following inequality:

$$\big| \sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}) - \sigma(\mathbf{w}_i^T \mathbf{x} + b_i) \big| \leq L \, \big| \mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m} - \big( \mathbf{w}_i^T \mathbf{x} + b_i \big) \big|, \qquad (10)$$

where $L$ is the bound on $|\sigma'|$. For the commonly utilized activation functions, including the sigmoid, tanh (hyperbolic tangent), and ReLU functions, there is an upper bound on the derivative; the maximum absolute value of the derivative is often 1. Then, from Lemma 1 on the almost sure convergence of $\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}$ to $\mathbf{w}_i^T \mathbf{x} + b_i$, we arrive at the almost sure convergence of $\sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m})$ to $\sigma(\mathbf{w}_i^T \mathbf{x} + b_i)$. ∎
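The bounded-derivative assumption used in Lemma 2 can be checked numerically (our own quick sketch): for the sigmoid, tanh, and ReLU activations, the difference between activation outputs never exceeds the difference between their arguments.

import numpy as np

rng = np.random.default_rng(5)

acts = {
    "sigmoid": lambda z: 1 / (1 + np.exp(-z)),  # derivative bounded by 1/4
    "tanh": np.tanh,                            # derivative bounded by 1
    "relu": lambda z: np.maximum(z, 0.0),       # slope bounded by 1
}

a = rng.uniform(-5, 5, 100_000)
b = rng.uniform(-5, 5, 100_000)
for name, f in acts.items():
    # |f(a) - f(b)| <= L * |a - b| with L <= 1 for all three activations.
    print(name, np.all(np.abs(f(a) - f(b)) <= np.abs(a - b)))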

Based on the above lemmas and the original universal approximation theorem, we arrive at the following universal approximation theorem for SCNNs.

Theorem 4.1.

(Universal Approximation Theorem for SCNNs). For any continuous function $f$ defined on $I_n$ and any $\epsilon > 0$, we define an event that there exists an SCNN function $G_m$ in the form of Eqn. (4) that satisfies

$$|G_m(\mathbf{x}) - f(\mathbf{x})| < \epsilon \quad \text{for all } \mathbf{x} \in I_n. \qquad (11)$$

This event is satisfied almost surely (with probability 1).

Proof.

From the universal approximation theorem stated in [24], we know that there exists a function $G$ representing a deterministic neural network such that $|G(\mathbf{x}) - f(\mathbf{x})| < \epsilon/2$ for all $\mathbf{x} \in I_n$. For each positive integer $m$, define $G_m$ as the SCNN function obtained by replacing each parameter and input of $G$ with its $m$-bit stochastic representation. Then we have

$$|G_m(\mathbf{x}) - f(\mathbf{x})| \leq |G_m(\mathbf{x}) - G(\mathbf{x})| + |G(\mathbf{x}) - f(\mathbf{x})|. \qquad (12)$$

Applying Lemmas 1 and 2, we can bound the first term as

$$|G_m(\mathbf{x}) - G(\mathbf{x})| \leq \sum_{i=1}^{N} |\alpha_i| \, \big| \sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}) - \sigma(\mathbf{w}_i^T \mathbf{x} + b_i) \big|, \qquad (13)$$

where $N$ is the size of the hidden layer in the neural network represented by $G$, and $\alpha_i$ is the $i$-th weight in the output layer.

Deriving from Lemma 2, we know that for each $i$, with probability 1 there exists $M_i$ such that

$$|\alpha_i| \, \big| \sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}) - \sigma(\mathbf{w}_i^T \mathbf{x} + b_i) \big| < \frac{\epsilon}{2N} \qquad (14)$$

for $m > M_i$. Incorporating into Eqn. (13), we have $|G_m(\mathbf{x}) - G(\mathbf{x})| < \epsilon/2$ for $m > \max_i M_i$. Further incorporating into Eqn. (12), we have $|G_m(\mathbf{x}) - f(\mathbf{x})| < \epsilon$ for $m > \max_i M_i$. Thereby we have formally proved that the universal approximation theorem holds with probability 1 for SCNNs. ∎

Besides the universal approximation property, it is also critical to derive an appropriate bound on the bit-stream length $m$ in order to provide insights for actual neural network implementations. The next theorem gives an explicit bound on the bit length for close approximation with high probability.

Theorem 4.2.

For the SCNN function $G_m$ in Theorem 4.1 and any $\rho \in (0, 1)$, let $m$ be any integer that satisfies

$$m \geq \frac{C}{\epsilon^2 \rho}, \qquad (15)$$

where $C$ is a constant depending on the hidden-layer size $N$, the input dimension $n$, and the magnitudes of the weights of $G$. Then with probability at least $1 - \rho$, $|G_m(\mathbf{x}) - f(\mathbf{x})| < \epsilon$ holds for all $\mathbf{x} \in I_n$.

Proof.

Different from the above proof based on the strong law of large numbers (almost sure convergence), deriving bounds is more related to the weak law (convergence in probability). As the former naturally ensures the latter, we have the following convergence in probability property: for any $\rho > 0$, there exists $M$ such that for any $m > M$, we have

$$\Pr\big( |G_m(\mathbf{x}) - G(\mathbf{x})| \geq \epsilon/2 \big) \leq \rho. \qquad (16)$$

Based on a reverse order of the above proof of universal approximation, the above inequality is satisfied when each term in Eqn. (13) is small with high probability, i.e., for each hidden neuron $i$,

$$\Pr\Big( |\alpha_i| \, \big| \sigma(\mathbf{w}_{i,m}^T \mathbf{x}_m + b_{i,m}) - \sigma(\mathbf{w}_i^T \mathbf{x} + b_i) \big| \geq \frac{\epsilon}{2N} \Big) \leq \frac{\rho}{N}. \qquad (17)$$

Furthermore, the above inequality is satisfied when every individual stochastic number $X_m$ involved in neuron $i$ (each $x_{j,m}$, $w_{ij,m}$, and $b_{i,m}$) deviates from its target real value $x$ by at least a tolerance $t$ (proportional to $\epsilon$) with only small probability:

$$\Pr\big( |X_m - x| \geq t \big) \leq \frac{\rho}{(2n+1)N}. \qquad (18)$$

As each bit in a stochastic number satisfies a binary distribution with expectation $p$, the maximum variance of a bit is $1/4$. Due to the i.i.d. property, the maximum variance of the $m$-bit stochastic number $X_m$ is $1/(4m)$. According to Chebyshev's inequality $\Pr(|X_m - \mathbb{E}[X_m]| \geq t) \leq \mathrm{Var}(X_m)/t^2$, we obtain

$$\Pr\big( |X_m - x| \geq t \big) \leq \frac{1}{4mt^2}. \qquad (19)$$

Combining Eqns. (18) and (19), we derive a sufficient bit-stream length as

$$m \geq \frac{(2n+1)N}{4 t^2 \rho}, \qquad (20)$$

which, with $t$ proportional to $\epsilon$, yields the bound of Eqn. (15). ∎
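The Chebyshev step of the proof can be verified empirically for a single unipolar stochastic number (our own sketch; the chosen values of p, m, and t are arbitrary): the observed probability that an m-bit estimate misses its target by at least t stays below the bound 1/(4mt^2).

import numpy as np

rng = np.random.default_rng(6)

p, m, t, trials = 0.4, 200, 0.05, 100_000

# Draw many independent m-bit streams and measure how often the decoded
# value misses the target p by at least t.
bits = rng.random((trials, m)) < p
estimates = bits.mean(axis=1)
empirical = np.mean(np.abs(estimates - p) >= t)

# Var(X_m) <= 1/(4m), so Chebyshev gives P(|X_m - p| >= t) <= 1/(4 m t^2).
chebyshev = 1 / (4 * m * t**2)
print(empirical, chebyshev)   # the empirical rate is below the bound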

4.2 Universal Approximation of BNNs and Equivalence between SCNNs and BNNs

In this section we start from the formal definition of the BNNs of interest and then state the universal approximation property. Similar to the definition of SCNNs in Section 3, here we focus on an "ideal" BNN that is independent of actual BNN implementations. An illustration is shown in Figure 3.

Fig. 3: The structure of BNN of interest.
Definition 3.

A BNN of interest is defined as a function $G_B: \{-1, +1\}^{n_B} \rightarrow \mathbb{R}$, satisfying:

$$G_B(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \sigma\big(\mathbf{w}_i^T \mathbf{x} + b_i\big), \qquad (21)$$

where the input vector $\mathbf{x}$ and the weight vector $\mathbf{w}_i$ for each $i$ represent vectors of binary values. Let $n_B$ denote the dimensionality of these two vectors (the dimension of the inputs). $b_i$ is a binary bias value. The computation in Eqn. (21) follows the BNN rules as described in Section 2.2. Similar to SCNNs, we also consider here accurate activation and output layer calculation. This is reasonable and also applied in BNN deployments because the output layer size is typically very small.

The Equivalence of SCNNs and BNNs: BNNs can be transformed into SCNNs, and vice versa. We illustrate the former case as an example. Let $m$ denote the length of a stochastic number; the number of inputs in the SCNN then becomes $n = n_B / m$. The first input stochastic number is formed by the first $m$ bits of $\mathbf{x}$, the second input stochastic number by the next $m$ bits, and so on. This also applies to the weight stochastic numbers. The bias stochastic number can be a sign extension of $b_i$ (the binary bias repeated over the $m$ bits). In this way the BNN is transformed into the SCNN described in Definition 2. The transformation from an SCNN to a BNN is similar.
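The transformation argument can be made concrete with a short sketch (our own, with hypothetical variable names): an n_B-bit binary input vector and weight vector are regrouped into n = n_B/m bipolar stochastic numbers of m bits each, and the SCNN neuron's pre-activation matches the BNN neuron's binary dot product up to the 1/m scaling inherent in SC decoding (which pre-scaling can absorb).

import numpy as np

rng = np.random.default_rng(7)

m, n = 16, 4                  # bit-stream length and number of SCNN inputs
n_B = m * n                   # total number of binary inputs of the BNN neuron

x = rng.choice([-1, 1], n_B)  # BNN binary inputs
w = rng.choice([-1, 1], n_B)  # BNN binary weights

# BNN view: one wide binary dot product (additions/subtractions only).
bnn_preact = np.dot(w, x)

# SCNN view: regroup the same bits into n stochastic numbers of m bits each;
# each group's XNOR (elementwise product) stream decodes to the mean of w_k * x_k.
x_groups = x.reshape(n, m)
w_groups = w.reshape(n, m)
scnn_preact = sum((w_groups[j] * x_groups[j]).mean() for j in range(n))

# The two computations agree up to the 1/m scaling of SC decoding.
print(np.isclose(scnn_preact, bnn_preact / m))   # True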

Because of the universal approximation property of SCNNs and their equivalence with BNNs, we arrive at the universal approximation property for BNNs as well.

Theorem 4.3.

(Universal Approximation Theorem for BNNs). For any continuous function $f$ defined on $I_n$ and any $\epsilon > 0$, we define an event that there exists a BNN function $G_B$ in the form of Eqn. (21) that satisfies

$$|G_B(\mathbf{x}_B) - f(\mathbf{x})| < \epsilon \quad \text{for all } \mathbf{x} \in I_n, \qquad (22)$$

where $\mathbf{x}_B \in \{-1, +1\}^{n_B}$ is the binary (bit-stream) representation of $\mathbf{x}$. This event is satisfied almost surely (with probability 1).

Proof.

Apply Theorem 4.1 to obtain a close approximation of $f$ with SCNN functions, and then build a BNN function that closely approximates the SCNN function via the transformation described above. ∎

The equivalence of SCNNs and BNNs also leads to the same bound, defined as the total number of input bits required to achieve universal approximation. The reasoning uses proof by contradiction. Suppose that SCNNs have a lower bound, i.e., they require fewer total input bits than BNNs. Then there exists an SCNN with $n$ inputs, each an $m$-bit stream, that satisfies the universal approximation property with $n \cdot m$ below the BNN bound. From the above equivalence analysis we can construct a BNN with $n \cdot m$ input bits that also achieves such a property, which is smaller and thus in contradiction with the BNN bound. And vice versa.

5 Energy Complexity and Hardware Design Implications

5.1 Energy Complexity Analysis

The energy complexity, as defined and described in [32, 33], specifies the asymptotic energy consumption with the growth of neural network size. It can be perceived as the product of the time complexity and the degree of parallelism, and therefore is important for hardware implementations and evaluations. As an example, when the input size (number of bits) is $k$, a ripple-carry adder has an energy complexity of $O(k)$ whereas a multiplier has an energy complexity of $O(k^2)$. On the other hand, both of their time complexities are $O(k)$. The reason is that the ripple-carry adder is a sequential computation, whereas the multiplier performs many bit-level operations in parallel in each time step.

Next we provide an analysis of the energy complexity of the key calculation $\sigma(\mathbf{w}_i^T \mathbf{x} + b_i)$ in SCNNs and in BNNs. From the equivalence analysis in Section 4.2, we have $n_B = n \cdot m$ for satisfying the universal approximation property. According to the hardware implementation details in Section 2, the multiplication of two bits has energy complexity $O(1)$, so the multiplication of two stochastic numbers has energy complexity $O(m)$. The addition of a set of $n$ stochastic numbers has energy complexity $O(n \cdot m)$ using simple calculation units like multiplexers, or $O(n \cdot m \cdot \log n)$ using more accurate accumulation units like the approximate parallel counter (APC) [12]. As a result, the overall energy complexity of computing $\sigma(\mathbf{w}_i^T \mathbf{x} + b_i)$ is $O(n \cdot m)$ (for less accurate results) or $O(n \cdot m \cdot \log n)$ (for more accurate results). For the whole layer with $N$ neurons, the overall energy complexity is $O(N \cdot n \cdot m)$ or $O(N \cdot n \cdot m \cdot \log n)$. The energy complexity for BNNs with $n_B = n \cdot m$ input bits per neuron is the same due to the equivalence.
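As a rough illustration of the energy-complexity argument (our own counting sketch, not a circuit-level energy model; it reflects the simpler multiplexer/counter-style accumulation), the functions below count bit-level operations for one SCNN neuron with n inputs and m-bit streams, and for the corresponding BNN neuron with n_B = n * m binary inputs; both counts grow as Theta(n * m).

def scnn_neuron_ops(n, m):
    """Bit-operations for one SCNN neuron: n XNOR gates active for m cycles,
    plus an accumulation unit (mux/counter) also active for m cycles."""
    multiplications = n * m   # one XNOR evaluation per input bit
    accumulation = n * m      # roughly one add/select event per input bit
    return multiplications + accumulation

def bnn_neuron_ops(n_B):
    """Bit-operations for one BNN neuron: n_B XNOR gates plus a popcount
    tree with on the order of n_B full-adder cells."""
    return n_B + n_B

for n, m in [(16, 64), (64, 256), (256, 1024)]:
    print(scnn_neuron_ops(n, m), bnn_neuron_ops(n * m))   # equal counts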

5.2 Hardware Design Implications

Despite the same energy complexity, the actual hardware implementations of SCNNs and BNNs are different. As discussed before, SCNNs "stretch" in the temporal domain whereas BNNs "span" in the spatial domain. This is in fact the most important advantage of SCNNs. For actual BNN implementations, there is often an imbalance between the input I/O size and the computation requirement. The total computation requirement (see the energy complexity discussion) is low, but the input requirement is huge, even compared with conventional neural networks. This makes actual BNN implementations I/O-bound systems, as in actual hardware tapeouts the I/O clock frequency is much lower than the computation clock frequency. In other words, the advantage of low and simple computation in BNNs is often not fully exploited in actual deployments [2, 3]. This limitation can be effectively mitigated by SCNNs, because the spatial requirement is traded off against the temporal requirement. In this respect, SCNNs can use a lower I/O count and thereby make more effective use of hardware computation and memory storage resources than their BNN counterparts, making them more suitable for hardware implementations.

On the other hand, BNNs are more heavily optimized in the literature than SCNNs. In particular, a significant body of work [9, 34] is dedicated to effective training methods for BNNs that make efficient use of randomization techniques. In contrast, research on SCNNs has mainly focused on the hardware aspect [10, 11, 22]. For training, these works use a straightforward approach that directly transforms every input and weight of a conventional neural network into stochastic numbers. As a result, it will be effective to take advantage of the training methods developed for BNNs and then transform the trained networks into SCNNs, which are more suitable for hardware implementations, using the method described in Section 4.2. In this way, we can exploit the advantages while hiding the weaknesses of both SCNNs and BNNs.

6 Conclusion

SCNNs and BNNs are low-complexity variants of deep neural networks that are particularly suitable for hardware implementations. In this paper, we conduct a theoretical analysis and comparison between SCNNs and BNNs in terms of the universal approximation property, energy complexity, and suitability for hardware implementations. More specifically, we prove that the "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1. The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove it for BNNs. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity; in other words, they have the same asymptotic energy consumption as the network size grows. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and present a way of effectively exploiting the advantages of each type while hiding the weaknesses.

References

  • [1] Mahajan, D., Park, J., Amaro, E., Sharma, H., Yazdanbakhsh, A., Kim, J.K., Esmaeilzadeh, H.: Tabla: A unified template-based framework for accelerating statistical machine learning. In: High Performance Computer Architecture (HPCA), 2016 IEEE International Symposium on, IEEE (2016) 14–26
  • [2] Zhao, R., Song, W., Zhang, W., Xing, T., Lin, J.H., Srivastava, M., Gupta, R., Zhang, Z.: Accelerating binarized convolutional neural networks with software-programmable fpgas. In: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM (2017) 15–24
  • [3] Umuroglu, Y., Fraser, N.J., Gambardella, G., Blott, M., Leong, P., Jahre, M., Vissers, K.: Finn: A framework for fast, scalable binarized neural network inference. In: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM (2017) 65–74
  • [4] http://www.techradar.com/news/computing-components/processors/google-s-tensor-processing-unit-explained-this-is-what-the-future-of-computing-looks-like-1326915
  • [5] https://www.sdxcentral.com/articles/news/intels-deep-learning-chips-will-arrive-2017/2016/11/
  • [6] Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M.A., Dally, W.J.: Eie: efficient inference engine on compressed deep neural network. In: Proceedings of the 43rd International Symposium on Computer Architecture, IEEE Press (2016) 243–254
  • [7] Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., Li, L., Chen, T., Xu, Z., Sun, N., et al.: Dadiannao: A machine-learning supercomputer. In: Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, IEEE Computer Society (2014) 609–622
  • [8] Moons, B., Uytterhoeven, R., Dehaene, W., Verhelst, M.: 14.5 envision: A 0.26-to-10tops/w subword-parallel dynamic-voltage-accuracy-frequency-scalable convolutional neural network processor in 28nm fdsoi. In: Solid-State Circuits Conference (ISSCC), 2017 IEEE International, IEEE (2017) 246–247
  • [9] Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural networks with binary weights during propagations. In: Advances in neural information processing systems. (2015) 3123–3131
  • [10] Ren, A., Li, Z., Ding, C., Qiu, Q., Wang, Y., Li, J., Qian, X., Yuan, B.: Sc-dcnn: highly-scalable deep convolutional neural network using stochastic computing. In: Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, ACM (2017) 405–418
  • [11] Yu, J., Kim, K., Lee, J., Choi, K.: Accurate and efficient stochastic computing hardware for convolutional neural networks. In: 2017 IEEE International Conference on Computer Design (ICCD), IEEE (2017) 105–112
  • [12] Kim, K., Lee, J., Choi, K.: Approximate de-randomizer for stochastic circuits. In: 2015 International SoC Design Conference (ISOCC), IEEE (2015)
  • [13] Merolla, P.A., Arthur, J.V., Alvarez-Icaza, R., Cassidy, A.S., Sawada, J., Akopyan, F., Jackson, B.L., Imam, N., Guo, C., Nakamura, Y., et al.: A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345(6197) (2014) 668–673
  • [14] Li, B., Najafi, M.H., Yuan, B., Lilja, D.J.: Quantized neural networks with new stochastic multipliers
  • [15] Neftci, E.: Stochastic neuromorphic learning machines for weakly labeled data. In: Computer Design (ICCD), 2016 IEEE 34th International Conference on, IEEE (2016) 670–673
  • [16] Andreou, A.S., Chatzis, S.P.: Software defect prediction using doubly stochastic poisson processes driven by stochastic belief networks. Journal of Systems and Software 122 (2016) 72–82
  • [17] Csáji, B.C.: Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary 24 (2001) 48
  • [18] Brown, B.D., Card, H.C.: Stochastic neural computation. i. computational elements. IEEE Transactions on computers 50(9) (2001) 891–905
  • [19] Li, H., Wei, T., Ren, A., Zhu, Q., Wang, Y.: Deep reinforcement learning: Framework, applications, and embedded implementations. arXiv preprint arXiv:1710.03792 (2017)
  • [20] Li, J., Yuan, Z., Li, Z., Ding, C., Ren, A., Qiu, Q., Draper, J., Wang, Y.: Hardware-driven nonlinear activation for stochastic computing based deep convolutional neural networks. In: 2017 International Joint Conference on Neural Networks (IJCNN), IEEE (2017) 1230–1236
  • [21] Yuan, Z., Li, J., Li, Z., Ding, C., Ren, A., Yuan, B., Qiu, Q., Draper, J., Wang, Y.: Softmax regression design for stochastic computing based deep convolutional neural networks. In: Proceedings of the on Great Lakes Symposium on VLSI 2017, ACM (2017) 467–470
  • [22] Li, Z., Ren, A., Li, J., Qiu, Q., Yuan, B., Draper, J., Wang, Y.: Structural design optimization for deep convolutional neural networks using stochastic computing. In: Proceedings of the Conference on Design, Automation & Test in Europe, European Design and Automation Association (2017) 250–253
  • [23] David, J.P., Kalach, K., Tittley, N.: Hardware complexity of modular multiplication and exponentiation. IEEE Transactions on Computers 56(10) (2007)
  • [24] Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4) (1989) 303–314
  • [25] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5) (1989) 359–366
  • [26] Barron, A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information theory 39(3) (1993) 930–945
  • [27] Delalleau, O., Bengio, Y.: Shallow vs. deep sum-product networks. In: Advances in Neural Information Processing Systems. (2011) 666–674
  • [28] Montufar, G.F., Pascanu, R., Cho, K., Bengio, Y.: On the number of linear regions of deep neural networks. In: Advances in neural information processing systems. (2014) 2924–2932
  • [29] Telgarsky, M.: Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485 (2016)
  • [30] Liang, S., Srikant, R.: Why deep neural networks for function approximation? arXiv preprint arXiv:1610.04161 (2016)
  • [31] Zhao, L., Liao, S., Wang, Y., Tang, J., Yuan, B.: Theoretical properties for neural networks with weight matrices of low displacement rank. arXiv preprint arXiv:1703.00144 (2017)
  • [32] Martin, A.J.: Towards an energy complexity of computation. Information Processing Letters 77(2-4) (2001) 181–187
  • [33] Khude, N., Kumar, A., Karnik, A.: Time and energy complexity of distributed computation in wireless sensor networks. In: INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE. Volume 4., IEEE (2005) 2625–2637
  • [34] Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061 (2016)