NeuPart: Using Analytical Models to Drive Energy-Efficient Partitioning of CNN Computations on Cloud-Connected Mobile Clients

05/09/2019 ∙ by Susmita Dey Manasi, et al. ∙ University of Minnesota

Data processing on convolutional neural networks (CNNs) places a heavy burden on energy-constrained mobile platforms. This work optimizes energy on a mobile client by partitioning CNN computations between in situ processing on the client and offloaded computations in the cloud. A new analytical CNN energy model is formulated, capturing all major components of the in situ computation, for ASIC-based deep learning accelerators. The model is benchmarked against measured silicon data. The analytical framework is used to determine the energy-optimal partition point between the client and the cloud at runtime. On standard CNN topologies, partitioned computation is demonstrated to provide significant energy savings on the client over fully cloud-based or fully in situ computation. For example, at a 60 Mbps bit rate and 0.5 W transmission power, the optimal partition for AlexNet [SqueezeNet] saves up to 47.4% [70.0%] energy over fully cloud-based computation, and 31.3% over fully in situ computation.

I Introduction

I-A Motivation

Machine learning using deep convolutional neural networks (CNNs) constitutes a powerful approach that is capable of performing a wide range of visual processing tasks with high accuracy. Due to the highly energy-intensive nature of CNN computations, today's deep learning (DL) engines using CNNs are largely based in the cloud [1, 2], where energy is less of an issue than on battery-constrained mobile clients. Although a few simple emerging applications such as facial recognition are performed in situ on mobile processors, today's dominant mode is to offload DL computations from the mobile device to the cloud. The deployment of specialized hardware accelerators for embedded DL, enabling energy-efficient execution of CNN tasks, is the next frontier.

Limited client battery life places stringent energy limitations on embedded DL systems. This work focuses on a large class of inference engine applications where battery life considerations are paramount over performance: e.g., for a health worker in a remote area, who uses a mobile client to capture images processed using DL to diagnose cancer [3], a farmer who takes crop photographs and uses DL to diagnose plant diseases [4], or an unmanned aerial vehicle utilizing DL for monitoring populations of wild birds [5]. In all these examples, client energy is paramount for the operator in the field and processing time is secondary, i.e., while arbitrarily long processing times are unacceptable, somewhat slower processing times are acceptable for these applications. Moreover, for client-focused design, it is reasonable to assume that the datacenter has plentiful power supply, and the focus is on minimizing client energy rather than cloud energy. While this paradigm may not apply to all DL applications (e.g., our solution is not intended to be applied to applications such as autonomous vehicles, where high-speed data processing is important), the class of energy-critical client-side applications without stringent latency requirements encompasses a large corpus of embedded DL tasks that require energy optimization at the client end.

Fig. 1: An AlexNet computation, showing client/cloud partitioning.

To optimize client energy, this work employs computation partitioning between the client and the cloud. Fig. 1 shows an inference engine computation on AlexNet [6], a representative CNN topology, for recognition of an image from the camera of a mobile client. If the CNN computation is fully offloaded to the cloud, the image from the camera is sent to the datacenter, incurring a communication overhead corresponding to the number of data bits in the compressed image. Fully in situ CNN computation on the mobile client involves no communication, but drains its battery during the energy-intensive computation.

Fig. 2: (a) The cumulative computation energy up to each layer of AlexNet. (b) Number of bits of compressed output data at each layer, to be transmitted to the cloud.

Computation partitioning between the client and the cloud represents a middle ground: the computation is partially processed in situ, up to a specific CNN layer, on the client. The data is then transferred to the cloud to complete the computation, after which the inference results are sent back to the client, as necessary. We propose NeuPart, a partitioner for DL tasks that minimizes client energy. NeuPart is based on an analytical modeling framework, and analytically identifies, at runtime, the energy-optimal partition point at which the partial computation on the client is sent to the cloud.

Fig. 2 concretely illustrates the tradeoff between communication and computation for AlexNet. In (a), we show the cumulative computation energy (obtained from our analytical CNN energy model presented in Section IV) from the input to a specific layer of the network. In (b), we show the volume of compressed data that must be transmitted when the partial in situ computation is transmitted to the cloud. The data computed in internal layers of the CNN tend to have significant sparsity (over 80%, as documented later in Fig. 9): NeuPart leverages this sparsity to transmit only nonzero data at the point of partition, thus limiting the communication cost.

The net energy cost for the client for a partition at the $L$-th layer can be computed as:

$E_{cost}^{L} = E_{comp}^{L} + E_{trans}^{L}$    (1)

where $E_{comp}^{L}$ is the processing energy on the client up to the $L$-th CNN layer, and $E_{trans}^{L}$ is the energy required to transmit this partially computed data from the client to the cloud. The inference result corresponds to a trivial amount of data to return the identified class, and involves negligible energy.

NeuPart focuses on minimizing $E_{cost}^{L}$ for the energy-constrained client (as stated earlier, this work targets energy-constrained clients and relatively latency-insensitive applications such as [3, 4, 5]). For the data in Fig. 2, $E_{comp}^{L}$ increases monotonically as we move deeper into the network, but $E_{trans}^{L}$ can reduce greatly. Thus, the optimal partitioning point for $E_{cost}^{L}$ here lies at an intermediate layer.

I-B Contributions of NeuPart

The specific contributions of NeuPart are as follows. First, unlike prior computation partitioning works [7, 8, 9], NeuPart addresses the client/cloud partitioning problem for specialized DL accelerators that are more energy-efficient than CPUs/GPUs/FPGAs, and demonstrates the optimal tradeoff point under various communication environments. Second, the core of NeuPart is driven by a new analytical model (we name our model 'CNNergy') for the CNN energy, $E_{comp}$, that accounts for the complexities of scheduling computations over multidimensional data. CNNergy captures key parameters of the hardware and maps the hardware to perform computation on various CNN topologies. CNNergy is benchmarked on several CNNs: AlexNet, SqueezeNet-v1.1 [10], VGG-16 [11], and GoogleNet-v1 [12]. CNNergy is far more detailed than prior energy models [13] and incorporates implementation-specific details of a DL accelerator, capturing all major components of the in situ computation, including the cost of arithmetic computations, memory and register access, control, and clocking.

It is important to note that CNNergy may potentially have utility beyond this work, and has been open-sourced at https://github.com/manasiumn37/CNNergy. For example, it provides a breakdown of the total energy into specific components, such as the data access energy from different memory levels of a DL accelerator, the data access energy associated with each CNN data type at each level of memory, and the MAC computation energy. CNNergy can also be used to explore design-phase tradeoffs, such as analyzing the impact of changing the on-chip memory size on the total execution energy of a CNN.

The paper is organized as follows. Section II discusses prior approaches to computational partitioning, and is followed by Section III, where the general framework of CNNergy for energy estimation on CNN accelerators is outlined. Next, Sections IV–VI present the details of CNNergy and validate the model in several ways, including against silicon data. A method for performing the NeuPart client/cloud partitioning at runtime is discussed in Section VII, after which Section VIII evaluates NeuPart on widely used CNN topologies. The paper concludes in Section IX.

II Related Work

Computational partitioning has previously been used in the general context of distributed processing [14]. A few prior works [7, 8, 9] have utilized computation partitioning in the context of mobile DL. In [7], tasks are offloaded to a server from nearby IoT devices for best server utilization, but no attempt is made to minimize edge device energy. The work in [9] uses limited application-specific profiling data to schedule computation partitioning. Another profiling-based scheme [8] uses client-specific profiling data to form a regression-based computation partitioning model for each device. A limitation of profiling-based approaches is that they require profiling data for each mobile device or each DL application, which implies that a large number of profiling measurements are required for real-life deployment. Moreover, profiling-based methods require the hardware to already be deployed and cannot support design-phase optimizations. Furthermore, all these prior approaches use a CPU/GPU based platform for the execution of DL workloads.

In contrast with prior methods, NeuPart works with specialized DL accelerators, which are orders of magnitude more energy-efficient than general-purpose machines [1, 15, 16], for client/cloud partitioning. NeuPart specifically leverages the highly structured nature of computations on CNN accelerators, and shows that an analytical model predicts the client energy accurately (as demonstrated in Section V). The analytical framework used in the NeuPart CNNergy incorporates implementation-specific details that are not modeled in prior works. For example, [8] uses (a) an 8-bit uncompressed raw image as the input to transmit, which is not typical: in a real system, images are compressed before transmission to reduce the communication overhead; and (b) a higher bit width (i.e., 32-bit data) for the intermediate layers, while ignoring any data sparsity at the intermediate CNN layers, thereby failing to fully leverage the inherent computation–communication tradeoff of CNNs and the benefit of the computation partitioning scheme. As mentioned previously, the analytical model is likely to have applications in work on design-stage optimizations, beyond just the computational partitioning work in this paper.

III Computations on CNN Hardware

III-A Fundamentals of CNNs

The computational layers in CNNs can be categorized into three types: convolution (Conv), fully connected (FC), and pooling (Pool). The computation in a CNN is typically dominated by the Conv layers. In each layer, the computation involves three types of data:

  • ifmap, the input feature map

  • filter, the filter weights, and

  • psum, the intermediate partial sums.

Fig. 3: Illustration of ifmap, filter, and ofmap in a Conv layer.
Parameter   Description
$R$   Height/width of a filter
$H$   Padded height/width of an ifmap
$E$   Height/width of an ofmap
$C$   #of channels in an ifmap and filter
$M$   #of 3D filters in a layer
$M$   #of channels in an ofmap
$U$   Convolution stride

TABLE I: Parameters for a CNN layer shape.

Table I summarizes the parameters associated with a convolution layer. As shown in Fig. 3, for a Conv layer, the filter and ifmap are both 3D data types consisting of multiple 2D planes (channels). Both the ifmap and filter have the same number of channels, $C$, while typically $H \gg R$ and $E \gg R$.

During the convolution, an element-wise multiplication between the filter and the green 3D region of the ifmap in Fig. 3 is followed by the accumulation of all products (i.e., psums), and results in one element shown by the green box in the output feature map (ofmap). Each channel of the filter slides through its corresponding channel of the ifmap with a stride $U$, repeating similar multiply-accumulate (MAC) operations to produce a full 2D plane ($E \times E$) of the ofmap. A nonlinear activation function (e.g., a rectified linear unit, ReLU) is applied after each layer, introducing sparsity (i.e., zeros) at the intermediate layers, which can be leveraged to reduce computation.

The above operation is repeated for all $M$ filters to produce the $M$ 2D planes of the ofmap, i.e., the number of channels in the ofmap equals the number of 3D filters in that layer. Due to the nature of the convolution operation, there is ample opportunity for data reuse in a Conv layer. FC layers are similar to Conv layers but are smaller in size, and produce a 1D ofmap. Computations in Pool layers serve to reduce the dimensionality of the ofmaps produced from the Conv layers by storing the maximum/average value over a window of the ofmap.
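To make the layer-shape arithmetic concrete, the short Python sketch below computes the ofmap dimension and MAC count of a square Conv layer from the Table I parameters; the function name and the AlexNet Conv1 numbers in the usage line are illustrative, not taken from CNNergy.

```python
def conv_layer_shape(H, R, C, M, U):
    """Ofmap size and MAC count for a square Conv layer.

    H: padded ifmap height/width, R: filter height/width,
    C: #of channels in ifmap/filter, M: #of 3D filters, U: stride.
    """
    E = (H - R) // U + 1           # ofmap height/width
    macs = R * R * C * E * E * M   # one MAC per filter weight per ofmap element
    return E, macs

# Example: AlexNet Conv1 (227x227x3 padded ifmap, 96 filters of 11x11x3, stride 4)
E, macs = conv_layer_shape(H=227, R=11, C=3, M=96, U=4)
print(E, macs)   # 55, 105415200 MACs
```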

III-B Executing CNN Computations on a Custom ASIC

III-B1 Architectural Features of CNN Hardware Accelerators

Since the inference task of a CNN comprises a very structured and fixed set of computations (i.e., MAC, nonlinear activation, pooling, and data access for the MAC operands), specialized hardware accelerators are very suitable for their execution. Various dedicated accelerators have been proposed in the literature for efficient processing of CNNs [1, 17, 16, 18, 19]. The architecture in the Google TPU [1] consists of a 2D array of parallel MAC computation units, a large on-chip buffer for the storage of ifmap and psum data, additional local storage inside the MAC computation core for the filter data, and finally, off-chip memory to store all the feature maps and filters together. Similarly, along with an array of parallel processing elements, the architectures in [19, 16, 18] use a separate on-chip SRAM to store a chunk of filter, ifmap, and psum data, and an external DRAM to completely store all the ifmap/ofmap and filter data. In [16, 17], local storage is used inside each processing element while allowing data communication between processing elements to facilitate better reuse of data. Additionally, [19, 18, 17] exploit the inherent data sparsity in the internal layers of CNNs to save computation energy.

It is evident that the key architectural features of these accelerators are fundamentally similar: an array of processing elements to perform neural computation in parallel, multiple levels of memory for fast data access, and greater data reuse.

Fig. 4: General framework of the analytical CNN energy model (CNNergy).

III-B2 General Framework of CNNergy

Since the goal of NeuPart is to minimize client energy, we develop an analytical model (CNNergy) for energy dissipation in a CNN hardware architecture. The general framework of CNNergy is illustrated in Fig. 4. We use CNNergy to determine the in situ computation energy ($E_{comp}$ in (1)), accounting for scheduling and computation overheads.

One of the largest contributors to energy is the cost of data access from memory. Data reuse is therefore critical for energy-efficient execution of CNN computations, reducing unnecessary high-energy memory accesses, particularly for the ifmap and filter weights; it is used in [1, 17, 16, 18, 19]. This may involve, for example, ifmap and filter weight reuse across convolution windows, ifmap reuse across filters, and reduction of psum terms across channels. Given the accelerator memory hierarchy, the number of parallel processing elements, and the CNN layer shape, Block 1 of Fig. 4 is an automated scheme for scheduling MAC computations while maximizing data reuse. The detailed methodology for obtaining these scheduling parameters is presented in Section IV-C.

Depending on the scheduling parameters, the subvolume of the ifmap to be processed at a time is determined. Block 2 then computes the corresponding energy for the MAC operations and associated data accesses. The computation in Block 2 is repeated to process the entire data volume in a layer, as detailed in Section IV-D.

The framework of CNNergy is general and its principles apply to a large class of CNN accelerators. However, to validate the framework, we demonstrate it on a specific platform, Eyeriss [17], for which ample performance data is available, including silicon measurements, to validate CNNergy. Eyeriss has an array of processing elements (PEs), each with:

  • a multiply-accumulate (MAC) computation unit.

  • register files (RFs) for filter, ifmap, and psum data.

We define $RF_{filt}$, $RF_{ifmap}$, and $RF_{psum}$ as the maximum number of $b$-bit filter, ifmap, and psum elements that can be stored in a PE.

The accelerator consists of four levels of memory: DRAM, global SRAM buffer (GLB), inter-PE RF access, and local RF within a PE. During the computations of a layer, filters are loaded from DRAM to the RF. In the GLB, storage is allocated for psum and ifmap. After loading data from DRAM, ifmaps are stored into the GLB to be reused from the RF level. The irreducible psums navigate through GLB and RF as needed. After complete processing of a 3D ifmap, the ofmaps are written back to DRAM.

IV Analytical CNN Energy Model

We formulate an analytical model (CNNergy) for the CNN processing energy (used in (1)), $E_{comp}^{L}$, up to the $L$-th layer, as

$E_{comp}^{L} = \sum_{i=1}^{L} E_{layer}^{i}$    (2)

where $E_{layer}^{i}$ is the energy required to process layer $i$ of the CNN. To keep the notation compact, we drop the index "$i$" in the remainder of this section. We can write $E_{layer}$ as:

$E_{layer} = E_{MAC} + E_{ctrl} + E_{data}$    (3)

where $E_{MAC}$ is the energy to compute the MAC operations associated with the layer, $E_{ctrl}$ represents the energy associated with the control and clocking circuitry in the accelerator, and $E_{data}$ is the memory data access energy,

$E_{data} = E_{data}^{on\text{-}chip} + E_{data}^{DRAM}$    (4)

i.e., the sum of the data access energy from on-chip memory (from the GLB, inter-PE communication, and the RF), and from the off-chip DRAM.

The computation of these energy components, particularly the data access energy, is complicated by their dependence on the data reuse pattern in the CNN. In the following subsections, we develop a heuristic for optimal data reuse and describe the methodology in our CNNergy for estimating these energy components.

Fig. 5: A simplified schematic illustrating how CNNergy processes an ifmap (for one 3D filter).

IV-A Conceptual Illustration of CNNergy

Fig. 5 illustrates how the 3D ifmap is processed by convolving one 3D filter with the ifmap to obtain one 2D channel of the ofmap; this is repeated over all $M$ filters to obtain all channels of the ofmap. Due to the large volume of data in a CNN layer and the limited availability of on-chip storage (register files and SRAM), the data is divided into smaller subvolumes, each of which is processed by the PE array in one pass to generate psums, where the capacity of the PE array and local storage determine the amount of data that can be processed in a pass.

All psums are accumulated to produce the final ofmap entry; if only a subset of psums are accumulated, then the generated psums are said to be irreducible. The pink region of size $q \times H_I \times W_I$ shows the ifmap volume that is covered in one pass, while the green region shows the volume that is covered in multiple passes before a write-back to DRAM.

As shown in the figure (for reasons provided in Section IV-C), consecutive passes first process the ifmap in the $x$-direction, then the $y$-direction, and finally, the $z$-direction. After a pass, irreducible psums are written back to the GLB, to be later consolidated with the remainder of the computation to build ofmap entries. After processing the full $z$-direction (i.e., all the channels of a filter and ifmap), the green ofmap region of size $\widetilde{H}_O \times W_O$ is formed and then written back to DRAM. The same process is then repeated until the full volume of the ifmap/ofmap is covered.

Fig. 5 is a simplified illustration that shows the processing of one 3D ifmap using one 3D filter. Depending on the amount of available register file storage in the PE array, a convolution operation using $p$ filters can be performed in a pass. Furthermore, subvolumes from multiple images (i.e., $n$ ifmaps) can be processed together, depending on the SRAM storage capacity.

Due to the high cost of data fetches, it is important to optimize the pattern of fetch operations from the DRAM, GLB, and register file by reusing the fetched data. The level of reuse is determined by the parameters $p$, $q$, $H_I$, $W_I$, $H_O$, $W_O$, $\widetilde{H}_I$, $\widetilde{H}_O$, and $n$. Hence, the efficiency of the computation is based on the choice of these parameters. The mapping approach that determines these parameters, in a way that attempts to minimize data movement, is described in Section IV-C.

Fig. 6: (a) The $x$, $y$, $z$ directions. (b)–(e) Example showing the convolution operation in the PEs (adapted from [20]).

IV-B Dataflow Illustration in the PE Array

IV-B1 Processing the ifmap in sets – Concept

In this section we describe how the convolution operations are distributed in the 2D PE array of size $A_h \times A_w$. We use the row-stationary scheme to manage the dataflow for convolution operations in the PE array, as it is shown to offer higher energy efficiency than other alternatives [20, 21].

We explain the row-stationary dataflow with a simplified example shown in Fig. 6, where a single channel of the filter and ifmap is processed (i.e., $C = 1$). Fig. 6(b) shows a basic computation where a filter (green region) is multiplied with the part of the ifmap that it is overlaid on, shown by the dark blue region. Based on the row-stationary scheme for the distributed computation, these four rows of the ifmap are processed in four successive PEs within a column of the PE array. Each PE performs an element-wise multiplication of the ifmap row and the filter row to create a psum. The four psums generated are transmitted to the uppermost PE of the column and accumulated to generate its psum (dark orange).

Extending this operation to a full convolution implies that the ifmap slides under the filter in the negative $x$-direction with a stride of $U$, while keeping the filter stationary: for $U = 1$, two strides are required to cover the ifmap in the $x$-direction. In our example, for each stride, each of the four PEs performs the element-wise multiplication between one filter row and one ifmap row, producing one 1D row of psums, which is then accumulated to produce the first row of psums, as illustrated in Fig. 6(c).

Thus, the four dark blue rows in Fig. 6(c) are processed by four PEs (one per row) in a column of the PE array. The reuse of the filter avoids memory overheads due to repeated fetches from other levels of memory. To compute the psums associated with other rows of the ofmap, a subarray of 12 PEs (4 rows × 3 columns) processes the ifmap under a $y$-direction ifmap stride. The ifmap regions thus processed are shown by the dark regions of Fig. 6(d),(e).

We define the amount of processing performed in $R$ rows, across all columns of the PE array, as a set. For the 4 × 3 PE subarray of this example, a set corresponds to the processing in Fig. 6(c)–(e).

In the general context of the $A_h \times A_w$ PE array, a set is formed from $R \times A_w$ PEs. Therefore, the number of sets which can fit in the full PE array (i.e., the number of sets in a pass) is given by:

$S = \left\lfloor A_h / R \right\rfloor$    (5)

i.e., $S$ is the ratio of the PE array height to the filter height.

IV-B2 Processing the ifmap in sets – Realistic scenario

We now generalize the previously simplified assumptions in the example to consider typical parameter ranges. We also move from the assumption of $C = 1$ to a more typical $C > 1$.

First, the filter height $R$ can often be less than the height of the PE array. When $R < A_h$, the remaining PE array rows can process more filter channels simultaneously, in multiple sets.

Second, typical RF sizes in each PE are large enough to operate on more than one 1D row. Under this scenario, within each set, a group of 2D filter planes is processed. There are several degrees of freedom in mapping computations to the PE array. When several 1D rows are processed in a PE, the alternatives for choosing this group of 1D rows include:

  1. choosing filter/ifmap rows from different channels of the same 3D filter/ifmap;

  2. choosing filter rows from different 3D filters;

  3. combining (1) and (2), where some rows come from the channels of the same 3D filter and some rows come from the channels of different 3D filters.

Across sets in the PE array, similar mapping choices are available. Different groups of filter planes (i.e., channels) are processed in different sets. These groups of planes can be chosen either from the same 3D filter, or from different 3D filters, or from a combination of both.

Thus, there is a wide space of mapping choices for performing the CNN computation in the PE array.

IV-B3 Data Reuse

Due to the high cost of data accesses from the next memory level, it is critical to reuse data as often as possible to achieve energy efficiency. Specifically, after fetching data from a higher access-cost memory level, data reuse refers to the use of that data element in multiple MAC operations. For example, after fetching a data element from the GLB to the RF, if that data is used across $k$ MAC operations, then the data is reused $k$ times with respect to the GLB level.

We now examine the data reuse pattern within the PE array. Within each PE column, an ifmap plane is processed along the $x$-direction, and multiple PE columns process the ifmap plane along the $y$-direction. Two instances of data reuse in the PE array are:
(1) In each set, the same ifmap row is processed along the PEs in a diagonal of the set. This can be seen in the example set in Fig. 6(c)–(e), where the third row of the ifmap plane is common to the PEs at $(3,1)$ in (c), $(2,2)$ in (d), and $(1,3)$ in (e), where $(i,j)$ refers to the PE in row $i$ and column $j$. (2) The same filter row is processed in the PEs in a row: in Fig. 6(c)–(e), the first row of the filter plane is common to all PEs in all three columns of row 1.
Thus, data reuse can be enabled by broadcasting the same ifmap data (for instance (1)) and the same filter data (for instance (2)) to multiple PEs for MAC operations after they are fetched from a higher memory level (e.g., DRAM or GLB).

Notation   Description
Computation Scheduling Parameters
$p$   #of filters processed in a pass
$q$   #of ifmap/filter channels processed in a pass
$H_I$ ($H_O$)   Height of ifmap (ofmap) processed in a pass
$W_I$ ($W_O$)   Width of ifmap (ofmap) processed in a pass
$\widetilde{H}_I$ ($\widetilde{H}_O$)   Height of ifmap (ofmap) processed before a write back to DRAM
$n$   #of ifmaps from different images processed together
Accelerator Hardware Parameters
$RF_{filt}$   Size of RF storage for filter in one PE
$RF_{ifmap}$   Size of RF storage for ifmap in one PE
$RF_{psum}$   Size of RF storage for psum in one PE
$A_h$   Height of the PE array (#of rows)
$A_w$   Width of the PE array (#of columns)
GLB   Size of GLB storage
$b$   Bit width of each data element
TABLE II: List of parameters for computation scheduling and accelerator hardware constraints.

IV-C Obtaining Computation Scheduling Parameters

As seen in Section IV-B, depending on the specific CNN and its layer structure, there is a wide space of choices for computations to be mapped to the PE array. The mapping of filter and ifmap parameters to the PE array varies with the CNN and with each layer of a CNN. This mapping is a critical issue in ensuring low energy, and therefore, in this work, we develop an automated mapping scheme for any CNN topology. The scheme computes the parameters for scheduling computations. The parameters are described in Section IV-A and summarized in Table II. The table also includes the parameters for the accelerator hardware constraints.

For general CNNs, for each layer, we develop a mapping strategy that follows predefined rules to determine the computation scheduling. The goal of scheduling is to attempt to minimize the movement of three types of data (i.e., ifmap, psum, and filter), since data movement incurs large energy overheads. In each pass, the mapping strategy uses the following priority rules:
(i) We process the maximum possible number of channels of an ifmap to reduce the number of psum terms that must move back and forth with the next level of memory.
(ii) We prioritize filter reuse and psum reduction over ifmap reuse.

The rationale for Rules (i) and (ii) is that since a very large number of psums is generated in each layer, psum reduction is the most important factor for energy, particularly because transferring psums to the next pass involves expensive transactions with the next level of memory. This in turn implies that filter weights must remain stationary for maximal filter reuse. Criterion (ii) lowers the number of irreducible psums: if the filter is changed and the ifmap is kept fixed, the generated psums are not reducible.

In processing the ifmap, proceeding along the $x$- and $y$-directions enables the possibility of filter reuse, as the filter is kept stationary in the RF while the ifmap is moved. In contrast, if passes were to proceed along the $z$-direction, filter reuse would not be possible since new filter channels must be loaded from the DRAM for the convolution with the ifmap. Therefore, the $z$-direction is the last to be processed. In terms of filter reuse, the $x$- and $y$-directions are equivalent, and we arbitrarily prioritize the $x$-direction over the $y$-direction.

We use the notion of a set and a pass (Section IV-B) in the flow graph to devise the choice of scheduling parameters:

IV-C1 Computing $H_O$ and $H_I$

The value of $H_O$ is limited by the number of columns, $A_w$, in the PE array, i.e., $H_O = \min(E, A_w)$. The corresponding value of $H_I$ is found using the relation

$H_I = U \cdot (H_O - 1) + R$    (6)

IV-C2 Computing $q$ and $p$

The number of channels of each ifmap processed in a pass is computed as

$q = q_{set} \cdot S$    (7)

where $q_{set}$ is the number of channels per set, and $S$ is the number of sets per pass (given by (5)). Recall that the first priority rule of CNNergy is to process the largest possible number of ifmap channels at a time. Therefore, to compute $q_{set}$, we find the number of filter rows that can fit into an ifmap RF, i.e., $q_{set} = \lfloor RF_{ifmap}/R \rfloor$.

To enable per-channel convolution, the filter RF of a PE must be loaded with the same number of channels, $q_{set}$, from a single filter. The remainder of the dedicated filter RF storage can be used to load channels from different filters so that one ifmap can be convolved with multiple filters, resulting in ifmap reuse. Thus, after maximizing the number of channels of an ifmap/filter to be processed in a pass, the remaining storage can be used to enable ifmap reuse. Therefore, the number of filters processed in a pass is

$p = \left\lfloor RF_{filt} / (R \cdot q_{set}) \right\rfloor$    (8)

IV-C3 Computing $W_I$, $W_O$, $\widetilde{H}_I$, $\widetilde{H}_O$, and $n$

During a pass, the ifmap corresponds to the pink region in Fig. 5, and over multiple passes, the entire green volume of the ifmap in the figure is processed before a writeback to DRAM.

We first compute $S_{ifmap}$ and $S_{psum}$, the storage requirements of the ifmap and psum, respectively, during the computation. The pink region has dimension $q \times H_I \times W_I$ and, over several passes, it creates, for each of the $p$ filters, a set of psums for the $\widetilde{H}_O \times W_O$ region of the ofmap that are not fully reduced (i.e., they await the results of more passes). Therefore,

$S_{ifmap} = q \cdot H_I \cdot W_I \cdot b$    (9)
$S_{psum} = p \cdot \widetilde{H}_O \cdot W_O \cdot b$    (10)

where $b$ corresponds to the bit width for ifmap and psum.

Next, we determine how many ifmaps (i.e., images) can be processed together for a limited GLB size, GLB. This is the number, $n$, of pink regions that can fit within the GLB, i.e.,

$n = \left\lfloor \mathrm{GLB} / (S_{ifmap} + S_{psum}) \right\rfloor$    (11)

To compute $W_I$, we first set it to the full ifmap width, $H$, and we set $\widetilde{H}_O$ to the full ofmap height, $E$, to obtain $\widetilde{H}_I$. If $n < 1$, i.e., $S_{ifmap} + S_{psum} > \mathrm{GLB}$, then $W_I$ and $\widetilde{H}_O$ are reduced until the data fits into the GLB and $n \geq 1$.

From the values of $W_I$ and $\widetilde{H}_O$ computed above, we can determine $W_O$ and $\widetilde{H}_I$ using the relations

$W_O = (W_I - R)/U + 1, \qquad \widetilde{H}_I = U \cdot (\widetilde{H}_O - 1) + R$    (12)
Fig. 7: Flow graph to obtain the computation scheduling parameters.

Fig. 7 shows the flow graph that summarizes how the parameters for scheduling the CNN computation are computed. The module takes the CNN layer shape (Table I) and the accelerator hardware parameters (Table II) as inputs. Based on our automated mapping strategy, the module outputs the computation scheduling parameters (Table II).
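The sketch below mirrors the main path of Fig. 7 in Python. The dictionary keys follow the notation reconstructed in Tables I and II, and the shrink policy in the loop is a simplification of our own; treat it as an illustration of the mapping heuristic, not CNNergy's exact implementation.

```python
def schedule(layer, hw):
    """Approximate mapping heuristic (main path of Fig. 7, exception rules omitted).

    layer: dict with R, H, E, C, M, U (Table I)
    hw:    dict with RF_ifmap, RF_filt, A_h, A_w, GLB (bits), b (Table II)
    """
    R, H, E, U = layer["R"], layer["H"], layer["E"], layer["U"]
    S = max(1, hw["A_h"] // R)                 # sets per pass, Eq. (5)
    H_o = min(E, hw["A_w"])                    # ofmap height per pass
    H_i = U * (H_o - 1) + R                    # ifmap height per pass, Eq. (6)
    q_set = max(1, hw["RF_ifmap"] // R)        # channels per set
    q = min(layer["C"], q_set * S)             # channels per pass, Eq. (7)
    p = max(1, hw["RF_filt"] // (R * q_set))   # filters per pass, Eq. (8)
    p = min(p, layer["M"])

    W_i, Ht_o = H, E                           # start from full width / full ofmap height
    while True:                                # shrink until ifmap + psum fit in the GLB
        W_o = (W_i - R) // U + 1
        S_if = q * H_i * W_i * hw["b"]         # ifmap storage (bits), Eq. (9)
        S_ps = p * Ht_o * W_o * hw["b"]        # psum storage (bits),  Eq. (10)
        n = hw["GLB"] // (S_if + S_ps)         # images per pass,      Eq. (11)
        if n >= 1 or (Ht_o == 1 and W_i <= R):
            break
        if Ht_o > 1:                           # simple shrink policy (ours)
            Ht_o -= 1
        else:
            W_i = max(R, W_i - U)
    Ht_i = U * (Ht_o - 1) + R                  # Eq. (12)
    return dict(S=S, q=q, p=p, H_i=H_i, H_o=H_o, W_i=W_i, W_o=W_o,
                Ht_i=Ht_i, Ht_o=Ht_o, n=n)
```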

IV-C4 Exception Rules

The mapping method handles exceptions:

  • If the ofmap height is smaller than the width of the PE array ($E < A_w$), some PE columns will remain unused. This is avoided by mapping additional ofmap work onto the idle columns in each pass. If the resulting $S_{ifmap} + S_{psum} > \mathrm{GLB}$, $\widetilde{H}_O$ is reduced so that the data fits into the GLB.

  • If the number of channels in the layer is smaller than $q$, all channels are processed in a pass while increasing $p$, as there is more room in the PE array to process more filters. The cases with fewer filters than $p$ proceed by reducing $p$.

  • All Conv layers whose filters have dimension 1 × 1 (e.g., inside the inception modules of GoogleNet, or the fire modules of SqueezeNet) are handled under a fixed exception rule that uses a reduced per-pass ofmap region and a suitably increased number of channels/filters per pass.

The exceptions are triggered only for a few CNN layers (i.e., layers having relatively few channels or filters).

INPUT: Computation scheduling parameters: $p$, $q$, $H_I$, $W_I$, $H_O$, $W_O$, $\widetilde{H}_I$, $\widetilde{H}_O$, $n$ (defined in Table II);
CNN layer shape parameters: $R$, $H$, $E$, $C$, $M$, $U$ (defined in Table I);
Technology parameters for energy per operation (specified in Table III)
OUTPUT: Energy to process a CNN layer, $E_{layer}$
STEPS:

1: Determine the subvolume from each ifmap ($d_{if}$) and from each filter ($d_{filt}$) processed in one pass.
2: Determine the number of psums ($d_{ps}$) generated during a pass from each 3D filter and each image.
3: Compute $n_{if}$, the #of ifmap elements accessed in one pass from DRAM and GLB, using (13)
4: Compute $n_{ps}$, the #of psum elements to read (write) from (to) GLB during one pass, using (14)
5: Compute $n_{filt}$, the #of filter elements to load from DRAM for one pass, using (15)
6: Compute the #of passes before a writeback of ofmap to DRAM (i.e., the passes along the $x$- and $y$-directions)
7: Compute $E_A$, the data access energy to process a subvolume of ifmap over which filter data is reused, using (16)
8: Repeat Step 7 to compute $E_B$, the data access energy to produce and perform the DRAM write of the $\widetilde{H}_O \times W_O$ region of each ofmap channel over $p$ filters and $n$ images, using (17)
9: Repeat Step 8 to compute $E_{data}$, the total data access energy, using (18)
10: Compute $E_{MAC}$ using (19)
11: Compute $E_{ctrl}$ using (20)
12: Compute $E_{layer}$, using (3)
Algorithm 1 Algorithm for Energy Computation.

IV-D Energy ($E_{layer}$) Computation

In the previous section, we have determined the subvolume of ifmap and filter data to be processed in a pass. From the scheduling parameters we can also compute the number of passes before a writeback of the ofmap to DRAM. Therefore, we have determined the schedule of computations to generate all channels of the ofmap. We now estimate each component of $E_{layer}$ in (3). The steps for this energy computation are summarized in Algorithm 1, which takes as input the computation scheduling parameters, CNN layer shape parameters, and technology-dependent parameters that specify the energy per operation (Table III), and outputs $E_{layer}$.

IV-D1 Computing $E_{data}$ and $E_{MAC}$

We begin by computing the subvolume of data loaded in each pass (Lines 1–5). In Fig. 5, $d_{if}$ is illustrated as the pink ifmap region which is processed in one pass for an image, and $d_{ps}$ is the number of psum entries associated with the orange ofmap region for a single filter and single image. The filter data is reused across ($\widetilde{H}_O / H_O$) passes, and we denote the number of filter elements loaded for these passes by $d_{filt}$. Thus, for $p$ filters and $n$ images,

$n_{if} = d_{if} \cdot n$    (13)
$n_{ps} = d_{ps} \cdot p \cdot n$    (14)
$n_{filt} = d_{filt} \cdot p$    (15)

To compute the energy, we first determine $E_A$, the data access energy required to process the $q \times \widetilde{H}_I \times W_I$ volume of each ifmap over $p$ filters and $n$ images. In each pass, a volume of the ifmap is brought from the DRAM to the GLB for data access; psums move between the GLB and the RF; and RF-level data accesses occur for the four operands associated with each MAC operation in a pass. Therefore, the corresponding energy can be computed as:

$E_A = \dfrac{\widetilde{H}_O}{H_O}\Big[\, n_{if}\,(e_{DRAM} + e_{GLB}) + n_{ps}\,(e_{GLB} + e_{RF}) + E_{RF}^{MAC} \Big] + n_{filt}\, e_{DRAM}$    (16)

Here, $e_{op}$ denotes the energy associated with operation $op$, $E_{RF}^{MAC}$ denotes the RF-level access energy for the MAC operands in a pass, and each energy component can be computed by multiplying the energy per operation by the number of operations. Since the filter data is reused across ($\widetilde{H}_O / H_O$) passes, all components in (16), except the energy associated with filter access, are multiplied by this factor. Each psum is written once and read once, and the psum term accounts for both operations.

Next, all $C$ channels of the ifmap (i.e., the entire green ifmap region in Fig. 5) are processed to form the green region of each ofmap channel, and this data is written back to DRAM. To this end, we compute $E_B$, the data access energy to produce the $\widetilde{H}_O \times W_O$ fraction of each ofmap channel over $p$ filters and $n$ images, by repeating the operations in (16) to cover all the channels:

$E_B = \dfrac{C}{q}\, E_A + \big(\widetilde{H}_O \cdot W_O \cdot p \cdot n\big)\, e_{DRAM}$    (17)

Finally, the computation in (17) is repeated to produce the entire volume of the ofmap over all $M$ filters. Therefore, the total energy for data access is

$E_{data} = \dfrac{E}{\widetilde{H}_O} \cdot \dfrac{E}{W_O} \cdot \dfrac{M}{p} \cdot E_B$    (18)

Here, the multipliers $E/\widetilde{H}_O$, $E/W_O$, and $M/p$ represent the number of iterations of this procedure to cover the entire ofmap. These steps are summarized in Lines 6–9 of Algorithm 1. Finally, the computation energy of the Conv layer is computed by:

$E_{MAC} = e_{MAC} \cdot \big(R^2 \cdot C \cdot E^2 \cdot M \cdot n\big)$    (19)

where $e_{MAC}$ is the energy per MAC operation, and it is multiplied by the number of MACs required for the CNN layer.
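The per-layer energy bookkeeping of Eqs. (13)–(19) can then be sketched as below, using the normalized per-access energies of Table III and the scheduling dictionary from the earlier sketch. The constant `rf_accesses_per_mac` and the exact pass counting are simplifications of ours, so this shows the structure of the model rather than reproducing CNNergy's numbers.

```python
# Per-operation energies, normalized to one 16-bit MAC (Table III).
E_OP = {"mac": 1.0, "rf": 1.0, "inter_pe": 2.0, "glb": 6.0, "dram": 200.0}

def layer_energy(layer, sch, rf_accesses_per_mac=4):
    """Structure of Eqs. (13)-(19): data-access plus MAC energy for one Conv layer."""
    R, C, E, M = layer["R"], layer["C"], layer["E"], layer["M"]
    q, p, n = sch["q"], sch["p"], sch["n"]
    # Elements touched in one pass (Lines 1-5 of Algorithm 1).
    n_if = q * sch["H_i"] * sch["W_i"] * n                 # ifmap elements,  Eq. (13)
    n_ps = sch["H_o"] * sch["W_o"] * p * n                 # psum elements,   Eq. (14)
    n_filt = q * R * R * p                                 # filter elements, Eq. (15)
    passes = max(1, sch["Ht_o"] // sch["H_o"])             # passes reusing the same filters
    macs_per_pass = q * R * R * sch["H_o"] * sch["W_o"] * p * n
    # Eq. (16): ifmap traffic (DRAM+GLB), psum traffic (GLB<->RF), RF traffic per MAC.
    E_A = passes * (n_if * (E_OP["dram"] + E_OP["glb"])
                    + n_ps * (E_OP["glb"] + E_OP["rf"])
                    + macs_per_pass * rf_accesses_per_mac * E_OP["rf"]) \
          + n_filt * E_OP["dram"]
    # Eq. (17): repeat over all C channels, then write the ofmap region to DRAM.
    E_B = (C / q) * E_A + sch["Ht_o"] * sch["W_o"] * p * n * E_OP["dram"]
    # Eq. (18): tile over the full ofmap and all M filters.
    E_data = (E / sch["Ht_o"]) * (E / sch["W_o"]) * (M / p) * E_B
    # Eq. (19): MAC energy for the whole layer (n images).
    E_mac = E_OP["mac"] * R * R * C * E * E * M * n
    return E_data + E_mac          # E_ctrl of Eq. (20) is omitted here
```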

Fig. 8: Energy validation of CNNergy: (a) AlexNet without the $E_{ctrl}$ model, (b) AlexNet with the $E_{ctrl}$ model, (c) GoogleNet-v1.

IV-D2 Sparsity

The analytical model exploits the sparsity of the data (i.e., zeros in ifmap/ofmap) at the internal layers of a CNN. Except for the input ifmap to the first Conv layer of a CNN, all data communication with the DRAM (i.e., ifmap read or ofmap write) is performed in run-length compressed (RLC) format [17]. In addition, for a zero-valued ifmap element, the MAC computation as well as the associated filter and psum read (write) from (to) the RF level is skipped to reduce energy.

IV-D3 Computing $E_{ctrl}$

The control overhead includes the clock power, overheads for the control circuitry of the PE array, the network-on-chip that manages data delivery, I/O pads, etc. Of these, the clock power is a major contributor (documented as 33%–45% in [17]), and the other components are relatively modest. The total control energy, $E_{ctrl}$, is modeled as:

$E_{ctrl} = P_{clk} \cdot N_{cyc} \cdot T_{clk} + E_{ctrl,other}$    (20)

where $P_{clk}$ is the clock power, $N_{cyc}$ is the number of cycles required to process a single layer, and $T_{clk}$ is the clock period; $E_{ctrl,other}$ is the control energy from components other than the clock network. For a supply voltage of $V_{dd}$, the clock power is computed as:

$P_{clk} = P_{dyn} + P_{leak}$    (21)
$P_{dyn} = C_{tot} \cdot V_{dd}^{2} \cdot f_{clk}$    (22)

where $P_{leak}$ is the leakage in the clock network and $f_{clk} = 1/T_{clk}$. The switching capacitance, $C_{tot}$, includes the capacitances of the global clock buffers, wires, clocked registers in the PEs, and clocked SRAM (GLB) components, i.e., the decoder, registers, and precharges for the bitline and sense amplifier:

$C_{tot} = C_{buf} + C_{wire} + C_{reg} + C_{SRAM}$    (23)

The clock is distributed as an H-tree, and we choose the size and number of the clock buffers to maintain a slew rate within 10% of $T_{clk}$. We model $E_{ctrl,other}$ as 15% of the layer energy excluding $E_{ctrl}$, similar to data from the literature.
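A minimal sketch of the control-energy model in (20)–(22) follows; all numeric defaults are placeholders rather than the PDK-extracted values used in Section V, and `macs_per_cycle` stands in for the accelerator throughput used to infer the cycle count.

```python
def control_energy(n_macs, macs_per_cycle, f_clk=200e6, vdd=1.0,
                   c_tot=500e-12, p_leak=1e-3, e_other=0.0):
    """E_ctrl per Eqs. (20)-(22): clock energy plus non-clock control overhead.

    c_tot lumps the clock-buffer, wire, PE-register, and SRAM clock loads of
    Eq. (23); the default numbers here are illustrative placeholders only.
    """
    p_dyn = c_tot * vdd ** 2 * f_clk        # Eq. (22): clock switching power
    p_clk = p_dyn + p_leak                  # Eq. (21): add clock-network leakage
    n_cyc = n_macs / macs_per_cycle         # cycles needed to process the layer
    t_clk = 1.0 / f_clk
    return p_clk * n_cyc * t_clk + e_other  # Eq. (20)
```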

16-bit MAC (45nm [22]): 0.95 pJ per operation.
Memory access energy, normalized to one MAC operation (65nm [21]): RF access 1×, Inter-PE access 2×, GLB access 6×, DRAM access 200×.
TABLE III: Technology parameters used for CNNergy.

V Validation of CNNergy

We validate CNNergy against limited published data for AlexNet and GoogleNet-v1:
(i) EyMap, the Eyeriss energy model, utilizing the mapping parameters provided in [17]. This data only provides parameters for the five convolution layers of AlexNet.
(ii) EyTool, Eyeriss's energy estimation tool [23], which excludes $E_{ctrl}$ and supports AlexNet and GoogleNet-v1 only.
(iii) EyChip, measured data from 65nm silicon [17] (AlexNet Conv layers only; excludes $E_{data}^{DRAM}$).

Note that our CNNergy exceeds the capability of these:

  • CNNergy is suitable for customized energy access (i.e., any intermediate CNN energy component is obtainable).

  • CNNergy can find energy for various accelerator parameters.

  • CNNergy can analyze a vast range of CNN topologies and general CNN accelerators, not just Eyeriss.

To enable a direct comparison with Eyeriss, 16-bit fixed-point arithmetic precision is used to represent the feature maps and filter weights. The technology parameters are listed in Table III. The available process data is from the 45nm and 65nm nodes, and we apply a technology scaling factor to the 45nm data for direct comparison with measured 65nm silicon.

For the control energy, the capacitive components in (22) and (23) are extracted from the NCSU 45nm PDK [24]. To estimate $C_{tot}$, the per-unit wire capacitance of the top metal layer, the input gate capacitance of a clock buffer, the capacitive load at the clock input of a flip-flop (i.e., register), as well as the clock-load components from the SRAM memory are extracted. The results are scaled to the 65nm node by a technology scaling factor. The resultant clock power is computed by (21), and $N_{cyc}$ for each layer in (20) is inferred as the number of MAC operations in the layer divided by the throughput, where the numerator is a property of the CNN topology and the denominator is obtained from [17].

Fig. 8(a) compares the energy obtained from CNNergy, EyTool, and EyMap to process an input image for AlexNet. As stated earlier, EyTool excludes $E_{ctrl}$; accordingly, our comparison also omits $E_{ctrl}$. The numbers match closely.

Fig. 8(b) shows the energy results for AlexNet including the $E_{ctrl}$ component in (3) for both CNNergy and EyMap. The EyTool data that neglects $E_{ctrl}$ is significantly off from the more accurate data for CNNergy, EyMap, and EyChip, particularly in the Conv layers. Due to the unavailability of reported data, the bars in Fig. 8(b) only show the Conv layer energy for EyMap and EyChip. Note that EyChip does not include the $E_{data}^{DRAM}$ component of (4).

Fig. 8(c) compares the energy from CNNergy with the EyTool energy for GoogleNet-v1. Note that the only available hardware data for GoogLeNet-v1 is from EyTool, which does not report control energy: this number matches the non-$E_{ctrl}$ component of CNNergy closely. As expected, the energy is higher when $E_{ctrl}$ is included.

VI Transmission Energy and Delay Computation

VI-A Transmission Energy ($E_{trans}$) Estimation

The transmission energy, $E_{trans}$, is a function of the available data bandwidth, which may vary depending on the environment that the mobile client device is in. We use the following model [14] to estimate the energy required to transmit $n_{bits}$ data bits from the mobile client to the cloud:

$E_{trans} = \dfrac{n_{bits}}{R_b} \cdot P_{tx}$    (24)

where $P_{tx}$ is the transmission power of the client, and $R_b$ is the effective transmission bit rate. For the highly sparse data at the internal layers of a CNN, run-length compression (RLC) encoding is used to reduce the transmission overhead. The number of transmitted RLC-encoded data bits, $n_{bits}$, is:

$n_{bits} = n_{raw} \cdot (1 - \mathrm{Sparsity}) \cdot (1 + \alpha)$    (25)

Here, $n_{raw}$ is the number of output data bits at each layer including zero elements, Sparsity is the fraction of zero elements in the respective data volume, and $\alpha$ is the average RLC encoding overhead for each bit associated with the nonzero elements in the raw data (i.e., to encode each bit of a nonzero data element, on average, $\alpha$ additional bits are required). Using 4-bit RLC encoding (i.e., to encode information about the number of zeros between nonzero elements) for 8-bit data (for the evaluations in Section VIII), and 5-bit RLC encoding for 16-bit data (during the Eyeriss validation in Section V), $\alpha$ is 3/5 and 1/3, respectively (note that this overhead only applies to the few nonzeros in highly sparse data).
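A small sketch of Eqs. (24)–(25) follows; the layer size in the usage line is an arbitrary example, and `alpha` defaults to the 8-bit/4-bit RLC case quoted above.

```python
def rlc_bits(n_raw_bits, sparsity, alpha=3 / 5):
    """Eq. (25): RLC-compressed bit count for a layer's output.

    n_raw_bits: output bits including zeros; sparsity: fraction of zeros;
    alpha: average RLC overhead per bit of a nonzero element.
    """
    return n_raw_bits * (1.0 - sparsity) * (1.0 + alpha)

def transmit_energy(n_bits, bit_rate, p_tx):
    """Eq. (24): energy to send n_bits at bit_rate (bits/s) with power p_tx (W)."""
    return (n_bits / bit_rate) * p_tx

# Example: an 80%-sparse 8-bit ofmap of 32x27x27 elements sent at 60 Mbps, 0.5 W
bits = rlc_bits(32 * 27 * 27 * 8, sparsity=0.80)
print(transmit_energy(bits, 60e6, 0.5))   # ~0.5 mJ
```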

VI-B Inference Delay ($T_{inf}$) Estimation

Although our framework aims to optimize client energy, we also evaluate the total time required to complete an inference ($T_{inf}$) in the client+cloud. For a computation partitioned at the $L$-th layer, the inference delay is modeled as:

$T_{inf} = \sum_{i=1}^{L} T_{client}^{i} + T_{trans}^{L} + \sum_{i=L+1}^{N_L} T_{cloud}^{i}$    (26)

where $T_{client}^{i}$ [$T_{cloud}^{i}$] denotes the $i$-th layer latency at the client [cloud], $N_L$ is the number of layers in the CNN, and $T_{trans}^{L}$ is the time required for data transmission at the $L$-th layer. The latency for each layer is computed as the number of MAC operations divided by the Throughput, as in Section V, where the Throughput comes from the client and cloud platforms.
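Correspondingly, a minimal sketch of Eq. (26), with each layer latency taken as its MAC count divided by the platform throughput as in Section V (names are ours):

```python
def inference_delay(macs, partition, client_tput, cloud_tput, n_bits, bit_rate):
    """Eq. (26): client latency up to `partition`, transmit time, then cloud latency.

    macs[i]: MAC count of layer i; *_tput: MACs/s on each platform;
    n_bits: RLC-encoded bits transmitted at the partition layer.
    """
    t_client = sum(m / client_tput for m in macs[:partition])
    t_cloud = sum(m / cloud_tput for m in macs[partition:])
    return t_client + n_bits / bit_rate + t_cloud
```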

VII Runtime Partitioning by NeuPart

In this section, we discuss how NeuPart is used during runtime for partitioning CNN workloads between a mobile client and the cloud. Fig. 9 shows the average ($\mu$) and standard deviation ($\sigma$) of data sparsity at various CNN layers over 10,000 ImageNet validation images for AlexNet, SqueezeNet-v1.1, GoogleNet-v1, and VGG-16. For all four networks, the standard deviation of sparsity at all layers is an order of magnitude smaller than the average. However, at the input layer (i.e., the input image itself), when the image is transmitted in standard JPEG compressed format, the sparsity of the JPEG compressed image, Sparsity-In, shows significant variation (documented in Fig. 11), implying that the transmit energy can vary significantly.

Fig. 9: Average ($\mu$) and standard deviation ($\sigma$) of Sparsity over 10,000 images for AlexNet, SqueezeNet-v1.1, GoogleNet-v1, and VGG-16.

Therefore, a significant observation is that, for all the intermediate layers, Sparsity is primarily a feature of the network and not of the input data, and can be precomputed offline as a standard value, independent of the input image. This implies that $E_{trans}$, which depends on Sparsity, can be computed offline for all the intermediate layers without incurring any optimality loss in the partitioning decision. Only for the input layer is it necessary to compute $E_{trans}$ during runtime. The runtime optimization algorithm is therefore very simple, and is summarized in Algorithm 2 (for notational convenience, we use a superscript to indicate the layer index in this algorithm).

INPUT:
$\mathbf{E}_{comp} = [E_{comp}^{1}, \ldots, E_{comp}^{N_L}]$: Cumulative CNN energy vector;
$\mathbf{n}_{bits} = [n_{bits}^{1}, \ldots, n_{bits}^{N_L}]$: RLC encoded data bits for various layers;
$R_b$: Available transmission bit rate; $P_{tx}$: Transmission power
OUTPUT: Optimal partition layer, $L_{opt}$
METHOD:

1: Obtain the JPEG compressed image sparsity, Sparsity-In
2: Update the RLC data size for the input layer, $n_{bits}^{1}$, using Sparsity-In and (25)
3: Compute $E_{trans}^{i} = (n_{bits}^{i}/R_b)\cdot P_{tx}$,  $\forall i \in \{1,\ldots,N_L\}$
4: Compute $E_{cost}^{i} = E_{comp}^{i} + E_{trans}^{i}$,  $\forall i \in \{1,\ldots,N_L\}$
5: Obtain $L_{opt} = \arg\min_{i} E_{cost}^{i}$
6: return $L_{opt}$
Algorithm 2 Algorithm for Runtime Optimal Partitioning.

The cumulative CNN energy vector ($\mathbf{E}_{comp}$) up to each layer of a CNN depends only on the network topology and is, therefore, precomputed offline by CNNergy. Likewise, $n_{bits}^{i}$ for layers 2 to $N_L$ is precharacterized using the average Sparsity value associated with each CNN layer. During runtime, for an input image with JPEG-compressed sparsity Sparsity-In, $n_{bits}^{1}$ for layer 1 (i.e., the input layer) is computed (Line 2). Finally, at runtime, with a user-specified transmission bit rate $R_b$ and transmission power $P_{tx}$, $E_{cost}^{i}$ is obtained for all the layers, and the layer that minimizes $E_{cost}$ is selected as the optimal partition point, $L_{opt}$ (Lines 3–6).
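The runtime step thus reduces to an argmin over precharacterized vectors. A minimal Python sketch of Algorithm 2 follows (the inputs `e_comp` and `n_bits` are assumed to come from CNNergy and the offline sparsity averages; the names are ours):

```python
def optimal_partition(e_comp, n_bits, sparsity_in, raw_in_bits,
                      bit_rate, p_tx, alpha=3 / 5):
    """Algorithm 2: choose the layer that minimizes E_cost = E_comp + E_trans.

    e_comp[i]: precomputed cumulative client energy up to layer i (J);
    n_bits[i]: precharacterized RLC bits to transmit at layer i (index 0 = input).
    """
    n_bits = list(n_bits)
    # Line 2: only the input layer's transmit size depends on the image.
    n_bits[0] = raw_in_bits * (1.0 - sparsity_in) * (1.0 + alpha)
    # Lines 3-5: per-layer transmission energy, total cost, and argmin.
    e_cost = [e + (b / bit_rate) * p_tx for e, b in zip(e_comp, n_bits)]
    return min(range(len(e_cost)), key=lambda i: e_cost[i])
```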

Since $R_b$ is a user-specified parameter in the runtime optimization algorithm, depending on the communication environment (e.g., a bad connection or variable bandwidth), a user can provide the available bit rate and obtain the partitioning decision based on that value at runtime.

Overhead of Runtime Optimization: The computation in Algorithm 2 requires only $O(N_L)$ multiplications, divisions, additions, and comparison operations (Lines 2–5), where $N_L$ is the number of layers in the CNN topology. For standard CNNs, $N_L$ is a very small number (e.g., for AlexNet, GoogleNet-v1, SqueezeNet-v1.1, and VGG-16, $N_L$ lies between 12 and 22). This makes NeuPart computationally very cheap, enabling it to find the optimal partition layer at runtime.

Note that the inference result returned from the cloud computation corresponds to a trivial amount of data (i.e., only one number associated with the identified class) which is, for example, 5 orders of magnitude lower than the number of data bits to transmit at the P2 layer of AlexNet (already very low, see Fig. 2(b)). Therefore, the cost of receiving the result makes no perceptible difference in the partitioning decision.

VIII Results

VIII-A In Situ/Cloud Partition

We now evaluate the computational partitioning scheme, using the models in Sections IV and VI. Similar to the state-of-the-art [1, 25], we use 8-bit inference for our evaluation. The energy parameters from Table III are quadratically scaled for multiplication and linearly scaled for addition and memory access to obtain 8-bit energy parameters. We compare the results of partitioning with

  • FCC: fully cloud-based computation

  • FISC: fully in situ computation on the client

The energy cost in (1) for each layer of a CNN is analyzed under various communication environments for the mobile cloud-connected client. For the transmission power $P_{tx}$ in (24), we use power numbers similar to those reported in the literature from smartphone measurements [26, 27, 28]. We present the analysis using the bit rate ($R_b$) as a variable parameter to evaluate the benefit from the computation partitioning scheme as the available bandwidth changes. For all plots: (i) "In" is the input layer (i.e., the input image data); (ii) layers starting with "C", "P", and "FC" denote Conv, Pool, and FC layers, respectively; (iii) layers starting with "Fs" and "Fe" denote squeeze and expand layers, respectively, inside a fire module of SqueezeNet-v1.1.

At the In layer, before transmission, the image is JPEG-compressed with a fixed quality level (a lower quality level provides better compression, but the degraded image induces CNN accuracy degradation). The energy overhead associated with JPEG compression [29] is incorporated in $E_{cost}$ for the In layer but is negligible.

Fig. 10: Energy cost ($E_{cost}$) at various layers for (a) AlexNet, (b) SqueezeNet-v1.1.
Fig. 11: Distribution of images with respect to Sparsity-In.

For an input image, Fig. 10(a) shows the energy cost associated with each layer of AlexNet at a 100 Mbps bit rate and 1 W transmission power. The minimum occurs at an intermediate layer, P2, of AlexNet, which is 34.7% more energy-efficient than the In layer (FCC) and 26.6% more energy-efficient than the last layer (FISC). It is now clear that offloading data at an intermediate layer is more energy-efficient for the client than FCC or FISC. Fig. 10(b) shows a similar result with an intermediate optimal partitioning layer for SqueezeNet-v1.1. Here, the Fs4 layer is optimal, with energy savings of 50.2% and 34.4% as compared to FCC and FISC, respectively.

The cost of FCC is image-dependent, and varies with the sparsity, Sparsity-In, of the compressed JPEG image, which alters the transmission cost to the cloud. Fig. 11 shows that the 5500 test images in the ImageNet database show large variations in Sparsity-In. We divide this distribution into four quartiles, delimited at the points $Q_1$, $Q_2$, and $Q_3$.

Fig. 12: Percentage energy savings of the client device under different communication environments for AlexNet at (a) Sparsity-In = 51.99% ($Q_1$), (b) Sparsity-In = 60.80% (Median, $Q_2$), (c) Sparsity-In = 69.09% ($Q_3$).

For representative images whose sparsity corresponds to $Q_1$, $Q_2$, and $Q_3$, Fig. 12 shows the energy savings on the client at the optimal partition of AlexNet, as compared to FCC (left axis) and FISC (right axis). For various bit rates, the plots correspond to two different transmission powers $P_{tx}$ of 0.5 W and 1.3 W, corresponding to typical smartphone specifications.

In Fig. 12, a 0% savings with respect to FCC [FISC] indicates the region where the In [output] layer is optimal, implying that FCC [FISC] is the most energy-efficient choice. Figs. 12(a) and 12(b) show that for a wide range of communication environments, the optimal layer is an intermediate layer and provides significant energy savings as compared to both FCC and FISC. However, this also depends on the image sparsity: a higher value of Sparsity-In makes FCC more competitive or even optimal, especially for images in quartile IV (Fig. 12(c)). However, for many images in the I–III quartiles, there is a large space where offloading the neural computation at the intermediate optimal layer is energy-optimal. Similar trends are seen for SqueezeNet-v1.1, where the ranges of $R_b$ for which an intermediate layer is optimal are even larger than for AlexNet, with higher energy savings.

The optimal partition is often, but not always, at an intermediate point. For example, for GoogleNet-v1, a very deep CNN, in many cases either FCC or FISC is energy-optimal, due to the large amount of computation as well as the comparatively higher data dimensions associated with its intermediate layers. However, for smaller Sparsity-In values (i.e., images that do not compress well), the optimum can indeed occur at an intermediate layer, implying energy savings from client/cloud partitioning. For the VGG-16 CNN topology, the optimal solution is FCC, rather than partial on-board computation or FISC. This is not surprising: VGG-16 incurs extremely high computation cost along with a large data volume in its deeper layers, resulting in high energy for client-side processing.

Average percent energy savings with respect to FCC (by Sparsity-In quartile) and FISC:
CNN          FCC: I    II       III      IV       FISC
AlexNet      47.4%     33.8%    17.9%    0.9%     31.3%
SqueezeNet   70.0%     62.3%    53.2%    31.2%    31.3%
GoogleNet    23.7%     5.3%     0.0%     0.0%     9.6%
TABLE IV: Energy savings at the optimal layer for different CNN topologies ($R_b$ = 60 Mbps; $P_{tx}$ = 0.5 W for AlexNet and SqueezeNet-v1.1, 1 W for GoogleNet-v1).

Under a fixed transmission power and bit rate, Table IV reports the average energy savings at the optimal layer as compared to FCC and FISC for all the images lying in Quartiles I–IV, specified in Fig. 11. Note that the savings with respect to FISC do not depend on Sparsity-In. The shaded regions in Table IV indicate the regions where energy saving is obtained by the client/cloud partitioning. For AlexNet, the optimum occurs at an intermediate layer mostly for the images in Quartiles I–III while providing up to 47.4% average energy savings. For SqueezeNet-v1.1, in all four quartiles, most images show an optimum at an intermediate layer and provide up to 70% average energy savings on the client.

Fig. 13: Evaluations on AlexNet: (a) Inference delay ($T_{inf}$) with respect to FCC and FISC when partitioned at the energy-optimal intermediate layers. (b) Energy cost with variation in bit rate ($R_b$) when partitioned at the P1, P2, and P3 layers. (c) Total AlexNet energy vs. GLB size.

Evaluation of Inference Delay: To evaluate the inference delay ($T_{inf}$), we use the Google TPU [1], a widely deployed DNN accelerator in datacenters, as the cloud platform, with Throughput = 92 TeraOps/s in (26). At the median Sparsity-In value ($Q_2$), Fig. 13(a) compares the $T_{inf}$ of energy-optimal partitioning of AlexNet with FCC and FISC for various bit rates. The delay of FISC does not depend on the communication environment and exhibits a constant value, whereas the delay of FCC reduces at higher bit rates. The range of $R_b$ for which an intermediate layer becomes energy-optimal is extracted from Fig. 12(b). The blue curve in Fig. 13(a) shows the inference delay when partitioned at those energy-optimal intermediate layers. It is evident that, in terms of inference delay, the energy-optimal intermediate layers are either better than FCC (at lower bit rates) or closely follow FCC (at higher bit rates), and in most cases are better than FISC.

Impact of Variations in $R_b$: We have analyzed the impact of changes in the available bandwidth (e.g., due to network crowding) on the optimal partition point. For an image with a representative Sparsity-In value and 0.5 W $P_{tx}$, Fig. 13(b) shows the energy cost of AlexNet when partitioned at the P1, P2, and P3 layers (the candidate layers for an intermediate optimal partition). It shows that the energy valley is very flat with respect to $R_b$ when the minimum shifts from P3 to P2 and from P2 to P1 (the green vertical lines). Therefore, changes in $R_b$ negligibly change the energy gains from computational partitioning. For example, in Fig. 13(b), layer P3 is optimal at lower bit rates, P2 at intermediate bit rates, and P1 at higher bit rates; however, if $R_b$ changes from 85 to 100 Mbps, even though the optimal layer changes from P2 to P1, the energy for partitioning at P2 instead of P1 is virtually the same.

VIII-B Design Space Exploration Using CNNergy

We demonstrate how our analytical CNN energy model (CNNergy) from Section IV can be used to perform a design space exploration of the CNN hardware accelerator. For 8-bit inference on an AlexNet workload, Fig. 13(c) shows the total energy as a function of the global SRAM buffer (GLB) size. The trend in GLB energy vs. size was extracted using CACTI [30].

When the GLB size is small, data reuse becomes difficult since the GLB can only hold a small chunk of ifmap and psum data at a time. This leads to much higher total energy. As the GLB size is increased, data reuse improves until it saturates. Beyond a point, the energy increases due to the higher GLB access cost. The minimum energy occurs at a size of 88 kB. However, a good engineering solution is 32 kB, because it saves 63.6% memory cost over the optimum, with only a 2% optimality loss. CNNergy supports similar design space explorations for other accelerator parameters as well.
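The kind of sweep shown in Fig. 13(c) is then a simple loop over the hardware parameter of interest; `energy_fn` below is a stand-in for the full CNNergy evaluation of one layer (e.g., a composition of the scheduling and layer-energy sketches above). Note that this sketch keeps the GLB per-access energy fixed, whereas the study above additionally scales it with GLB size using CACTI [30].

```python
def sweep_glb_size(layers, hw, sizes_kb, energy_fn):
    """Total network energy vs. GLB size, as in the Fig. 13(c) exploration.

    energy_fn(layer, hw) -> energy of one layer under a given hardware config.
    """
    results = {}
    for kb in sizes_kb:
        hw_kb = dict(hw, GLB=kb * 1024 * 8)   # GLB capacity in bits
        results[kb] = sum(energy_fn(layer, hw_kb) for layer in layers)
    return results
```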

IX Conclusion

In order to best utilize the battery-limited resources of a cloud-connected mobile client, this paper presents an energy-optimal DL scheme that uses partial in situ execution on the mobile platform, followed by data transmission to the cloud. An accurate analytical model for CNN energy (CNNergy) has been developed by incorporating implementation-specific details of a DL accelerator architecture. To estimate the energy for any CNN topology on this accelerator, an automated computation scheduling scheme is developed, and it is shown to match the performance of the layer-wise ad hoc scheduling approach of prior work [17]. The analytical framework is used, together with an efficient runtime algorithm, to predict the energy-optimal partition point for the mobile client while executing CNN workloads. The in situ/cloud partitioning scheme is also evaluated under various communication scenarios. The evaluation results demonstrate that there exists a wide communication space for AlexNet and SqueezeNet where energy-optimal partitioning can provide remarkable energy savings on the client.

Acknowledgment

This work was supported in part by NSF Award CCF-1763761.

References

  • [1] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, "In-datacenter Performance Analysis of a Tensor Processing Unit," in Proc. ISCA, June 2017, pp. 1–12.
  • [2] K. Lee, “Introducing Big Basin: Our Next-generation AI Hardware,” 2017, https://tinyurl.com/y9jh469b.
  • [3] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level Classification of Skin Cancer with Deep Neural Networks,” Nature, vol. 542, no. 7639, pp. 115–118, Feb. 2017.
  • [4] S. P. Mohanty, D. P. Hughes, and M. Salathé, “Using Deep Learning for Image-based Plant Disease Detection,” Frontiers in Plant Science, vol. 7, p. 1419, Sep. 2016.
  • [5] S.-J. Hong, Y. Han, S.-Y. Kim, A.-Y. Lee, and G. Kim, “Application of Deep-learning Methods to Bird Detection Using Unmanned Aerial Vehicle Imagery,” Sensors, vol. 19, no. 7, p. 1651, April 2019.
  • [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proc. Adv. NIPS, 2012, pp. 1097–1105.
  • [7] H. Li, K. Ota, and M. Dong, “Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing,” IEEE Network, vol. 32, no. 1, pp. 96–101, Jan. 2018.
  • [8] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge,” in Proc. ASPLOS, 2017, pp. 615–629.
  • [9] A. E. Eshratifar and M. Pedram, “Energy and Performance Efficient Computation Offloading for Deep Neural Networks in a Mobile Cloud Computing Environment,” in Proc. GLSVLSI, 2018, pp. 111–116.
  • [10] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and 0.5 MB Model Size,” arXiv preprint arXiv:1602.07360, 2016.
  • [11] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” in Proc. CVPR, June 2015, pp. 1–9.
  • [13] B. Zhang, A. Davoodi, and Y. H. Hu, “Exploring Energy and Accuracy Tradeoff in Structure Simplification of Trained Deep Neural Networks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 8, no. 4, pp. 836–848, Dec. 2018.
  • [14] K. Kumar and Y. H. Lu, “Cloud Computing for Mobile Users: Can Offloading Computation Save Energy?” Computer, vol. 43, no. 4, pp. 51–56, April 2010.
  • [15] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “A High-Throughput Neural Network Accelerator,” IEEE Micro, vol. 35, no. 3, pp. 24–32, May 2015.
  • [16] Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam, “ShiDianNao: Shifting Vision Processing Closer to the Sensor,” in Proc. ISCA, June 2015, pp. 92–104.
  • [17] Y. H. Chen, T. Krishna, J. S. Emer, and V. Sze, “Eyeriss: An Energy-efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks,” IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, Jan. 2017.
  • [18] A. Aimar, H. Mostafa, E. Calabrese, A. Rios-Navarro, R. Tapiador-Morales, I. Lungu, M. B. Milde, F. Corradi, A. Linares-Barranco, S. Liu, and T. Delbruck, “NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 3, pp. 644–656, March 2019.
  • [19] J. Sim, J. Park, M. Kim, D. Bae, Y. Choi, and L. Kim, “A 1.42TOPS/W Deep Convolutional Neural Network Recognition Processor for Intelligent IoE Systems,” in Proc. ISSCC, Jan. 2016, pp. 264–265.
  • [20] V. Sze, Y. Chen, T. Yang, and J. S. Emer, “Efficient Processing of Deep Neural Networks: A Tutorial and Survey,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, Dec. 2017.
  • [21] Y. Chen, J. Emer, and V. Sze, “Eyeriss: A Spatial Architecture for Energy-efficient Dataflow for Convolutional Neural Networks,” in Proc. ISCA, June 2016, pp. 367–379.
  • [22] M. Horowitz, “Computing’s Energy Problem (and What We Can Do About It),” in Proc. ISSCC, Feb. 2014, pp. 10–14.
  • [23] “Deep Neural Network Energy Estimation Tool,” available at https://energyestimation.mit.edu.
  • [24] “NCSU FreePDK45,” available at https://www.eda.ncsu.edu/wiki/.
  • [25] P. Gysel, M. Motamedi, and S. Ghiasi, “Hardware-oriented Approximation of Convolutional Neural Networks,” arXiv preprint arXiv:1604.03168, 2016.
  • [26] F. Qian, Z. Wang, A. Gerber, Z. Mao, S. Sen, and O. Spatscheck, “Profiling Resource Usage for Mobile Applications: A Cross-layer Approach,” in Proc. Int. Conf. Mobile Systems, Applications, and Services, 2011, pp. 321–334.
  • [27] M. Altamimi, A. Abdrabou, K. Naik, and A. Nayak, “Energy Cost Models of Smartphones for Task Offloading to the Cloud,” IEEE Transactions on Emerging Topics in Computing, vol. 3, no. 3, pp. 384–398, Sep. 2015.
  • [28] A. P. Miettinen and J. K. Nurminen, “Energy Efficiency of Mobile Clients in Cloud Computing,” in Proc. HotCloud, 2010, p. 4.
  • [29] F. S. Snigdha, D. Sengupta, J. Hu, and S. S. Sapatnekar, “Optimal Design of JPEG Hardware Under the Approximate Computing Paradigm,” in Proc. DAC, June 2016, pp. 1–6.
  • [30] “CACTI,” available at http://www.hpl.hp.com/research/cacti/.