I Introduction
The maturation of energy harvesting (EH) technology and the recent emergence of intermittent computing, which stores harvested energy in energy storage and supports an episode of program execution during each power cycle, create the opportunity to build sophisticated batteryless, energy-neutral sensors. One of the most promising applications of such sensors is to build persistent, event-driven IoT systems in which the main device (e.g. a battery-draining processing system) can remain dormant, with near-zero power consumption, until awakened by an EH-powered sensor that constantly monitors events of interest with harvested energy. To realize this capability, the EH-powered sensor has to frequently make decisions locally on sensor data, as it is prohibitive to send the raw data to other devices and offload the computation to them.
Deep neural networks (DNNs) can effectively extract features from noisy input data, but they are usually computationally expensive. Typical neural networks have tens of millions of weights and use billions of operations to finish one inference. Even a small DNN (e.g. MobileNetV2 [17]) has over a million weights and millions of operations. Microcontrollers (MCUs), however, are severely resource-constrained: a typical MCU has limited storage (e.g. Flash or FRAM, several to tens of KB) and runs at a low frequency (several to tens of MHz). Directly deploying a DNN to an MCU is infeasible because the model size exceeds the storage capacity. Even if the DNN model can fit into the limited storage, the time to finish one inference is still too long (tens or hundreds of seconds).
DNN inference on intermittently-powered devices remains largely unexplored. Existing work [4] made the first step to implement DNNs on an intermittently-powered MCU. However, multiple power cycles are needed to finish one inference in most cases. Since the harvested power is usually weak and unpredictable, the latency to obtain the final inference result can be indefinitely long. Recently, multi-exit networks with classifiers attached to shallower layers have been proposed
[18, 6]. These networks are very promising for EH-powered devices with limited energy budgets because they can reduce the inference energy cost and latency by exiting from early-exits while maintaining accuracy. However, to achieve efficient inference with multi-exit networks on EH-powered devices, the first challenge is how to fit the multi-exit network onto MCUs while keeping a high accuracy at each exit. Simply compressing the network with existing network compression approaches [4] does not work well since they only consider the accuracy of the final exit. For a multi-exit network, only considering the final exit during compression will significantly degrade the accuracy of early-exits. Unfortunately, an EH-powered system often chooses early-exits in shallower layers to generate the result within the limited energy budget, which results in low accuracy. Therefore, how to compress the network while considering the accuracy and energy cost of each exit remains a problem. It becomes more complicated when the power source is considered. Powered by dynamic EH, the chance that each exit is selected differs depending on both the power condition and the accuracy/energy cost of each exit after compression. To maximize the average accuracy over all events, the compression algorithm has to take the power condition and the accuracy/energy cost of multiple exits into consideration. Maximizing the average accuracy across all events is equivalent to maximizing the number of interesting events that are correctly processed with a fixed amount of harvested energy, which is important for EH-powered devices.
The second challenge is how to select the exit for each event at runtime to achieve a high average accuracy in the long term. The exit needs to be selected based on the available EH energy and the difficulty of each input. Two sequential decisions need to be made. First, when an event happens, simply selecting the exit with the highest accuracy that the current energy can support can result in low average accuracy in the long run. This is because even if the current EH efficiency is high, it can be low in the future. Instead of using up all the available energy on one inference to achieve high accuracy, a better strategy is to reserve some energy for future events. Otherwise, the following events will have low accuracy or even be missed because of insufficient energy. Second, the inference difficulty of each input needs to be considered. The difficulty only becomes known at an exit, by inspecting the entropy of the current result. If the confidence is low at the selected exit, a second decision needs to be made on whether an incremental inference is needed to propagate the input to a following exit for higher accuracy. Both decisions must account for the EH condition and the difficulty of the current event.
To address these two challenges, we propose a two-phase approach that automatically compresses multi-exit neural networks before deployment and conducts runtime exit selection. In the first phase, we aim to compress the multi-exit network to fit it onto MCUs while achieving a high average accuracy over all events. First, we consider typical EH power traces and event distributions, which determine the probability of selecting each exit. Priority is given to the exits with a higher probability of being selected. Since the probability of selecting each exit changes after compression due to the changed computation complexity of each exit, we develop a reinforcement-learning (RL)-based approach to automatically search for the best pruning rate and bitwidth of weights and activations in all layers to maximize the average accuracy.
In the second phase, we aim to maximize the average accuracy over all events at runtime. We employ Q-learning [20] to learn the best exit under different EH energy conditions. To select the exit for an event, we use the current available energy level and the charging efficiency as the state, and all exits as the actions the learning method can take. Q-learning is lightweight as it uses a lookup table (LUT) to select actions; the learning process only involves updating the LUT. To decide whether to conduct incremental inference, we use the confidence of the result at the selected exit and the current available energy as the state. The action is a binary decision: continue the inference or output the current result.
In summary, the main contributions of the paper include:

Intermittent inference model. We propose an intermittent and incremental inference model that guarantees an inference result before a power failure occurs. Waiting for the next power cycle is not needed, while further refinement is still possible.

Power trace-aware compression. We develop a power trace-aware and multi-exit-guided compression technique to compress multi-exit networks to fit onto MCUs while maximizing the average inference accuracy.

Runtime adaptation. We propose an online exit selection method that selects the exit for each event considering the EH condition and the difficulty of each input.
Experimental results show that the proposed techniques improve the number of correctly processed events per energy unit by 3.6x over [4], a state-of-the-art (SOTA) intermittent inference framework, and outperform [3], a NAS framework that generates networks for MCUs, by 18.9x. The latency of all processed events is improved by 7.8x and 10.2x over these two approaches, respectively.
II Event-Triggered Intermittent Inference
In the existing state-of-the-art deployment of DNNs on EH-powered devices [4], when the power is not sufficient to finish the entire forward pass, the system is forced to pause the inference process and wait until enough energy is harvested. However, the unpredictable EH process can result in an indefinite waiting time to harvest sufficient energy, by which time the event may have become obsolete. To solve this problem, we employ networks with multiple exits [6]. As shown in Figure 1(b)(c), this simple network has 3 exits, and each exit has a different accuracy and energy cost on CIFAR-10. As shown in Figure 1(a), when an event triggers the inference, an exit is selected according to the available energy and the energy cost of each exit. In this example, when Event 1 occurs, the stored energy is sufficient to support the inference to Exit 3, which is selected as the exit. However, when Event 2 occurs, the energy can only support the inference to Exit 1. At each exit, the confidence of the result is measured by its entropy. If the confidence is higher than a threshold, the inference exits at this point. Otherwise, when more energy is available, an incremental inference is made to proceed to the following exit for higher accuracy. In this example, since the confidence of Event 2 at Exit 1 is below the threshold, an incremental inference is conducted to proceed to Exit 2. This process alleviates the problem of indefinitely long waiting, and an inference result with confidence can be obtained during each power cycle.
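The exit-selection and confidence-check logic described above can be sketched as follows (a minimal illustration; the energy costs, entropy threshold, and function names are ours, not from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a softmax output; low entropy = high confidence."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_exit(available_energy, exit_costs):
    """Pick the deepest exit whose cumulative energy cost fits the energy budget.
    Returns the exit index, or None if even the first exit is unaffordable."""
    chosen = None
    for i, cost in enumerate(exit_costs):
        if cost <= available_energy:
            chosen = i
    return chosen

def should_continue(probs, threshold):
    """High entropy (low confidence) triggers incremental inference to the next exit."""
    return entropy(probs) > threshold
```

For instance, with cumulative costs [0.7, 1.9, 2.4] mJ per exit and 2.0 mJ stored, Exit 2 (index 1) is the deepest affordable exit; a near-uniform softmax output then triggers incremental inference once more energy arrives.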
Metric. We use local inference to filter sensor readings from events so that only the interesting events are used to wake up the main device. Our figure of merit is the number of interesting events that are correctly processed with a fixed amount of harvested energy. We denote it as IEpmJ, or Interesting Events per milliJoule. Maximizing IEpmJ is equivalent to maximizing the average accuracy of all events:
$$\text{IEpmJ} = \frac{N_c}{E_h} \qquad (1)$$
$N_c$ is the number of correctly processed events. $N$ is the number of all events, among which $N_p$ events are processed by inference and $N_m$ events are missed due to insufficient energy. The correctly processed events are a subset of the processed events, and $N = N_p + N_m$. $N_c = \sum_{i=1}^{N_p} a_i$, where $a_i = 1$ represents that event $i$ is correctly processed and $a_i = 0$ otherwise. Since $N$ and the harvested energy $E_h$ are constants determined by the EH environment, maximizing IEpmJ is equivalent to maximizing the average accuracy of all events, which is the number of correctly processed events $N_c$ over all events $N$.
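As a concrete reading of the metric (function names are ours; a missed or misclassified event simply contributes 0 to both quantities):

```python
def iepmj(outcomes, harvested_mj):
    """IEpmJ: correctly processed interesting events per milliJoule.
    `outcomes` holds 1 for a correctly processed event and 0 for an
    incorrect or missed one; `harvested_mj` is the total harvested energy."""
    return sum(outcomes) / harvested_mj

def average_accuracy(outcomes):
    """Average accuracy over ALL events; missed events count as 0."""
    return sum(outcomes) / len(outcomes)
```

Since the event count and harvested energy are fixed by the environment, ranking policies by `iepmj` and by `average_accuracy` gives the same ordering.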
To deploy this inference model, the multi-exit network needs to be compressed to fit onto resource-constrained MCUs. The compression approach is introduced in Section III.
III Power Trace-Aware, Exit-Guided Network Compression
In this section, we develop an EH power-trace-aware and exit-guided network compression algorithm. It aims to fit the multi-exit network onto MCUs and maximize the average accuracy by allocating layer-wise pruning rates and quantization bitwidths. Existing compression algorithms, which compress the network uniformly, significantly degrade the accuracy of exits in shallow layers, as shown in Figure 1(b). Different from existing algorithms that only consider the accuracy of the final classifier, our approach takes the accuracy of all exits into consideration and conducts non-uniform compression. As shown in Figure 1(b), with a non-uniform approach that compresses less in the shallow layers and more in the deep layers, the accuracy drop of all exits is small. What is more, some exits are chosen more often than others under a given power trace and event distribution. Thus, we prioritize the accuracy of these exits during the compression process. In this way, we improve the average inference accuracy across all events.
The overview of the compression approach is shown in Figure 2. The approach takes the multi-exit network, EH power trace, and event distribution as input and generates a non-uniform pruning rate and bitwidth allocation policy for each layer. Based on the pruning rate, channel pruning is applied to each layer to prune out input channels [11]. The channels to be pruned are selected by importance, i.e. the magnitude of the weights applied to each input channel, and the less important ones are pruned out. Based on the bitwidth policy, linear quantization [19] is applied to both the weights and activations. After compression, the network is deployed onto MCUs, and the runtime algorithm selects the exit for each event, as introduced in Section IV. During compression, the approach first generates an initial layer-wise compression policy. The compression policy prioritizes the exits with higher selection probability and provides them with relatively higher accuracy by adjusting the layer-wise compression policy. After applying the compression policy, the probability distribution over the exits changes, and the compression policy needs further fine-tuning. To accelerate this iterative design process, we propose a reinforcement learning (RL)-based algorithm to co-explore the pruning and quantization policies and the probability distribution over the exits.
III-A Problem Formulation
Given a full-precision network with multiple early-exits, we explore the accuracy and energy cost allocation of each exit to maximize the average accuracy (equivalent to maximizing IEpmJ defined in Section II) under the given EH power trace and event distribution. This is achieved by non-uniformly allocating the pruning rate and quantization bitwidth of each layer. Both pruning and quantization reduce the FLOPs and weight size of the network but with different emphasis: pruning mainly reduces the FLOPs, while quantization mainly reduces the model size.
Pruning. Given a pruning rate $p_l$, we employ channel pruning to prune out entire input channels of a convolutional or fully-connected layer. The advantages are twofold: it reduces the FLOPs of the previous layer by reducing its number of output channels, and it reduces the FLOPs of the current layer by reducing its number of input channels. Besides, it can be directly implemented on off-the-shelf MCUs without overhead. More specifically, given the pruning rate $p_l$ for layer $l$, we reduce the filter weights from shape $(c_{out}, c_{in}, k, k)$ to $(c_{out}, c'_{in}, k, k)$ such that $c'_{in} = (1 - p_l) \cdot c_{in}$. For convolutional layers, $c_{out}$ and $c_{in}$ are the numbers of output and input channels, respectively, and $k$ is the filter kernel size. For fully-connected layers, $c_{out}$ and $c_{in}$ are the numbers of output and input activations, and $k = 1$. The input channels to be pruned are selected according to the sum of the absolute weights applied to them. We use $W_{j,c}$ to represent the weights of filter $j$ connected to input channel $c$. The importance of input channel $c$ is:
$$I_c = \sum_{j=1}^{c_{out}} \left\lVert W_{j,c} \right\rVert_1 \qquad (2)$$
All input channels are sorted by $I_c$, and the least important ones are pruned out to make $c'_{in} = (1 - p_l) \cdot c_{in}$.
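The importance ranking and channel-pruning step can be sketched as follows (a simplified illustration; weight arrays use the $(c_{out}, c_{in}, k, k)$ layout, and the helper names are ours):

```python
import numpy as np

def channel_importance(weights):
    """Importance of each input channel: sum of absolute filter weights
    applied to that channel. `weights` has shape (n_out, n_in, k, k)."""
    return np.abs(weights).sum(axis=(0, 2, 3))

def prune_channels(weights, prune_rate):
    """Keep the round((1 - prune_rate) * n_in) most important input channels;
    returns the pruned weights and the indices of the kept channels."""
    n_in = weights.shape[1]
    n_keep = max(1, int(round(n_in * (1.0 - prune_rate))))
    order = np.argsort(channel_importance(weights))[::-1]  # most important first
    keep = np.sort(order[:n_keep])
    return weights[:, keep, :, :], keep
```

A pruned layer implies the producing layer drops the corresponding output channels too, which is where the FLOPs saving in the previous layer comes from.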
Quantization. For each layer $l$, we employ linear quantization for both the weights and activations following the bitwidths $b^w_l$ and $b^a_l$. Given the weight bitwidth $b^w_l$, the linearly quantized weight is:
$$w_q = \mathrm{clamp}\left(\mathrm{round}\left(\frac{w}{s}\right),\ -2^{b^w_l-1},\ 2^{b^w_l-1}-1\right) \cdot s \qquad (3)$$
where $\mathrm{clamp}(\cdot)$ truncates the value into the range that $b^w_l$ bits can represent, and $s$ is the scaling factor, which is determined by minimizing the quantization error $\lVert w_q - w \rVert$. For activations, the quantization procedure is similar except that the range of $\mathrm{clamp}(\cdot)$ is changed: since all activations are non-negative due to the ReLU function, we truncate the activations into the range $[0,\ 2^{b^a_l}-1]$.
The goal is to find the best pruning rate and quantization bitwidth for each layer. Formally, the multi-exit network compression problem under the power trace and event distribution constraints is formulated as:
$$\max_{\{p_l,\ b^w_l,\ b^a_l\}} \quad \frac{1}{N} \sum_{i=1}^{N} acc_{e_i} \qquad (4)$$
$$\text{s.t.} \quad \sum_{j=1}^{i} E^h_j \ \ge\ \sum_{j=1}^{i} E_{e_j}, \quad \forall i \in \{1,\dots,N\} \qquad (5)$$
$$acc_{e} = f_{acc}\big(\{p_l,\ b^w_l,\ b^a_l\}_{l \le l_e}\big) \qquad (6)$$
$$E_{e} = f_{E}\big(\{p_l,\ b^w_l,\ b^a_l\}_{l \le l_e}\big) \qquad (7)$$
$$W \le W_{target}, \quad F \le F_{target} \qquad (8)$$
The objective is to maximize the average accuracy (equivalent to maximizing IEpmJ defined in Section II) of the given events under the power trace. In the objective function Eq.(4), $acc_{e_i}$ represents the accuracy of the exit $e_i$ for event $i$. For event $i$, an exit $e_i$ is selected from the exits by the policy $\pi$. A simple policy is to select the exit for an event such that the energy cost at the selected exit does not exceed the currently available energy. The first constraint, Eq.(5), is that for each event, the total harvested energy from the beginning to the current time is greater than or equal to the total energy cost of all the events so far. Here, $E^h_i$ is the harvested energy after event $i-1$ and before event $i$, and $E_{e_i}$ is the energy cost when exiting from exit $e_i$ following policy $\pi$. The second constraint, Eq.(6), is that the accuracy of exit $e$ is determined by the pruning rates $p_l$, weight bitwidths $b^w_l$ and activation bitwidths $b^a_l$ of all layers before the layer $l_e$ where exit $e$ is located. Similarly, the third constraint, Eq.(7), is that the energy cost of exiting from exit $e$ is determined by all the pruning rates and bitwidth allocations before this exit. The last constraint, Eq.(8), is that the weight size $W$ fits into the target MCU budget $W_{target}$ and the total FLOPs $F$ is reduced to the target value $F_{target}$.
Given the pruning rates $p_l$ and bitwidths $b^w_l, b^a_l$, the objective function can be immediately calculated. This is done by first evaluating Eq.(6) on a representative dataset to get $acc_e$ and measuring $E_e$ on the hardware or from the proxy FLOPs. Following the energy constraint Eq.(5) and the exit selection policy, the exit $e_i$ for event $i$ is determined. Given $e_i$, the objective function Eq.(4) is calculated. However, the search space is prohibitively large for finding the optimal allocation policy by enumeration. Assume the network has $L$ layers, the quantization bitwidths $b^w_l$ and $b^a_l$ are both selected from a candidate set $B$, and the pruning rate is in the range [0.05, 1.0] with a step size of 0.05 (20 candidates). The design space is then as large as $(20 \cdot |B|^2)^L$, which prohibits direct searching.
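Returning to the quantization step of Eq.(3), it can be sketched as follows (a simplified version: here the scale simply maps the maximum magnitude to the range edge, whereas the paper determines the scale by minimizing the quantization error; the function name is ours):

```python
import numpy as np

def linear_quantize(x, bits, signed=True):
    """Linear quantization with clamping. Weights (signed) use the range
    [-2^(b-1), 2^(b-1)-1]; ReLU activations (unsigned) use [0, 2^b - 1].
    Returns the dequantized values and the scale s."""
    if signed:
        lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    else:
        lo, hi = 0, 2 ** bits - 1
    s = np.abs(x).max() / max(hi, 1)          # simple max-magnitude scale
    q = np.clip(np.round(x / s), lo, hi)      # round, then clamp to the range
    return q * s, s
```

With 8-bit signed weights the extreme values are reproduced exactly; with 2-bit unsigned activations the output snaps to one of four levels, which is the precision/size trade-off the search explores per layer.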
III-B RL-Based Non-uniform Compression
To effectively search for the optimal parameters, we model the pruning and quantization task as a reinforcement learning problem. As shown in Figure 3, we use two agents to generate the pruning rate and quantization bitwidth layer by layer. The compressed network is then evaluated with the EH power trace and event distribution, where the exit is selected according to the available energy when an event happens. After that, a reward representing the average accuracy of all events is given to the agents to update their policies. After the exploration, the agents generate the pruning rate and quantization bitwidth for each layer that maximize Eq.(4) and, equivalently, IEpmJ.
State. The two agents share the layer-wise state during training and generate different actions. The key point is that both the pruning and quantization information are encoded in the observation. Each agent observes the peer's action in the last layer so that it can act accordingly. For layer $l$, the shared observation is:
$$o_l = \big(l,\ p_{l-1},\ b^w_{l-1},\ b^a_{l-1},\ F_{red},\ F_{rest},\ W_{red},\ W_{rest},\ t_l,\ c_{in},\ c_{out}\big) \qquad (9)$$
$l$ is the layer index. $p_{l-1}$ is the pruning rate of the previous layer. $b^w_{l-1}$ and $b^a_{l-1}$ are the bitwidths of the weights and activations of the previous layer. $F_{red}$ is the reduced FLOPs in previous layers, and $F_{rest}$ is the FLOPs in the following layers. $W_{red}$ and $W_{rest}$ are the reduced weight size and the remaining weight size. $t_l$ is a binary value indicating whether this layer is convolutional or fully-connected. $c_{in}$ and $c_{out}$ are the numbers of input and output channels for a convolutional layer, or the numbers of input and output activations for a fully-connected layer. Each dimension of $o_l$ is normalized to $[0, 1]$ to put them on the same scale.
Action. The two agents generate different actions. One agent generates the action for the layer-wise pruning rate. The other agent generates two actions, one for the layer-wise weight bitwidth $b^w_l$ and one for the activation bitwidth $b^a_l$. We use a continuous action space to generate fine-grained pruning rates and quantization bitwidths. We do not use a discrete action space because representing fine-grained pruning rates and quantization bitwidths would require a large number of discrete actions, which results in inefficient exploration during training. To apply the agents' actions to the compression process, the continuous action representing the pruning rate is directly applied to pruning. The actions for quantization are first linearly mapped from the continuous action space to the discrete candidate bitwidths for the weights and activations, and the resulting bitwidths are then applied to the quantization of the weights and activations.
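The continuous-to-discrete mapping can be sketched as follows (the pruning-rate grid follows the [0.05, 1.0] range with step 0.05 from Section III-A; the bitwidth bounds are arguments because the paper's candidate range is not fixed here):

```python
def action_to_bitwidth(a, b_min, b_max):
    """Linearly map a continuous action a in [0, 1] to a discrete bitwidth."""
    return int(round(b_min + a * (b_max - b_min)))

def action_to_prune_rate(a, r_min=0.05, r_max=1.0, step=0.05):
    """Snap a continuous action a in [0, 1] to the pruning-rate grid."""
    r = r_min + a * (r_max - r_min)
    return round(round(r / step) * step, 2)
```

A single continuous DDPG action per quantity thus covers the whole grid, instead of one discrete action per candidate value.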
Reward. The two agents have different reward functions $R_p$ and $R_q$ due to their different goals. Each reward consists of an accuracy part and a compression part. The accuracy part $R_{acc}$ aims to maximize the average accuracy of all events under the given power trace and event distribution. We use the percentage of each exit being selected to guide the compression process. $R_{acc}$ is defined as:
$$R_{acc} = \sum_{e=1}^{K} P_e \cdot acc_e \qquad (10)$$
where $P_e$ is the percentage of exit $e$ being selected among the $K$ exits. It is determined by both the power trace and the event distribution in Eq.(4)-(8).
The compression goal of the pruning agent is to keep the FLOPs of all exits under the target value $F_{target}$. The quantization agent aims to keep the weight size under the target value $W_{target}$. Combining these goals with the accuracy reward in Eq.(10), the rewards of the two agents are defined as follows:
$$R_p = \begin{cases} \lambda_p \cdot R_{acc}, & \text{if } F \le F_{target} \\ R_{neg}, & \text{otherwise} \end{cases} \qquad (11)$$
$$R_q = \begin{cases} \lambda_q \cdot R_{acc}, & \text{if } W \le W_{target} \\ R_{neg}, & \text{otherwise} \end{cases} \qquad (12)$$
where $\lambda_p$ and $\lambda_q$ are the reward scaling factors. When the compression goal is satisfied, the reward is the scaled accuracy. Otherwise, the reward is a negative value $R_{neg}$ to punish the agents.
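A sketch of the reward shape shared by the two agents (the scaling factor of 1.0 and the penalty of -1.0 are placeholders, not values from the paper):

```python
def accuracy_reward(exit_acc, exit_prob):
    """Selection-probability-weighted accuracy over exits: exits chosen
    more often under the power trace contribute more to the reward."""
    return sum(p * a for p, a in zip(exit_prob, exit_acc))

def agent_reward(exit_acc, exit_prob, measure, target, scale=1.0, penalty=-1.0):
    """Common shape of the two rewards: scaled accuracy when the compression
    goal is met (FLOPs for the pruning agent, weight size for the
    quantization agent), otherwise a fixed negative penalty."""
    if measure <= target:
        return scale * accuracy_reward(exit_acc, exit_prob)
    return penalty
```

With two exits at 60%/70% accuracy chosen 75%/25% of the time, the weighted accuracy is 0.625, which becomes the reward only if the budget is respected.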
Agent. We use two RL agents, one for pruning and the other for quantization. Separate agents enable us to set different rewards to achieve different goals simultaneously. The agents leverage the deep deterministic policy gradient (DDPG) [12] algorithm to explore the design space. The agents process the network layer by layer. In the learning process, one step corresponds to an agent processing one layer. For each layer, the two agents take their steps simultaneously and proceed to the next layer. One episode consists of many steps: it starts at the first layer and ends at the last layer.
During exploration, each agent aims to maximize the overall reward of one episode. The action-value Q-function is estimated as
$$\hat{Q}(s_t, a_t) = r_t + \gamma \cdot Q\big(s_{t+1}, \mu(s_{t+1}) \mid \theta^Q\big) \qquad (13)$$
The Q-function is updated by minimizing the loss:
$$\mathcal{L} = \frac{1}{N_s} \sum_{i=1}^{N_s} \big(\hat{Q}(s_i, a_i) - Q(s_i, a_i \mid \theta^Q)\big)^2 \qquad (14)$$
where $N_s$ is the number of sampled steps during exploration. The policy is updated using the sampled policy gradient:
$$\nabla_{\theta^\mu} J \approx \frac{1}{N_s} \sum_{i=1}^{N_s} \nabla_a Q(s, a \mid \theta^Q)\big|_{s=s_i,\ a=\mu(s_i)}\ \nabla_{\theta^\mu} \mu(s \mid \theta^\mu)\big|_{s=s_i} \qquad (15)$$
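Numerically, the critic update amounts to regressing Q-values toward Bellman targets; a minimal sketch with scalar placeholders standing in for the critic and actor networks (names are ours):

```python
def critic_targets(rewards, next_q, gamma=0.99, terminal_last=True):
    """Bellman targets y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})).
    `next_q` holds the target critic's value at the actor's next-state
    action; the final step of an episode has no bootstrap term."""
    ys = []
    for i, (r, q) in enumerate(zip(rewards, next_q)):
        last = terminal_last and i == len(rewards) - 1
        ys.append(r if last else r + gamma * q)
    return ys

def critic_loss(targets, q_values):
    """Mean squared TD error minimized by the critic."""
    n = len(targets)
    return sum((y - q) ** 2 for y, q in zip(targets, q_values)) / n
```

In the full algorithm the gradient of this loss updates the critic weights, and the chain rule through the critic (the sampled policy gradient) updates the actor.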
IV Runtime Exit Selection and Incremental Inference
During the compression process, the exit for an event is determined by a static policy, e.g. a lookup table (LUT). However, naively following the static policy at runtime can result in low average accuracy in the long term. For example, when the EH power is low in the long run, even if the system has sufficient energy to select the exit with the highest accuracy and energy cost for the current inference, a better decision can be to select an exit with lower energy cost and reserve energy for the following events. This dynamic exit selection can improve the average accuracy. Besides, if the confidence at the selected exit is low, an incremental inference that proceeds to the following exit can improve the accuracy. We propose an online algorithm to make these two sequential decisions.
At runtime, both the power trace and the event distribution are unknown in advance. To select the best exit for each event, we employ a lightweight RL algorithm, Q-learning [20]. Q-learning consists of the state set $S$, the action set $A$ and the reward function $R$. The state contains the current available energy $E$ and the charging efficiency $\eta$. Since both $E$ and $\eta$ are continuous values, to make the number of elements in $S$ finite, we discretize $E$ and $\eta$ with an appropriate step size. The action set represents all the possible exits, i.e. $A = \{1, \dots, K\}$. The reward $R$ is the accuracy of the selected exit. The agent aims to learn the optimal policy that maximizes the reward. When an event happens, the agent takes two steps: one selects the action, and the other updates the Q-table. The action (the exit) is selected by finding the highest Q-value in the current state, represented as $a = \arg\max_{a'} Q(s, a')$, where $Q(s, a)$ denotes the Q-value of the state-action pair $(s, a)$. The entry in the Q-table is updated as:
$$Q(s, a) \leftarrow Q(s, a) + \alpha \big[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \big] \qquad (16)$$
The overhead of Q-learning is negligible. It only needs a lookup table (LUT) with state-action pairs as entries, and the learning process simply updates the LUT by Eq.(16).
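The LUT-based selection and update can be sketched as follows (the state encoding as a discretized (energy, efficiency) pair and the hyperparameter values are illustrative):

```python
def select_action(Q, s, n_actions):
    """Greedy exit selection: argmax over Q-values in the current state.
    Unvisited entries default to 0."""
    return max(range(n_actions), key=lambda a: Q.get((s, a), 0.0))

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update: Q maps (state, action) -> value, where a
    state is a discretized (energy level, charging efficiency) pair and an
    action is an exit index."""
    best_next = max(Q.get((s_next, b), 0.0) for b in range(n_actions))
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q[(s, a)]
```

A dictionary-backed table like this fits comfortably in MCU RAM once the state space is discretized coarsely.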
To further improve the average accuracy, a second decision is made at the chosen exit. If the confidence of the result is low and the remaining energy is high, the algorithm can decide to propagate the input further to the next exit for higher accuracy. The decision is made based on the confidence of the result and the current available energy. We use the entropy of the result as the measure of confidence [18], and we use another Q-table to make this decision.
V Experimental Results
We conduct extensive experiments to demonstrate the effectiveness of our approaches in terms of non-uniform compression, IEpmJ and accuracy, FLOPs and latency, and runtime adaptation.
V-A Experimental Setup
The experiments target the TI MSP432 MCU. To power the MCU, we use a solar EH profile from [15]. The backbone of the multi-exit model is LeNet [10]. We use LeNet because most state-of-the-art DNNs designed for mobile devices cannot fit into typical MCUs even after compression. For example, MobileNetV2 [17] and DARTS [13] require 4.6MB and 6.6MB of weight storage, respectively, while a typical MCU has tens of KB of weight storage. We extend LeNet to four convolutional layers and equip it with two early-exits along the data path. The original network needs 580KB of weight storage when represented with 32-bit floating-point numbers. The FLOPs of the three exits are 0.4452M, 1.2602M and 1.6202M, with corresponding accuracies of 64.9%, 72.0% and 73.0%. The energy cost is 1.5mJ per million FLOPs. We use the CIFAR-10 dataset, and 500 events are randomly distributed across the duration of the EH power trace.
V-B Non-uniform Pruning and Quantization
Our approach effectively finds the pruning rate and quantization bitwidth allocation policy that maximizes the average accuracy under the model size and FLOPs constraints. Figure 4 shows the layer-wise preserve rate and quantization bitwidth. The FLOPs constraint is set to 1.15M FLOPs, and the target model size is set to 16KB. Under these constraints, our approach efficiently allocates the limited FLOPs and weight size budget to maximize the accuracy. For pruning, the convolutional layers are pruned more because they are more FLOPs-intensive than the fully-connected layers. Different from pruning, quantization allocates more precision to the convolutional layers by setting their bitwidth to 8. FCB21 and FCB31 are quantized to 1 bit, possibly because they have a large weight size and are less sensitive to data precision. The search takes 6 hours on an Nvidia P100 GPU.
V-C IEpmJ and Average Accuracy
The proposed approaches substantially outperform the SOTA baselines in terms of IEpmJ (Interesting Events per milliJoule) and, equivalently, the average accuracy of all events. We compare with three baselines. SonicNet is from the SOTA intermittent inference framework [4]. SpArSeNet is a network generated by a neural architecture search framework for MCUs [3]. LeNetCifar is LeNet [10] adapted for the CIFAR-10 dataset.
The IEpmJ results are shown in Figure 5. Our approach outperforms SonicNet, SpArSeNet and LeNetCifar by 3.6x, 18.9x and 0.28x, respectively. Our approach achieves 0.89 interesting events per millijoule, while SonicNet and SpArSeNet only achieve 0.25 and 0.05, respectively. During compression, our approach considers the accuracy and energy cost of each exit, the EH power trace and the event distribution to compress the network such that IEpmJ is maximized. In terms of the accuracy over all events, where the accuracy of a missed event is counted as 0, our approach achieves an average accuracy of 50.1%, while SonicNet, SpArSeNet and LeNetCifar only achieve 14.0%, 2.6% and 39.2%, respectively. As for the accuracy of the processed events only, our approach achieves 65.4%, slightly lower than the 75.4%, 82.7% and 74.7% of the baselines. This is because we aim to improve the long-term accuracy to maximize IEpmJ instead of the accuracy of a single event. Solely aiming at the per-inference accuracy generates networks with high energy cost and results in a high percentage of missed events, which degrades IEpmJ.
V-D FLOPs and Latency
FLOPs. Our approach effectively reduces the FLOPs of each exit to maximize the average accuracy of all events. Reducing FLOPs is important because, with lower FLOPs and lower energy cost per inference, the saved energy can be allocated to other events that would otherwise have been missed due to insufficient energy. Figure 6 shows the FLOPs of each exit before and after compression. The FLOPs are reduced by 0.31x, 0.44x and 0.67x for the three exits, respectively. The reduction ratio of each exit is automatically decided by our approach. In contrast, SonicNet has 2.0M FLOPs and SpArSeNet has 11.4M FLOPs because they do not consider the limited EH energy and only prioritize the per-inference accuracy. This results in high energy cost per inference, low IEpmJ and low average accuracy across all events because a large portion of the events are missed. LeNetCifar is manually designed by domain experts and has low FLOPs, which happens to fit the EH scenario well.
Latency. Our approach greatly reduces both the per-event latency and the per-inference latency. First, the per-event latency is measured from the occurrence of an event to the end of its inference. Across all the processed events, our approach improves the per-event latency by 7.8x, 10.2x and 3.15x over the three baselines. More specifically, the average latency of our approach is 18.0 time units (1 second per time unit), while the latencies of the three baselines are 139.9, 183.4 and 56.7 time units, respectively. The improvement shows that our approach smartly selects early-exits to quickly output a result when the EH energy is low, instead of waiting for multiple power cycles to reach the final exit as the baselines do. Second, our approach also improves the per-inference latency, which is measured from the start to the end of an inference. As shown in Figure 6, using FLOPs as a proxy for the per-inference latency, our approach improves the average per-inference latency by 4.1x, 23.2x and 0.46x over the three baselines.
V-E Runtime Adaptation
The average accuracy of all events is further improved by the runtime exit selection. The runtime adaptation effectively learns from the EH environment and selects the exit for each event to maximize the average accuracy, outperforming the static LUT by 10.2%. Figure 6(a) shows that the average accuracy of all events improves during the runtime adaptation as the lightweight Q-learning approach gradually learns to optimize the exit selection. Figure 6(b) shows the percentage and number of events exiting from each of the three exits. Compared with the static LUT, the Q-learning approach prioritizes Exit 1 (blue bar) to decrease the energy cost of each inference. Through this strategy adaptation, the Q-learning approach processes 11.2% more events than the static LUT. The overhead of Q-learning, which only updates its Q-table, is negligible.
VI Related Work
Intermittent Execution. EH techniques extract power from the ambient environment and provide an attractive power alternative in sensing scenarios where it is difficult to employ batteries [21]. Solar, wind and kinetic energy [9] are all promising EH sources. With an unstable power supply, EH-powered computing systems have to run intermittently [7]. Various optimizations and tools, such as Chain [2], have been proposed to ensure correctness and improve efficiency. Gobieski et al. [4] made the first step to implement DNNs on an intermittently-powered sensor, guaranteeing the correctness of DNN inference across multiple power failures. The drawback is that multiple power cycles are needed to finish one inference. Since the harvested power is usually weak and unpredictable, it takes an indefinite amount of time to obtain the final inference result.
Multi-Exit Networks. Multi-exit neural networks have been investigated in various studies. Instead of producing only one final inference result, these networks can produce early results to save time or energy. In [18, 6], a subset of the network is selected for faster inference by trading off accuracy. These approaches allow a dynamic trade-off between inference latency and accuracy. However, none of these works target EH-powered MCUs, which are constrained in weight storage and energy budget. The large weight size and FLOPs of their models are prohibitive for direct deployment on EH-powered MCUs; pruning and quantization are needed for deployment.
Network Compression. There are extensive explorations of network pruning and quantization. For quantization, [16] employs binary filters and inputs for CNNs, [19] automates the quantization of each layer with a learning-based method, and [14, 1] consider quantization during neural architecture search (NAS) for efficient hardware implementation [8]. For pruning, [5] employs RL to automatically explore the layer-wise pruning rate for channel pruning [11]. However, these pruning and quantization methods only consider networks with a single exit, which greatly degrades the accuracy of early-exits. Besides, the above approaches focus on either quantization or pruning, not both. To deploy multi-exit networks to MCUs, an automated approach that conducts quantization and pruning simultaneously while considering the accuracy of all exits is needed.
VII Conclusion
This work enables event-driven sensing and decision-making on EH-powered devices by deploying lightweight DNNs onto them. We introduce an intermittent inference model that provides timely and accurate results, and propose a two-phase approach to deploy multi-exit neural networks onto EH-powered MCUs. In the first phase, a power trace-aware and exit-guided network compression algorithm compresses the networks to maximize the overall accuracy. In the second phase, a runtime exit selection algorithm adapts to the dynamic EH environment and event distribution. Experimental results show superior accuracy and latency compared with state-of-the-art techniques.
Acknowledgement: This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). This research was also supported in part by resources provided by the University of Pittsburgh Center for Research Computing.
References
 [1] (2020) NASS: Optimizing secure inference via neural architecture search. arXiv preprint arXiv:2001.11854.
 [2] (2016) Chain: Tasks and channels for reliable intermittent programs. In ACM SIGPLAN Notices.
 [3] (2019) SpArSe: Sparse architecture search for CNNs on resource-constrained microcontrollers. arXiv preprint arXiv:1905.12107.
 [4] (2019) Intelligence beyond the edge: Inference on intermittent embedded systems. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
 [5] (2018) AMC: AutoML for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV).
 [6] (2017) Multi-scale dense convolutional networks for efficient prediction. arXiv preprint arXiv:1703.09844.
 [7] (2019) Q-learning based routing for transiently powered wireless sensor network: Work-in-progress. In Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis Companion, pp. 1–2.
 [8] (2019) Accuracy vs. efficiency: Achieving both through FPGA-implementation aware neural architecture search. In Proceedings of the 56th Annual Design Automation Conference, pp. 1–6.
 [9] (2018) Power management for kinetic energy harvesting IoT. IEEE Sensors Journal 18 (10), pp. 4336–4345.
 [10] (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
 [11] (2016) Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710.
 [12] (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
 [13] (2018) DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055.
 [14] (2019) On neural architecture search for resource-constrained hardware platforms. arXiv preprint arXiv:1911.00105.
 [15] Oak Ridge National Laboratory (ORNL), Rotating Shadowband Radiometer (RSR), Oak Ridge, Tennessee (Data). NREL Report No. DA550056512. http://dx.doi.org/10.5439/1052553.
 [16] (2016) XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision (ECCV).
 [17] (2018) MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 [18] (2016) BranchyNet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR).
 [19] (2019) HAQ: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 [20] (1992) Q-learning. Machine Learning 8 (3–4), pp. 279–292.
 [21] (2019) Work-in-progress: Cooperative communication between two transiently powered sensors by reinforcement learning. In 2019 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), pp. 1–2.