EdgeFlow: Open-Source Multi-layer Data Flow Processing in Edge Computing for 5G and Beyond

01/07/2018 ∙ by Chao Yao, et al. ∙ Peking University

Edge computing has evolved into a promising avenue for enhancing system computing capability by offloading processing tasks from the cloud to edge devices. In this paper, we propose a multi-layer edge computing framework called EdgeFlow. In this framework, nodes ranging from edge devices to cloud data centers are categorized into corresponding layers and cooperate in data processing. With the help of EdgeFlow, one can balance the trade-off between computing and communication capability so that tasks are assigned to each layer optimally. At the same time, resources are carefully allocated throughout the whole network to mitigate performance fluctuation. The proposed open-source data flow processing framework is implemented on a platform that can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a mobile sensing scenario, EdgeFlow can significantly reduce the task finish time and is more tolerant to run-time variation than traditional cloud computing and pure edge computing approaches. Potential applications of EdgeFlow, including network function virtualization, the Internet of Things, and vehicular networks, are also discussed at the end of this work.


I Introduction

As we move toward the 5G communication era, various modern applications, including the Internet of Things (IoT), vehicular networks, mobile caching, and E-health, have been generating tremendous amounts of data every day. This data explosion creates new challenges and requirements for equipment upgrades on each device and for the evolution of the computing framework throughout the whole network. Besides the deployment of more powerful servers in cloud data centers (CCs), the computation capabilities of wireless access points (APs), such as macro-cell base stations (MBSs), small-cell base stations (SBSs), and WiFi APs, have been improved continuously. In addition, most of them are nowadays equipped with Linux operating systems [1] to support the processing of complex computing programs. At the same time, the processing power of edge devices (EDs), such as Internet protocol cameras, mobile phones, personal laptops, and smart cars, has also increased rapidly thanks to improvements in System-on-Chip (SoC) platforms.

Along with the development of the computing capabilities of the CCs, the APs, and the EDs, the computing framework has evolved as well. Traditionally, the EDs and the APs are only responsible for data collection and task submission to the CCs. Such a computing model has a number of limitations. The sustained colossal computation load imposes an enormous resource burden on the CCs, in terms of computing resources, energy supply, and cooling systems. In addition, the remote geographic location of the CCs results in long transmission latency from the EDs to the APs and finally to the CCs, especially when the network includes a great number of EDs and APs but has only limited communication resources. Therefore, a promising solution, namely edge computing, has been proposed to leverage the idle computing resources at the edge of the network, i.e., at the EDs and the APs, and to save communication resources as well [2].

I-A Existing Edge Computing Platforms

In an edge computing scenario, part of the computing tasks can be offloaded from the cloud end (e.g., the CC) to the edge end of the network, i.e., the EDs and the APs. When the data have already been processed at the edge, only a small amount of results, rather than a huge quantity of raw data, needs to be transmitted to the CCs. Thus, the transmission pressure is reduced as well [3].

Beyond the concept, a number of practical edge computing platforms have already been designed; some typical ones are listed as follows.

  • Cloudlet: Cloudlet is proposed to reduce the transmission delay by letting the data generators (usually the EDs) send their computing tasks to the nearest deployed servers rather than to the remote CCs; the WiFi APs are selected to collect data from the EDs and then forward them to the nearby servers [4].

  • Femto Cloud: Femto Cloud is a fog computing platform that leverages nearby underutilized EDs to serve computing tasks at the network edge; it uses a greedy heuristic optimization model to schedule the incoming tasks [5].

  • Paradrop: Paradrop is an edge computing platform deployed on smart WiFi routers [6]. With a capable computer built into the routers, Paradrop can enable new applications involving video, e.g., augmented reality, sensor-actuator coordination, and educational applications, without the assistance of the remote CCs.

  • IOx: IOx is a fog device product from Cisco [7]. Similar to Paradrop, IOx works by hosting applications in a guest operating system running directly on a smart router. It is mainly developed to support ubiquitous IoT business applications.

Although existing platforms demonstrate the potential of implementing edge computing in practical networks, they have some common limitations. First, most of them only exploit the computing resources at one end of the edge. For example, Cloudlet, Paradrop, and IOx try to leverage the computing power of the APs, while Femto Cloud tries to utilize the processing power of the EDs. However, the coordination of all computing resources throughout the CCs, the APs, and the EDs is still not well exploited. Second, all existing platforms only consider the reduction of processing time achieved when the EDs and the APs process more data. Although this alleviates the transmission pressure, it may aggravate the computing pressure on the EDs and the APs at the edge [8]. How to balance the computing and communication trade-off thus remains an open problem. Third, existing solutions are normally built on the assumption that the run-time environment is stable. However, in real scenarios, run-time variations, such as a data burst at some ED, can affect processing efficiency and may cause significant performance fluctuation.

I-B EdgeFlow

Targeting the issues mentioned above, the EdgeFlow framework is proposed to coordinate the task partitioning among all data processing devices and to handle the computing and communication trade-off through optimal resource allocation.

EdgeFlow is composed of multiple layers. In this work, we categorize all devices into three layers. At the bottom layer, the various EDs are located; the data in EdgeFlow are continuously generated by each ED as a flow. The middle layer includes different types of APs. The top layer is normally a CC. Note that the system can be further extended to more layers as required in real scenarios; we focus on the three-layer case in this work to simplify the discussion.

Each device in EdgeFlow possesses some computing resources, e.g., CPUs. When a user submits a data processing task (usually notified directly to the CC), EdgeFlow can assign part of the task to the EDs, part to the APs, and the rest to the CC. The task offloading directly determines how many computing resources each device needs. When the data have been fully processed at a lower layer, only the results need to be transmitted to the upper layer; otherwise, the raw data need to be transmitted to the upper layer. Then, the limited communication resources supporting the transmission from the lower layer to the upper layer, especially the wireless resources, e.g., time slots, can also be allocated optimally in EdgeFlow. The algorithm for the entire task division and the computing and communication resource allocation is summarized in a time-aligned task offloading (TATO) scheme. For implementation, the demo platform is deployed on Intel Next Units of Computing (Intel NUCs) and Universal Software Radio Peripherals (USRPs) [9], and is available in [10]. It can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a face recognition scenario, EdgeFlow can significantly reduce the task finish time and is tolerant to run-time variation.

The rest of the article is organized as follows. The system architecture is given in Section II. The system schedule is presented in Section III, and the TATO scheme is presented in Section IV. The implementation is described in Section V. The potential applications of EdgeFlow are discussed in Section VI. Finally, the conclusion is given in Section VII.

II System Architecture

The system architecture of EdgeFlow is shown in Fig. 1. The bottom layer includes a large number of EDs, such as wireless sensors, mobile phones, and personal laptops. The middle layer includes the APs, such as SBSs, MBSs, and WiFi APs. The top layer is a CC comprising multiple servers. Each ED is connected to at most one AP by a wireless link, while each AP is connected to the CC by a wired link. Every node in this architecture is assumed to have a certain amount of computing and communication capability. An online user can request information from the system by initiating a task and assigning it to the CC. The CC completes the task with the help of the APs and the EDs by utilizing their computing and communication abilities. In the following subsections, the functions of these three layers are described in detail.

Fig. 1: The architecture of the EdgeFlow framework.

II-A Edge Devices

Generally, the data flow related to the users' tasks is generated at the EDs [11]. The EDs are responsible for sensing, collecting, and generating raw data. With a certain amount of computing ability, each ED can process some of the raw data and submit the unprocessed data and the processed results together to its corresponding AP.

II-B Access Points

Each AP receives the raw data from the EDs it controls. Correspondingly, to facilitate the transmission between the AP and the EDs, the allocation of wireless transmission resources among the EDs is also scheduled by the AP. Besides, similar to each ED, each AP can process a further part of the raw data from the EDs and then use the wired link to submit data to the CC at the top layer. These data include the results processed by the EDs and the APs, as well as the rest of the raw data.

II-C Cloud Center

The CC collects the data from the APs through the wired links and processes the rest of the raw data. Then, the CC forwards the final result to the user who generated the task. Furthermore, the CC gathers the global information and carries out the task offloading strategy, i.e., it decides the amount of data processed at each device on each layer and calculates the optimal computing and communication resource allocation.

III System Schedule

In this section, the system schedule of EdgeFlow is presented, as shown in Fig. 2. There are four procedures, namely task notification, system registration, task offloading, and data processing, which are described in detail below.

Fig. 2: The system schedule of EdgeFlow.

III-A Task Notification

The user submits a task to the CC. Then, the CC broadcasts the task to the APs. Finally, each AP broadcasts the task notification information to the EDs it controls. The notification is only a short message that tells the EDs and the APs that the CC is ready to deploy the application.

III-B System Registration

After receiving the task notification information, each device estimates its own computing capability and decides whether to participate in the task (depending on its idle computing resources). Then, the EDs and the APs upload their registration information, including the amount of available computing and communication resources, to the CC. After that, the CC can build a logical graph of the involved nodes annotated with their resource information.
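As an illustration, a registration message can be thought of as a small record of a node's spare capacity. The sketch below is a hypothetical rendering of such a record; the field names are our own assumptions, not EdgeFlow's actual message format.

```python
# Hypothetical sketch of a registration record uploaded to the CC.
# Field names are illustrative assumptions, not EdgeFlow's actual format.
from dataclasses import dataclass

@dataclass
class Registration:
    device_id: str
    layer: str               # "ED" or "AP"
    cpu_throughput: float    # data the node can process per second
    uplink_speed: float      # achievable transmission speed to the next layer

# The CC could build its logical graph from a list of such records.
nodes = [Registration("ed-1", "ED", 1.0, 2.0),
         Registration("ap-1", "AP", 4.0, 8.0)]
```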

III-C Task Offloading

After the CC receives the information from the available EDs and APs, it determines a task offloading strategy, TATO, which is introduced in detail in Section IV. Based on TATO, the CC distributes the task execution environment, the task division files, and the resource allocation configurations to the EDs and the APs. The task execution environment tells each node how to process the task and only needs to be distributed once per task. The task division file tells each device how much data it will process. The resource allocation configuration tells each device how many computing resources to devote to the task. Besides, the schedule configuration also tells each AP how to allocate the wireless communication resources among the EDs it controls and how much wired bandwidth it can use for data submission to the CC.
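To make these configuration artifacts concrete, the sketch below models them as a simple record. The structure and values are hypothetical illustrations under the assumptions of this section, not the framework's real file format.

```python
# Hypothetical sketch of the task offloading configuration sent to each node.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OffloadConfig:
    device_id: str
    data_fraction: float                 # task division: share of data to process
    cpu_share: float                     # computing resources devoted to the task
    # For APs only: wireless slots per controlled ED and wired uplink budget.
    wireless_slots: Dict[str, int] = field(default_factory=dict)
    wired_bandwidth_mbps: float = 0.0

# Example: an AP processes 30% of the data and splits its slots between two EDs.
ap_cfg = OffloadConfig("ap-1", data_fraction=0.3, cpu_share=0.8,
                       wireless_slots={"ed-1": 4, "ed-2": 4},
                       wired_bandwidth_mbps=50.0)
```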

III-D Data Processing

After the CC completes the task offloading scheme, the system starts the processing procedure. The data processing has five stages.

  • Data Processing at Each ED: Each ED collects the raw data and processes the part of the data decided by TATO.

  • Data Submission to Each AP: Each ED sends the processed results and the rest of the raw data to the corresponding AP through the wireless link.

  • Data Processing at Each AP: Each AP processes the part of the data decided by TATO.

  • Data Submission to the CC: Each AP delivers its own processed data, the processed data from its controlled EDs, and the rest of the raw data to the CC through the wired link.

  • Data Processing at the CC: The CC processes the rest of the raw data. Finally, the CC summarizes the results and submits them to the user.

To guarantee the efficiency of the task-offloading strategy, the EdgeFlow framework periodically re-estimates the computing and communication resources. When the CC detects a significant change in resource conditions, the framework updates the task-offloading strategy in a timely manner.

IV Time-Aligned Task Offloading

Before presenting the details of TATO, we formulate the task offloading as a mathematical problem. Then, we explain TATO for the case with one ED and one AP and for the case with multiple EDs and multiple APs. Finally, we analyze the properties of TATO from the perspective of the generation speed of the data flow.

IV-A Analytical Model

Fig. 3: The pipeline of the data flow in the EdgeFlow system.

After one user submits a task, the data are generated at a speed of $v$ on each ED, i.e., as a data flow. Then, in any given time span $T$, the data that each ED generates are $vT$. We use the task division percentage parameters $\alpha_{ED}$, $\alpha_{AP}$, and $\alpha_{CC}$, with $\alpha_{ED}+\alpha_{AP}+\alpha_{CC}=1$, to describe the fractions of data that each ED, each AP, and the CC need to process, respectively. Then, as depicted in Fig. 3, to cope with the data flow at speed $v$, there are five stages to process or transmit the data, whose durations can be calculated as follows:

  • Data Processing Time at Each ED, $t_p^{ED}$: The amount of data that each ED needs to process is determined by $\alpha_{ED}$, $v$, and $T$. Then, $t_p^{ED}$ can be calculated as $t_p^{ED} = \alpha_{ED} vT / c_{ED}$, where $c_{ED}$ is the computing throughput of each ED in one second.

  • Data Submission Time to Each AP, $t_s^{ED}$: The data that an ED needs to transmit to the AP include the data processed by the ED and the rest of the raw data. Then, $t_s^{ED}$ can be calculated as $t_s^{ED} = (\rho\,\alpha_{ED} + 1 - \alpha_{ED})\, vT / r_{ED}$, where $\rho$ is the compression ratio after the data processing and $r_{ED}$ is the transmission speed, which depends on the wireless communication resources allocated to the ED [12].

  • Data Processing Time at Each AP, $t_p^{AP}$: The raw data arriving at each AP can be calculated as $(1-\alpha_{ED})\, vT$. Among them, the amount of data that each AP needs to process is $\alpha_{AP} vT$. Then, $t_p^{AP}$ can be calculated as $t_p^{AP} = \alpha_{AP} vT / c_{AP}$, where $c_{AP}$ is the computing throughput of each AP in one second.

  • Data Submission Time to the CC, $t_s^{AP}$: The total data that an AP needs to submit include the data processed by the ED, the data processed by the AP itself, and the rest of the raw data. Similar to the analysis for the ED, the data submission time can be calculated as $t_s^{AP} = (\rho\,(\alpha_{ED}+\alpha_{AP}) + 1 - \alpha_{ED} - \alpha_{AP})\, vT / r_{AP}$, where $r_{AP}$ is the transmission speed, which depends on the wired bandwidth allocated to the AP.

  • Data Processing Time at the CC, $t_p^{CC}$: The raw data arriving at the CC can be calculated as $\alpha_{CC} vT = (1 - \alpha_{ED} - \alpha_{AP})\, vT$. Then, $t_p^{CC}$ can be calculated as $t_p^{CC} = \alpha_{CC} vT / c_{CC}$, where $c_{CC}$ is the computing throughput of the CC.

These stages can be regarded as working concurrently (e.g., once the ED transfers a little data to the AP, the AP can quickly start to process them). The whole data processing and transmission on the data flow can be regarded as a pipeline, where the EDs, the APs, and the CC are workers who process their incoming data on an assembly line in order. The processing or submission time of a layer is the time it takes to process its assigned data or to transmit all the data to the upper layer. In this pipeline system, the task finish time depends on the longest duration among the five stages above, $t_{\max} = \max\{t_p^{ED}, t_s^{ED}, t_p^{AP}, t_s^{AP}, t_p^{CC}\}$. When the time of a data processing stage equals $t_{\max}$, the computing throughput is the system's bottleneck. When the time of a data transmission stage equals $t_{\max}$, the transmission speed is the bottleneck, which limits the task finish time. The objective of TATO is to minimize the longest duration among these five stages.
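For concreteness, the five stage durations and the bottleneck $t_{\max}$ can be computed as in the following sketch. The symbol names mirror the reconstructed notation above, and the numeric values in the example are illustrative assumptions, not measurements from the paper.

```python
# A minimal sketch of the Section IV-A pipeline model for one ED, one AP,
# and the CC. Values in the example call are illustrative only.

def stage_times(a_ed, a_ap, v, T, rho, c_ed, r_ed, c_ap, r_ap, c_cc):
    """Return the five stage durations and the pipeline bottleneck t_max."""
    a_cc = 1.0 - a_ed - a_ap                        # the CC processes the rest
    data = v * T                                    # raw data generated in span T
    t_p_ed = a_ed * data / c_ed                     # processing at the ED
    t_s_ed = (rho * a_ed + 1 - a_ed) * data / r_ed  # upload to the AP
    t_p_ap = a_ap * data / c_ap                     # processing at the AP
    t_s_ap = (rho * (a_ed + a_ap) + 1 - a_ed - a_ap) * data / r_ap  # upload to CC
    t_p_cc = a_cc * data / c_cc                     # processing at the CC
    times = [t_p_ed, t_s_ed, t_p_ap, t_s_ap, t_p_cc]
    return times, max(times)                        # t_max bounds the finish time

# Example: 40% at the ED, 30% at the AP, 30% left for the CC.
times, t_max = stage_times(0.4, 0.3, v=1.0, T=1.0, rho=0.1,
                           c_ed=1.0, r_ed=2.0, c_ap=4.0, r_ap=8.0, c_cc=16.0)
```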

IV-B TATO with One ED and One AP

TATO is proposed to help divide the task and allocate the computing and communication resources. The network with one ED, one AP, and the CC is used to demonstrate the computing and communication trade-off, the time-aligned principle, and the specific process of TATO.

IV-B1 Computing and Communication Tradeoff

The computing and communication tradeoff exists on each device. For example, when an ED processes more data, it consumes more computing resources; however, since more data has been compressed into results, the ED transmits less data to the AP. A similar tradeoff can be observed at the AP. Thus, TATO first balances the computing and communication tradeoff based on the durations calculated in Section IV-A.

IV-B2 Time-Aligned Principle

Since the whole data processing and transmission can be regarded as a pipeline, the ideal case is that all parts of EdgeFlow keep working. That is to say, the durations of all the data processing and transmission stages remain equal, which we call the time-aligned principle. However, this ideal case can hardly occur when we analytically solve the time minimization problem in Section IV-A: the durations of some stages cannot reach the longest time $t_{\max}$, which indicates that they work faster than the slowest stage and that part of their computing (or communication) resources is wasted. Fortunately, the time-aligned principle still helps even when the ideal case cannot occur. In the non-ideal cases, when we make the durations of as many stages as possible equal to the longest time, the time minimization problem can be proved to be solved analytically. This principle guides the specific design of TATO.
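Under the notation of Section IV-A, the problem that TATO solves can be summarized as the following min-max program (a reconstruction consistent with the definitions above, not the paper's verbatim formulation):

```latex
\min_{\alpha_{ED},\,\alpha_{AP},\,\alpha_{CC}}\;
\max\bigl\{\, t_p^{ED},\; t_s^{ED},\; t_p^{AP},\; t_s^{AP},\; t_p^{CC} \,\bigr\}
\qquad \text{s.t.}\qquad
\alpha_{ED}+\alpha_{AP}+\alpha_{CC}=1,\quad
\alpha_{ED},\,\alpha_{AP},\,\alpha_{CC}\ge 0.
```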

IV-B3 TATO Scheme

TATO is divided into three steps, which are stated as follows and illustrated in Fig. 4.

Fig. 4: TATO for the network with one ED, one AP, and the CC.
  • Step 1. Task division at the ED: When $t_p^{ED} > t_s^{ED}$, it takes more time for the ED to process the data than to transmit them. This indicates that the ED uses too many computing resources and wastes some transmission resources. Therefore, the data processed by the ED will be reduced and the ED will transmit more raw data. If $t_p^{ED} < t_s^{ED}$, it takes more time to transmit the data to the AP, which means that the computing resources of the ED are not fully used. Therefore, TATO will let the ED process more data. In this way, the algorithm reaches the optimal result where $t_p^{ED}$ and $t_s^{ED}$ achieve the optimal trade-off point at the ED, $t_p^{ED} = t_s^{ED}$. (A special case is that $t_p^{ED} < t_s^{ED}$ still holds even when all the data are processed by the ED, which indicates that the transmission is too slow; in that case, the optimal solution is to let all the data be processed by the ED.)

  • Step 2. Task division at the AP: To fully use the computing resources at the AP, we initialize the task division to maximize $\alpha_{AP}$ under the limitation that $t_p^{AP} \le t_p^{ED}$. Then, when $t_s^{AP} \le t_p^{AP}$, the transmission speed is not the bottleneck, and the algorithm achieves an optimal solution. When $t_s^{AP} > t_p^{AP}$, it takes more time to transmit the data to the CC, which makes the data wait for transmission. Then, the system allocates more data to the ED for processing and returns to Step 1. Through iterations (or analytical solutions), the algorithm reaches the optimal result, where $t_p^{AP}$ and $t_s^{AP}$ achieve the optimal trade-off point at the AP, $t_p^{AP} = t_s^{AP}$.

  • Step 3. Task division at the CC: At the CC, all the rest of the data should be processed. When $t_p^{CC} \le t_s^{AP}$, EdgeFlow reaches an optimal solution. When $t_p^{CC} > t_s^{AP}$, it takes more time for the CC to process the data. Then, TATO will process more data at the ED and the AP, repeating Steps 1 and 2, to reduce the processing time $t_p^{CC}$. With iterations (or analytical solutions), the algorithm reaches the optimal result, where $\alpha_{ED}$, $\alpha_{AP}$, and $\alpha_{CC}$ achieve the optimal trade-off point at the CC, $t_p^{CC} = t_s^{AP}$.

Through the three steps above, the system achieves the optimal solution to minimize the task finish time.
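As a sanity check, the same optimum can be located by a brute-force search over the division percentages, as in the toy sketch below. It reuses stage_times() from the Section IV-A sketch and is only an illustrative stand-in for verification, not the TATO algorithm itself.

```python
# Toy brute-force search for the division percentages minimizing t_max.
# Reuses stage_times() from the Section IV-A sketch; TATO reaches the same
# point via its three time-aligned steps instead of exhaustive search.
import itertools

def min_tmax(v, T, rho, c_ed, r_ed, c_ap, r_ap, c_cc, steps=101):
    grid = [i / (steps - 1) for i in range(steps)]
    best = None
    for a_ed, a_ap in itertools.product(grid, grid):
        if a_ed + a_ap > 1.0:
            continue                      # the CC processes the remainder
        _, t_max = stage_times(a_ed, a_ap, v, T, rho,
                               c_ed, r_ed, c_ap, r_ap, c_cc)
        if best is None or t_max < best[0]:
            best = (t_max, a_ed, a_ap)
    return best                           # (t_max, alpha_ED, alpha_AP)
```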

IV-C TATO with Multiple EDs and Multiple APs

The network with multiple EDs and multiple APs is further considered to demonstrate the resource allocation among devices. Based on observations from the case with one ED and one AP, the following corollaries of TATO can be obtained intuitively.

IV-C1 Computing Resources Allocation

Since the computing resources are held independently by each device, each device can independently adjust its computing throughput and the computing resources provided for the task. As observed from the case with one ED and one AP, the optimal point of TATO is achieved when all devices make full use of their computing resources, and this can also be proved for the cases with multiple EDs and multiple APs. Then, TATO divides the tasks so that the devices on the same layer have the same data processing time when all of them make full use of their computing resources, as sketched below.
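A minimal sketch of this same-layer division, under the notation of Section IV-A: if all devices on a layer must finish at the same time $t = d_i / c_i$, each device's data share $d_i$ must be proportional to its throughput $c_i$. The function name and the numbers are illustrative assumptions.

```python
# Divide a layer's data so devices with throughputs c_i finish simultaneously:
# equal time t = d_i / c_i for all i implies d_i proportional to c_i.
def divide_by_throughput(total_data, throughputs):
    total_c = sum(throughputs)
    return [total_data * c / total_c for c in throughputs]

# Three EDs with throughputs 1, 2, and 5 units/s share 100 units of data:
print(divide_by_throughput(100.0, [1.0, 2.0, 5.0]))  # [12.5, 25.0, 62.5]
```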

IV-C2 Communication Resources Allocation

For the wireless communication resources, each AP can allocate them among the multiple EDs it controls. The time-aligned principle also works in this case: TATO tries to keep each transmission time less than or equal to the data processing time on the same device, and then makes the durations of as many transmission stages as possible equal to the longest stage time $t_{\max}$. For the wired transmission bandwidth, we assume it is independent among different APs; it is then intuitive to make full use of the wired bandwidth of each AP.

Similar to the case with one ED and one AP, TATO for multiple devices can also be divided into three steps. Due to the page limit, the analogous statements are omitted.

IV-D Performance Analysis on the Data Generation Speed

Besides the optimality of the computing and communication tradeoff, the task division, and the computing and transmission resource allocation, we discuss the properties of TATO under various data generation speeds.

IV-D1 Tasks with Light Data

Light data indicates that the task finish time $t_{\max}$ is shorter than the data arriving period $T$. Thus, each device in EdgeFlow has at least $T - t_{\max}$ time to deal with other tasks. Hence, when multiple tasks exist in the network, TATO has the potential to support them as long as the sum of their task finish times is less than $T$. In this paper, we only consider one task for the implementation; the multi-task case is left as a future direction.

IV-D2 Tasks with Heavy Data

Heavy data indicates that the task finish time $t_{\max}$ is longer than the data arriving period $T$. In other words, a data burst happens, and the raw data accumulate on each device. TATO tends to make the excess processing times on the various devices equal, which spreads the overloaded data uniformly across the devices. The advantage is that when the burst vanishes, EdgeFlow processes the accumulated data quickly in a parallel manner and recovers for new tasks, as the toy trace below illustrates.
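The following sketch illustrates this accumulate-and-drain behavior under an assumed fixed per-period pipeline capacity; the function and the numbers are illustrative, not measurements from the experiments.

```python
# Backlog grows by (arrival - capacity) during a burst and drains afterwards.
def buffer_trace(arrivals, capacity):
    backlog, trace = 0.0, []
    for a in arrivals:
        backlog = max(0.0, backlog + a - capacity)
        trace.append(backlog)
    return trace

# A burst of 3 units/period against a pipeline capacity of 2 units/period:
print(buffer_trace([1, 1, 3, 3, 3, 1, 1, 1], capacity=2))
# -> [0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
```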

V Implementation and Evaluation

In this section, we provide experiments to analyze how EdgeFlow processes the data flow. They are conducted on a simple scenario similar to a face recognition application. In this scenario, each ED has a camera that collects image data. The application aims to recognize pedestrians' faces and crop out the face regions, which are delivered to the CC. After analyzing the face regions, the CC performs the appropriate action.

Fig. 5: The implementation for the EdgeFlow framework.

V-A Experimental Setup

As depicted in Fig. 5, there are four EDs, two APs, and one CC. A single server stands for the CC layer, and two NUC nodes communicating with it act as the two APs. The bandwidth of the wired link between each AP and the CC is set to Mbps, which is reasonable for the scale of wireless mesh backbones [13]. To emulate a wireless network with limited resources, each AP node connects to two ED nodes over USRP devices, which run at a bandwidth of MHz and a transmission power of dBm [14]. To emulate the difference in computing capabilities among the layers, the CPU frequencies of each ED, AP, and the CC are limited to Hz, Hz, and Hz, respectively. By default, the rate of image arrival from the camera is one packet per second, and the average compression ratio after data processing is 10%.

V-B Performance Analysis

To validate the claim that TATO outperforms the other three schemes, we conduct experiments in two scenarios. Pure cloud computing means the input stream is forwarded to the CC directly, and all the processing work is accomplished centrally. Pure edge computing means each ED handles all of its input tasks and delivers only the results to the cloud. Cloudlet means each ED offloads the face recognition tasks to the corresponding Cloudlet server, which is deployed at the AP.

Fig. 6: Comparison of TATO and the heuristic methods (pure cloud computing, pure edge computing, and Cloudlet) in terms of the image size and system robustness.

In the first experiment, we adjust the size of the images and observe the resulting average task finish time. The image size depends on the application's monitoring range, and more image data requires more computation and transmission resources. The task finish time represents the response time from data generation to the CC performing the appropriate action. This experiment studies the efficiency of the schemes under different task burdens. As shown in Fig. 6(a), TATO is plainly superior in most cases. As the size of the input data rises, the system may start to run out of resources, and unprocessed data start to accumulate. It can be observed that the other three schemes reach their bottlenecks earlier than our scheme, i.e., they tolerate smaller data sizes.

In the second experiment, we analyze the system robustness for tasks with heavy data, reflected by how fast the buffer size recovers to the stable state after a data burst. The buffer size is the total number of pending images, which represents the severity of the data burst. As shown in Fig. 6(b), the first burst causes data accumulation for the pure edge computing scheme, while the other three schemes are hardly affected. After that, a bigger burst arises and affects all three heuristic schemes. On the contrary, with the help of TATO, EdgeFlow shows the strongest robustness to tasks with heavy data.

Based on these experiments, when the EDs and the APs possess some communication and computation resources, TATO can lead to a high system throughput by coordinating the computing resources over all layers, and it tolerates tasks with heavy data well.

VI Potential Applications

In the following, we introduce three potential applications of the EdgeFlow framework in 5G communication networks and beyond. In addition, we clarify the limitations of TATO.

VI-A Network Function Virtualization

Network function virtualization (NFV) is a network architecture that virtualizes network functions onto general-purpose, high-volume servers. Thanks to virtualization technology, there are abundant free computing and communication resources on the APs that can carry out extra jobs. However, how to efficiently utilize the available resources is a significant challenge. EdgeFlow provides an ideal way to leverage the benefits of function virtualization: with TATO, it can coordinate the task division among all the virtualized APs. In addition, EdgeFlow is designed on top of the open-source Linux operating system, so it can be deployed directly in the NFV architecture without adding any new type of equipment.

VI-B Internet of Things

There is a large number of IoT sensor applications whose data can be mined and analyzed, such as the smart city [15]. It is evident that the excessive demand from IoT sensors will quickly overwhelm the processing speed of the traditional cloud computing architecture. With the involvement of the EDs directly connected to the sensors, and of the APs, EdgeFlow can enhance the system's computing capability to handle the explosive data flows in IoT scenarios. The data processing before the CC shrinks the amount of data traffic, which relieves the communication pressure caused by the limited wireless communication resources.

VI-C Vehicular Networks

Vehicular network technology senses vehicles' behaviors and thus enhances traffic safety. Researchers estimate that each car generates more than one gigabyte of data every second. The data generated by a vehicle require real-time processing to make the right decisions, which severely affects traffic safety. Thus, the computing tasks must be offloaded to the vehicles and the roadside units. EdgeFlow is an excellent candidate for vehicular networks, as it can reduce the response time by coordinating the computing resources over all layers. Besides, its robustness allows it to handle traffic congestion scenarios efficiently.

VI-D Limitations and Future Work

EdgeFlow, however, is unsuitable for some scenarios. The optimal solution with EdgeFlow requires the data to be compressed by the computation procedure. However, some applications do not satisfy this requirement (e.g., in some wireless monitoring scenarios, the applications not only analyze the data but also store the historical data). In these scenarios, the computation procedure does not significantly decrease the amount of data, so pre-computing at the EDs and the APs cannot alleviate the transmission pressure. Future work should analyze how to adapt the task-offloading strategy to these unfavorable scenarios.

VII Conclusion

In this paper, we proposed an open-source multi-layer data flow processing framework, EdgeFlow, which enables the coordination of computing resources throughout the whole network. The task-offloading scheme in EdgeFlow, TATO, achieves the trade-off between computing and communication resources and divides tasks among the layers optimally. Through experiments in a face recognition scenario, TATO significantly reduces the task finish time and shows high tolerance to tasks with heavy data. The framework has also shown its potential for 5G communication networks and beyond in typical applications such as NFV, IoT, and vehicular networks.

References

  • [1] A. Checko, H. L. Christiansen, Y. Yan, and L. Scolari, “Cloud RAN for Mobile Networks–A Technology Overview,” IEEE Commun. Surveys & Tutorials, vol. 17, no. 1, pp. 405-426, Sept. 2014.
  • [2] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things J., vol. 3, no. 5, pp. 637-646, Oct. 2016.
  • [3] P. Mach and Z. Becvar, “Mobile Edge Computing: A Survey on Architecture and Computation Offloading,” IEEE Commun. Surveys & Tutorials, vol. 19, no. 3, pp. 1628-1656, Mar. 2017.
  • [4] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The Case for VM-Based Cloudlets in Mobile Computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, Oct. 2009.
  • [5] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, “Femto Clouds: Leveraging mobile devices to provide cloud service at the edge,” in Proc. IEEE 8th Int. Conf. Cloud Computing, New York, NY, Aug. 2015, pp. 9-16.
  • [6] D. F. Willis, A. Dasgupta, and S. Banerjee, “Paradrop: a multi-tenant platform for dynamically installed third party services on home gateways,” in Proc. ACM SIGCOMM Wksp. Distributed Cloud Computing, Maui, Hawaii, Sept. 2014, pp. 43-44.
  • [7] S. Yi, C. Li, and Q. Li, “A Survey of Fog Computing: Concepts, Applications and Issues,” in Proc. ACM MobiHoc Wksp. Mobile Big Data, New York, NY, Jun. 2015, pp. 37-42.
  • [8] T. G. Rodrigues, K. Suto, H. Nishiyama, and N. Kato, “Hybrid Method for Minimizing Service Delay in Edge Cloud Computing Through VM Migration and Transmission Power Control,” IEEE Trans. on Comput., vol. 66, no. 5, pp. 810-819, Oct. 2017.
  • [9] H. Zhu, C. Fang, Y. Liu, C. Chen, M. Li, and X. S. Shen, “You can jam but you cannot hide: defending against jamming attacks for geo-location database driven spectrum sharing,” IEEE J. Sel. Areas Commun., vol. 34, no. 10, pp. 2723-2737, Sept. 2016.
  • [10] The EdgeFlow framework is available at https://github.com/sirius93123/EdgeFlow.
  • [11] I. Vilajosana, J. Llosa, B. Martinez, and M. Domingo-Prieto, “Bootstrapping smart cities through a self-sustainable model based on big data flows,” IEEE Commun. Mag., vol. 51, no. 6, pp. 128-134, Jun. 2013.
  • [12] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang, “Energy-Efficient Offloading for Mobile Edge Computing in 5G Heterogeneous Networks,” IEEE Access, vol. 4, no. 99, pp. 5896-5907, Aug. 2017.
  • [13] N. Kato, Z. M. Fadlullah, B. Mao, F. Tang, O. Akashi, T. Inoue, and K. Mizutani, “The Deep Learning Vision for Heterogeneous Network Traffic Control: Proposal, Challenges, and Future Perspective,” IEEE Wireless Commun., vol. 24, no. 3, pp. 146-153, Dec. 2016.
  • [14] S. Fang, Y. Liu, W. Shen, H. Zhu, and T. Wang, “Virtual multipath attack and defense for location distinction in wireless networks,” IEEE Trans. on Mobile Computing, vol. 16, no. 2, pp. 566-580, Feb. 2017.
  • [15] P. L. Lau, N. Wijerathne, B. K. K. Ng, and C. Yuen, “Sensor Fusion for Public Space Utilization Monitoring in a Smart City,” IEEE Internet of Things J., Aug. 2017.