The massive proliferation of personal computing devices is opening up new human-centered designs that blur the boundaries between humans and machines (Sarros2018edge). The frontier of data-management research now lies in so-called edge computation and communication: an architecture of one or more collaborating multitudes of computing nodes positioned within edge networks, with access to cloud-based services. Such a mediating level carries out a substantial amount of data storage and processing in order to reduce retrieval time, retain more control over the data than cloud-based services allow, and consume fewer resources and less energy, thereby reducing the workload (wei2016; eciot).
The edge-computing paradigm has multiple advantages. Firstly, an edge node can reduce the backhaul traffic load by providing a certain amount of computing capability, which is significant for applications such as online games that need to transmit 60 or even 120 frames per second. As an alternative solution, the server only sends parameters such as the character position, timestamp and attribute changes (common data), and leaves the edge node to compute and render the visual image. Secondly, thanks to the large number of edge nodes deployed in 5G and to big-data analysis based on user preferences, popular contents can be acquired in advance by the interconnecting edge devices, which are only one hop away from the user.
However, edge computing also faces many challenges. Firstly, the operation and processing capabilities of an edge device are limited and can fail to meet the demands of real-time services, data optimization and application intelligence. Secondly, the intelligence of most typical edge-computing services is embodied only in artificial-intelligence (AI)-enabled data storage and processing on the edge; intelligence is missing from the aspects of behavior feedback, automatic networking, load balancing and data-driven network optimization.
Cognitive computing originates from cognitive science. It now enables machines to achieve “brain-like” cognitive intelligence through an interactive cognition loop among the machine, cyberspace and humans. Compared to big-data analytics, it possesses the following features: 1) analysis of the existing data and information in cyberspace, to improve the intelligence of the machine; 2) reinterpretation and explanation by the machine of the information in the existing cyberspace, accordingly generating new information, with humans also participating in this process; 3) the machine's cognition of the human, which provides a more intelligent cognitive service. Its enabling paradigms (e.g., agent-based computing) have been researched, and related concrete applications based on cognitive Internet of Things (IoT) platforms and frameworks have been studied in (agent16; gian15; gian14).
Nevertheless, cognitive-computing applications mainly depend on machine-learning models trained in the cloud, with real-time inference requests made by end devices; so far this has been the most common deployment mode of cognitive services. The problem with such a mode is the large latency of the network operation and the service delivery. If, instead, the cognitive service is deployed on the network edge, the latency of the network response to a user request is greatly reduced, so research on edge deployment of model training and inference is rapidly increasing.
Therefore, considering both the advantages and disadvantages of edge computing and cognitive computing, a new computing paradigm called Edge Cognitive Computing (ECC) is proposed, which combines edge computing and cognitive computing. Such a new architecture integrates communication, computation, storage and application on edge networks; it can achieve data and resource cognition by cognitive computing. Moreover, it can provide personalized services nearby, enabling the network to have a deeper, human-centered cognitive intelligence.
The main contributions of this paper are as follows:
We propose a new ECC architecture that deploys cognitive computing at the edge of the network to provide dynamic and elastic storage and computing services. In addition, the design issues of how to fuse these key technologies of cognitive computing and edge computing are illustrated in detail.
We propose an ECC-based dynamic cognitive service-migration mechanism that considers both the elastic allocation of the cognitive computing services and the user mobility, to provide a mobility-aware dynamic service-adjustment scheme.
We develop an ECC-based test platform for dynamic service migration and evaluate it by means of several experiments, with the results showing that the proposed ECC can provide dynamic services according to different user demands.
The remainder of the paper is organized as follows. Section 2 introduces the related work. Section 3 presents the proposed edge-cognitive-computing architecture and the related design issues in detail. Section 4 introduces the dynamic cognitive service-migration mechanism. Section 5 demonstrates the ECC test platform for dynamic service migration and its evaluation. Finally, Section 6 concludes the paper and reports some open issues.
2. Related Work
The need for on-demand state-of-the-art services (smart sensing, e-healthcare, smart transportation, etc.) and the latency issues that affect the overall Quality of Service (QoS) for various applications have paved the way to the powerful paradigm of edge computing.
There is much research on edge computing with respect to energy efficiency and latency. The cooperation and interplay between cloud and edge devices can help to reduce energy consumption while maintaining the QoS of various applications. However, a large number of migrations among edge devices and cloud servers leads to congestion in the underlying networks. To handle this problem, the work in (Edge18IIoT) presented an SDN-based edge-cloud interplay for big-data streaming in the Industrial IoT environment. Edge computing is expected to support not only the ever-growing number of users and devices, but also a diverse set of new applications and services. The work in (Sarros2018edge) introduced a system that can pervasively operate in any networking environment and allows the development of innovative applications by pushing services towards the edge of the network.
In terms of security and privacy in edge computing, there is an increasing realisation that edge devices, which are closer to the user, can play an important part in supporting latency- and privacy-sensitive applications (edge17rana; iot17singh). The security challenges therefore relate to protecting the device data so that an unauthorized person cannot access them, providing secure data sharing between the device and the edge cloud, and storing data safely on the edge cloud (secu17rana; zhou17; zhou18).
However, the above research on edge computing mostly focused on solving communication problems by leveraging computing and storage, such as how to reduce the network load, improve the network efficiency and reduce the transmission delay. In addition, these works did not consider how to personalize actual AI applications or how to provide elastic storage and computing services.
Applying cognitive computing in various smart-city applications has been widely researched. For example, the authors in (soft18) studied the role of intelligence algorithms such as machine learning and data analytics within the framework of smart-city applications such as smart transportation, smart parking and smart environment, to address the challenges of big data. The work in (cog18city) also explored how deep reinforcement learning and its shift towards semi-supervision can handle the cognitive side of smart-city services. The work in (deep2018a) indicated that the application of deep networks has already been successful in big-data areas, and that fog-to-things computing can be the ultimate beneficiary of this approach for attack detection, because the massive amount of data produced by IoT devices enables deep models to learn better than shallow algorithms. In summary, most current research on cognitive computing has focused on the design of algorithms. However, for cognitive computing to be applied and deployed on a large scale, it is necessary to solve not only the problem of how to compute, but also what to compute and where to compute, which requires deploying cognitive computing at the edge of the network.
There is also some research applying cognitive computing to edge computing. The authors of (learnEdge) first introduced deep learning for IoT into the edge-computing environment; since the existing edge nodes have limited processing capability, they also designed a novel offloading strategy to optimize the performance of IoT deep-learning applications with edge computing. The works in (deep18veh) and (sdn17deep) proposed novel deep-reinforcement-learning approaches to solve resource-allocation problems in terms of networking, caching and computing in edge computing. However, the above research did not apply the cognition of applications to guide network-resource optimization, but only considered resource allocation using intelligent algorithms, which cannot provide elastic cognitive-computing services.
3. The Proposed ECC Architecture and Design Issues
The proposed ECC architecture mainly consists of two components, the edge network and the edge cognition, as shown in Fig. 1. The edge network mainly provides the access and resource management of various edge devices. The edge cognition mainly relates to the cognition of edge data, involving service data as well as network- and computing-resource data. The edge cognition is composed of two core parts, the data cognitive engine and the resource cognitive engine. The interaction between the two engines is the key design issue, also shown at the top of Fig. 1.
In the architecture, the data cognitive engine mainly relies on cognitive-computing technologies, while the resource cognitive engine mainly uses the related technologies of edge computing. By combining key technologies of cognitive computing (i.e., big-data analysis, machine learning, deep learning) with those of edge computing (i.e., computing offload and migration, mobility management, intelligent proactive caching, cooperative resource management), ECC can better solve the problems of communication bandwidth and delay through the fusion of computing, communication, and storage, thus improving the network intelligence. Below we introduce the ECC architecture in detail from three aspects: the resource cognitive engine, the data cognitive engine and the interaction between them.
3.1. Resource Cognitive Engine
This engine learns the characteristics of the edge-cloud computing resources, the environmental communication resources and the network resources through cognition, and feeds the integrated resource data back to the data cognitive engine in real time. At the same time, it accepts the analysis results of the data cognitive engine and realizes the real-time dynamic optimization and allocation of resources. As shown in Fig. 1, it mainly includes the resource data pool, network-softwarization technologies and resource-management technologies. More specifically, the functions of this engine are:
Resource Data Pool: Realize the massive, heterogeneous, real-time connection between terminals (such as smart clothing, intelligent robots, intelligent traffic cars and other access devices), ensure the security, reliability and interoperability of the connections, and constitute the resource data pool (computing, communication and network resources) as a basic architecture for data transmission.
Network Softwarization: Utilize the network software technologies involving network function virtualization (NFV), software-defined network (SDN), self-organized network (SON), and network slicing to realize high reliability and flexibility, ultra-low latency, and extendibility of the edge cognitive system.
Resource Management: Utilize resource-management technologies involving computing offload, handover strategies, caching and delivery, and intelligent algorithms to build a cognitive engine with resource optimization and energy saving, to enhance the QoE and meet the different demands of various heterogeneous applications.
The key mechanisms involved in the resource cognitive engine include network slicing, computing offloading, caching and delivery, etc. Using virtualization technology, network slicing virtualizes the physical infrastructure of 5G into multiple mutually independent, parallel network slices to realize the arrangement and management of the corresponding network resources. Computing offloading addresses the assignment of computing tasks, aiming to rationally allocate the computing resources on the edge cloud and the remote cloud so that the tasks are completed through cooperation. Through caching and delivery, predicted contents are placed on the edge in advance, achieving low latency and reducing the load on the core network. SDN/NFV can reduce deployment costs and improve the efficiency of network control through the virtualization of network resources.
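The offloading decision sketched above boils down to comparing the end-to-end latency of candidate execution sites. The following minimal sketch illustrates this; the site names, link rates and processor speeds are illustrative assumptions, not measurements from this work.

```python
def offload_site(task_mbits, task_mcycles, sites):
    """Pick the execution site with the lowest end-to-end latency.

    sites maps a site name to (uplink_mbps, speed_mcycles_per_s); latency is
    modelled as transmission time plus computation time.
    """
    def latency(link_mbps, speed):
        return task_mbits / link_mbps + task_mcycles / speed

    return min(sites, key=lambda name: latency(*sites[name]))

# Illustrative figures: a 1 Mbit task requiring 100 Mcycles of computation.
sites = {
    "local": (float("inf"), 50),   # no uplink cost, but a slow device CPU
    "edge": (50, 500),             # one hop away, moderate CPU
    "cloud": (2, 5000),            # distant but powerful
}
print(offload_site(1, 100, sites))  # edge wins: 0.02 + 0.2 = 0.22 s
```

Under these numbers the edge node beats both the device (compute-bound) and the cloud (transmission-bound), which is exactly the trade-off the mediating edge layer exploits.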
3.2. Data Cognitive Engine
This engine deals with the real-time data flow in the network environment, introduces data-analysis and automatic service-processing capabilities to the edge network, and realizes cognition of the service data and the resource data using various cognitive-computing methods, including data mining, machine learning, deep learning and artificial intelligence, as shown in Fig. 1. The main data sources are:
Collect the external data from the data sources in the application environment, such as physical signs and the real-time disease-risk level under cognitive health surveillance, or real-time behavior information of the mobile user;
Dynamically collect the internal data on the computing, communication and network resources of the edge cloud, such as the network type, service data flow, communication quality and other dynamic environmental parameters.
The key to the intelligent enhancement of the data cognitive engine is that multi-dimensional data (external data related to the user and the service, and internal data from the resource network environment) are adopted in the cognitive-computing technology, which is not the case with traditional data-analysis methods. The data cognitive engine analyzes the existing data and information (e.g., using a deep convolutional neural network (DCNN) for facial emotion recognition and a hidden Markov model (HMM) for user-mobility prediction). It then feeds the results back to the resource cognitive engine, which reinterprets and analyzes the information to generate new information that may be further utilized by the data cognitive engine. For instance, in health monitoring, after the physical health of a user wearing smart clothing is monitored and analyzed using cognitive computing, a health-risk level for that user is obtained; the resource allocation in the whole edge-computing network is then comprehensively adjusted to the risk level of each user. That is, the data are utilized a second time and in turn serve resource allocation and network optimization, forming a closed-loop system of cognitive intelligence.
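The feedback step of this closed loop can be sketched very simply: the data cognitive engine produces a risk level per user, and the resource cognitive engine reallocates edge bandwidth in proportion to it. The proportional rule below is an illustrative assumption, not the scheme used in this work.

```python
def allocate_bandwidth(risk_levels, total_bandwidth_mbps):
    """Split the edge bandwidth among users proportionally to risk level.

    risk_levels: dict mapping user id -> health-risk level (higher = more urgent).
    Returns a dict mapping user id -> allocated bandwidth in Mbps.
    """
    total_risk = sum(risk_levels.values())
    if total_risk == 0:
        # No user at elevated risk: share the bandwidth equally.
        share = total_bandwidth_mbps / len(risk_levels)
        return {user: share for user in risk_levels}
    return {
        user: total_bandwidth_mbps * level / total_risk
        for user, level in risk_levels.items()
    }

allocation = allocate_bandwidth({"u1": 3, "u2": 1, "u3": 0}, 100.0)
print(allocation)  # u1 gets 75.0 Mbps, u2 gets 25.0 Mbps, u3 gets 0.0
```

The point is the direction of the data flow: application-level cognition (risk levels) drives network-level resource allocation, closing the loop between the two engines.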
3.3. The Interaction between Data Cognitive Engine and Resource Cognitive Engine
The key design issue of the ECC is the interaction between the data cognitive engine and the resource cognitive engine. In edge cognitive computing, we put forward the design idea of realizing closed-loop optimization with the dual cognitive engines to optimize network-resource-management technologies such as network slicing. Here we take cognitive network slicing as an example to illustrate how to fuse the related technologies of cognitive computing and edge computing.
As shown in Fig. 1, the data cognitive engine first perceives the incoming requests. The request types of the network-slice service differ from one another according to the different demands (latency, reliability and flexibility) of different cognitive applications. The data cognitive engine then conducts a fused cognitive analysis of the heterogeneous data, based on the current resource distribution and the real-time requests of the tenants, using machine-learning and deep-learning methods. Next, it reports the analyzed dynamic traffic pattern to the resource cognitive engine, where a joint optimization of the comprehensive benefits and the resource efficiency takes place: the resource cognitive engine first conducts admission control on the perceived requests, then performs dynamic resource scheduling and allocation based on the cognition of the network resources, and finally feeds the scheduling results back to the data cognitive engine, realizing the cognition of the network-slice resources.
4. Dynamic Cognitive Service Migration Mechanism
Under the ECC architecture, due to the mobility of the user, the heterogeneity of the edge devices and the dynamics of the network resources (such as the available storage, computing resources and network bandwidth), we should offer an elastic cognitive service, i.e., a service in accordance with the personalized demands of the user. Cognitive computing consumes a particularly large amount of computation, so deploying it on the edge requires the computing resources to be elastic and flexible. The ECC proposed in this paper differs from the approaches in the related work: it mainly focuses on AI-related applications in the IoT, such as autonomous driving, virtual reality, smart clothing, Industry 4.0 and emotion recognition. In contrast to traditional content-retrieval and mobile-computing issues, such applications are often more personalized, which again demands elastic and flexible computing resources.
To better describe the proposed ECC architecture, we implemented a dynamic cognitive service-migration mechanism. Because the device carrying out the computation varies, a service-migration mechanism is needed. In our ECC-based mechanism, to reduce the latency, the workload should be completed on the nearest edge device with sufficient computation capability at the edge of the network. Thus, according to the user-behavior prediction, some contents needed for the service or some jobs of the task are migrated in advance, or the low-resolution work is migrated first to the position the user is moving towards. Once the user arrives, the service resolution is upgraded on that device, thus offering an elastic service.
4.1. Service Resolution
To better explain the elastic service provided by the ECC, we define a new metric called service resolution to evaluate the user QoE. For different applications, the service resolution has different definitions. For example, emotion detection depends on the accuracy rate and the latency of the emotion recognition, which are mutually conflicting: a higher accuracy rate needs more computing resources and thus incurs higher latency. However, when the user is insensitive to the accuracy rate and pays more attention to the interactive experience, we can provide a low resolution without influencing the user QoE. For a video-streaming application, the service resolution depends more on the resolution of the video stream acquired by the user. Table 1 lists the service resolutions of the two applications. Emotion detection takes the accuracy rate as its metric and video streaming takes the video resolution as its metric, each offering three service levels to meet the QoE under different user demands, i.e., offering an elastic service and enhancing the user experience.
| Applications | Service Resolution (Low / Medium / High) | Main Metric |
| --- | --- | --- |
| Emotion Detection | 66.3 / 73.6 / 79.1 | Accuracy (%) |
| Video Streaming | 800×600 / 1280×1024 / 1920×1080 | Video resolution (pixels) |
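Table 1 can be read as a lookup from application and tier to the delivered quality. A minimal sketch follows; the rule mapping a user's quality demand to a tier is an illustrative assumption.

```python
# Service resolutions from Table 1; the thresholds in choose_tier are assumed.
SERVICE_RESOLUTION = {
    "emotion_detection": {"low": "66.3 %", "medium": "73.6 %", "high": "79.1 %"},
    "video_streaming": {"low": "800x600", "medium": "1280x1024", "high": "1920x1080"},
}

def choose_tier(quality_demand):
    """Map a normalized quality demand in [0, 1] to a service tier."""
    if quality_demand < 1 / 3:
        return "low"
    if quality_demand < 2 / 3:
        return "medium"
    return "high"

print(SERVICE_RESOLUTION["video_streaming"][choose_tier(0.9)])  # 1920x1080
```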
We will explain how to offer the elastic cognitive computing service from the perspective of the two applications, as follows.
1) Emotion Detection: As shown in Fig. 2, we provide three service resolutions for emotion detection: low, medium and high. In the case of limited computing resources, we provide low resolution, i.e., only conducting facial emotion recognition using the deep neural network VGG. For medium resolution, we analyze the facial expression (VGG (vgg14)) and the speech emotion (AlexNet (alex17)) simultaneously, and carry out a simple decision fusion. For high resolution, we use strong computing resources, provide a multi-modal emotion-recognition algorithm, and use a deep network, i.e., a deep belief network (DBN), for the decision fusion. Across these three service resolutions, the computing resources consumed increase gradually, and the accuracy rate of the provided emotion recognition increases accordingly.
The user of emotion detection is usually a mobile user, so the dynamic change of the mobile computing resources is one of the factors influencing the user QoE. In addition, when the user is moving, the network status changes, but the emotion recognition is required to maintain ultra-high reliability throughout the communication process. Thus, it is necessary to adopt an elastic computing mode to solve this problem; such an application, which requires multiple computing decisions, was not considered in previous research. Ensuring both the accuracy rate and the latency of emotion recognition at the same time is contradictory: a higher accuracy rate needs more computing resources and incurs higher latency, as shown in Table 2. However, when the user is insensitive to the accuracy rate and pays more attention to the interactive experience, we can provide low resolution without influencing the user QoE.
| Algorithms | Accuracy (%) | Latency (ms) |
| --- | --- | --- |
| AlexNet + VGG | 73.6 | 188.4 |
| AlexNet + VGG + DBN | 79.1 | 265.3 |
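The accuracy/latency trade-off of Table 2 can be resolved by picking the most accurate pipeline that fits a latency budget. A sketch, using the figures reported in Table 2 (the budget values in the usage line are illustrative):

```python
# (pipeline name, accuracy %, latency ms) as reported in Table 2.
PIPELINES = [
    ("AlexNet + VGG", 73.6, 188.4),
    ("AlexNet + VGG + DBN", 79.1, 265.3),
]

def best_pipeline(latency_budget_ms):
    """Return the most accurate pipeline within the latency budget, or None."""
    feasible = [p for p in PIPELINES if p[2] <= latency_budget_ms]
    return max(feasible, key=lambda p: p[1]) if feasible else None

print(best_pipeline(200))  # ('AlexNet + VGG', 73.6, 188.4)
```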
2) Video Streaming: Similar to emotion detection, we provide three service resolutions for the video-streaming application: in consideration of different user demands, user mobility and a dynamic network environment, we provide video decoding at different resolutions and decompose the video-decoding task into tasks of different resolutions in a similar way. When the user is moving, the edge node judges whether to migrate the task, and at which resolution, according to the user's mobility behavior. For example, when the user moves to another edge node and it is not yet clear whether the stay will be long or short, the low-resolution video-decoding task can be migrated first. In the case of a long-term stay, the high-resolution service can then be offered, avoiding untimely migration and resource waste. Besides the user mobility, the migration cost should be considered: the low-resolution service has the lowest migration cost, and the high-resolution service the highest.
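The stay-duration rule above can be sketched as a small decision function; the thresholds and the treatment of an unknown stay are illustrative assumptions.

```python
def choose_migration_tier(predicted_stay_s, medium_stay_s=30.0, long_stay_s=120.0):
    """Migrate only the low-resolution task when the predicted stay at the
    next edge node is short or unknown; upgrade the tier as a longer stay
    is predicted. Thresholds are illustrative, not from the paper."""
    if predicted_stay_s is None or predicted_stay_s < medium_stay_s:
        return "low"
    if predicted_stay_s < long_stay_s:
        return "medium"
    return "high"

print(choose_migration_tier(None), choose_migration_tier(300.0))  # low high
```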
4.2. Dynamic Service Migration Mechanism
When and how to conduct migration are the two major concerns in dynamic service-migration mechanisms. Most migration mechanisms decide when to migrate relying only on network conditions; few take the user behavior into account (mig17; mig14). However, deciding when to migrate according to user behavior and mobility has a large influence on improving the user experience and the resource utilization.
As shown in Fig. 3, the Service Manager implements all the functionalities that an edge node needs to deploy its services. It includes a service repository (service repo) where the services to be provided are stored, e.g., dockerized compressed images or the emotion-recognition models. The Decision Engine is responsible for deciding which services to deploy. In Fig. 3, the resource cognitive engine manages the computing and network resources of the heterogeneous edge devices, and cognizes the user mobility, the user demands for service resolution and the resource demands of computing tasks in combination with the data cognitive engine. The Decision Engine makes the decision in accordance with this information and the migration strategy (based on Q-learning, see below), and accordingly offers dynamic and elastic cognitive services.
The service providers (SPs), i.e., the edge nodes, manage the virtual networks; let M denote the set of SPs, and t the time instant of a service request.
We assume that the edge device has N services that need to be migrated, and the set of tasks is denoted as S = {s_1, s_2, …, s_N}. Each migration task s_i is described by a triple s_i = (c_i, d_i, o_i), where c_i is the amount of computing resource required for task s_i, i.e., the total number of CPU cycles needed to complete the task, and d_i is the data size of the computation task, i.e., the amount of data content to be delivered to the other edge node; in this work, it stands for the size of the video content or the storage resource consumed by the emotion detection (e.g., the processing code and parameters). Finally, o_i represents the data size of the task result. For instance, in the video-decoding case, c_i is the computing resource needed for the video decoding, d_i is the video data size, and o_i is the data size of the decoded video. After the computation, the service provider sends the transcoded video content back to the user.
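Under this notation, a migration task and a rough completion time on a target node can be sketched as follows. The additive transfer-compute-return time model, the 500 Mcycles/s CPU speed and the 2 Mbit result size are illustrative assumptions; the 1 Mbit content size, 100 Mcycles workload and 5 Mbps link come from the experimental setup described later.

```python
from dataclasses import dataclass

@dataclass
class MigrationTask:
    cycles_mcycles: float   # c_i: CPU cycles needed to complete the task
    input_mbits: float      # d_i: data delivered to the target edge node
    output_mbits: float     # o_i: size of the task result sent back

def completion_time_s(task, link_mbps, cpu_mcycles_per_s):
    """Ship the input over the link, compute, then return the result."""
    return (task.input_mbits / link_mbps
            + task.cycles_mcycles / cpu_mcycles_per_s
            + task.output_mbits / link_mbps)

task = MigrationTask(cycles_mcycles=100, input_mbits=1, output_mbits=2)
print(completion_time_s(task, link_mbps=5, cpu_mcycles_per_s=500))  # 0.8
```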
Migration Cost: The traffic volume of migrating a virtual server usually cannot be neglected, due to the large size of the server states. The migration cost of a virtual server depends on the size of the server as well as on the bandwidth available on the migration path. For example, for the emotion-detection service the migration cost depends on the size of the emotion-recognition models, while for a video-streaming service it depends on the data size of the decoded video. A higher service resolution implies a higher migration cost.
Migration Goal: Minimize the service cost and, in the meantime, improve the QoE by providing different service resolutions based on the user demands, the user mobility and the dynamic network resources. For r_t, a service request at time t, we define the score (the metric for the user-acquired experience) under a migration strategy π as Score_π(r_t) and the cost as Cost_π(r_t), so the optimization objective can be defined as follows:

max_π Σ_t [ Score_π(r_t) − Cost_π(r_t) ]    (1)

where k ∈ {0, 1, 2} is the service type acquired by the service request, i.e., the service resolution, with the values 0, 1 and 2 corresponding to low, medium and high resolution, respectively; E is the expectation of the service request; T_π is the time of service acquisition under strategy π, which depends on k and the network state; and Cost_π depends on the data size of the migrated task.
From the definition of Score, it is observed that, for a given latency and a given service demand of the user, the higher the provided service resolution, the higher the user experience gained; for the same service resolution, the higher the user's expectation, the lower the score. Score thus reflects the relationship between the service acquired by the user and the user's expectation. This means that we can provide the low-resolution service when the user's quality requirement is not high, so as to reduce the energy consumption without influencing the user QoE. When the acquired service and the user expectation are fixed, the higher the delay of the service acquisition, the lower the score.
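The exact analytical form of the score is not reproduced here; the toy function below merely satisfies the three monotonicity properties just stated (increasing in resolution, decreasing in expectation and in delay) and is purely an illustrative assumption.

```python
def score(resolution, expectation, delay_s):
    """Toy score: resolution and expectation in {0, 1, 2}, delay_s >= 0.

    Increases with resolution; decreases with expectation and with delay.
    """
    return (resolution + 1) / ((expectation + 1) * (1.0 + delay_s))

assert score(2, 1, 0.5) > score(1, 1, 0.5)  # higher resolution -> higher score
assert score(1, 2, 0.5) < score(1, 1, 0.5)  # higher expectation -> lower score
assert score(1, 1, 1.0) < score(1, 1, 0.5)  # higher delay -> lower score
```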
Optimal Problem Formulation: Our problem can be described as a reinforcement-learning scenario. The objective is to find an agent that makes the optimal migration decision for each service request. The optimal migration policy, denoted by π*, maximizes the system reward:

π* = argmax_π Σ_t γ^t R_t    (2)

Let x_t denote the state of the environment at time t, defined by the locations of the services at that time. For a sequence of batch requests, the goal of the service migration is to determine π* so as to maximize the system reward defined by Eq. (2). We define the reward R_t after the action a_t taken in state x_t as:

R_t = Score_π(r_t) − Cost_π(r_t)    (3)
Similarly, we can construct a Q-matrix to memorize the experience that the agent has gained from the environment. The Q-value of a state-action pair, Q(x_t, a_t), represents the expected total benefit of taking action a_t in state x_t. The solution is to move from the initial state to a final optimal state by updating Q(x_t, a_t) according to Algorithm 1. In each iteration of the algorithm, the agent observes the current state x_t, takes action a_t to move to the next state x_{t+1}, and receives an immediate reward R_t, which is used to update Q(x_t, a_t) following Eq. (4):

Q(x_t, a_t) ← Q(x_t, a_t) + α [ R_t + γ max_a Q(x_{t+1}, a) − Q(x_t, a_t) ]    (4)

It then begins the next iteration.
Here, α is the learning rate, which determines how much the new information overwrites the old, and γ is the discount factor, which gives more weight to near-term rewards than to those further in the future.
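The Eq. (4) update can be sketched on a toy placement problem: the state is the node currently hosting the service, an action migrates the service to a node, and the reward favours the user's node while charging a per-hop migration cost. The node count, reward shape and exhaustive deterministic sweeps are illustrative assumptions (the paper's Algorithm 1 iterates over observed transitions instead).

```python
def learn_migration_policy(n_nodes=4, user_node=3, alpha=0.1, gamma=0.8, sweeps=300):
    """Tabular Q-learning with exhaustive state-action sweeps.

    Q[s][a] estimates the long-run benefit of migrating the service from
    node s to node a; the update applied is exactly Eq. (4).
    """
    Q = [[0.0] * n_nodes for _ in range(n_nodes)]
    for _ in range(sweeps):
        for s in range(n_nodes):
            for a in range(n_nodes):
                # Reward: 1 for hosting at the user's node, minus a per-hop cost.
                reward = (1.0 if a == user_node else 0.0) - 0.1 * abs(a - s)
                next_s = a  # after migration, the service lives on node a
                # Eq. (4): Q(x,a) <- Q(x,a) + alpha*(R + gamma*max_a' Q(x',a') - Q(x,a))
                Q[s][a] += alpha * (reward + gamma * max(Q[next_s]) - Q[s][a])
    # Greedy policy: the best migration target from every node.
    return [max(range(n_nodes), key=lambda a: Q[s][a]) for s in range(n_nodes)]

print(learn_migration_policy())  # [3, 3, 3, 3]: always migrate towards the user
```

After convergence, the greedy action from every node is to place the service at the user's node, which is the behaviour the migration mechanism aims to learn online.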
5. Testbed and Performance Evaluation
To verify the proposed architecture, an ECC test platform was set up, and the performance of the dynamic service-migration mechanism was evaluated on the experimental testbed under user mobility.
To create the ECC environment, we used several edge-computing nodes that realize the functions of emotion detection and video streaming, as shown in Fig. 4(a). The self-designed hardware uses a quad-core ARM Cortex-A33 processor, and executing the algorithms on the edge device requires a deployed TensorFlow environment. We also used an Android phone as the user's mobile device and designed an Android application, shown in Fig. 4(b), which realizes the signal monitoring of the edge-computing node, task uploading, result downloading, and service migration. Fig. 4(c) illustrates the software interface running on Windows, and Fig. 4(d) shows the UI of the emotion-detection application.
In the experimental setup, we use four edge nodes and two servers. For the performance comparison, two schemes are compared with the proposed ECC-based scheme: 1) no-migration scheme: service migration is not considered; 2) nearest-migration scheme: if needed, the service is migrated to the closest access point.
Table 3 lists the values of the important parameters considered in the experiments. The task load of the high-resolution migration was 256 MB, that of the medium-resolution migration was compressed to 128 MB, and that of the low-resolution migration was compressed to 64 MB. The transmission bandwidth between the edge nodes was 5 Mbps.
| Value | Description |
| --- | --- |
| 5 Mbps | The bandwidth between edge nodes. |
| 100 Mcycles | The number of CPU cycles required to complete a task. |
| 1 Mbits | The content size of a task. |
| 0.01 | The learning rate α of the algorithm. |
| 0.8 | The discount factor γ, giving more weight to the near future. |
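The three task loads make the case for migrating the low-resolution service first: their pure transmission times over the 5 Mbps inter-node link (ignoring protocol overhead, a deliberate simplification) differ by a factor of four.

```python
LINK_MBPS = 5.0
LOADS_MB = {"low": 64, "medium": 128, "high": 256}  # task loads from the setup

for tier, megabytes in LOADS_MB.items():
    seconds = megabytes * 8 / LINK_MBPS  # MB -> Mbit, then divide by link rate
    print(f"{tier}: {seconds:.1f} s")
# low: 102.4 s, medium: 204.8 s, high: 409.6 s
```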
Figs. 5 and 6 plot the experimental results for the performance analysis. Fig. 5 shows the convergence performance of the different scenarios in the proposed scheme using the reinforcement-learning algorithm. From Fig. 5 we can see that the total utility (the cumulative reward, i.e., the objective function defined in Eq. (1)) of the different scenarios in the proposed scheme is very low at the beginning of the learning process. As the number of episodes increases, the total utility increases until it reaches a relatively stable value, around 400 in the scenario that provides high resolution. We can also observe that the rewards of the different resolution services are almost the same at the beginning, while at the stable stage high resolution obtains the highest reward and low resolution the lowest. Therefore, the low-resolution service could be migrated first to the corresponding edge node when the user moves to another edge access point, and the high-resolution service should then be provided at the stable stage.
Fig. 6 shows the response time of the edge node providing the emotion detection service at low resolution (the first algorithm introduced in Table 2). In general, the response time grows with the number of concurrent requests: the more concurrent requests there are, the longer the average response time. We compare the time delay under the different schemes. From Fig. 6 we observe that the delay of all schemes increases with the number of service requests, and that the proposed RL-based scheme under the ECC architecture performs best, with the lowest latency. This is because services can be migrated in advance to the optimal location based on user-mobility prediction. The nearest-migration scheme, in contrast, decides to migrate the service only after the user has moved to the other access point, which results in a longer delay and can even lead to service disruption. From Fig. 6 we can also observe that, due to user mobility, the delay jitter is more severe and the delay difference between the three schemes grows once the number of service requests exceeds 25. The ECC-based scheme reduces this jitter because it continuously learns from the user's mobility. Finally, the delay is longest under the no-migration scheme.
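The qualitative ordering of the three schemes can be reproduced with a toy delay model (all constants below are illustrative assumptions, not the paper's measurements): reactive migration pays the state-transfer cost on the request's critical path, proactive migration pays it ahead of time, and no migration pays a backhaul penalty on every request to a remote host.

```python
# Toy per-request delay model for the three migration schemes.
HOP_DELAY_MS = 30   # extra delay per edge hop to a remote service host
MIGRATION_MS = 60   # one-off state-transfer cost paid on a reactive migration
LOCAL_RTT_MS = 5    # delay when the service sits at the user's own AP

def response_times(requests, scheme):
    """Delay of each request as the user attaches to the listed APs."""
    host, delays = 0, []
    for ap in requests:
        if scheme == "none":       # service never moves from node 0
            delays.append(LOCAL_RTT_MS + HOP_DELAY_MS * abs(ap - host))
        elif scheme == "nearest":  # migrate only after the user has moved
            extra = MIGRATION_MS if ap != host else 0
            host = ap
            delays.append(LOCAL_RTT_MS + extra)
        elif scheme == "ecc":      # predicted move: migrate off the critical path
            host = ap              # perfect mobility prediction assumed
            delays.append(LOCAL_RTT_MS)
    return delays

# User dwells at each of four APs for four requests in turn.
requests = [ap for ap in range(4) for _ in range(4)]
for s in ("none", "nearest", "ecc"):
    d = response_times(requests, s)
    print(f"{s:>7}: mean {sum(d) / len(d):5.2f} ms")
```

Under these assumptions the mean delay ordering is ecc < nearest < none, matching the trend in Fig. 6; the real gap additionally depends on prediction accuracy and the jitter introduced by on-path migrations.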
This paper presented an ECC network architecture and introduced its key issues. In addition, an ECC platform for dynamic service migration based on a mobile user's behavioral cognition was developed and experimentally tested. The experimental results show that the proposed ECC architecture provides higher QoE than a general edge-computing architecture lacking the data and resource cognitive engines, which predict user behavior to better guide service migration based on traffic data and the network resource environment. The results effectively demonstrate that edge cognitive computing realizes a cognitive information cycle for human-centered, reasonable resource distribution and optimization.
Acknowledgements. This work is supported by the National Natural Science Foundation of China (Grant No. 61802138, 61802139, 61572220). This work has also been partially carried out under the framework of INTER-IoT, Research and Innovation action - Horizon 2020 European Project, Grant Agreement No. 687283, financed by the European Union. Dr. Humar would like to acknowledge the financial support from the Slovenian Research Agency (research core funding No. P2-0246). Dr. Yixue Hao is the corresponding author.
- (1) C. A. Sarros et al., “Connecting the Edges: A Universal, Mobile-Centric, and Opportunistic Communications Architecture,” IEEE Communications Magazine, vol. 56, no. 2, pp. 136-143, Feb. 2018.
- (2) W. Shi, J. Cao, Q. Zhang, Y. Li, L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
- (3) O. Salman, I. Elhajj, A. Kayssi, A. Chehab. “Edge computing enabling the Internet of Things,” Internet of Things IEEE, pp. 603–608, 2016.
- (4) M. Chen, J. Yang, X. Zhu, X. Wang, M. Liu, J. Song, “Smart Home 2.0: Innovative Smart Home System Powered by Botanical IoT and Emotion Detection,” Mobile Networks and Applications, vol. 22, pp. 1159-1169, 2017.
- (5) G. Fortino, R. Gravina, W. Russo, C. Savaglio, “Modeling and Simulating Internet-of-Things Systems: A Hybrid Agent-Oriented Approach,” Computing in Science and Engineering, vol. 19, no. 5, pp. 68-76, 2017.
- (6) M. Chen, Y. Miao, Y. Hao, K. Hwang, “Narrow Band Internet of Things”, IEEE Access, Vol. 5, pp. 20557-20577, 2017.
- (7) M. Chen, Y. Hao, “Task Offloading for Mobile Edge Computing in Software Defined Ultra-dense Network”, IEEE Journal on Selected Areas in Communications, Vol. 36, No. 3, pp. 587-597, Mar. 2018.
- (8) I. Petri, O.F. Rana, J. Bignell, S. Nepal and N. Auluck. “Incentivising Resource Sharing in Edge Computing Applications,” pp. 204–215, 2017.
- (9) Y. Qian, M. Chen, J. Chen, M. Hossain, A. Alamri, “Secure Enforcement in Cognitive Internet of Vehicles”, IEEE IoT Journal, Vol. 5, No. 2, pp. 1242-1250, 2018.
- (10) M. Villari, M. Fazio, S. Dustdar, O. Rana, L. Chen, R. Ranjan. “Software Defined Membrane: Policy-Driven Edge and Internet of Things Security,” IEEE Cloud Computing, vol. 4, no. 4, pp. 92–99, 2017.
- (11) L. Zhou, D. Wu, Z. Dong, and X. Li, “When Collaboration Hugs Intelligence: Content Delivery over Ultra-Dense Networks,” IEEE Communications Magazine, vol. 55, no. 12, pp. 91-95, 2017.
- (12) L. Zhou, D. Wu, J. Chen, and Z. Dong, “Greening the Smart Cities: Energy-Efficient Massive Content Delivery via D2D Communications,” IEEE Transactions on Industrial Informatics, vol. 14, no. 4, pp. 1626-1634, Apr. 2018.
- (13) H. Habibzadeh, A. Boggio-Dandry, Z. Qin, T. Soyata, B. Kantarci and H. T. Mouftah, “Soft Sensing in Smart Cities: Handling 3Vs Using Recommender Systems, Machine Intelligence, and Data Analytics,” IEEE Communications Magazine, vol. 56, no. 2, pp. 78-86, Feb. 2018.
- (14) M. Mohammadi and A. Al-Fuqaha, “Enabling Cognitive Smart Cities Using Big Data and Machine Learning: Approaches and Challenges,” IEEE Communications Magazine, vol. 56, no. 2, pp. 94-101, Feb. 2018.
- (15) A. Abeshu and N. Chilamkurti, “Deep Learning: The Frontier for Distributed Attack Detection in Fog-to-Things Computing,” IEEE Communications Magazine, vol. 56, no. 2, pp. 169-175, Feb. 2018.
- (16) M. Chen, V. Leung, “From Cloud-based Communications to Cognition-based Communications: A Computing Perspective”, Computer Communications, Vol. 128, pp. 74-79, 2018.
- (17) Y. He, N. Zhao and H. Yin, “Integrated Networking, Caching, and Computing for Connected Vehicles: A Deep Reinforcement Learning Approach,” IEEE Transactions on Vehicular Technology, vol. 67, no. 1, pp. 44-55, Jan. 2018.
- (18) M. Chen, Y. Hao, M. Qiu, J. Song, D. Wu, I. Humar, “Mobility-aware Caching and Computation Offloading in 5G Ultradense Cellular Networks”, Sensors, Vol. 16, No. 7, pp. 974-987, 2016.
- (19) S. Zhang, S. Zhang, T. Huang, et al., “Speech Emotion Recognition Using Deep Convolutional Neural Network and Discriminant Temporal Pyramid Matching,” IEEE Transactions on Multimedia, vol. 20, no. 6, pp. 1576-1590, Oct. 2017.
- (20) M. Chen, X. Shi, Y. Zhang, D. Wu, M. Guizani, “Deep Features Learning for Medical Image Analysis with Convolutional Autoencoder Neural Network”, IEEE Trans. Big Data, DOI:10.1109/TBDATA.2017.2717439, 2017.
- (21) A. Machen, S. Wang, K. Leung, et al. “Live Service Migration in Mobile Edge Clouds,” IEEE Wireless Communications, Vol. 25, No. 1, pp. 140-147, 2017.
- (22) V. Medina, J. Garcia, “A survey of migration mechanisms of virtual machines,” ACM Computing Surveys, vol. 46, no. 3, pp. 30-62, 2014.
- (23) K. Hwang, M. Chen, “Big Data Analytics for Cloud/IoT and Cognitive Computing,” Wiley, U.K., ISBN: 9781119247029, 2017.