With the penetration of Artificial Intelligence (AI), smart Internet of Things (IoT) systems are evolving as an emerging paradigm to facilitate the development of smart cities, intelligent agriculture, and intelligent healthcare [30, 25, 5, 27]. As a class of attractive IoT applications, Mobile Target Tracking Wireless Sensor Networks (MTT-WSN) have contributed to many fields, such as illegal vehicle tracking, border security, pasture protection, plant-district security, and space exploration.
An MTT-WSN, composed of many static or mobile nodes, can track mobile targets based on IoT technology. Only the sensor nodes around the monitored target need to be activated during tracking, while the other nodes can be switched to a dormant state to reduce system energy consumption. Taking civil aviation as an example, sensor nodes can be activated to drive away hazards such as flying birds to ensure safe take-off and landing. Based on the sensing data, the MTT-WSN estimates the status of targets to make real-time tracking decisions.
The inherent characteristics of an MTT-WSN, such as limited sensing ranges and scarce energy, affect its tracking performance. In particular, many static sensors are typically deployed randomly with fixed sensing capability. This deployment can cause detection failures due to blind detection zones, or produce redundant sensing data where coverage zones overlap. The redundant data also wastes bandwidth during data transmission. Moreover, prediction accuracy cannot be ensured in the presence of detection failures; prediction errors then accumulate, so that sensor nodes cannot be activated and scheduled correctly. In addition, the limited energy of sensor nodes cannot sustain consecutive target tracking when mobile targets invade at high speed [19, 31].
In fact, the tracking performance is coupled with resource scheduling decisions, which can be implemented with cloud computing or edge computing. Although cloud computing provides sufficient computing resources, it incurs high transmission latency because of the long round trip of information delivery. Edge computing reduces this latency but places tremendous computing pressure on edge servers; moreover, it may not be able to integrate the available resources to provide real-time feedback for time-critical missions. Emerging AI technology can alleviate these disadvantages in resource scheduling [17, 7, 6, 28, 26]. However, it remains difficult to design AI-based schemes that achieve real-time, consecutive tracking while keeping communication and computing overheads low.
In this paper, we propose a new hierarchical structure for consecutive target tracking. Based on this structure, an intelligent cloudlet pattern composed of edge servers and mobile nodes (MNs) is designed, in which the MNs eliminate redundant sensing data to reduce the latency of data transmission. Besides, the pattern realizes accurate tracking by integrating the computing resources of edge servers and MNs. Moreover, the MNs not only implement target tracking by trajectory prediction, but also activate static nodes (SNs) to observe the invading targets collaboratively. The main contributions are summarized as follows.
We first propose a new hierarchical target tracking structure, in which mobile nodes realize full-scale area coverage through their flexible mobility. Besides, the mobile nodes can coordinate the sensing resources of static nodes to observe targets collaboratively. To obtain efficient sensing data, a multi-resource information fusion scheme is proposed to reduce data redundancy and maximize sensing resource utilization.
Based on edge intelligence technology, an intelligent cloudlet pattern is proposed to ensure accurate tracking. The pattern integrates the computing resources of both mobile nodes and edge servers to improve the accuracy of trajectory prediction. Moreover, the edge servers estimate the status of targets to improve computational efficiency and thus tracking accuracy.
To realize consecutive tracking under node scheduling, we present a Long-Term Dynamic Resource Allocation (LTDRA) algorithm. The algorithm enhances the self-learning nature of traditional reinforcement learning to explore the optimal decision with minimal energy consumption and quick algorithmic convergence. In this way, the optimal tracking schedule can be obtained for the implementation of consecutive tracking.
The rest of this paper is organized as follows. The related work is given in Section II. Section III gives the system structure and the problem formulation. The edge intelligence framework is proposed in Section IV. The evaluation results are provided in Section V. Finally, Section VI concludes this paper.
2 Related Work
In recent years, edge computing has attracted significant attention in MTT systems. For instance, Kuo et al. proposed an adaptive trap-coverage mechanism with a robust area coverage model for target tracking and service detection to reduce the target-missing time. To merge the cooperation among sensors seamlessly, much work has focused on the collaborative management of computing and movement. A collaborative sensor movement algorithm was proposed based on target learning to minimize energy consumption. Wan et al. provided a joint range-Doppler-angle estimation solution for intelligent tracking to improve the efficiency of multi-target tracking.
With the development of AI, many studies have investigated AI-enabled edge computing [9, 2, 20]. For instance, a novel concept was proposed to endow the front end with intelligence to realize low latency in large-scale application-oriented IoT scenarios. In order to provide real-time information and feedback to end users, Sharma introduced a distributed framework for the coordinated process between Mobile Edge Computing (MEC) and cloud computing.
Collaborative computing provides a new opportunity for resource allocation in target tracking applications. Campbell et al. proposed a cooperative tracking approach for uninhabited aerial vehicles (UAVs) with camera-based sensors, utilizing a square-root sigma-point information filter with important properties for numerical accuracy, tracking accuracy, and fusion ability. Kuang et al. presented a collaborative computational framework capable of dealing with many real-world visual tracking problems; a novel spatio-temporal weighting scheme was introduced to maximize the separation between target and background, improving classification accuracy. In mobile bionanosensor networks, Okaie et al. proposed a cooperative scheme in which bacterium-based autonomous biosensors release repellents to spread quickly over the environment in search of the target, and release attractants to recruit other biosensors toward the target's location for detection.
TABLE I: Key Notations
- The size of the execution task
- The task execution deadline
- The task offloading decision space
- The computing set with multi-element integration
- The transmission rate from the target to the j-th MN
- The transmission rate from the i-th SN to the sink node
- The offloading latency and the computing latency
- The intelligent computing frequency
- The location information of the target node at time t
- The Kalman gain at time t
- The execution result of the locally executed task
- The state vector of each sensor node
- The energy consumption of sensor node i in the sleep state
- The energy consumption of sensor node i in the idle state
- The energy consumption of sensor node i in the check state
- The energy consumption of sensor node i in the work state
- The communication consumption of sensor node i
- The noise signal amplitude received at sensor node i
- The transmission consumption of sensor node i
- The reception consumption of sensor node i
- The sending consumption of sensor node i
- The mean square tracking error over all sensor nodes
The above studies assume that edge servers have sufficient computing resources to process massive offloading tasks. However, this is not always feasible in practice due to the redundant and complicated data. Although many mobile devices can share their computation resources with edge servers, it is difficult to integrate the computing resources of mobile nodes and edge servers due to the dynamic network topology. An edge-intelligence-based dynamic resource management approach can be a flexible solution to meet the requirements of consecutive and accurate target tracking.
3 System Model and Problem Formulation
In this section, we propose a hierarchical network structure to facilitate real-time target tracking. In this structure, the mobile node acts as a bridge for integrating sensing and computing resources, collecting sensing data with static nodes and performing collaborative computing with edge servers. Based on this structure, a multi-objective optimization model is formulated to obtain the optimal target tracking performance.
3.1 System Model
As shown in Fig. 1, the structure contains two types of nodes: static (observation) sensor nodes and mobile nodes, denoted by "SNs" and "MNs", respectively, for simplicity. Based on the functionality of the network components, we divide the MTT system into three hierarchical levels: data sensing, mobile coverage and tracking, and intelligent computing and scheduling.
As shown in Fig. 1, heterogeneous SNs are deployed randomly to detect invading targets with diverse on-board sensors in a monitoring area, which is displayed at the bottom level (data sensing). Unlike the SNs, the MNs can process and compute the sensing tasks on the middle level (mobile coverage and tracking). During task execution, MNs are not associated with any cluster, so they can reduce blind sensing zones thanks to their flexible mobility. Considering the large-range data sensing, MNs can process the massive data with the proposed multi-resource data fusion algorithm to alleviate data transmission pressure. On the top level (intelligent computing and scheduling), scheduling decisions are made by edge servers from a global viewpoint. Specifically, an intelligent cloudlet pattern is designed to alleviate computing pressure by integrating the resources of both MNs and edge servers. Based on this pattern, MNs can activate nearby SNs to observe targets in real time, and the collaborative tracking scheme ensures accurate tracking. Meanwhile, the tracking performance is saved in edge servers to inform subsequent node scheduling decisions. For ease of reference, the key notations are summarized in Table I.
We represent an execution task by its size and its execution deadline, i.e., the task must be processed within the deadline. Once the task is executed, its completion time must be guaranteed. To this end, the task is partitioned into two parts: one implemented on mobile nodes and the other executed on edge servers. In this case, collaborative computing reduces the computing latency to meet time-sensitive requirements. We assume that time is discrete, and denote the time slot length by t and the time slot index set accordingly. The collected data is transmitted to mobile nodes or edge servers based on the analysis of the transmission model. The MNs process the data with the proposed multi-source data fusion algorithm. After that, all the data resides at the edge servers, which implement the prediction of the mobile target's trajectory.
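The split between local (MN) and offloaded (edge) execution described above can be illustrated with a small sketch. The function name, CPU frequencies, uplink rate, and cycles-per-bit figure below are illustrative assumptions, not values from the paper:

```python
def partition_task(task_bits, f_mn_hz, f_es_hz, rate_bps, cycles_per_bit=1000.0):
    """Pick the fraction of the task run on the mobile node so that local
    computing and (upload + edge computing) finish at about the same time."""
    def finish_time(alpha):
        t_local = alpha * task_bits * cycles_per_bit / f_mn_hz
        t_edge = ((1 - alpha) * task_bits / rate_bps
                  + (1 - alpha) * task_bits * cycles_per_bit / f_es_hz)
        return max(t_local, t_edge)
    # coarse grid search over the split ratio alpha in [0, 1]
    return min((finish_time(a / 100.0), a / 100.0) for a in range(101))

# 1 Mb task, 1 GHz MN, 10 GHz edge server, 10 Mb/s uplink (assumed figures)
t_done, alpha = partition_task(1e6, 1e9, 1e10, 1e7)
```

With these figures the balanced split finishes sooner than either executing everything locally or offloading everything, which is the point of the collaborative design.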
3.1.1 Analysis of Transmission Model
To realize real-time computing, a transmission model is formulated to optimize the offloading destination for low-latency transmission. The offloading destinations of sensing data include MNs and edge servers, and the SNs select their optimal destinations collaboratively with minimal bandwidth resource consumption. When sufficient radio bandwidth is available at a time slot, an SN offloads its sensing data to an MN; otherwise, the sensing data is offloaded to an edge server. Assume that an SN has detected the invading target; its sensing data is then transmitted to a neighboring MN. If multiple MNs are in range, the SN selects the optimal destination node by estimating the available radio bandwidth. If no MN exists within its communication range, the SN selects the optimal edge server to offload the sensing data.
For each sensor node, the transmission power and channel gain at each time slot $t$ are denoted by $p$ and $g$, respectively. The transmission rate $r$ (from an SN to an MN, or from an SN to the sink node) takes the Shannon form

$$r = B \log_2\!\left(1 + \frac{p g}{\sigma^2}\right),$$

where $B$ is the channel bandwidth and $\sigma^2$ is the power of the Gaussian system noise. The corresponding transmission latency for a data load of size $D$ is given by $\tau = D / r$.
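As a numerical sanity check, the rate and latency expressions can be evaluated directly; the bandwidth, power, gain, and noise figures below are assumed example values:

```python
import math

def transmission_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w):
    # Shannon form: r = B * log2(1 + p*g / sigma^2)
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_w)

def transmission_latency(data_bits, rate_bps):
    return data_bits / rate_bps

r = transmission_rate(1e6, 0.1, 1e-6, 1e-9)   # 1 MHz band, 100 mW, example gain/noise
tau = transmission_latency(5e5, r)            # 500 kb of sensing data
```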
3.1.2 Analysis of Data Fusion Model
In order to track mobile targets with a high success probability, a multi-element integration scheme is introduced to improve tracking accuracy. When MN j performs the computation of tasks collected from the nearby m SNs, the values that deviate excessively from the median are removed. The updated values are represented as a set given by
where each entry denotes a computing result obtained at MN j. The final performance set is integrated via the Cartesian product of the results of the m involved SNs and MN j, and is represented as
Intelligent scheduling is executed by computing the remaining task on the edge servers. Nearby MNs and edge servers form a cloudlet to perform collaborative computing. The cooperative execution time is expressed as
where the capacity term denotes the intelligent computing capacity. Tracking time is incurred once the mobile control is distributed with an indicator variable. The system latency is expressed as
where the two terms denote the offloading latency and the system computing time, respectively.
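The median-based filtering step of the data fusion model in this subsection can be sketched as follows; the rejection threshold (twice the median absolute deviation) is an assumed choice for illustration, not a value from the paper:

```python
import statistics

def filter_outliers(readings, k=2.0):
    """Drop readings that deviate excessively from the median before fusion."""
    med = statistics.median(readings)
    mad = statistics.median([abs(x - med) for x in readings]) or 1e-9
    return [x for x in readings if abs(x - med) <= k * mad]

readings = [10.1, 9.8, 10.3, 42.0, 10.0, 9.9]   # 42.0 is a spurious reading
kept = filter_outliers(readings)                # the outlier is removed
```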
3.1.3 Trajectory Prediction Model
The trajectory prediction, aiming at the minimum deviation, is modeled as an Extended Kalman Filter (EKF) process, which incorporates prediction and update procedures. Unlike time-series forecasting or grey modeling methods, predicting mobility from an inertial motion model has excellent merit, especially for discrete-process control. The motion of the target, assumed to be an acoustic source, is given by

$$x_{t+1} = F x_t + w_t,$$

where $x_t$ is the state vector including the target's location, $F$ is the state-transition matrix, and $w_t$ is the noise term, namely Gaussian white noise.
In the prediction process, the covariance matrix is propagated to conduct the prediction estimation. At the $t$-th time slot, the noise signal amplitude received at sensor node $i$ is an element of the measurement vector $z_t$ and depends on the distance between the target and the $i$-th sensor node [3, 21, 14].
In the update process, the Kalman gain and the deviation value are acquired for the consecutive prediction process. The measurement residual $\tilde{y}_t = z_t - H\hat{x}_{t|t-1}$ evaluates the prediction, where $H$ is the measurement matrix mapping the actual state space into the measurement space. The Kalman gain, derived by taking the minimum mean square error as the objective function, is

$$K_t = P_{t|t-1} H^{T} S_t^{-1},$$

where $S_t = H P_{t|t-1} H^{T} + R$ is the innovation covariance and $R$ is the measurement noise covariance. The covariance matrix is updated iteratively as $P_{t|t} = (I - K_t H) P_{t|t-1}$. Consequently, the updated state estimate is given by

$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \tilde{y}_t.$$
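A minimal numeric sketch of the predict/update cycle above, using a constant-velocity motion model; all matrix and noise values are illustrative assumptions:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # measurement maps state -> position
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[1.0]])                   # measurement noise covariance

x = np.array([[0.0], [1.0]])            # initial state: position 0, velocity 1
P = np.eye(2)                           # initial covariance

for z in [1.1, 1.9, 3.2]:               # noisy position measurements
    x = F @ x                           # predict state
    P = F @ P @ F.T + Q                 # predict covariance
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    y = np.array([[z]]) - H @ x         # measurement residual
    x = x + K @ y                       # update state
    P = (np.eye(2) - K @ H) @ P         # update covariance
```

After the three measurements, the estimated position tracks the true trajectory (roughly 1, 2, 3) and the velocity estimate stays near 1.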
3.2 Problem Formulation
Execution cost is one of the key measures of target tracking performance and is invoked to optimize the scheduling strategy for the MTT-WSN. The self-states of sensor nodes are divided into four categories, which transition among each other to save energy. These states, namely sleep, idle, check, and work, are represented as a vector. Sensor nodes stay in the sleep state when there are no tracking tasks. They are then activated to the idle state to detect targets. When collecting sensing data, the nodes change into the check state to compute target positions. Mobile nodes are scheduled to track targets cooperatively once the work state is enabled. As shown in Fig. 2, an indicator vector signifies which state a sensor maintains.
For simplicity, the energy cost of the sleep state per unit time t is denoted accordingly. During a tracking period, the energy cost of sensor i is given by
The energy cost of the idle state is obviously higher than that of the sleep state; the unit energy cost of the idle state is assumed to exceed that of the sleep state. The energy consumption during a tracking period is given by
The scheduling schemes among sensors are identical and independent. Assuming that sensor node i is scheduled with a certain probability, the periodic energy consumed by sensor i in the check state follows accordingly. When node i is scheduled, its energy consumption can be represented as
where the first two coefficients are the circuit energy consumption and the amplifier gain consumption for transmitting one bit, respectively, and the remaining symbols denote the different data sizes. The computing energy depends on the effective switched capacitance, which is determined by the chip architecture, and on a scheduling variable that is a real number in [0, 1].
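The four-state energy accounting can be sketched as below. The per-unit-time costs follow the simulation settings in Table II, while the state durations in the example are assumed:

```python
# J per unit time for each node state (values as in the simulation setup)
STATE_COST = {"sleep": 0.1, "idle": 0.2, "check": 0.6, "work": 1.5}

def period_energy(schedule):
    """schedule: list of (state, duration) pairs within one tracking period."""
    return sum(STATE_COST[state] * dur for state, dur in schedule)

e = period_energy([("sleep", 10), ("idle", 4), ("check", 2), ("work", 3)])
# 10*0.1 + 4*0.2 + 2*0.6 + 3*1.5 = 7.5 J
```

Keeping nodes in the cheap sleep and idle states for most of the period is exactly what the scheduling formulation below tries to achieve.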
The scheduling of sensor nodes depends on the physical distance between the sensor nodes and the invading targets as well as on their residual energy, which is defined as
where the weight coefficients sum to one, and the two terms are the residual energy of sensor node i and its distance to the target, respectively. It is noteworthy that the metric is normalized.
When MNs are scheduled, the mobile energy cost is represented as
where the coefficient is the energy cost per unit of movement.
Consequently, the different energy costs are collected into a vector. When m nodes, including sensor nodes and mobile nodes, are deployed in a monitoring area, the long-term execution cost is formulated as
C1 denotes that tasks must be completed within the deadline. C2 is the power control constraint, i.e., the execution power cannot exceed the maximum power. C3 is the tracking accuracy constraint, i.e., the achieved accuracy must meet the required accuracy. C4 indicates that the normalized tracking capacity is limited to a feasible range. C5 denotes that each sensor occupies exactly one state at each time slot.
4 Long-Term Dynamic Resource Allocation
To acquire the optimal target tracking strategy, we propose a long-term dynamic resource allocation algorithm. In this algorithm, computing and tracking decisions are executed synchronously for the time-sensitive MTT-WSN requirements.
4.1 The Markov Decision Process
In the MTT network, the system actions depend only on the current system states during the tracking process, and problem P1 is regarded as the optimization of the long-term average system cost. Consequently, P1 is formulated as an MDP model incorporating a state space, an action space, a reward formulation, and a state transition equation.
The state space: At the t-th time slot, the state space includes the trajectory prediction, the tracking capacity (namely the residual energy and the distance between the target and the sensor nodes), and the node states.
The action space: The action space is related to the tracking performance, which results from deep training and learning as well as from the next state. Each element of the action vector indicates whether sensor i is scheduled or not.
The reward formulation: The action experience is guided by the reward to encourage better performance. The reward formulation couples the causal reward and the action by weighting the system energy consumption, the mean square error, and a punishment for unsatisfied performance, with weight coefficients that sum to one.
The state transition equation: The system interaction, as an important step, is used to obtain the optimal system benefit. The transition probability model is formulated based on the Markov chain, in which the sample values stored in the memory are based on the historical and forward data.
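The weighted reward described above can be sketched as follows; the sign convention (negative, so that lower cost yields higher reward) and the example weights and inputs are assumptions for illustration:

```python
def reward(energy, mse, penalty, w=(0.5, 0.3, 0.2)):
    """Composite reward coupling system energy consumption, mean square
    error, and a punishment term; the weight coefficients sum to one."""
    assert abs(sum(w) - 1.0) < 1e-9
    return -(w[0] * energy + w[1] * mse + w[2] * penalty)

r = reward(energy=2.0, mse=0.5, penalty=1.0)
# -(0.5*2.0 + 0.3*0.5 + 0.2*1.0) = -1.35
```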
4.2 Analysis of the Long-Term Dynamic Resource Allocation Algorithm
As shown in Fig. 3, the proposed LTDRA integrates three hierarchies: data, feature, and decision. Specifically, in the data layer, state information, including the target mobility and the nodes' own available capacity, is collaboratively exchanged and collected from the MTT environment. The data is then transmitted to the feature layer for mobile trajectory prediction: it is trained in the prediction neural network, which analyzes the mobile trajectory over the following time slots (i.e., the prospective mobile trajectory). The prediction results are transmitted to the decision layer, which makes the node scheduling strategy based on the execution deadline and the system energy. In the decision layer, a smart agent implements self-driven learning by interacting with the environment and iterates the node scheduling strategy by sampling from the updated replay memory. A Deep Reinforcement Learning (DRL) algorithm is proposed to overcome data correlation by sampling from the replay memory. Besides, the prediction data is fed into the replay memory of the decision layer to acquire fresh knowledge. The prediction process is parameterized by an estimation coefficient.
In the decision layer, the state set forming the state space is fed into the primary network, and the node scheduling actions are acquired from the target network. The two neural networks are executed synchronously to facilitate their respective learning processes. The target network evaluates the current state-action pair using the cost function. The Q-value, which is a single scalar, can replace the multi-objective optimization model for the optimal node scheduling.
The cost function is derived from the Bellman equation. Problem P2 is the accumulated reward expectation, represented as

$$Q^{\pi}(s,a) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t \,\middle|\, s_0 = s,\ a_0 = a\right],$$

where $\gamma \in [0,1)$ is the discount factor and the expectation is taken over the long-term cumulative reward process. The iteration process is given by

$$Q(s,a) \leftarrow \mathbb{E}\!\left[r(s,a) + \gamma \max_{a'} Q(s',a')\right],$$

where $r(s,a)$ is the reward under state $s$ and action $a$; the optimal scheduling strategy is acquired by $\pi^{*}(s) = \arg\max_{a} Q^{*}(s,a)$.
Unfortunately, problem P2 is tractable only for low-dimensional data, and dimensionality reduction may result in high computational complexity, which is not feasible in a time-sensitive MTT network. However, an alternative method can obtain an approximate solution to P2 with acceptable time and space complexity. The approximation approach is formulated as
and the corresponding updating process is

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right],$$

where $\alpha$ is the learning rate.
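A tabular sketch of this update rule; the tiny 2-state, 2-action table is an assumption for illustration, since the paper approximates the Q-function with neural networks:

```python
def q_update(Q, s, a, r, s_next, lr=0.1, gamma=0.9):
    """Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += lr * (target - Q[s][a])

Q = [[0.0, 0.0], [0.0, 0.0]]   # 2 states x 2 actions
q_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0][1] becomes 0.1 * (1.0 + 0.9*0.0 - 0.0) = 0.1
```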
In the LTDRA algorithm, the Q-estimate and Q-reality values are produced by the primary network and the target network, respectively. The network parameters are updated after each iteration, and the output reward is used to evaluate each action. The optimal scheduling is obtained by maximizing the reward. The algorithm flow is detailed in Algorithm 1.
4.3 Analysis of Algorithm Complexity
In the target tracking network, a number of sensor nodes are deployed. We first analyze the deep reinforcement learning algorithm from a macroscopic viewpoint. Considering the internal algorithm flow, we observe that one action is randomly sampled from the action list, which takes constant time per iteration. In the primary network, the complexity of the matrix operations is a function of the depth and the number of hidden layers. Although we add the prediction neural network, the overall time complexity is not increased, as discussed in Section V.
TABLE II: Simulation Parameters
- The invading range
- The number of target nodes: 1
- The number of sensor nodes: 56
- The number of mobile nodes: 6
- The tracking velocity: 1 m/s
- The total energy of each sensor node: 40 J
- The initial coordinate of the target node: (0, 50 m)
- The energy consumption in the sleep mode: 0.1 J/unit time
- The energy consumption in the idle mode: 0.2 J/unit time
- The energy consumption in the check mode: 0.6 J/unit time
- The energy consumption in the work mode: 1.5 J/unit time
- The tuning range of the learning rate: [0.1, 0.9]
- The discount factor of the reward function: 0.9
- The mini-batch size: 32
- The replay memory size: 500
5 Performance Evaluation
Extensive simulation experiments are conducted to evaluate the proposed LTDRA algorithm in terms of tracking accuracy and system energy consumption.
5.1 Simulation Setup
We use Python 3.7 and the TensorFlow framework to build the MTT scenario. The main performance indicators, including system energy consumption, tracking accuracy, and execution latency, are programmed and evaluated to verify the efficiency of the proposed algorithm. Specifically, we design a square monitoring area in which sensor nodes are deployed randomly in the initial stage. As shown in Fig. 4, the red and blue solid circles in the area denote MNs and SNs, respectively. The number of MNs is set to 6 and that of SNs to 50. The noise covariances are set to 1. The total energy of each static sensor node is 40 J. The initial location and velocity of the mobile target are set within the given ranges, and the initial invading direction can be arbitrary. The important simulation parameters are summarized in Table II. For comparison, we introduce four benchmark strategies.
Non-cooperative Scheme: This scheme adopts the DRL framework and the prediction network, but collaboration between mobile nodes and edge servers is not considered.
Deep Q-learning Scheme: This scheme provides only the DRL neural network without the prediction network; the collaborative scheme is incorporated.
Greedy Scheme: This scheme, built on a deep learning architecture, aims at minimizing the execution cost and selects sensor nodes with sufficient energy at each time slot t.
Random Selection Scheme: When a task is executed on an edge server, each sensor is assigned a probability in [0, 1] with the aid of the DRL architecture. The generated values are sorted in descending order, and sensors are scheduled if their probability is greater than 0.5.
5.2 Results Discussion
In this subsection, we verify the feasibility of the prediction scheme, show the efficiency of the proposed algorithm, and demonstrate the impact of varying system parameters.
In Fig. 5, the Mean Squared Error (MSE) is formulated to evaluate the prediction accuracy. The algorithm terminates when the upper bound on the number of rounds is reached. Fig. 5 depicts the tendency of the MSE and the number of activated sensors. The MSE decreases gradually and then remains stable. After a number of rounds, the number of activated sensor nodes begins to increase and the positioning accuracy of the nodes also improves. Besides, the positioning accuracy converges gradually, although the number of activated sensor nodes fluctuates within a certain range. The fluctuation stems from the instability of the EKF and the prediction neural network. Specifically, fitting errors may arise when nonlinear motion is approximated by the linear motion processed in the EKF. Besides, the training process may also cause fluctuations in the neural network.
We next show the effectiveness of target tracking and demonstrate the impact of different memory ratios of historical data size to prospective data size on prediction accuracy and system consumption. First, the system performance under different memory ratios, including prediction accuracy and system energy consumption, is illustrated in Fig. 6 and Fig. 7. Note that the tracking accuracy mentioned here is equivalent to the trajectory prediction accuracy. When the ratio is 2:1, as shown in Fig. 6 (a), the prediction error decreases dramatically in the early stage. After that, the tracking error approaches a stable status, oscillating within the range [0.8, 1.2]. Fig. 6 (b) provides the tendency of the system energy consumption: it decreases smoothly and tends to a steady state after approximately 500 iterations. When the ratio is 3:1, as shown in Fig. 7 (a), the convergence of the prediction accuracy is slower than in Fig. 6 (a). In Fig. 7 (b), the number of iterations is the same as in Fig. 6 (b), but the converged energy consumption is higher.
The following observations can be made from Fig. 6 and Fig. 7. Firstly, the prediction error is limited to a narrow range, which is acceptable in many practical tracking scenarios. The trends of system energy consumption and prediction accuracy converge and stabilize as the number of iterations increases, implying that the proposed algorithm can ensure consistent tracking performance. Secondly, the convergence speeds differ slightly; the reason is that different learning rates influence the gradient descent steps, generating different weight values during training. Finally, the proposed algorithm achieves quick convergence under different learning rates thanks to the combination of the prediction neural network and reinforcement learning.
Fig. 8 compares the system cost of the different scheduling strategies. As the number of iterations increases, all the scheduling schemes reduce their system energy consumption. Two observations follow from this figure. Firstly, the random selection scheme exhibits the highest system energy consumption, generated mainly by mobility. Secondly, the proposed LTDRA algorithm obviously reduces the system consumption compared with the four benchmarks: approximately 14.5%, 31.6%, 42.8%, and 47.4% lower than the deep Q-learning, non-cooperative, greedy, and random selection schemes, respectively.
Fig. 9 compares the tracking accuracy of the different scheduling schemes. To improve the prediction accuracy, edge servers and mobile nodes acquire the whole system status to discover the optimal scheduling scheme cooperatively. The greedy scheme always seeks the sensor nodes with sufficient energy, so accurate prediction cannot be guaranteed. In the non-cooperative scheme, mobile nodes only collect and transmit sensing data to the edge server; the computing results may then suffer high latency due to massive data transmission. The random selection scheme performs worst, since the number of scheduled nodes is random. Compared with the non-cooperative scheme, the deep Q-learning algorithm performs better in system energy consumption, which implies that collaborative computing is significant. The proposed scheme performs best through the joint optimization of tracking accuracy and system energy consumption with the coupled architecture of deep reinforcement learning and the prediction network.
Fig. 10 gives the trade-off between prediction accuracy and system cost for the different scheduling schemes. Numerically, the proposed LTDRA algorithm reduces the system energy consumption by (44.0%, 38.8%, 21.4%, 8.3%), (44.4%, 37.5%, 30.0%, 9.1%), and (48.7%, 42.8%, 33.3%, 16.6%) compared with the random selection, greedy, non-cooperative, and deep Q-learning schemes under prediction error upper bounds of 1.5 m, 2 m, and 2.5 m, respectively. The results illustrate that the proposed scheme evidently reduces extra system energy consumption while guaranteeing the tracking accuracy. Our intelligent scheduling scheme can thus guarantee real-time tracking accuracy with minimal system energy consumption.
Fig. 11 shows the tendency of the system prediction accuracy as the number of iterations increases. It can be observed that the system prediction error decreases as the number of iterations increases. Compared with the Centralized Implementation (CI), which adopts a multi-model Bernoulli filter, our proposed scheme consistently outperforms during the iterative process and reduces the prediction error by approximately 22.5% on average. Compared with the above five benchmarks, our LTDRA algorithm exhibits more stable convergence in the MTT network. In terms of single indicators, our algorithm reduces the system energy consumption by roughly 30% on average to guarantee long-term target tracking, and the average 22.5% reduction in prediction error ensures accurate target prediction and efficient tracking performance. Considering multiple indicators, approximately 25% of the energy is saved at the same prediction error level. The LTDRA also reduces the system response latency by rapidly exploring the optimal node scheduling strategy. On the whole, the validity of our algorithm is confirmed through multidimensional comparisons.
Fig. 12 compares the system execution latency as the number of iterations increases. Based on the proposed hierarchical target tracking structure, the system execution latency under our LTDRA scheme is significantly lower than that of the other benchmarks. The designed intelligent cloudlet pattern provides sufficient computing resource for real-time and accurate prediction of the invading trajectory. According to the numerical analysis, our proposed scheme reduces the response latency by approximately 5%, 10%, and 13% compared with the non-cooperative scheme, the greedy scheme, and the random scheme, respectively.
In this paper, we investigate the MTT-WSN system for accurate and consecutive target tracking. We design a hierarchical target tracking structure that facilitates the sensing and computing process with edge intelligence technology and enables collaborative computing in the proposed intelligent cloudlet. Based on this design, a multi-objective optimization problem is formulated, and a long-term dynamic resource allocation (LTDRA) algorithm is proposed to obtain the optimal node scheduling policy. Simulation results reveal that our algorithm achieves quick convergence with low response latency, significantly enhances the tracking accuracy, and decreases the execution cost. The structure also provides a feasible approach for battery-powered MTT-WSN systems.
-  (2018) Tracking of mobile sensors using belief functions in indoor wireless networks. IEEE Sensors Journal 18 (1), pp. 310–319. Cited by: §5.2.
-  (2016) Front-end intelligence for large-scale application-oriented Internet-of-Things. IEEE Access 4, pp. 3257–3272. Cited by: §2.
-  (2016) Distributed information fusion in multistatic sensor networks for underwater surveillance. IEEE Sensors Journal 16 (11), pp. 4003–4014. Cited by: §3.1.3.
-  (2007) Cooperative tracking using vision measurements on SeaScan UAVs. IEEE Transactions on Control Systems Technology 15 (4), pp. 613–626. Cited by: §2.
-  A bi-layered parallel training architecture for large-scale convolutional neural networks. IEEE Transactions on Parallel and Distributed Systems 30 (5), pp. 965–976. Cited by: §1.
-  (2019) Distributed deep learning model for intelligent video surveillance systems with edge computing. IEEE Transactions on Industrial Informatics, pp. 1–1. Cited by: §1.
-  (2019) A domain adaptive density clustering algorithm for data with varying density distribution. IEEE Transactions on Knowledge and Data Engineering, pp. 1–1. Cited by: §1.
-  (2016) Fog and IoT: an overview of research opportunities. IEEE Internet of Things Journal 3 (6), pp. 854–864. Cited by: §2.
-  (2019) A nature-inspired node deployment strategy for connected confident information coverage in Industrial Internet of Things. IEEE Internet of Things Journal 6 (6), pp. 9217–9225. Cited by: §2.
-  (2018) Novel sensor scheduling scheme for intruder tracking in energy efficient sensor networks. IEEE Wireless Communications Letters 7 (5), pp. 712–715. Cited by: §2.
-  (1996) Rapid design of neural networks for time series prediction. IEEE Computational Science and Engineering 3 (2), pp. 78–89. Cited by: §1.
-  (2018) Distributed optimal control of sensor networks for dynamic target tracking. IEEE Transactions on Control of Network Systems 5 (1), pp. 142–153. Cited by: §3.1.3.
-  (2018) Quantized Kalman filter tracking in directional sensor networks. IEEE Transactions on Mobile Computing 17 (4), pp. 871–883. Cited by: §3.1.3.
-  (2015) Performance evaluation of a fuzzy-based wireless sensor and actuator network testbed for object tracking. In 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), pp. 442–447. Cited by: §3.1.3.
-  (2018) Robust mechanism of trap coverage and target tracking in mobile sensor networks. IEEE Internet of Things Journal 5 (4), pp. 3019–3030. Cited by: §2.
-  (2019) LSTM and edge computing for big data feature recognition of industrial electrical equipment. IEEE Transactions on Industrial Informatics 15 (4), pp. 2469–2477. Cited by: §2.
-  (2019) Collaborative energy-efficient moving in Internet of Things: genetic fuzzy tree versus neural networks. IEEE Internet of Things Journal 6 (4), pp. 6070–6078. Cited by: §1.
-  (2014) Ridge regression and Kalman filtering for target tracking in wireless sensor networks. In 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 237–240. Cited by: §3.2.
-  (2018) Collaborative task offloading in vehicular edge multi-access networks. IEEE Communications Magazine 56 (8), pp. 48–54. Cited by: §1.
-  (2017) Live data analytics with collaborative edge and cloud processing in wireless IoT networks. IEEE Access 5, pp. 4621–4635. Cited by: §2.
-  (2014) Iterated extended Kalman filter for time-delay systems with multi-sample-rate measurements. In Proceeding of the 11th World Congress on Intelligent Control and Automation, pp. 4532–4536. Cited by: §3.1.3.
-  (2011) Dynamic channel management for advanced, energy-efficient sensor-actor-networks. In 2011 World Congress on Information and Communication Technologies, pp. 413–418. Cited by: §3.1.3.
-  (2018) Joint range-doppler-angle estimation for intelligent tracking of moving aerial targets. IEEE Internet of Things Journal 5 (3), pp. 1625–1636. Cited by: §2.
-  (2019) Offloading-assisted energy-balanced IoT edge node relocation for confident information coverage. IEEE Internet of Things Journal 6 (3), pp. 4482–4490. Cited by: §1.
-  (2018) An edge cloud-assisted CPSS framework for smart city. IEEE Cloud Computing 5 (5), pp. 37–46. Cited by: §1.
-  (2020) Intelligent task offloading for heterogeneous V2X communications. IEEE Transactions on Intelligent Transportation Systems, pp. 1–13. Cited by: §1.
-  (2018) Contention-aware reliability efficient scheduling on heterogeneous computing systems. IEEE Transactions on Sustainable Computing 3 (3), pp. 182–194. Cited by: §1.
-  (2016) Bi-objective workflow scheduling of the energy consumption and reliability in heterogeneous computing systems. Information Sciences, pp. S0020025516305722. Cited by: §1.
-  (2020) Event-triggered adaptive tracking control for multiagent systems with unknown disturbances. IEEE Transactions on Cybernetics 50 (3), pp. 890–901. Cited by: §2.
-  (2017) Energy-efficient localization and tracking of mobile devices in wireless sensor networks. IEEE Transactions on Vehicular Technology 66 (3), pp. 2714–2726. Cited by: §1.
-  (2019) Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proceedings of the IEEE 107 (8), pp. 1738–1762. Cited by: §1.