A Unified Framework for Joint Mobility Prediction and Object Profiling of Drones in UAV Networks

07/31/2018 ∙ by Han Peng, et al.

In recent years, using a network of autonomous and cooperative unmanned aerial vehicles (UAVs) without command and communication from a ground station has become increasingly imperative, in particular in search-and-rescue operations, disaster management, and other applications where human intervention is limited. In such scenarios, UAVs can make more efficient decisions if they acquire more information about the mobility, sensing, and actuation capabilities of their neighbor nodes. In this paper, we develop an unsupervised online learning algorithm for joint mobility prediction and object profiling of UAVs to facilitate control and communication protocols. The proposed method not only predicts the future locations of the surrounding flying objects, but also classifies them into groups with similar levels of maneuverability (e.g. rotary-wing and fixed-wing UAVs) without prior knowledge of these classes. The method is flexible in admitting new object types with unknown mobility profiles, and is thereby applicable to emerging flying ad-hoc networks with heterogeneous nodes.




I Introduction

Recently, the use of unmanned aerial vehicles (UAVs) has increased rapidly in many applications, including transportation [1], traffic control [2], remote sensing [3], wildlife monitoring [4], smart agriculture [5], surveillance [6], broadband satellite communication enhancement [7], and reconnaissance and border patrolling [8]. According to the Federal Aviation Administration (FAA), more than 1 million drones were registered with the federal government in 2018 [9]. In some applications, completing intricate tasks is not feasible with a single UAV due to drones' limited flight time, payload, and communication range [10]. In these situations, deploying a network of drones is often unavoidable. Also, in more time-sensitive applications such as search and rescue, using networked UAVs significantly raises the chance of mission success [11].

An important property of UAV networks is their extremely dynamic network topology due to freely flying drones [10]. This becomes an even more challenging issue for futuristic autonomous UAV networks [12, 13]. The dynamic topology of UAV networks, especially when they are composed of heterogeneous nodes with different levels of maneuverability, reliability, sensing, actuation, and communication capabilities, calls for a new generation of control, communication, and navigation mechanisms that meet the requirements of these networks [14, 15]. In particular, it paves the road for providing connectivity and seamless communication through proactive and predictive routing algorithms in order to improve the network's operational performance [16, 17, 18, 19].

Characterizing network topology changes can significantly improve the operational performance of these networks in terms of control and communications, as demonstrated by the following scenarios. For instance, when the network is composed of nodes with limited communication ranges, predicting the network topology and the future positions of nodes can be used to enhance network connectivity by excluding the links that are more prone to failure in predictive routing algorithms, as depicted in Fig. 1. This approach is in clear contrast with conventional link selection algorithms, where end-to-end routes are set up solely based on the current network topology and link failures are dealt with only after their occurrence; consequently, the network can suffer from frequent link interruptions and re-establishments [20, 21]. Recently, efforts have been made to develop algorithms for predictive communication, with the main idea of making decisions at different layers of the communication protocol stack by taking into account the anticipated future network topology [22]. This new approach to communication requires network topology prediction through prediction of the member nodes' motion trajectories.

Fig. 1: Illustration of UAV networks at two time points. The communication range of UAV A is shown by a circle. At the later time point, A flies out of its neighbors' accessible ranges and loses network connectivity. The predicted network topology can be used to prolong network connectivity by selecting routes that are less prone to upcoming failures (e.g. by excluding A).

Another example is a search-and-rescue operation by autonomous UAV nodes. Predicting local sub-network topology changes can help each individual UAV make more efficient decisions. For instance, an autonomous UAV in a search operation may decide to cover areas that are not already covered and are less likely to be covered by other UAVs, based on their predicted motion trajectories. Therefore, prediction of node mobility patterns facilitates a more efficient and timely service by autonomous UAVs, as depicted in Fig. 2.

II Related Work

Network topology prediction can be realized by predicting the motion trajectories of individual nodes. Here, we assume that the UAVs are autonomous with no prior path planning. Also, the UAVs are not allowed to share their current locations and future motion trajectories with one another, due to security considerations or limited communication resources. Therefore, each UAV intends to predict the motion trajectories of its neighbor nodes based on its own observations.

Several mobility prediction methods have been proposed in the past decade. These methods can be divided into two main categories: data-driven and model-based methods. Data-driven methods require large datasets to extract the most frequent patterns [23]. These methods indirectly capture the influence of natural and man-made textures, users' behavioral habits, and spatial and temporal variations on the nodes' mobility [24]. On the other hand, model-based methods try to predict the motion trajectory of an object based on its motion history and typically rely on the smoothness of motion trajectories [25]. These methods include piece-wise segment methods [26], hidden Markov models (HMMs) [27], Lévy flight processes [28], Bayesian methods [29], manifold learning [30], and Gaussian mixture models [29]. They are typically customized for specific object types such as humans [31], self-propellers [32], and articulated rovers [33].

Fig. 2: A network of autonomous UAVs performs a search-and-rescue operation after a natural disaster. A new UAV, shown in green, joins the mission. This UAV processes the other UAVs' motion trajectories to identify and cover the regions that are less likely to be covered by the other nodes.

The main objective of this work is to develop a unified framework for predicting the motion trajectories of mobile entities of different types. The core of our method is Kalman filtering with intermittent observations [34]. However, we use the object's type-specific motion properties to improve the prediction accuracy by employing a novel generative model for the system input. Thereby, the utilized state transition model provides flexibility and generality, while the class-specific input further improves the prediction accuracy.

The second, and more important, feature of the proposed method is motion-based object profiling. We note that the predicted node locations in a fully autonomous network are only valid for the near future (a few seconds). Therefore, in the majority of applications we need more general and perpetual information about the nodes' mobility. Here, we visit the network topology prediction problem from a different viewpoint. Note that flying ad-hoc networks (FANETs) typically include a wide range of object types, including ground vehicles, fixed-wing drones, multi-rotor drones, helicopters, and piloted aircraft, where each type has a different mobility profile. We intend to exploit the main properties of their mobility and classify the objects based on these properties. This approach is inspired by human perception in recognizing different object types by observing their motion patterns, and it can be used to gain long-term information about the future network topology. For instance, it can be used to predict the coverage area of a UAV in a search-and-rescue operation, as shown in Fig. 2.

To summarize, the objective of this project is to develop a unified framework which jointly performs two tasks: predicting the near-future locations of target nodes and classifying them into disjoint types with different maneuverability levels based on their motion profiles. We call this algorithm joint mobility prediction and profiling (JMPP). The proposed system is equipped with a self-tuning module which learns new mobility classes over time without any prior information. This feature provides the flexibility of accepting new object types with new mobility profiles.

It is noteworthy that a few recent works have focused on clustering motion trajectories using different methods, including distance-based clustering [35], waypoint clustering [36], tree-based methods [37], grid-based methods [38], and kernel methods [39]. Some of these methods focus specifically on airspace monitoring [36]. However, a majority of these methods aim at extracting the most frequently used geographical paths of mobile objects based on distance metrics, rather than finding different motion classes. See [40] for a more complete review of distance-based methods. Recently, more elegant methods have been developed that cluster trajectories based on their shape parameters rather than only their Euclidean distances. These methods include mixtures of multivariate Von Mises distributions [41], sparse nonnegative matrix factorization [42], and circular statistics [41]. However, these methods try to find explicit similarities between the motion patterns of similar objects, which might be absent in most cases. In this work, we instead exploit the underlying parameters governing the motion dynamics of an object and use them for object profiling.

The closest work we have found in the literature is [41], where aircraft motion trajectories are used to classify aircraft into typical manned and expected unmanned classes. The authors used a trajectory re-sampling technique followed by a mixture of Von Mises distributions to model the trajectories, which are finally clustered using the k-medoids algorithm. This method is offline, requires a relatively large dataset of labeled trajectories, and is not capable of performing mobility prediction or online clustering. Further, it is limited to two classes and is not flexible enough to admit new object types. Our proposed method solves these two important issues, following the recent trend of utilizing advanced machine learning methods to optimize wireless networking [43, 44, 45, 46].

III Universal Mobility Model

In this paper, we view the nodes' mobility from an observer's perspective, where the observer can be any network node monitoring its surrounding partners. The kinematic equations of the targets in 3D space are expressed in terms of state transition equations with a noise term that captures motion turbulence as follows:

$$x_{k+1} = A\,x_k + B\,u_k + w_k, \qquad y_k = C\,x_k + v_k, \qquad (1)$$

where $x_k$ is the state vector (representing the location and velocity of the object at time step $k$) and $y_k$ is the observation vector obtained using an arbitrary tracking system. The matrices

$$A = \begin{bmatrix} I & \Delta t\, I \\ 0 & I \end{bmatrix}, \qquad B = \begin{bmatrix} \tfrac{\Delta t^2}{2}\, I \\ \Delta t\, I \end{bmatrix}, \qquad C = \begin{bmatrix} I & 0 \end{bmatrix}$$

define the system, where $\Delta t$ is the time step of the discretized system. Also, $w_k$ and $v_k$ model the system and measurement noise terms. The key role player here is the input vector $u_k$, which represents the acceleration (or, equivalently, the mechanical forces that drive the whole system dynamics). Therefore, it can be used to define an object's motion profile.
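For concreteness, the discretized state-space model can be exercised numerically. The sketch below assumes a standard 2D constant-velocity form with acceleration input; the matrix shapes, noise levels, and driving-force statistics are illustrative, not the paper's exact values.

```python
import numpy as np

# Discretized constant-velocity kinematics with acceleration input
# (standard form; illustrative parameter values).
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])            # state transition
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])  # input (acceleration)
C = np.hstack([np.eye(2), np.zeros((2, 2))])              # observe positions only

rng = np.random.default_rng(0)
x = np.zeros(4)                        # state: [px, py, vx, vy]
for k in range(100):
    u = rng.normal(0.0, 0.5, size=2)   # driving force (placeholder statistics)
    w = rng.normal(0.0, 0.01, size=4)  # system noise
    x = A @ x + B @ u + w
    y = C @ x + rng.normal(0.0, 0.1, size=2)  # noisy location measurement
print(x.shape, y.shape)  # state is 4-dimensional, measurement 2-dimensional
```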

Inspired by the fact that the kinematics of most man-made objects are controlled by acceleration/braking and steering mechanisms, it is desirable to divide the velocity vector into speed and direction terms as follows:

$$v_x = s \cos\theta, \qquad v_y = s \sin\theta, \qquad \omega = \dot{\theta}, \qquad (2)$$

where $\theta$ is the direction of the motion trajectory and $\omega$ is the angular velocity, both in the xy-plane. Similarly, we can find the linear acceleration along the direct path, $a_s = \dot{s}$, and the angular acceleration, $a_\omega = \dot{\omega}$. The motion in the third dimension (z-axis) can be considered independent of the motion in the xy-plane [47]. When the drones hover at a fixed altitude, we have $v_z = 0$ and the simplified 2D equations can be used [48, 49, 50]. Noting that $a_s$, $a_z$, and $a_\omega$ are time series typically composed of sporadic positive and negative pulses with random amplitudes, we define the following generative model for the system input:

$$a_s(k) = b_s(k)\, g_s(k), \qquad a_z(k) = b_z(k)\, g_z(k), \qquad a_\omega(k) = b_\omega(k)\, g_\omega(k), \qquad (3)$$

where $b_s(k) \sim \mathrm{Bern}(p_s)$, $b_z(k) \sim \mathrm{Bern}(p_z)$, and $b_\omega(k) \sim \mathrm{Bern}(p_\omega)$ are Bernoulli distributed random variables (RVs) representing the probability of a change in, respectively, the velocity in the xy-plane, the velocity in the z direction, and the angular velocity in the xy-plane. Likewise, the amounts of change in the speed in the xy-plane and along the z-axis and in the angular velocity in the xy-plane are modeled with three Gaussian distributions $g_s$, $g_z$, and $g_\omega$ with means $\mu_s$, $\mu_z$, and $\mu_\omega$ and variances $\sigma_s^2$, $\sigma_z^2$, and $\sigma_\omega^2$. The subscript $i$, suppressed above for readability, is the object identification (id). This model can be viewed as a special case of the spike-and-slab distribution, where the variance of one component approaches zero. Note that this model yields an exponential distribution, with the desired memoryless property, for the silent intervals between consecutive pulses. The model parameters, denoted by

$$\theta_i = \left(p_s, p_z, p_\omega, \mu_s, \mu_z, \mu_\omega, \sigma_s^2, \sigma_z^2, \sigma_\omega^2\right), \qquad (4)$$

fully determine the statistical properties of the motion dynamics in (1). These parameters differ from one object to another, but share similarities among objects within a class.
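The sporadic-pulse input model can be sampled directly. The sketch below assumes a Bernoulli gate multiplied by a Gaussian amplitude (the spike-and-slab-style construction described above); the parameter values and function name are illustrative.

```python
import numpy as np

# Sample a sporadic-pulse input sequence: u[k] = b[k] * g[k],
# with b ~ Bernoulli(p) and g ~ N(mu, sigma^2). Values are illustrative.
rng = np.random.default_rng(1)

def sample_input(T, p, mu, sigma):
    gate = rng.random(T) < p             # pulse occurrence (Bernoulli gate)
    amp = rng.normal(mu, sigma, size=T)  # pulse amplitude (Gaussian slab)
    return gate * amp                    # zero during silent intervals

u_s = sample_input(1000, p=0.05, mu=0.0, sigma=2.0)  # linear acceleration
u_w = sample_input(1000, p=0.02, mu=0.0, sigma=0.5)  # angular acceleration
# Silent gaps between pulses are geometric (memoryless) in discrete time.
print(f"pulse fraction: {(u_s != 0).mean():.3f}")
```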

To embrace this within-class similarity, we model $\theta_i$ as a random vector whose elements are controlled by a set of hyper-parameters $\phi_c$, shared among all objects of class $c$. We omit the subscript $i$ for notational convenience when it is clear from the context. This approach captures within-class similarities, while providing sufficient flexibility for per-object variability. Fig. 3 provides a graphical representation of this generative model for the driving forces.

An appropriate choice for the model parameters is a set of conjugate distributions, which provide the convenience of closed-form posterior distributions when applying Bayes' rule. More specifically, the posterior distribution of the model parameters after observing the acceleration vector then belongs to the same family as the prior distribution. The Gaussian family is conjugate to itself (self-conjugate) with respect to a Gaussian likelihood function; thereby, its mean can be represented with a Gaussian distribution. The variance is represented with an Inverse-Gamma distribution, and the Bernoulli distribution has the Beta distribution as its conjugate prior [51]. Therefore, we choose the following prior distributions for the model parameters:

$$p \sim \mathrm{Beta}(\alpha, \beta), \qquad \mu \sim \mathcal{N}\!\left(\mu_0, \sigma^2/\kappa\right), \qquad \sigma^2 \sim \mathrm{Inv\text{-}Gamma}(a, b), \qquad (5)$$

where $\lambda = 1/\sigma^2$ is the precision and $\kappa$ is an arbitrary shrinkage parameter. Here, we assume that each object belongs to one class $c \in \{1, \dots, N_c\}$, and each class includes nodes with a shared hyper-parameter vector $\phi_c$. The details of the proposed mobility model with probabilistic hierarchical input are presented in Fig. 3.
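The conjugate choices above admit simple closed-form updates. The following sketch shows the standard Beta-Bernoulli update and the Gaussian-mean update with known variance; the function names and hyper-parameter values are illustrative, not the paper's.

```python
# Closed-form conjugate posterior updates (standard textbook forms;
# values are illustrative).
def beta_update(alpha, beta, pulses, trials):
    """Beta(alpha, beta) prior -> posterior after Bernoulli observations."""
    return alpha + pulses, beta + trials - pulses

def normal_mean_update(mu0, kappa, xbar, n, sigma2):
    """N(mu0, sigma2/kappa) prior on the mean, with known variance sigma2."""
    kappa_n = kappa + n
    mu_n = (kappa * mu0 + n * xbar) / kappa_n
    return mu_n, kappa_n

a, b = beta_update(2.0, 50.0, pulses=5, trials=100)
print(a, b)            # -> 7.0 145.0
mu_n, k_n = normal_mean_update(0.0, 1.0, xbar=2.0, n=9, sigma2=1.0)
print(mu_n, k_n)       # -> 1.8 10.0
```

The shrinkage parameter plays the role of a pseudo-observation count: the posterior mean is a precision-weighted blend of the prior mean and the sample mean.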

Fig. 3: The proposed mobility model, which includes a universal kinematics model with a probabilistic input represented by a hierarchical graphical model.

Finding the most likely object class based on its observed motion trajectory involves the following steps, as depicted in Fig. 4. Firstly, we estimate the object's current and upcoming locations by solving the state transition equations in (1). This stage also provides an estimate of the system acceleration process, $(\hat{a}_s, \hat{a}_\omega)$ for a 2D motion and, equivalently, $(\hat{a}_s, \hat{a}_z, \hat{a}_\omega)$ for a 3D motion. Hereafter, we assume a 2D motion for notational convenience.

Secondly, we use the expectation maximization (EM) algorithm to obtain the most likely model parameters $\hat{\theta}$, which fully define the distribution of the input vector. This estimate is regarded as a noisy observation of the model parameters $\theta$ and is fed into the Bayesian inference module in order to find the posterior probability of each class, using the prior distributions and the class-conditional distributions of all potential classes $c \in \{1, \dots, N_c\}$. The ultimate goal of this stage is to determine the most likely object class based on the observation as follows:

$$\hat{c} = \arg\max_{c \in \{1, \dots, N_c\}} P(c \mid \hat{\theta}). \qquad (6)$$

In short, the most likely object class is obtained by observing the object's motion trajectory. In practice, a relatively short observation period is sufficient for reliable object motion profiling. Further, we use the statistical properties of the motion profiles of the observed objects (i.e. $\hat{\theta}$) to refine the hyper-parameters of each class in order to further improve the prediction accuracy. In other words, we learn and refine class-specific mobility parameters using online observations, as detailed in the following section. Finally, we note that with our flexible model, the system can admit new object types, and the number of classes $N_c$ in (6) can change over time to reflect the number of currently identified object classes.

Fig. 4: Block diagram of the proposed joint mobility prediction and profiling (JMPP) method.

IV Joint Mobility Prediction and Profiling

In this section, we elaborate on the details of the proposed method, which includes the following three steps.

Step 1- Denoising: An accurate estimate of the state vector can be obtained by solving the state transition equations in (1) using Kalman filtering, which includes two sets of equations: time updates and measurement updates. The time-update equations predict the next state vector (the location of the flying object in our modeling) based on the previous state. The measurement-update equations refine the obtained prediction based on the noisy observation vector, which represents the location information provided by an arbitrary tracking system. In the case of intermittent observations, if no measurement is available, the prediction is performed solely based on the time-update equations. This system can be solved for an unknown input via optimal state estimation of singular systems [52], as shown in Fig. 5. We use this approach to predict the future locations of a flying object while mitigating the system and measurement noise terms. Another commonly used approach is predicting the next location by linearizing the motion trajectory, which is inefficient for highly non-linear motions.
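As a rough illustration of this step, the following sketch runs a minimal Kalman filter on a 1D constant-velocity model and simply skips the measurement update whenever no observation arrives. The matrices, noise levels, and the 70% availability rate are illustrative, not the paper's values.

```python
import numpy as np

# Minimal Kalman filter with intermittent observations (1D constant-velocity
# model for brevity; the paper's system is the higher-dimensional analogue).
dt, q, r = 0.1, 1e-3, 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = q * np.eye(2), np.array([[r]])

x, P = np.zeros(2), np.eye(2)          # estimate and its covariance
true_x = np.array([0.0, 1.0])          # true state: position 0, velocity 1
rng = np.random.default_rng(2)
for k in range(200):
    true_x = A @ true_x
    # time update (always runs)
    x, P = A @ x, A @ P @ A.T + Q
    if rng.random() < 0.7:             # measurement available 70% of the time
        y = C @ true_x + rng.normal(0.0, np.sqrt(r))
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + (K @ (y - C @ x)).ravel()
        P = (np.eye(2) - K @ C) @ P
print(abs(x[1] - true_x[1]) < 0.2)     # velocity estimate close to the truth
```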

Fig. 5: Kalman filtering with unknown input. In the absence of reliable measurement readings, prediction is performed solely based on time update equations and measurement update equations are skipped. Equations are from [53].
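Between the denoising and the parameter-extraction steps, the estimated velocity states must be converted into speed, heading, and angular velocity as defined in Section III. A minimal sketch of this conversion on a synthetic turning trajectory (variable names are ours, not the paper's):

```python
import numpy as np

# Convert planar velocity samples into speed s and heading theta, then
# recover the angular velocity omega and path acceleration by differencing.
dt = 0.1
t = np.arange(0, 1, dt)
vx, vy = np.cos(t), np.sin(t)          # a smoothly turning, unit-speed path
s = np.hypot(vx, vy)                   # speed in the xy-plane
theta = np.unwrap(np.arctan2(vy, vx))  # heading (direction of motion)
omega = np.gradient(theta, dt)         # angular velocity
a_s = np.gradient(s, dt)               # linear acceleration along the path
print(round(float(omega.mean()), 2))   # -> 1.0 (constant turn rate)
```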

Step 2- Mobility parameter extraction: The result of the previous stage provides an estimate of the object's motion trajectory. Estimates of the instantaneous direct and angular accelerations can be readily obtained using the kinematics equations $\hat{a}_s = \dot{\hat{s}}$ and $\hat{a}_\omega = \dot{\hat{\omega}}$, where $\hat{s}$ and $\hat{\omega}$ are obtained from the estimated state vector using the velocity decomposition of Section III. The acceleration parameters are modeled as a sequence of independent RVs with the distributions defined in Section III. If we model the estimation errors of $\hat{a}_s$ and $\hat{a}_\omega$ with zero-mean Gaussian distributions of variances $\sigma_{e,s}^2$ and $\sigma_{e,\omega}^2$, then $\hat{a}_s$ and $\hat{a}_\omega$ for object $i$ follow Gaussian mixture models (GMMs) of the form

$$\hat{a} \sim (1-p)\,\mathcal{N}\!\left(0, \sigma_e^2\right) + p\,\mathcal{N}\!\left(\mu, \sigma^2 + \sigma_e^2\right), \qquad (7)$$

where the model parameters depend on the object class (represented by the vector of hyper-parameters $\phi_c$). An optimal method to tune the parameters of a GMM based on its observations is the expectation maximization (EM) algorithm, which iterates between calculating the expected value of the likelihood function and maximizing the likelihood by updating the membership probabilities. Here, we use EM to find the point estimates of $\theta$, denoted by $\hat{\theta}$, based on the observations $\hat{a}_s$ and $\hat{a}_\omega$, i.e.

$$\hat{\theta} = \arg\max_{\theta}\; p(\hat{a}_s, \hat{a}_\omega \mid \theta). \qquad (8)$$
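To make the EM fit concrete, the following minimal sketch fits a two-component mixture (a zero-mean "silent" component plus a "pulse" component) to synthetic accelerations. The initialization, iteration count, and data parameters are our own illustrative choices.

```python
import numpy as np

# Bare-bones EM for a two-component 1D Gaussian mixture: silent component
# N(0, s0^2) vs. pulse component N(mu, s1^2), with weight p.
rng = np.random.default_rng(3)
n, p_true, mu_true = 2000, 0.1, 4.0
z = rng.random(n) < p_true
a = np.where(z, rng.normal(mu_true, 1.0, n), rng.normal(0.0, 0.3, n))

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p, mu, s0, s1 = 0.5, 1.0, 1.0, 1.0
for _ in range(50):
    # E-step: responsibility of the pulse component for each sample
    r1 = p * gauss(a, mu, s1)
    r0 = (1 - p) * gauss(a, 0.0, s0)
    w = r1 / (r1 + r0)
    # M-step: update mixture weight, pulse mean/std, and silent std
    p = w.mean()
    mu = (w * a).sum() / w.sum()
    s1 = np.sqrt((w * (a - mu) ** 2).sum() / w.sum())
    s0 = np.sqrt(((1 - w) * a ** 2).sum() / (1 - w).sum())
print(round(float(p), 2), round(float(mu), 1))  # close to the true (0.1, 4.0)
```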
Step 3- Object profiling: Note that the prior distribution of the model parameters $\theta$ before observing the object's motion trajectory is $p(\theta \mid \phi_c)$. The output of the EM algorithm for each segment of the motion trajectory is $\hat{\theta}$, which can be considered an observation of the actual $\theta$. Therefore, we can use Bayes' rule to find the posterior probability of the object's class:

$$P(c \mid \hat{\theta}) = \frac{p(\hat{\theta} \mid \phi_c)\, P(c)}{\sum_{c'=1}^{N_c} p(\hat{\theta} \mid \phi_{c'})\, P(c')}. \qquad (9)$$

Here, we assign an equal prior probability to each class ($P(c) = 1/N_c$ for all $c$). Finally, the most likely class is determined using (6).
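Given the fitted parameters, the Bayesian class assignment can be sketched as below, assuming Gaussian class-conditional likelihoods and equal priors; the class parameter values are illustrative, not the paper's.

```python
import numpy as np

# Posterior class probabilities from a noisy motion-profile estimate,
# with independent Gaussian class-conditional likelihoods and equal priors.
def class_posterior(theta_hat, class_means, class_vars):
    log_lik = -0.5 * np.sum(
        (theta_hat - class_means) ** 2 / class_vars + np.log(class_vars),
        axis=1)
    log_post = log_lik - log_lik.max()   # equal priors cancel out
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative per-class profiles: (pulse rate, pulse amplitude mean)
means = np.array([[0.05, 2.0], [0.3, 0.5], [0.8, 0.1]])
variances = np.full((3, 2), 0.01)
post = class_posterior(np.array([0.28, 0.55]), means, variances)
print(int(post.argmax()))  # -> 1 (the observation is closest to class 1)
```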

Step 4- Online class recognition module: In the algorithm described above, we considered a fixed number of motion classes with equal selection probabilities ($P(c) = 1/N_c$). This may limit the applicability of the proposed method in practice, due to the need for prior knowledge about the motion profile of each class, represented by $\phi_c$. Further, the system fails to address objects of new types with undefined motion profiles. In order to address this issue, we develop an online self-tuning module, which works based on segment-wise processing of motion trajectories. Segment $k$ comprises the locations of each target observed over the corresponding time interval of $T$ time points. At each segment $k$, we perform steps 1 and 2 to estimate the motion profile of each target node $i$, denoted $\tilde{\theta}_i^{(k)}$. Then, we obtain the current motion profile of object $i$ using the recursive average

$$\hat{\theta}_i^{(k)} = \frac{(k-1)\,\hat{\theta}_i^{(k-1)} + \tilde{\theta}_i^{(k)}}{k}, \qquad (10)$$

with $\hat{\theta}_i^{(1)} = \tilde{\theta}_i^{(1)}$. With this online method, we use the previous estimate of the motion profiles and only process the last segment of the received trajectory to obtain $\hat{\theta}_i^{(k)}$, which is more efficient than processing the entire history of the trajectory. Then, we proceed with step 3 and find the most likely class of each target based on $\hat{\theta}_i^{(k)}$ and the most recent estimate of the hyper-parameters $\phi_c$, for $c = 1, \dots, N_c$. The main distinction here is that the number of classes and their representative hyper-parameters are no longer fixed; rather, they are learned from the observed trajectories, as depicted in Fig. 6.
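The segment-wise update can be sketched as a running mean over per-segment estimates. Note that this particular recursion is our illustrative reading of the update described above, not necessarily the paper's exact rule.

```python
import numpy as np

# Fold each new segment's profile estimate into the running average,
# instead of re-processing the whole trajectory history.
def update_profile(prev, new, k):
    """Running mean over k segments; prev is the mean of segments 1..k-1."""
    return prev + (new - prev) / k

profile = np.zeros(2)  # e.g. (pulse rate, pulse amplitude mean)
segments = [np.array([0.1, 2.0]), np.array([0.3, 1.8]), np.array([0.2, 2.2])]
for k, seg in enumerate(segments, start=1):
    profile = update_profile(profile, seg, k)
print(np.round(profile, 3))  # equals the mean of the three segment estimates
```

Only the latest segment estimate and the running value are stored per object, so the per-segment cost is constant regardless of trajectory length.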

Fig. 6: Online class recognition module: the motion profiles of all objects are updated based on their trajectories observed in the current segment as well as their previous updates.

Once we complete steps 1-3 for all objects during interval $k$, we cluster the collected motion profiles, represented by the vectors $\hat{\theta}_i^{(k)}$. We use parametric clustering and try numbers of clusters $K$ within a range around $N_c$, the number of previously recognized clusters, which enables the genesis of new clusters as well as the death of spurious ones. Therefore, the number of valid motion classes can change over time if reasonable evidence is provided by the accumulated motion trajectories. In order to identify the optimal number of clusters, we consider the within-cluster variance penalized by the number of clusters, $V(K) + \lambda K$. Here, $K$ is the number of clusters used by the clustering algorithm, and $V(K)$ is the resulting averaged within-cluster variance, defined as

$$V(K) = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{|S_k|} \sum_{\hat{\theta}_i \in S_k} \left\| \hat{\theta}_i - \bar{\theta}_k \right\|^2, \qquad (11)$$

where $S_k$ is the set of motion profiles assigned to cluster $k$, $\bar{\theta}_k$ is its centroid, and $|S_k|$ is the number of elements in set $S_k$. In the simulation results in Section V, we use K-means clustering and set the candidate range and the penalty weight $\lambda$ using cross-validation. The number of clusters is then obtained as

$$K^{*} = \arg\min_{K}\; V(K) + \lambda K. \qquad (12)$$

Each cluster represents a mobility class $c$. Therefore, the collected motion profiles in each cluster are used to refine the corresponding class's hyper-parameters by applying maximum likelihood estimation (MLE) to (5).
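The penalized cluster-number selection can be sketched as follows, using a minimal Lloyd's-iteration k-means with deterministic farthest-point initialization; the penalty weight and the synthetic profile data are illustrative choices of ours.

```python
import numpy as np

# Choose the number of motion classes by minimising V(K) + lam * K over
# candidate K, where V(K) is the within-cluster variance after k-means.
def kmeans(X, K, iters=50):
    cent = [X[0]]
    for _ in range(K - 1):  # farthest-point initialisation
        d = np.min([((X - c) ** 2).sum(-1) for c in cent], axis=0)
        cent.append(X[int(d.argmax())])
    cent = np.array(cent)
    for _ in range(iters):  # Lloyd's iterations
        lab = np.argmin(((X[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(lab == k):
                cent[k] = X[lab == k].mean(0)
    return lab, cent

def within_variance(X, lab, cent):
    return float(np.mean(((X - cent[lab]) ** 2).sum(-1)))

rng = np.random.default_rng(4)
# Synthetic motion profiles drawn from three well-separated classes.
X = np.vstack([rng.normal(m, 0.2, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
lam = 0.5  # illustrative penalty weight
scores = {K: within_variance(X, *kmeans(X, K)) + lam * K for K in range(1, 6)}
K_star = min(scores, key=scores.get)
print(K_star)  # -> 3
```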

V Simulation Results

In this section, simulation results are provided to assess the performance of the proposed method in comparison with the state of the art. Here, we assume that each drone is equipped with a tracking system and hence can monitor and estimate the locations of surrounding objects. For instance, Lidar systems, ultrasound systems, or visual cameras can be used to accurately measure the surrounding objects [54]. However, most off-the-shelf commercial drones (e.g. the DJI Phantom and Matrice series) do not include pricey tracking systems. For such scenarios, ADS-B technology [55] can be used, where drones locate themselves using embedded GPS positioning modules and periodically propagate their positions to other nodes, to be used for trajectory prediction. For drones in an adversarial network, a ground-based tracking system (e.g. a conventional radar) can be used to locate the flying objects and perform the object classification task.

We use the following simulation parameters unless otherwise specified. We define three clusters with the hyper-parameters shown in Table I. We use the state transition and measurement equations in (1) to generate random motion trajectories, as well as their linear measurements, for 300 objects (100 per class). The system and measurement noise variances are held fixed across experiments.

class Speed hyper-parameters Direction hyper-parameters
1 (2,50) (10,10) 1 (2,50) (10,10) 1
2 (4,4) (2,0.5) 2 (4,4) (2,0.5) 2
3 (50,2) (2,0.1) 10 (50,2) (2,0.1) 10
TABLE I: Motion profiles for the three classes.
Fig. 7: Estimating motion trajectories of two objects of different types in 3D space with unknown input (driving force) vector.

The results of the first stage, using Kalman filtering with unknown input, are presented in Fig. 7 for two objects belonging to different mobility classes. The results show that step 1 of the proposed method provides relatively accurate location estimates for further analysis, with a small mean squared error ratio for both classes.

Fig. 8: The impact of the measurement update rate on the accuracy of motion trajectory prediction.

Fig. 8 investigates the impact of the measurement update rate on mobility prediction for the case of intermittent observations. This parameter determines the probability of successful observation attempts and takes values between 0 and 1. This case is important because it shows the utility of the proposed method in predicting future node positions when measurement readings are not available. The prediction accuracy declines significantly if the measurement update rate falls below an acceptable level.

The second utility of the proposed method is object profiling based on motion trajectories. The motion profiling accuracy is presented in Table II for randomly generated motion trajectories. The results are promising and exhibit an average classification success rate (CSR) of 91%. These results verify the success of the three sequential steps in jointly predicting the motion trajectories and profiling the objects into the correct mobility classes. As shown in Fig. 9, this accuracy depends on the quality of the trajectory estimation, which in turn is influenced by the tracking system's noise level.

There are very few prior works that consider profiling object classes based on their online motion trajectories. The closest work we found is [56], which proposes a method to classify moving point objects (MPOs) based on their motion patterns. This method, which we call MPO, is based on extracting straightness and velocity indexes from the motion trajectories. Further, the authors classified objects such as cars, pedestrians, bicycles, and motorcycles based on statistical features (e.g. mean, median, min, max, skewness, and standard deviation) of the mobility indexes. Here, we compare our method against this method. For the sake of completeness, we also applied common classification methods such as fuzzy c-means (FCM) and K-means directly to the datapoints of the estimated driving forces for each trajectory. Finally, inspired by other time-series analyses (e.g. ECG signal processing), we trained a Gaussian process (GP) on the observed trajectories to exploit the fundamental properties of each trajectory and then classified the objects based on the obtained GP parameters. The results of this comparison are provided in Table III for 300 objects whose trajectories are simulated using three different classes. The comparison shows that our method (JMPP) outperforms all the other methods by a significant margin, since the proposed method tries to directly recover the motion profiles in a reverse-engineering fashion. The proposed method achieves a CSR of 91%, compared to 83% obtained using GP.

Actual Class C1 C2 C3
Predicted Class
C1 90 6 4
C2 5 91 4
C3 6 2 92
TABLE II: Motion profiling accuracy of the proposed method in 3D space.
Class # of Traj. Number of Correctly Classified Objects
C1 100 36 41 79 81 90
C2 100 44 46 84 85 91
C3 100 35 37 80 84 92
Total 300 115 124 243 250 273
TABLE III: Classification success rate of different object profiling methods based on 3D motion trajectories.
Fig. 9: Classification success rate for object profiling based on 2D motion patterns versus the signal-to-measurement-noise level.

Finally, we investigate the performance of the online class recognition method. This module works based on clustering motion profile vectors with a penalized number of clusters. Two key features of this method are the online learning of class-specific hyper-parameters and the recognition of new objects as they enter the system. These two properties are illustrated in Figs. 10 and 11, respectively. Fig. 10 investigates the accuracy of the class-specific model hyper-parameters in comparison with the actual ones used to generate the motion trajectories. The accuracy is represented in terms of the mean squared error (MSE) ratio: if the vectors $\phi$ and $\hat{\phi}$ represent the actual and estimated hyper-parameters for all classes, the MSE ratio is calculated as $\|\phi - \hat{\phi}\|^2 / \|\phi\|^2$, where $\|\cdot\|$ is the $\ell_2$ norm. The results show that the MSE remains small after receiving only a few trajectory segments. However, the performance also depends on the length of each segment: longer trajectory segments provide more accurate estimates of the hyper-parameters. For instance, a sufficiently long segment ensures that the MSE remains low after receiving as few as 10 segments. Therefore, the system does not need prior knowledge about the motion properties of different object classes, which makes it more desirable for practical situations.

Fig. 10: The performance of the online self tuning module in estimating class-specific hyper-parameters from the observed motion trajectories in terms of mean squared errors (MSE).

Fig. 11 illustrates the capability of the system to recognize and profile new objects with unseen motion properties. For this part, we start with the initial clusters and generate objects for each class. The system processes the observed motion trajectories and correctly recognizes the clusters. Then, we start adding objects of a new type, with an unseen motion profile, to the system. After collecting a few new objects, the system recognizes the existence of the new object class and increases the number of clusters by one. We repeat this experiment 100 times and define the probability of correctly recognizing new classes after receiving a given number of new objects as the ratio of the number of experiments that report the correct number of clusters (after observing the new objects) to the total number of experiments. The results are shown in Fig. 11: the system recognizes the arrival of the new object type with increasing probability as more objects of this type are received, and the accuracy approaches one after a sufficient number of new objects is observed. Therefore, this module enables the system to adaptively generate new object classes over time, in addition to tuning the hyper-parameters of existing classes.

Fig. 11: The performance of the online self-tuning module: the probability of successfully detecting the genesis of new classes versus the number of observed objects with new motion profiles in 2D space, averaged over 100 runs.

VI Conclusions

In this work, a novel framework is proposed for joint mobility prediction and profiling of objects through analyzing their motion trajectories. The idea is to process the motion trajectories in terms of state transition equations to predict the objects' future locations and extract the driving forces. We also develop a natural hierarchical generative model for the exerted direct and rotational forces. This approach enables us to exploit the motion properties of mobile objects and classify them based on these properties. Compared to other methods, our unified framework neither requires a large training dataset (as opposed to data-driven methods) nor is tailored to a specific object class (as opposed to model-based methods). The proposed method yields a success rate of about 91% in profiling mobile objects at a reasonable measurement noise level, which is a notable improvement over the state-of-the-art methods.

Furthermore, a novel online self-tuning algorithm is proposed that refines the general motion properties of each class (represented by the class-specific hyper-parameters) by processing the trajectories accumulated over time. This approach also adaptively generates new motion classes when objects with unseen motion profiles are observed. Therefore, no prior information about the motion dynamics of different object types is required, which makes the system desirable for practical applications. If integrated with communication protocols (e.g., routing algorithms in the network layer), the proposed algorithm can facilitate information flow in UAV networks and the IoT with flying objects by predicting the future network topology.
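The online self-tuning behavior described above can be sketched as distance-threshold online clustering: each new motion feature vector either updates the hyper-parameters (here, a running mean) of the nearest existing class, or spawns a new class when it is far from all of them. The feature space, distance metric, and `new_class_radius` threshold are illustrative assumptions, not the paper's hierarchical model:

```python
import numpy as np

class OnlineProfiler:
    """Assigns each observed feature vector to the nearest existing class and
    updates that class's running mean online, or spawns a new class when no
    existing class is within `new_class_radius`."""

    def __init__(self, new_class_radius=3.0):
        self.radius = new_class_radius
        self.means, self.counts = [], []

    def observe(self, x):
        """Process one feature vector; return the index of its class."""
        x = np.asarray(x, dtype=float)
        if self.means:
            d = [np.linalg.norm(x - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # incremental update of the class hyper-parameter (running mean)
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]
                return k
        # unseen motion profile: create a new class
        self.means.append(x.copy())
        self.counts.append(1)
        return len(self.means) - 1
```

Feeding the profiler features drawn from an unseen motion profile makes it allocate a new class after the first far-away sample, mirroring the adaptive class-genesis behavior evaluated in Fig. 11.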

VII Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. 1755984. The authors also acknowledge the U.S. Government’s support in the publication of this paper. This material is based upon work partially funded by AFRL, under AFRL Grant No. FA8075-14-D-0014. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the US government or AFRL.


  • [1] C. A. Thiels, J. M. Aho, S. P. Zietlow, and D. H. Jenkins, “Use of unmanned aerial vehicles for medical product transport,” Air medical journal, vol. 34, no. 2, pp. 104–108, 2015.
  • [2] K. Kanistras, G. Martins, M. J. Rutherford, and K. P. Valavanis, “Survey of unmanned aerial vehicles (UAVs) for traffic monitoring,” in Handbook of unmanned aerial vehicles.   Springer, 2015, pp. 2643–2666.
  • [3] J. Everaerts et al., “The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, no. 2008, pp. 1187–1192, 2008.
  • [4] J. Xu, G. Solmaz, R. Rahmatizadeh, D. Turgut, and L. Boloni, “Internet of things applications: animal monitoring with unmanned aerial vehicle,” arXiv preprint arXiv:1610.05287, 2016.
  • [5] P. K. Freeman and R. S. Freeland, “Agricultural UAVs in the us: potential, policy, and hype,” Remote Sensing Applications: Society and Environment, vol. 2, pp. 35–43, 2015.
  • [6] T. Wall and T. Monahan, “Surveillance and violence from afar: The politics of drones and liminal security-spaces,” Theoretical Criminology, vol. 15, no. 3, pp. 239–254, 2011.
  • [7] C. Joo and J. Choi, “Low-delay broadband satellite communications with high-altitude unmanned aerial vehicles,” Journal of Communications and Networks, vol. 20, no. 1, pp. 102–108, 2018.
  • [8] A. R. Girard, A. S. Howell, and J. K. Hedrick, “Border patrol and surveillance missions using multiple unmanned air vehicles,” in Decision and Control, 2004. CDC. 43rd IEEE Conference on, vol. 1.   IEEE, 2004, pp. 620–625.
  • [9] M. Huerta, “Drones: A story of revolution and evolution,” Jan 2017.
  • [10] L. Gupta, R. Jain, and G. Vaszkun, “Survey of important issues in UAV communication networks,” IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1123–1152, 2016.
  • [11] M. Quaritsch, K. Kruggl, D. Wischounig-Strucl, S. Bhattacharya, M. Shah, and B. Rinner, “Networked uavs as aerial sensor network for disaster management applications,” e & i Elektrotechnik und Informationstechnik, vol. 127, no. 3, pp. 56–63, 2010.
  • [12] S. Rosati, K. Krużelecki, G. Heitz, D. Floreano, and B. Rimoldi, “Dynamic routing for flying ad hoc networks,” IEEE Transactions on Vehicular Technology, vol. 65, no. 3, pp. 1690–1700, 2016.
  • [13] Z. Kaleem and M. H. Rehmani, “Amateur drone monitoring: State-of-the-art architectures, key enabling technologies, and future research directions,” IEEE Wireless Communications, vol. 25, no. 2, pp. 150–159, 2018.
  • [14] F. Afghah, M. Zaeri-Amirani, A. Razi, J. Chakareski, and E. S. Bentley, “A coalition formation approach to coordinated task allocation in heterogeneous UAV networks,” CoRR, vol. abs/1711.00214, 2017. [Online]. Available: http://arxiv.org/abs/1711.00214
  • [15] A. Razi, F. Afghah, and J. Chakareski, “Optimal measurement policy for predicting UAV network topology,” in 51th Asilomar Conference on Signals, Systems and Computers (Asilomar’17), 2017.
  • [16] A. Anand, H. Aggarwal, and R. Rani, “Partially distributed dynamic model for secure and reliable routing in mobile ad hoc networks,” Journal of Communications and Networks, vol. 18, no. 6, pp. 938–947, Dec 2016.
  • [17] V. Sharma, K. Kar, R. La, and L. Tassiulas, “Dynamic network provisioning for time-varying traffic,” Journal of Communications and Networks, vol. 9, no. 4, pp. 408–418, Dec 2007.
  • [18] A. Urra, E. Calle, J. L. Marzo, and P. Vila, “An enhanced dynamic multilayer routing for networks with protection requirements,” Journal of Communications and Networks, vol. 9, no. 4, pp. 377–382, Dec 2007.
  • [19] Y. Zhang, X. Zhang, W. Fu, Z. Wang, and H. Liu, “Hdre: Coverage hole detection with residual energy in wireless sensor networks,” Journal of Communications and Networks, vol. 16, no. 5, pp. 493–501, 2014.
  • [20] J. H. Sarker and R. Jantti, “Connectivity modeling of wireless multihop networks with correlated and independent factors,” in The 6th International Conference on Advanced Communication Technology, 2004., vol. 1, Feb 2004, pp. 474–479.
  • [21] M. Khaledi, A. Rovira-Sugranes, F. Afghah, and A. Razi, “On greedy routing in dynamic UAV networks,” arXiv preprint arXiv:1806.04587, 2018.
  • [22] A. Sugranes and A. Razi, “Predictive routing for dynamic UAV networks,” in IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE), Oct 2017.
  • [23] M. Heß, F. Büther, and K. P. Schäfers, “Data-driven methods for the determination of anterior-posterior motion in pet,” IEEE Transactions on Medical Imaging, vol. 36, no. 2, pp. 422–432, Feb 2017.
  • [24] “Vehicular mobility trace of the city of cologne, germany,” 2016. [Online]. Available: http://kolntrace.project.citi-lab.fr/
  • [25] R. J. Schalkoff and X. Wang, “A model-based viewpoint determination method for multiple object 3-d motion estimation,” in Southeastcon ’89. Proceedings. Energy and Information Technologies in the Southeast., IEEE, Apr 1989, pp. 1074–1079 vol.3.
  • [26] P. P. Choi and M. Hebert, “Learning and predicting moving object trajectory: a piecewise trajectory segment approach,” Robotics Institute, p. 337, 2006.
  • [27] M. Bennewitz, W. Burgard, G. Cielniak, and S. Thrun, “Learning motion patterns of people for compliant robot motion,” The International Journal of Robotics Research, vol. 24, no. 1, pp. 31–48, 2005.
  • [28] M. C. Gonzalez, C. A. Hidalgo, and A.-L. Barabasi, “Understanding individual human mobility patterns,” Nature, vol. 453, no. 7196, pp. 779–782, 2008.
  • [29] G. Aoude, J. Joseph, N. Roy, and J. How, “Mobile agent trajectory prediction using bayesian nonparametric reachability trees,” in Infotech@ Aerospace 2011, 2011, p. 1512.
  • [30] G. Lee, R. Mallipeddi, and M. Lee, “Identification of moving vehicle trajectory using manifold learning,” in International Conference on Neural Information Processing.   Springer, 2012, pp. 188–195.
  • [31] E. Malmi, “Human mobility prediction: A probabilistic transfer learning approach,” Aalto University School of Science, Feb 2013.
  • [32] A. Nourhani, P. Lammert, A. Borhan, and V. Crespi, “Kinematic matrix theory and universalities in self-propellers and active swimmers,” in Phys Rev E Stat Nonlin Soft Matter Phys, Jun 2014.
  • [33] M. Tarokh and G. J. McDermott, “Kinematics modeling and analyses of articulated rovers,” IEEE Transactions on Robotics, vol. 21, no. 4, pp. 539–553, Aug 2005.
  • [34] X. Jia, Z. L. Wu, and H. Guan, “The target vehicle movement state estimation method with radar based on kalman filtering algorithm,” in Applied Mechanics and Materials, vol. 347.   Trans Tech Publ, 2013, pp. 638–642.
  • [35] R. Sharma and T. Guha, “A trajectory clustering approach to crowd flow segmentation in videos,” in Image Processing (ICIP), 2016 IEEE International Conference on.   IEEE, 2016, pp. 1200–1204.
  • [36] M. Gariel, A. N. Srivastava, and E. Feron, “Trajectory clustering and an application to airspace monitoring,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1511–1524, 2011.
  • [37] G. Yuan, S. Xia, L. Zhang, Y. Zhou, and C. Ji, “An efficient trajectory-clustering algorithm based on an index tree,” Transactions of the Institute of Measurement and Control, vol. 34, no. 7, pp. 850–861, 2012.
  • [38] Y. Mao, H. Zhong, H. Qi, P. Ping, and X. Li, “An adaptive trajectory clustering method based on grid and density in mobile pattern analysis,” Sensors, vol. 17, no. 9, p. 2013, 2017.
  • [39] H. Xu, Y. Zhou, W. Lin, and H. Zha, “Unsupervised trajectory clustering via adaptive multi-kernel-based shrinkage,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 4328–4336.
  • [40] G. Yuan, P. Sun, J. Zhao, D. Li, and C. Wang, “A review of moving object trajectory clustering algorithms,” Artificial Intelligence Review, vol. 47, no. 1, pp. 123–144, 2017.
  • [41] A. Mcfadyen, M. O’Flynn, T. Martin, and D. Campbell, “Aircraft trajectory clustering techniques using circular statistics,” in Aerospace Conference, 2016 IEEE.   IEEE, 2016, pp. 1–10.
  • [42] T. J. Pires and M. A. Figueiredo, “Shape-based trajectory clustering.” in ICPRAM, 2017, pp. 71–81.
  • [43] A. Valehi and A. Razi, “Maximizing energy efficiency of cognitive wireless sensor networks with constrained age of information,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 643–654, 2017.
  • [44] ——, “An online learning method to maximize energy efficiency of cognitive sensor networks,” IEEE Communications Letters, vol. 22, no. 5, pp. 1050–1053, 2018.
  • [45] C. Jiang, H. Zhang, Y. Ren, Z. Han, K.-C. Chen, and L. Hanzo, “Machine learning paradigms for next-generation wireless networks,” IEEE Wireless Communications, vol. 24, no. 2, pp. 98–105, 2017.
  • [46] M. A. Alsheikh, S. Lin, D. Niyato, and H.-P. Tan, “Machine learning in wireless sensor networks: Algorithms, strategies, and applications,” IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 1996–2018, 2014.
  • [47] J. Xie, Y. Wan, K. Namuduri, S. Fu, and J. Kim, “A comprehensive modeling framework for airborne mobility,” in AIAA Infotech@ Aerospace (I@ A) Conference, 2013, p. 5053.
  • [48] A. Fotouhi, M. Ding, and M. Hassan, “Dronecells: Improving 5g spectral efficiency using drone-mounted flying base stations,” arXiv preprint arXiv:1707.02041, 2017.
  • [49] W. Wang, X. Guan, B. Wang, and Y. Wang, “A novel mobility model based on semi-random circular movement in mobile ad hoc networks,” Information Sciences, vol. 180, no. 3, pp. 399–413, 2010.
  • [50] O. Bouachir, A. Abrassart, F. Garcia, and N. Larrieu, “A mobility model for uav ad hoc network,” in Unmanned Aircraft Systems (ICUAS), 2014 International Conference on.   IEEE, 2014, pp. 383–388.
  • [51] C. Bishop, “Pattern recognition and machine learning,” Springer, Jan 2006, p. 117.
  • [52] M. Darouach, M. Zasadzinski, A. B. Onana, and S. Nowakowski, “Kalman filtering with unknown inputs via optimal state estimation of singular systems,” International journal of systems science, vol. 26, no. 10, pp. 2015–2028, 1995.
  • [53] C.-S. Hsieh, “On the optimality of two-stage kalman filtering for systems with unknown inputs,” Asian Journal of Control, vol. 12, no. 4, pp. 510–523, 2010.
  • [54] A. Razi, C. Wang, F. Almaraghi, Q. Huang, Y. Zhang, H. Lu, and A. Rovira-Sugranes, “Predictive routing for wireless networks: Robotics-based test and evaluation platform,” in Computing and Communication Workshop and Conference (CCWC), 2018 IEEE 8th Annual.   IEEE, 2018, pp. 993–999.
  • [55] “ADS-B transceivers, receivers and navigation systems for drones.” [Online]. Available: http://www.unmannedstechnology.com/company/uavionix-corporation/
  • [56] S. Dodge, R. Weibel, and E. Forootan, “Revealing the physics of movement: Comparing the similarity of movement characteristics of different types of moving objects,” Computers, Environment and Urban Systems, vol. 33, no. 6, pp. 419–434, 2009.