I Introduction
Context and Main Results
For tracking ground-based maneuvering targets, conventional tracking systems deal with the following switched-mode state space model [1, 2, 3]
(1)  x_{k+1} = f(x_k, a_k, w_k),   z_k = h(x_k, a_k, v_k)
Here k denotes discrete time, x_k denotes the kinematic target state (such as position and velocity), and z_k denotes the sensor detections (observations). The random processes w_k and v_k denote the state and observation noise, respectively. The mode sequence a_{1:k} = (a_1, ..., a_k) summarizes a sequence of maneuvers, or modes, that causes the ground-based target to move in a two-dimensional spatial trajectory. Conventional tracking of maneuvering targets assumes that the mode sequence a_{1:k} is a finite-state Markov chain, and aims to compute the posterior distribution of the state and mode given the observations, so as to compute conditional mean estimates of x_k and a_k. This is typically done by a state-of-the-art tracking algorithm involving particle filters, Interacting Multiple Models (IMM), and variable structure IMM (VSIMM) [1, 4, 5]. (In VSIMM, the kinematic model of the moving objects depends on the road direction and the terrain type.) These Bayesian recursions exploit the Markovian assumption on the mode sequence to estimate it.

Motivated by intent-inference applications, this paper deals with a higher level of abstraction which we call Syntactic Tracking. Suppose we are interested in whether a target is circling a restricted area (perimeter surveillance), or alternatively whether a vessel is loitering near the coast (for a possible smuggling attempt). In such cases, the human operator is primarily interested in determining specific patterns in target trajectories from estimated tracks. These patterns can then be used to infer the possible intent of the target [3]. Examples of such specific patterns include loops, arcs, circles, rectangles, and combinations of these, and they exhibit complex spatial dependencies. The key modeling contribution of this paper is to construct a syntactic model that characterizes various spatial patterns with a linguistic construct called a stochastic context free grammar (SCFG). Thus the main goal is to devise SCFG models and associated polynomial-time Bayesian syntactic parsing algorithms to extract spatial patterns from the mode sequence estimated by the conventional target tracker. In other words, this paper develops models and automated syntactic filtering algorithms to assist the human operator in determining specific target patterns. The algorithms presented in this paper use the track estimates from an existing tracker to perform syntactic filtering. In this sense, they are at a higher layer of abstraction than conventional tracking and are fully compatible with existing trackers; see Fig. 2 for a more detailed schematic.
Indeed, it is not the intent of this paper to redesign conventional target tracking, which is a well-trodden area.
Why Use Stochastic Context Free Grammars (SCFGs)?
In formal language theory, grammars can be classified into four different types depending on the forms of their production rules
[6]. Stochastic regular grammars, or finite state automata, are equivalent to HMMs. SCFGs (which will be defined in Sec. III-A) are a significant generalization of regular grammars. Only stochastic regular grammars and SCFGs have polynomial-complexity estimation algorithms and are therefore of practical use in radar tracking applications. It is well known in formal language theory that SCFGs are more general than HMMs (stochastic finite automata) and can capture long-range dependencies and recursively embedded structures in patterns.

The implementation of the syntactic filtering system with SCFGs has several potential advantages:
(i) User-friendly Models: SCFGs have a compact formal representation in terms of production rules that permits human radar operators to easily codify high-level rules; see [7, 8], where the complex dynamics of a multifunction radar were modeled using SCFGs. In this paper, this allows us (and radar engineers) to model complex spatial patterns of target trajectories, such as whether a target is circling a building or intersecting in trajectory with another target. This then permits the design of high-level Bayesian signal processing algorithms to estimate such trajectories. The ability of the designer to encode knowledge is important because the lack of field data in a defence setting often hinders the application of Bayesian filters, as they require substantial amounts of training data.
(ii) Ability to Model Complex Spatial Trajectories: The recursive embedding structure of the possible target geometric patterns is more naturally modeled by SCFGs. As will be shown later, a Markovian-type model would require dependencies of variable length, and the resulting growing state space is difficult to handle since the maximum-range dependency must be considered.
(iii) Predictive Power: SCFGs are more efficient in modeling hidden branching processes than stochastic regular grammars or hidden Markov models with the same number of parameters. The predictive power of a SCFG, measured in terms of entropy, is greater than that of the stochastic regular grammar [9]. A SCFG is equivalent to a multitype Galton-Watson branching process with a finite number of rewrite rules, and its entropy calculation is discussed in [10].

Main Results
For simplicity, our setting is for targets that move in two-dimensional space, and airborne GMTI (ground moving target indicator) radar is used as the primary sensing platform throughout the paper. However, the syntactic filtering results of this paper can be used with other sensor technologies, such as multiple video/imaging sensors. Because of the vast amount of data generated by GMTI trackers, there is strong motivation to develop automated algorithms that yield a high-level interpretation from the tracks. The main results of the paper are:
1. Combined Tracking and Trajectory Inference: Sec. II sets the stage by describing our entire framework for syntactic filtering using conventional track estimates. We review SCFGs, formulate the elementary modes that lead to trajectories such as arcs and modified rectangles, and describe how syntactic tracking fits into a complete tracking system.
2. SCFG Modulated State Space Model: Sec. III presents a SCFG modulated state space model that permits modeling of complex spatial trajectories. We derive probabilistic production rules that characterize the target motion patterns, and present a detailed structural analysis of the SCFG model. Using formal language techniques and the Pumping Lemma [11], we show that a specific syntactic pattern such as an arc generates a context-free language and cannot be modeled efficiently by Markov models. Moreover, the well-posedness of the syntactic model is studied based on the branching rate of the model, and conditions under which the language distribution is proper are given, i.e., conditions that ensure the distribution of the language generated by the model sums to one.
3. Bayesian Syntactic Filtering: Sec. IV presents the Bayesian syntactic filtering algorithm. The interpretation of the syntactic patterns is represented by parse trees built on top of the target trajectories, which are tracked at the detection level by Bayesian filters such as the particle filter and the IMM/extended Kalman filter [5], and at the mode level by a generalized Earley-Stolcke Bayesian parser [12]. The Earley-Stolcke algorithm is a generalization of the forward-backward algorithm for hidden Markov models (HMMs), and it allows real-time forward parsing. The complexity of the algorithm is O(n^3), where n is the length of the input string.

4. Experimental Validation of Syntactic Filtering: Sec. V gives a detailed experimental analysis of the syntactic filtering algorithm on a real-life GMTI example. The GMTI data was collected using DRDC Ottawa's X-band Wideband Experimental Airborne Radar (XWEAR) [13, 14], and numerical studies of the syntactic filtering algorithms are performed using this data. The experimental results show that the syntactic tracker not only accurately estimates the target's trajectory pattern, but can also be used to improve the accuracy of conventional trackers.
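The Earley-Stolcke parser itself is too long to reproduce here, but the cubic-time character of SCFG parsing can be illustrated with the closely related inside (CYK) algorithm, which computes the probability that a grammar in Chomsky normal form generates a given string. The toy self-embedding grammar below (generating strings of the form a^n b^n, with arbitrarily chosen rule probabilities) is purely illustrative and is not the grammar used in this paper.

```python
from collections import defaultdict

def inside_probability(string, unary, binary, start="S"):
    """Inside (CYK) algorithm for a SCFG in Chomsky normal form.

    unary:  dict nonterminal -> list of (terminal, prob) for rules A -> a
    binary: dict nonterminal -> list of ((B, C), prob) for rules A -> B C
    Returns P(start derives string); runs in O(n^3 * |rules|).
    """
    n = len(string)
    # beta[i][j][A] = P(A derives string[i..j])
    beta = [[defaultdict(float) for _ in range(n)] for _ in range(n)]
    for i, sym in enumerate(string):
        for A, rules in unary.items():
            for term, p in rules:
                if term == sym:
                    beta[i][i][A] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for A, rules in binary.items():
                for (B, C), p in rules:
                    for k in range(i, j):  # split point between B and C
                        beta[i][j][A] += p * beta[i][k][B] * beta[k + 1][j][C]
    return beta[0][n - 1][start]

# Toy self-embedding grammar generating a^n b^n (hypothetical probabilities):
unary = {"A": [("a", 1.0)], "B": [("b", 1.0)]}
binary = {"S": [(("A", "X"), 0.5), (("A", "B"), 0.5)], "X": [(("S", "B"), 1.0)]}
```

For this grammar, inside_probability("ab", unary, binary) returns 0.5 and inside_probability("aabb", unary, binary) returns 0.25, while any string outside the language receives probability 0.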
Literature Review
SCFGs have been widely used in language processing. The complexity of sentence structure and grammatical dependency in natural language made state space models such as linear predictive coding [15] and hidden Markov models [16] inadequate, and the application of stochastic grammars in language modeling has been researched extensively, since their syntax naturally models the language's grammatical structure [17]. In addition to language processing, SCFGs have been a major computational tool in biology for DNA and RNA sequencing [6]. Because of the three-dimensional folding of proteins and nucleic acids, HMMs become insufficient, and SCFGs are essential for capturing the long-range dependencies of spatial folding.
SCFG in Tracking: In conventional tracking, effort has been devoted to enhancing the tracker by incorporating information other than the kinematic states. In [3], attribute tracking is discussed, where target class information such as wing span and jet engine modulation is utilized for data association. In [18], features of the targets' path trajectory, velocity, and radar cross section are used for target and track classification. In contrast to attribute tracking and target track classification, syntactic models not only can deal with static features, but are also particularly suitable for finding patterns in mode sequences with complex multiscale structure and a recursive nature. For example, in plan recognition, the plans of an agent, typically its actions, have to be inferred from observations. [19] approached this problem with Bayesian networks, but due to the complex structure generating the actions, the approach is computationally intensive. In addition, in video surveillance, hierarchical hidden Markov models have been applied to track sequences of human actions [20], and it can be shown that the hierarchical hidden Markov model is a special case of the SCFG [21]. SCFGs can be applied directly to establish high-level inferences from primitives generated from observations. In [22], a SCFG is applied to detect sequences such as dropping a person off or picking a person up in a parking lot. Moreover, in [23], movements of targets such as U-turns are inferred based on measurements collected from a sensor network. In these SCFG-based tracking approaches, the focus is on the high-level inference, and the coupling between the high-level inference and the Bayesian tracking is typically very loose, i.e., the primitives are independently generated from sensor measurements, and the temporal constraints are imposed only at the higher inference level.

GMTI: Conventional single-channel radars deployed to perform ground surveillance are limited in the sense that they are only capable of detecting fast movers and of identifying stationary targets via SAR imaging algorithms. GMTI radar with space-time adaptive processing (STAP) enables the near-real-time detection of ground moving objects over a large area. STAP is a generalization of adaptive array signal processing techniques based on the Wiener filter [24]
, and it incorporates techniques such as eigenvector projection and the least-squares method. In conventional adaptive array signal processing, a Wiener filter is formed for a signal vector whose components are the signals received at multiple apertures from a single pulse. In STAP, on the other hand, the Wiener filter is formed for a received signal vector whose components are some function of the signals received at multiple (moving) apertures over more than one pulse. In other words, STAP provides a two-dimensional adaptive filter in which the apertures and pulses furnish the spatial and temporal samples. It is noted that although STAP-based GMTI is considered here, the techniques developed can be used in conjunction with other detection techniques, such as detection algorithms in the image domain, i.e., synthetic aperture radar (SAR) based GMTI algorithms.
II Overview of GMTI-Based Syntactic Tracking
To motivate the syntactic modelling and syntactic tracking algorithms presented in this paper, in this section we present an overview of our approach to syntactic tracking. Our premise for syntactic tracking is that the geometric pattern of a target's trajectory can be modeled as "words" (mode sequences) spoken according to a SCFG language. So the intent or behaviour of the targets can be determined by SCFG signal processing methods (syntactic pattern recognition techniques). The basic idea of syntactic pattern recognition is that complex patterns can be expressed in terms of simpler patterns. That is, we decompose high-level descriptors of target intent into motion trajectories consisting of a fixed set of primitive geometric patterns, such as a line or an arc, and the primitive geometric patterns into kinematic modes that can be estimated by a target tracker. In this section, some examples of syntactic tracking are discussed, and the system framework that supports syntactic tracking is presented.
II-A Examples
In this paper, we illustrate the syntactic tracking algorithms with examples from GMTI radar. Based on these GMTI detections, the aim is to construct an algorithm for continuous ground surveillance that infers the meta description of the moving units by classifying and labelling their trajectories according to their geometric patterns. Consider the following examples that motivate our approach to syntactic tracking.
1. Syntactic tracking in threat inference: A vehicle approaches a security gate of a building and turns around. It then circles the perimeter of the building in the midst of other moving vehicles. Given GMTI track information for multiple moving vehicles, how can this behaviour be recognized as a threat? Equivalently, how can a threat be associated with the complex spatial trajectory of making a U-turn and then circling a building, and how can that spatial trajectory be identified from geometric patterns?
2. Syntactic tracking in military operations: Fig. 1 illustrates examples of high-level descriptions of motion patterns that are common in military ground surveillance, where each is characterized by a certain combination of geometric patterns [25]; the line-abreast and wedge formations are offensive combat formations with each vehicle moving in a linear trajectory; the pincer, on the other hand, consists of two vehicles maneuvering in mirroring arc trajectories. With this high-level description, inferences can be made to determine whether the ground units are in an offensive, defensive, or reconnaissance operation.
II-B Syntactic Target Tracking System Framework
Let 𝒢 denote the set of geometric patterns of interest. For simplicity, we consider
(2)  𝒢 = {line, arc, m-rectangle}
and these geometric patterns are described later in detail in Sec. III-C. Syntactic filtering is built on top of the multiple model approach to target tracking, and it enables the characterization and identification of geometric patterns from the target trajectory. The mainstream multiple model approach is the interacting multiple model (IMM) algorithm [26], which recursively computes the state information with the following distribution function
(3)  p(x_k, a_{1:k} | z_{1:k}) = p(x_k | a_{1:k}, z_{1:k}) P(a_{1:k} | z_{1:k})
In the IMM formulation, the exponentially growing number of mode sequences is approximated by merging the hypotheses at each time instant into N hypotheses, where N is the number of modes [2]. However, because of the merging, the geometric information that could be used for higher-level intent inference is lost. Instead of merging, syntactic filtering keeps the mode sequence and applies pruning to keep the computation manageable.
More specifically, syntactic filtering is applied only to the second term in (3), the mode probability. In order to estimate its value, only the most likely mode sequence is kept, and, using Bayesian model averaging, the probability is computed approximately as
(4)
where a*_{1:k} is the most likely mode sequence given the SCFG model G (as G models the geometric patterns of the target trajectory), and the second term is the conventional IMM tracker. Given the track estimates, syntactic filtering allows classification of the mode sequence into geometric patterns. The maximum a posteriori (MAP) pattern is then computed as
(5)  g* = argmax_{g ∈ 𝒢} P(a*_{1:k} | G_g) P(g)
where G_g is the SCFG of the geometric pattern g. The computation of the associated probabilities is discussed in Sec. IV, where the SCFG parsing algorithm that performs the syntactic analysis is described.
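As a minimal sketch of the MAP classification in (5): given the likelihood of the most likely mode sequence under each pattern grammar (as returned by a parser) and pattern priors, the classifier maximizes likelihood times prior. The numerical values below are hypothetical, not outputs of the actual system.

```python
import math

def map_pattern(likelihoods, priors):
    """MAP geometric-pattern classification as in (5): return the pattern g
    maximizing P(mode sequence | G_g) * P(g). Working in the log domain
    avoids numerical underflow for long mode sequences.
    """
    def score(g):
        lik = likelihoods[g]
        return -math.inf if lik == 0.0 else math.log(lik) + math.log(priors[g])
    return max(likelihoods, key=score)

# Hypothetical parser likelihoods for one estimated mode sequence:
likelihoods = {"line": 1e-12, "arc": 3e-9, "m-rectangle": 2e-10}
priors = {"line": 0.5, "arc": 0.25, "m-rectangle": 0.25}
```

Here map_pattern(likelihoods, priors) selects "arc", since its likelihood-prior product dominates the others.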
Given this formulation, the system framework of this syntactic filtering system is summarized in Fig. 2. The system framework consists of five components, whose functionalities are as follows. The GMTI STAP processor detects ground moving targets and returns their estimated range, angle, and range rate. The data association optimizer assigns sensor measurements to tracks. The multiple model Bayesian tracker keeps track of the detected targets, and recursively computes the targets' kinematic states and mode probabilities given the sensor measurements. The geometric pattern knowledgebase stores the prior knowledge of the relevant motions in terms of production rules. Built on top of the conventional multiple model Bayesian tracker, the syntactic pattern estimator (stochastic parser) infers geometric patterns from the vehicle's trajectory, and provides feedback to the track estimator in the form of mode probability estimates to enhance tracking accuracy.
Remark: Various techniques already exist to perform data association: the joint probabilistic data association (JPDA) algorithm, which evaluates the measurement-to-track association probabilities [12]; the multiple hypothesis tracking (MHT) algorithm, which enumerates all feasible measurement-to-track hypotheses [3]; and assignment algorithms, which solve data association as a constrained optimization problem. The focus of this paper is on the syntactic interpretation of target trajectories, and because the assignment algorithms are modular in the sense that they can work with different tracking algorithms, for example IMM and VSIMM, they are well suited to the data association problem in this paper. The approach in [12] not only solves the data association problem, but also handles the tracking of move-stop-move targets.
III Syntactic Modeling for Ground Surveillance
Given the overview of our approach presented above, this section presents complete details on the syntactic modelling of target trajectories using SCFGs. Background on SCFGs is provided in Sec. III-A. Sec. III-B discusses the state space models that estimate the mode sequence from GMTI detections, Sec. III-C and III-D present the syntactic modeling of the geometric patterns with SCFGs, and finally, Sec. III-E proves the well-posedness of the SCFG model (in terms of its ability to model specific patterns). This section thus sets the stage for the Bayesian (parsing) algorithms presented in Sec. IV that classify the target trajectory and hence infer the target's intent.
III-A SCFG Background
With the motivation outlined above, we will use SCFGs to model geometric spatial patterns of target trajectories. Since SCFGs are not widely used in radar signal processing, we begin with a short formal description of SCFGs and a summary of syntactic analysis (syntactic parsing). In formal language theory, a grammar is a four-tuple G = (N, T, P, S) [6]. Here N is a finite set of nonterminals, T is a finite set of terminals, and N ∩ T = ∅. P is a finite set of probabilistic production rules, and S is the starting symbol. As will be shown later in the generation of a parse tree, nonterminals are the nodes that may generate other nonterminals and terminals, and terminals are the leaves. Throughout the paper, lower-case letters are used to denote terminals, and upper-case letters nonterminals. Greek letters are used to denote concatenated strings of terminals and nonterminals.
Definition III.1
[Stochastic Regular Grammar] Stochastic regular grammars, denoted G_reg, are equivalent to hidden Markov models (with a termination state) and have production rules of the form A → bC and A → b, with probabilities P(A → bC) and P(A → b) specified, where A, C ∈ N and b ∈ T. N corresponds to the state space of the hidden Markov model, and T corresponds to its observation space. The set of all terminal strings generated by a regular grammar is called a regular language, denoted L(G_reg).
Definition III.2
[Stochastic Context Free Grammar] SCFGs, denoted G_cfg, have production rules P of the form A → α, with probabilities specified, where A ∈ N and α ∈ (N ∪ T)⁺. Here (N ∪ T)⁺ denotes the set of all finite-length strings of symbols in N ∪ T, excluding strings of length 0 (the case where the length-0 string is included is denoted (N ∪ T)*). The set of all terminal strings generated by a SCFG is called a context-free language, denoted L(G_cfg). The grammar is context free because the left-hand side of each production rule contains only a single nonterminal (independent of its context). By contrast, a grammar is context sensitive if it has production rules of the form αAβ → αγβ, where α, β ∈ (N ∪ T)* and γ cannot be empty.
A context-free grammar is self-embedding if there exists a nonterminal A such that A ⇒* αAβ, with α and β nonempty. A self-embedding SCFG cannot be represented by a Markov chain [27].
SCFG Example: Let the set of terminals be the eight direction symbols illustrated in Fig. 3a); they represent the direction of travel of a target. A target trajectory is shown in Fig. 3b), and it can be compactly expressed as a string of terminals. Fig. 3c) demonstrates one likely generation of terminals under the hypothesis that the pattern is an arc, and shows how segments of the string are "explained" by the nonterminals that comprise it. The set of nonterminals in this example is {S, Arc, C}, and the production rules used are
S → Arc,   Arc → a Arc b | a C b,   C → c C | c
The symbol → indicates "replace with", and the symbol | indicates "or". Suppose we have a concatenated string αAβ, where α and β are any combinations of nonterminals and terminals and A is a nonterminal; a one-step derivation using the rule A → γ yields αγβ. The derivation process of the example in Fig. 3 can be expressed as an iterative application of the production rules, as shown below:
S ⇒ Arc ⇒ a Arc b ⇒ a a C b b ⇒ a a c C b b ⇒ a a c c b b
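Such derivations can be simulated. The sketch below draws terminal strings by repeatedly applying arc rules of the form Arc → a Arc b | a C b and C → c C | c; the rule probabilities p and q are hypothetical. Every sample has matched numbers of a's and b's, which is exactly the long-range dependency that a finite-memory model cannot enforce.

```python
import random

def sample_arc(p=0.4, q=0.5, rng=random):
    """Draw one terminal string from the arc SCFG by leftmost derivation.

    Arc -> a Arc b with prob p,  Arc -> a C b with prob 1 - p;
    C   -> c C     with prob q,  C   -> c     with prob 1 - q.
    """
    n = 1
    while rng.random() < p:  # each Arc -> a Arc b expansion adds a matched a/b pair
        n += 1
    m = 1
    while rng.random() < q:  # each C -> c C expansion extends the forward run
        m += 1
    return "a" * n + "c" * m + "b" * n  # string of the form a^n c^m b^n

sample = sample_arc()
```

Every call returns a string in the arc language, with geometrically distributed side length n and forward-run length m.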
III-B State Space Model for Target Trajectory
Let the set of terminals denote the possible directions of travel of the moving target; Fig. 3a illustrates the 8 possible acceleration directions of the target depicted by these terminals. At each time k, a_k denotes the mode of the target. The target dynamics are modelled as
(6)  x_{k+1} = F x_k + Γ w_k(a_k)
Here x_k = [p_{x,k}, v_{x,k}, p_{y,k}, v_{y,k}]ᵀ denotes the ground moving target's position and velocity in Cartesian coordinates, and, assuming a constant velocity model, the transition matrix F and the noise gain Γ are, respectively,

F = [[1, Δ, 0, 0], [0, 1, 0, 0], [0, 0, 1, Δ], [0, 0, 0, 1]],   Γ = [[Δ²/2, 0], [Δ, 0], [0, Δ²/2], [0, Δ]],

where Δ is the sampling interval.
The process noise w_k(a_k) is a white Gaussian process with covariance matrix

Q(a_k) = R(θ_{a_k}) diag(σ_∥², σ_⊥²) R(θ_{a_k})ᵀ,

where (·)ᵀ denotes transpose, R(θ_{a_k}) is the rotation matrix aligned with the direction indicated by a_k, σ_∥² is the uncertainty along that direction, and σ_⊥² is the uncertainty orthogonal to it. Thus the modes a_k modulate the process noise and cause it to switch between different variance values.
Remark: The above model is more suitable for ground targets than acceleration models (e.g., mean-adaptive acceleration models and semi-Markov jump process models), since ground moving vehicles do not exhibit such maneuverability. Standard kinematic models assume equal variance for the process noise in all unit directions, to allow the target to move with equal probability among the unit directions. To model the modes, in this paper the process noise is assumed to have different variances along and perpendicular to the direction of the mode: if the ground target is known to be moving along a particular direction, then the covariance perpendicular to that direction should be small.
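A minimal numerical sketch of the constant-velocity dynamics in (6) with mode-modulated process noise follows. The state layout [px, vx, py, vy], the sampling interval, and the noise standard deviations are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def cv_step(x, mode_angle, T=1.0, sigma_par=1.0, sigma_perp=0.1, rng=None):
    """One step of the constant-velocity model with direction-dependent noise.

    x = [px, vx, py, vy]; the process-noise covariance is sigma_par^2 along
    the mode direction and sigma_perp^2 perpendicular to it, so the mode
    steers the trajectory while the kinematics remain constant-velocity.
    """
    rng = rng or np.random.default_rng()
    F1 = np.array([[1.0, T], [0.0, 1.0]])
    F = np.kron(np.eye(2), F1)            # block-diagonal CV transition (4x4)
    G1 = np.array([[T**2 / 2.0], [T]])
    G = np.kron(np.eye(2), G1)            # noise gain (4x2)
    c, s = np.cos(mode_angle), np.sin(mode_angle)
    R = np.array([[c, -s], [s, c]])       # rotation to the mode direction
    Q = R @ np.diag([sigma_par**2, sigma_perp**2]) @ R.T
    w = rng.multivariate_normal(np.zeros(2), Q)
    return F @ x + G @ w
```

Driving cv_step with a mode sequence sampled from a pattern grammar produces trajectories whose geometry follows the pattern.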
The observation model describing the output of the GMTI STAP measurements is
(7)  z_k = h(x_k) + v_k,   h(x_k) = [ r_k, ṙ_k, θ_k ]ᵀ
Here r_k is the range, ṙ_k is the range rate, θ_k is the azimuth angle, and v_k is zero-mean white Gaussian measurement noise. Its covariance matrix is diagonal, with diagonal elements equal to the variances of the range, range rate, and azimuth angle measurements, denoted σ_r², σ_ṙ², and σ_θ², respectively. To compensate for the radar's platform motion, the target coordinates are expressed relative to the coordinates of the sensor platform at time k, for both the position and the velocity components.
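The noise-free part of the GMTI observation model (7) can be sketched as follows; the four-component state layout and the platform-state convention are assumptions for illustration.

```python
import numpy as np

def gmti_observation(x, platform):
    """Noise-free GMTI observation: range, range rate, azimuth, computed in
    platform-compensated coordinates. x and platform are [px, vx, py, vy]
    for the target and the sensor platform, respectively (assumed layout).
    """
    dx, dy = x[0] - platform[0], x[2] - platform[2]
    dvx, dvy = x[1] - platform[1], x[3] - platform[3]
    r = np.hypot(dx, dy)                # range
    rdot = (dx * dvx + dy * dvy) / r    # range rate (radial velocity)
    az = np.arctan2(dy, dx)             # azimuth angle
    return np.array([r, rdot, az])
```

For a target at relative position (3, 4) with relative velocity (0.6, 0.8), the function returns range 5, range rate 1 (the relative velocity is purely radial), and azimuth arctan(4/3).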
III-C SCFG and Syntactic Trajectory Modeling
With the above model, we now show that if the modes in (6) are generated by a SCFG instead of a regular grammar, the target's trajectory exhibits sophisticated geometric patterns. For clarity, we focus on the following three examples of geometric patterns: line, arc, and m-rectangle (which is defined below). We show below that a line can be generated by a regular grammar, but arcs and m-rectangles can be generated by SCFGs and cannot be generated by regular grammars. Therefore, if we want to infer a target's intent by estimating whether it is moving in a line, arc, or m-rectangle, we need to use SCFGs and the associated syntactic signal processing. To save space, we describe only rectangles and arcs that are aligned with the horizontal and vertical axes; it is a trivial extension to consider rotated versions of these trajectories. Similarly, other trajectory patterns such as extended trapeziums can be considered; see [27], where complex patterns such as Chinese characters are considered.
Language of Lines: Recalling Definition III.2, let L_line denote the language of lines. It includes lines of arbitrary length, for example strings of the form aa…a. Such strings can be generated by a regular grammar (Markov dependency), for example by the rules A → aA | a: each derivation step emits one terminal and moves to a new nonterminal, so the derivation process is similar to that of a hidden Markov model.
Language of Arcs: The language of an arc, denoted L_arc, can be compactly expressed as {a^n c^m b^n : n, m ≥ 1}, where there are the same number of matching upward (a) and downward (b) modes and an arbitrary number of forward modes (c). For each a in the string there must be a matching b, and the corresponding grammar rule is Arc → a Arc b | ε, where ε is the empty string. The arbitrary number of forward modes, on the other hand, can be modeled by the rule C → c C | ε. However, as is known in the parsing literature, the inclusion of ε-rules can cause the parsing algorithm not to halt in all cases, so ε is removed. The final equivalent production rules for an arc are Arc → a Arc b | a C b and C → c C | c.
The rules needed to generate patterns such as arcs have a syntax that is more complex than that of a regular grammar. Using the Pumping Lemma, we will show in Lemma 1 that an HMM cannot model such an arc because of the self-embedding (long-range memory): the model needs to capture the fact that after n steps in direction a, the target eventually moves n steps in direction b. (Recall the definition of self-embedding given in Sec. III-A.)
Language of m-Rectangles: Let L_mrect denote the language of m-rectangles (modified rectangles). An m-rectangle is a four-sided geometric pattern comprising three left turns (or three right turns), each of ninety degrees, with two sides of equal length. Note that m-rectangles are not necessarily closed trajectories (if they were closed, they would coincide with rectangles).
Why do we consider m-rectangles instead of rectangles? There are at least two reasons. First, using the pumping lemma, Lemma 3 shows that the language comprising rectangles is not context free, and hence cannot be generated by a SCFG. Second, from a modeling point of view, in order to recognize suspicious behaviour of a target moving around a building, m-rectangles are more robust, since unlike a rectangle, the start and end points do not have to coincide.
Examples: To model the threat inference example provided at the beginning of Sec. II, where a threat is related to suspicious U-turns and circling of a building, an arc language may be used to approximate U-turns, and an m-rectangle language to approximate circling around the restricted area. The pincer operation, on the other hand, consists of two arcs in close proximity and of opposite direction. As a result, given continuous labelling of the trajectories by the syntactic tracker, a pincer operation can be identified by the following attributes: 1) two arcs of comparable size are identified, and 2) their locations are close together within a certain bound. Moreover, maritime events may also be identified by syntactic tracking. For example, a smuggling event may be modeled as one circling trajectory being approached by a linear trajectory. The labelling of trajectories can identify vessels that are loitering in the open sea, and detect other vessels moving toward them.
III-D Dynamics of Syntactic Motion Patterns as SCFG
We are now ready to formulate the syntactic model for syntactic filtering using a SCFG. The kinematic modes of the multiple mode Bayesian filter are modeled by the terminal set of eight direction symbols illustrated in Fig. 3a).
The geometric patterns described in the previous section are modeled by the nonterminal set
(8) 
The nonterminal S is the starting symbol, and the meanings of the terminals and nonterminals are explained below. Finally, the prior knowledge of the generation of the geometric languages in terms of the terminals and nonterminals is encoded by the production rules
(9) 
Each line nonterminal generates lines in a particular direction, and the arc nonterminals generate arcs pointing upward or downward and to the right (see pincer in Fig. 1). Clockwise and counterclockwise m-rectangle nonterminals are included, together with turn nonterminals that consist of two equal-length segments. The production rules of the turn and the arc are similar in form because both are designed to capture the long-range dependency between two line segments. It should be noted that the grammar is a small subset for illustrative purposes, with no intention of being exhaustive. The grammar is application specific, and it can be regarded as a guiding example for other developments. The analysis of the grammar is provided in Sec. III-E.
Given the grammar, a probability distribution is defined over the production rules. For each nonterminal A ∈ N, the probabilities of its production rules must sum to 1, i.e., Σ_α P(A → α) = 1. In practice, the production rule probabilities can be estimated from data. The probability assignment must also satisfy a requirement that keeps the grammar stable, which is discussed in the analysis presented in the next subsection.
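The normalization constraint above is easy to verify mechanically; a sketch, with a hypothetical probability assignment for the arc rules, is:

```python
def is_proper_assignment(rule_probs, tol=1e-9):
    """Check the SCFG normalization constraint: for every nonterminal,
    the probabilities of its production rules must sum to one.
    rule_probs: dict nonterminal -> list of its rule probabilities.
    """
    return all(abs(sum(ps) - 1.0) <= tol for ps in rule_probs.values())

# Hypothetical assignment for Arc -> a Arc b | a C b and C -> c C | c:
assignment = {"Arc": [0.4, 0.6], "C": [0.5, 0.5]}
```

This check confirms only that the distribution over rules is proper; stability of the derivation process is a separate condition, discussed next.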
III-E Structural Analysis of the SCFG Model
This section provides analysis of the languages presented in Sec. IIIC. Our results are the following:
(i) The relationship between the trajectory languages and the Chomsky hierarchy is formally established. More specifically, using the Pumping Lemma [11], the arc and m-rectangle languages are shown not to be regular, and, based on the structure of their production rules, these languages are generated by CFGs; that is, they are context free but not regular. A regular grammar (HMM) cannot generate exclusively randomly sized m-rectangles or only randomly sized arcs. (Of course, a regular grammar can generate an arc or an m-rectangle with some probability amongst a variety of random trajectories, but that is of little use in trajectory classification.) It will also be shown that the language of rectangles is not context free, which motivates the use of m-rectangles.
(ii) The second result provides conditions under which the SCFG model is well posed, and it boils down to checking the spectral radius of the stochastic mean matrix defined below.
III-E1 Language of Trajectories
The analysis of the geometric languages is based on the following Pumping Lemma, which is proved in [11].

(i) Pumping Lemma for Regular Languages: Let L be a regular language. Then there exists a constant n such that any string w ∈ L with |w| ≥ n can be written as w = xyz, with |xy| ≤ n and |y| ≥ 1, such that for all k ≥ 0 the pumped string x y^k z is in L.

(ii) Pumping Lemma for Context-Free Languages: Let L be a context-free language. Then there exists a constant n such that any string w ∈ L with |w| ≥ n can be written as w = uvxyz, subject to the following conditions:

1. |vxy| ≤ n. That is, the middle portion is not too long.

2. |vy| ≥ 1. Since v and y are the pieces to be "pumped", this condition says that at least one of the strings we pump must not be empty.

3. For all k ≥ 0, u v^k x y^k z is in L. That is, the two strings v and y may be "pumped" any number of times, including 0, and the resulting string will still be a member of L.
Using the Pumping Lemma, we show that the arc and m-rectangle languages are not regular.
Lemma 1
The arc trajectory language is not regular.
Proof Suppose is a regular language. Consider , and choose , and . By the Pumping Lemma for regular languages, can be written as such that and , which means for any , . When , . However, since , , and it contradicts the definition of .
Lemma 2
The m-rectangular trajectory language is not regular.
Proof Suppose is a regular language. Consider , and choose , and . By the Pumping Lemma for regular languages, for any , can be written as . When , . However, since , , and it contradicts the definition of .
As mentioned in Sec. III-C, we deal with m-rectangles because the language generating standard rectangular trajectories is not context free. We now formally show this using the Pumping Lemma. The construction of a rectangular trajectory can be expressed by a language in which two parameters signify the length and width of the rectangle. It is sufficient to show that a subset of this language (which represents the language of square trajectories) is not context free.
Lemma 3
The rectangular trajectory language is not context free.
Proof Suppose is a context free language. Let . The first condition dictates that is a substring of or . Let be a substring of ; then is a substring of , and contains only and . By the Pumping Lemma, must be a string in the language; it contains 's and 's, but has fewer than 's and 's. By contradiction, we conclude that is not context free. The same steps apply when is a substring of .
As a result, in order to deal with rectangular-type trajectories in a CFG domain, the m-rectangle language of the form given above is considered.
III-E2 Well-Posedness of the Model
Before concluding this section, we need to address one more modeling issue. In a regular grammar (an HMM plus start and end states with nonzero probability of reaching the end state), since there is no self-embedding, the length of the generated data string is finite with probability one. However, in a SCFG, due to self-embedding, it is possible for strings generated by the production rules to never terminate. Such instability is not desirable from a modeling point of view. So we need to restrict the model parameters to ensure that the generation of the geometric patterns is stable, i.e., the derivation process is subcritical [10] and terminates in finite time with finite length with probability one. This finiteness criterion constrains the SCFG model parameters and may be used as a bound on the parameter values. We discuss this point by first defining the stochastic mean matrix.
Definition III.3
For nonterminals $A$ and $B$, the stochastic mean matrix $M$ is a square matrix with its $(A,B)$th entry being the expected number of instances of the variable $B$ resulting from rewriting $A$:
$$M(A,B) = \sum_{(A \to \eta)} p(A \to \eta)\, n(B; \eta).$$
Here $p(A \to \eta)$ is the probability of applying the production rule $A \to \eta$, and $n(B; \eta)$ is the number of instances of $B$ in $\eta$ [28].
The finiteness constraint is satisfied if the grammar satisfies the condition of the following theorem.
Theorem 1
If the spectral radius of the stochastic mean matrix is less than one, the generation process of the stochastic context free grammar terminates, and the derived sentence is finite with probability one.
Proof The proof can be found in [28].
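To make the well-posedness check concrete, the following sketch estimates the spectral radius of a stochastic mean matrix by power iteration. The toy grammar and its probabilities are illustrative, not taken from the paper; only the subcriticality test itself follows the theorem above.

```python
# Sketch: checking SCFG well-posedness via the spectral radius of the
# stochastic mean matrix M, where M[i][j] is the expected number of
# occurrences of nonterminal j produced by one rewrite of nonterminal i.

def spectral_radius(M, iters=200):
    """Estimate the spectral radius of a non-negative square matrix
    by power iteration (sufficient for mean matrices of SCFGs)."""
    n = len(M)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in w)
        if r == 0.0:
            return 0.0
        v = [x / r for x in w]
    return r

# Toy grammar: S -> S S (p = 0.3) | a (p = 0.7).  One rewrite of S yields
# on average 2 * 0.3 = 0.6 copies of S, so the mean matrix is [[0.6]].
M = [[0.6]]
assert spectral_radius(M) < 1.0  # subcritical: derivations terminate a.s.
```

If the rule probability of S -> S S were raised above 0.5, the spectral radius would exceed one and derivations would fail to terminate with positive probability.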
IV Syntactic Filtering Algorithms
Based on the SCFG modulated state space model constructed in Sec. III, algorithms to estimate the mode sequence and to perform the syntactic analysis are developed in this section. For example, we are interested in classifying whether the target trajectory is a line, an arc, or an m-rectangle. Because the mode estimates are generated iteratively as the process unfolds, we use the Earley-Stolcke parsing algorithm to parse data from left to right recursively [29, 22]. The Earley-Stolcke parsing algorithm is a top-down parser, and it differs from the more common bottom-up parsers such as the CYK algorithm [6]. Sec. IV-A gives an overview of the syntactic parsing approach, Sec. IV-B discusses the implementation of the mode estimator that produces estimates of mode sequences, and Sec. IV-C summarizes the implementation of the syntactic pattern estimator based on the extended version of the Earley-Stolcke parser.
IV-A Syntactic Parsing and Target Tracking
The operation of inferring the production rules used, given a string of terminals (e.g. fhhbd), is called stochastic parsing. In the context of syntactic filtering, given a SCFG, a track consists of both a sequence of kinematic estimates and a set of parser states. The definition of a parser state and its semantics in terms of a track in target tracking are discussed in this section, and the algorithm that recursively computes parser states from kinematic measurements is presented in Sec. IV-C.
The Earley-Stolcke parser described below can be viewed as a generalization of the forward algorithm (which is used for HMMs) to the SCFG [29]. Given the string of terminals from the tracker, the control structure the parser uses to store incomplete parse trees is defined as
(10) 
where $X$ is a nonterminal, $\lambda$ and $\mu$ are substrings of nonterminals and terminals, and the state asserts that the production $X \to \lambda\mu$ is being applied. The dot "." is the marker that specifies the end position, indexed by $k$, and $i$ is the beginning index of the substring that is partially parsed by the nonterminal $X$. $\alpha$ is called the forward probability and is the sum of probabilities of all incomplete parse trees containing the state, and $\gamma$ is called the inner probability and is the sum of probabilities of all derivations of the partially parsed substring from $X$.
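The control structure above can be sketched as a small data structure. The field names below are illustrative assumptions; only the semantics (dot position, start index, forward and inner probabilities) follow the definition above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ParserState:
    """One Earley-Stolcke chart entry for a production X -> lam . mu,
    spanning input positions start..end, with forward (alpha) and
    inner (gamma) probabilities.  Field names are illustrative."""
    lhs: str                  # nonterminal X being expanded
    done: Tuple[str, ...]     # lam: symbols already matched (left of the dot)
    todo: Tuple[str, ...]     # mu: symbols still to be matched (right of the dot)
    start: int                # i: scan at which X began parsing the mode string
    end: int                  # k: current position of the dot marker
    alpha: float = 0.0        # forward probability
    gamma: float = 0.0        # inner probability

    @property
    def complete(self) -> bool:
        # A state is complete when the dot has reached the end of the rule.
        return not self.todo

# An arc hypothesis that has matched one terminal of its rule so far:
s = ParserState("ARC", ("b",), ("b", "d"), start=0, end=1, alpha=0.2, gamma=0.2)
```

In the extended parser of Sec. IV-C, such a record would additionally carry the kinematic states of the track at its start and end scans.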
Illustration of syntactic analysis for syntactic filtering is provided in Fig. 4. Consider a trajectory generated by (1) and a mode sequence that is estimated as a string of terminals from the trajectory. At each time, the terminal denotes the target's kinematic mode, i.e., its direction of travel, and the aim of syntactic analysis is to infer the geometric patterns that might have produced the trajectory based on a SCFG formulation. Syntactic analysis recursively builds different parse trees, represented by a collection of parser states, as hypotheses to "explain" the geometric patterns. (Details are provided in Sec. IV-C.) More specifically, syntactic filtering extends the multiple mode tracking algorithm with the incorporation of syntactic analysis, and the semantics of the parser state (10) are summarized here:

Radar scans up to the current scan are processed by the parser, and the position of the current scan in the input mode sequence is labeled by the dot ".".

Nonterminal represents a geometric pattern and it is a hypothesis used to characterize the input mode sequence generated by scans to .

keeps the likelihood probability of the mode sequence given the nonterminal, and the likelihood probability of .

Future mode evolution could be predicted based on the production rules of the nonterminal.
In other words, syntactic filtering tracks the evolution of the mode sequence, and iteratively builds different hypothesis trees of nonterminals (geometric patterns and their elements) to explain the mode sequence.
IV-B Syntactic Enhanced Tracker
The mode estimator (5) can be implemented using any approximate multiple mode Bayesian tracker, for example, an extended Kalman filter with IMM or a multiple mode particle filter. In either case, the nonlinearity in the observation model implies that an approximate filter needs to be used, since finite-dimensional optimal filters do not exist. As will be described below, the multiple mode tracker outputs the mode probability for each mode. It is this mode probability estimate that is fed into the syntactic parser described in Sec. IV-C.
IV-B1 Multiple Model Sequential Markov Chain Monte Carlo (Particle Filter)
Let the augmented state consist of a continuous-valued kinematic state and a discrete-valued IMM mode. The posterior probability distribution of the state is approximated by a random measure of particles and their associated weights, with a fixed number of particles. The multiple mode particle filter algorithm consists of three steps [4]:
1) sampling of the IMM mode transitions,
2) sampling of the mode-conditioned kinematic state, and
3) resampling to avoid degeneracy.
These three steps are now described:
Given the set of IMM modes at the previous time, the sampling of the IMM mode involves generating the new mode of each particle based on the transition matrix.
The sampling of the mode-conditioned kinematic state involves sampling from the transition probability and calculating the associated weight. The optimal importance density is conditioned on the IMM mode sampled in step 1, yet the most popular, and simpler, importance function is the state transition density. The unnormalized weight of each sampled particle is updated by the following equation
where is the importance density. Using the simplified importance density, it becomes
The normalized weight is then .
The resampling involves a mapping of the random measure to one with uniform weights. The resampled particles are generated by resampling with replacement $N$ times from the random measure. Resampling is necessary if the effective sample size drops below a threshold sample size, where the effective sample size is computed as $\hat{N}_{\mathrm{eff}} = 1 \big/ \sum_{i=1}^{N} (w^i)^2$, with $w^i$ the normalized weights.
If resampling is not performed, the degeneracy problem occurs: after a number of recursive steps, all but one particle will have negligible normalized weight.
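The three steps above can be sketched as one recursion of a multiple-model particle filter. The function signature and the scalar state are assumptions for illustration; the sketch uses the simplified importance density (the transition prior) discussed above, so the weight update reduces to the measurement likelihood.

```python
import random

def mm_particle_filter_step(particles, weights, P_trans, dynamics, likelihood,
                            z, N_thresh):
    """One recursion of a multiple-model particle filter (illustrative sketch).
    particles: list of (x, r) pairs (kinematic state, IMM mode index);
    weights: normalized weights; P_trans[r_old][r_new]: mode transition matrix;
    dynamics(x, r): samples/propagates x under mode r;
    likelihood(z, x): measurement likelihood p(z | x)."""
    new_p, new_w = [], []
    for (x, r), w in zip(particles, weights):
        # Step 1: sample the IMM mode transition.
        r_new = random.choices(range(len(P_trans[r])), weights=P_trans[r])[0]
        # Step 2: sample the mode-conditioned kinematic state from the
        # transition prior; the weight is then scaled by the likelihood.
        x_new = dynamics(x, r_new)
        new_p.append((x_new, r_new))
        new_w.append(w * likelihood(z, x_new))
    s = sum(new_w) or 1.0
    new_w = [w / s for w in new_w]          # normalize the weights
    # Step 3: resample if the effective sample size is below the threshold.
    n_eff = 1.0 / sum(w * w for w in new_w)
    if n_eff < N_thresh:
        idx = random.choices(range(len(new_p)), weights=new_w, k=len(new_p))
        new_p = [new_p[i] for i in idx]
        new_w = [1.0 / len(new_p)] * len(new_p)
    return new_p, new_w
```

A usage sketch: with a two-mode transition matrix, a trivial dynamics function, and a Gaussian likelihood, repeated calls track the posterior over the joint kinematic state and mode.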
IV-B2 Extended Kalman Filter with IMM
Because Eq. (7) is highly nonlinear, an extended Kalman filter is needed to process the observations. Consider the following measurement model:
where
(11) 
and is the measurement noise in the converted model. The converted covariance matrix is
whose elements are
In order to run the extended Kalman filter, the Jacobian of the converted measurement function is
As will be shown in Sec. IV-C, the terminal probability models the input uncertainty for the parsing process, and the position estimate is stored in the low and high marks of the Earley state for enforcing consistency of the tracks. According to the kinematic model, we can compute these two variables based on the interacting multiple model (IMM) algorithm [5], which is summarized here:

1) calculation of the mixing probabilities,
2) mixing,
3) model-matched filtering,
4) mode probability update, and
5) estimate and covariance combination.
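Step 1 of the recursion, the mixing probability calculation, can be sketched as follows. This is the standard IMM mixing step [5]; the matrix sizes and variable names are illustrative.

```python
def imm_mixing_probabilities(P, mu):
    """IMM step 1: mixing probabilities mu_ij = P[i][j] * mu[i] / c_j,
    where c_j = sum_i P[i][j] * mu[i] is the predicted probability of
    mode j.  P is the mode transition matrix and mu the previous mode
    probabilities (standard IMM; sketch for illustration)."""
    n = len(mu)
    c = [sum(P[i][j] * mu[i] for i in range(n)) for j in range(n)]
    mix = [[P[i][j] * mu[i] / c[j] for j in range(n)] for i in range(n)]
    return mix, c

# Two modes with sticky transitions and equal prior mode probabilities:
mix, c = imm_mixing_probabilities([[0.9, 0.1], [0.2, 0.8]], [0.5, 0.5])
```

Each column of the mixing matrix sums to one, since it is a conditional distribution over the previous mode given the current one; the mixed state and covariance of step 2 are then formed as moment matches under these weights.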
IV-C Extended Earley-Stolcke Parsing of Target Trajectory
We are now ready to describe the syntactic signal processing algorithms built on the Earley-Stolcke parser, and the extensions of the parser needed to integrate it with the tracking algorithm described above. Recall the system framework illustrated in Fig. 2: the parser assumes the existence of tracking and data association modules, and performs syntactic analysis of their outputs. The parser is extended to 1) model the uncertainties of the mode estimates generated by the Bayesian tracker, 2) keep parsing robust against non-detections generated by the data association module, 3) perform track initiation for syntactic filtering, and 4) prune unlikely tracks to trade off track completeness against computational complexity. The extensions are largely based on those described in [22], but altered to fit the specific case of syntactic filtering with GMTI measurements. The extensions are discussed as the parsing operations are introduced.
In order to introduce the extensions, modifications to both the parser state and the production rules are necessary. The parser state of the Earley Stolcke parser is redefined as
where the parser state now also records the kinematic state of the track at its start and end scans. A similarity function, based on the Euclidean distance, is used to measure the spatial correlation of two kinematic states. Many spatial correlation models may be applied [30]; the function used in this paper is a power exponential function whose parameters are determined experimentally. The production rules, on the other hand, are modified to model non-detection events due to either a miss or the target moving slower than the minimum detectable velocity. For every production rule that involves the generation of terminals, a nonterminal is added, i.e. the rule is modified to include a non-detection symbol, which will be mapped to a non-detection returned by the data association module.
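A minimal sketch of a power-exponential similarity function is given below. The exact functional form and the parameter names are assumptions, since the text only fixes the family of the function; its two parameters are, per the paper, tuned experimentally.

```python
import math

def similarity(x1, x2, theta1, theta2):
    """Power-exponential spatial correlation between two kinematic states:
    s = exp(-(d / theta1)**theta2), with d the Euclidean distance.
    The form and parameter names (theta1, theta2) are illustrative
    assumptions; the paper only states that a power exponential is used."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
    return math.exp(-((d / theta1) ** theta2))
```

The function equals one for coincident states and decays with distance, so a predicted parser state whose stored kinematic state is far from a candidate detection receives a small consistency weight.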
Parsing Example: To give more intuition, here is a simple example of parsing a very short input string "bb". The steps are illustrated in Table I. For simplicity, only a subset of the production rules listed in (9) is used: the line terminals and their associated production rules.
To initialize the parsing process, a dummy parser state is inserted, whose kinematic states are extracted from the GMTI detection of the target. The dummy parser state is the first entry in column 0 of the table, and it indicates that at index position 0, the start symbol is applicable to parse the input string. With the dummy parser state in place, the parser builds the parse tree by iteratively applying three operations: prediction, scanning, and completion, which are discussed in detail later. The operations are applied sequentially, and each operation works on the set of parser states produced by the previous operation.
Given a set of parser states (which contains only the initial dummy parser state at index 0), the prediction operation searches for parser states whose index marker has a nonterminal to its right. (In the case of the dummy parser state, the nonterminal to the right of the index marker is the start symbol.) For those nonterminals, the prediction operation generates a set of predicted states with their production rules; see the entries below the dummy parser state under the heading "Prediction". Given the predicted parser states, the scanning operation checks whether there are parser states whose index marker has a terminal to its right. If the terminal of such a parser state matches the input string at the indexed position, its index marker is advanced by one position. The generated parser states are called the scanned parser states; see the entries in column 1 under the heading "Scanning". It can be seen that only the predicted parser states with terminal b are advanced, because the input terminal at index 1 is b. Lastly, given the scanned parser states, the completion operation checks whether there are parser states whose index marker is at the end of their production rule. If any are found, the parser states that generated those scanned parser states have their index advanced by one position; see the entries under the heading "Completion" in column 1. The completed parser state in turn generates the completed state shown in the table. The three operations are applied iteratively until the dummy state is completed. The details of the three operations are discussed next in turn.
[Table I: Parsing of the input string "bb". Columns 0 to 2 list the parser states generated at each index position by the Prediction, Scanning, and Completion operations.]
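A minimal, non-probabilistic Earley-style recognizer illustrating the three operations is sketched below. The toy grammar S -> b S | b is an illustrative stand-in, much simpler than the rules in (9); probabilities, kinematic states, and pruning are omitted for clarity.

```python
# Minimal Earley-style recognizer showing prediction, scanning and
# completion on a toy grammar, recognizing strings of one or more b's.
GRAMMAR = {"S": [("b", "S"), ("b",)]}

def earley_recognize(tokens, grammar, start="S"):
    # chart[k] holds states (lhs, rhs, dot, origin) whose dot is at position k.
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for k in range(len(tokens) + 1):
        changed = True
        while changed:
            changed = False
            for (lhs, rhs, dot, origin) in list(chart[k]):
                if dot < len(rhs) and rhs[dot] in grammar:
                    # Prediction: expand the nonterminal right of the dot.
                    for prod in grammar[rhs[dot]]:
                        st = (rhs[dot], prod, 0, k)
                        if st not in chart[k]:
                            chart[k].add(st); changed = True
                elif dot == len(rhs):
                    # Completion: advance states that were waiting on lhs.
                    for (l2, r2, d2, o2) in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            st = (l2, r2, d2 + 1, o2)
                            if st not in chart[k]:
                                chart[k].add(st); changed = True
        # Scanning: match the terminal at position k, advancing the dot.
        if k < len(tokens):
            for (lhs, rhs, dot, origin) in chart[k]:
                if dot < len(rhs) and rhs[dot] == tokens[k]:
                    chart[k + 1].add((lhs, rhs, dot + 1, origin))
    # Accept if a start-symbol state spanning the whole input is complete.
    return any(lhs == start and dot == len(rhs) and origin == 0
               for (lhs, rhs, dot, origin) in chart[len(tokens)])
```

The Stolcke extension additionally attaches forward and inner probabilities to each chart entry, and the tracking extension of this section attaches kinematic states and similarity weights.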
IV-C1 Prediction
The prediction operator adds parser states that are applicable to explain the unparsed input string. For all parser states of the form
where the substrings around the marker may be empty, the operator adds the production rules of the nonterminal to the right of the marker,
as a predicted parser state. The and are updated according to
and
where the closure term is the reflexive transitive closure of the left-corner relation, which accounts for the probability of indefinite left recursion in the productions. (The details of this relation are omitted as they have little significance in this paper; interested readers can refer to [29].) The new predicted parser state inherits the kinematic states because it explains the same substring of the mode sequence. The pruning capability of the parser is implemented by discarding a predicted parser state if its forward probability is lower than a threshold; the value of the threshold balances system loading against track completeness. In addition, the prediction stage may also be modified to capture a track with an unknown beginning. At each time instant when the prediction operation is run, a dummy parser state can be inserted if there are GMTI detections that cannot be associated with any partial parse tree. With this dummy state, the parser is not limited to capturing patterns that started at time instant 0.
IV-C2 Scanning
The scanning operator matches the terminal in the input string to the parser states generated from the prediction operator. For all parser states of the form
where and can be empty, the parser state
is added if the terminal at is , where is the kinematic state of the terminal estimated by the Bayesian filter, and is its probability distribution (uncertainty of the mode estimate from the Bayesian filter). The and are updated according to
and
It is noted that by including the mode-estimate distribution in updating the forward and inner probabilities, the parsing process also takes the input uncertainty into account.
IV-C3 Completion
The completion operator advances the marker position of the pending predicted parser states if their derived parser states match the input string completely. The scanned parser states whose marker is at the end of their rule have the form
and they have corresponding parser states (pending predicted parser states) of the form
i.e. the parser states that generated the scanned parser states at the prediction stage. The two parser states generate and add a completed parser state
It is important to notice how the indices of the parser states are related. The indices of the pending predicted parser state indicate where its nonterminal was applied, and its derived parser state (the scanned parser state) indicates that the corresponding substring matches the terminal substring. It can then be concluded that the pending predicted parser state now explains the extended substring, so its marker is advanced accordingly. The associated forward and inner probabilities are updated according to
and
respectively, where the closure term is the reflexive transitive closure of the unit-production relation, which accounts for the probability of an infinite summation due to cyclic completions (interested readers can refer to [29] for more detail), and the similarity function here models the consistency between the pending predicted parser state and the completed parser state. If the likelihood probability of the completed parser state is lower than a threshold, it is pruned to trade track completeness for reduced computation.
The parsing algorithm can be extended to incorporate further domain knowledge from the human operator. For example, selection logic can be added to the prediction operator so that, instead of adding all probable states, it only adds those whose production rules yield terminal symbols compatible with the input string. In other words, instead of purely top-down parsing, bottom-up information could be incorporated to speed up the parsing algorithm.
V Experimental Setup and Results
The numerical studies in this section demonstrate how stochastic parsing with target tracking can discern geometric patterns in real GMTI data collected by DRDC. Sec. V-A describes the experimental setup and the data model. Sec. V-B discusses the preprocessing required to transform measurements between the various coordinate systems. Sec. V-C summarizes the numerical results. Finally, Sec. V-D shows that by feeding back the higher level syntactic estimates to the standard tracker, substantial improvements in performance are possible.
V-A Experimental Setup
The GMTI data is collected using DRDC Ottawa's X-band Wideband Experimental Airborne Radar (XWEAR) [13, 14]. It is a reflector-antenna-based multifunction radar designed to collect coherent radar echoes in various modes for wide area search and imaging. The XWEAR radar's data collection modes include search modes, where the antenna is rotating, stripmap SAR and spotlight SAR imaging modes, and a wide-area surveillance GMTI mode. The introduction of a multi-mode feed, i.e., the ability to carry two electromagnetic modes, enables a two-channel GMTI capability [14]. The XWEAR radar is used to collect data for investigations into wideband synthetic aperture radar (SAR), inverse SAR (ISAR), maritime surveillance, and GMTI.
The navigation subsystem of the XWEAR radar consists of an inertial measurement unit (IMU) mounted near the antenna phase centre (APC), and an embedded global positioning/inertial navigation system (EGI) mounted near the centre of gravity of the aircraft. In order to collect coherent radar echoes, the radar data needs to be compensated for undesirable APC motion (e.g., changes in aircraft ground speed and deviation from the ideal flight path) that introduces pulse-to-pulse errors. The IMU provides high-rate (200 Hz) measurements of velocity and angular increments. The strapdown navigator algorithms process these measurements and yield estimates of APC position and velocity, and antenna orientation. The EGI blends its own inertial data with GPS data using an internal Kalman filter; the resulting accuracy in position and velocity is about 2 m and 0.03 m/s respectively. The EGI output is used in an external Kalman filter to give long-term stability to the strapdown navigation solution from the IMU. The phase corrections are then applied relative to a reference trajectory, so that the resulting data is coherent.
In flight trials, the radar was installed and flown on a Convair 580 aircraft. The data was collected over western Ottawa. A SAR image of the scene is shown in Figure 5. The aircraft was moving at about 200 knots, or 100 m/s, with aircraft positions recorded as discussed above. The ground moving target is a truck moving in trajectories that form various geometric patterns. The GPS data of the truck was also recorded as ground truth. The antenna was pointed at a fixed point on the ground, and the target always had nonzero radial velocity so that it could be observed continuously by STAP-based GMTI techniques. The elevation angle is neglected as it does not provide any additional information: in the GMTI case, the target is moving on a known plane, so if the pointing angle and range resolution are known, a particular range bin is equivalent to an elevation angle of the target.
Pulse Length: 5 µs
PRF: 1–2 kHz
Carrier Frequency: 9.75 GHz
Polarization: Transmit and Receive: Horizontal
Antenna: 1 m width, 2.5° (4°) azimuth (elevation) beamwidth
V-B GMTI Dataset
Detection using STAP was carried out using a coherent processing interval (CPI) of about 128 pulses, and the pulse repetition frequency was 1 kHz. The duration of the data acquisition studied here is about 108 seconds. Since the target of interest had a fairly high SNR and moved above the minimum detectable velocity of the GMTI sensor for a significant fraction of the time, the move-stop-move pattern is not considered in this instance. In addition, the tracker was not fed all of the detections found at every CPI, as there were several false alarms. Instead, only detections that were present in 3 (or more) out of 7 consecutive CPIs were used in the tracking algorithm.
Since tracker inputs are based on several CPIs, a target need not be detectable at every CPI. Similarly, by requiring multiple detections in a set of CPIs, several false alarms could be eliminated. This was found to be sufficient to eliminate false alarms for this data set, although a more sophisticated tracking algorithm will be required for targets that have low SNR. The standard deviations used in the GMTI measurement model for range, azimuth angle, and range rate, were 5 m, 2.5 degrees, and 0.1 m/s respectively, and the state model noise used for the CV model was chosen to be 0.05 and 0.5 for the parallel and the orthogonal component respectively. No terrain data is used to modulate the measurement model.
The sensor platform coordinates, provided by the global positioning system onboard the aircraft, are given in the geodetic coordinate system. The GMTI measurements, which include range, range rate, and azimuth angle, are collected in the local spherical coordinates. The tracking algorithms developed are defined in a tangential plane Cartesian coordinate system. As a result, in order to apply the tracking algorithms developed, it is necessary to express the GMTI measurements in terms of quantities defined on the tangential plane Cartesian coordinates. The origin of the Cartesian coordinates is chosen to be the ECEF coordinates of the scene centre.
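A sketch of the measurement conversion is given below, assuming a flat tangent plane, azimuth measured from north, and a first-order (Jacobian-based) covariance conversion; the paper's exact converted model (Eq. (11)) may differ, e.g. by debiasing terms, so the axis convention and function names here are illustrative assumptions.

```python
import math

def polar_to_cartesian(r, az):
    """Convert a range/azimuth GMTI detection to tangent-plane Cartesian
    coordinates (x east, y north, azimuth from north -- assumed convention)."""
    return r * math.sin(az), r * math.cos(az)

def jacobian(r, az):
    """Jacobian of the converted measurement w.r.t. (r, az), used by the
    extended Kalman filter to propagate the measurement covariance."""
    return [[math.sin(az), r * math.cos(az)],
            [math.cos(az), -r * math.sin(az)]]

def converted_covariance(r, az, sig_r, sig_az):
    """First-order converted covariance R' = J diag(sig_r^2, sig_az^2) J^T."""
    J = jacobian(r, az)
    D = [sig_r ** 2, sig_az ** 2]
    return [[sum(J[i][k] * D[k] * J[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For a target due north, the cross-range uncertainty grows with range (it scales as the range times the azimuth standard deviation), which is why the converted covariance, rather than a fixed one, is fed to the filter.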
V-C Numerical Studies of Syntactic Filtering
The performance of syntactic filtering is illustrated on two geometric patterns: an arc pattern in a pincer scenario, and an m-rectangle in a loitering situation. Numerical studies were done with both the particle filter and the IMM/extended Kalman filter, but since the results are very similar, only the results of the IMM/extended Kalman filter are shown. The tracking result illustrated in Fig. 6 is based on a run of the DRDC flight trials. The solid line in the top panel is the real GMTI track, and the dotted line is the output of the IMM/extended Kalman filter. It can be observed that the tracker performs quite well even during the turns of the truck trajectory. An intuitive explanation for this performance is the constraints imposed by the IMM modes: since the mode constrains the noise term and thus reduces the uncertainty of the state estimates, a better estimate of the track is expected even at the turns.
The IMM/extended Kalman filter generates the terminals for syntactic parsing, which, as described in Sec. III-B, correspond to the IMM modes. The bottom panel in Fig. 6 shows the estimated IMM modes; only four modes are shown for ease of display. The syntactic parsing of the IMM modes can be either soft or hard (as in soft or hard decision making): hard parsing parses the estimated IMM modes, while soft parsing parses the probabilities of the IMM modes. We focus mainly on soft parsing, and numerical results of parsing the arc and the square pattern are shown next.
Fig. 7 shows the likelihood probabilities of different geometric patterns as an arc is parsed, together with the most likely parse tree. The parsing algorithm initially classifies the trajectory as a line, but as more data arrives, it correctly identifies the trajectory as an arc. Fig. 8 shows two arcs in the pincer trajectory. The detection data arrived not as two independent tracks, but as an out-of-order interleaved sequence. The parsing algorithm performs the data association as described in Sec. IV-C, and parses the two arcs separately. It should be noted that an arc is a palindrome, and it is important to identify an arc irrespective of its dimension and orientation.
Fig. 9 illustrates the likelihood probabilities of different geometric patterns as an m-rectangle is parsed. We used a much longer track in this study to demonstrate the practicality of the algorithm; the parse tree is omitted due to its large size. As can be seen from the top panel of the figure, the correct geometric pattern maintains its high probability while the probabilities of other patterns drop because the input sequence does not support them. Some patterns, such as the vertical line and the clockwise m-rectangle, had high probabilities initially because the initial segment of the input terminal string matches their syntactic structure. However, as more terminals are parsed, their probabilities drop. This observation means that a parse tree can be pruned as its probability drops below a certain threshold: if the input terminal sequence does not support the syntactic rules of a pattern, the parse tree corresponding to the pattern can be pruned completely, which greatly reduces the computational complexity and the storage requirement.
V-D Performance of the Syntactic Enhanced Tracker
The above parsing results demonstrate how SCFG signal processing can estimate the geometric patterns of target trajectories. A natural question is: can the syntactic tracker estimates be fed back to the standard tracking algorithm to improve performance? For example, if the syntactic tracker estimates that the target is moving in an arc, this information should be useful to the lower level tracking algorithm.
We used the syntactic tracker of Sec. IV-C and fed the estimates to the multiple mode Bayesian filter using (4), where the mode probability is computed as the weighted sum of the IMM mode estimates and the SCFG parser estimates. The SCFG parser calculates its probability based on the outputs of the prediction states of the Earley-Stolcke parser at each time instant (details of the computation can be found in [16]). Since the IMM and the SCFG offer complementary information about the mode, we mix the two models equally for each mode estimate. Fig. 10 demonstrates the reduction in estimator covariance given knowledge of the extracted geometric pattern. The solid line shows the covariance of the tracker as the target is moving in an m-rectangle, and the dotted line shows the covariance of the assisted tracker. The jumps in covariance correspond to the times when the target is making sharp turns; knowledge of the target trajectory's geometric pattern allows the tracker to make better predictions of the turns, and thus reduces the covariance.
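The equal-weight fusion of the two mode probability estimates can be sketched as follows; the explicit renormalization step is an added assumption for numerical safety and is a no-op when both inputs are proper distributions.

```python
def fused_mode_probability(mu_imm, mu_scfg, w=0.5):
    """Mix the IMM mode probabilities with the SCFG parser's mode
    probabilities using weight w (w = 0.5 gives the equal mixing used
    in the text).  Renormalization is an illustrative safeguard."""
    fused = [w * a + (1.0 - w) * b for a, b in zip(mu_imm, mu_scfg)]
    s = sum(fused)
    return [f / s for f in fused]
```

When the parser strongly predicts the next leg of an m-rectangle, the fused probability shifts toward that mode before the turn is visible in the kinematic data, which is the mechanism behind the covariance reduction in Fig. 10.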
VI Conclusion
In this paper we considered syntactic (higher-level) tracking of ground targets using GMTI radar. The goal of such syntactic filtering is to assist human radar operators in making inferences about target behaviour given track estimates. Our premise for syntactic signal processing is that the geometric pattern of a target's trajectory can be modeled as "words" (modes) spoken by a SCFG language. The syntactic tracker constructs a parse tree of the geometric patterns that form the target trajectory and provides valuable information about the target's intent. The parsing of the motion trajectories is implemented with the Earley-Stolcke parsing algorithm, and we extend its control structure with a particle filter and an IMM/extended Kalman filter to deal with the GMTI data. The parsing algorithm and the Bayesian filters were implemented, and numerical studies are presented using real GMTI data collected with DRDC Ottawa's XWEAR radar.
References
 [1] Y. BarShalom and X. Li, Estimation and Tracking: Principles, Techniques and Software. Boston: Artech House, 1993.
 [2] X. R. Li and V. P. Jilkov, “Survey of maneuvering target tracking. Part V: Multiple-model methods,” IEEE Transactions on Aerospace and Electronic Systems, pp. 1255–1321, 2005.
 [3] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems. Artech House, 1999.
 [4] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech, 2004.
 [5] T. Kirubarajan, Y. BarShalom, K. R. Pattipati, and I. Kadar, “Ground target tracking with variable structure IMM estimator,” IEEE Transactions on Aerospace and Electronic Systems, vol. 36, pp. 26–46, 2000.
 [6] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison, Biological sequence analysis: Probabilistic models of proteins and nucleic acids. Cambridge University Press, 1998.
 [7] N. Visnevski, V. Krishnamurthy, A. Wang, and S. Haykin, “Syntactic modeling and signal processing of multifunction radars: A stochastic context free grammar approach,” Proceedings of the IEEE, vol. 95, no. 5, pp. 1000–1025, May 2007.
 [8] A. Wang and V. Krishnamurthy, “Signal interpretation of multifunction radars: Modeling and statistical signal processing with stochastic context free grammar,” IEEE Trans. Signal Proc., vol. 56, no. 3, pp. 1106–1119, 2008.
 [9] K. Lari and S. J. Young, “The estimation of stochastic context free grammars using the InsideOutside algorithm,” Computer Speech and Language, vol. 4, pp. 35–56, 1990.
 [10] M. I. Miller and A. O’Sullivan, “Entropies and combinatorics of random branching processes and contextfree languages,” IEEE Transactions on Information Theory, vol. 38, pp. 1292–1310, 1992.
 [11] J. E. Hopcroft, R. Motwani, and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, 3rd ed. Pearson Education, 2007.
 [12] L. Lin, Y. BarShalom, and T. Kirubarajan, “New assignmentbased data association for tracking movestopmove targets,” IEEE Trans. Aerospace and Electronic Systems, vol. 40, no. 2, pp. 714–725, 2004.
 [13] A. Damini, M. McDonald, and G. E. Haslam, “Xband wideband experimental airborne radar for SAR, GMTI and maritime surveillance,” IEE Proceedings, Radar, Sonar and Navigation, vol. 150, pp. 305–312, 2003.
 [14] B. Balaji and A. Damini, “Multimode adaptive signal processing: a new approach to GMTI,” IEEE Trans. Aerospace and Electronic Systems, vol. 42, no. 3, pp. 1121–1126, 2006.
 [15] J. Coleman, Introducing Speech and Language Processing. Cambridge University Press, 2005.
 [16] F. Jelinek, Statistical Methods for Speech Recognition. MIT Press, 1997.
 [17] C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing. The MIT Press, 1999.
 [18] H. Ghadaki and R. Dizaji, “Target track classification for airport surveillance radar,” in IEEE Conference on Radar, 2006.
 [19] E. Charniak, Statistical Language Learning. MIT Press, 1993.
 [20] S. Luhr, H. H. Bui, S. Venkatesh, and G. A. W. West, “Recognition of human activity through hierarchical stochastic learning,” in Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, 2003.
 [21] S. Fine, Y. Singer, and N. Tishby, “The hierarchical hidden Markov model: Analysis and applications,” Machine Learning, vol. 32, pp. 41–62, 1998.
 [22] Y. A. Ivanov and A. F. Bobick, “Recognition of visual activities and interactions by stochastic parsing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 852–872, 2000.
 [23] D. Lymberopoulos, A. S. Ogale, A. Savvides, and Y. Aloimonos, “A sensory grammar for inferring behaviors in sensor networks,” in International Conference on Information Processing in Sensor Networks, 2006, pp. 251–259. [Online]. Available: http://portal.acm.org/citation.cfm?id=1127777.1127817
 [24] R. Klemm, Space-Time Adaptive Processing. Stevenage, UK: IEE Press, 1998.
 [25] M. Carlotto, “MTI data clustering and formation recognition,” IEEE Trans. Aerospace and Electronic Systems, vol. 9, no. 2, pp. 237–252, 2001.
 [26] Y. Bar-Shalom, X. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation. John Wiley, 2001.
 [27] K. S. Fu, Syntactic Pattern Recognition and Applications. Prentice-Hall, 1982.
 [28] Z. Chi, “Statistical properties of probabilistic context-free grammars,” Computational Linguistics, vol. 25, pp. 131–160, 1999.
 [29] A. Stolcke, “An efficient probabilistic context-free parsing algorithm that computes prefix probabilities,” Computational Linguistics, vol. 21, no. 2, pp. 165–201, 1995.
 [30] J. O. Berger, V. De Oliveira, and B. Sansó, “Objective Bayesian analysis of spatially correlated data,” Journal of the American Statistical Association, vol. 96, no. 456, pp. 1361–1374, 2001.