1 Introduction
Greedy transition-based dependency parsers are widely used in NLP tasks due to their speed and efficiency. They parse a sentence from left to right by greedily choosing the highest-scoring transition to go from the current parser configuration, or state, to the next. The resulting sequence of transitions incrementally builds a parse for the input sentence. The scoring of transitions is provided by a statistical model, previously trained to approximate an oracle, a function that selects the transitions needed to parse a gold tree.
Unfortunately, the greedy nature that grants these parsers their efficiency is also their main limitation. McDonald and Nivre (2007) show that greedy transition-based parsers lose accuracy to error propagation: a transition erroneously chosen by the greedy parser can place it in an incorrect and unknown configuration, causing more mistakes in the rest of the transition sequence. Training with a dynamic oracle (Goldberg and Nivre, 2012) improves robustness in these situations, but in a monotonic transition system, erroneous decisions made in the past are permanent, even when information available in later states might be useful to correct them.
Honnibal et al. (2013) show that allowing some degree of non-monotonicity, by using a limited set of non-monotonic actions that can repair past mistakes and replace previously-built arcs, can increase the accuracy of a transition-based parser. In particular, they present a modified arc-eager transition system where the Left-Arc and Reduce transitions are non-monotonic: the former is used to repair invalid attachments made in previous states by replacing them with a leftward arc, and the latter allows the parser to link with a rightward arc two words that were previously left unattached due to an erroneous decision. Since the remaining transitions are still monotonic, and leftward arcs can never be repaired because their dependent is removed from the stack by the arc-eager parser and rendered inaccessible, this approach can only repair certain kinds of mistakes: namely, it can fix erroneous rightward arcs by replacing them with a leftward arc, and connect a limited set of unattached words with rightward arcs. In addition, they argue that non-monotonicity in the training oracle can be harmful to final accuracy and, therefore, they suggest applying it only as a fallback for a monotonic oracle, which is given priority over the non-monotonic one. Thus, this strategy follows the path dictated by the monotonic oracle the majority of the time. Honnibal and Johnson (2015) extend this transition system with an Unshift transition, giving it some extra flexibility to correct past errors. However, the restriction that only rightward arcs can be deleted, and only by replacing them with leftward arcs, is still in place. Furthermore, both versions of the algorithm are limited to projective trees.
In this paper, we propose a non-monotonic transition system based on the non-projective Covington parser, together with a dynamic oracle to train it with erroneous examples that will need to be repaired. Unlike the systems of Honnibal et al. (2013) and Honnibal and Johnson (2015), we work with full non-monotonicity. This has a twofold meaning: (1) our approach can repair previous erroneous attachments regardless of their original direction, and it can replace them with either a rightward or a leftward arc, as both arc transitions are non-monotonic;^1 and (2) we use exclusively a non-monotonic oracle, without the interference of monotonic decisions. These modifications are feasible because the non-projective Covington transition system is less rigid than the arc-eager algorithm: words are never deleted from the parser's data structures and can always be revisited, making it a better option to exploit the full potential of non-monotonicity. To our knowledge, the presented system is the first non-monotonic parser that can produce non-projective dependency analyses. Another novel aspect is that our dynamic oracle is approximate, i.e., based on efficiently-computable approximations of the loss, due to the complexity of calculating its actual value in a non-monotonic and non-projective scenario. However, this is not a problem in practice: experimental results show how our parser and oracle can use non-monotonic actions to repair erroneous attachments, outperforming the monotonic version developed by Gómez-Rodríguez and Fernández-González (2015) in a large majority of the datasets tested.

^1 The only restriction is that parsing must still proceed in left-to-right order. For this reason, a leftward arc cannot be repaired with a rightward arc, because this would imply going back in the sentence. The other three combinations (replacing leftward with leftward, rightward with leftward, or rightward with rightward arcs) are possible.
2 Preliminaries
2.1 Non-Projective Covington Transition System
The non-projective Covington parser was originally defined by Covington (2001), and later recast by Nivre (2008) under the transition-based parsing framework.
The transition system that defines this parser is as follows: each parser configuration is of the form $c = \langle \lambda_1, \lambda_2, B, A \rangle$, such that $\lambda_1$ and $\lambda_2$ are lists of partially processed words, $B$ is another list (called the buffer) containing currently unprocessed words, and $A$ is the set of dependencies built so far. Suppose that our input is a string $w_1 \cdots w_n$, whose word occurrences will be identified with their indices $1 \cdots n$ for simplicity. Then, the parser starts at an initial configuration $c_s = \langle [], [], (1, \ldots, n), \emptyset \rangle$, and executes transitions chosen from those in Figure 1 until a terminal configuration of the form $\langle \lambda_1, \lambda_2, [], A \rangle$ is reached. At that point, the sentence's parse tree is obtained from $A$.^2

^2 In general $A$ is a forest, but it can be converted to a tree by linking headless nodes as dependents of an artificial root node at position 0. When we refer to parser outputs as trees, we assume that this transformation is implicitly made.
Shift: $\langle \lambda_1, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1 \cdot \lambda_2 | j, [], B, A \rangle$

No-Arc: $\langle \lambda_1|i, \lambda_2, B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, B, A \rangle$

Left-Arc: $\langle \lambda_1|i, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, j|B, A \cup \{j \rightarrow i\} \rangle$
only if $\nexists k \mid k \rightarrow i \in A$ (single-head) and $i \rightarrow^{*} j \notin A$ (acyclicity).

Right-Arc: $\langle \lambda_1|i, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, j|B, A \cup \{i \rightarrow j\} \rangle$
only if $\nexists k \mid k \rightarrow j \in A$ (single-head) and $j \rightarrow^{*} i \notin A$ (acyclicity).

Figure 1: Transitions of the monotonic Covington non-projective dependency parser.
These transitions implement the same logic as the double nested loop traversing word pairs in the original formulation by Covington (2001). When the parser's configuration is $\langle \lambda_1|i, \lambda_2, j|B, A \rangle$, we say that it is considering the focus words $i$ and $j$, located at the end of the first list and at the beginning of the buffer. At that point, the parser must decide whether these two words should be linked with a leftward arc $j \rightarrow i$ (Left-Arc transition), a rightward arc $i \rightarrow j$ (Right-Arc transition), or not linked at all (No-Arc transition). However, the two transitions that create arcs are disallowed in configurations where this would violate the single-head constraint (a node can have at most one incoming arc) or the acyclicity constraint (the dependency graph cannot have cycles). After applying any of these three transitions, $i$ is moved to the second list, making $i-1$ and $j$ the focus words for the next step. As an alternative, we can instead choose to execute a Shift transition, which lets the parser read a new input word, placing the focus on $j$ and $j+1$.
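As an illustration, the monotonic transition system above can be sketched in Python. This is a minimal sketch under our own naming (`Config`, `shift`, `no_arc`, `left_arc`, `right_arc` are not from the paper); arcs are stored as `(head, dependent)` pairs, and the single-head and acyclicity preconditions are checked with assertions.

```python
from dataclasses import dataclass

@dataclass
class Config:
    l1: list   # lambda1: candidate left focus words
    l2: list   # lambda2: words already moved past for the current right focus word
    buf: list  # B: unread words
    arcs: set  # A: set of (head, dependent) arcs built so far

def has_head(arcs, node):
    return any(d == node for (_, d) in arcs)

def would_cycle(arcs, head, dep):
    # Adding head -> dep creates a cycle iff dep already reaches head.
    frontier, seen = [dep], set()
    while frontier:
        n = frontier.pop()
        if n == head:
            return True
        if n not in seen:
            seen.add(n)
            frontier.extend(d for (h, d) in arcs if h == n)
    return False

def shift(c):
    # Read a new word: lambda1 becomes lambda1 . lambda2 | j, lambda2 empties.
    return Config(c.l1 + c.l2 + [c.buf[0]], [], c.buf[1:], set(c.arcs))

def no_arc(c):
    # Move the left focus word to lambda2 without linking.
    return Config(c.l1[:-1], [c.l1[-1]] + c.l2, list(c.buf), set(c.arcs))

def left_arc(c):
    i, j = c.l1[-1], c.buf[0]
    assert not has_head(c.arcs, i) and not would_cycle(c.arcs, j, i)
    return Config(c.l1[:-1], [i] + c.l2, list(c.buf), c.arcs | {(j, i)})

def right_arc(c):
    i, j = c.l1[-1], c.buf[0]
    assert not has_head(c.arcs, j) and not would_cycle(c.arcs, i, j)
    return Config(c.l1[:-1], [i] + c.l2, list(c.buf), c.arcs | {(i, j)})
```

For the three-word sentence with gold arcs $2 \rightarrow 1$ and $2 \rightarrow 3$, the sequence Shift, Left-Arc, Shift, Right-Arc, Shift reaches a terminal configuration with exactly those arcs.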
The resulting parser can generate any possible dependency tree for the input, including arbitrary non-projective trees. While it runs in quadratic worst-case time, in theory worse than linear-time transition-based parsers (e.g. Nivre (2003); Gómez-Rodríguez and Nivre (2013)), it has been shown to outperform linear algorithms in speed in practice, thanks to feature extraction optimizations that cannot be implemented in other parsers (Volokh and Neumann, 2012). In fact, one of the fastest dependency parsers ever reported uses this algorithm (Volokh, 2013).

2.2 Monotonic Dynamic Oracle
A dynamic oracle is a function that maps a configuration $c$ and a gold tree $t_G$ to the set of transitions that can be applied in $c$ and lead to some parse tree minimizing the Hamming loss with respect to $t_G$ (the number of nodes whose head differs between the two trees). Following Goldberg and Nivre (2013), we say that an arc set $A$ is reachable from configuration $c$, and write $c \rightsquigarrow A$, if there is some (possibly empty) path of transitions from $c$ to some configuration $c' = \langle \lambda_1, \lambda_2, B, A' \rangle$ with $A \subseteq A'$. Then, we can define the loss of configuration $c$ as

$$\ell(c) = \min_{t \mid c \rightsquigarrow t} \mathcal{L}(t, t_G),$$

and therefore, a correct dynamic oracle will return the set of transitions

$$o_d(c, t_G) = \{\tau \mid \ell(c) - \ell(\tau(c)) = 0\},$$

i.e., the set of transitions that do not increase configuration loss, and thus lead to the best parse (in terms of loss) reachable from $c$. Hence, implementing a dynamic oracle reduces to computing the loss $\ell(c)$ for each configuration $c$.
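The reduction above is mechanical once a loss function is available. A generic sketch (all names our own, abstracting over the concrete transition system):

```python
def dynamic_oracle(config, gold, legal_transitions, apply_t, loss):
    # Return the transitions that do not increase configuration loss,
    # i.e. those that keep the best reachable tree reachable.
    base = loss(config, gold)
    return {t for t in legal_transitions(config)
            if loss(apply_t(t, config), gold) == base}
```

With a toy state space where state 0 has loss 1 and transitions 'a' and 'b' lead to states with losses 1 and 2 respectively, the oracle returns only 'a'.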
Goldberg and Nivre (2013) show a straightforward method to calculate loss for parsers that are arc-decomposable, i.e., those where, for every arc set $A$ that can be part of a well-formed parse, if $c \rightsquigarrow (i, j)$ for every arc $(i, j) \in A$ (i.e., each individual arc of $A$ is reachable from a given configuration $c$), then $c \rightsquigarrow A$ (i.e., the set as a whole is reachable from $c$). If this holds, then the loss of a configuration equals the number of gold arcs that are not individually reachable from it, which is easy to compute in most parsers.
Gómez-Rodríguez and Fernández-González (2015) show that the non-projective Covington parser is not arc-decomposable, because sets of individually reachable arcs may form cycles together with already-built arcs, preventing them from being jointly reachable due to the acyclicity constraint. In spite of this, they prove that a dynamic oracle for the Covington parser can be efficiently built by counting individually unreachable arcs and correcting for the presence of such cycles. Concretely, the loss is computed as:

$$\ell(c) = |\mathcal{U}(c, t_G)| + n_c(A \cup \mathcal{I}(c, t_G)),$$

where $\mathcal{I}(c, t_G)$ is the set of individually reachable arcs of $t_G$ from configuration $c$; $\mathcal{U}(c, t_G)$ is the set of individually unreachable arcs of $t_G$ from $c$, computed as $t_G \setminus \mathcal{I}(c, t_G)$; and $n_c(G)$ denotes the number of cycles in a graph $G$.
Therefore, to calculate the loss of a configuration $c$, we only need to compute the two terms $|\mathcal{U}(c, t_G)|$ and $n_c(A \cup \mathcal{I}(c, t_G))$. To calculate the first term, given a configuration $c = \langle \lambda_1|i, \lambda_2, j|B, A \rangle$ with focus words $i$ and $j$, an arc $x \rightarrow y$ will be in $\mathcal{U}(c, t_G)$ if it is not in $A$, and at least one of the following holds:

- $j > \max(x, y)$ (i.e., we have read too far in the string and can no longer get $\max(x, y)$ as right focus word),
- $j = \max(x, y)$ and $i < \min(x, y)$ (i.e., we have $\max(x, y)$ as the right focus word, but the left focus word has already moved left past $\min(x, y)$, and we cannot go back),
- there is some $z \neq x$ such that $z \rightarrow y \in A$ (i.e., we cannot create $x \rightarrow y$ because it would violate the single-head constraint),
- $x$ and $y$ are in the same weakly connected component of $A$ (i.e., we cannot create $x \rightarrow y$ due to the acyclicity constraint).
The second term of the loss, $n_c(A \cup \mathcal{I}(c, t_G))$, can be computed by first obtaining $\mathcal{I}(c, t_G)$ as $t_G \setminus \mathcal{U}(c, t_G)$. Since the graph $A \cup \mathcal{I}(c, t_G)$ has in-degree at most 1, the algorithm by Tarjan (1972) can then be used to find and count its cycles in $O(n)$ time.
Algorithm 1 shows the resulting loss calculation algorithm, where CountCycles is a function that counts the number of cycles in the given graph and WeaklyConnected returns whether two given nodes are weakly connected in $A$.
3 Non-Monotonic Transition System for the Covington Non-Projective Parser
We now define a non-monotonic variant of the Covington non-projective parser. To do so, we allow the Left-Arc and Right-Arc transitions to create arcs between any pair of nodes without restriction. If the node attached as dependent already had a head, the existing attachment is discarded in favor of the new one. This allows the parser to correct erroneous attachments made in the past by assigning new heads, while still enforcing the single-head constraint, as only the most recent head assigned to each node is kept.
To enforce acyclicity, one possibility would be to keep the logic of the monotonic algorithm, forbidding the creation of arcs that would create cycles. However, this greatly complicates the definition of the set of individually unreachable arcs, which is needed to compute the loss bounds used by the dynamic oracle. This is because a gold arc may superficially seem unreachable due to forming a cycle together with arcs in $A$, but it might in fact be reachable if there is some transition sequence that first breaks the cycle, using non-monotonic transitions to remove arcs from $A$, and then creates the gold arc. We do not know of a way to characterize the conditions under which such a transition sequence exists, and thus cannot estimate the loss efficiently.
Instead, we enforce the acyclicity constraint in a similar way to the single-head constraint: Left-Arc and Right-Arc transitions are always allowed, even if the prospective arc $x \rightarrow y$ would create a cycle in $A$. However, if the creation of the new arc generates a cycle, we immediately remove from $A$ the arc of the form $z \rightarrow x$ (which trivially exists, and is unique due to the single-head constraint). This not only enforces the acyclicity constraint while keeping the computation of $\mathcal{U}(c, t_G)$ simple and efficient, but also produces a straightforward, coherent algorithm (arc transitions are always allowed, and both constraints are enforced by deleting a previous arc) and allows us to exploit non-monotonicity to the maximum: we can recover not only from assigning a node the wrong head, but also from situations where previous errors together with the acyclicity constraint would prevent us from building a gold arc, keeping with the principle that later decisions override earlier ones.
Figure 2 shows the resulting non-monotonic transition system for the non-projective Covington algorithm where, unlike in the monotonic version, all transitions are allowed at each configuration, and the single-head and acyclicity constraints are kept in $A$ by removing offending arcs.
Shift: $\langle \lambda_1, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1 \cdot \lambda_2 | j, [], B, A \rangle$

No-Arc: $\langle \lambda_1|i, \lambda_2, B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, B, A \rangle$

Left-Arc: $\langle \lambda_1|i, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, j|B, (A \setminus \{k \rightarrow i \mid k \neq j\} \setminus C) \cup \{j \rightarrow i\} \rangle$,
where $C = \{z \rightarrow j\}$ if $A \cup \{j \rightarrow i\}$ contains a cycle, and $C = \emptyset$ otherwise.

Right-Arc: $\langle \lambda_1|i, \lambda_2, j|B, A \rangle \Rightarrow \langle \lambda_1, i|\lambda_2, j|B, (A \setminus \{k \rightarrow j \mid k \neq i\} \setminus C) \cup \{i \rightarrow j\} \rangle$,
where $C = \{z \rightarrow i\}$ if $A \cup \{i \rightarrow j\}$ contains a cycle, and $C = \emptyset$ otherwise.

Figure 2: Transitions of the non-monotonic Covington non-projective dependency parser.
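The arc-creation logic shared by the two non-monotonic arc transitions can be sketched as follows (helper names are ours): the old head of the dependent is dropped to enforce single-head, and if the new arc closes a cycle, the unique incoming arc of the new head is dropped to restore acyclicity.

```python
def reaches(arcs, src, dst):
    # Directed reachability: is there a path src ->* dst in arcs?
    frontier, seen = [src], set()
    while frontier:
        n = frontier.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            frontier.extend(d for (h, d) in arcs if h == n)
    return False

def add_arc_nonmonotonic(arcs, head, dep):
    # Create head -> dep unconditionally, enforcing both constraints by
    # deleting previous arcs rather than forbidding the transition.
    arcs = {(h, d) for (h, d) in arcs if d != dep}   # single-head: drop old head of dep
    arcs.add((head, dep))
    if reaches(arcs, dep, head):                     # cycle: dep ->* head -> dep
        arcs = {(h, d) for (h, d) in arcs if d != head}  # acyclicity: drop z -> head
    return arcs
```

For example, with $A = \{1 \rightarrow 2\}$, creating the arc $2 \rightarrow 1$ would close a cycle, so the incoming arc of the new head ($1 \rightarrow 2$) is removed: the erroneous rightward arc is replaced by a leftward one, exactly the kind of repair the text describes.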
4 Non-Monotonic Approximate Dynamic Oracle
To successfully train a non-monotonic system, we need a dynamic oracle with error exploration, so that the parser is put in erroneous states and must apply non-monotonic transitions to repair them. To achieve this, we modify the dynamic oracle defined by Gómez-Rodríguez and Fernández-González (2015) so that it can deal with non-monotonicity. Our modification is an approximate dynamic oracle: due to the extra flexibility that non-monotonicity adds to the algorithm, we do not know of an efficient way to compute the exact loss of a given configuration. Instead, we use upper or lower bounds on the loss, which we empirically show to be very tight (less than 1% relative error with respect to the real loss) and which are sufficient for the algorithm to provide better accuracy than the exact monotonic oracle.
First of all, we adapt the computation of the set of individually unreachable arcs $\mathcal{U}(c, t_G)$ to the new algorithm. In particular, if $c$ has focus words $i$ and $j$ (i.e., $c = \langle \lambda_1|i, \lambda_2, j|B, A \rangle$), then an arc $x \rightarrow y$ is in $\mathcal{U}(c, t_G)$ if it is not in $A$, and at least one of the following holds:

- $j > \max(x, y)$ (i.e., we have read too far in the string and can no longer get $\max(x, y)$ as right focus word),
- $j = \max(x, y)$ and $i < \min(x, y)$ (i.e., we have $\max(x, y)$ as the right focus word, but the left focus word has already moved left past $\min(x, y)$, and we cannot move it back).
Note that, since the head of a node can change during the parsing process and arcs that produce cycles in $A$ can be built, the last two conditions present in the monotonic scenario for computing $\mathcal{U}(c, t_G)$ are not needed when we use non-monotonicity. As a consequence, the set of individually reachable arcs is larger: due to the greater flexibility provided by non-monotonicity, we can reach arcs that would be unreachable for the monotonic version.
Since arcs in this new $\mathcal{U}(c, t_G)$ are unreachable even for the non-monotonic parser, $|\mathcal{U}(c, t_G)|$ is trivially a lower bound of the loss $\ell(c)$. It is worth noting that there always exists at least one transition sequence that builds every arc in $\mathcal{I}(c, t_G)$ at some point (although not all of them necessarily appear in the final tree, due to non-monotonicity). This can easily be shown from the fact that the non-monotonic parser does not forbid transitions in any configuration. Thanks to this, we can generate one such sequence by just applying the original criteria of Covington (2001): choose an arc transition whenever the focus words are linked in $t_G$, and otherwise No-Arc or Shift depending on whether the left focus word is the first word in the sentence or not. This sequence, however, is not necessarily optimal in terms of loss. In such a transition sequence, the gold arcs that are missed are (1) those in $\mathcal{U}(c, t_G)$, and (2) those removed by the cycle-breaking in Left-Arc and Right-Arc transitions. In practice, configurations where (2) occurs are uncommon, so this lower bound is a very close approximation of the real loss, as will be seen empirically below.
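A sketch of the relaxed unreachability test, and hence of the lower bound $|\mathcal{U}(c, t_G)|$ (function name is ours; only the two positional conditions remain, since single-head and acyclicity violations can now be repaired):

```python
def lower_bound_loss(i, j, arcs, gold):
    # arcs, gold: sets of (head, dependent); i, j: the current focus words.
    # A gold arc is unreachable only if parsing has already moved past it.
    U = set()
    for (x, y) in gold - arcs:
        l, r = min(x, y), max(x, y)
        if j > r or (j == r and i < l):
            U.add((x, y))
    return len(U)
```

Note the contrast with the monotonic conditions: with built arcs $\{1 \rightarrow 2\}$, gold arc $2 \rightarrow 1$ and focus words $(1, 2)$, the gold arc is monotonically unreachable (its dependent already has a head), but the non-monotonic bound correctly counts it as reachable.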
                         average value                          relative difference to loss
Language      lower     loss      pc upper  upper        lower     pc upper  upper
Arabic        0.66925   0.67257   0.67312   0.68143      0.00182   0.00029   0.00587
Basque        0.58260   0.58318   0.58389   0.62543      0.00035   0.00038   0.02732
Catalan       0.58009   0.58793   0.58931   0.60644      0.00424   0.00069   0.00961
Chinese       0.56515   0.56711   0.57156   0.62921      0.00121   0.00302   0.03984
Czech         0.57521   0.58357   0.59401   0.62883      0.00476   0.00685   0.02662
English       0.55267   0.56383   0.56884   0.59494      0.00633   0.00294   0.01767
Greek         0.56123   0.57443   0.57983   0.61256      0.00731   0.00296   0.02256
Hungarian     0.46495   0.46672   0.46873   0.48797      0.00097   0.00114   0.01165
Italian       0.62033   0.62612   0.62767   0.64356      0.00307   0.00082   0.00883
Turkish       0.60143   0.60215   0.60660   0.63560      0.00060   0.00329   0.02139
Bulgarian     0.61415   0.62257   0.62433   0.64497      0.00456   0.00086   0.01233
Danish        0.67350   0.67904   0.68119   0.69436      0.00291   0.00108   0.00916
Dutch         0.69201   0.70600   0.71105   0.74008      0.00709   0.00251   0.01862
German        0.54581   0.54755   0.55080   0.58182      0.00104   0.00208   0.02033
Japanese      0.60515   0.60515   0.60515   0.60654      0.00000   0.00000   0.00115
Portuguese    0.58880   0.60063   0.60185   0.61780      0.00651   0.00067   0.00867
Slovene       0.56155   0.56860   0.57135   0.60373      0.00396   0.00153   0.01979
Spanish       0.58247   0.59119   0.59277   0.61273      0.00487   0.00089   0.01197
Swedish       0.57543   0.58636   0.58933   0.61104      0.00585   0.00153   0.01383
Average       0.59009   0.59656   0.59954   0.62416      0.00355   0.00176   0.01513

Table 1: Average values of the lower bound, real loss and upper bounds, and relative differences of each bound with respect to the real loss.
This reasoning also helps us calculate an upper bound of the loss: in a transition sequence as described, if we build only the arcs in $\mathcal{I}(c, t_G)$ and none else, the number of arcs removed by cycle-breaking (2) cannot be larger than the number of cycles in $A \cup \mathcal{I}(c, t_G)$. This means that $|\mathcal{U}(c, t_G)| + n_c(A \cup \mathcal{I}(c, t_G))$ is an upper bound of the loss $\ell(c)$. Note that, contrary to the monotonic case, this expression does not always give us the exact loss, for several reasons. Firstly, $A \cup \mathcal{I}(c, t_G)$ can have non-disjoint cycles (a node may have different heads in $A$ and $t_G$, since attachments are not permanent, contrary to the monotonic version), and thus removing a single arc may break more than one cycle. Secondly, the removed arc can be a non-gold arc of $A$ and therefore not incur loss. Thirdly, there may exist alternative transition sequences where a cycle in $A \cup \mathcal{I}(c, t_G)$ is broken early by non-monotonic transitions that change the head of a wrongly-attached node in $A$ to a different (and also wrong) head,^3 removing the cycle before the cycle-breaking mechanism needs to come into play and without incurring extra errors. Characterizing the situations where such an alternative exists is the main difficulty for an exact calculation of the loss.

^3 Note that, in this scenario, the new head must also be wrong, because otherwise the newly created arc would be an arc of $t_G$ (and therefore would not be breaking a cycle in $A \cup \mathcal{I}(c, t_G)$). However, replacing a wrong attachment with another wrong attachment need not increase loss.
However, it is possible to obtain an upper bound closer to the real loss if we consider the following: for each cycle in $A \cup \mathcal{I}(c, t_G)$ that will be broken by the transition sequence described above, we can determine exactly which arc is removed by cycle-breaking (if $x \rightarrow y$ is the arc that will close the cycle according to the Covington arc-building order, then the affected arc is the one of the form $z \rightarrow x$). The cycle can only cause the loss of a gold arc if that removed arc is gold, which can be trivially checked. Hence, if we call cycles where that holds problematic cycles, then the expression $|\mathcal{U}(c, t_G)| + n_{pc}(A \cup \mathcal{I}(c, t_G))$, where "pc" stands for problematic cycles, is a closer upper bound to the loss, and the following holds:

$$|\mathcal{U}(c, t_G)| \leq \ell(c) \leq |\mathcal{U}(c, t_G)| + n_{pc}(A \cup \mathcal{I}(c, t_G)) \leq |\mathcal{U}(c, t_G)| + n_c(A \cup \mathcal{I}(c, t_G)).$$
As mentioned before, unlike in the monotonic approach, a node can have a different head in $A$ than in $t_G$ and, as a consequence, the graph $A \cup \mathcal{I}(c, t_G)$ has maximum in-degree 2 rather than 1, and there can be overlapping cycles. Therefore, the computation of the non-monotonic terms $n_c$ and $n_{pc}$ requires an algorithm such as the one by Johnson (1975) to find all elementary cycles in a directed graph. This runs in $O((n + e)(c + 1))$ time, where $n$ is the number of vertices, $e$ the number of edges and $c$ the number of elementary cycles in the graph. This implies that the calculation of the two non-monotonic upper bounds is less efficient than the linear loss computation in the monotonic scenario. However, a non-monotonic algorithm that uses the lower bound as its loss expression is the fastest option (even faster than the monotonic approach), as the oracle does not need to compute cycles at all, speeding up the training process.
Algorithm 2 shows the non-monotonic variant of Algorithm 1, where CountRelevantCycles is a function that counts the number of cycles or problematic cycles in the given graph, depending on the upper bound implemented, and returns 0 in case we use the lower bound.
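The lower and plain upper bounds can be sketched together (names ours). Johnson's algorithm would be the efficient choice for enumerating elementary cycles in the in-degree-2 graph; here a brute-force DFS enumeration stands in for it, which is adequate for illustration on small graphs:

```python
from itertools import chain

def elementary_cycles(arcs):
    # Enumerate elementary cycles by DFS, counting each cycle once by
    # anchoring it at its smallest node. (Johnson's algorithm would be
    # the efficient replacement for this brute-force enumeration.)
    succ = {}
    for h, d in arcs:
        succ.setdefault(h, []).append(d)
    cycles = []

    def dfs(start, node, path):
        for nxt in succ.get(node, []):
            if nxt == start:
                cycles.append(tuple(path))
            elif nxt > start and nxt not in path:
                dfs(start, nxt, path + [nxt])

    for v in sorted(set(chain(*arcs))):
        dfs(v, v, [v])
    return cycles

def loss_bounds(i, j, arcs, gold):
    # Return (lower bound, plain upper bound) on the loss of the current
    # configuration; the "problematic cycles" refinement is omitted here.
    U = {(x, y) for (x, y) in gold - arcs
         if j > max(x, y) or (j == max(x, y) and i < min(x, y))}
    I = gold - U
    return len(U), len(U) + len(elementary_cycles(arcs | I))
```

For example, with built arcs $\{1 \rightarrow 2\}$, gold arc $\{2 \rightarrow 1\}$ and focus words $(1, 2)$, the gold arc is reachable but forms a cycle with the built arc, so the bounds are $(0, 1)$: the true loss lies between them.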
Table 2: Feature templates, grouped into unigrams, pairs and triples of words and their attributes.
5 Evaluation of the Loss Bounds
To determine how close the lower bound and the two upper bounds are to the actual loss in practical scenarios, we use exhaustive search to calculate the real loss of a given configuration, and then compare it with the bounds. This is feasible because the lower and upper bounds allow us to prune the search space: if an upper and a lower bound coincide for a configuration, we already know its loss and need not keep searching; and if we can branch to two configurations such that the lower bound of one is greater than or equal to an upper bound of the other, we can discard the former, as it will never lead to smaller loss than the latter. Therefore, this exhaustive search with pruning is guaranteed to find the exact loss.
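This pruning logic can be sketched abstractly (all names ours, over an abstract configuration space): branches whose lower bound cannot beat the best loss found so far are discarded, and a branch whose bounds coincide yields its loss directly.

```python
def exact_loss(config, successors, bounds, best=float('inf')):
    # Exhaustive search with bound pruning for the real loss.
    # bounds(c) returns (lower, upper); successors(c) returns the
    # configurations reachable in one transition.
    lo, hi = bounds(config)
    if lo >= best:        # cannot improve on a loss already achieved: prune
        return best
    if lo == hi:          # bounds coincide: the loss of this branch is known
        return min(best, lo)
    for child in successors(config):
        best = exact_loss(child, successors, bounds, best)
    return best
```

On a toy search tree where the root has bounds $(0, 2)$ and its two children have bounds $(1, 1)$ and $(0, 0)$, the search returns 0, pruning the child whose lower bound already matches or exceeds the best loss found.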
Due to the time complexity of this process, we analyze only the first 100,000 transitions of each dataset for the nineteen languages available from the CoNLL-X and CoNLL-XI shared tasks (Buchholz and Marsi, 2006; Nivre et al., 2007). In Table 1, we present the average values of the lower bound, both upper bounds and the real loss, as well as the relative differences from each bound to the real loss. From these experiments, we conclude that the lower bound and the closer upper bound are tight approximations of the loss, with both incurring relative errors below 0.8% in all datasets. Comparing them, the real loss is closer to the problematic-cycle upper bound in the majority of datasets (12 out of 18 languages, excluding Japanese, where both bounds were exactly equal to the real loss in the whole sample of configurations). This means that the term $n_{pc}(A \cup \mathcal{I}(c, t_G))$ provides a close approximation of the gold arcs missed due to the presence of cycles. The plain upper bound presents a more variable relative error, ranging from 0.1% to 4.0%.
Thus, although we do not know an algorithm to obtain the exact loss that is fast enough to be practical, any of the three studied loss bounds can be used to obtain a feasible approximate dynamic oracle with full non-monotonicity.
                             dynamic            dynamic non-monotonic
              static         monotonic          lower            pc upper         upper
Language      UAS    LAS     UAS    LAS         UAS    LAS       UAS    LAS       UAS    LAS
Arabic        80.67  66.51   82.76  68.48       83.29  69.14     83.18  69.05     83.40  69.29
Basque        76.55  66.05   77.49  67.31       74.61  65.31     74.69  65.18     74.27  64.78
Catalan       90.52  85.09   91.37  85.98       90.51  85.35     90.40  85.30     90.44  85.35
Chinese       84.93  80.80   85.82  82.15       86.55  82.53     86.29  82.27     86.60  82.51
Czech         78.49  61.77   80.21  63.52       81.32  64.89     81.33  64.81     81.49  65.18
English       85.35  84.29   87.47  86.55       88.44  87.37     88.23  87.22     88.50  87.55
Greek         79.47  69.35   80.76  70.43       80.90  70.46     80.84  70.34     81.02  70.49
Hungarian     77.65  68.32   78.84  70.16       78.67  69.83     78.47  69.66     78.65  69.74
Italian       84.06  79.79   84.30  80.17       84.38  80.30     84.64  80.52     84.47  80.32
Turkish       81.28  70.97   81.14  71.38       80.65  71.15     80.80  71.29     80.60  71.07
Bulgarian     89.13  85.30   90.45  86.86       91.36  87.88     91.33  87.89     91.73  88.26
Danish        86.00  81.49   86.91  82.75       86.83  82.63     86.89  82.74     86.94  82.68
Dutch         81.54  78.46   82.07  79.26       82.78  79.64     82.80  79.68     83.02  79.92
German        86.97  83.91   87.95  85.17       87.31  84.37     87.18  84.22     87.48  84.54
Japanese      93.63  92.20   93.67  92.33       94.02  92.68     94.02  92.68     93.97  92.66
Portuguese    86.55  82.61   87.45  83.62       87.17  83.47     87.12  83.45     87.40  83.71
Slovene       76.76  63.53   77.86  64.43       80.39  67.04     80.56  67.10     80.47  67.10
Spanish       79.20  76.00   80.12  77.24       81.36  78.30     81.12  77.99     81.33  78.16
Swedish       87.43  81.77   88.05  82.77       88.20  83.02     88.09  82.87     88.36  83.16
Average       83.48  76.75   84.46  77.92       84.67  78.18     84.63  78.12     84.74  78.24

Table 3: Parsing accuracy (UAS and LAS) of the non-projective Covington parser with the static, dynamic monotonic and dynamic non-monotonic oracles (one variant per loss expression).
6 Experiments
To prove the usefulness of our approach, we implement the static, dynamic monotonic and dynamic non-monotonic oracles for the non-projective Covington algorithm and compare their accuracies on nine datasets^4 from the CoNLL-X shared task (Buchholz and Marsi, 2006) and all datasets from the CoNLL-XI shared task (Nivre et al., 2007). For the non-monotonic algorithm, we test the three different loss expressions defined in the previous section. We train an averaged perceptron model for 15 iterations and use the same feature templates for all languages,^5 which are listed in detail in Table 2.

^4 We excluded the languages from CoNLL-X that also appeared in CoNLL-XI, i.e., if a language was present in both shared tasks, we used the latest version.
^5 No feature optimization is performed, since our priority in this paper is not to compete with state-of-the-art systems, but to prove, under uniform experimental settings, that our approach outperforms the baseline system.

6.1 Results
The accuracies obtained by the non-projective Covington parser with the three available oracles are presented in Table 3, in terms of Unlabeled (UAS) and Labeled Attachment Score (LAS). For the non-monotonic dynamic oracle, three variants are shown, one for each loss expression implemented. As we can see, the novel non-monotonic oracle improves over the accuracy of the monotonic version on 14 out of 19 languages (0.32 UAS on average) with the best loss expression, the plain upper bound; 6 of these improvements are statistically significant at the .05 level (Yeh, 2000). The other two loss expressions also achieve good results, outperforming the monotonic algorithm on 12 out of 19 datasets tested.
The plain upper bound obtains greater accuracy on average than the other two loss expressions, including the tighter upper bound that is provably closer to the real loss. This could be explained by the fact that identifying problematic cycles is a difficult task for the parser to learn, so a more straightforward approach, which tries to avoid all kinds of cycles (regardless of whether they will cost gold arcs or not), can perform better. This also leads us to hypothesize that, even if it were feasible to build an oracle with the exact loss, it would not provide practical improvements over these approximate oracles, as it appears difficult for a statistical model to learn the situations where replacing a wrong arc with another indirectly helps by breaking prospective cycles.
It is also worth mentioning that the non-monotonic dynamic oracle with the best loss expression achieves an average improvement over the static version (1.26 UAS) greater than that obtained by the monotonic oracle (0.98 UAS), with 13 statistically significant improvements of the non-monotonic variant over the static oracle, compared to the 12 obtained by the monotonic system. Finally, note that, despite this remarkable performance, the non-monotonic version (regardless of the loss expression implemented) suffers an unexplained drop in accuracy on Basque in comparison to the other two oracles.

                      Average value
Algorithm             UAS      LAS      sent./s.
G&N 2012              84.32    77.68    833.33
GR et al. 2014*       83.78    78.64    -
GR&FG 2015            84.46    77.92    335.63
H et al. 2013         84.28    77.68    847.33
This work             84.74    78.24    236.74

Table 4: Average accuracy and parsing speed of well-known transition-based parsers with dynamic oracles (*published results).
6.2 Comparison
In order to provide a broader contextualization of our approach, Table 4 presents a comparison of the average accuracy and parsing speed obtained by some well-known transition-based systems with dynamic oracles. Concretely, we include both the monotonic (Goldberg and Nivre, 2012) and non-monotonic (Honnibal et al., 2013) versions of the arc-eager parser, as well as the original monotonic Covington system (Gómez-Rodríguez and Fernández-González, 2015). All three were run with our own implementation, so the comparison is homogeneous. We also report the published accuracy of the non-projective Attardi algorithm (Gómez-Rodríguez et al., 2014) on the nineteen datasets used in our experiments. From Table 4, we can see that our approach achieves the best average UAS score, but is somewhat slower at parsing time than the monotonic Covington algorithm. This can be explained by the fact that the non-monotonic parser has to consider the whole set of transitions at each configuration (since all are allowed), while the monotonic parser only needs to evaluate a limited set of transitions in some configurations, speeding up the parsing process.
6.3 Error Analysis
We also carry out an error analysis to provide some insight into how non-monotonicity improves accuracy with respect to the original Covington parser. In particular, we notice that non-monotonicity tends to be more beneficial on projective than on non-projective arcs. In addition, the non-monotonic algorithm performs notably well on long arcs (which are more prone to error propagation): average precision on arcs of length greater than 7 goes from 58.41% in the monotonic version to 63.19% in the non-monotonic parser, which suggests that non-monotonicity is alleviating the effect of error propagation. Finally, we study the effectiveness of non-monotonic arcs (i.e., those that break a previously-created arc): on average across all datasets tested, 36.86% of the arc transitions taken were non-monotonic, replacing an existing arc with a new one. Of these transitions, 60.31% created a gold arc, and only 5.99% were harmful (i.e., they replaced a previously-built gold arc with an incorrect arc), with the remaining cases replacing a non-gold arc with another non-gold arc, without introducing extra errors. These results back up the usefulness of non-monotonicity in transition-based parsing.
7 Conclusion
We presented a novel, fully non-monotonic variant of the well-known non-projective Covington parser, trained with a dynamic oracle. Since the unpredictability of a non-monotonic scenario prevents us from efficiently computing the real loss of each configuration, we proposed three different loss expressions that closely bound it and enable a practical non-monotonic dynamic oracle.
On average, our non-monotonic algorithm obtains better performance than the monotonic version, regardless of which variant of the loss calculation is used. In particular, one of the loss expressions developed proved very promising by providing the best average accuracy, in spite of being the loosest approximation of the actual loss. On the other hand, the proposed lower bound makes the non-monotonic oracle the fastest among all dynamic oracles developed for the non-projective Covington algorithm.
To our knowledge, this is the first implementation of non-monotonicity for a non-projective parsing algorithm, and the first approximate dynamic oracle based on close, efficiently-computable approximations of the loss, showing this to be a feasible alternative when computing the actual loss is impractical.
While we used a perceptron classifier for our experiments, our oracle could also be used in neural-network implementations of greedy transition-based parsing (Chen and Manning, 2014; Dyer et al., 2015), providing an interesting avenue for future work. We believe that gains from both techniques should be complementary, as they apply to orthogonal components of the parsing system (the scoring model vs. the transition system), although we might see a "diminishing returns" effect.

Acknowledgments
This research has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 714150, FASTPARSE). The second author has received funding from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) from MINECO.
References
 Buchholz and Marsi (2006) Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL). pages 149–164. http://www.aclweb.org/anthology/W06-2920.
 Chen and Manning (2014) Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 740–750. http://www.aclweb.org/anthology/D14-1082.
 Covington (2001) Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference. ACM, New York, NY, USA, pages 95–102.
 Dyer et al. (2015) Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 334–343. http://www.aclweb.org/anthology/P15-1033.
 Goldberg and Nivre (2012) Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING 2012. Association for Computational Linguistics, Mumbai, India, pages 959–976. http://www.aclweb.org/anthology/C12-1059.
 Goldberg and Nivre (2013) Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics 1:403–414. http://anthology.aclweb.org/Q/Q13/Q13-1033.pdf.
 Gómez-Rodríguez and Fernández-González (2015) Carlos Gómez-Rodríguez and Daniel Fernández-González. 2015. An efficient dynamic oracle for unrestricted non-projective parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers. pages 256–261. http://aclweb.org/anthology/P/P15/P15-2042.pdf.
 Gómez-Rodríguez and Nivre (2013) Carlos Gómez-Rodríguez and Joakim Nivre. 2013. Divisible transition systems and multiplanar dependency parsing. Computational Linguistics 39(4):799–845. http://aclweb.org/anthology/J/J13/J13-4002.pdf.
 Gómez-Rodríguez et al. (2014) Carlos Gómez-Rodríguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dynamic oracle for non-projective dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 917–927. http://aclweb.org/anthology/D14-1099.
 Honnibal et al. (2013) Matthew Honnibal, Yoav Goldberg, and Mark Johnson. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013. pages 163–172. http://aclweb.org/anthology/W/W13/W13-3518.pdf.
 Honnibal and Johnson (2015) Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1373–1378. http://aclweb.org/anthology/D15-1162.
 Johnson (1975) Donald B. Johnson. 1975. Finding all the elementary circuits of a directed graph. SIAM Journal on Computing 4(1):77–84. https://doi.org/10.1137/0204007.
 McDonald and Nivre (2007) Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). pages 122–131. http://www.aclweb.org/anthology/D/D07/D07-1013.pdf.
 Nivre (2003) Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT 03). ACL/SIGPARSE, pages 149–160.
 Nivre (2008) Joakim Nivre. 2008. Algorithms for Deterministic Incremental Dependency Parsing. Computational Linguistics 34(4):513–553. https://doi.org/10.1162/coli.07-056-R1-07-027.
 Nivre et al. (2007) Joakim Nivre, Johan Hall, Sandra Kübler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. pages 915–932. http://www.aclweb.org/anthology/D/D07/D07-1096.pdf.
 Tarjan (1972) Robert Endre Tarjan. 1972. Depth-first search and linear graph algorithms. SIAM J. Comput. 1(2):146–160. http://dblp.uni-trier.de/db/journals/siamcomp/siamcomp1.html.
 Volokh (2013) Alexander Volokh. 2013. Performance-Oriented Dependency Parsing. Doctoral dissertation, Saarland University, Saarbrücken, Germany.
 Volokh and Neumann (2012) Alexander Volokh and Günter Neumann. 2012. Dependency parsing with efficient feature extraction. In Birte Glimm and Antonio Krüger, editors, KI. Springer, volume 7526 of Lecture Notes in Computer Science, pages 253–256. https://doi.org/10.1007/978-3-642-33347-7.
 Yeh (2000) Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING). pages 947–953. http://aclweb.org/anthology/C/C00/C00-2137.pdf.