Structured Training for Neural Network Transition-Based Parsing

06/19/2015
by David Weiss, et al.

We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled attachment accuracy, the best result on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
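The two-stage recipe in the abstract, decoding with a beam over a fixed network representation and updating only the final layer with the structured perceptron, can be sketched compactly. The following Python is illustrative only: the feature map phi (standing in for the pretrained network's hidden activations), the valid_actions helper, and the early-update rule when the gold sequence falls out of the beam are assumptions in the spirit of standard beam-search perceptron training, not the paper's actual implementation.

def perceptron_beam_update(w, sent, gold_actions, phi, valid_actions, beam_size=8):
    # One structured-perceptron update with beam search and early update.
    # phi(sent, prefix, a) is assumed to return a NumPy feature vector
    # (e.g. fixed hidden activations of the pretrained network) for taking
    # transition `a` after the action sequence `prefix`; valid_actions
    # enumerates the legal transitions. Both are hypothetical interfaces.
    beam = [([], 0.0)]  # hypotheses as (action sequence, cumulative score)
    for t in range(len(gold_actions)):
        expanded = []
        for prefix, score in beam:
            for a in valid_actions(sent, prefix):
                expanded.append((prefix + [a],
                                 score + float(w @ phi(sent, prefix, a))))
        beam = sorted(expanded, key=lambda h: -h[1])[:beam_size]

        gold_prefix = gold_actions[:t + 1]
        if all(prefix != gold_prefix for prefix, _ in beam):
            # Early update: the gold sequence fell off the beam, so update
            # toward gold and away from the best hypothesis, then stop.
            return w + seq_phi(sent, gold_prefix, phi) \
                     - seq_phi(sent, beam[0][0], phi)

    best = beam[0][0]
    if best != gold_actions:
        # Full-sequence update when decoding finishes with a wrong parse.
        return w + seq_phi(sent, gold_actions, phi) \
                 - seq_phi(sent, best, phi)
    return w  # beam ranked the gold parse first: no update needed

def seq_phi(sent, actions, phi):
    # Sum of feature vectors along an action sequence.
    return sum(phi(sent, actions[:i], a) for i, a in enumerate(actions))

Because the representation is frozen, each update touches only the final-layer weights w; the beam lets the update condition on whole transition sequences rather than single greedy decisions.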
