Evaluating Neural Morphological Taggers for Sanskrit

05/21/2020
by Ashim Gupta, et al.

Neural sequence labelling approaches have achieved state-of-the-art results in morphological tagging. We evaluate the efficacy of four standard sequence labelling models on Sanskrit, a morphologically rich, fusional Indian language. Because its label space can theoretically contain more than 40,000 labels, systems that explicitly model the internal structure of a label are better suited to the task, owing to their ability to generalise to labels not seen during training. We find that although some neural models perform better than others, a common cause of error across all of them is misprediction due to syncretism.
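To illustrate why the internal structure of labels matters, the sketch below (a hypothetical, simplified toy inventory, not the tag set used in the paper) contrasts treating each full morphological feature bundle as one atomic label with predicting each feature separately: the factored view needs far fewer output classes and can compose tags whose exact combination never occurred in training.

```python
from itertools import product

# Hypothetical, simplified inventory of Sanskrit nominal features;
# the tag set evaluated in the paper is much larger and also covers
# verbal and derivational morphology.
FEATURES = {
    "case":   ["nom", "acc", "ins", "dat", "abl", "gen", "loc", "voc"],
    "number": ["sg", "du", "pl"],
    "gender": ["m", "f", "n"],
}

# Treating each complete feature bundle as one atomic label yields a
# combinatorially large label space (over 40,000 in the full setting).
atomic_labels = ["|".join(values) for values in product(*FEATURES.values())]
print(len(atomic_labels))  # 8 * 3 * 3 = 72 even for this toy inventory

# A model that predicts each feature separately needs only
# 8 + 3 + 3 = 14 output classes here, and it can assemble a bundle
# it never saw in training from feature values it did see.
def decode_factored(predictions: dict) -> str:
    """Combine per-feature predictions into a single morphological tag."""
    return "|".join(predictions[feature] for feature in FEATURES)

print(decode_factored({"case": "loc", "number": "du", "gender": "f"}))
# -> "loc|du|f"
```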
