Neural Token Segmentation for High Token-Internal Complexity

03/21/2022
by Idan Brusilovsky, et al.

Tokenizing raw texts into word units is an essential pre-processing step for critical tasks in the NLP pipeline such as tagging, parsing, named entity recognition, and more. For most languages, this tokenization step is straightforward. However, for languages with high token-internal complexity, further token-to-word segmentation is required. Previous canonical segmentation studies were based on character-level frameworks, with no contextualized representation involved. Contextualized vectors a la BERT show remarkable results in many applications, but have not been shown to improve performance on linguistic segmentation per se. Here we propose a novel neural segmentation model which combines the best of both worlds, contextualized token representations and character-level decoding, and which is particularly effective for languages with high token-internal complexity and extreme morphological ambiguity. Our model shows substantial improvements in segmentation accuracy on Hebrew and Arabic compared to the state of the art, and leads to further improvements on downstream tasks such as Part-of-Speech Tagging, Dependency Parsing and Named-Entity Recognition, over existing pipelines. When comparing our segmentation-first pipeline with joint segmentation and labeling in the same settings, we show that, contrary to pre-neural studies, the pipeline performance is superior.
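The architecture described above can be illustrated with a minimal structural sketch. This is not the authors' implementation: the contextual encoder is a stub standing in for a BERT-style model, and the boundary scorer is a hypothetical placeholder for learned parameters, here splitting off single-character clitic prefixes (a pattern common in Hebrew and Arabic, shown with romanized placeholders). Only the shape of the computation, a contextualized per-token representation feeding a greedy character-level decoder that may emit boundary symbols, mirrors the paper's design.

```python
BOUNDARY = " "  # boundary symbol emitted between sub-word units

def mock_contextual_encoder(tokens):
    """Stand-in for a BERT-style encoder: one representation per token.
    Here the 'vector' is simply the token with its left/right neighbors."""
    return [
        (tokens[max(i - 1, 0)], tok, tokens[min(i + 1, len(tokens) - 1)])
        for i, tok in enumerate(tokens)
    ]

def char_level_decode(token, context, score_boundary):
    """Greedy character-level decoding: copy each character, and after
    each one decide whether to emit a segment boundary, conditioned on
    the token's contextualized representation."""
    out = []
    for i, ch in enumerate(token):
        out.append(ch)
        if i < len(token) - 1 and score_boundary(token, i, context) > 0.5:
            out.append(BOUNDARY)
    return "".join(out)

def segment_sentence(tokens, score_boundary):
    """Segmentation-first pipeline: encode the sentence once, then
    decode each token independently at the character level."""
    contexts = mock_contextual_encoder(tokens)
    return [char_level_decode(t, c, score_boundary)
            for t, c in zip(tokens, contexts)]

# Hypothetical stand-in for a learned boundary scorer: split after a
# single-character clitic prefix. Real scores would come from training.
CLITICS = {"w", "b", "l"}  # romanized placeholders for illustration
def toy_scorer(token, i, context):
    return 1.0 if i == 0 and token[0] in CLITICS and len(token) > 2 else 0.0

print(segment_sentence(["wbait", "gadol"], toy_scorer))
# → ['w bait', 'gadol']
```

Because decoding only copies characters or inserts boundaries, every output segment sequence re-joins exactly to its input token, which is what lets the downstream taggers and parsers consume the segments in place of the raw tokens.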


