fugashi, a Tool for Tokenizing Japanese in Python

10/14/2020
by Paul McCann

Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese. Because Japanese is written without spaces, tokenization is non-trivial, and while high-quality open-source tokenizers exist, they can be hard to use and often lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.
