
fugashi, a Tool for Tokenizing Japanese in Python

by Paul McCann

Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese. Japanese is written without spaces, so tokenization is non-trivial, and while high-quality open-source tokenizers exist, they can be hard to use and lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.



