A New Corpus for Low-Resourced Sindhi Language with Word Embeddings

11/28/2019
by Wazir Ali, et al.

Representing words and phrases as dense vectors of real numbers that encode semantic and syntactic properties is a vital component of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned from large unlabeled corpora. Sindhi is a morphologically rich language spoken by a large population in Pakistan and India, yet it lacks the corpora that serve as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Because no open-source preprocessing tools are available for Sindhi, preprocessing such a large corpus is a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed to filter out noisy text. Afterwards, the cleaned vocabulary is used to train Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed to evaluate the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of the Sindhi word embeddings generated with SG, CBoW, and GloVe compared to the SdfastText word representations.
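To make the training and intrinsic-evaluation steps concrete, the following is a minimal sketch using gensim rather than the authors' exact setup: it trains SG and CBoW embeddings on a preprocessed, one-sentence-per-line corpus and runs a cosine-similarity nearest-neighbour query. The file name "sindhi_corpus_clean.txt", the hyperparameters, and the query word are illustrative assumptions, not values taken from the paper.

# Minimal sketch (not the authors' exact pipeline): train Skip-Gram and CBoW
# word2vec embeddings on a cleaned, whitespace-tokenised Sindhi corpus with
# gensim, then query cosine-similarity nearest neighbours as an intrinsic check.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Hypothetical corpus file: one preprocessed sentence per line.
corpus = LineSentence("sindhi_corpus_clean.txt")

# sg=1 selects Skip-Gram, sg=0 selects CBoW; both yield 300-dimensional vectors here.
sg_model = Word2Vec(corpus, vector_size=300, window=5, min_count=5, sg=1, workers=4)
cbow_model = Word2Vec(corpus, vector_size=300, window=5, min_count=5, sg=0, workers=4)

# Cosine-similarity neighbours of an illustrative query word
# (gensim normalises the vectors internally before comparison).
query = "سنڌ"
for word, score in sg_model.wv.most_similar(query, topn=10):
    print(f"{word}\t{score:.3f}")

The same nearest-neighbour query can be repeated on the CBoW model (and on pretrained SdfastText vectors loaded with gensim's fastText loader) to compare the neighbour lists side by side, which is the spirit of the cosine-similarity evaluation described in the abstract.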

Related research

Development of Word Embeddings for Uzbek Language (09/30/2020)
In this paper, we share the process of developing word embeddings for th...

More Romanian word embeddings from the RETEROM project (11/21/2021)
Automatically learned vector representations of words, also known as "wo...

How to Evaluate Word Representations of Informal Domain? (11/12/2019)
Diverse word representations have surged in most state-of-the-art natura...

Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects (09/04/2017)
This paper describes a preliminary study for producing and distributing ...

Deep learning model for Mongolian Citizens Feedback Analysis using Word Vector Embeddings (02/23/2023)
A large amount of feedback was collected over the years. Many feedback a...

An Ensemble Method for Producing Word Representations for the Greek Language (12/10/2019)
In this paper we present a new ensemble method, Continuous Bag-of-Skip-g...

Semantic Relatedness and Taxonomic Word Embeddings (02/14/2020)
This paper connects a series of papers dealing with taxonomic word embed...
